4chan information extraction engine with a Streamlit interface.

To deploy the Streamlit app with Docker:

docker run -d \
  --name omegalurk \
  -v omegalurk_models:/models \
  -p 8501:8484 \
  ghcr.io/ragingtiger/omegalurk:master

Then simply open your browser to http://localhost:8501 (the host side of the
port mapping above) and you should see the Streamlit interface.
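If you prefer Docker Compose, the run command above translates to a compose file like the following (a sketch: the service name and file layout are assumptions; the image, port mapping, and volume mirror the flags above):

```yaml
services:
  omegalurk:
    image: ghcr.io/ragingtiger/omegalurk:master
    container_name: omegalurk
    ports:
      - "8501:8484"   # host:container, same as -p above
    volumes:
      - omegalurk_models:/models

volumes:
  omegalurk_models:
```

Start it with docker compose up -d and the browser URL is the same.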
Build the automated tests locally as follows (first cd OmegaLurk):

docker build --target test \
  -t omegalurk:tests \
  --load \
  .

Then run the tests as follows:

docker run --rm -it \
  -v $PWD:/OmegaLurk \
  omegalurk:tests
NOTE: The -v flag is optional; it is only necessary if you want to develop
the code locally and test your changes against your local copy of the
source code.
- How to change the time zone?

  The default time zone is America/New_York, but this can be changed by
  simply setting the TZ environment variable on docker run as follows:

  docker run -d \
    --name omegalurk \
    -v omegalurk_models:/models \
    -p 8501:8484 \
    -e TZ='Asia/Bangkok' \
    ghcr.io/ragingtiger/omegalurk:master

  This will set the time zone inside the container to Asia/Bangkok. For a
  list of available time zones, please see here.
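TZ is a standard Unix environment variable, so you can preview what a given value does on the host before passing it to docker run (a quick sketch; any tz database name works in place of the two shown here):

```shell
# Print the UTC offset a TZ value produces; no Docker required.
TZ='Asia/Bangkok' date +%z      # prints +0700 (Bangkok has no DST)
TZ='America/New_York' date +%z  # prints -0500 or -0400 depending on DST
```

The same value you verify here is what -e TZ='...' sets inside the container.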
- Where are the HuggingFace models downloaded to?

  The models are downloaded (cached) by the transformers library in the
  /models directory, which the docker run command in the deploy section
  manages with a Docker volume.
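To see where that cache lives on the host, or to list the cached files, you can inspect the named volume directly (a sketch; it requires a running Docker daemon and assumes the omegalurk_models volume name from the deploy command):

```shell
# Show where Docker stores the volume on the host filesystem
docker volume inspect omegalurk_models --format '{{ .Mountpoint }}'

# List the cached model files via a throwaway container
docker run --rm -v omegalurk_models:/models alpine ls -lh /models
```

Because the cache lives in a volume rather than the container, it survives container removal and image upgrades.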
- Why is my model taking so long to execute?

  When a model is run for the first time, you can expect it to take some
  time to download from the HuggingFace Model Hub. You can always monitor
  the Docker logs to see the progress of the model downloads; simply run:

  docker logs -f omegalurk