
jeffreyca / spleeter-web

Self-hostable web app for isolating the vocal, accompaniment, bass, and drums of any song. Supports Spleeter, D3Net, Demucs, Tasnet, X-UMX. Built with React and Django.

Home Page: https://jeffreyca.github.io/spleeter-web/

License: MIT License

Topics: spleeter, source-separation, react, django, crossnet-open-unmix, demucs, tasnet, d3net, x-umx

spleeter-web's Introduction

Spleeter Web


Spleeter Web is a web application for isolating or removing the vocal, accompaniment, bass, and/or drum components of any song. For example, you can use it to isolate the vocals of a track, or you can use it to remove the vocals to get an instrumental version of a song.

It supports a number of different source separation models: Spleeter (4stems-model and 5stems-model), Demucs, CrossNet-Open-Unmix, and D3Net.

The app uses Django for the backend API and React for the frontend. Celery is used for the task queue. Docker images are available, including ones with GPU support.


Features

  • Supports Spleeter, Demucs, CrossNet-Open-Unmix (X-UMX), and D3Net source separation models
    • Each model supports a different set of user-configurable parameters in the UI
  • Dynamic Mixes let you play back and export your own custom mix of the separated components in real time
  • Import tracks by uploading an audio file or via a YouTube link
    • Built-in YouTube search functionality (YouTube Data API key required)
  • Supports lossy (MP3) and lossless (FLAC, WAV) output formats
  • Persistent audio library with ability to stream and download your source tracks and mixes
  • Customize the number of background workers handling audio separation and YouTube imports
  • Supports third-party storage backends like S3 and Azure Blob Storage
  • Clean and responsive UI
  • Support for GPU separation
  • Fully Dockerized

Screenshots: homepage, upload modal, and mixer.

Getting started with Docker

Requirements

  • Docker and Docker Compose

Instructions

  1. Clone repo:

    $ git clone https://github.com/JeffreyCA/spleeter-web.git
    $ cd spleeter-web
  2. (Optional) Set the YouTube Data API key (for YouTube search functionality):

    You can skip this step, but then you won't be able to import songs by search query. You will still be able to import songs via direct YouTube links.

    Create a .env file at the project root with the following contents:

    YOUTUBE_API_KEY=<YouTube Data API key>
    
  3. (Optional) Set up GPU support: source separation can be accelerated with a GPU (only NVIDIA GPUs are supported).

    1. Install NVIDIA drivers for your GPU.

    2. Install the NVIDIA Container Toolkit. If on Windows, refer to this.

    3. Verify Docker works with your GPU by running sudo docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

  4. Download and run prebuilt Docker images:

    Note: On Apple Silicon and other AArch64 systems, the Docker images need to be built from source.

    # CPU separation
    spleeter-web$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml -f docker-compose.prod.selfhost.yml up
    # GPU separation
    spleeter-web$ docker-compose -f docker-compose.gpu.yml -f docker-compose.prod.yml -f docker-compose.prod.selfhost.yml up

    Alternatively, you can build the Docker images from source:

    # CPU separation
    spleeter-web$ docker-compose -f docker-compose.yml -f docker-compose.build.yml -f docker-compose.prod.yml -f docker-compose.prod.selfhost.yml up --build
    # GPU separation
    spleeter-web$ docker-compose -f docker-compose.gpu.yml -f docker-compose.build.gpu.yml -f docker-compose.prod.yml -f docker-compose.prod.selfhost.yml up --build
  5. Launch Spleeter Web

    Navigate to http://127.0.0.1:80 in your browser. Uploaded tracks and generated mixes will appear in media/uploads and media/separate respectively on your host machine.

Getting started without Docker

If you are on Windows, it's recommended to follow the Docker instructions above. Celery is not well-supported on Windows.

Requirements

  • x86-64 arch (For AArch64 systems, use Docker)
  • 4 GB+ of memory (source separation is memory-intensive)
  • Python 3.8+ (link)
  • Node.js 16+ (link)
  • Redis (link)
  • ffmpeg and ffprobe (link)
    • On macOS, you can install them using Homebrew or MacPorts
    • On Windows, you can follow this guide

Instructions

  1. Set environment variables

    Make sure these variables are set in every terminal session prior to running the commands below.

    # Unix/macOS:
    (env) spleeter-web$ export YOUTUBE_API_KEY=<api key>
    # Windows:
    (env) spleeter-web$ set YOUTUBE_API_KEY=<api key>
  2. Create Python virtual environment

    spleeter-web$ python -m venv env
    # Unix/macOS:
    spleeter-web$ source env/bin/activate
    # Windows:
    spleeter-web$ .\env\Scripts\activate
  3. Install Python dependencies

    (env) spleeter-web$ pip install -r requirements.txt
    (env) spleeter-web$ pip install -r requirements-spleeter.txt --no-dependencies
  4. Install Node dependencies

    spleeter-web$ cd frontend
    spleeter-web/frontend$ npm install
  5. Ensure Redis server is running on localhost:6379 (needed for Celery)

    You can run it on a different host or port, but make sure to update CELERY_BROKER_URL and CELERY_RESULT_BACKEND in settings.py. The URL must follow the format redis://host:port/db.
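
    For example, if your Redis server listens on port 6380 instead, you can point Celery at it without editing settings.py by using the CELERY_BROKER_URL and CELERY_RESULT_BACKEND environment variables (a hypothetical sketch; the variables are documented under Configuration below):

    # Hypothetical host/port; adjust to your setup
    (env) spleeter-web$ export CELERY_BROKER_URL=redis://localhost:6380/0
    (env) spleeter-web$ export CELERY_RESULT_BACKEND=redis://localhost:6380/0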

  6. Apply migrations

    (env) spleeter-web$ python manage.py migrate
  7. Build frontend

    spleeter-web$ npm run build --prefix frontend
  8. Start backend in separate terminal

    (env) spleeter-web$ python manage.py collectstatic && python manage.py runserver 127.0.0.1:8000
  9. Start Celery workers in separate terminal

    Unix/macOS:

    # Start fast worker
    (env) spleeter-web$ celery -A api worker -l INFO -Q fast_queue -c 3
    
    # Start slow worker
    (env) spleeter-web$ celery -A api worker -l INFO -Q slow_queue -c 1

    This launches two Celery workers: one processes fast tasks like YouTube imports and the other processes slow tasks like source separation. The fast worker handles up to 3 tasks concurrently, while the slow worker handles only a single task at a time (since source separation is memory-intensive). Feel free to adjust these values to suit your machine.
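
    If you'd rather run a single worker, Celery can also consume from both queues at once. This is a sketch rather than a setup from the project docs; the memory caveat for separation tasks still applies:

    # One worker serving both queues (sketch; tune -c to your memory budget)
    (env) spleeter-web$ celery -A api worker -l INFO -Q fast_queue,slow_queue -c 1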

    Windows:

    You'll first need to install gevent. Note however that you will not be able to abort in-progress tasks if using Celery on Windows.

    (env) spleeter-web$ pip install gevent
    # Start fast worker
    (env) spleeter-web$ celery -A api worker -l INFO -Q fast_queue -c 3 --pool=gevent
    
    # Start slow worker
    (env) spleeter-web$ celery -A api worker -l INFO -Q slow_queue -c 1 --pool=gevent
  10. Launch Spleeter Web

    Navigate to http://127.0.0.1:8000 in your browser. Uploaded and mixed tracks will appear in media/uploads and media/separate respectively.

Configuration

Django settings

  • django_react/settings.py: Base Django settings used when launched in a non-Docker context.
  • django_react/settings_dev.py: Override settings used when running in development mode (i.e. DJANGO_DEVELOPMENT is set; a usage sketch follows below).
  • django_react/settings_docker.py: Base Django settings used when launched using Docker.
  • django_react/settings_docker_dev.py: Override settings used when running in development mode using Docker (i.e. docker-compose.dev.yml).
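
As a quick usage sketch (hedged; the exact selection logic lives in manage.py and the settings modules themselves), setting DJANGO_DEVELOPMENT switches a non-Docker run onto the development override settings:

    # Example: run the dev server with the development override settings
    spleeter-web$ export DJANGO_DEVELOPMENT=true
    spleeter-web$ python manage.py runserver 127.0.0.1:8000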

Environment variables

Here is a list of all the environment variables you can use to further customize Spleeter Web:

  • CPU_SEPARATION: No need to set this if using Docker. Otherwise, set to 1 for CPU separation or 0 for GPU separation.
  • DJANGO_DEVELOPMENT: Set to true to run the development build, which uses settings_dev.py/settings_docker_dev.py and runs Webpack in dev mode.
  • ALLOW_ALL_HOSTS: Set to 1 to have Django allow all hosts, overriding any APP_HOST value. This effectively sets the Django setting ALLOWED_HOSTS to ['*']. There are security risks associated with doing this. Default: 0.
  • APP_HOST: Domain name(s) or public IP(s) of the server. To specify multiple hosts, separate them with a comma (,).
  • API_HOST: Hostname of the API server (for nginx).
  • DEFAULT_FILE_STORAGE: Whether to use the local filesystem or cloud-based storage for uploads and separated files. FILE, AWS, or AZURE.
  • AWS_ACCESS_KEY_ID: AWS access key. Used when DEFAULT_FILE_STORAGE is set to AWS.
  • AWS_SECRET_ACCESS_KEY: AWS secret access key. Used when DEFAULT_FILE_STORAGE is set to AWS.
  • AWS_STORAGE_BUCKET_NAME: AWS S3 bucket name. Used when DEFAULT_FILE_STORAGE is set to AWS.
  • AWS_S3_CUSTOM_DOMAIN: Custom domain, such as for a CDN. Used when DEFAULT_FILE_STORAGE is set to AWS.
  • AWS_S3_REGION_NAME: S3 region (e.g. us-east-1). Used when DEFAULT_FILE_STORAGE is set to AWS.
  • AWS_S3_SIGNATURE_VERSION: Signature version used for generating presigned URLs. To access your S3 objects in all regions through presigned URLs, set this to s3v4. Used when DEFAULT_FILE_STORAGE is set to AWS.
  • AZURE_ACCOUNT_KEY: Azure Blob account key. Used when DEFAULT_FILE_STORAGE is set to AZURE.
  • AZURE_ACCOUNT_NAME: Azure Blob account name. Used when DEFAULT_FILE_STORAGE is set to AZURE.
  • AZURE_CONTAINER: Azure Blob container name. Used when DEFAULT_FILE_STORAGE is set to AZURE.
  • AZURE_CUSTOM_DOMAIN: Custom domain, such as for a CDN. Used when DEFAULT_FILE_STORAGE is set to AZURE.
  • CELERY_BROKER_URL: Broker URL for Celery (e.g. redis://localhost:6379/0).
  • CELERY_RESULT_BACKEND: Result backend for Celery (e.g. redis://localhost:6379/0).
  • CELERY_FAST_QUEUE_CONCURRENCY: Number of concurrent YouTube import tasks Celery can process. Docker only.
  • CELERY_SLOW_QUEUE_CONCURRENCY: Number of concurrent source separation tasks Celery can process. Docker only.
  • CERTBOT_DOMAIN: Domain for creating HTTPS certificates using Let's Encrypt's Certbot. Docker only.
  • CERTBOT_EMAIL: Email address for creating HTTPS certificates using Let's Encrypt's Certbot. Docker only.
  • D3NET_OPENVINO: Set to 1 to use OpenVINO for D3Net CPU separation. Requires an Intel CPU.
  • DEMUCS_SEGMENT_SIZE: Length of each split for GPU separation. The default is 40, which requires around 7 GB of GPU memory. For GPUs with 2-4 GB of memory, experiment with lower values (the minimum is 10). It is also recommended to set PYTORCH_NO_CUDA_MEMORY_CACHING=1 (see the worked example after this list).
  • D3NET_OPENVINO_THREADS: Number of CPU threads for D3Net OpenVINO separation. Default: the number of CPUs on the machine. Requires an Intel CPU.
  • DEV_WEBSERVER_PORT: Port that the development webserver is mapped to on the host machine. Docker only.
  • ENABLE_CROSS_ORIGIN_HEADERS: Set to 1 to send the Cross-Origin-Embedder-Policy and Cross-Origin-Opener-Policy headers, which are required for exporting Dynamic Mixes in-browser.
  • NGINX_PORT: Port that nginx is mapped to on the host machine for HTTP. Docker only.
  • NGINX_PORT_SSL: Port that nginx is mapped to on the host machine for HTTPS. Docker only.
  • PYTORCH_NO_CUDA_MEMORY_CACHING: Set to 1 to disable PyTorch caching for GPU separation. May help with Demucs separation on lower-memory GPUs. See also DEMUCS_SEGMENT_SIZE.
  • UPLOAD_FILE_SIZE_LIMIT: Maximum allowed upload file size (in megabytes). Default: 100.
  • YOUTUBE_API_KEY: YouTube Data API key.
  • YOUTUBE_LENGTH_LIMIT: Maximum allowed YouTube track length (in minutes). Default: 30.
  • YOUTUBEDL_SOURCE_ADDR: Client-side IP address for yt-dlp to bind to. If you are facing 403 Forbidden errors, try setting this to 0.0.0.0 to force all connections through IPv4.
  • YOUTUBEDL_VERBOSE: Set to 1 to enable verbose logging for yt-dlp.
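
As a worked example (illustrative values, not recommendations): for GPU separation on a card with 2-4 GB of memory, the entries above suggest lowering DEMUCS_SEGMENT_SIZE and disabling PyTorch's CUDA cache, e.g. in .env:

    # Illustrative .env tuning for a low-memory GPU
    DEMUCS_SEGMENT_SIZE=15
    PYTORCH_NO_CUDA_MEMORY_CACHING=1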

Using cloud storage (Azure Storage, AWS S3, etc.)

By default, Spleeter Web uses the local filesystem to store uploaded files and mixes. It uses django-storages, so you can also configure it to use other storage backends like Azure Storage or AWS S3.

You can set the environment variable DEFAULT_FILE_STORAGE (in .env if using Docker) to either FILE (for local storage), AWS (S3 storage), or AZURE (Azure Storage).

Then, depending on which backend you're using, set these additional variables:

AWS S3:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_STORAGE_BUCKET_NAME

Azure Storage:

  • AZURE_ACCOUNT_KEY
  • AZURE_ACCOUNT_NAME
  • AZURE_CONTAINER
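
For example, a minimal .env for S3-backed storage might look like the following sketch (placeholder values; AWS_S3_REGION_NAME and AWS_S3_SIGNATURE_VERSION are optional but useful outside us-east-1, per the table above):

    DEFAULT_FILE_STORAGE=AWS
    AWS_ACCESS_KEY_ID=<access key>
    AWS_SECRET_ACCESS_KEY=<secret access key>
    AWS_STORAGE_BUCKET_NAME=<bucket name>
    AWS_S3_REGION_NAME=us-east-1
    AWS_S3_SIGNATURE_VERSION=s3v4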

CORS

To play back a dynamic mix, you may need to configure your storage service's CORS settings to allow the Access-Control-Allow-Origin header.

If you have ENABLE_CROSS_ORIGIN_HEADERS set, then you'll additionally need to set the Cross-Origin-Resource-Policy response header of audio files to cross-origin. See this for more details.
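
On S3, for instance, the CORS rules can be written as JSON and applied with the AWS CLI. This is a minimal sketch, assuming your app is served from https://your-spleeter-web-host (a placeholder origin), not the project's official configuration:

    {
      "CORSRules": [
        {
          "AllowedHeaders": ["*"],
          "AllowedMethods": ["GET", "HEAD"],
          "AllowedOrigins": ["https://your-spleeter-web-host"],
          "ExposeHeaders": []
        }
      ]
    }

Save it as cors.json and apply it with aws s3api put-bucket-cors --bucket <bucket name> --cors-configuration file://cors.json. Azure Storage exposes equivalent CORS settings through its portal and APIs.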

Deployment

Spleeter Web can be deployed on a VPS or a cloud server such as Azure VMs, AWS EC2, DigitalOcean, etc. Deploying to cloud container services like ECS is not yet supported out of the box.

  1. Clone this git repo

    $ git clone https://github.com/JeffreyCA/spleeter-web.git
    $ cd spleeter-web
  2. (Optional) If self-hosting, update docker-compose.prod.selfhost.yml and replace ./media with the path where media files should be stored on the server.

  3. In spleeter-web, create a .env file with the production environment variables

    Example .env file:

    APP_HOST=<domain name(s) or public IP(s) of server>
    DEFAULT_FILE_STORAGE=<FILE or AWS or AZURE>       # Optional (default = FILE)
    CELERY_FAST_QUEUE_CONCURRENCY=<concurrency count> # Optional (default = 3)
    CELERY_SLOW_QUEUE_CONCURRENCY=<concurrency count> # Optional (default = 1)
    YOUTUBE_API_KEY=<youtube api key>                 # Optional
    

    See Environment Variables for all the available variables. You can also set these directly in the docker-compose.*.yml files.

  4. Build and start production containers

    For GPU separation, replace docker-compose.yml and docker-compose.build.yml below with docker-compose.gpu.yml and docker-compose.build.gpu.yml respectively.

    If you are self-hosting media files:

    # Use prebuilt images
    spleeter-web$ sudo docker-compose -f docker-compose.yml -f docker-compose.prod.yml -f docker-compose.prod.selfhost.yml up -d
    # Or build from source
    spleeter-web$ sudo docker-compose -f docker-compose.yml -f docker-compose.build.yml -f docker-compose.prod.yml -f docker-compose.prod.selfhost.yml up --build -d

    Otherwise if using a storage provider:

    # Use prebuilt images
    spleeter-web$ sudo docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
    # Or build from source
    spleeter-web$ sudo docker-compose -f docker-compose.yml -f docker-compose.build.yml -f docker-compose.prod.yml up --build -d
  5. Access Spleeter Web at whatever you set APP_HOST to. Note that it will be running on port 80, not 8000. You can change this by setting NGINX_PORT and NGINX_PORT_SSL.

HTTPS support

Enabling HTTPS allows you to export Dynamic Mixes from your browser. To enable HTTPS, set CERTBOT_DOMAIN to your domain name and CERTBOT_EMAIL to your email address in .env, and include -f docker-compose.https.yml in your docker-compose up command.
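
Concretely, that means a .env along these lines (domain and email are placeholders):

    CERTBOT_DOMAIN=<your domain name>
    CERTBOT_EMAIL=<your email address>

and, for a self-hosted deployment, a command like:

    spleeter-web$ sudo docker-compose -f docker-compose.yml -f docker-compose.prod.yml -f docker-compose.prod.selfhost.yml -f docker-compose.https.yml up -d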

Credits

Special thanks to my sponsors, to all the researchers and developers behind the source separation models, and to the other wonderful projects this app builds on.

Turntable icon made from Icon Fonts is licensed by CC BY 3.0.

License

MIT

spleeter-web's People

Contributors

dependabot[bot], github-actions[bot], jeffreyca, jtagcat, ma5onic, microtherion


spleeter-web's Issues

`timed out` after a second of downloading pretrained_model archive

celery-slow logs:

[2022-02-13 10:33:47,946: INFO/MainProcess] Task api.tasks.create_dynamic_mix[fcc56ab5-30ea-44f3-a5b4-c969b9c7bdbc] received
INFO:tensorflow:Apply unet for vocals_spectrogram
[2022-02-13 10:33:52,470: INFO/ForkPoolWorker-1] Apply unet for vocals_spectrogram
2022-02-13 10:33:52.539600: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-02-13 10:33:52.539653: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2022-02-13 10:33:52.539719: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (69eb1e621b67): /proc/driver/nvidia/version does not exist
WARNING:tensorflow:From /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:534:_colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
[2022-02-13 10:33:52,585: WARNING/ForkPoolWorker-1] From /usr/local/lib/python3.7/site-packages/tensorflow/python/keras/layers/normalization.py:534: _colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
INFO:tensorflow:Apply unet for drums_spectrogram
[2022-02-13 10:33:53,628: INFO/ForkPoolWorker-1] Apply unet for drums_spectrogram
INFO:tensorflow:Apply unet for bass_spectrogram
[2022-02-13 10:33:54,692: INFO/ForkPoolWorker-1] Apply unet for bass_spectrogram
INFO:tensorflow:Apply unet for other_spectrogram
[2022-02-13 10:33:55,773: INFO/ForkPoolWorker-1] Apply unet for other_spectrogram
[2022-02-13 10:33:58,728: WARNING/ForkPoolWorker-1] INFO:spleeter:Downloading model archive https://github.com/deezer/spleeter/releases/download/v1.4.0/4stems.tar.gz
[2022-02-13 10:33:58,727: INFO/ForkPoolWorker-1] Downloading model archive https://github.com/deezer/spleeter/releases/download/v1.4.0/4stems.tar.gz
[2022-02-13 10:34:03,811: WARNING/ForkPoolWorker-1] timed out
[2022-02-13 10:34:03,864: INFO/ForkPoolWorker-1] Task api.tasks.create_dynamic_mix[fcc56ab5-30ea-44f3-a5b4-c969b9c7bdbc] succeeded in 15.91590800700942s: None

No module named 'api.apps'

I used Docker to build the image, but port 8000 was occupied and the api container failed to start.

Creating spleeter-web_frontend_1    ... done
Creating spleeter-web_redis_1    ... done
Creating spleeter-web_celery-slow_1 ...
Creating spleeter-web_celery-fast_1 ...
Creating spleeter-web_api_1         ... error
Creating spleeter-web_celery-slow_1 ... done
Creating spleeter-web_celery-fast_1 ... done
Bind for 0.0.0.0:8000 failed: port is already allocated

ERROR: for api  Cannot start service api: driver failed programming external connectivity on endpoint spleeter-web_api_1 (78dfbd67a1d4afbe00a8f9e8145d01ada08d9c41fe3622ef1584334f64a7384c): Bind for 0.0.0.0:8000 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.

I entered the container, modified the port, and tried to start the container again. The web page shows ERR_CONNECTION_RESET, and the error "No module named 'api.apps'" appears in the container log

Applying migrations
Traceback (most recent call last):
  File "manage.py", line 21, in <module>
    main()
  File "manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
    django.setup()
  File "/usr/local/lib/python3.7/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python3.7/site-packages/django/apps/registry.py", line 91, in populate
    app_config = AppConfig.create(entry)
  File "/usr/local/lib/python3.7/site-packages/django/apps/config.py", line 212, in create
    mod = import_module(mod_path)
  File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'api.apps'
Starting server
Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 64, in wrapper
    fn(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/runserver.py", line 110, in inner_run
    autoreload.raise_last_exception()
  File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 87, in raise_last_exception
    raise _exception[1]
  File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 375, in execute
    autoreload.check_errors(django.setup)()
  File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 64, in wrapper
    fn(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/usr/local/lib/python3.7/site-packages/django/apps/registry.py", line 91, in populate
    app_config = AppConfig.create(entry)
  File "/usr/local/lib/python3.7/site-packages/django/apps/config.py", line 212, in create
    mod = import_module(mod_path)
  File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'api.apps'

Docker setup doesn't work.

What I did (on a VM dedicated to it):

git clone https://github.com/JeffreyCA/spleeter-web.git && cd spleeter-web && docker-compose -f docker-compose.yml -f docker-compose.dev.yml up > whatisthis.log

whatisthis.log

Also, please, this is unmaintainable. This setup assumes it has unlimited control of the VM/Docker. From a self-hosting perspective, it looks like a day or two to get running, and even more of a burden down the road.

EDIT: I'm not dead, but I also don't have any info on when I'll return to test and add more.

Add ability to cancel queued & in-progress tasks

Currently the delete button is disabled for YouTube source tracks whose tasks are queued or in progress.

Ideally, the delete action should revoke the corresponding task if it's in a queued state, and it should terminate the task if it's still in progress.

Huey supports the former, but does not have the ability to terminate a running task. Might be worthwhile to look at Celery as an alternative...
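
For reference, Celery's control API does support both cases. A minimal sketch (the broker URL, task ID, and app setup are placeholders, not the project's actual code):

    # Sketch of Celery's revoke API; placeholder broker URL and task ID
    from celery import Celery

    app = Celery("api", broker="redis://localhost:6379/0")
    task_id = "<id of the task to cancel>"

    # Remove a queued task so it never starts:
    app.control.revoke(task_id)

    # Forcefully terminate a task that is already running:
    app.control.revoke(task_id, terminate=True, signal="SIGTERM")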

Deploying error

Hi,

Just wondering why the following errors pop up when I run spleeter-web$ sudo docker-compose -f docker-compose.yml -f docker-compose.build.yml -f docker-compose.prod.yml up --build -d from step 4:
(screenshot of the error output omitted)

I am using Azure as my storage provider and have created the .env file as specified in step 3.

Thanks,
Derek

`502 Bad Gateway` + `Waiting for asset creation`

  • frontend:
    Starting frontend
    > [email protected] build /webapp/frontend
    > webpack --config webpack.prod.config.js
    Hash: 0234d98b2e6b059e2ee8
    Version: webpack 4.46.0
    Time: 53586ms
    Built at: 02/12/2022 3:02:48 PM
                  Asset      Size  Chunks                    Chunk Names
            favicon.ico  32.2 KiB          [emitted]
            favicon.svg   117 KiB          [emitted]
                main.js   188 KiB       0  [emitted]         main
             runtime.js  1.46 KiB       1  [emitted]         runtime
             vendors.js  1.33 MiB       2  [emitted]  [big]  vendors
    vendors.js.LICENSE.txt  4.27 KiB          [emitted]
    Entrypoint main [big] = runtime.js vendors.js main.js
    [+YzT] ./node_modules/react-bootstrap/esm/Nav.js + 3 modules 5.11 KiB {2} [built]
     |    4 modules
    [2YZa] ./src/index.tsx + 53 modules 271 KiB {0} [built]
     | ./src/index.tsx 843 bytes [built]
     | ./src/models/Separator.ts 987 bytes [built]
     | ./src/Constants.tsx 568 bytes [built]
     | ./src/Utils.tsx 1.96 KiB [built]
     | ./src/models/YouTubeLinkFetchStatus.ts 464 bytes [built]
     | ./src/models/PartId.ts 59 bytes [built]
     | ./src/models/MusicParts.ts 214 bytes [built]
     | ./src/svg/cancel.svg 483 bytes [built]
     | ./src/svg/remove.svg 599 bytes [built]
     | ./src/svg/restart.svg 463 bytes [built]
     |     + 44 hidden modules
    [6ctO] ./node_modules/react-bootstrap/esm/Navbar.js + 4 modules 11.2 KiB {2} [built]
     |    5 modules
    [7Tkq] ./node_modules/@jeffreyca/react-jinke-music-player/es/index.js + 36 modules 245 KiB {2} [built]
     |    37 modules
    [91je] ./node_modules/react-bootstrap/node_modules/@restart/hooks/esm/useWillUnmount.js + 1 modules 700 bytes {2} [built]
     |    2 modules
    [DjlD] ./node_modules/awesome-debounce-promise/dist/index.es.js + 2 modules 5.74 KiB {2} [built]
     |    3 modules
    [JUMO] ./node_modules/react-bootstrap/esm/ProgressBar.js + 1 modules 5.17 KiB {2} [built]
     |    2 modules
    [LhCv] ./node_modules/history/esm/history.js + 2 modules 30.6 KiB {2} [built]
     |    3 modules
    [QojX] ./node_modules/react-bootstrap/esm/Form.js + 13 modules 23.3 KiB {2} [built]
     |    14 modules
    [XlTo] ./node_modules/tone/build/esm/index.js + 917 modules 1.19 MiB {2} [built]
     |    918 modules
    [dDCJ] ./node_modules/react-bootstrap/esm/OverlayTrigger.js + 61 modules 103 KiB {2} [built]
     |    62 modules
    [dI71] ./node_modules/@babel/runtime/helpers/esm/inheritsLoose.js + 1 modules 481 bytes {2} [built]
     |    2 modules
    [sjrs] ./node_modules/react-bootstrap/esm/ListGroup.js + 1 modules 3.72 KiB {2} [built]
     |    2 modules
    [yLpj] (webpack)/buildin/global.js 472 bytes {2} [built]
    [zM5D] ./node_modules/react-bootstrap/esm/Modal.js + 18 modules 39.5 KiB {2} [built]
     |    19 modules
     + 1750 hidden modules
    exited with code 0
    
    • Shouldn't the frontend be built beforehand (in a Dockerfile) and included as part of the image(s)?
  • redis: 1:M 12 Feb 2022 15:01:34.804 * Ready to accept connections
  • (before, both celery):
    [2022-02-12 15:03:12,728: INFO/MainProcess] Connected to redis://redis:6379/0
    [2022-02-12 15:03:12,750: INFO/MainProcess] mingle: searching for neighbors
    [2022-02-12 15:03:13,798: INFO/MainProcess] mingle: all alone
    
  • celery-slow: [2022-02-12 15:03:12,984: INFO/MainProcess] celery@215ae8e2168e ready.
  • celery-fast: [2022-02-12 15:03:13,832: INFO/MainProcess] celery@e0e0e3e79c76 ready.
  • nginx: (returning 502)
    /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
    /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
    /docker-entrypoint.sh: Configuration complete; ready for start up
    2022/02/12 15:03:20 [notice] 1#1: using the "epoll" event method
    ...
    2022/02/12 15:03:25 [notice] 1#1: start worker process 60
    2022/02/12 15:03:33 [error] 23#23: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.0.2, server: , request: "GET / HTTP/1.1", upstream: "http://192.168.48.6:8000/", host: "redacted_domain"
    172.31.0.2 - - [12/Feb/2022:15:03:33 +0000] "GET / HTTP/1.1" 502 158 "-" "Mozilla/5.0 (Windows NT 10.0; rv:91.0) Gecko/20100101 Firefox/91.0" "172.31.0.1"
    2022/02/12 15:15:14 [error] 23#23: *6 connect() failed (113: Host is unreachable) while connecting to upstream, client: 172.31.0.2, server: , request: "GET / HTTP/1.1", upstream: "http://192.168.48.6:8000/", host: "redacted_domain"
    172.31.0.2 - - [12/Feb/2022:15:15:14 +0000] "GET / HTTP/1.1" 502 158 "-" "Mozilla/5.0 (Windows NT 10.0; rv:91.0) Gecko/20100101 Firefox/91.0" "172.31.0.1"
    
  • api: Waiting for asset creation (even after restarting the api container)

docker-compose.yml:

networks:
  redacted_domain:
    internal: true
services:
  redacted_domain-redis:
    image: redis:6.0-buster
    user: "65534:65534"
    expose:
      - "6379"
    volumes:
      - /db/redacted_domain:/data
    networks:
      - redacted_domain
    restart: "no"

  redacted_domain-nginx:
    build: https://github.com/JeffreyCA/spleeter-web.git#:nginx
    image: spleeter-web-nginx:antihardcode
#    image: jeffreyca/spleeter-web-nginx:latest
    volumes:
      - /data/redacted_domain/staticfiles:/webapp/staticfiles:rw
    depends_on:
      - redacted_domain-api
    environment:
      - API_HOST=redacted_domain-api
    networks:
      - redacted_domain
      - proxy # on port 80, expose to https://redacted_domain
    restart: "no"

  redacted_domain-api:
    image: jeffreyca/spleeter-web-backend:latest
    volumes:
      - /data/redacted_domain/staticfiles:/webapp/staticfiles:rw
      - /data/redacted_domain/frontassets:/webapp/assets:rw
      - /data/redacted_domain/sqlite:/webapp/sqlite:rw
    env_file:
      - redacted_domain.env
    depends_on:
      - redacted_domain-redis
      - redacted_domain-frontend
    expose:
      - 8000
    networks:
      - redacted_domain
    restart: "no"

  redacted_domain-celery-fast:
    image: jeffreyca/spleeter-web-backend:latest
    entrypoint: ./celery-fast-entrypoint.sh
    volumes:
      - /data/redacted_domain/celery:/webapp/celery:rw
      - /data/redacted_domain/pretrained_models:/webapp/pretrained_models:rw
      - /data/redacted_domain/sqlite:/webapp/sqlite:rw
    env_file:
      - redacted_domain.env
    depends_on:
      - redacted_domain-redis
    dns:
      - "9.9.9.9"
    networks:
      - redacted_domain
    restart: "no"

  redacted_domain-celery-slow:
    image: jeffreyca/spleeter-web-backend:latest
    entrypoint: ./celery-slow-entrypoint.sh
    volumes:
      - /data/redacted_domain/celery:/webapp/celery:rw
      - /data/redacted_domain/pretrained_models:/webapp/pretrained_models:rw
      - /data/redacted_domain/sqlite:/webapp/sqlite:rw
    env_file:
      - redacted_domain.env
    depends_on:
      - redacted_domain-redis
    networks:
      - redacted_domain
    restart: "no"

  redacted_domain-frontend:
    image: jeffreyca/spleeter-web-frontend:latest
    volumes:
      - /data/redacted_domain/frontassets:/webapp/assets:rw
    networks:
      - redacted_domain
    restart: "no"

env:

DJANGO_SETTINGS_MODULE=django_react.settings_docker
CELERY_BROKER_URL=redis://redacted_domain-redis:6379/0
CELERY_RESULT_BACKEND=redis://redacted_domain-redis:6379/0
APP_HOST=redacted_domain
DEFAULT_FILE_STORAGE=api.storage.FileSystemStorage
YOUTUBE_API_KEY=redacted
CPU_SEPARATION=1
CELERY_FAST_QUEUE_CONCURRENCY=3
CELERY_SLOW_QUEUE_CONCURRENCY=1

So, like attempt 5 on getting this running. It's stuck on waiting?

get_file_ext failed

The YouTube search feature still seems to work, but the download from YouTube seems to fail. I think some underlying conversion service this was relying on must have gone down. If I upload an MP3, the audio is split into parts; only the conversion from YouTube fails.

(screenshot omitted)

Unraid Support?

Any chance you might be able to add Unraid Support in the future?

I was interested in testing this in Unraid with GPU Support. I'm already using a GPU for encoding with Emby in Unraid and thought I might be able to add this in there.

Anaconda Navigator Start-Up

Hello All,

I installed Anaconda 3.8. I can open the Anaconda Prompt and import files, etc., but I can't open Anaconda Navigator.
I am facing the problem below.

Main Error
check_hostname requires server_hostname

Please suggest a solution.

Interrupted splits hang

  1. Start a job
  2. Stop containers
  3. Start containers

Expected: slow-celery resumes / restarts jobs OR show job failed
Actual: slow-celery sits idle, UI shows 'processing'

tensorflow errors and Error from chokidar (/webapp/frontend/node_modules...) errors

Hello @JeffreyCA ,
Hope you are doing well.

I am trying to build your spleeter-web on my local Mac machine and also on my Ubuntu server, but I always get the TensorFlow error and the chokidar error. You can see the error logs below.

P.S.: I did follow your instructions. I am starting spleeter-web with Docker.

TensorFlow errors log (both happened on the local Mac machine and the Ubuntu server):
| 2021-11-23 18:03:22.755058: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
api_1 | 2021-11-23 18:03:22.755269: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

Error from chokidar log (this happened on the Ubuntu server):

frontend_1 | Error from chokidar (/webapp/frontend/node_modules/@react-icons/all-files/ri): Error: ENOSPC: System limit for number of file watchers reached, watch '/webapp/frontend/node_modules/@react-icons/all-files/ri/RiFileList2Fill.esm.js'
frontend_1 | Error from chokidar (/webapp/frontend/node_modules/@react-icons/all-files/ri): Error: ENOSPC: System limit for number of file watchers reached, watch '/webapp/frontend/node_modules/@react-icons/all-files/ri/RiFileList2Fill.js'
frontend_1 | Error from chokidar (/webapp/frontend/node_modules/@react-icons/all-files/ri): Error: ENOSPC: System limit for number of file watchers reached, watch '/webapp/frontend/node_modules/@react-icons/all-files/ri/RiFileList2Line.d.ts'
frontend_1 | Error from chokidar (/webapp/frontend/node_modules/@react-icons/all-files/ri): Error: ENOSPC: System limit for

I am looking forward to your response!

Regards,
Peter

I tried the project many times but could not get it running

On Ubuntu 19/20, CentOS 7, and macOS 10.14/10.15, I found a problem from pip, like this:
Running command git clone -q https://github.com/JeffreyCA/youtube-dl.git /webapp/src/youtube-dl
Running command git checkout -q 3afecd1a8c3c4371369984137389d580210c3d15
fatal: reference is not a tree: 3afecd1a8c3c4371369984137389d580210c3d15
ERROR: Command errored out with exit status 128: git checkout -q 3afecd1a8c3c4371369984137389d580210c3d15 Check the logs for full command output.
WARNING: You are using pip version 20.1.1; however, version 20.3.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
ERROR: Service 'huey' failed to build : The command '/bin/sh -c pip install --upgrade pip -r requirements.txt' returned a non-zero code: 1

The tip says the error is that the pip3/pip version is out of date, but I checked my machine and my pip is 20.3.1.
So what is the root cause of the pip issue? I'm tired, and thanks for your project.

anaconda

I wrote the code in Anaconda but it does not run.

Support additional source separation models

Summary of other models

Model | Supported? | Paper | Source code | Vocals (SDR) | Drums (SDR) | Bass (SDR) | Other (SDR) | Avg (SDR) | Notes
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
Spleeter | Yes | Link | Yes | 6.55 | 5.93 | 5.10 | 4.24 | 5.46 |
Demucs | Yes | Link | Yes | 6.29 | 6.08 | 5.83 | 4.12 | 5.58 |
Conv-Tasnet | Yes | Link | Yes | 6.81 | 6.08 | 5.66 | 4.37 | 5.73 | Worse perceived quality than Demucs
X-UMX | Yes | Link | Yes | 5.53 | 6.33 | 4.54 | 6.50 | 5.73 | Slow CPU separation
D3Net | Yes | Link | Yes | 7.24 | 7.01 | 5.25 | 4.53 | 6.01 | Slow CPU separation
MMDenseLSTM | No | Link | Yes | 6.6 | 6.43 | 5.16 | 4.15 | 5.59 | No pretrained models
Meta-TasNet | No | Link | Yes | 6.4 | 5.91 | 5.58 | 4.19 | 5.52 | Issues with higher frequencies (sum of sources does not equal the original) (pfnet-research/meta-tasnet#4)
Nachmani et al. | No | Link | No | 6.92 | 6.15 | 5.88 | 4.32 | 5.82 |
LaSAFT | No | Link | Yes | 7.33 | 5.68 | 5.63 | 4.87 | 5.88 | Looks promising! Sum of sources does not equal the original (ws-choi/Conditioned-Source-Separation-LaSAFT#3 (comment))

Celery keeps failing: WorkerLostError('Worker exited prematurely: signal 9 (SIGKILL)')

Celery keeps crashing on my MacBook:

celery-slow_1  | [2021-08-25 23:24:53,573: ERROR/MainProcess] Process 'ForkPoolWorker-3' pid:27 exited with 'signal 9 (SIGKILL)'
celery-slow_1  | [2021-08-25 23:24:53,588: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 9 (SIGKILL) Job: 3.')
celery-slow_1  | Traceback (most recent call last):
celery-slow_1  |   File "/usr/local/lib/python3.7/site-packages/billiard/pool.py", line 1267, in mark_as_worker_lost
celery-slow_1  |     human_status(exitcode), job._job),
celery-slow_1  | billiard.exceptions.WorkerLostError: Worker exited prematurely: signal 9 (SIGKILL) Job: 3.

Does anyone know what causes this?

Anyway, thanks for this great work!

CAN'T RUN ON WINDOWS LOCAL MACHINE

Hi, I have followed all the procedures in the README for running the app, both with Docker and without Docker. Everything seems fine, with no errors, but http://0.0.0.0:8000 in my browser gives "This site can't be reached".

Also, can it be deployed on a VPS? How?

Thanks

Help with spleeter training

Hi! I'm inspired by your project, but I cannot find any information (a step-by-step algorithm) on how to train my own model on a collection of .wav tracks with Spleeter. I know something like .csv and .json files are needed. Can you help me please? I'll collect the .wav tracks and you can train the model in Spleeter. The 4stems model is good but doesn't work well.

Support dynamic mixing

Something similar to a mixing board where you can dynamically adjust the output levels of each component.

How to give access to other domains?

hey!
How do I allow CORS access from other domains? For instance, I want to use api.site.com as the backend and site.com as the frontend. I'll send requests via jQuery for downloading YouTube videos, creating mixes (separating), and getting URLs to the files.

But when I run the JS code from site.com, there is one problem:
Access to XMLHttpRequest at 'https://api.site.com/api/source-track/youtube/' from origin 'https://site.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

I tried to install django-cors-headers, but it doesn't work. Please help me.

Why spleeter doesn't work

Hello!
After uploading an MP3, the script waits, showing the message: In queue.
I have been waiting more than an hour, but nothing has happened.

This is the API's response:
{
  "id": "6222892a-bcae-49cb-bed0-5a193361421e",
  "source_track": "2ffae5d6-62e4-4af1-8d63-834ab3ad9301",
  "separator": "spleeter",
  "extra_info": [
    "256 kbps",
    "4 stems (16 kHz)"
  ],
  "artist": "Taq tara",
  "title": "Kisla",
  "vocals_url": "",
  "other_url": "",
  "bass_url": "",
  "drums_url": "",
  "status": "Queued",
  "error": "",
  "date_created": "2021-12-13T10:40:47.057898Z"
}
