
cleanarr's People

Contributors

alanoll, bassrock, dependabot[bot], jzucker2, peter-mcconnell, polzy, se1exin, suika


cleanarr's Issues

Stuck on "deleting items"

I selected just two duplicate movies and clicked "delete selected items", but Cleanarr is just spinning on "deleting items". It's been about 20 minutes so far, so I doubt it's just being slow.

Plex > Library > media deletion is enabled.
The URL and token are clearly working, since it could pull the data from that library at all, right?

Running as a Docker container in Unraid with Privileged enabled.

Screenshot of what I see in the Cleanarr page:
https://i.imgur.com/pyvR6tg.png

Finds dupes, but fails to delete them.

Issue is in the title, pretty much. Finds 48.2 GB worth of dupes in my Movies library, but fails to delete them. Running on Docker via docker-compose. Here's the last part of the log:

spawned uWSGI master process (pid: 12)
spawned uWSGI worker 1 (pid: 16, cores: 1)
spawned uWSGI worker 2 (pid: 17, cores: 1)
running "unix_signal:15 gracefully_kill_them_all" (master-start)...
2022-01-11 17:11:07,916 INFO success: quit_on_failure entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[pid: 16|app: 0|req: 1/1] 192.168.1.124 () {42 vars in 1620 bytes} [Tue Jan 11 17:11:23 2022] GET /server/info => generated 76 bytes in 24 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.124 - - [11/Jan/2022:17:11:23 +0000] "GET /server/info HTTP/1.1" 200 76 "http://192.168.1.121:5001/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36" "-"
[pid: 17|app: 0|req: 1/2] 192.168.1.124 () {42 vars in 1624 bytes} [Tue Jan 11 17:11:23 2022] GET /content/dupes => generated 133871 bytes in 6448 msecs (HTTP/1.1 200) 3 headers in 107 bytes (1 switches on core 0)
192.168.1.124 - - [11/Jan/2022:17:11:30 +0000] "GET /content/dupes HTTP/1.1" 200 133871 "http://192.168.1.121:5001/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36" "-"
2 Fast 2 Furious 1816
/movies/2 Fast 2 Furious (2003) Open Matte (1080p Web-DL x265 HEVC 10bit AAC 5.1 RN) [UTR].mkv
[pid: 16|app: 0|req: 2/3] 192.168.1.124 () {48 vars in 1743 bytes} [Tue Jan 11 17:11:41 2022] POST /delete/media => generated 180 bytes in 77 msecs (HTTP/1.1 500) 4 headers in 161 bytes (1 switches on core 0)

Failed to load content! - Error

I'm running Cleanarr on Unraid. It worked when I first set it up, but after deleting a few files and restarting, it now errors out with:
"Failed to load content!
local variable 'movie' referenced before assignment"

Logs show "main 2022-06-12 20:34:23,514 ERROR main.py:internal_error local variable 'movie' referenced before assignment"
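For reference, "local variable 'movie' referenced before assignment" is Python's UnboundLocalError, which typically comes from a variable that is only assigned inside a conditional or loop branch. A minimal sketch of the pattern and its guard, using hypothetical names (this is not Cleanarr's actual code):

```python
def find_movie(videos, key):
    # BUG: 'movie' is only assigned when a match is found
    for video in videos:
        if video["key"] == key:
            movie = video
            break
    return movie  # UnboundLocalError when nothing matched


def find_movie_safe(videos, key):
    movie = None  # guard: always bound, even when nothing matches
    for video in videos:
        if video["key"] == key:
            movie = video
            break
    return movie
```

If a lookup ever comes up empty (for example, an item deleted between scans), the unguarded version fails exactly like the log line above.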

Thanks!

Error after most recent update

After pulling the most recent update and adding the /config directory, I am getting

{"error":"local variable 'episode' referenced before assignment"}

when accessing the /content/dupes endpoint.

My Plex seems to be fine, and this only started after the Cleanarr update, so I assume it's related.

"Failed to load content!"

The /content/dupes handler is returning the error:

Failed to load content!
Please check your Plex settings and try again

Here's where the error is thrown:

const renderErrorAlert = () => {
  return (
    <Alert
      intent="danger"
      title="Failed to load content!"
    >
      {loadingError ? loadingError.message : 'Please check your Plex settings and try again.'}
    </Alert>
  )
}

The HTTP error is 504 (gateway timeout).

Here's the docker-compose file I'm using:

version: '3.3'
services:
  cleanarr:
    image: selexin/cleanarr:latest
    ports:
      - "5050:80"
    environment:
      - BYPASS_SSL_VERIFY=0
      - PLEX_TOKEN=<redacted>
      - PLEX_BASE_URL=http://<LAN_IP>:32400
      - LIBRARY_NAMES=Movies;TV Shows;
      - PLEX_TIMEOUT=7200
      - PAGE_SIZE=50
    restart: always
    volumes:
      - /mnt/cache/appdata/cleanarr:/config

Too Many Dups - Unusable

Okay, so yes, I know I'm going to get the "how could you have this issue" rants...

However, I have a ton of duplicates - so many, in fact, that the page sometimes will not load and times out.

I am wondering if there is any way to quickly add a feature to only run/show the first X dups.

Any way to add pagination and only show X dups per page?
I really, really want to use this!
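For what it's worth, offset-based paging along these lines would cap each response to a fixed number of dupes (a hypothetical helper, not Cleanarr's actual API):

```python
def paginate(items, page, page_size=50):
    """Return the given zero-indexed page of items."""
    start = page * page_size
    return items[start:start + page_size]
```

Slicing past the end of the list just returns an empty list, so the last (partial or empty) page needs no special-casing.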

Failed to load content!

Failed to load content!
[Errno 2] No such file or directory: '/config/db.json'

I've tried multiple times with different settings, but can't get past this.

Any idea?
Screenshot 2023-04-03 at 9 22 53 am
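A possible workaround, assuming the error simply means the mounted config directory starts out empty: pre-create the host directory you map to /config and seed an empty JSON document so the backend finds db.json. The path below is a placeholder, and whether Cleanarr then populates the file correctly is an assumption:

```shell
# Hypothetical host path - substitute whatever you map to /config
CONFIG_DIR=./cleanarr-config
mkdir -p "$CONFIG_DIR"
# Seed an empty JSON document only if the file doesn't already exist
[ -f "$CONFIG_DIR/db.json" ] || echo '{}' > "$CONFIG_DIR/db.json"
```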

Failed to load content! Delimiter

Everything worked fine: with some trial and error I got the libraries scanned and saw the duplicates. I removed all of them and am now getting this error. I cannot figure out what it is.

With debugging I can see the screenshots below, but I cannot pinpoint where the error might be.

Screenshot 2023-05-14 at 09 48 52 Screenshot 2023-05-14 at 09 46 45

Add persistent storage

There are a number of feature requests that require some form of persistent storage:

  • #11 - Store space saved
  • #19 - Store ignored media items
  • #47 - Store ignored media items
  • #50 - Cache results (although caching may not need to be persisted - in-memory may suffice)

SQLite is probably the most robust/contribution-friendly option - TinyDB was used instead.

Re: storing ignored items. Since the Plex API doesn't support excluding items in a search request, paging will need to be implemented to skip over the ignored items - Implemented
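TinyDB keeps everything in a single JSON document on disk; a stdlib-only sketch of that storage model (hypothetical schema, purely illustrative, not Cleanarr's actual code):

```python
import json
import os


class JsonStore:
    """Minimal TinyDB-style store: one JSON file holding named tables of records."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def insert(self, table, record):
        # Append to the named table and rewrite the whole document, as TinyDB does
        self.data.setdefault(table, []).append(record)
        with open(self.path, "w") as f:
            json.dump(self.data, f)

    def search(self, table, **match):
        # Return records whose fields equal all the given keyword filters
        return [r for r in self.data.get(table, [])
                if all(r.get(k) == v for k, v in match.items())]
```

The whole-file rewrite on every insert is the relevant trade-off versus SQLite: simple, but a crash mid-write leaves a corrupt document.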

Timing out?

I have a couple of really large directories, and it looks like there is a timeout while collecting the directory information. I've tried a larger page-size setting and a super low one, but it seems that maybe the scan itself is just taking too long to complete?

It seems to be about 110 seconds (give or take a couple) from the beginning of the load each time this occurs. I changed a couple of timeout settings in the browser (Mozilla Firefox), but it didn't have an impact. In fact, I closed the browser and the scan continued to run until the roughly 110-second mark. So it seems to be something on the backend, not the UI directly.

Any thoughts?

plexwrapper 2023-01-10 11:28:34,603 DEBUG plexwrapper.py:get_dupe_content Found media: plex://episode/5d9c110102391c001f5e6386
database 2023-01-10 11:28:34,603 DEBUG database.py:get_ignored_item content_key /library/metadata/30531
plexwrapper 2023-01-10 11:28:34,642 DEBUG plexwrapper.py:get_dupe_content Get results from offset 1730 to limit 1740
Tue Jan 10 11:28:34 2023 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /content/dupes (ip 192.168.30.111) !!!
Tue Jan 10 11:28:34 2023 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /content/dupes (192.168.30.111)
OSError: write error
[pid: 16|app: 0|req: 4/8] 192.168.30.111 () {42 vars in 651 bytes} [Tue Jan 10 11:26:43 2023] GET /content/dupes => generated 0 bytes in 111019 msecs (HTTP/1.1 200) 3 headers in 0 bytes (0 switches on core 0)

Thanks!

Fail to scan a TV series Library

If I remove this library, everything works well with my other libraries, but as soon as I add the "Series TV" library (the last one), I get this error:

Failed to load content!
Please check your Plex settings and try again.

The library has 3673 TV Series and 47717 Episodes

my docker compose file is

cleanarr:
  image: selexin/cleanarr
  container_name: cleanarr
  hostname: cleanarr
  ports:
    - "5000:80"
  environment:
    - BYPASS_SSL_VERIFY=1
    - PLEX_TOKEN=_Z7ioEDxcV8kkcTccmhs
    - PLEX_BASE_URL=http://10.0.3.2:32400
    - LIBRARY_NAMES=Conciertos;Animación TV;Anime TV;Cortos Animación;Documentales TV;Películas;Películas 4K;Películas Anime;Películas Documentales;Series 4K;Series ES;Series TV
  volumes:
    - /root/cleanarr:/config
  restart: unless-stopped
sonarr:
  image: lscr.io/linuxserver/sonarr
  container_name: sonarr
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Europe/Madrid
  volumes:
    - /root/sonarr:/config
    - /srv/mergerfs/FractalMerger/MergeLimpio:/Media #optional

Thanks for the help

Ignore Specific File/False Positive

Great tool, but I've run into an issue where I have 50 false positives, so the queue is jammed. I need to be able to mark items as ignored or false positives; otherwise I am now stuck at this part of the library.

Add ability to ignore certain media items

Hi, I love this tool. I was hoping it would be possible to ignore certain files/episodes/movies. For example, I have two versions of each Game of Thrones episode (one from the 1080p Blu-ray and another from the 4K Blu-ray) and would like to keep both (to prevent 4K transcoding). The same goes for the movie Tangled. This causes the queue to fill up (as it is limited to 50 items), so I can't review additional items (plus skimming the same items every time is draining).

Failed to load content! (500) internal_server_error;

I'm getting a weird issue when running on Unraid. Whenever I set up the container I give it the correct information, but no matter what I do, I always hit the same issue:

Failed to load content!
(500) internal_server_error; http://192.168.xx.xx:32400/ <html><head><title>Internal Server Error</title></head><body><h1>500 Internal Server Error</h1></body></html

Screenshot_1
Screenshot_2

Any help would be greatly appreciated! Thanks!

Failed to load content! Expecting ',' delimiter: line 1 column 69 (char 68)

I keep getting this error once Cleanarr has run and deleted some files... It was working, but then it hung on a large delete. I have since tried changing the directories and turning on pagination, but nothing is helping. I can't figure out what this error means exactly, nor how to fix it...

Screenshot 2023-01-05 at 2 26 52 PM

here is part of the log file about this:

plexwrapper 2023-01-05 14:26:15,156 DEBUG plexwrapper.py:get_dupe_content Get results from offset 0 to limit 45
plexwrapper 2023-01-05 14:26:15,162 DEBUG plexwrapper.py:init Connected to Plex!
plexwrapper 2023-01-05 14:26:15,162 DEBUG plexwrapper.py:init Initializing DB...
database 2023-01-05 14:26:15,162 DEBUG database.py:init DB Init
database 2023-01-05 14:26:15,163 DEBUG database.py:init DB Init Success
plexwrapper 2023-01-05 14:26:15,163 DEBUG plexwrapper.py:init Initialized DB!
database 2023-01-05 14:26:15,163 DEBUG database.py:get_deleted_size library_name Movies
main 2023-01-05 14:26:15,164 ERROR main.py:internal_error Expecting ',' delimiter: line 1 column 69 (char 68)
[pid: 16|app: 0|req: 6/7] 192.168.33.111 () {40 vars in 619 bytes} [Thu Jan 5 14:26:15 2023] GET /server/deleted-sizes => generated 64 bytes in 11 msecs (HTTP/1.1 500) 3 headers in 122 bytes (1 switches on core 0)
192.168.33.111 - - [05/Jan/2023:14:26:15 -0500] "GET /server/deleted-sizes HTTP/1.1" 500 64 "http://192.168.33.18/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:108.0) Gecko/20100101 Firefox/108.0" "-"
[pid: 16|app: 0|req: 7/8] 192.168.33.111 () {40 vars in 593 bytes} [Thu Jan 5 14:26:15 2023] GET /favicon.ico => generated 15406 bytes in 1 msecs via sendfile() (HTTP/1.1 200) 7 headers in 277 bytes (0 switches on core 0)
192.168.33.111 - - [05/Jan/2023:14:26:15 -0500] "GET /favicon.ico HTTP/1.1" 200 15406 "http://192.168.33.18/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:108.0) Gecko/20100101 Firefox/108.0" "-"
plexwrapper 2023-01-05 14:26:15,704 DEBUG plexwrapper.py:get_dupe_content Found media: plex://movie/5d7768265af944001f1f6977
database 2023-01-05 14:26:15,704 DEBUG database.py:get_ignored_item content_key /library/metadata/43677
main 2023-01-05 14:26:15,705 ERROR main.py:internal_error Expecting ',' delimiter: line 1 column 69 (char 68)
[pid: 17|app: 0|req: 2/9] 192.168.33.111 () {40 vars in 605 bytes} [Thu Jan 5 14:26:15 2023] GET /content/dupes => generated 64 bytes in 575 msecs (HTTP/1.1 500) 3 headers in 122 bytes (1 switches on core 0)
192.168.33.111 - - [05/Jan/2023:14:26:15 -0500] "GET /content/dupes HTTP/1.1" 500 64 "http://192.168.33.18/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:108.0) Gecko/20100101 Firefox/108.0" "-"
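"Expecting ',' delimiter" is the message Python's json module raises when it parses a malformed or truncated JSON document, which would be consistent with a partially written /config/db.json (that this is what happened here is a guess on my part). A quick reproduction of the error class:

```python
import json

# A JSON document that lost a comma - e.g. a file truncated or corrupted mid-write
broken = '{"deleted_sizes": [123 456]}'
try:
    json.loads(broken)
    error_message = None
except json.JSONDecodeError as exc:
    error_message = str(exc)  # "Expecting ',' delimiter: line 1 column ..."
```

If that is the cause, resetting the stored file (stopping the container and replacing db.json with an empty JSON document) typically clears the error, at the cost of losing the saved deletion stats.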

Any help would be appreciated!

Unable to start "Failed to load content!"

I've just set this up for the first time and cannot get it to work for me. I also can't find a Discord to ask for support.

  • I've confirmed 'allow media deletion' is enabled
  • found my Plex token
  • started via docker-compose
version: '3'

services:

  cleanarr:
    image: selexin/cleanarr:latest
    container_name: cleanarr
    hostname: cleanarr
    ports:
      - 8885:80
    environment:
      - BYPASS_SSL_VERIFY=1
      - PLEX_TOKEN=myplextoken
      - PLEX_BASE_URL=http://192.169.1.93:32400
      - LIBRARY_NAMES=Movies;Movies Kids;TV Shows;TV Shows Kids; TV Shows Older Kids;TV Shows Solo
    volumes:
      - /home/cleanarr:/config
    restart: unless-stopped

The container is within an LXC (ubuntu) on a proxmox node on IP 192.168.1.92

log files https://pastebin.com/2HLQZ9g9

Can't start after most recent update

Traceback (most recent call last):
  File "./main.py", line 6, in <module>
    from flask import Flask, jsonify, request, send_file
  File "/usr/local/lib/python3.7/site-packages/flask/__init__.py", line 14, in <module>
    from jinja2 import escape
  File "/usr/local/lib/python3.7/site-packages/jinja2/__init__.py", line 12, in <module>
    from .environment import Environment
  File "/usr/local/lib/python3.7/site-packages/jinja2/environment.py", line 25, in <module>
    from .defaults import BLOCK_END_STRING
  File "/usr/local/lib/python3.7/site-packages/jinja2/defaults.py", line 3, in <module>
    from .filters import FILTERS as DEFAULT_FILTERS  # noqa: F401
  File "/usr/local/lib/python3.7/site-packages/jinja2/filters.py", line 13, in <module>
    from markupsafe import soft_unicode
ImportError: cannot import name 'soft_unicode' from 'markupsafe' (/usr/local/lib/python3.7/site-packages/markupsafe/__init__.py)
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. GAME OVER ***
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:

#! /usr/bin/env sh

# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head

2023-04-17 10:00:15,868 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
2023-04-17 10:00:15,868 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2023-04-17 10:00:15,903 INFO RPC interface 'supervisor' initialized
2023-04-17 10:00:15,903 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2023-04-17 10:00:15,904 INFO supervisord started with pid 1
2023-04-17 10:00:16,907 INFO spawned: 'quit_on_failure' with pid 8
2023-04-17 10:00:16,911 INFO spawned: 'nginx' with pid 9
2023-04-17 10:00:16,914 INFO spawned: 'uwsgi' with pid 10
2023-04-17 10:00:16,918 INFO success: nginx entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2023-04-17 10:00:16,918 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2023-04-17 10:00:18,404 INFO success: quit_on_failure entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-04-17 10:00:20,203 INFO exited: uwsgi (exit status 22; not expected)
2023-04-17 10:00:21,206 WARN received SIGTERM indicating exit request
2023-04-17 10:00:21,206 INFO waiting for quit_on_failure, nginx to die
2023-04-17 10:00:22,230 INFO stopped: nginx (exit status 0)
2023-04-17 10:00:22,231 INFO stopped: quit_on_failure (terminated by SIGTERM)

Running on Unraid Docker.

Display available or not in GUI

Hi,

would it be easy to add the status of each file (available or not) in the frontend?

I see that you already have it in the backend:

"accessible": media_part.accessible,

but the frontend doesn't make use of it.

Having a button to autoselect all unavailable files would be a great addition too.

Thanks

SSL verification failed

I attempted to set this up, but my server requires a secure connection, and it appears that your code doesn't bypass SSL verification (in this case, using the private IP returns a plex.direct SSL certificate). Something like the following works:

import requests
from plexapi.server import PlexServer

# Disable certificate verification for the whole session
session = requests.Session()
session.verify = False
pms = PlexServer(url, token, session=session)

Keep a lifetime running total of space saved

Originally requested in #10 (keep a lifetime running total of space saved, either per server and/or per library), this issue was created to split the work out into its own item due to its potential additional complexity.

Add support for TV shows

As per #34 - Only Movies libraries are supported at the moment. Support for TV shows should be added as well.

Please allow special characters in libraries names

I have some libraries with the "é" character in their names - any tips to make them work?
As soon as I add them, I get this in the log:

today at 20:42:02 2021-06-12 18:42:02,581 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
today at 20:42:02 2021-06-12 18:42:02,581 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
today at 20:42:11 10.0.0.65 - - [12/Jun/2021:18:42:11 +0000] "GET /movies/samples HTTP/1.1" 499 0 "http://10.0.0.150:5554/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0" "-"
today at 20:42:11 10.0.0.65 - - [12/Jun/2021:18:42:11 +0000] "GET /movies/samples HTTP/1.1" 499 0 "http://10.0.0.150:5554/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0" "-"
today at 20:42:11 10.0.0.65 - - [12/Jun/2021:18:42:11 +0000] "GET /movies/dupes HTTP/1.1" 499 0 "http://10.0.0.150:5554/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0" "-"
today at 20:42:11 10.0.0.65 - - [12/Jun/2021:18:42:11 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0" "-"
today at 20:42:11 10.0.0.65 - - [12/Jun/2021:18:42:11 +0000] "GET /static/js/2.6919e901.chunk.js HTTP/1.1" 304 0 "http://10.0.0.150:5554/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0" "-"
today at 20:42:11 10.0.0.65 - - [12/Jun/2021:18:42:11 +0000] "GET /static/js/main.7782c274.chunk.js HTTP/1.1" 304 0 "http://10.0.0.150:5554/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0" "-"
today at 20:42:12 spawned uWSGI worker 3 (pid: 17, cores: 1)
today at 20:42:13 Sat Jun 12 18:42:13 2021 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /movies/dupes (ip 10.0.0.65) !!!
today at 20:42:13 Sat Jun 12 18:42:13 2021 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /movies/dupes (10.0.0.65)
today at 20:42:13 OSError: write error
today at 20:42:13 [pid: 17|app: 0|req: 1/1] 10.0.0.65 () {42 vars in 865 bytes} [Sat Jun 12 18:42:12 2021] GET /movies/dupes => generated 0 bytes in 467 msecs (HTTP/1.1 500) 3 headers in 0 bytes (0 switches on core 0)
today at 20:42:13 Sat Jun 12 18:42:13 2021 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /movies/samples (ip 10.0.0.65) !!!
today at 20:42:13 Sat Jun 12 18:42:13 2021 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /movies/samples (10.0.0.65)
today at 20:42:13 OSError: write error
today at 20:42:13 [pid: 16|app: 0|req: 1/2] 10.0.0.65 () {42 vars in 869 bytes} [Sat Jun 12 18:42:06 2021] GET /movies/samples => generated 0 bytes in 6458 msecs (HTTP/1.1 500) 3 headers in 0 bytes (0 switches on core 0)
today at 20:42:13 [pid: 17|app: 0|req: 2/3] 10.0.0.65 () {44 vars in 896 bytes} [Sat Jun 12 18:42:13 2021] GET /movies/dupes => generated 51 bytes in 266 msecs (HTTP/1.1 500) 3 headers in 122 bytes (1 switches on core 0)
today at 20:42:13 10.0.0.65 - - [12/Jun/2021:18:42:13 +0000] "GET /movies/dupes HTTP/1.1" 500 51 "http://10.0.0.150:5554/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0" "-"
today at 20:42:16 Sat Jun 12 18:42:16 2021 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /movies/samples (ip 10.0.0.65) !!!
today at 20:42:16 Sat Jun 12 18:42:16 2021 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /movies/samples (10.0.0.65)
today at 20:42:16 OSError: write error
today at 20:42:16 [pid: 15|app: 0|req: 1/4] 10.0.0.65 () {42 vars in 869 bytes} [Sat Jun 12 18:42:09 2021] GET /movies/samples => generated 0 bytes in 6758 msecs (HTTP/1.1 500) 3 headers in 0 bytes (0 switches on core 0)
today at 20:42:16 worker 2 killed successfully (pid: 16)
today at 20:42:16 uWSGI worker 2 cheaped.

and the web interface says

"Failed to load movies!
'Show' object has no attribute 'media' "

PlexApi issue (media.delete() resulting in Bad Request)

Hello,

(note: this is NOT an issue with this codebase)

Upon testing this software I noticed many 400 Bad Requests when attempting to delete duplicates. On inspection I noticed this was due to media.delete():

https://github.com/se1exin/Cleanarr/blob/master/backend/main.py#L75
https://github.com/se1exin/Cleanarr/blob/master/backend/plexwrapper.py#L131

Digging further I realised this is actually an issue on the Plex end:

curl -XDELETE http://10.0.0.41:32400/library/metadata/9393/media/24939
<html><head><title>Bad Request</title></head><body><h1>400 Bad Request</h1></body></html>

My question is: is this a common issue / have solutions to this problem already been identified? My kneejerk reaction is that it either needs to be fixed on the Plex end, or the .delete() method needs to be avoided in favour of a separate means of deleting these files from disk (which means figuring out access between this service and the disk). I've validated that my server allows deletions, though it's possible I've missed something else.

Thought I'd ask here before thinking about it too much more. I appreciate this isn't an issue with Cleanarr. A quick search around led me to https://github.com/ngovil21/Plex-Cleaner/blob/master/PlexCleaner.py#L498 but not sure there's much worth taking here.

OSError: write error

Hey guys, thanks for this wonderful tool! I'm using it with a huge series library of 3000 items and getting this error during the first run.
Smaller libraries work quite well :)
Any ideas how to solve this? Do you need further information?
Thanks in advance.
Harry

2023-07-07 11:47:22 Fri Jul 7 09:47:22 2023 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /content/dupes?page=15 (ip 172.17.0.1) !!!
2023-07-07 11:47:22 Fri Jul 7 09:47:22 2023 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 306] during GET /content/dupes?page=15 (172.17.0.1)
2023-07-07 11:47:22 OSError: write error
2023-07-07 11:47:22 [pid: 16|app: 0|req: 5/19] 172.17.0.1 () {52 vars in 906 bytes} [Fri Jul 7 09:47:18 2023] GET /content/dupes?page=15 => generated 0 bytes in 3276 msecs (HTTP/1.1 200) 3 headers in 0 bytes (0 switches on core 0)

Call it "Cleanarr"!

That's it. Not really an issue, but just a thought. It'd fit perfectly, and I think more people than just myself would find it funny as hell. ;)

Either way - great work, I'm gonna give this a go now. :D

FEATURE REQUEST: Allow libraries other than “Movies”

First - thanks for this. The idea is spot on and there a lot of potential here.

After a bit of struggling to get this working, I discovered the app is hard-coded to expect a single library called “Movies”. My temporary workaround was to use Docker to map the app/plex folder to a local directory where I downloaded and modified classes.py.

Any chance to allow the user to use their own library names? And ideally support more than 1 library?

Feature Request: API to support cached results

Right now, if you want to use the content/dupes API to grab data, it takes 5+ seconds due to the backend querying Plex, etc.
It would be nice if there were a way to grab this data from the last-made query in order to reduce time spent on the API.
Or, separately/additionally, have a cached API that returns the total size of the default-selected duplicates to delete and the total count of duplicates.
These are useful stats for dashboard apps, monitoring, etc.
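One way to serve such stats cheaply is a time-to-live cache in front of the expensive Plex query, so repeat requests within the window return the stored result; a minimal sketch (hypothetical, not Cleanarr's actual code):

```python
import time


def ttl_cache(ttl_seconds):
    """Cache a zero-argument function's result for ttl_seconds."""
    def decorator(fn):
        state = {"value": None, "expires": 0.0}

        def wrapper():
            now = time.monotonic()
            if now >= state["expires"]:
                # Cache miss or expired: recompute and remember the result
                state["value"] = fn()
                state["expires"] = now + ttl_seconds
            return state["value"]
        return wrapper
    return decorator
```

Wrapping the dupe-scan in something like `@ttl_cache(300)` would make dashboard polling hit Plex at most once every five minutes.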

Docker - check your Plex settings and try again

Hi,
I setup cleanarr from docker compose (as defined on the instruction page) on QNAP container station:

version: '3'

services:

  cleanarr:
    image: selexin/cleanarr:latest
    container_name: cleanarr
    hostname: cleanarr
    ports:
      - "5000:80"
    environment:
      - BYPASS_SSL_VERIFY=1
      - PLEX_TOKEN=somerandomstring
      - PLEX_BASE_URL=https://192.168.1.26:32400
      - LIBRARY_NAMES=Movies;TV
    volumes:
      - /share/Container-Data/cleanarr-new:/config
    restart: unless-stopped

Upon first run, this error is presented. What could have gone wrong?

Failed to load content!
Please check your Plex settings and try again.

Thanks

Error: Failed to load content!

It was working, but now I'm getting this error:

Failed to load content!
Expecting ',' delimiter: line 1 column 44 (char 43)

Not sure how to debug this.

Looking for maintainer(s) for Cleanarr

As I no longer have the time needed to maintain this project, and I no longer use Plex, I am looking for a maintainer to take over this project.

Please respond to this issue if you are interested in maintaining this project.

Large library causes timeout (504)

172.17.0.1 - - [09/Feb/2020:17:17:17 +0000] "GET /movies/dupes HTTP/1.1" 504 569 "http://localhost:5000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36" "-"

Looks like you're using a standard timeout.

Failed to load content! HTTPConnectionPool(host='192.168.X.X', port=32400): Read timed out. (read timeout=30)

Hi,
I'm running the Docker container on Unraid and I'm getting the following error:

"Failed to load content!
HTTPConnectionPool(host='192.168.X.XXX', port=32400): Read timed out. (read timeout=30)"

The logs state:
192.168.x.x - - [05/Jun/2022:17:27:31 +1000] "GET /server/deleted-sizes HTTP/1.1" 200 13 "http://192.168.x.y:5000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36" "-"
[pid: 16|app: 0|req: 1/4] 192.168.x.x () {44 vars in 2747 bytes} [Sun Jun 5 17:27:30 2022] GET /content/dupes => generated 100 bytes in 48158 msecs (HTTP/1.1 500) 3 headers in 123 bytes (1 switches on core 0)
192.168.x.x - - [05/Jun/2022:17:28:19 +1000] "GET /content/dupes HTTP/1.1" 500 100 "http://192.168.x.y:5000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.61 Safari/537.36" "-"

It might be an issue with library size, as I have 1000+ movies?
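The "read timeout=30" in the error is the HTTP read timeout firing before the scan finishes. Other compose files in this thread set a PLEX_TIMEOUT environment variable; assuming the backend reads it as seconds, the wiring would look something like this (a hypothetical helper, not Cleanarr's actual code):

```python
import os


def plex_timeout(default=30):
    """Read PLEX_TIMEOUT (seconds) from the environment, falling back to default."""
    try:
        return int(os.environ.get("PLEX_TIMEOUT", default))
    except (TypeError, ValueError):
        return default
```

If Cleanarr honours that variable, setting it well above 30 in the container environment may be enough to let a 1000+ movie scan complete.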

I'm an idiot and need instruction please

Hi, I pulled Cleanarr from Docker Hub. I'm looking to find out which file is the config file where I would enter my data (IP, token, and library names).
Thank you in advance!

Feature Request: Replicate Buttons

Please consider replicating all the buttons found at the top of the page at the bottom as well. It would save the effort of scrolling back to the top of the page.

how does this pick dupes

I find this brilliant, but the only problem I have is that it is only picking up 3 movies as dupes when I have a lot more than 3 dupes, lol.

I'm also wondering how this picks up dupes. I have had it running for more than 4 hours, and it won't pick up any more than 3 movies as dupes.

Misc Quick Suggestions

A few misc suggestions that are likely quick to implement:

  • Display the plex server name
  • Maybe even make the name above link to the plex server’s web app (or add a little link arrow/icon next to the name)
  • Keep a lifetime running total of space saved (either per server and or per library)
  • Thumbnails for each movie would spice up the plain table view (might slow down the UI unacceptably, unknown)
  • Maybe even link the movie names to the movie’s page in the plex server’s web app (or add a little link arrow/icon next to the name)

Failed to load content! HTTPConnectionPool(host='144.22.X.X', port=32400): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f60835c7910>: Failed to establish a new connection: [Errno 113] No route to host'))

How can I solve this error?

Failed to load content!
HTTPConnectionPool(host='144.22.X.X', port=32400): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f60835c7910>: Failed to establish a new connection: [Errno 113] No route to host'))

Failed to load content! Please check your Plex settings and try again.

I have verified the Plex IP and Token are correct, multiple times, but no luck. Any ideas?

Docker Command:
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='Cleanarr' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="HomeSRV" -e HOST_CONTAINERNAME="Cleanarr" -e 'PLEX_BASE_URL'='http://192.168.1.20:32400' -e 'PLEX_TOKEN'='REMOVEDREMOVED' -e 'LIBRARY_NAMES'='Movies' -e 'BYPASS_SSL_VERIFY'='1' -e 'PAGE_SIZE'='2' -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:80]/' -l net.unraid.docker.icon='https://raw.githubusercontent.com/Alphacosmos/unraid-templetes/main/Images/plex-library-cleaner.ico' -p '5000:80/tcp' -v '/mnt/user/appdata/plex-library-cleaner':'/config':'rw' 'selexin/cleanarr'
be729993ea2b2b1ebf624234b23ceb2cef80efacdc92d178a3903aabc3298a08

The command finished successfully!

Docker Log:

Run migrations

alembic upgrade head

/usr/lib/python2.7/dist-packages/supervisor/options.py:461: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2022-05-23 16:23:22,899 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2022-05-23 16:23:22,899 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2022-05-23 16:23:22,908 INFO RPC interface 'supervisor' initialized
2022-05-23 16:23:22,908 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2022-05-23 16:23:22,908 INFO supervisord started with pid 1
2022-05-23 16:23:23,910 INFO spawned: 'quit_on_failure' with pid 10
2022-05-23 16:23:23,911 INFO spawned: 'nginx' with pid 11
2022-05-23 16:23:23,912 INFO spawned: 'uwsgi' with pid 12
2022-05-23 16:23:23,914 INFO success: nginx entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2022-05-23 16:23:23,914 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
[uWSGI] getting INI configuration from /app/uwsgi.ini
[uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini

;uWSGI instance configuration
[uwsgi]
cheaper = 2
processes = 16
ini = /app/uwsgi.ini
module = main
callable = app
ini = /etc/uwsgi/uwsgi.ini
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
hook-master-start = unix_signal:15 gracefully_kill_them_all
need-app = true
die-on-term = true
show-config = true
;end of configuration

*** Starting uWSGI 2.0.20 (64bit) on [Mon May 23 16:23:23 2022] ***
compiled with version: 8.3.0 on 21 April 2022 12:42:40
os: Linux-5.15.40-Unraid #1 SMP Mon May 16 10:05:44 PDT 2022
nodename: 8f7cbbabeca2
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 8
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
your processes number limit is 127338
your memory page size is 4096 bytes
detected max file descriptor number: 40960
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.7.13 (default, Apr 20 2022, 19:13:06) [GCC 8.3.0]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x557db2f3bad0
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1239640 bytes (1210 KB) for 16 cores
*** Operational MODE: preforking ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x557db2f3bad0 pid: 12 (default app)
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 12)
spawned uWSGI worker 1 (pid: 16, cores: 1)
spawned uWSGI worker 2 (pid: 17, cores: 1)
running "unix_signal:15 gracefully_kill_them_all" (master-start)...
2022-05-23 16:23:25,313 INFO success: quit_on_failure entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
192.168.1.231 - - [23/May/2022:16:23:25 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
[pid: 16|app: 0|req: 1/1] 192.168.1.231 () {44 vars in 2697 bytes} [Mon May 23 16:23:25 2022] GET /server/deleted-sizes => generated 13 bytes in 15 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.231 - - [23/May/2022:16:23:25 -0700] "GET /server/deleted-sizes HTTP/1.1" 200 13 "http://192.168.1.20:5000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
[pid: 16|app: 0|req: 2/2] 192.168.1.231 () {44 vars in 2679 bytes} [Mon May 23 16:23:25 2022] GET /server/info => generated 79 bytes in 7 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.231 - - [23/May/2022:16:23:25 -0700] "GET /server/info HTTP/1.1" 200 79 "http://192.168.1.20:5000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
[pid: 16|app: 0|req: 3/3] 192.168.1.231 () {44 vars in 2697 bytes} [Mon May 23 16:23:25 2022] GET /server/deleted-sizes => generated 13 bytes in 7 msecs (HTTP/1.1 200) 3 headers in 103 bytes (1 switches on core 0)
192.168.1.231 - - [23/May/2022:16:23:25 -0700] "GET /server/deleted-sizes HTTP/1.1" 200 13 "http://192.168.1.20:5000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
2022/05/23 16:24:25 [error] 13#13: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.231, server: , request: "GET /content/dupes HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock", host: "192.168.1.20:5000", referrer: "http://192.168.1.20:5000/"
192.168.1.231 - - [23/May/2022:16:24:25 -0700] "GET /content/dupes HTTP/1.1" 504 569 "http://192.168.1.20:5000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.49 Safari/537.36 Edg/102.0.1245.14" "-"
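The 504 above fires exactly 60 seconds after the `/content/dupes` request started, which matches nginx's default upstream read timeout for uWSGI. A possible workaround is to raise that timeout in the container's nginx config — this is a sketch, not a confirmed cleanarr setting; the `location` block shape is an assumption, but `uwsgi_read_timeout` is a standard nginx directive and the socket path is taken from the log above:

```nginx
# Inside the nginx server block that fronts uWSGI (e.g. /etc/nginx/conf.d/*.conf).
# uwsgi_read_timeout defaults to 60s; scanning a large library for dupes can exceed it.
location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;  # socket path from the uWSGI log above
    uwsgi_read_timeout 300s;          # give slow /content/dupes scans time to finish
}
```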

Stuck on "check Plex settings and try again"

Hello, good afternoon!

I've installed Docker with the selexin image, but I keep hitting the same error over and over. The IP, token, and library name are all correct, but something odd happens with library names that contain spaces or accent marks: the container works fine for libraries whose names have no spaces and no accents, but shows the "check Plex settings and try again" error for names with spaces and accent marks.

--- concerts: runs fine
--- películas 4K: fails

Note the space and the accent mark in the "películas 4K" library name.

Thanks for reading
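The failure pattern above (plain ASCII names work, names with spaces or accents fail) suggests the library name may be reaching the Plex API without proper percent-encoding. A minimal sketch of what correct encoding looks like — the helper name is hypothetical and not part of cleanarr:

```python
from urllib.parse import quote

def encode_library_name(name: str) -> str:
    """Percent-encode a Plex library name for use in an API URL.

    Spaces and non-ASCII characters (e.g. accented letters) must be
    UTF-8 encoded and then percent-escaped, otherwise the Plex API
    will not match the library name.
    """
    return quote(name, safe="")

print(encode_library_name("concerts"))      # passes through unchanged
print(encode_library_name("películas 4K"))  # pel%C3%ADculas%204K
```

Note that higher-level clients such as python-plexapi normally handle this encoding internally; the sketch only illustrates why an unencoded "películas 4K" would fail where "concerts" succeeds.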
