Ajki's scripts & guides
Home Page: https://ajkis.github.io/scripts/
Download & Install Virtualbox host: https://www.virtualbox.org/wiki/Downloads
Download Alpine Linux Standard: https://www.alpinelinux.org/downloads/
Open Virtualbox and create new virtual machine:
Type: Linux
Version: Other Linux (64 bit)
Base Memory: 1024 MB minimum; I would recommend at least 2048 MB
Storage: 1 GB, dynamically allocated (even after rclone, samba etc. it will still be below 500 MB)
Storage Optical: Connect alpine ISO image.
Network: Bridged Adapter ( so you can assign your own static IP )
Disable Audio, Serial ports
Boot the virtual machine and, once at the terminal, set up your hostname, IP and DNS by typing:
setup-hostname
setup-interfaces
setup-dns
Afterwards, restart the network:
/etc/init.d/networking restart
Install Alpine Linux to your HDD:
setup-alpine
Update Alpine:
apk update
apk upgrade
Install fuse
apk add fuse
Optional
apk add unionfs-fuse
Install Samba
apk add samba samba-common-tools
Before setting up rclone, install CA certificates and set the correct time:
mkdir -p /etc/ssl/certs/
curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
ntpclient -s -h pool.ntp.org
Setup rclone and mount folders
Share the rclone mount over Samba by adding a dedicated Samba user and creating smb.conf.
Create new user:
adduser -H -D mysambauser   # BusyBox adduser on Alpine: -H = no home dir, -D = no password
smbpasswd -a mysambauser
Create config
vi /etc/samba/smb.conf
[global]
workgroup = WORKGROUP
dos charset = cp850
unix charset = ISO-8859-1
[cloud]
comment = TV backup
path = /path/to/rclone/mount
browsable = yes
writable = yes
read only = no
guest ok = no
write list = mysambauser
read list = mysambauser
valid users = mysambauser
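Before starting the service, the config can be validated with testparm, which ships with the samba package (a quick syntax check; -s skips the interactive prompt):

```shell
testparm -s /etc/samba/smb.conf
```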
Add samba to run at startup
rc-update add samba
Start samba:
rc-service samba start
Consider adding your rclone mount script to local.d so it runs at boot; enable the local service with:
rc-update add local default
Now create the rclone mount script:
vi /etc/local.d/rclonemount.start
#!/bin/sh
# Alpine ships BusyBox ash, not bash; note the line continuations
rclone mount \
  --read-only \
  --allow-non-empty \
  --allow-other \
  --max-read-ahead 14G \
  --acd-templink-threshold 0 \
  --checkers 16 \
  --quiet \
  --stats 0 \
  rcloneremote:/ /path/to/mount/ &
exit 0
vi /etc/local.d/rclonemount.stop
#!/bin/sh
fusermount -uz /path/to/mount/
exit 0
You can read more about local.d : cat /etc/local.d/README
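One detail the local.d mechanism requires (easy to miss): the scripts must be executable, or OpenRC's local service will skip them:

```shell
chmod +x /etc/local.d/rclonemount.start /etc/local.d/rclonemount.stop
```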
Optionally, you can enable SSH and run VirtualBox in headless mode.
To enable root SSH login:
vi /etc/ssh/sshd_config
Set : PermitRootLogin yes
Restart SSH:
/etc/init.d/sshd restart
To start your virtual machine in headless mode, create a new batch file with the content below:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "alpine" --type "headless"
exit
To stop it, or better yet stop it and save its last state, create another batch file:
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" controlvm "alpine" savestate
exit
Once your VirtualBox machine is running, you can access it by typing in Windows Explorer:
\\THE-IP\NameOfShare ( in our case cloud )
Use the username and password you set for Samba.
scripts/plex/plex-library-stats.sh
Line 79 in f100a23
Since one of the last server updates, this also needs:
and part.extra_data not like '%deepAnalysisVersion=3%'
It has a minimum age to always keep, a maximum age to always delete, and stays within the configured free space for the chunks in between:
#!/bin/bash
if pidof -o %PPID -x "$0"; then
    echo "Already running, exit"
    exit 1
fi

PLEXDRIVETEMP=/tmp/plexdrive/GoogleAJ/chunks
MINDISKSPACE=50000000   # in KB, as reported by df -k
FORCEMINAGE=120         # minutes: chunks newer than this are always kept
FORCEMAXAGE=1440        # minutes: chunks older than this are always deleted

# Human-readable size from a KB count
humanize() {
    echo "$1" | awk '{ sum=$1; hum[1024*1024]="Gb"; hum[1024]="Mb";
        for (x=1024*1024; x>=1024; x/=1024) { if (sum>=x) { printf "%.2f %s\n", sum/x, hum[x]; break } } }'
}

CURDISKSPACE=$(df -k "$PLEXDRIVETEMP" | tail -1 | awk '{print $4}')
echo "Starting with $(humanize "$CURDISKSPACE") free diskspace"
echo "Starting with $(humanize "$(du -s "$PLEXDRIVETEMP" | awk '{print $1}')") used chunkspace"

# Delete anything past the hard maximum age
find "$PLEXDRIVETEMP" -mmin +$FORCEMAXAGE -amin +$FORCEMAXAGE -cmin +$FORCEMAXAGE -type f -delete

# While below the free-space target, delete the oldest chunk past the minimum age
while [[ $CURDISKSPACE -le $MINDISKSPACE ]]
do
    IFS= read -r -d $'\0' line < <(find "$PLEXDRIVETEMP" -mmin +$FORCEMINAGE -amin +$FORCEMINAGE -cmin +$FORCEMINAGE -type f -printf '%T@ %p\0' 2>/dev/null | sort -z -n)
    CHUNK="${line#* }"
    if [[ -z "$CHUNK" ]]; then
        break
    fi
    echo "$CHUNK"
    rm "$CHUNK"
    CURDISKSPACE=$(df -k "$PLEXDRIVETEMP" | tail -1 | awk '{print $4}')
done

find "$PLEXDRIVETEMP" -mindepth 1 -empty -delete

CURDISKSPACE=$(df -k "$PLEXDRIVETEMP" | tail -1 | awk '{print $4}')
echo "Ended with $(humanize "$CURDISKSPACE") free diskspace"
echo "Ended with $(humanize "$(du -s "$PLEXDRIVETEMP" | awk '{print $1}')") used chunkspace"
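The awk size-humanizer used above tends to get mangled when pasted (the original "hum[10242]" and "x=10243" were exponent expressions); a reconstructed version, written with 1024*1024 for portability, can be sanity-checked in isolation against df -k style KB counts:

```shell
humanize() {
  echo "$1" | awk '{ sum=$1; hum[1024*1024]="Gb"; hum[1024]="Mb";
    for (x=1024*1024; x>=1024; x/=1024) { if (sum>=x) { printf "%.2f %s\n", sum/x, hum[x]; break } } }'
}

humanize 2048       # 2048 KB
humanize 50000000   # the script's MINDISKSPACE threshold, in KB
```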
I saw this error several times in your scripts.
example
https://github.com/ajkis/scripts/blob/master/plex/plex-scan-new.sh
export LD_LIBRARY_PATH=/usr/lib/plexmediaserver
[...]
$LD_LIBRARY_PATH/usr/lib/plexmediaserver/Plex\ Media\ Scanner
should be
export LD_LIBRARY_PATH=/usr/lib/plexmediaserver
$LD_LIBRARY_PATH/Plex\ Media\ Scanner
I am using rclone-mount-check.sh, which works great, but I was wondering if there is an official way to have it restart the Plex docker container after a successful mount, e.g. have it run "docker restart plex"?
I get this error: 2017/05/01 08:47:01 ERROR : tmp/2017-05-01_08-44-01-plex.list: File.Open failed: open for read: bad response: 404: 404 Not Found
2017/05/01 08:48:02 ERROR : tmp/2017-05-01_08-44-01-plex.list: File.Open failed: open for read: bad response: 404: 404 Not Found
At that point I Ctrl-C, since the script just hangs.
plex-analyzedeeply.cli line #13 is cut off
c.execute('select meta.id from metadata_items meta join media_items media on media.metadata_item_id = meta.id join media_parts part on part.media_item_id = media.id where part.extra_data not like "%deepAnalysisVersion=2%" and meta.metadata_type i$
Hi,
I'm using your rclone-upload script which does the job well, apart from leaving behind a ton of empty folders.
Am I doing something wrong or is there something I can edit to ensure the folders are removed please?
Thanks
Getting some strange errors when running this script. Seems to work fine on TV shows but not on movies.
files2folders.py: movie: /share/plexmedia/plexmedia/Movies/Boy, The (2016).mkv -> /share/plexmedia/plexmedia/Movies/Boy, The (2016)/Boy, The (2016).mkv
files2folders.py: OS error: [Errno 18] Invalid cross-device link - retrying
files2folders.py: OS error: [Errno 18] Invalid cross-device link - retrying
files2folders.py: OS error: [Errno 18] Invalid cross-device link - aborting
Ubuntu 16.04, running under root context.
From experience running Plex/rclone on Kubernetes, you should make sure to run rclone under nice (especially with encrypted volumes):
nice -n -10 rclone mount (etc....)
Hello,
do you know if it is possible to also log the name of each uploaded file on this line?
echo "$(date "+%d.%m.%Y %T") RCLONE UPLOAD FINISHED IN $(($(date +'%s') - $start)) SECONDS" | tee -a $LOGFILE
Thanks
CHECKFILEPATH="mountcheck"
What should I put there?
A file that is inside my rclone mount?
plexdrive-rebuildcache.sh needs to be updated since it doesn't work at all for plexdrive 5.0
At least the command to start plexdrive is missing the "mount" parameter
I got this issue when I try to run the new rclone mount script.
Any ideas?
./rclone-mount-check.sh: line 51: [: : integer expression expected
echo "($wi) Waiting for mount $mount"
c=$(($c + 1))
if [ "$wi" -ge 4 ] ; then break ; fi
sleep 1
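The "integer expression expected" error usually means the variable in the numeric test is empty: in the snippet quoted above, the loop increments c but tests $wi. A sketch with a single counter name, initialized before the loop (the mount path here is hypothetical):

```shell
wi=0                                  # initialize so [ "$wi" -ge 4 ] never sees an empty string
MPOINT="/tmp/missing-mount"           # hypothetical mount point that never appears
while [ ! -f "$MPOINT/mountcheck" ]; do
  wi=$((wi + 1))                      # increment the same variable the test reads
  echo "($wi) Waiting for mount $MPOINT"
  if [ "$wi" -ge 4 ]; then break; fi
  sleep 1
done
echo "gave up after $wi checks"
```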
I configured plex-scan-new.sh, but when I execute the script, no new movies or series are found.
./plexscannew.sh
09.08.2017 13:49:52 PLEX SCAN FOR NEW/MODIFIED FILES AFTER: Wed Aug 9 13:31:00 CEST 2017
Removing previous folder list
Scaning for new files: /mnt/media/movies/
09.08.2017 13:49:53 Movie files scanned in 1 seconds
Scaning for new files: /mnt/media/tvseries/
09.08.2017 13:50:51 TV folders scanned in 58 seconds
09.08.2017 13:50:51 Move & TV folders scanned in 59 seconds
09.08.2017 13:50:51 Setting lastrun for next folder scans
09.08.2017 13:50:51 Remove duplicates
sort: cannot read: /home/plex/.cache/folderlistfile: No such file or directory
09.08.2017 13:50:51 Plex scan started
09.08.2017 13:50:51 Plex scan finished in 0 seconds
09.08.2017 13:50:51 Scan completed in 59 seconds
Does anybody have an idea?
I can run copyspeedtest and it shows ~10 MB/s, but if I delete the saved file and redo it, I get ~130 MB/s, suggesting it is cached somewhere. The first download takes a few seconds to start, while the second starts immediately.
1: Is there a way to keep this from caching the file, or some other way to get a true reading? For now I have it rotate through a list of files.
2: Would it be helpful to run this prior to playing a file? I ask because I am trying to fight the occasional bottleneck between ACD and my VPS (although there really shouldn't be one). It would make sense to do a pre-download when the ACD connection slows down: if, when Plex tries to play or transcode, it doesn't have to get the file from ACD because it's already in cache (or RAM, or wherever). If this is all true, is there a way to run this script before Plex does its thing?
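For (1): if the repeat run is being served from the local page cache, dropping it between runs gives a truer reading (a sketch; Linux-specific, requires root, and only clears the local cache, not anything on Amazon's side):

```shell
sync                                  # flush dirty pages first
echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes
```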
This seems to get some of the stuff, but especially for TV shows I notice it will not update when a new episode has been added. Am I missing something? My gut tells me this is because the episode is added to the second-level folder (i.e. the season folder), so the TV show folder itself doesn't get modified. Is there a way to correct this?
I don't fully understand what the script is doing. Also, I don't have unionfs. Can I get it working to match my setup?
When registering the rclone-mount.service I get this:
The unit files have no installation config (WantedBy, RequiredBy, Also, Alias
settings in the [Install] section, and DefaultInstance for template units).
This means they are not meant to be enabled using systemctl.
Solution:
add RequiredBy=plexmediaserver.service
to the [Install] section in rclone-mount.service
The code if [[ -f "$MPOINT/$CHECKFILE" ]]; then
should be written as if [[ -f "$MPOINT/$CHECKFILEPATH" ]]; then
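A minimal sketch of the corrected check, with hypothetical paths (the real script's variables and remount logic may differ):

```shell
MPOINT="/tmp/demo-mount"      # hypothetical mount point
CHECKFILEPATH="mountcheck"    # file expected to exist inside a healthy mount

# simulate a healthy mount for the demo
mkdir -p "$MPOINT" && touch "$MPOINT/$CHECKFILEPATH"

if [ -f "$MPOINT/$CHECKFILEPATH" ]; then
  echo "mount ok"
else
  echo "mount missing -- remount would go here"
fi
```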
I'm using your plexgeneratefolderlist/plexrefreshlist scripts. It's working pretty well so far :)
If I'm downloading a whole season, there will be multiple identical paths in plex.list.
Example:
/local/The 100/Season 3/E1.mkv
/local/The 100/Season 3/E2.mkv
/local/The 100/Season 3/E3.mkv
will be in list:
/fuse/The 100/Season 3
/fuse/The 100/Season 3
/fuse/The 100/Season 3
so it will scan the season folder as many times as it has episodes.
Could you add something to avoid multiple season paths? (I'm not a scripter :/)
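A sketch of one way to collapse the duplicates before scanning (the list file location here is hypothetical; sort -u keeps one copy of each path):

```shell
LIST=/tmp/plex.list   # hypothetical location of the generated list

# example list with the duplicated season paths from above
cat > "$LIST" <<'EOF'
/fuse/The 100/Season 3
/fuse/The 100/Season 3
/fuse/The 100/Season 3
EOF

sort -u "$LIST" -o "$LIST"   # deduplicate in place
cat "$LIST"
```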
Just trying to understand: under what circumstances do the normal Plex server processes (Scan/Refresh and Scheduled Tasks) not perform these functions automatically, resulting in the need to run them manually via scripts? Is this only necessary due to complications with ACD-hosted content?
The variable name is incorrect as per title, checkfilepath. It is called in the script as CHECKFILE.
By changing this, the script works correctly.
Mount path and CHECKFILE are concatenated to test whether the drive exists, hence ‘path’ looks like a legacy element that has been removed from the script, but variable names have not been updated, hope this helps someone.
I'm trying to use the rclone-upload.cron script. If I run it manually from the CLI it works fine, but running it from crontab gives the following errors (this is from the log file, so one error per run; below I also show it running fine manually). On Ubuntu Linux.
Ideas?
Crontab entry is
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
/home/owner/rclone-upload.cron: 14: read: arg count
root@Krandor:/home/owner# /home/owner/rclone-upload.cron
19.03.2017 02:27:37 RCLONE UPLOAD STARTED
From line 38: find "$TVLIBRARY" -mindepth 2 -type d -cmin -cnewer $LASTRUNFILE -exec
It is returning: find: invalid argument `-cnewer' to `-cmin'
Additional question for this line: TVLIBRARY="/mnt/cloud/series/"
If there is a space in the folder name does the fact that is in quotes take care of that or do I need to add a "\" as in TVLIBRARY="/mnt/cloud/tv\ series/" ?
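To the quoting question: the double quotes already handle spaces; adding a backslash inside quotes would make find look for a literal backslash in the name. A quick check with a hypothetical path:

```shell
TVLIBRARY="/tmp/tv series"              # hypothetical library path containing a space
mkdir -p "$TVLIBRARY/Show A/Season 1"

# quoting "$TVLIBRARY" at every use is all that's required
find "$TVLIBRARY" -mindepth 2 -type d
```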
I've been using plex-library-stats to see that I have media missing analysis information. Even after I run both analyze scripts, it shows the same number of objects missing data. Am I doing something wrong? Do I need to shut down Plex before running them?
5127 files in library
1141 files missing analyzation info
0 media_parts marked as deleted
0 metadata_items marked as deleted
0 directories marked as deleted
1365 files missing deep analyzation info.
Running it with Python 3 on Ubuntu 16.04, and it fails on movies with commas in the file name. Simple enough for me to fix (I went through and removed all the commas).
Edit: Nope, something else.
Before creating a new rclone crypt, make sure you have an rclone remote (Amazon Drive, Google Drive, etc.).
To create rclone remote type:
rclone config
Press n for New Remote
Set name eg: acd ( for amazon drive )
Choose type of drive ( Amazon Drive, Google Drive, Dropbox etc... )
Leave client_id, client_secret empty.
Proceed with authorization.
Once you have your rclone remote you can proceed with creation of crypt drive.
rclone config
Press n for New Remote
Press 5 for Encrypt/Decrypt a remote
Set name eg: acdcrypt
Set path eg your original remote name + folder where encrypted files will be acd:/crypt
Press 2 for standard encryption ( filenames and content )
Press Y to set your own password ( do not forget it )
Press Y to set your own salt password ( different from previous one, dont forget it )
Set 128 for password strength
Confirm passwords and remote.
Now you can use the rclone copy/sync/move commands to upload your files to the encrypted drive.
Example:
rclone move /source path/ acdcrypt:/ -c --no-traverse --transfers=300 --checkers=300 --delete-after --log-file=/var/log/rclone-upload.log
The source path can be anything, e.g. local files or an existing mount (for example, if you want to switch from EncFS encryption to crypt, just set the source path to your decrypted EncFS mount). If you are switching from EncFS, I would suggest using rclone copy:
rclone copy /source path/ acdcrypt:/ -c --no-traverse --transfers=300 --checkers=300 --log-file=/var/log/rclone-upload.log
To mount your new ACD crypt remote, use:
rclone mount --allow-non-empty --allow-other acdcrypt: /path/acdcrypt/ &
My Plex server is running inside the linuxserver Docker container. My first question: would it be better for performance to run it directly on my server? Also, inside Docker it is not possible to use your SQLite database optimization. What exactly does that do? Faster loading of covers etc., or does it improve playback?
Thanks for your help
Should there be a "/" after series below?
#SETTINGS
MAXTIME=30 # Folders newer then xx minutes.
MOVIELIBRARY="/mnt/cloud/movies/"
MOVIESECTION=2
TVLIBRARY="/mnt/cloud/series"
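To the trailing-slash question: for find(1), which these scripts use to walk the library, it makes no practical difference whether the path ends in /. A quick check with hypothetical directories:

```shell
mkdir -p /tmp/cloud/series/Show

with_slash=$(find /tmp/cloud/series/ -type d | wc -l)
no_slash=$(find /tmp/cloud/series -type d | wc -l)
echo "$with_slash $no_slash"   # both traversals visit the same directories
```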
I know you are in the process of converting to a single solution of rclone for both mount and encryption (crypt). I have been using rclone crypt for a while to upload, and all my ACD content is encrypted with it. However, my Plex server currently uses an ACD_CLI mount that handles the actual streaming/communication with ACD, with an rclone crypt mount layered on top for decrypting the content locally (and unionfs-fuse layered on top of that). This has been working relatively well, but I would like to simplify...
Current setup: ACD (crypt) ---> ACD_CLI mount ---> rclone mount (decrypt only) ---> unionfs-fuse mount ---> Plex server
Proposed setup: ACD (crypt) ---> rclone mount (stream & decrypt) ---> unionfs-fuse mount ---> Plex server
I setup an additional Rclone mount (with crypt) directly to ACD utilizing your recommended settings:
rclone mount \
--read-only \
--allow-non-empty \
--allow-other \
--acd-templink-threshold 0 \
--checkers 16 \
--quiet \
--stats 0 \
crypt: /mnt/acdclone &
Everything appears OK when comparing the contents of this new "direct" rclone mount with the old "indirect" rclone mount (via ACD_CLI). However, I noticed that the timestamps between the two are not the same. They seem to differ by my timezone offset (UTC-6). For example:
"Indirect" Rclone mount (utilizing ACD_CLI):
5867220390 Jan 31 08:12 Content1.mkv
New "direct" Rclone mount:
5867220390 Jan 31 02:12 Content1.mkv
Amazon Cloud Drive site (via browser):
pg3in27v7k8adai3gp5qu9mfh6hjp 5.5 GB January 31 2:12 AM
Have you seen this behavior in all your research/testing? It appears on the surface that the "direct" Rclone mount time is more correct because it matches the actual time that I uploaded the file to ACD based on my timezone.
In order to move forward in my testing, my next step would be to point my existing unionfs-fuse mount at this new "direct" rclone mount. Because of the timestamp differences outlined above, I believe my Plex server will be unhappy and want to do a full refresh of all this "changed" content. As you know, this is extremely slow for an entire library when the content is on ACD. Also, if I am unhappy with the results of this "direct" rclone mount, switching back will cause another full refresh. Any ideas? Ways to avoid a full refresh each time I switch back and forth for A/B testing?
Hi,
I have some questions about the script in question.
I have to run plexgeneratefolderlist first, and plexrefreshfolderlist second.
This should tell Plex to scan only new folders, right?
But what if I add a file to a folder that already exists?
I'm thinking of the TV series folder, when I add an episode to a season folder.
Thanks
Please don't use absolute home folders in your scripts, e.g.:
"/home/plex/.cache"
https://github.com/ajkis/scripts/blob/master/plex/plexgeneratefolderlist.sh
Analyze and deep analyze include subtitles, so the script keeps listing and scanning the same files over and over again. The items should also be grouped so the same item isn't rescanned multiple times.
I tried to run rclone as a service and found this systemctl service:
https://github.com/ajkis/scripts/blob/master/rclone/rclone-mount.service
system@TrsAtlas:/etc/systemd/system$ sudo systemctl enable rclone-mount.service
The unit files have no installation config (WantedBy, RequiredBy, Also, Alias
settings in the [Install] section, and DefaultInstance for template units).
This means they are not meant to be enabled using systemctl.
Possible reasons for having this kind of units are:
1) A unit may be statically enabled by being symlinked from another unit's
.wants/ or .requires/ directory.
2) A unit's purpose may be to act as a helper for some other unit which has
a requirement dependency on it.
3) A unit may be started when needed via activation (socket, path, timer,
D-Bus, udev, scripted systemctl call, ...).
4) In case of template units, the unit is meant to be enabled with some
instance name specified.
Due to my lack of systemctl knowledge, I do not really know what I should do with this.
What am I missing?
1.) I noticed that your script initiates 2 requests: first to analyze, and then secondly to refresh the metadata. Why do the metadata refresh afterwards... is there any benefit?
2.) For the 1st request (analyze) you use PUT, which works fine. But for the 2nd request (refresh metadata) you use GET, which doesn't work; I get HTTP 400. If I change it to PUT it works fine. I'm assuming GET worked once upon a time but stopped working in a later Plex version; I just wanted to check with you that it wasn't deliberate.
ACD mounted with RCLONE and ENCFS on top
VPS with 8 ram 8 cores
when running this command
$LD_LIBRARY_PATH/Plex\ Media\ Scanner --scan --refresh --section "$TVSECTION" --directory "$FOLDER"
it starts by printing out a bunch of UNKNOWN TYPE 7 and UNKNOWN TYPE 49 for a few minutes before showing GUI messages. What are these unknowns? This only happens when scanning TV Seasons folders and takes much longer than Movies. There are only .mp4 files in the Season folders. Here are example scan times.
08.05.2017 21:57:21 [PLEX SCAN] [ 39 ] /home/jason/plex/media/Movies2/The.Kings.Highway.2016.WEBRip.x264-Ltu
08.05.2017 21:58:31 [PLEX SCAN] [ 70 ] /home/jason/plex/media/Movies2/The Philadelphia Experiment (2012)
08.05.2017 22:11:20 [PLEX SCAN] [ 769 ] /home/jason/plex/media/TV/Gotham/Season 3
08.05.2017 22:28:31 [PLEX SCAN] [ 1031 ] /home/jason/plex/media/TV/Kevin Can Wait/Season 1
08.05.2017 22:35:51 [PLEX SCAN] [ 440 ] /home/jason/plex/media/TV/Outcast/Season 2
08.05.2017 22:45:49 [PLEX SCAN] [ 598 ] /home/jason/plex/media/TV/Supergirl/Season 2
So my question about this is, are these unknowns normal and are the scan times normal?
In my Google Drive the folder's Modified date isn't changing, but the file's is.
I can change the depth to 3 and grab the changed file that way, but how would I pull out just the directory to put into a log file so it gets scanned by Plex?
I'm not very familiar with scripting, so I apologize.
--file does not look like it works for Plex.
Hello,
just a question.
I have my library on gdrive mounted with rclone.
Do you think running plex-analyze.cli could get me banned?
Does it work only locally on the DB, or does it also touch the library?
Thanks
This may have already been mentioned, but how do you manage dir-cache-time with the plexupdatenew.cron script?
By default the folder cache time is set to 5 min. The trigger time (cron side) may fall after the file creation date (server copy side), so the library scan is never triggered.
Do you see the issue? 😨