
Comments (57)

bdschuster commented on May 14, 2024

OK, so I figured the containers out. I did try the sleep, and even went as high as 2 minutes, with the same results. After a reboot, the rclean script is still running (we already knew that) and continues to run until it's killed, which got me thinking about what it's hung up on. After deep diving into the syslogs, I noticed that since @reboot is in crontab -e, the script runs as my username, meaning it must be hanging on a sudo password prompt. I ran sudo visudo, added <myusername> ALL=NOPASSWD: ALL to the end of the file, and saved. Now I can run sudo commands without being prompted for a password. Rebooted again (without the sleep in cron) and everything came up as it should. Rebooted 3 times, still no issues.
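
For anyone following along, the two pieces of that fix look roughly like this (a sketch; <myusername> is a placeholder, and the script path comes from later in this thread):

# added at the end of the sudoers file via: sudo visudo
# lets the boot-time script call sudo without prompting for a password
<myusername> ALL=NOPASSWD: ALL

# per-user crontab entry (crontab -e) that runs the cleanup script at boot
@reboot /opt/Gooby/scripts/cron/rclean.sh > /dev/null 2>&1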

TechPerplexed commented on May 14, 2024

Hiya @bdschuster :)

Let me start with some bad news... although you should be able to import your Plex database, you will need to change the paths of your libraries.

Unfortunately Plex WILL rescan your content to some degree and depending on the size of your libraries, that might take a while (days).

I believe there should be a way to change the paths in the database itself, but that's not something I have tried myself.

What I'd probably do is create a backup of the Plex database, restore it on a server that charges hourly (Hetzner or Scaleway) and then install Gooby and import Plex.

Then change the paths, do your scanning, create a backup and restore that on your main server.

It'll probably cost you $0.50 for a few days of usage but it's by far the safest method :)

bdschuster commented on May 14, 2024

Thanks! I think I'm going to just try using a symlink. Anything you know of that's wrong with that? I mean, I will fix it eventually, but for now.

TechPerplexed commented on May 14, 2024

I don't think there is anything wrong with using a symlink, but I'm not sure how that would solve your problem. I assume that you currently use /media/Plex (the old location for the mount), right? We'd have to fool the container into using that location (the actual mount doesn't matter).

You could try to edit the yaml file (the one located in /var/local/Gooby/Docker/components) and change the line - ${MEDIA}:/Media to - ${MEDIA}:/media/Plex

That way at least Plex won't have to rescan... or so I assume. I wouldn't attempt it on your main server without testing it first, but theoretically it should work without having to rescan anything!

bdschuster commented on May 14, 2024

It actually looks like it's using /media/Google currently.

TechPerplexed commented on May 14, 2024

Ahh ok - so yes, edit the 20-plex.yaml file and change that line to /media/Google. I don't envy you having to update a working (and primary) system... you better warn your users 😰
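
A sketch of that edit; the volumes line is the one quoted above, but the surrounding lines of 20-plex.yaml are assumptions:

# /var/local/Gooby/Docker/components/20-plex.yaml
volumes:
  - ${MEDIA}:/media/Google   # was: - ${MEDIA}:/Media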

bdschuster commented on May 14, 2024

Last question (maybe... lol): will the old backups restore to the new location correctly? And does the old backup include the Tautulli data?

TechPerplexed commented on May 14, 2024

No, the old backup only backed up Plex I'm afraid, and it will just restore it to the old Plex location. However, the system should offer to import your old Plex, Tautulli, and Sonarr/Radarr databases (= copy them to the new location)

Wishing you luck! Make sure you have 20 backups just in case, ok? I'd hate for you to lose anything in case it didn't go as planned... I mean, it should, but eh, computers have a mind of their own!

bdschuster commented on May 14, 2024

Any idea what could be going on here? This is what happens when I run a system cleanup:

Shutting everything down

ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

bdschuster commented on May 14, 2024

And now I appear to be stuck here:
Updating and starting containers

Pulling ombi (linuxserver/ombi:latest)...
latest: Pulling from linuxserver/ombi
84ed7d2f608f: Pull complete
caf09a4b300c: Pull complete
34082dbadae0: Pull complete
3da8e4835db4: Pull complete
49d2e1fcfbf3: Pull complete
b41d4b109c3b: Pull complete
cde4c5a465c5: Pull complete
c2ee1a9950cc: Pull complete
de1e0f981741: Pull complete
172adb5bae7c: Pull complete
Digest: sha256:9276720fe902bcf7c33ad7d2f11da4aeb8f7c2f0f1fb725f68fdad69c1b06e36
ERROR: Cannot overwrite digest sha256:9276720fe902bcf7c33ad7d2f11da4aeb8f7c2f0f1fb725f68fdad69c1b06e36
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

Cleaning Docker leftovers

TechPerplexed commented on May 14, 2024

Hmmm I get that message too every now and then (the HTTP request taking too long). Usually a server reboot solves that... I'm using a dedi with OneProvider.
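
If a reboot doesn't cut it, the error message itself names the knob to turn; something like this before re-running the cleanup raises the limit (300 seconds is an arbitrary example, not a tested recommendation):

export COMPOSE_HTTP_TIMEOUT=300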

As for the "cannot overwrite digest" - no idea! I searched Google and it says it is most likely a docker bug... yeah, not helpful!

bdschuster commented on May 14, 2024

Don't hate me, another question... lol. How do you have Plex scan for new files when they are downloaded by Sonarr/Radarr?

TechPerplexed commented on May 14, 2024

Hate you? Neverrrr 🤣

It's very simple really - in Sonarr/Radarr, go to Settings - Connect - add Plex, then enter:

Host: plex
Port: 32400
Username + Password for Plex
Update library: yes
Use SSL: no

That's all :)

bdschuster commented on May 14, 2024

The only issue with that is that it initiates a partial scan when the download completes, but the file is still sitting and waiting to be transferred to Google Drive, so the scan doesn't find anything and it never shows up until the next scheduled scan.

TechPerplexed commented on May 14, 2024

Ah I see what you mean... well if you are using the new MergerFS version, theoretically whatever is sitting in the upload folder should be scanned and added to Plex even before it's uploaded to Google. Is that not how it's behaving?

bdschuster commented on May 14, 2024

I'll tell you when I figure out what's going on here... I CANNOT get remote access working. I'm not sure if this has something to do with my old data or not, but it's driving me crazy. If you have any info... let me know.

TechPerplexed commented on May 14, 2024

What remote access do you mean, can you give an example please?

bdschuster commented on May 14, 2024

I'm also having a problem with the server locking up on reboot and then losing all Docker containers after a force reboot. I'm on Ubuntu 18.04 Server. I'm going to roll back to 16.04, reinstall/restore, and see what happens.

TechPerplexed commented on May 14, 2024

Ok, with Plex, make sure you manually specify port 8443 in external connections. That should be enough... let me know if you already tried that and we'll see if the advanced settings match.

Sorry about your reboot/locking problem... how strange that you lose them. I ran Gooby on 16.04, 18.04 and even Debian 9 (with the still-untested Docker adaptation currently in beta) and I never ran into that issue. Could it be a permissions issue, perhaps?

bdschuster commented on May 14, 2024

It's a clean wipe of the server, but it is physical, not virtual.
After installing 16.04 Server, the 8443 works! So I'm back online with Plex... but... if I reboot the server, my containers are gone; I do a system cleanup, and then everything is back up. Any ideas on this?

Also, another weird thing: NZBGet was having issues unraring. I had to dig into the logs, but it turned out I had to chmod 777 /mnt/uploads/Downloads and then restart the container, and all is working; same with Sonarr, Radarr, etc.

Thanks again for all your help. I never had these issues before; everything always just worked... lol

bdschuster commented on May 14, 2024

Just rebooted again. Same thing: no containers. Did a system cleanup, had to chmod Downloads and restart the downloader containers, and all was good again.

TechPerplexed commented on May 14, 2024

Heh, I know right... always someone 😛 Teasing, just sorry you're having issues!

It's a longshot... but can you check if your cron contains this line?
@reboot /opt/Gooby/scripts/cron/rclean.sh > /dev/null 2>&1

Fingers crossed that's what it is :)

bdschuster commented on May 14, 2024

It wasn't in /etc/crontab, but I added it... I'll reboot a little later and test. /etc/crontab does expect a user field, though; not sure if it matters that it's not defined.

TechPerplexed commented on May 14, 2024

I would add it to your user's crontab instead: try crontab -e
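
For reference, the two crontabs differ by exactly that user field; the same entry in both places would look like this (sketch):

# /etc/crontab (system crontab) needs a user field:
# @reboot <myusername> /opt/Gooby/scripts/cron/rclean.sh > /dev/null 2>&1

# crontab -e (per-user crontab) omits it:
@reboot /opt/Gooby/scripts/cron/rclean.sh > /dev/null 2>&1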

bdschuster commented on May 14, 2024

dang, it's actually in crontab -e. So that's not the issue :-(

TechPerplexed commented on May 14, 2024

Gah!!! I'm at a loss then... so when you reboot, you say the containers are gone. Do you mean the folder /var/local/Gooby/Docker becomes owned by another user, or root only, or disappears altogether?

TechPerplexed commented on May 14, 2024

Weird. Really this shouldn't be an issue, but let's try to manually set the permissions:

sudo chown -R $USER:$USER $HOME
sudo chown -R $USER:$USER /var/local/Gooby
sudo chown -R $USER:$USER /var/local/.Gooby
sudo chown -R $USER:$USER /mnt/uploads

If that's not it, you might want to check which Docker version you have (docker -v and docker-compose -v) and see if there is anything odd there.
Mine shows:
Docker version 18.09.1, build 4c52b90
docker-compose version 1.23.2, build 1110ad0

bdschuster commented on May 14, 2024

Rebooted, same thing. docker ps shows nothing. Running /opt/Gooby/scripts/cron/rclean.sh manually comes back with "already running". Did a system cleanup, back to normal. I don't get it.

TechPerplexed commented on May 14, 2024

That makes two of us! The fact that you get "already running" means that the script hasn't finished running after the reboot. This could indicate that your system is very, VERY slow and will eventually get there - or it could mean that it can't finish because it gets hung up on something.

The weirdest part is that it only hangs after you reboot... not after a regular cleanup. Can you try waiting about 10 minutes after a reboot and see if it sorts itself out over a longer time period?

I'm as stumped as you are!

bdschuster commented on May 14, 2024

I can only assume it's getting hung up on something; I've waited almost 30 minutes already. The system is definitely not slow: it's a physical server, and there are NO issues with transcoding, running Plex, unpacking, or anything... lol.

TechPerplexed commented on May 14, 2024

Have you tried a (gasp) rebuild? That is usually my last resort... and it then solves all problems (hopefully that will be the case for you too!)

bdschuster commented on May 14, 2024

sorry, explain? lol

TechPerplexed commented on May 14, 2024

LOL, I just meant wipe the server and start with a fresh installation... if that is an option at all...

deedeefink commented on May 14, 2024

Hi folks,
So, I will join the crowd with the same issues. I've been running the server without issues for months, but after installing some Ubuntu/library updates (can't remember which) I started having issues with the mounts coming down.

After a system cleanup everything works, but overnight the mounts go offline again. This has now repeated for a week or more.

But hey, I'm happy to be reading these looong threads again (secretly missed it).

Just wanted to say that you're not the only ones having problems. So I'm cheering for a resolution!

bdschuster commented on May 14, 2024

Woohoo!!! Sorry, I just love it when I'm not the only one 😊, then I don't look (as) crazy...lol

TechPerplexed commented on May 14, 2024

GAH 🗡 😿 👊 Well it's lovely to hear from you @deedeefink but that's not what I wanted to hear, heh.

Just to clarify: @bdschuster has a problem with the containers going down, not the mount, but you @deedeefink mentioned the mounts coming down. Let's verify you're both experiencing the same issue here... can you describe in more detail what exactly goes down in your case, @deedeefink?

TechPerplexed commented on May 14, 2024

Just a follow up: I have rebooted my server about 3 million times this last week (ok, I exaggerate), and I can't reproduce either of your problems... sorry :(

In better news: we're testing a new syncmount script which not only uploads to Google faster, but will also make @bdschuster particularly happy, since it addresses the future date issue you were having.

It's been field tested for a few weeks in a private setting and it seems to work fine, so stay tuned for an update in another week or so 👍 (or if you really can't wait, grab the script from the Debian branch and start testing) 😄

bdschuster commented on May 14, 2024

Don't play with me @TechPerplexed LOL! Also, to figure out where the failure is after rebooting, do you think I should kill the script and then run it manually to see where it gets hung up? Or do you have any other ideas for figuring it out? I could give you a temp login to the server if you want to take a look.

TechPerplexed commented on May 14, 2024

Yeah it's puzzling... so how exactly is it behaving?

Let's see if I understood everything. You reboot, and then

  • /mnt/google comes up correctly
  • /mnt/uploads/Downloads doesn't have the correct permissions
  • /var/local/Gooby exists, but
  • none of the containers come up

However, when you run rclean from the menu, it behaves correctly and the containers come up normally, did I get that right?

Any more odd behaviour that you notice?

bdschuster commented on May 14, 2024

Yes and no:

  • /mnt/google comes up
  • /var/local/Gooby exists, but
  • none of the containers are listed by docker ps
    • This can be fixed by running rclean; all the containers are then back.
  • /mnt/uploads/Downloads is the strange one to me.
    • It is owned by myusername:myusername
    • It has 776 permissions, which should be fine
    • The containers cannot write to the mount (/Downloads), and they don't appear to be mounted to /mnt/uploads/Downloads at all (i.e. if something is written manually from inside a container, you can't see it in /mnt/uploads/Downloads or from other containers at /Downloads). Here's the kicker: this isn't just happening at reboot. My theory is that when the sync happens and the command find . -type d -empty -delete comes into play, the containers lose their mounts because Downloads no longer exists. This only happens when /mnt/uploads/Downloads is empty. I see you have mkdir -p ${UPLOADS} ${UPLOADS}/Downloads after that command, but I think the containers have already lost their mount at that point, and you have to restart them (Sonarr/Radarr/NZBGet) to get them mounted again.
      I'm still testing the above theory, but I believe I've narrowed it down to that, as the permissions seem fine.
      I'll test it now and get back to you. As for the container ordeal, I'm still looking into it; it's just hard since I can't reboot constantly while people are watching things.

bdschuster commented on May 14, 2024

So my theory about /mnt/uploads/Downloads was correct. I have confirmed that when the script deletes and re-creates the Downloads folder, the containers lose their mounts. A restart of the containers brings them back up until the next time the script runs (if Downloads is empty). I have corrected this by changing your above command to find . ! -path "*Downloads*" -type d -empty -delete and commenting out the mkdir, so it ignores the Downloads directory, and the issue no longer occurs.
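
In context, the change amounts to something like this (a sketch; I'm assuming the script has already cd'd into the uploads folder, and ${UPLOADS} is the variable name quoted above):

# before: pruned every empty directory, including Downloads itself, which
# silently broke the containers' bind mounts
# find . -type d -empty -delete
# mkdir -p ${UPLOADS} ${UPLOADS}/Downloads

# after: never touch anything under Downloads, so the directory the
# containers bind-mounted stays alive
find . ! -path "*Downloads*" -type d -empty -delete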

TechPerplexed commented on May 14, 2024

Downloads: There you go, even though everything should be identical on two systems, somehow it isn't! Glad you got at least that sorted!

Containers: So really it boils down to reboot vs rclean - and while both run the EXACT same script, somehow they behave differently! Grasping at straws here, but what if you delay the script to run a minute after reboot?
Edit your cron (crontab -e) and just use @reboot sleep 60 && instead of plain @reboot?
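
That would make the entry look like this (same line as before, just delayed by a minute):

@reboot sleep 60 && /opt/Gooby/scripts/cron/rclean.sh > /dev/null 2>&1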

bdschuster commented on May 14, 2024

OK, so looking at your create-user script, it does what I mentioned above. BUT if you have already created a user and are not logged in as root, it does not ask to create a user and does not run that script, so that's where my problem was.

TechPerplexed commented on May 14, 2024

AHHHHHHHHHH thank heavens for that!!!!!!!!!! So.... it was a permissions issue after all.... but I never considered the visudo thing (it's so natural for me to add it, and even more so now that Gooby takes care of creating my new user after each reinstall).

SO pleased you got it sorted - let me close this issue now and file your experience as a learning moment for me 👍

TechPerplexed commented on May 14, 2024

I have corrected this by changing your above command to find . ! -path "*Downloads*" -type d -empty -delete and commenting out the mkdir, so it ignores the Downloads directory, and the issue no longer occurs.

Oh, one question (humble request) for you: I'd like to include this change in the script - feel free to edit it and send a pull request (if it's not too much trouble) :)

TechPerplexed commented on May 14, 2024

Thank you so much!! ❤️

bdschuster commented on May 14, 2024

Done for both master and Debian... Now I'm just trying to figure out where we could handle the case where a username has already been created, so that it adds ALL=NOPASSWD: ALL if it does not already exist. I think I may know how.
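
A hedged sketch of what that check could look like (the drop-in file name gooby-nopasswd is my invention, not something from the repo):

# add the NOPASSWD rule only if it isn't already present; a file under
# /etc/sudoers.d avoids editing /etc/sudoers itself
if ! sudo grep -qs "NOPASSWD: ALL" /etc/sudoers.d/gooby-nopasswd; then
    echo "${USER} ALL=NOPASSWD: ALL" | sudo tee /etc/sudoers.d/gooby-nopasswd > /dev/null
    sudo chmod 0440 /etc/sudoers.d/gooby-nopasswd
fi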

Also, I'm going to update to the Debian branch and check out your new uploading... any suggestions before I do?

TechPerplexed commented on May 14, 2024

Aren't you clever 😆

The Debian branch should work out of the box... (the name is a bit of a misnomer; it just means that the improved Docker installation should work for Ubuntu and Debian alike).

The syncmount script has some very exciting new features. One is complete statistics of what you upload over any given time period through the built-in scripts (backup & syncmount), which will require a little bit of self-installation. I'm working on the Wiki as we speak. It's handy for keeping an eye on the 750G upload max Google imposes, among other uses.

The other big improvement, of course, is a fix for the future date issue plus some significant enhancements to the uploading process. I have to thank my friend kelinger for all of those; I think it's no secret that he is the real brains behind this project 👍

Can't wait to hear how it's working for you!

bdschuster commented on May 14, 2024

Sounds Awesome! Should I just be able to pull it and run a system cleanup?

TechPerplexed commented on May 14, 2024

Well, if you run a system cleanup it would revert right back to the master branch... so you'd have to disable that first in the script (or update the clone line to sudo git clone -b debian https://github.com/TechPerplexed/Gooby /opt/.Gooby) :)

bdschuster commented on May 14, 2024

I knew that! I swear! HAHAHAHAHA 😄 Honestly, I knew it was in there, but I forgot about it until you said something, so yeah, I would have been going crazy... lol

TechPerplexed commented on May 14, 2024

LOL trust me... I found out the hard way too 😋

TechPerplexed commented on May 14, 2024

The updates are now live... Debian branch will be deleted soon. Make sure you update to the Master branch :)
