
gooby's Introduction

Gooby is proud to announce:

OmniStream

What is OmniStream? It's the next iteration of Gooby, rebuilt entirely from the ground up. It has many exciting new features, such as:

  • Multi user support: everything is in the user home folder - including the mounts.
  • Full support for Traefik, with all its advantages, such as a single domain certificate when used in conjunction with Cloudflare - no more Let's Encrypt bans.
  • Omni can create and remove subdomains on the fly - you won't need to manually edit your A records.
  • More customizations than ever, plus a vastly improved menu system - maintaining your media server couldn't be easier.
  • Last but not least, OmniStream is 100% dockerized now, including Rclone and MergerFS - you will never be “waiting on mounts” again!

How to upgrade?

MAKE A BACKUP BEFORE YOU START! Read the information on OmniStream.cloud first. You can start the upgrade by typing omni-upgrade.
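
Before running it, it may be worth snapshotting the Gooby directories yourself. A minimal sketch, assuming a default install layout (the exact paths on your system may differ):

# Hypothetical pre-upgrade backup; /opt/Gooby and /var/local/Gooby are
# assumed locations - adjust them to match your install.
sudo tar -czf "$HOME/gooby-backup-$(date +%F).tar.gz" /opt/Gooby /var/local/Gooby
omni-upgrade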

What will change?

There are a few things that will change, mainly your mount locations. We have kept the impact minimal, so you won't have to rescan your Plex, Emby or Jellyfin libraries. However, you will have to make some changes within your catalogue apps (Radarr, Sonarr, etc.) and downloaders (torrent and Usenet apps). Any custom yaml files you created will not port over automatically; keep a copy and adapt them manually to work with Omni.

Who is behind OmniStream?

The same two people (kelinger and TechPerplexed) who created Gooby will be maintaining OmniStream, so you can still expect the same level of support and care. OmniStream has its own location on GitHub. Once you are ready to upgrade, meet us over there for questions and suggestions.

Is Gooby going away?

No, you can keep using Gooby indefinitely. We won't be actively maintaining it any longer, but you'll never be forced to upgrade. Gooby is here to stay!

If for some reason you still want to install Gooby from scratch, run this command:

sudo wget https://bit.ly/GetGooby2 -O /tmp/install.sh && sudo bash /tmp/install.sh

Disclaimer:

This software is supplied "AS IS" without any warranties or support. You are solely responsible for determining whether Gooby is compatible with your equipment and other software installed on your system. Make sure you have a backup of all your important data!

Donate:

Thank you SO MUCH for your generosity - we promise we'll think of you when we sip that coffee!

gooby's People

Contributors

adoruta, bdschuster, coxeroni, ploquets, scaredwater, techperplexed


gooby's Issues

After updating gooplex it doesn't work anymore

Hello, thanks for the tool, it was very helpful.
Today I launched gooplex and chose the option to update gooplex itself, and now when I type gooplex I get this message: /bin/gooplex: line 15: /opt/GooPlex/menus/main.sh: No such file or directory
It seems the /opt/GooPlex directory got deleted.

[security suggestion] Limit container access to specific Google Drive folders

Hello,

This is more of a security issue than a current error. With the configuration as set up by Gooby, all containers have full access to the whole Google Drive, because they're mounted with direct access to the root of the user's Drive. This is an important security problem: if a container app gets hacked (which could conceivably happen, since they're exposed to the internet), the attacker would gain access to the whole Drive instead of a subset of folders. This is something that makes me uncomfortable...

That is, containers such as Plex, Deluge, Sonarr, Radarr, etc. are inherently exposed to the world and therefore hackable. These containers should only have access to media-related folders containing movies, TV shows and so on, not to unrelated folders which may hold sensitive personal information, which users commonly store on their Drive.

I know I can change this by manually editing the Docker Compose yaml files in /opt/Gooby/scripts/components and changing the following line in the volumes section:

For instance, - ${GOOGLE}:/Media -> - ${GOOGLE}/Media:/Media

My guess is that this change would not survive an update, much less a reinstall.

I strongly recommend making this the default mounting option, to limit any potential breach and improve the safety of the user's personal files. It means current users would have to make a one-time folder change to adapt to the new structure: create a folder named 'Media' at the root of their Drive, move all movie, TV show, etc. folders into this new folder, then restart the containers so they point to the right folder.
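
For illustration, a quick way to confirm the narrowed mount took effect would be to list what a container actually sees; 'plex' is just one of the containers from this setup:

# After changing the volume to ${GOOGLE}/Media:/Media and recreating
# the container, this should list only media folders, not the whole Drive:
docker exec plex ls /Media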

Any thoughts?

[Question] Connecting apps behind http auth

How would you connect Sonarr/Radarr to Jackett when Jackett is behind http auth?

I tried connecting Sonarr/Radarr to Jackett when Jackett is behind http auth. Both Sonarr and Radarr require a unique URL per indexer from Jackett/Torznab, e.g. https://jackett.domain.com/api/v2.0/indexers/[NAME OF INDEXER]/results/torznab/ - therefore I would not be able to use the docker container name.

I used the command docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' jackett to get the local IP of Jackett. It returned 172.18.0.4.

I then changed the indexer URL to http://172.18.0.4:9117/api/v2.0/indexers/[NAME OF INDEXER]/results/torznab/ (http, not https) in anticipation that it would work. Before copying the URL to Radarr, I checked that I could curl it and got this response:

<?xml version="1.0" encoding="UTF-8"?>
<error code="100" description="Invalid API Key"

This would indicate that it works (given that Radarr uses the right API key :-)), as I was allowed through the http auth.

But, it did not :-(
Unable to connect to indexer: Error getting response stream (ReadDoneAsync2): ReceiveFailure: Error getting response stream (ReadDoneAsync2): ReceiveFailure

This might be a problem in Radarr, but we should in any case be able to use an IP instead of a domain.

This also impacts OMBI as it no longer accepts the docker container name "plex" as we discussed here: #19

All in all I'm stuck, as I thought using the container's local IP would just work. Did you manage to connect Jackett via IP? Also behind http auth?

EDIT:
I see that the IPs change each time the container is spun up - so this is an issue :-)
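
For what it's worth, if the containers share a user-defined Docker network (the generated proxy config elsewhere in this tracker mentions a docker_default network), Docker's embedded DNS resolves service names on that network, so the name stays stable even though the IP changes - and the indexer name in the path still keeps each URL unique. A sketch:

# Run from inside the Radarr container (assuming curl is available in
# the image); 'jackett' resolves via Docker's embedded DNS:
docker exec radarr curl -s 'http://jackett:9117/api/v2.0/indexers/[NAME OF INDEXER]/results/torznab/'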

Cannot uninstall/reinstall deluge

Hello there,

I would like to reinstall deluge, as it is unfortunately not reachable through its web interface anymore (while the daemon is running).

How can I achieve this?

Thanks!!! Gooplex is really a nice set of tools/apps!

[Request] Lidarr

Any chance of getting Lidarr, since Ombi supports it now?

Thanks ;-)

More of a question - Upgrade to v2

Hello! Long time! I have some servers I have not updated to v2 yet. What is the best way to go about it? One of them is a primary Plex server, so I don't want to screw it up or cause rescans.

Let me know. Thanks!

What's the correct way to install an update?

Netdata can be updated according to this message:
[screenshot: Netdata update notification, 2018-10-30]

Since everything is in docker, I would assume the container needs an update first - but how fast does that typically happen? Should I proactively do something, or should I just try updating via Gooby until the new container is released?

And what about apps (Sonarr/Radarr/Tautulli) that have a built-in update engine - will that work in docker?
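
For reference, a sketch of the usual update flow with a compose-based setup like this one (assuming the compose file is in the current directory and the service is named netdata):

# Pull the newer image once it is published, then recreate the container:
docker-compose pull netdata
docker-compose up -d netdata

Built-in updaters generally don't persist inside a container: anything they change in the container filesystem is lost the next time the container is recreated from its image.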

GooPlex to Gooby migration

It's been about a year, so I just updated GooPlex... and now it's gone. Luckily Plex and the mount still work, otherwise I would have a very angry household :) but it looks like it's time to migrate to Gooby. I'll probably do a fresh Ubuntu install; the only question I have is around the database migration. It looks like you are backing up the entire Docker container now instead of the path to the Plex installation. I've manually created a backup of the /var/lib/plexmedialibrary folder, but how would I restore that with Gooby once I set Plex up?

Slow Speeds within Container?

Hi again!
Is there anything that would be throttling the speeds of the nzbget container? I have a 1 Gb/s up/down connection and usually download at around 800 Mb/s, but I seem to be getting throttled to around 100 Mb/s.

Thanks in advance,
-Brian

[suggestion] Documentation wiki

Since v2 now offers a more or less infinite Plex automation server, I'm switching to Gooby instead of installing these apps manually :-)

However, it's not always intuitive how to set up the different applications since they're running in Docker. I therefore suggest setting up a wiki here on GitHub for a series of how-tos. I'll be happy to help :-)

As an example I've been struggling with:

  • Connecting apps to apps where I would need to use [APPNAME] instead of the URL. This applies to some, but not all (see Jackett below)
  • Connecting Sonarr/Radarr to Jackett. This works now with a URL as I'm not using http auth, but I foresee problems with http auth, as one would need to add unique URLs for the Torznab feed
  • The Plex plugin folder is moved from its standard location to a specific Gooby folder - so how does one install plugins?

And I'm sure there are others.

Let me know what you think

2.1.4: Rclone install and remote name input

Where does the issue occur:
During rclone setup, the user is asked whether the remote name is correct to be used by Gooby.

Current behaviour:
The remote name I typed in is ignored, and the standard/first remote is still used.

Expected behaviour:
If I type in another remote name, that one should be used.

Background:
I encrypt my Gdrive storage and therefore add two rclone remotes one after the other, where only the second remote is to be mounted by Gooby.

Deluge download folder

For the past two weeks I've not been able to download with Deluge. A torrent is added from e.g. Radarr, but the metadata is not downloaded and the torrent never starts. The file size remains at 0.0 KiB and I never get any seeders/peers.

I tried to change the download folder to the below, but no luck.

  • /Downloads/
  • /Downloads
  • ~/Downloads
  • /home/plexuser/Downloads

I suspected this might be a permissions problem, and when I use Midnight Commander I can see that the ~/Downloads folder is a symlink to /mnt/google/Downloads and not a local folder on my system.
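
For what it's worth, the symlink target and its permissions can be checked directly; the paths are taken from the observation above:

# Show where ~/Downloads really points, and who owns the target:
readlink -f ~/Downloads
ls -ld /mnt/google/Downloads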

I tried to rename ~/Downloads and create a new "on-system" folder, but still no luck downloading. I also installed qBittorrent (nox) with the same result: no download.

Help is appreciated. Thanks

I'm on Ubuntu 18.04.1 and Gooby 2.1.1

Issues with rclone mounts v2.1.2

Hi,

I am unsure if this is related to that other issue we had some 3 months ago... but the effect seems approximately the same. For some reason, the rclone mounts fail after some time.

This is the output of systemctl status rclonefs:

 rclonefs.service - Mount Google Drive (rclone)
   Loaded: loaded (/etc/systemd/system/rclonefs.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2019-01-28 23:24:42 CET; 9h ago
  Process: 14829 ExecStop=/bin/rmdir ${RCLONEMOUNT} (code=exited, status=0/SUCCESS)
  Process: 14826 ExecStop=/bin/fusermount -uz ${RCLONEMOUNT} (code=exited, status=0/SUCCESS)
  Process: 15018 ExecStart=/usr/bin/rclone mount --allow-other --buffer-size 2G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --gid ${USERID} --log-level INFO --log-file ${HOMEDIR}/logs/rclone.log --uid ${GROUPID} --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --rc --drive-v2-download-min-size 48M --config ${HOMEDIR}/.config/rclone/rclone.conf ${RCLONESERVICE}:${RCLONEFOLDER} ${RCLONEMOUNT} (code=killed, signal=KILL)
  Process: 30478 ExecStartPre=/bin/mkdir -p ${RCLONEMOUNT} (code=exited, status=1/FAILURE)
 Main PID: 15018 (code=killed, signal=KILL)

I tried restarting the service; it fails:

 systemctl start rclonefs
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to start 'rclonefs.service'.
Authenticating as: coconut
Password:
==== AUTHENTICATION COMPLETE ===
Job for rclonefs.service failed because the control process exited with error code.
See "systemctl status rclonefs.service" and "journalctl -xe" for details.

Also,

ls: cannot access '/mnt/rclone': Transport endpoint is not connected

The rclone log doesn't seem to give any relevant information. It just stops around the time the mount fails, for no particular reason.

When I perform a system cleanup, I get the following:

mountpoint: /mnt/rclone: Transport endpoint is not connected
mountpoint: /mnt/google: No such file or directory
chown: cannot access '/mnt/rclone': Transport endpoint is not connected
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    13  100    13    0     0     33      0 --:--:-- --:--:-- --:--:--    33
done
Job for rclonefs.service failed because the control process exited with error code.
See "systemctl status rclonefs.service" and "journalctl -xe" for details.
mountpoint: /mnt/rclone: Transport endpoint is not connected
Waiting on mounts to be created...
mountpoint: /mnt/rclone: Transport endpoint is not connected
Waiting on mounts to be created...
mountpoint: /mnt/rclone: Transport endpoint is not connected
Waiting on mounts to be created...
mountpoint: /mnt/rclone: Transport endpoint is not connected
Waiting on mounts to be created...
mountpoint: /mnt/rclone: Transport endpoint is not connected
Waiting on mounts to be created...
mountpoint: /mnt/rclone: Transport endpoint is not connected
Waiting on mounts to be created...
mountpoint: /mnt/rclone: Transport endpoint is not connected
Waiting on mounts to be created...
mountpoint: /mnt/rclone: Transport endpoint is not connected

And so on...
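
A generic recovery sketch for a stale FUSE mount like this (not an official Gooby procedure; the mountpoint and service name are taken from the output above):

# Lazily unmount the dead mountpoint, then let the service remount it:
sudo fusermount -uz /mnt/rclone
sudo systemctl restart rclonefs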

After rebooting, the mounts do come up but the docker containers do not. I had to run the system cleanup procedure twice to get everything running again.

Any ideas on what might be causing this?

Strangely, after I updated Gooby it now says 2.1.3 - maybe this cleared the issue?

How does the backup function?

Hi,

There is an option in the menu to perform a backup; I tried it yesterday because I'm interested in migrating the system to another server. Is there a tar file somewhere that I can take and restore on another system? There was no indication of what the backup did, other than that it could take a long time. The function is also not documented...

Thanks!

Portainer not being installed?

I have an issue with Portainer not being installed despite Gooby "installing" the app. I choose D, D, I and Gooby goes through the installation and finishes with an all-okay message.

But the domain is not accessible (portainer.my-domain.com) - basically the subdomain is not responding. And when I try to remove Portainer, Gooby reports that the app is not installed?!

Gooby v2.0.1

New container added with template YAML not accessible

TL;DR: a custom container I added is not accessible through the URL the reverse proxy should route to it.

Hello,

I wanted to install a torrent app that works with a VPN, and after reviewing different options I settled on haugene/transmission-openvpn. I have a working VPN (custom OpenVPN server) and configured the docker compose yaml as follows (I took the deluge one as a reference):

#
# transmission - Torrent download engine
#
  transmission:
    restart: unless-stopped
    image: haugene/transmission-openvpn
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    container_name: transmission
    hostname: transmission
    cpu_shares: 1024
    ports:
      - 9091:9091
      - 8888:8888
    dns:
      - 8.8.8.8
      - 8.8.4.4
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${CONFIGS}/Transmission:/config
      - ${DOWNLOADS}:/Downloads
      - ${DOWNLOADS}/transmission:/data
      - ${GOOGLE}:/Media
      - ${CONFIGS}/Transmission/openvpn/default.ovpn:/etc/openvpn/custom/default.ovpn
      - ${CONFIGS}/Transmission/openvpn/ca.crt:/etc/openvpn/custom/ca.crt
      - /var/local/Gooby/Transmission/openvpn-credentials.txt:/config/openvpn-credentials.txt
      - /var/local/Gooby/Transmission/openvpn:/etc/openvpn/custom
    environment:
      - PUID=${USERID}
      - PGID=${GROUPID}
      - TZ=${TIMEZONE}
      - UMASK_SET=022
      - VIRTUAL_HOST=transmission.${MYDOMAIN}
      - VIRTUAL_PORT=80
      - VIRTUAL_NETWORK=nginx-proxy
      - LETSENCRYPT_HOST=transmission.${MYDOMAIN}
      - LETSENCRYPT_EMAIL=${MYEMAIL}
      - OPENVPN_PROVIDER=CUSTOM
      - OPENVPN_USERNAME=[User]
      - OPENVPN_PASSWORD=[Password]
#      - OPENVPN_OPTS=--inactive 3600 --ping 10 --ping-exit 60
      - LOCAL_NETWORK=127.0.0.1
    healthcheck:
      test: ["CMD-SHELL", "netstat -ntlp | grep :80"]
      interval: 10s
      timeout: 2s
      retries: 3

I believe this docker compose script is correct: the application starts, and I know for a fact that it is running and connected to the internet through the VPN.
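
Since nginx-proxy only generates a vhost for containers it can reach on its network, one check worth making (a guess, not a confirmed diagnosis) is whether the transmission container is actually attached to the proxy's network:

# List the networks the container is attached to:
docker inspect -f '{{json .NetworkSettings.Networks}}' transmission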

Now, when I try to access the app at transmission.mydomain.com, I get a '502 Bad Gateway' error, alternatively '503 Service Temporarily Unavailable' or even 'NET::ERR_CERT_AUTHORITY_INVALID'. I think the culprit is the nginx reverse proxy, because when I open a bash CLI inside the container and check the contents of /etc/nginx/conf.d/default.conf, I get the following:

bash-4.4# cat /etc/nginx/conf.d/default.conf | more
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 80;
        access_log /var/log/nginx/access.log vhost;
        return 503;
}
server {
        server_name _; # This is just an invalid value which will never trigger on a real hostname.
        listen 443 ssl http2;
        access_log /var/log/nginx/access.log vhost;
        return 503;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/default.crt;
        ssl_certificate_key /etc/nginx/certs/default.key;
}
# jackett.mydomain.com
upstream jackett.mydomain.com {
                                ## Can be connected with "docker_default" network
                        # jackett
                        server 172.26.0.4:9117;
}
server {
        server_name jackett.mydomain.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        return 301 https://$host$request_uri;
}
server {
        server_name jackett.mydomain.com;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/jackett.mydomain.com.crt;
        ssl_certificate_key /etc/nginx/certs/jackett.mydomain.com.key;
        ssl_dhparam /etc/nginx/certs/jackett.mydomain.com.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/jackett.mydomain.com.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass http://jackett.mydomain.com;
        }
}
# menu.mydomain.com
upstream menu.mydomain.com {
                                ## Can be connected with "docker_default" network
                        # organizr
                        server 172.26.0.11:80;
}
server {
        server_name menu.mydomain.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        return 301 https://$host$request_uri;
}
server {
        server_name menu.mydomain.com;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/menu.mydomain.com.crt;
        ssl_certificate_key /etc/nginx/certs/menu.mydomain.com.key;
        ssl_dhparam /etc/nginx/certs/menu.mydomain.com.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/menu.mydomain.com.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass http://menu.mydomain.com;
        }
}
# netdata.mydomain.com
upstream netdata.mydomain.com {
                                ## Can be connected with "docker_default" network
                        # netdata
                        server 172.26.0.7:19999;
}
server {
        server_name netdata.mydomain.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        return 301 https://$host$request_uri;
}
server {
        server_name netdata.mydomain.com;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/netdata.mydomain.com.crt;
        ssl_certificate_key /etc/nginx/certs/netdata.mydomain.com.key;
        ssl_dhparam /etc/nginx/certs/netdata.mydomain.com.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/netdata.mydomain.com.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass http://netdata.mydomain.com;
                auth_basic      "Restricted netdata.mydomain.com";
                auth_basic_user_file    /etc/nginx/htpasswd/netdata.mydomain.com;
        }
}
# plex.mydomain.com
upstream plex.mydomain.com {
                                ## Can be connected with "docker_default" network
                        # plex
                        server 172.26.0.2:32400;
}
server {
        server_name plex.mydomain.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        return 301 https://$host$request_uri;
}
server {
        server_name plex.mydomain.com;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/plex.mydomain.com.crt;
        ssl_certificate_key /etc/nginx/certs/plex.mydomain.com.key;
        ssl_dhparam /etc/nginx/certs/plex.mydomain.com.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/plex.mydomain.com.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass https://plex.mydomain.com;
        }
}
# portainer.mydomain.com
upstream portainer.mydomain.com {
                                ## Can be connected with "docker_default" network
                        # portainer
                        server 172.26.0.8:9000;
}
server {
        server_name portainer.mydomain.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        return 301 https://$host$request_uri;
}
server {
        server_name portainer.mydomain.com;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/portainer.mydomain.com.crt;
        ssl_certificate_key /etc/nginx/certs/portainer.mydomain.com.key;
        ssl_dhparam /etc/nginx/certs/portainer.mydomain.com.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/portainer.mydomain.com.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass http://portainer.mydomain.com;
        }
}
# radarr.mydomain.com
upstream radarr.mydomain.com {
                                ## Can be connected with "docker_default" network
                        # radarr
                        server 172.26.0.9:7878;
}
server {
        server_name radarr.mydomain.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        return 301 https://$host$request_uri;
}
server {
        server_name radarr.mydomain.com;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/radarr.mydomain.com.crt;
        ssl_certificate_key /etc/nginx/certs/radarr.mydomain.com.key;
        ssl_dhparam /etc/nginx/certs/radarr.mydomain.com.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/radarr.mydomain.com.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass http://radarr.mydomain.com;
        }
}
# sonarr.mydomain.com
upstream sonarr.mydomain.com {
                                ## Can be connected with "docker_default" network
                        # sonarr
                        server 172.26.0.10:8989;
}
server {
        server_name sonarr.mydomain.com;
        listen 80 ;
        access_log /var/log/nginx/access.log vhost;
        return 301 https://$host$request_uri;
}
server {
        server_name sonarr.mydomain.com;
        listen 443 ssl http2 ;
        access_log /var/log/nginx/access.log vhost;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:!DSS';
        ssl_prefer_server_ciphers on;
        ssl_session_timeout 5m;
        ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;
        ssl_certificate /etc/nginx/certs/sonarr.mydomain.com.crt;
        ssl_certificate_key /etc/nginx/certs/sonarr.mydomain.com.key;
        ssl_dhparam /etc/nginx/certs/sonarr.mydomain.com.dhparam.pem;
        ssl_stapling on;
        ssl_stapling_verify on;
        ssl_trusted_certificate /etc/nginx/certs/sonarr.mydomain.com.chain.pem;
        add_header Strict-Transport-Security "max-age=31536000" always;
        include /etc/nginx/vhost.d/default;
        location / {
                proxy_pass http://sonarr.mydomain.com;
        }
}

As you can see, there is no upstream or server block for transmission, which I guess is why the address doesn't load properly. What am I doing wrong? Is there a proper way to add a custom container and have it served on its own subdomain through the reverse proxy?

Thanks!

"Spring clean" might break write access of other containers to their volume paths

The spring clean from inside Gooby resets permissions on the /var/Gooby/Docker folder to be owned by the Gooby user after doing its "cleaning".

This works for all containers that are composed with the UID and GID of the Gooby user, like Plex, Emby and the others that come with Gooby.

But there are containers I added manually, like nextcloud and mariadb, which don't use the UID and GID options and therefore run as different users: nextcloud runs as the www-data user, while mariadb seems to run as root.

Chowning the Docker folder breaks write permissions for these specific containers, which has to be reverted manually or prevented in the future. Since I am not a docker master, the problem could also lie in my compose files for the respective containers, but frankly I was not able to get both containers up and running with the UID & GID options.

Possible solutions might involve:

  • adapting the spring clean script to not chown the whole folder
  • put the volumes of the respective containers to another folder where permissions stay as they are
  • write another script which runs after yours and chowns the respective folders with the correct user (see the sketch below)

Interested in seeing your viewpoint, since again I am no docker master ;)
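
As an illustration of the third bullet, a post-clean fix-up could be as small as re-applying the expected owners; the users and paths here are assumptions based on my setup:

# Hypothetical post-spring-clean script: restore ownership for the
# containers that don't run as the Gooby user.
sudo chown -R www-data:www-data /var/Gooby/Docker/Nextcloud
sudo chown -R root:root /var/Gooby/Docker/MariaDB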

Edit: I just realised nextcloud might work with UID and GID of the Gooby user, but haven't tested it.

/opt/Gooby gone

It has happened to me several times that /opt/Gooby is gone, probably after a reboot where the rclean script deletes it. An error then seems to follow when Gooby is redownloaded, but I haven't debugged that yet. I've only witnessed that it is not there anymore and that, for example, no syncing of the mergerfs upload folder takes place. I don't know how to solve this...

Deluge - 502 Bad Gateway

Fresh install of Ubuntu 18.04.1 and Gooby 2.1.0.
I cannot access Deluge at https://deluge.my-domain.com. I've tried a System clean and a reboot, and waited for all services to come online, but Deluge is stubborn and consistently gives a 502 error.

Together with this issue: #36, it seems that v2.1.0 is unstable on Ubuntu 18.04.1.

@TechPerplexed - Please help with your magic fingers again. Thanks :-)

Netdata uninstall does not recognize netdata

First of all, thanks for the progress.

I installed Netdata via GooPlex version 1.1.1 and wanted to test the Remove function (D, A, R), but it returned "Netdata is not installed yet".

Non-Standard components?

I noticed this project does not have a few useful tools like nzbhydra and mylar. How feasible is it for me to add them on my own without blowing everything up?

[Question] Having problems with OMBI

I wonder which URL/setting you use when connecting OMBI to Radarr/Sonarr. OMBI+Plex connection is working just fine.

I'm:

  1. Logging into Ombi
  2. Settings -> movies -> Radarr

I've tried all sorts of combinations of:

  • URL to Radarr/Sonarr: radarr.test.mydomain.com
  • External IP of the server
  • Localhost and 127.0.0.1 and internal IP in this case 192.168.0.4
  • With and without SSL enabled - still no luck
  • With and without a port number defined.

I'm coming to the point where I think it might be a bug with having everything in Docker, or a problem with OMBI. Could also be me :-)

So I wonder if anyone else made the connection work?

[question] Deluge permission problems?

At the risk of raising another non-issue, I'm experiencing some problems with Deluge.

When installed, the default download location is /root/Downloads. When adding a torrent, Deluge returns an error saying that it does not have permissions to /root/Downloads.

I tried to change it to /home/plexuser/Downloads and /plexuser/Downloads/, but in both cases the torrent does not download. There is a down speed, which slowly drops to zero, but nothing under Downloaded in Deluge, and when I go to the CLI I see no files created in /home/plexuser/Downloads.

It could indicate that Deluge does not have the right permissions. Or I did something wrong again ;-)

I removed Deluge, ran system cleanup and reinstalled Deluge. No luck :-(

P.S.
Before Gooby I installed all the apps myself and did not have such problems. I mean, the server/external IP is tried and tested :-)

Plex 502 Error

Hi. I got a new server, got it started and installed Plex. When I try to access the URL, I get a Plex 502 error.

Plex update issue

Hi,

I tried the Plex update via the Gooby menus. It said something along the lines of "access denied", so I tried it again. Now it says it is up to date, yet the update didn't take effect, as Plex still reports the older version. Do you know what went wrong with the permissions during the update?

Thanks!

Docker containers fail to start after server reboot

Hi,

I have a few issues on a new install but I'll split them out into separate posts.

I've installed Gooby on Ubuntu Server (18.04) and I'm finding that after rebooting the server, Plex fails to start. docker container ls and docker ps both fail to show any results.

If I enter the Gooby menu and try to update Plex, it tells me that it's not installed. If I run the install again, it starts up Plex (and the other two containers), and it's accessible with all my previous configuration.

How can I solve this?

Thanks.

Issue with rclone

Hi. I was following your guide to install rclone, and I got this error:

cat: /var/local/.Gooby/plexclaim: No such file or directory
done
Failed to enable unit: File rclonefs.service: Invalid argument
Failed to start rclonefs.service: Unit rclonefs.service is not loaded properly: Invalid argument.
See system logs and 'systemctl status rclonefs.service' for details.

OMBI Issue

Do you hate me yet? 🤣
First: the ombi image needs netstat (net-tools) installed for the health monitor to work; otherwise it shows as unhealthy. I already installed it on mine, just an FYI.

Second: I'm having issues with Organizr talking back to OMBI. I never had this issue before, so I thought I would run it by you. I have Organizr talking to Sonarr/Radarr/Plex/NZBGet with no issues, but with OMBI I get this in Organizr's logs: OMBI Connect Function - Error: cURL error 35: error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number

Any ideas?

Emby metadata 0 byte size

Setup: Emby-beta (4.1.0.1) with mergerfs on Ubuntu 18.04

I let Emby create nfos and put them along with the metadata into the folders of the media files. This regularly creates issues where 0-byte files are created (mostly the nfos), so no information is stored. This does not always happen, but often.

There is no write permission issue, as other files are created successfully and uploaded to Google via the cron script. I am writing here because Luke over at Emby told me he has never seen this issue before, so I suspect there is something odd with my setup. The respective thread can be found here: https://emby.media/community/index.php?/topic/69069-4101-empty-nfo-files/

Is anybody else having these issues and/or knows how to fix this? I have had them for several versions, probably right from the beginning...

Edit: I was finally able to reproduce the issue:

  1. No nfo present on the rclone drive or the local drive
  2. "Search for missing metadata" for item x
  3. nfo is created locally and uploaded to the rclone drive with > 0 bytes
  4. "Search for missing metadata" for item x again
  5. nfo on the rclone drive is 0 bytes

So Emby seems unable to (re)write on the rclone mount, while this works for the user e.g. from the command line. I am asking at the Emby forum in parallel to this.
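
A command-line repro of the same pattern, mirroring steps 3-5 above (the path is just an example):

# Create a file on the mount, then overwrite it in place and check the size:
echo 'first'  > /mnt/google/Media/item-x.nfo
echo 'second' > /mnt/google/Media/item-x.nfo
ls -l /mnt/google/Media/item-x.nfo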

Securing a server does not work on a fresh install

This is not a huge problem, more of an inconvenience, but in any case the function is not working as intended.

Securing a server after a fresh Linux and Gooby install does not work unless you have completed a System Cleanup first.

Reproduce:

  1. Install Linux
  2. Install Gooby
  3. Install any app, ie. Jackett
  4. Secure Jackett
    = This fails with a message like (I forgot to save the exact text): "Could not create plexuser"
  • Reboot does not help
  • System Cleanup fixes the problem and you can secure a server.

I've experienced this after each fresh install while testing our favourite bug.
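
A quick check that might narrow it down (just a guess at the failure mode) is whether the account already exists or was half-created before the cleanup:

# Does the user the secure step tries to create already exist?
id plexuser
getent passwd plexuser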

Manual Sync Question

When I run a manual sync (because I'm sometimes impatient.. lol), I noticed it sat for about 10 minutes before actually uploading.

Starting sync at Mon Apr 15 19:47:44 CDT 2019
Copying files
/TVShows/Bar Rescue/Season 6/Bar Rescue-S06E25.mkv
Files to copy: 1
Bytes to copy: 1562664119
2019/04/15 19:47:44 INFO : Starting HTTP transaction limiter: max 5 transactions/s with burst 1
0 / 0 Bytes, -, 0 Bytes/s, ETA -

It didn't actually start uploading until 19:58.
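
If it helps debugging: the limiter line in the log appears to come from rclone's --tpslimit flag, so re-running the copy by hand with verbose output might show what happens during the quiet ten minutes. A sketch; the source path and remote name are assumptions:

# Re-run the sync manually with verbose logging:
rclone copy -vv --tpslimit 5 /mnt/upload Gdrive: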

Any ideas?

[Request] Fail2ban plugin in Netdata

Thanks for v2.

I would like the ability to view the Fail2ban jail chart in Netdata. This is normally a standard Netdata plugin which needs to be enabled/configured.

Thanks

nzbget

Hi, I've been messing around with Gooby to try it out. It seems like a pretty good solution for running Plex/Gdrive on a VPS, so thanks - it's great.
Just to confirm: plex/sonarr etc. need to be pointed at Media?
Also, how does the download/upload process flow? I have tried nzbget but can't see where it's downloading to.
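
To trace the flow, one option is to ask Docker where the container's paths actually live on the host; 'nzbget' is the container name assumed here:

# Map each container mount back to its host folder:
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' nzbget
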
Thanks

Issues with mounts in v2.1.0

I made a fresh install of Ubuntu 18.04.1 and Gooby 2.1.0 and I'm experiencing stability issues with the Rclone/MergerFS mount. Basically it drops when I do a System Clean, and I have to reboot to get it online again. However, a reboot does not always help, and overall the mount seems very unstable and not persistent.

Not sure how I can provide more info for debugging, but please let me know.

Rclone setup is:
[Gdrive]
type = drive
scope = drive
root_folder_id = 1hiddenn
token ={"access_token":"hidden","token_type":"Bearer","refresh_token":"1/hidden","expiry":"2018-11-12T12:18:37.767056944+01:00"}

Issues with docker? Plex and RClone keep unloading.

So first, I apologize: this is my first experience with docker, so if my terminology is incorrect please tell me. :) I have installed Gooby, rClone (Google Drive) and Plex. I am having issues with the persistence of the rClone and Plex installations. I am not sure if it is related, but it happens when I add content to my Plex library. First the rClone installation falls off: I can select "install" and run through the setup, but it does not actually install. Rebooting as root then results in Plex no longer showing up, alongside the rClone issue.

When I run a system cleanup I get these errors:
Failed to start rclonefs.service: Unit rclonefs.service not found.
Failed to start mergerfs.service: Unit mergerfs.service not found.

Followed by this endless loop of errors:
mountpoint: /mnt/rclone: No such file or directory
mountpoint: /mnt/google: No such file or directory
Waiting on mounts to be created...
mountpoint: /mnt/rclone: No such file or directory
mountpoint: /mnt/google: No such file or directory
Waiting on mounts to be created...
etc etc
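
Given that both units are reported as not found, a first check (a sketch only; the unit paths are assumed from other reports in this tracker) is whether the unit files exist at all:

# Confirm the unit files are present and known to systemd:
ls -l /etc/systemd/system/rclonefs.service /etc/systemd/system/mergerfs.service
systemctl list-unit-files | grep -E 'rclonefs|mergerfs'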

I've re-imaged the server to get back to a functional Gooby, installed rClone and Plex, added my libraries, and then, about midway through, boom: Plex can't find the Media directory because rClone has disappeared, and I am back where I started. This most recent time I did the OS update before installing anything, and I have listed that version information below.

Gooby Information:
Ubuntu 18.04.2 LTS (Hetzner)
Gooby version: 2.1.4

I am not sure if you have seen this before, but I figured I would bring it to your attention. I am re-imaging the server again since I did so much fiddling around. This time I will install Ubuntu 18.04.1 LTS and not do the version upgrade.

Organizr not installing

Hello again :-)

Organizr doesn't appear to be installing, stable or beta. It says the install is complete, but it doesn't create the container.

Thanks in advance!

[Request] Ability to add apache http auth

Thanks for v2.

I would like the option to add Apache http auth to the domains as an extra layer of security. I would imagine that adding one auth for all the domains as part of the environment section would be easiest.

However, I have a use case where I would like to give access to e.g. Ombi but not the other URLs, so it would be good to either set the password per domain or enable/disable the auth per domain. That way I can either control access with different passwords, or disable authentication on OMBI alone.
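
For what it's worth, the nginx-proxy setup shown in another issue here already reads one htpasswd file per virtual host (auth_basic_user_file /etc/nginx/htpasswd/<domain>), so per-domain auth should mostly be a matter of creating or omitting that file. A sketch; the host-side location of the htpasswd volume is an assumption:

# Create auth for one domain only (htpasswd comes with apache2-utils):
htpasswd -bc /path/to/nginx/htpasswd/ombi.mydomain.com someuser somepass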

Thanks

Deluge doesn’t download and Rclone doesn’t upload automatically

Hi,
First of all, thank you for the good project you made.

I installed your suite and all works fine. I have the server on a VPS running 24/7 (an Ubuntu machine), but I have two problems I can't find the solution to:

1- Deluge only downloads when I'm on my website (deluge.mywebsite.com). If the server is running 24/7, why doesn't Deluge keep downloading all day?

2- Rclone doesn’t upload automatically the media to my TeamDrive (but it works doing manually). Should I do anything? I checked your script in your Cron folder but i think it should works automatically.

Thanks for your help in advance

[Question] Sonarr writable error

When trying to add existing TV shows and browsing to the corresponding Gdrive folder, I receive the following error: Folder is not writable by user abc
