
pihole-cloudsync

A script to help synchronize Pi-hole adlist/blocklist, blacklist, whitelist, regex, custom DNS hostnames, and custom CNAME hostnames across multiple Pi-holes using a Git repository.

Why pihole-cloudsync?

I was running six Pi-holes on three different networks at three different physical locations. I wanted all six Pi-holes to share the same adlists, blacklists, whitelists, and regex files, but it was time-consuming to manually synchronize all of them (modify the local Pi-holes, VPN into the second network and modify those, then VPN into the third network and modify those). I also wanted the ability to share custom DNS hostnames between multiple Pi-holes so that the Pi-hole UI stats display the proper local hostnames instead of IP addresses.

I wanted to use Pi-hole's built-in web UI to manage only one set of lists on one Pi-hole -- and then securely synchronize an unlimited number of additional Pi-holes. I couldn't find an existing script that did exactly what I wanted... so I wrote pihole-cloudsync.

pihole-cloudsync is lightweight enough to use if you're only syncing 2 Pi-holes on a home network, but powerful enough to synchronize virtually unlimited Pi-holes on an unlimited number of networks.

Feedback, suggestions, bug fixes, and code contributions are welcome.

How pihole-cloudsync Works

pihole-cloudsync allows you to designate any Pi-hole on any network to act as your "Master" or "Primary." This is the only Pi-hole whose list settings you will need to manage using Pi-hole's built-in web UI. The Primary Pi-hole then uses pihole-cloudsync in Push mode to upload four files to a private Git repository that you control (such as GitHub) that contain:

  1. Your adlists/blocklists (queried from Pi-hole's database at /etc/pihole/gravity.db)
  2. Your domain lists: "exact match" and "regex" versions of your white and black lists (queried from Pi-hole's database at /etc/pihole/gravity.db)
  3. Any custom DNS names you've configured via the Pi-hole UI (copied from /etc/pihole/custom.list)
  4. Any custom CNAMEs you've configured via the Pi-hole UI (copied from /etc/dnsmasq.d/05-pihole-custom-cname.conf)
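The database queries behind items 1 and 2 can be illustrated with a small mock. This is a sketch only, not the script's actual code: the table layout below is simplified, and the real queries and output filenames may differ. It assumes the sqlite3 command-line tool is installed.

```shell
# Build a throwaway database with a minimal stand-in for the adlist table
# found in /etc/pihole/gravity.db (illustrative schema, not Pi-hole's full one).
DB_DIR=$(mktemp -d)
DB="$DB_DIR/gravity.db"
sqlite3 "$DB" "CREATE TABLE adlist (id INTEGER PRIMARY KEY, address TEXT, enabled INTEGER);"
sqlite3 "$DB" "INSERT INTO adlist (address, enabled) VALUES ('https://example.com/hosts.txt', 1);"

# Export the enabled adlists to a text file, as a Push-mode run might do
# before committing the result to the Git repo.
sqlite3 "$DB" "SELECT address FROM adlist WHERE enabled = 1;" > adlists.list
cat adlists.list   # prints https://example.com/hosts.txt
```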

All other Secondary Pi-holes that you wish to keep synchronized use pihole-cloudsync in Pull mode to download the above files from your private Git repository.

The script is designed to work with any Git repo that your Pi-holes can access, but I have only personally tested it with GitHub.

No more Pi-hole v4 support

This script was originally written to work on Pi-hole version 4. However, as of Pi-hole version 5, most of the settings that need to be synchronized between Pi-holes are no longer stored in individual text files -- they are now all stored in a single database file called gravity.db. The changes required to make pihole-cloudsync work with Pi-hole v5 mean it will no longer work with versions of Pi-hole earlier than v5.

Before proceeding, verify that your Primary and all Secondary Pi-holes are running Pi-hole v5 or later.

Setup

Prior to running pihole-cloudsync, you must first create a new dedicated Git repository to store your lists, then clone that new repository to all Pi-holes (both Primary and Secondary) that you wish to keep in sync. The easiest way to do that is to use my my-pihole-lists GitHub repo as a template. Do not simply create a regular fork of my example repository -- GitHub does not allow a public fork to be made private. When you create a new repository from the template instead, GitHub will allow you to set your new repository as "Private." Don't worry if the example lists in the example repo are different than yours. You'll overwrite your new repo with your own Pi-hole lists the first time you run pihole-cloudsync in Push mode.

On GitHub

  1. Sign into GitHub.
  2. Go to https://github.com/stevejenkins/my-pihole-lists.
  3. Press Use this template to create a new repository from it.
  4. Optional: While creating the new repository, set its visibility to Private. (GitHub won't let you make a public fork private later, which is why the template method is used.)
  5. On your new repo's main page, press the Clone or download button and copy the Clone with HTTPS link to your clipboard.

On your Primary Pi-hole device

  1. Install Git (on Raspbian/Debian do sudo apt-get install git).
  2. Do cd /usr/local/bin.
  3. Install pihole-cloudsync with sudo git clone https://github.com/stevejenkins/pihole-cloudsync.git.
  4. Create your private local Git repo with sudo git clone https://github.com/<yourusername>/my-pihole-lists.git (paste the URL you copied from GitHub).
  5. If you're using a repo name other than my-pihole-lists, edit /usr/local/bin/pihole-cloudsync/pihole-cloudsync and edit the personal_git_dir variable to match your local Git repo location.
  6. Run /usr/local/bin/pihole-cloudsync/pihole-cloudsync --initpush to initialize the local Pi-hole in "Push" mode. It will grab your Primary Pi-hole's list files (both from the gravity.db database and /etc/pihole) and add them to your new local Git repo. The --initpush mode should only need to be run once on your Primary Pi-hole.
  7. Run /usr/local/bin/pihole-cloudsync/pihole-cloudsync --push to push/upload your Primary Pi-hole's lists from your local Git repo to your remote Git repo. You will have to manually enter your GitHub email address and password the first time you do this, but read below for how to save your login credentials so you can run this script unattended.
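To make the Push steps concrete, here is a rough sketch of what a push amounts to in Git terms, using a local bare repository as a stand-in for your GitHub repo. All paths, the identity, and the commit message are illustrative; the real script manages the commit and push for you.

```shell
# A local bare repo plays the role of the remote GitHub repository.
WORK=$(mktemp -d)
git init --bare -q "$WORK/remote.git"
git clone -q "$WORK/remote.git" "$WORK/my-pihole-lists"
cd "$WORK/my-pihole-lists"
git config user.email "pihole@example.com"   # placeholder identity for the demo
git config user.name "Pi-hole Sync"

# Stand-in for the list file(s) exported from the Primary Pi-hole.
echo "https://example.com/hosts.txt" > adlists.list

# Commit and push the lists, as --push does in essence.
git add .
git commit -q -m "Update Pi-hole lists"
git push -q origin HEAD:master
```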

On all Secondary Pi-hole devices

  1. Install Git (on Raspbian/Debian do sudo apt-get install git)
  2. Do cd /usr/local/bin
  3. Install pihole-cloudsync with sudo git clone https://github.com/stevejenkins/pihole-cloudsync.git
  4. Create your private local Git repo with sudo git clone https://github.com/<yourusername>/my-pihole-lists.git (paste the URL you copied from GitHub)
  5. If you're using a repo name other than my-pihole-lists, edit /usr/local/bin/pihole-cloudsync/pihole-cloudsync and edit the personal_git_dir variable to match your local Git repo location.
  6. Run /usr/local/bin/pihole-cloudsync/pihole-cloudsync --initpull to initialize the local Pi-hole in Pull/Download mode. You will have to manually enter your GitHub email address and password the first time you do this, but read below for how to save your login credentials so you can run this script unattended. The --initpull option will also perform your first pull automatically and only needs to be run once on each Secondary Pi-hole. All future pulls can be performed with /usr/local/bin/pihole-cloudsync/pihole-cloudsync --pull.
  7. Running pihole-cloudsync --pull will pull/download your Primary Pi-hole's lists from your remote Git repo to your Secondary Pi-hole's local Git repo. The --pull option will automatically copy the downloaded file(s) to your Pi-hole directory and tell Pi-hole to do a pihole -g command to update its lists.
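The pull side can be sketched the same way, again with a local bare repository standing in for GitHub (all paths and names are illustrative, and the final copy-to-Pi-hole step is shown only as a comment because it needs root and a real Pi-hole install):

```shell
WORK=$(mktemp -d)
git init --bare -q "$WORK/remote.git"                 # stand-in for your GitHub repo
git -C "$WORK/remote.git" symbolic-ref HEAD refs/heads/master

# Primary side: publish a list (condensed from the Push steps above).
git clone -q "$WORK/remote.git" "$WORK/primary"
( cd "$WORK/primary" \
  && git config user.email "pihole@example.com" \
  && git config user.name "Primary" \
  && echo "https://example.com/hosts.txt" > adlists.list \
  && git add . && git commit -q -m "lists" \
  && git push -q origin HEAD:master )

# Secondary side: fetch what the Primary published.
git clone -q "$WORK/remote.git" "$WORK/secondary"
cd "$WORK/secondary"
git pull -q origin master                             # what --pull runs internally
# cp adlists.list /etc/pihole/ && pihole -g           # then rebuild gravity (needs root)
cat adlists.list
```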

Running pihole-cloudsync Unattended

The following steps must be performed on each Pi-hole you wish to use with pihole-cloudsync.

In order to automate or run pihole-cloudsync unattended, you will need to either store your GitHub login credentials locally or create an SSH key for your Pi-hole's root user and upload the public key to GitHub. You will need to do this on the Primary Pi-hole as well as all Secondary Pi-holes.

The SSH key approach is for more advanced users who don't need me to explain how to do it. To store your Git credentials locally, do the following on each Pi-hole:

cd /usr/local/bin/my-pihole-lists

sudo git config --global credential.helper store

The next time you pull from or push to the remote repository, you'll be prompted for your username and password. But you won't have to re-enter them after that. So do a simple:

sudo git pull

to enter and save your credentials. Now you can run pihole-cloudsync unattended on this Pi-hole device.
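You can confirm the helper is configured in a throwaway repository first. This demo sets it per-repository (no --global) so it doesn't touch your real ~/.gitconfig:

```shell
# Create a scratch repo and set the credential helper on it only.
REPO=$(mktemp -d)
git init -q "$REPO"
git -C "$REPO" config credential.helper store

# Read the setting back to verify it took effect.
git -C "$REPO" config --get credential.helper   # prints "store"
```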

Again, the above steps must be performed on each Pi-hole you wish to use with pihole-cloudsync.

Automating pihole-cloudsync

Once each Pi-hole's local Git repo has been configured to save your login credentials, you can automate your Primary Pi-hole's "push" and your Secondary Pi-holes' "pull" in any number of ways. The simplest method is to run a cron job a few times a day. If you want more flexibility over schedule and resource use, you can also use systemd to automate. Both methods are explained below.

Automating with cron

The simplest way to automate pihole-cloudsync is to set a "push" cron job on your Primary Pi-hole that runs a few times a day, then set a "pull" cron job on each Secondary Pi-hole that pulls in any changes a few minutes after your Primary pushes them.

Once you can successfully run pihole-cloudsync --push from the command line on your Primary Pi-hole, do crontab -e (or sudo crontab -e if you're not logged in as the root user) and create a cron entry such as:

00 01,07,13,19 * * * sudo /usr/local/bin/pihole-cloudsync/pihole-cloudsync --push > /dev/null 2>&1 #Push Master Pi-hole Lists to remote Git repo

And once you can successfully run pihole-cloudsync --pull from the command line on each of your Secondary Pi-holes, do sudo crontab -e and create a cron entry that runs 5 minutes after your Primary pushes any changes, such as:

05 01,07,13,19 * * * sudo /usr/local/bin/pihole-cloudsync/pihole-cloudsync --pull > /dev/null 2>&1 #Pull Master Pi-hole Lists from remote Git repo

NOTE: On Raspbian, the script won't execute via cron without the sudo command (as shown above). If you're having trouble getting the script to run unattended on Raspbian, try including sudo in the cron command.

Automating with systemd

pihole-cloudsync pulls can also be automated with systemd, if your Pi-hole is running on a systemd-supported distro. Once you're able to successfully run pihole-cloudsync --pull from the command line on each of your Secondary Pi-holes, you can proceed with systemd setup. You must install three systemd unit files on your Pi-hole to ensure a stable and non-intrusive update process: a .service file, a .timer file, and a .slice file.

Quick Start

  1. Copy each of the three unit files from the systemd Details section below into /etc/systemd/system on your Pi-hole
  2. Tell systemd you changed its configuration files with systemctl daemon-reload
  3. Enable and start the service/timer
# Enable the relevant configs
systemctl enable pihole-cloudsync-update.service
systemctl enable pihole-cloudsync-update.timer

# Start the timer
systemctl start pihole-cloudsync-update.timer

systemd Details

  1. .service - /etc/systemd/system/pihole-cloudsync-update.service - The core service file. Configured as a 'oneshot' in order to be run via a systemd timer.
[Unit]
Description=PiHole Cloud Sync Data Puller service
Wants=pihole-cloudsync-update.timer

[Service]
Type=oneshot
User=root
Group=root
ExecStart=/usr/local/bin/pihole-cloudsync/pihole-cloudsync --pull
Slice=pihole-cloudsync-update.slice

[Install]
WantedBy=multi-user.target
  2. .timer - /etc/systemd/system/pihole-cloudsync-update.timer - The timer file. Determines when the .service file is executed. systemd timers are highly flexible and can be executed under a variety of timed and trigger-based circumstances. The ArchLinux systemd/Timer documentation is some of the best around. See their examples for many more ways to configure this systemd timer unit.
[Unit]
Description=PiHole Cloud Sync Data Puller timer
Requires=pihole-cloudsync-update.service

[Timer]
Unit=pihole-cloudsync-update.service
OnBootSec=15
OnUnitActiveSec=1h

[Install]
WantedBy=timers.target
  3. .slice - /etc/systemd/system/pihole-cloudsync-update.slice - The slice file. Determines how much of the total system resources the .service is allowed to consume. This slice is in place to keep the update process in check and ensure that there will always be plenty of room for the Pi-hole service to answer DNS queries without being obstructed by potential pihole-cloudsync updates. If you'd like to learn more about systemd slices, check out this wiki page.
[Unit]
Description=PiHole Cloud Sync Puller resource limiter slice
Before=slices.target

[Slice]
CPUQuota=50%

Special thanks to Conroman16 for contributing the systemd automation instructions.

Updating pihole-cloudsync

To upgrade to the latest version of pihole-cloudsync, do the following on all Primary and Secondary Pi-holes. Note that this will completely overwrite any previous modifications you've made to pihole-cloudsync.

  1. Do cd /usr/local/bin/pihole-cloudsync
  2. Do git fetch --all
  3. Do git reset --hard origin/master

Your local version of pihole-cloudsync is now updated to the latest release version.
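To see why this update method discards local changes, here is a local demonstration of the fetch-and-reset pattern using stand-in repositories (all paths and file contents are illustrative):

```shell
WORK=$(mktemp -d)
git init --bare -q "$WORK/upstream.git"           # stand-in for the GitHub repo
git -C "$WORK/upstream.git" symbolic-ref HEAD refs/heads/master

# Seed the "upstream" with a release.
git clone -q "$WORK/upstream.git" "$WORK/seed"
( cd "$WORK/seed" \
  && git config user.email "demo@example.com" && git config user.name "Demo" \
  && echo "v1" > pihole-cloudsync && git add . \
  && git commit -q -m "release" && git push -q origin HEAD:master )

# Your installed copy, with a local modification that will be lost.
git clone -q "$WORK/upstream.git" "$WORK/local"
cd "$WORK/local"
echo "my local hack" >> pihole-cloudsync

git fetch --all -q
git reset -q --hard origin/master                 # local edits are overwritten
cat pihole-cloudsync                              # prints "v1" again
```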

Disclaimer

You are totally responsible for anything this script does to your system. Whether it launches a nice game of Tic Tac Toe or global thermonuclear war, you're on your own. :)

Contributors

0xflotus, conroman16, jgoguen, mooleshacat, stevejenkins, wetnun


pihole-cloudsync's Issues

systemctl example

For the README.md, in Automating with systemd, systemd Details, .service, it shows this line
ExecStart=/usr/local/bin/pihole-cloudsync --pull

shouldn't it be?
ExecStart=/usr/local/bin/pihole-cloudsync/pihole-cloudsync --pull

pihole-cloudsync --pull works manually but receive authentication error when systemd service runs

If I run /usr/local/bin/pihole-cloudsync/pihole-cloudsync --pull this works 100% without issue, letting me know the stored credentials are working. When my systemd task runs this command I get an issue.

pihole-cloudsync[65509]: fatal: could not read Username for 'https://github.com': No such device or address

Jan 02 11:24:10 pihole2 pihole-cloudsync[65507]: error: Could not fetch origin

Jan 02 11:24:10 pihole2 pihole-cloudsync[65500]: Local Pi-hole lists match remote Git repo. No further action required.

Jan 02 11:24:10 pihole2 systemd[1]: pihole-cloudsync-update.service: Succeeded.

Even though it says succeeded it does not actually succeed in downloading updated data from the repo (because of failed authentication). I have tested and verified the changes did not pull down, and then ran the command manually and verified THAT pulled the change. I'm not sure why the command works manually but something in the automated command isn't recognizing the saved credentials. I went through the info in your guide and have verified the credential.helper is working so I'm thinking this is a 2FA specific issue with the personal access token I have setup to make this work.

Suggestion for extra options

Would it be possible to add extra options? One for each of the items synchronized @ pull?
So a ‘--NOadlists’, a ‘--NOdomainlists’, a ‘--NOdnsnames’ and a ‘--NOcnames’.
Each of these options will skip the pull/import action.

It would help if you, for instance, do want to sync all but custom DNSnames or all but custom CNAMEs.

Github repo not being updated with --push command

I've added a few whitelisted domains to my main Pi-hole and unable to push the changes to my Github repo. Whenever I run the --push command, Local Pi-hole lists match remote Git repo. No further action required. is printed to the terminal window and the files in my repo remain unchanged.

"fatal: ambiguous argument" when running --pull command

When running sudo /usr/local/bin/pihole-cloudsync/pihole-cloudsync --pull I get the error

fatal: ambiguous argument 'HEAD..origin/master': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'

The error seems harmless as my lists are successfully pulled from my repo and gravity is updated as expected. I'm fairly inexperienced with using Git, but I'm fairly sure I've configured everything according to the instructions outlined in the readme.

Private Repo - Update Guide Instructions

Wanted to make a suggestion to update the guide instructions for setup as a private repo.

I ran into this issue when making my forked repo private. I tracked down information that GitHub will NOT allow public forks to be private, which is what your guide advises to do. That option will be grayed out in the settings and unselectable if you do a fork.

You need to create a new repository using "my-pihole-lists" as a template instead. When you create it from the template, GitHub prompts you at creation time to make it public or private; then set the repository name to my-pihole-lists. Everything works as normal doing it that way and I could continue the guide. I had never done a private repo before so this was all new to me.

Number of Blocked Domains on Dashboard not updating on Secondary PiHole

Hello.

I am able to confirm that my setup is working. The Master Pihole uploads to my github repo via cron job and my Secondary Pihole is pulling down those changes (new adlists at least) via cron job fine.

However the Domains blocked number on the Dashboard for the secondary Pihole never updates.

Pihole Master
image

Pihole Slave
image

My repo:
https://github.com/Wh1t3Rose/my-pihole-lists

Debug logs:
Master Pihole: https://pastebin.com/T8isvcJ9
Secondary: https://pastebin.com/PnHptpwZ

Running this in a docker-container

I run Pihole and Unbound in a Docker-container, so I guess I have to put the script in the docker-image as well and let it run from there. I'll check out how I can add this to it. Might indeed be convenient. Added a 2nd pihole in another network a few days ago, but adding manually all the lists is one hell of a job, so I quit after a few. :-)

Group assignments and groups are not synced

The groups are not synced at all, and groups assignments does not work for Whitelist and Blacklist.

I guess that if there is no assigned group to a whitelist/blacklist rule, it won't work, so the rules won't be applied the same on the master as on the clones?

Here is an example:

Master
image

Clone
image

FYI - group management sync will not fully work due to current schema

I have raised a 'feature request' on the pihole discourse for this but just FYI the future group sync feature (which I have been looking at coding) will not fully work due to the use of the group keyword for the table name (only this table is affected).

The use of a keyword results in a syntax error when trying to export the table to CSV.

A workaround is to manually create the groups to match but I'm not sure if this will cause issues as internally the DB's will be different.

Coding wise the change is very easy with additional commands for the 3 group tables and 2 client tables, but the client side would raise issues if the piholes are in different locations and therefore see different clients. This might require the client sync aspect to be optional via a command-line flag?

Adlist comment with quotemark breaks gravity update after pull

Adding a comment to an adlist that contains a quote mark or apostrophe, i.e. "Person's Blocklist", breaks the gravity update on the "pull" client after the database update.

After updating the database, the pull client attempts a gravity update and receives an error:
[i] Creating new gravity databases...
[✗] Unable to copy data from /etc/pihole/gravity.db to /etc/pihole/gravity.db_temp
Runtime error near line 24: FOREIGN KEY constraint failed (19)
[✗] Unable to create gravity database. Please try again later. If the problem persists, please contact support.

Removing the quote marks from blocklist comments on the "push" client allows the "pull" client to update gravity correctly after the next pull.

Examining the db_dump.sql file shows that the insert command attempts to escape the single quote by doubling it. This appears not to be handled well on the other end.

sqlite3 command not found | db_dump.sql not tracked (should it be??)

/usr/local/bin/my-pihole-lists$ sudo /usr/local/bin/pihole-cloudsync/pihole-cloudsync --push
--push option detected. Running in Push/Upload mode.
/usr/local/bin/pihole-cloudsync/pihole-cloudsync: line 99: sqlite3: command not found
Local Pi-hole lists are different than remote Git repo. Updating remote repo...
On branch master
Your branch is up to date with 'origin/master'.

Untracked files:
(use "git add ..." to include in what will be committed)
db_dump.sql

nothing added to commit but untracked files present (use "git add" to track)
Done!

initpush doesn't commit

Everything seems to work smoothly, but the --initpush and the subsequent --push don't update the repo with the local pihole's lists, only the examples from your github repository.

I can confirm nothing has happened as the commits do not show in github.

Am I missing something here? Thanks.

Collaborate on Docker Images

I've written and tested pihole-cloudsync-docker. It's really as simple as packaging this script up into layers on top of the regular pihole docker image, but it also does some clever things not mentioned here by hooking into gravity.sh and list.sh to push immediately to git when config changes.

I'm currently only supporting v4 (latest) but I have builds on docker hub for all architectures supported by pihole (Model 2B - Model 4b)

I'd like to continue testing this for a while on my own, but also talk about how we could either bring my work into this repository or add some docs to link out to my repo.

I think an auto-syncing docker image would provide a lot of value.

Cheers!

Unable to copy data from /etc/pihole/gravity.db to /etc/pihole/gravity.db_temp

Hello,

I'm trying this out for the first time. My secondary pihole is running from within docker.

When I run "pihole-cloudsync --initpull" I eventually run into the following error:

"Unable to copy data from /etc/pihole/gravity.db to /etc/pihole/gravity.db_temp"

Any pointers on where to start troubleshooting.

Thanks for looking!

[Suggestion] Provide an option to specify Docker container name and run docker exec to execute pihole -g

I just set up a secondary pihole for my home network using Docker and I was looking to use this tool to sync the two together. To make this work with my docker container, I'm planning on running it on my docker host (Ubuntu 20.04) and changing the pihole_dir to match the volume for the pihole dir of the Docker container. This should work fine but the pihole -g command won't be able to run on the host, it has to be directed to the container instead - docker exec <container_name> pihole -g

Note: I considered running docker-compose restart or other docker equivalent command but I don't feel it is necessary, as pihole runs the gravity update on a cron anyway, so restarting the whole container feels like overkill in this use case.

I'm planning on simply replacing the two pihole -g commands in the bash script to said docker command instead, and it should work. Perhaps an option can be included similar to the shared hosts option where you if you specify a container name, it will run the docker exec command instead of simply pihole -g

The only other way I could think of to get this to work for a docker container would be to build your own docker image from the pihole image, but have the dockerbuild bring in Git and this tool. However, that's a bit more complicated - especially for the average user - so I figured adding an option as explained above would be a simple yet effective alternative.

On --initpush

pihole-cloudsync: line 69: sqlite3: command not found
./pihole-cloudsync: line 70: sqlite3: command not found

Possible to shrink gravity.db before pushing?

My gravity.db file is just over 200 MB and, after spending the day setting up Git LFS on my devices, I'm now nearing my monthly data cap for the service.

Would it be possible to compare the gravity.db file currently in use with the one stored in the local repo and delete whatever is redundant, then push that shrunken file to the remote repo? Would the secondary piholes then be able to use that shrunken file to fill in the gaps of their own local gravity.db files?

Suggestion: reboot pi after the pull command

Is it possible to add a script line for rebooting the pi when the pull command is finished?

I always get the API error when the pull request is finished and a reboot fixed it.

If not, just tell me where I should add the line and I will do it myself.

Thank you for taking the time to answer.

Script won't update github via cron

Hi - First off, thanks for pulling this together, it is awesome.

I followed the instructions, and when I manually run the script on my 'push' pihole, everything in my git repo is updated as expected. However, when I try to run the script via cron, my repo is not updated.

I piped the output of the cron job to a log file, and it looks like everything is doing what it should, so I'm not sure what I'm doing wrong. Here are the relevant details:

Pihole is running on Ubuntu server 18.04

Line in my root crontab: (this is just for testing purposes, which is why I have it running every minute)

sudo /usr/local/bin/pihole-cloudsync/pihole-cloudsync --push > "/home/kevin/logs/pihole-cloudsync.log"

Output of above:

--push option detected. Running in Push/Upload mode.
Local Pi-hole lists are different than remote Git repo. Updating remote repo...
Done!

No matter how many times I let the cron job run, my git repo is never updated.

When I run the script manually with the following (no sudo or anything is needed):

/usr/local/bin/pihole-cloudsync/pihole-cloudsync --push

I get the same output, but the git repo is updated.

Any idea on what I could be doing wrong?

Push Does Not Include - gravity.db and custom.list

I only have a single Pihole server at this point in time. I am using the custom.list file to store DNS records for systems on my internal network.

I went through the installation routine as documented:

  1. Forked the repo
  2. Marked as private
  3. Cloned pihole-cloudsync to /usr/local/bin
  4. Cloned my repo to /usr/local/bin (full path is now /usr/local/bin/my-pihole-lists)
  5. Ran sudo pihole-cloudsync --initpush
  6. Ran sudo pihole-cloudsync --push

When I go and look at the status of my repo, the following files were updated within the past few minutes:

  • adlists.list
  • black.list
  • blacklist.txt
  • regex.list
  • whitelist.txt

The following are marked as updated "Last month" (i.e. not synced):

  • custom.list
  • gravity.db

Shared hosts is not enabled at this time, so I don't expect to see sharedhosts.txt updated at all.

Why would pihole-cloudsync not pick up changes in my custom.list and gravity.db?

Push automation

Noticed you didn't provide the files for automating the push. I assume it is just changing the description and the command in the service file, but figured I would flag it.

Documentation - Marking Forked Repository Private

In the step-by-step directions, under "On GitHub" step 3 calls for creating a fork, then step 4 (optional) is on marking the forked repo as private.

I just tried this and, in the Danger Zone section, there's a message that states "Public forks can't be made private. Please duplicate the repository."

I've never tried forking a repository before, so this is all new to me. I am going to try the process that GitHub outlines in terms of duplicating a repo but I thought I would bring this to your attention in the event you wanted to add details to your documentation.

Thanks so much for sharing this!!!

ERROR: Updates were rejected because the tip of your current branch is behind

Hi @stevejenkins

Has anyone run into this error?

root@raspberrypi-hole1:/usr/local/bin/pihole-cloudsync# ./pihole-cloudsync --push
--push option detected. Running in Push/Upload mode.
Local Pi-hole lists are different than remote Git repo. Updating remote repo...
To github.com:REDACTED/my-pihole-lists.git
! [rejected] main -> main (non-fast-forward)
error: failed to push some refs to '[email protected]:REDACTED/my-pihole-lists.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
Done!

No such file or directory

when doing step 6 at primairy pihole:
pi@raspberrypi:~ $ /usr/local/bin/pihole-cloudsync/pihole-cloudsync --initpush
-bash: /usr/local/bin/pihole-cloudsync/pihole-cloudsync: No such file or directory

What am I doing wrong?

[Suggestion] Use PyPhTP or implement similar

Hi,

I don't use cloud-sync as I have only ever had one RPI and I just live with no DNS if things break lol.

In any case, I saw your suggestion on Reddit that one of my other projects may have been suitable for keeping the file size down to allow you to sync. In actual fact, this wouldn't be appropriate as it wouldn't copy enough information across to make a full 1:1 sync.

Anyway, I made this, this morning.

It basically removes all domains from the gravity table which is usually where the bulk of the DB size is, vacuums the DB (which reduces size down to around 0.1MB), exports it to /etc/pihole/PyPhTP/ and then repopulates the gravity table. If you sync the output directory to your slave RPI, you can use the same script to 'inject', or rather overwrite the pi-hole db.

On inject, it copies the gravity.db from /etc/pihole/PyPhTP/ to /etc/pihole/ and then repopulates the gravity table.

I'm not 100% sure whether I need to stop the Pi-hole service to try to prevent locks to the DB but I am trying to avoid that.

Feel free to give it a try though. If nothing else, it may give you an idea of how you might adapt your script to achieve the same outcome.

https://github.com/mmotti/PyPhTP

Example (eject) with loads of random blocklists:

mmotti@ubuntu-server:~$ curl -sSl https://raw.githubusercontent.com/mmotti/PyPhTP/master/PyPhTP.py | sudo python3 - --eject
[i] Pi-hole directory located
[i] Write access is available to Pi-hole directory
[i] Pi-hole DB located
[i] Connection established to Pi-hole DB
[i] gravity.db size: 90.24 MB
[i] Emptying the gravity table
[i] Updating the gravity count in the info table
[i] Running Vacuum
[i] gravity.db size: 0.11 MB
[i] Restarting Pi-hole
[i] Closing connection to the Pi-hole DB
[i] Copying gravity.db to /etc/pihole/PyPhTP/gravity.db
[i] Correcting permissions
[i] Refreshing Gravity for source database

90.24MB down to 0.11MB just by removing the gravity entries.

"Domains on Adlists" = 0 after pull

I have confirmed this on two systems. After the cron job runs, the Dashboard in PiHole says there are 0 domains on adlists. I ran the cron command manually and noticed this:

[i] Creating new gravity databases...
 [✗] Unable to copy data from /etc/pihole/gravity.db to /etc/pihole/gravity.db_temp
 Error: near line 15: in prepare, foreign key mismatch - "domainlist_by_group" referencing "domainlist" (1)
Error: near line 19: in prepare, foreign key mismatch - "adlist_by_group" referencing "adlist" (1)

More info: if I run pihole -g from the command line, logged in as root, it works fine.

How do I completely uninstall pihole-cloudsync?

I'm using systemd to automate the push/pull and I can remove that part easily enough, but was wondering what else I might miss. I've been using the script to sync 2 pihole instances in my home lab and have run into issues where the pull instance has lost its gravity DB. I'd like to remove pihole-cloudsync completely while I try to resolve this issue.

Can't push to git repo with private repository

Whenever I try to run pihole-cloudsync --push, I get the following error:

fatal: Authentication failed for 'https://github.com/NGatti1997/my-pihole-lists.git/'
error: Could not fetch origin
Local Pi-hole lists are different than remote Git repo. Updating remote repo...

*** Please tell me who you are.

Run

  git config --global user.email "[email protected]"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: unable to auto-detect email address (got 'root@raspberrypi.(none)')

Not sure where the problem lies. I'm sure the username and password are right, and I've run git config.
