
lndg's Introduction

LNDg - GUI for LND Data Analysis and Node Management

Welcome to LNDg, an advanced web interface designed for analyzing LND data and automating node management tasks.

Choose your preferred installation method:

  • 1-Click Installation: Easily install LNDg directly from popular platforms like Umbrel, Citadel, Start9 and RaspiBlitz.
  • Docker Installation: Ideal for users familiar with Docker and Docker Compose.
  • Manual Installation: If you prefer a hands-on approach to set up LNDg.

Docker Installation (requires Docker and Docker Compose)

Prepare Install

# Clone the repository
git clone https://github.com/cryptosharks131/lndg.git

# Change directory to the repository
cd lndg

# Customize the docker-compose.yaml file
nano docker-compose.yaml

Replace the contents of docker-compose.yaml with your desired volume paths and settings. An example configuration is shown below.

services:
  lndg:
    build: .
    volumes:
      - /home/<user>/.lnd:/root/.lnd:ro
      - /home/<user>/<path-to>/lndg/data:/lndg/data:rw
    command:
      - sh
      - -c
      - python initialize.py -net 'mainnet' -server '127.0.0.1:10009' -d && supervisord && python manage.py runserver 0.0.0.0:8889
    network_mode: "host"

Build and Deploy

# Deploy the Docker image
docker-compose up -d

# Retrieve the admin password for login
nano data/lndg-admin.txt
  • This example configuration will host LNDg at http://0.0.0.0:8889. Use the machine IP to reach the LNDg instance.
  • Log in to LNDg using the provided password and the username lndg-admin.

Updating

docker-compose down
docker-compose build --no-cache
docker-compose up -d

# OPTIONAL: remove unused builds and objects
docker system prune -f

Manual Installation

Step 1 - Install LNDg

  1. Clone the repository: git clone https://github.com/cryptosharks131/lndg.git
  2. Change the directory into the repository: cd lndg
  3. Ensure you have Python virtualenv installed: sudo apt install virtualenv
  4. Set up a Python 3 virtual environment: virtualenv -p python3 .venv
  5. Install the required dependencies: .venv/bin/pip install -r requirements.txt
  6. Initialize necessary settings for your Django site: .venv/bin/python initialize.py
    1. use .venv/bin/python initialize.py --help to see additional options
    2. add -wn | --whitenoise option to serve static (.js, .css) files (required if installing manually)
      • if whitenoise option is provided, you'll need to install it via .venv/bin/pip install whitenoise
  7. The initial login user is lndg-admin, and the password will be generated and stored in: data/lndg-admin.txt
  8. Run the server using a Python development server: .venv/bin/python manage.py runserver 0.0.0.0:8889

Step 2 - Setup Backend Controller For Data, Automated Rebalancing, HTLC Stream Data and p2p-trade-secrets

The file controller.py orchestrates the services needed to update the backend database with the most up-to-date information, rebalances any channels based on your LNDg dashboard settings, listens for any failure events in your HTLC stream and serves the p2p trade secrets.

Recommended Setup with Supervisord (least setup) or Systemd (most compatible):

  1. Systemd (2 options)

  2. Supervisord

    • Configure Supervisord by running: .venv/bin/python initialize.py -sd
    • Install Supervisord: .venv/bin/pip install supervisor
    • Start Supervisord: supervisord

Alternatively, you may create your own task for these files using your preferred tool (task scheduler, cron job, etc).

Updating

  1. Make sure you are in the LNDg folder: cd lndg
  2. Pull the new files from the repository: git pull
  3. Migrate any database changes: .venv/bin/python manage.py migrate

Notes

  1. If you're not using default settings for LND or you'd like to run on a network other than mainnet, use the correct flags in step 6 (see initialize.py --help) or edit the variables directly in lndg/settings.py.
  2. You cannot run the development server outside of DEBUG mode due to static file issues. To address this, install and configure Whitenoise by running: .venv/bin/pip install whitenoise && rm lndg/settings.py && .venv/bin/python initialize.py -wn (see 6.2)
  3. You can always update the lndg/settings.py file by directly modifying it or re-running the script .venv/bin/python initialize.py <options> -f. (see 6)
  4. If you plan to run this site continuously, it's advisable to set up a proper web server to host it (see Nginx below).
  5. You can manage your login credentials from the admin page, accessible at http://<your-hosting-lndg-ip:port>/lndg-admin.
  6. If you encounter issues accessing the site, ensure that any firewall is open on port 8889, where LNDg is running.

Using A Webserver

You can serve the dashboard at all times using a webserver instead of the development server. A webserver will serve your static files, so installing Whitenoise is not required when running this way. Any webserver can host the site if configured properly. A bash script is included to aid in setting up an nginx webserver.

To set up the nginx webserver, run the following command:

sudo bash nginx.sh

When updating

When updating your LNDg installation, follow the same steps as described above. However, after updating, you will also need to restart the uWSGI service to apply the changes to the user interface (UI).

To restart the uWSGI service, use the following command:

sudo systemctl restart uwsgi.service

Postgres

Optionally, you may choose to run LNDg with a Postgres database instead of the default SQLite.

A setup guide can be found here: Postgres Setup

Key Features

Track Peer Events

LNDg will track the channel-policy changes your peers make on your open channels, as well as any connection events that happen with those channels.

Batch Opens

You can use LNDg to batch open up to 10 channels at a time with a single transaction. This can help to significantly reduce the channel open fees incurred when opening multiple channels.

Watch Tower Management

You can use LNDg to add, monitor, or remove watch towers from the LND node.

Suggests Fee Rates

LNDg will suggest adjustments to the currently set outbound fee rate for each channel, driven by historical payment and forwarding data over the last 7 days. You can use the Auto-Fees feature to act on these suggestions automatically.

You may see another adjustment right after setting the new suggested fee rate on some channels. This is normal, and you should wait ~24 hours before changing the fee rate again on any given channel.

Suggests New Peers

LNDg will make suggestions for new peers to open channels to based on your node's successful routing history.

There are two unique values in LNDg:

  1. Volume Score - A score based upon both the count of transactions and the volume of transactions routed through the peer
  2. Savings By Volume (ppm) - The amount of sats you could have saved during rebalances if you were peered directly with this node over the total amount routed through the peer
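
As a rough illustration of how these two metrics combine their inputs, here is a minimal sketch. The exact formulas are internal to LNDg; the helper names and constants below are assumptions for illustration only.

```python
# Illustrative helpers for the two peer metrics described above.
# Constants and names are assumptions for this sketch, not LNDg's internals.

def volume_score(tx_count: int, routed_amt_sats: int) -> int:
    """Blend routed-transaction count and routed volume into a single score."""
    return round((round(tx_count / 5) + round(routed_amt_sats / 500_000)) / 10)

def savings_by_volume_ppm(rebalance_fees_sats: int, routed_amt_sats: int) -> float:
    """Sats potentially saved by direct peering, per million sats routed."""
    return rebalance_fees_sats / routed_amt_sats * 1_000_000

print(volume_score(50, 5_000_000))
print(savings_by_volume_ppm(500, 2_000_000))
```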

Channel Performance Metrics

LNDg will aggregate your payment and forwarding data to provide the following metrics:

  1. Outbound Flow Details - This shows the amount routed outbound next to the amount rebalanced in
  2. Revenue Details - This shows the revenue earned on the left, the profit (revenue - cost) in the middle, and the assisted revenue (amount earned due to this channel's inbound flow) on the right
  3. Inbound Flow Details - This shows the amount routed inbound next to the amount rebalanced out
  4. Updates - This is the number of updates the channel has had and is directly correlated to the space it takes up in channel.db

LNDg also provides a P&L page in order to track overall metrics and profitability of the node.

Password Protected Login

The initial login username is lndg-admin but can be easily modified by going to the page found here: /lndg-admin

Suggests AR Actions

LNDg will make suggestions for actions to take around Auto-Rebalancing.

AR-Autopilot Setting

LNDg will automatically act upon the suggestions it makes on the Suggests AR Actions page.

HTLC Failure Stream

LNDg will listen for failure events in your htlc stream and record them to the dashboard when they occur.

API Backend

The following data can be accessed at the /api endpoint:
payments paymenthops invoices forwards onchain peers channels rebalancer settings pendinghtlcs failedhtlcs
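
As a quick sketch, these endpoints can be queried over HTTP, and appending ?format=json returns raw JSON instead of the browsable page. The host and port below are assumptions (this README's examples run LNDg on port 8889); swap in your own.

```python
from urllib.parse import urlencode

# Build a URL for one of the /api endpoints listed above. Host and port are
# assumptions; LNDg runs on port 8889 in this README's examples.
def api_url(endpoint: str, host: str = "127.0.0.1", port: int = 8889) -> str:
    """URL that returns JSON (rather than the browsable page) for an endpoint."""
    return f"http://{host}:{port}/api/{endpoint}/?" + urlencode({"format": "json"})

print(api_url("channels"))

# Fetching could then look like:
#   import json
#   from urllib.request import urlopen
#   channels = json.loads(urlopen(api_url("channels")).read())
```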

Peer Reconnection

LNDg will automatically try to resolve any channels that are seen as inactive, no more than every 3 minutes per peer.

Auto-Fees

Here are some additional notes to help you get started using Auto-Fees (AF).

LNDg can update your fees on a channel every 24 hours (default) if there is a suggestion listed on the fee rates page. You must make sure the AF-Enabled setting is set to 1 and that individual channels you want to be managed are also set to enabled. You can view a log of AF changes by opening the Autofees tab.

You can customize some settings of AF by updating the following settings:
AF-FailedHTLCs - The minimum daily failed-HTLC count that can trigger a fee increase (depending on flow)
AF-Increment - The increment size of potential fee changes; all fee suggestions will be a multiple of this value
AF-MaxRate - The maximum fee rate AF may adjust to
AF-MinRate - The minimum fee rate AF may adjust to
AF-Multiplier - Multiplier applied to incremental movements; the larger the multiplier, the larger the incremental moves
AF-UpdateHours - The number of hours that must pass since the last fee rate change before AF may adjust the fee rate again
AF-LowLiqLimit - The liquidity (%) a channel must drop below before the Low Liquidity fee algorithm runs
AF-ExcessLimit - The liquidity (%) a channel must rise above before the Excess Liquidity fee algorithm runs

AF Notes:

  1. AF changes only trigger after AF-UpdateHours hours with no fee updates via LNDg
  2. Channels with less than AF-LowLiqLimit outbound liquidity will increase fees based on failed HTLC counts and incoming flow
  3. Channels with more than AF-ExcessLimit outbound liquidity will decrease fees based on the absence of flow or assisted revenue
  4. Channels between the previous two groups will increase or decrease fees based on flow
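
The clamping behavior these settings describe can be sketched as follows. This is a simplified illustration, not LNDg's actual fee algorithm: the step logic is an assumption, but it shows how AF-Increment, AF-Multiplier, AF-MinRate, and AF-MaxRate interact.

```python
# Simplified sketch: suggestions move in AF-Increment steps (scaled by
# AF-Multiplier) and are clamped between AF-MinRate and AF-MaxRate.
def suggest_fee_rate(current_ppm, steps, increment=5, multiplier=1,
                     min_rate=0, max_rate=2500):
    """Return a fee rate `steps` increments away, clamped and kept a multiple of `increment`."""
    proposal = current_ppm + steps * increment * multiplier
    proposal = max(min_rate, min(max_rate, proposal))
    return (proposal // increment) * increment

print(suggest_fee_rate(100, steps=3))      # modest increase
print(suggest_fee_rate(100, steps=-50))    # clamped at AF-MinRate
print(suggest_fee_rate(100, steps=1000))   # clamped at AF-MaxRate
```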

Auto-Rebalancer - Quick Start Guide

Here are some additional notes to help you better understand the Auto-Rebalancer (AR).

The objective of the Auto-Rebalancer is to "refill" the liquidity on the local side (i.e. OUTBOUND) of profitable and lucrative channels, so that when a forward comes in from another node there is always enough liquidity to route the payment and, in return, collect the desired routing fees.

  1. The AR variable AR-Enabled must be set to 1 (enabled) in order to start looking for new rebalance opportunities. (default=0)
  2. The AR variable AR-Target% defines the % size of the channel capacity you would like to use for rebalance attempts. Example: If a channel size is 1M Sats and AR-Target% = 0.05 LNDg will select an amount of 5% of 1M = 50K for rebalancing. (default=5)
  3. The AR variable AR-Time defines the maximum amount of time (minutes) we will spend looking for a route. (default=5)
  4. The AR variable AR-MaxFeeRate defines the maximum amount in ppm a rebalance attempt can ever use for a fee limit. This is the maximum limit to ensure the total fee does not exceed this amount. Example: AR-MaxFeeRate = 800 will ensure the rebalance fee is always less than 800 ppm. (default=100)
  5. The AR variable AR-MaxCost% defines the maximum % of the ppm being charged on the INBOUND receiving channel that will be used as the fee limit for the rebalance attempt. Example: If your fee to node A is 1000ppm and AR-MaxCost% = 0.5, LNDg will use 50% of 1000ppm = 500ppm as the max fee limit for rebalancing. (default=65)
  6. The AR variable AR-Outbound% helps identify the channels that are candidates for refilling targeted channels. Rebalances will only consider OUTBOUND channels that have more outbound liquidity than the current AR-Outbound% setting AND are not currently targeted as INBOUND receiving channels for rebalances. Example: AR-Outbound% = 0.6 makes all channels with an outbound capacity of 60% or more AND not enabled under AR on the channel line candidates for rebalancing. (default=75)
  7. Channels need to be targeted in order to be refilled with outbound liquidity, and in order to control costs as a first priority, all calculations are based on the specific INBOUND receiving channel.
  8. Enable the INBOUND receiving channels you would like to target and set an inbound liquidity Target% on each specific channel. Rebalance attempts will be made until inbound liquidity falls below this channel setting.
  9. The INBOUND receiving channel is the channel that later routes out real payments and earns back the fees paid. Target channels that have lucrative outbound flows.
  10. Attempts that are successful, or that fail with only incorrect payment information, are tried again immediately. Example: If a rebalance of 50k was successful, AR will try another 50k immediately with the same parameters.
  11. Attempts that fail for other reasons will not be tried again for 30 minutes after the stop time. This allows the liquidity in the network to move around for 30 mins before trying another rebalancing attempt that previously failed. The 30 minute window can be customized by updating the AR-WaitPeriod setting.

Additional customization options:

  1. AR-Autopilot - Automatically act upon suggestions on the AR Actions page. (default=0)
  2. AR-WaitPeriod - How much time (minutes) AR should wait before scheduling a channel that has recently failed an attempt. (default=30)
  3. AR-Variance - Randomly vary the target rebalance amount by up to this % of the target amount. (default=0)
  4. AR-Inbound% - The default iTarget% value to assign to new channels. (default=100)
  5. AR-APDays - The number of days of historical data AP should use to decide actions to take. (default=7)
  6. AR-Workers - Define how many parallel rebalances to spin up at once. (default=1)
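
The sizing and cost rules above (AR-Target%, AR-Variance, AR-MaxCost%, AR-MaxFeeRate) can be sketched like this. The helper names are hypothetical; the sketch only mirrors the README's own examples.

```python
import random

# Rebalance amount: AR-Target% of capacity, optionally varied by a random
# AR-Variance fraction. Fee limit: AR-MaxCost% of the inbound channel's fee
# rate, capped by AR-MaxFeeRate. Helper names are hypothetical.

def rebalance_amount(capacity_sats, target_pct, variance_pct=0.0):
    amount = capacity_sats * target_pct
    if variance_pct:
        amount -= amount * variance_pct * random.random()
    return int(amount)

def fee_limit_ppm(inbound_channel_ppm, max_cost_pct, max_fee_rate_ppm):
    return min(int(inbound_channel_ppm * max_cost_pct), max_fee_rate_ppm)

# The examples from the list above:
print(rebalance_amount(1_000_000, 0.05))   # 5% of a 1M-sat channel
print(fee_limit_ppm(1000, 0.5, 800))       # 50% of 1000 ppm, under the 800 cap
```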

Steps to start the Auto-Rebalancer:

  1. Update Channel Specific Settings
    a. Go to Active Channels section
    b. Find the channels you would like to activate for rebalancing (this refills its outbound)
    c. In the far-right column, click the Enable button to activate rebalancing
    d. The dashboard will refresh and show AR-Target 100%
    e. Adjust the AR-Target to the % of liquidity you want to keep on the Remote/INBOUND side. Example: select 0.60 if you want 60% of the channel capacity on the Remote/INBOUND side, which means 40% remains on the Local/OUTBOUND side
    f. Hit Enter
    g. Dashboard will refresh in the browser
    h. Make sure you enable all channels that are valuable outbound routes for you, so they are not drained to refill the channels you have targeted (you can enable a channel and target 100% to prevent the rebalancer from taking any action on it)

  2. Update Global Settings
    a. Go to section Update Auto Rebalancer Settings
    b. Select the global settings (sample below):
    c. Click OK button to submit
    d. Once enabled is set to 1 in the global settings - the rebalancer will become active

Enabled: 1
Target Amount (%): 0.03
Target Time (min): 3
Target Outbound Above (%): 0.4
Global Max Fee Rate (ppm): 500
Max Cost (%): 0.6
  3. Go to the section Last 10 Rebalance Requests - this shows the rebalancing queue and status.

If you want a channel not to be picked for rebalancing (i.e. it is already full with OUTBOUND capacity that you desire), enable the channel and set the AR-Target% to 100. The rebalancer will ignore the channel while selecting the channels for outbound candidates and since its INBOUND can never be above 100% it will never trigger a rebalance.

Preview Screens

Main Dashboard


Channel Performance, Peers, Balances, Routes, Keysends and Pending HTLCs All Open In Separate Screens


Manage Auto-Fees Or Get Suggestions


Batch Open Channels


Suggests Peers To Open With and Rebalancer Actions To Take


Browsable API at /api (json format available with url appended with ?format=json)


View Keysend Messages (you can only receive these if you have accept-keysend=true in lnd.conf)


lndg's People

Contributors

bhaagbosedk, bitcoinite, blakejakopovic, blckbx, bochaco, cryptosharks131, federicociro, impa10r, leemr, proof-of-reality, tehelsper, workflow, yuyaogawa


lndg's Issues

Allow "open channel" to work with just remote_pubkey

Currently, to open a channel (unless it is an additional channel to an existing peer), you need to connect to the relevant peer first and then add them via pubkey.
This is naturally feasible for the human node operator, by fetching the needed data and executing the two operations in tandem, yet it could also be scripted: the operator would just input the remote pubkey of the node and, if it is not already among the connected peers, LNDg would fetch the full connection string, connect to the peer, and then open.
This could be achieved by querying the graph info, rather than parsing remote services like amboss.space or 1ml, for privacy and autonomy.

Rebalance with no effect and high costs

Had 2 channels with the same peer.
Both were enabled for rebalancing since both had more incoming than outgoing liquidity.
After the smaller of the two channels fell below iTarget%, suddenly a lot of rebalancing payments occurred, executed from the smaller to the larger channel.
The balance of the larger channel did not change, and the payments seem to have ended up back at the smaller channel?!
This resulted in an endless loop of rebalancing payments, which I fortunately noticed relatively quickly... but a few thousand sats in fees disappeared anyway.

Allow setting a minimum ppm fee

I use AR-Autofees = 1.
For some channels, the fee will be set to 0 ppm (I use a 0 base fee everywhere).
I would like to have a minimum fee of 50 ppm.
It would be nice if I could set this in the configuration.
I don't know if it would also make sense to set a max fee.

Don't just rebalance the same channel when a timeout occurs

"Last 10 Rebalance Requests (currently scheduling 2 of 5 enabled channels for rebalancing)"
shows multiple rebalance attempts for only 1 channel.
Most of the time I get a "Timeout" (sometimes also a "Rebalancer Request Timeout") on the same channel.

Rebalancing again after success, as described in #47, is fine.
But if an attempt is not successful, the other scheduled channels should also get a chance to rebalance.

/route?= does not work randomly

I noticed (it appears to be random) that clicking on a successful rebalance sometimes does not show the route. It does not appear to be linked to peer pairs.

The payment route is available in lncli and the payment is also available in lncli. However, in LNDg the "out" payment is not visible; therefore, the path/hops are also not visible.

It only happens at certain times randomly.

[Feature request] Add option to disable channels

Yes, it is not good to disable channels (prohibit traffic through them) temporarily; it almost goes against the meaning of LN.
But sometimes, for specific reasons, you want to route traffic only through specific channels.
So it would be good to have an option, a tick box or something, to disable a channel and re-activate it when necessary.
I know that updatechanstatus is removed when the node is restarted, but that is OK.

AR : Allow setting default AR-Inbound% on new channels.

For proper hammock life, there should be a possibility to set AR-Inbound% on new channels (default = 100, as today, if no local setting is available). This would allow new channels to automatically be ready with AR-Outbound% and AR-Inbound%.
AR would then be enabled via AutoPilot, if enabled.

Advanced Rebalancing Options (Variable Target Amount)

The built-in Rebalancing is a very nice feature of LNDg. However, one problem I am observing with this feature is the static Target Amount that is currently used. In a lot of cases a manual adjustment of the Target Amount is necessary to get satisfying rebalancing results. Adding the option to set a range for the Target Amount could be very helpful here.

Example:
I have a 3,000,000 sat channel with 200,000 sats outbound and 2,800,000 sats inbound liquidity that I want to rebalance.
Fees are 30 ppm / 0 base fee on my side and 10 ppm / 0 base fee on the inbound side.
I set the Target Amount to 0.1 (10%), which is 300,000 sats.

Now for some channels, rebalancing 300k in one batch is no problem, while for other channels that's quite a lot and therefore unlikely to go through.

Adding the option to set the Target Amount to range from 0.01 to 0.1 or even up to 1 and then using a random value of that for every rebalancing attempt could improve the possibility to successfully rebalance. Alternatively, the script could start with a Target Amount of 0.1 (in the case of 0.01 to 0.1) and then decrease the Target Amount with every iteration. If none of the attempts were successful, the script could start with 0.1 again.
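
The idea can be sketched in code. This illustrates the proposal only, not existing LNDg behavior, and the names are hypothetical.

```python
import random

# Proposal sketch: pick the Target Amount from a range instead of a fixed
# fraction, or walk it down across retries. Not existing LNDg behavior.

def random_target(capacity_sats, lo=0.01, hi=0.1):
    """Draw a fresh random target fraction of capacity for each attempt."""
    return round(capacity_sats * random.uniform(lo, hi))

def decreasing_targets(capacity_sats, lo=0.01, hi=0.1, steps=5):
    """Alternative: start at `hi` and shrink toward `lo` on each retry."""
    step = (hi - lo) / (steps - 1)
    return [round(capacity_sats * (hi - i * step)) for i in range(steps)]

# The 3,000,000-sat channel from the example above:
print(decreasing_targets(3_000_000))
print(random_target(3_000_000))
```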

opens/ : Suggested Open List : Wrong order ?

The list is ordered by score then ppm : .order_by('-score', 'ppm')
But, the order on ppm is from small to big, don't we want to peer with the one we pay the most : .order_by('-score', '-ppm') ?

open_list = (PaymentHops.objects
    .filter(payment_hash__in=payments_60day)
    .exclude(node_pubkey=self_pubkey)
    .exclude(node_pubkey__in=current_peers)
    .values('node_pubkey', 'alias')
    .annotate(ppm=(Sum('fee')/Sum('amt'))*1000000)
    .annotate(score=Round((Round(Count('id')/5, output_field=IntegerField())
        + Round(Sum('amt')/500000, output_field=IntegerField()))/10,
        output_field=IntegerField()))
    .annotate(count=Count('id'))
    .annotate(amount=Sum('amt'))
    .annotate(fees=Sum('fee'))
    .annotate(sum_cost_to=Sum('cost_to')/(Sum('amt')/1000000))
    .exclude(score=0)
    .order_by('-score', 'ppm')[:21])

Change how many hours between AF changes.

Nowadays, it takes 24h for AF to act upon what it was designed to do.

But my personal experience has taught me that the sweet spot is twice a day, 12h between changes.

I've been doing it for months now, and I'm afraid the 24h break won't be as effective; no matter how intelligent the algorithm, speed is of great importance.

The ability to change that would be great.

only displays details for 16 channels

LNDg v1.0.2 on Umbrel 0.4.14-build-5542633 (lnd v0.14.2-beta) on Ubuntu 20.04 Desktop

The most recently opened channels are not displayed.

Active Channels: 20 / 21
Lnd sync: True | chain sync: True | chain: "bitcoin" network: "mainnet"

Active Channels (Details) displayed only for 16 (4 missing)
Inactive Channels (Details) displayed for 1 (correct)

After a while an inactive channel became active, but the Details page was not updated:

Active Channels: 21 / 21
Lnd sync: True | chain sync: True | chain: "bitcoin" network: "mainnet"

Active Channels (Details) displayed only for 16 (5 missing)
Inactive Channels (Details) displayed for 1 (wrong)

What I would like to see:

  • Details for all active channels
  • if a channel is no longer inactive, it should disappear from this list

Unsupported config option for services: 'lndg'

I am trying to install using docker compose

I am currently getting this error

$ docker-compose up -d
ERROR: The Compose file './docker-compose.yaml' is invalid because:
Unsupported config option for services: 'lndg'

docker file:

services:
  lndg:
    build: .
    volumes:
      - /home/ben/.lnd:/root/.lnd:ro
      - /home/ben/projects/lndg/data:/lndg/data:rw
    command:
      - sh
      - -c
      - python initialize.py -net 'mainnet' -server '127.0.0.1:10009' -d && supervisord && python manage.py runserver 0.0.0.0:8889
    network_mode: "host"

docker versions:

$ docker-compose --version
docker-compose version 1.25.0, build unknown
$ docker --version
Docker version 20.10.16, build aa7e414

Automate `manage.py migrate` after `git pull`

LNDg now runs on a tight update schedule, which is very good.
I know a migration is usually needed after an update to bring the database on par with the release; I just always forget how to do that, so I need to open this repository and scroll down until I find the reference to the .venv/bin/python manage.py migrate command.

Suggestion: either make it so (probably hackish) that on git pull there is some sort of notification that verbosely reminds you to run that specific command, or better yet have jobs.py check whether the release has been updated and, instead of running the DB update jobs for that cycle, call the migration command instead.

Manual install on a long running raspiblitz: Error processing background data: list index (0) out of range

Started to test LNDg as hearing more about it and got interested. Nice project!

Find my notes about the installation to a raspiblitz with a dedicated user:
https://github.com/openoms/lightning-node-management/blob/en/hardware/raspiblitz/lndg.md

However, while the installation and the interface are running fine, I get an error with the jobs:

$ sudo -u lndg /home/lndg/lndg/.venv/bin/python /home/lndg/lndg/jobs.py
Error processing background data: list index (0) out of range

Background info about the node:

₿ uname -a
Linux raspberrypi 5.10.63-v8+ #1496 SMP PREEMPT aarch64 GNU/Linux
₿ python --version
Python 3.7.3
₿ /home/lndg/lndg/.venv/bin/pip --version
pip 22.0.3 from /home/lndg/lndg/.venv/lib/python3.7/site-packages/pip (python 3.7)
₿ lnd --version
lnd version 0.14.2-beta commit=v0.14.2-beta

It has some known quirks in the database, including stuck channels and some issues with recalling the payment history.
It is running LND v0.14.2 but has been through many versions and channels (120+ closed) over 2+ years.

Happy to test if there are suggestions to make LNDg work, but I wouldn't be surprised if it is a unique issue related to the node's database.
Documenting it also in case others have a similar issue. Will test on a cleaner raspiblitz and on FreeBSD too (for that, it's great that supervisord can be used).

Don't try Manual Installation with Umbrel 0.5 or higher

Hi,

I've just installed LNDg. Followed procedure "Manual install with Umbrel".
When I tried to log for the first time, the following error message shows:

An error has occured!
Error Code: [Errno 2] No such file or directory: '/root/.lnd/data/chain/bitcoin/mainnet/admin.macaroon'
LNDg v1.2.1

I'm running Umbrel 0.5.1

Looking forward to your guidance!

Can't access behind my reverse proxy

Hey, I've tried to set up my reverse proxy to have lndg running with my other services and I keep getting this error (after login):
image

I'm passing the -ip parameter with the domain to the initialize.py script, but it doesn't work. From the info on the error page, it seems some config is missing on the Django side.

Rebalance : Display Actual Fee Paid for rebalance

The payment_response returned contains fee_msat, which can be stored in Rebalancer and displayed on the relevant rebalance pages.
It might require a change in models.py and possibly a database migration.
I am happy to work on the PR but would need some guidance on how to deal with migration routines, if any.

>>> for response in stub.SendPaymentV2(request, metadata=[('macaroon', macaroon)]):
...     print(response)
{
    "payment_hash": <string>,
    "value": <int64>,
    "creation_date": <int64>,
    "fee": <int64>,
    "payment_preimage": <string>,
    "value_sat": <int64>,
    "value_msat": <int64>,
    "payment_request": <string>,
    "status": <PaymentStatus>,
    "fee_sat": <int64>,
    "fee_msat": <int64>,
    "creation_time_ns": <int64>,
    "htlcs": <array HTLCAttempt>,
    "payment_index": <uint64>,
    "failure_reason": <PaymentFailureReason>,
}

GUI : Wrong redirection from /autopilot to /suggested_actions

Being on /autopilot page, seeing the message :
Experimental. This will allow LNDg to automatically act upon the suggestions found here.

The word "here" links to /suggested_actions instead of the newly named /actions.

<center><h6>Experimental. This will allow LNDg to automatically act upon the suggestions found <a href="/suggested_actions" target="_blank">here</a>.</h6></center>

Below is the definition of the /actions page:

path('actions/', views.actions, name='actions'),

Run the same rebalance again when a rebalance was successful

Immediately run a successful attempt again instead of waiting for it to be scheduled again.

Discussed in #44

Originally posted by 2Fast2BCn February 20, 2022
Would it be possible to run the same rebalance again when a rebalance was successful? Right now you have to wait for the queue to finish before a rebalance attempt is made on a certain channel again. Usually I have between 15 and 22 channels being attempted by the scheduler, so it takes more than 30 minutes for a certain channel to get rebalanced a second time.

Or maybe have the option to run rebalances concurrently?

My problem is: even if I have a channel that rebalances easily, it can't, because of the very large and time-consuming backlog of other channels that don't rebalance very well. I disabled a bunch of those difficult-to-rebalance channels on the scheduler, and I'm considering bringing back my cronjobs, but I would prefer not to ;-)

Auto Pilot : Take all channels for a peer into account while enable/disable AR

The current Autopilot works on individual channels, so with multiple channels to one peer, one channel may be enabled for AR while another is disabled. While rebalancing, there is no control over which channel is used when there are multiple channels with a peer, since only the pubkey is used for the last hop and the peer's system decides which channel to use.

Autopilot should use the combined capacity/inbound/outbound across a peer's channels to enable/disable AR.

RFE: Clickable pub keys

Having pubkeys resolve to a link wherever they show up would help with workflow; for example, on suggested_opens/ it would be great to click through to the node's amboss or 1ml page.

[FEATURE] Add option to run in isolated environment without `lnd.conf`

All components of my lightning stack are running in separate VMs (actually LXC containers).

AFAICT, lndg needs RPC info from lnd.conf and admin.macaroon.

Please consider an lndg.conf with just RPC credentials and the name of an arbitrary macaroon (for example, I may wish to use the readonly caveat)

Thank you!

Manual installation error : jobs.py : OverflowError: Python int too large to convert to SQLite INTEGER

Following : https://github.com/cryptosharks131/lndg#step-1---install-lndg

I get an error at step 8, Generate some initial data for your dashboard: .venv/bin/python jobs.py

Tried to delete the db and redo the initialization.
Tried to remove lndg folder and restart from step 1.

user@hostname:~$ git clone https://github.com/cryptosharks131/lndg.git
Cloning into 'lndg'...
remote: Enumerating objects: 2108, done.
remote: Counting objects: 100% (2085/2085), done.
remote: Compressing objects: 100% (878/878), done.
remote: Total 2108 (delta 1469), reused 1704 (delta 1149), pack-reused 23
Receiving objects: 100% (2108/2108), 686.20 KiB | 2.48 MiB/s, done.
Resolving deltas: 100% (1469/1469), done.
user@hostname:~$ cd lndg

user@hostname:~/lndg$ virtualenv -p python3 .venv
created virtual environment CPython3.9.2.final.0-64 in 216ms
  creator CPython3Posix(dest=/home/user/lndg/.venv, clear=False, no_vcs_ignore=False, global=False)
  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/user/.local/share/virtualenv)
    added seed packages: pip==20.3.4, pkg_resources==0.0.0, setuptools==44.1.1, wheel==0.34.2
  activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator

user@hostname:~/lndg$ .venv/bin/pip install -r requirements.txt
Collecting Django
  Using cached Django-4.0.2-py3-none-any.whl (8.0 MB)
Collecting djangorestframework
  Using cached djangorestframework-3.13.1-py3-none-any.whl (958 kB)
Collecting django-qr-code
  Using cached django_qr_code-3.0.0-py3-none-any.whl (28 kB)
Collecting grpcio
  Using cached grpcio-1.43.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.1 MB)
Collecting protobuf
  Using cached protobuf-3.19.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
Collecting pytz
  Using cached pytz-2021.3-py2.py3-none-any.whl (503 kB)
Collecting pandas
  Using cached pandas-1.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.7 MB)
Collecting sqlparse>=0.2.2
  Using cached sqlparse-0.4.2-py3-none-any.whl (42 kB)
Collecting asgiref<4,>=3.4.1
  Using cached asgiref-3.5.0-py3-none-any.whl (22 kB)
Collecting segno
  Using cached segno-1.4.1-py2.py3-none-any.whl (82 kB)
Collecting six>=1.5.2
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting numpy>=1.18.5
  Using cached numpy-1.22.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
Collecting python-dateutil>=2.8.1
  Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Installing collected packages: sqlparse, six, asgiref, segno, pytz, python-dateutil, numpy, Django, protobuf, pandas, grpcio, djangorestframework, django-qr-code
Successfully installed Django-4.0.2 asgiref-3.5.0 django-qr-code-3.0.0 djangorestframework-3.13.1 grpcio-1.43.0 numpy-1.22.2 pandas-1.4.0 protobuf-3.19.4 python-dateutil-2.8.2 pytz-2021.3 segno-1.4.1 six-1.16.0 sqlparse-0.4.2

user@hostname:~/lndg$ .venv/bin/python initialize.py
Setting up initial user...
Superuser created successfully.
FIRST TIME LOGIN PASSWORD:

user@hostname:~/lndg$ .venv/bin/python jobs.py
Traceback (most recent call last):
  File "/home/user/lndg/jobs.py", line 37, in update_payments
    PaymentHops(payment_hash=new_payment, attempt_id=attempt.attempt_id, step=hop_count, chan_id=hop.chan_id, alias=alias, chan_capacity=hop.chan_capacity, node_pubkey=hop.pub_key, amt=round(hop.amt_to_forward_msat/1000, 3), fee=round(fee, 3), cost_to=round(cost_to, 3)).save()
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 743, in save
    self.save_base(using=using, force_insert=force_insert,
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 780, in save_base
    updated = self._save_table(
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 885, in _save_table
    results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 923, in _do_insert
    return manager._insert(
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 1301, in _insert
    return query.get_compiler(using=using).execute_sql(returning_fields)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1441, in execute_sql
    cursor.execute(sql, params)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 85, in _execute
    return self.cursor.execute(sql, params)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 416, in execute
    return Database.Cursor.execute(self, query, params)
OverflowError: Python int too large to convert to SQLite INTEGER

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/lndg/jobs.py", line 248, in <module>
    main()
  File "/home/user/lndg/jobs.py", line 241, in main
    update_payments(stub)
  File "/home/user/lndg/jobs.py", line 74, in update_payments
    PaymentHops(payment_hash=db_payment, attempt_id=attempt.attempt_id, step=hop_count, chan_id=hop.chan_id, alias=alias, chan_capacity=hop.chan_capacity, node_pubkey=hop.pub_key, amt=round(hop.amt_to_forward_msat/1000, 3), fee=round(fee, 3), cost_to=round(cost_to, 3)).save()
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 743, in save
    self.save_base(using=using, force_insert=force_insert,
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 780, in save_base
    updated = self._save_table(
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 885, in _save_table
    results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/base.py", line 923, in _do_insert
    return manager._insert(
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/query.py", line 1301, in _insert
    return query.get_compiler(using=using).execute_sql(returning_fields)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1441, in execute_sql
    cursor.execute(sql, params)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/backends/utils.py", line 85, in _execute
    return self.cursor.execute(sql, params)
  File "/home/user/lndg/.venv/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 416, in execute
    return Database.Cursor.execute(self, query, params)
OverflowError: Python int too large to convert to SQLite INTEGER
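For context on the error itself: SQLite INTEGERs are signed 64-bit, while LND identifiers such as chan_id are unsigned 64-bit, so any value at or above 2**63 overflows on insert. This is presumably what the traceback above is hitting. A minimal reproduction, with one possible workaround (storing the value as text) sketched alongside:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (chan_id INTEGER)")

# SQLite INTEGERs are signed 64-bit, so anything >= 2**63 overflows.
too_big = 2**64 - 1
try:
    con.execute("INSERT INTO t VALUES (?)", (too_big,))
except OverflowError as e:
    print(e)  # Python int too large to convert to SQLite INTEGER

# Storing the value as text sidesteps the limit (one possible fix,
# not necessarily the one LNDg adopted).
con.execute("CREATE TABLE t2 (chan_id TEXT)")
con.execute("INSERT INTO t2 VALUES (?)", (str(too_big),))
```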

Add option to choose `--close_address` when opening channel

An additional, optional field could be added to the open form to supply the closing address the channel funds should be sent to. It would be validated on submit to verify it is a well-formed address, and then attached to the open command.
Even better would be interaction with bos, to also set the close address for channel opens that the node receives from other nodes.

Rebalancer : update_channels(stub, rebalance.last_hop_pubkey, successful_out) does not take MPP into account

A few issues in update_channels after a successful rebalance in rebalancer.py:

  • successful_out = payment_response.htlcs[0].route.hops[0].pub_key is set to pub_key instead of chan_id. Per the gRPC spec, pub_key is optional, and in any case we miss the actual channel ID used in the hop.
  • It also only passes the first HTLC attempt; if the rebalance was routed over MPP, it misses the other attempts.
  • Is update_channels needed at all, since jobs.py would be updating the channel balances in parallel anyway?
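A sketch of how all settled attempts could be collected instead of just the first. This assumes the lnrpc Payment/HTLCAttempt shape (where HTLCAttempt status SUCCEEDED == 1) and is not the actual LNDg code:

```python
HTLC_SUCCEEDED = 1  # lnrpc HTLCAttempt.HTLCStatus: IN_FLIGHT=0, SUCCEEDED=1, FAILED=2

def successful_out_channels(payment_response):
    """Collect the outgoing chan_id of every settled HTLC attempt,
    so MPP rebalances split across channels are fully accounted for.

    The first hop of each attempt's route is the outgoing channel;
    chan_id is always present there, unlike the optional pub_key field.
    """
    chan_ids = []
    for attempt in payment_response.htlcs:
        if attempt.status == HTLC_SUCCEEDED:
            chan_ids.append(attempt.route.hops[0].chan_id)
    return chan_ids
```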

Possibility to exclude channels for rebalance

I would like some one-way channels to be excluded from being used as an outgoing channel for rebalances. This is possible with BOS; could it be added as an option?

Edit: Found it already :) I should change the oTarget%, apologies!

Install directory != home

systemd.sh assumes the repository directory is directly in $HOME_DIR. Since I had one more directory in between, it failed to correctly install the services.
Could you mention that requirement in step 1 of the manual install section?

IndexError at Step 8 of install process: Generate some initial data for your dashboard

Steps 1-7 work fine. When running the command for step 8, the following error occurs:

admin@raspberrypi:~/lndg $ .venv/bin/python jobs.py
Traceback (most recent call last):
  File "jobs.py", line 220, in <module>
    main()
  File "jobs.py", line 215, in main
    update_invoices(stub)
  File "jobs.py", line 91, in update_invoices
    alias = Channels.objects.filter(chan_id=invoice.htlcs[0].chan_id)[0].alias
  File "/home/admin/lndg/.venv/lib/python3.7/site-packages/django/db/models/query.py", line 318, in __getitem__
    return qs._result_cache[0]
IndexError: list index out of range
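A defensive variant of the failing lookup, sketched against a Django-style manager (not the actual LNDg fix), would avoid indexing an empty queryset when an invoice arrived over a channel the database has not recorded yet:

```python
def channel_alias(Channels, chan_id):
    """Look up an alias without assuming the channel row exists.

    The traceback above comes from `filter(...)[0]` on an empty
    queryset; using .first() and falling back to an empty alias
    avoids the IndexError. `Channels` here stands in for the model
    class with a Django-style .objects manager.
    """
    match = Channels.objects.filter(chan_id=chan_id).first()
    return match.alias if match is not None else ""
```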

AR : private channels should be selected for AR

Private channels are excluded from the outbound channel list. This unnecessarily limits the available channels for AR. A private channel can be used for outbound if the peer has other public channels available; if not, LND would not use it anyway because there is no path. Therefore there is no need to exclude them from the route.

Similarly for incoming, only last_hop_pubkey is used, not a specific channel. If a peer has both a public and a private channel, the peer's node will choose which channel to route through. We would need to take care while updating balances on the channels (done in general by jobs.py, and in particular taken up by #98 as a fix).

Max Cost % per Channel and AR-MaxCost%

I changed AR-MaxCost% in the Advanced Settings (Update Local Settings), but none of the settings in Max Cost % per Channel changed.
I think this field is obsolete.

I could change it for all channels using "Update All Channel AR max Cost %s" (tooltip text), but it was hard to find because there is a bunch of empty fields without names. Why not label them on the left with "Update for all Channels:"?

Data Discrepancy? - Handling MPP

I have come across what appears to be a curious anomaly between the data retrieved from the payments API and the data retrieved from the paymenthops API. I have found only this one instance, for a particular payment_hash.

In the payments API, the total value of the payment is shown as 29,411 sats for a rebalance from MyChannel-A. BUT the paymenthops data shows only 14,705 sats hopping from MyChannel-A to MyChannel-B for this payment_hash. There is a 14,706 sats discrepancy!

I have poked around in my lnd data and found there is another series of hops associated with this payment_hash: 14,706 sats from MyChannel-C to NotMyChannel-X to MyChannel-B, which completes the total. Can you speak to why this series of hops does not appear in the LNDg paymenthops data?

When I look at this payment_hash in the lnd payments.json, it shows the transaction failed many attempts to pay 29,411 sats before the payment was split into one 14,705 sats part and another 14,706 sats part, processed from separate outgoing channels to the one incoming channel.

As I have continued to poke around, I am seeing more discrepancies like this between payments and paymenthops. It usually doesn't stick out because the multiple parts for a single payment_hash have previously been generated from the same channel rather than separate channels.

Logging steps to migrate from sqlite to postgres

Implemented this on two different nodes - follow at your own risk

Just logging the steps here to carefully record what needs to be done to migrate LNDg from sqlite to postgres. This is for advanced users; I just want to kick off the convo here with ngu to ensure all grounds are covered properly.

Preparation

  • check that your locale is set to LANG=en_US.UTF-8

Postgres

  • install postgres with sudo apt install postgresql
  • log in via sudo su - postgres and check the locale again to ensure your global settings are not tricking you, for all you non-US users out there
  • log in to postgres with psql and create the database:
    create user lndg;
    create database lndg LOCALE 'en_US.UTF-8' TEMPLATE template0;
    alter role lndg with password '<psql_password>';
    grant all privileges on database lndg to lndg;
    alter database lndg owner to lndg;
  • exit with \q
  • exit from postgres user, back to your main-user

Additional Requirements

  • cd lndg
  • sudo .venv/bin/pip install --upgrade setuptools
  • sudo apt install libpq-dev
  • .venv/bin/pip install psycopg2

Execution

  • Stop lndg systemctl services
    sudo systemctl stop htlc-stream-lndg.service && sudo systemctl stop rebalancer-lndg.timer && sudo systemctl stop jobs-lndg.timer
  • adjust the database connection: cd lndg and nano lndg/settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'lndg',
        'USER': 'lndg',
        'PASSWORD': '<psql_password>',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
  • Dump your existing data into a JSON file
    cd lndg && .venv/bin/python manage.py dumpdata gui.channels gui.peers gui.rebalancer gui.localsettings gui.autopilot gui.autofees gui.failedhtlcs > dump.json
  • .venv/bin/python manage.py migrate

Remove some noise

.venv/bin/python manage.py shell
from django.contrib.contenttypes.models import ContentType
ContentType.objects.all().delete()
exit()

Import

  • import dump data into postgres: .venv/bin/python manage.py loaddata dump.json
  • recreate your login user with data from lndg/data/lndg-admin.txt: .venv/bin/python manage.py createsuperuser
  • user: lndg-admin
  • pwd: cat lndg/data/lndg-admin.txt

Tiny adjustments

  • nano jobs.py
  • A temporary change is required for the payment/invoice messages on lines 52, 93 and 112 of jobs.py (find/replace in nano via ALT-R): replace
    .decode('utf-8', errors='ignore')
    with
    .decode('utf-8', errors='ignore').replace("\x00", "\uFFFD")
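The reason for this change, sketched: Postgres TEXT columns reject NUL bytes that SQLite stored without complaint, so the decoded messages must be sanitized before insert. A small demonstration of the before/after:

```python
# A keysend-style message containing a NUL byte, as it might sit in sqlite.
raw = b"keysend \x00 message"

# What jobs.py originally did -- the NUL byte survives decoding:
msg = raw.decode('utf-8', errors='ignore')
assert '\x00' in msg

# The temporary change above -- swap NUL for the Unicode replacement
# character so Postgres will accept the string:
safe = raw.decode('utf-8', errors='ignore').replace("\x00", "\uFFFD")
assert '\x00' not in safe
print(safe)
```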

Put it together

  • Restart uwsgi via sudo systemctl restart uwsgi
  • Start all lndg-background services: sudo systemctl start htlc-stream-lndg.service && sudo systemctl start rebalancer-lndg.timer && sudo systemctl start jobs-lndg.timer

Verify & Conclude

Login to your GUI and verify all is working

  • the payments and invoices tables may need some time to repopulate; those haven't been migrated
  • in my case, I had 60,000 entries, so it ran backwards through 8 months and took 1.5 hrs. If you have a slow machine, this is going to take a long time (a couple of hours, at least)

Auto Rebalancer not acting

Running Version 1.0.3. on umbrel.

Followed instructions in README.md.

Initially, some unsuccessful rebalancing attempts were made; however, after one iteration the rebalancer gave up permanently, i.e. it no longer attempts any rebalancing.

Logs don't show anything of significance.

Appreciate any advice on how to approach the problem. Will be happy to provide more information, if obtainable.
