clboss's Issues

`Preinvestigator` should evaluate using the `ChannelCreator::Dowser` as well

If the patron-proposal link has low capacity at preinvestigation time, it seems likely that it will still be low, and below the minimum, by the time we decide to channel, which would cause the proposal to be rejected by the ChannelCreator anyway.

It seems useful to do the Dowser check at preinvestigation so that it does not even enter the investigation pool.

We probably need to move the Dowser out into Boss/Mod/ and figure out a good interface for it instead of the internal interface used by ChannelCreator.

Fee Modder by Expenditures

We keep track of FundsMover expenditures anyway, but only use it to limit JitRebalancer to prevent targeted attacks if the attacker knows we use JIT rebalancing. An idea from @hosiawak : #55 (comment)

Here is a possible implementation:

  • Compute out_expenditures divided by out_earnings for each channel as a double. If this data is nonexistent, or out_earnings is zero, saturate to some maximum (maybe 100.0?).
  • Give that ratio as the multiplier returned by this FeeModder

For reference, the FeeModder system allows modules to provide a double that is multiplied to a computed basis fee. This computed basis fee is the median of fees that other channels to that node have.
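
To make the bullets above concrete, here is a rough sketch of the multiplier computation; the function name and the saturation value are illustrative assumptions, not the actual FeeModder interface:

#include <algorithm>

/* Illustrative only: the multiplier this FeeModder would return for one
 * channel, saturating when we have no earnings data.
 */
double expenditure_fee_multiplier( double out_expenditures_msat
                                 , double out_earnings_msat
                                 ) {
    auto const maximum = 100.0; /* suggested saturation value */
    if (out_earnings_msat <= 0.0)
        return maximum;
    return std::min(out_expenditures_msat / out_earnings_msat, maximum);
}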

While this may be workable, we should note a feedback risk if CLBOSS uptake is high: since the basis we use is the fee that other channels charge, if the other nodes are also using CLBOSS this can lead to absurd situations where CLBOSS-managed nodes start bumping up channel feerates, their competitors see the median going higher and also bump up their channel feerates, and so on (or the same spiral in the reverse direction). So I'm wondering if the median fee should instead be used only as an initial fee when we have no information yet, with something like #11 adjusting this initial feerate over time as we try to discover the optimum price based on our position in the network.

plugin-clboss: Killing plugin: JSON-RPC response "id"-field is not a u64

My clboss got killed after running for a few days. The message in the log was:

plugin-clboss: Killing plugin: JSON-RPC response "id"-field is not a u64

Unfortunately I didn't have DEBUG logs enabled for clboss.

The log message prior to the error was:

lightningd: Sending 279628174msat over 7 hops to deliver 278212722msat

Dig error as command missing when installed and in PATH for user on Debian

Followup from #52

I'm running clboss master at 18d1a09 on Debian GNU/Linux bullseye/sid and see the following in the logs at startup:

... UNUSUAL plugin-clboss: DnsSeed: Cannot seed by DNS: `dig` command is not installed, it is usually available in a `dnsutils` package for your OS.

I run lightningd as user bitcoin who has dig installed and available:

bitcoin in ~/.lightning at will-nuc ➜ echo $SHELL
/usr/bin/fish

bitcoin in ~/.lightning at will-nuc ➜ whoami
bitcoin

bitcoin in ~/.lightning at will-nuc ➜ which dig
/usr/bin/dig

bitcoin in ~/.lightning at will-nuc ➜ dig -v; echo $status
DiG 9.16.8-Debian
0

Everything seems to be working, but it's an odd error message that I can't seem to get rid of nonetheless...

Channel feerates

CLBOSS has prepared internal hooks for how we can manipulate channel fees, but we need to implement modules that actually change the channel fees.

Currently, CLBOSS just sets the feerates to the weighted median of the fees charged by all the peers of the peer.

  • We use median so that extreme settings do not affect us, as the peer might manipulate us by making up artificial peers and creating some channels with those.
  • We use a weighted median, so that the signal we get is weighted by the size of the channels the peers of the peer have.

Nevertheless, @darosior claims the below on IRC:

<zmnscpxj__> the "median of peers of peer" is a reasonable starting point IMO
<darosior> FWIW i don't think it is :)
<zmnscpxj__> how so
<zmnscpxj__> ?
<darosior> From my own experience setting really high fees when you are a consequent router is the more reasonnable
<darosior> Like way more than all your peers
<zmnscpxj__> "consequent router" means what?
<darosior> I mean i've been testing different economic strategies on a ~70-100 channels node for months
<zmnscpxj__> okay, so what does "consequent router" mean? larger than your competitors?
<darosior> Ah, i meant like big enough to do some stats..
<zmnscpxj__> hmmm
<zmnscpxj__> you mean you look at channels with high fee/second and bump their fees up? Is that it?
<zmnscpxj__> or...?
<darosior> I mean i've been increasing the base and ppm fees for a while, nothing based on my peers
<darosior> Actually if i did, my (required atm..) rebalancings would cost me way too much  
<darosior> Hence my plugin
<zmnscpxj__> ah
<zmnscpxj__> but it seems to me that this approximates as well what I described: if you have been getting a lot of fees from outgoing forwards on a channel, that channel is also likely to be depleted and your plugin would bump up its fees by a good amount
* ghost43 has quit (Remote host closed the connection)
<darosior> Yeah, and exponentially discount its usage in the other way
<darosior> Which, i expect would reduce my rebalancing cost

To me, this suggests that if you are somehow "larger" than the peers of the peer, you should probably bump up your feerates beyond the baseline, and if you are "smaller", then you should lower your feerates. The "largeness" value can probably be based on the capacities of all public channels: just sample the capacities of every peer of the peer (including yourself) and see where you fall when these are sorted, getting a 0->1.0 position (0 means you are at the lowest end, 1.0 at the highest end, of the capacities of the peers of the peer). How to translate that number into a feerate multiplier, though, I am uncertain.
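
Computing the position itself is straightforward; a minimal sketch, assuming we already have the public capacities of every peer of the peer (names are made up, the data would come from listnodes/listchannels):

#include <algorithm>
#include <cstdint>
#include <vector>

/* Return where our capacity falls among the peers of the peer,
 * from 0.0 (smallest) to 1.0 (largest).
 */
double capacity_position( std::uint64_t own_capacity
                        , std::vector<std::uint64_t> capacities /* includes our own */
                        ) {
    if (capacities.size() <= 1)
        return 0.5; /* no competitors to compare against */
    std::sort(capacities.begin(), capacities.end());
    auto it = std::lower_bound(capacities.begin(), capacities.end(), own_capacity);
    auto rank = double(it - capacities.begin());
    return rank / double(capacities.size() - 1);
}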

Basically, the fee-setting module lets other modules register a function that is given the node ID of the peer whose channel fees we are setting. That function then provides a single double, which is multiplied with the basis fee that we derived from the aforementioned weighted median. A function that does not want to manipulate the channel fee can provide 1.0, which is the identity object in the multiplication monoid.

Channel planner should factor in size of a channel output

2020-11-11T15:12:56.639Z DEBUG   plugin-clboss: Rpc in: multifundchannel {\"destinations\": [{\"id\": \"028f12261ce5806a83855fe8603b5cbf0b005f1393296887c427ae17dfed30cb53\", \"amount\": \"2582864000msat\", \"announce\": true}, {\"id\": \"025bddd8711e61965272811c0330ae0e2d106feac9792ea88bef53f55c51757ca9\", \"amount\": \"3312761000msat\", \"announce\": true}, {\"id\": \"03a178c7d5d1981843052f30ad9249c0f8681963c4c331317be6da7d9810339b21\", \"amount\": \"2046279000msat\", \"announce\": true}, {\"id\": \"037332b631d8974e36f3d338644564cd2a28872d88125fac80f1340e6faf704b9f\", \"amount\": \"2847416000msat\", \"announce\": true}], \"feerate\": \"normal\", \"minchannels\": 1} => error {\n\t\"code\" : 301,\n\t\"message\" : \"Could not afford 10789320sat using all 5 available UTXOs: 6143172sat short\"\n}

We should factor in the expected size of each channel UTXO plus the current latest feerate during planning, and probably override the feerate as well with the feerate we used during planning.

On the other hand it could be due to multiple overlapping planning attempts due to being slow with the channel finder algorithms. Hmm.

Create an iterator for `Jsmn::Object` arrays

Currently, we iterate over JSON arrays by using a standard for-i-from-0-to-length loop and index the JSON array with [i] syntax. Unfortunately, due to how JSMN structures work, the [i] lookup is O(n), so iteration is O(n^2).

We can instead use an iterator, which only requires an operator++ operation, which can be O(1) with the JSMN structures. This probably requires that array Jsmn::Object wrappers cache their end iterator.
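
A generic illustration of the pattern (not the actual Jsmn code; the real iterator would walk the flat JSMN token array instead of a std::vector, and the wrapper would cache its end iterator once computed):

#include <utility>
#include <vector>

template<typename Token>
class TokenArray {
private:
    std::vector<Token> tokens;
public:
    class iterator {
    private:
        Token const* p;
    public:
        explicit iterator(Token const* p_) : p(p_) {}
        iterator& operator++() { ++p; return *this; }      /* O(1) advance */
        Token const& operator*() const { return *p; }
        bool operator!=(iterator const& o) const { return p != o.p; }
    };
    explicit TokenArray(std::vector<Token> t) : tokens(std::move(t)) {}
    iterator begin() const { return iterator(tokens.data()); }
    iterator end() const { return iterator(tokens.data() + tokens.size()); }
};

With something like this, a range-for over the array costs O(n) total instead of the O(n^2) that repeated [i] lookups cost.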

clboss-status reporting internet offline

On my fresh FreeBSD install, clboss-status constantly reports the internet connection as offline.

I checked Boss/Mod/InternetConnectionMonitor.cpp and it seems to be checking some popular servers on port 443 periodically. I can connect to these servers manually.

Here's my lightningd-bitcoin.conf:

alias=[REDACTED]
lightning-dir=/var/db/c-lightning
always-use-proxy=true
large-channels
funding-confirms=1
announce-addr=[redacted].onion:1234
bind-addr=127.0.0.1:5678
bitcoin-rpcconnect=127.0.0.1
bitcoin-rpcpassword=[redacted]
bitcoin-rpcport=8332
bitcoin-rpcuser=[redacted]
log-file=lightningd.log
network=bitcoin
proxy=127.0.0.1:9050
plugin=clboss

Unfortunately I don't know C++ and libev well enough to investigate this issue further, but if you could provide a small snippet of code I can run on this box, that might help. Thank you.
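
Not the InternetConnectionMonitor code itself, but a minimal standalone probe you could compile and run on the box (c++ -o conncheck conncheck.cpp); it does a plain TCP connect() to port 443 of a well-known server, which is roughly the kind of check the monitor performs. One caveat: with always-use-proxy=true, CLBOSS presumably makes its probes through the Tor proxy, so a direct connect succeeding here would point at the proxy path rather than basic reachability:

#include <cerrno>
#include <cstring>
#include <iostream>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    char const* host = "www.google.com"; /* substitute any server listed in InternetConnectionMonitor.cpp */
    addrinfo hints{};
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    addrinfo* res = nullptr;
    if (getaddrinfo(host, "443", &hints, &res) != 0) {
        std::cerr << "DNS resolution failed" << std::endl;
        return 1;
    }
    int rv = 1;
    for (auto p = res; p; p = p->ai_next) {
        int fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, p->ai_addr, p->ai_addrlen) == 0) {
            std::cout << "connect to " << host << ":443 succeeded" << std::endl;
            close(fd);
            rv = 0;
            break;
        }
        std::cerr << "connect failed: " << std::strerror(errno) << std::endl;
        close(fd);
    }
    freeaddrinfo(res);
    return rv;
}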

Crash/exit

Hello @ZmnSCPxj. Guess you're busy with other things but if you find a minute I noticed a second crash/exit today, running master on FreeBSD.

2021-04-03T14:10:34.366Z DEBUG   plugin-clboss: FundsMover: successfully moved 1465343006msat from XXX to YYY.
2021-04-03T14:10:34.366Z INFO    plugin-clboss: Killing plugin: exited during normal operation

Both crashes/exits happened right after FundsMover. If I can debug clboss and produce a backtrace somehow please let me know.

basicsecure.c compilation errors (FreeBSD)

/bin/sh ./libtool  --tag=CC   --mode=compile cc -DHAVE_CONFIG_H -I.     -g -O2 -std=c11 -c -o basicsecure.lo basicsecure.c
libtool: compile:  cc -DHAVE_CONFIG_H -I. -g -O2 -std=c11 -c basicsecure.c  -fPIC -DPIC -o .libs/basicsecure.o
basicsecure.c:198:2: error: use of undeclared identifier 'errno'
        errno = 0;
        ^
basicsecure.c:208:22: error: use of undeclared identifier 'errno'
        } while (fd < 0 && (errno == EINTR));
                            ^
basicsecure.c:208:31: error: use of undeclared identifier 'EINTR'
        } while (fd < 0 && (errno == EINTR));
                                     ^
basicsecure.c:216:24: error: use of undeclared identifier 'errno'
                } while (res < 0 && (errno == EINTR));
                                     ^
basicsecure.c:216:33: error: use of undeclared identifier 'EINTR'
                } while (res < 0 && (errno == EINTR));
                                              ^
basicsecure.c:218:22: error: use of undeclared identifier 'errno'
                        int saved_errno = errno;
                                          ^
basicsecure.c:221:26: error: use of undeclared identifier 'errno'
                        } while (cres < 0 && (errno == EINTR));
                                              ^
basicsecure.c:221:35: error: use of undeclared identifier 'EINTR'
                        } while (cres < 0 && (errno == EINTR));
                                                       ^
basicsecure.c:222:4: error: use of undeclared identifier 'errno'
                        errno = saved_errno;
                        ^
basicsecure.c:232:24: error: use of undeclared identifier 'errno'
        } while (cres < 0 && (errno == EINTR));
                              ^
basicsecure.c:232:33: error: use of undeclared identifier 'EINTR'
        } while (cres < 0 && (errno == EINTR));
                                       ^
basicsecure.c:233:2: error: use of undeclared identifier 'errno'
        errno = 0;
        ^
12 errors generated.
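
All of these point at errno and EINTR being undeclared in that translation unit, which suggests the header that declares them is being pulled in indirectly on Linux but not on FreeBSD. Presumably something like the following near the top of basicsecure.c would fix it, though I have not verified on FreeBSD:

#include <errno.h> /* declares errno and EINTR */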

clboss process hanging around

Stopping lightningd doesn't kill the clboss process; after a few rounds of restarts I get multiple clboss processes and a load average of 16:

# service lightningd stop
Stopping lightningd.
pay: Lost connection to the RPC socket.
Waiting for PIDS: 44953, 44953.
# ps xavf | grep boss
44960 RJ     1:45.50 127 105     109   88408  45884   - 11548 100.2  0.3 clboss
45084 R+J    0:00.00 127   0       0    6740   2280   -    96   0.0  0.0 grep boss

Use iterators for `Jsmn::Object` arrays

#15 has the iterators implemented, but as of this writing only the channel finders have been updated to use iterators. We should further audit the code for all cases where we have a for loop that is not an iterator-based loop and use iterator syntax for those. For now I think the channel finders are the main ones that have problems with processing speed.

Possible to run on testnet?

Locking 0.01 bitcoin to properly test this software is quite a lot for me and I am debating whether to use this software or not.
However, if it were possible to deploy this on testnet first, I'd be much more comfortable deploying it on mainnet after some time...

I think the idea for this project is great and I am prepared to do the necessary changes to use this on testnet. I'd just need some pointers / instructions.

Thanks

Channel proposal investigation by improvement to shortest path to payment target

By @AutonomousOrganization here: #2 (comment)

Seems the most important information from user initiated pays is which node is the payment target. Most simple action to maximize successful payments to this node would be a big direct channel, but user may not desire network knowing payment target. However user would appreciate future pays to target having a higher likelihood of success. What about including a 'desire to be close to target' metric into the channel create and score bonus points for multiplying possible routes to targeted pay nodes?

As a basic algorithm, something like:

  • For each payment target:
    • getroute (with fuzzpercent=0) from us to payment target, measure length of route, store in table [payment_target] -> route_length.
  • For each channel candidate:
    • For each payment target:
      • getroute (with fuzzpercent=0) from candidate to payment target, measure length of route + 1 (or just set 1 if candidate == payment target).
      • if length+1 is less than the previous recorded route_length, add merit to the channel candidate.

This probably requires a rework of the channel candidate investigator.
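
A sketch of the scoring step, assuming the route lengths have already been measured with getroute as described (names and types are placeholders, not the actual investigator interface):

#include <cstddef>
#include <map>
#include <string>

typedef std::string NodeId; /* placeholder for Ln::NodeId */

std::size_t merit_for_candidate( NodeId const& candidate
                               , std::map<NodeId, std::size_t> const& our_route_length       /* us -> target */
                               , std::map<NodeId, std::size_t> const& candidate_route_length /* candidate -> target */
                               ) {
    auto merit = std::size_t(0);
    for (auto const& e : our_route_length) {
        auto const& target = e.first;
        auto it = candidate_route_length.find(target);
        if (it == candidate_route_length.end())
            continue;
        /* Length via the candidate is its route length plus the single
         * hop from us to the candidate (just 1 if candidate == target). */
        auto via_candidate = (candidate == target) ? std::size_t(1) : it->second + 1;
        if (via_candidate < e.second)
            ++merit; /* a channel to the candidate would shorten this path */
    }
    return merit;
}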

Possible to expose FundsMover via the RPC?

Is there a way to get FundsMover's actions via the RPC (either by identifying self-payments via the lightningd RPC, or by adding a clboss RPC that exposes this data)?

plugin-clboss: FundsMover: Moved 201239852msat from xxx, getting 200678652msat to yyy, costing us 561200msat.

I'm trying to calculate operational costs of running a node so that we can subtract the total fees from the collected fees and get an idea if we're operating at a loss or profit and how it changes over time. So far I've got channel opening/closing fees done. FundsMover seems to be the next step and Boltz swap fees next after that.

Sparse records of swaps and FundsMover pays

I started a new node with clboss a few days ago. I am trying to "audit" clboss and do double-entry accounting for all movement of funds on the node.

It was a bit difficult to find which pay resulted in new onchain funds in the event of a swap based just on the system logs. I noticed that a SwapManager record (that links address and payment_hash) in clboss's sqlite db gets wiped after a successful swap. I wonder if in the future this information might be preserved somewhere? Otherwise the process is to look at listpays and listfunds true and search for entries of approximately equal amount (differing by unknown fees). Looking in the system logs for SwapManager entries was confusing also due to disparate begin and end amounts for the same uuid:

plugin-clboss: SwapManager: Swap 7eebf286b148f88d7ecef301818b4ef7 started for 1923741188msat.
plugin-clboss: SwapManager: Swap 7eebf286b148f88d7ecef301818b4ef7 completed with 752127000msat onchain.

I haven't peeked at this portion of the code but I'm guessing "started for" is the target onchain amount, and "completed with" is the amount obtained in the swap, presumably meeting the target with whatever we already had onchain.

FundsMover seems to wipe pay records as well, not just from the clboss db but from c-lightning. I can understand wanting not to clutter pay records with automated activity, but I wonder if in the future this information might be saved also. Otherwise I have to grep system logs for FundsMover and parse entries like this:

plugin-clboss: FundsMover: Moved 187404944msat from 033b90ec27e6d8140f9d56910312815fc8d166d2bb93a2fae6d35c9e84032dbb62, getting 187081604msat to 0337a42a4384c331258f7fa53697f27dbc1596a689fe77923d31b8e7a6f5960929, costing us 323340msat.

Finally, in my personal accounting I find I am in agreement with utxo_amount from lightning-cli summary, but my own accounting exceeds avail_out by ~60k sats. (It's a whole number of sats, I don't know if this is a coincidence.) Activity accounted for into and out of channels was

  1. Opening channels
  2. Pays for swaps
  3. A single user-initiated test pay of 10 sats plus 2msat fee
  4. FundsMover fees parsed from logs
  5. Fee earnings from forwards

Every entry in listpays with status == "complete" and every entry in listtransactions and listfunds is accounted for. Is there some other source of fees or debit of funds I might be missing, other than the list above?

TL;DR
Can we in the future have a more detailed log of clboss activities that keeps, for example, address & payment hash of a swap in a single record and FundsMover activities that would have been in listpays? Also, any idea why my accounting expects a little more out liquidity in channels?

Optimize reading RPC responses

In #13 it seems ChannelFinderByPopularity takes a lot of CPU processing, which causes issues with responsiveness, since CLBOSS also hooks on rpc_command.

#15 would help reduce processing load after the listnodes command returns, but the time from listnodes out to listnodes in is also pretty big, and is probably an even tighter processing load (ChannelFinderByPopularity performs Ev::yield after listnodes returns, but Boss::Mod::Rpc does not explicitly yield while processing the listnodes command).

2020-11-05T05:26:49.503Z DEBUG   plugin-clboss: Rpc out: listnodes {}
...
2020-11-05T05:45:48.127Z DEBUG   plugin-clboss: Rpc in: listnodes {} => {\n\t\"nodes\" : [\n\t\t{\n\t\t\t\"nodeid\" : \"0326efa5d368844e2c9cf1e46891a5d40cc5dc9d2371cfa23968d1fdf61b64814d\",\n\t\t\t\"alias\" : \"0326efa5d368844e2c9c\",\n\t\t\t\"color\" : \"3399ff...

That is ~19 minutes. True, this is on a lightningd running under valgrind but nonetheless, we should probably optimize it better.

Currently we feed 256-byte chunks of data to the parser. We can increase the chunk size (4096 seems better overall, at least for Linux).

Or, we could do what C-Lightning builtin plugins already do, which is to keep putting bytes in a buffer until it reaches a \n\n (two newline chars) and only then send it to the parser. C-Lightning has a quirk where it emits two newline chars in succession at the end of each entire JSON datum. The JSMN parser is fast if and only if you give it an entire JSON datum; giving it a partial datum means it has to keep restarting the whole parse sequence from the start of the datum.
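
A rough sketch of that buffering approach (not the actual Boss::Mod::Rpc code): accumulate bytes and only hand complete \n\n-terminated datums to the parser, so JSMN parses each datum exactly once.

#include <cstddef>
#include <string>
#include <vector>

class DatumBuffer {
private:
    std::string pending;
public:
    /* Feed newly-read bytes; return any complete JSON datums found. */
    std::vector<std::string> feed(char const* data, std::size_t len) {
        std::vector<std::string> datums;
        pending.append(data, len);
        for (;;) {
            auto pos = pending.find("\n\n");
            if (pos == std::string::npos)
                break;
            datums.push_back(pending.substr(0, pos));
            pending.erase(0, pos + 2);
        }
        return datums;
    }
};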

Unfortunately this ties us to a particular quirk of C-Lightning, meaning that if people use wrappers and so on (e.g. run clboss on one system while running C-Lightning on another) the quirk also has to be replicated.

Perhaps an alternative that does not tie us to this quirk of C-Lightning would be to keep putting bytes in a buffer until the socket would block on read, and then send to the parser, and explicitly yield at that point.

Yet another possibility: http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod#WATCHER_PRIORITY_MODELS

Modify fee by price theory

Discussion with @whitslack here: https://old.reddit.com/r/Bitcoin/comments/jkibch/announcing_clboss_automated_clightning_node/gakg2e6/

In price theory:

  • Given a particular demand profile for a product, there exists an optimal price where a supplier of the product maximizes its earnings.
  • If the price is higher than the optimum price, fewer customers will buy, reducing the number of items sold.
  • If the price is lower than the optimum price, more customers will buy, but the earnings per customer are lower, reducing total earnings.

One thing we could do would be for a CLBOSS module to speculatively increase or decrease (at random) its fee modifier. Then at each forward, it records how much it earns, and whether it is currently lower or higher than the base. At some point, it changes what its "default" fee modifier is, depending on the recorded data.
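
A very rough sketch of that explore-and-compare idea; all names are made up, and a real module would persist its state in the CLBOSS database, use much longer measurement windows, and normalize earnings by the time spent at each setting:

#include <random>

class PriceExplorer {
private:
    double base_multiplier;  /* the current "default" fee modifier */
    double trial_multiplier; /* the perturbed modifier being tried */
    double base_earnings;    /* msat earned during the base window */
    double trial_earnings;   /* msat earned during the trial window */
    std::mt19937 rng;
public:
    PriceExplorer()
        : base_multiplier(1.0), trial_multiplier(1.0)
        , base_earnings(0.0), trial_earnings(0.0)
        , rng(std::random_device{}()) { }

    /* Start a new trial window with a randomly perturbed modifier. */
    void start_trial() {
        std::uniform_real_distribution<double> dist(0.8, 1.25);
        trial_multiplier = base_multiplier * dist(rng);
        trial_earnings = 0.0;
    }
    /* Record the fee earned by a forward in the current window. */
    void record_forward(double fee_msat, bool in_trial) {
        (in_trial ? trial_earnings : base_earnings) += fee_msat;
    }
    /* End the trial: adopt the perturbed modifier if it earned more
     * (this assumes the two windows are of comparable length). */
    void end_trial() {
        if (trial_earnings > base_earnings) {
            base_multiplier = trial_multiplier;
            base_earnings = trial_earnings;
        }
    }
    double multiplier(bool in_trial) const {
        return in_trial ? trial_multiplier : base_multiplier;
    }
};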

Implement Lightning Loop support

Getting incoming capacity is dangerously centralized around Boltz exchange. In principle somebody else could come around and implement an alternative (hopefully using the same protocol so it is easy on CLBOSS), but there are no alternative Boltz-protocol exchanges that I could find.

On the other hand, there already is a viable alternative to Boltz that also (apparently) is non-custodial across the entire swap: Lightning Loop.

Loop does have the drawback that the client-server protocol is not documented; there are only the .proto files in the Loop source code.

Seems like bug kills channel creation then hangs boss

NeedsConnectSolicitor: Connection solicited to @
Boss::Mod::ChannelCreator::Carpenter::construct(std::map<Ln::NodeId, Ln::Amount>): Assertion `!plan.empty()' failed.
Killing plugin: Plugin exited before completing handshake.
// then repeating for many nodes
-channeld-chan#32: Peer connection lost
Peer transient failure in CHANNELD_NORMAL: channeld: Owning subdaemon channeld died

I left it running and believe that all balancing & channel creation stalled (it had been working great). I considered deleting that assert and then seeing if it worked, but then I decided yolbo.

Delayed clightning startup when offline

When run on a system without WAN access, clboss adds about 18 seconds to service startup time (defined as the duration from starting the clightning process to the start of the RPC server). This delay is due to failed DNS resolution.

clightning service output (gist)

clightning config file

network=bitcoin
bitcoin-datadir=/var/lib/bitcoind
proxy=127.0.0.1:9050
always-use-proxy=true
bind-addr=127.0.0.1:9735
bitcoin-rpcconnect=127.0.0.1
bitcoin-rpcport=8332
bitcoin-rpcuser=public
rpc-file-mode=0660
plugin=/nix/store/zd4y7s4vi43sjvmvyp6sv4zhc31rkk9l-clboss-0.10/bin/clboss
clboss-min-onchain=30000

disable-dns
bitcoin-rpcpassword=1s36RENe8gF7cYD8Zlm9

When dig is not in PATH, there is no delay: clightning service output (gist).

This issue appeared when I added clboss to our nix-bitcoin test suite, which runs bitcoin nodes including clightning in offline VMs.
It would be great if clboss could handle missing WAN at startup without delaying the main clightning process.

Proposal: low-ball the channel funding fee first, then CPFP later

Currently we just use the "normal" feerate for channel opens. This is usually fine since we tend to open channels when feerates are low, and improving #12 should make us more parsimonious about when we open channels.

However, in principle we should start with a channel funding fee that is just barely enough to get past the minimum relay fee, then just CPFP later if it takes too long to confirm. We can always CPFP since CLBOSS always tries to leave an amount onchain. With #29 this should be around the minimum onchain amount, which we can then use as a budget for CPFP later. We can even RBF the CPFP transaction.

Issues:

  • C-Lightning's support for RBF is mostly "theoretical". I am not convinced that the wallet can handle correctly the case where multiple RBFed transactions are passed into sendpsbt; my understanding of the code suggests that it might register multiple unconfirmed outputs, some of which will never exist due to the RBF.
  • We can learn the transaction ID from the multifundchannel return. However, by the time it returns, the transaction has already been broadcast. There is a race condition where (a) multifundchannel broadcasts the tx, then (b) lightningd shuts down before the command actually returns to CLBOSS, which is where we would save the tx in the CLBOSS database.
    • This can lead CLBOSS to forget about the funding tx, and then be unable to CPFP later if it turns out fees spike just after we broadcast.
    • We can "just" reimplement multifundchannel ourselves and save the funding tx to the CLBOSS database before broadcasting, and rebroadcast the funding tx at startup just in case, but that requires us to implement a PSBT parser in CLBOSS, which I would prefer not to do.

Rpc calls possibly closing peer connections

Possibly related to #13

I'm not sure if it's the same issue exactly, but I also notice some patches of failures on my node, which has a good quality internet connection. For example, here is an unredacted (!) snippet of some logs:

2020-11-06T14:24:26.640Z DEBUG   plugin-clboss: Rpc out: listpeers {}
2020-11-06T14:24:26.653Z DEBUG   plugin-clboss: Rpc in: listpeers {} => {\n\t\"peers\" : [\n\t\t{\n\t\t\t\"id\" : \"0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019\",\n\t\t\t\"connected\" : true,\n\t\t\t\"netaddr\" : [\n\t\t\t\t\"213.174.156.69:9...
2020-11-06T14:24:28.218Z DEBUG   02ad6fb8d693dc1e4569bcedefadf5f72a931ae027dc0f0c544b34c1c6f3b9a02b-connectd: Connected out, starting crypto
2020-11-06T14:24:28.337Z DEBUG   02ad6fb8d693dc1e4569bcedefadf5f72a931ae027dc0f0c544b34c1c6f3b9a02b-connectd: Connect OUT
2020-11-06T14:24:28.337Z DEBUG   02ad6fb8d693dc1e4569bcedefadf5f72a931ae027dc0f0c544b34c1c6f3b9a02b-connectd: peer_out WIRE_INIT
2020-11-06T14:24:28.443Z DEBUG   02ad6fb8d693dc1e4569bcedefadf5f72a931ae027dc0f0c544b34c1c6f3b9a02b-connectd: peer_in WIRE_INIT
2020-11-06T14:25:28.612Z INFO    033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-channeld-chan#469: Peer connection lost
2020-11-06T14:25:28.612Z INFO    033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-chan#469: Peer transient failure in CHANNELD_NORMAL: channeld: Owning subdaemon channeld died (62208)
2020-11-06T14:25:28.613Z DEBUG   plugin-clboss: Rpc out: listpeers {}
2020-11-06T14:25:28.627Z DEBUG   plugin-clboss: Rpc in: listpeers {} => {\n\t\"peers\" : [\n\t\t{\n\t\t\t\"id\" : \"0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019\",\n\t\t\t\"connected\" : true,\n\t\t\t\"netaddr\" : [\n\t\t\t\t\"213.174.156.69:9...
2020-11-06T14:25:29.842Z DEBUG   033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-connectd: Connected out, starting crypto
2020-11-06T14:25:29.949Z DEBUG   033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-connectd: Connect OUT
2020-11-06T14:25:29.949Z DEBUG   033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-connectd: peer_out WIRE_INIT
2020-11-06T14:25:30.047Z DEBUG   033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-connectd: peer_in WIRE_INIT
2020-11-06T14:26:28.424Z INFO    035f5236d7e6c6d16107c1f86e4514e6ccdd6b2c13c2abc1d7a83cd26ecb4c1d0e-channeld-chan#74: Peer connection lost
2020-11-06T14:26:28.424Z INFO    035f5236d7e6c6d16107c1f86e4514e6ccdd6b2c13c2abc1d7a83cd26ecb4c1d0e-chan#74: Peer transient failure in CHANNELD_NORMAL: channeld: Owning subdaemon channeld died (62208)
2020-11-06T14:26:28.425Z DEBUG   plugin-clboss: Rpc out: listpeers {}
2020-11-06T14:26:28.448Z DEBUG   plugin-clboss: Rpc in: listpeers {} => {\n\t\"peers\" : [\n\t\t{\n\t\t\t\"id\" : \"0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019\",\n\t\t\t\"connected\" : true,\n\t\t\t\"netaddr\" : [\n\t\t\t\t\"213.174.156.69:9...
2020-11-06T14:26:30.052Z DEBUG   035f5236d7e6c6d16107c1f86e4514e6ccdd6b2c13c2abc1d7a83cd26ecb4c1d0e-connectd: Connected out, starting crypto
2020-11-06T14:26:30.162Z DEBUG   035f5236d7e6c6d16107c1f86e4514e6ccdd6b2c13c2abc1d7a83cd26ecb4c1d0e-connectd: Connect OUT
2020-11-06T14:26:30.162Z DEBUG   035f5236d7e6c6d16107c1f86e4514e6ccdd6b2c13c2abc1d7a83cd26ecb4c1d0e-connectd: peer_out WIRE_INIT
2020-11-06T14:26:30.263Z DEBUG   035f5236d7e6c6d16107c1f86e4514e6ccdd6b2c13c2abc1d7a83cd26ecb4c1d0e-connectd: peer_in WIRE_INIT
2020-11-06T14:27:28.719Z INFO    033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-channeld-chan#469: Peer connection lost
2020-11-06T14:27:28.719Z INFO    033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-chan#469: Peer transient failure in CHANNELD_NORMAL: channeld: Owning subdaemon channeld died (62208)
2020-11-06T14:27:28.720Z DEBUG   plugin-clboss: Rpc out: listpeers {}
2020-11-06T14:27:28.731Z DEBUG   plugin-clboss: Rpc in: listpeers {} => {\n\t\"peers\" : [\n\t\t{\n\t\t\t\"id\" : \"0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019\",\n\t\t\t\"connected\" : true,\n\t\t\t\"netaddr\" : [\n\t\t\t\t\"213.174.156.69:9...
2020-11-06T14:27:30.693Z DEBUG   033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-connectd: Connected out, starting crypto
2020-11-06T14:27:30.810Z DEBUG   033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-connectd: Connect OUT
2020-11-06T14:27:30.810Z DEBUG   033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-connectd: peer_out WIRE_INIT
2020-11-06T14:27:30.912Z DEBUG   033e9ce4e8f0e68f7db49ffb6b9eecc10605f3f3fcb3c630545887749ab515b9c7-connectd: peer_in WIRE_INIT
2020-11-06T14:28:29.335Z INFO    0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-channeld-chan#73: Peer connection lost
2020-11-06T14:28:29.335Z INFO    0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-chan#73: Peer transient failure in CHANNELD_NORMAL: channeld: Owning subdaemon channeld died (62208)
2020-11-06T14:28:29.336Z DEBUG   plugin-clboss: Rpc out: listpeers {}
2020-11-06T14:28:29.350Z DEBUG   plugin-clboss: Rpc in: listpeers {} => {\n\t\"peers\" : [\n\t\t{\n\t\t\t\"id\" : \"0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019\",\n\t\t\t\"connected\" : false,\n\t\t\t\"channels\" : [\n\t\t\t\t{\n\t\t\t\t\t\"state\" ...
2020-11-06T14:28:30.837Z DEBUG   0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-connectd: Connected out, starting crypto
2020-11-06T14:28:30.950Z DEBUG   0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-connectd: Connect OUT
2020-11-06T14:28:30.951Z DEBUG   0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-connectd: peer_out WIRE_INIT
2020-11-06T14:28:31.053Z DEBUG   0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-connectd: peer_in WIRE_INIT
2020-11-06T14:29:27.533Z INFO    0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-channeld-chan#73: Peer connection lost
2020-11-06T14:29:27.534Z INFO    0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-chan#73: Peer transient failure in CHANNELD_NORMAL: channeld: Owning subdaemon channeld died (62208)
2020-11-06T14:29:27.534Z DEBUG   plugin-clboss: Rpc out: listpeers {}
2020-11-06T14:29:27.547Z DEBUG   plugin-clboss: Rpc in: listpeers {} => {\n\t\"peers\" : [\n\t\t{\n\t\t\t\"id\" : \"0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019\",\n\t\t\t\"connected\" : false,\n\t\t\t\"channels\" : [\n\t\t\t\t{\n\t\t\t\t\t\"state\" ...
2020-11-06T14:29:29.398Z DEBUG   0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-connectd: Connected out, starting crypto
2020-11-06T14:29:29.506Z DEBUG   0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-connectd: Connect OUT
2020-11-06T14:29:29.506Z DEBUG   0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-connectd: peer_out WIRE_INIT
2020-11-06T14:29:29.607Z DEBUG   0303a518845db99994783f606e6629e705cfaf072e5ce9a4d8bf9e249de4fbd019-connectd: peer_in WIRE_INIT

I can't tell exactly what the cause is, but to me it's a bit suspicious that a random channel dies each time clboss makes an RPC call?

I will try to enable full debug logs and see if I can hunt down a bit more when exactly this happens...

Is it possible to make clboss use #zerobasefee for channels under some conditions?

There is this #zerobasefee movement to ease the lightning routing process. If I would like to participate in this with clboss: is it possible to make clboss set fees in such a way that it uses a 0 base fee, even if this would diminish my earnings a little bit?

If not, I would like to make this a feature request. The best way for me would be a configurable limit: clboss sets a 0 base fee whenever the optimum base fee it calculates is below that limit. In this way clboss could honor #zerobasefee whenever the other fees on the network allow it without too much "fee loss".
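
A minimal sketch of the requested behaviour (this is not an existing CLBOSS option; the threshold would presumably come from a new clboss- setting):

#include <cstdint>

/* If the base fee CLBOSS would otherwise set is at or below the
 * operator-configured threshold, snap it to zero to honor #zerobasefee. */
std::uint32_t apply_zerobasefee( std::uint32_t computed_base_msat
                               , std::uint32_t zerobasefee_threshold_msat
                               ) {
    if (computed_base_msat <= zerobasefee_threshold_msat)
        return 0;
    return computed_base_msat;
}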

`autoreconf -i` errors

Cloning the git repo and running autoreconf -iv results in a few errors. Firstly libsodium:

aclocal: warning: couldn't open directory 'm4': No such file or directory
aclocal: warning: couldn't open directory 'm4': No such file or directory
libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, 'build-aux'.
libtoolize: copying file 'build-aux/ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'.
libtoolize: copying file 'm4/libtool.m4'
libtoolize: copying file 'm4/ltoptions.m4'
libtoolize: copying file 'm4/ltsugar.m4'
libtoolize: copying file 'm4/ltversion.m4'
libtoolize: copying file 'm4/lt~obsolete.m4'
configure.ac:47: installing 'build-aux/compile'
configure.ac:9: installing 'build-aux/config.guess'
configure.ac:9: installing 'build-aux/config.sub'
configure.ac:10: installing 'build-aux/install-sh'
configure.ac:10: installing 'build-aux/missing'
configure.ac:886: error: required file 'libsodium.pc.in' not found
configure.ac:886: error: required file 'libsodium-uninstalled.pc.in' not found
configure.ac:886: error: required file 'src/libsodium/include/sodium/version.h.in' not found
src/libsodium/Makefile.am:186: error: HAVE_LD_OUTPUT_DEF does not appear in AM_CONDITIONAL
src/libsodium/Makefile.am: installing 'build-aux/depcomp'
parallel-tests: installing 'build-aux/test-driver'
autoreconf: automake failed with exit status: 1

And similarly for secp256k1:

autoreconf: configure.ac: adding subdirectory external/secp256k1 to autoreconf
autoreconf: Entering directory `external/secp256k1'
autoreconf: running: aclocal -I build-aux/m4
aclocal: warning: couldn't open directory 'build-aux/m4': No such file or directory
libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, 'build-aux'.
libtoolize: copying file 'build-aux/ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'build-aux/m4'.
libtoolize: copying file 'build-aux/m4/libtool.m4'
libtoolize: copying file 'build-aux/m4/ltoptions.m4'
libtoolize: copying file 'build-aux/m4/ltsugar.m4'
libtoolize: copying file 'build-aux/m4/ltversion.m4'
libtoolize: copying file 'build-aux/m4/lt~obsolete.m4'
autoreconf: running: /usr/bin/autoheader
configure.ac:10: installing 'build-aux/compile'
configure.ac:5: installing 'build-aux/config.guess'
configure.ac:5: installing 'build-aux/config.sub'
configure.ac:9: installing 'build-aux/install-sh'
configure.ac:9: installing 'build-aux/missing'
configure.ac:517: error: required file 'libsecp256k1.pc.in' not found
Makefile.am: installing 'build-aux/depcomp'
parallel-tests: installing 'build-aux/test-driver'
autoreconf: automake failed with exit status: 1

I tried to manually insert the missing files, but it didn't seem to like that much. Nuking the directories external/libsodium and external/secp256k1 and making fresh git clones of the original repos in there manually seems to fix the issue.

I couldn't tell at which commit you might have copied those dependencies in, so I simply used stable branch for libsodium (442a23342f644005f63d0ee838d63d8bce94fb4b) and master for secp256k1 (3967d96bf184519eb98b766af665b4d4b072563e).

Combined with #9 this has allowed me to build and run successfully on Debian!

Support postgresql as an alternative to sqlite

If clightning is already configured to use a postgres database, it would be great if clboss could create its tables there, rather than creating a new sqlite3 database on disk.

This would be especially useful for setups where the datadir for clightning is on a network-attached volume/distributed filesystem, as sqlite is known to be prone to corruption in those scenarios.

Support .onion fallback for Boltz-compatible swaps

Recently, boltz.exchange had an expired SSL certificate which prevented CLBOSS from being able to reverse the polarity of our channels.

A possible workaround would have been to use the .onion, boltzzzbnus4m7mta3cxmflnps4fp7dueu2tgurstbvrbt6xswzcocyd.onion/, since this self-certifies.

What we could do would be to support using a "main" name and a fallback name, with the fallback name being an .onion.

Add commands to temporarily ignore onchain funds.

Mild overlap with #31

Add two new commands:

clboss-ignore-onchain [hours]
clboss-notice-onchain

clboss-ignore-onchain causes CLBOSS to temporarily ignore onchain funds for the specified number of hours from now, preventing CLBOSS from using those funds to open new channels. This is time-bound so that even if the operator forgets to re-enable it, CLBOSS will resume its normal handling. If [hours] is unspecified, it defaults to 24 hours.

clboss-notice-onchain cancels an existing clboss-ignore-onchain.

The user story is like this:

  • You have a C-Lightning node that, thanks to CLBOSS, you are happily not managing manually.
  • A friend asks a favor to get some incoming liquidity.
  • You clboss-ignore-onchain for some hours.
  • You get some funds from your cold storage and transfer to a newaddr on your node.
  • You set up the channel to your friend.
  • You clboss-notice-onchain once the channel is opened (or just let the timeout finish).

New incoming channel - high fees

Something I noticed on my node. Someone opened a 5 M channel to my node and made a 261k sat payment. CLBOSS set our channel fees to 187313 sat (base fee) and 3.278% (fee rate). These are super high fees resulting from the rebalancing logic, which makes sense for the general case but maybe not in this particular case.

What if this was a new user who connected to our routing node, paid for something, and now expects to be able to receive some sats back, and this is the only open channel? The sender would pay a hefty fee. Perhaps adding upper bounds on base and fee rates would be a good idea (eg. 50k sats base and 2% fee rate?).
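
A sketch of such a cap, using the example numbers above as placeholders (presumably these would become configurable options rather than hard-coded constants):

#include <algorithm>
#include <cstdint>

struct ChannelFee {
    std::uint32_t base_msat;
    std::uint32_t proportional_ppm;
};

ChannelFee clamp_fee(ChannelFee fee) {
    auto const max_base_msat = std::uint32_t(50000000); /* 50k sats */
    auto const max_ppm = std::uint32_t(20000);          /* 2% = 20000 ppm */
    fee.base_msat = std::min(fee.base_msat, max_base_msat);
    fee.proportional_ppm = std::min(fee.proportional_ppm, max_ppm);
    return fee;
}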

I don't know if this is an actual issue and if it needs "fixing", just raising something to think about. Feel free to close this if this isn't a problem.

Below is a screenshot from the tool I'm building for C-Lightning showing this channel setup:

[screenshot: channel_setup]

Error building on FreeBSD

On FreeBSD 11.3-RELEASE-p11 I downloaded v0.4 and then:

pkg install libev

(version 4.33 got installed).

Output of ./configure is here

1st error after running gmake.

Renaming std::site_t to __site_t in Bitcoin/hash160.hpp fixes this problem

2nd error

Unfortunately I have no idea how to fix this. I'm happy to try whatever you suggest and report back.

InitialRebalancer behaviour

@ZmnSCPxj I noticed that InitialRebalancer will exclude channels from rebalancing once we hit 0.1% in expenditures

auto limit = (total * max_in_expenditures_percent) / 100.0;

I think this line should at least take into account earnings because now I have a bunch of unbalanced channels that will not be rebalanced because their capital flow is unidirectional pretty much.

Even if we take earnings into account it will hit this threshold though because right now the earnings are an order of magnitude lower than the expenditures.

After looking at this for the past few weeks I concluded that the whole fee estimation system in clboss should eventually be based on how much we spend on maintaining channel balance because that's the main cost for running a routing node.

I have a large channel with the biggest merchant on LN. Initially I had all the capital on my side of the channel but CLBOSS managed to balance this channel using its brilliant rebalancing logic (something I couldn't do manually with the rebalance or the drain plugins). This was due to the fact that clboss learned from erring_node messages and with each rebalancing route got smarter and smarter. This part of CLBOSS:

  • is just awesome
  • saves a ton of manual work
  • puts your node on the BOS list (the top 5% of all the LN nodes) automatically

Then all this balanced liquidity got sucked up by capital flowing from the merchant node, via my node, to a couple other big nodes to which I have a large channel with outbound liquidity. This happened in a matter of 2 hours and now I'm back at square one with an unbalanced channel to the biggest merchant and a lot more spent on rebalancing this channel than I earned by forwarding to the other nodes.

What should have happened IMHO?

CLBOSS should recognize that it spent X on rebalancing the big merchant channel but the fees it earned by forwarding this capital out to the other nodes did not make up for it.

Let's assume we have a channel with a merchant M1, and 2 routing nodes N1 and N2.

Initial rebalancing 8 million sats for M1 cost: 13k sats
M1 state: 8 million sats inbound and 8 million sats outbound
Our fee rate to N1: 0.001%
Our fee base to N1: 1000 msat
Our fee rate to N2: 0.005%
Our fee base to N2: 5000 msat

Then M1 forwards 4 million sats (presumably because it cares only about incoming liquidity) via us and N1, and 4 million via us and N2, in 40 transactions through each node:

N1: 40 * 1 sat + 0.001% * 4000000 sat = 80 sats
N2: 40 * 5 sat + 0.005% * 4000000 sat = 400 sats

We earned a total of 480 sats but we spent 13k on rebalancing.

This is the fundamental problem with the current CLBOSS fee-setting system as I see it right now. It can start with a competitive low fee (FeeModderBySize is fine for starters), but then it needs to take all those 13k sats spent on rebalancing across all channels and update all outgoing fee rates (because fee rates reflect the cost of maintaining capital balance), using a weighted median or something like that, so that the rebalancing cost is reflected in our fees. Manipulating the base fee can be done separately using different factors, but I don't believe manipulating the base fee will solve the above problem in the long run.

All the above observations apply only to routing nodes. Merchant nodes and user nodes should behave differently (merchant mode focusing on maintaining incoming liquidity, and user mode on maintaining a small, balanced set of channels to well-connected nodes - but this is a whole new subject).

Remove libsodium dependency?

We repack libsodium since it is a security-sensitive dependency and we want to be in control of it in case of upstream takeover, but on the other hand it is actually fairly well supported amongst distros (so repacking might actually be too paranoid).

Further, we only really use the below bits:

  • Secure random number generation.
  • Secure memzero.
  • Constant-time memcmp.
  • SHA256

libsodium is a fairly large library and we are not using a lot of it, so it might actually be better to remove it and source the bits elsewhere.

  • Secure random number generation - copy from bitcoind? Just get from /dev/random? (not like we will actually be ported to Windows anytime soon given C-Lightning architecture...)
  • Secure memzero - dunno, implement from scratch?
  • Constant-time memcmp - implement from scratch (a sketch follows this list)
  • SHA256 - copy from bitcoind
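
For the constant-time memcmp item, a minimal sketch of the from-scratch version (a production version would also need to make sure the compiler cannot turn it back into an early-exit comparison):

#include <cstddef>

/* Constant-time equality check: always touches every byte and never
 * branches on the data, so timing does not leak where the first
 * mismatch occurred.  Returns 1 if equal, 0 otherwise. */
int secure_memequal(void const* a, void const* b, std::size_t len) {
    auto pa = static_cast<unsigned char const*>(a);
    auto pb = static_cast<unsigned char const*>(b);
    unsigned char diff = 0;
    for (std::size_t i = 0; i < len; ++i)
        diff |= static_cast<unsigned char>(pa[i] ^ pb[i]);
    return diff == 0;
}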

Configuration - clboss-min-onchain

Edited by ZmnSCPxj

Add a --clboss-min-onchain option that overrides our default 0.0003 minimum amount we leave onchain.


Original OP:

@ZmnSCPxj Are you planning to include some way to configure clboss (eg. by passing clboss-whatever options to lightningd.conf) ?

For example, clboss-max-channels=30, meaning it'll try to open up to that many channels. Once it reaches this threshold it'll stop the opening process (even if there are onchain funds), but it'll continue its other chores (eg. rebalancing). The use case here is that I'd like to have onchain funds in the wallet (eg. for swaps if it's an exchange) but not spend it all to open new channels. The default value could be 0, meaning the current behaviour.

clboss-withdraw?

If I want to pull some funds out of my node, I currently turn off clboss, close a channel at random, and withdraw. It'd be nice if clboss figured out how to get enough onchain funds to make the payment for me, instead.

Discover channel candidates by earned fee

As an automated node manager, we want CLBOSS to discover for itself what Lightning nodes are popular payment targets, so that we can build channels to them.

Now, we could implement a general AI that can understand Bitcoin news outlets to find hyped services and figure out which Lightning nodes are connected to those and so on, but that would require a lot of effort.

However, from the position of our node in the network, we do get to gather some amount of information. In particular, we can determine which of our current peers has been earning us more fees (i.e. which ones have the highest payments going to them).

This suggests that the peer itself, or the peers of that peer, are popular destinations on the network. So increasing our capacity towards that peer (and its other peers) might be a good idea.

Effectively, we theorize that if one of our peers has a lot of payments going to it, then one of its peers is probably the destination. We can try to guess which peer of our popular peer is the target.

  • If we guess wrong (likely) it is at minimum an easy source of channel rebalancing.
  • If we guess right, then we have undercut our peer and get a direct channel to the actual popular node.

Thus, we want to implement some kind of heuristic to propose channels to nodes that are peers of whichever of our own peers has many outgoing payments going to it.

peer_complaints and closed_peer_complaints missing in clboss-status

I assume that the peer_complaints and closed_peer_complaints fields of clboss-status will only be visible if there are any complaints.

If so, does somebody have some example JSON for me showing how it looks when there are complaints?

If not, then there is a bug, because I don't have such nodes in the resulting JSON.

Avoid planning channels to nodes with similar IP addresses

If possible we should channel to nodes with different geographical locations based on published IP address, in order to avoid having all our channels going to various LNBIG nodes.

This requires a map from IP addresses to autonomous system numbers.

LNBIG is known to host multiple nodes in various locations but hopefully this reduces the probability of having all our channels to LNBIG.

For .onion:

  • Give each unique .onion its own bucket?
  • Put all .onion in a single unique bucket, but allow drawing more than one from that bucket?
    • Equivalently, create N buckets for .onions and throw onions randomly in one of the onion buckets.

Not sure at what point to implement this. Probably best is to have the ChannelCandidateInvestigator add this as an investigation as well, and to penalize candidates that happen to have IPs similar to higher-scored candidates.
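
One possible binning rule, just as a sketch (not a settled design): bucket clearnet IPv4 addresses by their /16 prefix as a crude stand-in for an asmap/ASN lookup, and hash each .onion into one of a handful of onion buckets as in the list above.

#include <functional>
#include <sstream>
#include <string>

std::string bucket_for_address(std::string const& addr) {
    auto const N_ONION_BUCKETS = 8u;
    if (addr.size() > 6 && addr.compare(addr.size() - 6, 6, ".onion") == 0) {
        /* Throw the onion into one of a few pseudo-buckets. */
        auto h = std::hash<std::string>()(addr);
        std::ostringstream os;
        os << "onion-" << (h % N_ONION_BUCKETS);
        return os.str();
    }
    /* IPv4: keep only the first two octets, e.g. "203.0.113.5" -> "203.0".
     * A real implementation would map to an ASN instead, and would also
     * need a rule for IPv6. */
    auto first_dot = addr.find('.');
    auto second_dot = addr.find('.', first_dot + 1);
    if (second_dot == std::string::npos)
        return addr;
    return addr.substr(0, second_dot);
}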

IRC discussion with @willcl-ark and @darosior :

<willcl_ark> zmnscpxj_: CLBOSS seemed to cope ok even during that crazy fee market. It did open two channels (when I had zero) at a rate of 250s/vB (two in one transaction), but then seemed to "wait" until fees were lower again to open a further 5. I suppose because of how agnositc it is, 5 out of 7 channels are to various LNBIG nodes which is probably not ideal, but difficult to automatically bucket peers by "likely same 
<willcl_ark> owner"
<zmnscpxj_> Yes, if a majority of your funds is outside of channels, CLBOSS will put as much as possible into channels even if feerates are high
<zmnscpxj_> but it tries to hold off as much as it can until feerates go low
<willcl_ark> That is pretty much exactly what I saw
<zmnscpxj_> works as designed then
<willcl_ark> :)
<zmnscpxj_> TheBlueMatt has suggested IP binning before (in the context of routing, but still...) so it might be useful to consider that as well
<zmnscpxj_> in order to avoid the "everryone is LNBIG in purgatory"
<willcl_ark> TBH, looks like the LNBIG nodes are hosted in different locations (on 1ML) so not sure that will help
<zmnscpxj_> awww crap
<willcl_ark> you could base it on Alias, but that's easy to break too
<zmnscpxj_> yes
<zmnscpxj_> and using IP binning does not help with .onion as well
<willcl_ark> actually I am not correct, all the LNBIG nodes I partnered with appear to be (Ashburn, VA, United States)
<willcl_ark> might be worth to have it "just in case", even if it's not perfect, I don't think it would make anything worse
<zmnscpxj_> okay, looks like IP binning might be useful at least to *reduce* the possibility
<willcl_ark> exactly
<zmnscpxj_> Dunno what to do about .onion though, oh well
<zmnscpxj_> And I have to wonder about IPv6 as well
<darosior> asmap ?
<zmnscpxj_> what is asmap?
<darosior> bitcoind's new technique to make sybil connections more costy but that may be nonsensical for your matter
<zmnscpxj_> references?
<darosior> https://github.com/bitcoin/bitcoin/issues/16599
<zmnscpxj_> yes, TheBlueMatt has also suggested this for LN routing as well
<zmnscpxj_> though he suggests all .onion should go into a single bin, which worries me
<willcl_ark> You could allow more from the tor bin, than other bins?
<zmnscpxj_> could be done as well.

`Json::Detail::Serializer` template handling of integral types

From the MacOS of @hosiawak: https://gist.github.com/hosiawak/94215c9f02d3174034b83692703b94f4

Sigh. It is the incoherent types of C that bite us here.

A simple way would be to just use explicit basic-C char, short, int, long and long long types. C++11 standard "should" support long long outright. Hopefully these all cover the std::int*_t types on all systems. Sigh.

Alternately, we can add more template metaprogramming and use std::is_integral to detect if a type is integral. That requires us to add another template.
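
A sketch of the std::is_integral alternative, with a made-up serialize() standing in for the real Json::Detail::Serializer: one SFINAE-constrained template catches every integral type (char, int, long, long long, the std::int*_t typedefs, std::size_t, ...) no matter how the platform spells them.

#include <string>
#include <type_traits>

template<typename T>
typename std::enable_if<std::is_integral<T>::value, std::string>::type
serialize(T value) {
    return std::to_string(value);
}

/* Non-integral overload, e.g. for floating-point values. */
inline std::string serialize(double value) {
    return std::to_string(value);
}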

`std::size_t` compilation on MacOS

From @hosiawak : https://gist.github.com/hosiawak/14e43d591878c613458857ef30607c50

Bitcoin::varint is properly defined for std::uint64_t, but in the failure in question, len is an std::size_t. If std::uint64_t != std::size_t, then the compiler makes a conversion from std::size_t to std::uint64_t, which leads it to matching std::uint64_t const& instead of std::uint64_t&, which leads to Bitcoin::varint returning Bitcoin::Detail::VarIntConst instead of Bitcoin::Detail::VarInt.

To confirm, @hosiawak, can you compile and run this on your MacOS system?

#include<cstddef>
#include<cstdint>
#include<iostream>

int main() {
    std::cout << "sizeof(std::size_t): " << sizeof(std::size_t) << std::endl;
    std::cout << "sizeof(std::uint64_t): " << sizeof(std::uint64_t) << std::endl;
    return 0;
}

On my Linux box it gives 8 for both sizes, if sizeof(std::size_t) is 4 on your system I probably need to change every std::size_t that is used in Bitcoin::varint to std::uint64_t instead.

`FeeModderByOnchainFees`

As discussed with az0re on IRC, we should modify our channel feerates for channels we funded depending on the onchain feerates.

We mostly keep track of only a boolean low/high in our code. And in practice, for most of the week, this is always "high feerates"; we only get "low feerates" on weekends and maybe on Tuesdays (for some strange reason). Not sure if it is worth giving access to the running mean and last sampled feerate, since it seems to me that "low/high" is a bit too rough for this.

How do we evaluate channeled peers as "bad"?

If a peer is badly connected, we should reconsider whether to maintain the channel with it, or to just move out our funds (or if moving out the funds is hard, to outright close the channel).

But how do we judge that a peer is actually bad?

My initial naive thought was to use the listpeers report and measure out_fulfilled_msat divided by out_offered_msat. The lower, the "badder" the peer, and if the peer is below some value for some time, we can consider stronger responses, such as moving funds out of that channel and/or closing it and badlisting it.

However, if one of your peers knows you are using CLBOSS, then they can reduce the ratio of out_fulfilled_msat divided by out_offered_msat of any other peer they want to attack and make your CLBOSS close its channel.

They can do this by routing through you with a random payment_hash that has a high probability of not being known by the peer they want to remotely attack (by making your CLBOSS close on it). Your node, not knowing that this payment_hash is unknown, will in good faith offer the HTLC to the victim, increasing the out_offered_msat of the channel with it, but the victim cannot claim the funds, and thus the out_fulfilled_msat / out_offered_msat ratio is reduced, eventually dropping below whatever threshold we have.

Our alternative is active probing, where periodically we generate a short path and send out a random payment_hash. If it reaches the destination and the destination fails it (as it will, since it does not know the random payment_hash), then we consider this a plus for the first hop in the path; otherwise we consider it a demerit for that path.

Since the path is self-directed, we cannot be fooled by other nodes.

However, active probing locks up our funds, and results in unnecessary HTLCs floating on the network, which are always risks and costs on node operators, thus we cannot do this probing at a high rate (maybe one probe an hour? Every two hours?) This means a very slow rate of learning information about our peers, which is bad if we have lots of channels.


It would be nice if any pay commands initiated by our node could also feed information to this "bad peer" judge --- the pay route is mostly self-determined, so it is still impossible for other nodes to deliberately lower the score of some competitor by this process. Unfortunately the sendpay_success/sendpay_failure does not give the important information we need --- the first hop.

This information is available as a parameter to sendonion, the core method that pay uses (we can change sendpay into a thin wrapper around createonion/sendonion in theory, but that means less information is available in the db if we do that). We can hook into rpc_command to sneak a peek at sendonion parameters, record the first_hop and payment_hash and any optional partid, and a later sendpay_failure/sendpay_success can then have the first_hop correlated by payment_hash+partid.

The drawback is that rpc_command is a single-plugin hook, meaning no other plugins can fool around with rpc_command if we hook into it. Hmmm.

Rebalance by earnings

One thing I noticed is that the current rebalancers do not handle well the case where we have a lot of forwards.

In principle, rebalancers should just move funds from channels where we earned lots of incoming fees, to channels where we earned lots of outgoing fees.

Currently, InitialRebalancer just selects a destination channel based on imbalance. However, this can make it move funds to nodes without much outgoing activity anyway. Similarly, JitRebalancer uses imbalance to select a source channel, meaning it can move incoming-capacity to nodes without much incoming activity anyway.

But we record raw earnings anyway, plus the expenditures in rebalancing to/from a node. Instead, we can have InitialRebalancer select destination nodes that have been good at earning us outgoing fees, minus rebalances to those nodes, and similarly have JitRebalancer select nodes that earn us lots of incoming fees, minus rebalances from those nodes.

Earlier I was thinking of adding another rebalancer but modifying the existing ones would be better.
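
A sketch of the selection rule, with hypothetical names: rank each peer by the outgoing fees it has earned us minus what we have already spent rebalancing toward it, and have InitialRebalancer prefer the top-ranked peer as destination (JitRebalancer would do the mirror image with incoming earnings and rebalances from the node).

#include <cstdint>
#include <map>
#include <string>

typedef std::string NodeId; /* placeholder for Ln::NodeId */

NodeId pick_rebalance_destination(
        std::map<NodeId, std::int64_t> const& out_earnings_msat,
        std::map<NodeId, std::int64_t> const& rebalance_cost_to_msat) {
    auto best = NodeId();
    auto best_score = std::int64_t(0);
    auto first = true;
    for (auto const& e : out_earnings_msat) {
        auto it = rebalance_cost_to_msat.find(e.first);
        auto cost = (it == rebalance_cost_to_msat.end()) ? std::int64_t(0) : it->second;
        auto score = e.second - cost;
        if (first || score > best_score) {
            best = e.first;
            best_score = score;
            first = false;
        }
    }
    return best; /* empty if we have no earnings data at all */
}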

Unusual swap

2021-01-18T05:24:33.947Z INFO    plugin-clboss: NeedsOnchainFundsSwapper: Need 100918000msat onchain: Swapping 113028160msat onchain (id: 465ffec39b1d49edd31a41f469be093b).
2021-01-18T05:24:33.992Z INFO    plugin-clboss: SwapManager: Swap 465ffec39b1d49edd31a41f469be093b started for 113028160msat.
2021-01-18T05:24:36.216Z UNUSUAL plugin-clboss: Boltz::Service(\"https://boltz.exchange/api\"): Onchain amount 99764 too low compared to offchain amount 113028160msat
2021-01-18T05:24:38.780Z UNUSUAL plugin-clboss: Boltz::Service(\"https://boltz.exchange/api\"): Onchain amount 86801 too low compared to offchain amount 100000000msat
2021-01-18T05:24:38.781Z INFO    plugin-clboss: SwapManager: Swap 465ffec39b1d49edd31a41f469be093b failed.
2021-01-18T05:24:38.781Z INFO    plugin-clboss: NeedsOnchainFundsSwapper: Swap failed.

Is this a problem ?

Change low onchain fee monitoring

We periodically do actions depending on onchain fee conditions. Currently we use a mean, but @whitslack suggests here https://old.reddit.com/r/Bitcoin/comments/jkibch/announcing_clboss_automated_clightning_node/gakgcv4/ to sample a week or so of data and get a percentile.

@whitslack suggests the 10th percentile, but I think revolving around the 25th percentile is better in practice: use the 20th percentile for the high-to-low transition and the 30th percentile for the low-to-high transition. I also think keeping 2 weeks of data is better.

An issue is how we initialize. Until we have gathered a lot of data, we will have difficulty judging how low or high the current fee is relative to history. We could just write code for now to record the data (but retain the existing mean judgment code), run a node for a few weeks, then use the gathered data to initialize.
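
A sketch of the percentile-with-hysteresis judgment (names made up; the real module would persist its samples in the database as described):

#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

class FeerateJudge {
private:
    std::size_t max_samples; /* e.g. two weeks of periodic samples */
    std::deque<double> samples;
    bool is_low;

    double percentile(double p) const {
        auto sorted = std::vector<double>(samples.begin(), samples.end());
        std::sort(sorted.begin(), sorted.end());
        auto idx = std::size_t(p * double(sorted.size() - 1));
        return sorted[idx];
    }

public:
    explicit FeerateJudge(std::size_t max_samples_)
        : max_samples(max_samples_), is_low(false) { }

    /* Feed one periodic feerate sample and return the current judgment. */
    bool add_sample(double feerate) {
        samples.push_back(feerate);
        if (samples.size() > max_samples)
            samples.pop_front();
        if (samples.size() < 2)
            return is_low; /* not enough data yet; keep the old judgment */
        if (is_low) {
            if (feerate > percentile(0.30))
                is_low = false; /* low-to-high transition */
        } else {
            if (feerate < percentile(0.20))
                is_low = true;  /* high-to-low transition */
        }
        return is_low;
    }
};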

ChannelCandidateInvestigator - how does it work ?

Looking in the logs I noticed this:

plugin-clboss: ChannelCandidateInvestigator: Best candidates: 02df9bc4315c733658aaa910e7a857b0c34a1541dfb31bdffdd5eefa08c9da0f26(24), 03afdfd5020decc582a9aedfc5190403117ec83cc0e5993a1e5bfb8448b7c5ee59(24), 029da80a069c1e6e854e127c022d4d6a725b3a8a5e7feb297e9ae1f336a2b74a8e(24), 03505902c83ba1971c44e15736db0843271ec50a7741d0b15122f9e0d6d8a7ca98(24) ...

The first few nodes have 1 or 2 channels with very little capacity. I think it'll try to open a channel to 02df9bc4315c733658aaa910e7a857b0c34a1541dfb31bdffdd5eefa08c9da0f26 when fees drop but what's the reasoning behind it ?

How does the ChannelCandidateInvestigator work ? What determines if it's a good candidate ?

Implement DNS-over-TCP / HTTPS

Currently we just use dig. The big issue with dig is that it has no option I can find where it can be asked to do a DNS-over-TCP query via a SOCKS5 proxy (i.e. tor). It can work with torify but that increases our dependencies, and does not necessarily work for all proxy settings.

We may want to implement DNS-over-TCP directly, if only because we can tunnel that over a SOCKS5 proxy so that DNS seed queries can hide the IP of the user at least. DNS-over-TCP is preferred if we have a Tor proxy since Tor will do end-to-end encryption anyway, but we also want to avoid DNS attacks even if the user does not use Tor, which is why we might want to use DNS-over-HTTPS instead. However, I have no idea if lseed supports HTTPS, or if using a recursive resolver will let me paper over that lack if lseed does not support DNS-over-HTTPS. Probably @cdecker knows.

For now CLBOSS uses dig with torify if it works, and if always-use-proxy is set, requires torify always.
