ssb-server's Introduction

ssb-server

ssb-server is an open source peer-to-peer log store used as a database, identity provider, and messaging system. It has:

  • Global replication
  • File-synchronization
  • End-to-end encryption

ssb-server behaves just like a Kappa Architecture DB. In the background, it syncs with known peers. Peers do not have to be trusted, and can share logs and files on behalf of other peers, as each log is an unforgeable append-only message feed. This means ssb-servers comprise a global gossip-protocol mesh without any host dependencies.

If you are looking to use ssb-server to run a pub, consider using ssb-minimal-pub-server instead.

Join us in #scuttlebutt on Libera Chat.

Install

How to Install ssb-server and create a working pub

  1. sudo apt install curl autotools-dev automake

  2. Install the Node Version Manager (NVM):

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
  3. Close and reopen your terminal to start using nvm, or run the following:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion
  4. nvm install 10

  5. nvm alias default 10

  6. npm install -g node-gyp

  7. To add ssb-server to your available CLI commands, install it using the -g (global) flag:

npm install -g ssb-server

If you are running as the root user, this command will fail. Ideally you should run ssb-server as a non-privileged user, but if you must run it as root you can do so with npm install -g ssb-server --unsafe-perm.

  8. nano ~/run-server.sh and input:
#!/bin/bash
while true; do
  ssb-server start
  sleep 3
done

Press Ctrl-X, then Y, then Enter to save and quit.

Be sure to start the pub server from this script (the sh ~/run-server.sh step below), as the script will restart the server even if it crashes.

  9. mkdir ~/.ssb/

  10. nano ~/.ssb/config and input:

{
  "connections": {
    "incoming": {
      "net": [
        { "scope": "public", "host": "0.0.0.0", "external": "Your Host Name or Public IP", "transform": "shs", "port": 8008 }
      ]
    },
    "outgoing": {
      "net": [{ "transform": "shs" }]
    }
  }
}
  11. Now run sh ~/run-server.sh in a detachable session (e.g. screen).

  12. Detach the session and run ssb-server whoami to check that the server is working.

  13. Now is the time to think of a really cool name for your new pub server. Once you have it, run:

ssb-server publish --type about --about {pub-id (the output from ssb-server whoami)} --name {Your pub's awesome name}

  14. Now it's time to create those invites! Just run ssb-server invite.create 1 and send those codes to your friends.

Congratulations! You are now ready to scuttlebutt with your friends!

Note for those running ssb-server on a home computer: you will need to make sure your router allows connections to port 8008 by forwarding that port to the local IP address of the computer running the server (look up how to do this online). If you skip this step, clients that try to connect using your invite code will get an error saying the invite code is not valid.

Applications

Several applications are already built on ssb-server; one of the best ways to learn about secure-scuttlebutt is to poke around in these applications.

  • patchwork is a discussion platform that we use to discuss anything and everything concerning ssb and decentralization.
  • patchbay is another take on patchwork - it's compatible, less polished, but more modular. The main goal of patchbay is to be very easy to add features to.
  • git-ssb is git (& github!) on top of secure-scuttlebutt. Although we still keep our repos on github, primary development is via git-ssb.

It is recommended to get started with patchwork, and then look into git-ssb and patchbay.

Starting an ssb-server

Command Line Usage Example

Start the server with extra log detail. Leave this running in its own terminal/window:

ssb-server start --logging.level=info

Javascript Usage Example

var Server = require('ssb-server')
var config = require('ssb-config')
var fs = require('fs')
var path = require('path')

// add plugins
Server
  .use(require('ssb-master'))
  .use(require('ssb-gossip'))
  .use(require('ssb-replicate'))
  .use(require('ssb-backlinks'))

var server = Server(config)

// save an updated list of methods this server has made public
// in a location that ssb-client will know to check
var manifest = server.getManifest()
fs.writeFileSync(
  path.join(config.path, 'manifest.json'), // ~/.ssb/manifest.json
  JSON.stringify(manifest)
)

See github.com/ssbc/ssb-config for custom configuration.

Calling ssb-server Functions

There are a variety of ways to call ssb-server methods, from a command line as well as in a javascript program.

Command Line Usage Example

The ssb-server command can also be used to call the running ssb-server.

Now, in a separate terminal from the one where you ran ssb-server start, you can run commands such as the following:

# publish a message
ssb-server publish --type post --text "My First Post!"

# stream all messages in all feeds, ordered by publish time
ssb-server feed

# stream all messages in all feeds, ordered by receive time
ssb-server log

# stream all messages by one feed, ordered by sequence number
ssb-server hist --id $FEED_ID

Javascript Usage Example

Note that the following involves using a separate JS package, called ssb-client. It is most suitable for connecting to a running ssb-server and calling its methods. To see further distinctions between ssb-server and ssb-client, check out this handbook article.

var pull = require('pull-stream')
var Client = require('ssb-client')

// create a ssb-server client using default settings
// (server at localhost:8008, using key found at ~/.ssb/secret, and manifest we wrote to `~/.ssb/manifest.json` above)
Client(function (err, server) {
  if (err) throw err

  // publish a message
  server.publish({ type: 'post', text: 'My First Post!' }, function (err, msg) {
    // msg.key           == hash(msg.value)
    // msg.value.author  == your id
    // msg.value.content == { type: 'post', text: 'My First Post!' }
    // ...
  })

  // stream all messages in all feeds, ordered by publish time
  pull(
    server.createFeedStream(),
    pull.collect(function (err, msgs) {
      // msgs[0].key == hash(msgs[0].value)
      // msgs[0].value...
    })
  )

  // stream all messages in all feeds, ordered by receive time
  pull(
    server.createLogStream(),
    pull.collect(function (err, msgs) {
      // msgs[0].key == hash(msgs[0].value)
      // msgs[0].value...
    })
  )

  // stream all messages by one feed, ordered by sequence number
  pull(
    server.createHistoryStream({ id: feedId }), // feedId: the target feed's public key
    pull.collect(function (err, msgs) {
      // msgs[0].key == hash(msgs[0].value)
      // msgs[0].value...
    })
  )
})

Use Cases

ssb-server's message-based data structure makes it ideal for mail and forum applications (see Patchwork). However, it is sufficiently general to be used to build:

  • Office tools (calendars, document-sharing, tasklists)
  • Wikis
  • Package managers

Because ssb-server doesn't depend on hosts, its users can synchronize over WiFi or any other connective medium, making it great for Sneakernets.

ssb-server is eventually-consistent with peers, and requires exterior coordination to create strictly-ordered transactions. Therefore, by itself, it would probably make a poor choice for implementing a crypto-currency. (We get asked that a lot.)


Getting Started

Key Concepts

Further Reading

License

MIT


ssb-server's Issues

weird error

I got this when following from the command line (while gossiping over the local network... not sure if that is the problem?)

/home/dominic/c/secure-scuttlebutt/node_modules/level-sublevel/range.js:29
  throw new Error('items not comparable:'
        ^
Error: items not comparable:1418075745004 undefined
    at compare (/home/dominic/c/secure-scuttlebutt/node_modules/level-sublevel/range.js:29:9)
    at compare (/home/dominic/c/secure-scuttlebutt/node_modules/level-sublevel/range.js:21:15)
    at compare (/home/dominic/c/secure-scuttlebutt/node_modules/level-sublevel/range.js:21:15)
    at module.exports (/home/dominic/c/secure-scuttlebutt/node_modules/level-sublevel/range.js:68:19)
    at Object.trigger (/home/dominic/c/secure-scuttlebutt/node_modules/level-sublevel/hooks.js:28:12)
    at /home/dominic/c/secure-scuttlebutt/node_modules/level-sublevel/nut.js:105:25
    at Array.forEach (native)
    at /home/dominic/c/secure-scuttlebutt/node_modules/level-sublevel/nut.js:104:17

pub server is connecting to itself

grim is connecting to itself. i'm not sure yet if this is a config issue (hostname set incorrectly) or a code issue. filing an issue so i can get back to it

A multi-user distributed local web-server

We've recently discussed next steps for Phoenix/SSB, and so I spent the evening considering the question. This is my first proposal.

I propose that we view Scuttlebot as a web server such as NginX and Apache.

NginX and Apache serve web pages and other static assets, and they're pluggable. I propose we basically do that, but with the features we've developed here.

Here is how we might differentiate:

One, we back our web server with SSB, which gives p2p data synchronization. This lets us replace remote aggregator sites (Google, Facebook, Reddit, and Twitter) with local, consensus-based aggregation. For instance, Twitter is replaced by a Phoenix, which we would distribute over SSB.

Two, we automate the deployment of applications which are broadcast on the feed. Application messages might be defined as bundles of html, javascript, css, and other static assets. The server would host these bundles in unique security domains, eg localhost:2001, localhost:2002, localhost:2003, each with Content-Security Policies which restrict outgoing traffic. They would have to do everything through Scuttlebot.

Three, we add hash-based routing to the main domain. A request to eg localhost:2000/h/{msghash} looks up the message and assembles the web page from the message's bundled attachments. It then allocates a port to host the assembled app, and redirects the user to that domain (eg localhost:2003). The application messages themselves may be analogous to package.json.

Four, we setup the MuxRPC network. Applications use this to communicate with the host server, peer servers, and with other applications. This includes the permissions system.

Add "no-broadcast" toggle to local plugin

There may be times when a user wants to receive LAN availability broadcasts, but not broadcast them ("invisible" mode). This might be implemented with an RPC method on the local plugin, enableBroadcast(bool)
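A minimal sketch of how the toggle might look, assuming a simplified plugin shape. Only the method name enableBroadcast comes from the issue; the rest (the announce hook and its timer-driven caller) is illustrative, not the real local plugin.

```javascript
// Simplified local-plugin state with an "invisible" mode:
// still receives announcements, but can stop sending its own.
function LocalPlugin () {
  var broadcasting = true
  return {
    // proposed RPC method from the issue
    enableBroadcast: function (bool) { broadcasting = !!bool },
    // called on the periodic announce timer; returns whether it sent
    announce: function (send) {
      if (broadcasting) send()
      return broadcasting
    }
  }
}

var local = LocalPlugin()
local.enableBroadcast(false)                  // go invisible
console.log(local.announce(function () {}))   // prints false: nothing sent
```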

switch to ssb-manifest?

right now, sbot's manifest is split between each of its plugins, so we use getManifest() on a running instance to generate the correct listing. now that we have https://github.com/ssbc/ssb-manifest, how do we want to do this? should we remove the manifest pieces from the plugins and just use ssb-manifest?

Peer diversity

Since gossip deals with discovery of new peers, instead of connecting to random peers it would be nice to balance connections across peers. This would require a connection counter that is passed around, but it could distribute the load better, scatter the data to more places, and avoid hitting the same peers over and over (unless a "peer height" calculation is applied - that is, a peer that can and will hold more connections and whose primary use is to serve as an initial connection point).
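One way to sketch the balancing idea, assuming the connection counts have somehow been shared already (which is the hard part): pickPeer simply prefers the least-loaded known peer. All names here are illustrative, not the gossip plugin's API.

```javascript
// Pick the known peer with the fewest reported connections.
// connCounts would come from the (hypothetical) shared counter.
function pickPeer (peers, connCounts) {
  var best = null
  var bestCount = Infinity
  peers.forEach(function (p) {
    var c = connCounts[p] || 0
    if (c < bestCount) {
      best = p
      bestCount = c
    }
  })
  return best
}

console.log(pickPeer(['a', 'b', 'c'], { a: 4, b: 1, c: 9 })) // prints b
```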

weird bug in using invite code... fixed it self after a restart

@NHQ was helping me test the latest scuttlebot and we got a weird error:

<jjjohnny_> Error: async call failed
<jjjohnny_>     at /home/johnny/projects/scuttlebot/bin.js:152:23
<jjjohnny_>     at Array.requests.(anonymous function) (/home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:153:9)
<jjjohnny_>     at onRequest (/home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:20:27)
<jjjohnny_>     at Object.p.write (/home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:173:33)
<jjjohnny_>     at /home/johnny/projects/scuttlebot/node_modules/muxrpc/pull-weird.js:45:15
<jjjohnny_>     at loop (/home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/pull-stream/sinks.js:15:33)
<jjjohnny_>     at /home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/pull-stream/throughs.js:97:9
<jjjohnny_>     at /home/johnny/projects/scuttlebot/node_modules/pull-stream/throughs.js:97:9
<jjjohnny_>     at /home/johnny/projects/scuttlebot/node_modules/pull-stream/throughs.js:21:7
<jjjohnny_>   Error: invite not accepted
<jjjohnny_>     at /home/johnny/projects/scuttlebot/plugins/invite.js:151:26
<jjjohnny_>     at requests.(anonymous function) (/home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:153:9)
<jjjohnny_>     at /home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:184:58
<jjjohnny_>     at Array.forEach (native)
<jjjohnny_>     at Object.p.destroy (/home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:184:16)
<jjjohnny_>   Error: unexpected end of parent stream
<domanic> ah okay
<jjjohnny_>     at Object.p.destroy (/home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:181:15)
<jjjohnny_>     at Object.p.write (/home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:171:24)
<jjjohnny_>     at source (/home/johnny/projects/scuttlebot/node_modules/muxrpc/pull-weird.js:34:15)
<jjjohnny_>     at /home/johnny/projects/scuttlebot/node_modules/muxrpc/node_modules/pull-goodbye/endable.js:6:7
<jjjohnny_>     at /home/johnny/projects/scuttlebot/node_modules/pull-stream/throughs.js:13:5
<jjjohnny_>     at /home/johnny/projects/scuttlebot/node_modules/pull-inactivity/node_modules/pull-stream/throughs.js:123:12
<jjjohnny_>     at cancel (/home/johnny/projects/scuttlebot/node_modules/pull-inactivity/node_modules/pull-abortable/index.js:13:5)
<jjjohnny_>     at Function.reader.abort (/home/johnny/projects/scuttlebot/node_modules/pull-inactivity/node_modules/pull-abortable/index.js:47:5)
<jjjohnny_>     at abort (/home/johnny/projects/scuttlebot/node_modules/pull-inactivity/index.js:32:17)
<jjjohnny_>     at null.<anonymous> (/home/johnny/projects/scuttlebot/node_modules/pull-inactivity/index.js:39:7)

updating the server to the latest code and restarting fixed it. I'm not sure what it was but leaving this here in case it happens again.

Local peers table does not clear when the last local peer leaves

The local plugin updates its state when it receives availability announcements. Because it also handles entry-expiration in that codepath, if there are no peers in the network anymore, there are no opportunities to expire old entries. This causes the last available peer to stay in the local peers table, even after disconnect.

Various cleanup

Things I spotted while working on #11

  • Would be good to standardize the object returned from any connect() function as a 'client' object with {rpc:, rpcStream:, socket:}. The client object would also be emitted in the connect events.
  • The rpc-client and rpc-server events are a bit confusingly named - maybe rpc-client-connection and rpc-server-connection?

get invite flow as smooth as possible (thinking out loud)

At this early stage we want developers to start using scuttlebot and running their own servers.
Although developers are smart, we must make the flow for setting up a server, and for using invite codes as smooth as possible...

The readme in my gossip pull request documents how easy it is currently:
https://github.com/dominictarr/scuttlebot/blob/gossip/README.md

I had to tidy up quite a few loose ends to get it this smooth, but it could be a bit better...

here are my ideas:

  • instead of returning a json object, return a string which would be <host:port>,<id>,<secret> one string makes it easy to copy-paste and use as a command line argument in another script.
  • I made scuttlebot follow the server you connect to... but really this is not secure (trusting the host) and the id should be part of the invite... it should only follow if that is the server that is connected to.

maybe:

  • should the pub message and the follow message be part of the same message? maybe you follow a pub server and that message contains its ip address? so you are saying "these are the pubservers I know about"
  • should servers replicate with the first N friends, even if they are multiple hops away? this would bootstrap the network early on...
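The single-string format from the first idea above could be round-tripped like this. This is only a sketch of the proposed <host:port>,<id>,<secret> shape; the field names and helper functions are illustrative, not the real invite plugin.

```javascript
// <host:port>,<id>,<secret> as one copy-pasteable string
function formatInvite (inv) {
  return inv.host + ':' + inv.port + ',' + inv.id + ',' + inv.secret
}

// Split it back into its three comma-separated fields
function parseInvite (str) {
  var parts = str.split(',')
  var hostPort = parts[0].split(':')
  return {
    host: hostPort[0],
    port: parseInt(hostPort[1], 10),
    id: parts[1],
    secret: parts[2]
  }
}

var inv = { host: 'example.com', port: 2000, id: 'abc=.blake2s', secret: 's3cret' }
console.log(parseInvite(formatInvite(inv)))
```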

Moving plugins away from internal events

Scuttlebot is meant to be imported into other projects, so it needs to be customizable. The plugins rely on internal events to coordinate, which I'd rather we exposed as an external API. Applications which import scuttlebot can then hook into the events and coordinate the plugins' APIs themselves.

For instance:

var server = sbot(config)
server.on('rpc-connection', function(rpc, rpcStream) {
  server.auth(rpc, { role: 'peer', ToS: 'be excellent to each other' }, function(err, res) {
    if (res.granted)
      server.syncFeeds(rpc)
  })
})

occasional HD usage spikes

i get alerts from linode every so often saying that grimwire is peaking its hard-drive usage. i'm running a few other services, but i'd bet it's scuttlebot causing it

the email I get is usually something like Your Linode, linode147928, has exceeded the notification threshold (1000) for disk io rate by averaging 5047.67 for the last 2 hours. that number refers to i/o blocks per second. it hasn't happened recently so i don't have a fine-grained graph to look at, but iirc it's usually a very short peak (< 5 mins, maybe < 1 min) that is much higher (in the 30k-65k range)

not sure if this represents an issue or not

blob tmp-storage needs to be cleared

i encountered an error condition which caused a lot of tmp files to be created (in $SSB/blobs/tmp) and never destroyed. this is bad. we should use the system's tmp directory so that we know they'll get auto-cleaned

invite code allows too many uses

two bugs with invite code use

  1. one user can use an invite code multiple times. we should check if sbot is already following and, if so, return success without actually using the code
  2. i created an invite code with 1 use, but it's allowed me to use it multiple times. not sure why
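The fix suggested in point 1 might look like this. A sketch with illustrative names (useInvite, usesLeft, isFollowing); the real invite plugin tracks uses differently.

```javascript
// Accept an invite use: idempotent for users the pub already follows,
// otherwise consume one use from the remaining count.
function useInvite (invite, inviteeId, isFollowing, follow) {
  if (isFollowing(inviteeId)) return { ok: true, consumed: false }
  if (invite.usesLeft <= 0) return { ok: false, consumed: false }
  invite.usesLeft -= 1
  follow(inviteeId)
  return { ok: true, consumed: true }
}

var following = {}
var invite = { usesLeft: 1 }
function isFollowing (id) { return !!following[id] }
function follow (id) { following[id] = true }

console.log(useInvite(invite, '@alice', isFollowing, follow)) // consumed
console.log(useInvite(invite, '@alice', isFollowing, follow)) // ok, not consumed again
console.log(useInvite(invite, '@bob', isFollowing, follow))   // rejected: no uses left
```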

No .read for stream

On local RPC exchanges, this sometimes occurs:

no .read for stream: -2 dropped: { req: 2,
  seq: 2,
  end: 
   { message: 'method:createHistoryStream is not on whitelist',
     name: 'Error',
     stack: 'Error: method:createHistoryStream is not on whitelist\n    at Function.perms.test (/Users/paulfrazee/scuttlebot/node_modules/muxrpc/permissions.js:42:14)\n    at Object.stream.read (/Users/paulfrazee/scuttlebot/node_modules/muxrpc/index.js:82:29)\n    at onStream (/Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:119:18)\n    at Object.p.write (/Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:174:33)\n    at /Users/paulfrazee/scuttlebot/node_modules/muxrpc/pull-weird.js:45:15\n    at loop (/Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/pull-stream/sinks.js:15:33)\n    at /Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/pull-stream/throughs.js:97:9\n    at /Users/paulfrazee/scuttlebot/node_modules/pull-stream/throughs.js:97:9\n    at /Users/paulfrazee/scuttlebot/node_modules/pull-stream/throughs.js:21:7\n    at pull (/Users/paulfrazee/scuttlebot/node_modules/pull-serializer/node_modules/pull-split/node_modules/pull-through/index.js:35:9)' } }

After a few of those, it sometimes crashes the server with

/Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:70
          stream.read(null, err)
                 ^
TypeError: Property 'read' of object #<Object> is not a function
    at Object.stream.destroy (/Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:70:18)
    at /Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:187:11
    at Array.forEach (native)
    at Object.p.destroy (/Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:185:17)
    at Object.p.write (/Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:171:24)
    at source (/Users/paulfrazee/scuttlebot/node_modules/muxrpc/pull-weird.js:34:15)
    at /Users/paulfrazee/scuttlebot/node_modules/muxrpc/node_modules/pull-goodbye/endable.js:6:7
    at /Users/paulfrazee/scuttlebot/node_modules/pull-stream/throughs.js:13:5
    at /Users/paulfrazee/scuttlebot/node_modules/pull-ws-server/node_modules/pull-ws/sink.js:40:16
    at module.exports (/Users/paulfrazee/scuttlebot/node_modules/pull-ws-server/node_modules/pull-ws/ready.js:21:12)

RPC session distributed lifecycle

The RPC session has a distributed lifetime, and ideally needs to close only after both peers have completed their workloads. (In non-ideal situations, either peer may close the connection immediately.)

One option is to have each peer signal when it has finished its workload, marking the connection safe to close remotely. This might use a hangup signal, for instance.

Another option is to autoclose connections only after some period of inactivity (for instance, 60 seconds).
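The second option can be sketched as a resettable timer. The names and the demo's 50ms window are illustrative (the issue suggests 60 seconds); any traffic on the connection would call touch() to reset the countdown.

```javascript
// Close the connection after `ms` of inactivity; any traffic
// resets the timer via touch(), and stop() cancels it entirely.
function InactivityCloser (ms, close) {
  var timer = setTimeout(close, ms)
  return {
    touch: function () {
      clearTimeout(timer)
      timer = setTimeout(close, ms)
    },
    stop: function () { clearTimeout(timer) }
  }
}

// demo: a 50ms window instead of 60s
var closer = InactivityCloser(50, function () {
  console.log('connection idle, closing')
})
closer.touch() // pretend a packet arrived; countdown restarts
```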

setup test network

@dominictarr said something the other day in this comment.

What if the nodes in that test network measured themselves (plugin?) and published that information to their own feeds?

That way we could aggregate that data and see how the network is doing memory wise, cpu wise (or whatever measurements we like to have) etc and we could more or less see how the network is doing before or after pull requests has been made.

This would help locating bugs and problems in all future versions of scuttlebot.

Questions that come to mind:

  1. What is the best way to run this network?
  2. How to deploy it?

Other thoughts?

very first peer does not initiate gossip pool

after adding a pub, the gossip plugin should connect and begin replication. this doesn't seem to happen when the first pub is added. (this has only been reported for pubs, not for lan peers.)

restarting sbot solves this

CLI Syntax

I just launched the client and followed someone, but the syntax from the command line is a little difficult. It might make more sense if it looked something like this:

./client.js add --type follow --feed 0VhEBzoxYCT6/0vlc=.blake2s --rel follows

/cc @dominictarr

pub server croaking with high swap

after leaving my pub overnight, i sshed into the vps and tried to create an invite (./bin.js invite.create 5). the ssh session was lagging, so i already knew something was up. then the invite never emitted - the command just hung.

i ran iotop and got this output:

[screenshot: iotop output, 2015-01-22 9:36 AM]

node ./run is scuttlebot. as soon as i restarted the process, swap dropped back to 0 and all was well

unfortunately, i didn't grab the htop stats before restart, but here it is after

[screenshot: htop output, 2015-01-22 9:56 AM]

if i'm interpreting correctly, we've got 69MB in resident memory and over a gig in virtual, which suggests quite a bit of memory is swapped out (though i'm unsure why the swap meter on top left only shows 46 mb). either way, the virtual footprint seems way too high.

also, since taking that screenshot (about 5 min ago) the res & virt has climbed 5mb. it looks like it's climbing by about a mb a minute.

add another key as remote master

When a client is authorized with the server's key, that means it's the same computer.
This works fine on your laptop, but when running scuttlebot on a remote server it means you need to ssh in. Instead, if you added a list of master keys in a configuration file, you could add your laptop's keys, or give out authorization to others as well.
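A sketch of what such a configuration could look like, following the JSON config format shown earlier in this README. The key name "master" and its shape are an assumption based on this proposal (a similar option did later exist in ssb-config), so check ssb-config's documentation before relying on it:

```json
{
  "master": [
    "<your-laptop-key-id>"
  ]
}
```

Any client authenticating with a key in that list would then be granted master (full-access) permissions, just as the server's own key is.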

Connection timeout/failure

Gossip was working great for a while (on my local machine, with one process seeded to reach the other). Then, it began logging the connect attempt, but then doing nothing. The Gossip plugin only attempts to make a new connection after the current one completes, and the current one does not seem to be making progress.

I haven't figured out why the connection started to fail. However, in this case, we either have a failure to establish the connection (which should result in a timeout) or there's some kind of error/exception (which should result in an error or close event). Presumably, neither is happening - or, if they are, the gossip plugin isn't listening for it.

What's the right way to handle timeout and connection errors on muxrpc?

sbot invite.addMe crashes

$ sbot invite.addMe "ralphtheninja.info,55eHs2t9xT9hAQ3ln5PueRXllrxZIVhGhtCP2bL/IY0=.blake2s,0owmW8rCTQ5slb3fbY4YvXnPaN/mjCHNVOBJagdN+0w="

/usr/local/lib/node_modules/scuttlebot/bin.js:205
        if(err) throw err
                      ^
Error: unexpected end of parent stream
    at Object.p.destroy (/usr/local/lib/node_modules/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:189:15)
    at Object.p.write (/usr/local/lib/node_modules/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:179:24)
    at source (/usr/local/lib/node_modules/scuttlebot/node_modules/muxrpc/pull-weird.js:34:15)
    at /usr/local/lib/node_modules/scuttlebot/node_modules/muxrpc/node_modules/pull-goodbye/endable.js:6:7
    at /usr/local/lib/node_modules/scuttlebot/node_modules/pull-stream/throughs.js:13:5
    at /usr/local/lib/node_modules/scuttlebot/node_modules/pull-inactivity/node_modules/pull-stream/throughs.js:123:12
    at cancel (/usr/local/lib/node_modules/scuttlebot/node_modules/pull-inactivity/node_modules/pull-abortable/index.js:13:5)
    at Function.reader.abort (/usr/local/lib/node_modules/scuttlebot/node_modules/pull-inactivity/node_modules/pull-abortable/index.js:47:5)
    at abort (/usr/local/lib/node_modules/scuttlebot/node_modules/pull-inactivity/index.js:32:17)
    at null.<anonymous> (/usr/local/lib/node_modules/scuttlebot/node_modules/pull-inactivity/index.js:39:7)

Fired up an sbot at work and used a one time invite code. Not sure if it's related to that or not.

I get the following output in the terminal running sbot server

info ZRUA SBOT 10 incoming-connection 
info ZRUA AUTH 10 req "ZRUAZND59Bxahx6z0R03LwWvfX+I3O08+1TUROaqirs=.blake2s"
info ZRUA REMO 10 remote-authed {"granted":true,"type":"client","role":"master"}
info ZRUA SBOT 10 client-authed "ZRUAZND59Bxahx6z0R03LwWvfX+I3O08+1TUROaqirs=.blake2s"
info ZRUA SBOT 10 authed {"type":"server"}
info ZRUA SBOT 11 connect {"host":"ralphtheninja.info","port":2000}
warn ZRUA SBOT 11 unauthed {}

proposal: bundled web frontends

i'm going to suggest we ditch the ssbui package and make sbot our primary install. to make that work, we should make phoenix the default web frontend in sbot. if run without additional config, the interface will only be available locally, so it's not a danger

for pub servers, we can add a flag to the server command that replaces the frontend with a redirect. the pub server owner can use that to redirect to their html frontend. eg sbot server --redirect https://grimwire.com. then they can setup a public-facing frontend on port 80/443

Document the CLI (docs and -h)

We need to give users some docs on the CLI, both on the repo docs and with -h switches.

For standard ssb commands, a -h is no problem. For plugins, I dunno, do we want to include a "docs" structure?

remote error on blob download should cleanup and stop requesting from the errored peer

there's currently a bug in the blobs plugin that causes a successfully-found blob to get mismapped. the searcher sees that a peer has the blob, then mismaps and (as a result) requests the wrong blob from the peer.

i'm going to figure out that mismap next, but the larger issue is what happens on this mismap. a download is attempted, and the other peer fails to get the file. this causes the following to happen on the receiving end:

  • a tempfile is allocated
  • the file-read error makes its way to the receiving peer
  • for some reason, the content digest is still computed (and of course found to be incorrect)
  • the download aborts, but...
    • the tempfile is not destroyed (related to #89)
    • the download is re-queued without removing that peer, so it happens repeatedly

here's the log output:

info 64Fm BLOB  want "/ZrDz5KILJSmJ/BMyt2KTR8bc8vNXLMZAQaoRSdWvbY=.blake2s"
QUERY FINISHED kIetR4xx26Q2M62vG0tNrptJDFnxP0SexLHaOIkyy08=.blake2s [ true ]
info 64Fm BLOB kIetR4xx26Q2M62vG0tNrptJDFnxP0SexLHaOIkyy08=.blake2s found
  "Cfwc/sY2eTHDVfFJO7RWToGib2JAeBvDppXPamDE5Js=.blake2s"
QUERY FINISHED TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s [ false ]
download wantlist is... [ { id: 'Cfwc/sY2eTHDVfFJO7RWToGib2JAeBvDppXPamDE5Js=.blake2s',
    waiting: [ [Function], [Function], [Function] ],
    state: 'ready',
    has: 
     { 'kIetR4xx26Q2M62vG0tNrptJDFnxP0SexLHaOIkyy08=.blake2s': true,
       'TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s': false } } ]
info 64Fm BLOB kIetR4xx26Q2M62vG0tNrptJDFnxP0SexLHaOIkyy08=.blake2s downloading
  "Cfwc/sY2eTHDVfFJO7RWToGib2JAeBvDppXPamDE5Js=.blake2s"
Error: could not write to tmpfile
    at /home/pfraze/scuttlebot/node_modules/multiblob/index.js:79:29
    at done (/home/pfraze/scuttlebot/node_modules/stream-to-pull-stream/index.js:21:11)
    at /home/pfraze/scuttlebot/node_modules/stream-to-pull-stream/index.js:50:16
    at /home/pfraze/scuttlebot/node_modules/multiblob/node_modules/pull-stream/throughs.js:126:7
    at /home/pfraze/scuttlebot/node_modules/pull-stream/throughs.js:21:7
    at Object.weird.read (/home/pfraze/scuttlebot/node_modules/muxrpc/pull-weird.js:24:7)
    at onStream (/home/pfraze/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:86:14)
    at Object.p.write (/home/pfraze/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:174:33)
    at /home/pfraze/scuttlebot/node_modules/muxrpc/pull-weird.js:45:15
  Error: ENOENT, open '/Users/paulfrazee/.ssb/blobs/blake2s/09/fc1cfec6367931c355f1493bb4564e81a26f6240781bc3a695cf6a60c4e49b'
Error: actual hash:aSF6MHmQgJThESHQQjVKfB9VtkgsoaUeGyUN/R7Q7vk=.blake2s did not match expected hash:Cfwc/sY2eTHDVfFJO7RWToGib2JAeBvDppXPamDE5Js=.blake2s
    at /home/pfraze/scuttlebot/node_modules/multiblob/index.js:82:23
    at done (/home/pfraze/scuttlebot/node_modules/stream-to-pull-stream/index.js:21:11)
    at WriteStream.onClose (/home/pfraze/scuttlebot/node_modules/stream-to-pull-stream/index.js:28:16)
    at WriteStream.EventEmitter.emit (events.js:92:17)
    at fs.js:1601:14
    at Object.oncomplete (fs.js:107:15)

blob polling increases when a new ext link is discovered

this has been observed twice. i added some logging to see when sbot was sending queries for blobs. what appears to happen is that, when a new blob is published (via an ext link), the blob plugin starts polling its already-connected peers repeatedly. as far as i saw, other connections could be created afterward, and those peers did not get added to this pool of rapidly-queried targets.

haven't yet diagnosed possible causes

info 64Fm BLOB  want "qH8g5u/EFdx46SOxudBomUbFyRZ0MA5LRR699yo71+s=.blake2s"
BLOBS QUERYING TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s
QUERY FINISHED TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s [ false ]
BLOBS QUERYING TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s
QUERY FINISHED TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s [ false, false, false, false, false, false, false ]
BLOBS QUERYING TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s
QUERY FINISHED TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s [ false ]
BLOBS QUERYING TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s
QUERY FINISHED TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s [ false, false, false, false, false, false, false ]
etc

goes to 100% cpu

something weird is happening and I'm seeing this go to 100% cpu on @NHQ's laptop.
it's debian, running 10.30.
Works fine for me on archlinux and 10.31, so I'm not really sure what the problem is.
test/realtime reproduces it.

I'm gonna put this on travis and see if we can reproduce this.

Add permissions with [email protected]

We still don't have authenticated or encrypted channels, but we can get the ball rolling by implementing usernames, passwords, and the different permission levels.

(Note, though, that until we have encrypted channels all communication is plaintext, so logins shouldn't occur over the network.)

Once we have authenticated channels, we can do login by reading the channel's pubkey hash id. Until then, we'll use user/pass.
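In the meantime, user/pass logins could map to permission levels with a default-deny check, in the spirit of a method whitelist. A sketch only, with entirely hypothetical names (this is not muxrpc's real permissions API):

```javascript
// Sketch: per-user permission levels gating RPC methods.
// Method and level names are illustrative.
const levels = { anonymous: 0, member: 1, admin: 2 }

// minimum level required per method
const required = { whoami: 'anonymous', add: 'member', 'plugins.install': 'admin' }

function isAllowed (userLevel, method) {
  const need = required[method]
  if (need === undefined) return false // unlisted methods are denied, like a whitelist
  return levels[userLevel] >= levels[need]
}
```

The default-deny branch matters: a method that nobody thought to list should fail closed rather than open.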

after suspend, gossip stops making new connections

i've observed this on my mac laptop. if the system suspends while sbot is running, sbot won't resume gossip on wake-up.

here's the tail of the logs:

info kIet SBOT 284 connect
  {
    "host": "176.58.117.63",
    "port": 2000,
    "time": {
      "connect": 1421999525938,
      "attempt": 1422010336295
    },
    "connected": true,
    "id": "TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s",
    "failure": 0
  }
warn kIet SBOT 284 unauthed
  Error: connect ENETUNREACH
      at errnoException (net.js:904:11)
      at connect (net.js:766:19)
      at net.js:845:9
      at dns.js:72:18
      at process._tickCallback (node.js:419:13)
info kIet SBOT 284 disconnect 
warn kIet SBOT 283 unauthed
  Error: connect ENETUNREACH
      at errnoException (net.js:904:11)
      at connect (net.js:766:19)
      at net.js:845:9
      at asyncCallback (dns.js:68:16)
      at Object.onanswer [as oncomplete] (dns.js:121:9)
info kIet SBOT 283 disconnect 
info kIet SBOT 285 connect
  {
    "host": "176.58.117.63",
    "port": 2000,
    "time": {
      "connect": 1421999525938,
      "attempt": 1422010339483
    },
    "connected": true,
    "id": "TNn7v0MsAs8OpQnyRwtsMeROVWGlKnS/ItX966PAWjI=.blake2s",
    "failure": 1
  }
warn kIet SBOT 285 unauthed
  Error: connect ENETUNREACH
      at errnoException (net.js:904:11)
      at connect (net.js:766:19)
      at net.js:845:9
      at dns.js:72:18
      at process._tickCallback (node.js:419:13)
info kIet SBOT 285 disconnect 
info kIet SBOT 286 connect
  {
    "host": "grimwire.com",
    "port": 2000,
    "time": {
      "connect": 1421999503513,
      "attempt": 1422010340164
    },
    "connected": true,
    "id": "64FmNkolarZeXMldJAKwj9E54XwXJdubIqD1sPPCgCQ=.blake2s",
    "failure": 1
  }
warn kIet SBOT 286 unauthed
  Error: getaddrinfo ENOTFOUND
      at errnoException (dns.js:37:11)
      at Object.onanswer [as oncomplete] (dns.js:124:16)
info kIet SBOT 286 disconnect 

both connections are available and neither peer has a failure rate high enough to keep them from getting selected. is it possible the timeout for connection scheduling needs to be reset?
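If the scheduler's state does need a reset, one option is to detect the suspend itself: a timer that fires far later than scheduled implies the machine was asleep, and the wake-up handler can kick gossip. A sketch under that assumption (`onWake` would call whatever reset hook sbot exposes; all names here are hypothetical):

```javascript
// Sketch: detect a system suspend by noticing that a periodic timer
// fired much later than expected, then notify a wake-up handler.
function suspendDetected (last, now, intervalMs, thresholdMs) {
  return now - last > intervalMs + thresholdMs
}

function watchForSuspend (onWake, intervalMs = 1000, thresholdMs = 5000) {
  let last = Date.now()
  const timer = setInterval(() => {
    const now = Date.now()
    if (suspendDetected(last, now, intervalMs, thresholdMs)) onWake(now - last)
    last = now
  }, intervalMs)
  if (timer.unref) timer.unref() // don't keep the process alive just for this
  return timer
}
```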

realtime replication

replication with a long lived connection... this will demand killing inactive connections.
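Killing inactive connections could be as simple as tracking last activity per connection and sweeping periodically. A rough sketch of that shape (hypothetical API; real sbot connections are muxrpc streams):

```javascript
// Sketch: track last-activity timestamps per connection id and
// close any connection idle longer than maxIdleMs.
function createReaper (maxIdleMs) {
  const lastSeen = new Map()
  return {
    // call on every message sent or received over the connection
    touch (id) { lastSeen.set(id, Date.now()) },
    // call periodically; `close` actually tears the connection down
    reap (close, now = Date.now()) {
      for (const [id, t] of lastSeen) {
        if (now - t > maxIdleMs) {
          close(id)
          lastSeen.delete(id)
        }
      }
    }
  }
}
```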

Error: method:auth is not on whitelist

Submitting this as an issue as discussed with @dominictarr:

This is what I'm seeing with commit 93ea9e2 (which is currently @dominictarr's master branch):

root@Michiel:/scuttlebot# ./client.js add --type follows --'$feed' $ssb_master
Error: method:auth is not on whitelist
    at Function.perms.test (/scuttlebot/node_modules/muxrpc/permissions.js:31:16)
    at Object.PacketStream.request (/scuttlebot/node_modules/muxrpc/index.js:69:27)
    at onRequest (/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:26:12)
    at Object.p.write (/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:173:33)
    at /scuttlebot/node_modules/muxrpc/pull-weird.js:45:15
    at loop (/scuttlebot/node_modules/muxrpc/node_modules/pull-stream/sinks.js:15:33)
    at /scuttlebot/node_modules/muxrpc/node_modules/pull-stream/throughs.js:97:9
    at /scuttlebot/node_modules/pull-stream/throughs.js:97:9
    at /scuttlebot/node_modules/pull-stream/throughs.js:21:7
    at pull (/scuttlebot/node_modules/pull-serializer/node_modules/pull-split/node_modules/pull-through/index.js:35:9)
{
  "previous": "RT1WDBOK8Oe9HMGoYQltATV8ZXBsjicqdAD24fMImlU=.blake2s",
  "author": "mGkIdJ6VJL+pz+USBIYX1wOIR5MJjqUEOZzMcoz1fbg=.blake2s",
  "sequence": 3,
  "timestamp": 1417196219024,
  "hash": "blake2s",
  "content": {
    "type": "follows",
    "$feed": "LONGHASH.blake2s",
    "$rel": "follows"
  },
  "signature": "n32Xi8sAO7xL6H+LYu+ye2AmJO5QZutfJo0oo7C9p5h/9QAktFU0ERMb9hZKfAMNXTKEzUWcsNo0iimsIHPoog==.blake2s.k256"
}

Let me know if you need more info!

trusting 'pub' messages

somebody could be a real jerk and flood your feed with bad pub messages. should we consider only using the pub messages of trusted peers?
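A first cut could filter pub announcements through the follow graph before acting on them. A sketch (hypothetical message shape; real ssb messages carry more fields):

```javascript
// Sketch: only keep 'pub' announcements authored by feeds we
// directly follow. `follows` is a Set of feed ids.
function trustedPubs (messages, follows) {
  return messages.filter(m =>
    m.content && m.content.type === 'pub' && follows.has(m.author)
  )
}
```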

sbot crash

I'm not exactly sure what triggered this (I noticed the server was dead only after coming back a bit later), but I did start following dominic (and so dominic's_linode_server got added to my server list) pretty shortly before that. Lemme know if there's anything I can dig deeper on.

info DW1i SBOT 18 incoming-connection 
warn DW1i SBOT 18 unauthed
  Error: method not supported:auth
      at Object.PacketStream.request (http://localhost:2000/js/home.js:10764:23)
      at onRequest (http://localhost:2000/js/home.js:10994:12)
      at Object.p.write (http://localhost:2000/js/home.js:11141:33)
      at http://localhost:2000/js/home.js:11333:15
      at http://localhost:2000/js/home.js:10094:33
      at http://localhost:2000/js/home.js:10377:9
      at http://localhost:2000/js/home.js:12622:9
      at http://localhost:2000/js/home.js:12546:7
      at pull (http://localhost:2000/js/home.js:12211:9)
      at next (http://localhost:2000/js/home.js:12236:11)
info DW1i AUTH 18 req "luy/tFqWzdYTrMles/hoannfsau1JDhMMPA+dIjMS7M=.blake2s"

TypeError: Property 'read' of object #<Object> is not a function
    at Object.stream.destroy (/usr/lib/node_modules/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:70:18)
    at /usr/lib/node_modules/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:188:11
    at Array.forEach (native)
    at Object.p.destroy (/usr/lib/node_modules/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:185:17)
    at Object.p.write (/usr/lib/node_modules/scuttlebot/node_modules/muxrpc/node_modules/packet-stream/index.js:171:24)
    at /usr/lib/node_modules/scuttlebot/node_modules/muxrpc/pull-weird.js:47:35
    at loop (/usr/lib/node_modules/scuttlebot/node_modules/muxrpc/node_modules/pull-stream/sinks.js:11:20)
    at /usr/lib/node_modules/scuttlebot/node_modules/muxrpc/node_modules/pull-stream/throughs.js:97:9
    at /usr/lib/node_modules/scuttlebot/node_modules/pull-stream/throughs.js:97:9
    at /usr/lib/node_modules/scuttlebot/node_modules/pull-stream/throughs.js:21:7

reducing blob-sync load: registering interest

i think we're going to have congestion issues if we don't add sophistication to the blob-syncing strategy. even if every user is a good citizen, the dataset will get large fast. if a pub follows 100 users, each of which has posted 10mb of data, that's a gig of data for the initial sync. that can't work!

most peers, i think, need to download blobs only after registering interest in them. we can come up with different ways to do this, but i think they'd break down to one of:

  • "give me every blob this feed posts"
  • "give me every blob in this message"
  • "give me this blob"

i think phoenix's UX should focus around the latter two, and maybe make the first an option

with pubs, we could use the first strategy and auto-download every blob its members post (perhaps under a size limit) but ignore blobs by friends of friends. another thing we could do is let users "push" blobs to a server - that is, request that the pub take the blob. this would allow members to upload non-members' blobs.

pubs can deal with a slightly higher load overall, but they may need a prioritization tool so that they only fetch blobs when feed-syncing doesn't need to take precedence.
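The interest-registration idea sketches out as a want-list plus a size-capped auto-fetch rule for pubs. Entirely hypothetical API, and the size numbers are illustrative:

```javascript
// Sketch: a blob is fetched only if someone registered interest in it,
// or if it came from a followed feed and is under the auto-fetch cap.
function createWants (maxAutoSize) {
  const wanted = new Set()
  return {
    // "give me this blob" - explicit interest
    want (id) { wanted.add(id) },
    shouldFetch (id, size, fromFollowed) {
      if (wanted.has(id)) return true            // explicit interest always wins
      return fromFollowed && size <= maxAutoSize // auto-fetch small blobs from members
    }
  }
}
```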

bug

I added some logging and got this:

connect { host: 'grimwire.com', port: 2000 }
authorizing...
incoming connection
authorizing...
authorized? { message: 'method not supported:auth',
  name: 'Error',
  stack: '[134]</module.exports/</createPacketStream/<.request@http://localhost:2000/js/home.js:15103:13\nonRequest@http://localhost:2000/js/home.js:15335:7\n[135]</module.exports/p.write@http://localhost:2000/js/home.js:15482:21\n[145]</module.exports/<.sink/<@http://localhost:2000/js/home.js:16451:9\n[141]</exports.drain/<@http://localhost:2000/js/home.js:15839:1\n[143]</</exports.filter/next/<@http://localhost:2000/js/home.js:16122:9\n[165]</</exports.filter/next/<@http://localhost:2000/js/home.js:17726:9\n[165]</</exports.map/</<@http://localhost:2000/js/home.js:17650:7\npull@http://localhost:2000/js/home.js:16833:9\nnext@http://localhost:2000/js/home.js:16858:7\n[153]</module.exports/<@http://localhost:2000/js/home.js:16860:18\n[152]</module.exports</</pull/<@http://localhost:2000/js/home.js:16843:11\nread@http://localhost:2000/js/home.js:18071:7\n[167]</</read2/<@http://localhost:2000/js/home.js:18077:5\n[8]</EventEmitter.prototype.emit@http://localhost:2000/js/home.js:4832:9\nemitReadable_@http://localhost:2000/js/home.js:6462:3\nemitReadable@http://localhost:2000/js/home.js:6458:5\nreadableAddChunk@http://localhost:2000/js/home.js:6201:9\n[19]</</Readable.prototype.push@http://localhost:2000/js/home.js:6163:3\n[187]</</Duplexify.prototype._forward@http://localhost:2000/js/home.js:18851:5\n[187]</</Duplexify.prototype.setReadable/onreadable@http://localhost:2000/js/home.js:18817:5\n[8]</EventEmitter.prototype.emit@http://localhost:2000/js/home.js:4832:9\nemitReadable_@http://localhost:2000/js/home.js:6462:3\nemitReadable@http://localhost:2000/js/home.js:6458:5\nreadableAddChunk@http://localhost:2000/js/home.js:6201:9\n[19]</</Readable.prototype.push@http://localhost:2000/js/home.js:6163:3\n[20]</Transform.prototype.push@http://localhost:2000/js/home.js:7144:3\nonmessage@http://localhost:2000/js/home.js:18684:5\n' }

one problem was that the address for grimwire was coming through as the string grimwire.com:2000,
but it should have been an object...

Server Logging

What do you think about adding logging to the server that's formatted for users to read? It may help users diagnose issues.
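A user-facing format might be one timestamped, levelled line per event. A sketch with illustrative field names (not sbot's actual log schema):

```javascript
// Sketch: render a structured log entry as a single human-readable line.
// `entry` fields (time, level, source, event, detail) are hypothetical.
function formatLog (entry) {
  const ts = new Date(entry.time).toISOString()
  return ts + ' [' + entry.level.toUpperCase() + '] ' +
    entry.source + ': ' + entry.event +
    (entry.detail ? ' (' + entry.detail + ')' : '')
}
```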
