
nebula's Introduction

What is Nebula?

Nebula is a scalable overlay networking tool with a focus on performance, simplicity and security. It lets you seamlessly connect computers anywhere in the world. Nebula is portable, and runs on Linux, macOS, Windows, iOS, and Android. It can be used to connect a small number of computers, but is also able to connect tens of thousands of computers.

Nebula incorporates a number of existing concepts like encryption, security groups, certificates, and tunneling, and each of those individual pieces existed before Nebula in various forms. What makes Nebula different from existing offerings is that it brings all of these ideas together, resulting in a sum that is greater than its individual parts.

Further documentation can be found here.

You can read more about Nebula here.

You can also join the NebulaOSS Slack group here.

Supported Platforms

Desktop and Server

Check the releases page for downloads or see the Distribution Packages section.

  • Linux - 64 and 32 bit, arm, and others
  • Windows
  • macOS
  • FreeBSD

Distribution Packages

Mobile

Technical Overview

Nebula is a mutually authenticated peer-to-peer software defined network based on the Noise Protocol Framework. Nebula uses certificates to assert a node's IP address, name, and membership within user-defined groups. Nebula's user-defined groups allow for provider agnostic traffic filtering between nodes. Discovery nodes allow individual peers to find each other and optionally use UDP hole punching to establish connections from behind most firewalls or NATs. Users can move data between nodes in any number of cloud service providers, datacenters, and endpoints, without needing to maintain a particular addressing scheme.

Nebula uses Elliptic-curve Diffie-Hellman (ECDH) key exchange and AES-256-GCM in its default configuration.

Nebula was created to provide a mechanism for groups of hosts to communicate securely, even across the internet, while enabling expressive firewall definitions similar in style to cloud security groups.
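
As a concrete taste of how groups drive filtering, here is a minimal inbound rule in the style of the example configs reproduced further down this page; the group name servers is purely illustrative:

firewall:
  inbound:
    # Allow https only from hosts whose certificates carry the "servers" group
    - port: 443
      proto: tcp
      group: servers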

Getting started (quickly)

To set up a Nebula network, you'll need:

1. The Nebula binaries or Distribution Packages for your specific platform. Specifically, you'll need nebula-cert and the nebula binary for each platform you use.

2. (Optional, but strongly recommended.) At least one discovery node with a routable IP address, which we call a lighthouse.

Nebula lighthouses allow nodes to find each other, anywhere in the world. A lighthouse is the only node in a Nebula network whose IP should not change. Running a lighthouse requires very few compute resources, and you can easily use the least expensive option from a cloud hosting provider. If you're not sure which provider to use, a number of us have used $5/mo DigitalOcean droplets as lighthouses.

Once you have launched an instance, ensure that Nebula UDP traffic (default port udp/4242) can reach it over the internet.
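
For example, on an Ubuntu droplet using ufw (one option among many; adjust for your provider's firewall or security groups):

sudo ufw allow 4242/udp
sudo ufw status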

3. A Nebula certificate authority, which will be the root of trust for a particular Nebula network.

./nebula-cert ca -name "Myorganization, Inc"

This will create files named ca.key and ca.crt in the current directory. The ca.key file is the most sensitive file you'll create, because it is the key used to sign the certificates for individual nebula nodes/hosts. Please store this file somewhere safe, preferably with strong encryption.
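
As a quick sanity check, nebula-cert can print back the details of the certificate you just created (assuming your build includes the print subcommand):

./nebula-cert print -path ca.crt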

4. Nebula host keys and certificates generated from that certificate authority

This assumes you have four nodes, named lighthouse1, laptop, server1, and host3. You can name the nodes any way you'd like, including using FQDNs. You'll also need to choose IP addresses and the associated subnet. In this example, we are creating a nebula network that will use 192.168.100.x/24 as its network range. This example also demonstrates nebula groups, which can later be used to define traffic rules in a nebula network.

./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh"
./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers"
./nebula-cert sign -name "host3" -ip "192.168.100.10/24"

5. Configuration files for each host

Download a copy of the nebula example configuration.

  • On the lighthouse node, you'll need to ensure am_lighthouse: true is set.

  • On the individual hosts, ensure the lighthouse is defined properly in the static_host_map section, and is added to the lighthouse hosts section.
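
Concretely, the host-side half of that looks something like the sketch below, reusing the example from the config's own comments (lighthouse nebula IP 192.168.100.1, reachable at 100.64.22.11:4242):

static_host_map:
  "192.168.100.1": ["100.64.22.11:4242"]

lighthouse:
  am_lighthouse: false
  hosts:
    - "192.168.100.1"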

6. Copy nebula credentials, configuration, and binaries to each host

For each host, copy the nebula binary to the host, along with config.yml from step 5, and the files ca.crt, {host}.crt, and {host}.key from step 4.

DO NOT COPY ca.key TO INDIVIDUAL NODES.

7. Run nebula on each host

./nebula -config /path/to/config.yml
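
If you want nebula supervised on a systemd host, a minimal unit along these lines is a common pattern (a sketch; the binary and config paths are assumptions, not project defaults):

[Unit]
Description=Nebula overlay networking tool
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/nebula -config /etc/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target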

Building Nebula from source

Make sure you have Go installed, clone this repo, and change to the nebula directory.

To build nebula for all platforms: make all

To build nebula for a specific platform (ex, Windows): make bin-windows

See the Makefile for more details on build targets.
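
Putting it together, a typical from-source build looks like this (assuming a Go toolchain is already on your PATH):

git clone https://github.com/slackhq/nebula.git
cd nebula
make all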

Curve P256 and BoringCrypto

The default curve used for cryptographic handshakes and signatures is Curve25519. This is the recommended setting for most users. If your deployment has certain compliance requirements, you have the option of creating your CA using nebula-cert ca -curve P256 to use NIST Curve P256. The CA will then sign certificates using ECDSA P256, and any hosts using these certificates will use P256 for ECDH handshakes.
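
For example, pairing the CA command from step 3 above with the P256 option:

./nebula-cert ca -name "Myorganization, Inc" -curve P256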

In addition, Nebula can be built using the BoringCrypto GOEXPERIMENT by running either of the following make targets:

make bin-boringcrypto
make release-boringcrypto

This is not the recommended default deployment, but may be useful based on your compliance requirements.

Credits

Nebula was created at Slack Technologies, Inc by Nate Brown and Ryan Huber, with contributions from Oliver Fross, Alan Lam, Wade Simmons, and Lining Wang.


nebula's Issues

IPv6 for overlay network?

Is it possible to use IPv6 for the internal overlay network? Our IPv4 RFC1918 allocation strategy has been...not good.

cmd/nebula/main.go:8:2: cannot find package "github.com/slackhq/nebula"

During "make all" on a fresh Ubuntu machine on Amazon Lightsail, I get this:

$ make all
make bin-linux
make[1]: Entering directory '/home/ubuntu/nebula'
mkdir -p build/linux
GOARCH=amd64 GOOS=linux go build -o build/linux/nebula -ldflags "-X main.Build=dev+20191120181618" ./cmd/nebula
cmd/nebula/main.go:8:2: cannot find package "github.com/slackhq/nebula" in any of:
/usr/lib/go-1.6/src/github.com/slackhq/nebula (from $GOROOT)
/home/ubuntu/nebula/src/github.com/slackhq/nebula (from $GOPATH)
Makefile:51: recipe for target 'bin-linux' failed
make[1]: *** [bin-linux] Error 1
make[1]: Leaving directory '/home/ubuntu/nebula'
Makefile:6: recipe for target 'all' failed
make: *** [all] Error 2

I'm probably an idiot, but what am I missing?

Feature Request: Distribution of DNS

Is there any possibility that the lighthouse can act as a DNS server?

I want to assign the hosts in the overlay network a domain name. Since I do not want to edit the hosts file on each client, I wonder if I can arrange this using the lighthouse.

Oliver
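
For what it's worth, the example configs reproduced later on this page hint at exactly this: the lighthouse section carries a commented-out serve_dns option. Enabling it would presumably look like the following (an untested sketch):

lighthouse:
  am_lighthouse: true
  serve_dns: true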

FATA[0000] error while adding CA certificate to CA trust store: input did not contain a valid PEM encoded block

Got this error after running the program between two machines once. It worked (kinda, I could ping), but then I couldn't run it on the same machine again.

FATA[0000] error while adding CA certificate to CA trust store: input did not contain a valid PEM encoded block

Forgive me, but I'm new to this -- see the link to a video of what happened and tell me if you find something I did wrong?
(This rough cut, with all the identifying info shown, is temporary and for diagnostics only: https://www.youtube.com/watch?v=64yrCHKIAe0)

MIPS64

Are there any plans to support mips64?

hole punching fails

One lighthouse, IP: 192.168.111.1.
Two local nodes are behind a difficult NAT, IPs: 192.168.111.2 and 192.168.111.4.
The two local nodes cannot connect to each other.

Here are the logs from node 192.168.111.2:

 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="219.142.145.143:33902" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="219.142.145.143:33902" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="219.142.145.143:33902" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="219.142.145.143:33902" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="219.142.145.143:33902" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="192.168.1.13:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="192.168.122.1:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="172.17.0.1:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="219.142.145.143:33902" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="192.168.1.13:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="192.168.122.1:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="172.17.0.1:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="219.142.145.143:33902" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="192.168.1.13:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="192.168.122.1:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="172.17.0.1:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="219.142.145.143:33902" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="192.168.1.13:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="192.168.122.1:46888" vpnIp=192.168.111.4
 level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=1352070048 remoteIndex=0 udpAddr="172.17.0.1:46888" vpnIp=192.168.111.4

Failed to open udp listener error="protocol not available"

OS: Raspbian
OS Ver:

pi@bananapi-r1:~/nebula $ uname -r
3.4.112-sun7i

nebula configuration:

listen:
  host: 0.0.0.0
  port: 0

Error message:

pi@bananapi-r1:~/nebula $ sudo ./nebula -config config.yaml
INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip:<nil> proto:0 startPort:0]"
INFO[0000] Firewall rule added                           firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip:<nil> proto:0 startPort:0]"
INFO[0000] Firewall started                              firewallHash=0b03f21d896ea52a717719ed2630c566ea6c212951db0551b4d8c04f5f15dd09
FATA[0000] Failed to open udp listener                   error="protocol not available"

flag provided but not defined: -service

Version: dev+20191203175553
Windows 10
Binaries built on Fedora 31 make all

$ ./nebula.exe -service install                                                                                   
flag provided but not defined: -service
Usage of C:\Users\User\Documents\nebula.exe:
  -config string
        Path to either a file or directory to load configuration from
  -help
        Print command line usage
  -test
        Test the config and print the end result. Non zero exit indicates a faulty config
  -version
        Print version

Nodes can see the lighthouse but they can't see each other

Hi

I set up a small network of 3+ nodes. Non-LH nodes can ping the LH, and the LH can ping the nodes, but the nodes can't ping each other.

This seems to work only for the nodes that are on the same wifi network. Anything from an external node to another external node, or from external to internal, does not work, unless another form of VPN, like WireGuard, is active between the external nodes.

The LH is behind a router so I port forwarded the default port, this seems to work given that any of the nodes can connect to the LH.

It is interesting that when I try to ping from one of the external nodes to a node on the home wifi, there is activity on the receiving internal node, but the pings are all unsuccessful, meaning the ping just stalls.


time="2019-12-11T14:46:38-06:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=688165208 remoteIndex=0 udpAddr="192.168.0.23:59683" vpnIp=10.x.0.12

time="2019-12-11T14:46:40-06:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=688165208 remoteIndex=0 udpAddr="EXTERNAL-IP:59683" vpnIp=10.x.0.12

time="2019-12-11T14:46:43-06:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=688165208 remoteIndex=0 udpAddr="10.3.0.2:59683" vpnIp=10.x.0.12


I have all the punch stuff enabled. Am I supposed to forward more ports or port ranges?

Please bear in mind that in the given situation WG works perfectly, and all the WG nodes can see each other without issues, including all the traffic routing setup. I would like to set up Nebula as a fallback solution, in case one wonders why I am trying to use both.

Fatal Error: TAP driver

Activate failed: Failed to find the tap device in registry with specified ComponentId 'tap0901', TAP driver may be not installed

The docs don't mention the need for a TAP driver. Which driver is recommended?
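
(For reference, a later report on this page notes installing the TAP driver that ships with OpenVPN, which is what provides the tap0901 ComponentId this error refers to.)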

Performance

The release blog post mentions that

There cannot be a performance penalty that greatly increases latency or reduces available bandwidth between hosts.

Trying this on two DigitalOcean servers brings the iperf benchmark down from 2 Gbits/sec to 200 Mbits/sec. Is that to be expected, or are there ways to optimize this?
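
In case it helps anyone benchmarking: the example configs further down this page expose a few commented-out knobs on the UDP side that seem like natural first things to try (the values below are the ones suggested in those comments, not tuned recommendations):

listen:
  batch: 64
  read_buffer: 10485760
  write_buffer: 10485760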

Question: NAT Setup

I seem to be missing something important. If I set up a mesh of hosts that all have direct public IP addresses, it works fine. However, if I have a network with a lighthouse (public IP) and all nodes behind NAT, the nodes will not connect to each other. The lighthouse is able to communicate with all hosts, but hosts are not able to communicate with each other.

Watching the logs I see connections trying to be made to both the NAT public, and the private IPs.

I have enabled punchy and punch_back, but it does not seem to help.

Hope it is something simple?

drop_local_broadcast and drop_multicast not effective

Using Wireshark I can see broadcast & multicast traffic on the network no matter which state these flags are set to.
I tried checking nebula's code and could not find where drop_local_broadcast and drop_multicast are actually used, so I am having a hard time debugging this.

Can someone explain to me how these flags are supposed to work, and where I can find the implementation?

vagrant demo, ansible error: 'map' object does not support item assignment

Hi

I'm trying to test the demo/vagrant environment, but the ansible playbook fails:

TASK [nebula : sign using the root key] ****************************************************************************************************************************************************************************************************
fatal: [generic1.vagrant]: FAILED! => {"msg": "Unexpected templating type error occurred on (nebula-cert sign -ca-crt /opt/vagrant-test-ca.crt -ca-key /opt/vagrant-test-ca.key -duration 4320h -groups vagrant -ip {{ hostvars[inventory_hostname][vagrant_ifce]['ipv4']['address'] | to_nebula_ip }}/9 -name {{ ansible_hostname }}.nebula -out-crt /etc/nebula/host.crt -out-key /etc/nebula/host.key): 'map' object does not support item assignment"}

Any idea why this happens?

Thanks.

Whitelist/Blacklist local interfaces

We have been playing with nebula to build a higher-speed overlay over the internet between several of our sites (all behind different NATs), rather than relying on the IPsec built into our firewalls. One issue we are running into, however, is that even if we set preferredRanges in the config (which is loading/parsing correctly based on the logs), the nodes will find each other over the MPLS/low-speed networks and not the internet. All sites have a fairly low-speed MPLS any-any mesh.

It would be great if we could tell nebula not to include a local IP as part of the path selection criteria. In the interim I can block the UDP port at each site's firewall to/from the MPLS zone, but that gets unwieldy fairly quickly.

Please add "next steps" or "API" to readme

Hey,

I think it would be useful to run ansible over nebula to manage a docker swarm. The readme has a getting started guide, but what about the rest of the usage lifecycle (e.g. how do you actually use nebula once it's configured)?

Would it be possible to add a section of docs for next steps once the hosts are connected, and perhaps some API list of functions/commands/arguments/options?

Thanks for sharing your project with us,

Bionicles

Windows service

This is great - thanks!

My application is a blend of cloud (linux) and multi-site on-prem (mostly windows but also linux) machines. I found that I could get this installed as a service on Windows 7 and Windows 10 using cygwin's cygrunsrv, but was unable to using the built-in sc from Microsoft (see https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/sc-create and https://support.microsoft.com/en-us/help/251192/how-to-create-a-windows-service-by-using-sc-exe).

I suspect this is because the code doesn't use golang.org/x/sys/windows/svc to implement the Windows service hooks.

If you have got this to run as a service on Windows workstations, could you please include tips?
Thanks!
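
For anyone exploring this, a bare-bones handler built on golang.org/x/sys/windows/svc would look roughly like the sketch below. The service name "nebula" and the idea of launching nebula's main loop from Execute are assumptions for illustration, not the project's actual code.

package main

import (
	"golang.org/x/sys/windows/svc"
)

type nebulaService struct{}

// Execute is invoked by the Windows service control manager (SCM).
func (s *nebulaService) Execute(args []string, r <-chan svc.ChangeRequest, changes chan<- svc.Status) (bool, uint32) {
	changes <- svc.Status{State: svc.StartPending}
	// Assumption: nebula's main loop would be started in a goroutine here.
	changes <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop | svc.AcceptShutdown}
	for c := range r {
		switch c.Cmd {
		case svc.Interrogate:
			changes <- c.CurrentStatus
		case svc.Stop, svc.Shutdown:
			changes <- svc.Status{State: svc.StopPending}
			return false, 0
		}
	}
	return false, 0
}

func main() {
	// svc.Run blocks until the SCM stops the service.
	_ = svc.Run("nebula", &nebulaService{})
}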

FATA[0001] no such device

I am running nebula on a Raspberry Pi 1 Model B+ with sudo ./nebula -config config.yaml, but it says FATA[0001] no such device. Does this mean it failed to create the tun device?

Windows 10 - Fatal Error if a tunnel adapter is already in use.

Running on Windows 10, nebula fails with a fatal error if some tunnel adapters are already in use.

The only relevant log line (even with the debug setting) is:
time="2019-12-05T15:14:45+11:00" level=fatal msg="Activate failed: A device attached to the system is not functioning."

This occurs if an existing OpenVPN session (or two) is running. Terminating all OpenVPN sessions allows nebula to run as expected.

Any thoughts?

Suggestion: Unify example config file extension

I deployed nebula on some of my machines after listening to a recent episode of Linux Unplugged. It worked really well. Thank you.

One issue that I ran into while setting up a systemd service on a linux machine is that, due to the yaml file extension mismatch between the provided example config file and the service file, the service refuses to start. The example config file is named "config.yaml" while the config file path in the systemd service file is "/etc/nebula/config.yml". Took me a while to figure out.

I would suggest unifying the file extensions to save some head-scratching for new users.

panic: runtime error on arm7 32bit binary

Hey guys, trying to fire up a lighthouse on an Odroid HC1 armv7 board. Here is the error I'm getting:

panic: runtime error: index out of range [3] with length 0

goroutine 1 [running]:
encoding/binary.bigEndian.Uint32(...)
/usr/lib/go/src/encoding/binary/binary.go:111
github.com/slackhq/nebula.ip2int(...)
/tmp/nebula/cidr_radix.go:140
github.com/slackhq/nebula.NewLightHouse(0xd20001, 0xa006401, 0xc0eb50, 0x1, 0x1, 0x3c, 0x1092, 0xc1ad4c, 0x0, 0x0)
/tmp/nebula/lighthouse.go:50 +0x1ac
github.com/slackhq/nebula.Main(0xbee18901, 0xa, 0x0, 0x705d88, 0x5)
/tmp/nebula/main.go:192 +0xe30
main.main()
/tmp/nebula/cmd/nebula/main.go:42 +0x1e4

last successful message: INFO[0000] UDP hole punching enabled

I'm on a custom debian-stretch image with a 4.14.127 kernel.

Ban a node from whole network

Hi there, and thanks for open sourcing this great tool!
I'd like to submit an idea, although I don't have a clue yet about how to implement it.

Here is the use case: ban a node from the whole network.
Maintaining a synchronized blacklist in each nebula client config file can quickly turn into a nightmare.

Some options I can think about:

  • As lighthouses typically are "master" nodes in the network, my idea is to add a config option to propagate lighthouses' blacklists to other nodes in the network.
  • implement an OCSP-like responder in lighthouses
  • add an optional OCSP client in Nebula talking to a real, independent, OCSP responder. (this probably is the quickest win)

I get the point this breaks the decentralized pattern of Nebula, but I feel large enough networks are bound to need a clean node exclusion process.

What do you think about this?
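
For context, a per-host version of this already exists in the example configs on this page: the pki section accepts a blacklist of certificate fingerprints (shown commented out in those configs). What this issue asks for is effectively a way to propagate that list network-wide:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key
  blacklist:
    - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72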

Documentation Request: Usage example with CSRs

The current README documentation involves generating keys and certificates on the CA host, then shipping the key+cert+config to the new client node. Generally speaking a better pattern would be to generate a key and certificate signing request on the client node, then ship the CSR to the CA host and sign a cert, then ship cert/config back to the client node.

If Nebula is capable of handling that pattern, could the README example be updated for it?
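
For what it's worth, nebula-cert already appears to support this split via keygen and sign -in-pub (another report further down this page uses them); the flow would look something like this, with illustrative file names:

# On the client node: generate a keypair; the private key never leaves this machine
./nebula-cert keygen -out-key host1.key -out-pub host1.pub

# Ship host1.pub to the CA host and sign it there
./nebula-cert sign -name "host1" -ip "192.168.100.5/24" -in-pub host1.pub

# Ship the resulting host1.crt (plus ca.crt and a config) back to the client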

error in README.md documentation - node names

In the README.md section titled:

4. Nebula host keys and certificates generated from that certificate authority

it names 3 nodes lighthouse1, host1, host3 but in the commands following it uses node names of lighthouse1, laptop, server1 and host3...

This assumes you have three nodes, named lighthouse1, host1, host3. You can name the nodes any way you'd like, including FQDN. You'll also need to choose IP addresses and the associated subnet. In this example, we are creating a nebula network that will use 192.168.100.x/24 as its network range. This example also demonstrates nebula groups, which can later be used to define traffic rules in a nebula network.

./nebula-cert sign -name "lighthouse1" -ip "192.168.100.1/24"
./nebula-cert sign -name "laptop" -ip "192.168.100.2/24" -groups "laptop,home,ssh"
./nebula-cert sign -name "server1" -ip "192.168.100.9/24" -groups "servers"
./nebula-cert sign -name "host3" -ip "192.168.100.9/24"

Any concept of routing via nodes?

Hi!
Great project! I'm currently testing this against Zerotier, they seem quite similar but ZeroTier also has the concept of advertising routes down to the nodes via the controller (a similar setup would be from the lighthouse server).

Just wondering if there is any scope for adding the ability to advertise routes via individual nodes. In essence, on the lighthouse server you could add a field to say that 10.10.0.0/16 should be reached via node10 and 10.20.0.0/16 via node20, similar to how a route reflector tells nodes how to route to each other directly.

I can achieve the same effect manually by running route reflectors/route servers, but it's nice in ZeroTier the way you can do it all from the interface and do away with BGP.

Cheers :)
Jon.
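
For what it's worth, newer versions of nebula's example config include an unsafe_routes list under the tun section that covers part of this (static per-node routes through the overlay, rather than lighthouse-advertised ones). A sketch, assuming node10's nebula IP is 192.168.100.10:

tun:
  unsafe_routes:
    - route: 10.10.0.0/16
      via: 192.168.100.10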

Docker image

When someone has some time, a Docker image would be awesome :)

[macOS] Unable to ping its own IP

Lighthouse (192.168.100.1): a DigitalOcean droplet with a public IP.
Laptop (192.168.100.2): MacBook Air 2018, Catalina.
Config: I use the example config (with the lighthouse's public IP).

I can ping from Laptop to Lighthouse, and from Lighthouse to Laptop. But on the Laptop, I cannot ping its own IP (192.168.100.2).


On another Linux laptop with IP (192.168.100.3), I can ping its own IP.

Is this a limitation of macOS utun, or do I need to configure something on my Mac?

msg="dropping outbound packet"

I have nebula set up, all hosts talking to each other, with no communication issues it would seem. I'm attempting to use nebula to replace a GRE tunnel setup, with the following iptables config:

sysctl -w net.ipv4.ip_forward=1

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X

iptables -t nat -I PREROUTING -p tcp --match multiport ! --dports 22 -j DNAT --to-destination 10.50.0.1:65535
iptables -t nat -I PREROUTING 1 -j LOG --log-prefix "[netfilter]: " --log-level 7

But when doing so, I receive a message from nebula in /var/log/syslog:

Dec  4 09:11:11 node-01 kernel: [41774.774651] [netfilter]: IN=eno1 OUT= MAC=94:c6:91:af:31:4d:00:c1:64:1f:dd:ba:08:00 SRC=185.176.27.30 DST=x.x.x.x LEN=40 TOS=0x00 PREC=0x40 TTL=247 ID=50880 PROTO=TCP SPT=54435 DPT=7196 WINDOW=1024 RES=0x00 SYN URGP=0 
Dec  4 09:11:11 node-01 nebula[1664]: time="2019-12-04T09:11:11+11:00" level=debug msg="dropping outbound packet" fwPacket="&{3115326238 171048961 54435 65535 6 false}" vpnIp=10.50.0.1

relevant part of the config file:

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow all inbound traffic
    - port: any
      proto: any
      host: any

This works fine with other setups (GRE, OpenVPN, etc.), and I'd like to know if there's a way to allow this in nebula.

Imprecise error messages

Running nebula -config /this/doesnt/exist or nebula -config /no/read/permissions (nonexistent path and unreadable file, respectively) results in

FATA[0000] no pki.ca path or PEM data provided          

While generally correct, I suppose, the message should perhaps point out that the config file could not be found or read.

How to change from AES?

At your talk it was said to avoid using AES-256-GCM if you'll be adding ARMv7 devices to the network, because it will result in extremely poor performance on them due to the lack of hardware-accelerated AES. The recommendation was to use keccak, but how does one configure that in Nebula?
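
The example configs on this page list exactly one alternative cipher, chachapoly, and stress that the value must be identical on all nodes and lighthouses. Presumably that (rather than keccak) is the intended setting for ARMv7 hosts, since ChaCha20-Poly1305 does not depend on AES hardware support:

cipher: chachapoly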

nebula-cert sign fails if a .key file with the same name as -in-pub exists

$ nebula-cert keygen -out-key foo.key -out-pub foo.pub
$ nebula-cert sign -ip 0.0.0.0/32 -name foo -groups ceph,swarm -in-pub foo.pub
Error: refusing to overwrite existing key: foo.key
$ rm foo.key
$ nebula-cert sign -ip 0.0.0.0/32 -name foo -groups ceph,swarm -in-pub foo.pub
$ ls foo*
foo.crt  foo.pub
$

Node outside of LAN can only talk to lighthouse

I have a bunch of computers on my LAN with one lighthouse that is accessible from the outside world.
Lighthouse: 192.168.42.99 (mydomain.com:4242)
Lan Machine 1 (A) : 192.168.42.200
Lan Machine 2 (B): 192.168.42.203

Outside-LAN machine (C): 192.168.42.10

using the 192.168.42.0 IPs:

  • A, B and lighthouse can ping each other without any issue
  • C can ping the lighthouse but not A nor B
  • A and B can't ping C
  • Lighthouse can ping C

Lighthouse config:

# This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
# Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)

# PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
pki:
  # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/pihole.crt
  key: /etc/nebula/pihole.key
  #blacklist is a list of certificate fingerprints that we will refuse to talk to
  #blacklist:
  #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72

# The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
# A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
# The syntax is:
#   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
# Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
static_host_map:
  "192.168.42.99": ["mydomain.com:4242"]


lighthouse:
  # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
  # you have configured to be lighthouses in your network
  am_lighthouse: true
  # serve_dns optionally starts a dns listener that responds to various queries and can even be
  # delegated to for resolution
  # serve_dns: true
  # interval is the number of seconds between updates from this node to a lighthouse.
  # during updates, a node sends information about its current IP addresses to each node.
  interval: 60
  # hosts is a list of lighthouse hosts this node should report to and query from
  # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
  hosts:
          #  - "192.168.42.1"

# Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
# however using port 0 will dynamically assign a port and is recommended for roaming nodes.
listen:
  host: 0.0.0.0
  port: 4242
  # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
  # default is 64, does not support reload
  #batch: 64
  # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
  # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_default)
  # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
  # max, net.core.rmem_max and net.core.wmem_max
  #read_buffer: 10485760
  #write_buffer: 10485760

# Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
punchy: true
# punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
# this is extremely useful if one node is behind a difficult nat, such as symmetric
punch_back: true

# Cipher allows you to choose between the available ciphers for your network.
# IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
#cipher: chachapoly

# Local range is used to define a hint about the local network range, which speeds up discovering the fastest
# path to a network adjacent nebula node.
#local_range: "172.16.0.0/24"

# sshd can expose informational and administrative functions via ssh this is a
#sshd:
  # Toggles the feature
  #enabled: true
  # Host and port to listen on, port 22 is not allowed for your safety
  #listen: 127.0.0.1:2222
  # A file containing the ssh host private key to use
  # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
  #host_key: ./ssh_host_ed25519_key
  # A file containing a list of authorized public keys
  #authorized_users:
    #- user: steeeeve
      # keys can be an array of strings or single string
      #keys:
        #- "ssh public key string"

# Configure the private interface. Note: addr is baked into the nebula certificate
tun:
  # Name of the device
  dev: nebula1
  # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
  drop_local_broadcast: false
  # Toggles forwarding of multicast packets
  drop_multicast: false
  # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
  tx_queue: 500
  # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
  mtu: 1300
  # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
  routes:
    #- mtu: 8800
    #  route: 10.0.0.0/16

# TODO
# Configure logging level
logging:
  # panic, fatal, error, warning, info, or debug. Default is info
  level: info
  # json or text formats currently available. Default is text
  format: text

#stats:
  #type: graphite
  #prefix: nebula
  #protocol: tcp
  #host: 127.0.0.1:9999
  #interval: 10s

  #type: prometheus
  #listen: 127.0.0.1:8080
  #path: /metrics
  #namespace: prometheusns
  #subsystem: nebula
  #interval: 10s

# Nebula security group configuration
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  # The firewall is default deny. There is no way to write a deny rule.
  # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
  # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
  # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
  #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
  #   proto: `any`, `tcp`, `udp`, or `icmp`
  #   host: `any` or a literal hostname, ie `test-host`
  #   group: `any` or a literal group name, ie `default-group`
  #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
  #   cidr: a CIDR, `0.0.0.0/0` is any.
  #   ca_name: An issuing CA name
  #   ca_sha: An issuing CA shasum

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: any
      host: any

C config:

# This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
# Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)

# PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
pki:
  # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/work.crt
  key: /etc/nebula/work.key
  #blacklist is a list of certificate fingerprints that we will refuse to talk to
  #blacklist:
  #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72

# The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
# A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
# The syntax is:
#   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
# Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
static_host_map:
  "192.168.42.99": ["ftpix.com:4242"]

lighthouse:
  # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
  # you have configured to be lighthouses in your network
  am_lighthouse: false
  # serve_dns optionally starts a dns listener that responds to various queries and can even be
  # delegated to for resolution
  #serve_dns: false
  # interval is the number of seconds between updates from this node to a lighthouse.
  # during updates, a node sends information about its current IP addresses to each node.
  interval: 60
  # hosts is a list of lighthouse hosts this node should report to and query from
  # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
  hosts:
    - "192.168.42.99"

# Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
# however using port 0 will dynamically assign a port and is recommended for roaming nodes.
listen:
  host: 0.0.0.0
  port: 0
  # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
  # default is 64, does not support reload
  #batch: 64
  # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
  # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_default)
  # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
  # max, net.core.rmem_max and net.core.wmem_max
  #read_buffer: 10485760
  #write_buffer: 10485760

# Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
punchy: true
# punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
# this is extremely useful if one node is behind a difficult nat, such as symmetric
punch_back: true

# Cipher allows you to choose between the available ciphers for your network.
# IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
#cipher: chachapoly

# Local range is used to define a hint about the local network range, which speeds up discovering the fastest
# path to a network adjacent nebula node.
#local_range: "172.16.0.0/24"

# sshd can expose informational and administrative functions via ssh this is a
#sshd:
  # Toggles the feature
  #enabled: true
  # Host and port to listen on, port 22 is not allowed for your safety
  #listen: 127.0.0.1:2222
  # A file containing the ssh host private key to use
  # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
  #host_key: ./ssh_host_ed25519_key
  # A file containing a list of authorized public keys
  #authorized_users:
    #- user: steeeeve
      # keys can be an array of strings or single string
      #keys:
        #- "ssh public key string"

# Configure the private interface. Note: addr is baked into the nebula certificate
tun:
  # Name of the device
  dev: nebula1
  # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
  drop_local_broadcast: false
  # Toggles forwarding of multicast packets
  drop_multicast: false
  # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
  tx_queue: 500
  # Default MTU for every packet, safe setting is (and the default) 1300 for internet based traffic
  mtu: 1300
  # Route based MTU overrides, you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
  routes:
    #- mtu: 8800
    #  route: 10.0.0.0/16

# TODO
# Configure logging level
logging:
  # panic, fatal, error, warning, info, or debug. Default is info
  level: info
  # json or text formats currently available. Default is text
  format: text

#stats:
  #type: graphite
  #prefix: nebula
  #protocol: tcp
  #host: 127.0.0.1:9999
  #interval: 10s

  #type: prometheus
  #listen: 127.0.0.1:8080
  #path: /metrics
  #namespace: prometheusns
  #subsystem: nebula
  #interval: 10s

# Nebula security group configuration
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  # The firewall is default deny. There is no way to write a deny rule.
  # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
  # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
  # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
  #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
  #   proto: `any`, `tcp`, `udp`, or `icmp`
  #   host: `any` or a literal hostname, ie `test-host`
  #   group: `any` or a literal group name, ie `default-group`
  #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
  #   cidr: a CIDR, `0.0.0.0/0` is any.
  #   ca_name: An issuing CA name
  #   ca_sha: An issuing CA shasum

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: icmp
      host: any

    # Allow tcp/443 from any host with BOTH laptop and home group
    - port: any
      proto: tcp
      host: any

    - port: any
      proto: udp
      host: any

Logs from C:

Dec 05 15:55:20 gz-t480 nebula[32698]: time="2019-12-05T15:55:20+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.1.1:52803" vpnIp=192.168.42.198
Dec 05 15:55:22 gz-t480 nebula[32698]: time="2019-12-05T15:55:22+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.1.198:52803" vpnIp=192.168.42.198
Dec 05 15:55:23 gz-t480 nebula[32698]: time="2019-12-05T15:55:23+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="192.168.200.198:52803" vpnIp=192.168.42.198
Dec 05 15:55:25 gz-t480 nebula[32698]: time="2019-12-05T15:55:25+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.21.0.1:52803" vpnIp=192.168.42.198
Dec 05 15:55:27 gz-t480 nebula[32698]: time="2019-12-05T15:55:27+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.19.0.1:52803" vpnIp=192.168.42.198
Dec 05 15:55:29 gz-t480 nebula[32698]: time="2019-12-05T15:55:29+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.17.0.1:52803" vpnIp=192.168.42.198
Dec 05 15:55:31 gz-t480 nebula[32698]: time="2019-12-05T15:55:31+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.20.0.1:52803" vpnIp=192.168.42.198
Dec 05 15:55:33 gz-t480 nebula[32698]: time="2019-12-05T15:55:33+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.21.0.1:58904" vpnIp=192.168.42.198
Dec 05 15:55:35 gz-t480 nebula[32698]: time="2019-12-05T15:55:35+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.19.0.1:58904" vpnIp=192.168.42.198
Dec 05 15:55:38 gz-t480 nebula[32698]: time="2019-12-05T15:55:38+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.17.0.1:58904" vpnIp=192.168.42.198
Dec 05 15:55:40 gz-t480 nebula[32698]: time="2019-12-05T15:55:40+08:00" level=info msg="Handshake message sent" handshake="map[stage:1 style:ix_psk0]" initiatorIndex=89620360 remoteIndex=0 udpAddr="172.20.0.1:58904" vpnIp=192.168.42.198

Question: LXD Bridge Setup

I have Ubuntu homelab server with LXD enabled. My setup includes:

  • 1 DO lighthouse (nebula IP 192.168.16.1)
  • 1 Macbook (nebula IP 192.168.16.10)
  • 1 physical Ubuntu 18.04 server (nebula IP 192.168.16.2)
  • 1 LXD Ubuntu 18.04 server (LXD container) inside the above physical server (nebula IP 192.168.16.3)

My physical Ubuntu server has 3 interfaces: enp2s0 (physical, bridge mode), lanbr0 (bridge to enp2s0), and lxdbr0 (bridge for the LXD internal network). The physical NIC enp2s0 has been set to bridge mode with the following netplan config:

network:
    version: 2
    renderer: networkd
    ethernets:
        enp2s0:
            dhcp4: no
    bridges:
        lanbr0:
            interfaces: [enp2s0]
            macaddress: (same MAC address as the physical enp2s0)
            dhcp4: true
            parameters:
                stp: false
                forward-delay: 0

The LXD Ubuntu 18.04 server (LXD container) has 2 interfaces: eth0 (bridged to parent lxdbr0), and eth1 (bridged to parent lanbr0). Since this LXD container is bridged to lanbr0, it is visible to my physical homelab LAN.

When my MacBook is in the homelab LAN, I can use nebula to access to both physical server and LXD container.

But when I am in another house (100m away from my homelab, but both connected to the same ISP through fibre), my MacBook can ping (and ssh into) the physical server (192.168.16.2), but cannot see the LXD container (192.168.16.3).

Do I need to make any changes to my LXD setup in order to access my LXD container when away from my homelab?

Revisit DPDK (and perhaps experiment with XDP)

So far we haven't needed additional performance under our workloads, but I'm making a placeholder issue so I don't forget to port the most recent code forward and see how the speed looks under the latest DPDK.

Feature Request: Support for RFC 8410 Ed25519 certificates and keys

So I was looking at how the certificates generated by nebula-cert work, and I noticed that Nebula is using its own type of certificate. I also understand that it's encoding information like groups into these certificates, potentially making use of x509 harder.

That being said, I saw recently that Go added support for RFC 8410 certificates supporting Ed25519 in golang/go#25355 (https://go-review.googlesource.com/c/go/+/175478/ )

I wanted to start a discussion on whether it would be possible to use standard certificates in the future. I think supporting the customized fields that Nebula uses would be the most difficult aspect, but it should still be possible using standard fields within x509 certificates.

For example, in the project ghostunnel (a TLS proxy), it supports doing authorization using x509 certificates by supporting matches against the subject of the certificate (https://github.com/square/ghostunnel/blob/master/docs/ACCESS-FLAGS.md).

One type of access check that ghostunnel supports is --allow-uri, which supports a SPIFFE URI, which I felt could be a great way to encode all the "extra" information that Nebula wants. Since a URI can support things like query parameters, it seems like even things like the IP, subnets, groups, etc. could all be encoded into a URI.

My reasoning behind wanting this is that I would like to be able to use off the shelf tools like Vault to handle my PKI infrastructure, and I want to avoid provisioning long lived certificates, but having to use non-standard tooling makes this difficult.

I'm not an expert in this space, so I'd love to know if this makes sense or if there's better alternatives, or if this is simply out of scope for Nebula. Thanks!

ping, no ssh

I am trying out nebula with a lighthouse on DO and two clients. I can ping the lighthouse as well as the other client, but I cannot ssh into them. Do I need to open the ssh port or something?

Oliver

P.S. The firewall rules are the ones from the example:

firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  # The firewall is default deny. There is no way to write a deny rule.
  # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
  # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr)
  # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
  #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
  #   proto: `any`, `tcp`, `udp`, or `icmp`
  #   host: `any` or a literal hostname, ie `test-host`
  #   group: `any` or a literal group name, ie `default-group`
  #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
  #   cidr: a CIDR, `0.0.0.0/0` is any.
  #   ca_name: An issuing CA name
  #   ca_sha: An issuing CA shasum

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: icmp
      host: any

    # Allow tcp/443 from any host with BOTH laptop and home group
    - port: 443
      proto: tcp
      groups:
        - laptop
        - home
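
Since the firewall is default deny, the inbound list above (icmp from anyone, tcp/443 only from hosts with both the laptop and home groups) never admits SSH. A rule along these lines on the machines you ssh into should help (a sketch; scoping it to a group or cidr instead of host: any would be tighter):

  inbound:
    - port: 22
      proto: tcp
      host: any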

Activating lighthouse drops all networking

I am running Ubuntu 18.04

Nebula v1.0.0

I configured my first node, that first node is my lighthouse. When I activate the lighthouse functionality, my networking completely drops for this host.

My host is directly connected with a static IP, with no router or internal IP.

# This is the nebula example configuration file. You must edit, at a minimum, the static_host_map, lighthouse, and firewall sections
# Some options in this file are HUPable, including the pki section. (A HUP will reload credentials from disk without affecting existing tunnels)

# PKI defines the location of credentials for this node. Each of these can also be inlined by using the yaml ": |" syntax.
pki:
  # The CAs that are accepted by this node. Must contain one or more certificates created by 'nebula-cert ca'
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/lighthouse1.crt
  key: /etc/nebula/lighthouse1.key
  #blacklist is a list of certificate fingerprints that we will refuse to talk to
  #blacklist:
  #  - c99d4e650533b92061b09918e838a5a0a6aaee21eed1d12fd937682865936c72

# The static host map defines a set of hosts with fixed IP addresses on the internet (or any network).
# A host can have multiple fixed IP addresses defined here, and nebula will try each when establishing a tunnel.
# The syntax is:
#   "{nebula ip}": ["{routable ip/dns name}:{routable port}"]
# Example, if your lighthouse has the nebula IP of 192.168.100.1 and has the real ip address of 100.64.22.11 and runs on port 4242:
static_host_map:
  "publicIP": ["publicIP:4242"]


lighthouse:
  # am_lighthouse is used to enable lighthouse functionality for a node. This should ONLY be true on nodes
  # you have configured to be lighthouses in your network
  am_lighthouse: true
  # serve_dns optionally starts a dns listener that responds to various queries and can even be
  # delegated to for resolution
  #serve_dns: false
  # interval is the number of seconds between updates from this node to a lighthouse.
  # during updates, a node sends information about its current IP addresses to each node.
  interval: 60
  # hosts is a list of lighthouse hosts this node should report to and query from
  # IMPORTANT: THIS SHOULD BE EMPTY ON LIGHTHOUSE NODES
#  hosts:
#    - "192.168.100.1"

# Port Nebula will be listening on. The default here is 4242. For a lighthouse node, the port should be defined,
# however using port 0 will dynamically assign a port and is recommended for roaming nodes.
listen:
  host: 0.0.0.0
  port: 4242
  # Sets the max number of packets to pull from the kernel for each syscall (under systems that support recvmmsg)
  # default is 64, does not support reload
  #batch: 64
  # Configure socket buffers for the udp side (outside), leave unset to use the system defaults. Values will be doubled by the kernel
  # Default is net.core.rmem_default and net.core.wmem_default (/proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_default)
  # Maximum is limited by memory in the system, SO_RCVBUFFORCE and SO_SNDBUFFORCE is used to avoid having to raise the system wide
  # max, net.core.rmem_max and net.core.wmem_max
  #read_buffer: 10485760
  #write_buffer: 10485760

# Punchy continues to punch inbound/outbound at a regular interval to avoid expiration of firewall nat mappings
punchy: true
# punch_back means that a node you are trying to reach will connect back out to you if your hole punching fails
# this is extremely useful if one node is behind a difficult nat, such as symmetric
punch_back: true

# Cipher allows you to choose between the available ciphers for your network.
# IMPORTANT: this value must be identical on ALL NODES/LIGHTHOUSES. We do not/will not support use of different ciphers simultaneously!
#cipher: chachapoly

# Local range is used to define a hint about the local network range, which speeds up discovering the fastest
# path to a network adjacent nebula node.
#local_range: "172.16.0.0/24"

# sshd can expose informational and administrative functions via ssh this is a
#sshd:
  # Toggles the feature
#  enabled: true
  # Host and port to listen on, port 22 is not allowed for your safety
#  listen: 127.0.0.1:477
  # A file containing the ssh host private key to use
  # A decent way to generate one: ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" < /dev/null
#  host_key:
  # A file containing a list of authorized public keys
#  authorized_users:
#    - user: 
      # keys can be an array of strings or single string
#      keys:
#        - ""

# Configure the private interface. Note: addr is baked into the nebula certificate
tun:
  # Name of the device
  dev: xoverlay
  # Toggles forwarding of local broadcast packets, the address of which depends on the ip/mask encoded in pki.cert
  drop_local_broadcast: false
  # Toggles forwarding of multicast packets
  drop_multicast: false
  # Sets the transmit queue length, if you notice lots of transmit drops on the tun it may help to raise this number. Default is 500
  tx_queue: 500
  # Default MTU for every packet; the default (and a safe setting) is 1300 for internet based traffic
  mtu: 1300
  # Route based MTU overrides; if you have known vpn ip paths that can support larger MTUs you can increase/decrease them here
  routes:
    #- mtu: 8800
    #  route: 10.0.0.0/16

# TODO
# Configure logging level
logging:
  # panic, fatal, error, warning, info, or debug. Default is info
  level: info
  # json or text formats currently available. Default is text
  format: text

#stats:
  #type: graphite
  #prefix: nebula
  #protocol: tcp
  #host: 127.0.0.1:9999
  #interval: 10s

  #type: prometheus
  #listen: 127.0.0.1:8080
  #path: /metrics
  #namespace: prometheusns
  #subsystem: nebula
  #interval: 10s
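  # Note: stats is a single map, so only one of the two variants above (graphite or
  # prometheus) can be enabled at a time.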

# Nebula security group configuration
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000
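    # Note: the timeout values above are Go-style duration strings ("120h" = 120 hours,
    # "3m" = 3 minutes), assuming nebula parses them with Go's standard duration format.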

  # The firewall is default deny. There is no way to write a deny rule.
  # Rules are comprised of a protocol, port, and one or more of host, group, or CIDR
  # Logical evaluation is roughly: port AND proto AND ca_sha AND ca_name AND (host OR group OR groups OR cidr); see the sketch after this config.
  # - port: Takes `0` or `any` as any, a single number `80`, a range `200-901`, or `fragment` to match second and further fragments of fragmented packets (since there is no port available).
  #   code: same as port but makes more sense when talking about ICMP, TODO: this is not currently implemented in a way that works, use `any`
  #   proto: `any`, `tcp`, `udp`, or `icmp`
  #   host: `any` or a literal hostname, ie `test-host`
  #   group: `any` or a literal group name, ie `default-group`
  #   groups: Same as group but accepts a list of values. Multiple values are AND'd together and a certificate would have to contain all groups to pass
  #   cidr: a CIDR, `0.0.0.0/0` is any.
  #   ca_name: An issuing CA name
  #   ca_sha: An issuing CA shasum

  outbound:
    # Allow all outbound traffic from this node
    - port: any
      proto: any
      host: any

  inbound:
    # Allow icmp between any nebula hosts
    - port: any
      proto: icmp
      host: any

    # Allow the sshd admin port 477 from any host (TEMP)
    - port: 477
      proto: udp
      host: any

    # Allow tcp/443 from any host with BOTH laptop and home group
    - port: 443
      proto: tcp
      groups:
        - laptop
        - home
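
To make the rule evaluation described in the config comments concrete, here is a minimal Go sketch of the documented matching logic. The types and the allowed function are hypothetical illustrations, not nebula's internal API, and the ca_name/ca_sha checks are omitted for brevity:

package main

import "fmt"

// Rule is a simplified, hypothetical model of a nebula firewall rule.
type Rule struct {
	StartPort, EndPort int         // 0/0 stands for "any"
	Proto              string      // "any", "tcp", "udp", or "icmp"
	Host               string      // "any" or a literal hostname
	Groups             []string    // the peer must hold ALL of these
	CIDRMatch          func() bool // stand-in for a real CIDR containment check
}

// Peer is the remote side's certificate-derived identity.
type Peer struct {
	Host   string
	Groups map[string]bool
}

// allowed mirrors the documented evaluation:
// port AND proto AND (host OR groups OR cidr)
func allowed(r Rule, port int, proto string, p Peer) bool {
	if r.StartPort != 0 && (port < r.StartPort || port > r.EndPort) {
		return false // port did not match
	}
	if r.Proto != "any" && r.Proto != proto {
		return false // protocol did not match
	}
	if r.Host == "any" || (r.Host != "" && r.Host == p.Host) {
		return true
	}
	if len(r.Groups) > 0 {
		all := true
		for _, g := range r.Groups {
			if !p.Groups[g] {
				all = false
				break
			}
		}
		if all {
			return true // peer carries every required group
		}
	}
	return r.CIDRMatch != nil && r.CIDRMatch()
}

func main() {
	// The tcp/443 rule above: both the laptop and home groups are required.
	rule := Rule{StartPort: 443, EndPort: 443, Proto: "tcp",
		Groups: []string{"laptop", "home"}}
	peer := Peer{Host: "laptop1", Groups: map[string]bool{"laptop": true, "home": true}}
	fmt.Println(allowed(rule, 443, "tcp", peer)) // prints: true
}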

Would really appreciate any assistance, thank you.

Windows 10: "failed to run 'netsh' to set address: exit status 1"

Hi,

I'm trying to set up a simple network with one lighthouse and one node. The lighthouse should run on Windows 10 (v1903, build 18362.476) while the node runs on macOS (Catalina, 10.15.1).

I've deployed the certificates and prepared both configs, and I've also installed the TAP driver from OpenVPN.

However, when I start the lighthouse node this error appears:

D:\nebula>.\nebula.exe --config .\config.yml
time="2019-11-20T12:47:12+01:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups:[] host:any ip:<nil> proto:0 startPort:0]"
time="2019-11-20T12:47:12+01:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:incoming endPort:0 groups:[] host:any ip:<nil> proto:1 startPort:0]"
time="2019-11-20T12:47:12+01:00" level=info msg="Firewall rule added" firewallRule="map[caName: caSha: direction:incoming endPort:443 groups:[laptop home] host: ip:<nil> proto:6 startPort:443]"
time="2019-11-20T12:47:12+01:00" level=info msg="Firewall started" firewallHash=3e3f317872f504cec08154d9fb0a726ebc68235d1a5075426317696bdd388336
time="2019-11-20T12:47:12+01:00" level=info msg="Main HostMap created" network=192.168.178.122/24 preferredRanges="[192.168.178.0/24]"
time="2019-11-20T12:47:12+01:00" level=fatal msg="failed to run 'netsh' to set address: exit status 1"

Here's the lighthouse config.yml:

pki:
  ca: D:\\nebula\\ca.crt
  cert: D:\\nebula\\lighthouse1.crt
  key: D:\\nebula\\lighthouse1.key

lighthouse:
  am_lighthouse: true
  interval: 60

listen:
  host: 0.0.0.0
  port: 4242

local_range: "192.168.178.0/24"

handshake_mac:
  key: "MYHANDSHAKE"
  accepted_keys:
    - "MYHANDSHAKE"

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300

logging:
  level: info
  format: text

# Nebula security group configuration
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any
    - port: 443
      proto: tcp
      groups:
        - laptop
        - home

Certificate verification relies on undefined protobuf behavior

Protobuf does not guarantee that encoding is canonical or repeatable down to the exact bytes (especially across protobuf versions); it only guarantees that the message will decode into the expected data.

nebula/cert/cert.go, lines 233 to 240 at commit d68a039:

// CheckSignature verifies the signature against the provided public key
func (nc *NebulaCertificate) CheckSignature(key ed25519.PublicKey) bool {
	b, err := proto.Marshal(nc.getRawDetails())
	if err != nil {
		return false
	}
	return ed25519.Verify(key, b, nc.Signature)
}

The code above verifies a certificate signature by re-encoding the certificate details into their protobuf representation and then checking the signature against the bytes produced by the verifier. If there is any skew between the encoder in the program that signed the certificate and the encoder in the program verifying it, the signature check fails even though the certificate is genuine.

The solution is to have the creator of the certificate encode the details before signing, and then transmit exactly those bytes alongside the signature:

message SignedCert {
  bytes data = 1;
  bytes signature = 2;
}

The verifier then runs the signature check over the data bytes exactly as they came from the signer, and only afterwards decodes them into RawNebulaCertificateDetails. Verification no longer depends on re-encoding, so encoder skew cannot break it.
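
Both sides of that scheme, sketched in Go under the assumption of protobuf-generated types SignedCert (from the message above) and RawNebulaCertificateDetails; this is illustrative only, not nebula's actual code:

package cert

import (
	"crypto/ed25519"
	"errors"

	"github.com/golang/protobuf/proto"
)

// Sign marshals the details exactly once, signs those bytes, and ships the
// bytes and signature together, so the verifier never has to re-encode.
func Sign(details *RawNebulaCertificateDetails, priv ed25519.PrivateKey) (*SignedCert, error) {
	b, err := proto.Marshal(details)
	if err != nil {
		return nil, err
	}
	return &SignedCert{Data: b, Signature: ed25519.Sign(priv, b)}, nil
}

// Verify checks the signature over the signer's original bytes first and only
// then decodes them, so protobuf encoding skew cannot break verification.
func Verify(sc *SignedCert, pub ed25519.PublicKey) (*RawNebulaCertificateDetails, error) {
	if !ed25519.Verify(pub, sc.Data, sc.Signature) {
		return nil, errors.New("invalid certificate signature")
	}
	details := &RawNebulaCertificateDetails{}
	if err := proto.Unmarshal(sc.Data, details); err != nil {
		return nil, err
	}
	return details, nil
}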
