
gorb's People

Contributors

aledbf, brianadams, codelingobot, diogogmt, guilhem, ianmiell, jsravn, kklimonda, kobolog, leslie-wang, lnguyen, monaka, noxiouz, rewiko, sebbonnet

gorb's Issues

"Cannot be found" error when NAT'ing to services on the Swarm event stream

Accessing the service on the load-balanced port fails. I'm using GoRB plus the GoRB Docker link on a Swarm cluster named "swarm-master"; for this example, all of the services are running on the same node, swarm-node-1. I'm trying to run a simple service on port 8000, but it fails. I'm using the kobolog/gorb images from Docker Hub as of today. The only thing different from a standard deployment is that we are listening to the Swarm event stream instead of the local host socket. If I point the docker-link at the host socket instead, it works correctly.

GoRB service

docker $(docker-machine config swarm-node-1) run --net=host --rm --privileged kobolog/gorb -f -i eth0

GoRB Docker Link

docker $(docker-machine config swarm-node-1) run --net=host --rm -e DOCKER_HOST=$(docker-machine env --swarm swarm-master | grep DOCKER_HOST | awk 'BEGIN { FS = "=" } ; { print $2}' | tr -d '"') -e DOCKER_TLS_VERIFY="1" -e DOCKER_CERT_PATH="/etc/docker/" -v /etc/docker/server-key.pem:/etc/docker/key.pem:ro -v /etc/docker/server.pem:/etc/docker/cert.pem:ro -v /etc/docker/ca.pem:/etc/docker/ca.pem:ro  kobolog/gorb-docker-link -r 172.17.0.1:4672 -i eth0

And running a container

docker $(docker-machine config swarm-node-1) run -d --name motd -p 8000 dockhero/motd-http

Executing:

curl -sS $(docker-machine ip swarm-node-1):8000

Expect:

From listening comes wisdom and from speaking repentance.

But got:

curl: (7) Failed to connect to x.x.x.x port 8000: Connection refused

Logs from GoRB Server

time="2015-12-09T20:33:35Z" level=info msg="starting GORB Daemon v0.2"
time="2015-12-09T20:33:35Z" level=info msg="initializing IPVS context"
time="2015-12-09T20:33:35Z" level=info msg="setting up HTTP server on :4672"
time="2015-12-09T21:46:47Z" level=info msg="creating virtual service [dockhero-motd-http_8000_tcp] on 104.236.147.174:8000"

Logs from GoRB Linker

time="2015-12-09T21:46:47Z" level=info msg="creating [dockhero-motd-http_8000_tcp/swarm-node-1/motd_8000_tcp] with 104.236.147.174:32768 -> 8000"
time="2015-12-09T21:46:47Z" level=info msg="creating service [dockhero-motd-http_8000_tcp] on port 8000/tcp"
time="2015-12-09T21:46:47Z" level=warning msg="no public ports were processed for [dockhero/motd-http/swarm-node-1/motd]"

...

Time="2015-12-09T21:09:56Z" level=warning msg="errors while exposing existing containers: [service parent [dockhero-motd-http_8000_tcp] cannot be found

and after removing the motd container...

time="2015-12-09T21:56:27Z" level=info msg="removing [dockhero-motd-http_8000_tcp/motd_8000_tcp] with 104.236.147.174:32768 -> 8000"
time="2015-12-09T21:56:27Z" level=warning msg="no public ports were processed for [dockhero/motd-http/motd]"
time="2015-12-09T21:56:27Z" level=error msg="error(s) while processing container 42d598b98522294c9060dcfdced9b737320bc2650649ce1ee89803dfcb9aa921: [backend [dockhero-motd-http_8000_tcp/motd_8000_tcp] cannot be found]"

Query of the service from GoRB:

curl -sS 127.0.0.1:4672/service/dockhero-motd-http_8000_tcp

{
    "options": {
        "host": "",
        "port": 8000,
        "protocol": "tcp",
        "method": "wrr",
        "persistent": false
    },
    "health": 1,
    "backends": null
}

The Docker host's network config:

root@swarm-node-1:~# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:5a:cd:6d:f2
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:5aff:fecd:6df2/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:479175 errors:0 dropped:0 overruns:0 frame:0
TX packets:801946 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:61015167 (58.1 MiB) TX bytes:110121273 (105.0 MiB)

docker_gwbridge Link encap:Ethernet HWaddr 02:42:34:80:f9:5e
inet addr:172.18.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:34ff:fe80:f95e/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:13 errors:0 dropped:0 overruns:0 frame:0
TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:956 (956.0 B) TX bytes:1182 (1.1 KiB)

eth0 Link encap:Ethernet HWaddr 04:01:8d:fc:d1:01
inet addr:104.236.147.174 Bcast:104.236.191.255 Mask:255.255.192.0
inet6 addr: fe80::601:8dff:fefc:d101/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:906074 errors:0 dropped:0 overruns:0 frame:0
TX packets:524341 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:662057540 (631.3 MiB) TX bytes:74092490 (70.6 MiB)

Docker-link: backend service names use the public IP in the name

Currently:

{
    "options": {
        "host": "",
        "port": 8000,
        "protocol": "tcp",
        "method": "wrr",
        "persistent": false
    },
    "health": 1,
    "backends": [
        "swarm-node-2-motd-8000-tcp"
    ]
}

It seems that multiple backends on the same host will end up with non-unique names. I would expect something more like "swarm-node-2-motd-32768-tcp", which includes the port Docker assigned, as sketched below.
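For illustration, here is what the backends list might look like under such a port-qualified scheme; the second backend is hypothetical, added only to show why the current naming collides without it:

{
    "backends": [
        "swarm-node-2-motd-32768-tcp",
        "swarm-node-2-motd-32769-tcp"
    ]
}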

Looking for doc

Hello,

This project looks like the missing link for my Swarm cluster: floating IPs automagically configured to point to my Docker services. However, I'm a little bit stuck given the lack of documentation for getting started with this cool stuff.
Looking at the other issues, it seems I have to start a container from the kobolog/gorb image on my router with --net=host and --privileged; see the sketch below.
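A minimal invocation along those lines, pieced together from other reports in this tracker (the interface name is a placeholder for the router's public interface):

docker run -d --net=host --privileged kobolog/gorb -f -i eth0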

If I want to use Consul, since my services are already registered in it via gliderlabs/registrator, do I still have to launch a kobolog/gorb-docker-link container on my Docker hosts?

Do I have to run the gorb container with some kind of consul-template so that the configuration is automatically refreshed?

Best Regards

Segmentation Fault: The command '/bin/sh -c pip install pyinotify' returned a non-zero code: 139

Hi,

I'm trying to build on an Alpine Linux v3.6 host by fetching master (82f9de7), but the build is interrupted with:

Segmentation fault
The command '/bin/sh -c pip install pyinotify' returned a non-zero code: 139

The output is as follows:

# docker build -t gorb .
Sending build context to Docker daemon  78.17MB
Step 1/16 : FROM golang:1.8
 ---> ba52c9ef0f5c
Step 2/16 : ENV DEBIAN_FRONTEND noninteractive
 ---> Using cache
 ---> dc0a08ee87c7
Step 3/16 : RUN apt-get update   && apt-get install -y software-properties-common python-pip   python-setuptools   python-dev   build-essential   libssl-dev   libffi-dev   && apt-get install --no-install-suggests --no-install-recommends -y   curl   git   build-essential   python-netaddr   unzip   vim   wget   inotify-tools   && apt-get clean -y   && apt-get autoremove -y   && rm -rf /var/lib/apt/lists/* /tmp/*
 ---> Using cache
 ---> 4a101a7c98a0
Step 4/16 : RUN pip install pyinotify
 ---> Running in 6a306dfc707f
Downloading/unpacking pyinotify
Segmentation fault
The command '/bin/sh -c pip install pyinotify' returned a non-zero code: 139

In my case this does not seem to be a swap issue as described at http://samwize.com/2016/05/19/docker-error-returned-a-non-zero-code-137/, because I have adequate RAM allocated to the VM:

# free -m
             total       used       free     shared    buffers     cached
Mem:          7974        405       7568         10         27        243
-/+ buffers/cache:        134       7839
Swap:         2047          0       2047

Thanks for a nice project.

Season's greetings and cheers,
/zenny
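One low-risk experiment, offered as an assumption rather than a confirmed fix: upgrade pip in the failing step before installing pyinotify, since very old pip releases are a frequent source of odd install-time failures:

RUN pip install --upgrade pip && pip install pyinotify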

Doesn't update weight with tcp pulse

level=warning msg="backend [http/192.168.1.226] status: Up"
level=info msg="updating backend [http/192.168.1.226] with weight: 0"

Service configuration:

curl -v -XPUT http://127.0.0.1:4672/service/ads-http -d @- << EOF
{
    "host": "10.128.1.222",
    "port": 80,
    "protocol": "tcp",
    "method": "rr",
    "persistent": false,
    "flags": "sh-fallback"
}
EOF

Backend configuration:

curl -v -XPUT http://127.0.0.1:4672/service/http/backend1 -d @- << EOF
{
    "host": "192.168.1.226",
    "port": 80,
    "method": "nat",
    "pulse": {
        "type": "tcp",
        "interval": "20s"
    },
    "weight": 1
}
EOF

ipvsadm -S -n:

-A -t 10.128.1.222:80 -s rr -b flag-1
-a -t 10.128.1.222:80 -r 192.168.1.226:80 -m -w 0
-a -t 10.128.1.222:80 -r 192.168.1.228:80 -m -w 0
-a -t 10.128.1.222:80 -r 192.168.1.229:80 -m -w 0
-a -t 10.128.1.222:80 -r 192.168.1.230:80 -m -w 0
-a -t 10.128.1.222:80 -r 192.168.1.231:80 -m -w 0

Relationship with Swarm Mode?

Hi, just encountered Gorb. It's unclear to me if it would still be beneficial given that Docker Swarm Mode also uses IPVS. Can they be used together? Would that be useful? Thanks for any clarity. Maybe this could be in the README?

map error on REST API when docker-link is processing a new node

I am simply using docker-compose to bring up some existing Docker images. Is the image's pull path supposed to be passed all the way through to the REST API? I'm getting the error you see below.

time="2015-11-29T05:46:39Z" level=info msg="creating [/bin/consul/consultest_consulserverBootstrap_1] with 192.168.99.100:32827 -> 8500"
time="2015-11-29T05:46:39Z" level=info msg="creating service [/bin/consul] on port 8500/tcp"
time="2015-11-29T05:46:39Z" level=info msg="creating [/bin/consul/consultest_consulserverBootstrap_1] with 192.168.99.100:32828 -> 8400"
time="2015-11-29T05:46:39Z" level=info msg="creating service [/bin/consul] on port 8400/tcp"
time="2015-11-29T05:46:39Z" level=info msg="creating [/bin/consul/consultest_consulserverBootstrap_1] with 192.168.99.100:32798 -> 53"
time="2015-11-29T05:46:39Z" level=info msg="creating service [/bin/consul] on port 53/udp"
time="2015-11-29T05:46:39Z" level=warning msg="no public ports were processed for [/bin/consul/consultest_consulserverBootstrap_1]"
time="2015-11-29T05:46:39Z" level=error msg="error(s) while processing container a69c28bcf5e8ce2dc78adab990d7386ab8ebe6393eb1a07514fa3fe7473a113b: [got unknown error from http://192.168.99.100:4672/service/bin/consul: map[error:endpoint information is missing] got unknown error from http://192.168.99.100:4672/service/bin/consul: map[error:endpoint information is missing] got unknown error from http://192.168.99.100:4672/service/bin/consul: map[error:endpoint information is missing]]"

Unable to add "same" backend to multiple services

When I try to add the same backend (with the same name, the hostname) to a second service, I get:

{
	"error": "specified object already exists"
}

For example, I'm trying to have one host/backend in multiple services, for HTTP on 80 and HTTPS on 443. The hostname/backend is the same, but I must "create" a different "name" for the same host/backend.
Looked at from that perspective, global uniqueness of backend names does not seem necessary, or am I missing something?

As a workaround I'm creating the backend name as hostname_portnumber, which is also bad, because one hostname with the same port should be able to be a member of multiple services. So the actual workaround is to create backend names like servicename_hostname_port, which is ugly; see the sketch below.
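To make the forced workaround concrete, here is the double registration it leads to (hostnames, service names, and ports are placeholders; the URL scheme follows the PUT /service/<service>/<backend> pattern used elsewhere in this tracker):

curl -X PUT http://127.0.0.1:4672/service/http-80/http-80_web1_80 -d '{"host": "web1", "port": 80}'
curl -X PUT http://127.0.0.1:4672/service/https-443/https-443_web1_443 -d '{"host": "web1", "port": 443}'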

registered service IDs lost after gorb crashes

What I have noticed is that if gorb crashes or gets killed, all the registered service IDs are lost. It seems all service state is kept in memory, so persisting it might require a database. But I still wonder whether there could be other, easier solutions.
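One userland stopgap, sketched under the assumption that each service definition is kept as a JSON file on disk (the directory layout below is hypothetical), is to replay the definitions against the REST API whenever gorb restarts:

# replay saved definitions after a gorb restart; one JSON file per service
for f in /etc/gorb/services/*.json; do
    name=$(basename "$f" .json)
    curl -sS -X PUT "http://127.0.0.1:4672/service/$name" --data-binary "@$f"
done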

Request: allow use of ipvsadm's sched-flags

Hi,
I'm trying to set up simple UDP load balancing with source hashing, equivalent to what ipvsadm creates with:
ipvsadm -A -u 192.168.0.2:50100 -s sh -b sh-port,sh-fallback
ipvsadm -a -u 192.168.0.2:50100 -r 192.168.0.2:40058 -m -w 1
ipvsadm -a -u 192.168.0.2:50100 -r 192.168.0.2:40059 -m -w 1

I can't find whether gorb supports sched-flags (the -b option). sched-flags is a relatively new ipvsadm option and is not covered in the manpage you link to; however, it is present in the latest source man page and in other man pages online, for example in the ipvsadm man.

Do you know any other way I can use the sched-flags option?

Thanks in advance.
Dario.
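For what it's worth, another report in this tracker configures a service with a "flags" field (it uses "sh-fallback"), which may be the knob in question. Whether that field accepts the comma-separated combination below is an assumption; treat this as a hint rather than documentation:

curl -v -XPUT http://127.0.0.1:4672/service/udp-sh -d @- << EOF
{
    "host": "192.168.0.2",
    "port": 50100,
    "protocol": "udp",
    "method": "sh",
    "flags": "sh-port,sh-fallback"
}
EOF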

the REST responses use the text/plain content type

Since the messages we return are already JSON, it would be nice to have an option to set the Content-Type of the response to application/json. The current responses come back with Content-Type: text/plain; charset=utf-8.

I was trying to change this in main.go but wasn't able to get it to work.
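In the meantime the header is easy to confirm from the command line (the service name below is a placeholder); the body itself is valid JSON, so clients can still parse it regardless of the advertised type:

curl -i http://127.0.0.1:4672/service/test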

does IPVS work for container-to-container communication on the same bridge?

When I create a backend service with IPVS for a given container, other containers on the same bridge are unable to talk to that container. The reason is a half-open TCP connection: all traffic to and from the IPVS service is supposed to go through the host network namespace, but for container-to-container communication some of the traffic may get switched through the bridge itself.

Does gorb set up iptables SNAT to avoid this situation? The kind of rule I mean is sketched below.
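The usual manual remedy for this LVS/NAT same-network problem is to masquerade bridge-bound traffic that was scheduled by IPVS, so replies flow back through the host namespace. A sketch, assuming the xt_ipvs iptables match is available and with the VIP as a placeholder:

iptables -t nat -A POSTROUTING -o docker0 -m ipvs --vaddr <VIP>/32 -j MASQUERADE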

endpoint information is missing

Hi,

I'm giving a try with GORB with a basic setup but I get the following error:

time="2016-03-18T23:05:31Z" level=info msg="starting GORB Docker Link Daemon v0.1"
time="2016-03-18T23:05:31Z" level=info msg="listening on event feed at unix:///var/run/docker.sock"
time="2016-03-18T23:05:31Z" level=info msg="bootstrapping with existing containers"
...
time="2016-03-18T23:05:31Z" level=info msg="creating [sonatype-nexus-8081-tcp/serene_goldstine-8080-tcp] with 172.17.0.4:8080 -> 8081"
time="2016-03-18T23:05:31Z" level=info msg="creating service [sonatype-nexus-8081-tcp] on port 8081/tcp"
time="2016-03-18T23:05:31Z" level=warning msg="no public ports were processed for [sonatype/nexus/serene_goldstine]"
...
time="2016-03-18T23:05:31Z" level=warning msg="errors while exposing existing containers: [got unknown error from http://192.168.1.120:4672/service/sonatype-nexus-8081-tcp: map[error:endpoint information is missing]]"

I've started GORB and GORB-DOCKER-LINK with the following Compose file, and Consul is listening correctly on http://192.168.1.120:8500:

gorb-docker-link:
  image: kobolog/gorb-docker-link:0.1
  container_name: gorb-docker-link
  restart: always
  command: -r 192.168.1.120:4672
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
gorb:
  image: kobolog/gorb:0.1
  container_name: gorb
  restart: always
  net: "host"
  privileged: true
  command: -c http://192.168.1.120:8500 -l 192.168.1.120:4672

What did I miss?

Thanks,
Pierre
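One difference from other reports in this tracker: neither container here is given an interface flag, whereas elsewhere both kobolog/gorb and gorb-docker-link are started with -i eth0, which looks like how the default endpoint gets derived. Under that assumption (the interface name is a placeholder), the commands would become:

command: -r 192.168.1.120:4672 -i eth0
command: -c http://192.168.1.120:8500 -l 192.168.1.120:4672 -i eth0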

simple example trouble

I started GORB on veles; doing a simple service creation with PUT from volos fails with an error, and I'm having a hard time seeing the mistake in the data payload.

anapsix@volos:~$ curl -sS -H "Content-Type: application/json" \
  -X PUT \
  --data-binary '{"host":"10.20.0.100","port":"8080","protocol":"tcp","method":"rr","persitent":false}'  \
  veles:4672/service/test \
  | json_pp 
{
   "error" : "json: cannot unmarshal string into Go value of type uint16"
}

😭 please advise..
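The error message points at the payload itself: "port" is sent as the string "8080" while the API expects a number (and "persitent" is misspelled, so that key is presumably ignored). A corrected payload:

curl -sS -H "Content-Type: application/json" \
  -X PUT \
  --data-binary '{"host":"10.20.0.100","port":8080,"protocol":"tcp","method":"rr","persistent":false}' \
  veles:4672/service/test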

Automatically route services registered in Consul

This is looking like a very interesting project and a great idea: exposing IPVS via a REST API! I think it would be very cool if Gorb could monitor services in Consul, use Consul's health checks, and create and destroy routes based on that info. There's a similar project called Fabio (https://github.com/eBay/fabio) which does this for HTTP(S) proxying, and it works beautifully. It relies on tags registered with the service in Consul and only routes to services which pass Consul's health checks (all proxied services are required to have a health check).

I don't know if gorb can do multi-host routing like Fabio does, but it would be brilliant if it could: it would allow any 'edge' of a cluster to accept incoming connections for a container anywhere in the cluster.

error while creating virtual service: Error! errorcode is: 2\n

Hey, this looks pretty interesting.
Unfortunately it appears not to work out of the box.
I built the images from the current master: 43f7c94
Here are the steps to reproduce:

➜  gorb git:(test) box restart bar
Starting VM...
Restarted machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
➜  gorb git:(test) docker-machine ssh bar sudo modprobe ip_vs
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
➜  gorb git:(test) eval $(docker-machine env bar)
➜  gorb git:(test) docker run -d --net=host --privileged gorb -v -f -i eth1
79920696feef4c37be1fa9a4466b464d1d23310d5f426d6fec2c5f716bce0205
➜  gorb git:(test) docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
79920696feef        gorb                "gorb -v -f -i eth1"   5 seconds ago       Up 2 seconds                            sick_sammet
➜  gorb git:(test) docker run -d --net=host -v /var/run/docker.sock:/var/run/docker.sock gorb-link -r $(docker-machine ip bar):4672 -i eth1
6c3e6e51a0a603017b779ce7ff283c6ea82e5879fc240ba82955f3efaaab5853
➜  gorb git:(test) docker logs -f 28
➜  gorb git:(test) docker run -d -p 80 nginx
a183ee5c5894ac4d1b1386bc018807d641cf18600af4ce0d8dd7de8ed469af1a
➜  gorb git:(test) docker logs -f 799
time="2015-12-03T17:04:18Z" level=info msg="starting GORB Daemon v0.1"
time="2015-12-03T17:04:18Z" level=info msg="initializing IPVS context"
time="2015-12-03T17:04:18Z" level=info msg="setting up HTTP server on :4672"
time="2015-12-03T17:04:54Z" level=info msg="creating virtual service [nginx_80_tcp] on 192.168.99.101:80"
time="2015-12-03T17:04:54Z" level=error msg="error while creating virtual service: Error! errorcode is: 2\n"

Proposal to automatically manage container networking for IPIP in docker-link

In order to use IPIP it appears necessary to make some tweaks to the networking of the container that is receiving the traffic.

The docker-link project could be updated to execute the necessary commands when a container comes online.

These commands are something like:

ip link set tunl0 up
ip addr add <VIP>/32 dev tunl0 brd <VIP>
sysctl -w net.ipv4.conf.tunl0.rp_filter=2

I could imagine having docker-link look for labels on the container that indicate the desire for IPIP routing, such as GORB-IPIP. The system would also need to verify that the container was started with --cap-add=NET_ADMIN; a sketch of such a launch follows.
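Under this proposal, launching a container that opts in might look like the following (the label name and value are part of the proposal, not an existing feature):

docker run -d --cap-add=NET_ADMIN --label GORB-IPIP=true -p 80 nginx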

This appears to work just fine, even on minimal docker containers.

Is this a direction that docker-link should go?

Your video https://www.youtube.com/watch?v=oFsJVV1btDU and the ab utility used in it

Dear Andrey Sibiryov,

I watched your excellent video https://www.youtube.com/watch?v=oFsJVV1btDU, in which you use a utility called ab; your command line in the video is ab -n 1000 -c 32 http://$....
Where can I get this ab utility? This is the first time I am learning about it, and I am very interested in getting hands-on with it.
Could you kindly point me to a URL with download and usage instructions?
Much appreciated,
M Jay
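For anyone else landing here: ab is Apache Bench, the benchmarking tool that ships with the Apache HTTP server. It is typically packaged as apache2-utils on Debian/Ubuntu and httpd-tools on RHEL/CentOS; the target URL below is a placeholder:

apt-get install apache2-utils          # Debian/Ubuntu
yum install httpd-tools                # RHEL/CentOS
ab -n 1000 -c 32 http://example.com/   # 1000 requests, 32 concurrent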

Works on Docker-machine on Mac OSX?

Great presentation at DockerCon!

Running the most recent version of Docker Machine, and using Ubuntu:15.04 or Alpine:latest, I'm unable to run ipvsadm. No matter what I do, I get this error (any ideas?):

bash-4.3# ipvsadm
modprobe: can't change directory to '/lib/modules': No such file or directory
Can't initialize ipvs: Protocol not available
Are you sure that IP Virtual Server is built in the kernel or as module?

~/c/ipvs ❯❯❯ docker version ⏎
Client:
Version: 1.10.0
API version: 1.22
Go version: go1.5.3
Git commit: 590d5108
Built: Thu Feb 4 18:18:11 2016
OS/Arch: darwin/amd64

Server:
Version: 1.10.0
API version: 1.22
Go version: go1.5.3
Git commit: 590d5108
Built: Thu Feb 4 19:55:25 2016
OS/Arch: linux/amd64
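The error suggests the container cannot see the host's kernel modules. A combination consistent with other reports in this tracker: load ip_vs in the Docker Machine VM first, then give the container the host's module tree (the machine name and image are placeholders):

docker-machine ssh default sudo modprobe ip_vs
docker run --rm --privileged -v /lib/modules:/lib/modules:ro alpine sh -c 'apk add --no-cache ipvsadm && ipvsadm'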

external traffic through virtual port failing to connect to real server

I cannot connect to the NAT'd service behind the NLVS box from an outside client. I can, however, connect to the service using the NLVS IP/port when I am on the NLVS box directly. I'm not sure whether the config is off, whether there are odd networking side effects in Docker 1.9, or whether there is a missing dependency for IPVS, but as a result it does not work with a Swarm cluster out of the box. Any ideas?

I originally thought the NAT was not working correctly, but according to http://www.ultramonkey.org/papers/lvs_tutorial/html/ the tcpdump I was reviewing looks correct.

I have a two-host setup with IPVS running on node-1 and the service running on node-2. node-1 appears to correctly have the routing rule, set up by gorb-docker-link, to forward to node-2.

IPVS setting:

TCP 107.170.251.157:9292 wrr
-> 104.236.180.219:32768 Masq 100 0 0

From the tcpdump I see the inbound connection, but the service on the real server does not appear to see the traffic.

15:10:27.928913 IP .58501 > 104.236.180.219.32768: ...

I notice that the outbound traffic is supposed to route back through the NAT director, which is typically done by making the director the default gateway (see the sketch below). Does anyone have a working Swarm setup that still runs NLVS directly on the Docker hosts but allows NAT to work? Or is it easier to go all the way and configure direct routing, eliminating the need to route back through the director?
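For LVS-NAT specifically, the textbook requirement is that replies from the real server pass back through the director. One way to arrange that, run on the real-server host with a placeholder address:

ip route change default via <director-ip>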
