
dvm's Introduction

dvm

An on-demand Docker virtual machine, thanks to Vagrant and boot2docker. It works great on Macs and other platforms that don't natively support the Docker daemon. Under the covers, dvm downloads and boots Mitchell Hashimoto's boot2docker Vagrant box image.

The driving need for something like dvm was running infrastructure testing, such as Test Kitchen with the kitchen-docker driver. For that driver to work it needs access to all of the dynamically assigned ports, not just the Docker daemon port, which is why dvm uses a private network segment and address (192.168.42.43 by default). Once Docker started supporting the DOCKER_HOST environment variable, the exact IP address mattered less, and as a result the docker command began to feel almost native on non-Linux platforms.

tl;dr for Mac Users

Already a Vagrant user with VirtualBox? Use Homebrew? Great!

# Install Docker Mac binary
brew install docker

# Install dvm
brew tap fnichol/dvm
brew install dvm

# Bring up your Vagrant/Docker VM
dvm up

# Set a DOCKER_HOST environment variable that points to your VM
eval $(dvm env)

# Run plain ol' Docker commands right from your Mac
docker run ubuntu cat /etc/lsb-release

p.s. No Vagrant or VirtualBox installed? Check out the Requirements section below.

Requirements

Use Homebrew Cask? For Vagrant and VirtualBox, too easy!

brew cask install vagrant    --appdir=/Applications
brew cask install virtualbox --appdir=/Applications

Install

Installation is supported on any Unix-like platform that Vagrant and VirtualBox/VMware support.

wget -O dvm-0.9.0.tar.gz https://github.com/fnichol/dvm/archive/v0.9.0.tar.gz
tar -xzvf dvm-0.9.0.tar.gz
cd dvm-0.9.0/
sudo make install

Installing with Homebrew (Mac)

There is a Homebrew tap with a formula which can be installed with:

brew tap fnichol/dvm
brew install dvm

Upgrade

To upgrade, follow the same instructions as for installing dvm.

Please note, however, that if the underlying boot2docker basebox is upgraded between versions, you will effectively get a new virtual machine when dvm restarts. It is a good idea to destroy your current dvm instance with dvm destroy before upgrading.

Upgrading with Homebrew (Mac)

If using the dvm Homebrew tap, simply:

brew update
brew upgrade dvm

Also, please read the note above about destroying the VM between upgrades.
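Putting the two together, a cautious Homebrew upgrade might look like the sketch below. Note that dvm destroy removes the VM along with any containers and images inside it, so export anything you need first.

# Tear down the current VM so a new basebox can take effect
dvm destroy

# Upgrade dvm itself
brew update
brew upgrade dvm

# Recreate the VM and re-point your shell at it
dvm up
eval $(dvm env)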

Usage

Bring up help with:

$ dvm --help

Usage: dvm [-v|-h] command [<args>]

Options

  --version, -v - Print the version and exit
  --help, -h    - Display CLI help (this output)

Commands

  check           Ensure that required software is installed and present
  destroy         Stops and deletes all traces of the vagrant machine
  env             Outputs environment variables for Docker to connect remotely
  halt, stop      Stops the vagrant machine
  ip              Outputs the IP address of the vagrant machine
  reload          Restarts vagrant machine, loads new configuration
  resume          Resume the suspended vagrant machine
  ssh             Connects to the machine via SSH
  status          Outputs status of the vagrant machine
  suspend, pause  Suspends the machine
  up, start       Starts and provisions the vagrant environment
  vagrant         Issue subcommands directly to the vagrant CLI

Keep in mind that dvm is a thin wrapper around Vagrant, so don't hesitate to run raw Vagrant commands from your $HOME/.dvm directory, or use the dvm vagrant subcommand from anywhere:

$ dvm vagrant --version
Vagrant 1.5.2
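If you prefer working with Vagrant directly, the same thing can be done from the dvm home directory (a sketch, assuming the default $HOME/.dvm location):

# Drive the underlying Vagrant environment without the dvm wrapper
cd "$HOME/.dvm"
vagrant status
vagrant ssh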

Bring up your VM with dvm up:

$ dvm up
Bringing machine 'dvm' up with 'virtualbox' provider...
...<snip>...
==> dvm: Configuring and enabling network interfaces...
==> dvm: Running provisioner: shell...
    dvm: Running: inline script

Or maybe you want to use the vmware_fusion Vagrant provider which isn't your default?

$ dvm up --provider=vmware_fusion

Need to free up some memory? Pause your VM with dvm suspend:

$ dvm suspend
==> dvm: Saving VM state and suspending execution...

When you come back to your awesome Docker project, resume your VM with dvm resume:

$ dvm resume
==> dvm: Resuming suspended VM...
==> dvm: Booting VM...
==> dvm: Waiting for machine to boot. This may take a few minutes...
    dvm: SSH address: 127.0.0.1:2222
    dvm: SSH username: docker
    dvm: SSH auth method: private key
    dvm: Warning: Connection refused. Retrying...
==> dvm: Machine booted and ready!

Your local docker binary needs to be told that it is targeting a remote system rather than the local Unix socket, which is the default behavior. Docker 0.7.3 introduced the DOCKER_HOST environment variable for setting the target Docker host. By default, dvm runs your VM on a private network at 192.168.42.43 with Docker listening on port 2375. The dvm env subcommand prints a suitable DOCKER_HOST line; to load it into your current session, evaluate its output:

$ echo $DOCKER_HOST

$ eval `dvm env`

$ echo $DOCKER_HOST
tcp://192.168.42.43:2375
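With DOCKER_HOST set, one quick sanity check from the Mac side is to confirm the client can actually reach the daemon inside the VM:

# Both of these talk to the Docker daemon inside the dvm VM
docker info
docker run busybox echo "hello from dvm"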

Check your VM status with dvm status:

$ dvm status
Current machine states:

dvm                       running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

Log into your VM (via SSH) with dvm ssh:

$ dvm ssh
                        ##        .
                  ## ## ##       ==
               ## ## ## ##      ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
boot2docker: 1.0.0
             master : 16013ee - Mon Jun  9 16:33:25 UTC 2014
docker@boot2docker:~$

Embed in a Project

As the core of dvm is a Vagrantfile (surprise!), you can simply download the dvm Vagrantfile into your project using the http://git.io/dvm-vagrantfile shortlink:

wget -O Vagrantfile http://git.io/dvm-vagrantfile
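From there it is plain Vagrant plus DOCKER_HOST; a rough sketch (the IP and port below are the dvm defaults, so adjust them if your Vagrantfile differs):

# In the project directory containing the downloaded Vagrantfile
vagrant up

# Point the docker client at the VM and test
export DOCKER_HOST=tcp://192.168.42.43:2375
docker ps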

Configuration

If you wish to change the Docker TCP port or the memory settings of the virtual machine, edit $HOME/.dvm/dvm.conf. The following defaults are used:

  • DOCKER_IP: 192.168.42.43
  • DOCKER_PORT: 2375
  • DOCKER_MEMORY: 512 (in MB)
  • DOCKER_CPUS: 1
  • DOCKER_ARGS: -H unix:// -H tcp://

If you wish to change the network range Docker uses for the docker0 bridge, set DOCKER0_CIDR to the range required.

See dvm.conf for more details.
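For example, a dvm.conf overriding a few of those defaults might look like the sketch below (this assumes the shell-variable format used by the project's example dvm.conf; the values are illustrative only):

# $HOME/.dvm/dvm.conf
DOCKER_PORT=12375             # use a non-default TCP port
DOCKER_MEMORY=1024            # give the VM 1 GB of RAM
DOCKER_CPUS=2
DOCKER0_CIDR="10.200.0.1/24"  # move the docker0 bridge off its default range

Changes that affect the VM itself (memory, CPUs, IP) generally require a dvm reload, which restarts the machine and loads the new configuration, or a dvm destroy followed by dvm up.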

Development

Pull requests are very welcome! Make sure your patches are well tested. Ideally create a topic branch for every separate change you make. For example:

  1. Fork the repo
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Added some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Authors

Created and maintained by Fletcher Nichol ([email protected])

Credits

License

Apache 2.0 (see LICENSE.txt)

dvm's People

Contributors

agoddard, dlitz, eik3, fnichol, gianpaj, hiremaga, hmarr, jfoy, lamdor, spheromak, tlockney, tmatilai


dvm's Issues

dvm runs insecure by default

When dvm starts boot2docker, it enables port forwarding on the host, which makes the Docker VM immediately available for anyone in the world to control if they can connect to port 2020 or 4243 on the host. (Docker itself requires no authentication, and SSH accepts the default username and password of docker and tcuser.)

(Using VMware Fusion.)
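A quick way to gauge your exposure is to probe the forwarded ports from another machine on the same network (a sketch; substitute your host's IP address for YOUR_HOST_IP):

# Run from a *different* machine; a success here means the SSH or Docker
# port is reachable without any authentication
nc -z -w 2 YOUR_HOST_IP 2020
nc -z -w 2 YOUR_HOST_IP 4243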

Chasing releases

Considering that the Docker project moves at a speedy pace, how can we keep dvm, boot2docker, and the docker command line in sync?

`dvm up` hangs on VMFusion 6, missing OS

When I run dvm up with VMware Fusion as my provider, the VM never boots; it hangs here:

$ dvm up
Bringing machine 'dvm' up with 'vmware_fusion' provider...
==> dvm: Cloning VMware VM: 'boot2docker-1.0.1'. This can take some time...
==> dvm: Verifying vmnet devices are healthy...
==> dvm: Preparing network adapters...
==> dvm: Fixed port collision for 22 => 2222. Now on port 2200.
==> dvm: Starting the VMware VM...
==> dvm: Waiting for the VM to finish booting...

When I launch the VM from Fusion's GUI I see:

PXE-M0F: Exiting Intel PXE Rom.
Operating System not found

The vmware.log reads:

2014-07-02T20:35:28.346-07:00| vcpu-0| I120: Msg_Post: Warning
2014-07-02T20:35:28.346-07:00| vcpu-0| I120: [msg.Backdoor.OsNotFound.Mac] No operating system was found. Check your Startup Disk in the virtual machine settings. If you have not installed an operating system yet, you can choose an installation disc or disc image in the CD/DVD settings and restart the virtual machine.
2014-07-02T20:35:28.346-07:00| vcpu-0| I120: ----------------------------------------
2014-07-02T20:35:28.346-07:00| vcpu-0| I120: MsgIsAnswered: Using builtin default 'OK' as the answer for 'msg.Backdoor.OsNotFound.Mac'
2014-07-02T20:38:00.904-07:00| vmx| I120: VmdbAddConnection: cnxPath=/db/connection/#5/, cnxIx=2
2014-07-02T20:38:00.950-07:00| vmx| I120: VMXVmdbCbVmVmxExecState: Exec state change requested to state poweredOn without reset, default, softOptionTimeout: 0.
2014-07-02T20:38:00.999-07:00| mks| I120: KHBKL: Unable to parse keystring at: ''
2014-07-02T20:38:00.999-07:00| mks| I120: KHBKL: Unable to parse keystring at: ''
2014-07-02T20:38:01.322-07:00| mks| I120: GL-Backend: successfully started by HWinMux to do window composition.
2014-07-02T20:38:01.326-07:00| mks| I120: MKS-SWB: Number of MKSWindows changed: 1 rendering MKSWindow(s) of total 1.
2014-07-02T20:38:01.376-07:00| vmx| I120: TOOLS received request in VMX to set option 'synctime' -> '1'
2014-07-02T20:38:01.401-07:00| vmx| I120: VMXVmdb_SetCfgState: cfgReqPath=/vm/#_VMX/vmx/cfgState/req/#6/, remDevPath=/vm/#_VMX/vmx/vigor/setCfgStateReq/#40b/in/

Did I do something wrong, or is the latest release not working with VMware Fusion?

Error response from daemon: client and server don't have same version (client : 1.15, server: 1.14)

DVM 0.9.0 installed on OS X 10.10.1 via Homebrew.

$ docker version
Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa
OS/Arch (client): darwin/amd64
2014/11/22 09:20:42 Error response from daemon: client and server don't have same version (client : 1.15, server: 1.14)
$ docker -v
Docker version 1.3.1, build 4e9bbfa

dvm ssh logs me into the boot2docker VM just fine.

However I can't docker build . as it keeps giving me the error "Error response from daemon: client and server don't have same version (client : 1.15, server: 1.14)"

I've tried brew uninstall docker && brew install docker to no avail.
I've also uninstalled and reinstalled dvm, virtualbox and vagrant to no avail.

I am also using fig

$ fig --version
fig 1.0.1

and note that fig run db happily pulls down the mongo image and runs it, so clearly some form of docker is working. Is there a known conflict between fig and dvm?

Static IP assignment leaves stray udhcpc process running in background

If I vagrant up a boot2docker instance through dvm, everything is fine when the VM starts up:

$ vagrant ssh
…
docker@boot2docker:~$ ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 08:00:27:E5:4D:D1  
          inet addr:192.168.42.43  Bcast:192.168.42.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fee5:4dd1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:632 (632.0 B)  TX bytes:6763 (6.6 KiB)

But there is a udhcpc client being left running on eth1:

docker@boot2docker:~$ ps ax | grep -i eth1
  905 root     /sbin/udhcpc -b -i eth1 -x hostname boot2docker -p /var/run/udhcpc.eth1.pid

For the most part this doesn't seem to hurt but occasionally (randomly?) the IP address on that interface will jump to something in the 192.168.56.x range. Interestingly, this overlaps with a range of addresses provided by Virtualbox:

$ VBoxManage list dhcpservers
NetworkName:    HostInterfaceNetworking-vboxnet0
IP:             192.168.56.100
NetworkMask:    255.255.255.0
lowerIPAddress: 192.168.56.101
upperIPAddress: 192.168.56.254
Enabled:        Yes

The strange thing is that killing and rerunning the udchpc process manually doesn't end up assigning the interface an address but if I kill the VM, change eth1's config to explicitly point to vboxnet0, and start the VM directly in Virtualbox, the machine ends up booting and getting assigned something in that range:

$ ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 08:00:27:D8:52:91  
          inet addr:192.168.56.101  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fed8:5291/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1180 (1.1 KiB)  TX bytes:1330 (1.2 KiB)

I'm wondering if the stray udhcpc process is inadvertently causing #8 in some scenarios.

I put together a really dirty fix for this, which I would submit as a pull request if I didn't think there was a cleaner way to do it:

diff --git a/Vagrantfile b/Vagrantfile
index 28cfee3..7e46c4d 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -52,6 +52,7 @@ module VagrantPlugins
                 ifc = "/sbin/ifconfig eth#{n[:interface]}"
                 broadcast = (IPAddr.new(n[:ip]) | (~ IPAddr.new(n[:netmask]))).to_s
                 comm.sudo("#{ifc} down")
+                comm.sudo("kill $(cat /var/run/udhcpc.eth#{n[:interface]}.pid) || true")
                 comm.sudo("#{ifc} #{n[:ip]} netmask #{n[:netmask]} broadcast #{broadcast}")
                 comm.sudo("#{ifc} up")
               end

Since adding that I haven't seen any more IP address drifts on my local boot2docker VMs. I'm not 100% sure this will fix that issue but it seems a likely cause.
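Until a fix along those lines lands, a hedged one-off workaround is to kill the stray client by hand (this assumes the pid file path shown above and uses Vagrant's ssh -c passthrough via dvm):

# Kill the leftover DHCP client for eth1 inside the VM
dvm vagrant ssh -c 'sudo kill $(cat /var/run/udhcpc.eth1.pid)'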

network connectivity inside boot2docker machine

Thank you for your work. Seems really useful.

I am a newbie with Docker, boot2docker, and Tiny Core, and I seem to be having a problem with dvm and network connectivity. This is on OS X 10.9.1 and VirtualBox 3.4.6.

$ docker version
Client version: 0.7.4
Go version (client): go1.2
Git commit (client): 010d74e
Server version: 0.7.1
Git commit (server): 88df052
Go version (server): go1.2
Last stable version: 0.7.4, please update docker

I followed the instructions by installing docker from brew and then installing dvm. DOCKER_HOST is set to tcp://192.168.42.43:4243. If I dvm ssh into the VM, I don't seem to have any network connectivity. E.g.,

$ dvm ssh
                        ##        .
                  ## ## ##       ==
               ## ## ## ##      ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
docker@boot2docker:~$ nslookup github.com
Server:    127.0.0.1
Address 1: 127.0.0.1 boot2docker

nslookup: can't resolve 'github.com'

As a result, docker run commands never get off the ground. E.g.,

$ docker run ubuntu cat /etc/lsb-release
Unable to find image 'ubuntu' (tag: latest) locally
Pulling repository ubuntu
2014/01/08 20:52:22 Get https://index.docker.io/v1/repositories/ubuntu/images: dial tcp: lookup index.docker.io: no DNS servers

Output of dvm up:

$ dvm up
Bringing machine 'dvm' up with 'virtualbox' provider...
[dvm] Importing base box 'boot2docker-0.3.0-1'...
[dvm] Matching MAC address for NAT networking...
[dvm] Setting the name of the VM...
[dvm] Clearing any previously set forwarded ports...
[dvm] Fixed port collision for 22 => 2222. Now on port 2200.
[dvm] Clearing any previously set network interfaces...
[dvm] Preparing network interfaces based on configuration...
[dvm] Forwarding ports...
[dvm] -- 22 => 2200 (adapter 1)
[dvm] Running 'pre-boot' VM customizations...
[dvm] Booting VM...
[dvm] Waiting for machine to boot. This may take a few minutes...
[dvm] Machine booted and ready!
No installation found.
The guest's platform is currently not supported, will try generic Linux method...
Copy iso file /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 4.3.6 - guest version is 
Verifying archive integrity... All good.
Uncompressing VirtualBox 4.3.6 Guest Additions for Linux............
VirtualBox Guest Additions installer
Copying additional installer modules ...
add_symlink: link directory /usr/src does not exist
Installing additional modules ...
As our installer does not recognize your Linux distribution, we were unable to
set up the initialization script vboxadd correctly.  The script has been copied
copied to the /etc/init.d directory.  You should set up your system to start
it at system start, or start it manually before using VirtualBox.

If you would like to help us add support for your distribution, please open a
new ticket on http://www.virtualbox.org/wiki/Bugtracker.
/sbin/modprobe: invalid option -- 'c'
BusyBox v1.20.2 (2012-08-07 01:31:01 UTC) multi-call binary.

Usage: modprobe [-alrqvsDb] MODULE [symbol=value]...

    -a  Load multiple MODULEs
    -l  List (MODULE is a pattern)
    -r  Remove MODULE (stacks) or do autoclean
    -q  Quiet
    -v  Verbose
    -s  Log to syslog
    -D  Show dependencies
    -b  Apply blacklist to module names too

Removing existing VirtualBox non-DKMS kernel modules ...done.
Building the VirtualBox Guest Additions kernel modules
The make utility was not found. If the following module compilation fails then
this could be the reason and you should try installing it.

The gcc utility was not found. If the following module compilation fails then
this could be the reason and you should try installing it.

The headers for the current running kernel were not found. If the following
module compilation fails then this could be the reason.

Building the main Guest Additions module ...fail!
(Look at /var/log/vboxadd-install.log to find out what went wrong)
Doing non-kernel setup of the Guest Additions ...done.
As our installer does not recognize your Linux distribution, we were unable to
set up the initialization script vboxadd-service correctly.  The script has been copied
copied to the /etc/init.d directory.  You should set up your system to start
it at system start, or start it manually before using VirtualBox.

If you would like to help us add support for your distribution, please open a
new ticket on http://www.virtualbox.org/wiki/Bugtracker.
As our installer does not recognize your Linux distribution, we were unable to
set up the initialization script vboxadd-x11 correctly.  The script has been copied
copied to the /etc/init.d directory.  You should set up your system to start
it at system start, or start it manually before using VirtualBox.

If you would like to help us add support for your distribution, please open a
new ticket on http://www.virtualbox.org/wiki/Bugtracker.
Installing the Window System drivers ...fail!
(Could not find the X.Org or XFree86 Window System.)
An error occurred during installation of VirtualBox Guest Additions 4.3.6. Some functionality may not work as intended.
In most cases it is OK that the "Window System drivers" installation failed.
rm: remove '/tmp/VBoxGuestAdditions.iso'? 
[dvm] No guest additions were detected on the base box for this VM! Guest
additions are required for forwarded ports, shared folders, host only
networking, and more. If SSH fails on this machine, please install
the guest additions and repackage the box to continue.

This is not an error message; everything may continue to work properly,
in which case you may ignore this message.
[dvm] Configuring and enabling network interfaces...
[dvm] Running provisioner: shell...
[dvm] Running: inline script
---> Configuring docker to bind to tcp/4243 and restarting

Any help is appreciated. Sorry for writing a whole book.
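For DNS failures like the one above, a hedged first diagnostic is to point the VM at a known resolver and retry (this change lives only inside the VM and will not survive a reboot):

# Inside the VM (dvm ssh), try a public resolver and re-test resolution
echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf
nslookup github.com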

dvm starts but times out trying to communicate with box

I'm on Ubuntu 13.10, Vagrant 1.4.3 and Docker 0.7.6.

I did the install which, by the way, created the .dvm directory as root and prevented me from running dvm up as a regular user.

After chmod -R me:me .dvm, I was able to run dvm up, but I just get the long "Timed out while waiting for the machine to boot" message.

dvm status shows the machine as up, but dvm ssh doesn't work either. dvm destroy seems to work ok though.

Separate question: does dvm work on Windows?

can't kill or restart ghosted containers (returns status 255)

This started happening when I restarted the VM I believe.

$ docker ps
CONTAINER ID        IMAGE                           COMMAND                CREATED             STATUS              PORTS                    NAMES
128cb396bf69        coreos/etcd:latest              /opt/etcd/bin/etcd     26 hours ago        Ghost               4001/tcp, 7001/tcp       service_etcd           

I tried killing all ghosted containers:

docker@boot2docker:/var/lib/docker$ docker ps | grep Ghost | awk '{print $1}' | xargs docker kill

Which resulted in errors like the following:

$ docker kill 128cb396bf69
Error: kill: Cannot kill container 128cb396bf69: exit status 255
2014/01/25 21:22:11 Error: failed to kill one or more containers

Things I tried:

After ssh'ing into the dvm (with dvm ssh) I noticed the date was way off, so I adjusted it to match my host system.
I also thought maybe there was an aufs mount issue, so I ran docker@boot2docker:~$ mount | awk '{print $3}' | grep aufs | xargs sudo umount (don't know if this is recommended...).

I manually edited /usr/local/etc/init.d/docker to pass the -D option to docker when it starts (I guess this is also configurable in ${HOME}/.dvm/dvm.conf, but I liked my way at the time).

I looked at the log file in /var/lib/docker/docker.log and saw the following:

[debug] api.go:1038 Calling POST /containers/{name:.*}/kill
2014/01/26 03:02:57 POST /v1.8/containers/128cb396bf69/kill
[/var/lib/docker|31a8991b] +job kill(128cb396bf69)
2014/01/26 03:02:57 error killing container 128cb396bf69 (lxc-kill: failed to get the init pid, exit status 255)
Cannot kill container 128cb396bf69: exit status 255[/var/lib/docker|31a8991b] -job kill(128cb396bf69) = ERR (1)
[error] api.go:1064 Error: kill: Cannot kill container 128cb396bf69: exit status 255
[error] api.go:87 HTTP Error: statusCode=500 kill: Cannot kill container 128cb396bf69: exit status 255

Anything else I should try?

need a section on port forwarding in instructions

Instructions about port forwarding should be added.

Typical scenario:

Container Ports -> Docker Host Ports

dvm scenario:

Container Ports -> Docker Host Ports (on Virtualbox) -> Docker Client Ports (on OSX)

The Vagrantfile does not set up any forwarded ports.

So, if you are using dvm and you want to browse to a network port from your OS X browser (e.g., 8080), then unless a forwarded port is set up manually in VirtualBox or by the Vagrantfile (not done at the moment), the connection will be refused from OS X.

Another caveat is that ports below 1024 cannot be forwarded. You can get around that with ipfw, but I haven't looked into it.
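Until the Vagrantfile grows forwarded-port support, one hedged workaround is an ad-hoc SSH tunnel through the VM (8080 below is only an example port; run it from the dvm home directory so Vagrant can find the machine):

# Forward local port 8080 on OS X to port 8080 inside the Docker VM
cd "$HOME/.dvm"
vagrant ssh -- -L 8080:localhost:8080 -N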

installed via brew tap, dvm up errors: 'Linux_64' is not a valid Guest OS type

$ dvm -v
dvm: 0.3.0
$ VBoxManage -v     
4.2.12r84980
$ vagrant -v
Vagrant 1.4.2

$ dvm up
Bringing machine 'dvm' up with 'virtualbox' provider...
[dvm] Importing base box 'boot2docker-0.4.0'...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["import", "-n", "/Users/drnic/.vagrant.d/boxes/boot2docker-0.4.0/virtualbox/box.ovf"]

Stderr: 0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Interpreting /Users/drnic/.vagrant.d/boxes/boot2docker-0.4.0/virtualbox/box.ovf...
VBoxManage: error: 'Linux_64' is not a valid Guest OS type
VBoxManage: error: Details: code NS_ERROR_INVALID_ARG (0x80070057), component VirtualBox, interface IVirtualBox, callee IAppliance
VBoxManage: error: Context: "Interpret" at line 330 of file VBoxManageAppliance.cpp

DVM doesn't finish booting

Hi,

Just installed and configured DVM; however, the machine never finishes booting, e.g.:

Bringing machine 'dvm' up with 'vmware_fusion' provider...
==> dvm: Verifying vmnet devices are healthy...
==> dvm: Preparing network adapters...
==> dvm: Starting the VMware VM...
==> dvm: Waiting for the VM to finish booting...

(screenshot omitted)

docker client can't connect to tcp://192.168.42.43:4243

Hi,

dvm starts up nicely and I can start containers, etc. After a while I can't reach it from the docker client:

2014/03/12 16:04:52 dial tcp 192.168.42.43:4243: operation timed out

I can still connect to Docker via dvm ssh, and the containers are still running.

environment

  • osx: 10.9
  • dvm: 0.4.1
  • docker (client side): 0.8.1, build a1598d1
  • DOCKER_HOST=tcp://192.168.42.43:4243

diagnostic

By checking the open ports, I can see VirtualBox listening:

lsof -i|grep 4243
VBoxHeadl 11971 lalyos   20u  IPv4 0x459e391648321dc7      0t0  TCP *:4243 (LISTEN)

Unable to locate ps

This is a new one to me.

$ dvm env
export DOCKER_HOST=tcp://192.168.42.43:4243

$ docker version
Client version: 0.7.4
Go version (client): go1.2
Git commit (client): 010d74e
Server version: 0.7.5
Git commit (server): c348c04
Go version (server): go1.2
Last stable version: 0.7.6, please update docker

$ docker run 8242d7285d23
Unable to locate 'ps'.
Please report this message along with the location of the command on your system.

$ which ps
/bin/ps

$ dvm ssh
                        ##        .
                  ## ## ##       ==
               ## ## ## ##      ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
boot2docker: 0.4.0
docker@boot2docker:~$ which ps
/bin/ps

dvm ssh works but can't run docker commands

Installation seemed to go OK, and I can run dvm ssh and get connected to the virtual machine, where I am greeted with the boot2docker MOTD and prompt. However, when I try to run docker commands, this is what happens:

$ docker run ubuntu cat /etc/passwd
2014/01/16 10:51:42 dial tcp 192.168.42.43:4243: operation timed out

and if I try to sudo it

$ sudo docker run ubuntu cat /etc/passwd
2014/01/16 10:52:51 dial unix /var/run/docker.sock: no such file or directory

I am sure I jacked something up here, but I'm not sure what. The docker.sock doesn't appear to be there. Any ideas?

Error response from daemon: client and server don't have same version

When I try to set up Docker with dvm, I get an error message:

FATA[0000] Error response from daemon: client and server don't have same version (client : 1.16, server: 1.14) 

Trace:

$ dvm up
Bringing machine 'dvm' up with 'virtualbox' provider...
==> dvm: Importing base box 'boot2docker-1.2.0'...
==> dvm: Matching MAC address for NAT networking...
==> dvm: Setting the name of the VM: dvm_dvm_1419118928038_21058
==> dvm: Clearing any previously set network interfaces...
==> dvm: Preparing network interfaces based on configuration...
    dvm: Adapter 1: nat
    dvm: Adapter 2: hostonly
==> dvm: Forwarding ports...
    dvm: 22 => 2222 (adapter 1)
==> dvm: Running 'pre-boot' VM customizations...
==> dvm: Booting VM...
==> dvm: Waiting for machine to boot. This may take a few minutes...
    dvm: SSH address: 127.0.0.1:2222
    dvm: SSH username: docker
    dvm: SSH auth method: private key
    dvm: Warning: Connection timeout. Retrying...
    dvm: 
    dvm: Vagrant insecure key detected. Vagrant will automatically replace
    dvm: this with a newly generated keypair for better security.
    dvm: 
    dvm: Inserting generated public key within guest...
    dvm: Removing insecure key from the guest if its present...
    dvm: Key inserted! Disconnecting and reconnecting using new SSH key...
==> dvm: Machine booted and ready!
No installation found.
The guest's platform is currently not supported, will try generic Linux method...
Copy iso file /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 4.3.20 - guest version is 
mkdir: can't create directory '/tmp/selfgz85231093': No such file or directory
Cannot create target directory /tmp/selfgz85231093
You should try option --target OtherDirectory
An error occurred during installation of VirtualBox Guest Additions 4.3.20. Some functionality may not work as intended.
In most cases it is OK that the "Window System drivers" installation failed.
==> dvm: Checking for guest additions in VM...
    dvm: No guest additions were detected on the base box for this VM! Guest
    dvm: additions are required for forwarded ports, shared folders, host only
    dvm: networking, and more. If SSH fails on this machine, please install
    dvm: the guest additions and repackage the box to continue.
    dvm: 
    dvm: This is not an error message; everything may continue to work properly,
    dvm: in which case you may ignore this message.
==> dvm: Configuring and enabling network interfaces...
==> dvm: Running provisioner: shell...
    dvm: Running: inline script
==> dvm: boot2docker: 1.2.0
$ eval $(dvm env)
$ docker images
FATA[0000] Error response from daemon: client and server don't have same version (client : 1.16, server: 1.14) 

System:

$ specs dvm docker vagrant virtualbox brew os
Specs:

specs 0.12
https://github.com/mcandre/specs#readme

dvm --version
dvm: 0.9.0

docker --version
Docker version 1.4.1, build 5bc2ff8

vagrant --version
Vagrant 1.7.1

vboxwebsrv --help 2>&1 | grep VirtualBox
Oracle VM VirtualBox web service Version 4.3.20

brew --version
0.9.5

system_profiler SPSoftwareDataType | grep 'System Version'
      System Version: OS X 10.10.1 (14B25)

dvm up Error: Connection timeout. Retrying...

localhost:bin medcl$ dvm up
Bringing machine 'dvm' up with 'virtualbox' provider...
==> dvm: Importing base box 'boot2docker-0.5.4-1'...
==> dvm: Matching MAC address for NAT networking...
==> dvm: Setting the name of the VM: dvm_dvm_1395208518522_99598
==> dvm: Clearing any previously set network interfaces...
==> dvm: Preparing network interfaces based on configuration...
dvm: Adapter 1: nat
dvm: Adapter 2: hostonly
==> dvm: Forwarding ports...
dvm: 4243 => 4243 (adapter 1)
dvm: 22 => 2222 (adapter 1)
==> dvm: Running 'pre-boot' VM customizations...
==> dvm: Booting VM...
==> dvm: Waiting for machine to boot. This may take a few minutes...
dvm: SSH address: 127.0.0.1:2222
dvm: SSH username: docker
dvm: SSH auth method: private key
dvm: Error: Connection timeout. Retrying...
dvm: Error: Connection timeout. Retrying...
dvm: Error: Connection timeout. Retrying...
dvm: Error: Connection timeout. Retrying...
^C==> dvm: Waiting for cleanup before exiting...
dvm: Error: Connection timeout. Retrying...
Vagrant exited after cleanup due to external interrupt.

VM keeps changing its IP address

I'm going in circles with this one.

I bring up the fresh VM and run eval $(dvm env). I also check to see what that exports:

❯❯❯ dvm env
export DOCKER_HOST=tcp://192.168.42.43:4243

If I dvm ssh and check ifconfig, sure enough, that's the IP address.

eth1      Link encap:Ethernet  HWaddr 08:00:27:6B:6F:FF
          inet addr:192.168.42.43  Bcast:192.168.42.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6b:6fff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1388 (1.3 KiB)  TX bytes:1838 (1.7 KiB)

And docker commands work fine. Great.

Then, a little while later, docker commands stop working. I can't ping 192.168.42.43 anymore. So I dvm ssh back in:

eth1      Link encap:Ethernet  HWaddr 08:00:27:6B:6F:FF
          inet addr:192.168.56.102  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6b:6fff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2672 errors:0 dropped:0 overruns:0 frame:0
          TX packets:350 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3838330 (3.6 MiB)  TX bytes:26060 (25.4 KiB)

It's moved to 192.168.56.102.

dvm reload correctly resets the VM's IP address (along with the entire machine).

Any clue what could be going on here? I haven't found a pattern to my activity that could be causing it.

DOCKER_PORT does not seem to alter port of Vagrantfile

The docker default port unfortunately conflicts with Crashplan on OSX, so I tried to set DOCKER_PORT in ~/.dvm/dvm.conf to something different, but it does not appear to have any effect.

lsof -n -i4TCP:4243 on OS X shows VBoxHeadl listening, and inside the boot2docker system top shows /usr/local/bin/docker -d -D -g /var/lib/docker -H unix:// -H tcp://0.0.0.0:4243. This is after destroying the dvm system to make sure the new DOCKER_PORT was picked up.

Is there a way to do this via DOCKER_ARGS that I am missing?
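One hedged check is to confirm, from inside the VM, which TCP ports the daemon actually bound after the destroy/up cycle; if it is still the old port, that would suggest the Vagrantfile is not picking the value up from dvm.conf:

# List listening TCP sockets inside the VM
dvm vagrant ssh -c 'sudo netstat -tln'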

Just use localhost for DOCKER_IP?

I'm sure I'm missing something, but when I bring up dvm with vmware fusion, it seems to have good network config:

docker@boot2docker:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN
    link/ether 72:4c:bb:80:e7:16 brd ff:ff:ff:ff:ff:ff
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:14:93:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.149.131/24 brd 192.168.149.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe14:93b2/64 scope link
       valid_lft forever preferred_lft forever
4: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:0c:29:14:93:bc brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.43/24 brd 192.168.42.255 scope global eth1
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever

But I simply can't connect to that IP or port; e.g., telnet times out.

Thanks to the port forwarding we do anyway, 127.0.0.1 works fine and seems like it would always work no matter how dhcp ends up shaking out. So, this seems like the most sane default, or like I'm doing something dumb.

Please advise.

docker is not working properly after starting dvm

I followed the instructions and added an environment variable DOCKER_CIDR of 172.18.0.1/16.
When trying to use docker from my Mac I get the following error:
➜ ~ docker images
2014/10/19 23:26:17 Get https://192.168.42.43:2375/v1.15/images/json: tls: oversized record received with length 20527
➜ ~ docker pull ubuntu
2014/10/19 23:26:22 Post https://192.168.42.43:2375/v1.15/images/create?fromImage=ubuntu%3Alatest: tls: oversized record received with length 20527

Any idea what is causing the error?
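The https:// scheme in those URLs suggests the docker client is trying to speak TLS to a daemon that is not serving it; a hedged first check is whether any TLS-related Docker variables are set in the shell:

# See what the client has been told about the daemon
env | grep DOCKER

# If DOCKER_TLS_VERIFY or DOCKER_CERT_PATH are set, clear them and re-point at dvm
unset DOCKER_TLS_VERIFY DOCKER_CERT_PATH
eval $(dvm env)
docker images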
