miguno / wirbelsturm

331 stars · 36 watchers · 72 forks · 316 KB

[PROJECT IS NO LONGER MAINTAINED] Wirbelsturm is a Vagrant- and Puppet-based tool for 1-click local and remote deployments, with a focus on big data tech like Kafka.

Home Page: http://www.michael-noll.com/blog/2014/03/17/wirbelsturm-one-click-deploy-storm-kafka-clusters-with-vagrant-puppet/

License: Other

Ruby 17.83% Shell 74.03% HTML 5.36% Puppet 2.78%
vagrant puppet kafka apache-kafka storm apache-storm spark apache-spark

wirbelsturm's People

Contributors

bradydoll, miguno


wirbelsturm's Issues

Failed dependencies on AWS deployment

Hi,

I'm attempting to use wirbelsturm on AWS. This is my first experience with wirbelsturm and Puppet.

I set up an instance as the primary box from which to run wirbelsturm. After getting that set up, I followed all of the AWS directions, and that all seemed to go fine. When I run "vagrant up" on my primary box, it successfully launches all of the instances on EC2. However, each instance (kafka1, nimbus1, supervisor1, etc.) on the deployment gets failed dependencies (listing for kafka1 section below).

The boxes themselves seem to launch fine, but the dependency failure appears to be total, in that no Storm, Kafka, etc. services end up running, which is probably obvious.

Immediately I get this warning:

==> kafka1: Warning! The AWS provider doesn't support any of the Vagrant
==> kafka1: high-level network configurations (`config.vm.network`). They
==> kafka1: will be silently ignored.

The first error is:

==> kafka1: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list kafka' returned 1: Error: No matching Packages to list

Followed later by:

==> kafka1: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list supervisor' returned 1: Existing lock /var/run/yum.pid: another copy is running as pid 4906.
==> kafka1: Another app is currently holding the yum lock; waiting for it to exit...
==> kafka1:   The other application is: yum
==> kafka1:     Memory :  49 M RSS (260 MB VSZ)
==> kafka1:     Started: Thu Feb 19 03:30:52 2015 - 00:01 ago
==> kafka1:     State  : Running, pid: 4906
==> kafka1: Error: No matching Packages to list
==> kafka1: Error: /Stage[main]/Supervisor/Package[supervisor]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y list supervisor' returned 1: Existing lock /var/run/yum.pid: another copy is running as pid 4906.

Any pointers or advice?

Thanks in advance!

Guerry

(kafka1 listing below).

------------------------------
==> kafka1: Notice: Scope(Class[main]): Deployment environment: 'staging-environment'
==> kafka1: Notice: Scope(Class[main]): I am running within a Vagrant-controlled environment
==> kafka1: Notice: Scope(Class[main]): Disabling firewall...
==> kafka1: Notice: Scope(Class[main]): I have been assigned the role 'kafka_broker'
==> kafka1: Notice: Compiled catalog for kafka1.gateway01.vintel.info in environment production in 1.66 seconds
==> kafka1: Notice: /Stage[main]/Firewall::Linux::Redhat/Service[iptables]/enable: enable changed 'true' to 'false'
==> kafka1: Notice: /Stage[main]/Kafka::Users/Group[kafka]/ensure: created
==> kafka1: Notice: /Stage[main]/Kafka::Users/User[kafka]/ensure: created
==> kafka1: Notice: /Stage[main]/Kafka::Install/File[/var/log/kafka]/ensure: created
==> kafka1: Notice: /Stage[main]/Kafka::Install/Kafka::Install::Create_log_dirs[/app/kafka/log]/Exec[create-kafka-log-directory-/app/kafka/log]/returns: executed successfully
==> kafka1: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list kafka' returned 1: Error: No matching Packages to list
==> kafka1: Error: /Stage[main]/Kafka::Install/Package[kafka]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y list kafka' returned 1: Error: No matching Packages to list
==> kafka1: Notice: /Stage[main]/Kafka::Install/File[/opt/kafka/logs]: Dependency Package[kafka] has failures: true
==> kafka1: Warning: /Stage[main]/Kafka::Install/File[/opt/kafka/logs]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Wirbelsturm_common::Install/File[/var/lib/cloud/data/scripts]/ensure: created
==> kafka1: Notice: /Stage[main]/Wirbelsturm_common::Install/Package[jq]/ensure: created
==> kafka1: Notice: /Stage[main]/Kafka::Install/Kafka::Install::Create_log_dirs[/app/kafka/log]/File[kafka-log-directory-/app/kafka/log]/owner: owner changed 'root' to 'kafka'
==> kafka1: Notice: /Stage[main]/Kafka::Install/Kafka::Install::Create_log_dirs[/app/kafka/log]/File[kafka-log-directory-/app/kafka/log]/group: group changed 'root' to 'kafka'
==> kafka1: Notice: /Stage[main]/Kafka::Install/Kafka::Install::Create_log_dirs[/app/kafka/log]/File[kafka-log-directory-/app/kafka/log]/mode: mode changed '0755' to '0750'
==> kafka1: Notice: /Stage[main]/Kafka::Config/File[/opt/kafka/config/log4j.properties]: Dependency Package[kafka] has failures: true
==> kafka1: Warning: /Stage[main]/Kafka::Config/File[/opt/kafka/config/log4j.properties]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Kafka::Config/File[/opt/kafka/config/server.properties]: Dependency Package[kafka] has failures: true
==> kafka1: Warning: /Stage[main]/Kafka::Config/File[/opt/kafka/config/server.properties]: Skipping because of failed dependencies
==> kafka1: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list supervisor' returned 1: Existing lock /var/run/yum.pid: another copy is running as pid 4906.
==> kafka1: Another app is currently holding the yum lock; waiting for it to exit...
==> kafka1:   The other application is: yum
==> kafka1:     Memory :  49 M RSS (260 MB VSZ)
==> kafka1:     Started: Thu Feb 19 03:30:52 2015 - 00:01 ago
==> kafka1:     State  : Running, pid: 4906
==> kafka1: Error: No matching Packages to list
==> kafka1: Error: /Stage[main]/Supervisor/Package[supervisor]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y list supervisor' returned 1: Existing lock /var/run/yum.pid: another copy is running as pid 4906.
==> kafka1: Another app is currently holding the yum lock; waiting for it to exit...
==> kafka1:   The other application is: yum
==> kafka1:     Memory :  49 M RSS (260 MB VSZ)
==> kafka1:     Started: Thu Feb 19 03:30:52 2015 - 00:01 ago
==> kafka1:     State  : Running, pid: 4906
==> kafka1: Error: No matching Packages to list
==> kafka1: Notice: /Stage[main]/Supervisor/File[/var/run/supervisor]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Supervisor/File[/var/run/supervisor]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Supervisor/File[/var/log/supervisor]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Supervisor/File[/var/log/supervisor]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Supervisor/File[/etc/logrotate.d/supervisor]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Supervisor/File[/etc/logrotate.d/supervisor]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Supervisor/File[/etc/supervisord.d]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Supervisor/File[/etc/supervisord.d]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Supervisor/File[/etc/supervisord.conf]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Supervisor/File[/etc/supervisord.conf]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Supervisor/Service[supervisord]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Supervisor/Service[supervisord]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Kafka::Service/Exec[restart-kafka-broker]: Dependency Package[kafka] has failures: true
==> kafka1: Notice: /Stage[main]/Kafka::Service/Exec[restart-kafka-broker]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Kafka::Service/Exec[restart-kafka-broker]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Kafka::Service/Supervisor::Service[kafka-broker]/File[/var/log/supervisor/kafka-broker]: Dependency Package[kafka] has failures: true
==> kafka1: Notice: /Stage[main]/Kafka::Service/Supervisor::Service[kafka-broker]/File[/var/log/supervisor/kafka-broker]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Kafka::Service/Supervisor::Service[kafka-broker]/File[/var/log/supervisor/kafka-broker]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Kafka::Service/Supervisor::Service[kafka-broker]/File[/etc/supervisord.d/kafka-broker.conf]: Dependency Package[kafka] has failures: true
==> kafka1: Notice: /Stage[main]/Kafka::Service/Supervisor::Service[kafka-broker]/File[/etc/supervisord.d/kafka-broker.conf]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Kafka::Service/Supervisor::Service[kafka-broker]/File[/etc/supervisord.d/kafka-broker.conf]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Kafka::Service/Supervisor::Service[kafka-broker]/Service[supervisor::kafka-broker]: Dependency Package[kafka] has failures: true
==> kafka1: Notice: /Stage[main]/Kafka::Service/Supervisor::Service[kafka-broker]/Service[supervisor::kafka-broker]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Kafka::Service/Supervisor::Service[kafka-broker]/Service[supervisor::kafka-broker]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Kafka/Anchor[kafka::end]: Dependency Package[kafka] has failures: true
==> kafka1: Notice: /Stage[main]/Kafka/Anchor[kafka::end]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Kafka/Anchor[kafka::end]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Supervisor::Update/Exec[supervisor::update]: Dependency Package[kafka] has failures: true
==> kafka1: Notice: /Stage[main]/Supervisor::Update/Exec[supervisor::update]: Dependency Package[supervisor] has failures: true
==> kafka1: Warning: /Stage[main]/Supervisor::Update/Exec[supervisor::update]: Skipping because of failed dependencies
==> kafka1: Notice: /Stage[main]/Wirbelsturm_common::Install/Package[java-runtime-environment]/ensure: created
==> kafka1: Notice: Finished catalog run in 17.04 seconds
==> kafka1: An error occurred. The error will be shown after all tasks complete.
------------------------------
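
In case it helps narrow this down, here is roughly how I've been poking at the failing box to see whether its yum repos actually offer the kafka package, and whether another yum process is still holding the lock (the repo setup itself is whatever wirbelsturm's Puppet code configured; I haven't verified it):

# SSH into the failing EC2 instance via Vagrant
vagrant ssh kafka1

# List the yum repositories the box knows about
yum repolist all

# Run the same query Puppet's yum provider runs
yum -d 0 -e 0 -y list kafka

# Check whether another yum process still holds the lock
sudo cat /var/run/yum.pid 2>/dev/null && ps -p "$(sudo cat /var/run/yum.pid)"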

Configuring Java 8

I'm trying to set up Java in https://github.com/miguno/wirbelsturm/blob/master/puppet/manifests/hieradata/common.yaml

as
wirbelsturm_common::java_package_name: 'java-1.8.0-sun'

I get the following error when running vagrant up:

Error: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y install java-1.8.0-sun' returned 1: Error: Nothing to do

I'm not sure if I am using the proper RPM package name. I've also tried jdk-8u5-linux-x64.rpm and jdk-8u5-linux-x64. Has anyone been able to deploy with Java 8?
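
For what it's worth, this is how I've been trying to find a package name that yum on the box will actually accept (just a sketch; I don't know which Java 8 package, if any, the configured repos carry):

# On any of the provisioned boxes
vagrant ssh zookeeper1

# See which Java 8 packages the configured yum repos actually offer
yum list available 'java-1.8.0*'
yum search openjdk | grep -i 1.8

# If the plan is a local Oracle JDK RPM instead, the box would need to
# reach it directly, e.g. via a synced folder:
# sudo yum -y localinstall /vagrant/jdk-8u5-linux-x64.rpm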

Zookeeper quorum issue

This might just be an issue with the puppet-zookeeper module, but since I'm using it in the context of wirbelsturm I'm posting it here.

In attempting to set up a ZooKeeper quorum, I'm running into the issue of the myid file never being created the first time the boxes are provisioned. I've uncommented the zookeeper::quorum section of the puppet/manifests/hieradata/environments/default-environment.yaml file, and I've added a separate zookeeperX.yaml file for each zookeeper node under puppet/manifests/hieradata/environments/default-environment.yaml.

On provisioning, zookeeper1 fails with the following error message:

==> zookeeper1: Notice: /Stage[main]/Zookeeper::Service/Exec[zookeeper-initialize]/returns: ZooKeeper data directory already exists at /var/lib/zookeeper (or use --force to force re-initialization)
==> zookeeper1: Error: /Stage[main]/Zookeeper::Service/Exec[zookeeper-initialize]: Failed to call refresh: service zookeeper-server init returned 1 instead of one of [0]
==> zookeeper1: Error: /Stage[main]/Zookeeper::Service/Exec[zookeeper-initialize]: service zookeeper-server init returned 1 instead of one of [0]

Adding --force to the ZooKeeper initialize command in service.pp gets rid of the error, but the myid file still never appears on the zookeeper node after a fresh provision (vagrant destroy, vagrant up). If I vagrant up and then vagrant provision, the myid files appear as expected.

I can't figure out why the file is not being created, especially since a few lines above the error message the log reports the following:

==> zookeeper1: Notice: /Stage[main]/Zookeeper::Service/File[zookeeper-myid]/ensure: defined content as '{md5}b026324c6904b2a9cb4b88d6d61c81d1'

I can only guess that the log is removed if I do a --force init, but without the force init the vagrant up fails.

Is there something I'm missing here? Some configuration setting that I haven't applied that makes this all go away?
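
In case it helps, this is the check I run after a fresh vagrant up to see what state the data directory is in, and how I re-run initialization by hand (the --myid flag is an assumption based on the packaged init script; --force is the flag the error message itself mentions):

vagrant ssh zookeeper1

# The myid file should live in the ZooKeeper data directory
ls -l /var/lib/zookeeper
cat /var/lib/zookeeper/myid

# Re-run initialization manually
sudo service zookeeper-server init --myid=1 --force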

aws-iam-tools deprecated

This breaks your aws-setup-iam script. I tried installing the deprecated library anyway and got this error:

Creating Wirbelsturm IAM group...
Exception in thread "main" java.lang.NoClassDefFoundError: com/amazonaws/services/auth/identity/cli/view/GroupCreateView
Caused by: java.lang.ClassNotFoundException: com.amazonaws.services.auth.identity.cli.view.GroupCreateView
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
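
Since aws-iam-tools is gone, maybe the same steps could be done with the current AWS CLI instead? A rough sketch of what I think aws-setup-iam is trying to do (the group and user names here are just placeholders, not necessarily what the script uses):

# Create an IAM group and user for Wirbelsturm, then link them
aws iam create-group --group-name wirbelsturm
aws iam create-user --user-name wirbelsturm-deploy
aws iam add-user-to-group --group-name wirbelsturm --user-name wirbelsturm-deploy

# Create access keys for the new user (needed by vagrant-aws)
aws iam create-access-key --user-name wirbelsturm-deploy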

error when preparing puppet environment

When I run ./bootstrap, the following error occurs:

Checking Puppet environment...
Checking for librarian-puppet: OK
Preparing Puppet environment...
/home/upyuntec/.rvm/gems/ruby-2.1.2/gems/librarian-puppet-0.9.17/lib/librarian/puppet.rb:11:in `rescue in <top (required)>': undefined local variable or method `status' for main:Object (NameError)
    from /home/upyuntec/.rvm/gems/ruby-2.1.2/gems/librarian-puppet-0.9.17/lib/librarian/puppet.rb:5:in `<top (required)>'
    from /home/upyuntec/.rvm/gems/ruby-2.1.2/gems/librarian-puppet-0.9.17/lib/librarian/puppet/cli.rb:4:in `require'
    from /home/upyuntec/.rvm/gems/ruby-2.1.2/gems/librarian-puppet-0.9.17/lib/librarian/puppet/cli.rb:4:in `<top (required)>'
    from /home/upyuntec/.rvm/gems/ruby-2.1.2/gems/librarian-puppet-0.9.17/bin/librarian-puppet:6:in `require'
    from /home/upyuntec/.rvm/gems/ruby-2.1.2/gems/librarian-puppet-0.9.17/bin/librarian-puppet:6:in `<top (required)>'
    from /home/upyuntec/.rvm/gems/ruby-2.1.2/bin/librarian-puppet:23:in `load'
    from /home/upyuntec/.rvm/gems/ruby-2.1.2/bin/librarian-puppet:23:in `<main>'
    from /home/upyuntec/.rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `eval'
    from /home/upyuntec/.rvm/gems/ruby-2.1.2/bin/ruby_executable_hooks:15:in `<main>'
Checking for known problems...

Do I need to install something else?
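
For reference, this is what I'm planning to try next, assuming the failure comes from the old librarian-puppet 0.9.17 gem rather than from wirbelsturm itself:

# Check which librarian-puppet version bootstrap picked up
gem list librarian-puppet

# Install a newer librarian-puppet into the same RVM gemset
gem install librarian-puppet

# Then re-run the wirbelsturm bootstrap
./bootstrap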

vagrant error on doing vagrant up

I am trying to bootstrap a few VMs with wirbelsturm on CentOS 6.6 and I am seeing the following logs. Can you please recommend a workaround or a way forward?

Thanks,
Chirag

[wirbelsturm]$ sudo vagrant up --debug
 INFO global: Vagrant version: 1.7.2
 INFO global: Ruby version: 2.0.0
 INFO global: RubyGems version: 2.0.14
 INFO global: VAGRANT_EXECUTABLE="/opt/vagrant/bin/../embedded/gems/gems/vagrant-1.7.2/bin/vagrant"
 INFO global: VAGRANT_INSTALLER_EMBEDDED_DIR="/opt/vagrant/bin/../embedded"
 INFO global: VAGRANT_INSTALLER_VERSION="2"
 INFO global: VAGRANT_DETECTED_OS="Linux"
 INFO global: VAGRANT_INSTALLER_ENV="1"
 INFO global: VAGRANT_INTERNAL_BUNDLERIZED="1"
 INFO global: VAGRANT_LOG="debug"
 INFO global: Plugins:
 INFO global:   - CFPropertyList = 2.3.0
 INFO global:   - builder = 3.2.2
 INFO global:   - bundler = 1.7.11
 INFO global:   - excon = 0.44.4
 INFO global:   - fission = 0.5.0
 INFO global:   - formatador = 0.2.5
 INFO global:   - mime-types = 1.25.1
 INFO global:   - net-ssh = 2.9.2
 INFO global:   - net-scp = 1.1.2
 INFO global:   - fog-core = 1.29.0
 INFO global:   - mini_portile = 0.6.0
 INFO global:   - nokogiri = 1.6.3.1
 INFO global:   - fog-xml = 0.1.1
 INFO global:   - fog-atmos = 0.1.0
 INFO global:   - multi_json = 1.11.0
 INFO global:   - fog-json = 1.0.0
 INFO global:   - ipaddress = 0.8.0
 INFO global:   - fog-aws = 0.1.1
 INFO global:   - inflecto = 0.0.2
 INFO global:   - fog-brightbox = 0.7.1
 INFO global:   - fog-ecloud = 0.0.2
 INFO global:   - fog-profitbricks = 0.0.1
 INFO global:   - fog-radosgw = 0.0.3
 INFO global:   - fog-riakcs = 0.1.0
 INFO global:   - fog-sakuracloud = 1.0.0
 INFO global:   - fog-serverlove = 0.1.1
 INFO global:   - fog-softlayer = 0.4.1
 INFO global:   - fog-storm_on_demand = 0.1.0
 INFO global:   - fog-terremark = 0.0.4
 INFO global:   - fog-vmfusion = 0.0.1
 INFO global:   - fog-voxel = 0.0.2
 INFO global:   - fog = 1.28.0
 INFO global:   - json = 1.8.2
 INFO global:   - rdoc = 4.2.0
 INFO global:   - rest-client = 1.6.8
 INFO global:   - vagrant-aws = 0.6.0
 INFO global:   - vagrant-awsinfo = 0.0.8
 INFO global:   - vagrant-hosts = 2.4.0
 INFO global:   - vagrant-share = 1.1.3
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/kernel_v2/plugin.rb
 INFO manager: Registered plugin: kernel
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/synced_folders/rsync/plugin.rb
 INFO manager: Registered plugin: RSync synced folders
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/synced_folders/nfs/plugin.rb
 INFO manager: Registered plugin: NFS synced folders
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/synced_folders/smb/plugin.rb
 INFO manager: Registered plugin: SMB synced folders
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/providers/hyperv/plugin.rb
 INFO manager: Registered plugin: Hyper-V provider
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/providers/docker/plugin.rb
 INFO manager: Registered plugin: docker-provider
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/providers/virtualbox/plugin.rb
 INFO manager: Registered plugin: VirtualBox provider
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/solaris/plugin.rb
 INFO manager: Registered plugin: Solaris guest.
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/pld/plugin.rb
 INFO manager: Registered plugin: PLD Linux guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/fedora/plugin.rb
 INFO manager: Registered plugin: Fedora guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/coreos/plugin.rb
 INFO manager: Registered plugin: CoreOS guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/tinycore/plugin.rb
 INFO manager: Registered plugin: TinyCore Linux guest.
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/omnios/plugin.rb
 INFO manager: Registered plugin: OmniOS guest.
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/mint/plugin.rb
 INFO manager: Registered plugin: Mint guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/windows/plugin.rb
 INFO manager: Registered plugin: Windows guest.
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/nixos/plugin.rb
 INFO manager: Registered plugin: NixOS guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/ubuntu/plugin.rb
 INFO manager: Registered plugin: Ubuntu guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/openbsd/plugin.rb
 INFO manager: Registered plugin: OpenBSD guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/solaris11/plugin.rb
 INFO manager: Registered plugin: Solaris 11 guest.
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/arch/plugin.rb
 INFO manager: Registered plugin: Arch guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/netbsd/plugin.rb
 INFO manager: Registered plugin: NetBSD guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/darwin/plugin.rb
 INFO manager: Registered plugin: Darwin guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/suse/plugin.rb
 INFO manager: Registered plugin: SUSE guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/redhat/plugin.rb
 INFO manager: Registered plugin: RedHat guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/linux/plugin.rb
 INFO manager: Registered plugin: Linux guest.
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/funtoo/plugin.rb
 INFO manager: Registered plugin: Funtoo guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/esxi/plugin.rb
 INFO manager: Registered plugin: ESXi guest.
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/smartos/plugin.rb
 INFO manager: Registered plugin: SmartOS guest.
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/gentoo/plugin.rb
 INFO manager: Registered plugin: Gentoo guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/freebsd/plugin.rb
 INFO manager: Registered plugin: FreeBSD guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/guests/debian/plugin.rb
 INFO manager: Registered plugin: Debian guest
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/resume/plugin.rb
 INFO manager: Registered plugin: resume command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/ssh_config/plugin.rb
 INFO manager: Registered plugin: ssh-config command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/ssh/plugin.rb
 INFO manager: Registered plugin: ssh command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/status/plugin.rb
 INFO manager: Registered plugin: status command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/help/plugin.rb
 INFO manager: Registered plugin: help command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/box/plugin.rb
 INFO manager: Registered plugin: box command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/global-status/plugin.rb
 INFO manager: Registered plugin: global-status command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/package/plugin.rb
 INFO manager: Registered plugin: package command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/halt/plugin.rb
 INFO manager: Registered plugin: halt command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/login/plugin.rb
 INFO manager: Registered plugin: vagrant-login
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/up/plugin.rb
 INFO manager: Registered plugin: up command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/plugin/plugin.rb
 INFO manager: Registered plugin: plugin command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/reload/plugin.rb
 INFO manager: Registered plugin: reload command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/init/plugin.rb
 INFO manager: Registered plugin: init command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/version/plugin.rb
 INFO manager: Registered plugin: version command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/rdp/plugin.rb
 INFO manager: Registered plugin: rdp command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/destroy/plugin.rb
 INFO manager: Registered plugin: destroy command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/provision/plugin.rb
 INFO manager: Registered plugin: provision command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/suspend/plugin.rb
 INFO manager: Registered plugin: suspend command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/push/plugin.rb
 INFO manager: Registered plugin: push command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/list-commands/plugin.rb
 INFO manager: Registered plugin: list-commands command
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/communicators/ssh/plugin.rb
 INFO manager: Registered plugin: ssh communicator
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/communicators/winrm/plugin.rb
 INFO manager: Registered plugin: winrm communicator
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/kernel_v1/plugin.rb
 INFO manager: Registered plugin: kernel
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/pushes/noop/plugin.rb
 INFO manager: Registered plugin: noop
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/pushes/ftp/plugin.rb
 INFO manager: Registered plugin: ftp
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/pushes/local-exec/plugin.rb
 INFO manager: Registered plugin: local-exec
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/pushes/heroku/plugin.rb
 INFO manager: Registered plugin: heroku
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/pushes/atlas/plugin.rb
 INFO manager: Registered plugin: atlas
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/provisioners/salt/plugin.rb
 INFO manager: Registered plugin: salt
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/provisioners/chef/plugin.rb
 INFO manager: Registered plugin: chef
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/provisioners/shell/plugin.rb
 INFO manager: Registered plugin: shell
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/provisioners/docker/plugin.rb
 INFO manager: Registered plugin: docker
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/provisioners/puppet/plugin.rb
 INFO manager: Registered plugin: puppet
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/provisioners/file/plugin.rb
 INFO manager: Registered plugin: file
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/provisioners/ansible/plugin.rb
 INFO manager: Registered plugin: ansible
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/provisioners/cfengine/plugin.rb
 INFO manager: Registered plugin: CFEngine Provisioner
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/null/plugin.rb
 INFO manager: Registered plugin: null host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/bsd/plugin.rb
 INFO manager: Registered plugin: BSD host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/windows/plugin.rb
 INFO manager: Registered plugin: Windows host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/slackware/plugin.rb
 INFO manager: Registered plugin: Slackware host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/arch/plugin.rb
 INFO manager: Registered plugin: Arch host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/darwin/plugin.rb
 INFO manager: Registered plugin: Mac OS X host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/suse/plugin.rb
 INFO manager: Registered plugin: SUSE host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/redhat/plugin.rb
 INFO manager: Registered plugin: Red Hat host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/linux/plugin.rb
 INFO manager: Registered plugin: Linux host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/gentoo/plugin.rb
 INFO manager: Registered plugin: Gentoo host
DEBUG global: Loading core plugin: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/hosts/freebsd/plugin.rb
 INFO manager: Registered plugin: FreeBSD host
 INFO global: Loading plugins!
 INFO manager: Registered plugin: vagrant-share
 INFO manager: Registered plugin: hosts
 INFO manager: Registered plugin: AWS
 INFO manager: Registered plugin: Query SSH info from AWS guests.
 INFO vagrant: `vagrant` invoked: ["up", "--debug"]
DEBUG vagrant: Creating Vagrant environment
 INFO environment: Environment initialized (#<Vagrant::Environment:0x00000001a94df8>)
 INFO environment:   - cwd: /opt/wirbelsturm
 INFO environment: Home path: /root/.vagrant.d
 INFO environment: Local data path: /opt/wirbelsturm/.vagrant
DEBUG environment: Creating: /opt/wirbelsturm/.vagrant
 INFO environment: Running hook: environment_plugins_loaded
 INFO runner: Preparing hooks for middleware sequence...
 INFO runner: 1 hooks defined.
 INFO runner: Running action: #<Vagrant::Action::Builder:0x00000001c3f608>
 INFO environment: Running hook: environment_load
 INFO runner: Preparing hooks for middleware sequence...
 INFO runner: 1 hooks defined.
 INFO runner: Running action: #<Vagrant::Action::Builder:0x00000001d76440>
 INFO cli: CLI: [] "up" []
DEBUG cli: Invoking command class: VagrantPlugins::CommandUp::Command []
DEBUG command: 'Up' each target VM...
 INFO loader: Set :root = #<Pathname:/opt/wirbelsturm/Vagrantfile>
DEBUG loader: Populating proc cache for #<Pathname:/opt/wirbelsturm/Vagrantfile>
DEBUG loader: Load procs for pathname: /opt/wirbelsturm/Vagrantfile
 INFO root: Version requirements from Vagrantfile: [">= 1.6.5"]
 INFO root:   - Version requirements satisfied!
 INFO loader: Loading configuration in order: [:home, :root]
DEBUG loader: Loading from: root (evaluating)
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
DEBUG command: Getting target VMs for command. Arguments:
DEBUG command:  -- names: ["zookeeper1", "nimbus1", "supervisor1", "supervisor2", "kafka1"]
DEBUG command:  -- options: {:provider=>nil}
DEBUG command: Finding machine that match name: zookeeper1
 INFO loader: Set "15629680_machine_zookeeper1" = [["2", #<Proc:0x00000000d606f0@/opt/wirbelsturm/Vagrantfile:26>]]
DEBUG loader: Populating proc cache for ["2", #<Proc:0x00000000d606f0@/opt/wirbelsturm/Vagrantfile:26>]
 INFO loader: Loading configuration in order: [:home, :root, "15629680_machine_zookeeper1"]
DEBUG loader: Loading from: root (cache)
DEBUG loader: Loading from: 15629680_machine_zookeeper1 (evaluating)
DEBUG provisioner: Provisioner defined: 
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
 INFO base: VBoxManage path: VBoxManage
 INFO subprocess: Starting process: ["/usr/bin/VBoxManage", "--version"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: 4.3.22r98236
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG meta: Finding driver for VirtualBox version: 4.3.22
 INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_4_3
 INFO base: VBoxManage path: VBoxManage
 INFO environment: Getting machine: zookeeper1 (virtualbox)
 INFO environment: Uncached load of machine.
 INFO base: VBoxManage path: VBoxManage
 INFO subprocess: Starting process: ["/usr/bin/VBoxManage", "--version"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: 4.3.22r98236
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG meta: Finding driver for VirtualBox version: 4.3.22
 INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_4_3
 INFO base: VBoxManage path: VBoxManage
 INFO loader: Set "15629680_machine_zookeeper1" = [["2", #<Proc:0x00000000d606f0@/opt/wirbelsturm/Vagrantfile:26>]]
 INFO loader: Loading configuration in order: [:home, :root, "15629680_machine_zookeeper1"]
DEBUG loader: Loading from: root (cache)
DEBUG loader: Loading from: 15629680_machine_zookeeper1 (cache)
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
 INFO box_collection: Box found: centos6-compatible (virtualbox)
 INFO loader: Set :"16650140_centos6-compatible_virtualbox" = #<Pathname:/root/.vagrant.d/boxes/centos6-compatible/0/virtualbox/Vagrantfile>
DEBUG loader: Populating proc cache for #<Pathname:/root/.vagrant.d/boxes/centos6-compatible/0/virtualbox/Vagrantfile>
DEBUG loader: Load procs for pathname: /root/.vagrant.d/boxes/centos6-compatible/0/virtualbox/Vagrantfile
 INFO loader: Loading configuration in order: [:"16650140_centos6-compatible_virtualbox", :home, :root, "15629680_machine_zookeeper1"]
DEBUG loader: Loading from: 16650140_centos6-compatible_virtualbox (evaluating)
DEBUG loader: Loading from: root (cache)
DEBUG loader: Loading from: 15629680_machine_zookeeper1 (cache)
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
 INFO loader: Set :"15629680_vm_zookeeper1_centos6-compatible_virtualbox" = [["2", #<Proc:0x00000001ba74c0>]]
DEBUG loader: Populating proc cache for ["2", #<Proc:0x00000001ba74c0>]
 INFO loader: Loading configuration in order: [:"16650140_centos6-compatible_virtualbox", :home, :root, "15629680_machine_zookeeper1", :"15629680_vm_zookeeper1_centos6-compatible_virtualbox"]
DEBUG loader: Loading from: 16650140_centos6-compatible_virtualbox (cache)
DEBUG loader: Loading from: root (cache)
DEBUG loader: Loading from: 15629680_machine_zookeeper1 (cache)
DEBUG loader: Loading from: 15629680_vm_zookeeper1_centos6-compatible_virtualbox (evaluating)
DEBUG provisioner: Provisioner defined: 
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
 INFO machine: Initializing machine: zookeeper1
 INFO machine:   - Provider: VagrantPlugins::ProviderVirtualBox::Provider
 INFO machine:   - Box: #<Vagrant::Box:0x00000001ad00b0>
 INFO machine:   - Data dir: /opt/wirbelsturm/.vagrant/machines/zookeeper1/virtualbox
DEBUG virtualbox: Instantiating the driver for machine ID: nil
 INFO base: VBoxManage path: VBoxManage
 INFO subprocess: Starting process: ["/usr/bin/VBoxManage", "--version"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: 4.3.22r98236
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG meta: Finding driver for VirtualBox version: 4.3.22
 INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_4_3
 INFO base: VBoxManage path: VBoxManage
 INFO machine: New machine ID: nil
DEBUG virtualbox: Instantiating the driver for machine ID: nil
 INFO base: VBoxManage path: VBoxManage
 INFO subprocess: Starting process: ["/usr/bin/VBoxManage", "--version"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: 4.3.22r98236
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG meta: Finding driver for VirtualBox version: 4.3.22
 INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_4_3
 INFO base: VBoxManage path: VBoxManage
DEBUG command: Finding machine that match name: nimbus1
 INFO loader: Set "15629680_machine_nimbus1" = [["2", #<Proc:0x00000000d63760@/opt/wirbelsturm/Vagrantfile:26>]]
DEBUG loader: Populating proc cache for ["2", #<Proc:0x00000000d63760@/opt/wirbelsturm/Vagrantfile:26>]
 INFO loader: Loading configuration in order: [:home, :root, "15629680_machine_nimbus1"]
ERROR loader: Unknown config sources: ["15629680_machine_zookeeper1", :"16650140_centos6-compatible_virtualbox", :"15629680_vm_zookeeper1_centos6-compatible_virtualbox"]
DEBUG loader: Loading from: root (cache)
DEBUG loader: Loading from: 15629680_machine_nimbus1 (evaluating)
DEBUG provisioner: Provisioner defined: 
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
 INFO base: VBoxManage path: VBoxManage
 INFO subprocess: Starting process: ["/usr/bin/VBoxManage", "--version"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: 4.3.22r98236
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG meta: Finding driver for VirtualBox version: 4.3.22
 INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_4_3
 INFO base: VBoxManage path: VBoxManage
 INFO environment: Getting machine: nimbus1 (virtualbox)
 INFO environment: Uncached load of machine.
 INFO base: VBoxManage path: VBoxManage
 INFO subprocess: Starting process: ["/usr/bin/VBoxManage", "--version"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: 4.3.22r98236
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG meta: Finding driver for VirtualBox version: 4.3.22
 INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_4_3
 INFO base: VBoxManage path: VBoxManage
 INFO loader: Set "15629680_machine_nimbus1" = [["2", #<Proc:0x00000000d63760@/opt/wirbelsturm/Vagrantfile:26>]]
 INFO loader: Loading configuration in order: [:home, :root, "15629680_machine_nimbus1"]
ERROR loader: Unknown config sources: ["15629680_machine_zookeeper1", :"16650140_centos6-compatible_virtualbox", :"15629680_vm_zookeeper1_centos6-compatible_virtualbox"]
DEBUG loader: Loading from: root (cache)
DEBUG loader: Loading from: 15629680_machine_nimbus1 (cache)
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
 INFO box_collection: Box found: centos6-compatible (virtualbox)
 INFO loader: Set :"16650140_centos6-compatible_virtualbox" = #<Pathname:/root/.vagrant.d/boxes/centos6-compatible/0/virtualbox/Vagrantfile>
 INFO loader: Loading configuration in order: [:"16650140_centos6-compatible_virtualbox", :home, :root, "15629680_machine_nimbus1"]
ERROR loader: Unknown config sources: ["15629680_machine_zookeeper1", :"15629680_vm_zookeeper1_centos6-compatible_virtualbox"]
DEBUG loader: Loading from: 16650140_centos6-compatible_virtualbox (cache)
DEBUG loader: Loading from: root (cache)
DEBUG loader: Loading from: 15629680_machine_nimbus1 (cache)
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
 INFO loader: Set :"15629680_vm_nimbus1_centos6-compatible_virtualbox" = [["2", #<Proc:0x000000019f5578>]]
DEBUG loader: Populating proc cache for ["2", #<Proc:0x000000019f5578>]
 INFO loader: Loading configuration in order: [:"16650140_centos6-compatible_virtualbox", :home, :root, "15629680_machine_nimbus1", :"15629680_vm_nimbus1_centos6-compatible_virtualbox"]
ERROR loader: Unknown config sources: ["15629680_machine_zookeeper1", :"15629680_vm_zookeeper1_centos6-compatible_virtualbox"]
DEBUG loader: Loading from: 16650140_centos6-compatible_virtualbox (cache)
DEBUG loader: Loading from: root (cache)
DEBUG loader: Loading from: 15629680_machine_nimbus1 (cache)
DEBUG loader: Loading from: 15629680_vm_nimbus1_centos6-compatible_virtualbox (evaluating)
DEBUG provisioner: Provisioner defined: 
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
 INFO machine: Initializing machine: nimbus1
 INFO machine:   - Provider: VagrantPlugins::ProviderVirtualBox::Provider
 INFO machine:   - Box: #<Vagrant::Box:0x00000001b50fd0>
 INFO machine:   - Data dir: /opt/wirbelsturm/.vagrant/machines/nimbus1/virtualbox
DEBUG virtualbox: Instantiating the driver for machine ID: nil
 INFO base: VBoxManage path: VBoxManage
 INFO subprocess: Starting process: ["/usr/bin/VBoxManage", "--version"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: 4.3.22r98236
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG meta: Finding driver for VirtualBox version: 4.3.22
 INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_4_3
 INFO base: VBoxManage path: VBoxManage
 INFO machine: New machine ID: nil
DEBUG virtualbox: Instantiating the driver for machine ID: nil
 INFO base: VBoxManage path: VBoxManage
 INFO subprocess: Starting process: ["/usr/bin/VBoxManage", "--version"]
DEBUG subprocess: Selecting on IO
DEBUG subprocess: stdout: 4.3.22r98236
DEBUG subprocess: Waiting for process to exit. Remaining to timeout: 32000
DEBUG subprocess: Exit status: 0
DEBUG meta: Finding driver for VirtualBox version: 4.3.22
 INFO meta: Using VirtualBox driver: VagrantPlugins::ProviderVirtualBox::Driver::Version_4_3
 INFO base: VBoxManage path: VBoxManage
DEBUG command: Finding machine that match name: supervisor1
 INFO loader: Set "15629680_machine_supervisor1" = [["2", #<Proc:0x00000000d607b8@/opt/wirbelsturm/Vagrantfile:26>]]
DEBUG loader: Populating proc cache for ["2", #<Proc:0x00000000d607b8@/opt/wirbelsturm/Vagrantfile:26>]
 INFO loader: Loading configuration in order: [:home, :root, "15629680_machine_supervisor1"]
ERROR loader: Unknown config sources: ["15629680_machine_zookeeper1", :"16650140_centos6-compatible_virtualbox", :"15629680_vm_zookeeper1_centos6-compatible_virtualbox", "15629680_machine_nimbus1", :"15629680_vm_nimbus1_centos6-compatible_virtualbox"]
DEBUG loader: Loading from: root (cache)
DEBUG loader: Loading from: 15629680_machine_supervisor1 (evaluating)
DEBUG provisioner: Provisioner defined: 
DEBUG loader: Configuration loaded successfully, finalizing and returning
DEBUG push: finalizing
 INFO environment: Running hook: environment_unload
 INFO host: Autodetecting host type for [#<Vagrant::Environment: /opt/wirbelsturm>]
DEBUG host: Trying: slackware
DEBUG host: Trying: arch
DEBUG host: Trying: darwin
DEBUG host: Trying: suse
DEBUG host: Trying: redhat
 INFO host: Detected: redhat!
 INFO runner: Preparing hooks for middleware sequence...
 INFO runner: 1 hooks defined.
 INFO runner: Running action: #<Vagrant::Action::Builder:0x000000022f2d88>
ERROR vagrant: Vagrant experienced an error! Details:
ERROR vagrant: #<Vagrant::Errors::VagrantfileLoadError: There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.

Path: <provider config: virtualbox>
Message: no implicit conversion of Symbol into Integer>
ERROR vagrant: There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.

Path: <provider config: virtualbox>
Message: no implicit conversion of Symbol into Integer
ERROR vagrant: /opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/kernel_v2/config/vm.rb:441:in `rescue in block in finalize!'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/kernel_v2/config/vm.rb:434:in `block in finalize!'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/kernel_v2/config/vm.rb:423:in `each'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/kernel_v2/config/vm.rb:423:in `finalize!'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/config/v2/root.rb:50:in `block in finalize!'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/config/v2/root.rb:49:in `each'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/config/v2/root.rb:49:in `finalize!'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/config/v2/loader.rb:21:in `finalize'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/config/loader.rb:158:in `load'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/vagrantfile.rb:149:in `machine_config'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:325:in `default_provider'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:165:in `block in with_target_vms'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:192:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:192:in `block in with_target_vms'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:174:in `each'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/plugin/v2/command.rb:174:in `with_target_vms'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/up/command.rb:74:in `block in execute'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:277:in `block (2 levels) in batch'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:275:in `tap'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:275:in `block in batch'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:274:in `synchronize'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:274:in `batch'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/plugins/commands/up/command.rb:58:in `execute'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/cli.rb:42:in `execute'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.2/lib/vagrant/environment.rb:301:in `cli'
/opt/vagrant/bin/../embedded/gems/gems/vagrant-1.7.2/bin/vagrant:174:in `<main>'
 INFO interface: error: There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.

Path: <provider config: virtualbox>
Message: no implicit conversion of Symbol into Integer
There was an error loading a Vagrantfile. The file being loaded
and the error message are shown below. This is usually caused by
a syntax error.

Path: <provider config: virtualbox>
Message: no implicit conversion of Symbol into Integer
 INFO interface: Machine: error-exit ["Vagrant::Errors::VagrantfileLoadError", "There was an error loading a Vagrantfile. The file being loaded\nand the error message are shown below. This is usually caused by\na syntax error.\n\nPath: <provider config: virtualbox>\nMessage: no implicit conversion of Symbol into Integer"]
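
In case it matters, this is how I'm checking the installed Vagrant plugins and how I was planning to bisect which one (if any) breaks the VirtualBox provider config; the specific plugin and version below are only an example, not a known fix:

# List installed Vagrant plugins and their versions
vagrant plugin list

# Swap a suspect plugin for a different version, e.g. vagrant-aws
vagrant plugin uninstall vagrant-aws
vagrant plugin install vagrant-aws --plugin-version 0.5.0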

"Error when launching multilang subprocess" with pyleus

I encounter this error when submitting a topology with pyleus.

2014-12-09 09:50:18 b.s.d.executor [ERROR] 
java.lang.RuntimeException: Error when launching multilang subprocess
pyleus_venv/bin/python: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

    at backtype.storm.utils.ShellProcess.launch(ShellProcess.java:66) ~[storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.task.ShellBolt.prepare(ShellBolt.java:117) ~[storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.daemon.executor$fn__3441$fn__3453.invoke(executor.clj:692) ~[storm-core-0.9.3.jar:0.9.3]
    at backtype.storm.util$async_loop$fn__464.invoke(util.clj:461) ~[storm-core-0.9.3.jar:0.9.3]
    at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
Caused by: java.io.IOException: Stream closed
    at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:434) ~[na:1.7.0_71]
    at java.io.OutputStream.write(OutputStream.java:116) ~[na:1.7.0_71]
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[na:1.7.0_71]
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[na:1.7.0_71]
    at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[na:1.7.0_71]
    at com.yelp.pyleus.serializer.MessagePackSerializer.writeMessage(MessagePackSerializer.java:208) ~[stormjar.jar:na]
    at com.yelp.pyleus.serializer.MessagePackSerializer.connect(MessagePackSerializer.java:65) ~[stormjar.jar:na]
    at backtype.storm.utils.ShellProcess.launch(ShellProcess.java:64) ~[storm-core-0.9.3.jar:0.9.3]
    ... 5 common frames omitted
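
The failing piece seems to be the bundled virtualenv Python, which can't load libssl.so.1.0.0. This is roughly how I'm inspecting it on the supervisor box (the paths are from my setup and may differ; installing openssl via yum is just a guess at a fix on the CentOS base box):

vagrant ssh supervisor1

# Inspect the topology's bundled Python interpreter
# (run from wherever the worker unpacked pyleus_venv)
ldd pyleus_venv/bin/python | grep -i ssl

# Check what OpenSSL the box itself provides
ls -l /usr/lib64/libssl.so*
sudo yum -y install openssl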

Some hostnames in cluster not resolvable, workers have UnresolvedAddressException

Summary

Some of the hosts in the cluster are not resolvable, and connection failures result. The /etc/hosts entries are neither complete nor consistent across the cluster. The specific symptom observed is a worker throwing UnresolvedAddressExceptions while trying to connect to other supervisors.

Workaround

Of course, it is easily worked around by amending the hosts file on each machine to add entries for all machines, as in the sketch below.
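
The entries I added by hand looked roughly like this; zookeeper1, nimbus1, and supervisor1 match the /etc/hosts snippets further down, while the supervisor2 address is simply the one my cluster happened to get:

# Appended to /etc/hosts on every node
10.0.0.241 zookeeper1
10.0.0.251 nimbus1
10.0.0.101 supervisor1
10.0.0.102 supervisor2   # adjust to whatever your cluster assigned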

Example error

E.g. a worker running on supervisor1 is unable to connect to supervisor2:

2015-02-10 08:01:35 b.s.m.n.StormClientErrorHandler [INFO] Connection failed Netty-Client-supervisor2:6700
java.nio.channels.UnresolvedAddressException: null
        at sun.nio.ch.Net.checkAddress(Net.java:127) ~[na:1.7.0_75]
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644) ~[na:1.7.0_75]
        at org.apache.storm.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108) [storm-core-0.9.3.jar:0.9.3]
        at org.apache.storm.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70) [storm-core-0.9.3.jar:0.9.3]
        at org.apache.storm.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779) [storm-core-0.9.3.jar:0.9.3]
        at org.apache.storm.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54) [storm-core-0.9.3.jar:0.9.3]
        at org.apache.storm.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591) [storm-core-0.9.3.jar:0.9.3]
        at org.apache.storm.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582) [storm-core-0.9.3.jar:0.9.3]
        at org.apache.storm.netty.channel.Channels.connect(Channels.java:634) [storm-core-0.9.3.jar:0.9.3]
        at org.apache.storm.netty.channel.AbstractChannel.connect(AbstractChannel.java:207) [storm-core-0.9.3.jar:0.9.3]
        at org.apache.storm.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:229) [storm-core-0.9.3.jar:0.9.3]
        at org.apache.storm.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:182) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.messaging.netty.Client.connect(Client.java:152) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.messaging.netty.Client.access$000(Client.java:43) [storm-core-0.9.3.jar:0.9.3]
        at backtype.storm.messaging.netty.Client$1.run(Client.java:107) [storm-core-0.9.3.jar:0.9.3]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_75]
        at java.util.concurrent.FutureTask.run(FutureTask.java:262) [na:1.7.0_75]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_75]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) [na:1.7.0_75]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_75]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_75]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]

Environment Info

Current machine states:
zookeeper1 running (virtualbox)
nimbus1 running (virtualbox)
supervisor1 running (virtualbox)
supervisor2 running (virtualbox)
kafka1 running (virtualbox)

[vagrant@supervisor1 ~]$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 supervisor1
10.0.0.251 nimbus1
10.0.0.101 supervisor1
10.0.0.241 zookeeper1

[vagrant@nimbus1 ~]$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 nimbus1
10.0.0.251 nimbus1
10.0.0.241 zookeeper1

Connection timeout

Hello! I've just cloned the repository, set everything up, bootstrapped it, and when deploying the first VirtualBox host I get a connection timeout, after which deployment stops. What could I do to get it working?

Ubuntu 13.10, Java 7, VirtualBox 4.3.14, Vagrant 1.6.3.
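
This is how I'm gathering more detail on the timeout, in case that narrows it down (zookeeper1 is just the first machine Vagrant brings up in my case):

# Bring up only the first machine with full Vagrant debug logging
VAGRANT_LOG=debug vagrant up zookeeper1

# Once it is (or isn't) booted, check whether SSH is reachable at all
vagrant ssh-config zookeeper1
vagrant ssh zookeeper1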

Connection Issue

I am trying to set up wirbelsturm on my Mac for development against a Kafka-Storm cluster. I can get through bootstrap and bringing the instances up with VirtualBox, but I am having issues connecting to kafka1 / zookeeper1.

First, after bringing the boxes up with Vagrant, my Mac knows nothing of the zookeeper1/kafka1/etc. hosts. I'm not sure if this gets set automatically or not. Ansible and Vagrant know about those names but my system does not. I figured out that the IP for these machines is defined to be something after ip_range_start.

Then, once the systems are up, I have a hard time connecting to ZooKeeper / Kafka. Clients give me "connection not found" or unresolved-address errors when trying to connect to the server. Some commands seem to work, though, such as:

bin/kafka-topics.sh --create --zookeeper 10.1.0.241:2181 --replication-factor 1 --partitions 1 --topic analytics
Created topic "analytics".

bin/kafka-topics.sh --describe --zookeeper 10.1.0.241:2181 --topic analytics
Topic:analytics  PartitionCount:1   ReplicationFactor:1 Configs:
    Topic: analytics    Partition: 0    Leader: 0   Replicas: 0 Isr: 0

but others don't...

bin/kafka-console-producer.sh --broker-list 10.0.0.21:9092 --topic analytics
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
{}
[2014-11-12 22:26:30,315] ERROR Producer connection to kafka1:9092 unsuccessful     (kafka.producer.SyncProducer)
java.nio.channels.UnresolvedAddressException
...
[2014-11-13 00:06:19,191] WARN Failed to send producer request with correlation id 2 to broker 0 with     data for partitions [analytics,0] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.UnresolvedAddressException

bin/kafka-console-consumer.sh --zookeeper 10.1.0.241:2181 --topic analytics --from-beginning
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[2014-11-13 00:01:28,093] ERROR Producer connection to kafka1:9092 unsuccessful (kafka.producer.SyncProducer)
java.nio.channels.UnresolvedAddressException

Nothing stands out in the Kafka logs or any problems with the zookeeper quorum.

Here is the log from vagrant up: http://pastebin.com/0XU43rrFH
Here is a log after performing a vagrant up, creating a topic on Kafka, trying to connect a consumer, and checking the logs: http://pastebin.com/mVfKs66r

Any ideas on how to debug this issue? Any logs that might help? I can't tell from the logs provided whether it's due to ZooKeeper or a connection issue.
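One observation, offered as a hypothesis rather than a confirmed diagnosis: the kafka-topics.sh commands above talk to ZooKeeper directly by IP, whereas the console producer and consumer fetch broker metadata from ZooKeeper and then try to connect to the registered broker host "kafka1", which the Mac cannot resolve; hence the UnresolvedAddressException. A quick check from the host (the hostname is taken from the error messages above, the IP from the --broker-list argument above):

ping -c 1 kafka1 || echo "kafka1 does not resolve on this machine"
# if it does not resolve, add kafka1 to the host's /etc/hosts as sketched earlier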

On another note, do you think it would be good to set up a Google group for questions like this?

Storm Supervisor provisioning broken?

Hi,

I've pulled the latest source, destroyed the old VMs (zookeeper, nimbus, supervisor and kafka) and tried provisioning them from scratch. Provisioning got stuck on the supervisor machine with this error output:

Stderr from the command:

Warning: Could not retrieve fact fqdn
Warning: Host is missing hostname and/or domain: supervisor1
Error: Could not prefetch package provider 'yum': Execution of '/usr/bin/python /usr/lib/ruby/site_ruby/1.8/puppet/provider/package/yumhelper.py' returned 1: Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirror.zarhi.com
 * extras: mirror.zarhi.com
 * updates: mirror.zarhi.com
 * wirbelsturm-epel-6: mirror.pmf.kg.ac.rs
Traceback (most recent call last):
  File "/usr/lib/ruby/site_ruby/1.8/puppet/provider/package/yumhelper.py", line 115, in <module>
    ypl = pkg_lists(my)
  File "/usr/lib/ruby/site_ruby/1.8/puppet/provider/package/yumhelper.py", line 40, in pkg_lists
    my.doTsSetup()
  File "/usr/lib/python2.6/site-packages/yum/depsolve.py", line 84, in doTsSetup
    return self._getTs()
  File "/usr/lib/python2.6/site-packages/yum/depsolve.py", line 99, in _getTs
    self._getTsInfo(remove_only)
  File "/usr/lib/python2.6/site-packages/yum/depsolve.py", line 110, in _getTsInfo
    pkgSack = self.pkgSack
  File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 887, in <lambda>
    pkgSack = property(fget=lambda self: self._getSacks(),
  File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 669, in _getSacks
    self.repos.populateSack(which=repos)
  File "/usr/lib/python2.6/site-packages/yum/repos.py", line 308, in populateSack
    sack.populate(repo, mdtype, callback, cacheonly)
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 165, in populate
    if self._check_db_version(repo, mydbtype):
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 223, in _check_db_version
    return repo._check_db_version(mdtype)
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1256, in _check_db_version
    repoXML = self.repoXML
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1455, in <lambda>
    repoXML = property(fget=lambda self: self._getRepoXML(),
  File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1451, in _getRepoXML
    raise Errors.RepoError, msg
yum.Errors.RepoError: Cannot retrieve repository metadata (repomd.xml) for repository: miguno. Please verify its path and try again

Error: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y downgrade puppet-3.3.1-1.el6' returned 1: 

Error Downloading Packages:
  puppet-3.3.1-1.el6.noarch: failure: puppet-3.3.1-1.el6.noarch.rpm from puppetlabs-products: [Errno 256] No more mirrors to try.


Error: /Stage[main]/Wirbelsturm_common::Install/Package[puppet]/ensure: change from 3.4.2-1.el6 to 3.3.1-1.el6 failed: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y downgrade puppet-3.3.1-1.el6' returned 1: 

Error Downloading Packages:
  puppet-3.3.1-1.el6.noarch: failure: puppet-3.3.1-1.el6.noarch.rpm from puppetlabs-products: [Errno 256] No more mirrors to try.


Warning: /Stage[main]/Wirbelsturm_common::Config/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Sysctl[net.ipv6.conf.default.disable_ipv6]: Skipping because of failed dependencies
Warning: /Stage[main]/Wirbelsturm_common::Config/Sysctl::Value[net.ipv6.conf.default.disable_ipv6]/Exec[exec_sysctl_net.ipv6.conf.default.disable_ipv6]: Skipping because of failed dependencies
Warning: /Stage[main]/Wirbelsturm_common::Config/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Sysctl[net.ipv6.conf.all.disable_ipv6]: Skipping because of failed dependencies
Warning: /Stage[main]/Wirbelsturm_common::Config/Sysctl::Value[net.ipv6.conf.all.disable_ipv6]/Exec[exec_sysctl_net.ipv6.conf.all.disable_ipv6]: Skipping because of failed dependencies
Warning: /Stage[main]/Wirbelsturm_common/Anchor[wirbelsturm_common::end]: Skipping because of failed dependencies

How can this problem be fixed?
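The failing step is yum fetching repository metadata (repomd.xml) from the configured mirrors, so a first troubleshooting sketch, assuming the failure was transient or caused by a stale cache:

# inside the affected VM (vagrant ssh supervisor1):
sudo yum clean all
sudo yum makecache
# back on the host, re-run provisioning:
vagrant provision supervisor1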

External dependencies on Mac OS

Is there a way to use this software without bringing in rvm and yet another C/C++ compiler?

Right now the ./bootstrap script tries to install a Ruby version using rvm, which pulls in gcc4 from Homebrew.

Maybe that should be documented somewhere.

Kafka nodes don't reference zookeeper

I tried setting up a cluster of 3 Kafka brokers and a single Zookeeper node. The provisioning process appears to work just fine, but the Kafka brokers don't connect to Zookeeper and shut down after a short timeout.

I looked at the configuration on one of them in /opt/kafka/config/server.properties, and found this: zookeeper.connect=localhost:2181. It looks like the Kafka nodes are not referencing the Zookeeper node.

Have I made a mistake setting this up? Or is there another step I need to follow to connect them to ZooKeeper? The only changes I made to the default wirbelsturm.yaml were to comment out the storm_slave and storm_master nodes and enable the commented-out kafka_broker ones.
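In case it helps others hitting this, a manual workaround sketch (not the Puppet/Hiera-managed fix) is to point the broker at the ZooKeeper node directly and restart it. The node name "zookeeper1" and the supervisord program name "kafka-broker" are assumptions; check your wirbelsturm.yaml and the output of "sudo supervisorctl status" for the actual names:

# on each Kafka broker VM
sudo sed -i 's/^zookeeper.connect=.*/zookeeper.connect=zookeeper1:2181/' /opt/kafka/config/server.properties
sudo supervisorctl restart kafka-broker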

Amazon Linux - puppet issues

Newer Amazon Linux boxes come with Ruby 2.0+ pre-installed (e.g. ami-2f726546). This conflicts with the Ruby 1.8 that comes with Puppet, which breaks the setup scripts.

I recommend using a CentOS box instead, or fixing the setup script.

Problems with deploying on AWS: parameter groupId is invalid

 # VAGRANT_LOG=info vagrant up nimbus1 --provider=aws

==> nimbus1:   - [ sh, -c, 'echo "export AWS_ACCESS_KEY_ID=\$AWS_ACCESS_KEY" >> /etc/aws/aws-credentials.sh' ]
==> nimbus1:   - [ sh, -c, 'echo "export AWS_SECRET_ACCESS_KEY=\$AWS_SECRET_KEY" >> /etc/aws/aws-credentials.sh' ]
==> nimbus1:   - [ chmod, 400, /etc/aws/aws-credentials.sh ]
==> nimbus1:   - [ wget, "https://s3.amazonaws.com/yum.miguno.com/bootstrap/aws/rc.local", -O, "/etc/rc.d/rc.local" ]
==> nimbus1:   - [ chown, "root:root", /etc/rc.d/rc.local ]
==> nimbus1:   - [ chmod, 755, /etc/rc.d/rc.local ]
==> nimbus1:   - [ /etc/rc.d/rc.local ]
==> nimbus1:
==> nimbus1: --===============8357495029070803742==--
 INFO interface: info:  -- Block Device Mapping: []
 INFO interface: info: ==> nimbus1:  -- Block Device Mapping: []
==> nimbus1:  -- Block Device Mapping: []
 INFO interface: info:  -- Terminate On Shutdown: false
 INFO interface: info: ==> nimbus1:  -- Terminate On Shutdown: false
==> nimbus1:  -- Terminate On Shutdown: false
 INFO interface: info:  -- Monitoring: false
 INFO interface: info: ==> nimbus1:  -- Monitoring: false
==> nimbus1:  -- Monitoring: false
 INFO interface: info:  -- EBS optimized: false
 INFO interface: info: ==> nimbus1:  -- EBS optimized: false
==> nimbus1:  -- EBS optimized: false
 INFO interface: info:  -- Assigning a public IP address in a VPC: false
 INFO interface: info: ==> nimbus1:  -- Assigning a public IP address in a VPC: false
==> nimbus1:  -- Assigning a public IP address in a VPC: false
 INFO interface: warn: Warning! Vagrant might not be able to SSH into the instance.
Please check your security groups settings.
 INFO interface: warn: ==> nimbus1: Warning! Vagrant might not be able to SSH into the instance.
==> nimbus1: Please check your security groups settings.
==> nimbus1: Warning! Vagrant might not be able to SSH into the instance.
==> nimbus1: Please check your security groups settings.
ERROR warden: Error occurred: There was an error talking to AWS. The error message is shown
below:

InvalidParameterValue => Value () for parameter groupId is invalid. The value cannot be empty
 INFO warden: Beginning recovery process...
 INFO warden: Calling recover: #<VagrantPlugins::AWS::Action::RunInstance:0x000001023e9100>
 INFO warden: Recovery complete.
 INFO warden: Beginning recovery process...
 INFO warden: Recovery complete.
 INFO warden: Beginning recovery process...
 INFO warden: Recovery complete.
 INFO warden: Beginning recovery process...
 INFO warden: Recovery complete.
 INFO warden: Beginning recovery process...
 INFO warden: Recovery complete.
ERROR warden: Error occurred: There was an error talking to AWS. The error message is shown
below:

InvalidParameterValue => Value () for parameter groupId is invalid. The value cannot be empty
 INFO warden: Beginning recovery process...
 INFO warden: Recovery complete.
ERROR warden: Error occurred: There was an error talking to AWS. The error message is shown
below:

InvalidParameterValue => Value () for parameter groupId is invalid. The value cannot be empty
 INFO warden: Beginning recovery process...
 INFO warden: Calling recover: #<Vagrant::Action::Builtin::Call:0x00000100bb7bb8>
 INFO warden: Beginning recovery process...
 INFO warden: Recovery complete.
 INFO warden: Recovery complete.
 INFO warden: Beginning recovery process...
 INFO warden: Recovery complete.
 INFO warden: Beginning recovery process...
 INFO warden: Recovery complete.
 INFO warden: Beginning recovery process...
 INFO warden: Recovery complete.
 INFO environment: Released process lock: machine-action-31267fa52a22e23fe6caeea06ca2b355
 INFO environment: Running hook: environment_unload
 INFO runner: Preparing hooks for middleware sequence...
 INFO runner: 1 hooks defined.
 INFO runner: Running action: environment_unload #<Vagrant::Action::Builder:0x00000101157d20>
ERROR vagrant: Vagrant experienced an error! Details:
ERROR vagrant: #<VagrantPlugins::AWS::Errors::FogError: There was an error talking to AWS. The error message is shown
below:

InvalidParameterValue => Value () for parameter groupId is invalid. The value cannot be empty>
ERROR vagrant: There was an error talking to AWS. The error message is shown
below:

InvalidParameterValue => Value () for parameter groupId is invalid. The value cannot be empty
ERROR vagrant: /Users/netalpha/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/run_instance.rb:113:in `rescue in call'
/Users/netalpha/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/run_instance.rb:101:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/Users/netalpha/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/elb_register_instance.rb:16:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/Users/netalpha/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/warn_networks.rb:14:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/synced_folders.rb:86:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/provision.rb:80:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builder.rb:116:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in `block in run'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/util/busy.rb:19:in `busy'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in `run'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/call.rb:53:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/Users/netalpha/.vagrant.d/gems/gems/vagrant-aws-0.6.0/lib/vagrant-aws/action/connect_aws.rb:43:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builtin/handle_box.rb:56:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/warden.rb:34:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/builder.rb:116:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in `block in run'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/util/busy.rb:19:in `busy'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/action/runner.rb:66:in `run'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:214:in `action_raw'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:191:in `block in action'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/environment.rb:516:in `lock'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:178:in `call'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/machine.rb:178:in `action'
/opt/vagrant/embedded/gems/gems/vagrant-1.7.4/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'
 INFO interface: error: There was an error talking to AWS. The error message is shown
below:

InvalidParameterValue => Value () for parameter groupId is invalid. The value cannot be empty
There was an error talking to AWS. The error message is shown
below:

InvalidParameterValue => Value () for parameter groupId is invalid. The value cannot be empty
 INFO interface: Machine: error-exit ["VagrantPlugins::AWS::Errors::FogError", "There was an error talking to AWS. The error message is shown\nbelow:\n\nInvalidParameterValue => Value () for parameter groupId is invalid. The value cannot be empty"]

Setting up kafka1: "Could not find class schema_registry::service..."

When I tried to set up the default Kafka broker (kafka1) by uncommenting the proper lines in wirbelsturm.yaml, I got this error:

...
[Wirbelsturm] Beginning parallel 'vagrant provision' processes ... (it may take some time until you see further output)
Unknown option: no-notice
Unknown option: no-notice
[kafka1] Provisioning. Log: .../git/wirbelsturm/sh/../provisioning-logs/kafka1.log, Result: The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
 FAILURE
[kafka1] Last 12 entries in log file:
[kafka1]  ==> kafka1: Running provisioner: puppet...
[kafka1]  ==> kafka1: Running Puppet with site.pp...
[kafka1]  ==> kafka1: Notice: Scope(Class[main]): Deployment environment: 'default-environment'
[kafka1]  ==> kafka1: Notice: Scope(Class[main]): I am running within a Vagrant-controlled environment
[kafka1]  ==> kafka1: Notice: Scope(Class[main]): Disabling firewall...
[kafka1]  ==> kafka1: Notice: Scope(Class[main]): I have been assigned the role 'kafka_broker'
[kafka1]  ==> kafka1: Error: Could not find class schema_registry::service for kafka1.fi.upm.es on node kafka1.fi.upm.es
[kafka1]  ==> kafka1: Error: Could not find class schema_registry::service for kafka1.fi.upm.es on node kafka1.fi.upm.es
[kafka1] ---------------------------------------------------------------------------
...

As a result, a Kafka broker was set up but not fully configured: the Schema Registry was not installed, Kafka was not running, etc. To solve this issue, I added the following lines to .../wirbelsturm/puppet/Puppetfile:

mod 'verisign/schema_registry',
  :git => 'https://github.com/verisign/puppet-schema_registry.git'

And then I ran "librarian-puppet update" from .../wirbelsturm/puppet/. After redeployment, the environment works properly.

Error running Bootstrap on Mac OSX 10.9

Running Mac OSX 10.9, I received the following error message while running ./bootstrap:

ruby-1.9.3-p362 - #compiling....................
Error running '__rvm_make -j 1',
showing last 15 lines of /Users/brady.doll/.rvm/log/1409157940_ruby-1.9.3-p362/make.log
f_rational_new_no_reduce1(VALUE klass, VALUE x)
^
6 warnings generated.
compiling re.c
compiling regcomp.c
compiling regenc.c
compiling regerror.c
compiling regexec.c
compiling regparse.c
regparse.c:582:15: error: implicit conversion loses integer precision: 'st_index_t' (aka 'unsigned long') to 'int' [-Werror,-Wshorten-64-to-32]
    return t->num_entries;
    ~~~~~~ ~~~^~~~~~~~~~~
1 error generated.
make: *** [regparse.o] Error 1
++ return 2
There has been an error while running make. Halting the installation.
Installing bundler...
ERROR:  While executing gem ... (Gem::FilePermissionError)
    You don't have write permissions for the /Library/Ruby/Gems/2.0.0 directory.
Installing gems (if any)
bash: line 200: bundle: command not found
Thanks for using ruby-bootstrap.  Happy hacking!
ruby-1.9.3-p362 is not installed.
To install do: 'rvm install ruby-1.9.3-p362'
Checking Vagrant environment...
Checking for Vagrant: OK
Preparing Vagrant environment...
Installing the 'vagrant-hosts --version '>= 2.1.4'' plugin. This can take a few minutes...
Installed the plugin 'vagrant-hosts (2.2.0)'!
Installing the 'vagrant-aws' plugin. This can take a few minutes...
Installed the plugin 'vagrant-aws (0.5.0)'!
Installing the 'vagrant-awsinfo' plugin. This can take a few minutes...
Installed the plugin 'vagrant-awsinfo (0.0.8)'!
==> box: Adding box 'centos6-compatible' (v0) for provider: virtualbox
==> box: Adding box 'centos6-compatible' (v0) for provider: aws
Checking Puppet environment...
Checking for librarian-puppet: NOT FOUND
Please run 'bundle install' and then re-run bootstrap

I was able to find a StackOverflow question that seemed to describe the same issue, with a solution that worked for me.

The solution:

Install MacPorts, and then run:

sudo port selfupdate
sudo port install apple-gcc42

Then run:

CC=/opt/local/bin/gcc-apple-4.2 rvm install ruby-1.9.3-p362 --enable-shared --without-tk --without-tcl

I already had MacPorts installed, and I believe that apple-gcc42 was installed by some process during the first call to ./bootstrap, so I didn't have to run the first steps. After running the above commands I was able to re-run ./bootstrap and the install completed successfully.

IP collision error, server failure when changing IPs

I get an IP collision error when using the default .yml file, but when I change the IPs to something like 10.10.10.XXX I get failures for all the servers when using the deploy script.
Are there any other options I have to change when modifying the IPs?

Error below.

The specified host network collides with a non-hostonly network!
This will cause your specified IP to be inaccessible. Please change
the IP or name of your host only network so that it no longer matches that of
a bridged or non-hostonly network.
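A sketch of one thing worth checking, under the assumption that the collision comes from a leftover VirtualBox host-only network on the old subnet rather than from wirbelsturm.yaml itself (also make sure the environment's ip_range_start matches the new subnet):

VBoxManage list hostonlyifs
# remove a stale host-only interface left on the old subnet; "vboxnet0" is just an example name
VBoxManage hostonlyif remove vboxnet0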

How to access ZK and Kafka instances with local install for development?

Hi,

I installed successfully and I am able to run the instances as instructed in the README. However, when I try to access the ZK and Kafka instances from my code, I am not able to. When I check the IP, it comes back as 10.0.2.15 for both. Is there anything I need to set to allow access from my code? Thanks for your help.
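For what it's worth, 10.0.2.15 is the default VirtualBox NAT address as seen from inside a VM, not an address reachable from the host. From the host, a reachability sketch using the private host-only IPs that Wirbelsturm assigns (the values below match /etc/hosts dumps elsewhere on this page; adjust to your wirbelsturm.yaml):

echo ruok | nc 10.0.0.241 2181   # ZooKeeper should answer "imok"
nc -vz 10.0.0.21 9092            # Kafka broker port check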

Please run 'bundle install' and then re-run bootstrap (Windows)

$ ./bootstrap
+---------------------------+
| BOOTSTRAPPING WIRBELSTURM |
+---------------------------+
Checking for curl: OK
Preparing Ruby environment...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4942  100  4942    0     0   2455      0  0:00:02  0:00:02 --:--:--  2455
Detecting desired Ruby version: OK (1.9.3-p362)
Checking for rvm: NOT FOUND (If you already ran ruby-bootstrap: have you performed the post-installation steps?)
Installing latest stable version of rvm...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   184  100   184    0     0    115      0  0:00:01  0:00:01 --:--:--   115
100 22721  100 22721    0     0   6837      0  0:00:03  0:00:03 --:--:-- 26481
BASH 3.2.25 required (you have 3.1.23(6)-release)
Creating an example Wirbelsturm configuration file at /c/Dev/wirbelsturm/wirbelsturm.yaml...
Checking Vagrant environment...
Checking for Vagrant: OK
Preparing Vagrant environment...
Installing the 'vagrant-hosts --version '>= 2.1.4'' plugin. This can take a few minutes...
Installed the plugin 'vagrant-hosts (2.4.0)'!
Installing the 'vagrant-aws' plugin. This can take a few minutes...
Installed the plugin 'vagrant-aws (0.6.0)'!
Installing the 'vagrant-awsinfo' plugin. This can take a few minutes...
Installed the plugin 'vagrant-awsinfo (0.0.8)'!
==> box: Adding box 'centos6-compatible' (v0) for provider: virtualbox
    box: Downloading: http://puppet-vagrant-boxes.puppetlabs.com/centos-65-x64-virtualbox-puppet.box
    box: Progress: 100% (Rate: 78511/s, Estimated time remaining: --:--:--)
==> box: Successfully added box 'centos6-compatible' (v0) for 'virtualbox'!
==> box: Adding box 'centos6-compatible' (v0) for provider: aws
    box: Downloading: https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
    box: Progress: 100% (Rate: 538/s, Estimated time remaining: --:--:--)
==> box: Successfully added box 'centos6-compatible' (v0) for 'aws'!
Checking Puppet environment...
Checking for librarian-puppet: NOT FOUND
Please run 'bundle install' and then re-run bootstrap
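The bootstrap output itself points at the fix, and the "BASH 3.2.25 required" line above means the rvm step was skipped entirely, so the Ruby tooling was never set up. A minimal sketch, assuming a working Ruby with RubyGems is available on the PATH (on Windows, run it from the same Git Bash shell):

gem install bundler
bundle install
./bootstrap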

Add support for riemann?

Would you be interested in me adding support for Riemann, perhaps on the current metrics VM, and also adding Riemann monitoring for Kafka, ZooKeeper, etc.?

UnresolvedAddressException while using Wurstmeister's spout

I am using Wurstmeister's spout 0.4.0.
I configured my Kafka spout as follows:

BrokerHosts brokerHosts = new ZkHosts("zookeeper1:2181");       
SpoutConfig kafkaConfig = new SpoutConfig(brokerHosts, "topicName", "/stormKafka", "StormConsumer");

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafkaSpout", new KafkaSpout(kafkaConfig), 1);

I used Wirbelsturm to deploy ZooKeeper, Storm, and Kafka.

[vagrant@kafka1 ~]$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 kafka1
10.0.0.21 kafka1
10.0.0.251 nimbus1
10.0.0.101 supervisor1
10.0.0.102 supervisor2
10.0.0.241 zookeeper1

But while running the topology my spout throws the following error:

java.lang.RuntimeException: java.nio.channels.UnresolvedAddressException
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:83)
    at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:45)
    at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:118)
    at backtype.storm.daemon.executor$eval3848$fn__3849$fn__3864$fn__3893.invoke(executor.clj:562)
    at backtype.storm.util$async_loop$fn__384.invoke(util.clj:433)
    at clojure.lang.AFn.run(AFn.java:24)
    at java.lang.Thread.run(Thread.java:701)
Caused by: java.nio.channels.UnresolvedAddressException
    at sun.nio.ch.Net.checkAddress(Net.java:89)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:514)
    at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
    at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:44)
    at kafka.consumer.SimpleConsumer.getOrMakeConnection(SimpleConsumer.scala:129)
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69)
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:125)
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:80)
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:55)
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:45)
    at storm.kafka.PartitionManager.(PartitionManager.java:77)
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:78)
    ... 6 more

Logs of the worker running the spout:

2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:zookeeper.version=3.3.3-1073969, built on 02/23/2011 22:27 GMT
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:host.name=supervisor1
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:java.version=1.6.0_30
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:java.vendor=Sun Microsystems Inc.
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:java.home=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:java.class.path=/opt/storm/lib/junit-3.8.1.jar:/opt/storm/lib/minlog-1.2.jar:/opt/storm/lib/tools.macro-0.1.0.jar:/opt/storm/lib/servlet-api-2.5-20081211.jar:/opt/storm/lib/carbonite-1.3.2.jar:/opt/storm/lib/compojure-1.1.3.jar:/opt/storm/lib/clojure-1.4.0.jar:/opt/storm/lib/kryo-2.17.jar:/opt/storm/lib/tools.logging-0.2.3.jar:/opt/storm/lib/slf4j-api-1.6.5.jar:/opt/storm/lib/log4j-over-slf4j-1.6.6.jar:/opt/storm/lib/commons-logging-1.1.1.jar:/opt/storm/lib/clj-stacktrace-0.2.4.jar:/opt/storm/lib/httpcore-4.1.jar:/opt/storm/lib/hiccup-0.3.6.jar:/opt/storm/lib/netty-3.6.3.Final.jar:/opt/storm/lib/jgrapht-core-0.9.0.jar:/opt/storm/lib/commons-exec-1.1.jar:/opt/storm/lib/servlet-api-2.5.jar:/opt/storm/lib/ring-servlet-0.3.11.jar:/opt/storm/lib/clj-time-0.4.1.jar:/opt/storm/lib/objenesis-1.2.jar:/opt/storm/lib/jline-2.11.jar:/opt/storm/lib/meat-locker-0.3.1.jar:/opt/storm/lib/logback-classic-1.0.6.jar:/opt/storm/lib/json-simple-1.1.jar:/opt/storm/lib/curator-framework-1.0.1.jar:/opt/storm/lib/commons-io-1.4.jar:/opt/storm/lib/curator-client-1.0.1.jar:/opt/storm/lib/guava-13.0.jar:/opt/storm/lib/storm-core-0.9.1-incubating.jar:/opt/storm/lib/commons-fileupload-1.2.1.jar:/opt/storm/lib/logback-core-1.0.6.jar:/opt/storm/lib/math.numeric-tower-0.0.1.jar:/opt/storm/lib/snakeyaml-1.11.jar:/opt/storm/lib/tools.cli-0.2.2.jar:/opt/storm/lib/core.incubator-0.1.0.jar:/opt/storm/lib/zookeeper-3.3.3.jar:/opt/storm/lib/httpclient-4.1.1.jar:/opt/storm/lib/clout-1.0.1.jar:/opt/storm/lib/jetty-6.1.26.jar:/opt/storm/lib/ring-devel-0.3.11.jar:/opt/storm/lib/asm-4.0.jar:/opt/storm/lib/ring-jetty-adapter-0.3.11.jar:/opt/storm/lib/commons-codec-1.4.jar:/opt/storm/lib/ring-core-1.1.5.jar:/opt/storm/lib/joda-time-2.0.jar:/opt/storm/lib/commons-lang-2.5.jar:/opt/storm/lib/reflectasm-1.07-shaded.jar:/opt/storm/lib/jetty-util-6.1.26.jar:/opt/storm/lib/disruptor-2.10.1.jar:/opt/storm/conf:/app/storm/supervisor/stormdist/FirehoseTopology_DEV-1-1396002185/stormjar.jar
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:java.io.tmpdir=/tmp
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:java.compiler=<NA>
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:os.name=Linux
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:os.arch=amd64
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:os.version=2.6.32-431.el6.x86_64
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:user.name=storm
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:user.home=/home/storm
2014-03-28 10:23:19 o.a.z.ZooKeeper [INFO] Client environment:user.dir=/
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:zookeeper.version=3.3.3-1073969, built on 02/23/2011 22:27 GMT
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:host.name=supervisor1
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:java.version=1.6.0_30
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:java.vendor=Sun Microsystems Inc.
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:java.home=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:java.class.path=/opt/storm/lib/junit-3.8.1.jar:/opt/storm/lib/minlog-1.2.jar:/opt/storm/lib/tools.macro-0.1.0.jar:/opt/storm/lib/servlet-api-2.5-20081211.jar:/opt/storm/lib/carbonite-1.3.2.jar:/opt/storm/lib/compojure-1.1.3.jar:/opt/storm/lib/clojure-1.4.0.jar:/opt/storm/lib/kryo-2.17.jar:/opt/storm/lib/tools.logging-0.2.3.jar:/opt/storm/lib/slf4j-api-1.6.5.jar:/opt/storm/lib/log4j-over-slf4j-1.6.6.jar:/opt/storm/lib/commons-logging-1.1.1.jar:/opt/storm/lib/clj-stacktrace-0.2.4.jar:/opt/storm/lib/httpcore-4.1.jar:/opt/storm/lib/hiccup-0.3.6.jar:/opt/storm/lib/netty-3.6.3.Final.jar:/opt/storm/lib/jgrapht-core-0.9.0.jar:/opt/storm/lib/commons-exec-1.1.jar:/opt/storm/lib/servlet-api-2.5.jar:/opt/storm/lib/ring-servlet-0.3.11.jar:/opt/storm/lib/clj-time-0.4.1.jar:/opt/storm/lib/objenesis-1.2.jar:/opt/storm/lib/jline-2.11.jar:/opt/storm/lib/meat-locker-0.3.1.jar:/opt/storm/lib/logback-classic-1.0.6.jar:/opt/storm/lib/json-simple-1.1.jar:/opt/storm/lib/curator-framework-1.0.1.jar:/opt/storm/lib/commons-io-1.4.jar:/opt/storm/lib/curator-client-1.0.1.jar:/opt/storm/lib/guava-13.0.jar:/opt/storm/lib/storm-core-0.9.1-incubating.jar:/opt/storm/lib/commons-fileupload-1.2.1.jar:/opt/storm/lib/logback-core-1.0.6.jar:/opt/storm/lib/math.numeric-tower-0.0.1.jar:/opt/storm/lib/snakeyaml-1.11.jar:/opt/storm/lib/tools.cli-0.2.2.jar:/opt/storm/lib/core.incubator-0.1.0.jar:/opt/storm/lib/zookeeper-3.3.3.jar:/opt/storm/lib/httpclient-4.1.1.jar:/opt/storm/lib/clout-1.0.1.jar:/opt/storm/lib/jetty-6.1.26.jar:/opt/storm/lib/ring-devel-0.3.11.jar:/opt/storm/lib/asm-4.0.jar:/opt/storm/lib/ring-jetty-adapter-0.3.11.jar:/opt/storm/lib/commons-codec-1.4.jar:/opt/storm/lib/ring-core-1.1.5.jar:/opt/storm/lib/joda-time-2.0.jar:/opt/storm/lib/commons-lang-2.5.jar:/opt/storm/lib/reflectasm-1.07-shaded.jar:/opt/storm/lib/jetty-util-6.1.26.jar:/opt/storm/lib/disruptor-2.10.1.jar:/opt/storm/conf:/app/storm/supervisor/stormdist/FirehoseTopology_DEV-1-1396002185/stormjar.jar
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:java.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:java.io.tmpdir=/tmp
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:java.compiler=<NA>
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:os.name=Linux
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:os.arch=amd64
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:os.version=2.6.32-431.el6.x86_64
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:user.name=storm
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:user.home=/home/storm
2014-03-28 10:23:19 o.a.z.s.ZooKeeperServer [INFO] Server environment:user.dir=/
2014-03-28 10:23:23 b.s.d.worker [INFO] Launching worker for FirehoseTopology_DEV-1-1396002185 on e1364c61-e842-40f5-956e-a7cc27adc9d8:6701 with id 431a0794-40a2-4f07-b12f-5e4896297607 and conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 5000, "topology.skip.missing.kryo.registrations" false, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx256m -Djava.net.preferIPv4Stack=true", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 500, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m -Djava.net.preferIPv4Stack=true", "java.library.path" "/usr/local/lib:/opt/local/lib:/usr/lib", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/app/storm", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "nimbus1", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2181, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["zookeeper1"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx256m -Djava.net.preferIPv4Stack=true", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "nimbus.thrift.max_buffer_size" 1048576, "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m -Djava.net.preferIPv4Stack=true", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, 
"storm.messaging.transport" "backtype.storm.messaging.netty.Context", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.hostname" "supervisor1", "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx256m -Djava.net.preferIPv4Stack=true", "storm.cluster.mode" "distributed", "topology.optimize" true, "topology.max.task.parallelism" nil}
2014-03-28 10:23:23 c.n.c.f.i.CuratorFrameworkImpl [INFO] Starting
2014-03-28 10:23:23 o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=zookeeper1:2181 sessionTimeout=20000 watcher=com.netflix.curator.ConnectionState@241bff0d
2014-03-28 10:23:24 o.a.z.ClientCnxn [INFO] Opening socket connection to server zookeeper1/10.0.0.241:2181
2014-03-28 10:23:24 o.a.z.ClientCnxn [INFO] Socket connection established to zookeeper1/10.0.0.241:2181, initiating session
2014-03-28 10:23:24 o.a.z.ClientCnxn [INFO] Session establishment complete on server zookeeper1/10.0.0.241:2181, sessionid = 0x14508151e480009, negotiated timeout = 20000
2014-03-28 10:23:24 b.s.zookeeper [INFO] Zookeeper state update: :connected:none
2014-03-28 10:23:24 o.a.z.ClientCnxn [INFO] EventThread shut down
2014-03-28 10:23:24 o.a.z.ZooKeeper [INFO] Session: 0x14508151e480009 closed
2014-03-28 10:23:24 c.n.c.f.i.CuratorFrameworkImpl [INFO] Starting
2014-03-28 10:23:24 o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=zookeeper1:2181/storm sessionTimeout=20000 watcher=com.netflix.curator.ConnectionState@bdaf1cd
2014-03-28 10:23:24 o.a.z.ClientCnxn [INFO] Opening socket connection to server zookeeper1/10.0.0.241:2181
2014-03-28 10:23:24 o.a.z.ClientCnxn [INFO] Socket connection established to zookeeper1/10.0.0.241:2181, initiating session
2014-03-28 10:23:24 o.a.z.ClientCnxn [INFO] Session establishment complete on server zookeeper1/10.0.0.241:2181, sessionid = 0x14508151e48000a, negotiated timeout = 20000
2014-03-28 10:23:24 b.s.m.TransportFactory [INFO] Storm peer transport plugin:backtype.storm.messaging.netty.Context
2014-03-28 10:23:25 b.s.d.executor [INFO] Loading executor kafkaSpout:[2 2]
2014-03-28 10:23:25 b.s.d.executor [INFO] Loaded executor tasks kafkaSpout:[2 2]
2014-03-28 10:23:25 b.s.d.executor [INFO] Finished loading executor kafkaSpout:[2 2]
2014-03-28 10:23:25 b.s.d.executor [INFO] Opening spout kafkaSpout:(2)
2014-03-28 10:23:25 b.s.d.executor [INFO] Loading executor __system:[-1 -1]
2014-03-28 10:23:25 b.s.d.executor [INFO] Loaded executor tasks __system:[-1 -1]
2014-03-28 10:23:25 c.n.c.f.i.CuratorFrameworkImpl [INFO] Starting
2014-03-28 10:23:25 o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=zookeeper1:2181, sessionTimeout=20000 watcher=com.netflix.curator.ConnectionState@4be3eb6a
2014-03-28 10:23:25 o.a.z.ClientCnxn [INFO] Opening socket connection to server zookeeper1/10.0.0.241:2181
2014-03-28 10:23:25 b.s.d.executor [INFO] Finished loading executor __system:[-1 -1]
2014-03-28 10:23:25 o.a.z.ClientCnxn [INFO] Socket connection established to zookeeper1/10.0.0.241:2181, initiating session
2014-03-28 10:23:25 o.a.z.ClientCnxn [INFO] Session establishment complete on server zookeeper1/10.0.0.241:2181, sessionid = 0x14508151e48000b, negotiated timeout = 20000
2014-03-28 10:23:25 b.s.d.executor [INFO] Preparing bolt __system:(-1)
2014-03-28 10:23:25 c.n.c.f.i.CuratorFrameworkImpl [INFO] Starting
2014-03-28 10:23:25 o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=zookeeper1:2181 sessionTimeout=20000 watcher=com.netflix.curator.ConnectionState@17fca9f9
2014-03-28 10:23:25 b.s.d.executor [INFO] Prepared bolt __system:(-1)
2014-03-28 10:23:25 o.a.z.ClientCnxn [INFO] Opening socket connection to server zookeeper1/10.0.0.241:2181
2014-03-28 10:23:25 b.s.d.executor [INFO] Loading executor __acker:[1 1]
2014-03-28 10:23:25 b.s.d.executor [INFO] Loaded executor tasks __acker:[1 1]
2014-03-28 10:23:25 o.a.z.ClientCnxn [INFO] Socket connection established to zookeeper1/10.0.0.241:2181, initiating session
2014-03-28 10:23:25 b.s.d.executor [INFO] Timeouts disabled for executor __acker:[1 1]
2014-03-28 10:23:25 b.s.d.executor [INFO] Finished loading executor __acker:[1 1]
2014-03-28 10:23:25 o.a.z.ClientCnxn [INFO] Session establishment complete on server zookeeper1/10.0.0.241:2181, sessionid = 0x14508151e48000c, negotiated timeout = 20000
2014-03-28 10:23:25 b.s.d.worker [INFO] Launching receive-thread for e1364c61-e842-40f5-956e-a7cc27adc9d8:6701
2014-03-28 10:23:25 b.s.d.executor [INFO] Preparing bolt __acker:(1)
2014-03-28 10:23:25 s.k.DynamicBrokersReader [INFO] Read partition info from zookeeper: GlobalPartitionInformation{partitionMap={0=kafka1:9092}}
2014-03-28 10:23:25 c.n.c.f.i.CuratorFrameworkImpl [INFO] Starting
2014-03-28 10:23:25 o.a.z.ZooKeeper [INFO] Initiating client connection, connectString=zookeeper1:2181 sessionTimeout=20000 watcher=com.netflix.curator.ConnectionState@11499ff
2014-03-28 10:23:25 o.a.z.ClientCnxn [INFO] Opening socket connection to server zookeeper1/10.0.0.241:2181
2014-03-28 10:23:25 o.a.z.ClientCnxn [INFO] Socket connection established to zookeeper1/10.0.0.241:2181, initiating session
2014-03-28 10:23:25 b.s.d.executor [INFO] Prepared bolt __acker:(1)
2014-03-28 10:23:25 o.a.z.ClientCnxn [INFO] Session establishment complete on server zookeeper1/10.0.0.241:2181, sessionid = 0x14508151e48000d, negotiated timeout = 20000
2014-03-28 10:23:25 b.s.d.executor [INFO] Opened spout kafkaSpout:(2)
2014-03-28 10:23:25 b.s.d.worker [INFO] Worker has topology config {"storm.id" "FirehoseTopology_DEV-1-1396002185", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 5000, "topology.skip.missing.kryo.registrations" false, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx256m -Djava.net.preferIPv4Stack=true", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 500, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m -Djava.net.preferIPv4Stack=true", "java.library.path" "/usr/local/lib:/opt/local/lib:/usr/lib", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/app/storm", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "nimbus1", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2181, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["zookeeper1"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "FirehoseTopology_DEV", "topology.transfer.buffer.size" 1024, "topology.worker.childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx256m -Djava.net.preferIPv4Stack=true", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "nimbus.thrift.max_buffer_size" 1048576, "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" [6700 6701], "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" nil, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m -Djava.net.preferIPv4Stack=true", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 
600, "storm.messaging.transport" "backtype.storm.messaging.netty.Context", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.hostname" "supervisor1", "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx256m -Djava.net.preferIPv4Stack=true", "storm.cluster.mode" "distributed", "topology.optimize" true, "topology.max.task.parallelism" nil}
2014-03-28 10:23:25 b.s.d.executor [INFO] Activating spout kafkaSpout:(2)
2014-03-28 10:23:25 s.k.ZkCoordinator [INFO] Refreshing partition manager connections
2014-03-28 10:23:25 b.s.d.worker [INFO] Worker 431a0794-40a2-4f07-b12f-5e4896297607 for storm FirehoseTopology_DEV-1-1396002185 on e1364c61-e842-40f5-956e-a7cc27adc9d8:6701 has finished loading
2014-03-28 10:23:25 s.k.DynamicBrokersReader [INFO] Read partition info from zookeeper: GlobalPartitionInformation{partitionMap={0=kafka1:9092}}
2014-03-28 10:23:25 s.k.ZkCoordinator [INFO] Deleted partition managers: []
2014-03-28 10:23:25 s.k.ZkCoordinator [INFO] New partition managers: [Partition{host=kafka1:9092, partition=0}]
2014-03-28 10:23:26 s.k.PartitionManager [INFO] Read partition information from: /stormKafka/StormConsumer/partition_0  --> null
2014-03-28 10:23:26 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: java.nio.channels.UnresolvedAddressException
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:83) ~[stormjar.jar:na]
    at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:45) ~[stormjar.jar:na]
    at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:118) ~[stormjar.jar:na]
    at backtype.storm.daemon.executor$eval3848$fn__3849$fn__3864$fn__3893.invoke(executor.clj:562) ~[na:na]
    at backtype.storm.util$async_loop$fn__384.invoke(util.clj:433) ~[na:na]
    at clojure.lang.AFn.run(AFn.java:24) ~[clojure-1.4.0.jar:na]
    at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_30]
Caused by: java.nio.channels.UnresolvedAddressException: null
    at sun.nio.ch.Net.checkAddress(Net.java:89) ~[na:1.6.0_30]
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:514) ~[na:1.6.0_30]
    at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57) ~[stormjar.jar:na]
    at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:44) ~[stormjar.jar:na]
    at kafka.consumer.SimpleConsumer.getOrMakeConnection(SimpleConsumer.scala:129) ~[stormjar.jar:na]
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69) ~[stormjar.jar:na]
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:125) ~[stormjar.jar:na]
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:80) ~[stormjar.jar:na]
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:55) ~[stormjar.jar:na]
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:45) ~[stormjar.jar:na]
    at storm.kafka.PartitionManager.<init>(PartitionManager.java:77) ~[stormjar.jar:na]
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:78) ~[stormjar.jar:na]
    ... 6 common frames omitted
2014-03-28 10:23:26 b.s.d.executor [ERROR] 
java.lang.RuntimeException: java.nio.channels.UnresolvedAddressException
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:83) ~[stormjar.jar:na]
    at storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:45) ~[stormjar.jar:na]
    at storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:118) ~[stormjar.jar:na]
    at backtype.storm.daemon.executor$eval3848$fn__3849$fn__3864$fn__3893.invoke(executor.clj:562) ~[na:na]
    at backtype.storm.util$async_loop$fn__384.invoke(util.clj:433) ~[na:na]
    at clojure.lang.AFn.run(AFn.java:24) ~[clojure-1.4.0.jar:na]
    at java.lang.Thread.run(Thread.java:701) ~[na:1.6.0_30]
Caused by: java.nio.channels.UnresolvedAddressException: null
    at sun.nio.ch.Net.checkAddress(Net.java:89) ~[na:1.6.0_30]
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:514) ~[na:1.6.0_30]
    at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57) ~[stormjar.jar:na]
    at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:44) ~[stormjar.jar:na]
    at kafka.consumer.SimpleConsumer.getOrMakeConnection(SimpleConsumer.scala:129) ~[stormjar.jar:na]
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:69) ~[stormjar.jar:na]
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:125) ~[stormjar.jar:na]
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:80) ~[stormjar.jar:na]
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:55) ~[stormjar.jar:na]
    at storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:45) ~[stormjar.jar:na]
    at storm.kafka.PartitionManager.<init>(PartitionManager.java:77) ~[stormjar.jar:na]
    at storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:78) ~[stormjar.jar:na]
    ... 6 common frames omitted
2014-03-28 10:23:26 b.s.util [INFO] Halting process: ("Worker died")

Actually, I am not sure this is a Wirbelsturm issue; the problem probably comes from here:

Read partition information from: /stormKafka/StormConsumer/partition_0 --> null
Which explains
Caused by: java.nio.channels.UnresolvedAddressException: null

But my topology was working when I used a local Storm cluster with a remote server that hosted both Kafka and ZooKeeper...
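One hypothesis, based on the /etc/hosts dumps on this page: the spout reads the broker host "kafka1" from ZooKeeper and then has to resolve it, but the supervisor1 hosts file shown earlier has no kafka1 entry, so a worker running on a supervisor cannot resolve kafka1:9092. A check and manual workaround sketch (the IP 10.0.0.21 is taken from kafka1's /etc/hosts above; run on each supervisor VM):

getent hosts kafka1 || echo "kafka1 does not resolve on this node"
echo "10.0.0.21 kafka1" | sudo tee -a /etc/hosts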

Please run 'bundle install' and then re-run bootstrap

I've just discovered Wirbelsturm, and it looks really useful! I can't seem to get it going, though. I'm running OS X 10.10.1.

Pauls-MacBook-Pro:~ paul$ cd Source/wirbelsturm/
Pauls-MacBook-Pro:wirbelsturm paul$ ./bootstrap 
+---------------------------+
| BOOTSTRAPPING WIRBELSTURM |
+---------------------------+
Preparing Ruby environment...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4942  100  4942    0     0   3023      0  0:00:01  0:00:01 --:--:--  3022
Detecting desired Ruby version: OK (1.9.3-p362)
Checking for rvm: NOT FOUND (If you already ran ruby-bootstrap: have you performed the post-installation steps?)
Installing latest stable version of rvm...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   184  100   184    0     0    347      0 --:--:-- --:--:-- --:--:--   347
100 22817  100 22817    0     0  37122      0 --:--:-- --:--:-- --:--:-- 37122
Turning on ignore dotfiles mode.
Downloading https://github.com/wayneeseguin/rvm/archive/1.26.10.tar.gz
Downloading https://github.com/wayneeseguin/rvm/releases/download/1.26.10/1.26.10.tar.gz.asc
Found PGP signature at: 'https://github.com/wayneeseguin/rvm/releases/download/1.26.10/1.26.10.tar.gz.asc',
but no GPG software exists to validate it, skipping.

Upgrading the RVM installation in /Users/paul/.rvm/
Upgrade of RVM in /Users/paul/.rvm/ is complete.

# paul,
#
#   Thank you for using RVM!
#   We sincerely hope that RVM helps to make your life easier and more enjoyable!!!
#
# ~Wayne, Michal & team.

In case of problems: http://rvm.io/help and https://twitter.com/rvm_io

Upgrade Notes:

  * No new notes to display.

ruby-1.9.3-p362 is not installed.
To install do: 'rvm install ruby-1.9.3-p362'
Installing Ruby version 1.9.3-p362...
Searching for binary rubies, this might take some time.
No binary rubies available for: osx/10.10/x86_64/ruby-1.9.3-p362.
Continuing with compilation. Please read 'rvm help mount' to get more information on binary rubies.
Checking requirements for osx.
/usr/local/bin/brew: /usr/local/Library/brew.rb: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory
/usr/local/bin/brew: line 23: /usr/local/Library/brew.rb: Undefined error: 0
ERROR: '/bin' is not writable - it is required for Homebrew, try 'brew doctor' to fix it!
Requirements installation failed with status: 1.
Important: We performed a fresh install of rvm for you.
           This means you manually perform two tasks now.

Task 1 of 2:
------------
Please run the following command in all your open shell windows to
start using rvm.  In rare cases you need to reopen all shell windows.

    source ~/.rvm/scripts/rvm


Task 2 of 2:
------------
Permanently update your shell environment to source/add rvm.
The example below shows how to do this for Bash.

Add the following two lines to your ~/.bashrc:

    PATH=$PATH:$HOME/.rvm/bin # Add RVM to PATH for scripting
    [[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*

That's it!  Sorry for the extra work but this is the safest
way to update your environment without breaking anything.
ruby-1.9.3-p362 is not installed.
To install do: 'rvm install ruby-1.9.3-p362'
Checking Vagrant environment...
Checking for Vagrant: OK
Preparing Vagrant environment...
Installing the 'vagrant-hosts --version '>= 2.1.4'' plugin. This can take a few minutes...
Installed the plugin 'vagrant-hosts (2.3.0)'!
Installing the 'vagrant-aws' plugin. This can take a few minutes...
Installed the plugin 'vagrant-aws (0.6.0)'!
Installing the 'vagrant-awsinfo' plugin. This can take a few minutes...
Installed the plugin 'vagrant-awsinfo (0.0.8)'!
==> box: Adding box 'centos6-compatible' (v0) for provider: virtualbox
The box you're attempting to add already exists. Remove it before
adding it again or add it with the `--force` flag.

Name: centos6-compatible
Provider: ["virtualbox"]
Version: 0
==> box: Adding box 'centos6-compatible' (v0) for provider: aws
The box you're attempting to add already exists. Remove it before
adding it again or add it with the `--force` flag.

Name: centos6-compatible
Provider: ["aws"]
Version: 0
Checking Puppet environment...
Checking for librarian-puppet: NOT FOUND
Please run 'bundle install' and then re-run bootstrap

Pauls-MacBook-Pro:wirbelsturm paul$ bundle install
-bash: bundle: command not found

I did see this solution, but I got an error about Yosemite not being supported.

Pauls-MacBook-Pro:~ paul$ sudo port install apple-gcc42
--->  Computing dependencies for apple-gcc42
--->  Dependencies to be installed: cctools llvm-3.5 libcxx libedit ncurses libffi llvm_select zlib ld64 ld64-latest
--->  Fetching archive for libcxx
--->  Attempting to fetch libcxx-3.5.1_1.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/libcxx
--->  Attempting to fetch libcxx-3.5.1_1.darwin_14.x86_64.tbz2.rmd160 from http://lil.fr.packages.macports.org/libcxx
--->  Installing libcxx @3.5.1_1
--->  Activating libcxx @3.5.1_1
--->  Cleaning libcxx
--->  Fetching archive for ncurses
--->  Attempting to fetch ncurses-5.9_2.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/ncurses
--->  Attempting to fetch ncurses-5.9_2.darwin_14.x86_64.tbz2.rmd160 from http://lil.fr.packages.macports.org/ncurses
--->  Installing ncurses @5.9_2
--->  Activating ncurses @5.9_2
--->  Cleaning ncurses
--->  Fetching archive for libedit
--->  Attempting to fetch libedit-20140620-3.1_0.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/libedit
--->  Attempting to fetch libedit-20140620-3.1_0.darwin_14.x86_64.tbz2.rmd160 from http://lil.fr.packages.macports.org/libedit
--->  Installing libedit @20140620-3.1_0
--->  Activating libedit @20140620-3.1_0
--->  Cleaning libedit
--->  Fetching archive for libffi
--->  Attempting to fetch libffi-3.2.1_0.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/libffi
--->  Attempting to fetch libffi-3.2.1_0.darwin_14.x86_64.tbz2.rmd160 from http://lil.fr.packages.macports.org/libffi
--->  Installing libffi @3.2.1_0
--->  Activating libffi @3.2.1_0
--->  Cleaning libffi
--->  Fetching archive for llvm_select
--->  Attempting to fetch llvm_select-1.0_0.darwin_14.noarch.tbz2 from http://lil.fr.packages.macports.org/llvm_select
--->  Attempting to fetch llvm_select-1.0_0.darwin_14.noarch.tbz2.rmd160 from http://lil.fr.packages.macports.org/llvm_select
--->  Installing llvm_select @1.0_0
--->  Activating llvm_select @1.0_0
--->  Cleaning llvm_select
--->  Fetching archive for zlib
--->  Attempting to fetch zlib-1.2.8_0.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/zlib
--->  Attempting to fetch zlib-1.2.8_0.darwin_14.x86_64.tbz2.rmd160 from http://lil.fr.packages.macports.org/zlib
--->  Installing zlib @1.2.8_0
--->  Activating zlib @1.2.8_0
--->  Cleaning zlib
--->  Fetching archive for llvm-3.5
--->  Attempting to fetch llvm-3.5-3.5.1_1.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/llvm-3.5
--->  Attempting to fetch llvm-3.5-3.5.1_1.darwin_14.x86_64.tbz2.rmd160 from http://lil.fr.packages.macports.org/llvm-3.5
--->  Installing llvm-3.5 @3.5.1_1
--->  Activating llvm-3.5 @3.5.1_1
--->  Cleaning llvm-3.5
--->  Fetching archive for cctools
--->  Attempting to fetch cctools-862_1+llvm35.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/cctools
--->  Attempting to fetch cctools-862_1+llvm35.darwin_14.x86_64.tbz2.rmd160 from http://lil.fr.packages.macports.org/cctools
--->  Installing cctools @862_1+llvm35
--->  Activating cctools @862_1+llvm35
--->  Cleaning cctools
--->  Fetching archive for ld64-latest
--->  Attempting to fetch ld64-latest-236.3_0+llvm35.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/ld64-latest
--->  Attempting to fetch ld64-latest-236.3_0+llvm35.darwin_14.x86_64.tbz2.rmd160 from http://lil.fr.packages.macports.org/ld64-latest
--->  Installing ld64-latest @236.3_0+llvm35
--->  Activating ld64-latest @236.3_0+llvm35
--->  Cleaning ld64-latest
--->  Fetching archive for ld64
--->  Attempting to fetch ld64-2_0.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/ld64
--->  Attempting to fetch ld64-2_0.darwin_14.x86_64.tbz2.rmd160 from http://lil.fr.packages.macports.org/ld64
--->  Installing ld64 @2_0
--->  Activating ld64 @2_0
--->  Cleaning ld64
--->  Fetching archive for apple-gcc42
--->  Attempting to fetch apple-gcc42-5666.3_14.darwin_14.x86_64.tbz2 from http://lil.fr.packages.macports.org/apple-gcc42
--->  Attempting to fetch apple-gcc42-5666.3_14.darwin_14.x86_64.tbz2 from http://mse.uk.packages.macports.org/sites/packages.macports.org/apple-gcc42
--->  Attempting to fetch apple-gcc42-5666.3_14.darwin_14.x86_64.tbz2 from http://nue.de.packages.macports.org/macports/packages/apple-gcc42
--->  Fetching distfiles for apple-gcc42
Error: apple-gcc42 is not supported on Yosemite or later.
Error: org.macports.fetch for port apple-gcc42 returned: unsupported platform
Please see the log file for port apple-gcc42 for details:
    /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_lang_apple-gcc42/apple-gcc42/main.log
To report a bug, follow the instructions in the guide:
    http://guide.macports.org/#project.tickets
Error: Processing of port apple-gcc42 failed

Any ideas on anything I can try?
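
One direction that might be worth checking, on the assumption that the RVM Ruby which ./bootstrap installed simply is not loaded into the current shell (so gem and bundle are not on the PATH):

source ~/.rvm/scripts/rvm   # load RVM into the current shell (path assumed)
rvm use 1.9.3-p362          # the Ruby version the bootstrap detected
gem install bundler         # provides the missing 'bundle' command
bundle install
./bootstrap                 # re-run the bootstrap as instructed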

installation problem

Hi,

I am having issues with regard to not finding the puppet modules folder. I checked my directory listing and it actually does not exist. Attaching the installation log.

=====================================
Output of Git Bash running on Windows 8
=====================================
Welcome to Git (version 1.9.4-preview20140929)

Run 'git help git' to display the help index.
Run 'git help <command>' to display help for specific commands.

user@JJHG7X1 /C/Users/wirbelsturm (master)
$ ./bootstrap
+---------------------------+
| BOOTSTRAPPING WIRBELSTURM |
+---------------------------+
Preparing Ruby environment...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4942  100  4942    0     0  13213      0 --:--:-- --:--:-- --:--:-- 13213
Detecting desired Ruby version: OK (1.9.3-p362)
Checking for rvm: NOT FOUND (If you already ran ruby-bootstrap: have you performed the post-installation steps?)
Installing latest stable version of rvm...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   184  100   184    0     0    274      0 --:--:-- --:--:-- --:--:--   274
100 22817  100 22817    0     0  28664      0 --:--:-- --:--:-- --:--:-- 28664
BASH 3.2.25 required (you have 3.1.20(4)-release)
Checking Vagrant environment...
Checking for Vagrant: OK
Preparing Vagrant environment...
Installing the 'vagrant-hosts --version '>= 2.1.4'' plugin. This can take a few minutes...
Installed the plugin 'vagrant-hosts (2.3.0)'!
Installing the 'vagrant-aws' plugin. This can take a few minutes...
Installed the plugin 'vagrant-aws (0.6.0)'!
Installing the 'vagrant-awsinfo' plugin. This can take a few minutes...
Installed the plugin 'vagrant-awsinfo (0.0.8)'!
==> box: Adding box 'centos6-compatible' (v0) for provider: virtualbox
    box: Downloading: http://puppet-vagrant-boxes.puppetlabs.com/centos-65-x64-virtualbox-puppet.box
    box: Progress: 100% (Rate: 700k/s, Estimated time remaining: --:--:--)
==> box: Successfully added box 'centos6-compatible' (v0) for 'virtualbox'!
==> box: Adding box 'centos6-compatible' (v0) for provider: aws
    box: Downloading: https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
    box: Progress: 100% (Rate: 244/s, Estimated time remaining: --:--:--)
==> box: Successfully added box 'centos6-compatible' (v0) for 'aws'!
Checking Puppet environment...
Checking for librarian-puppet: NOT FOUND
Please run 'bundle install' and then re-run bootstrap

user@JJHG7X1 /C/Users/wirbelsturm (master)
$ vagrant status
Current machine states:

zookeeper1                not created (virtualbox)
nimbus1                   not created (virtualbox)
supervisor1               not created (virtualbox)
supervisor2               not created (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

user@JJHG7X1 /C/Users/wirbelsturm (master)
$ vagrant up
Bringing machine 'zookeeper1' up with 'virtualbox' provider...
Bringing machine 'nimbus1' up with 'virtualbox' provider...
Bringing machine 'supervisor1' up with 'virtualbox' provider...
Bringing machine 'supervisor2' up with 'virtualbox' provider...
There are errors in the configuration of this machine. Please fix
the following errors and try again:

puppet provisioner:
* The configured module path doesn't exist: c:/Users/wirbelsturm/puppet/modules

cloud-config.erb

It looks like there is a stranded merge conflict in cloud-config.erb. Can the file be simplified to the following?

Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"

#cloud-config

packages:
  - wget
  - git
  - screen
  - python-boto

runcmd:
  - [ sh, -c, 'echo -e "nameserver 8.8.8.8\nnameserver 8.8.4.4" >> /etc/resolv.conf' ]
  - [ yum, update, -y ]
  - [ sh, -c, 'echo "export FACTER_DESIRED_FQDN=<%= @fqdn %>" >> /etc/environment' ]
  - [ sh, -c, 'echo "export FACTER_ROLE=<%= @role %>" >> /etc/environment' ]
  - [ sh, -c, 'echo "export FACTER_PROVIDER=aws" >> /etc/environment' ]
  # Make sure that subsequent commands -- notably Puppet -- have the FACTER_* facts available in their shell env
  - . /etc/environment
  - [ mkdir, -p, /etc/aws ]
  - [ chmod, 750, /etc/aws ]
  - [ sh, -c, 'echo "# Credentials of deploy IAM user" > /etc/aws/aws-credentials.sh' ]
  - [ sh, -c, 'echo "export AWS_ACCESS_KEY=<%= @aws_access_key %>" >> /etc/aws/aws-credentials.sh' ]
  - [ sh, -c, 'echo "export AWS_SECRET_KEY=<%= @aws_secret_key %>" >> /etc/aws/aws-credentials.sh' ]
  - [ sh, -c, 'echo "# For boto -- same information but different variables" >> /etc/aws/aws-credentials.sh' ]
  - [ sh, -c, 'echo "export AWS_ACCESS_KEY_ID=\$AWS_ACCESS_KEY" >> /etc/aws/aws-credentials.sh' ]
  - [ sh, -c, 'echo "export AWS_SECRET_ACCESS_KEY=\$AWS_SECRET_KEY" >> /etc/aws/aws-credentials.sh' ]
  - [ chmod, 400, /etc/aws/aws-credentials.sh ]
  - [ wget, "<%= @aws_rclocal_url %>", -O, "/etc/rc.d/rc.local" ]
  - [ chown, "root:root", /etc/rc.d/rc.local ]
  - [ chmod, 755, /etc/rc.d/rc.local ]
  - [ /etc/rc.d/rc.local ]

zookeeper1: Downloading: centos6-compatible

I'm getting the below error on OS X with the latest VirtualBox and Vagrant 1.7.2:

jeffs-mbp:wirbelsturm Jeff$ vagrant up
Bringing machine 'zookeeper1' up with 'virtualbox' provider...
Bringing machine 'nimbus1' up with 'virtualbox' provider...
Bringing machine 'supervisor1' up with 'virtualbox' provider...
Bringing machine 'supervisor2' up with 'virtualbox' provider...
Bringing machine 'kafka1' up with 'virtualbox' provider...
==> zookeeper1: Box 'centos6-compatible' could not be found. Attempting to find and install...
    zookeeper1: Box Provider: virtualbox
    zookeeper1: Box Version: >= 0
==> zookeeper1: Adding box 'centos6-compatible' (v0) for provider: virtualbox
    zookeeper1: Downloading: centos6-compatible
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.

Couldn't open file /Users/Jeff/wirbelsturm/centos6-compatible

Storm UI Error on fresh setup of cluster

Hello, thanks for this great utility. I set up wirbelsturm on Mac OS X 10.9.5. Everything seems to be set up fine and all 4 nodes (zookeeper, nimbus, supervisor1, supervisor2) are running. But when I launch the Storm UI I get the following error:

(screenshot of the Storm UI error was attached here)

No data flowing in kafka topology

I've tested the default wirbelsturm setup (from wirbelsturm.yaml with kafka uncommented), with the latest source checked out, and this simple topology:

// imports added for completeness (assuming Storm 0.9.x and the
// storm-kafka-0.8-plus package layout used in the linked example;
// they were omitted in the original snippet)
import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;
import storm.kafka.Broker;
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StaticHosts;
import storm.kafka.StringScheme;
import storm.kafka.trident.GlobalPartitionInformation;

public class TestKafkaDeployer {

    public static class PrinterBolt extends BaseBasicBolt {
        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
        }

        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            System.out.println(tuple.toString());
        }
    }

    // expects one argument for topology name
    public static void main(String[] args) throws Exception {

        GlobalPartitionInformation hostsAndPartitions = new GlobalPartitionInformation();
        hostsAndPartitions.addPartition(0, new Broker("10.0.0.21", 9092));
        BrokerHosts brokerHosts = new StaticHosts(hostsAndPartitions);

        SpoutConfig kafkaConfig = new SpoutConfig(brokerHosts, "storm-sentence", "", "storm");
        kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new KafkaSpout(kafkaConfig), 10);
        builder.setBolt("print", new PrinterBolt()).shuffleGrouping("words");

        Config config = new Config();
        config.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 2000);

        StormSubmitter.submitTopologyWithProgressBar(args[0], config,
                builder.createTopology());
    }
}

(adapted from here https://github.com/wurstmeister/storm-kafka-0.8-plus-test/blob/master/src/main/java/storm/kafka/TestTopologyStaticHosts.java)

After the topology is submitted, and a few test messages are pushed to Kafka via kafka-producer.sh, here are the logs from the machines:

supervisor.log

2014-07-07 13:28:18 b.s.d.supervisor [INFO] a5a78e40-34fc-49e0-98f2-5ecb67da1c7b still hasn't started
2014-07-07 13:28:18 b.s.d.supervisor [INFO] a5a78e40-34fc-49e0-98f2-5ecb67da1c7b still hasn't started 
.... 
this is repeated every minute

worker-6701.log

... after topology config is printed out, this is found
2014-07-07 13:28:29 b.s.d.worker [INFO] Worker a5a78e40-34fc-49e0-98f2-5ecb67da1c7b for storm kafka-topology-3-1404739696 on fd7c8a2a-9a97-4a5d-85ae-948369c895e8:6701 has finished loading
2014-07-07 13:28:30 s.k.PartitionManager [INFO] Read partition information from: /storm/partition_0  --> null
2014-07-07 13:28:30 s.k.PartitionManager [INFO] No partition information found, using configuration to determine offset
2014-07-07 13:28:30 s.k.PartitionManager [INFO] Starting Kafka 10.0.0.21:0 from offset 6
2014-07-07 13:28:30 b.s.d.executor [INFO] Opened spout words:(3)
2014-07-07 13:28:30 b.s.d.executor [INFO] Activating spout words:(3)
2014-07-07 13:29:30 s.k.KafkaUtils [WARN] No data found in Kafka Partition partition_0
2014-07-07 13:30:30 s.k.KafkaUtils [WARN] No data found in Kafka Partition partition_0
... repeated untill topology is killed

here is more complete worker log: http://pastie.org/private/lctrkmceiogpqz2ku7ura

zookeeper.log

2014-07-07 13:28:26,352 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48115
2014-07-07 13:28:26,353 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd008a with negotiated timeout 20000 for client /10.0.0.101:48115
2014-07-07 13:28:27,384 [myid:] - INFO  [ProcessThread(sid:0 cport:-1)::PrepRequestProcessor@476] - Processed session termination for sessionid: 0x147106722dd008a
2014-07-07 13:28:27,387 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed socket connection for client /10.0.0.101:48115 which had sessionid 0x147106722dd008a
2014-07-07 13:28:27,403 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48116
2014-07-07 13:28:27,403 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48116
2014-07-07 13:28:27,405 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd008b with negotiated timeout 20000 for client /10.0.0.101:48116
2014-07-07 13:28:28,583 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48117
2014-07-07 13:28:28,584 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48117
2014-07-07 13:28:28,585 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd008c with negotiated timeout 20000 for client /10.0.0.101:48117
2014-07-07 13:28:28,685 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48118
2014-07-07 13:28:28,696 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48118
2014-07-07 13:28:28,697 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd008d with negotiated timeout 20000 for client /10.0.0.101:48118
2014-07-07 13:28:28,743 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48119
2014-07-07 13:28:28,745 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48119
2014-07-07 13:28:28,748 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd008e with negotiated timeout 20000 for client /10.0.0.101:48119
2014-07-07 13:28:28,782 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48120
2014-07-07 13:28:28,784 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48120
2014-07-07 13:28:28,786 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd008f with negotiated timeout 20000 for client /10.0.0.101:48120
2014-07-07 13:28:28,850 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48121
2014-07-07 13:28:28,868 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48121
2014-07-07 13:28:28,870 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd0090 with negotiated timeout 20000 for client /10.0.0.101:48121
2014-07-07 13:28:28,923 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48122
2014-07-07 13:28:28,925 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48122
2014-07-07 13:28:28,928 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd0091 with negotiated timeout 20000 for client /10.0.0.101:48122
2014-07-07 13:28:29,017 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48123
2014-07-07 13:28:29,018 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48123
2014-07-07 13:28:29,019 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd0092 with negotiated timeout 20000 for client /10.0.0.101:48123
2014-07-07 13:28:29,081 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48124
2014-07-07 13:28:29,082 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48124
2014-07-07 13:28:29,083 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd0093 with negotiated timeout 20000 for client /10.0.0.101:48124
2014-07-07 13:28:29,161 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48125
2014-07-07 13:28:29,161 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48125
2014-07-07 13:28:29,163 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd0094 with negotiated timeout 20000 for client /10.0.0.101:48125
2014-07-07 13:28:29,248 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.0.0.101:48126
2014-07-07 13:28:29,248 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@839] - Client attempting to establish new session at /10.0.0.101:48126
2014-07-07 13:28:29,251 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@595] - Established session 0x147106722dd0095 with negotiated timeout 20000 for client /10.0.0.101:48126
2014-07-07 13:39:39,694 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@349] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x147106722dd0093, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:701)
2014-07-07 13:39:39,695 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed socket connection for client /10.0.0.101:48124 which had sessionid 0x147106722dd0093
2014-07-07 13:39:39,695 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@349] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x147106722dd008c, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:701)
2014-07-07 13:39:39,696 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed socket connection for client /10.0.0.101:48117 which had sessionid 0x147106722dd008c
2014-07-07 13:39:39,696 [myid:] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@349] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x147106722dd0094, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
        at java.lang.Thread.run(Thread.java:701)
2014-07-07 13:39:39,696 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1001] - Closed socket connection for client /10.0.0.101:48125 which had sessionid 0x147106722dd0094
... closed rest of sockets

Here are the IP addresses of the VMs:

$ vagrant hosts list
10.0.0.241 zookeeper1
10.0.0.101 supervisor1
10.0.0.21 kafka1
10.0.0.251 nimbus1

This seems to be a wirbelsturm issue, because a similar setup for the Kafka topology runs just fine on a local cluster and when submitted via the Storm CLI.

Since I'm a newbie to both Storm and wirbelsturm, I'd really appreciate any help with this issue.

Upgrade to Storm 0.9.2-incubating

Do you have plans to update to 0.9.2-incubating? From your previous notes, it looked like it would involve a new ZooKeeper as well, so I entered the issue here instead of over on puppet-storm. Would love to see this support soon so we can start using the new features and the latest version of the Kafka spout as well.

issues running locally on Mac OSX

First off, thanks so much for the blog posts and this project.

I just ran across your blog post and tried to set this up following your instructions. I'm running Mac OS X Yosemite and I downloaded Vagrant 1.7.2.

bundle install and ./bootstrap all seem to work fine; however, when I try a vagrant up I get an error:

Bringing machine 'zookeeper1' up with 'virtualbox' provider...
Bringing machine 'nimbus1' up with 'virtualbox' provider...
Bringing machine 'supervisor1' up with 'virtualbox' provider...
Bringing machine 'supervisor2' up with 'virtualbox' provider...
==> zookeeper1: VirtualBox VM is already running.
==> nimbus1: Importing base box 'centos6-compatible'...
==> nimbus1: Matching MAC address for NAT networking...
==> nimbus1: Setting the name of the VM: wirbelsturm_nimbus1_1420863657770_80832
==> nimbus1: Fixed port collision for 22 => 2222. Now on port 2200.
==> nimbus1: Clearing any previously set network interfaces...
==> nimbus1: Preparing network interfaces based on configuration...
    nimbus1: Adapter 1: nat
    nimbus1: Adapter 2: hostonly
==> nimbus1: Forwarding ports...
    nimbus1: 8080 => 28080 (adapter 1)
    nimbus1: 22 => 2200 (adapter 1)
==> nimbus1: Running 'pre-boot' VM customizations...
==> nimbus1: Booting VM...
==> nimbus1: Waiting for machine to boot. This may take a few minutes...
    nimbus1: SSH address: 127.0.0.1:2200
    nimbus1: SSH username: vagrant
    nimbus1: SSH auth method: private key
    nimbus1: Warning: Connection timeout. Retrying...
    nimbus1: Warning: Remote connection disconnect. Retrying...
    nimbus1: 
    nimbus1: Vagrant insecure key detected. Vagrant will automatically replace
    nimbus1: this with a newly generated keypair for better security.
    nimbus1: 
    nimbus1: Inserting generated public key within guest...
    nimbus1: Removing insecure key from the guest if its present...
    nimbus1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> nimbus1: Machine booted and ready!
==> nimbus1: Checking for guest additions in VM...
==> nimbus1: Setting hostname...
==> nimbus1: Configuring and enabling network interfaces...
==> nimbus1: Mounting shared folders...
    nimbus1: /shared => /Users/spark/workspace/wirbelsturm/shared
    nimbus1: /tmp/vagrant-puppet/modules-c119df1cc11c132dd5d64967542c94ee => /Users/spark/workspace/wirbelsturm/puppet/modules
    nimbus1: /tmp/vagrant-puppet/manifests-846018e2aa141a5eb79a64b4015fc5f3 => /Users/spark/workspace/wirbelsturm/puppet/manifests
==> nimbus1: Running provisioner: puppet...
==> nimbus1: Running Puppet with site.pp...
==> nimbus1: Notice: Scope(Class[main]): Deployment environment: 'default-environment'
==> nimbus1: Notice: Scope(Class[main]): I am running within a Vagrant-controlled environment
==> nimbus1: Notice: Scope(Class[main]): Disabling firewall...
==> nimbus1: Notice: Scope(Class[main]): I have been assigned the role 'storm_master'
==> nimbus1: Error: Could not find data item classes in any Hiera data file and no default supplied at /tmp/vagrant-puppet/manifests-846018e2aa141a5eb79a64b4015fc5f3/site.pp:30 on node nimbus1.cable.rcn.com
==> nimbus1: Error: Could not find data item classes in any Hiera data file and no default supplied at /tmp/vagrant-puppet/manifests-846018e2aa141a5eb79a64b4015fc5f3/site.pp:30 on node nimbus1.cable.rcn.com

The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

At this point, the vagrant up command stops but the zookeeper service is up and running.

I'm very new to Vagrant and Puppet, but I have some experience with Ruby, Bundler, and Gems. I'm a little stuck as to what the exact problem is.

Any thoughts on what I can try or look at to try and diagnose the problem?

Read environment variables from file explicitly in rc.local to handle reboots properly

The rc.local script runs upon a reboot; however, the environment variables in /etc/environment (FACTER_*) are not available when the script is run. This prevents the hosts and resolv.conf files from getting updated properly, causing Storm nodes in AWS to be unable to resolve hosts.

FIX: Adding . /etc/environment near the top of the rc.local file ensures that the FACTER_* environment variables get loaded, making clusters work properly upon instance reboots.
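
For illustration, a minimal sketch of how the top of the downloaded rc.local could look (the rest of the file is whatever the existing script already does; exact contents are not shown here):

#!/bin/bash
# Load the FACTER_* variables written during provisioning so the script
# also works on reboot, when the cloud-init shell environment is gone.
. /etc/environment
# ... existing rc.local logic follows (update /etc/hosts, resolv.conf, etc.) ...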

This is only a problem when rebooting machines, and probably only an issue for AWS instances.

puppetlabs.repo for aws deployment

This file is unnecessary; all you need is the call to

sudo rpm -ivh https://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-7.noarch.rpm

After that, a lot of the work in aws/aws-prepare-image.sh can be removed.
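
For illustration, a stripped-down image preparation along those lines might look like this (a sketch only; the yum package name and any remaining steps in aws/aws-prepare-image.sh are assumptions):

sudo rpm -ivh https://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-7.noarch.rpm
sudo yum install -y puppet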

Docker support

It would be very useful to add Docker support.
This would make it much more dev (read: cheap laptop) friendly.
I'm happy to contribute my bit.

"Please run bundle install and then re-run bootstrap"

Hi.
I'm trying to install Wirbelsturm in order to use your configuration to set up a Storm cluster.
I followed your instructions but, after installing VirtualBox and Vagrant, when I type ./bootstrap this is what I see:

+---------------------------+
| BOOTSTRAPPING WIRBELSTURM |
+---------------------------+
Checking for curl: OK
Preparing Ruby environment...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4942  100  4942    0     0   6263      0 --:--:-- --:--:-- --:--:--  6263
Detecting desired Ruby version: OK (1.9.3-p362)
Checking for rvm: NOT FOUND (If you already ran ruby-bootstrap: have you performed the post-installation steps?)
Installing latest stable version of rvm...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   184  100   184    0     0    194      0 --:--:-- --:--:-- --:--:--   194
100 22865  100 22865    0     0  14006      0  0:00:01  0:00:01 --:--:-- 14006
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.27.0.tar.gz
Downloading https://github.com/rvm/rvm/releases/download/1.27.0/1.27.0.tar.gz.asc
gpg: Signature made Tue 29 Mar 2016 15:49:47 CEST using RSA, key ID BF04FF17
gpg: Can't check signature: No public key
Warning, RVM 1.26.0 introduces signed releases and automated check of signatures when GPG software found.
Assuming you trust Michal Papis import the mpapis public key (downloading the signatures).

GPG signature verification failed for '/usr/local/rvm/archives/rvm-1.27.0.tgz' - 'https://github.com/rvm/rvm/releases/download/1.27.0/1.27.0.tar.gz.asc'!
try downloading the signatures:

    sudo gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

or if it fails:

    command curl -sSL https://rvm.io/mpapis.asc | sudo gpg2 --import -

the key can be compared with:

    https://rvm.io/mpapis.asc
    https://keybase.io/mpapis

Creating an example Wirbelsturm configuration file at /home/mario/wirbelsturm/wirbelsturm.yaml...
Checking Vagrant environment...
Checking for Vagrant: OK
Preparing Vagrant environment...
Installing the 'vagrant-hosts --version '>= 2.1.4'' plugin. This can take a few minutes...
/usr/lib/ruby/2.3.0/rubygems/specification.rb:946:in `all=': undefined method `group_by' for nil:NilClass (NoMethodError)
    from /usr/lib/ruby/vendor_ruby/vagrant/bundler.rb:275:in `with_isolated_gem'
    from /usr/lib/ruby/vendor_ruby/vagrant/bundler.rb:231:in `internal_install'
    from /usr/lib/ruby/vendor_ruby/vagrant/bundler.rb:102:in `install'
    from /usr/lib/ruby/vendor_ruby/vagrant/plugin/manager.rb:62:in `block in install_plugin'
    from /usr/lib/ruby/vendor_ruby/vagrant/plugin/manager.rb:72:in `install_plugin'
    from /usr/share/vagrant/plugins/commands/plugin/action/install_gem.rb:37:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/warden.rb:34:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/builder.rb:116:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/runner.rb:66:in `block in run'
    from /usr/lib/ruby/vendor_ruby/vagrant/util/busy.rb:19:in `busy'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/runner.rb:66:in `run'
    from /usr/share/vagrant/plugins/commands/plugin/command/base.rb:14:in `action'
    from /usr/share/vagrant/plugins/commands/plugin/command/install.rb:32:in `block in execute'
    from /usr/share/vagrant/plugins/commands/plugin/command/install.rb:31:in `each'
    from /usr/share/vagrant/plugins/commands/plugin/command/install.rb:31:in `execute'
    from /usr/share/vagrant/plugins/commands/plugin/command/root.rb:56:in `execute'
    from /usr/lib/ruby/vendor_ruby/vagrant/cli.rb:42:in `execute'
    from /usr/lib/ruby/vendor_ruby/vagrant/environment.rb:268:in `cli'
    from /usr/bin/vagrant:173:in `<main>'
Installing the 'vagrant-aws' plugin. This can take a few minutes...
/usr/lib/ruby/2.3.0/rubygems/specification.rb:946:in `all=': undefined method `group_by' for nil:NilClass (NoMethodError)
    from /usr/lib/ruby/vendor_ruby/vagrant/bundler.rb:275:in `with_isolated_gem'
    from /usr/lib/ruby/vendor_ruby/vagrant/bundler.rb:231:in `internal_install'
    from /usr/lib/ruby/vendor_ruby/vagrant/bundler.rb:102:in `install'
    from /usr/lib/ruby/vendor_ruby/vagrant/plugin/manager.rb:62:in `block in install_plugin'
    from /usr/lib/ruby/vendor_ruby/vagrant/plugin/manager.rb:72:in `install_plugin'
    from /usr/share/vagrant/plugins/commands/plugin/action/install_gem.rb:37:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/warden.rb:34:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/builder.rb:116:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/runner.rb:66:in `block in run'
    from /usr/lib/ruby/vendor_ruby/vagrant/util/busy.rb:19:in `busy'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/runner.rb:66:in `run'
    from /usr/share/vagrant/plugins/commands/plugin/command/base.rb:14:in `action'
    from /usr/share/vagrant/plugins/commands/plugin/command/install.rb:32:in `block in execute'
    from /usr/share/vagrant/plugins/commands/plugin/command/install.rb:31:in `each'
    from /usr/share/vagrant/plugins/commands/plugin/command/install.rb:31:in `execute'
    from /usr/share/vagrant/plugins/commands/plugin/command/root.rb:56:in `execute'
    from /usr/lib/ruby/vendor_ruby/vagrant/cli.rb:42:in `execute'
    from /usr/lib/ruby/vendor_ruby/vagrant/environment.rb:268:in `cli'
    from /usr/bin/vagrant:173:in `<main>'
Installing the 'vagrant-awsinfo' plugin. This can take a few minutes...
/usr/lib/ruby/2.3.0/rubygems/specification.rb:946:in `all=': undefined method `group_by' for nil:NilClass (NoMethodError)
    from /usr/lib/ruby/vendor_ruby/vagrant/bundler.rb:275:in `with_isolated_gem'
    from /usr/lib/ruby/vendor_ruby/vagrant/bundler.rb:231:in `internal_install'
    from /usr/lib/ruby/vendor_ruby/vagrant/bundler.rb:102:in `install'
    from /usr/lib/ruby/vendor_ruby/vagrant/plugin/manager.rb:62:in `block in install_plugin'
    from /usr/lib/ruby/vendor_ruby/vagrant/plugin/manager.rb:72:in `install_plugin'
    from /usr/share/vagrant/plugins/commands/plugin/action/install_gem.rb:37:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/warden.rb:34:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/builder.rb:116:in `call'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/runner.rb:66:in `block in run'
    from /usr/lib/ruby/vendor_ruby/vagrant/util/busy.rb:19:in `busy'
    from /usr/lib/ruby/vendor_ruby/vagrant/action/runner.rb:66:in `run'
    from /usr/share/vagrant/plugins/commands/plugin/command/base.rb:14:in `action'
    from /usr/share/vagrant/plugins/commands/plugin/command/install.rb:32:in `block in execute'
    from /usr/share/vagrant/plugins/commands/plugin/command/install.rb:31:in `each'
    from /usr/share/vagrant/plugins/commands/plugin/command/install.rb:31:in `execute'
    from /usr/share/vagrant/plugins/commands/plugin/command/root.rb:56:in `execute'
    from /usr/lib/ruby/vendor_ruby/vagrant/cli.rb:42:in `execute'
    from /usr/lib/ruby/vendor_ruby/vagrant/environment.rb:268:in `cli'
    from /usr/bin/vagrant:173:in `<main>'
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'centos6-compatible' (v0) for provider: virtualbox
The box you're attempting to add already exists. Remove it before
adding it again or add it with the `--force` flag.

Name: centos6-compatible
Provider: ["virtualbox"]
Version: 0
==> box: Box file was not detected as metadata. Adding it directly...
==> box: Adding box 'centos6-compatible' (v0) for provider: aws
The box you're attempting to add already exists. Remove it before
adding it again or add it with the `--force` flag.

Name: centos6-compatible
Provider: ["aws"]
Version: 0
Checking Puppet environment...
Checking for librarian-puppet: NOT FOUND
Please run 'bundle install' and then re-run bootstrap

How can I solve it? Thank you in advance.
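
A possible sequence, assuming the GPG verification failure is what prevented RVM (and therefore bundler and librarian-puppet) from being installed:

sudo gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
# or, if that fails: curl -sSL https://rvm.io/mpapis.asc | sudo gpg2 --import -
./bootstrap           # should now be able to install RVM and Ruby
bundle install        # installs librarian-puppet and friends
./bootstrap           # re-run so the Puppet environment check passes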
