This is an example of how to pass configuration parameters into a Storm topology. It builds on the simplest possible Storm topology that integrates with Kafka.
As with the simple Echo topology, a producer based on the Kafka examples is included.
Configuration files are stored in the config directory. The two included are docker.properties and aws.properties.
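The properties files are plain Java key=value files. A minimal sketch of what docker.properties might contain — the key names and values here are hypothetical, not taken from the actual files:

```
# Hypothetical docker.properties sketch; key names and values are
# illustrative assumptions, not the example's actual contents.
kafka.zookeeper.hosts=docker-machine:2181
kafka.topic=echo
```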
This example is configured to run in Docker on OS X with an /etc/hosts entry for docker-machine (see this blog post for more details on configuring this environment). For any other setup, modify bin/deploy-topology.sh and docker-config.yml to specify the correct Nimbus host and Kafka hosts, respectively.
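The Storm client also accepts per-invocation config overrides with the -c flag, which is one way a deploy script can point at a different Nimbus host. A hypothetical sketch — the jar path and topology class are placeholders, not the names this example actually uses, and the command is echoed rather than run:

```shell
# Hypothetical deploy sketch. NIMBUS_HOST, the jar path, and the
# topology class are placeholders; drop the echo to actually submit
# on a machine with the storm client installed.
NIMBUS_HOST=${NIMBUS_HOST:-192.168.99.100}
echo storm jar lib/topology.jar com.example.ConfigTopology \
     -c nimbus.host="$NIMBUS_HOST"
```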
The example depends on SLF4J, Zookeeper, Kafka, and Storm. The directories for those packages are specified in build.properties. These must be set correctly for the build to succeed.
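A sketch of what build.properties might look like — the property names and paths below are assumptions for illustration; the actual names are defined by the project's Ant build file:

```
# Hypothetical build.properties sketch; property names and paths are
# assumptions and must match your local installs and the Ant build file.
slf4j.dir=/opt/slf4j
zookeeper.dir=/opt/zookeeper
kafka.dir=/opt/kafka
storm.dir=/opt/apache-storm
```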
To build the jar files the first time, run ant build. To clean and build, run ant rebuild. This creates jar files in the lib directory.
If Docker is being used to run the example, the first step is to start the Docker containers running Zookeeper, Kafka, and Storm. When Kafka and Storm are available, create the topic, deploy the topology, then feed messages to Kafka:

1. docker-compose up & (if using Docker)
2. bin/create-topic.sh
3. bin/deploy-topology-on-docker.sh
4. bin/feed-kafka.sh

To run with the faux AWS configuration file, replace step 3 with bin/deploy-topology-on-aws.sh.
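Topic creation in step 2 likely wraps Kafka's stock kafka-topics.sh CLI. A hedged sketch of what such a script could do — the topic name and Zookeeper address are placeholders, and the command is echoed rather than executed so it can be inspected first:

```shell
# Hypothetical sketch of a create-topic script; topic name and
# Zookeeper address are placeholders. Drop the echo to run it
# against a live Kafka/Zookeeper pair.
ZOOKEEPER=${ZOOKEEPER:-docker-machine:2181}
echo kafka-topics.sh --create --zookeeper "$ZOOKEEPER" \
     --replication-factor 1 --partitions 1 --topic echo
```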
The configuration details are used by the topology to configure the Kafka spout. The bolt in this topology writes a couple of the properties to the log file. To view them, run docker ps -a to find the ID of the container running the Storm supervisor node, then:
docker exec -it <image-id> bash
cd /tmp/storm/logs
grep EchoBolt worker*.log | grep Configuration
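Because the properties files are plain key=value pairs, a value can also be sanity-checked outside the topology with standard shell tools. A minimal sketch, using a hypothetical kafka.topic key in a throwaway file rather than the example's real config:

```shell
# Write a sample properties file (keys are hypothetical, for illustration)
cat > /tmp/sample.properties <<'EOF'
kafka.topic=echo
kafka.zookeeper.hosts=docker-machine:2181
EOF

# Extract one value the same way a key=value parser would read it
grep '^kafka.topic=' /tmp/sample.properties | cut -d= -f2
```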