
docker-kafka-alpine's Introduction


docker-kafka-alpine


Alpine Linux based Kafka Docker Image

Dependencies

Image Tags

REPOSITORY          TAG                 SIZE
blacktop/kafka      latest              506MB
blacktop/kafka      3.3                 506MB
blacktop/kafka      3.2                 410MB
blacktop/kafka      3.1                 461MB
blacktop/kafka      3.0                 461MB
blacktop/kafka      2.8                 473MB
blacktop/kafka      2.7                 473MB
blacktop/kafka      2.6                 473MB
blacktop/kafka      2.5                 438MB
blacktop/kafka      2.4                 441MB
blacktop/kafka      2.3                 437MB
blacktop/kafka      2.2                 411MB
blacktop/kafka      2.1                 461MB
blacktop/kafka      2.0                 461MB
blacktop/kafka      1.1                 332MB
blacktop/kafka      1.0                 441MB
blacktop/kafka      0.11                226MB
blacktop/kafka      0.10                437MB
blacktop/kafka      0.9                 238.6MB
blacktop/kafka      0.8                 227.5MB

Getting Started

NOTE: I am assuming use of Docker for Mac with these examples (KAFKA_ADVERTISED_HOST_NAME=localhost).

docker run -d \
           --name kafka \
           -p 9092:9092 \
           -e KAFKA_ADVERTISED_HOST_NAME=localhost \
           -e KAFKA_CREATE_TOPICS="test-topic:1:1" \
           blacktop/kafka

This will create a single-node kafka broker (listening on localhost:9092) alongside a local zookeeper instance, and create the topic test-topic with a replication factor of 1 and a single partition.
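KAFKA_CREATE_TOPICS takes colon-separated name:partitions:replication-factor entries. A minimal sketch of how such a spec splits apart (illustrative parsing only, not the image's actual entrypoint code):

```shell
parse_topic_spec() {
  # split "name:partitions:replicas" on ':' using POSIX parameter expansion
  spec="$1"
  topic=${spec%%:*}        # everything before the first ':'
  rest=${spec#*:}          # everything after the first ':'
  partitions=${rest%%:*}
  replicas=${rest#*:}
  echo "topic=$topic partitions=$partitions replicas=$replicas"
}

parse_topic_spec "test-topic:1:1"
# -> topic=test-topic partitions=1 replicas=1
```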

You can now test your new single-node kafka broker using Shopify/sarama's kafka-console-producer and kafka-console-consumer tools.

Required

$ go get github.com/Shopify/sarama/tools/kafka-console-consumer
$ go get github.com/Shopify/sarama/tools/kafka-console-producer

Now start a consumer in the background, then send some data to kafka via a producer:

$ kafka-console-consumer --brokers=localhost:9092 --topic=test-topic &
$ echo "shrinky-dinks" | kafka-console-producer --brokers=localhost:9092 --topic=test-topic
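When you're done, the background consumer and the broker container can be cleaned up. A sketch, assuming the consumer is shell job 1 as in the commands above:

```shell
$ kill %1              # stop the background kafka-console-consumer
$ docker rm -f kafka   # stop and remove the broker container
```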

Documentation

Issues

Find a bug? Want more features? Find something missing in the documentation? Let me know! Please don't hesitate to file an issue.

Credits

Heavily (if not entirely) influenced by https://github.com/wurstmeister/kafka-docker

Todo

  • Add ability to run a single node kafka broker when you don't supply a zookeeper link.

License

MIT Copyright (c) 2016-2022 blacktop

docker-kafka-alpine's People

Contributors

blacktop, diasjorge, saghir786, timbotetsu


docker-kafka-alpine's Issues

zookeeper is not a recognized option

Hi Team!
I tried to use blacktop/kafka's latest version, but I got the following error when I tried to start the container:

Exception in thread "main" joptsimple.UnrecognizedOptionException: zookeeper is not a recognized option

I think the problem is related to the --zookeeper flag used in the command below:

JMX_PORT='' kafka-topics.sh --create --zookeeper $KAFKA_ZOOKEEPER_CONNECT --replication-factor ${topicConfig[2]} --partitions ${topicConfig[1]} --topic "${topicConfig[0]}"

Newer versions (2.2+) of Kafka no longer require a ZooKeeper connection string for kafka-topics.sh, and the --zookeeper option was removed entirely in Kafka 3.0.
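On those newer versions the topic-creation command needs --bootstrap-server instead of --zookeeper. A hedged sketch of the fix, keeping the script's topicConfig variables and assuming the broker listens on localhost:9092:

```shell
JMX_PORT='' kafka-topics.sh --create \
    --bootstrap-server localhost:9092 \
    --replication-factor ${topicConfig[2]} \
    --partitions ${topicConfig[1]} \
    --topic "${topicConfig[0]}"
```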

blacktop/kafka:1.1: not present

docker pull blacktop/kafka:1.1
Error response from daemon: manifest for blacktop/kafka:1.1 not found

Can you please publish 1.1?

Configuration error in DockerHub example

When I try to run the example listed on DockerHub, the container crashes with some kind of configuration formatting error:

$ docker run -p 9092:9092 -e KAFKA_ADVERTISED_HOST_NAME=172.17.0.2 blacktop/kafka:0.10
Configuring Kafka...
/configure-kafka.sh: line 3: -1: command not found
/configure-kafka.sh: line 4: 9092: command not found
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
DOCKER_KAFKA_PORT 
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Configuring Zookeeper...
Thu Jan 12 16:53:37 UTC 2017 - still trying
[2017-01-12 16:53:37,441] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-01-12 16:53:37,446] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-01-12 16:53:37,446] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-01-12 16:53:37,446] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2017-01-12 16:53:37,447] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2017-01-12 16:53:37,473] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2017-01-12 16:53:37,473] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2017-01-12 16:53:37,489] INFO Server environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,489] INFO Server environment:host.name=1f3ff79feb8e (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,489] INFO Server environment:java.version=1.8.0_111-internal (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,489] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,489] INFO Server environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,490] INFO Server environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.4.0-b34.jar:/opt/kafka/bin/../libs/argparse4j-0.5.0.jar:/opt/kafka/bin/../libs/connect-api-0.10.1.1.jar:/opt/kafka/bin/../libs/connect-file-0.10.1.1.jar:/opt/kafka/bin/../libs/connect-json-0.10.1.1.jar:/opt/kafka/bin/../libs/connect-runtime-0.10.1.1.jar:/opt/kafka/bin/../libs/guava-18.0.jar:/opt/kafka/bin/../libs/hk2-api-2.4.0-b34.jar:/opt/kafka/bin/../libs/hk2-locator-2.4.0-b34.jar:/opt/kafka/bin/../libs/hk2-utils-2.4.0-b34.jar:/opt/kafka/bin/../libs/jackson-annotations-2.6.0.jar:/opt/kafka/bin/../libs/jackson-core-2.6.3.jar:/opt/kafka/bin/../libs/jackson-databind-2.6.3.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.6.3.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.6.3.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.6.3.jar:/opt/kafka/bin/../libs/javassist-3.18.2-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.4.0-b34.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.22.2.jar:/opt/kafka/bin/../libs/jersey-common-2.22.2.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.22.2.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.22.2.jar:/opt/kafka/bin/../libs/jersey-guava-2.22.2.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.22.2.jar:/opt/kafka/bin/../libs/jersey-server-2.22.2.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-http-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/opt/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/opt/kafk
a/bin/../libs/jopt-simple-4.9.jar:/opt/kafka/bin/../libs/kafka-clients-0.10.1.1.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-0.10.1.1.jar:/opt/kafka/bin/../libs/kafka-streams-0.10.1.1.jar:/opt/kafka/bin/../libs/kafka-streams-examples-0.10.1.1.jar:/opt/kafka/bin/../libs/kafka-tools-0.10.1.1.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.1.1-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.1.1-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-0.10.1.1.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-1.3.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/reflections-0.9.10.jar:/opt/kafka/bin/../libs/rocksdbjni-4.9.0.jar:/opt/kafka/bin/../libs/scala-library-2.11.8.jar:/opt/kafka/bin/../libs/scala-parser-combinators_2.11-1.0.4.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.21.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/opt/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.9.jar:/opt/kafka/bin/../libs/zookeeper-3.4.8.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,490] INFO Server environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,491] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,491] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,492] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,492] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,492] INFO Server environment:os.version=4.4.41-moby (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,492] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,492] INFO Server environment:user.home=/root (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,493] INFO Server environment:user.dir=/opt/kafka (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,507] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,507] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,508] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-01-12 16:53:37,526] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
Thu Jan 12 16:53:38 UTC 2017 - connected successfully
[2017-01-12 16:53:38,009] INFO Accepted socket connection from /127.0.0.1:37317 (org.apache.zookeeper.server.NIOServerCnxnFactory)
DYNAMTIC CONFIG=========================================================================
zookeeper.connect=localhost:2181
DYNAMTIC CONFIG=========================================================================
advertised.listeners=PLAINTEXT://172.17.0.2:
DYNAMTIC CONFIG=========================================================================
version=0.10.1.1
DYNAMTIC CONFIG=========================================================================
log.dirs=/kafka/kafka-logs/1f3ff79feb8e
DYNAMTIC CONFIG=========================================================================
listeners=PLAINTEXT://0.0.0.0:
[2017-01-12 16:53:38,116] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:230)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
	at java.lang.Thread.run(Thread.java:745)
[2017-01-12 16:53:38,119] INFO Closed socket connection for client /127.0.0.1:37317 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
OpenJDK 64-Bit Server VM warning: Cannot open file /opt/kafka/bin/../logs/kafkaServer-gc.log due to Permission denied

[2017-01-12 16:53:38,956] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = PLAINTEXT://172.17.0.2:
	advertised.port = null
	authorizer.class.name = 
	auto.create.topics.enable = false
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = -1
	broker.id.generation.enable = true
	broker.rack = null
	compression.type = producer
	connections.max.idle.ms = 600000
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	default.replication.factor = 1
	delete.topic.enable = true
	fetch.purgatory.purge.interval.requests = 1000
	group.max.session.timeout.ms = 300000
	group.min.session.timeout.ms = 6000
	host.name = 
	inter.broker.protocol.version = 0.10.1-IV2
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listeners = PLAINTEXT://0.0.0.0:
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /kafka/kafka-logs/1f3ff79feb8e
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.format.version = 0.10.1-IV2
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 1440
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	port = 9092
	principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
	producer.purgatory.purge.interval.requests = 1000
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism.inter.broker.protocol = GSSAPI
	security.inter.broker.protocol = PLAINTEXT
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = null
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	unclean.leader.election.enable = true
	zookeeper.connect = localhost:2181
	zookeeper.connection.timeout.ms = 60000
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2017-01-12 16:53:38,979] FATAL  (kafka.Kafka$)
java.lang.IllegalArgumentException: Error creating broker listeners from 'PLAINTEXT://0.0.0.0:': Unable to parse PLAINTEXT://0.0.0.0: to a broker endpoint
	at kafka.server.KafkaConfig.validateUniquePortAndProtocol(KafkaConfig.scala:993)
	at kafka.server.KafkaConfig.getListeners(KafkaConfig.scala:1012)
	at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:965)
	at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:778)
	at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:775)
	at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
	at kafka.Kafka$.main(Kafka.scala:58)
	at kafka.Kafka.main(Kafka.scala)

Auto creation of topics is switched off

Just recently I've started seeing a lot of this error in my tests that use blacktop/kafka:0.10:

Error while fetching metadata with correlation id 0 : {producer-topic=UNKNOWN_TOPIC_OR_PARTITION}

The problem seems to be a result of this recent change: 5034673#diff-ce7d1556313e0097067704a2ac200740R118. That is, kafka no longer auto-creates topics, whereas before it did. The Apache docs give the setting as on by default.

So, is the change deliberate?

A workaround is to switch it back on by setting the environment variable KAFKA_AUTO_CREATE_TOPICS_ENABLE=true.
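Using the Getting Started example from the README, the workaround looks like this (a sketch; the env var maps onto Kafka's auto.create.topics.enable broker setting):

```shell
docker run -d \
           --name kafka \
           -p 9092:9092 \
           -e KAFKA_ADVERTISED_HOST_NAME=localhost \
           -e KAFKA_AUTO_CREATE_TOPICS_ENABLE=true \
           blacktop/kafka:0.10
```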

Unable to configure a Cluster

I am using Rancher with alpine-kafka, rancher-kafka, and alpine-volume, scaled to 3.
I am having the issues below.
The scaled containers point to docker-machine-ip:9092 as the private port, and no public port is exposed on the Docker host side. I am unable to configure the Kafka cluster in my Kafka clients because I don't know the ports and I don't know how to configure them.

Topics are not created in latest blacktop/kafka:3.0

I am using blacktop/kafka in my gitlab-ci.yml for testing kafka in my project. Up to blacktop/kafka:2.6 I was able to create topics, but since 3.0 was released topics are no longer created. Could you please guide me?

Below are the variables I used in my .gitlab-ci.yml file:

KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://kafka:9092,INTERNAL://localhost:9093'
KAFKA_BROKER_ID: '1'
KAFKA_INTER_BROKER_LISTENER_NAME: 'INTERNAL'
KAFKA_LISTENERS: 'PLAINTEXT://0.0.0.0:9092,INTERNAL://0.0.0.0:9093'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT'
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
KAFKA_CREATE_TOPICS: "test_topic:1:1"
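A possible workaround until the image supports Kafka 3.0 topic creation: create the topic manually once the broker is up, since kafka-topics.sh in 3.0 only accepts --bootstrap-server. A sketch, assuming the broker container is named kafka and reachable on localhost:9092 inside it:

```shell
docker exec kafka kafka-topics.sh --create \
    --bootstrap-server localhost:9092 \
    --replication-factor 1 \
    --partitions 1 \
    --topic test_topic
```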
