dogstatsd-ruby

A client for DogStatsD, an extension of the StatsD metric server for Datadog. Full API documentation is available in DogStatsD-ruby rubydoc.

See CHANGELOG.md for changes. To suggest a feature, report a bug, or general discussion, open an issue.

Installation

First install the library:

gem install dogstatsd-ruby

Configuration

To instantiate a DogStatsD client:

# Import the library
require 'datadog/statsd'

# Create a DogStatsD client instance
statsd = Datadog::Statsd.new('localhost', 8125)
# ...
# release resources used by the client instance
statsd.close()

Or if you want to connect over Unix Domain Socket:

# Connection over Unix Domain Socket
statsd = Datadog::Statsd.new(socket_path: '/path/to/socket/file')
# ...
# release resources used by the client instance
statsd.close()

Find a list of all the available options for your DogStatsD Client in the DogStatsD-ruby rubydoc or in the Datadog public DogStatsD documentation.

Migrating from v4.x to v5.x

If you are already using DogStatsD-ruby v4.x and want to migrate to v5.x, the major change that concerns you is the new threading model: the client now buffers metrics and flushes them from a companion sender thread.

In practice, this means two things:

  1. Now that the client is buffering metrics before sending them, you have to call Datadog::Statsd#flush(sync: true) if you want synchronous behavior. In most cases, this is not needed, as the sender thread will automatically flush the buffered metrics if the buffer gets full or when you are closing the instance.

  2. You have to make sure you are either:

  • Using a singleton instance of the DogStatsD client instead of creating a new instance whenever you need one; this will let the buffering mechanism flush metrics regularly
  • Or properly disposing of the DogStatsD client instance when it is not needed anymore using the method Datadog::Statsd#close
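The singleton recommendation can be sketched as follows. This is a hypothetical wrapper, not part of the library; a `FakeStatsd` stand-in replaces `Datadog::Statsd.new('localhost', 8125)` so the sketch runs without the gem:

```ruby
# Hypothetical singleton wrapper (not part of the library).
# FakeStatsd stands in for Datadog::Statsd so this sketch runs without the gem.
require 'singleton'

class FakeStatsd
  def increment(metric); metric; end
  def close; end
end

class Metrics
  include Singleton

  attr_reader :client

  def initialize
    # In a real application: @client = Datadog::Statsd.new('localhost', 8125)
    @client = FakeStatsd.new
  end
end

# Every caller shares one underlying client, so the buffering mechanism can
# flush regularly instead of leaking a sender thread per short-lived instance.
Metrics.instance.client.increment('page.views')
```

With a single shared instance, `#close` only ever needs to be called once, at process shutdown.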

If you have issues with the sender thread or the buffering mode, you can instantiate a client that behaves exactly as in v4.x (i.e. no sender thread and flush on every metric submission):

# Create a DogStatsD client instance using UDP
statsd = Datadog::Statsd.new('localhost', 8125, single_thread: true, buffer_max_pool_size: 1)
# ...
statsd.close()

or

# Create a DogStatsD client instance using UDS
statsd = Datadog::Statsd.new(socket_path: '/path/to/socket/file', single_thread: true, buffer_max_pool_size: 1)
# ...
statsd.close()

v5.x Common Pitfalls

Version v5.x of dogstatsd-ruby is using a sender thread for flushing. This provides better performance, but you need to consider the following pitfalls:

  1. Applications that use fork after having created the dogstatsd instance: the child process will automatically spawn a new sender thread to flush metrics.

  2. Applications that create multiple instances of the client without closing them: it is important to #close all instances to free the thread and the socket they are using; otherwise you will leak those resources.

If you are using Sidekiq, please make sure to close the client instances that are instantiated. See this example on using DogStatsD-ruby v5.x with Sidekiq.

Applications that run into issues but can't apply these recommendations should use the single_thread mode which disables the use of the sender thread. Here is how to instantiate a client in this mode:

statsd = Datadog::Statsd.new('localhost', 8125, single_thread: true)
# ...
# release resources used by the client instance and flush last metrics
statsd.close()

Origin detection over UDP

Origin detection is a method to detect which pod DogStatsD packets are coming from, in order to add the pod's tags to the tag list.

To enable origin detection over UDP, add the following lines to your application manifest:

env:
  - name: DD_ENTITY_ID
    valueFrom:
      fieldRef:
        fieldPath: metadata.uid

The DogStatsD client attaches an internal tag, entity_id. The value of this tag is the content of the DD_ENTITY_ID environment variable, which is the pod’s UID.
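As an illustration only (not the library's actual code), a client could derive that tag roughly like this, where `entity_tags` is a hypothetical helper:

```ruby
# Illustration only: deriving the entity_id tag from DD_ENTITY_ID
# (the variable populated by the manifest snippet above).
def entity_tags(base_tags, env = ENV)
  uid = env['DD_ENTITY_ID']
  uid ? base_tags + ["entity_id:#{uid}"] : base_tags
end

entity_tags(['service:web'], { 'DD_ENTITY_ID' => 'abc-123' })
# the pod UID is appended as an entity_id tag
```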

Usage

In order to use DogStatsD metrics, events, and Service Checks, the Datadog Agent must be running and available.

Metrics

After the client is created, you can start sending custom metrics to Datadog. See the dedicated Metric Submission: DogStatsD documentation to see how to submit all supported metric types to Datadog with working code examples:

Some options are supported when submitting metrics, such as applying a Sample Rate or tagging your metrics with custom tags. Find all the available functions to report metrics in the DogStatsD-ruby rubydoc.
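For reference, the DogStatsD datagram format these options feed into looks like `<name>:<value>|<type>|@<sample_rate>|#<tags>`. The sketch below is a simplified serializer mirroring the wire format, not the library's internal classes:

```ruby
# Simplified serializer for the DogStatsD datagram format:
#   <name>:<value>|<type>|@<sample_rate>|#<tag>,<tag>
# This mirrors the wire format, not the library's internals.
def serialize_metric(name, value, type, sample_rate: nil, tags: [])
  parts = ["#{name}:#{value}|#{type}"]
  parts << "@#{sample_rate}" if sample_rate   # e.g. @0.5 => sampled at 50%
  parts << "##{tags.join(',')}" unless tags.empty?
  parts.join('|')
end

serialize_metric('page.views', 1, 'c', sample_rate: 0.5, tags: ['env:prod'])
# => "page.views:1|c|@0.5|#env:prod"
```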

Events

After the client is created, you can start sending events to your Datadog Event Stream. See the dedicated Event Submission: DogStatsD documentation to see how to submit an event to your Datadog Event Stream.

Service Checks

After the client is created, you can start sending Service Checks to Datadog. See the dedicated Service Check Submission: DogStatsD documentation to see how to submit a Service Check to Datadog.

Maximum packet size in high-throughput scenarios

In order to have the most efficient use of this library in high-throughput scenarios, recommended values for the maximum packet size have already been set for both UDS (8192 bytes) and UDP (1432 bytes).

However, if you are in control of your network and want to use a different value for the maximum packet size, you can set the buffer_max_payload_size parameter:

statsd = Datadog::Statsd.new('localhost', 8125, buffer_max_payload_size: 4096)
# ...
statsd.close()
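To make the trade-off concrete, here is a simplified sketch (not the library's implementation) of a buffer that flushes whenever appending one more serialized metric would push the joined payload past the limit:

```ruby
# Simplified sketch (not library internals): flush whenever appending one
# more serialized metric would push the joined payload past the limit.
class PayloadBuffer
  attr_reader :flushed

  def initialize(max_payload_size)
    @max = max_payload_size
    @buffer = []
    @flushed = []   # each entry models one datagram written to the socket
  end

  def add(message)
    pending = (@buffer + [message]).join("\n")
    flush if pending.bytesize > @max && !@buffer.empty?
    @buffer << message
  end

  def flush
    return if @buffer.empty?
    @flushed << @buffer.join("\n")
    @buffer = []
  end
end
```

A smaller `buffer_max_payload_size` means more, smaller datagrams; a larger one means fewer sends but risks exceeding what the transport can carry.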

Threading model

Starting with version 5.0, dogstatsd-ruby employs a new threading model where one instance of Datadog::Statsd can be shared between threads and where data sending is non-blocking (asynchronous).

When you instantiate a Datadog::Statsd, a sender thread is spawned. This thread will be called the Sender thread, as it is modeled by the Sender class. You can make use of single_thread: true to disable this behavior.

This thread is stopped when you close the statsd client (Datadog::Statsd#close). Instantiating a lot of statsd clients without calling #close when they are no longer needed will most likely lead to leaked threads.

The sender thread has the following logic (from Datadog::Statsd::Sender#send_loop):

while the sender message queue is not closed do
  read message from sender message queue

  if message is a Control message to flush
    flush buffer in connection
  else if message is a Control message to synchronize
    synchronize with calling thread
  else
    add message to the buffer
  end
end

There are three different kinds of messages:

  1. a control message to flush the buffer in the connection
  2. a control message to synchronize any thread with the sender thread
  3. a message to append to the buffer

There is also an implicit message that closes the queue, causing the sender thread to finish processing and exit.
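The loop above can be sketched in plain Ruby with a `Queue`, where `:flush` stands in for the flush control message and closing the queue is the implicit stop message. `MiniSender` is illustrative only, not the library's `Sender` class:

```ruby
# Runnable plain-Ruby sketch of the sender loop. :flush stands in for the
# flush control message; closing the queue is the implicit stop message.
class MiniSender
  attr_reader :flushed

  def initialize
    @queue = Queue.new
    @buffer = []
    @flushed = []
    @thread = Thread.new { send_loop }
  end

  def add(message)
    @queue << message
  end

  def flush
    @queue << :flush
  end

  def close
    @queue.close     # pop returns nil once the queue is closed and drained
    @thread.join
  end

  private

  def send_loop
    while (message = @queue.pop)
      if message == :flush
        @flushed << @buffer.join("\n") unless @buffer.empty?
        @buffer = []
      else
        @buffer << message   # normal metric: append to the buffer
      end
    end
  end
end
```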


The message queue's maximum size (in messages) is given by the sender_queue_size argument, and has appropriate defaults for UDP (2048), UDS (512) and single_thread: true (1).

The buffer_flush_interval, if enabled, is implemented with an additional thread which manages the timing of those flushes. This additional thread is used even if single_thread: true.

Usual workflow

You push metrics to the statsd client, which writes them quickly to the sender message queue. The sender thread receives those messages, buffers them, and flushes them to the connection when the buffer limit is reached.

Flushing

When calling Datadog::Statsd#flush, a specific control message (:flush) is sent to the sender thread. When the sender thread receives it, it flushes its internal buffer into the connection.

Rendez-vous

It is possible to ensure a message has been consumed by the sender thread and written to the buffer by simply calling a rendez-vous right after. This is done when you are doing a synchronous flush using Datadog::Statsd#flush(sync: true).

Doing so means the caller thread is blocked and waiting until the data has been flushed by the sender thread.

This is useful when preparing to exit the application or in unit tests.
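A minimal sketch of the rendez-vous idea, assuming the control message carries a one-slot queue that the sender thread answers on (illustrative only, not the library's implementation):

```ruby
# Minimal rendez-vous sketch: the control message carries a one-slot queue
# that the sender thread answers on; the caller blocks until it does.
inbox = Queue.new
sender = Thread.new do
  loop do
    msg = inbox.pop
    break if msg == :stop
    # a sync control message: acknowledge so the caller can proceed
    msg[:done] << :ok if msg.is_a?(Hash) && msg[:done]
  end
end

rendezvous = Queue.new
inbox << { done: rendezvous }   # enqueue the sync control message
result = rendezvous.pop          # caller blocks until the sender thread replies
inbox << :stop
sender.join
```

Because the sender thread processes messages in order, the acknowledgment guarantees every message enqueued before the rendez-vous has been consumed.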

Thread-safety

By default, instances of Datadog::Statsd are thread-safe and we recommend that a single instance be reused by all application threads (even in applications that employ forking). The sole exception is the #close method — this method is not yet thread safe (work in progress here #209).

When using the single_thread: true mode, instances of Datadog::Statsd are still thread-safe, but you may run into contention on heavily-threaded applications, so we don’t recommend (for performance reasons) reusing these instances.

Delaying serialization

By default, message serialization happens synchronously whenever a stat method such as #increment gets called, blocking the caller. If this blocking impacts your program's performance, you may want to consider the delay_serialization: true mode.

The delay_serialization: true mode delays the serialization of metrics to avoid the wait when submitting metrics. Serialization will still have to happen at some point, but it might be postponed until a more convenient time, such as after an HTTP request has completed.

In single_thread: true mode, you'll probably want to raise sender_queue_size: from its default of 1 to some greater value, so that it can benefit from delay_serialization: true. Messages will then be queued unserialized in the sender queue and processed normally whenever sender_queue_size is reached or #flush is called. You might set sender_queue_size: Float::INFINITY to allow for an unbounded queue that is only processed on an explicit #flush.

In single_thread: false mode, delay_serialization: true will cause serialization to happen inside the sender thread.
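A simplified model of the trade-off (illustrative only): record cheap tuples now, pay the string-building cost in one batch at flush time:

```ruby
# Illustrative model of delayed serialization: enqueue raw tuples now,
# pay the string-building cost in one batch at flush time.
class DelayedQueue
  def initialize
    @pending = []   # unserialized [name, value, type] tuples
  end

  def record(name, value, type)
    @pending << [name, value, type]   # cheap: no string formatting here
  end

  def flush
    out = @pending.map { |name, value, type| "#{name}:#{value}|#{type}" }
    @pending = []
    out
  end
end
```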

Versioning

This Ruby gem is using Semantic Versioning but please note that supported Ruby versions can change in a minor release of this library. As much as possible, we will add a "future deprecation" message in the minor release preceding the one dropping the support.

Ruby Versions

This gem supports and is tested on Ruby minor versions 2.1 through 3.1. Support for Ruby 2.0 was dropped in version 5.4.0.

Credits

dogstatsd-ruby is forked from Rein Henrichs' original Statsd client.

Copyright (c) 2011 Rein Henrichs. See LICENSE.txt for further details.


dogstatsd-ruby's Issues

tag parsing issue

Hey, it looks like there's a slight issue with how this library parses tags. A user reached out to me and was using this library to submit a metric with a tag formatted like this:

:tags=>["input_file_format:v1,v2,v3,v4,v5,v6"]

However, when this metric is submitted, it appears in the UI with the following tags:

input_file_format:v1
v2
v3
v4
v5
v6

This tag should not be treated like a comma-separated list of tags, though, since the tag is contained in one set of quotes. Based on how tags are handled, these commas should be replaced with underscores, correct?
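A short demonstration of why this is ambiguous on the wire: tags are joined with commas in the datagram's tag section, so a comma inside a tag value cannot be distinguished from a tag separator. `tag_section` below is a hypothetical helper mirroring that join:

```ruby
# Hypothetical helper mirroring how tags are joined in the datagram.
def tag_section(tags)
  "##{tags.join(',')}"
end

# One tag containing commas and several separate tags serialize identically,
# so the agent has no way to tell them apart:
tag_section(['input_file_format:v1,v2']) == tag_section(['input_file_format:v1', 'v2'])
# => true
```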

Which tag is used: global tag or submission tag?

I have a question on how Datadog handles duplicate tag keys. DogStatsD allows an instance to be instantiated with Global Tags, and also allows metric submissions to include tags. In Datadog::Statsd::Serialization::TagSerializer#format, we see that DogStatsD concatenates submission tags with global tags

tag_list =  if message_tags && message_tags.any?
              global_tags + to_tags_list(message_tags)
            else
              global_tags
            end

For example, what happens when there is a global tag Foo:1 and the submission tag includes a tag called Foo:2? Is only one of the two used, or are they both used? Based on the implementation it looks like this library sends the following string to Datadog:

Foo:1,Foo:2

We would like the submission tag to overwrite the global tag with the same tag key, but I'm not sure if that's possible based on the code.
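A plain-Ruby restatement of the quoted concatenation (hypothetical `format_tags` helper): nothing deduplicates by key, so both values are sent and Datadog receives both tags:

```ruby
# Hypothetical restatement of the quoted concatenation: both tags are kept.
def format_tags(global_tags, message_tags)
  if message_tags && message_tags.any?
    global_tags + message_tags
  else
    global_tags
  end
end

format_tags(['Foo:1'], ['Foo:2'])
# => ["Foo:1", "Foo:2"] -- no deduplication by tag key
```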

Tags cannot be symbols since 2.0.0

This is a late bug, but when updating from 1.6.0, we noticed that symbols can't be used in tags anymore.

2.0.0 started running gsub on them, resulting in

undefined method `gsub' for :.....:Symbol

and 3.0.0 fails with TypeError: can't dup Symbol

I understand if this could be a new, expected behaviour. (in that case you may want to update the breaking changes list)
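One caller-side workaround, assuming you control the call sites: stringify tags before handing them to the client, so `gsub`/`dup` always receive a `String` (hypothetical helper, not part of the library):

```ruby
# Caller-side workaround (hypothetical helper): stringify tags up front so
# the client's gsub/dup calls always receive a String.
def normalize_tags(tags)
  tags.map(&:to_s)
end

normalize_tags([:'env:prod', 'region:us'])
# => ["env:prod", "region:us"]
```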

Memory leak in 5.0.1

We are using a Datadog::Statsd object in a sidekiq worker. When the worker executes, we basically do this:

statsd = Datadog::Statsd.new("localhost", 8125)
statsd.increment(....) # specific params not included here

When upgrading from 4.8.3 to 5.0.1, we are seeing memory usage on the instance start to climb linearly until it finally exhausts all memory and we get a ThreadError: can't create Thread: Resource temporarily unavailable. We have definitively pinpointed this problem to the 5.0.1 upgrade --- there were no other changes made other than upgrading just the dogstatsd-ruby gem.

You can see the mem usage problem in the graph below:
Screen Shot 2021-04-21 at 5 36 50 PM
(each little dip is a deploy where we changed just one gem version. The last one is where we upgraded dogstatsd-ruby from 4.8.3 to 5.0.1).

When batching metrics, possible to exhaust max UDP datagram size

I had an issue where I was sending metrics to dogstatsd from the ruby library. Each metric had 9 or 10 tags, and the tag values could be reasonably long. (Like a hostname in AWS.) I was batching the metrics, since I was using a lot of them.

I was noticing a lot of my tags were getting cut off at random places. Like if I had a tag where I expected the value to be something like db_type:read_replica, I would see values like:

  • db_type:read_replica
  • db_type:read_replic
  • db_type:read_repli
  • db_type:read_repl
  • etc.

After some digging and looking through the code, I looked up the UDP packet structure, and saw it had a max length of 65,527 bytes.

Here's the ruby logic for batching and flushing requests:

def send_stat(message)
  if @batch_nesting_depth > 0
    @buffer << message
    flush_buffer if @buffer.length >= @max_buffer_size
  else
    send_to_socket(message)
  end
end

def flush_buffer
  return @buffer if @buffer.empty?
  send_to_socket(@buffer.join(NEW_LINE))
  @buffer = Array.new
end

@buffer is an array, so it's just checking that the length of the array is under a certain size (by default, 50 elements). If each element in the array was, on average, more than 1,310 bytes, you'd wind up calling send_to_socket with a blob of data larger than the max size of a UDP packet.

I changed @max_buffer_size from 50 to 20 in my code, and my problems went away.

I recommend adding some logic so that either in #send_stat or #flush_buffer, it's smarter about chunking up the data. If the size of @buffer.join(NEW_LINE) is larger than the packet limit, it can break it up into multiple sends.
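The proposed chunking could be sketched like this (hypothetical `chunk_messages` helper; the 1,432-byte limit is a common conservative UDP payload size, an assumption rather than a value taken from this code):

```ruby
# Hypothetical chunking helper: split the buffer into datagram-sized groups
# instead of joining everything into a single oversized send.
MAX_DATAGRAM = 1_432  # a common conservative UDP payload size (assumption)

def chunk_messages(messages, max_bytes = MAX_DATAGRAM)
  chunks = [[]]
  messages.each do |m|
    candidate = (chunks.last + [m]).join("\n")
    # start a new chunk when this message would overflow the current one
    chunks << [] if candidate.bytesize > max_bytes && !chunks.last.empty?
    chunks.last << m
  end
  chunks.map { |c| c.join("\n") }   # one string per send_to_socket call
end
```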

Broken compatibility

Related: basecamp/trashed#13

I tried to use dogstatsd-ruby with trashed, but it fails because of broken compatibility.

This is what I came up with

module Application
  class Statsd < ::Datadog::Statsd
    alias_method :datadog_count, :count
    def count(stat, count, sample_rate = 1)
      if sample_rate.is_a?(Hash)
        datadog_count(stat, count, sample_rate)
      else
        datadog_count(stat, count, sample_rate: sample_rate)
      end
    end

    alias_method :datadog_gauge, :gauge
    def gauge(stat, value, sample_rate = 1)
      if sample_rate.is_a?(Hash)
        datadog_gauge(stat, value, sample_rate)
      else
        datadog_gauge(stat, value, sample_rate: sample_rate)
      end
    end

    alias_method :datadog_timing, :timing
    def timing(stat, ms, sample_rate = 1)
      if sample_rate.is_a?(Hash)
        datadog_timing(stat, ms, sample_rate)
      else
        datadog_timing(stat, ms, sample_rate: sample_rate)
      end
    end

    alias_method :datadog_time, :time
    def time(stat, sample_rate = 1)
      if sample_rate.is_a?(Hash)
        datadog_time(stat, sample_rate)
      else
        datadog_time(stat, sample_rate: sample_rate)
      end
    end
  end
end

batch in v4.9.0 was removed

We recently upgraded to v4.9.0 from v4.8.3.

We heavily utilized the batch functionality in 4.8.3 https://github.com/DataDog/dogstatsd-ruby/blob/v4.8.3/lib/datadog/statsd.rb#L311-L315.

v4.9.0 does not have the batch method defined. We receive the following when trying to send a metric

irb(main):018:0> statsd.batch { |b| b.increment('some_metric') } 
Traceback (most recent call last):
        1: from (irb):18
NoMethodError (undefined method `batch' for #<Datadog::Statsd:0x000055ab24412718>)

We have reverted back to using v4.8.3.

Can the batch method be added back to the client?

edit:
Here is the diff https://github.com/DataDog/dogstatsd-ruby/compare/v4.8.3...v4.9.0#diff-6ac7ed21ecf1fe04389730ce724b7502f0b1b17ccef2c1ccdd41bb8cd77ae6b1L311

Options in initializer removed in 4.9.0

A handful of options were removed from Statsd#initialize in the 4.9.0 release, see the diff here.
Initializing the statsd client with one of these options supplied results in an ArgumentError.
I don't see any mention of it in the Changelog and it feels like this could be considered a breaking change?

Set name of background threads?

Howdy 👋

Recently I added names to dd-trace-rb background threads (DataDog/dd-trace-rb#1366).

Would be really cool to do the same for dogstatsd, as it helps debugging, and we have plans to show and use thread names in the profiler as well.

No metrics after adopting 5.0.1

Hi all,

after adopting 5.0.1 (from 4.8.3) in two of our applications we stopped seeing metrics coming through. Both apps do not use #batch. We just create a single global instance of Datadog::Statsd that we keep using throughout the app.

Anything else to look out for when adopting 5.x?

Is this library thread safe?

I'm planning on using this in a highly threaded environment (Sidekiq). Can someone confirm its thread safety?

I have 2 options:

  1. Always create a new instance of Datadog::Statsd.new. Is there a performance impact on this?

  2. Share the Datadog::Statsd.new in a static variable across all instances.

Suggestions?

Improve automatic configuration

Ideally, in most cases, users of this library would not pass host/port or a socket to the constructor, allowing configuration by environment variables and, in their absence, selection of an appropriate default.

DataDog/dd-trace-rb#1700 did something similar for dd-trace-rb, selecting a socket if the expected file exists.

NoMethodError in tag parsing if passed non-string

In a change of behavior from 1.6.0, Datadog::Statsd will blow up if an object that does not respond to gsub is passed in the tags array.

For instance:
Given $statsd.timing("test", 0, tags: [MyClass])

in 1.9.0, this works fine with MyClass coming through as "MyClass" in the tags

in 2.0.0 and greater, this gives the following error:

	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/dogstatsd-ruby-2.1.0/lib/datadog/statsd.rb:349:in `remove_pipes'
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/dogstatsd-ruby-2.1.0/lib/datadog/statsd.rb:345:in `escape_tag_content'
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/dogstatsd-ruby-2.1.0/lib/datadog/statsd.rb:366:in `block in send_stats'
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/dogstatsd-ruby-2.1.0/lib/datadog/statsd.rb:366:in `map'
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/dogstatsd-ruby-2.1.0/lib/datadog/statsd.rb:366:in `send_stats'
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/dogstatsd-ruby-2.1.0/lib/datadog/statsd.rb:188:in `timing'
	from (irb):13
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/railties-4.2.7.1/lib/rails/commands/console.rb:110:in `start'
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/railties-4.2.7.1/lib/rails/commands/console.rb:9:in `start'
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/railties-4.2.7.1/lib/rails/commands/commands_tasks.rb:68:in `console'
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/railties-4.2.7.1/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
	from /home/deploy/.bundler/orders/ruby/2.3.0/gems/railties-4.2.7.1/lib/rails/commands.rb:17:in `<top (required)>'
	from bin/rails:4:in `require'
	from bin/rails:4:in `<main>'

I realize that with the bump to 2.0.0 there is the possibility of breaking changes, but if this is the new desired behavior, documentation in the change log warning of this would be appreciated.

Error for Frozen Tag Array

When calling any Datadog::Statsd metric interface, if a frozen array is passed to the tags: parameter, an error is raised.

This is the code responsible in Datadog::Statsd:

def send_stats(stat, delta, type, opts={})
  ... 
  tag_arr = opts[:tags].to_a
  tag_arr.map! { |tag| t = tag.dup; escape_tag_content!(t); t }
  ...
end

If opts[:tags] is a frozen array, then tag_arr is also a frozen array, causing map! to error.
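The failure mode and a non-mutating fix can be shown in isolation, with simplified stand-ins for the library's escaping (`tr` replaces the real escaping logic):

```ruby
# Simplified stand-ins for the library's tag escaping (tr replaces the real
# escaping logic). map! mutates in place and fails on a frozen array; a
# plain map over dup'd elements does not touch the caller's array.
def escape_tags_mutating(tags)
  arr = tags.to_a          # Array#to_a returns self, so arr may be frozen
  arr.map! { |t| t.dup.tr('|', '_') }
end

def escape_tags_safe(tags)
  tags.to_a.map { |t| t.dup.tr('|', '_') }   # builds a fresh array instead
end
```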

Modify #time to support a "count" option.

I have to time some bulk operations and it's not viable to push to statsd every iteration of the loop. However, what would be nice is something like the following:

$statsd.time('accounts.activate', { count: accounts.length } ) { process_accounts(accounts) }

Internally this would change time to something similar to the following:

def time(stat, opts={})
  count = [opts.fetch(:count, 1), 1].max
  start = Time.now
  return yield
ensure
  # report the elapsed time averaged over the items processed
  timing(stat, ((Time.now - start) * 1000 / count).round, opts)
end

I can open a PR if this sounds like a good idea :)

It’d be helpful if the docs described error handling

There’s no mention of what happens if an attempt to emit a metric fails, for example if, I dunno, maybe if the UDP socket is closed.

If an exception might be raised, it would be helpful if it was a specific exception subclass, something like Datadog::TransmissionError, so it could be rescued and handled specifically.

warning: instance variable @global_tags_formatted not initialized

I get lots of these warnings in my test suite (note two different line numbers):

/Users/mikeperham/.gem/ruby/2.7.2/gems/dogstatsd-ruby-5.0.0/lib/datadog/statsd/serialization/tag_serializer.rb:24: warning: instance variable @global_tags_formatted not initialized
/Users/mikeperham/.gem/ruby/2.7.2/gems/dogstatsd-ruby-5.0.0/lib/datadog/statsd/serialization/tag_serializer.rb:21: warning: instance variable @global_tags_formatted not initialized

Documentation for date_happened is incorrect

The documentation for the statsd.event() method lists :date_happened as a possible option that can be passed in, as type Integer or nil, however passing in as an Integer results in a NoMethodError exception occurring:

"undefined method `delete' for 1535753113:Fixnum" location="/fluentd/vendor/bundle/ruby/2.3.0/gems/dogstatsd-ruby-3.3.0/lib/datadog/statsd.rb:399:in `remove_pipes'

Passing it in as a string posts the event successfully. Either the documentation needs to be updated, or the code calling remove_pipes should handle the case where an Integer is passed in.

Dogstatsd latest seems to be opening too many connections

We were on 4.4.0 and upgraded to latest 4.7.0 and have been seeing issues where dogstatsd opens too many connections.

Here are some data from our application pod -

# ss -s
Total: 92071
TCP:   331 (estab 9, closed 320, orphaned 0, timewait 25)
Transport Total     IP        IPv6
RAW	  0         0         0        
UDP	  28233     28232     1        
TCP	  11        10        1        
INET	  28244     28242     2        
FRAG	  0         0         0    

Most of the UDP ones are from dogstatsd

Recv-Q Send-Q       Local Address:Port      Peer Address:Port
0      0                127.0.0.1:32768        127.0.0.1:8125
0      0                127.0.0.1:32769        127.0.0.1:8125
0      0                127.0.0.1:32770        127.0.0.1:8125
0      0                127.0.0.1:32771        127.0.0.1:8125
0      0                127.0.0.1:32772        127.0.0.1:8125
0      0                127.0.0.1:32773        127.0.0.1:8125
0      0                127.0.0.1:32774        127.0.0.1:8125
0      0                127.0.0.1:32775        127.0.0.1:8125

...

Recv-Q Send-Q       Local Address:Port      Peer Address:Port
0  0   127.0.0.1:60988     127.0.0.1:8125
0  0   127.0.0.1:60989     127.0.0.1:8125
0  0   127.0.0.1:60990     127.0.0.1:8125
0  0   127.0.0.1:60991     127.0.0.1:8125
0  0   127.0.0.1:60992     127.0.0.1:8125
0  0   127.0.0.1:60993     127.0.0.1:8125
0  0   127.0.0.1:60994     127.0.0.1:8125
0  0   127.0.0.1:60995     127.0.0.1:8125
0  0   127.0.0.1:60996     127.0.0.1:8125
0  0   127.0.0.1:60997     127.0.0.1:8125
0  0   127.0.0.1:60998     127.0.0.1:8125
0  0   127.0.0.1:60999     127.0.0.1:8125


This then causes new connections to fails.

I believe there's probably something else in our infra that triggers this, but something in the current lib makes it worse and sends it into this loop of initiating connections.

On the same pod. I tried to write a message to the sock

Traceback (most recent call last):
        6: from /usr/local/bin/irb:23:in `<main>'
        5: from /usr/local/bin/irb:23:in `load'
        4: from /usr/local/lib/ruby/gems/2.6.0/gems/irb-1.0.0/exe/irb:11:in `<top (required)>'
        3: from (irb):5
        2: from /usr/local/lib/ruby/2.6.0/socket.rb:321:in `sendmsg_nonblock'
        1: from /usr/local/lib/ruby/2.6.0/socket.rb:321:in `__sendmsg_nonblock'
IO::EAGAINWaitWritable (Resource temporarily unavailable - sendmsg(2) would block)

We downgraded back to 4.4.0 and haven't seen this happen again for the last 5 days.

Sorry I know this might not be enough to help you find the issue & cause, but just wanted to bring this to your notice. Feel free to close the issue if the info is not enough.

Next version date

Hey,
Do you have a date for releasing a new version?
I saw you've added 'Batch' as a class, which will be very helpful to me.
Thanks for the great work! :)

Consider a PR for `.current` similar to Redis

We have a number of tags that we instantiate Statsd with. With Redis.current we do something similar:

# config/initializers/redis.rb
config = {
  # ...
}
Redis.current = Redis.new(config)

https://github.com/redis/redis-rb/blob/b9ac235fa03de173d1d382878757850afae94bc3/lib/redis.rb#L19-L25

Would DataDog team consider a PR that does same for Statsd?

# config/initializers/statsd.rb
tags = {
  # env, service name, revision, etc
}
Statsd.current = Statsd.new(url, port, tags: tags)

ddtrace incompatibility?

Seeing this warning after trying to upgrade ...

WARN -- ddtrace: [ddtrace] This version of `ddtrace` is incompatible with `dogstastd-ruby` version >= 5.0 and can cause unbounded memory usage. Please use `dogstastd-ruby` version < 5.0 instead.

Set a custom hostname for gauge and increment_counter?

I see in the docs that event accepts a :hostname argument. Can gauge and increment_counter also accept a hostname?

I'm working with EC2, and I'm frequently recycling hosts. It would make the most sense for my use case to use the EC2 node ID.

"missing" metrics?

Hi Datadog,

I'm investigating a strange issue when some metrics that were sent to datadog using dogstatsd-ruby seem to be "missing" or lost ... I'm wondering if there's some kind of in-memory buffering or something else happening in dogstatsd-ruby or in the dogstatsd agent on the host?

This seems to happen when we restart sidekiq. However, checking the logs carefully and adding a log after every time we push data to dogstatsd, the jobs are never lost, perhaps just delayed very slightly ...

I did a test where data is pushed every ~30 seconds. The metric explorer shows the data correctly every ~30s and also our server logs. Then when we restart sidekiq, the server logs still show that data is sent out, but there's a "gap" in the metric explorer data ... It looks like one data point was lost somewhere.

Rails log after data is sent

2021-10-25 08:18:29.111632 INFO name=Module message=statsd gauge sent payload={"measurement":"failing_stripe_webhooks","value":0,"tags":[]} tags= named_tags={retry: true, queue: default, class: StatsdReporter, args: [], jid: 0df25039bb11f321ddbc1e7f, created_at: 1635149904.5933833, user: , remote_ip: 127.0.0.1, request_id: ff3d4ef979f5aac8b6126069b4ebca7c, enqueued_at: 1635149904.5935144} duration= process_info=3708942:processor exception=
2021-10-25 08:18:59.138478 INFO name=Module message=statsd gauge sent payload={"measurement":"failing_stripe_webhooks","value":0,"tags":[]} tags= named_tags={retry: true, queue: default, class: StatsdReporter, args: [], jid: ad37beb945ca9c71edfa1e1a, created_at: 1635149934.6400843, user: , remote_ip: 127.0.0.1, request_id: ebc4ce55671b952a6696a292628c9b0c, enqueued_at: 1635149934.640196} duration= process_info=3708942:processor exception=

2021-10-25 08:19:29.466738 INFO name=Module message=statsd gauge sent payload={"measurement":"failing_stripe_webhooks","value":0,"tags":[]} tags= named_tags={retry: true, queue: default, class: StatsdReporter, args: [], jid: c4fb20aaaee2833074cfbf85, created_at: 1635149964.6918392, user: , remote_ip: 127.0.0.1, request_id: 4934b476519348d83c7347595824304a, enqueued_at: 1635149964.6921527} duration= process_info=3723438:processor exception=
2021-10-25 08:19:59.201184 INFO name=Module message=statsd gauge sent payload={"measurement":"failing_stripe_webhooks","value":0,"tags":[]} tags= named_tags={retry: true, queue: default, class: StatsdReporter, args: [], jid: 40f4da65f420017d52da92af, created_at: 1635149994.7849233, user: , remote_ip: 127.0.0.1, request_id: 259b5d1a958ec3174f5327f307807b82, enqueued_at: 1635149994.785065} duration= process_info=3723438:processor exception=

Metrics on datadog at the same time

CleanShot 2021-10-25 at 10 24 20

Not sure how to log the datadog dogstatsd process on the host itself to see if data "reaches" the agent, and if it gets forwarded to Datadog though ...

Shutting down rails server is taking a long time since 5.1

We have a pretty classic Rails application (with Sidekiq Pro), and since we upgraded to 5.1, then 5.2, we have noticed a 30-second delay between sending CTRL-C to stop Rails and the moment it actually stops.

[6544] - Worker 1 (PID: 6578) booted in 2.44s, phase: 0
^C
[6544] - Gracefully shutting down workers...
30 seconds later
[6544] === puma shutdown: 2021-07-20 10:26:50 -0700 ===
[6544] - Goodbye!
Exiting

Reverting to 5.0.1 fixes the problem and our server shuts down immediately. Let me know if I can provide any other useful information.
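A plausible cause, given the v5 threading model described above, is the client's background sender thread keeping the process alive; explicitly flushing and closing the client on shutdown (e.g. from a Puma `on_worker_shutdown` hook or an `at_exit` block) is one thing to try. The sketch below uses a stand-in double whose method names mirror the v5 API, not the real `Datadog::Statsd`:

```ruby
# Stand-in double for Datadog::Statsd, used only to illustrate the
# shutdown pattern; flush(sync:) and close mirror the v5 method names.
class StatsdDouble
  attr_reader :closed

  def initialize
    @closed = false
  end

  def flush(sync: false); end

  def close
    @closed = true
  end
end

STATSD = StatsdDouble.new

# Flush buffered metrics and stop the sender thread before the process
# exits, so shutdown does not wait on the background thread.
def shutdown_statsd(client)
  client.flush(sync: true)
  client.close
end

shutdown_statsd(STATSD)
```

Whether this removes the 30-second delay depends on what is actually holding the process open, which the report does not pin down.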

Send events from dogstatsd

Rather than requiring dogapi-rb to send events, it'd be much more convenient to send events directly via statsd.
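For context, the DogStatsD wire protocol does define an event datagram (`_e{<title length>,<text length>}:<title>|<text>`, plus optional fields such as tags). The formatter below is an illustrative sketch of that format, assumed from the public protocol documentation; it is not the gem's implementation:

```ruby
# Illustrative sketch of the DogStatsD event datagram format, i.e. what a
# hypothetical statsd.event call would put on the wire.
def format_event(title, text, tags: [])
  datagram = "_e{#{title.bytesize},#{text.bytesize}}:#{title}|#{text}"
  datagram << "|##{tags.join(',')}" unless tags.empty?
  datagram
end

format_event('deploy', 'v42 shipped', tags: ['env:prod'])
# => "_e{6,11}:deploy|v42 shipped|#env:prod"
```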

Rails app using dogstatsd-ruby 5.3.0 fails to start when dogstatsd host can't be resolved

In dogstatsd-ruby 5.2.0 and earlier, in our production and pre-production environments, starting our application with the DogStatsD hostname configured as a placeholder (e.g. instance) might show a warning, but would still allow the application to start.
In our case, DD_AGENT_HOST is set to the placeholder instance when manually starting the app from a terminal session.

As of dogstatsd-ruby 5.3.0 this is no longer the case. Running a rake task or starting a rails console (e.g. bundle exec rails console) will result in the following error being raised:

SocketError: getaddrinfo: Name or service not known
/usr/local/bundle/gems/dogstatsd-ruby-5.3.0/lib/datadog/statsd/udp_connection.rb:37:in `connect'
/usr/local/bundle/gems/dogstatsd-ruby-5.3.0/lib/datadog/statsd/udp_connection.rb:37:in `connect'
/usr/local/bundle/gems/dogstatsd-ruby-5.3.0/lib/datadog/statsd/udp_connection.rb:23:in `initialize'
/usr/local/bundle/gems/dogstatsd-ruby-5.3.0/lib/datadog/statsd/forwarder.rb:36:in `new'
/usr/local/bundle/gems/dogstatsd-ruby-5.3.0/lib/datadog/statsd/forwarder.rb:36:in `initialize'
/usr/local/bundle/gems/dogstatsd-ruby-5.3.0/lib/datadog/statsd.rb:120:in `new'
/usr/local/bundle/gems/dogstatsd-ruby-5.3.0/lib/datadog/statsd.rb:120:in `initialize'
<snip>
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/initializable.rb:32:in `instance_exec'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/initializable.rb:32:in `run'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/initializable.rb:61:in `block in run_initializers'
/usr/local/lib/ruby/2.7.0/tsort.rb:228:in `block in tsort_each'
/usr/local/lib/ruby/2.7.0/tsort.rb:350:in `block (2 levels) in each_strongly_connected_component'
/usr/local/lib/ruby/2.7.0/tsort.rb:431:in `each_strongly_connected_component_from'
/usr/local/lib/ruby/2.7.0/tsort.rb:349:in `block in each_strongly_connected_component'
/usr/local/lib/ruby/2.7.0/tsort.rb:347:in `each'
/usr/local/lib/ruby/2.7.0/tsort.rb:347:in `call'
/usr/local/lib/ruby/2.7.0/tsort.rb:347:in `each_strongly_connected_component'
/usr/local/lib/ruby/2.7.0/tsort.rb:226:in `tsort_each'
/usr/local/lib/ruby/2.7.0/tsort.rb:205:in `tsort_each'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/initializable.rb:60:in `run_initializers'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/application.rb:391:in `initialize!'
/srv/config/environment.rb:2:in `<top (required)>'
/usr/local/bundle/gems/zeitwerk-2.4.2/lib/zeitwerk/kernel.rb:34:in `require'
/usr/local/bundle/gems/zeitwerk-2.4.2/lib/zeitwerk/kernel.rb:34:in `require'
/usr/local/bundle/gems/activesupport-6.1.4.1/lib/active_support/dependencies.rb:332:in `block in require'
/usr/local/bundle/gems/activesupport-6.1.4.1/lib/active_support/dependencies.rb:299:in `load_dependency'
/usr/local/bundle/gems/activesupport-6.1.4.1/lib/active_support/dependencies.rb:332:in `require'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/application.rb:367:in `require_environment!'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/application.rb:533:in `block in run_tasks_blocks'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:281:in `block in execute'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:281:in `each'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:281:in `execute'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:219:in `block in invoke_with_call_chain'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:199:in `synchronize'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:199:in `invoke_with_call_chain'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:243:in `block in invoke_prerequisites'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:241:in `each'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:241:in `invoke_prerequisites'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:218:in `block in invoke_with_call_chain'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:199:in `synchronize'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:199:in `invoke_with_call_chain'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/task.rb:188:in `invoke'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/application.rb:160:in `invoke_task'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/application.rb:116:in `block (2 levels) in top_level'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/application.rb:116:in `each'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/application.rb:116:in `block in top_level'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/application.rb:125:in `run_with_threads'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/application.rb:110:in `top_level'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/commands/rake/rake_command.rb:24:in `block (2 levels) in perform'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/application.rb:186:in `standard_exception_handling'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/commands/rake/rake_command.rb:24:in `block in perform'
/usr/local/bundle/gems/rake-13.0.6/lib/rake/rake_module.rb:59:in `with_application'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/commands/rake/rake_command.rb:18:in `perform'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/command.rb:50:in `invoke'
/usr/local/bundle/gems/railties-6.1.4.1/lib/rails/commands.rb:18:in `<top (required)>'
bin/rails:4:in `require'
bin/rails:4:in `<main>'

We wouldn't consider a failure to connect to DogStatsD fatal enough to stop the app from starting, but perhaps this is very intentional and works as designed?
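Until the intended behavior is clarified, one defensive pattern is to wrap client construction so an unresolvable host disables metrics instead of aborting boot. This is a hedged sketch; `builder` is a hypothetical factory standing in for something like `->(host) { Datadog::Statsd.new(host, 8125) }`:

```ruby
require 'socket' # defines SocketError

# Build the client, but degrade to nil (metrics disabled) if the host
# cannot be resolved, instead of letting SocketError abort app boot.
def statsd_or_nil(host, builder:)
  builder.call(host)
rescue SocketError => e
  warn "dogstatsd disabled (#{host}): #{e.message}"
  nil
end

# Simulate the failure from the report with a builder that raises.
failing_builder = ->(_host) { raise SocketError, 'getaddrinfo: Name or service not known' }
client = statsd_or_nil('instance', builder: failing_builder)
client.nil? # metrics silently disabled rather than crashing boot
```

Call sites then need a nil check (or a null-object client) before emitting metrics.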

Introduce a test mode

When sending events, it would be nice to have a flag that enables a test mode of sorts, where instead of sending events via UDP to the DogStatsD server, they go to an in-memory array. This would allow us to test that we're wired up to send data points correctly. Thoughts?
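In the absence of a built-in test mode, a DIY capturing client is one way to get the in-memory behavior described above. This is a sketch, not a gem feature; the method list is an assumption about which calls an app exercises:

```ruby
# A capturing stand-in client: records calls in an in-memory array
# instead of sending UDP, so tests can assert on what was emitted.
class CapturingStatsd
  attr_reader :calls

  def initialize
    @calls = []
  end

  # Record the common metric/event methods with their arguments.
  %i[increment decrement count gauge histogram timing event].each do |name|
    define_method(name) do |*args, **opts|
      @calls << [name, args, opts]
    end
  end
end

statsd = CapturingStatsd.new
statsd.gauge('failing_stripe_webhooks', 0, tags: [])
statsd.calls # => [[:gauge, ["failing_stripe_webhooks", 0], {tags: []}]]
```

Swapping this in for the real client in test configuration avoids both the network calls and the `instance_variable_set` brittleness mentioned in other issues.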

Make this a source repo instead of a fork

GitHub isn't good about showing forks in search results, so when searching for the dogstatsd code, this repository never shows up. GitHub support can detach the fork so that the typical workaround of deleting and recreating the repository wouldn't be needed.

Restore `batch` method

This restores compatibility for downstream users. Why break apps when backwards compatibility is this easy?

def batch
  yield self
  flush(sync: true)
end

Mark it as deprecated and remove it in v6.0 if you want.
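To show how the proposed shim would be used, here is a sketch with a stand-in client exposing `flush(sync:)` as in v5 (the real client is not required to follow along):

```ruby
# Stand-in client demonstrating the proposed backwards-compatible shim.
class BatchStatsd
  attr_reader :flushed

  def initialize
    @flushed = false
  end

  def increment(_name); end

  def flush(sync: false)
    @flushed = true if sync
  end

  # The restored batch method: yield self, then flush synchronously,
  # matching the v4.x contract closely enough for downstream callers.
  def batch
    yield self
    flush(sync: true)
  end
end

statsd = BatchStatsd.new
statsd.batch do |s|
  s.increment('page.views')
  s.increment('page.views')
end
statsd.flushed # => true
```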

`dogstatsd-ruby v5.x` companion thread is not duplicated if the main process is forking

With dogstatsd-ruby >= 5.0, if an app calls fork after creating a DogStatsD client instance, the forked child process gets no companion (sender) thread: the parent's thread is not duplicated into the child. This means the forked process can't use that instance to flush metrics.

A working solution is to create a new client instance in every process after forking. However, we should investigate how to detect or avoid this situation, or better yet, fix it if it's possible to automatically create a companion thread per process.

This issue is detailed with an available work-around in this comment: #179 (comment)
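One shape the per-process work-around can take is memoizing the client keyed by PID, so a forked child lazily builds its own instance (with its own sender thread) on first use. This is a sketch, not the work-around from the linked comment; `builder` is a hypothetical factory standing in for `-> { Datadog::Statsd.new(...) }`:

```ruby
# Memoize one client per process: after a fork, Process.pid changes, so
# the child builds a fresh instance instead of reusing the parent's.
module StatsdRegistry
  @clients = {}

  def self.client(builder:)
    @clients[Process.pid] ||= builder.call
  end
end

builder = -> { Object.new } # stand-in for the real client factory
a = StatsdRegistry.client(builder: builder)
b = StatsdRegistry.client(builder: builder)
a.equal?(b) # same instance within one process
```

Note this does not close the stale instance inherited from the parent; a complete solution would also dispose of it in the child.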

Uninitialized constant error (ruby-kafka) w/ v4.7.0

Hi, following the release of v4.7.0, the error below occurs when using ruby-kafka v0.7.1:
"uninitialized constant Datadog::Statsd::Connection::DEFAULT_PORT\nDid you mean? Datadog::Statsd::DEFAULT_BUFFER_SIZE"

ruby-kafka/lib/kafka/datadog.rb, line 80:

def default_host
  ::Datadog::Statsd.const_defined?(:Connection) ? ::Datadog::Statsd::Connection::DEFAULT_HOST : ::Datadog::Statsd::DEFAULT_HOST
end

Following is the listener error backtrace

[2020-02-17T20:05:44:328+0000Z] ERROR -- Phobos : {:message=>"Listener crashed, waiting 16.14s (uninitialized constant Datadog::Statsd::Connection::DEFAULT_PORT\nDid you mean? Datadog::Statsd::DEFAULT_BUFFER_SIZE)", :listener_id=>"51366e", :retry_count=>4, :waiting_time=>16.14, :exception_class=>"NameError", :exception_message=>"uninitialized constant Datadog::Statsd::Connection::DEFAULT_PORT\nDid you mean? Datadog::Statsd::DEFAULT_BUFFER_SIZE", :backtrace=>["/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/datadog.rb:85:in default_port'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/datadog.rb:52:in port'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/datadog.rb:34:in statsd'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/datadog.rb:106:in emit'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/datadog.rb:99:in block (2 levels) in <class:StatsdSubscriber>'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/datadog.rb:124:in request'", "/usr/local/bundle/gems/activesupport-6.0.2.1/lib/active_support/subscriber.rb:145:in finish'", "/usr/local/bundle/gems/activesupport-6.0.2.1/lib/active_support/notifications/fanout.rb:160:in finish'", "/usr/local/bundle/gems/activesupport-6.0.2.1/lib/active_support/notifications/fanout.rb:62:in block in finish'", "/usr/local/bundle/gems/activesupport-6.0.2.1/lib/active_support/notifications/fanout.rb:62:in each'", "/usr/local/bundle/gems/activesupport-6.0.2.1/lib/active_support/notifications/fanout.rb:62:in finish'", "/usr/local/bundle/gems/activesupport-6.0.2.1/lib/active_support/notifications/instrumenter.rb:45:in finish_with_state'", "/usr/local/bundle/gems/activesupport-6.0.2.1/lib/active_support/notifications/instrumenter.rb:30:in instrument'", "/usr/local/bundle/gems/activesupport-6.0.2.1/lib/active_support/notifications.rb:180:in instrument'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/instrumenter.rb:21:in instrument'", 
"/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/connection.rb:97:in send_request'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/broker.rb:200:in send_request'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/broker.rb:44:in fetch_metadata'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/cluster.rb:375:in block in fetch_cluster_info'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/cluster.rb:370:in each'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/cluster.rb:370:in fetch_cluster_info'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/cluster.rb:350:in cluster_info'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/cluster.rb:98:in refresh_metadata!'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/cluster.rb:52:in add_target_topics'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/consumer_group.rb:27:in subscribe'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/consumer.rb:575:in subscribe_to_topic'", "/usr/local/bundle/gems/ruby-kafka-0.7.10/lib/kafka/consumer.rb:105:in subscribe'", "/usr/local/bundle/gems/phobos-1.8.2/lib/phobos/listener.rb:96:in block in start_listener'", "/usr/local/bundle/gems/phobos-1.8.2/lib/phobos/instrumentation.rb:21:in block in instrument'", "/usr/local/bundle/gems/activesupport-6.0.2.1/lib/active_support/notifications.rb:182:in instrument'", "/usr/local/bundle/gems/phobos-1.8.2/lib/phobos/instrumentation.rb:20:in instrument'", "/usr/local/bundle/gems/phobos-1.8.2/lib/phobos/listener.rb:94:in start_listener'", "/usr/local/bundle/gems/phobos-1.8.2/lib/phobos/listener.rb:46:in start'", "/usr/local/bundle/gems/phobos-1.8.2/lib/phobos/executor.rb:72:in run_listener'", "/usr/local/bundle/gems/phobos-1.8.2/lib/phobos/executor.rb:31:in block (2 levels) in start'", "/usr/local/bundle/gems/concurrent-ruby-1.1.6/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:353:in run_task'", 
"/usr/local/bundle/gems/concurrent-ruby-1.1.6/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:342:in block (3 levels) in create_worker'", "/usr/local/bundle/gems/concurrent-ruby-1.1.6/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:325:in loop'", "/usr/local/bundle/gems/concurrent-ruby-1.1.6/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:325:in block (2 levels) in create_worker'", "/usr/local/bundle/gems/concurrent-ruby-1.1.6/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:324:in catch'", "/usr/local/bundle/gems/concurrent-ruby-1.1.6/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:324:in block in create_worker'", "/usr/local/bundle/gems/logging-2.2.2/lib/logging/diagnostic_context.rb:474:in block in create_with_logging_context'"]}

Thank you

Clarify thread safety behavior of Dogstatsd 5.x

I'd like to get some clarity (ideally published in the README) around thread safety for Dogstatsd 5.x and going forward.

This is important because:

  1. There are implementations in a number of significant libraries that seem to assume it's thread safe, and I don't believe the underlying socket actually is (e.g. racecar and ruby-kafka).
  2. The asynchronous flush seems to have substantially changed the socket behavior: clients need to be explicitly closed, or there will be a file handle leak.

Re: 1, I'd like to know if I should open up issues on those libraries to update to a thread safe usage.

Re: 2, in our own application we had historically been storing one DogStatsD client per thread (in thread-local storage) to ensure thread safety, and this was not an issue. As of 5.x, because we're not calling close explicitly, we have a file handle leak. If the client is thread safe, we can reduce all of this to a single DogStatsD client; but looking at the implementation, I suspect it isn't, and a connection pool will be required. We'd like clarity on the best path forward.
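Until the thread-safety guarantees are documented, one conservative interim option is a thin wrapper that serializes every call through a Mutex. This is a sketch only, and whether the real client needs it is exactly the open question; the demonstration uses a counting stand-in client:

```ruby
# Serialize all metric calls through a Mutex so a single shared client
# is never entered concurrently. Wrap each method you actually use.
class SynchronizedStatsd
  def initialize(client)
    @client = client
    @lock = Mutex.new
  end

  def increment(*args, **opts)
    @lock.synchronize { @client.increment(*args, **opts) }
  end
  # ...wrap gauge, histogram, etc. the same way
end

# Counting stand-in client used only for the demonstration.
class CountingClient
  attr_reader :count

  def initialize
    @count = 0
  end

  def increment(*_args)
    @count += 1
  end
end

counter = CountingClient.new
wrapped = SynchronizedStatsd.new(counter)
threads = 4.times.map { Thread.new { 250.times { wrapped.increment('x') } } }
threads.each(&:join)
counter.count # => 1000
```

The trade-off versus a connection pool is contention: a single Mutex is simple but makes every thread queue on one client.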

Development/test environment in Rails

We use this gem at my company, and we recently started seeing "not enough file descriptors"-like errors in our Rails app. We found that this gem makes a new connection to localhost:8125 for every single call to StatsD. On macOS, the default file descriptor limit is 256.

Anyways, my question is this: What's best practice for using this gem in development/test environment?

I see two possibilities:

  1. Don't use this gem in development/test environment. Perhaps use a different "mocked" client library, but then keeping the API consistent is a burden.
  2. In development/test, mock out calls so that it doesn't actually do anything. Other StatsD client libraries (eg https://github.com/Shopify/statsd-instrument#configuration) are RAILS_ENV aware and basically only log the call in development/test. No network calls made.

I think 2 is ideal, but this library supports neither that behavior nor an easy way of accomplishing it in a DIY manner. The closest I found were tips in issue #28, but looking at the way your tests do it, instance_variable_set seems very brittle to me.

Are there plans to make this client library more development-environment friendly? I can find some time to make this library RAILS_ENV-aware as well if you guys would like.
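A lightweight version of option 2 is a null-object client selected per environment, which keeps the call-site API identical and opens no sockets. This is a sketch, not a gem feature; the initializer line is a hypothetical example:

```ruby
# Null client for development/test: accepts any metric call, makes no
# network calls, returns nil.
class NullStatsd
  def method_missing(_name, *_args, **_opts)
    nil # swallow the call; optionally log it here in development
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

# Hypothetical initializer wiring:
#   STATSD = Rails.env.production? ? Datadog::Statsd.new(...) : NullStatsd.new
statsd = NullStatsd.new
statsd.increment('page.views') # no UDP socket opened, no file descriptor used
```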

[Feature request] Allow passing tags as a ruby hash

From the README, the suggested way to tag a metric is to pass an array of strings.

# Tag a metric.
statsd.histogram('query.time', 10, :tags => ["version:1"])

How hard would it be to add the ability to pass a hash to :tags in addition to using an Array?

# Tag a metric.
statsd.histogram('query.time', 10, :tags => {version: 1})

Is this something you'd be willing to add, or if you don't want to add it, would you be willing to consider adding it if I created a PR?
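The requested conversion is small either way; this sketch (an illustration, not the gem's API) normalizes either form to the array-of-strings format the client already accepts, so it could sit in front of any tag-taking call:

```ruby
# Normalize tags to the ["key:value", ...] array form DogStatsD expects,
# accepting a Hash, an Array, or a single value.
def normalize_tags(tags)
  case tags
  when Hash then tags.map { |key, value| "#{key}:#{value}" }
  when Array then tags
  else Array(tags)
  end
end

normalize_tags(version: 1)    # => ["version:1"]
normalize_tags(['version:1']) # => ["version:1"]
```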
