
graphite-remote-adapter's Introduction

Graphite Remote Storage Adapter

This is a read/write adapter that receives samples via Prometheus's remote write protocol and stores them in a remote storage such as Graphite.

It is based on remote_storage_adapter.

Compiling the binary

You can either go get it:

$ go get -d github.com/criteo/graphite-remote-adapter/...
$ cd $GOPATH/src/github.com/criteo/graphite-remote-adapter
$ make build
$ ./graphite-remote-adapter --graphite.read.url='http://localhost:8080' --graphite.write.carbon-address=localhost:2003

Or checkout the source code and build manually:

$ mkdir -p $GOPATH/src/github.com/criteo
$ cd $GOPATH/src/github.com/criteo
$ git clone https://github.com/criteo/graphite-remote-adapter.git
$ cd graphite-remote-adapter
$ make build
$ ./graphite-remote-adapter --graphite.read.url='http://localhost:8080' --graphite.write.carbon-address=localhost:2003

Running

Graphite example:

./graphite-remote-adapter \
  --graphite.write.carbon-address=localhost:2001 \
  --graphite.read.url='http://guest:guest@localhost:8080' \
  --read.timeout=10s --write.timeout=5s \
  --read.delay 3600s \
  --graphite.default-prefix prometheus.

To show all flags:

./graphite-remote-adapter -h

Example

You can provide some configuration parameters either as flags or in a configuration file. If a parameter is defined in both, the flag takes precedence. In addition, the configuration file can contain Graphite-specific parameters, letting you define customized paths and behaviors for remote writes into Graphite.

This is an example configuration that should cover most relevant aspects of the YAML configuration format.

web:
  listen_address: "0.0.0.0:9201"
  telemetry_path: "/metrics"
write:
  timeout: 5m
read:
  timeout: 5m
  delay: 1h
  ignore_error: true
graphite:
  default_prefix: test.prefix.
  enable_tags: false
  read:
    url: http://localhost:8888
  write:
    carbon_address: localhost:2003
    carbon_transport: tcp
    carbon_reconnect_interval: 5m
    enable_paths_cache: true
    paths_cache_ttl: 1h
    paths_cache_purge_interval: 2h
    template_data:
      var1:
        foo: bar
      var2: foobar

    rules:
    - match:
        owner: team-X
      match_re:
        service: ^(foo1|foo2|baz)$
      template: '{{.var1.foo}}.graphite.path.host.{{.labels.owner}}.{{.labels.service}}{{if ne .labels.env "prod"}}.{{.labels.env}}{{end}}'
      continue: true
    - match:
        owner: team-X
        env:   prod
      template: 'bla.bla.{{.labels.owner | escape}}.great.{{.var2}}'
      continue: true
    - match:
        owner: team-Z
      continue: false

Support for Tags

Graphite 1.1.0 and later support tags (http://graphite.readthedocs.io/en/latest/tags.html). You can enable tag support in the remote adapter with --graphite.enable-tags or in the configuration file.

Filtering tags

Using --graphite.filtered-tags (or the filtered_tags YAML field in the configuration file), it is possible to export only a given set of label names as tags. Other labels and their values are not exported as tags and remain part of the metric name. This feature is only supported for Graphite tags (it is not available when using the OpenMetrics format).
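A minimal sketch of what such a configuration might look like (the exact placement of the filtered_tags field is an assumption; check the flag help for the authoritative form):

graphite:
  enable_tags: true
  # Only these labels become Graphite tags; all others stay in the metric name.
  filtered_tags:
    - instance
    - job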

Configuring Prometheus

To configure Prometheus to send samples to this binary, add the following to your prometheus.yml:

# Remote write configuration.
remote_write:
  - url: "http://localhost:9201/write"

# Remote read configuration.
remote_read:
  - url: "http://localhost:9201/read"

Since version 0.0.15, a custom prefix can be set in the query string to replace the default one. This can be useful if you are using the graphite-remote-adapter for multiple Prometheus instances with different prefixes.

# Remote write configuration.
remote_write:
  - url: "http://localhost:9201/write?graphite.default-prefix=customprefix."

# Remote read configuration.
remote_read:
  - url: "http://localhost:9201/read?graphite.default-prefix=customprefix."

Testing

You can test the graphite-remote-adapter's behavior or its configuration using the second binary, named ratool (for remote-adapter tool). Here are two examples:

Integration test (manual end-to-end)

The remote-adapter tool will read an input file in the Prometheus exposition text format, translate it into a WriteRequest using the compressed protobuf format, and send it to the graphite-remote-adapter's /write endpoint. There is no need to run a Prometheus instance to test it anymore:

file -> ratool -> graphite-remote-adapter -> nc

$ make build
$ cat cmd/ratool/input.metrics.example
  # Use the Prometheus exposition text format
  toto{foo="bar", cluster="test"} 42
  toto{foo="bar", cluster="canary"} 34
  # You can even force a given timestamp
  toto{foo="bazz", cluster="canary"} 18 1528819131000
$ ./graphite-remote-adapter --graphite.write.carbon-address ':8888' --log.level debug &
$ nc -l 0.0.0.0 8888 -w 1 > out.txt
$ ./ratool mock-write --metrics.file cmd/ratool/input.metrics.example --remote-adapter.url 'http://localhost:9201'
$ cat out.txt
  toto.cluster.test.foo.bar 42.000000 1570803131
  toto.cluster.canary.foo.bar 34.000000 1570803131
  toto.cluster.canary.foo.bazz 18.000000 1528819131

Unittests (automated config unittests)

If you want to unit test your configurations without requiring any network, define a file for each configuration you want to test.

Example:

config_file: config.yml
tests:
  - name: "Test label"
    input: |
        # Use the Prometheus exposition text format
        toto{foo="bar", cluster="test"} 42 1570802650000
        toto{foo="bar", cluster="canary"} 34 1570802650000
        toto{foo="bazz", cluster="canary"} 18 1528819131000
    output: |
        toto.my.templated.path.test.foo.bar.lulu 42.000000 1570802650
        toto.canary.other.template.bar 34.000000 1570802650
        toto.canary.other.template.bazz 18.000000 1528819131

  - name: "Other test"
    input: |
        foo{bar="baz"} 10
    output: |
        foo.bar.baz.lol 10 1528819131000

The path to config_file is relative to the test file.

Note: timestamps do not have the same unit for input and output. Input uses a regular unix timestamp in milliseconds, output is in seconds.

To run it:

$ make build
$ ./ratool unittest --test.file test_file.yml

The tool will exit with a non-zero code if the output of the remote adapter for the given configuration and the given input does not match the expected output (order of the lines is not checked).

It also prints the diff on the standard error stream.

Example of output:

./ratool unittest --config.file foo.yml --test.file bar.yml
# Testing foo.yml
## Test label
-toto.my.templated.path.test.foo.bar.lulu 42.000000 1570802650
-toto.canary.other.template.bar 34.000000 1570802650
-toto.canary.other.template.bazz 18.000000 1528819131
+toto.cluster.test.foo.bar 42.000000 1536658898
+
+toto.cluster.canary.foo.bar 34.000000 1536658898
+
+toto.cluster.canary.foo.bazz 18.000000 1528819131
## Other test
-foo.bar.baz.lol 10 1528819131000
+foo.bar.baz 10.000000 1536658898

graphite-remote-adapter's People

Contributors

adericbourg, dependabot[bot], iksaif, informatiq, jasei, jfwm2, mibc, mycroft, thib17, wdauchy


graphite-remote-adapter's Issues

Queue full

Hello, I have a problem with the queue in Prometheus + graphite-remote-adapter:
level=warn ts=2019-12-10T08:31:54.018127762Z caller=queue_manager.go:230 component=remote queue="0:http://***/write?graphite.default-prefix=kube_poly_ " msg="Remote storage queue full, discarding sample. Multiple subsequent messages of this kind may be suppressed."

The Prometheus and adapter configs are the defaults.
Only 10% of the metrics from 70 machines reach Graphite.

Can't set --read.delay to zero

--read.delay helps us never return recent points, using a fixed delay.
In Prometheus 2, a better implementation has been merged via the read_recent parameter in the Prometheus config.
The default value is 1h.

What happens: when trying to set --read.delay=0s, it still remains 1h.

DoD:

  • make it possible to set it to 0s, or even remove it, since it's no longer necessary with Prometheus 2.

Configuration question

Hi guys,

I am having trouble configuring the remote storage adapter. Or perhaps I am just missing something obvious.

To give an example I picked a random metric: alertmanager_http_response_size_bytes_bucket
If I execute this query in Prometheus I get 96 results:

(screenshot omitted)

However in Graphite there are only two results being stored:

(screenshot omitted)

What may be the reason for this? I do not see any errors in the remote-storage-adapter logs, nor does Prometheus report any issues with remote writes.

Appreciate your help!

message too long

Hi,
I'm using the Prometheus operator and I'm trying to send metrics to Graphite (Hosted Graphite).
I'm getting this error when trying to send data through the adapter:

level=warn ts=2018-05-15T13:03:59.344288008Z caller=main.go:446 num_samples=100 storage=graphite err="write udp {reporterIP}:42909->{carbonIP}:2003: write: message too long" msg="Error sending samples to remote storage"

this is my config
--graphite.write.carbon-address=${GRAPHITE_ADDRESS}
--write.timeout=60s
--graphite.default-prefix=${GRAPHITE_PREFIX}
--graphite.write.carbon-transport=udp

Any idea how to resolve this?

Too many new connections created under high load

Version used: v0.0.9, but the issue still exists in v0.0.11
Command line arguments: -carbon-transport=tcp -write-timeout=5s -read-timeout=5s

Hello,
I started seeing a lot of timeout errors as load increased on the graphite-remote-adapter:

err="dial tcp: lookup graphite-relay.xxx on xx.xx.xx.xx:53: dial udp xx.xx.xx.xx:53: i/o timeout" num_samples=100 source="main.go:346"
err="dial tcp xx.xx.xx.xx:3341: i/o timeout" num_samples=100 source="main.go:346"
err="dial tcp xx.xx.xx.xx:3341: context deadline exceeded"

After running perf trace -p pid_of_adapter, I noticed a large number of connect syscalls.
Running perf trace -e 'connect' -p pid_of_adapter 2>&1 | grep '= 0' reveals a huge number of new connections being created (I got over 500 in 10 seconds of tracing).

Running perf record -g -F 999 -p pid and then perf report -g callee --symbol-filter=connect shows that github.com/criteo/graphite-remote-adapter/graphite.(*Client).Write is asking for those new connections.

After checking the code (https://github.com/criteo/graphite-remote-adapter/blob/v0.0.9/graphite/client.go#L113), it indeed seems that each call to the write function opens a new connection (DNS resolution + TCP handshake) toward Graphite, creating congestion under high load.

Increasing the -write-timeout argument from 5s to 30s resolves most of the error messages, but this is only a partial solution, as we still create a lot of new connections and DNS requests.

What is expected:

graphite-remote-adapter should keep its existing TCP connection open and reuse it, avoiding the pressure put on the system by creating so many new connections.

Error parsing metric issue due to invalid segment

Running in a Kubernetes environment, I am facing the following issue. It's probably caused by weird characters in metric names:

22/05/2021 11:01:29 :: [console] Error parsing metric prometheus.prometheus_rule_group_last_evaluation_samples.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fcontainers%2Erules;ContainersGroup: Cannot parse path prometheus.prometheus_rule_group_last_evaluation_samples.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fcontainers%2Erules;ContainersGroup, invalid segment ContainersGroup
22/05/2021 11:01:29 :: [console] Error parsing metric prometheus.prometheus_rule_group_last_evaluation_samples.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fprometheus%2Erules;PrometheusGroup: Cannot parse path prometheus.prometheus_rule_group_last_evaluation_samples.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fprometheus%2Erules;PrometheusGroup, invalid segment PrometheusGroup
22/05/2021 11:01:29 :: [console] Error parsing metric prometheus.prometheus_rule_group_last_evaluation_timestamp_seconds.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fkubernetes%2Erules;KubernetesGroup: Cannot parse path prometheus.prometheus_rule_group_last_evaluation_timestamp_seconds.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fkubernetes%2Erules;KubernetesGroup, invalid segment KubernetesGroup
22/05/2021 11:01:29 :: [console] Error parsing metric prometheus.prometheus_rule_group_last_evaluation_timestamp_seconds.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fnode%2Erules;NodeGroup: Cannot parse path prometheus.prometheus_rule_group_last_evaluation_timestamp_seconds.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fnode%2Erules;NodeGroup, invalid segment NodeGroup
22/05/2021 11:01:29 :: [console] Error parsing metric prometheus.prometheus_rule_group_rules.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fcontainers%2Erules;ContainersGroup: Cannot parse path prometheus.prometheus_rule_group_rules.app.prometheus.cluster_name.domecek.component.core.instance.10%2E244%2E0%2E86:9090.job.kubernetes-pods.kubernetes_namespace.monitoring.kubernetes_pod_name.prometheus-5b6dc9b498-rkspq.pod_template_hash.5b6dc9b498.rule_group.%2Fetc%2Fprometheus-rules%2Fcontainers%2Erules;ContainersGroup, invalid segment ContainersGroup

Unable to compile

Compiling with 'export GO111MODULE="on"'

>> formatting code
go: finding github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4
go: finding github.com/gogo/protobuf v1.3.0
......
go: github.com/coreos/[email protected]: go.mod has post-v0 module path "github.com/coreos/go-systemd/v22" at revision d657f9650837
go: finding github.com/influxdata/influxdb1-client latest
go: finding github.com/op/go-logging latest
go get: error loading module requirements
make: *** [Makefile:74: promu] Error 1

I'm not sure what it fails to find. "github.com/op/go-logging" is live.
go version go1.11.6 linux/amd64

commit 7b02e4f3543cf3ff43d4249f30ab5e3daeccba17 (HEAD -> master, tag: v0.3.1, origin/master, origin/HEAD)
Author: Martin Conraux <[email protected]>
Date:   Wed Oct 23 13:22:10 2019 +0200

    Cancel write if they are already on a cancelled http request

UPD: I was able to compile a previous version of the code on the same system:
"git checkout -b test 515cb3c"
It seems the modules are pulling in something that doesn't compile.

DockerHub

It would be handy to have this on Docker Hub, for people (like me) without a local Go dev environment.

Adapter cannot send value to Graphite

Hi @emmanuelguerin @wdauchy @brugidou @mycroft
I am using graphite-remote-adapter v0.4.1 with graphiteapp/graphite-statsd:1.1.7-6 to send Prometheus metrics via remote_write, but I'm getting an error while the adapter tries to send the metrics to Graphite.

Adapter logs

ts=2023-03-04T14:50:13.823Z caller=client.go:96 level=debug storage=Graphite msg="Cannot send value to Graphite, skipping sample" value=NaN sample="kafka_consumer_fetch_manager_records_per_request_avg{app_kubernetes_io_environment=\"uat\", app_kubernetes_io_service=\"howler\", build_number=\"264\", client_id=\"consumer-fabric-fabric-howler-consumers-uat-2\", eks_amazonaws_com_fargate_profile=\"apps-profile\", instance=\"176.24.1.133:30301\", job=\"kubernetes-pods\", kafka_version=\"3.1.1\", namespace=\"uat\", pod=\"howler-9cb6c848d-k6zw9\", pod_template_hash=\"9cb6c848d\", spring_id=\"fabric-alm-nimble-uat.consumer.consumer-fabric-fabric-howler-consumers-uat-2\", topic=\"fabric-alm-nimble-uat\"} => NaN @[1677941355.944]"
ts=2023-03-04T14:50:13.824Z caller=client.go:96 level=debug storage=Graphite msg="Cannot send value to Graphite, skipping sample" value=NaN sample="kafka_consumer_coordinator_rebalance_latency_avg{app_kubernetes_io_environment=\"uat\", app_kubernetes_io_service=\"howler\", build_number=\"264\", client_id=\"consumer-fabric-fabric-howler-consumers-uat-6\", eks_amazonaws_com_fargate_profile=\"apps-profile\", instance=\"176.24.1.133:30301\", job=\"kubernetes-pods\", kafka_version=\"3.1.1\", namespace=\"uat\", pod=\"howler-9cb6c848d-k6zw9\", pod_template_hash=\"9cb6c848d\", spring_id=\"fabric-fabric-attachments-uat.consumer.consumer-fabric-fabric-howler-consumers-uat-6\"} => NaN @[1677941355.944]"
ts=2023-03-04T14:50:13.831Z caller=client.go:96 level=debug storage=Graphite msg="Cannot send value to Graphite, skipping sample" value=NaN sample="kafka_consumer_node_request_latency_avg{app_kubernetes_io_environment=\"uat\", app_kubernetes_io_service=\"fabric-lm-service\", build_number=\"254\", client_id=\"consumer-anonymous.949d99bb-fc19-45e4-a0ac-e111b84bd9c3-2\", eks_amazonaws_com_fargate_profile=\"apps-profile\", instance=\"176.24.1.54:8081\", job=\"kubernetes-pods\", kafka_version=\"3.1.1\", namespace=\"uat\", node_id=\"node--1\", pod=\"fabric-lm-service-7b7b9cc76-7q226\", pod_template_hash=\"7b7b9cc76\", spring_id=\"publish-in-0.consumer.consumer-anonymous.949d99bb-fc19-45e4-a0ac-e111b84bd9c3-2\"} => NaN @[1677941357.431]"
ts=2023-03-04T14:50:13.833Z caller=client.go:96 level=debug storage=Graphite msg="Cannot send value to Graphite, skipping sample" value=NaN sample="kafka_consumer_coordinator_sync_time_avg{app_kubernetes_io_environment=\"uat\", app_kubernetes_io_service=\"jenkins\", build_number=\"217\", client_id=\"consumer-fabric-fabric-jenkins-consumers-uat-2\", eks_amazonaws_com_fargate_profile=\"apps-profile\", instance=\"176.24.5.74:30303\", job=\"kubernetes-pods\", kafka_version=\"3.1.1\", namespace=\"uat\", pod=\"jenkins-f66797798-rxtvj\", pod_template_hash=\"f66797798\", spring_id=\"ci-jenkins-uat.consumer.consumer-fabric-fabric-jenkins-consumers-uat-2\"} => NaN @[1677941350.884]"

I am new to Go. Please help!

graphite timestamp float implementation: is this correct/required?

The current implementation only supports float timestamps with default precision for the data sent to Graphite. AFAIK not all Graphite implementations support float timestamps, nor is such high precision usually required; our workaround is to use %.0f in https://github.com/criteo/graphite-remote-adapter/blob/master/client/graphite/write.go#L40

Would it be OK to make the data-point resolution configurable via a parameter? I.e. we could have:

func (c *Client) prepareDataPoint(path string, s *model.Sample, subSecondEnabled bool) string {
...
}

Thank you

Problem with building

I'm trying to use this adapter, but I can't build it:

go get: warning: modules disabled by GO111MODULE=auto in GOPATH/src;
        ignoring go.mod;
        see 'go help modules'
>> building binaries
 >   graphite-remote-adapter
# github.com/criteo/graphite-remote-adapter/client/graphite
client/graphite/read.go:188:19: cannot assign []*prompb.Label to ts.Labels (type []prompb.Label) in multiple assignment
client/graphite/read.go:190:19: cannot assign []*prompb.Label to ts.Labels (type []prompb.Label) in multiple assignment
!! command failed: build -o /root/go/src/github.com/criteo/graphite-remote-adapter/graphite-remote-adapter -ldflags -X github.com/criteo/graphite-remote-adapter/vendor/github.com/prometheus/common/version.Version=0.2.0 -X github.com/criteo/graphite-remote-adapter/vendor/github.com/prometheus/common/version.Revision=7b02e4f3543cf3ff43d4249f30ab5e3daeccba17 -X github.com/criteo/graphite-remote-adapter/vendor/github.com/prometheus/common/version.Branch=master -X github.com/criteo/graphite-remote-adapter/vendor/github.com/prometheus/common/version.BuildUser=root@go-builder -X github.com/criteo/graphite-remote-adapter/vendor/github.com/prometheus/common/version.BuildDate=20191122-15:42:28  -extldflags '-static' -a -tags netgo github.com/criteo/graphite-remote-adapter/cmd/graphite-remote-adapter: exit status 2
Makefile:54: recipe for target 'build' failed
make: *** [build] Error 1

# go version
go version go1.12.9 linux/amd64
