ofesseler / gluster_exporter
Gluster Exporter for Prometheus
License: Apache License 2.0
We should be exporting gluster_node_inode_count and gluster_node_inode_free metrics, just as we do with the size metrics. See:
# gluster volume status all detail
Status of volume: test-volume
------------------------------------------------------------------------------
Brick : Brick glusterfs1:/data/brick1/gv0
TCP Port : 49152
RDMA Port : 0
Online : Y
Pid : 1942
File System : xfs
Device : /dev/sdb1
Mount Options : rw,seclabel,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 499.6GB
Total Disk Space : 499.8GB
Inode Count : 262143424
Free Inodes : 262143110
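A minimal sketch of how these could be emitted, assuming the same label set as the existing gluster_node_size_* metrics; the descriptor names and helper below are illustrative, not the exporter's actual code:

package exporter

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical descriptors, modeled on gluster_node_size_free_bytes.
var (
	nodeInodeCount = prometheus.NewDesc(
		"gluster_node_inode_count",
		"Total inodes reported for each brick.",
		[]string{"hostname", "path", "volume"}, nil,
	)
	nodeInodeFree = prometheus.NewDesc(
		"gluster_node_inode_free",
		"Free inodes reported for each brick.",
		[]string{"hostname", "path", "volume"}, nil,
	)
)

// emitInodeMetrics (hypothetical) would be called from Collect() for every
// <node> element parsed out of `gluster volume status all detail --xml`.
func emitInodeMetrics(ch chan<- prometheus.Metric, volume, hostname, path string, total, free uint64) {
	ch <- prometheus.MustNewConstMetric(nodeInodeCount, prometheus.GaugeValue, float64(total), hostname, path, volume)
	ch <- prometheus.MustNewConstMetric(nodeInodeFree, prometheus.GaugeValue, float64(free), hostname, path, volume)
}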
@ofesseler any thoughts?
Using release v0.2.5 (can't build from source, see my other bug report) and glusterfs 9.2.
Readme.md says the exporter acquires node information via gluster volume status all detail --xml. I can run that command myself and see per-node values such as sizeTotal, sizeFree, inodesTotal, and inodesFree. But when I launch the exporter and request /metrics, I do not see any mention of gluster_node_size_bytes_total, gluster_node_size_free_bytes, or gluster_node_inodes_free.
I launched with -log.level debug and there are no error messages; in fact there are no messages at all after it reports that it started and "GlusterFS Metrics Exporter v0.2.5", not even that it received a request.
Output of gluster volume status all detail --xml
<cliOutput>
<opRet>0</opRet>
<opErrno>0</opErrno>
<opErrstr/>
<volStatus>
<volumes>
<volume>
<volName>gameserver</volName>
<nodeCount>2</nodeCount>
<node>
<hostname>[redacted-1]</hostname>
<path>/opt/data/gluster/gameserver</path>
<peerid>fb8dee35-0ea1-4d79-b733-e6edfb9f6b71</peerid>
<status>1</status>
<port>49154</port>
<ports>
<tcp>49154</tcp>
<rdma>N/A</rdma>
</ports>
<pid>2468110</pid>
<sizeTotal>3541511364608</sizeTotal>
<sizeFree>2442654326784</sizeFree>
<device>/dev/mapper/vol2-gluster</device>
<blockSize>4096</blockSize>
<mntOptions>rw,noexec,noatime,nodiratime</mntOptions>
<fsName>ext4</fsName>
<inodeSize>ext4</inodeSize>
<inodesTotal>219611136</inodesTotal>
<inodesFree>216624397</inodesFree>
</node>
<node>
<hostname>[redacted-2]</hostname>
<path>/opt/data/gluster/gameserver</path>
<peerid>f938331e-e057-4cf7-a63a-b2121c3f87ba</peerid>
<status>0</status>
<port>N/A</port>
<ports>
<tcp>N/A</tcp>
<rdma>N/A</rdma>
</ports>
<pid>-1</pid>
<sizeTotal>3540553007104</sizeTotal>
<sizeFree>2437145952256</sizeFree>
<device>/dev/mapper/vol2-gluster</device>
<blockSize>4096</blockSize>
<mntOptions>rw,noexec,noatime,nodiratime</mntOptions>
<fsName>ext4</fsName>
<inodeSize>ext4</inodeSize>
<inodesTotal>219611136</inodesTotal>
<inodesFree>216624417</inodesFree>
</node>
</volume>
<volume>
<volName>iris</volName>
<nodeCount>2</nodeCount>
<node>
<hostname>[redacted-1]</hostname>
<path>/opt/data/gluster/iris</path>
<peerid>fb8dee35-0ea1-4d79-b733-e6edfb9f6b71</peerid>
<status>1</status>
<port>49155</port>
<ports>
<tcp>49155</tcp>
<rdma>N/A</rdma>
</ports>
<pid>2468118</pid>
<sizeTotal>3541511364608</sizeTotal>
<sizeFree>2442654326784</sizeFree>
<device>/dev/mapper/vol2-gluster</device>
<blockSize>4096</blockSize>
<mntOptions>rw,noexec,noatime,nodiratime</mntOptions>
<fsName>ext4</fsName>
<inodeSize>ext4</inodeSize>
<inodesTotal>219611136</inodesTotal>
<inodesFree>216624397</inodesFree>
</node>
<node>
<hostname>[redacted-2]</hostname>
<path>/opt/data/gluster/iris</path>
<peerid>f938331e-e057-4cf7-a63a-b2121c3f87ba</peerid>
<status>0</status>
<port>N/A</port>
<ports>
<tcp>N/A</tcp>
<rdma>N/A</rdma>
</ports>
<pid>-1</pid>
<sizeTotal>3540553007104</sizeTotal>
<sizeFree>2437145952256</sizeFree>
<device>/dev/mapper/vol2-gluster</device>
<blockSize>4096</blockSize>
<mntOptions>rw,noexec,noatime,nodiratime</mntOptions>
<fsName>ext4</fsName>
<inodeSize>ext4</inodeSize>
<inodesTotal>219611136</inodesTotal>
<inodesFree>216624417</inodesFree>
</node>
</volume>
</volumes>
</volStatus>
</cliOutput>
Output of curl -s http://localhost:9189/metrics |grep gluster_
# HELP gluster_brick_count Number of bricks at last query.
# TYPE gluster_brick_count gauge
gluster_brick_count{volume="gameserver"} 2
gluster_brick_count{volume="iris"} 2
# HELP gluster_exporter_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which gluster_exporter was built.
# TYPE gluster_exporter_build_info gauge
gluster_exporter_build_info{branch="master",goversion="go1.7.4",revision="0e16052c6a0399880d6c74ba5b94fe1ade283615",version="0.2.5"} 1
# HELP gluster_peers_connected Is peer connected to gluster cluster.
# TYPE gluster_peers_connected gauge
gluster_peers_connected 1
# HELP gluster_up Was the last query of Gluster successful.
# TYPE gluster_up gauge
gluster_up 1
# HELP gluster_volume_status Status code of requested volume.
# TYPE gluster_volume_status gauge
gluster_volume_status{volume="gameserver"} 1
gluster_volume_status{volume="iris"} 1
# HELP gluster_volumes_count How many volumes were up at the last query.
# TYPE gluster_volumes_count gauge
gluster_volumes_count 2
Thanks, your code is running in our production environment.
I'm interested in continuing this project, but I am a new gopher. How can I contact you?
PS: I changed your project's code structure to be like mysql_exporter.
I'm getting the following error when I try to build:
[root@k8s-01 gluster_exporter]# make build
ensure vendoring
grouped write of manifest, lock and vendor: error while writing out vendor tree: failed to write dep tree: failed to export golang.org/x/crypto: unable to deduce repository and source type for "golang.org/x/crypto": unable to read metadata: unable to fetch raw metadata: failed HTTP request to URL "http://golang.org/x/crypto?go-get=1": Get "http://golang.org/x/crypto?go-get=1": proxyconnect tcp: dial tcp: lookup xn--http-u96a on 114.114.114.114:53: no such host
Please help
Hi, to get the heal status of volumes it is necessary to specify heal_info_files_count in the Prometheus metrics, which internally executes gluster v heal VOLNAME info.
This runs fine when there are few files to be healed, but when there are many, the command takes far too long to return values.
Is it possible to change the internal command to gluster volume heal VOLNAME statistics heal-count?
Or could a new metric (for example heal_info_files_statistics) be added to expose this value?
Regards.
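A rough sketch of what that swap could look like, assuming the plain-text output of gluster volume heal VOLNAME statistics heal-count keeps its usual per-brick "Number of entries: N" lines; the command wiring and parsing here are assumptions, not the exporter's current code:

package exporter

import (
	"os/exec"
	"regexp"
	"strconv"
)

// entriesRe matches the per-brick "Number of entries: N" lines that
// `gluster volume heal <vol> statistics heal-count` prints (assumed format).
var entriesRe = regexp.MustCompile(`Number of entries: (\d+)`)

// healCount (hypothetical) sums pending heal entries across all bricks
// without walking every unhealed file the way `heal <vol> info` does.
func healCount(volume string) (int, error) {
	out, err := exec.Command("gluster", "volume", "heal", volume, "statistics", "heal-count").Output()
	if err != nil {
		return 0, err
	}
	total := 0
	for _, m := range entriesRe.FindAllStringSubmatch(string(out), -1) {
		n, err := strconv.Atoi(m[1])
		if err != nil {
			return 0, err
		}
		total += n
	}
	return total, nil
}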
Hi,
The exporter does not get any profiling data when used with a cluster of two or more Gluster volumes. I have tested on two different clusters: on one it only gets profile data for one volume and ignores the other, and on the other cluster it just ignores the "-profile" option and displays nothing.
Gluster receives the profiling-info command for all volumes, but the exporter simply does not display the data.
Hello!
Please add monitoring of quota usage.
Example:
gluster volume quota <VOLNAME> list
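One possible shape for this, sketched under the assumption that the per-path output of gluster volume quota <VOLNAME> list would be parsed into one sample per path; metric names and labels are illustrative only:

package exporter

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical quota descriptors; one sample per quota'd path would be
// emitted after parsing `gluster volume quota <VOLNAME> list`.
var (
	quotaHardLimitBytes = prometheus.NewDesc(
		"gluster_volume_quota_hard_limit_bytes",
		"Quota hard limit configured for a path on a volume.",
		[]string{"volume", "path"}, nil,
	)
	quotaUsedBytes = prometheus.NewDesc(
		"gluster_volume_quota_used_bytes",
		"Bytes used against the quota for a path on a volume.",
		[]string{"volume", "path"}, nil,
	)
)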
I am getting the following error when trying to use gluster_exporter.
I saw #49 (comment), so I set "privileged: true", but the following problems remain:
An error has occurred during metrics gathering:
collected metric gluster_up gauge:<value:1 > was collected before with the same name and label values
gluster_exporter log:
time="2022-05-11T11:54:49Z" level=error msg="tried to execute [volume info] and got error: exit status 1" source="gluster_client.go:23"
time="2022-05-11T11:54:49Z" level=error msg="couldn't parse xml volume info: exit status 1" source="main.go:211"
time="2022-05-11T11:54:49Z" level=error msg="tried to execute [peer status] and got error: exit status 1" source="gluster_client.go:23"
time="2022-05-11T11:54:49Z" level=error msg="couldn't parse xml of peer status: exit status 1" source="main.go:248"
time="2022-05-11T11:54:49Z" level=error msg="tried to execute [volume status all detail] and got error: exit status 1" source="gluster_client.go:23"
time="2022-05-11T11:54:49Z" level=error msg="couldn't parse xml of peer status: exit status 1" source="main.go:305"
time="2022-05-11T11:54:49Z" level=warning msg="no Volumes were given." source="main.go:322"
time="2022-05-11T11:54:49Z" level=error msg="tried to execute [volume list] and got error: exit status 1" source="gluster_client.go:23"
k8s yaml:
Hi,
Maybe I'm doing this wrong, but the metrics:
$ curl -v http://127.0.0.1:9189/metrics
* About to connect() to 127.0.0.1 port 9189 (#0)
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 9189 (#0)
> GET /metrics HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:9189
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Thu, 04 Oct 2018 12:21:09 GMT
< Content-Length: 152
<
An error has occurred during metrics gathering:
collected metric gluster_up gauge:<value:1 > was collected before with the same name and label values
* Connection #0 to host 127.0.0.1 left intact
We have to fix these issues reported by the promtool checker.
# curl -s glusterfs1:9189/metrics | promtool check-metrics
gluster_brick_data_read: counter metrics should have "_total" suffix
gluster_brick_data_written: counter metrics should have "_total" suffix
gluster_brick_duration: counter metrics should have "_total" suffix
gluster_brick_fop_hits: counter metrics should have "_total" suffix
gluster_brick_fop_latency_avg: counter metrics should have "_total" suffix
gluster_brick_fop_latency_max: counter metrics should have "_total" suffix
gluster_brick_fop_latency_min: counter metrics should have "_total" suffix
gluster_heal_info_files_count: counter metrics should have "_total" suffix
gluster_node_size_free_bytes: counter metrics should have "_total" suffix
gluster_node_size_total_bytes: counter metrics should have "_total" suffix
gluster_brick_count: non-histogram and non-summary metrics should not have "_count" suffix
gluster_heal_info_files_count: non-histogram and non-summary metrics should not have "_count" suffix
gluster_volumes_count: non-histogram and non-summary metrics should not have "_count" suffix
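Most of these are renames or type fixes. A hedged sketch of what conforming declarations might look like (names chosen to satisfy promtool, not what the exporter currently ships):

package exporter

import "github.com/prometheus/client_golang/prometheus"

var (
	// Counters gain the "_total" suffix promtool asks for.
	brickDataRead = prometheus.NewDesc(
		"gluster_brick_data_read_total",
		"Total amount of data read by the brick.",
		[]string{"volume", "brick"}, nil, // label set assumed
	)
	// gluster_volumes_count is a point-in-time gauge, so it should drop the
	// reserved "_count" suffix rather than gain "_total".
	volumesAvailable = prometheus.NewDesc(
		"gluster_volumes_available",
		"How many volumes were up at the last query.",
		nil, nil,
	)
)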
Using go version go1.18.1 linux/amd64.
The installation instructions in readme.md say go get github.com/ofesseler/gluster_exporter
What I saw:
go: go.mod file not found in current directory or any parent directory.
'go get' is no longer supported outside a module.
To build and install a command, use 'go install' with a version,
like 'go install example.com/cmd@latest'
For more information, see https://golang.org/doc/go-get-install-deprecation
or run 'go help get' or 'go help install'.
So I tried go install github.com/ofesseler/gluster_exporter@latest
What I saw:
go: finding module for package github.com/prometheus/common/version
go: finding module for package github.com/prometheus/client_golang/prometheus/promhttp
go: finding module for package github.com/prometheus/common/log
go: finding module for package github.com/prometheus/client_golang/prometheus
go: found github.com/prometheus/client_golang/prometheus in github.com/prometheus/client_golang v1.12.1
go: found github.com/prometheus/client_golang/prometheus/promhttp in github.com/prometheus/client_golang v1.12.1
go: found github.com/prometheus/common/version in github.com/prometheus/common v0.34.0
go: finding module for package github.com/prometheus/common/log
[redacted]/go/pkg/mod/github.com/ofesseler/[email protected]/structs/xmlStructs.go:8:2: module github.com/prometheus/common@latest found (v0.34.0), but does not contain package github.com/prometheus/common/log
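The failing import is github.com/prometheus/common/log, a logrus wrapper that has since been removed from prometheus/common, so any module version resolved today (v0.34.0 here) no longer contains it. A likely fix, sketched with logrus as a stand-in (the actual replacement is the maintainers' call):

package main

// logrus used directly in place of the removed github.com/prometheus/common/log wrapper
import log "github.com/sirupsen/logrus"

func init() {
	log.SetLevel(log.DebugLevel) // roughly what -log.level debug configured before
}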
Hi,
I'm getting the following error when I try to build:
vagrant@es1:~/work/src/github.com/gluster_exporter$ make build
>> ensure vendoring
>> vetting code
# github.com/gluster_exporter
./main.go:329:59: node.InodesTotal undefined (type struct { Hostname string "xml:\"hostname\""; Path string "xml:\"path\""; PeerID string "xml:\"peerid\""; Status int "xml:\"status\""; Port int "xml:\"port\""; Ports struct { TCP int "xml:\"tcp\""; RDMA string "xml:\"rdma\"" } "xml:\"ports\""; Pid int "xml:\"pid\""; SizeTotal uint64 "xml:\"sizeTotal\""; SizeFree uint64 "xml:\"sizeFree\""; Device string "xml:\"device\""; BlockSize int "xml:\"blockSize\""; MntOptions string "xml:\"mntOptions\""; FsName string "xml:\"fsName\"" } has no field or method InodesTotal)
./main.go:333:56: node.InodesFree undefined (type struct { Hostname string "xml:\"hostname\""; Path string "xml:\"path\""; PeerID string "xml:\"peerid\""; Status int "xml:\"status\""; Port int "xml:\"port\""; Ports struct { TCP int "xml:\"tcp\""; RDMA string "xml:\"rdma\"" } "xml:\"ports\""; Pid int "xml:\"pid\""; SizeTotal uint64 "xml:\"sizeTotal\""; SizeFree uint64 "xml:\"sizeFree\""; Device string "xml:\"device\""; BlockSize int "xml:\"blockSize\""; MntOptions string "xml:\"mntOptions\""; FsName string "xml:\"fsName\"" } has no field or method InodesFree)
Makefile:33: recipe for target 'vet' failed
make: *** [vet] Error 2
Please help
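Assuming the vet error just means the node struct in structs/xmlStructs.go lacks the inode fields that main.go:329 and main.go:333 reference, the likely fix is to add them with tags matching the `volume status` XML; a sketch (struct name hypothetical, other fields abbreviated):

// Sketch of the node struct with the two missing fields added; the
// remaining fields are exactly those listed in the vet error above.
type volumeStatusNode struct {
	Hostname    string `xml:"hostname"`
	Path        string `xml:"path"`
	SizeTotal   uint64 `xml:"sizeTotal"`
	SizeFree    uint64 `xml:"sizeFree"`
	InodesTotal uint64 `xml:"inodesTotal"` // referenced at main.go:329
	InodesFree  uint64 `xml:"inodesFree"`  // referenced at main.go:333
}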
Built from the latest source; am I forgetting something?
./gluster_exporter --gluster.volumes data --web.listen-address=0.0.0.0:8010 --profile
I am getting the following error when trying to use gluster_exporter:
An error has occurred during metrics gathering:
collected metric gluster_up gauge:<value:1 > was collected before with the same name and label values
tried to execute [volume info] and got error: exit status 1 (source: gluster_client.go:23)
couldn't parse xml volume info: exit status 1 (source: main.go:211)
tried to execute [peer status] and got error: exit status 1 (source: gluster_client.go:23)
couldn't parse xml of peer status: exit status 1 (source: main.go:248)
tried to execute [volume status all detail] and got error: exit status 1 (source: gluster_client.go:23)
couldn't parse xml of peer status: exit status 1 (source: main.go:305)
Please help
Is the project dead?
Hi @ofesseler @mjtrangoni, I am writing from the Gluster team to work out how we can collaborate on Prometheus exporter efforts!
We also started some work in this area a while back at https://github.com/gluster/gluster-prometheus.
Having two efforts producing the same output would not be ideal for users, nor for us as developers, in my opinion! Happy to hear your thoughts on this.
gometalinter error
See the Travis log: https://api.travis-ci.org/v3/job/408065174/log.txt
https://github.com/ofesseler/gluster_exporter/blob/master/main.go#L210
I'm seeing a behaviour where the exporter returns a 500 from the metrics API with this error:
An error has occurred during metrics gathering:
collected metric gluster_up gauge:<value:1 > was collected before with the same name and label values
My guess is that the problem is here:
volumeInfo, err := ExecVolumeInfo()
if err != nil {
log.Errorf("couldn't parse xml volume info: %v", err)
ch <- prometheus.MustNewConstMetric(
up, prometheus.GaugeValue, 0.0,
)
}
// use OpErrno as indicator for up
if volumeInfo.OpErrno != 0 {
ch <- prometheus.MustNewConstMetric(
up, prometheus.GaugeValue, 0.0,
)
} else {
ch <- prometheus.MustNewConstMetric(
up, prometheus.GaugeValue, 1.0,
)
}
ExecVolumeInfo is returning an error, and we are also sending the up status based on what's in volumeInfo.OpErrno. We should be short-circuiting there.
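A minimal sketch of that short-circuit, assuming this code lives in a method that can simply return after reporting the failure, so gluster_up is emitted exactly once per scrape:

volumeInfo, err := ExecVolumeInfo()
if err != nil {
	log.Errorf("couldn't parse xml volume info: %v", err)
	ch <- prometheus.MustNewConstMetric(
		up, prometheus.GaugeValue, 0.0,
	)
	return // short-circuit: the OpErrno check below then runs only on success
}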
@ofesseler @coder-hugo @mjtrangoni
To use gluster_exporter in Docker, what do I need?
What do I need to add in prometheus.yml, or in other config?
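For the Prometheus side, a minimal scrape configuration might look like the following; the exporter listens on port 9189 as seen elsewhere in these reports, and the target hostname is a placeholder:

scrape_configs:
  - job_name: 'gluster'
    static_configs:
      - targets: ['gluster-node1:9189']  # placeholder host running gluster_exporter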
How do I use this exporter in Kubernetes?
"tried to execute [volume heal volume info] and got error: exit status 234" source="gluster_client.go:23"
INFO[0000] GlusterFS Metrics Exporter v0.2.7 source="main.go:531"
WARN[0001] no Volumes were given. source="main.go:322"
ERRO[0002] expected element type <cliOutput> but have <2f1c16b4-8477-493e-bf1b-3d0a1301fe39> source="xmlStructs.go:180"
glusterfs 3.7.11 built on Apr 18 2016 13:20:46
Steps to reproduce:
Step 6/19 : RUN apt-get update && apt-get install -y apt-utils apt-transport-https ca-certificates gnupg2
---> Running in 8cfff2a0bf71
Err:1 http://deb.debian.org/debian stretch InRelease
Temporary failure resolving 'deb.debian.org'
Err:2 http://security.debian.org/debian-security stretch/updates InRelease
Temporary failure resolving 'security.debian.org'
Err:3 http://deb.debian.org/debian stretch-updates InRelease
Temporary failure resolving 'deb.debian.org'
Reading package lists...
W: Failed to fetch http://deb.debian.org/debian/dists/stretch/InRelease Temporary failure resolving 'deb.debian.org'
W: Failed to fetch http://security.debian.org/debian-security/dists/stretch/updates/InRelease Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/stretch-updates/InRelease Temporary failure resolving 'deb.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.
Reading package lists...
Building dependency tree...
Reading state information...
Package apt-utils is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
apt
Package gnupg2 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
gpgv
E: Package 'apt-utils' has no installation candidate
E: Unable to locate package apt-transport-https
E: Unable to locate package ca-certificates
E: Package 'gnupg2' has no installation candidate
The command '/bin/sh -c apt-get update && apt-get install -y apt-utils apt-transport-https ca-certificates gnupg2' returned a non-zero code: 100
Along with heal counts, can we get split-brain counts?
gluster volume heal <vol> info split-brain
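The same "Number of entries" parsing used for heal counts could plausibly cover split-brain as well; a sketch of an extra descriptor fed by that command (name illustrative, not an existing metric):

package exporter

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical metric: files in split-brain, one sample per volume,
// populated by parsing `gluster volume heal <vol> info split-brain`.
var healSplitBrainFiles = prometheus.NewDesc(
	"gluster_heal_info_split_brain_files",
	"Number of files in split-brain at the last query.",
	[]string{"volume"}, nil,
)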
Do you have any grafana dashboards for this exporter? I could not find any on grafana.net
Could you create an automated build in docker hub? https://docs.docker.com/docker-hub/builds/
I'd like to create a Helm chart for this: https://github.com/helm/charts
When I use gluster_exporter v0.2.7, something goes wrong:
WARN[0011] no Volumes were given. source="main.go:322"
I don't know what to do about it. Please help!
Hi,
Can anyone explain why all of my volume names are "devops-registry" in the metrics?
gluster_exporter version: v0.2.6
Glusterfs server version: 3.10.0
# HELP gluster_node_size_free_bytes Free bytes reported for each node on each instance. Labels are to distinguish origins
# TYPE gluster_node_size_free_bytes counter
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-alertmanager/brick",volume="devops-registry"} 1.05489092608e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data0/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data1/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-es-data2/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-grafana/brick",volume="devops-registry"} 1.0409398272e+10
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-influxdb/brick",volume="devops-registry"} 9.925582848e+09
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-prometheus/brick",volume="devops-registry"} 1.0427400192e+11
gluster_node_size_free_bytes{hostname="10.10.0.100",path="/glusterfsvolumes/devops/devops-registry/brick",volume="devops-registry"} 4.8898015232e+10
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-alertmanager/brick",volume="devops-registry"} 1.05489092608e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data0/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data1/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-es-data2/brick",volume="devops-registry"} 5.2823959552e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-grafana/brick",volume="devops-registry"} 1.0409398272e+10
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-influxdb/brick",volume="devops-registry"} 9.925582848e+09
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-prometheus/brick",volume="devops-registry"} 1.04273997824e+11
gluster_node_size_free_bytes{hostname="10.10.0.101",path="/glusterfsvolumes/devops/devops-registry/brick",volume="devops-registry"} 4.8898019328e+10
Here's my glusterfs vol info:
Volume Name: devops-influxdb
Type: Replicate
Volume ID: 2803fc56-cdc6-469e-a57e-7982fc20023c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-influxdb/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-influxdb/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
Volume Name: devops-prometheus
Type: Replicate
Volume ID: 89c44318-e975-408d-9a6c-d15e44fddd0d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-prometheus/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-prometheus/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
Volume Name: devops-registry
Type: Replicate
Volume ID: 2bb07777-248d-46aa-863a-dad64a5207d0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.100:/glusterfsvolumes/devops/devops-registry/brick
Brick2: 10.10.0.101:/glusterfsvolumes/devops/devops-registry/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
Error logs from syslog:
Mar 27 15:25:03 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:03+08:00" level=warning msg="no Volumes were given." source="main.go:286"
Mar 27 15:25:08 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:08+08:00" level=warning msg="no Volumes were given." source="main.go:286"
Mar 27 15:25:32 prdsh01glus01 gluster_exporter[23074]: time="2017-03-27T15:25:32+08:00" level=warning msg="no Volumes were given." source="main.go:286"
Here is a memory leak issue in glusterd reported in 2015:
https://bugzilla.redhat.com/show_bug.cgi?id=1287517
The memory leak is caused by running "gluster volume status all" thousands of times, which is similar to what gluster_exporter does.
This issue has been fixed in glusterfs-3.8.
In my experience with glusterfs-3.6.9, the exporter caused a severe memory leak on the server
(Prometheus scrape period: 180s).
glusterd crashed from time to time due to running out of memory.