marcoslucianops / DeepStream-Yolo
NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
License: MIT License
Hi, could you please tell me where the main program that links the various plugins is located?
Hello.
I need an engine file to run in the DeepStream SDK. How do I create a model.engine file?
I use a Jetson Nano and DeepStream SDK 5.0.1.
How can I use YOLOv5?
First of all, thanks for such a great repo explaining how to run YOLOv3-tiny with DeepStream. I am training a YOLOv4 model, and my training images range from 150x80 (width x height) to 600x150. My config file has width and height set to 416x416. Is that a correct size, or should I change the size in the config file based on my image sizes? Does YOLO keep the aspect ratio constant when resizing images during training?
Thanks once again.
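For context on the resizing question: Darknet's default loader stretches images to the network size, while letterbox-style preprocessing keeps the aspect ratio and pads the rest. A minimal sketch of the letterbox arithmetic (not the repo's code; `letterbox_dims` is a hypothetical helper):

```python
def letterbox_dims(src_w, src_h, dst=416):
    # Scale so the image fits inside dst x dst while keeping the aspect
    # ratio, then center it with padding (letterboxing).
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x, pad_y = (dst - new_w) // 2, (dst - new_h) // 2
    return new_w, new_h, pad_x, pad_y

# A 600x150 training image placed inside a 416x416 network input:
print(letterbox_dims(600, 150))  # (416, 104, 0, 156)
```

So with a 416x416 input, a wide image keeps its shape only if letterboxing is enabled; whether your training run does that depends on the loader's settings.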
Hi,
I am using a Jetson Nano and am able to generate the .so shared library for pre-trained and custom YOLOv4 models, and they work perfectly. The CUDA version on my Jetson Nano is 10.2.
I am using the NVIDIA DeepStream docker with YOLOv4/YOLOv4-tiny models on the Jetson Nano, and it works just fine.
I also use docker on an AWS VM for better performance. The YOLOv3/YOLOv3-tiny models work without issue, but when I tried a YOLOv4 model with your solution, it didn't work. I checked my AWS VM's CUDA version and it is 10.1. I do not have DeepStream installed on the VM, so I couldn't generate the .so shared library for CUDA 10.1.
I believe the problem comes from the different CUDA versions, because the docker runs YOLOv3 there without any issue.
If it is a version issue, how could I generate the .so lib file for CUDA 10.1, or is there any other solution to get around the issue?
Your help would be appreciated!
Hello @marcoslucianops. I have followed your repo and am able to run a yolov4-tiny model in DeepStream. Now I want to run a video with a frame size of 2464x1440 in DeepStream, but I am getting a log saying that DeepStream supports a maximum resolution of 2048x2048. So I want to resize my original input video frame size to less than 2048x2048. How do I do this resizing, and where do I add it? Your help would be appreciated.
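One common way to downscale sources before inference (a sketch, not an official fix; the exact source of the 2048x2048 limit may vary) is to set the nvstreammux resolution in the deepstream-app config, since the muxer scales every source to that size:

```ini
[streammux]
# scale all batched frames below the 2048x2048 limit
width=1920
height=1080
# letterbox instead of stretching, to keep the 2464x1440 aspect ratio
enable-padding=1
```

The values above are examples; any resolution under the limit should work.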
Hi, thanks for sharing!
I ran multiple inferences on a Jetson Xavier (JetPack 4.4), but no results were detected; the terminal prints as follows.
I tested the 2 models used; each of them works well standalone.
Using winsys: x11
Deserialize yoloLayer plugin: yolo_99
Deserialize yoloLayer plugin: yolo_108
Deserialize yoloLayer plugin: yolo_117
0:00:03.522306324 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 2]: deserialized trt engine from :/home/admin123/deepstream/DeepStream-Yolo/native/model_b16_gpu0_fp16_helmet.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT data 3x416x416
1 OUTPUT kFLOAT yolo_99 24x52x52
2 OUTPUT kFLOAT yolo_108 24x26x26
3 OUTPUT kFLOAT yolo_117 24x13x13
0:00:03.522553823 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 2]: Use deserialized engine model: /home/admin123/deepstream/DeepStream-Yolo/native/model_b16_gpu0_fp16_helmet.engine
0:00:03.533651338 30756 0x7f3c002380 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 2]: Load new model:/home/admin123/deepstream/DeepStream-Yolo/examples/multiple_inferences/sgie1/config_infer_secondary1.txt sucessfully
Deserialize yoloLayer plugin: yolo_51
Deserialize yoloLayer plugin: yolo_59
0:00:03.886455896 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/admin123/deepstream/DeepStream-Yolo/native/model_b1_gpu0_fp16_personv3.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT data 3x416x416
1 OUTPUT kFLOAT yolo_51 18x13x13
2 OUTPUT kFLOAT yolo_59 18x26x26
0:00:03.886608479 30756 0x7f3c002380 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /home/admin123/deepstream/DeepStream-Yolo/native/model_b1_gpu0_fp16_personv3.engine
0:00:03.888024542 30756 0x7f3c002380 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/admin123/deepstream/DeepStream-Yolo/examples/multiple_inferences/pgie/config_infer_primary.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
** INFO: <bus_callback:181>: Pipeline ready
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:167>: Pipeline running
WARNING: Num classes mismatch. Configured: 1, detected by network: 0
(the line above repeats 8 times)
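A quick way to sanity-check the class count baked into an engine is the head tensor shapes printed in the log above: a YOLOv3/v4 head has anchors_per_head * (5 + num_classes) channels. A small sketch (assuming the standard 3 anchors per head):

```python
def classes_from_channels(channels, anchors_per_head=3):
    # YOLOv3/v4 head layout per anchor: x, y, w, h, objectness, classes...
    assert channels % anchors_per_head == 0
    return channels // anchors_per_head - 5

# Head shapes from the log: helmet model heads are 24xHxW, person model 18xHxW
print(classes_from_channels(24))  # 3 classes
print(classes_from_channels(18))  # 1 class
```

By this arithmetic the helmet engine appears to encode 3 classes while the config sets 1, so checking num-detected-classes against the trained model would be a reasonable first step.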
Hi,
I noticed your FPS table entry for YOLOv5s on the Jetson Nano is empty. Have you gotten any FPS results lately? I followed your YOLOv5s example and am getting ~13 FPS for YOLOv5s (pretrained weights, 608 resolution) on the Nano 4GB using the default video file in the main config.
Also, under your "NVIDIA GTX 1050 (4GB Mobile)" section, you have 3 tables: TensorRT, Darknet, and PyTorch. What's the difference between the TensorRT table and the Darknet table? Doesn't deepstream-app automatically convert your cfg and weights files into a TensorRT engine anyway? So essentially you'll be using TensorRT whether you point directly to a .engine file or to .cfg/.weights files? I understand why you would have the PyTorch table, because you're starting with a different architecture configuration, but doesn't YOLOv4 in the TensorRT table end up with the same architecture as YOLOv4 in the Darknet table? I hope that makes sense; I'm just looking for clarity.
Hello @marcoslucianops,
I recently trained a YOLOv5 model with one class, and after following your instructions here on how to configure a custom model, I got the following error when I ran sudo ./yolov5 -s:
Loading weights: ../yolov5s.wts
[03/19/2021-00:46:32] [E] [TRT] (Unnamed Layer* 17) [Convolution]: kernel weights has count 0 but 2048 was expected
[03/19/2021-00:46:32] [E] [TRT] (Unnamed Layer* 17) [Convolution]: count of 0 weights in kernel, but kernel dimensions (1,1) with 64 input channels, 32 output channels and 1 groups were specified. Expected Weights count is 64 * 1*1 * 32 / 1 = 2048
[03/19/2021-00:46:32] [E] [TRT] Parameter check failed at: ../builder/Network.cpp::addScale::482, condition: shift.count > 0 ? (shift.values != nullptr) : (shift.values == nullptr)
yolov5: /home/george/Desktop/vibever/tensorrtx/yolov5/common.hpp:189: nvinfer1::IScaleLayer addBatchNorm2d(nvinfer1::INetworkDefinition*, std::map<std::__cxx11::basic_string, nvinfer1::Weights>&, nvinfer1::ITensor&, std::__cxx11::string, float): Assertion `scale_1' failed.
Aborted
Is there any other modification I need to make? Thanks.
Hello, I followed your instructions for yolov5. I was able to run till the last step. The only change in my custom model was to change #labels from 80 to 6 and then change labels.txt. My model was fine-tuned on custom data (using ultralytics yolov5 repo). I updated yololayer.h accordingly. The yolov5.engine/wts file also has the same number of labels. However when I run deepstream-app on a test video, I get way too many bounding boxes and labels all over. There is no problem when I just run yolov5 test without deepstream. Any knobs I may have missed changing for a custom yolov5s model and following the entire set of steps?
How can I do it? Can you help me?
Hi @marcoslucianops ,
We talked a few days ago, and you told me to go to your repository to learn how to get metadata from DeepStream.
I read this section : https://github.com/marcoslucianops/DeepStream-Yolo/#custom-functions-in-your-model
However, I'm still lost. I understand that I can get metadata with NvDsObjectMeta, NvDsFrameMeta and NvOSD_RectParams, but:
- I don't know where these structures are in the analytics_done_buf_prob function.
- I don't understand how to use this function: I suppose I have to write code in the analytics_done_buf_prob function that saves the metadata, or directly use the metadata there, but I don't know where.
Could you help me understand, for example, how to get the coordinates of a specific bounding box and write those coordinates to a file?
Hi, I have tried to compile your YOLOv5 app on Jetson, but it looks like the code is missing nvdsinfer_custom_impl.h. Could you please take a look at it?
Using the new deepstream-5.1 Triton docker to build nvdsinfer_custom_impl_Yolo, I get the following error from make:
yolo.cpp: In member function 'NvDsInferStatus Yolo::buildYoloNetwork(std::vector&, nvinfer1::INetworkDefinition&)':
yolo.cpp:298:48: error: 'createReorgPlugin' was not declared in this scope
nvinfer1::IPluginV2* reorgPlugin = createReorgPlugin(2);
^~~~~~~~~~~~~~~~~
yolo.cpp:298:48: note: suggested alternative: 'reorgPlugin'
nvinfer1::IPluginV2* reorgPlugin = createReorgPlugin(2);
^~~~~~~~~~~~~~~~~
reorgPlugin
Makefile:61: recipe for target 'yolo.o' failed
Checking cuda version using nvcc --version:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Tue_Sep_15_19:10:02_PDT_2020
Cuda compilation tools, release 11.1, V11.1.74
Build cuda_11.1.TC455_06.29069683_0
I have run sudo chmod -R 777 /opt/nvidia/deepstream/deepstream-5.1/sources/ and ran CUDA_VER=11.1 make -C nvdsinfer_custom_impl_Yolo.
There were no issues previously using 5.0.1 and the old instructions.
Hello @marcoslucianops,
Once again, thanks for your response. I would like to interface Kafka with DeepStream-Yolo to stream inference results. Is there any way to go about it? Thanks.
Hi @marcoslucianops,
Thanks for your Projects. It helped me a lot honestly.
I have run a YOLOv3 model (trained on my custom dataset) on the Jetson Nano using DeepStream with 4 cameras. Next, I want to integrate Triton Server with DeepStream for the same model.
So, my doubts are:
1.) How do I do the integration, and what extra steps are needed?
2.) Can I serve the TRT models with the Triton Server integrated with DeepStream?
Thanks
Hello sir! Thanks for your work.
I am trying to run YOLOv4 on DeepStream 5.0.1 using your repository on my Jetson Nano. I started with this. Everything went okay: I successfully compiled with CUDA and tested the TRT inference. However, when the stream starts, the terminal displays this message:
WARNING: Num classes mismatch. Configured: 80, detected by network: 0
I followed your instructions, but got this. The number of classes in my labels.txt is 80.
My system's info:
Please help me!
Hi, I built and ran the sample DeepStream app. I wonder how to write an application to run YOLO in DeepStream. Thank you!
Hi, thank you for this repository.
I am not professional in C/C++, and I use 3 YOLO models back to back for car plate recognition.
How can I save and show only the objects detected in each step of DeepStream? And in the last model, how can I sort the detected objects left to right (to read the plate numbers)? A sample code would be appreciated.
Many thanks.
In deepstream-test5-app I can't find tiler_src_pad_buffer_probe(), so can I do it in bbox_generated_probe_after_analytics()?
Hi, firstly, thanks for your great work. Small issue: when I create an engine with a custom engine name using your native folder, the engine doesn't get the name specified in the config file.
For example, if I set model-engine-file=model_b1_gpu0_fp32_custom.engine, the engine is saved as model_b1_gpu0_fp32.engine.
Hello,
I'm running two YOLOv4-tiny models in deepstream-app: the first detector works on the full frame to detect cars, then the second works on the detected car boxes to extract windshields. The first detection works fine, but for the second one some bboxes are detected outside the car's region, as shown in the images below (the green box is from the first detector and the red box is from the second):
These are the yolo make folders and configuration files
nvdsinfer_custom_impl_Yolo.zip
nvdsinfer_custom_impl_Yolo_ws.zip
vehicle_detection_config.txt
ws_detection_config.txt
I want only "person" to be detected by YOLOv4, so I modified labels.txt and set num-detected-classes=1 in config_infer_primary.txt.
I should also change something in nvdsparsebbox_Yolo.cpp, right? Where is the variable to change?
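For reference, the class count appears in two places that must agree (a sketch; paths and values are examples): num-detected-classes in the nvinfer config, and the NUM_CLASSES_YOLO constant in nvdsparsebbox_Yolo.cpp mentioned elsewhere in these issues, after which the .so must be rebuilt.

```ini
[property]
# must match NUM_CLASSES_YOLO in nvdsparsebbox_Yolo.cpp
# (edit the constant, then rebuild libnvdsinfer_custom_impl_Yolo.so)
num-detected-classes=1
```

labels.txt should then contain exactly one line, person.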
Hi,
From your YOLOv5 FAQ, I'm aware that some things need to be changed in the config files. My question is: do I need to add functions to parse the model's outputs in nvdsparsebbox_Yolo.cpp? And how do I do so?
Thanks,
Hi,
I have some trouble with deepstream.
I trained a YOLOv3-tiny on Darknet with these specific classes:
person
wheelchair
bicycle
motorcycle
car
bus
truck
ambulance
traffic light
stop sign
cedez le passage
shoes
sports ball
traffic cones
As you can see it contains some classes of the COCO dataset but not only them, and not in the same order.
I trained on Darknet, and it worked; I took some pictures to verify it.
Then I took the .cfg, .weights and .names files to DeepStream (I use DeepStream 4.0).
I changed the number of classes in nvdsparsebbox_Yolo.cpp and compiled it.
I also created a config_infer_primary... and a deepstream_app_config... and configured them properly (right number of classes, right sources).
I renamed my .names file to labels.txt.
And I tried Yolo_Deepstream.
I don't understand my results.
The video I used shows these objects:
person
car
bicycle
wheelchair
but I detect this:
person is detected as person
car is detected as bicycle
bicycle is detected as wheelchair
wheelchair is detected as... wheelchair.
I really don't know where the problem is; it works on Darknet, and I didn't modify the order.
Also, at first I thought some classes were swapped, with car->bicycle and bicycle->wheelchair, but wheelchair->wheelchair!
Do you know where the problem could be in DeepStream?
Sincerely,
If not, could you tell me the reason? I want to port to the v4 version for better performance and to adapt it to INT8 calibration. Thanks.
Some tactics do not have sufficient workspace memory to run
when using YOLOv4 on a Jetson NX.
I have already set workspace-size=4000.
Hi, thanks for your hard work and sharing it with us !
I'm able to use pre-trained YOLOv4 and YOLOv4-tiny with DeepStream, but I had a problem with a custom YOLOv4-tiny model. I would like to use my custom YOLOv4-tiny model, which has 6 classes.
For the original DeepStream API, I just changed "static const int NUM_CLASSES_YOLO = 6" in the nvdsparsebbox_Yolo.cpp file, ran make, and was then able to use the generated libnvdsinfer_custom_impl_Yolo.so file with my custom YOLOv3 weights file for inference on DeepStream 5.
Please guide me on using my custom YOLOv4-tiny model on DeepStream 5.
Your help would be appreciated!
Hi @marcoslucianops, can you also describe everything we need to configure to run YOLOv4-tiny and YOLOv4 models on DeepStream?
@marcoslucianops Can you please check your email when you have time? I sent you a request about the DeepStream app.
I keep getting this error.
deepstream-app: yolo.cpp:141: NvDsInferStatus Yolo::buildYoloNetwork(std::vector&, nvinfer1::INetworkDefinition&): Assertion `m_ConfigBlocks.at(i).at("activation") == "linear"' failed.
Aborted
Things I have done:
Hi, great job. I'm trying to deploy a model as an API service, and I'd like to know if it is possible to do this using your repo.
If yes, could you help me with this?
Thanks!
Hi, I want to make nvdsparsebbox_Yolo more flexible by removing:
static const int NUM_CLASSES_YOLO = 80;
#define NMS_THRESH 0.45
#define CONF_THRESH 0.25
Right now I just cannot get NMS_THRESH from the config file config_infer_primary_yoloV5s.txt:
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
Do you have any suggestions?
Thanks,
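The values in question are ordinary INI keys, so they can be read generically; a minimal sketch of pulling the thresholds out of the nvinfer config (the file contents are inlined here for illustration):

```python
import configparser

# Parse the [class-attrs-all] group exactly as it appears in
# config_infer_primary_yoloV5s.txt (contents inlined for the example).
cfg = configparser.ConfigParser()
cfg.read_string("""
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
""")

nms_thresh = cfg.getfloat("class-attrs-all", "nms-iou-threshold")
conf_thresh = cfg.getfloat("class-attrs-all", "pre-cluster-threshold")
print(nms_thresh, conf_thresh)  # 0.45 0.25
```

On the C++ side, the parse function already receives an NvDsInferParseDetectionParams argument whose perClassPreclusterThreshold values are filled by nvinfer from pre-cluster-threshold, so the confidence threshold need not be hard-coded; the NMS threshold, as I understand it, is applied by nvinfer's clustering rather than passed to the parser.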
https://forums.developer.nvidia.com/t/how-to-use-dla-in-deepstream-yolov5/161550/25
Hi @marcoslucianops ,
I used DeepStream-YOLOv4, and I checked the engine: it was built on the GPU. I saw the article above. How do I modify the following code to build a DLA engine?
DeepStream-Yolo/native/nvdsinfer_custom_impl_Yolo/yolo.cpp
Lines 74 to 81 in 470ed82
Thanks for creating this repo. I was under the impression that the NVIDIA DeepStream SDK had to be edited in a C++ file; here it seems that is not how the interaction with DeepStream takes place. Any information to clear up my misconception would be helpful.
Hi Marcos,
I noticed that your modified code for nvdsinfer_custom_impl_Yolo doesn't support non-square/asymmetric models (width!=height).
This was unexpected as I saw your discussion about attempts to get that working on the default implementation here:
https://forums.developer.nvidia.com/t/trouble-in-converting-non-square-grid-in-yolo-network-to-tensorrt-via-deepstream/107541/19
so I thought you would have included the functionality in your implementation. Is there a way to change some code (similar to eh-steve's patch from the above link) to enable asymmetric model input sizes for your implementation? I'm eager to make this work on your implementation as it already supports arbitrary custom yolo models based on alexeyAB's darknet fork.
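For what it's worth, nothing in the YOLO head math itself requires a square input: each head's grid is just the input divided by its stride, so width and height only need to be multiples of 32 (the largest stride). A quick sketch of the expected grid shapes (`grid_sizes` is a hypothetical helper, not the repo's code):

```python
def grid_sizes(net_w, net_h, strides=(8, 16, 32)):
    # Each YOLO head predicts on a (net_w/stride) x (net_h/stride) grid,
    # so non-square inputs simply yield non-square grids.
    assert net_w % 32 == 0 and net_h % 32 == 0, "YOLO inputs must be multiples of 32"
    return [(net_w // s, net_h // s) for s in strides]

print(grid_sizes(608, 416))  # [(76, 52), (38, 26), (19, 13)]
```

The plugin-side work is presumably carrying separate gridW/gridH values through the layer instead of a single grid size, which is what the patch in the linked thread does.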
Hello @marcoslucianops. There seems to have been a lot of change in the repo since the last time I used it to convert my yolov4-tiny model. Last time, I used your nvdsparsebbox_Yolo.cpp file, which you had edited to add support for yolov4-tiny about 3 months back. Now this file in your repo is completely changed. My question is: I want to run a yolov4-tiny model in DeepStream, but I want its name to be changed to another name instead of yolov4.
As of now, in the nvdsparsebbox_Yolo.cpp file, I see there is a function named NvDsInferParseCustomYoloV4 and other functions related to it, like convertBBoxYoloV4, addBBoxProposalYoloV4, decodeYoloV4Tensor and NvDsInferParseYoloV4. And in the config file we give the name of the function NvDsInferParseCustomYoloV4.
If I change the names of all these functions to some other name, say model, e.g. NvDsInferParseCustommodel, convertBBoxmodel, addBBoxProposalmodel, etc., and give the function name as NvDsInferParseCustommodel in the config file, will it work?
So basically, I am replacing the name yolov4 with some other name and calling the renamed function from the config file. Will this work?
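For what it's worth, nvinfer resolves the parse function by name at runtime from the custom library, so renaming should work as long as both sides stay in sync and the .so is rebuilt. The relevant config keys (a sketch; the path is an example):

```ini
[property]
# name of the exported parse function in the rebuilt custom library
parse-bbox-func-name=NvDsInferParseCustommodel
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

Internal helpers like convertBBoxmodel can be named anything; only the exported parse function's name must match the config.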
Hi, I was wondering if the performance I am getting with DeepStream YOLOv5 is normal on the Jetson Nano 4GB.
I run inference on two video cameras (1280x720) and I get a very laggy preview.
I have to set drop-frame-interval=5 to obtain real-time inference; it takes 0.15 s to run inference on each camera.
Maybe it's my config?
Environment:
JetPack 4.5.1
DeepStream 5.1
Model used: YOLOv5s 3.0
By the way: nice tutorial!
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
[tiled-display]
enable=1
rows=1
columns=2
width=1920
height=1080
gpu-id=0
nvbuf-memory-type=0
[source0]
enable=1
type=3
uri=rtsp://192.168.1.19:554/1/h264major
num-sources=1
gpu-id=0
cudadec-memtype=0
#latency=200
#drop-frame-interval=5
[source1]
enable=1
type=3
uri=rtsp://192.168.1.20:554/1/h264major
num-sources=1
gpu-id=0
cudadec-memtype=0
#latency=200
#drop-frame-interval=5
[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
[sink1]
enable=1
type=2
sync=0
source-id=1
gpu-id=0
nvbuf-memory-type=0
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
live-source=1
batch-size=2
batched-push-timeout=40000
width=1280
height=720
enable-padding=0
nvbuf-memory-type=0
[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
[tests]
file-loop=0
[E] [TRT] Parameter check failed at: ../builder/Network.cpp::addScale::434, condition: shift.count > 0 ? (shift.values != nullptr) : (shift.values == nullptr)
common.hpp:190: nvinfer1::IScaleLayer* addBatchNorm2d(nvinfer1::INetworkDefinition*, std::map<std::__cxx11::basic_string, nvinfer1::Weights>&, nvinfer1::ITensor&, std::__cxx11::string, float): Assertion `scale_1' failed.
Hello @marcoslucianops,
Thanks for the awesome work; I truly appreciate it. I want to extract the stream metadata, which contains useful information about the frames in the batched buffer, for the YOLOv3-Tiny-PRN model. How can I obtain the metadata?
Hi, why does the whole process take up so much memory, and is there any way to reduce the memory usage?
Hi @marcoslucianops, I have followed your tutorial and am able to run a tiny-yolov4 model on the Nano. I understood that a pipeline in DeepStream can be created using config files. Now, however, I want to edit the reference deepstream-app to add some custom functionality. Which file can I edit for that? I have seen some deepstream-test samples, and all of them have .c/.cpp files to edit the pipeline. I have followed your tutorial, so in this process, which files are used? Are they the files located in /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-app? If I edit the files there and then run from your directory, will they work? Which file should I edit? Please help me with this.
I changed INPUT_H and INPUT_W in yololayer.h, but I found something strange.
When INPUT_H and INPUT_W equal 320, connecting to the Jetson via MobaXterm, it displays as follows.
But on a monitor connected to the Jetson, it displays as below.
Why is that?
Also, what can I do if I want to speed up inference?
Hello @marcoslucianops, thank you for sharing your work. In the MULTIPLE-INFERENCES.md file, what is meant by primary inference and secondary inference? I mean, what's the difference between them? Also, I want to run my tiny-yolov4 model on multiple images. How do I do that? Thanks in advance.
Hi @marcoslucianops
In your opinion, is it possible to use DeepStream for a custom app, for example face recognition?
I mean, I want to use the multi-stream decoding and one detector of DeepStream for face detection, but DeepStream doesn't support any model for face recognition. I want to know how I can integrate the face recognition system into DeepStream. I want to get outputs like counts and coordinates from DeepStream. Is that possible?
Hi @marcoslucianops ,
I want to test YOLOv3-tiny with a plug-and-play camera. In deepstream_app_config I changed the type to 1 and got some errors; it says it failed in create_camera_source_bin.
Do you know how to use YOLOv3-tiny with a simple USB camera?
For now I have just started to use DeepStream; do you know if there is some tutorial for getting started with DeepStream?
Sincerely,