unimvsnet's People

Contributors

prstrive


unimvsnet's Issues

The result on the ETH3D dataset

Hi, thank you for opening up this great project!
I tested the algorithm on the Tanks and Temples dataset, and the results are really nice! But when I tried the ETH3D dataset, the results were not as good. Have you tested on ETH3D? The checkpoint I used is the model provided for Tanks and Temples.

About the F-score on Tanks and Temples

I have a problem on the Tanks and Temples dataset:
for all 8 scenes, my private testing results are
79.61 65.66 54.16 59.51 62.24 60.90 57.61 54.59

I followed the author's reconstruction configuration, yet there is a large margin compared with the author's results. Am I missing something important?

Some questions about the differences between UniMVSNet and CasMVSNet

Thank you for your great work! I just went through your code, and I have two subtle questions about the paper and the code.
Q1. The network structures of UniMVSNet and CasMVSNet are the same, so I wonder whether the first baseline in Table 3 (Baseline (Reg)) is CasMVSNet. The result in Table 3 is much better than the one reported in the CasMVSNet paper. What are the differences between Baseline (Reg) and CasMVSNet?

Q2. Assume the interval is 1. If the ground truth is 7.99, then the unity ground truth should be (1 - 0.99) / 1.0 = 0.01. It seems this could cause an instability problem: if the predicted value for some other depth hypothesis is much larger than 0.01, the WTA depth will be wrong. How does UniMVSNet handle this?
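(To make the arithmetic in Q2 concrete: a minimal PyTorch sketch with made-up hypothesis values; searchsorted is just one way to locate the bin containing the ground truth, not the repository's own code.)

    import torch

    # Depth hypotheses spaced at interval 1 (made-up values for illustration).
    depth_values = torch.arange(0.0, 12.0)   # hypotheses 0, 1, ..., 11
    interval = 1.0
    depth_gt = torch.tensor([7.99])

    # Index of the hypothesis whose bin [d, d + interval) contains the GT.
    idx = torch.searchsorted(depth_values, depth_gt, right=True) - 1

    # Unity target: 1 at the ground truth, decaying linearly over one interval.
    unity_gt = 1.0 - (depth_gt - depth_values[idx]) / interval
    print(idx.item(), unity_gt.item())   # -> 7, ~0.01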

Looking forward to your kind reply.

A problem in fusibile

When using fusibile to fuse the point clouds, I get the following output, "Found 0.00 million points", for every camera. What could be the cause?

Fusing points
Processing camera 0
Found 0.00 million points
Processing camera 1
Found 0.00 million points
Processing camera 2
Found 0.00 million points
Processing camera 3
Found 0.00 million points
Processing camera 4

Parameter tuning on the Tanks and Temples dataset

Hello, and thank you for your excellent work!
I have recently been working on MVS as well. While reading your code, I noticed that the photometric-consistency filtering parameters differ for every scene of the Tanks and Temples dataset. Could you explain how to obtain the optimal parameters for each scene when using dypcd?
Thank you for reading and replying!

Empty output point cloud

Thank you for sharing your marvellous work.
I am testing your code on DTU with Python 3.6 and PyTorch 1.2.
The result from the fusion:

Not using distributed mode
netphs: [48, 32, 8]
depth_intervals_ratio: [4.0, 2.0, 1.0]
cr_base_chs: [8, 8, 8]
fea_mode: fpn
agg_mode: adaptive
depth_mode: unification
Namespace(agg_mode='adaptive', batch_size=1, blendedmvs_finetune=False, conf=[0.1, 0.15, 0.9], datapath='/content/UniMVSNet/dtu', dataset_name='general_eval', depth_img_save_dir='./', depth_mode='unification', depth_path=None, disp_threshold=0.25, display=False, dist_base=0.25, dist_url='env://', distributed=False, dlossw=[0.5, 1.0, 2.0], epochs=16, eval_freq=1, fea_mode='fpn', filter_method='gipuma', fix_res=False, fusibile_exe_path='/content/fusibile/fusibile', img_size=[512, 640], interval_ratio=[4.0, 2.0, 1.0], interval_scale=1.06, inverse_depth=False, local_rank=0, log_dir=None, lr=0.001, lr_decay=0.5, max_h=864, max_w=1152, milestones=[10, 12, 14], ndepths=[48, 32, 8], no_cuda=False, num_consistent=3.0, num_view=5, num_worker=4, numdepth=192, nviews=5, outdir='/content/UniMVSNet/output', prob_threshold=0.3, rel_diff_base=0.0007692307692307692, resume='/content/UniMVSNet/unimvsnet/unimvsnet_blendedmvs.ckpt', save_freq=20, scheduler='steplr', start_epoch=0, summary_freq=50, sync_bn=False, test=True, testlist='datasets/lists/dtu/test.txt', testpath_single_scene=None, thres_view=5, trainlist=None, val=False, vis=False, warmup=0.2, wd=0.0)
dataset test metas: 49 interval_scale:{'scan9': 1.06}
Iter 0/49, Time:12.386372804641724 Res:(5, 3, 864, 1152)
Iter 1/49, Time:0.9015965461730957 Res:(5, 3, 864, 1152)
[... iterations 2-47 complete at ~0.9 s each ...]
Iter 48/49, Time:0.9046711921691895 Res:(5, 3, 864, 1152)
filter depth map with probability map
Convert mvsnet output to gipuma input
Run depth map fusion & filter
/content/fusibile/fusibile -input_folder /content/UniMVSNet/output/scan9/points_mvsnet/ -p_folder /content/UniMVSNet/output/scan9/points_mvsnet/cams/ -images_folder /content/UniMVSNet/output/scan9/points_mvsnet/images/ --depth_min=0.001 --depth_max=100000 --normal_thresh=360 --disp_thresh=0.25 --num_consistent=3.0
Command-line parameter error: unknown option -input_folder
input folder is /content/UniMVSNet/output/scan9/points_mvsnet/
image folder is /content/UniMVSNet/output/scan9/points_mvsnet/images/
p folder is /content/UniMVSNet/output/scan9/points_mvsnet/cams/
pmvs folder is 
numImages is 49
img_filenames is 49
Device memory used: 1018.167297MB
Device memory used: 1018.167297MB
P folder is /content/UniMVSNet/output/scan9/points_mvsnet/cams/
numCameras is 49
Camera size is 49
Accepted intersection angle of central rays is 10.000000 to 30.000000 degrees
Selected views: 49
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 
Reading normals and depth from disk
Size consideredIds is 49
Reading normal 0
Reading disp 0
[... normals and disparities for views 1-47 read the same way ...]
Reading normal 48
Reading disp 48
Resizing globalstate to 49
Run cuda
Run gipuma
Grid size initrand is grid: 36-27 block: 32-32
Device memory used: 2766.143555MB
Number of iterations is 8
Blocksize is 15x15
Disparity threshold is 	0.250000
Normal threshold is 	6.283185
Number of consistent points is 	3
Cam scale is 	1.000000
Fusing points
Processing camera 0
Found 0.00 million points
[... cameras 1-47 likewise all report "Found 0.00 million points" ...]
Processing camera 48
Found 0.00 million points
	    ELAPSED 0.778188 seconds
Error: no kernel image is available for execution on the device
Writing ply file /content/UniMVSNet/output/scan9/points_mvsnet//consistencyCheck-20220607-121944//final3d_model.ply
store 3D points to ply file

What could be the problem?

Training on a small dataset

Thanks for sharing this great work. I used my own data, about 400 images with LiDAR ground truth, and trained from scratch with the blendedmvs_finetune.sh script. It converges after about 30 epochs, but the predictions are empty or contain only very few points.
I then extracted a single scene from the BlendedMVS dataset and ran the same experiment; it also fails to produce predictions.
Training from scratch on the whole BlendedMVS dataset, without loading a pretrained model, gives normal predictions.
The initial learning rate is 0.001.
I also read the DTU dataset code: apart from each viewpoint being shot seven times under different lighting, it is basically the same as BlendedMVS.
I am completely puzzled. Could you explain why training on a small dataset goes wrong?

Got an OOM error when calculating gt_unity_index_volume

Thanks for sharing your great work. The proposed loss is very valuable for my work.
But I always get an OOM error when calculating gt_unity_index_volume. The shape of gt_index_volume is torch.Size([24, 118, 12, 40]) and depth_gt_volume is also torch.Size([24, 118, 12, 40]). Do you know the reason for this error? Are the batch size and number of depth channels too big?

gt_unity_index_volume[gt_index_volume] = 1.0 - (depth_gt_volume[gt_index_volume] - depth_values[gt_index_volume]) / interval
RuntimeError: CUDA out of memory. Tried to allocate 573.65 GiB (GPU 0; 39.59 GiB total capacity; 2.81 GiB already allocated; 16.61 GiB free; 3.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
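(A tensor of shape [24, 118, 12, 40] is only a few megabytes in float32, so a 573 GiB allocation suggests broadcasting between mismatched shapes inside the boolean indexing. Below is a hedged sketch that builds the unity volume densely, without boolean masks; shapes are assumed from the issue above and this is not the repository's own implementation.)

    import torch

    def unity_target(depth_values, depth_gt, interval):
        """Dense unity ground truth: q = 1 - (d_gt - d) / interval inside the
        bin that contains the ground truth, 0 elsewhere.

        depth_values: (B, D, H, W) per-pixel depth hypotheses
        depth_gt:     (B, H, W)    ground-truth depth map
        interval:     scalar hypothesis spacing
        (Hypothetical helper; shapes assumed from the issue above.)
        """
        offset = (depth_gt.unsqueeze(1) - depth_values) / interval  # (B, D, H, W)
        inside = (offset >= 0) & (offset < 1)   # the one bin containing the GT
        return torch.where(inside, 1.0 - offset, torch.zeros_like(offset))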

Loss equations 7-9: conditioning on q

Hi, thanks for making your work publicly available. I enjoyed going through your paper and the loss code. A couple of things about the loss are still unclear to me, and I would be glad if you could clarify them.

In the paper, you mention that q is a continuous target with domain [0, 1]. But in Eqs. 7, 8, and 9 of your system of equations, you condition on q > 0. I checked the code where you implement the loss; as far as I understand, you condition on whether the ground truth falls within [u, u + interval].

So in Eqs. 7, 8, and 9, do you actually condition on whether the right interval has been found? Or am I misinterpreting this?
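(For reference, the indexing line quoted in the OOM issue above suggests the unity target has the form below; this is a reconstruction from that line of code, not copied from the paper.)

    $$
    q(d_j) =
    \begin{cases}
    1 - \dfrac{d_{gt} - d_j}{\mathrm{interval}}, & d_j \le d_{gt} < d_j + \mathrm{interval},\\
    0, & \text{otherwise},
    \end{cases}
    $$

Under this form, q > 0 holds exactly for the hypothesis whose interval contains the ground truth, so the two readings of the condition would coincide.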

Cannot download the DTU and Tanks and Temples datasets

First of all, thank you for sharing your great research results.

I tried to download the DTU testing data and the Tanks and Temples data that you uploaded, but I get "Access Denied" from doc-00-54-docs.googleusercontent.com (HTTP ERROR 403) and cannot download them.

Could you please look into this?

A problem about fusibile

There is a problem when testing on DTU:
sh: 1: /home/***/fusibile (path of fusibile): Permission denied
What could be wrong?

Results on Tanks and Temples

Thanks for your generous open-source release. We applied unimvsnet_blendedmvs.ckpt to reproduce the results on Tanks and Temples via scripts/tank_test.sh. The advanced results come close to yours, while the intermediate ones drop considerably. Is there any setting that differs between intermediate and advanced, e.g., inverse depth, num_view, ndepths, interval_ratio?

The full results are as follows:

F-score:

Auditorium: 27.28
Ballroom: 44.05
Courtroom: 39.13
Family: 34.35
Francis: 24.06
Horse: 06.13
Lighthouse: 58.74
M60: 57.29
Museum: 52.20
Palace: 32.27
Panther: 57.88
Playground: 57.57
Temple: 33.45
Train: 45.85

Mean intermediate: 42.73
Mean advanced: 38.06

Different results between the released model and the released ply

I tried to infer and fuse on the DTU dataset, but found that the predicted results differ from your released ply models.

For example:
Your released point cloud for scan1 (mvsnet001_l3.ply): 26727801 points, fscore 0.191357
My predicted point cloud for scan1: 19405933 points, fscore 0.2162721

When I visualize the two point clouds, the completeness of the predicted point cloud is much lower than that of the released one, especially around edges.

My environment:
Fusibile: compiled with CUDA 11.4, sm86
PyTorch: 1.8.2+cu111
Python: 3.8.12

Here is my log file for scan1
log.txt

Training and Inference Time

Thanks for sharing this excellent work!
As I could not find a description of the training and inference time in the paper, I would like to ask how long it takes to train or fine-tune the model, and how long depth inference takes on 5 images (1 reference + 4 source).
It would be very kind of you to report the training and inference time with a detailed description of the settings, e.g., the type and number of GPUs, the input image resolution, the size of the training set, the corresponding accuracy/completeness, etc. Many thanks.

Error: no kernel image is available for execution on the device

Hi, I have a question about running dtu_test.sh.

I installed Fusibile, and there were no errors during cmake . and make.
Result of cmake .

-- Found OpenMP_C: -fopenmp  
-- Found OpenMP_CXX: -fopenmp  
-- Found OpenMP: TRUE   
-- Configuring done
-- Generating done
-- Build files have been written to: /home/*****/UniMVSNet-main/fusibile-master

Result of make

[ 33%] Linking CXX executable fusibile
[100%] Built target fusibile

However, when I run dtu_test.sh on DTU scan23, I get the same error as in #18:
Error: no kernel image is available for execution on the device

What could be the problem? Maybe my PyTorch version?

My environment:
Python 3.6
PyTorch 1.8.1
CUDA 10.1

The winner-take-all selection

Hello, thanks for your great work and contribution. The unification applies winner-take-all selection as the basis to generate the depth map for the next, finer stage. However, torch.max() is a non-differentiable operation, so it seems the parameters of the 3D CNN would not be optimized except for those associated with the selected depth hypothesis. How do the gradients backpropagate in this case?
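(A toy sketch of the behaviour in question, with assumed tensor shapes rather than the repository's code: the argmax-selected depth itself receives no gradient, but per-hypothesis supervision can still reach the score volume directly, which is one common way cascade methods tolerate a non-differentiable stage-to-stage depth.)

    import torch

    B, D, H, W = 1, 4, 2, 2
    scores = torch.randn(B, D, H, W, requires_grad=True)   # per-hypothesis scores
    depth_values = torch.linspace(1.0, 4.0, D).view(1, D, 1, 1).expand(B, D, H, W)

    # Winner-take-all: argmax is non-differentiable, so no gradient flows
    # through the selected depth itself.
    idx = scores.argmax(dim=1, keepdim=True)
    wta_depth = torch.gather(depth_values, 1, idx)   # only seeds the next stage's range

    # Supervision is applied to every hypothesis's score (classification-style),
    # so all parameters producing `scores` still receive gradients.
    target = torch.zeros_like(scores)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(scores, target)
    loss.backward()
    print(scores.grad.abs().sum() > 0)   # tensor(True)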

Custom-data reconstruction quality is not good

Hello, thank you very much for your open-source release. I collected data with a mobile phone, used COLMAP sparse reconstruction to obtain the camera parameters, and then used the conversion script colmap2mvsnet.py provided by Yao Yao to produce input the network can use.
But the final reconstruction is poor. The depth map looks plausible at a glance but is wrong, and the point cloud is very small, only a few hundred KB.

I would like to ask (for point 2, see the sketch after this list):

  1. Do the parameters need to be changed when using COLMAP sparse reconstruction (I used the defaults)?
  2. Is there an obvious problem with the camera parameters converted by colmap2mvsnet (the depth range?)
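(For point 2, one rough way to sanity-check the converted depth range against COLMAP's sparse points; a sketch with assumed inputs, not the exact logic of colmap2mvsnet.py.)

    import numpy as np

    def view_depth_range(points_world, R, t, margin=0.2):
        """Depth range of the sparse 3-D points seen from one view.

        points_world: (N, 3) COLMAP sparse points
        R: (3, 3), t: (3,)   world-to-camera extrinsics
        margin:              hypothetical safety factor on the range
        """
        depths = (points_world @ R.T + t)[:, 2]
        depths = depths[depths > 0]                       # keep points in front of the camera
        d_min, d_max = np.quantile(depths, [0.01, 0.99])  # robust to outliers
        return d_min * (1 - margin), d_max * (1 + margin)

If the range written into the cams files differs wildly from what this returns, the conversion (or the scene scale) is the first thing to suspect.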

Question about gipuma

Hi, you present the point clouds based on gipuma fusion, but I ran into some problems when trying to install fusibile. Could you please share the environment you used, including the versions of CUDA and OpenCV?

About setting the 3-step conf values on a new dataset

Hello. Could you please tell me how you determined the 3-step conf values used for the different scenes of the Tanks and Temples dataset? Do I need another neural network to learn them, or should I just look at the conf distribution and run some experiments?
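(One empirical route consistent with "get the conf distribution and take some experiments"; a sketch, not the authors' procedure, and conf_maps is a hypothetical list of per-view confidence arrays.)

    import numpy as np

    def conf_quantiles(conf_maps, qs=(0.1, 0.5, 0.9)):
        """Summarize a scene's confidence distribution to seed a threshold sweep.

        conf_maps: list of 2-D numpy arrays, one confidence map per view.
        """
        flat = np.concatenate([c.ravel() for c in conf_maps])
        return np.quantile(flat, qs)

    # Sweep candidate conf triples around these quantiles, fuse the point
    # cloud for each triple, and keep the one with the best F-score.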

About abs_depthloss

Hello, thank you very much for your code. May I ask how to check the absolute depth error during training?

Convergence speed

Hello, we trained with the "unification" and "regression" strategies respectively. The abs depth err of "unification", in both avg train and avg test, is higher than that of "regression" for the first few epochs. Is it normal for the "unification" strategy to converge more slowly?

The weights you uploaded

Thanks for your generous open-source release. I just want to know whether the weights you offered are only for "fpn". When I change "fea_mode" to "unet", I get errors about the network structure.

About the fusion threshold settings

Thanks for sharing such great results. I would like to ask one more question. In your code, the fusion parameters are set as:
'--prob_threshold', '0.3',
'--disp_threshold', '0.25',
'--num_consistent', '3']

CVP-MVSNet, however, sets them as below. What is the usual basis for choosing these values?
parser.add_argument('--prob_threshold', type=float, default='0.8')
parser.add_argument('--disp_threshold', type=float, default='0.13')
parser.add_argument('--num_consistent', type=float, default='3')

Error during validation at the 7th epoch when training on DTU

Like the author's team, I ran training on two GPUs, but both runs crashed during validation at the 7th epoch with:
OMP: Error #100: Fatal system error detected.
OMP: System error #22: Invalid argument
The problem went away after I reduced the validation frequency to a third of its original value. My guess is that the validate function is missing something like clearing the gradient cache; I hope the author can shed some light on this.

About unity generation

Dear author,

Thank you for your great work.
Where can I find the actual code for unity generation?

Thank you.

The evaluation results on the DTU evaluation set differ between the paper and the released checkpoint

Dear Rui Peng:

Thank you very much for your contribution and nice work.

I evaluated your released checkpoint "unimvsnet_dtu.ckpt" on the DTU evaluation set (I did not change any parameters).

The results are 0.4173 for mean accuracy and 0.2966 for mean completeness.

However, in the paper, the two metrics are 0.352 and 0.278, respectively.

May I ask whether this released checkpoint is the one used for the paper?

Thanks for your help.

Looking forward to your response.
