
ggcnn's People

Contributors

dougsm, lwohlhart


ggcnn's Issues

AttributeError: 'NoneType' object has no attribute 'TF_NewStatus'

Hi, I have recently read your great work "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach" and have been trying to validate your results. However, I get the error below:
2019-05-02 11:10:37.457694: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0)
Traceback (most recent call last):
File "evaluate.py", line 196, in
run()
File "evaluate.py", line 173, in run
succeeded, failed = calculate_iou_matches(grasp_positions_out, grasp_angles_out, bbs_all, no_grasps=NO_GRASPS, grasp_width_out=grasp_width_out)
File "evaluate.py", line 102, in calculate_iou_matches
gt_bbs = BoundingBoxes.load_from_array(ground_truth_bbs[i, ].squeeze())
IndexError: index 0 is out of bounds for axis 0 with size 0
Exception ignored in: <bound method BaseSession.__del__ of <tensorflow.python.client.session.Session object at 0x7f479f396a58>>
Traceback (most recent call last):
File "/home/robot/anaconda3/envs/weiwuhuhu/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 587, in del
AttributeError: 'NoneType' object has no attribute 'TF_NewStatus'

Firstly, when I ran evaluate.py, I got:
File "/home/robot/anaconda3/envs/weiwuhuhu/lib/python3.5/site-packages/h5py/_hl/files.py", line 170, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 85, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = '/home/douglas/dev/ae_grasp_prediction/data/datasets/dataset_rotated_width_zoom_171219_1516.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

Secondly, I replaced 'dataset_rotated_width_zoom_171219_1516.hdf5' with 'dataset_190502_1033.hdf5'.
Finally, when I ran evaluate.py again, the error above,
AttributeError: 'NoneType' object has no attribute 'TF_NewStatus', occurred.

How can I resolve it? Thank you very much.

Plot output on an image

Hello Doug,

Thanks a lot for your code. I'm writing a thesis about the GG-CNN. I used only half of the Cornell dataset because of a lack of storage. It worked fine, and I will test it with my own dataset. I saw the function plot_output in evaluate.py. Can you tell me how to use it to plot the grasp position on one of the input image/PCD files?

Thanks for your help
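
In case a sketch helps while waiting for an answer: the Q map can be overlaid on the input with matplotlib. This is a generic sketch, not the repo's plot_output signature; rgb and q_img are stand-ins.

    import numpy as np
    import matplotlib.pyplot as plt

    rgb = np.zeros((300, 300, 3))              # stand-in for the evaluated image
    q_img = np.random.rand(300, 300)           # stand-in for the network's Q output

    plt.imshow(rgb)
    plt.imshow(q_img, cmap='jet', alpha=0.5)   # quality heatmap over the image
    y, x = np.unravel_index(np.argmax(q_img), q_img.shape)
    plt.plot(x, y, 'wx')                       # mark the best grasp centre
    plt.show()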

About validate code

Dear Doug,
While reading your code I ran into a question; I hope you can help me. Thanks a lot.
In the validate() function of train_ggcnn.py, when you accumulate the model's val_loss, you use the code below:
results['loss'] += loss.item() / ld
My question is: why do we divide by ld here?
I found you do not do the same thing in train(), where you divide by batch_idx instead.
I don't understand what causes the difference.

Could you help me? Thanks!!
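
For what it's worth, dividing each term by ld inside the loop and dividing the running sum by the batch count at the end compute the same mean; a minimal sketch, assuming ld is the number of validation batches:

    batch_losses = [0.4, 0.2, 0.6]            # example per-batch losses
    ld = len(batch_losses)                    # number of validation batches

    # validate(): divide each term by ld while accumulating
    mean_a = sum(l / ld for l in batch_losses)

    # train(): accumulate raw losses, divide by the batch count when logging
    mean_b = sum(batch_losses) / ld

    assert abs(mean_a - mean_b) < 1e-12       # both compute the mean loss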

About the output results and 6D description

Dear author, thank you for your code and paper. I have run the program you provided, and the outputs are four images: RGB, Depth, Q and Angle. But the g mentioned in the article is (p, phi, w, q). I wonder how I can get g from the outputs (the four images). Looking forward to your reply.
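
For reference, a minimal sketch of one common way to read a single grasp g = (p, phi, w, q) off the output maps: take the pixel where the quality map peaks, then look up the angle and width at that pixel. The array names and 300 x 300 size are assumptions, not the repo's exact code.

    import numpy as np

    # Stand-ins for the network's output maps (each H x W):
    q_img = np.random.rand(300, 300)                            # grasp quality
    ang_img = np.random.uniform(-np.pi/2, np.pi/2, (300, 300))  # grasp angle
    width_img = np.random.rand(300, 300) * 150.0                # gripper width

    p = np.unravel_index(np.argmax(q_img), q_img.shape)  # pixel with best quality
    phi = ang_img[p]                                     # angle at that pixel
    w = width_img[p]                                     # width at that pixel
    q = q_img[p]                                         # quality score
    g = (p, phi, w, q)                                   # the grasp from the paper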

How does the network transform the angle image into a specific angle?

I know the GG-CNN takes a depth image as input and outputs three images, but I'm not sure how to use those three images to guide the robot when it needs to grasp a specific part of something it has detected.
After all, a single angle is needed when grasping, not an image.

How to use this for real time grasp detection?

I was trying to use this code for real-time grasping. As a first step I tried to run evaluation on a TIFF image from the Cornell dataset without using the labels. I observed that before feeding the depth image to the net, the code crops the image using the dataset labels. If I comment out the cropping function, the results are not accurate.
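
If it helps, a label-free alternative is a fixed center crop to the network's input size; a sketch (the 300 x 300 size and frame size are assumptions):

    import numpy as np

    def center_crop(depth, out_size=300):
        # Crop a square window from the middle of the frame,
        # independent of any dataset labels.
        h, w = depth.shape
        top, left = (h - out_size) // 2, (w - out_size) // 2
        return depth[top:top + out_size, left:left + out_size]

    depth = np.zeros((480, 640))        # e.g. one raw depth frame
    net_input = center_crop(depth)      # 300 x 300, no labels needed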

potential ambiguity of angle images

Hi there,
Thank you for sharing the ggcnn code. The performance is quite impressive.
I ran your code with my own dataset, and the predicted gripper angle is often close to 0. Later I found a possible logic flaw in the design of the angular supervision.
Assume the simplest case: a cube. We label it with two grasps, one at (cube center, 0) and another at (cube center, 0.5pi). Each of these grasps occupies a rectangle in the angle image. The problem is that these rectangles overlap, and the overlapping region is ambiguous: depending on which grasp comes first, it may end up marked 0 or 0.5pi. Averaging 0 and 0.5pi is not a good idea either, since 0.25pi makes the gripper grasp the cube's edges.

The cause of the problem is the overlap. Do you have a workaround for this situation? After all, it is very likely that more than one grasp exists on a single object.
Thanks

something wrong with 'evaluate.py'

Hi, I have recently read your interesting work "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach" and have been trying to validate your results. However, I get the error below:
190317_1538__ggcnn_9_5_3__32_16_8 1 368.0/1750.0 21.03%
No pre-computed values. Computing now.
190317_1538__ggcnn_9_5_3__32_16_8 2 548.0/1750.0 31.31%
No pre-computed values. Computing now.
190317_1538__ggcnn_9_5_3__32_16_8 3 763.0/1750.0 43.60%
No pre-computed values. Computing now.
190317_1538__ggcnn_9_5_3__32_16_8 4 941.0/1750.0 53.77%
No pre-computed values. Computing now.
190317_1538__ggcnn_9_5_3__32_16_8 5 1071.0/1750.0 61.20%
No pre-computed values. Computing now.
190317_1538__ggcnn_9_5_3__32_16_8 6 1047.0/1750.0 59.83%
No pre-computed values. Computing now.
190317_1538__ggcnn_9_5_3__32_16_8 7 1165.0/1750.0 66.57%
No pre-computed values. Computing now.
190317_1538__ggcnn_9_5_3__32_16_8 8 1145.0/1750.0 65.43%
No pre-computed values. Computing now.
190317_1538__ggcnn_9_5_3__32_16_8 9 1157.0/1750.0 66.11%
No pre-computed values. Computing now.
Traceback (most recent call last):
File "evaluate.py", line 188, in
run()
File "evaluate.py", line 156, in run
model = load_model(model_checkpoint_fn)
File "/home/jinhuan/anaconda3/envs/tf-gpu-27/lib/python2.7/site-packages/keras/engine/saving.py", line 417, in load_model
f = h5dict(filepath, 'r')
File "/home/jinhuan/anaconda3/envs/tf-gpu-27/lib/python2.7/site-packages/keras/utils/io_utils.py", line 186, in init
self.data = h5py.File(path, mode=mode)
File "/home/jinhuan/anaconda3/envs/tf-gpu-27/lib/python2.7/site-packages/h5py/_hl/files.py", line 394, in init
swmr=swmr)
File "/home/jinhuan/anaconda3/envs/tf-gpu-27/lib/python2.7/site-packages/h5py/_hl/files.py", line 170, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 85, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = 'data/networks/190317_1538__ggcnn_9_5_3__32_16_8/epoch_00_model.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

When I run evaluate.py, it is OK at the beginning and prints the progress shown above. However, at epoch 10 it stops working and prints the errors above. I then added a new file named epoch_00_model.hdf5 (modified from the others) into /networks/*, and it continued to work. I do not know why. Could you please tell me some details about it? I am looking forward to your reply.

memory overflow

Hello, I read your paper and am very interested in your work, but I run out of memory. I saw that #2 had the same question. I wonder if you could provide a TensorFlow-format file ('.tfrecords'), or is the only solution to get more memory? I would be grateful for your kindness.

deal with the data

When I download and extract the Cornell Grasping Dataset into a single directory and convert the PCD files to depth images by running
python -m utils.dataset_processing.generate_cornell_depth data/cornell
and then run train_ggcnn.py, I get this error:
INFO:root:Loading Cornell Dataset...
Traceback (most recent call last):
File /ggcnn/train_ggcnn.py", line 269, in
run()
File "/ggcnn/train_ggcnn.py", line 202, in run
include_depth=args.use_depth, include_rgb=args.use_rgb)
File "/ggcnn/utils/data/cornell_data.py", line 26, in init
raise FileNotFoundError('No dataset files found. Check path: {}'.format(file_path))
FileNotFoundError: No dataset files found. Check path: /data/cornell
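
One possible cause, judging from the leading slash in the error: the loader was given the absolute path /data/cornell while the depth images live in data/cornell relative to the repository. An invocation matching the flags shown elsewhere in these issues would be:

    python train_ggcnn.py --description example --network ggcnn --dataset cornell --dataset-path data/cornell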

Processing of dataset labels.

Hello, I am following your work on GG-CNN. I have a question: when GG-CNN is trained on the Cornell and Jacquard datasets, it needs to process the datasets' original labels. Could you please tell me which program does that processing?

dim error when run train_ggcnn.py

Hi, I forked the repository from the RSS2018 branch.
I already completed steps 1, 2 and 3 under Training, as described in README.md.

But I get an error when I run python3 train_ggcnn.py:

Using TensorFlow backend.
Traceback (most recent call last):
File "train_ggcnn.py", line 92, in
x = Conv2D(no_filters[0], kernel_size=filter_sizes[0], strides=(3, 3), padding='same', activation='relu')(input_layer)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 575, in call
self.assert_input_compatibility(inputs)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 474, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=2

Can you tell me how to solve this problem?
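
For context, Conv2D expects a 4-D batch of shape (batch, height, width, channels), so a 2-D array has to be reshaped before it reaches the network. A minimal sketch (the 300 x 300 input size is an assumption):

    import numpy as np

    depth = np.zeros((300, 300))               # a single 2-D depth image
    batch = depth.reshape((1, 300, 300, 1))    # (batch, H, W, channels)
    assert batch.ndim == 4                     # the shape Conv2D expects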

question during training

INFO:root:Validating...
Traceback (most recent call last):
File "D:/Grasp/ggcnn-master/train_ggcnn.py", line 267, in
run()
File "D:/Grasp/ggcnn-master/train_ggcnn.py", line 249, in run
test_results = validate(net, device, val_data, args.val_batches)
File "D:/Grasp/ggcnn-master/train_ggcnn.py", line 101, in validate
lossd['pred']['sin'], lossd['pred']['width'])
File "D:\Grasp\ggcnn-master\models\common.py", line 22, in post_process_output
width_img = gaussian(width_img, 2.0)
File "C:\Users\dell\Anaconda3\lib\site-packages\skimage\filters_gaussian.py", line 104, in gaussian
image = img_as_float(image)
File "C:\Users\dell\Anaconda3\lib\site-packages\skimage\util\dtype.py", line 301, in img_as_float
return convert(image, np.float64, force_copy)
File "C:\Users\dell\Anaconda3\lib\site-packages\skimage\util\dtype.py", line 205, in convert
raise ValueError("Images of type float must be between -1 and 1.")
ValueError: Images of type float must be between -1 and 1.
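
For context: skimage's gaussian() runs its input through img_as_float, which rejects float images outside [-1, 1] unless told the range is intentional. Passing preserve_range=True, as the snippet quoted in the 'About the code' issue below does, sidesteps the check; a minimal sketch:

    import numpy as np
    from skimage.filters import gaussian

    width_img = np.random.rand(300, 300) * 150.0   # values far outside [-1, 1]
    # preserve_range=True keeps the original scale and skips the range check
    smoothed = gaussian(width_img, 2.0, preserve_range=True)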

How long did training take?

Hello. I'd like to reproduce the results from your paper.
Could you tell me the training time (and GPU spec) and the final score?
Thank you.

Error while converting pcd to depth image

Hello, I have a question when I run "Convert the PCD files to depth images by running python -m utils.dataset_processing.generate_cornell_depth".
The error shows "bash: syntax error near unexpected token `newline'".
I don't know what to do.
If you know something, please tell me. Thank you!!!

About the code

q_img = q_img.cpu().numpy().squeeze()
ang_img = (torch.atan2(sin_img, cos_img) / 2.0).cpu().numpy().squeeze()
width_img = width_img.cpu().numpy().squeeze() * 150.0

q_img = gaussian(q_img, 2.0, preserve_range=True)
ang_img = gaussian(ang_img, 2.0, preserve_range=True)
width_img = gaussian(width_img, 1.0, preserve_range=True)

Question 1: What does q_img mean?
Question 2: What is the effect of cpu().numpy().squeeze()?
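
For anyone landing here: q_img is the grasp quality map (one score per pixel, used to pick where to grasp), and the chained calls move a tensor from the GPU into a plain 2-D numpy array. An annotated sketch of the same chain:

    import torch

    q = torch.rand(1, 1, 300, 300)   # network output: (batch, channel, H, W)
    q = q.cpu()                      # copy from GPU memory to host memory
    q = q.numpy()                    # expose the tensor as a numpy array
    q_img = q.squeeze()              # drop the size-1 dims -> shape (300, 300)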

Error while converting pcd to depth image

Hello, I have a question when I run "Convert the PCD files to depth images by running python -m utils.dataset_processing.generate_cornell_depth".
The error shows "bash: syntax error near unexpected token `newline'".
I don't know what to do.
If you know something, please tell me. Thank you!!
(Two WeChat screenshots were attached.)

Problem with training on macOS

I came across a problem when training GG-CNN on the Cornell dataset on my MacBook Pro (15-inch, 2019):

python train_ggcnn.py --description training_example --network ggcnn --dataset cornell --dataset-path Cornell_Grasping_Dataset/

INFO:root:Loading Cornell Dataset...
INFO:root:Done
INFO:root:Loading Network...
Traceback (most recent call last):
File "train_ggcnn.py", line 272, in
run()
File "train_ggcnn.py", line 229, in run
net = net.to(device)
...
raise AssertionError("Torch not compiled with CUDA enabled")
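
A common workaround sketch (not necessarily how the script exposes it): select the device based on availability instead of hard-coding CUDA, since MacBooks ship without NVIDIA GPUs:

    import torch

    # Fall back to the CPU when CUDA is unavailable (e.g. on macOS)
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # net = net.to(device)   # then move the model exactly as train_ggcnn.py does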

Fine tuning on pretrained weights

Dear author,

May I ask some questions about training on customized data?

  • I have a specific application in mind, so I guess I should collect some depth data from my scene and train starting from the pretrained weights you provided. But I am quite new to deep learning, so I am not sure if that is correct. Any suggestions on starting my own application would be highly appreciated.

  • I also tried going down that path: I loaded your pretrained network and continued training it on the same dataset, the Cornell dataset. But surprisingly, the pretrained network did not perform "well" on that same dataset. Here is some log info.


INFO:root:Beginning Epoch 00
INFO:root:Epoch: 0, Batch: 100, Loss: 0.0716
INFO:root:Epoch: 0, Batch: 200, Loss: 0.1626
INFO:root:Epoch: 0, Batch: 300, Loss: 0.0423
INFO:root:Epoch: 0, Batch: 400, Loss: 0.0685
INFO:root:Epoch: 0, Batch: 500, Loss: 0.1163
INFO:root:Epoch: 0, Batch: 600, Loss: 0.0476
INFO:root:Epoch: 0, Batch: 700, Loss: 0.0870
INFO:root:Epoch: 0, Batch: 800, Loss: 0.2107
INFO:root:Epoch: 0, Batch: 900, Loss: 0.1240
INFO:root:Validating...
INFO:root:176/249 = 0.706827

The pretrained network only achieves 0.7 on the Cornell dataset (data02.tar.gz) in the first epoch. I was wondering: is that normal? Thank you!
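
For reference, a minimal fine-tuning sketch under the assumption that the released weights are a PyTorch state dict; the Conv2d here is a stand-in for the actual GG-CNN model class, and the file name is a placeholder:

    import torch
    import torch.nn as nn

    net = nn.Conv2d(1, 1, 3, padding=1)            # stand-in for the GG-CNN model
    torch.save(net.state_dict(), 'pretrained.pt')  # pretend: the released weights

    net.load_state_dict(torch.load('pretrained.pt'))
    # Fine-tune with a small learning rate so the pretrained weights are
    # nudged toward the new data rather than overwritten.
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)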

One problem >﹏<

Hi, I'm new to this field. I read this paper and tried to reproduce it, but met some problems. Could you tell me how to measure the per-image inference speed using eval_ggcnn, like the '6 ms for a single depth image' presented in the paper? I used the Jacquard dataset.

Problem converting tiff files

Hi,
First, thanks for the GG-CNN code.
I was trying to convert the dataset to obtain depth images. The problem is that when I run the command:
python -m utils.dataset_processing.generate_cornell_depth
changing the path to my own dataset path, I get no error during the conversion, but the resulting d.tiff files are empty.
Could you please help me solve this problem? Thank you.

python eval_ggcnn.py error

When I run python eval_ggcnn.py --network epoch_41_iou_1.00_statedict.pt --dataset jacquard --dataset-path /home/luai/luai/Samples/ --jacquard-output --iou-eval, I get:

AttributeError: 'collections.OrderedDict' object has no attribute 'compute_loss'

How can I solve this problem?
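
That AttributeError usually means torch.load() returned a bare state dict (an OrderedDict of weights) where the script expected a model object with methods. A sketch of the distinction, with a Conv2d standing in for the model class:

    import torch
    import torch.nn as nn

    net = nn.Conv2d(1, 1, 3, padding=1)           # stand-in for the model class
    torch.save(net.state_dict(), 'statedict.pt')

    obj = torch.load('statedict.pt')              # -> OrderedDict: weights only,
                                                  #    no methods like compute_loss
    net.load_state_dict(obj)                      # rebuild the model around them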

question about experiment

Hi Morrison,

I want to run the same experiments to compare my work with yours. It is easy to get roughly the same household objects, but for the adversarial objects I wonder if you still have the STL files somewhere on your computer, or remember the file names of the 3D models, so that I can easily find them in their databases.

Really appreciate your help.

Error while converting pcd to depth image

Hi,
I am getting this error while converting PCD files to depth images using generate_cornell_depth.py.

OpenCV Error: Unsupported format or combination of formats (Only 8-bit 1-channel and 3-channel input/output images are supported) in cvInpaint, file /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/photo/src/inpaint.cpp, line 751
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/pranav/GRASPING/ggcnn/utils/dataset_processing/generate_cornell_depth.py", line 19, in <module>
    di.inpaint()
  File "/home/pranav/GRASPING/ggcnn/utils/dataset_processing/image.py", line 185, in inpaint
    self.img = cv2.inpaint(self.img, mask, 1, cv2.INPAINT_NS)
cv2.error: /build/opencv-L2vuMj/opencv-3.2.0+dfsg/modules/photo/src/inpaint.cpp:751: error: (-210) Only 8-bit 1-channel and 3-channel input/output images are supported in function cvInpaint

Please help.
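
A workaround sketch (lossy, and not necessarily what the repo's image.py intends): cv2.inpaint only accepts 8-bit 1- or 3-channel images, so rescale the float depth into [0, 255], inpaint, and scale back:

    import cv2
    import numpy as np

    depth = np.random.rand(300, 300).astype(np.float32)  # float depth with holes
    mask = (depth == 0).astype(np.uint8)                 # 1 where depth is missing

    scale = depth.max() if depth.max() > 0 else 1.0
    depth8 = (depth / scale * 255).astype(np.uint8)      # to 8-bit for cvInpaint
    filled8 = cv2.inpaint(depth8, mask, 1, cv2.INPAINT_NS)
    depth_filled = filled8.astype(np.float32) / 255 * scale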

Incomplete files

Thank you for your great work and the PyTorch implementation! However, it seems there are some missing files, such as the model and utils.data.get_dataset. Could you please look into this problem? Thanks in advance.

error opening the epoch_29_model.hdf5 file!

Hi, I have a question about this:
<HDF5 file "epoch_29_model.hdf5" (mode r)>
Traceback (most recent call last):
File "1.py", line 6, in
img_ids = np.array(f['test/img_id'])
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/group.py", line 262, in getitem
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: 'Unable to open object (component not found)'

This happens when opening the epoch_29_model.hdf5 file.
Thanks!

About "train_ggcnn.py"

I extracted part of the Cornell dataset to form a mini Cornell dataset. I think I've successfully converted the PCD files to depth images, but when I run train_ggcnn.py to train, it generates the appropriate files under output/models/, yet the folders inside models/ are empty. I don't know what went wrong or what I should do to fix it.
If you know, I hope you can tell me. Thank you!

about loss

Hello dougsm:
Thank you very much for the code you provided. I have a question: why did you choose MSE as the loss function? The input is a (depth) image and the ground truth is an image; how does MSE ensure that the prediction moves toward the label? Could you tell me your understanding? Thank you very much!
Any help will be much appreciated.
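
For intuition, the targets are images too: Q, angle and width maps rendered from the labelled grasp rectangles. MSE then penalizes the per-pixel difference between the predicted and target maps, so minimizing it drags every predicted pixel toward its labelled value. A minimal sketch:

    import torch
    import torch.nn.functional as F

    pred_q = torch.rand(1, 1, 300, 300)   # predicted quality map
    gt_q = torch.rand(1, 1, 300, 300)     # target map rendered from the labels

    loss = F.mse_loss(pred_q, gt_q)       # mean per-pixel squared error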

problem with Cornell dataset

Traceback (most recent call last):
File "train_ggcnn.py", line 306, in
run()
File "train_ggcnn.py", line 231, in run
Dataset = get_dataset(args.dataset)
File "D:\develop\demo_learning\湾大先端院\ggcnn-master\utils\data_init_.py", line 9, in get_dataset
raise NotImplementedError('Dataset Type {} is Not implemented'.format(dataset_name))
NotImplementedError: Dataset Type Cornell is Not implemented

Pytorch weights

@dougsm
There's something wrong with your saved PyTorch model.
It seems that only you can load the file with torch.load().
(Screenshot attached.)

For example, what is the ggcnn_large in your weight file? I think it must be ggcnn2, but when I changed it manually, other errors turned up.
So I think it would be better if you could share the weights only.

About Depth Input processing in paper

Dear Doug,
When I read the paper, I had some questions about how the input depth image is processed. I have searched the internet but could not find an understandable answer.
I am a newbie in deep learning and robotics, so my questions may be a little silly; I hope you don't mind.
Could you help me answer these questions? Thanks a lot!!

Background: As you mention in the paper, you subtract the mean of each depth image, centering its values around 0 to provide depth invariance.
Question 1: I am a little confused about why we need to provide depth invariance.
Question 2: What would happen if we did not do this step (providing depth invariance)?
Question 3: In run_ggcnn.py, line 111, the depth is calculated with the code below:
depth_center = depth_center[:10].mean() * 1000.0
My question is: why is the scale set to 1000.0? Which factors decide this scale, the camera or something else? If I want to run this code on my own robot and camera, should I change it?

Thanks a lot for your answer!!!
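
For what it's worth, a sketch of the two steps being asked about; the 1000.0 assumes a camera that reports depth in metres being converted to millimetres, so check your own camera's units before reusing it:

    import numpy as np

    depth = np.random.rand(300, 300) + 0.5   # raw depth, here in metres

    # Depth invariance: subtracting the mean centres the values around 0,
    # so the network sees relative shape rather than absolute camera distance.
    depth_centred = depth - depth.mean()

    depth_mm = depth * 1000.0                # unit conversion, camera-dependent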

Labeling the dataset

Hi Doug,
Sorry in advance for my question. How did you label the dataset to obtain the txt files; did you use a specific tool? Also, did you apply any augmentation to the dataset to enlarge it?
I've found this grasp-labelling tool (https://github.com/ulaval-damas/grasp-rectangle-labelling), but the format I obtain is different from your txt files.

Bests,

Olivia

PyTorch & Keras

Hello! Thank you very much for the code you provided. I have a question: PyTorch uses Python 3, but ROS uses Python 2. How do you deal with this difference? Thank you very much!

Pretrained weights for pytorch version

Thanks for sharing the code with the community. Would you share the pretrained weights for the PyTorch version as well? I'd like to test the trained model on images of novel objects, for example.

grasp quality

Hello, I have completed the training, but how do I test the grasp quality?
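
In case it helps while waiting for an answer: the invocation quoted in a later issue runs IoU-based evaluation over a dataset, e.g. (the weight file and path are placeholders):

    python eval_ggcnn.py --network <trained_weights_file> --dataset cornell --dataset-path data/cornell --iou-eval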

Angle definition in training and network inference

Dear Doug,

First of all, congratulations on this awesome work and all the research you have been pursuing, including MVP and the EGAD dataset. I'm following in your footsteps and am highly inspired by your grasping methods.

Can you help me with this...?

Question 1) Why do you define the grasp angle as the vector components sin(2*\theta) and cos(2*\theta)? I know this was proposed by Hara et al. (2017) as a way to facilitate network training, but I didn't understand the 2*\theta argument inside the sin and cos.

You do this in this part of the code:

cos = self.numpy_to_torch(np.cos(2*ang_img))
sin = self.numpy_to_torch(np.sin(2*ang_img))

Question 2) In your paper, you define the grasp angle with the equation below (Fig. 1). However, if we calculate the grasp angle using this equation, we get values between -0.785 and 0.785 rad (am I missing something?).

Despite that, you do not consider the sin(2*\theta) and cos(2*\theta) when predicting the grasp angle in this code:
https://github.com/dougsm/ggcnn_kinova_grasping/blob/004139fedd5ad304f36de76b43466b4474b2081b/ggcnn_kinova_grasping/scripts/run_ggcnn.py#L126
Should I consider the grasp angle equation as follows (without the 2 in the cos and sin arguments)? Would this be correct?

Figure 1: grasp angle equation from the paper (image not reproduced).
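
For reference, a sketch of the encode/decode pair. The 2*\theta is there because \theta and \theta + pi describe the same antipodal grasp; doubling the angle maps that half-turn symmetry onto a full circle so the sin/cos targets stay continuous, and decoding with atan2(sin, cos)/2 (as the PyTorch snippet quoted in the 'About the code' issue does) lands back in (-pi/2, pi/2]:

    import numpy as np

    theta = 2 * np.pi / 3                         # same grasp as theta - pi
    s, c = np.sin(2 * theta), np.cos(2 * theta)   # network training targets

    theta_rec = np.arctan2(s, c) / 2.0            # decode: invert the doubling
    # theta_rec == theta - pi here, i.e. -pi/3:
    # the equivalent grasp angle expressed in (-pi/2, pi/2]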

some mistakes of this code

First, I'm very interested in this paper and want to reproduce the experiment, but I've run into this problem:

python train_ggcnn.py
Using TensorFlow backend.
Traceback (most recent call last):
  File "train_ggcnn.py", line 91, in <module>
    x = Conv2D(no_filters[0], kernel_size=filter_sizes[0], strides=(3, 3), padding='same', activation='relu')(input_layer)
  File "/home/user/anaconda3/envs/tensorflow-py2/lib/python2.7/site-packages/keras/engine/base_layer.py", line 414, in __call__
    self.assert_input_compatibility(inputs)
  File "/home/user/anaconda3/envs/tensorflow-py2/lib/python2.7/site-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=2

Please help me out.

The benefit of feeding the RGB data

Hi Dougsm,

First of all, thank you for sharing the GG-CNN code. I have one question: does adding RGB data to the grasping network improve grasping?

Best,

Jun
