
pytorch-retinanet


PyTorch implementation of RetinaNet object detection as described in Focal Loss for Dense Object Detection by Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollár.

This implementation is primarily designed to be easy to read and simple to modify.

Results

Currently, this repo achieves 33.5% mAP at 600px resolution with a ResNet-50 backbone. The published result is 34.0% mAP. The difference is likely due to the use of the Adam optimizer instead of SGD with weight decay.

Installation

  1. Clone this repo

  2. Install the required system packages:

apt-get install tk-dev python-tk

  3. Install the Python packages:

pip install pandas
pip install pycocotools
pip install opencv-python
pip install requests

Training

The network can be trained using the train.py script. Currently, two dataloaders are available: COCO and CSV. For training on COCO, use

python train.py --dataset coco --coco_path ../coco --depth 50

For training using a custom dataset, with annotations in CSV format (see below), use

python train.py --dataset csv --csv_train <path/to/train_annots.csv>  --csv_classes <path/to/train/class_list.csv>  --csv_val <path/to/val_annots.csv>

Note that the --csv_val argument is optional; if it is omitted, no validation will be performed.

Pre-trained model

A pre-trained model (coco_resnet_50_map_0_335_state_dict.pt) is available for download.

The state dict model can be loaded using:

retinanet = model.resnet50(num_classes=dataset_train.num_classes())
retinanet.load_state_dict(torch.load(PATH_TO_WEIGHTS))
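
If you are loading the weights on a machine without a GPU, note that torch.load accepts a map_location argument (an editorial note, not part of the original README):

retinanet.load_state_dict(torch.load(PATH_TO_WEIGHTS, map_location='cpu'))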

Validation

Run coco_validation.py to validate the code on the COCO dataset. With the above model, run:

python coco_validation.py --coco_path ~/path/to/coco --model_path /path/to/model/coco_resnet_50_map_0_335_state_dict.pt

This produces the following results:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.335
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.499
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.357
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.167
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.369
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.466
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.282
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.429
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.458
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.255
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.508
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.597

For CSV datasets (more info on those below), run the following script to validate:

python csv_validation.py --csv_annotations_path path/to/annotations.csv --model_path path/to/model.pt --images_path path/to/images_dir --class_list_path path/to/class_list.csv --iou_threshold <iou_thresh>

The --iou_threshold argument is optional; iou_thresh must satisfy 0 < iou_thresh < 1.

It produces the following results:

label_1 : (label_1_mAP)
Precision :  ...
Recall:  ...

label_2 : (label_2_mAP)
Precision :  ...
Recall:  ...

You can also configure the csv_eval.py script to save the precision-recall curve to disk.

Visualization

To visualize the network detection, use visualize.py:

python visualize.py --dataset coco --coco_path ../coco --model <path/to/model.pt>

This will visualize bounding boxes on the validation set. To visualize with a CSV dataset, use:

python visualize.py --dataset csv --csv_classes <path/to/train/class_list.csv>  --csv_val <path/to/val_annots.csv> --model <path/to/model.pt>

Model

The RetinaNet model uses a ResNet backbone. You can set the depth of the ResNet model using the --depth argument. Depth must be one of 18, 34, 50, 101 or 152. Note that deeper models are more accurate but slower and use more memory.
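
For reference, each depth corresponds to a constructor in model.py when building the model in Python (a sketch, assuming the resnet18 through resnet152 factory functions take num_classes and pretrained, as train.py suggests):

import model

retinanet_small = model.resnet18(num_classes=80, pretrained=True)   # fastest, least accurate
retinanet_large = model.resnet152(num_classes=80, pretrained=True)  # slowest, most accurate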

CSV datasets

The CSVGenerator provides an easy way to define your own datasets. It uses two CSV files: one file containing annotations and one file containing a class name to ID mapping.

Annotations format

The CSV file with annotations should contain one annotation per line. Images with multiple bounding boxes should use one row per bounding box. Note that indexing for pixel values starts at 0. The expected format of each line is:

path/to/image.jpg,x1,y1,x2,y2,class_name

Some images may not contain any labeled objects. To add these images to the dataset as negative examples, add an annotation where x1, y1, x2, y2 and class_name are all empty:

path/to/image.jpg,,,,,

A full example:

/data/imgs/img_001.jpg,837,346,981,456,cow
/data/imgs/img_002.jpg,215,312,279,391,cat
/data/imgs/img_002.jpg,22,5,89,84,bird
/data/imgs/img_003.jpg,,,,,

This defines a dataset with 3 images. img_001.jpg contains a cow. img_002.jpg contains a cat and a bird. img_003.jpg contains no interesting objects/animals.

Class mapping format

The class name to ID mapping file should contain one mapping per line. Each line should use the following format:

class_name,id

Indexing for classes starts at 0. Do not include a background class as it is implicit.

For example:

cow,0
cat,1
bird,2
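
With both files in place, a dataset can be constructed along these lines (a sketch of how train.py uses the repo's dataloader.py; check the argument names against your version):

from torchvision import transforms

from dataloader import CSVDataset, Normalizer, Augmenter, Resizer

dataset_train = CSVDataset(train_file='path/to/train_annots.csv',  # annotations file, format above
                           class_list='path/to/class_list.csv',    # class-mapping file, format above
                           transform=transforms.Compose([Normalizer(), Augmenter(), Resizer()]))
print(dataset_train.num_classes())  # 3 for the cow/cat/bird example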

Acknowledgements

Examples

(example detection images)


pytorch-retinanet's Issues

AssertionError during the validation phase

Thanks for the great repo. I am facing the following issue during the validation phase when training a model.

File "train.py", line 156, in main
mAP = csv_eval.evaluate(dataset_val, retinanet)
File "/home/a_khanss/pytorch-retinanet/csv_eval.py", line 187, in evaluate
all_detections = _get_detections(generator, retinanet, score_threshold=score_threshold, max_detections=max_det
File "/home/a_khanss/pytorch-retinanet/csv_eval.py", line 36, in _get_detections
scores, labels, boxes = retinanet(data['img'].permute(2, 0, 1).cuda().float().unsqueeze(dim=0))
File "/home/a_khanss/anaconda3/envs/pytorch-tf/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in
call
result = self.forward(*input, **kwargs)
File "/home/a_khanss/anaconda3/envs/pytorch-tf/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line
124, in forward
return self.gather(outputs, self.output_device)
File "/home/a_khanss/anaconda3/envs/pytorch-tf/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line
136, in gather
return gather(outputs, output_device, dim=self.dim)
File "/home/a_khanss/anaconda3/envs/pytorch-tf/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", lin
e 67, in gather
return gather_map(outputs)
File "/home/a_khanss/anaconda3/envs/pytorch-tf/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", lin
e 62, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
File "/home/a_khanss/anaconda3/envs/pytorch-tf/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", lin
e 54, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/home/a_khanss/anaconda3/envs/pytorch-tf/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 52
, in forward
assert all(map(lambda i: i.is_cuda, inputs))
AssertionError

Poor performance on Open Images

Hi Yann,

I'd like to first thank you for posting the RetinaNet code. I used the oid_dataset loader to train the detector on Open Images v4 for 2 days with 8 GPUs, but the performance is really poor (mAP ~= 0.4%); there are 0 detections for most of the classes. I have checked the code but didn't find any bugs. May I ask for your help? Many thanks!

Best,
Yvonne

error with model loading

  1. Running torch.load("coco_resnet_50_map_0_335.pt") gives:

UnicodeDecodeError                        Traceback (most recent call last)
----> 1 torch.load("coco_resnet_50_map_0_335.pt")

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in load(f, map_location, pickle_module)
    301         f = open(f, 'rb')
    302     try:
--> 303         return _load(f, map_location, pickle_module)
    304     finally:
    305         if new_fd:

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in _load(f, map_location, pickle_module)
    467     unpickler = pickle_module.Unpickler(f)
    468     unpickler.persistent_load = persistent_load
--> 469     result = unpickler.load()
    470
    471     deserialized_storage_keys = pickle_module.load(f)

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1124: ordinal not in range(128)

  2. Running

with open("coco_resnet_50_map_0_335.pt", 'rb') as f:
    model = torch.load(f.read().decode("latin1"))

gives:

ValueError                                Traceback (most recent call last)
      1 with open("coco_resnet_50_map_0_335.pt", 'rb') as f:
----> 2     model = torch.load(f.read().decode("latin1"))

/usr/local/lib/python3.6/dist-packages/torch/serialization.py in load(f, map_location, pickle_module)
    299     (sys.version_info[0] == 3 and isinstance(f, pathlib.Path)):
    300         new_fd = True
--> 301         f = open(f, 'rb')
    302     try:
    303         return _load(f, map_location, pickle_module)

ValueError: embedded null byte

targets issue

Hi, I trained your RetinaNet code and it works well, but I have one question.

You divide the regression targets by [0.1, 0.1, 0.2, 0.2] in the loss function, and I cannot understand why. Could you explain?
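
For context (an editorial note, not the author's reply): this division is the standard variance rescaling of box-regression targets inherited from Faster R-CNN and SSD; dividing each target component by its empirical standard deviation brings the four components onto a comparable, roughly unit scale. A generic sketch of the encoding, not the repo's exact losses.py code:

import torch

def encode_boxes(anchors, gt_boxes, std=(0.1, 0.1, 0.2, 0.2)):
    # anchors, gt_boxes: (N, 4) tensors in (x1, y1, x2, y2) format
    aw, ah = anchors[:, 2] - anchors[:, 0], anchors[:, 3] - anchors[:, 1]
    ax, ay = anchors[:, 0] + 0.5 * aw, anchors[:, 1] + 0.5 * ah
    gw, gh = gt_boxes[:, 2] - gt_boxes[:, 0], gt_boxes[:, 3] - gt_boxes[:, 1]
    gx, gy = gt_boxes[:, 0] + 0.5 * gw, gt_boxes[:, 1] + 0.5 * gh
    targets = torch.stack([(gx - ax) / aw, (gy - ay) / ah,
                           torch.log(gw / aw), torch.log(gh / ah)], dim=1)
    # dividing by the per-component std rescales the targets to roughly unit
    # variance, balancing the regression loss across the four coordinates
    return targets / torch.tensor(std)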

evaluating problem?

When I run evaluation, the function evaluate_coco_person(dataset, model, threshold=0.05) uses threshold=0.05. Why 0.05, and not 0.5 as in visualize.py?

for idx, data in enumerate(dataloader_val):

    with torch.no_grad():
        st = time.time()
        scores, classification, transformed_anchors = retinanet(data['img'].cuda().float())
        print('Elapsed time: {}'.format(time.time() - st))
        idxs = np.where(scores > 0.5)
        img = np.array(255 * unnormalize(data['img'][0, :, :, :])).copy()

        img[img < 0] = 0
        img[img > 255] = 255

import create_extension error

Hi,
When I run build.sh, it shows this error:

Compiling nms kernels by nvcc...
Traceback (most recent call last):
  File "build.py", line 3, in <module>
    from torch.utils.ffi import create_extension
  File "/home/tensor-server/.pyenv/versions/anaconda2-5.0.0/envs/jingya_caffe2/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
    raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.

How does the code combine 9 images with different brightness into 1 image before feeding it to the model?

In visualize.py, line 68, I saved data['img'] before it was fed to the model, and found that the original image was not only processed with Normalization and Resize, but also brightness-adjusted, with 9 versions placed together in a 3-by-3 grid.

However, I cannot figure out how and where this is done. I need to know because I want to implement a real-time webcam prediction demo.

Also, does it mean that during training the model actually sees only one third of the resolution, even after resizing?

Thanks for the answer.

ModuleNotFoundError: No module named 'lib.nms._ext'

In a conda environment with PyTorch 1.0, running

python train.py --dataset coco --coco_path ../coco --depth 50

from the pytorch-retinanet parent directory produces this error. There appears to be no _ext module within the nms folder.

Training time

Any estimates of training time? I am trying to implement RetinaNet as well, and I took some parts of the code from this repo. For me, a batch size of 32 takes around 1 hour per epoch with a pre-trained ResNet-50.

`pytorch` version?

Which version of pytorch is this intended to be used with? Thanks!

EDIT: PyTorch 0.4

The nms seems not working?

Thanks for your nice code.
In visualize.py I changed the code to:

retinanet = model.resnet50(num_classes=dataset_val.num_classes())
retinanet.load_state_dict(torch.load(parser.model))

And I ran this command:

python visualize.py --dataset coco --coco_path /data/COCO --model models/coco_resnet_50_map_0_335_state_dict.pt

but I got the result shown in the attached screenshot.

classification loss sticks around 2.30212

After 1 or 2 epochs in which the classification loss decreases as expected, it increases to around 2.30212, gets stuck there, and never changes.
I use my own dataset, reorganized into the CSV format described in the README.
Does anybody know what is going on?

'int' object is not subscriptable

In visualize.py, line 65, scores, classification, transformed_anchors = retinanet(data['img'].float()) raises this error: 'int' object is not subscriptable. Why?

model initialization

@yhenon Hi,
In model.py, in ResNet, lines 193-207:

for m in self.modules():
    if isinstance(m, nn.Conv2d):
        n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
        m.weight.data.normal_(0, math.sqrt(2. / n))
    elif isinstance(m, nn.BatchNorm2d):
        m.weight.data.fill_(1)
        m.bias.data.zero_()

prior = 0.01

self.classificationModel.output.weight.data.fill_(0)
self.classificationModel.output.bias.data.fill_(-math.log((1.0 - prior) / prior))

self.regressionModel.output.weight.data.fill_(0)
self.regressionModel.output.bias.data.fill_(0)

As for the FPN module in the network, it seems that the biases of its conv layers are not initialized.
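
For context on the prior initialization above (an editorial note): setting the classification bias to -log((1 - prior) / prior) makes every anchor's initial predicted foreground probability come out to exactly prior, the focal-loss paper's trick for stabilizing early training:

import math

prior = 0.01
bias = -math.log((1.0 - prior) / prior)  # ~ -4.595
print(1.0 / (1.0 + math.exp(-bias)))     # sigmoid(bias) == 0.01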

Dealing with crowded objects

Hello Yann,

Firstly, let me thank you for sharing this excellent project!

I have a question regarding the annotations of crowded objects in MS COCO.
In your code, you ignore them when loading the data.

If I understand correctly, this means these annotations are not used as positive targets for the anchors during training (which is OK), but it also means the network will be penalized if it detects a (valid) object inside these crowded areas (which is potentially harmful). Am I right? If so, wouldn't it be safer to completely ignore images with crowded annotations?

Thank you in advance!

Best,
Grigory

classification branch

@yhenon Hi,

For focal loss, the classification branch uses the sigmoid function.

Why is no explicit background class included in the classification branch? E.g., for COCO, num_classes=80 instead of 81.
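
For context (an editorial note): with per-class sigmoid outputs, background needs no explicit class; a background anchor is simply one whose target vector is all zeros, which is why num_classes=80 suffices:

import torch
import torch.nn.functional as F

num_classes = 80
logits = torch.zeros(num_classes)             # one anchor's class logits
background_target = torch.zeros(num_classes)  # background = every class is "off"
loss = F.binary_cross_entropy_with_logits(logits, background_target)
print(loss)  # well-defined without an explicit background class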

Purpose of padding at dataloader.py

I am just wondering why we're using padding in the Resizer() function in dataloader.py.

...
pad_w = 32 - rows%32
pad_h = 32 - cols%32

new_image = np.zeros((rows + pad_w, cols + pad_h, cns)).astype(np.float32)
new_image[:rows, :cols, :] = image.astype(np.float32)
...
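
A likely reason (an editorial note, not the author's reply): the ResNet backbone halves the spatial resolution five times, so padding height and width up to multiples of 32 lets every stride-2 stage divide evenly and keeps the FPN feature maps consistently aligned:

rows, cols = 607, 911            # arbitrary post-resize dimensions
pad_w = 32 - rows % 32           # naming follows the snippet above
pad_h = 32 - cols % 32
assert (rows + pad_w) % 32 == 0  # both dimensions now survive five
assert (cols + pad_h) % 32 == 0  # stride-2 downsamplings exactly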

add other solutions to RetinaNet

@yhenon Hi,

Some one-stage detectors, like SSD, commonly perform some form of hard negative mining, e.g., selecting positive and negative samples at a 1:3 ratio.

The RetinaNet code does not seem to use this strategy. If such strategies were added, would the results be better? I wonder what you think.

Many, many thanks.

About anchor

When I change the anchor ratios or scales in anchors.py, I get errors like these:

The shape of the mask [275256] at index 0 does not match the shape of the indexed tensor [206442, 1] at index 0
The shape of the mask [249660] at index 0 does not match the shape of the indexed tensor [187245, 1] at index 0
The shape of the mask [192012] at index 0 does not match the shape of the indexed tensor [144009, 1] at index 0
The shape of the mask [236904] at index 0 does not match the shape of the indexed tensor [177678, 1] at index 0
The shape of the mask [236904] at index 0 does not match the shape of the indexed tensor [177678, 1] at index 0

Do you know what they mean? Thank you.

the anchors coordinate transform

in file "anchors.py",at line:66-68
the calculate is right?

transform from (x_ctr, y_ctr, w, h) -> (x1, y1, x2, y2)

anchors[:, 0::2] -= np.tile(anchors[:, 2] * 0.5, (2, 1)).T
anchors[:, 1::2] -= np.tile(anchors[:, 3] * 0.5, (2, 1)).T

in my opinion ,it should be :

transform from (x_ctr, y_ctr, w, h) -> (x1, y1, x2, y2)

anchors[:, 0::2] -= np.tile(anchors[:, 2] * 0.5, (2, 1)).T
anchors[:, 1::3] -= np.tile(anchors[:, 3] * 0.5, (2, 1)).T

is it right?
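
For what it's worth (an editorial check, not from the thread): the original slicing is correct. Starting from rows of the form (0, 0, w, h), 0::2 touches columns 0 and 2 while 1::2 touches columns 1 and 3, producing (-w/2, -h/2, w/2, h/2); 1::3 would select only column 1 in a 4-column array, leaving y2 untransformed:

import numpy as np

anchors = np.zeros((1, 4))
anchors[:, 2:] = (4.0, 6.0)  # (w, h), anchor centered at the origin
anchors[:, 0::2] -= np.tile(anchors[:, 2] * 0.5, (2, 1)).T
anchors[:, 1::2] -= np.tile(anchors[:, 3] * 0.5, (2, 1)).T
print(anchors)  # [[-2. -3.  2.  3.]], a valid (x1, y1, x2, y2)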

Freeze backbone weights

Hi,

first of all, congratulations on this great work :)

I was wondering whether it is possible to "freeze" the backbone weights so that only the FPN + regression submodel + classification submodel get fine-tuned. Is there any way to achieve this?

Thank you in advance.
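
One possible approach (a sketch, assuming the backbone inside model.py keeps the usual torchvision layer names conv1/bn1/layer1..layer4; adjust to the actual attribute names in your version):

import torch

for name, param in retinanet.named_parameters():
    if name.startswith(('conv1', 'bn1', 'layer1', 'layer2', 'layer3', 'layer4')):
        param.requires_grad = False  # exclude backbone weights from gradient updates

# hand the optimizer only the parameters that still require gradients
optimizer = torch.optim.Adam((p for p in retinanet.parameters() if p.requires_grad), lr=1e-5)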

How about the performance on COCO 2014?

Thanks for your effective repo. You said:

Currently, this repo achieves 33.7% mAP at 600px resolution with a Resnet-50 backbone.

Unfortunately, with ResNet-50 my model only reaches 28.4% after 25 epochs with your default settings. What is wrong?

any plan to write demo.py?

When I use trained models to predict on raw images, there are many bugs. Do you have any plan to write a demo that predicts on a single image? Thanks.

About Torchvision

Hi,
I am using the UA-DETRAC dataset, which targets vehicle detection. I have some confusion when using the CSV loader with ResNet-50 for training. When transforming data with torchvision, why do Normalizer()'s mean and std have the constant values [0.485, 0.456, 0.406] and [0.229, 0.224, 0.225]? Furthermore, Resizer() is implemented for COCO, where image sizes differ, but the UA-DETRAC dataset has a uniform image size. I want to know whether I still need the Resizer.

Bounding box values as Float

Most datasets have integer bounding-box coordinates, like 56, 43, 22, 77, but for the Open Images challenge the coordinates are floats like 0.0012, 0.4120, 0.1250, 0.2224. During training, my classification loss decreases but my regression loss stays at 0. Any suggestion on what I could modify to make this work? For now, in the dataloader script, lines 274-277, I have changed the int casts to float so it accepts float values. Thanks in advance.

Changing the values of batch size

Hi Yann!

I have 2 GPUs, and training currently takes about 4 GB of GPU memory. Even if I increase the batch size, the code does not throw any error, but it also does not use more GPU memory. Anything obvious I'm missing here?

Thanks

Pascal VOC2007 performance is not as good

Hi. First of all thanks for your clean and clear repository.

I am trying to evaluate the models on VOC2007; however, I am unable to reach the expected performance. SSD with a ResNet-50 backbone on Pascal VOC achieves 79.7% mAP (reported here: https://github.com/ShuangXieIrene/ssds.pytorch#performance), but in my experiments it always saturates around 55 mAP. I also tried ResNet-18 and ResNet-34; the first saturates around 50, the second goes up to 55 but saturates thereafter.

The only possible difference is the input image size. I believe you are resizing images to 608, though I highly doubt that is what is causing the problem.

Would like to know your thoughts on this.

Also, could you share your parameters for training on COCO 2017? In particular, how many epochs did you run it for?

Predict bounding boxes of the different objects for test set

Thanks for this great work; I really appreciate your efforts. I am wondering if you can describe a brief procedure for predicting bounding boxes of the different objects on a held-out test set (having only images, no annotations at all) using the trained model. We used the CSV loader to train the model.
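
A rough outline (an editorial sketch, not the author's procedure: the preprocessing mirrors the repo's Normalizer and Resizer, but the constants, model construction, and eval-mode return signature should be checked against the repo's code):

import cv2
import numpy as np
import torch

import model  # the repo's model.py

# build the model and load CSV-trained weights (path and num_classes are hypothetical)
retinanet = model.resnet50(num_classes=3)
retinanet.load_state_dict(torch.load('weights.pt', map_location='cpu'))
retinanet.eval()  # move the model and input to .cuda() here if a GPU is available

# preprocess: BGR -> RGB, scale to [0, 1], then ImageNet mean/std normalization
img = cv2.imread('test.jpg')[:, :, ::-1] / 255.0
img = (img - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])

# resize so the short side is 608, then pad each dimension up to a multiple of 32
rows, cols = img.shape[:2]
scale = 608.0 / min(rows, cols)
img = cv2.resize(img, (int(round(cols * scale)), int(round(rows * scale))))
rows, cols = img.shape[:2]
padded = np.zeros((rows + (32 - rows % 32) % 32,
                   cols + (32 - cols % 32) % 32, 3), dtype=np.float32)
padded[:rows, :cols, :] = img

x = torch.from_numpy(padded).permute(2, 0, 1).unsqueeze(0).float()
with torch.no_grad():
    scores, labels, boxes = retinanet(x)  # eval-mode outputs, as used in csv_eval.py
keep = scores > 0.5
print(labels[keep], boxes[keep] / scale)  # boxes back in original-image coordinates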

Download dataset

Which dataset exactly is this code expecting? Is there a way to download it programmatically with pycocotools?

Thanks!

EDIT: From looking at the code, it looks like you're training on train2017 and testing on val2017.

More trained models

Thank you very much for sharing. I would like to ask whether you have a trained ResNet-101 or ResNet-152 model available; if so, could you provide a link? Thank you! @yhenon

Can training resume from a checkpoint?

I changed some logging information and interrupted the training process. If I want to load from an existing '.pt' file, where should I add torch.load(parser.model)?
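
One way to do it (an editorial sketch: in train.py, right after the model is constructed; whether you need load_state_dict or a plain torch.load depends on whether the checkpoint was saved as a state dict or as a pickled module):

import torch

checkpoint = torch.load('csv_retinanet_10.pt')  # hypothetical checkpoint filename
if isinstance(checkpoint, dict):
    retinanet.load_state_dict(checkpoint)       # checkpoint is a state dict
else:
    retinanet = checkpoint                      # checkpoint is a pickled nn.Module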

Size must be non-negative

With pytorch=0.4.1, in

annot_padded = torch.ones((len(annots), max_num_annots, 5)) * -1

can max_num_annots end up being a negative number?

about dataloader.py

@yhenon Hi,

In dataloader.py, in Resizer, after resizing you have the following code:

pad_w = 32 - rows%32  # 32
pad_h = 32 - cols%32  # 17

new_image = np.zeros((rows + pad_w, cols + pad_h, cns)).astype(np.float32)  # (640, 928, 3)
new_image[:rows, :cols, :] = image.astype(np.float32)

This feels like a padding operation. What is its purpose?

undefined symbol: PyInt_FromLong

Hi, thanks for your code!

I'm getting this error when I try to run on my custom data:

Traceback (most recent call last):
  File "train.py", line 19, in <module>
    import model
  File "/media/sdc/Ryan/pytorch-retinanet/model.py", line 9, in <module>
    from lib.nms.pth_nms import pth_nms
  File "/media/sdc/Ryan/pytorch-retinanet/lib/nms/pth_nms.py", line 2, in <module>
    from ._ext import nms
  File "/media/sdc/Ryan/pytorch-retinanet/lib/nms/_ext/nms/__init__.py", line 3, in <module>
    from ._nms import lib as _lib, ffi as _ffi
ImportError: /media/sdc/Ryan/pytorch-retinanet/lib/nms/_ext/nms/_nms.so: undefined symbol: PyInt_FromLong

Anything obvious I'm missing?

Pretrained models requested

Hello,
Thanks for your simple and easy-to-modify code.
I'm currently working on a vision guidance system built on this repo.
The results of training from scratch on our dataset are acceptable to us, but we want to further compare against fine-tuning from a model pretrained on COCO.

I have tried manually adapting the weights from keras-retinanet (also matching the input scale and RGB-to-BGR preprocessing), but the bbox positions are not good (see this demo dog).
It would be wonderful if you could release your pretrained COCO model.

Thank you very much.

cpu for nms

Hello,
thanks for your hard work. I plan to compile the NMS module on a machine without a GPU. What can I do? Can you provide help?
Looking forward to your reply. Thanks!

VOC type mAP is only 0.25 and coco type is 0.11

Hi,

Thanks for the repo. I am trying to calculate mAP on the validation set and got the following results:

VOC-style mAP is only 0.25 and COCO-style mAP is 0.11.

I have done the following things.

  • Read an image
  • Normalize using default mean and std
  • Resize using padding
  • network outputs
  • nms and stuff
  • rescale predicted bbox to original img dim.

I am not sure where I am making a mistake. Can you please help? Also, which script are you using to calculate mAP?

Regards,
Prakash V

the scale of visualize.py

@yhenon

In the output of visualize.py, the image is scaled to a certain size (min_side=608, max_side=1024, as in dataloader.py); it is not the original size.

Why is that?

Classification loss increasing on new data set?

Hi,

I am using the UEC-256 database, which provides bounding boxes for food items. I split the dataset 75:25 for training and validation, and I am currently using the CSV loader with ResNet-50 for training. My batch size is 2 and I am running on an Nvidia Tesla P100.

While training, the loss drops as expected, but on validation the classification loss increases while the regression loss stays around the same point; I think it is overfitting the data. Any idea how to solve this? Do I have to change the normalizing and unnormalizing factors according to my dataset?

Thank you. Any help is appreciated.
