cityu-aim-group / sigma
[CVPR'22 Oral] SIGMA: Semantic-complete Graph Matching for Domain Adaptive Object Detection
License: MIT License
To train a source-only result (plain FCOS training), should I use tools/train_net.py with FCOS_ON: False in the yaml file, or is tools/train_net_da.py with FCOS_ON: False enough?
Hello, I followed the steps in the tutorial, and training from the command line as shown there works fine. However, when I train with PyCharm's built-in run button (the green triangle) after configuring the arguments, I get the error above. Is there any way to solve this?
Hi, I have read a series of works on graph-matching-based domain adaptation; this is wonderful work and very enlightening. I am very interested in a FasterRCNN-based code implementation. Could you release it?
Hello, if I want to train with 4 GPUs, could you provide the launch command for multi-GPU training?
Hello, could you share the Sim10k dataset? It can no longer be downloaded from the official site. If posting a cloud-drive link here is inconvenient, this is my email: [email protected]
As the title says, when I run fcos_demo from the original maskrcnn repo, it fails with:
Traceback (most recent call last):
File "demo/fcos_demo.py", line 128, in <module>
main()
File "demo/fcos_demo.py", line 110, in main
min_image_size=args.min_image_size
File "/home/e401/Desktop/wrs/projects/SIGMA/demo/predictor.py", line 117, in __init__
_ = checkpointer.load(cfg.MODEL.WEIGHT)
File "/home/e401/Desktop/wrs/projects/SIGMA/fcos_core/utils/checkpoint.py", line 318, in load
self._load_model(checkpoint, load_dis)
File "/home/e401/Desktop/wrs/projects/SIGMA/fcos_core/utils/checkpoint.py", line 422, in _load_model
load_state_dict(self.model["backbone"], checkpoint.pop("model_backbone"))
TypeError: 'GeneralizedRCNN' object is not subscriptable
Also, is there an interface for the point-matching visualization shown in the paper? How do I use it?
Thanks in advance!
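For reference, the "'GeneralizedRCNN' object is not subscriptable" message usually means the checkpointer received a single module where it expects the dict of named sub-modules that train_net_da.py builds, since _load_model indexes self.model["backbone"]. A framework-free sketch of the mismatch (the class names here are stand-ins, not the repo's):

```python
class Checkpointer:
    # Mimics how SIGMA's checkpoint loading indexes sub-modules by name,
    # e.g. load_state_dict(self.model["backbone"], ...).
    def __init__(self, model):
        self.model = model

    def load_backbone(self, state):
        self.model["backbone"].load_state_dict(state)


class Net:
    # Stand-in for a single nn.Module such as GeneralizedRCNN.
    def load_state_dict(self, state):
        self.state = state


# Passing a bare model, as the demo script does, reproduces the error:
try:
    Checkpointer(Net()).load_backbone({})
    msg = ""
except TypeError as e:
    msg = str(e)  # "'Net' object is not subscriptable"

# Passing the dict of sub-modules, as the DA training script does, works:
ck = Checkpointer({"backbone": Net()})
ck.load_backbone({"w": 1})
```

So the demo would likely need the model packed into the same dict layout the DA checkpointer saves, rather than a single wrapped module.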
There is no VOC format annotation download link.
Please check again.
Best,
Minseok
Could you provide the Sim10k ImageSets? I can't seem to find them on the internet. Thank you so much!!
Sorry to bother the authors again, haha. This time the problem is as follows:
Traceback (most recent call last):
File "tools/train_net_da.py", line 726, in <module>
main()
File "tools/train_net_da.py", line 715, in main
MODEL = train(cfg, args.local_rank, args.distributed, args.test_only,args.use_tensorboard)
File "tools/train_net_da.py", line 601, in train
meters,
File "/home/e401/Desktop/wrs/projects/SIGMA/fcos_core/engine/trainer.py", line 299, in do_train
model, (images_s, images_t), targets=targets_s, return_maps=True)
File "/home/e401/Desktop/wrs/projects/SIGMA/fcos_core/engine/trainer.py", line 69, in foward_detector
(features_s, features_t), middle_head_loss = model_middle_head(images, (features_s,features_t), targets=targets, score_maps=score_maps )
File "/home/e401/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/e401/Desktop/wrs/projects/SIGMA/fcos_core/modeling/rpn/fcos/graph_matching_head.py", line 229, in forward
features, feat_loss = self._forward_train(images, features, targets, score_maps)
File "/home/e401/Desktop/wrs/projects/SIGMA/fcos_core/modeling/rpn/fcos/graph_matching_head.py", line 348, in _forward_train
matching_loss_quadratic = self._forward_qu(nodes_1, nodes_2, edges_1.detach(), edges_2.detach(), affinity)
File "/home/e401/Desktop/wrs/projects/SIGMA/fcos_core/modeling/rpn/fcos/graph_matching_head.py", line 624, in _forward_qu
sin2 = torch.sqrt(1.- F.cosine_similarity(triangle_2, triangle_2_tmp).pow(2)).sort()[0]
RuntimeError: The size of tensor a (8) must match the size of tensor b (9) at non-singleton dimension 0
Other information:
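The RuntimeError above says the two inputs to F.cosine_similarity carry 8 and 9 entries at dimension 0, i.e. the triangle/node sets built for the two sides ended up with different counts, while the element-wise comparison needs equal batch sizes. Whether truncating both to the common length is the right fix inside the matching head is an assumption; a framework-free sketch of the idea:

```python
import math


def cosine(u, v):
    # Cosine similarity of two equal-length vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0


def paired_cosine(set_a, set_b):
    # F.cosine_similarity compares row i of a with row i of b, so both
    # batches must share the first dimension; truncate to the overlap.
    n = min(len(set_a), len(set_b))
    return [cosine(u, v) for u, v in zip(set_a[:n], set_b[:n])]


a = [[1.0, 0.0]] * 8  # 8 nodes
b = [[1.0, 0.0]] * 9  # 9 nodes -> torch would raise without truncation
sims = paired_cosine(a, b)  # 8 similarity values
```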
I trained for about 50,000 iterations on a 2080Ti with batch size 2, and I found that the evaluation results are quite unstable. The AP50 fluctuated around 41 and reached a maximum of 43.5. I want to ask how you judge the convergence of the model and select the results to report.
Thanks a lot.
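One common way to make checkpoint selection less sensitive to such fluctuation (a general practice, not something prescribed by the SIGMA repo) is to smooth the per-evaluation AP50 curve with a moving average and pick the checkpoint at the smoothed peak rather than the single noisy maximum:

```python
def smoothed_best(ap_per_eval, window=5):
    """Return (eval index, smoothed AP) where the moving average peaks."""
    if len(ap_per_eval) < window:
        window = len(ap_per_eval)
    best_i, best_v = 0, float("-inf")
    for i in range(len(ap_per_eval) - window + 1):
        v = sum(ap_per_eval[i:i + window]) / window
        if v > best_v:
            # Report the last evaluation inside the best window.
            best_i, best_v = i + window - 1, v
    return best_i, best_v


# Hypothetical AP50 values, one per periodic evaluation:
aps = [40.8, 41.2, 43.5, 41.0, 41.4, 41.9, 42.1, 42.0]
idx, val = smoothed_best(aps, window=3)
```

The lone 43.5 spike is ignored; the smoothed curve peaks where neighboring evaluations agree, which is a more defensible checkpoint to report.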
I am running Python version 3.8.10 with torch version '1.9.0+cu111' and torchvision version 0.2.1. I installed additional packages as instructed in the INSTALL.md file.
Despite this, upon testing, I encountered the following error message.
2023-04-12 10:52:45,419 fcos_core.inference INFO: Start evaluation on cityscapes_foggy_val_cocostyle dataset(500 images).
0%| | 0/125 [00:00<?, ?it/s]
Traceback (most recent call last):
File "tools/test_net.py", line 112, in <module>
main()
File "tools/test_net.py", line 96, in main
inference(
File "/root/autodl-tmp/UDA/SIGMA/fcos_core/engine/inference.py", line 87, in inference
predictions = compute_on_dataset(cfg, model, data_loader, device, inference_timer)
File "/root/autodl-tmp/UDA/SIGMA/fcos_core/engine/inference.py", line 23, in compute_on_dataset
for _, batch in enumerate(tqdm(data_loader)):
File "/root/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/root/miniconda3/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/autodl-tmp/UDA/SIGMA/fcos_core/data/datasets/coco.py", line 94, in __getitem__
img, target = self.transforms(img, target)
File "/root/autodl-tmp/UDA/SIGMA/fcos_core/data/transforms/transforms.py", line 15, in __call__
image, target = t(image, target)
File "/root/autodl-tmp/UDA/SIGMA/fcos_core/data/transforms/transforms.py", line 59, in __call__
image = F.resize(image, size)
File "/root/miniconda3/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 188, in resize
if not _is_pil_image(img):
File "/root/miniconda3/lib/python3.8/site-packages/torchvision/transforms/functional.py", line 19, in _is_pil_image
return isinstance(img, (Image.Image, accimage.Image))
AttributeError: module 'accimage' has no attribute 'Image'
The issue appears to stem from accimage, an optional image backend that torchvision probes for. This is unexpected, because the torchvision version in use is the correct one.
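Judging from the traceback, a broken accimage install is present: it imports, but lacks the Image attribute that torchvision's _is_pil_image touches. The simplest fix is usually to remove it (pip uninstall accimage); a hedged session-level workaround is to hide the broken module so torchvision falls back to its PIL-only check:

```python
import sys
import types

# Simulate the broken install seen in the issue: accimage imports fine but
# has no `Image` attribute. (Remove this line in real use; it only makes
# the sketch self-contained.)
sys.modules.setdefault("accimage", types.ModuleType("accimage"))

broken = sys.modules.get("accimage")
if broken is not None and not hasattr(broken, "Image"):
    # A None entry in sys.modules makes any later `import accimage` raise
    # ImportError, which is exactly the path torchvision handles gracefully.
    sys.modules["accimage"] = None

try:
    import accimage  # noqa: F401
    accimage_hidden = False
except ImportError:
    accimage_hidden = True
```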
When I execute the step "python setup.py build develop", it gives an error saying that python>=3.8 is required, but your instructions specify python=3.7 when creating the environment. What should I do?
Dear author:
How can I solve the AttributeError: module 'torch._six' has no attribute 'PY3' error in 'imports.py'?
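This error typically means a newer PyTorch is installed: recent versions removed torch._six.PY3. One fix (an assumption about what imports.py contains, based on the upstream maskrcnn-benchmark helper it derives from) is to drop the torch._six gate and keep only the Python-3 branch:

```python
import importlib.util
import sys


def import_file(module_name, file_path, make_importable=False):
    # Python-3-only version of the helper: load a module from an explicit
    # file path without consulting torch._six.
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    if make_importable:
        sys.modules[module_name] = module
    return module
```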
Thanks for your excellent work! I am trying to run SIGMA and SIGMA++ on a new UDA benchmark for adapting Cityscapes to ACDC dataset. I find for different UDA tasks, you adopt different GA_DIS_LAMBDA, GRL_WEIGHT_{P3-P7}, MATCHING_LOSS_WEIGHT and BG_RATIO in configs. Could you share some insights about how to tune these hyperparameters for a new UDA task? Looking forward to your response!
Hello, I noticed that your target-domain dataset is split only into a training set and a test set, yet validation starts after every 100 iterations. Should the target-domain dataset instead be divided into three parts (training, validation, and test) to suit unsupervised domain-adaptive object detection?
Hello, thanks for your work.
I notice there is no command in your README for multi-GPU training. I use the following command to train:
python -m torch.distributed.launch --nproc_per_node 4 tools/train_net_da.py --config-file configs/SIGMA/sigma_vgg16_sim10k_to_cityscapes.yaml
However, I meet a problem
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss.
You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss.
If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function.
Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
How can I solve this problem to train on multiple gpus?
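Following the error message's own suggestion, the usual remedy is to pass find_unused_parameters=True where the training script wraps the model for distributed training. A hedged sketch of that wrapper (the function and its placement are assumptions; only the DistributedDataParallel arguments are standard torch API):

```python
def wrap_ddp(model, local_rank):
    # Sketch: wrap a model for multi-GPU training. find_unused_parameters
    # tolerates iterations where some parameters receive no gradient, which
    # can happen when a branch of the graph-matching head is skipped.
    import torch

    return torch.nn.parallel.DistributedDataParallel(
        model,
        device_ids=[local_rank],
        output_device=local_rank,
        broadcast_buffers=False,
        find_unused_parameters=True,  # the key change suggested by the error
    )
```

Note this only hides the symptom; parameters that never receive gradients will simply not be updated.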
@wymanCV
In your code, random seeds are set for training, yet the Faster RCNN-based code cannot be reproduced exactly and its results fluctuate, while the FCOS-based code can be reproduced? That is how it behaves on my dataset. Thank you.
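On reproducibility in general: pinning every random source is necessary but, on GPU, not always sufficient, because some CUDA kernels are nondeterministic even with fixed seeds, which can explain residual fluctuation in either detector. A hedged sketch of the usual seeding routine (the numpy/torch parts are guarded so the snippet runs anywhere):

```python
import os
import random


def set_seed(seed: int = 0) -> None:
    # Pin the Python-level RNGs, and the numpy/torch ones when available.
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    try:
        import numpy as np
        np.random.seed(seed)
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Deterministic cuDNN trades speed for repeatability.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass


set_seed(0)
first = random.random()
set_seed(0)
second = random.random()  # same value as `first`
```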
Hello, when running the command
python tools/train_net_da.py --config-file configs/SIGMA/sigma_vgg16_sim10k_to_cityscapes.yaml
I ran into the following error. I would appreciate any help you can offer, thank you!
The environment is as follows:
CUDA : 11.3, GCC : 7.5.0, Nvidia driver : 470.86
python : 3.7.9
conda:
cudatoolkit=10.1
pip:
torch==1.4.0
torchvision==0.2.1
scipy==1.6.0
Workarounds applied to get python setup.py build develop to compile (it now compiles successfully, but I am unsure whether they affect the actual run):
1. Added the '8.6' architecture in miniconda3/envs/SIGMA/lib/python3.7/site-packages/torch/utils/cpp_extension.py
2. In miniconda3/envs/SIGMA1/lib/python3.7/site-packages/torchvision/transforms/functional.py, changed the Pillow version attribute to __version__ to fix an error caused by the pillow version
Approaches already tried for the RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED (none worked):
1. Modified sigma_vgg16_sim10k_to_cityscapes.yaml as suggested
2. torch.backends.cudnn.enabled=False
3. pip install tensorboardX==2.1
4. Changed the cudatoolkit version to 11.1 and 11.3
Part of the log:
log.txt
The crash happens right after the pretrained weights are loaded. At first there was no traceback; after rerunning with a verbose error flag I got the following information.
Current thread 0x00007f1a5f1c6700 (most recent call first):
File "<frozen importlib._bootstrap>", line 372 in __init__
File "<frozen importlib._bootstrap_external>", line 606 in spec_from_file_location
File "/gs/home/rswang/proj/SIGMA/fcos_core/utils/imports.py", line 12 in import_file
File "/gs/home/rswang/proj/SIGMA/fcos_core/data/build.py", line 221 in make_data_loader_source
File "tools/train_net_da.py", line 559 in train
File "tools/train_net_da.py", line 717 in main
File "tools/train_net_da.py", line 728 in <module>
/var/spool/slurm/job8041367/slurm_script: line 23: 96067 Segmentation fault (core dumped) python tools/train_net_da.py --config-file configs/sigma_plus_plus/mine.yaml
The gcc 5.3 used for compilation showed no errors. The code's built-in environment check reports:
Cuda compilation tools, release 10.2, V10.2.89
CUDA used to build PyTorch: 10.1
CUDA runtime version: 10.2.89
Where could the problem be? I would appreciate any pointers from the authors.
ps: cudatoolkit==10.1, torch 1.4.0
Pseudo Label
From what I gather, the graphs for target images in the paper are constructed solely based on the pseudo label due to the lack of ground truth labels.
However, in cases where the domain gap between source and target is very large, such as going from a sunny day to a heavy rainy night, relying on the pseudo label can be inadequate and may lead to issues in graph construction.
Assuming my understanding is accurate, are there any potential solutions to address this problem and enhance SIGMA's performance?
Category Mismatch
I have an additional concern regarding the effectiveness of the node completion (DNC) strategy used in the paper. The datasets used for domain adaptation, such as Cityscape, FoggyCityscape, Sim10k, and KITTI, have similar categories.
As a result, I am uncertain whether DNC would perform well if the categories were significantly different between the source and target datasets.
While using the SIGMA code, I wanted to try source-only pretraining first, so I turned off DA_ON. During source-only training, however, after reaching about 40 mAP the loss stops decreasing. I tried adjusting batch_size and learning_rate, but the mAP will not improve. When I train the same source-only dataset with yolov5, it reaches 96 mAP.
Do you have any ideas about this issue?
fcos_core/data/transforms/transforms.py, line 61, in __call__, reports "AttributeError: 'list' object has no attribute 'resize'". When I change it to "target = F.resize(target, size)", it reports "TypeError: img should be PIL Image. Got <class 'list'>". Can you help me?
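From the two errors, target reaches the Resize transform as a plain Python list, while the transform expects a single BoxList-like object with its own .resize() method; F.resize is for images only, which explains the second error. One hedged workaround, assuming your dataset returns a list of such objects, is to map the resize over the elements. A framework-free sketch with a stand-in Box class:

```python
class Box:
    # Stand-in for a BoxList-like target that carries a .resize() method.
    def __init__(self, size):
        self.size = size

    def resize(self, size):
        return Box(size)


def resize_target(target, size):
    # A bare list has no .resize(); resize each element instead.
    if isinstance(target, (list, tuple)):
        return [t.resize(size) for t in target]
    return target.resize(size)


out = resize_target([Box((100, 100)), Box((50, 50))], (200, 200))
```

If the dataset was only supposed to return a single target, the cleaner fix is upstream, in whatever __getitem__ builds the list.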
Hello! How can I generate visualizations of the predicted detection boxes? And how should the bbox.json produced by inference be used?
Hello, my setup is a 3080 Ti with cuda=11.6. Running python setup.py build develop fails. The environments I tried are as follows:
conda install cudatoolkit=10.1 # 10.0, 10.1, 10.2, 11+ all can work!
pip install torch==1.4.0 # later is ok!
pip install --no-deps torchvision==0.2.1
(latest from the official site) conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 -c pytorch
(lowest version supporting cuda 11) conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "setup.py", line 77, in <module>
include_package_data=True,
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
self.run_command(cmd)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/command/build.py", line 132, in run
self.run_command(cmd_name)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command
self.distribution.run_command(command)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 84, in run
_build_ext.run(self)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
self.build_extensions()
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 653, in build_extensions
build_ext.build_extensions(self)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/command/build_ext.py", line 466, in build_extensions
self._build_extensions_serial()
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/command/build_ext.py", line 492, in _build_extensions_serial
self.build_extension(ext)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
_build_ext.build_extension(self, ext)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/setuptools/_distutils/command/build_ext.py", line 554, in build_extension
depends=ext.depends,
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 482, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1238, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "/home/yjy/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1538, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
I hope the authors can find a moment to offer some guidance. Thank you very much!
Hi, I would like to ask about the number of iterations used for warm-up. I have not run the code myself, but I was wondering whether 2000 iterations is where the global-alignment-only model converges. Did you find that number through experimentation, or did you follow the original config settings from previous works? Thanks.
Hi, amazing work! I followed the step-by-step installation instruction but met the following error. Do you have any ideas about this? Did I miss modify something? Thank you!
2022-05-11 21:08:26,489 fcos_core.trainer INFO: Start training
DA_ON: True
2022-05-11 21:10:25,284 fcos_core.trainer INFO: eta: 6 days, 20:56:49 iter: 20 loss_ds: 6.1977 (6.9134) node_loss: 2.0234 (2.0159) mat_loss_aff: 0.0991 (0.0992) mat_loss_qu: 0.0005 (0.0005) loss_cls: 0.6540 (0.7507) loss_reg: 1.3081 (1.9726) loss_centerness: 0.6631 (0.6738) loss_adv_P7: 0.2785 (0.2792) loss_adv_P6: 0.2755 (0.2747) loss_adv_P5: 0.2736 (0.2740) loss_adv_P4: 0.2750 (0.2750) loss_adv_P3: 0.2762 (0.2762) time: 4.4850 (5.9393) data: 0.0478 (1.7929) dis_loss: 0.0604 (0.0615) lr_backbone: 0.000833 lr_middle_head: 0.001667 lr_fcos: 0.000833 lr_dis: 0.000833 max mem: 10271
BLAS : Program is Terminated. Because you tried to allocate too many memory regions.
(the BLAS line above is repeated ten more times)
Segmentation fault (core dumped)
Hello, I completed a staged experiment without changing the original hyperparameters. First, a description of my experimental process:
1. I used a higher CUDA version in the environment setup.
2. The experiment was interrupted twice. The first run reached 15,200 iterations (cause of the interruption unknown) and automatically saved the best result, 49.2, at iteration 6,200. The second run reached 21,440 iterations, interrupted by "OSError: [Errno 28] No space left on device", saving the best result, 51.39, at iteration 10,400. The third run completed 30,000 iterations without interruption, saving the best result, 54.76, at iteration 23,800. The latter two runs resumed from checkpoints rather than starting from scratch.
Questions:
1. Why is only the best-performing checkpoint saved automatically, rather than saving by iteration count, e.g. one .pth every 10,000 iterations?
2. Is the code on GitHub the conference version?
3. After 60,000 iterations in total across the three runs, the result is 54.76. Is that reasonable? Is continuing training the only way to reach 57.1?
4. For ablation studies, where I only compare results to judge which modules are effective, how many iterations are appropriate, or how should the comparison be done? The full 100,000 iterations take too long.
The three training logs are as follows:
First run: train111.log
Second run: train112.log
Third run: train113.log
@wymanCV
Thank you for your contribution. I would like to compare with the EPM model. I noticed that EPM needs two training stages, while yours needs only one. If possible, may I ask how to change your model to train like EPM? Thank you for your reply.
I personally get your frustration as I think the cityscapes to foggy cityscapes adaptation is hard to achieve good semantic alignment on due to the amount of noise that distorts the semantic features.
Hello, I have the following questions, looking forward to your answers.
1. Validation is carried out every 100 iterations and the model with the highest accuracy is selected to ensure repeatability. Would repeatability be impossible to guarantee if validation only ran every 2500 iterations? Have you done similar experiments? Thank you.
2. If early stopping is used instead of validating every 100 iterations, is the effect similar? Thank you.
3. On one's own dataset, is it best to divide the target-domain set into three parts (training, validation, and test)? Modified as follows:
TRAIN_TARGET: ("cityscapes_foggy_train_cocostyle", "cityscapes_foggy_val_cocostyle"), TEST: ("cityscapes_foggy_test_cocostyle", )?
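If a held-out split is wanted, the DATASETS block of the yaml can point checkpoint selection and final reporting at different target splits. A hedged sketch only: the *_test_cocostyle split name is hypothetical and would first have to exist in fcos_core's dataset catalog:

```yaml
DATASETS:
  TRAIN_TARGET: ("cityscapes_foggy_train_cocostyle",)
  TEST: ("cityscapes_foggy_val_cocostyle",)  # tune and select checkpoints here
  # After model selection, rerun evaluation once with
  # TEST: ("cityscapes_foggy_test_cocostyle",) to report final numbers.
```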
@wymanCV
Hello, my environment is an NVIDIA RTX A6000. During the last step of the installation I encountered the following problem. Do you know how to solve it? Thank you. My cudatoolkit is 11.3 (conda install cudatoolkit=11.3).
File "/home/hc/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1027, in _get_cuda_arch_flags
raise ValueError("Unknown CUDA arch ({}) or GPU not supported".format(arch))
ValueError: Unknown CUDA arch (8.6) or GPU not supported
322 | T * data() const {
| ^~~~
gcc -pthread -B /home/hc/anaconda3/envs/SIGMA/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -Ifcos_core/csrc -I/home/hc/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/include -I/home/hc/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/hc/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/include/TH -I/home/hc/anaconda3/envs/SIGMA/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-11.8/include -I/home/hc/anaconda3/envs/SIGMA/include/python3.7m -c fcos_core/csrc/cpu/nms_cpu.cpp -o build/temp.linux-x86_64-cpython-37/fcos_core/csrc/cpu/nms_cpu.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
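The "Unknown CUDA arch (8.6)" error means the installed torch's cpp_extension.py predates Ampere. Besides adding '8.6' to its arch table by hand (as another user above did) or upgrading torch, one hedged workaround, assuming the toolchain otherwise works, is to compile PTX for an arch the old torch does know and let the driver JIT it for the A6000, via the TORCH_CUDA_ARCH_LIST environment variable:

```python
import os

# Hedged workaround sketch: pin the arch list before the extension build
# reads it (e.g. at the top of setup.py). Equivalent to running
#   TORCH_CUDA_ARCH_LIST="7.5+PTX" python setup.py build develop
# "+PTX" embeds forward-compatible PTX that the driver can JIT for sm_86.
os.environ["TORCH_CUDA_ARCH_LIST"] = "7.5+PTX"
```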