
headhunter's Issues

Loss is nan

Thanks for providing the code for your paper.
I managed to get started even though my GPU only has 8 GB; I just added A.SmallestMaxSize(max_size=400, p=1.0) to the transforms in dataset.py.

Everything else is used as-is on the SCUT-HEAD dataset (with the correct versions, e.g. torch==1.6.0).
Now I get the following error on the very first iteration:
Loss is nan, stopping training

I suspect this happens because the features passed to CustomRoIHead.forward in fast_rcnn.py are filled with NaN.
Setting ohem=soft_nms=upscale_rpn=False (thereby using torchvision's RoIHead) doesn't help either.
Did you experience anything like this during training?
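For debugging cases like this, one option is to fail fast on the first non-finite loss term so you can see which component (RPN, classifier, ...) blew up. A minimal sketch on plain floats (with tensors, call .item() first); the helper name and values are illustrative:

```python
import math

def check_losses(loss_dict):
    """Sum the per-term losses, raising on the first NaN/Inf term so the
    offending component is named instead of a generic 'Loss is nan'."""
    for name, value in loss_dict.items():
        if not math.isfinite(value):
            raise RuntimeError(f"loss term '{name}' is {value}; "
                               "inspect the batch or lower the learning rate")
    return sum(loss_dict.values())
```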

About GPU memory requirement.

How much GPU memory is recommended for training the model on the SCUT-HEAD dataset? I hit an out-of-memory error when trying to train on an idle GPU with 4041 MB of memory and batch size 1.
Besides, there are some minor bugs in the "head_detection/data/create_scuthead.py" file: args.dset_path is defined but args.dset_dir is used (and likewise for args.save_path vs. args.out_path).
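The naming mismatch reported above can be reproduced in a few lines; this is a hypothetical minimal sketch (flag names taken from the report, defaults invented): argparse exposes the attribute under the flag's name, so reading a different attribute raises AttributeError.

```python
import argparse

# The flag defines args.dset_path, so later code must read args.dset_path,
# not args.dset_dir -- the latter was never registered on the namespace.
parser = argparse.ArgumentParser()
parser.add_argument("--dset_path", default="ScutHead")
parser.add_argument("--out_path", default="out")
args = parser.parse_args([])

assert hasattr(args, "dset_path")
assert not hasattr(args, "dset_dir")  # accessing it raises AttributeError
```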

Can I use this model to train on our own dataset?

Two questions:
1. Can I use this model to train on our own dataset? If so, what are the differences from the three datasets mentioned by the author?
2. The ground-truth file is loaded in a different format than the one described in the README.

errors in training

raise ValueError("Masks not supported")

ValueError: Masks not supported

CroHD dataset preprocessing script

You don't seem to have uploaded the preprocessing .py script for the CroHD dataset yet; could you please upload it? Thank you very much.

How to set up the environment?

Hello, the setup instructions are quite confusing. Can you provide the PyTorch version, the required Python packages, or a config.yaml file?

ValueError: Anchors should be Tuple[Tuple[int]] ... with GPU RTX 3000 series

Hello,
If I try to run test.py with the pretrained_model you provided on CroHD, I face a problem with the anchors:

python test.py --test_dataset CroHD/test/HT21-11/img1 --plot_folder outputs --outfile outputs --pretrained_model FT_R50_epoch_24.pth --context cpm

Output, with the Traceback:

256
FT_R50_epoch_24.pth
0it [00:00, ?it/s]/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torch/nn/functional.py:3502: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details. 
  warnings.warn(
0it [00:02, ?it/s]
Traceback (most recent call last):
  File "test.py", line 176, in <module>
    test()
  File "/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "test.py", line 165, in test
    outputs = model(images)
  File "/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torchvision/models/detection/generalized_rcnn.py", line 97, in forward
    proposals, proposal_losses = self.rpn(images, features, targets)
  File "/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torchvision/models/detection/rpn.py", line 345, in forward
    anchors = self.anchor_generator(images, features)
  File "/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torchvision/models/detection/anchor_utils.py", line 150, in forward
    anchors_over_all_feature_maps = self.cached_grid_anchors(grid_sizes, strides)
  File "/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torchvision/models/detection/anchor_utils.py", line 139, in cached_grid_anchors
    anchors = self.grid_anchors(grid_sizes, strides)
  File "/mnt/sdb/anaconda3/envs/headhunter-TT/lib/python3.8/site-packages/torchvision/models/detection/anchor_utils.py", line 103, in grid_anchors
    raise ValueError("Anchors should be Tuple[Tuple[int]] because each feature "
ValueError: Anchors should be Tuple[Tuple[int]] because each feature map could potentially have different sizes and aspect ratios. There needs to be a match between the number of feature maps passed and the number of sizes / aspect ratios specified.
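This error usually means the anchor configuration has fewer nested tuples than the backbone emits feature maps (newer torchvision versions validate this strictly; torchvision 0.7.0 did not). The invariant can be illustrated with a small check — this is an illustrative re-statement, not the library code, and the example values are assumed:

```python
def validate_anchor_config(sizes, aspect_ratios, num_feature_maps):
    """One sizes tuple and one aspect_ratios tuple per feature map,
    mirroring torchvision's AnchorGenerator requirement."""
    if not (len(sizes) == len(aspect_ratios) == num_feature_maps):
        raise ValueError(
            f"Anchors should be Tuple[Tuple[int]]: got {len(sizes)} size "
            f"tuples and {len(aspect_ratios)} ratio tuples for "
            f"{num_feature_maps} feature maps")
    return True

# An FPN-style backbone with 5 maps therefore needs 5 nested tuples:
sizes = ((32,), (64,), (128,), (256,), (512,))
ratios = ((0.5, 1.0, 2.0),) * 5
validate_anchor_config(sizes, ratios, num_feature_maps=5)
```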

These are the contents of my virtual environment:

name: headhunter-TT
channels:
  - defaults
dependencies:
  - _libgcc_mutex=0.1=main
  - ca-certificates=2021.4.13=h06a4308_1
  - certifi=2020.12.5=py38h06a4308_0
  - ld_impl_linux-64=2.33.1=h53a641e_7
  - libffi=3.3=he6710b0_2
  - libgcc-ng=9.1.0=hdf63c60_0
  - libstdcxx-ng=9.1.0=hdf63c60_0
  - ncurses=6.2=he6710b0_1
  - openssl=1.1.1k=h27cfd23_0
  - pip=21.1.1=py38h06a4308_0
  - python=3.8.10=hdb3f193_7
  - readline=8.1=h27cfd23_0
  - setuptools=52.0.0=py38h06a4308_0
  - sqlite=3.35.4=hdfb4753_0
  - tk=8.6.10=hbc83047_0
  - wheel=0.36.2=pyhd3eb1b0_0
  - xz=5.2.5=h7b6447c_0
  - zlib=1.2.11=h7b6447c_3
  - pip:
    - chardet==4.0.0
    - cycler==0.10.0
    - decorator==4.4.2
    - h5py==3.2.1
    - idna==2.10
    - imageio==2.9.0
    - kiwisolver==1.3.1
    - matplotlib==3.4.2
    - networkx==2.5.1
    - numpy==1.20.3
    - ordered-set==4.0.2
    - pillow==8.2.0
    - plyfile==0.7.4
    - pyparsing==2.4.7
    - python-dateutil==2.8.1
    - pywavelets==1.1.1
    - requests==2.25.1
    - scikit-image==0.18.1
    - scipy==1.6.3
    - six==1.16.0
    - tifffile==2021.4.8
    - torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
    - torchmeta==1.7.0
    - torchvision==0.9.1
    - tqdm==4.61.0
    - trimesh==3.9.19
    - typing-extensions==3.10.0.0
    - urllib3==1.26.5
    - munkres
    - albumentations==0.5.2
    - pyyaml 

If I downgrade torch to 1.6.0 and torchvision to 0.7.0, I run into the following error:

RuntimeError: CUDA error: no kernel image is available for execution on the device

Moreover, with torch 1.6.0 I get this warning when I query the device via torch.cuda.get_device_name(0):

NVIDIA GeForce RTX 3xxx with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.

The code for tracking

Hi, Sentient07!
I only found the head-detection code on GitHub; do you have plans to publish the head-tracking code as well?

How to solve this fast_rcnn.py error

File "/home/project/HeadHunter-master/head_detection/models/fast_rcnn.py", line 496, in forward
boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
File "/home/project/HeadHunter-master/head_detection/models/fast_rcnn.py", line 458, in filter_proposals
keep = keep[:self.post_nms_top_n]
TypeError: slice indices must be integers or None or have an index method
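One common cause of this TypeError is post_nms_top_n holding a float (e.g. parsed from a YAML config) rather than an int. A hedged sketch of the local workaround, assuming the attribute is numeric (the helper name is mine):

```python
def top_k(keep, post_nms_top_n):
    """Slice bounds must be ints (or have __index__); a float config value
    triggers 'slice indices must be integers', so coerce before slicing."""
    return keep[: int(post_nms_top_n)]
```

In filter_proposals this corresponds to keep[:int(self.post_nms_top_n)].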

raise ValueError("Anchors should be Tuple[Tuple[int]] because each feature "

(screenshot from 2021-11-24 10-12-33)
anchors = self.grid_anchors(grid_sizes, strides)
File "/home/haojie/下载/ENTER/envs/head_detection2/lib/python3.8/site-packages/torchvision/models/detection/anchor_utils.py", line 103, in grid_anchors
raise ValueError("Anchors should be Tuple[Tuple[int]] because each feature "
ValueError: Anchors should be Tuple[Tuple[int]] because each feature map could potentially have different sizes and aspect ratios. There needs to be a match between the number of feature maps passed and the number of sizes / aspect ratios specified.
Killing subprocess 1055

Questions about some results in the CVPR'21 paper

Hi @Sentient07,

I'm confused about some results in your CVPR'21 paper, "Tracking Pedestrians Heads in Dense Crowd", because some details are missing from the paper. I hope you can help me sort them out. Thanks!

  1. The test set of the SCUT-HEAD dataset consists of two parts: Part-A and Part-B. The other methods compared in Table 2 report separate results on each part in their papers. Which part did you use for the evaluation in Table 2, or were the results obtained on the combination of both parts? Since I couldn't find the exact numbers for the compared methods in the corresponding cited papers, I'd also like to know how you obtained their results.

  2. Table 4 shows the tracking results of your proposed tracker HeadHunter-T and other state-of-the-art trackers on the CroHD test set. However, the HeadHunter-T results in Table 4 differ noticeably from those shown on the MOT benchmark website. Can you tell me what accounts for the difference?

Looking forward to your reply. Thanks!

About test set results

Why are the results on the CroHD test set published in your paper different from those published on the MOT Challenge website? The MOTA reported in the paper is 63.6, compared to 57.8 on the MOT Challenge website.

The env yml file?

Hi Sentient07,
I have no idea what the actual environment requirements of this project are. Could you tell me more details?

load state dict error

I ran test.py with args.pretrained_model taken from this link, and I hit this error:
RuntimeError: Error(s) in loading state_dict for FasterRCNN:
Unexpected key(s) in state_dict: "backbone.ssh1.branch1.0.weight", "backbone.ssh1.branch1.1.weight", "backbone.ssh1.branch1.1.bias", "backbone.ssh1.branch1.1.running_mean", "backbone.ssh1.branch1.1.running_var", "backbone.ssh1.branch1.1.num_batches_tracked", "backbone.ssh1.branch2a.0.weight", "backbone.ssh1.branch2a.1.weight", "backbone.ssh1.branch2a.1.bias", "backbone.ssh1.branch2a.1.running_mean", "backbone.ssh1.branch2a.1.running_var", "backbone.ssh1.branch2a.1.num_batches_tracked", "backbone.ssh1.branch2b.0.weight", "backbone.ssh1.branch2b.1.weight", "backbone.ssh1.branch2b.1.bias", "backbone.ssh1.branch2b.1.running_mean", "backbone.ssh1.branch2b.1.running_var", "backbone.ssh1.branch2b.1.num_batches_tracked", "backbone.ssh1.branch2c.0.weight", "backbone.ssh1.branch2c.1.weight", "backbone.ssh1.branch2c.1.bias", "backbone.ssh1.branch2c.1.running_mean", "backbone.ssh1.branch2c.1.running_var", "backbone.ssh1.branch2c.1.num_batches_tracked", "backbone.ssh1.ssh_1.weight", "backbone.ssh1.ssh_1.bias", "backbone.ssh1.ssh_dimred.weight", "backbone.ssh1.ssh_dimred.bias", "backbone.ssh1.ssh_2.weight", "backbone.ssh1.ssh_2.bias", "backbone.ssh1.ssh_3a.weight", "backbone.ssh1.ssh_3a.bias", "backbone.ssh1.ssh_3b.weight", "backbone.ssh1.ssh_3b.bias", "backbone.ssh1.ssh_final.0.weight", "backbone.ssh1.ssh_final.1.weight", "backbone.ssh1.ssh_final.1.bias", "backbone.ssh1.ssh_final.1.running_mean", "backbone.ssh1.ssh_final.1.running_var", "backbone.ssh1.ssh_final.1.num_batches_tracked", "backbone.ssh2.branch1.0.weight", "backbone.ssh2.branch1.1.weight", "backbone.ssh2.branch1.1.bias", "backbone.ssh2.branch1.1.running_mean", "backbone.ssh2.branch1.1.running_var", "backbone.ssh2.branch1.1.num_batches_tracked", "backbone.ssh2.branch2a.0.weight", "backbone.ssh2.branch2a.1.weight", "backbone.ssh2.branch2a.1.bias", "backbone.ssh2.branch2a.1.running_mean", "backbone.ssh2.branch2a.1.running_var", "backbone.ssh2.branch2a.1.num_batches_tracked", "backbone.ssh2.branch2b.0.weight", 
"backbone.ssh2.branch2b.1.weight", "backbone.ssh2.branch2b.1.bias", "backbone.ssh2.branch2b.1.running_mean", "backbone.ssh2.branch2b.1.running_var", "backbone.ssh2.branch2b.1.num_batches_tracked", "backbone.ssh2.branch2c.0.weight", "backbone.ssh2.branch2c.1.weight", "backbone.ssh2.branch2c.1.bias", "backbone.ssh2.branch2c.1.running_mean", "backbone.ssh2.branch2c.1.running_var", "backbone.ssh2.branch2c.1.num_batches_tracked", "backbone.ssh2.ssh_1.weight", "backbone.ssh2.ssh_1.bias", "backbone.ssh2.ssh_dimred.weight", "backbone.ssh2.ssh_dimred.bias", "backbone.ssh2.ssh_2.weight", "backbone.ssh2.ssh_2.bias", "backbone.ssh2.ssh_3a.weight", "backbone.ssh2.ssh_3a.bias", "backbone.ssh2.ssh_3b.weight", "backbone.ssh2.ssh_3b.bias", "backbone.ssh2.ssh_final.0.weight", "backbone.ssh2.ssh_final.1.weight", "backbone.ssh2.ssh_final.1.bias", "backbone.ssh2.ssh_final.1.running_mean", "backbone.ssh2.ssh_final.1.running_var", "backbone.ssh2.ssh_final.1.num_batches_tracked", "backbone.ssh3.branch1.0.weight", "backbone.ssh3.branch1.1.weight", "backbone.ssh3.branch1.1.bias", "backbone.ssh3.branch1.1.running_mean", "backbone.ssh3.branch1.1.running_var", "backbone.ssh3.branch1.1.num_batches_tracked", "backbone.ssh3.branch2a.0.weight", "backbone.ssh3.branch2a.1.weight", "backbone.ssh3.branch2a.1.bias", "backbone.ssh3.branch2a.1.running_mean", "backbone.ssh3.branch2a.1.running_var", "backbone.ssh3.branch2a.1.num_batches_tracked", "backbone.ssh3.branch2b.0.weight", "backbone.ssh3.branch2b.1.weight", "backbone.ssh3.branch2b.1.bias", "backbone.ssh3.branch2b.1.running_mean", "backbone.ssh3.branch2b.1.running_var", "backbone.ssh3.branch2b.1.num_batches_tracked", "backbone.ssh3.branch2c.0.weight", "backbone.ssh3.branch2c.1.weight", "backbone.ssh3.branch2c.1.bias", "backbone.ssh3.branch2c.1.running_mean", "backbone.ssh3.branch2c.1.running_var", "backbone.ssh3.branch2c.1.num_batches_tracked", "backbone.ssh3.ssh_1.weight", "backbone.ssh3.ssh_1.bias", "backbone.ssh3.ssh_dimred.weight", 
"backbone.ssh3.ssh_dimred.bias", "backbone.ssh3.ssh_2.weight", "backbone.ssh3.ssh_2.bias", "backbone.ssh3.ssh_3a.weight", "backbone.ssh3.ssh_3a.bias", "backbone.ssh3.ssh_3b.weight", "backbone.ssh3.ssh_3b.bias", "backbone.ssh3.ssh_final.0.weight", "backbone.ssh3.ssh_final.1.weight", "backbone.ssh3.ssh_final.1.bias", "backbone.ssh3.ssh_final.1.running_mean", "backbone.ssh3.ssh_final.1.running_var", "backbone.ssh3.ssh_final.1.num_batches_tracked", "backbone.ssh4.branch1.0.weight", "backbone.ssh4.branch1.1.weight", "backbone.ssh4.branch1.1.bias", "backbone.ssh4.branch1.1.running_mean", "backbone.ssh4.branch1.1.running_var", "backbone.ssh4.branch1.1.num_batches_tracked", "backbone.ssh4.branch2a.0.weight", "backbone.ssh4.branch2a.1.weight", "backbone.ssh4.branch2a.1.bias", "backbone.ssh4.branch2a.1.running_mean", "backbone.ssh4.branch2a.1.running_var", "backbone.ssh4.branch2a.1.num_batches_tracked", "backbone.ssh4.branch2b.0.weight", "backbone.ssh4.branch2b.1.weight", "backbone.ssh4.branch2b.1.bias", "backbone.ssh4.branch2b.1.running_mean", "backbone.ssh4.branch2b.1.running_var", "backbone.ssh4.branch2b.1.num_batches_tracked", "backbone.ssh4.branch2c.0.weight", "backbone.ssh4.branch2c.1.weight", "backbone.ssh4.branch2c.1.bias", "backbone.ssh4.branch2c.1.running_mean", "backbone.ssh4.branch2c.1.running_var", "backbone.ssh4.branch2c.1.num_batches_tracked", "backbone.ssh4.ssh_1.weight", "backbone.ssh4.ssh_1.bias", "backbone.ssh4.ssh_dimred.weight", "backbone.ssh4.ssh_dimred.bias", "backbone.ssh4.ssh_2.weight", "backbone.ssh4.ssh_2.bias", "backbone.ssh4.ssh_3a.weight", "backbone.ssh4.ssh_3a.bias", "backbone.ssh4.ssh_3b.weight", "backbone.ssh4.ssh_3b.bias", "backbone.ssh4.ssh_final.0.weight", "backbone.ssh4.ssh_final.1.weight", "backbone.ssh4.ssh_final.1.bias", "backbone.ssh4.ssh_final.1.running_mean", "backbone.ssh4.ssh_final.1.running_var", "backbone.ssh4.ssh_final.1.num_batches_tracked".
What should I do?
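The unexpected keys are all SSH context-module weights (backbone.ssh1 through backbone.ssh4), which suggests the checkpoint was trained with a context module while the model was built without one; building the model with the matching --context option is the clean fix. As a fallback, a hedged sketch of dropping the extra keys before loading (the helper name is mine):

```python
def filter_state_dict(state_dict, model_keys):
    """Keep only checkpoint entries the target model actually defines;
    load the result with strict=False to tolerate any remaining gaps."""
    return {k: v for k, v in state_dict.items() if k in model_keys}

# usage sketch:
# ckpt = torch.load("FT_R50_epoch_24.pth", map_location="cpu")
# model.load_state_dict(
#     filter_state_dict(ckpt, set(model.state_dict())), strict=False)
```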

RuntimeError: Error(s) in loading state_dict for HeadHunter: facing this error with the provided weights "FT_R50_epoch_24.pth"

Missing key(s) in state_dict: "backbone.body.layer2.4.conv1.weight", "backbone.body.layer2.4.bn1.weight", "backbone.body.layer2.4.bn1.bias", "backbone.body.layer2.4.bn1.running_mean", "backbone.body.layer2.4.bn1.running_var", "backbone.body.layer2.4.conv2.weight", "backbone.body.layer2.4.bn2.weight", "backbone.body.layer2.4.bn2.bias", "backbone.body.layer2.4.bn2.running_mean", "backbone.body.layer2.4.bn2.running_var", "backbone.body.layer2.4.conv3.weight", "backbone.body.layer2.4.bn3.weight", "backbone.body.layer2.4.bn3.bias", "backbone.body.layer2.4.bn3.running_mean", "backbone.body.layer2.4.bn3.running_var", "backbone.body.layer2.5.conv1.weight", "backbone.body.layer2.5.bn1.weight", "backbone.body.layer2.5.bn1.bias", "backbone.body.layer2.5.bn1.running_mean", "backbone.body.layer2.5.bn1.running_var", "backbone.body.layer2.5.conv2.weight", "backbone.body.layer2.5.bn2.weight", "backbone.body.layer2.5.bn2.bias", "backbone.body.layer2.5.bn2.running_mean", "backbone.body.layer2.5.bn2.running_var", "backbone.body.layer2.5.conv3.weight", "backbone.body.layer2.5.bn3.weight", "backbone.body.layer2.5.bn3.bias", "backbone.body.layer2.5.bn3.running_mean", "backbone.body.layer2.5.bn3.running_var", "backbone.body.layer2.6.conv1.weight", "backbone.body.layer2.6.bn1.weight", "backbone.body.layer2.6.bn1.bias", "backbone.body.layer2.6.bn1.running_mean", "backbone.body.layer2.6.bn1.running_var", "backbone.body.layer2.6.conv2.weight", "backbone.body.layer2.6.bn2.weight", "backbone.body.layer2.6.bn2.bias", "backbone.body.layer2.6.bn2.running_mean", "backbone.body.layer2.6.bn2.running_var", "backbone.body.layer2.6.conv3.weight", "backbone.body.layer2.6.bn3.weight", "backbone.body.layer2.6.bn3.bias", "backbone.body.layer2.6.bn3.running_mean", "backbone.body.layer2.6.bn3.running_var", "backbone.body.layer2.7.conv1.weight", "backbone.body.layer2.7.bn1.weight", "backbone.body.layer2.7.bn1.bias", "backbone.body.layer2.7.bn1.running_mean", "backbone.body.layer2.7.bn1.running_var", 
"backbone.body.layer2.7.conv2.weight", "backbone.body.layer2.7.bn2.weight", "backbone.body.layer2.7.bn2.bias", "backbone.body.layer2.7.bn2.running_mean", "backbone.body.layer2.7.bn2.running_var", "backbone.body.layer2.7.conv3.weight", "backbone.body.layer2.7.bn3.weight", "backbone.body.layer2.7.bn3.bias", "backbone.body.layer2.7.bn3.running_mean", "backbone.body.layer2.7.bn3.running_var", "backbone.body.layer3.6.conv1.weight", "backbone.body.layer3.6.bn1.weight", "backbone.body.layer3.6.bn1.bias", "backbone.body.layer3.6.bn1.running_mean", "backbone.body.layer3.6.bn1.running_var", "backbone.body.layer3.6.conv2.weight", "backbone.body.layer3.6.bn2.weight", "backbone.body.layer3.6.bn2.bias", "backbone.body.layer3.6.bn2.running_mean", "backbone.body.layer3.6.bn2.running_var", "backbone.body.layer3.6.conv3.weight", "backbone.body.layer3.6.bn3.weight", "backbone.body.layer3.6.bn3.bias", "backbone.body.layer3.6.bn3.running_mean", "backbone.body.layer3.6.bn3.running_var", "backbone.body.layer3.7.conv1.weight", "backbone.body.layer3.7.bn1.weight", "backbone.body.layer3.7.bn1.bias", "backbone.body.layer3.7.bn1.running_mean", "backbone.body.layer3.7.bn1.running_var", "backbone.body.layer3.7.conv2.weight", "backbone.body.layer3.7.bn2.weight", "backbone.body.layer3.7.bn2.bias", "backbone.body.layer3.7.bn2.running_mean", "backbone.body.layer3.7.bn2.running_var", "backbone.body.layer3.7.conv3.weight", "backbone.body.layer3.7.bn3.weight", "backbone.body.layer3.7.bn3.bias", "backbone.body.layer3.7.bn3.running_mean", "backbone.body.layer3.7.bn3.running_var", "backbone.body.layer3.8.conv1.weight", "backbone.body.layer3.8.bn1.weight", "backbone.body.layer3.8.bn1.bias", "backbone.body.layer3.8.bn1.running_mean", "backbone.body.layer3.8.bn1.running_var", "backbone.body.layer3.8.conv2.weight", "backbone.body.layer3.8.bn2.weight", "backbone.body.layer3.8.bn2.bias", "backbone.body.layer3.8.bn2.running_mean", "backbone.body.layer3.8.bn2.running_var", "backbone.body.layer3.8.conv3.weight", 
"backbone.body.layer3.8.bn3.weight", "backbone.body.layer3.8.bn3.bias", "backbone.body.layer3.8.bn3.running_mean", "backbone.body.layer3.8.bn3.running_var", "backbone.body.layer3.9.conv1.weight", "backbone.body.layer3.9.bn1.weight", "backbone.body.layer3.9.bn1.bias", "backbone.body.layer3.9.bn1.running_mean", "backbone.body.layer3.9.bn1.running_var", "backbone.body.layer3.9.conv2.weight", "backbone.body.layer3.9.bn2.weight", "backbone.body.layer3.9.bn2.bias", "backbone.body.layer3.9.bn2.running_mean", "backbone.body.layer3.9.bn2.running_var", "backbone.body.layer3.9.conv3.weight", "backbone.body.layer3.9.bn3.weight", "backbone.body.layer3.9.bn3.bias", "backbone.body.layer3.9.bn3.running_mean", "backbone.body.layer3.9.bn3.running_var", "backbone.body.layer3.10.conv1.weight", "backbone.body.layer3.10.bn1.weight", "backbone.body.layer3.10.bn1.bias", "backbone.body.layer3.10.bn1.running_mean", "backbone.body.layer3.10.bn1.running_var", "backbone.body.layer3.10.conv2.weight", "backbone.body.layer3.10.bn2.weight", "backbone.body.layer3.10.bn2.bias", "backbone.body.layer3.10.bn2.running_mean", "backbone.body.layer3.10.bn2.running_var", "backbone.body.layer3.10.conv3.weight", "backbone.body.layer3.10.bn3.weight", "backbone.body.layer3.10.bn3.bias", "backbone.body.layer3.10.bn3.running_mean", "backbone.body.layer3.10.bn3.running_var", "backbone.body.layer3.11.conv1.weight", "backbone.body.layer3.11.bn1.weight", "backbone.body.layer3.11.bn1.bias", "backbone.body.layer3.11.bn1.running_mean", "backbone.body.layer3.11.bn1.running_var", "backbone.body.layer3.11.conv2.weight", "backbone.body.layer3.11.bn2.weight", "backbone.body.layer3.11.bn2.bias", "backbone.body.layer3.11.bn2.running_mean", "backbone.body.layer3.11.bn2.running_var", "backbone.body.layer3.11.conv3.weight", "backbone.body.layer3.11.bn3.weight", "backbone.body.layer3.11.bn3.bias", "backbone.body.layer3.11.bn3.running_mean", "backbone.body.layer3.11.bn3.running_var", "backbone.body.layer3.12.conv1.weight", 
"backbone.body.layer3.12.bn1.weight", "backbone.body.layer3.12.bn1.bias", "backbone.body.layer3.12.bn1.running_mean", "backbone.body.layer3.12.bn1.running_var", "backbone.body.layer3.12.conv2.weight", "backbone.body.layer3.12.bn2.weight", "backbone.body.layer3.12.bn2.bias", "backbone.body.layer3.12.bn2.running_mean", "backbone.body.layer3.12.bn2.running_var", "backbone.body.layer3.12.conv3.weight", "backbone.body.layer3.12.bn3.weight", "backbone.body.layer3.12.bn3.bias", "backbone.body.layer3.12.bn3.running_mean", "backbone.body.layer3.12.bn3.running_var", "backbone.body.layer3.13.conv1.weight", "backbone.body.layer3.13.bn1.weight", "backbone.body.layer3.13.bn1.bias", "backbone.body.layer3.13.bn1.running_mean", "backbone.body.layer3.13.bn1.running_var", "backbone.body.layer3.13.conv2.weight", "backbone.body.layer3.13.bn2.weight", "backbone.body.layer3.13.bn2.bias", "backbone.body.layer3.13.bn2.running_mean", "backbone.body.layer3.13.bn2.running_var", "backbone.body.layer3.13.conv3.weight", "backbone.body.layer3.13.bn3.weight", "backbone.body.layer3.13.bn3.bias", "backbone.body.layer3.13.bn3.running_mean", "backbone.body.layer3.13.bn3.running_var", "backbone.body.layer3.14.conv1.weight", "backbone.body.layer3.14.bn1.weight", "backbone.body.layer3.14.bn1.bias", "backbone.body.layer3.14.bn1.running_mean", "backbone.body.layer3.14.bn1.running_var", "backbone.body.layer3.14.conv2.weight", "backbone.body.layer3.14.bn2.weight", "backbone.body.layer3.14.bn2.bias", "backbone.body.layer3.14.bn2.running_mean", "backbone.body.layer3.14.bn2.running_var", "backbone.body.layer3.14.conv3.weight", "backbone.body.layer3.14.bn3.weight", "backbone.body.layer3.14.bn3.bias", "backbone.body.layer3.14.bn3.running_mean", "backbone.body.layer3.14.bn3.running_var", "backbone.body.layer3.15.conv1.weight", "backbone.body.layer3.15.bn1.weight", "backbone.body.layer3.15.bn1.bias", "backbone.body.layer3.15.bn1.running_mean", "backbone.body.layer3.15.bn1.running_var", 
"backbone.body.layer3.15.conv2.weight", "backbone.body.layer3.15.bn2.weight", "backbone.body.layer3.15.bn2.bias", "backbone.body.layer3.15.bn2.running_mean", "backbone.body.layer3.15.bn2.running_var", "backbone.body.layer3.15.conv3.weight", "backbone.body.layer3.15.bn3.weight", "backbone.body.layer3.15.bn3.bias", "backbone.body.layer3.15.bn3.running_mean", "backbone.body.layer3.15.bn3.running_var", "backbone.body.layer3.16.conv1.weight", "backbone.body.layer3.16.bn1.weight", "backbone.body.layer3.16.bn1.bias", "backbone.body.layer3.16.bn1.running_mean", "backbone.body.layer3.16.bn1.running_var", "backbone.body.layer3.16.conv2.weight", "backbone.body.layer3.16.bn2.weight", "backbone.body.layer3.16.bn2.bias", "backbone.body.layer3.16.bn2.running_mean", "backbone.body.layer3.16.bn2.running_var", "backbone.body.layer3.16.conv3.weight", "backbone.body.layer3.16.bn3.weight", "backbone.body.layer3.16.bn3.bias", "backbone.body.layer3.16.bn3.running_mean", "backbone.body.layer3.16.bn3.running_var", "backbone.body.layer3.17.conv1.weight", "backbone.body.layer3.17.bn1.weight", "backbone.body.layer3.17.bn1.bias", "backbone.body.layer3.17.bn1.running_mean", "backbone.body.layer3.17.bn1.running_var", "backbone.body.layer3.17.conv2.weight", "backbone.body.layer3.17.bn2.weight", "backbone.body.layer3.17.bn2.bias", "backbone.body.layer3.17.bn2.running_mean", "backbone.body.layer3.17.bn2.running_var", "backbone.body.layer3.17.conv3.weight", "backbone.body.layer3.17.bn3.weight", "backbone.body.layer3.17.bn3.bias", "backbone.body.layer3.17.bn3.running_mean", "backbone.body.layer3.17.bn3.running_var", "backbone.body.layer3.18.conv1.weight", "backbone.body.layer3.18.bn1.weight", "backbone.body.layer3.18.bn1.bias", "backbone.body.layer3.18.bn1.running_mean", "backbone.body.layer3.18.bn1.running_var", "backbone.body.layer3.18.conv2.weight", "backbone.body.layer3.18.bn2.weight", "backbone.body.layer3.18.bn2.bias", "backbone.body.layer3.18.bn2.running_mean", 
"backbone.body.layer3.18.bn2.running_var", "backbone.body.layer3.18.conv3.weight", "backbone.body.layer3.18.bn3.weight", "backbone.body.layer3.18.bn3.bias", "backbone.body.layer3.18.bn3.running_mean", "backbone.body.layer3.18.bn3.running_var", "backbone.body.layer3.19.conv1.weight", "backbone.body.layer3.19.bn1.weight", "backbone.body.layer3.19.bn1.bias", "backbone.body.layer3.19.bn1.running_mean", "backbone.body.layer3.19.bn1.running_var", "backbone.body.layer3.19.conv2.weight", "backbone.body.layer3.19.bn2.weight", "backbone.body.layer3.19.bn2.bias", "backbone.body.layer3.19.bn2.running_mean", "backbone.body.layer3.19.bn2.running_var", "backbone.body.layer3.19.conv3.weight", "backbone.body.layer3.19.bn3.weight", "backbone.body.layer3.19.bn3.bias", "backbone.body.layer3.19.bn3.running_mean", "backbone.body.layer3.19.bn3.running_var", "backbone.body.layer3.20.conv1.weight", "backbone.body.layer3.20.bn1.weight", "backbone.body.layer3.20.bn1.bias", "backbone.body.layer3.20.bn1.running_mean", "backbone.body.layer3.20.bn1.running_var", "backbone.body.layer3.20.conv2.weight", "backbone.body.layer3.20.bn2.weight", "backbone.body.layer3.20.bn2.bias", "backbone.body.layer3.20.bn2.running_mean", "backbone.body.layer3.20.bn2.running_var", "backbone.body.layer3.20.conv3.weight", "backbone.body.layer3.20.bn3.weight", "backbone.body.layer3.20.bn3.bias", "backbone.body.layer3.20.bn3.running_mean", "backbone.body.layer3.20.bn3.running_var", "backbone.body.layer3.21.conv1.weight", "backbone.body.layer3.21.bn1.weight", "backbone.body.layer3.21.bn1.bias", "backbone.body.layer3.21.bn1.running_mean", "backbone.body.layer3.21.bn1.running_var", "backbone.body.layer3.21.conv2.weight", "backbone.body.layer3.21.bn2.weight", "backbone.body.layer3.21.bn2.bias", "backbone.body.layer3.21.bn2.running_mean", "backbone.body.layer3.21.bn2.running_var", "backbone.body.layer3.21.conv3.weight", "backbone.body.layer3.21.bn3.weight", "backbone.body.layer3.21.bn3.bias", 
"backbone.body.layer3.21.bn3.running_mean", "backbone.body.layer3.21.bn3.running_var", ..., "backbone.body.layer3.35.bn3.running_mean", "backbone.body.layer3.35.bn3.running_var". [long run of repeated "backbone.body.layer3.*" state_dict key names from the pasted error, truncated]

Hello, when I run the model, I get the following error:

ValueError: Anchors should be Tuple[Tuple[int]] because each feature map could potentially have different sizes and aspect ratios. There needs to be a match between the number of feature maps passed and the number of sizes / aspect ratios specified.

How to set the anchors and choose the benchmark when running the tracker on the CroHD dataset ("use_public" is false)

When I try to run the tracker on the CroHD dataset (training set) and **set the "use_public" option in 'det_cfg' to 'False',** I get the ValueError: "Anchors should be Tuple[Tuple[int]] because each feature map could potentially have different sizes and aspect ratios."
I noticed that in obj_detect.py, if det_cfg['median_anchor'] is set, the program chooses a benchmark to import the corresponding anchors. Which benchmark should I set in det_cfg when I want to run the tracker on the CroHD dataset? I also noticed that the chosen benchmark has to be consistent with anchor.py.
Another question: if I set the "use_public" option in 'det_cfg' to 'True', does that mean the tracker will use the detections from the det.txt file provided with the dataset, and the detector itself will not be run or output anything?
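This ValueError appears to come from torchvision's AnchorGenerator, which expects one tuple of sizes (and one tuple of aspect ratios) per feature map produced by the backbone. The following is a minimal, hypothetical sketch of that consistency check; `validate_anchor_cfg` and the example sizes are illustrative, not part of the repository's code.

```python
# Hypothetical sketch of the constraint behind the ValueError, mirroring
# torchvision's AnchorGenerator convention: the number of size tuples and
# aspect-ratio tuples must each equal the number of FPN feature maps.

def validate_anchor_cfg(anchor_sizes, aspect_ratios, num_feature_maps):
    """Raise ValueError if the anchor config cannot cover every feature map."""
    if len(anchor_sizes) != num_feature_maps or len(aspect_ratios) != num_feature_maps:
        raise ValueError(
            "Anchors should be Tuple[Tuple[int]]: got %d size tuples and %d "
            "aspect-ratio tuples for %d feature maps"
            % (len(anchor_sizes), len(aspect_ratios), num_feature_maps)
        )
    return True

# A ResNet-FPN backbone typically yields 5 maps, so 5 tuples of each:
anchor_sizes = ((16,), (32,), (64,), (128,), (256,))
aspect_ratios = ((0.5, 1.0, 2.0),) * len(anchor_sizes)
validate_anchor_cfg(anchor_sizes, aspect_ratios, num_feature_maps=5)
```

So if `median_anchor` selects a benchmark-specific anchor set whose length does not match the number of feature maps, this is the error you would see.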

Could you share your CroHD dataset?

Hi,
I am very interested in your great work and am trying to reproduce it from scratch, but I cannot find anywhere to download the CroHD dataset. Could you please tell me where to download it?

Reproduce the results

Has anyone been able to run the program properly? I followed the instructions in the repo but haven't had any luck yet. I am confused by the mismatch between the CUDA version indicated in this repo and the one the author actually used, and by the extra packages I had to install to stop getting errors. Below are the packages I installed in addition to those in the requirements, but I still couldn't get any output.

pip install pyyaml
pip install ipykernel
pip install albumentations==0.4.6
pip install scipy==1.1.0
pip install pycocotools
pip install munkres
pip install scikit-image==0.16.2

Context Module introduced in PyramidBox paper

Just to give some context: I was asked to find a paper and reproduce some of its results from scratch (it is worth 50% of the subject's grade), and my deadline is around June 10, 2022.


While rewriting the detection network (in order to fully understand the paper), I found the CPM part strange and would like to ask for advice.


Papers Text

The paper says:

with Context Sensitive feature extractor followed by series of transpose convolutions to enhance spatial resolution of feature maps.

and

we augmented on top of each individual FPNs, a Context-sensitive Prediction Module (CPM) [63]. This contextual module consists of 4 Inception-ResNet-A blocks [62] with 128 and 256 filters for 3 × 3 convolution and 1024 filters for 1 × 1 convolution.

The reference 63 says:

We design the Context-sensitive Predict Module (CPM), see Fig. 3(b), in which we replace the convolution layers of context module in SSH by the residual-free prediction module of DSSD.


Issues

From the previous quotes, I understand the CPM as an SSH module with different convolution operations.
But Figure 4 of your paper and your code show a channel expansion that looks like the prediction module of DSSD (a kind of simplified Inception) followed by a standard SSH module.

I could not find any Inception-ResNet-A blocks.

Additionally, I could not find the transposed-convolution part.


Sorry for the inconvenience; I just want to make sure I don't miss any detail and get it done correctly as soon as possible.

errors in test.py

assert len(grid_sizes) == len(strides) == len(cell_anchors)

AssertionError
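This AssertionError looks like the check in torchvision's AnchorGenerator, where `grid_sizes`, `strides`, and `cell_anchors` are all per-feature-map lists, so a configuration with a different number of anchor size tuples than feature maps trips it. The sketch below is purely illustrative (the function name and the toy sizes are assumptions, not the library's code):

```python
# Hypothetical illustration of the failing check
#   assert len(grid_sizes) == len(strides) == len(cell_anchors)
# All three sequences must have one entry per feature map.

def grid_consistent(grid_sizes, strides, cell_anchors):
    """True iff every feature map has a stride and a set of cell anchors."""
    return len(grid_sizes) == len(strides) == len(cell_anchors)

grid_sizes = [(200, 304), (100, 152), (50, 76)]  # 3 FPN levels (toy values)
strides = [4, 8, 16]                             # one stride per level
cell_anchors = [((16,),), ((32,),)]              # only 2 anchor tuples configured

grid_consistent(grid_sizes, strides, cell_anchors)  # False -> AssertionError
```

The usual fix is to make the anchor configuration supply exactly as many size tuples as the backbone produces feature maps.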

About configuration files

Where are the configuration files mentioned in Requirements, such as head_detection.yml and config/config_chuman.yaml?

config/config_chuman.yaml not found

I ran python -m torch.distributed.launch --nproc_per_node=$NGPU --use_env train.py --cfg_file config/config_chuman.yaml --world_size $NGPU --num_workers 4, but I can't find config/config_chuman.yaml.
