hlings / asyfod
(CVPR2023) The PyTorch implementation of "AsyFOD: An Asymmetric Adaptation Paradigm for Few-Shot Domain Adaptive Object Detection".
License: MIT License
Dear author, I notice that MMD_distance and choice_topk are not used in train.py. Could you please point out where the Source Instances Division part is in the code?
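For readers with the same question: as I understand the paper, the source-instance division ranks source instance features by their distance to the target-domain features and keeps a target-similar top-k subset. A minimal NumPy sketch of that idea (the function names and the linear-kernel simplification are mine, not the repo's actual `MMD_distance`/`choice_topk` code):

```python
import numpy as np

def mmd_distance(x, y):
    """Linear-kernel MMD^2 between two feature sets:
    squared L2 distance between their mean embeddings."""
    delta = x.mean(axis=0) - y.mean(axis=0)
    return float(delta @ delta)

def choice_topk(source_feats, target_feats, k):
    """Split source instances by similarity to the target domain.
    Returns (target-similar indices, target-dissimilar indices),
    where 'similar' means closest to the mean target feature."""
    target_mean = target_feats.mean(axis=0)
    dists = np.linalg.norm(source_feats - target_mean, axis=1)  # per-instance distance
    order = np.argsort(dists)
    return order[:k], order[k:]
```

This is only a guess at the intent of those helpers, kept deliberately simple for illustration.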
Hello, when I train on my own dataset, I have only about 10 target-domain samples. Assuming I use 8 of them for training, how should I construct my validation set?
Hello author, thank you for open-sourcing this work. I previously built on your earlier AcroFOD, and I am now preparing new research on this codebase. I have a few questions: 1) What do layer4, layer6, and layer9 in the code configuration mean? 2) Could you fully document the training and testing pipeline in the README, along with the meaning of each training configuration file name, such as dissim_to_sim? I know this relates to the ablation study in the paper, but I worry my understanding may be wrong, so I would appreciate a more detailed explanation in the README. I would be very grateful if you could find time to answer and polish this open-source work!
Hi,
Hope you are doing well.
I read your paper and it is amazing, and now I am running it on a custom dataset, but it is giving me this error.
Please help me resolve this issue.
Thanks!
RuntimeError: CUDA out of memory. Tried to allocate 938.00 MiB (GPU 0; 4.00 GiB total capacity; 1.81 GiB already allocated; 931.70 MiB free; 1.83 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
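For anyone hitting the same error: this is not part of the repo, but the error message itself suggests the allocator knob to try. A minimal sketch of setting it from Python (the value 128 is an example, not a recommendation from the authors):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA allocation,
# so set it before importing torch (or export it in the shell instead).
# max_split_size_mb caps how large a cached block the allocator may split,
# which mitigates the fragmentation the error message mentions.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

On a 4 GiB GPU, lowering `--batch` and `--img` in the train command is usually also necessary.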
Hi! Can I check where the stop-gradient is implemented in the code?
Hi,
Hope you are doing well.
I am running your code on a custom dataset. I have preprocessed the source data following cityscapes_to_yolo.py, but now I am confused about how to preprocess the target dataset for the few-shot setting, and more importantly how you did it in this work.
I am new to few-shot domain adaptation, so I cannot figure out how to reshape the target-domain dataset into a few-shot setting, e.g. how to divide images into classes, especially when each image contains multiple classes. Please help me with this; it would be perfect if you could provide the foggy_cityscapes_to_yolo.py file, since I could not find it in either of the repos.
Thanks a lot for helping me!
Hi,
First of all, you have done amazing work, and thanks a lot for providing the code online. I want to reproduce your results, but I cannot access the preprocessed data; the link says the data no longer exists.
Please help me in this.
Thanks!
This is a fantastic idea! But I can't seem to find train_MMD in the provided code. Also, utils.loss is not present in the code.
There is no train_MMD.py, but there is test_MMD.py. I get an error when I copy train_MMD.py from AcroFOD directly into this repo. Is train_MMD.py used for few-shot cross-domain object detection?
Hi,
Hope you are doing well.
Can you please tell me how much GPU RAM is required to reproduce your results with YOLOx?
Also, running YOLOs or YOLOm gives a dimension-mismatch error; can you please help me with this?
RuntimeError: Given groups=1, weight of size [1280,1280,3,3], expected input[1,512,4,4] to have 1280 channels, but got 512 channels instead
Thanks!!!
Hi Hlings,
I hope you are doing well.
I read your work and it is amazing!!!
Now I am exploring the code but cannot find the lines of code that extract the object-level features. Could you please point me to this block of code?
I appreciate any help you can provide.
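For other readers searching for the same thing: "object-level features" in detection usually means pooling the backbone feature map inside each ground-truth or predicted box. A minimal NumPy sketch of that operation (illustrative only, not the repo's actual implementation, which may use a different pooling scheme):

```python
import numpy as np

def object_level_features(feat_map, boxes, stride):
    """Average-pool a (C, H, W) feature map inside each xyxy pixel box.
    Returns an (N, C) array: one pooled feature vector per object."""
    C, H, W = feat_map.shape
    pooled = []
    for x1, y1, x2, y2 in boxes:
        c1, r1 = int(x1 // stride), int(y1 // stride)            # top-left grid cell
        c2 = min(W, max(c1 + 1, int(np.ceil(x2 / stride))))      # at least 1 cell wide
        r2 = min(H, max(r1 + 1, int(np.ceil(y2 / stride))))      # at least 1 cell tall
        pooled.append(feat_map[:, r1:r2, c1:c2].mean(axis=(1, 2)))
    return np.stack(pooled)
```

The `stride` is the downsampling factor of the feature level (e.g. 8, 16, or 32 in YOLOv5-style backbones), so pixel boxes must be rescaled to grid coordinates before slicing.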
After training several times following the instructions in README.md, I was unable to reach the performance reported in the paper. Could you please provide some suggestions based on the following log? Thanks!
python train.py --cache --img 640 --batch 12 --epochs 300 --data ./data/eg/city_and_foggy8_3.yaml --cfg ./models/yolov5x.yaml --hyp ./data/hyp_aug/m1.yaml --weights --name test
The results:
"hyp": {
  "desc": null,
  "value": {
    "lr0": 0.01, "lrf": 0.2, "momentum": 0.937, "weight_decay": 0.00046875,
    "warmup_epochs": 3, "warmup_momentum": 0.8, "warmup_bias_lr": 0.1,
    "box": 0.05, "cls": 0.5, "cls_pw": 1, "obj": 1, "obj_pw": 1,
    "iou_t": 0.3, "anchor_t": 4, "fl_gamma": 0,
    "hsv_h": 0.015, "hsv_s": 0.7, "hsv_v": 0.4,
    "degrees": 0, "translate": 0.1, "scale": 0.5, "shear": 0, "perspective": 0,
    "flipud": 0, "fliplr": 0.5, "mosaic": 1, "mixup": 0,
    "copypaste": 0, "cp_type": 0
  }
}
Hi, I'm really excited by your work!
While replicating AsyFOD, I've noticed that it utilizes the GPU less than I expected.
How long did it take to run the model on the Cityscapes to Foggy Cityscapes task?
I'm currently running it on one A5000 and it's estimated to take 4-5 days.
Is this expected? Or am I doing something wrong?
Thank you for your time.