Comments (50)

CDTrans commented on May 29, 2024

'pretrain' is the first stage of the pipeline. It uses only labeled source data to train the source-domain model, which is initialized from the ImageNet pre-trained model. The performance of this source-domain model is the 'Baseline' row of the tables.
'uda' is the second stage of the pipeline. It uses labeled source data and unlabeled target data to train the target-domain model, which is initialized from the source-domain model. The performance of this target-domain model is the 'CDTrans' row of the tables.

miss-rain commented on May 29, 2024

Thanks.

Two small questions:

  1. Can the backbone use ViT directly?

  2. After the 'uda' training stage, do we need to run 'python test.py' for testing?

CDTrans commented on May 29, 2024

First question

Yes, you can use ViT directly. Our backbone is the ViT architecture, but it is initialized with DeiT ImageNet pre-trained parameters. You can directly use ViT ImageNet pre-trained parameters to initialize the source model instead. In our experiments, model performance is actually better when the ViT parameters are used for initialization.
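
For illustration only, here is a minimal sketch of the two initializations using the timm library (timm is not used by this repo, which loads checkpoints through its own vit_pytorch.py loader; the model names below are timm's):

import timm

# Same ViT architecture, two different ImageNet pre-trained initializations:
deit_init = timm.create_model('deit_small_distilled_patch16_224', pretrained=True)
vit_init = timm.create_model('vit_small_patch16_224', pretrained=True)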

Second question

After the 'uda' training phase, the test results are automatically displayed at the end of the output for easy viewing, so you do not need to run test.py manually.

miss-rain commented on May 29, 2024

Nice and thanks

miss-rain commented on May 29, 2024

I put the pretrained 'ViT-B_16.npz' in the path ./data/pretrainModel/,

and changed the code (./scripts/pretrain/visda/run_visda.sh, line 10) to pretrain_model='ViT-B_16.npz'.

Unfortunately, I get this error:

Traceback (most recent call last):
  File "train.py", line 92, in <module>
    model = make_model(cfg, num_class=num_classes, camera_num=camera_num, view_num=view_num)
  File "/home/cy/project/CDTrans-master/model/make_model.py", line 410, in make_model
    model = build_transformer(num_class, camera_num, view_num, cfg, __factory_hh)
  File "/home/cy/project/CDTrans-master/model/make_model.py", line 214, in __init__
    self._load_parameter(pretrain_choice, model_path)
  File "/home/cy/project/CDTrans-master/model/make_model.py", line 218, in _load_parameter
    self.base.load_param(model_path)
  File "/home/cy/project/CDTrans-master/model/backbones/vit_pytorch.py", line 611, in load_param
    param_dict = torch.load(model_path, map_location='cpu')
  File "/home/cy/.conda/envs/clone/lib/python3.7/site-packages/torch/serialization.py", line 585, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "/home/cy/.conda/envs/clone/lib/python3.7/site-packages/torch/serialization.py", line 242, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: [enforce fail at inline_container.cc:222] . file not found: Transformer/version

CDTrans commented on May 29, 2024

The .npz format is a NumPy archive format, so torch.load() cannot read it. You would have to replace the torch.load() call with numpy.load().

Maybe you can directly use the .pth format file to initialize the model instead. The pre-trained models can be downloaded here:

ViT-Base, ViT-Small
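
If you do want to keep the .npz file, here is a minimal sketch of the NumPy route (my illustration, not the repo's loader; note that the keys in the archive, e.g. 'Transformer/encoderblock_0/...', will not match the PyTorch module names, so a key-mapping step is still needed before load_state_dict):

import numpy as np
import torch

# .npz is a NumPy archive, so it must be read with numpy.load(), not torch.load()
param_dict = np.load('./data/pretrainModel/ViT-B_16.npz')
# Convert each array into a tensor keyed by its original archive name
state = {k: torch.from_numpy(v) for k, v in param_dict.items()}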

miss-rain commented on May 29, 2024

Thanks.

I will try it now.

miss-rain commented on May 29, 2024

In the paper, for Figure 2:
'The inputs of the framework are the selected pairs from the labeling method. The source and target images in the input pair are sent to the source branch and target branch respectively.'

Does it mean: the input of Hs is Ps (for the source branch), and the input of Ht is Pt (for the target branch)?

cwhgn commented on May 29, 2024

The inputs of Hs and Ht are the pairs from the set P={Ps,Pt}. Specifically, for each pair (s,t) in P, s and t are sent to Hs and Ht respectively.

miss-rain commented on May 29, 2024

That is:
all source data (i.e. all s) are sent to Hs,

all target data with pseudo labels (i.e. all t) are sent to Ht,

and both elements of each pair (s, t) in P are sent to Hs_t.

Is that right?

Another question: will the same target sample be labeled repeatedly?

cwhgn commented on May 29, 2024

Is that right?

Yes.

Another question: will the same target sample be labeled repeatedly?

In Ps, each image in the source domain is used as an anchor to build a pair from the target samples. Thus it is possible for the same target sample to be paired with different source images.
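
A minimal sketch of this source-anchored pairing (my own illustration, not the repo's code; it assumes L2-normalized features so that the dot product is cosine similarity):

import torch

def build_source_anchored_pairs(src_feats, tgt_feats):
    # src_feats: (Ns, d), tgt_feats: (Nt, d), both L2-normalized
    sim = src_feats @ tgt_feats.t()  # cosine similarity, shape (Ns, Nt)
    nearest = sim.argmax(dim=1)      # most similar target for each source anchor
    # The same target index can be chosen by several source anchors, which is
    # why a target sample may receive a (pseudo) label more than once.
    return [(s, int(t)) for s, t in enumerate(nearest)]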

miss-rain commented on May 29, 2024

Ok, thanks.

Agbeli commented on May 29, 2024

Please, is it possible to reproduce these results on Colab Pro or standard Colab? I am keenly interested in knowing more about domain adaptation through the use of transformer architectures. Thanks

Agbeli commented on May 29, 2024

Traceback (most recent call last):
  File "train.py", line 5, in <module>
    from model import make_model
  File "/content/drive/MyDrive/Publication2022/CDTrans/model/__init__.py", line 1, in <module>
    from .make_model import make_model
  File "/content/drive/MyDrive/Publication2022/CDTrans/model/make_model.py", line 7, in <module>
    from .backbones.vit_pytorch import vit_base_patch16_224_TransReID, vit_small_patch16_224_TransReID
  File "/content/drive/MyDrive/Publication2022/CDTrans/model/backbones/vit_pytorch.py", line 30, in <module>
    from torch._six import container_abcs
ImportError: cannot import name 'container_abcs' from 'torch._six' (/usr/local/lib/python3.7/dist-packages/torch/_six.py)

Please could you help me with this bug?

Agbeli commented on May 29, 2024

It is solved now

Agbeli commented on May 29, 2024

Traceback (most recent call last):
  File "train.py", line 58, in <module>
    cfg.merge_from_list(args.opts)
  File "/usr/local/lib/python3.7/dist-packages/yacs/config.py", line 226, in merge_from_list
    cfg_list
  File "/usr/local/lib/python3.7/dist-packages/yacs/config.py", line 545, in _assert_with_logging
    assert cond, msg
AssertionError: Override list has odd length: ['MODEL.DEVICE_ID', "('0')", 'OUTPUT_DIR', '../logs/uda/deit_small/office-home/Art2Product', 'MODEL.PRETRAIN_PATH', '../logs/pretrain/deit_small/office-home/Art/transformer_10.pth', 'DATASETS.ROOT_TRAIN_DIR', './data/OfficeHomeDataset/Art.txt', 'DATASETS.ROOT_TRAIN_DIR2', './data/OfficeHomeDataset/Product.txt', 'DATASETS.ROOT_TEST_DIR', './data/OfficeHomeDataset/Product.txt', 'DATASETS.NAMES', 'OfficeHome', 'DATASETS.NAMES2', 'OfficeHome', 'MODEL.Transformer_TYPE']; it must be a list of pairs
Traceback (most recent call last):
  File "train.py", line 58, in <module>
    cfg.merge_from_list(args.opts)
  File "/usr/local/lib/python3.7/dist-packages/yacs/config.py", line 226, in merge_from_list
    cfg_list
  File "/usr/local/lib/python3.7/dist-packages/yacs/config.py", line 545, in _assert_with_logging
    assert cond, msg
AssertionError: Override list has odd length: ['MODEL.DEVICE_ID', "('0')", 'OUTPUT_DIR', '../logs/uda/deit_small/office-home/Art2Real_World', 'MODEL.PRETRAIN_PATH', '../logs/pretrain/deit_small/office-home/Art/transformer_10.pth', 'DATASETS.ROOT_TRAIN_DIR', './data/OfficeHomeDataset/Art.txt', 'DATASETS.ROOT_TRAIN_DIR2', './data/OfficeHomeDataset/Real_World.txt', 'DATASETS.ROOT_TEST_DIR', './data/OfficeHomeDataset/Real_World.txt', 'DATASETS.NAMES', 'OfficeHome', 'DATASETS.NAMES2', 'OfficeHome', 'MODEL.Transformer_TYPE']; it must be a list of pairs
Traceback (most recent call last):
  File "train.py", line 58, in <module>
    cfg.merge_from_list(args.opts)
  File "/usr/local/lib/python3.7/dist-packages/yacs/config.py", line 226, in merge_from_list
    cfg_list
  File "/usr/local/lib/python3.7/dist-packages/yacs/config.py", line 545, in _assert_with_logging
    assert cond, msg
AssertionError: Override list has odd length: ['MODEL.DEVICE_ID', "('0')", 'OUTPUT_DIR', '../logs/uda/deit_small/office-home/Art2Clipart', 'MODEL.PRETRAIN_PATH', '../logs/pretrain/deit_small/office-home/Art/transformer_10.pth', 'DATASETS.ROOT_TRAIN_DIR', './data/OfficeHomeDataset/Art.txt', 'DATASETS.ROOT_TRAIN_DIR2', './data/OfficeHomeDataset/Clipart.txt', 'DATASETS.ROOT_TEST_DIR', './data/OfficeHomeDataset/Clipart.txt', 'DATASETS.NAMES', 'OfficeHome', 'DATASETS.NAMES2', 'OfficeHome', 'MODEL.Transformer_TYPE']; it must be a list of pairs

Please, could you check out this bug?

CDTrans commented on May 29, 2024

It is fixed now. You can update the code and try again.
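
For context on the error itself: yacs' merge_from_list expects a flat [KEY, VALUE, KEY, VALUE, ...] list, and the assertion fired because the list ended with 'MODEL.Transformer_TYPE' without a value, which is what happens when a launch script drops the argument after a trailing line continuation. A minimal reproduction (illustrative only):

from yacs.config import CfgNode as CN

cfg = CN()
cfg.MODEL = CN()
cfg.MODEL.Transformer_TYPE = 'vit_base_patch16_224_TransReID'

cfg.merge_from_list(['MODEL.Transformer_TYPE', 'vit_small_patch16_224_TransReID'])  # OK: key/value pair
cfg.merge_from_list(['MODEL.Transformer_TYPE'])  # AssertionError: odd length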

CDTrans commented on May 29, 2024

ImportError: cannot import name 'container_abcs' from 'torch._six' (/usr/local/lib/python3.7/dist-packages/torch/_six.py)

Please could you help me with this bug?

Maybe the torch version should be limited to 1.8; reinstall torch together with a matching torchvision.
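
If downgrading torch is not an option, a common community workaround (not part of this repo) is to patch the import in model/backbones/vit_pytorch.py, since container_abcs was removed from torch._six in newer PyTorch releases:

# Replace the bare "from torch._six import container_abcs" with:
try:
    from torch._six import container_abcs  # PyTorch <= 1.8
except ImportError:
    import collections.abc as container_abcs  # newer PyTorch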

Agbeli commented on May 29, 2024

Thank you. I will update my repos then.

CDTrans commented on May 29, 2024

RuntimeError: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 11.17 GiB total capacity; 10.45 GiB already allocated; 23.81 MiB free; 10.71 GiB reserved in total by PyTorch)

This means the GPU device does not have enough memory. Maybe you should run this script with more GPU devices by editing gpus="('0,1')" in the script.
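
To see how much memory your GPU actually has before adjusting the setup, a quick generic PyTorch check (not part of the repo) is:

import torch

props = torch.cuda.get_device_properties(0)
print(f"GPU 0: {torch.cuda.memory_allocated(0) / 2**30:.1f} GiB allocated "
      f"of {props.total_memory / 2**30:.1f} GiB total")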

Agbeli commented on May 29, 2024

Any advice on that? I am running on a standard Colab notebook and am not sure how to edit it. I am keenly interested in your work. Thanks

CDTrans commented on May 29, 2024

I am very sorry, I have no experience with Colab. You need at least 32 GB of GPU memory to train this project and guarantee the reported accuracy.

Alternatively, you can reduce the batch size to run the project, but I have not tested the experimental results in that setting.
In configs/pretrain.yml and configs/uda.yml, you can replace "IMS_PER_BATCH: 64" with "IMS_PER_BATCH: 32" to reduce the batch size.

If there is still an out-of-memory error, you can use "IMS_PER_BATCH: 16".

Agbeli commented on May 29, 2024

Thank you for the information.

Agbeli commented on May 29, 2024

Is this how the code is done or is it just the CDTrans experiment? Thanks.

CDTrans commented on May 29, 2024

Sorry, I don't understand what you said: "Is this how the code is done or is it just the CDTrans experiment? Thanks."

I thought I had solved the problems, so I closed the issue. I have now reopened it, so if you have any other questions, post them here. Thanks.

Agbeli commented on May 29, 2024

(screenshot: reproduce_base)
How can I reproduce this result from the code? I need clarification on this. Thanks

CDTrans commented on May 29, 2024

Just follow the README settings and process, then run the script files.
But your server does not meet the GPU memory requirement. When you reduce the batch size in such an experimental setting, the accuracy will be a little lower than the results in the paper.

Agbeli commented on May 29, 2024

2022-02-28 10:23:30,669 reid_baseline.train INFO: Epoch 10 done. Time per batch: 0.747[s] Speed: 21.4[samples/s]
2022-02-28 10:23:55,809 reid_baseline.train INFO: normal accuracy 0.9804756833510827 0.6311898231506348
2022-02-28 10:23:55,810 reid_baseline.train INFO: Classify Domain Adapatation Validation Results - Epoch: 10
2022-02-28 10:23:55,810 reid_baseline.train INFO: Accuracy: 98.0% Mean Entropy: 63.1%
Loading pretrained model for finetuning from ../logs/pretrain/deit_small/office/Amazon/transformer_best_model.pth
2022-02-28 10:24:18,847 reid_baseline.train INFO: normal accuracy 0.9808306709265175 0.6045202612876892
2022-02-28 10:24:18,847 reid_baseline.train INFO: Classify Domain Adapatation Validation Results - Best Model
2022-02-28 10:24:18,847 reid_baseline.train INFO: Accuracy: 98.1%

Thank you, I did that, but when I run the pretrain script I don't get results for the baseline. As you can see, I get the result for the source domain only, but not A->D and A->W as indicated.

CDTrans commented on May 29, 2024

Sorry about the mistaken information above. The pretrain script does not display the transfer results directly. When you run the uda script, the pretrained model's transfer result is displayed before the uda training epochs.

Agbeli commented on May 29, 2024

0% 0/45 [00:00<?, ?it/s]/content/drive/MyDrive/Publication2022/CDTrans/processor/processor_uda.py:40: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
labels = torch.tensor(vid)
100% 45/45 [00:15<00:00, 2.93it/s]
2022-02-28 14:05:50,127 reid_baseline.train INFO: Fisrt Accuracy = 73.87% -> 76.36%
2022-02-28 14:05:50,160 reid_baseline.train INFO: Second Accuracy = 73.87% -> 77.53%

Please, is this the result you are talking about here, for the pretrained model as the baseline? Could you explain the results to me?

CDTrans commented on May 29, 2024

I will re-run the experiments to verify the effectiveness of the program.

Agbeli commented on May 29, 2024

Alright. I am using different parameter values because of my compute resources. I need more clarification on the results shown in the table. Please, I would be glad to have it as soon as possible for a presentation. Thanks

CDTrans commented on May 29, 2024

If the batch size of your pretrain script is smaller than 64, maybe you should enlarge the number of training epochs ("MAX_EPOCHS: 10" in configs/pretrain.yml#L40).
I can only guarantee the results under the default parameter settings. Because my server resources are relatively tight recently, you may need to try new epoch settings yourself, such as epoch = 15, 20, or 30.
A simple rule of thumb: the final source-domain accuracy of the pretrain script must be close to 100% to ensure that the model has learned enough information from the source domain. Your results show 98% accuracy on the source domain, which is a little lower than that.

Agbeli commented on May 29, 2024

Please, I am able to capture the uda results, but I find it difficult to get the baseline results from the output. I would be glad if you could help me with that. I am now using a GPU with 52 GB of memory, which is sufficient to run the deit_small model.
Thanks

CDTrans commented on May 29, 2024

Sorry, I was busy with other work and didn't reply in time. You can use the following evaluation script to manually get the baseline results on the transfer target domains.

model='deit_small'
model_type='vit_small_patch16_224_TransReID'
pretrain_model='deit_small_distilled_patch16_224-649709d9.pth'
for target_dataset in 'dslr' 'webcam'
do
        python test.py --config_file 'configs/pretrain.yml' MODEL.DEVICE_ID "('0')" \
        TEST.WEIGHT '../logs/pretrain/'$model'/office/Amazon/transformer_10.pth' \
        DATASETS.NAMES 'Office' DATASETS.NAMES2 'Office' \
        OUTPUT_DIR '../logs/uda/'$model'/office/' \
        DATASETS.ROOT_TRAIN_DIR './data/office31/amazon_list.txt' \
        DATASETS.ROOT_TRAIN_DIR2 './data/office31/'$target_dataset'_list.txt' \
        DATASETS.ROOT_TEST_DIR './data/office31/'$target_dataset'_list.txt' \
        MODEL.Transformer_TYPE $model_type \
        MODEL.PRETRAIN_PATH './data/pretrainModel/'$pretrain_model \

done

Agbeli commented on May 29, 2024

Thanks for your patience. I have gotten what I wanted.

Agbeli commented on May 29, 2024

My results are not the same as the ones in the README file. Could it be the CUDA version or the GPU I'm using (a Tesla P100-PCIE)? I wanted to reproduce the same results for the small model. Thanks

CDTrans commented on May 29, 2024

Can you provide the training script and log to show the training details? The script works on my training machine; I need to see the difference between your training details and mine.

Agbeli commented on May 29, 2024

2022-03-09 11:52:26,979 reid_baseline.train INFO: Fisrt Accuracy = 69.77% -> 75.81%
2022-03-09 11:52:27,065 reid_baseline.train INFO: Second Accuracy = 69.77% -> 78.44%

This is the first major accuracy result I get when I run this script: !bash scripts/uda/officehome/run_officehome_Ar.sh deit_small. My results do not look exactly like the ones displayed on the page.

Agbeli commented on May 29, 2024

model=$1
if [ ! -n "$1" ]
then
    echo 'please input the model para: {deit_base, deit_small}'
    exit 8
fi
if [ $model == 'deit_base' ]
then
    model_type='vit_base_patch16_224_TransReID'
    pretrain_model='deit_base_distilled_patch16_224-df68dfff.pth'
else
    model='deit_small'
    model_type='vit_small_patch16_224_TransReID'
    pretrain_model='deit_small_distilled_patch16_224-649709d9.pth'
fi
python train.py --config_file configs/pretrain.yml MODEL.DEVICE_ID "('0')" DATASETS.NAMES 'OfficeHome' \
    OUTPUT_DIR '../logs/pretrain/'$model'/office-home/Art' \
    DATASETS.ROOT_TRAIN_DIR './data/OfficeHomeDataset/Art.txt' \
    DATASETS.ROOT_TEST_DIR './data/OfficeHomeDataset/Product.txt' \
    MODEL.Transformer_TYPE $model_type \
    MODEL.PRETRAIN_PATH './data/pretrainModel/'$pretrain_model

Here is the pretrained script I used: !bash scripts/pretrain/officehome/run_officehome_Ar.sh deit_small

Agbeli commented on May 29, 2024

This is the log for the pretrained model script.

Agbeli commented on May 29, 2024

I was running the exact scripts you provided on the Github page.

This is the log for the pretrained model script.

I needed to verify the results on your page.

CDTrans commented on May 29, 2024

After testing on an RTX 2080 Ti machine, I got 69.0% and 80.2% for the pre-trained model and the uda model respectively on Art->Product of the Office-Home dataset. The pre-trained model result is a little different from the released one. However, according to my evaluation, the final UDA results on this new machine are at the same level of performance, with some exceeding the tabular data in the GitHub project and some below it.

2022-03-09 11:52:26,979 reid_baseline.train INFO: Fisrt Accuracy = 69.77% -> 75.81%
2022-03-09 11:52:27,065 reid_baseline.train INFO: Second Accuracy = 69.77% -> 78.44%

The accuracy shown here is the quality of the pseudo-labels, not the final accuracy of UDA; you need to check the final UDA results after the model is trained.

Agbeli commented on May 29, 2024

Loading pretrained model for finetuning from ../logs/uda/deit_small/office-home/Art2Clipart/transformer_best_model.pth
2022-03-09 16:32:16,494 reid_baseline.train INFO: normal accuracy 0.6041237113402061 0.07715962827205658
2022-03-09 16:32:16,494 reid_baseline.train INFO: Classify Domain Adapatation Validation Results - Best model
2022-03-09 16:32:16,494 reid_baseline.train INFO: Accuracy: 60.4%

I believe this is the CDTrans-small result, but how do I get the baseline result as indicated in the table?

Agbeli commented on May 29, 2024

CDTrans commented on May 29, 2024

I believe this is the CDTrans-small result, but how do I get the baseline result as indicated in the table?

After testing on my new machine with two RTX 2080 Ti GPUs, extending the number of epochs can achieve the performance of the baseline results. In the pretrain scripts, you can try changing the epochs, e.g. MAX_EPOCHS = 15, 20, or 25, to reach the same high level of baseline performance. I guess this may be related to different physical server architectures.
Please excuse the delay in replying, as I am busy with work.

CDTrans commented on May 29, 2024

You can use the following script to get the Office-Home baseline results for Art->Product, Real_World, and Clipart:

model='deit_small'
model_type='vit_small_patch16_224_TransReID'
pretrain_model='deit_small_distilled_patch16_224-649709d9.pth'
for target_dataset in 'Product' 'Real_World' 'Clipart'
do
        python test.py --config_file 'configs/pretrain.yml' MODEL.DEVICE_ID "('0')" \
        TEST.WEIGHT '../logs/pretrain/'$model'/office-home/Art/transformer_10.pth' \
        DATASETS.NAMES 'OfficeHome' DATASETS.NAMES2 'OfficeHome' \
        OUTPUT_DIR '../logs/uda/'$model'/office-home/' \
        DATASETS.ROOT_TRAIN_DIR './data/OfficeHomeDataset/Art.txt' \
        DATASETS.ROOT_TRAIN_DIR2 './data/OfficeHomeDataset/'$target_dataset'.txt' \
        DATASETS.ROOT_TEST_DIR './data/OfficeHomeDataset/'$target_dataset'.txt' \
        MODEL.Transformer_TYPE $model_type \
        MODEL.PRETRAIN_PATH './data/pretrainModel/'$pretrain_model \

done

Agbeli commented on May 29, 2024

You can use the following script to get the Office-Home baseline results for Art->Product, Real_World, and Clipart:

model='deit_small'
model_type='vit_small_patch16_224_TransReID'
pretrain_model='deit_small_distilled_patch16_224-649709d9.pth'
for target_dataset in 'Product' 'Real_World' 'Clipart'
do
        python test.py --config_file 'configs/pretrain.yml' MODEL.DEVICE_ID "('0')" \
        TEST.WEIGHT '../logs/pretrain/'$model'/office-home/Art/transformer_10.pth' \
        DATASETS.NAMES 'OfficeHome' DATASETS.NAMES2 'OfficeHome' \
        OUTPUT_DIR '../logs/uda/'$model'/office-home/' \
        DATASETS.ROOT_TRAIN_DIR './data/OfficeHomeDataset/Art.txt' \
        DATASETS.ROOT_TRAIN_DIR2 './data/OfficeHomeDataset/'$target_dataset'.txt' \
        DATASETS.ROOT_TEST_DIR './data/OfficeHomeDataset/'$target_dataset'.txt' \
        MODEL.Transformer_TYPE $model_type \
        MODEL.PRETRAIN_PATH './data/pretrainModel/'$pretrain_model \

done

Please, what is the output of !bash scripts/uda/officehome/run_officehome_Ar.sh deit_small? Where can I get the script above in your repos? I would be glad if you could show me that; please forgive me for the disturbance. My major issue is getting the baseline results. I am only running the exact scripts from the repos:

!bash scripts/pretrain/officehome/run_officehome_Ar.sh deit_small

!bash scripts/uda/officehome/run_officehome_Ar.sh deit_small

CDTrans commented on May 29, 2024

The above script gets the Office-Home baseline results for Art->Product, Real_World, and Clipart. It is not included in the GitHub repository, because not everyone is interested in the baseline results. For other baseline results, such as Clipart->Art, Product, Real_World, you should edit the script accordingly.
To summarize:
If you want the results of the baseline model, you need to run the above script, or modify it first and then run it.
If you want baseline results consistent with the table, you need to adjust the max_epoch parameter; the discrepancy may be caused by different physical machines.

erhuliu commented on May 29, 2024

When running bash scripts/pretrain/office31/run_office_amazon.sh deit_base from the Git terminal, how can I debug the code? I use Windows. Or do I have to use Linux to debug the code?
