Comments (8)

dichen-cd commented on August 24, 2024

Thank you for your feedback.
In our paper we train the pixel-wise model with batch_size=5 and without DataParallel. Please refer to Section 4.2 of our paper for more details.

serend1p1ty commented on August 24, 2024

I am sorry, my setting is different from the one in the paper. I will test it again.

serend1p1ty commented on August 24, 2024

@DeanChan
Hi, I used exactly the experimental setting given in the paper.

  1. Train an NAE model
CUDA_VISIBLE_DEVICES=0 python scripts/train_NAE.py --debug --lr_warm_up -p ./logs/ --batch_size 5 --nw 5 --w_RCNN_loss_bbox 10.0 --epochs 22 --lr 0.003

The trained model achieved 91.74% mAP.

  2. Train a pixel-wise version initialized with the trained NAE weights.
CUDA_VISIBLE_DEVICES=1 python scripts/train_NAE.py --debug --lr_warm_up -p ./logs/ --batch_size 5 --nw 5 --w_RCNN_loss_bbox 10.0 --epochs 11 --lr 0.003 --pixel_wise --NAE_pretrain --embedding_feat_fuse --lr_decay_step 9

But the performance of this model falls short of what is reported (mAP should be around 92.1%).

[~] Evaluating detections:
all detection:
  recall = 92.32%
  ap = 86.27%
[~] Evaluating search: 
search ranking:
  mAP = 90.26%
  top- 1 = 90.48%
  top- 5 = 97.07%
  top-10 = 97.97%

Did I miss something? Could you give me some suggestions? I will be very grateful.

dichen-cd commented on August 24, 2024

Your training command is correct. The result is weird. Could you provide your environment information, e.g. graphics card, NVIDIA driver version, CUDA version, Python package info, etc.?
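
For reference, here is a minimal sketch of how the PyTorch side of that information could be collected (these are standard PyTorch attributes; `python -m torch.utils.collect_env` prints a similar, more complete summary):

# Print Python / PyTorch / CUDA / GPU details in one place.
import sys
import torch

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))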

serend1p1ty commented on August 24, 2024

System
Ubuntu 16.04

Graphics card and NVIDIA driver

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.40.04    Driver Version: 418.40.04    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-PCIE...  Off  | 00000000:18:00.0 Off |                    0 |
| N/A   71C    P0   207W / 250W |  23208MiB / 32480MiB |     86%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-PCIE...  Off  | 00000000:86:00.0 Off |                    0 |
| N/A   45C    P0    38W / 250W |      0MiB / 32480MiB |      3%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      8419      C   python                                     23197MiB |
+-----------------------------------------------------------------------------+

CUDA version

❯ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130

serend1p1ty commented on August 24, 2024

@DeanChan
Hi, I have tried many times, but I still cannot reach the performance reported in the paper.
Could you give me some suggestions?

dichen-cd commented on August 24, 2024

Hi there~ Many thanks for your feedback!

Several factors could affect the final result, such as the NVIDIA driver version, CUDA version, and GPU model.
The reported result was obtained on an NVIDIA Tesla V100 16GB with driver version 418.43 and CUDA 10.1 on Debian 9.12;
on a P40 with the same driver and CUDA version I got higher results;
on a P40 with driver version 440.82 and CUDA 10.2 I got lower results.
The performance variation is usually within (-2, +2) points; technically it shouldn't be too much.
I haven't figured out the exact reason for this phenomenon, but maybe you could try changing the random seed and see how the performance changes.
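
If it helps, here is a minimal seeding sketch (an assumption on my side is that scripts/train_NAE.py does not already fix these; the lines would go near the top of the script, before the data loaders and model are built):

# Fix the usual sources of randomness; trades some speed for determinism.
import random
import numpy as np
import torch

SEED = 42  # hypothetical value; trying a few different seeds is the point
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False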

One more thing: it seems that you are using Python 3 instead of Python 2, right? Did you make large changes to the code? Which PyTorch version are you using?

Thanks.

serend1p1ty commented on August 24, 2024

The conda environment shown above was the base env; that was my fault, and I have deleted that text. I haven't modified the code.
Thank you, I will test again with the matching NVIDIA driver and CUDA versions.
