
jmodt's People

Contributors

kemo-huang


jmodt's Issues

loss is zero

When I start training, why is the rcnn_loss zero?

About the Train Seq and Val Seq.

@Kemo-Huang

Sorry to bother you.
In the paper, it says the training sequences are split into a training set and a validation set with a roughly equal number of frames; specifically, the training set has 10 sequences and 3975 frames, and the validation set contains 11 sequences and 3945 frames.
But the code splits the dataset into a training set with 10 sequences containing 3995 frames and a validation set with 10 sequences containing 3864 frames.
If the seventeenth sequence were added to the validation set, it would contain 4009 frames.

So may I have the sequence split that was used in your paper, please?
Also, if you deleted some frames during training or validation, please let me know. Thanks a lot.

I checked each sequence in the KITTI dataset and got the frame count of each sequence, listed below (a small script to verify these sums follows the table).

Seq ID   Frames
0 154
1 443
2 233
3 144
4 314
5 297
6 270
7 800
8 390
9 803
10 294
11 373
12 78
13 340
14 106
15 376
16 209
17 145
18 339
19 1059
20 837
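
To make these sums easy to check, here is a small script built on the counts above; the helper name total_frames is mine, and the numbers being verified are the ones quoted in this issue:

```python
# Frame counts per KITTI tracking training sequence, from the table above.
FRAMES = {
    0: 154, 1: 443, 2: 233, 3: 144, 4: 314, 5: 297, 6: 270,
    7: 800, 8: 390, 9: 803, 10: 294, 11: 373, 12: 78, 13: 340,
    14: 106, 15: 376, 16: 209, 17: 145, 18: 339, 19: 1059, 20: 837,
}

def total_frames(seqs):
    """Total number of frames covered by a list of sequence IDs."""
    return sum(FRAMES[s] for s in seqs)

# All 21 training sequences together hold 8004 frames.
assert total_frames(range(21)) == 8004

# The issue's numbers are self-consistent: 3995 + 3864 + 145 == 8004,
# i.e. sequence 17 appears in neither split.
assert 3995 + 3864 + FRAMES[17] == total_frames(range(21))

print(3864 + FRAMES[17])  # 4009: val-set size if sequence 17 is added
```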

val_loss_epoch

Traceback (most recent call last):
  File "tools/train.py", line 164, in <module>
    main()
  File "tools/train.py", line 157, in main
    val_loader
  File "/home/my_com/virtualenv/JMODT/jmodt/utils/train_utils.py", line 198, in train
    prev_val_loss = val_loss_epoch
UnboundLocalError: local variable 'val_loss_epoch' referenced before assignment

How can I solve it?

Multiple GPUs for training

Hello, when I used multiple GPUs for training, it reported an error. How should I solve it?
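
Since the error itself is not shown, only a generic sketch is possible. The usual way to spread a PyTorch model across multiple GPUs is torch.nn.DataParallel; whether tools/train.py builds its model in a way that is compatible with this is an assumption:

```python
import torch
import torch.nn as nn

# Placeholder network; the real JMODT model is built inside tools/train.py.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

# Scatter each input batch across all visible GPUs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

# Common multi-GPU pitfall: after wrapping, custom attributes and methods
# live on model.module rather than on model itself.
x = torch.randn(4, 8)
if torch.cuda.is_available():
    x = x.cuda()
print(model(x).shape)  # torch.Size([4, 2])
```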

how to use the dataset

Thanks for your work. Recently I wanted to train the tracking model, so I downloaded the dataset, but I can't find the TRACK_OBJECT folder. Could you tell me where I can download it? Thank you in advance.

Feat Visualization Issues

Could you please tell me how to visualize the saved features (.npy)? I have tried saving them as PNG, but I cannot see anything...
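
In case it helps, here is a minimal sketch for inspecting one channel of a saved feature array. The file name and the (C, H, W) layout are assumptions; the key step is per-channel normalization, since raw feature values usually lie outside [0, 255] and render as a blank image when written to PNG directly:

```python
import numpy as np
import matplotlib.pyplot as plt

feat = np.load("feat.npy")  # assumed shape: (C, H, W)
channel = feat[0]           # pick one channel to inspect

# Normalize to [0, 1]; without this, raw values such as [-3.2, 5.7]
# render as an all-black or all-white PNG.
lo, hi = channel.min(), channel.max()
norm = (channel - lo) / (hi - lo + 1e-8)

plt.imshow(norm, cmap="viridis")
plt.colorbar()
plt.savefig("feat_channel0.png", bbox_inches="tight")
```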

Link to the paper

Hello, the link to the paper no longer exists. Could you provide it again?

The parameters for affinity computation and data_association.

@Kemo-Huang @P16101150
Excuse me, sorry to bother you. I have several questions about the JMODT code.
1st.
In the affinity computation part of the paper, Equation (7) gives the refined affinity X^aff = αA^app + βA^diou with α + β = 1. I cannot find the values of α and β. In the Experimental Results section you set β = 10α for affinity computation, so I am confused about which parameter corresponds to α and which to β.
2nd.
In the Experimental Results section, w^aff = 22, but I cannot find which parameter w^aff is, and I did not find it in data_association either.
3rd.
In Algorithm 1, where is X^aff ← αA^app + βA^diou in the program?
I checked data_association.py; is link_matrix = link_score * w_app + iou_matrix * w_iou + dis_matrix * w_dis the X^aff? If so, which weight is α and which is β? Note that there α + β ≠ 1.
The last:
If link_matrix = link_score * w_app + iou_matrix * w_iou + dis_matrix * w_dis is X^aff, is X^aff only used in the evaluation step? Or is X^aff not used in the training step at all, i.e., does training only use the correlation features, or only A^app?

Thanks a lot.
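
For reference, a minimal sketch of how a weighted affinity matrix of that form can be turned into track-detection assignments with the Hungarian algorithm. The weights and matrix shapes here are illustrative assumptions, not the repository's actual values:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
num_tracks, num_dets = 3, 4

# Stand-ins for the three affinity terms combined in data_association.py.
link_score = rng.random((num_tracks, num_dets))  # appearance affinity (A^app-like)
iou_matrix = rng.random((num_tracks, num_dets))  # IoU term
dis_matrix = rng.random((num_tracks, num_dets))  # distance term

# Illustrative weights; note they need not sum to 1, unlike alpha + beta in Eq. (7).
w_app, w_iou, w_dis = 1.0, 10.0, 1.0
link_matrix = link_score * w_app + iou_matrix * w_iou + dis_matrix * w_dis

# The Hungarian algorithm picks the assignment with maximum total affinity.
rows, cols = linear_sum_assignment(link_matrix, maximize=True)
for t, d in zip(rows, cols):
    print(f"track {t} -> detection {d} (affinity {link_matrix[t, d]:.2f})")
```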

‘best_model.pth’


Thank you very much for your work. I noticed that the best checkpoint doesn't seem to have been updated while I was training. Is this normal?

UnboundLocalError: local variable 'val_loss_epoch' referenced before assignment

I want to train this code on the dataset as follows:

$ python tools/train.py --data_root /home/my_com/dataset/KITTI/ --batch_size 4

When the first epoch is done, I get this error:

Traceback (most recent call last):
  File "tools/train.py", line 164, in <module>
    main()
  File "tools/train.py", line 157, in main
    val_loader
  File "/home/my_com/virtualenv/JMODT/jmodt/utils/train_utils.py", line 198, in train
    prev_val_loss = val_loss_epoch
UnboundLocalError: local variable 'val_loss_epoch' referenced before assignment

Can I get a solution to this problem?

Thanks.
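
A likely cause, inferred from the traceback rather than from a confirmed reading of train_utils.py: val_loss_epoch is only assigned inside the validation branch, so if validation is skipped (or has not run yet) the later read raises the UnboundLocalError. A minimal sketch of the pattern and one way to guard it (all names here are placeholders):

```python
def validate(val_loader):
    # Stand-in for the real validation pass; returns a mean loss.
    return sum(val_loader) / len(val_loader)

def train(num_epochs, val_loader=None):
    prev_val_loss = float("inf")
    val_loss_epoch = None  # initialize up front so the name always exists

    for epoch in range(num_epochs):
        # ... training step would go here ...
        if val_loader is not None:
            val_loss_epoch = validate(val_loader)
        # Without the initialization above, reaching this line before any
        # validation pass raises the UnboundLocalError from the traceback.
        if val_loss_epoch is not None:
            prev_val_loss = val_loss_epoch
    return prev_val_loss

print(train(2, val_loader=[0.5, 0.7]))  # 0.6
print(train(2))                         # inf: validation skipped, but no crash
```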
