
Comments (7)

LancerXE commented on June 24, 2024

Same issue here


rayguan97 commented on June 24, 2024

We used a GeForce RTX 2080 with 8 GB of memory. In this case, I recommend using a smaller batch size (samples_per_gpu), which can be set in rugd_group6.py or ganav_group6_rugd.py.

However, RELLIS-3D already uses samples_per_gpu=1, so I'm not sure whether it can be run on an Nvidia RTX 2060. Another option is to reduce the crop size, which means you might not be able to use the released trained model.
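
For reference, in mmseg-style configs the per-GPU batch size is set in the data dict; a rough sketch (field names follow the standard mmseg config layout, and the actual values and surrounding keys in the GANav configs may differ):

    # Sketch of the relevant part of an mmseg-style config
    # (e.g. configs/ours/ganav_group6_rugd.py); keys shown are the standard ones.
    data = dict(
        samples_per_gpu=2,   # per-GPU batch size; lower this to fit into less GPU memory
        workers_per_gpu=2,   # data-loading workers per GPU
        # train/val/test dataset definitions omitted
    )
    # The crop size can likewise be reduced in the dataset pipeline,
    # at the cost of no longer matching the released checkpoints.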


joeljosejjc commented on June 24, 2024

Thank you, sir, for your quick reply regarding this issue. I reduced the samples_per_gpu parameter in GANav-offroad/configs/ours/ganav_group6_rugd.py as per your suggestion. The issue still persisted with samples_per_gpu set to 2 or 3, but fortunately, training commenced successfully with samples_per_gpu=1.
For reference, I executed the following command to run the training phase.

python -m torch.distributed.launch ./tools/train.py --launcher='pytorch' ./configs/ours/ganav_group6_rugd.py

The training phase progressed until the 10% checkpoint, at which point it produced another error, shown below:

[screenshot of the error]

I googled and found a solution to the error, which was to set the priority of DistEvalHook to LOW in mmseg/apis/train.py.
open-mmlab/mmpretrain#488
After doing so, the model training completed successfully.
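
For reference, the change boils down to how the evaluation hook is registered in mmseg/apis/train.py. A rough sketch of what the workaround looks like, assuming the standard mmseg 0.x / mmcv hook API (the surrounding code is abbreviated and may not match the repo exactly):

    # mmseg/apis/train.py (abbreviated sketch, mmseg 0.x style)
    from mmseg.core import DistEvalHook, EvalHook

    # ... inside train_segmentor(), after the validation dataloader is built:
    eval_cfg = cfg.get('evaluation', {})
    eval_hook = DistEvalHook if distributed else EvalHook
    # Registering the evaluation hook with LOW priority makes it run after the
    # logger hooks, which is the workaround for the key error described above.
    runner.register_hook(eval_hook(val_dataloader, **eval_cfg), priority='LOW')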

However, I observed that the evaluation metrics did not improve throughout training and stayed the same as those of the 1st checkpoint. The evaluation metrics for all checkpoints were as given below:

[screenshot of evaluation metrics]

The testing phase with the trained model produced similar evaluation results as well.

I am not sure what is causing this, and I have not altered any other parameters in the repository. Could you suggest what might be going wrong?


rayguan97 commented on June 24, 2024

Have you checked the number of classes? Since the prediction is all 0 (background), I imagine there is something wrong with the setup.

  1. Can you try evaluating on the training set and make sure it's fitting the training data?
  2. Can you check the output of the model? The issue could be in the eval code or in the model's inference (see the sketch after this list).
  3. I have never seen this issue before, so it's possible that the workaround you found for the "date_time" key error is causing it.
  4. There is no need to use "-m torch.distributed.launch" since you only use 1 GPU. Have you tried running the exact command from the instructions? That would be a good starting point for narrowing down the problem, since I have not run into this issue with that command (python ./tools/train.py ./configs/ours/ganav_group6_rugd.py).
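
For point 2, one quick way to check the raw model output is to run inference on a few training images and print which class indices actually appear. A rough sketch using the standard mmseg 0.x inference API (the checkpoint and image paths are placeholders):

    # Quick sanity check of the classes the trained model actually predicts.
    # Paths are placeholders; the API is the standard mmseg 0.x inference API.
    import numpy as np
    from mmseg.apis import init_segmentor, inference_segmentor

    config = './configs/ours/ganav_group6_rugd.py'
    checkpoint = './work_dirs/ganav_group6_rugd/latest.pth'  # placeholder path
    model = init_segmentor(config, checkpoint, device='cuda:0')

    result = inference_segmentor(model, 'path/to/a/training_image.png')  # placeholder image
    # result[0] is an (H, W) array of predicted class indices; if this prints a
    # single value for every image, the model has collapsed to one class.
    print(np.unique(result[0], return_counts=True))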


joeljosejjc commented on June 24, 2024

To answer some of the above queries:

  1. The number of classes that I have used is 6, and to the best of my knowledge, I have not changed any parameter in the code that affects the segmentation groups. This should become more evident as I present the results from the GANav model that I trained a second time, following your suggestions above as closely as possible.
  2. The output of the model showed every image colour-coded entirely as obstacles (which could be the reason for the high accuracy but low IoU of the obstacle category in the evaluation).
  3. I had initially used the exact command given in the readme of the repo:
    python ./tools/train.py ./configs/ours/ganav_group6_rugd.py

But this command generated the following error:
[screenshot of the error]

I researched and found that using the SyncBN function for batch normalisation requires distributed training, and hence requires launching with python -m torch.distributed.launch to set the required parameters for distributed training.
I found that changing this function to BN in configs/base/model/ours_class_att.py removes this requirement and allows me to run the original command with no errors.
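
For clarity, the change is a single line in the norm_cfg of the model config; a rough sketch in the standard mmseg config style (the exact contents of configs/base/model/ours_class_att.py may differ):

    # configs/base/model/ours_class_att.py (sketch)
    # SyncBN requires a distributed process group, so non-distributed,
    # single-GPU training needs plain BN instead.
    # norm_cfg = dict(type='SyncBN', requires_grad=True)  # distributed training
    norm_cfg = dict(type='BN', requires_grad=True)         # single-GPU training
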
I therefore tried training the model again using BN as the batch normalisation function, and to my surprise the initial evaluation metrics (at the first checkpoint) were not too bad:

[screenshot of 1st checkpoint evaluation]
But the error related to the 'date_time' key showed up once again, and I found no solution other than the one I mentioned in my comment above.

The evaluation metrics seemed to improve up until the 4th checkpoint; from the 5th checkpoint onwards, the performance metrics plummeted (below 10%) and stayed in that range until the last checkpoint. By the end of training, only the L2 Navigable terrain class had mediocre results (25% IoU and 48% accuracy), and the rest of the classes had metrics below 10%.

[screenshot of 2nd checkpoint evaluation (32000 iterations)]

[screenshot of 3rd checkpoint evaluation (48000 iterations)]

[screenshot of 5th checkpoint evaluation (80000 iterations)]

[screenshot of final checkpoint evaluation (160000 iterations)]

The evaluation metrics from the testing phase were similar to those of the last checkpoint:
[screenshot of testing-phase evaluation metrics]

Finally, I also ran the testing phase on the training data, which produced similar results to the above:
[screenshot of evaluation on the training data]

Another strange thing I observed is that the output images from the testing phase were coloured entirely in one of the annotation group colours. I have added screenshots of the directory containing the segmented images to visualise what I mean:

[screenshots of the segmented output images]

Based on the above observations from training the GANav model, I had the following doubts:

  1. Could the poor and strange training performance stem from the fact that I am using samples_per_gpu=1? If so, would there be an improvement in training performance if I used a better GPU (like the RTX 2080 8 GB that you used) and, correspondingly, a higher samples_per_gpu?
  2. Should more than 1 GPU be used (implementing distributed training) to improve model training?
  3. Or could the poor performance result from the date_time key-error workaround, or from some other unforeseen issue? If so, could you suggest some troubleshooting methods that I could try to resolve it?

Once again, thank you sir for your suggestions and help in resolving these issues.


rayguan97 commented on June 24, 2024
  1. Regarding BN and SyncBN: Thank you so much for catching this and pointing it out! I forgot to change it back after using distributed training; it's now updated in the config file.
  2. Regarding the poor performance: yes, in this case I believe it might be an issue with the batch size. I recommend lowering the learning rate slightly if you do not have access to an RTX 2080, or switching to a larger batch size with a better GPU. I have never used a batch size of 1 on this dataset.
  3. If all the pixels are predicted as the same class, maybe it has something to do with the labels and the processing script? But that is highly unlikely, and I cannot see a reason why it would fail if nothing was changed.
  4. Regarding more than one GPU: I do not see much improvement/degradation in performance with 1 or more GPUs (there is some improvement using 2 GPUs instead of 1, to the best of my recollection).
  5. I don't see a reason why the date_time key error would be an issue.

To sum up:

  1. Try using a GPU with more memory together with the provided batch size.
  2. Try lowering the learning rate if using a smaller batch size. I imagine you would only observe this behaviour if the learning rate has gone horribly wrong (see the sketch below).
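
Assuming the usual linear-scaling rule of thumb, the learning-rate adjustment would look roughly like this in the optimizer config (the base values below are placeholders, not necessarily the repo's defaults):

    # Sketch: scale the learning rate with the batch size (linear-scaling heuristic).
    # The base values are placeholders; check the schedule config for the real defaults.
    base_lr = 0.01     # learning rate tuned for the original batch size
    base_batch = 2     # original samples_per_gpu
    new_batch = 1      # reduced batch size on the smaller GPU

    optimizer = dict(
        type='SGD',
        lr=base_lr * new_batch / base_batch,  # e.g. halve the lr when halving the batch
        momentum=0.9,
        weight_decay=0.0005,
    )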

The current version of the code corresponds to this version of the paper (https://arxiv.org/abs/2103.04233v2), which is not the latest version described in the accepted RA-L paper. I will update the code in the next couple of days and see if there are any obvious bugs that might cause this issue.


rayguan97 commented on June 24, 2024

Hey, just a quick note: I just uploaded the new code for the latest paper. Please open another issue if you still have training problems.

