
Comments (20)

MartinHahner commented on June 14, 2024

Hi Barza,
The splits given in this repo are "FOV3000strongest", so the "bad" frames are no longer in these lists; these splits are the ones we used.

Can you please confirm that we can use "dense_dataset_snow_wet_coupled.yaml" to reproduce the results for "Your snow+wet" and "dense_dataset_snow_uniform_gunn_1in10.yaml" to reproduce the results for "Your snow"?

Yes, this is correct. Note though, as stated in the paper:
We report for each experiment the average performance over three independent training runs.

Does this mean you only used "lidar_hdl64_strongest" (and not hdl 64 last or vlp32 strongest/last) for training and validating all experiments?

Yes, this is correct, too. We only used "lidar_hdl64_strongest".

While training for the snow/fog sim (both papers), did you ignore frames that have fewer than 3000 points in the camera FOV? If you did, can you please point me to where in your code you do that, or say whether the 3000-point check is done before or after the snow/fog sim?

For the snow sim paper, we ignored all frames with fewer than 3000 points in the camera FOV.
For the fog sim paper, we didn't, because we had not discovered this issue at that time.

Once we found out that there are multiple "bad" frames (especially in the snowy data), I did this check once "offline" with a small script that I unfortunately can't find anymore, but I can explain what it did: I merged the day & night lists from here, then iterated over every frame in these lists and checked how many points lie in the camera FOV. If a frame did not have at least 3000 points in the camera FOV, I removed it from the list. The resulting lists are the ones in the splits folder of this repository, so there is no need to check anything "online" on your end anymore.
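The procedure above can be sketched as follows (a minimal stand-in: `in_camera_fov` uses a simple angular cut instead of the real camera calibration, and both function names are illustrative, not code from this repository):

```python
import numpy as np

def in_camera_fov(points, half_angle_deg=45.0):
    # Crude stand-in for the camera-FOV test: keep points whose horizontal
    # angle around the forward (+x) axis is within +/- half_angle_deg.
    # The real check projects points into the image using the calibration.
    angles = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    return points[np.abs(angles) <= half_angle_deg]

def filter_split(frames, min_points=3000):
    # frames: dict mapping frame id -> (N, 3) array of lidar points.
    # Keep only frames with at least `min_points` points in the camera FOV.
    return [fid for fid, pts in frames.items()
            if len(in_camera_fov(pts)) >= min_points]
```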

I hope this helps.
Greets, Martin

from lidar_snow_sim.

MartinHahner commented on June 14, 2024

From wandb:
[wandb results screenshot]

In the paper:
[paper results table]


barzanisar commented on June 14, 2024

Thank you for replying.
How did you get the distance-wise mAP? Did you make that code public?


barzanisar commented on June 14, 2024

I have marked in bold two AP results for moderate Car. Which one did you report in your paper?

INFO Car AP@0.70, 0.70, 0.70:
bbox AP:62.0437, 59.4732, 55.5099
bev AP:59.4023, 58.1002, 53.5795
3d AP:38.1604, 37.3925, 35.1047
aos AP:42.62, 39.69, 36.74
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:61.7183, 59.5399, 54.4184
bev AP:58.5848, 56.7695, 51.5704
3d AP:34.6149, 34.4679, 31.3272
aos AP:41.30, 38.34, 34.48
Car AP@0.70, 0.50, 0.50:
bbox AP:62.0437, 59.4732, 55.5099
bev AP:74.8232, 73.8855, 67.7796
3d AP:71.9240, 70.6191, 65.6124
aos AP:42.62, 39.69, 36.74
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:61.7183, 59.5399, 54.4184
bev AP:75.8713, 74.3578, 69.2796
3d AP:72.3536, 71.0215, 65.5147
aos AP:41.30, 38.34, 34.48


barzanisar commented on June 14, 2024

I checked the frames in train_clear.txt. It contains some frames that have fewer than 3000 points in the camera FOV (before augmenting snow). Did you only apply the 3000-point check to test_snow.txt?


barzanisar commented on June 14, 2024

Also, you included FOG_AUGMENTATION_AFTER: False in all your configs (e.g. here), but in your dense_dataset.py you decide to augment fog by checking whether 'FOG_AUGMENTATION_AFTER' is in the cfg at all, not whether it is True or False. This looks like a bug: if I train with dense_dataset_snow_wet_coupled.yaml, it will always augment fog after snow.


barzanisar commented on June 14, 2024

Also, for me it has been impossible to obtain 40+ AP at the 0.7 IoU threshold in 80 epochs for moderate Car, even when I train only on clear data (without any simulation) and test on clear. What batch size per GPU and how many GPUs did you use for training?


barzanisar commented on June 14, 2024

Shouldn't this pass self.dataset_cfg.DATA_AUGMENTOR instead of self.dataset_cfg as augmentor_configs?


barzanisar commented on June 14, 2024

You forward mor to the data augmentor, but you don't use it in data_augmentor.py. Does this need cleaning up, or do you actually use mor in the data augmentor but haven't pushed the latest data_augmentor.py?


MartinHahner commented on June 14, 2024

Thank you for replying. How did you get distance-wise mAP? Did you make that code public?

see #11


MartinHahner commented on June 14, 2024

I have marked in bold two AP results for moderate Car. Which one did you report in your paper?

INFO Car AP@0.70, 0.70, 0.70:
bbox AP:62.0437, 59.4732, 55.5099
bev AP:59.4023, 58.1002, 53.5795
3d AP:38.1604, 37.3925, 35.1047
aos AP:42.62, 39.69, 36.74
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:61.7183, 59.5399, 54.4184
bev AP:58.5848, 56.7695, 51.5704
3d AP:34.6149, 34.4679, 31.3272
aos AP:41.30, 38.34, 34.48
Car AP@0.70, 0.50, 0.50:
bbox AP:62.0437, 59.4732, 55.5099
bev AP:74.8232, 73.8855, 67.7796
3d AP:71.9240, 70.6191, 65.6124
aos AP:42.62, 39.69, 36.74
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:61.7183, 59.5399, 54.4184
bev AP:75.8713, 74.3578, 69.2796
3d AP:72.3536, 71.0215, 65.5147
aos AP:41.30, 38.34, 34.48

I don't know what the two bold numbers refer to.
I did not use these confusing printout tables.


MartinHahner commented on June 14, 2024

I checked the frames in train_clear.txt. It contains some frames that have fewer than 3000 points in the camera FOV (before augmenting snow). Did you only apply the 3000-point check to test_snow.txt?

Yes, it could be that we only filtered test_snow.txt.


MartinHahner commented on June 14, 2024

Also, you included FOG_AUGMENTATION_AFTER: False in all your configs (e.g. here), but in your dense_dataset.py you decide to augment fog by checking whether 'FOG_AUGMENTATION_AFTER' is in the cfg at all, not whether it is True or False. This looks like a bug: if I train with dense_dataset_snow_wet_coupled.yaml, it will always augment fog after snow.

I don't think it is a bug (it's just logic that got more and more complicated over time), because as long as the yaml contains

FOG_AUGMENTATION: False
FOG_AUGMENTATION_AFTER: False

in the code no augmentation method will be set.

augmentation_method = None

Then nothing happens inside foggify.
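For concreteness, a minimal sketch of that logic (the key names come from this thread; the exact control flow is an assumption, not copied from dense_dataset.py):

```python
# Both flags present but set to False, as in the shipped yaml configs.
cfg = {'FOG_AUGMENTATION': False, 'FOG_AUGMENTATION_AFTER': False}

# A membership-only test is always True here, which is what the
# question above suspects to be the bug:
key_present = 'FOG_AUGMENTATION_AFTER' in cfg

# But since the flag values are consulted before picking a method,
# no augmentation method is ever set:
augmentation_method = None
if cfg.get('FOG_AUGMENTATION'):
    augmentation_method = 'fog_before_snow'
elif cfg.get('FOG_AUGMENTATION_AFTER'):
    augmentation_method = 'fog_after_snow'

# With augmentation_method still None, foggify() is effectively a no-op.
```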


MartinHahner commented on June 14, 2024

Also, for me it has been impossible to obtain 40+ AP at the 0.7 IoU threshold in 80 epochs for moderate Car, even when I train only on clear data (without any simulation) and test on clear. What batch size per GPU and how many GPUs did you use for training?

We used the maximum batch size we could fit on four GeForce RTX 2080 Ti GPUs.
For PV-RCNN, e.g., the batch size was set to eight.


MartinHahner commented on June 14, 2024

Shouldn't this pass self.dataset_cfg.DATA_AUGMENTOR instead of self.dataset_cfg as augmentor_configs?

Yes, you should change it (back) to self.dataset_cfg.DATA_AUGMENTOR.
I played around with some changes in DataAugmentor for which I needed the entire dataset config
(see the code snippet of my modified DataAugmentor below).

class DataAugmentor(object):
    def __init__(self, root_path, dataset_config, class_names, logger=None):
        self.root_path = root_path
        self.class_names = class_names
        self.logger = logger
        self.dataset_config = dataset_config

        augmentor_configs = self.dataset_config.DATA_AUGMENTOR

        ...


barzanisar commented on June 14, 2024

Also, for me it has been impossible to obtain 40+ AP at the 0.7 IoU threshold in 80 epochs for moderate Car, even when I train only on clear data (without any simulation) and test on clear. What batch size per GPU and how many GPUs did you use for training?

We used the maximum batch size we could fit on four GeForce RTX 2080 Ti GPUs. For PV-RCNN, e.g., the batch size was set to eight.

Did you use a batch size of 8 per GPU, meaning a total batch size of 4 * 8 = 32?


MartinHahner commented on June 14, 2024

You forward mor to the data augmentor, but you don't use it in data_augmentor.py. Does this need cleaning up, or do you actually use mor in the data augmentor but haven't pushed the latest data_augmentor.py?

MOR is used in database_sampler.py; that is why it is passed on to data_augmentor.py.

But as I recall, MOR is only relevant for the fog simulation paper.
The code below ensures that "GT oversampling" samples no objects farther away than the MOR.

# assumes `import numpy as np` at module level
def sample_with_fixed_number(self, class_name, sample_group, mor=np.inf):
    """
    Args:
        class_name:
        sample_group:
        mor: meteorological optical range in meters
    Returns:
        sampled_dict: list of sampled db_infos entries
    """
    sample_num, pointer, indices = int(sample_group['sample_num']), sample_group['pointer'], sample_group['indices']
    if pointer >= len(self.db_infos[class_name]):
        indices = np.random.permutation(len(self.db_infos[class_name]))
        pointer = 0

    limit_by_mor = self.dataset_cfg.get('LIMIT_BY_MOR', False)

    if mor < np.inf and limit_by_mor:

        trials = 0
        sampled_dict = []

        # keep drawing candidates until enough samples lie within the MOR
        # or the index list is exhausted
        while len(sampled_dict) < sample_num:

            try:
                idx = indices[pointer + trials]
            except IndexError:
                break

            sample = self.db_infos[class_name][idx]
            box = sample['box3d_lidar']
            dist = np.linalg.norm(box[0:3])  # distance of the box center to the sensor

            if dist < mor:
                sampled_dict.append(sample)

            trials += 1

        sample_num = trials

    else:

        sampled_dict = [self.db_infos[class_name][idx] for idx in indices[pointer: pointer + sample_num]]

    pointer += sample_num
    sample_group['pointer'] = pointer
    sample_group['indices'] = indices
    return sampled_dict

Sorry that this code snippet was not included in the code release.


MartinHahner commented on June 14, 2024

Also, for me it has been impossible to obtain 40+ AP at the 0.7 IoU threshold in 80 epochs for moderate Car, even when I train only on clear data (without any simulation) and test on clear. What batch size per GPU and how many GPUs did you use for training?

We used the maximum batch size we could fit on four GeForce RTX 2080 Ti GPUs. For PV-RCNN, e.g., the batch size was set to eight.

Did you use a batch size of 8 per GPU, meaning a total batch size of 4 * 8 = 32?

No, a total batch size of eight (so a batch size of two per GPU).


github-actions commented on June 14, 2024

This issue is stale because it has been open for 30 days with no activity.


github-actions commented on June 14, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

