Physical adversarial attacks in object detection have attracted increasing attention. However, most previous works focus on hiding objects from the detector by generating an individual adversarial patch, which only covers the planar part of the vehicle's surface and fails to attack the detector in physical scenarios involving multi-view, long-distance, and partially occluded objects. To bridge the gap between digital attacks and physical attacks, we exploit the full 3D vehicle surface to propose a robust Full-coverage Camouflage Attack (FCA) to fool detectors. Specifically, we first render the nonplanar camouflage texture over the full vehicle surface. To mimic real-world environmental conditions, we then introduce a transformation function to transfer the rendered camouflaged vehicle into a photo-realistic scenario. Finally, we design an efficient loss function to optimize the camouflage texture. Experiments show that the full-coverage camouflage attack can not only outperform state-of-the-art methods under various test cases but also generalize to different environments, vehicles, and object detectors. The code of FCA will be available at: https://idrl-lab.github.io/Full-coverage-camouflage-adversarial-attack/.
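At a high level, the camouflage texture is the only trainable parameter; the renderer, the scene transformation, and the detector just need to stay differentiable so gradients can flow back into the texture. The sketch below illustrates that optimization loop in PyTorch; `render_full_surface`, `detector`, `attack_loss`, `smooth_loss`, `mesh`, and `dataloader` are hypothetical placeholders for the components described above, not the repository's actual API, and the tensor sizes are only illustrative.

```python
import torch

# Hypothetical sketch of the optimization loop described above; the functions and
# data objects referenced here are placeholders, not the repository's actual API.
n_faces = 20000                                                    # illustrative face count for the car mesh
texture = torch.rand(1, n_faces, 6, 6, 6, 3, requires_grad=True)  # per-face texture in a neural-renderer layout
optimizer = torch.optim.Adam([texture], lr=0.01)

for background, mask, camera in dataloader:                 # photo-realistic scenes plus camera parameters
    rendered = render_full_surface(mesh, texture, camera)   # nonplanar rendering over the whole car surface
    scene = background * (1 - mask) + rendered * mask       # simplified stand-in for the transformation function
    detections = detector(scene)                            # e.g. a YOLO-style detector
    loss = attack_loss(detections) + 1e-4 * smooth_loss(texture)  # attack terms plus texture smoothness
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```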
Framework
Cases of Digital Attack
Multi-view Attack at camera distance 3 (elevations 0, 30, 50): Original vs. FCA
Multi-view Attack at camera distance 5 (elevations 20, 40, 50): Original vs. FCA
Multi-view Attack at camera distance 10 (elevations 30, 40, 50): Original vs. FCA
Multi-view Attack with varying distance, elevation, and azimuth: Original vs. FCA
Partial occlusion: Original vs. FCA
Ablation study
Different combinations of loss terms
As the figure shows, the different loss terms play different roles in the attack. For example, the camouflaged car generated with obj+smooth (we omit the smooth loss below and denote this setting as obj) successfully hides the vehicle, the camouflaged car generated with iou successfully suppresses the detection bounding boxes over the car region, and the camouflaged car generated with cls successfully makes the detector misclassify the car as another category.
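For concreteness, a simplified version of how these terms could be combined is sketched below; the weights, tensor layouts, and the exact form of each term are assumptions for illustration and do not reproduce the paper's precise formulation.

```python
import torch

def fca_style_loss(obj_conf, cls_scores, ious, texture,
                   w_obj=1.0, w_cls=1.0, w_iou=1.0, w_smooth=1e-4, car_cls=2):
    """Simplified sketch of combining the obj / cls / iou attack terms with a smoothness term.

    obj_conf:   (N,) objectness confidences of candidate boxes over the camouflaged car
    cls_scores: (N, C) per-class scores for those boxes
    ious:       (N,) IoU of each candidate box with the ground-truth car box
    texture:    (H, W, 3) texture image being optimized
    """
    loss_obj = obj_conf.max()                  # minimizing objectness hides the vehicle
    loss_iou = ious.max()                      # minimizing IoU suppresses boxes over the car region
    loss_cls = cls_scores[:, car_cls].max()    # minimizing the car score pushes the label to another class
    # total-variation smoothness keeps the texture locally consistent and printable
    loss_smooth = ((texture[1:, :, :] - texture[:-1, :, :]).abs().mean()
                   + (texture[:, 1:, :] - texture[:, :-1, :]).abs().mean())
    return w_obj * loss_obj + w_cls * loss_cls + w_iou * loss_iou + w_smooth * loss_smooth
```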
Hello!
I found your result videos on Carla very interesting. However, I have a question regarding the experiment details.
Could you please provide more information about it? I'm trying to replicate your results, and I'd be grateful for any guidance or advice you can offer.
Thank you very much for your time and consideration.
After training, I got a bad mask, texture, and test_total in the logs:
The mask.png is as follows:
The texture2.png is all white, as follows:
The test_total.png is as follows:
Hello,
I found that the default dataset used in your code is "phy_attack", which is the same one used in DAS.
But I don't know how to set other directories such as "train_new", "train_label_new", etc.
Could you give me more detailed guidelines for using the dataset?
I'd really appreciate it if you reply to my question.
Thank you.
Are there any tricks that can effectively improve the attack performance? My attack performance is not high after training, only about 70%, and it is even worse for small objects. Thank you~
Hello, thanks for the great work.
I am wondering how we can extract the generated attack texture to use it in other 3D software or print it for the real world.
Is it possible to save the texture as an image file?
We found that the output texture is saved as a NumPy file, which can only be used with this neural renderer code.
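For anyone who just wants to eyeball or print the optimized texture, one rough way is to collapse the saved NumPy array into one color per mesh face and tile those colors into a bitmap. The snippet below assumes the array has the per-face layout a neural renderer typically expects (shape roughly (1, n_faces, t, t, t, 3) with values in [0, 1]); it discards the mesh's UV mapping, so it is only an inspection aid, not a faithful unwrap and not the repository's export path.

```python
import numpy as np
from PIL import Image

tex = np.load("texture.npy")                                  # example path; adjust to your own output
face_colors = tex.reshape(tex.shape[1], -1, 3).mean(axis=1)   # (n_faces, 3): mean RGB per face
side = int(np.ceil(np.sqrt(face_colors.shape[0])))            # tile the per-face colors into a square grid
canvas = np.zeros((side * side, 3), dtype=np.float32)
canvas[:face_colors.shape[0]] = face_colors
img = (np.clip(canvas.reshape(side, side, 3), 0, 1) * 255).astype(np.uint8)
Image.fromarray(img).resize((512, 512), Image.NEAREST).save("texture_preview.png")
```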
Excuse me, there is only one data1.png in train_new in carla_dataset, but there are a large number of labels in train_label_new. How can I download the images for train_new that correspond to the labels in train_label_new?