GDIP: Gated Differentiable Image Processing for Object-Detection in Adverse Conditions
Abstract
Detecting objects under adverse weather and
lighting conditions is crucial for the safe and continuous
operation of an autonomous vehicle, and remains an unsolved
problem. We present a Gated Differentiable Image Processing
(GDIP) block, a domain-agnostic network architecture, which
can be plugged into existing object detection networks (e.g.,
YOLO) and trained end-to-end with adverse-condition images
such as those captured under fog and low lighting. Our proposed GDIP block learns to enhance images directly through the
downstream object detection loss. This is achieved by learning
parameters of multiple image pre-processing (IP) techniques
that operate concurrently, with their outputs combined using
weights learned through a novel gating mechanism. We further
improve GDIP through a multi-stage guidance procedure for
progressive image enhancement. Finally, trading off accuracy
for speed, we propose a variant of GDIP that can be used as
a regularizer for training YOLO, which eliminates the need for
GDIP-based image enhancement during inference, resulting in
higher throughput and making real-world deployment practical. We
demonstrate significant improvement in detection performance
over several state-of-the-art methods through quantitative and
qualitative studies on synthetic datasets such as PascalVOC, and
real-world foggy (RTTS) and low-lighting (ExDark) datasets.
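The core mechanism the abstract describes can be sketched as follows: several image pre-processing operations run concurrently on the input, and their outputs are combined by gate weights. This is a minimal NumPy illustration only; the operation set (gamma, contrast, identity), the softmax gating, and the parameter names are simplified stand-ins for the paper's learned, fully differentiable versions, where both the IP parameters and the gate logits would be predicted by a network and trained through the detection loss.

```python
import numpy as np

def gamma_correction(img, gamma):
    # One differentiable IP operation: power-law tone adjustment.
    return np.clip(img, 1e-6, 1.0) ** gamma

def contrast(img, alpha):
    # Another IP operation: scale pixel values around mid-gray.
    return np.clip(0.5 + alpha * (img - 0.5), 0.0, 1.0)

def gdip_combine(img, gate_logits, gamma, alpha):
    """Apply IP operations concurrently and blend them with gate weights.

    In GDIP proper, `gate_logits`, `gamma`, and `alpha` would all be
    predicted by a network and optimized end-to-end; here they are
    fixed inputs for illustration.
    """
    outputs = [
        gamma_correction(img, gamma),
        contrast(img, alpha),
        img,  # identity branch: pass the input through unchanged
    ]
    # Softmax over gate logits -> convex combination weights.
    w = np.exp(gate_logits - gate_logits.max())
    w = w / w.sum()
    enhanced = sum(wi * out for wi, out in zip(w, outputs))
    return np.clip(enhanced, 0.0, 1.0), w

# Toy usage: an 8x8 RGB image in [0, 1].
img = np.random.rand(8, 8, 3)
enhanced, weights = gdip_combine(
    img, gate_logits=np.array([0.2, -0.1, 0.5]), gamma=0.7, alpha=1.2
)
```

The enhanced image keeps the input's shape and value range, and the gate weights form a convex combination, so a downstream detector can consume the output directly while gradients flow back through every branch.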