Synthetic datasets
Synthetic HDR stereo dataset generated using the CARLA simulator. The video shows a sequence of left images and corresponding ground-truth depth maps.
Achieving robust stereo 3D imaging under diverse illumination conditions is challenging due to the limited dynamic range of conventional cameras, causing existing stereo depth estimation methods to suffer from under- or over-exposed images.
In this paper, we propose dual-exposure stereo that combines auto-exposure control and dual-exposure bracketing to achieve stereo 3D imaging with extended dynamic range. Specifically, we capture stereo image pairs with alternating dual exposures, which automatically adapt to scene illumination and effectively distribute the scene dynamic range across the dual-exposure frames. We then estimate stereo depth from these dual-exposure stereo images by compensating for motion between consecutive frames.
To validate our approach, we develop a robotic vision system, acquire real-world HDR stereo video datasets, and generate additional synthetic datasets. Experimental results demonstrate that our method outperforms existing exposure control methods.
Real-world stereo-LiDAR dataset captured across diverse scenes. Includes tone-mapped stereo images and sparse LiDAR ground-truth, recorded with a calibrated mobile camera system.
The video showcases our Auto Dual Exposure Control (ADEC).
Our Auto Dual Exposure Control (ADEC) algorithm analyzes the histogram and skewness of two captured images to adaptively adjust exposure values based on scene characteristics. If the scene's dynamic range exceeds that of the camera, ADEC increases the exposure gap to capture complementary highlight and shadow details. If the scene's dynamic range is within the camera's limits or the case is uncertain, the algorithm adjusts each exposure toward a balanced state by minimizing skewness. This dynamic adjustment allows ADEC to optimize exposure distribution, improving performance in downstream tasks such as depth estimation in extreme dynamic range scenes.
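The decision logic above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the function names, the clipping-based test for excessive scene dynamic range, and the gain constants are all illustrative assumptions.

```python
import numpy as np

def skewness(img):
    """Sample skewness of pixel intensities; positive means a dark-leaning histogram."""
    x = img.astype(np.float64).ravel()
    mu, sigma = x.mean(), x.std()
    if sigma == 0:
        return 0.0
    return float(((x - mu) ** 3).mean() / sigma ** 3)

def saturation_fractions(img, low=0.02, high=0.98):
    """Fractions of pixels clipped in shadows and highlights (img in [0, 1])."""
    return float((img <= low).mean()), float((img >= high).mean())

def adec_step(img_lo, img_hi, ev_lo, ev_hi,
              gap_step=0.5, skew_gain=0.25, clip_thresh=0.05):
    """One ADEC-style update of the two exposure values (in EV units).

    img_lo / img_hi are normalized frames captured at the lower / higher EV.
    """
    _, over_lo = saturation_fractions(img_lo)
    under_hi, _ = saturation_fractions(img_hi)

    # Scene dynamic range exceeds the camera's: even the short exposure clips
    # highlights AND the long exposure clips shadows -> widen the exposure gap.
    if over_lo > clip_thresh and under_hi > clip_thresh:
        return ev_lo - gap_step, ev_hi + gap_step

    # Otherwise nudge each exposure toward a balanced (zero-skew) histogram:
    # a dark-leaning image (positive skew) raises its EV, and vice versa.
    return ev_lo + skew_gain * skewness(img_lo), ev_hi + skew_gain * skewness(img_hi)
```

With a saturated short exposure and a crushed long exposure, the gap widens; with two balanced mid-gray frames, both exposure values stay put.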
This video illustrates our dual-exposure stereo disparity estimation.
We use two stereo image pairs with different exposures to handle challenging lighting conditions. To compensate for object and camera motion between frames, we estimate optical flow and temporally align features. By fusing features from both exposures based on pixel intensity, we construct disparity volumes that capture both highlight and shadow details. This enables robust depth estimation even under extreme dynamic range.
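The align-then-fuse step described above can be sketched as follows. This is a simplified NumPy sketch under stated assumptions: nearest-neighbor warping stands in for a learned alignment module, and the Gaussian well-exposedness weight is one common choice for intensity-based fusion, not necessarily the paper's.

```python
import numpy as np

def exposedness_weight(intensity, sigma=0.2):
    """Well-exposedness weight peaking at mid-gray (0.5); clipped pixels score low."""
    return np.exp(-((intensity - 0.5) ** 2) / (2 * sigma ** 2))

def warp_by_flow(arr, flow):
    """Backward-warp an (H, W, ...) array by optical flow (nearest-neighbor for brevity)."""
    h, w = arr.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return arr[ys2, xs2]

def fuse_dual_exposure(feat_short, feat_long_prev, flow, int_short, int_long_prev):
    """Align the previous (other-exposure) frame's features with optical flow,
    then blend per pixel according to how well-exposed each frame is."""
    feat_long = warp_by_flow(feat_long_prev, flow)
    int_long = warp_by_flow(int_long_prev, flow)
    w_s = exposedness_weight(int_short)
    w_l = exposedness_weight(int_long)
    w_sum = w_s + w_l + 1e-8
    return (w_s[..., None] * feat_short + w_l[..., None] * feat_long) / w_sum[..., None]
```

Pixels that are well exposed in the short frame draw mostly on its features (preserving highlights), while pixels clipped there fall back on the long frame's features (recovering shadows); the fused features then feed disparity-volume construction.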
The video shows a side-by-side comparison of image capture and depth estimation results using the AverageAE baseline and our proposed ADEC method.
@inproceedings{choi2025dual,
  author    = {Juhyung Choi and Jinnyeong Kim and Jinwoo Lee and Samuel Brucker and Mario Bijelic and Felix Heide and Seung-Hwan Baek},
  title     = {Dual Exposure Stereo for Extended Dynamic Range 3D Imaging},
  booktitle = {CVPR},
  year      = {2025},
}