WideDepth: Millimeter-Accurate Benchmark for Fisheye Depth Estimation

Ilia Indyk1, Ignat Penshin1, Ivan Sosin1, Maxim Monastyrny1, Aleksei Valenkov1, Ilya Makarov2,3
1Robotics Center, 2AXXX, 3ISP RAS
ICRA 2026
[Figure: WideDepth pipeline overview] Overview of our proposed pipeline for generating fisheye stereo RGB, depth, and disparity from LiDAR scans.

Abstract

Fisheye cameras are increasingly adopted in robotics for near-field manipulation, navigation, and immersive perception, yet indoor depth benchmarks with accurate ground truth are still missing. To address this, we introduce WideDepth — the first indoor dataset for fisheye depth estimation, featuring 101 scenes containing 5K high-resolution stereo pairs labeled with millimeter-level ground truth depth and disparity. Our dataset also includes paired pinhole and fisheye samples across varying fields of view and baselines in both horizontal and vertical stereo setups. We further propose a method to adapt pinhole-trained stereo models to fisheye images and introduce a novel stereo fisheye image generation pipeline based on high-resolution LiDAR scans. Leveraging these methods, we thoroughly evaluate state-of-the-art monocular depth, stereo matching, and depth completion models on our benchmark. Additionally, we provide 18K LiDAR-derived sparse depth training samples, achieving up to a 62% performance boost on fisheye data when fine-tuning pinhole-based stereo models. In summary, the high precision and versatility of our benchmark set a strong foundation for advancing research in fisheye depth estimation and robotics perception.
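One reason pinhole-trained stereo models need adaptation, as the abstract notes, is that fisheye lenses follow a different projection geometry: a pinhole camera maps a ray at angle θ from the optical axis to radius r = f·tan θ, while a typical fisheye lens follows the equidistant model r = f·θ, keeping rays near 90° inside the image. The sketch below illustrates the standard equidistant projection; it is a minimal illustration of that textbook model, not the dataset's actual generation pipeline, and the function name and parameters are hypothetical.

```python
import numpy as np

def project_equidistant(points_xyz, f, cx, cy):
    """Project 3D points (camera frame, Z forward) to fisheye pixels
    using the equidistant model r = f * theta.

    Hypothetical helper for illustration only -- not the WideDepth code.
    """
    X, Y, Z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    theta = np.arctan2(np.hypot(X, Y), Z)  # angle of the ray from the optical axis
    phi = np.arctan2(Y, X)                 # azimuth around the optical axis
    r = f * theta                          # equidistant: radius grows linearly with theta
    u = cx + r * np.cos(phi)
    v = cy + r * np.sin(phi)
    return np.stack([u, v], axis=1)
```

Under this model a point on the optical axis lands exactly at the principal point (cx, cy), and, unlike the pinhole tan θ mapping, the image radius stays finite even as θ approaches 90°, which is what gives fisheye cameras their wide field of view.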

Video Presentation