Sphere-Depth: A Benchmark for Depth Estimation Methods with Varying Spherical Camera Orientations

Gazzeh, Soulayma; Mazzola, Giuseppe; Lo Presti, Liliana; La Cascia, Marco
2025-09-01

Abstract

Reliable depth estimation from spherical images is crucial for 360° vision in robotic navigation and immersive scene understanding. However, on real-world robotic platforms the onboard spherical camera can undergo unintentional pose variations that, together with the geometric distortions inherent in equirectangular projection, significantly degrade depth estimation. To study this issue, a novel public benchmark, called Sphere-Depth, is introduced to systematically evaluate, in a reproducible way, the robustness of monocular depth estimation models on equirectangular images. Camera pose perturbations are simulated and used to assess the performance of a popular perspective-based model, Depth Anything, and of spherical-aware models such as Depth Anywhere, ACDNet, BiFuse++, and SliceNet. Furthermore, to ensure meaningful comparison across models, a depth-calibration-based error protocol is proposed that converts predicted relative depth values into metric depth values using scaling factors learned, in a supervised way, for each model. Experiments show that even models explicitly designed to process spherical images exhibit substantial performance degradation when the camera pose deviates from the canonical pose. The full benchmark, evaluation protocol, and dataset splits are made publicly available at: https://github.com/sgazzeh/Sphere_depth.
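The two key steps in the abstract, simulating camera pose perturbations on equirectangular images and calibrating relative depth to metric depth, can be illustrated with short sketches. These are minimal illustrations written for this record, not the authors' released code (see the repository linked above): the function names, the Euler-angle rotation convention, and the exact resampling choices are assumptions.

A pose perturbation on an equirectangular image amounts to rotating the viewing sphere and resampling: each output pixel is mapped to a unit direction, rotated, and looked up in the source image.

```python
# Minimal sketch: simulate a camera pose perturbation on an
# equirectangular image by rotating the unit sphere and resampling.
import numpy as np
import cv2  # assumed available for the final bilinear remap


def rotate_equirectangular(img, roll, pitch, yaw):
    """Resample an equirectangular image as if the camera were rotated
    by the given Euler angles (radians). Convention is illustrative."""
    h, w = img.shape[:2]

    # Spherical coordinates (longitude, latitude) of every output pixel.
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi   # [-pi, pi)
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi   # (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)

    # Unit direction vectors on the sphere, shape (h, w, 3).
    dirs = np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

    # Rotation matrix from roll/pitch/yaw (Z-Y-X composition, illustrative).
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R = Rz @ Ry @ Rx

    # Rotate the directions and map them back to source pixel coordinates.
    d = dirs @ R.T
    src_lon = np.arctan2(d[..., 1], d[..., 0])
    src_lat = np.arcsin(np.clip(d[..., 2], -1.0, 1.0))
    map_x = np.mod((src_lon + np.pi) / (2 * np.pi) * w - 0.5, w)
    map_y = np.clip((np.pi / 2 - src_lat) / np.pi * h - 0.5, 0, h - 1)

    return cv2.remap(img, map_x.astype(np.float32),
                     map_y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR)
```

For the calibration protocol, one standard realization of "supervised learned scaling factors" is a closed-form least-squares scale (and shift) fit between predictions and ground truth on a held-out split; the paper's exact protocol may differ.

```python
# Minimal sketch: map relative depth to metric depth with a scale and
# shift learned, per model, on a supervised calibration split.
import numpy as np


def fit_scale_shift(pred, gt, mask):
    """Least-squares s, t such that s * pred + t ~= gt over valid pixels
    (mask selects pixels with ground-truth depth)."""
    p, g = pred[mask].ravel(), gt[mask].ravel()
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s, t


def to_metric(pred, s, t):
    """Apply the factors learned on the calibration split at test time."""
    return s * pred + t
```

Learning one (s, t) pair per model on the same calibration split puts perspective-based and spherical-aware models on a common metric scale, so their errors can be compared directly.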
Sep-2025
ISBN: 9783032049674; 9783032049681
Gazzeh, S., Mazzola, G., Lo Presti, L., La Cascia, M. (2025). Sphere-Depth: A Benchmark for Depth Estimation Methods with Varying Spherical Camera Orientations. In Lecture Notes in Computer Science (pp. 364-374). Springer Science and Business Media Deutschland GmbH [10.1007/978-3-032-04968-1_31].
Files in this product:
978-3-032-04968-1.pdf (archive administrators only; a copy may be requested)
Description: Article
Type: Publisher's version
Size: 2.29 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/692831
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available