Earth system science and autonomous navigation sound unrelated, yet they share a critical need: scanning the world for 3D reasoning. In Earth system science, it is crucial to scan water surfaces in search of small-scale processes (e.g., breaking waves) and objects floating on them (e.g., sea ice) to advance knowledge and improve models' capacity to predict global climate change. For autonomous navigation (e.g., self-driving cars, delivery robots, assisted navigation), it is fundamental to detect and locate objects in 3D (e.g., icebergs and ships on the ocean, or pedestrians and vehicles on the roads) to plan safe yet efficient routes.
There have been significant advances in the computer vision community in reconstructing 3D information (also referred to as depth estimation) from affordable RGB cameras, as opposed to using expensive Lidar sensors. But limitations remain, and hence opportunities: in this project, the researchers propose to develop new deep-learning-based methods generic enough to estimate depth from one or two (stereo) 360 cameras instead of the eight currently needed, reducing the complexity and cost of deploying multiple cameras. The field, particularly within Earth system science, also lacks a sufficiently generic open-source library for applying state-of-the-art depth estimation techniques; the researchers therefore propose to release their generic learning framework as an open-source library catering to the needs of many imaging applications. They see large potential in harnessing the proposed 360-camera solution simultaneously for Earth system science and navigation: a single set-up on sea-going vessels can serve both as a near-field navigational aid and as a scientific instrument for wave reconstruction and sea ice classification, essentially replacing the current (very expensive) use of ship radars, whose detection range must be switched back and forth to accommodate both purposes.
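To make the stereo depth estimation idea concrete, the sketch below shows the classical pinhole-stereo relation Z = f · B / d that converts a disparity map (in pixels) into metric depth, given the focal length and camera baseline. This is a minimal illustrative example, not the project's method: the focal length and baseline values are assumptions, and real 360 cameras use equirectangular or fisheye projections for which this pinhole relation is only a local approximation.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a stereo disparity map (pixels) to metric depth (meters)
    using the pinhole relation Z = focal_px * baseline_m / disparity.
    Pixels with (near-)zero disparity are marked as infinitely far."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > eps
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative (assumed) parameters: 700 px focal length, 0.5 m baseline.
d = np.array([[7.0, 3.5],
              [0.0, 14.0]])
Z = disparity_to_depth(d, focal_px=700.0, baseline_m=0.5)
# 7 px -> 50 m, 3.5 px -> 100 m, 0 px -> inf (invalid), 14 px -> 25 m
```

The inverse relationship between disparity and depth is why small baselines limit useful range: distant objects such as sea ice at the horizon produce sub-pixel disparities, which is one reason learned depth estimators are attractive here.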