**Scalable Underwater Perception: Building the Foundations for Embodied Marine Autonomy**

Jingyu Song, Researcher, University of Michigan Robotics Department

In this talk, I will present scalable underwater perception solutions designed to empower intelligent and autonomous marine robots. Focusing on a critical challenge in underwater perception, the acquisition of more high-quality data, we start by discussing our recent project on real-time localization and dense mapping using a low-cost underwater robot, which demonstrates superior robustness even in textureless environments. This cost-effective approach is key to facilitating the wider deployment of underwater robotic systems. I will then share our ongoing research on a unified, high-performance underwater simulator built on NVIDIA Isaac Sim. This high-fidelity, scalable, and GPU-accelerated simulation platform replicates complex underwater environments with advanced realism, enabling rapid testing and validation of perception algorithms. Together, these projects build a solid foundation for enabling marine autonomy.
**Enabling Visual Understanding of Underwater Scenes**

Prof. A.N. Rajagopalan, IPCV Lab, Indian Institute of Technology Madras

Underwater scene understanding from visual data poses many challenges due to loss of contrast, low light, haziness, and color distortion in the captured images. Recovering a clean image and the corresponding 3D depth map from underwater observations is fundamental to high-level scene-understanding tasks. Towards this end, we have devised a learning methodology that effectively exploits both haze and geometry by harnessing the physical model of underwater image formation in conjunction with a view-synthesis constraint. The proposed method is completely self-supervised and simultaneously outputs the depth map and the restored image in real time (55 fps). To facilitate self-supervision, we collected a Dataset of Real-world Underwater Videos of Artifacts (DRUVA) in shallow sea waters. DRUVA is the first underwater video dataset that contains video sequences of 20 different submerged artifacts with almost full azimuthal coverage of each artifact. The proposed approach, the DRUVA dataset, and results and comparisons will be discussed along with recent developments.
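The "physical model for underwater image formation" invoked above is commonly taken to be the simplified single-scattering (haze) model, I(x) = J(x)·t(x) + B∞·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). A minimal sketch of this forward model follows; the exact formulation used in the talk may differ, and the attenuation coefficients `beta` and veiling light `B_inf` below are illustrative assumptions, not values from the work:

```python
import numpy as np

def underwater_forward_model(J, depth, beta, B_inf):
    """Synthesize an underwater observation from a clean image.

    Simplified single-scattering image-formation model (an assumption;
    the talk's exact formulation may differ):
        I(x) = J(x) * t(x) + B_inf * (1 - t(x)),   t(x) = exp(-beta * d(x))

    J:      clean image, H x W x 3, values in [0, 1]
    depth:  scene depth map, H x W (metres)
    beta:   per-channel attenuation coefficients, shape (3,)
    B_inf:  per-channel veiling (backscatter) light, shape (3,)
    """
    t = np.exp(-np.asarray(beta)[None, None, :] * depth[..., None])  # transmission
    return J * t + np.asarray(B_inf)[None, None, :] * (1.0 - t)

# Illustrative parameters: red attenuates fastest, blue veiling light dominates.
beta = [0.8, 0.4, 0.2]
B_inf = [0.1, 0.3, 0.5]
J = np.full((4, 4, 3), 0.8)

near = underwater_forward_model(J, np.zeros((4, 4)), beta, B_inf)      # d = 0: I ≈ J
far = underwater_forward_model(J, np.full((4, 4), 50.0), beta, B_inf)  # d large: I ≈ B_inf
```

Self-supervised approaches of this kind typically invert the model: a network predicts the depth map and restored image, the forward model re-renders the observation, and the photometric discrepancy (together with a view-synthesis constraint across video frames) supplies the training signal without ground truth.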