Enabling Immersive Indoor Navigation and Control Through Augmented Reality With Computer Vision

Published: 30 September 2024
Tags: Robotics, Mobile Robotics, Computer Vision, Algorithms, Network Protocols, Cloud Integration, Augmented Reality
Type: Conference Paper

Abstract

This project integrates computer vision, Augmented Reality (AR), and cloud-based communication to create a real-time 3D map of a robot's surroundings, giving users an engaging and immersive experience for indoor robot monitoring and control. Furthermore, combining an image semantic segmentation model with the Microsoft Kinect V2 depth camera introduces a novel method for estimating the distance to a particular object with 1 cm accuracy. This precise depth estimation is a significant benefit for tasks that require accurate object identification, segmentation, and depth information to generate a virtual representation of the robot's real-world environment. By incorporating AWS cloud services, we enable remote communication with the robot's environment while maintaining minimal delay. The commercial viability of this project lies in the wide range of applications that benefit from its easy-to-use interface, remote access capability, and precise object rendering, such as surgical robotics, instructional robotics platforms in classrooms, and automated guided vehicles in warehouses.
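The abstract's core technique pairs a semantic segmentation mask with per-pixel depth from the Kinect V2 to estimate the distance to a segmented object. The paper does not give its exact formulation, but a minimal sketch of one plausible approach is to take the median depth over the pixels the segmentation model labeled as the object (the function name, array shapes, and use of the median are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def object_distance_mm(depth_map: np.ndarray, mask: np.ndarray) -> float:
    """Estimate distance to a segmented object as the median depth (mm)
    over the pixels the segmentation model labeled as that object.

    depth_map: HxW array of per-pixel depth in millimetres (e.g. Kinect V2 frame).
    mask:      HxW boolean array, True where the object was segmented.
    """
    # Kinect-style sensors report 0 for pixels with no valid depth reading
    valid = mask & (depth_map > 0)
    if not valid.any():
        raise ValueError("no valid depth readings inside the mask")
    # Median is robust to stray outliers at the object's edges
    return float(np.median(depth_map[valid]))

# Toy example: a 4x4 depth frame with a 2x2 segmented object region
depth = np.array([[0, 1200, 1210, 3000],
                  [0, 1195, 1205, 3000],
                  [0,    0,    0, 3000],
                  [0,    0,    0, 3000]], dtype=np.float32)
mask = np.zeros((4, 4), dtype=bool)
mask[0:2, 1:3] = True
print(object_distance_mm(depth, mask))  # → 1202.5
```

Using a robust statistic such as the median, rather than a single center pixel, is one common way segmentation and depth data are fused, since masks often bleed slightly onto background pixels at object boundaries.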