This research aims to develop an algorithm capable of exploring a completely unfamiliar environment using vision-based techniques. The study applied simultaneous localization and mapping (SLAM) and navigation algorithms on the ROS simulator and TurtleBot robot, as well as the YOLOv7 neural network for object detection and depth estimation. The robot's ability to detect and track humans in its environment was improved by combining data from YOLOv7 and Lidar and by applying neural networks to predict human trajectories. The results demonstrate the effectiveness of vision-based techniques in developing autonomous robots that can navigate and explore an unfamiliar environment, making the approach useful for applications such as search and rescue operations, environmental monitoring, and human-robot interaction.
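The fusion of YOLOv7 detections with Lidar depth mentioned above can be illustrated with a standard pinhole back-projection: the center pixel of a detected bounding box, paired with a depth reading, yields the person's position in the camera frame. This is a minimal sketch only; the function names, intrinsic values, and bounding-box numbers are illustrative assumptions, not the report's actual implementation.

```python
# Sketch (not the report's code): locating a detected person in 3D by
# combining a YOLO-style bounding box with a depth reading (e.g. from
# Lidar), using the pinhole camera model. All values are assumed.

def bbox_center(x1, y1, x2, y2):
    """Center pixel (u, v) of a YOLO-style bounding box."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into camera-frame
    coordinates (x, y, z), given focal lengths fx, fy and principal
    point (cx, cy) from the camera intrinsics."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: a person whose box center coincides with the principal
# point lies straight ahead of the camera at the measured depth.
u, v = bbox_center(300, 180, 340, 300)
x, y, z = pixel_to_3d(u, v, depth=2.5, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```

The resulting (x, y, z) points, collected over successive frames, are the kind of per-person track that a trajectory-prediction network would consume.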

Document Type

Final Report

Author's School

McKelvey School of Engineering

Author's Department

Mechanical Engineering and Materials Science

Class Name

Mechanical Engineering and Material Sciences Independent Study

Date of Submission