
DETAILS
Overview
The Objective
The goal of this project was to transform a base PenguinPi mobile robot, initially capable of only rudimentary, low-level motion, into a semi- to fully-autonomous system. We were tasked with engineering a robot that could navigate a structured arena, map its environment, avoid hazards (represented by ArUco markers), and identify and drive to specific target items (fruits and vegetables) from a predefined "shopping list."
Development
System Architecture and Milestones
Development of the system was broken down into several milestones:
Kinematics & Movement: The base PenguinPi platform initially supported only rudimentary, low-level motion. We modelled the robot's differential-drive kinematics to bridge this gap and calibrated the model, ensuring that high-level navigational decisions were accurately translated into precise motor commands.
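The kinematic model behind this milestone can be sketched as standard differential-drive forward/inverse kinematics. This is a minimal illustration only: the wheel radius and baseline below are placeholder values, whereas the real figures came from our calibration step.

```python
import math

# Placeholder parameters: the actual PenguinPi wheel radius and wheel
# separation were determined by calibration, not these assumed values.
WHEEL_RADIUS = 0.033  # metres (assumed)
BASELINE = 0.14       # wheel separation in metres (assumed)

def forward_kinematics(w_left, w_right):
    """Map wheel angular velocities (rad/s) to a body twist (v, omega)."""
    v_l = w_left * WHEEL_RADIUS
    v_r = w_right * WHEEL_RADIUS
    v = (v_r + v_l) / 2.0            # linear velocity (m/s)
    omega = (v_r - v_l) / BASELINE   # angular velocity (rad/s)
    return v, omega

def inverse_kinematics(v, omega):
    """Map a commanded twist back to wheel angular velocities (rad/s)."""
    v_l = v - omega * BASELINE / 2.0
    v_r = v + omega * BASELINE / 2.0
    return v_l / WHEEL_RADIUS, v_r / WHEEL_RADIUS
```

With this pair of mappings, a high-level planner can reason in (v, omega) while the motor controller works in wheel speeds, which is the "bridge" the milestone describes.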
Mapping & Localisation: The team implemented SLAM (Simultaneous Localisation and Mapping) alongside classical computer vision techniques. This enabled the robot to detect ArUco markers for accurate pose estimation while constructing a real-time map of its surroundings.
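The mapping half of this milestone reduces to a frame transform: once SLAM gives a robot pose estimate, a marker detected in the robot's frame can be placed on the world map. A minimal 2D sketch of that transform (the function name is illustrative, not from our codebase):

```python
import numpy as np

def marker_to_world(robot_pose, marker_in_robot):
    """Transform a marker position measured in the robot frame into
    world coordinates, given the SLAM pose estimate (x, y, theta)."""
    x, y, theta = robot_pose
    # 2D rotation from robot frame to world frame
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return np.array([x, y]) + R @ np.asarray(marker_in_robot, dtype=float)
```

For example, a robot at (1, 0) facing +y (theta = pi/2) that sees a marker 2 m straight ahead places it at world coordinates (1, 2). The inverse of the same transform is what lets known marker positions correct the pose estimate.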
Object Detection & Target Recognition: To complete the task, the system needed to recognise its specific goals. A machine learning classifier was developed and integrated to identify the target fruits and vegetables and distinguish them from environmental clutter.
Path Planning & Obstacle Avoidance (my focus): Working from the live SLAM map, I implemented a modified A* path planner that overlays Gaussian cost fields on obstacles, so the planner could trade off safety (clearance from hazards) against path length. This gave the system global, obstacle-aware navigation: it computed efficient routes to the target items while maintaining a safe margin from hazards.
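The planning approach above can be sketched as grid A* whose edge cost adds a Gaussian "hill" around each obstacle, with a weight that tunes safety against distance. This is a simplified illustration, not our submitted code: the grid, sigma, peak, and safety_weight values are assumptions, and the real planner consumed the live SLAM map.

```python
import heapq
import itertools
import math

def gaussian_cost(cell, obstacles, sigma=2.0, peak=50.0):
    """Soft penalty: each obstacle contributes a Gaussian 'hill', so
    paths prefer clearance without hard-blocking narrow gaps."""
    cost = 0.0
    for ox, oy in obstacles:
        d2 = (cell[0] - ox) ** 2 + (cell[1] - oy) ** 2
        cost += peak * math.exp(-d2 / (2 * sigma ** 2))
    return cost

def astar(start, goal, width, height, obstacles, safety_weight=1.0):
    """Grid A* where edge cost = step length + weighted Gaussian obstacle
    cost. Larger safety_weight favours clearance over shorter paths."""
    def h(c):  # admissible Euclidean heuristic
        return math.hypot(goal[0] - c[0], goal[1] - c[1])

    tie = itertools.count()  # tie-breaker so heap never compares cells
    open_set = [(h(start), next(tie), 0.0, start, None)]
    came_from, g_best = {}, {start: 0.0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:        # already expanded via a cheaper route
            continue
        came_from[cur] = parent
        if cur == goal:             # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1),
                       (-1, -1), (-1, 1), (1, -1), (1, 1)]:
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            ng = g + math.hypot(dx, dy) + safety_weight * gaussian_cost(nxt, obstacles)
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None  # no route found
```

Because the obstacle penalty is smooth rather than binary, raising safety_weight pushes routes further from hazards, while lowering it recovers plain shortest-path A*; that single knob is the safety-versus-distance trade-off described above.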
Project Type
University Project
Tools Used
Python, ROS2, Gazebo Simulation, PenguinPi Robot, Raspberry Pi
Duration
12 Weeks
Year
2024





GALLERY