Optimised obstacle detection and avoidance model for autonomous vehicle navigation

dc.contributor.advisorAvdelidis, Nico
dc.contributor.advisorTang, Gilbert
dc.contributor.authorAdiuku, Ndidiamaka
dc.date.accessioned2025-05-07T10:54:09Z
dc.date.available2025-05-07T10:54:09Z
dc.date.freetoread2025-05-07
dc.date.issued2024-02
dc.description.abstractDriven by cutting-edge research in AI vision, sensor fusion and autonomous systems, intelligent robotics is poised to revolutionise aviation hangars, shaping the "hangars of the future" by reducing inspection time and improving defect detection accuracy. Many hangar environments, especially in maintenance, repair and overhaul (MRO) operations, rely on manual processes and on algorithms that have not been optimised for the increasing complexity of these settings, which includes varied obstacle structures, often low-light conditions, and frequent changes in the scene. Mobile robot solutions therefore demand enhanced perception, accurate obstacle avoidance, and efficient path planning, all essential for effective navigation in busy hangar environments and for aircraft inspections. The ROS navigation stack has been at the centre of most solutions; it is generally efficient in static settings but limited in complex environments. These systems are often computationally intensive and require pre-configuration of environmental parameters, making them less effective in changing environments with real-time demands. Deep learning models integrated with ROS have shown promising improvements, leveraging experiential learning and large datasets. However, accurately detecting obstacles of different shapes and sizes, especially under varying lighting conditions, remains a significant challenge and affects safe navigation. To overcome these challenges in complex environments, this research proposes a novel solution for enhanced obstacle detection, avoidance and path planning. Our system fuses LiDAR and camera data with a real-time, accurate YOLOv7/YOLOv5 object detection model for robust identification of diverse obstacles. Additionally, we combine this with ROS planners, including Dijkstra, RRT and DWA, to optimise path planning and enable collision-free navigation.
The system was validated in ROS Gazebo and on a real TurtleBot3 robot. It achieved zero collisions with the YOLOv7 and RRT integration, a 2.7% increase in obstacle detection accuracy, and an estimated 2.4% faster navigation than the baseline methods.
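The camera–LiDAR fusion idea the abstract describes can be sketched as follows. This is a minimal illustration under assumed interfaces — the `Detection` class, the bearing-sector lookup, the 360-beam scan layout, and the 0.5 m safety margin are all illustrative assumptions, not the thesis implementation:

```python
# Sketch of fusing a YOLO-style detection with LiDAR ranges: find the
# closest range reading inside the detection's angular sector, then
# flag whether the robot must avoid/replan. All thresholds and
# interfaces here are hypothetical.
from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    bearing_deg: float      # obstacle bearing relative to robot heading
    half_width_deg: float   # angular half-width of the bounding box


def min_range_in_sector(scan, det, fov_deg=360.0):
    """Closest LiDAR range inside the detection's angular sector.

    `scan` is a list of ranges, index 0 at bearing 0 deg, evenly spaced
    over `fov_deg`.
    """
    n = len(scan)
    step = fov_deg / n
    lo = det.bearing_deg - det.half_width_deg
    hi = det.bearing_deg + det.half_width_deg
    angles = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    return min(scan[int((a % fov_deg) / step) % n] for a in angles)


def needs_avoidance(scan, detections, safety_m=0.5):
    """Fusion decision: replan if any detected obstacle is closer
    than the safety distance."""
    return any(min_range_in_sector(scan, d) < safety_m for d in detections)


# Usage: a 360-beam scan, mostly clear, with a near object dead ahead.
scan = [3.0] * 360
for i in list(range(355, 360)) + list(range(0, 6)):
    scan[i] = 0.4
person = Detection("person", bearing_deg=0.0, half_width_deg=5.0)
print(needs_avoidance(scan, [person]))  # True: 0.4 m is inside the 0.5 m margin
```

In a ROS system the scan would come from a `sensor_msgs/LaserScan` topic and the detections from the vision node; here both are stubbed so the fusion logic is visible in isolation.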
dc.description.coursenamePhD in Transport Systems
dc.identifier.urihttps://dspace.lib.cranfield.ac.uk/handle/1826/23857
dc.language.isoen
dc.publisherCranfield University
dc.publisher.departmentSATM
dc.rights© Cranfield University, 2024. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright holder.
dc.subjectAutonomous navigation
dc.subjectobject detection
dc.subjectobstacle avoidance
dc.subjectmobile robot
dc.subjectdeep learning
dc.subjectcomputer vision
dc.subjectROS navigation
dc.titleOptimised obstacle detection and avoidance model for autonomous vehicle navigation
dc.typeThesis
dc.type.qualificationlevelDoctoral
dc.type.qualificationnamePhD

Files

Original bundle

Name: Adiuku_A_2023.pdf
Size: 3.3 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.63 KB
Format: Item-specific license agreed upon to submission