Browsing by Author "Adiuku, Ndidiamaka"
Item Open Access
Advancements in 3D x-ray imaging: development and application of a twin robot system (Brunel University, 2024-08-31)
Asif, Seemal; Hryshchenko Sumina, Yuliya; Holden, Martin; Contino, Matteo; Adiuku, Ndidiamaka; Hughes, Bryn; Plastropoulos, Angelos; Avdelidis, Nico; Webb, Phil
The development of a novel twin robot system for 3D X-ray imaging integrates advanced robotic control with mobile X-ray technology to significantly enhance diagnostic accuracy and efficiency in both medical and industrial applications. Key technical aspects, including innovative design specifications and system architecture, are discussed in detail. The twin robots operate in tandem, providing comprehensive imaging capabilities with high precision. This approach offers potential applications ranging from medical diagnostics to industrial inspection, improving significantly on traditional imaging methods. Preliminary results demonstrate the system's effectiveness in producing detailed 3D images, underscoring its potential for wide-ranging uses. Future research will focus on optimising image quality and automating the imaging process to increase utility and efficiency. This development is a step forward in integrating robotics and imaging technology, promising enhanced outcomes in various fields.

Item Open Access
Advancements in learning-based navigation systems for robotic applications in MRO hangar: review (MDPI, 2024-02-21)
Adiuku, Ndidiamaka; Avdelidis, Nicolas P.; Tang, Gilbert; Plastropoulos, Angelos
The field of learning-based navigation for mobile robots is experiencing a surge of interest from research and industry sectors. The application of this technology to visual aircraft inspection tasks within a maintenance, repair, and overhaul (MRO) hangar necessitates efficient perception and obstacle avoidance capabilities to ensure a reliable navigation experience.
The present reliance on manual labour, static processes, and outdated technologies limits operational efficiency in the inherently dynamic and increasingly complex real-world hangar environment. This challenging environment restricts the practical application of conventional methods and their real-time adaptability to change. In response, research efforts in recent years have advanced the integration of machine learning to enhance navigational capability in both static and dynamic scenarios. Most of these studies, however, are not specific to the MRO hangar environment, but related challenges have been addressed and applicable solutions developed. This paper provides a comprehensive review of learning-based strategies, with an emphasis on advances in deep learning, object detection, and the integration of multiple approaches into hybrid systems. The review covers the application of learning-based methodologies to real-time navigation tasks, encompassing environment perception, obstacle detection and avoidance, and path planning using vision-based sensors. The concluding section addresses the prevailing challenges and prospective directions for development in this domain.

Item Open Access
Improved hybrid model for obstacle detection and avoidance in robot operating system framework (rapidly exploring random tree and dynamic windows approach) (MDPI, 2024-04-02)
Adiuku, Ndidiamaka; Avdelidis, Nicolas P.; Tang, Gilbert; Plastropoulos, Angelos
The integration of machine learning and robotics brings promising potential for tackling the challenges of mobile robot navigation in industry. Real-world environments are highly dynamic and unpredictable, with increasing requirements for efficiency and safety. This demands a multi-faceted approach combining advanced sensing, robust obstacle detection, and avoidance mechanisms for an effective robot navigation experience.
While hybrid methods built on the default Robot Operating System (ROS) navigation stack have demonstrated significant results, their performance in real-time, highly dynamic environments remains a challenge. These environments are characterised by continuously changing conditions that can impact the precision of obstacle detection and the efficiency of avoidance control decisions. In response to these challenges, this paper presents a novel solution that combines a rapidly exploring random tree (RRT)-integrated ROS navigation stack with a pre-trained YOLOv7 object detection model, extending the capability of the previously developed NAV-YOLO system. The proposed approach leverages the high accuracy of YOLOv7 obstacle detection and the efficient path-planning capabilities of RRT and the dynamic window approach (DWA) to improve the navigation performance of mobile robots in complex, dynamically changing real-world settings. Extensive simulation and real-world robot platform experiments were conducted to evaluate the efficiency of the proposed solution. The results demonstrate a high level of obstacle avoidance capability, ensuring the safety and efficiency of mobile robot navigation operations in aviation environments.

Item Open Access
Mobile robot obstacle detection and avoidance with NAV-YOLO (EJournal Publishing, 2024-03-22)
Adiuku, Ndidiamaka; Avdelidis, Nicolas P.; Tang, Gilbert; Plastropoulos, Angelos; Diallo, Yanis
Intelligent robotics is gaining significance in maintenance, repair, and overhaul (MRO) hangar operations, where mobile robots navigate complex and dynamic environments for aircraft visual inspection. Aircraft hangars are usually busy and changeable, with objects of varying shapes and sizes presenting harsh obstacles and conditions that can lead to potential collisions and safety hazards. This makes obstacle detection and avoidance critical for safe and efficient robot navigation tasks.
Conventional methods suffer from computational issues, while learning-based approaches are limited in detection accuracy. This paper proposes a vision-based navigation model that integrates a pre-trained YOLOv5 object detection model into a Robot Operating System (ROS) navigation stack to optimise obstacle detection and avoidance in a complex environment. The experiment is validated and evaluated in ROS–Gazebo simulation and on a TurtleBot3 Waffle Pi robot platform. The results show that the robot can reliably detect and avoid obstacles without colliding while navigating through different checkpoints to the target location.
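The two NAV-YOLO abstracts above do not give implementation detail, but the pattern they describe — a YOLO-family detector whose bounding boxes feed a reactive avoidance controller inside a ROS navigation loop — can be sketched minimally. Everything in the sketch below (the `Detection` fields, the thresholds, the velocity values, and the width-as-proximity heuristic) is illustrative and assumed, not taken from the papers:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detector output: class label, confidence, and a normalised
    bounding box (x-centre, y-centre, width, height, all in [0, 1]),
    in the style YOLO-family models report."""
    label: str
    confidence: float
    x_center: float
    y_center: float
    width: float
    height: float

def avoidance_command(detections, conf_threshold=0.5, size_threshold=0.3):
    """Map a frame's detections to a (linear, angular) velocity command.

    Heuristic: a box wider than size_threshold is treated as a near
    obstacle (a wider box is assumed closer); stop and turn if it sits
    dead ahead, steer away if it is off to one side, otherwise cruise.
    """
    CRUISE, TURN = 0.2, 0.5  # m/s, rad/s (illustrative values)
    near = [d for d in detections
            if d.confidence >= conf_threshold and d.width >= size_threshold]
    if not near:
        return CRUISE, 0.0                       # path clear: go straight
    closest = max(near, key=lambda d: d.width)   # widest box ~ nearest
    if abs(closest.x_center - 0.5) < 0.1:
        return 0.0, TURN                         # dead ahead: stop and turn
    # obstacle off to one side: keep moving, steer away from it
    direction = -1.0 if closest.x_center > 0.5 else 1.0
    return CRUISE, direction * TURN
```

In a ROS node the returned pair would typically be published as a `geometry_msgs/Twist` on `/cmd_vel`, with the detector running on the camera stream in a separate callback; the real systems additionally fuse this with the navigation stack's planners (RRT and DWA in the improved hybrid model) rather than reacting to detections alone.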