Browsing by Author "Tang, Gilbert"
Now showing 1 - 20 of 22
Item Open Access Advancements in learning-based navigation systems for robotic applications in MRO hangar: review (MDPI, 2024-02-21) Adiuku, Ndidiamaka; Avdelidis, Nicolas P.; Tang, Gilbert; Plastropoulos, Angelos
The field of learning-based navigation for mobile robots is experiencing a surge of interest from the research and industry sectors. The application of this technology to visual aircraft inspection tasks within a maintenance, repair, and overhaul (MRO) hangar necessitates efficient perception and obstacle avoidance capabilities to ensure reliable navigation. The present reliance on manual labour, static processes, and outdated technologies limits operational efficiency in the inherently dynamic and increasingly complex real-world hangar environment. This challenging environment limits the practical application of conventional methods and their real-time adaptability to change. In response to these challenges, research efforts in recent years have advanced through the integration of machine learning, aimed at enhancing navigational capability in both static and dynamic scenarios. However, most of these studies have not been specific to the MRO hangar environment; nevertheless, related challenges have been addressed and applicable solutions developed. This paper provides a comprehensive review of learning-based strategies with an emphasis on advancements in deep learning, object detection, and the integration of multiple approaches to create hybrid systems. The review delineates the application of learning-based methodologies to real-time navigational tasks, encompassing environment perception, obstacle detection, avoidance, and path planning through the use of vision-based sensors.
The concluding section addresses the prevailing challenges and prospective development directions in this domain.

Item Open Access An AI-powered navigation framework to achieve an automated acquisition of cardiac ultrasound images (Springer Nature, 2023-09-11) Soemantoro, Raska; Kardos, Attila; Tang, Gilbert; Zhao, Yifan
Echocardiography is an effective tool for diagnosing cardiovascular disease. However, numerous challenges affect its accessibility, including skill requirements, workforce shortage, and sonographer strain. We introduce a navigation framework for the automated acquisition of echocardiography images, consisting of three modules: perception, intelligence, and control. The perception module contains an ultrasound probe, a probe actuator, and a locator camera. Information from this module is sent to the intelligence module, which grades the quality of an ultrasound image for different echocardiography views. The window search algorithm in the control module governs the decision-making process in probe movement, finding the best location based on known probe traversal positions and image quality. We conducted a series of simulations using the HeartWorks simulator to assess the proposed framework. This study achieved an accuracy of 99% for the image quality model, 96% for the probe locator model, and 99% for the view classification model, each trained on an 80/20 training and testing split. We found that the best search area corresponds with general guidelines: at the anatomical left of the sternum, between the 2nd and 5th intercostal spaces. Additionally, the likelihood of successful acquisition is also driven by how long the algorithm stores past coordinates and how much it corrects itself. The results suggest that an automated echocardiography system is feasible using the proposed framework.
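The window search algorithm in the abstract above is described only at a high level. As an illustration, a hypothetical greedy sketch captures the two ingredients it mentions, a memory of past probe coordinates and self-correction; the `window_search` function, the toy quality surface, and the grid bounds are all assumptions, not the authors' implementation:

```python
def window_search(quality, start, grid=10, memory=20, patience=5):
    """Greedy search over a grid of candidate probe positions.

    Moves to the best-scoring unvisited neighbour, remembering the last
    `memory` visited cells so it can back-track (self-correct) when no
    neighbour improves on the best score seen so far.
    """
    visited = []                      # past probe coordinates, newest last
    best_pos, best_q = start, quality(start)
    pos, stalls = start, 0
    while stalls < patience:
        x, y = pos
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx or dy)
                      and 0 <= x + dx < grid and 0 <= y + dy < grid
                      and (x + dx, y + dy) not in visited]
        if not neighbours:            # dead end: back-track to a remembered cell
            pos = visited[-1] if visited else start
            visited = visited[:-1]
            stalls += 1
            continue
        pos = max(neighbours, key=quality)
        visited = (visited + [pos])[-memory:]
        q = quality(pos)
        if q > best_q:
            best_pos, best_q, stalls = pos, q, 0
        else:
            stalls += 1
    return best_pos, best_q

# Toy quality surface peaking at (3, 6) -- a stand-in for the image-quality model.
peak = (3, 6)
q = lambda p: -((p[0] - peak[0]) ** 2 + (p[1] - peak[1]) ** 2)
print(window_search(q, start=(0, 0)))   # converges on the peak
```

Here the `memory` and `patience` parameters play the role of how long past coordinates are stored and how much the search corrects itself, the two factors the abstract identifies as driving successful acquisition.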
The long-term vision is of a widely accessible and accurate heart imaging capability within hospitals and community-based settings that enables timely diagnosis of early-stage heart disease.

Item Open Access An aircraft-manipulator system for virtual flight testing of longitudinal flight dynamics (MDPI, 2024-12-15) Ishola, Ademayowa A; Whidborne, James F; Tang, Gilbert
A virtual flight test is the process of flying an aircraft model inside a wind tunnel in a manner that replicates free flight. In this paper, a 3-DOF aircraft-manipulator system is proposed that can be used for longitudinal-dynamics virtual flight tests. The system consists of a manipulator arm with two rotational degrees of freedom and an aircraft wind tunnel model attached at the third joint. This aircraft-manipulator system is constrained to operate only in the longitudinal motion of the aircraft. Thus, the manipulator controls the surge and heave of the aircraft whilst the pitch is free to rotate and can be actively controlled by means of an all-moving tailplane if required. In this initial study, a flight dynamics model of the aircraft is used to obtain dynamic response trajectories of the aircraft in free flight. A model of the coupled aircraft-manipulator system, developed using the Euler method, is presented, and PID controllers are used to control the manipulator so that the aircraft follows the free-flight trajectory (with respect to the air). Inverse kinematics is used to produce the reference joint angles for the manipulator.
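The inverse-kinematics step mentioned above, producing reference joint angles for a manipulator with two rotational joints, reduces in the planar case to the standard two-link solution. A minimal sketch, with illustrative link lengths and target (this is not the paper's model):

```python
from math import atan2, acos, cos, sin, hypot

def two_link_ik(x, z, l1, l2, elbow_up=True):
    """Joint angles (shoulder, elbow) placing the tip of a planar
    two-link arm at (x, z). Raises ValueError if out of reach."""
    r = hypot(x, z)
    if not abs(l1 - l2) <= r <= l1 + l2:
        raise ValueError("target out of reach")
    # Law of cosines gives the elbow angle from the target distance.
    cos_e = (r * r - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = acos(max(-1.0, min(1.0, cos_e)))
    if elbow_up:
        elbow = -elbow
    shoulder = atan2(z, x) - atan2(l2 * sin(elbow), l1 + l2 * cos(elbow))
    return shoulder, elbow

def two_link_fk(shoulder, elbow, l1, l2):
    """Forward kinematics, used here to sanity-check the IK."""
    x = l1 * cos(shoulder) + l2 * cos(shoulder + elbow)
    z = l1 * sin(shoulder) + l2 * sin(shoulder + elbow)
    return x, z

# Round-trip check at an arbitrary reachable surge/heave target.
q1, q2 = two_link_ik(0.8, 0.4, l1=0.6, l2=0.7)
print(two_link_fk(q1, q2, 0.6, 0.7))   # recovers (0.8, 0.4)
```

The forward-kinematics round trip recovers the commanded surge/heave position, which is the property that generating reference joint angles from a free-flight trajectory relies on.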
The system is simulated in MATLAB/Simulink, and a virtual flight test trajectory is compared with a free-flight test trajectory, demonstrating the potential of the proposed system for virtual flight tests.

Item Open Access Application of model-based system engineering to a planetary rover design (IEEE, 2024-01-01) Castaing, Hugo; Tang, Gilbert; Chacin, Marco; Bonnefoi, Fabien
Traditional system engineering methods have shown their limits when applied to highly complex systems such as rovers. In parallel, the digitalisation of industry and the democratisation of the use of models in engineering are forcing the system engineering field to adapt and change its practices by adopting a new approach: Model-Based System Engineering (MBSE). This project aimed to provide a comprehensive example of an MBSE approach applied to a planetary rover design case study. The MathWorks MBSE toolchain is used to reverse engineer the NASA Sojourner rover. As a result, the defined requirements and the functional and logical architectures are implemented and dynamically linked together. Additionally, a multi-physical simulation model of the rover's power and mobility system is implemented. Requirements, architectures, and physical models are linked together to form a unique and comprehensive knowledge base, providing a vertical model-centric approach. The results of the simulation are used to conduct a trade study and deduce a design change for the system. The results presented in this study form the basis for a discussion evaluating the benefits and limitations of the MBSE approach as reported in the scientific literature.

Item Open Access Autonomous ground refuelling approach for civil aircrafts using computer vision and robotics (IEEE, 2021-11-15) Yildirim, Suleyman; Rana, Zeeshan; Tang, Gilbert
3D visual servoing systems need to detect the object and its pose in order to perform. As a result, accurate and fast object detection and pose estimation play a vital role.
Most visual servoing methods use low-level object detection and pose estimation algorithms. However, many approaches detect objects in 2D RGB sequences for servoing, which lacks reliability when estimating the object's pose in 3D space. To cope with these problems, a joint feature extractor is first employed to fuse the object's 2D RGB image and 3D point cloud data. A novel method called PosEst is then proposed to exploit the correlation between 2D and 3D features. The results of the custom model on the test data are: precision 0.9756, recall 0.9876, F1 score (beta=1) 0.9815, F1 score (beta=2) 0.9779. The method used in this study can easily be applied to 3D grasping and 3D tracking problems to make the solutions faster and more accurate. In a period when electric vehicles and autonomous systems are gradually becoming part of our lives, this study offers a safer, more efficient, and more comfortable environment.

Item Open Access CNN-fusion architecture with visual and thermographic images for object detection (SPIE, 2023-06-12) Adiuku, Amaka; Avdelidis, Nicolas Peter; Tang, Gilbert; Plastropoulos, Angelos; Perinpanayagam, Suresh
Mobile robots performing aircraft visual inspection will play a vital role in future automated aircraft maintenance, repair and overhaul (MRO) operations. Autonomous navigation requires understanding the surroundings to automate and enhance the visual inspection process. Current neural network (NN) based obstacle detection and collision avoidance techniques are suitable for well-structured objects. However, their ability to distinguish between solid obstacles and low-density moving objects is limited, and their performance degrades in low-light scenarios. Thermal images can be used to complement the limitations of low-light visual images in many applications, including inspections.
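The F1 scores quoted for the PosEst model above follow the standard F-beta definition; a short illustrative sketch makes the relation between precision, recall, and beta explicit (the precision and recall figures are taken from the abstract; the function is generic, not the paper's code):

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score: beta > 1 weights recall more heavily, beta < 1 precision."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.9756, 0.9876          # precision / recall reported for PosEst
print(f_beta(p, r, beta=1))    # approx. 0.9816, matching the reported F1
print(f_beta(p, r, beta=2))    # the recall-weighted variant
```

With beta=1 this reproduces the reported F1 to within rounding (0.98156 to five figures).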
This work proposes a Convolutional Neural Network (CNN) fusion architecture that enables the adaptive fusion of visual and thermographic images. The aim is to enhance autonomous robotic systems' perception and collision avoidance in dynamic environments. The model has been tested with RGB and thermographic images acquired in Cranfield University's hangar, which hosts a Boeing 737-400, and in a TUI hangar. The experimental results show that the fusion-based CNN framework increases object detection accuracy compared to conventional models.

Item Open Access Code - Nonlinear Dynamics and Control of a Novel 3-DOF Aircraft-Manipulator for Dynamic Wind Tunnel Tests (2024-09-16) Whidborne, James; Tang, Gilbert; Ishola, Ademayowa

Item Open Access A dataset for autonomous aircraft refueling on the ground (AGR) (IEEE, 2023-09-01) Kuang, Boyu; Barnes, Stuart; Tang, Gilbert; Jenkins, Karl W.
Automatic aircraft ground refueling (AAGR) can improve the safety, efficiency, and cost-effectiveness of aircraft ground refueling (AGR), a critical and frequent operation on almost all aircraft. Recent AAGR relies on machine vision, artificial intelligence, and robotics to implement automation. An essential step for automation is AGR scene recognition, which can support further component detection, tracking, process monitoring, and environmental awareness. As in many practical and commercial applications, aircraft refueling data is usually confidential, and no standardized workflow or definition is available. These are the prerequisites for, and critical challenges to, deploying and benefitting from advanced data-driven AGR. This study presents a dataset (the AGR Dataset) for AGR scene recognition using image crawling, augmentation, and classification, which has been made available to the community. The AGR Dataset comprises over 3k images crawled from 13 databases (over 26k images after augmentation), covering different aircraft, illumination, and environmental conditions.
The ground-truth labeling was conducted manually using a proposed tree-formed decision workflow and six specific AGR tags. Various professionals have independently reviewed the AGR Dataset to keep it free of bias. This study proposes the first aircraft refueling image dataset, together with image-labeling software with a UI to automate the labeling workflow.

Item Open Access Design and control of a 7-degree-of-freedom symmetric manipulator module for in-orbit operations (IEEE, 2024-01-01) Cotrina de los Mozos, Irene; Tang, Gilbert
This paper proposes a modular redesign of multi-armed robotic systems with application to in-orbit operations. The manipulators included in the robot are made independent, and the connection to the central body is achieved through additional standard interfaces (SI). This grants the system the ability to self-repair through self-reconfiguration. The arms are also upgraded, giving rise to symmetrical manipulators with 7 degrees of freedom (DOF) capable of locomoting by themselves. The models of the new manipulator design are presented along with its nominal workspace. In addition, a novel algorithm for the control of the arm is developed based on the FABRIK approach. The algorithm is tested using diverse target poses as input, and the corresponding results are shown. All models and sketches were created in SolidWorks; the algorithm was coded in MATLAB.

Item Open Access The design and evaluation of an ergonomic contactless gesture control system for industrial robots (Hindawi Publishing Corporation, 2018-05-14) Tang, Gilbert; Webb, Phil
In industrial human-robot collaboration, variability commonly exists in the operation environment and the components, which induces uncertainty and error that require frequent manual intervention for rectification. Conventional teach pendants can be physically demanding to use and require user training prior to operation. Thus, a more effective control interface is required.
In this paper, the design and evaluation of a contactless gesture control system using Leap Motion is described. The design process involved the use of the RULA human factors analysis tool. Separately, an exploratory usability test was conducted to compare three usability aspects of the developed gesture control system against an off-the-shelf conventional touchscreen teach pendant. This paper focuses on the user-centred design methodology of the gesture control system. The novelties of this research are the use of human factors analysis tools in the human-centred development process, as well as a gesture control design that enables users to control an industrial robot's motion by its joints and tool centre point position. The system has potential for use as an input device for industrial robot control in human-robot collaboration settings. The developed gesture control system targets applications in system recovery and error correction in flexible manufacturing environments shared between humans and robots. The system allows operators to control an industrial robot without the need for significant training.

Item Open Access Development and assessment of a contactless 3D joystick approach to industrial manipulator gesture control (Elsevier, 2022-11-18) Bordoni, Sam; Tang, Gilbert
This paper explores a novel design of ergonomic gesture control with visual feedback for the UR3 collaborative robot that aims to allow users with little to no familiarity with robots to complete basic tasks and programming. The principle behind the design mirrors that of a 3D joystick but utilises the Leap Motion device to track the user's hands, removing any need for a physical joystick or buttons. The Rapid Upper Limb Assessment (RULA) ergonomic tool was used to inform the design and ensure the system was safe for long-term use.
The developed system was assessed using the RULA tool for an ergonomic score, and through an experiment in which 19 voluntary participants completed a basic task with both the gesture system and the UR3's RTP (Robot Teach Pendant), then filled out SUS (System Usability Scale) questionnaires to compare the usability of the two systems. The task involved controlling the robot to pick up a pipe and insert it into a series of slots of decreasing diameter, allowing both the speed and accuracy of each system to be compared. The experiment found that even those with no previous robot experience were able to complete the tasks after only a brief description of how the gesture system works. Despite beating the RTP's ergonomic score, the system narrowly lost on average usability score. However, as a contactless gesture system it has other advantages over the RTP, and through this experiment many potential improvements were identified, paving the way for future work into assessing the significance of the visual feedback and comparing this system against other gesture-based systems.

Item Open Access The development and evaluation of Robot Light Skin: A novel robot signalling system to improve communication in industrial human-robot collaboration (Elsevier, 2018-09-12) Tang, Gilbert; Webb, Phil; Thrower, John
In a human-robot collaborative production system, the robot may request interaction or notify the human operator if an uncertainty arises. Conventional industrial tower lights were designed for generic machine signalling purposes, which may not be the ideal solution for robot signalling in a collaborative setting. In this type of system, human operators may monitor multiple robots while carrying out a manual task, so it is important to minimise the diversion of their attention.
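The SUS (System Usability Scale) questionnaires used in the usability comparisons above are scored with a fixed, well-known formula; a minimal sketch of standard SUS scoring (not code from the papers):

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribute score - 1);
    even-numbered items are negatively worded (contribute 5 - score).
    The summed contributions are scaled by 2.5 to the 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
print(sus_score([4, 2, 4, 2, 3, 2, 4, 2, 4, 3]))  # 70.0
```

Because odd items are positive and even items negative, a perfect response pattern alternates 5 and 1, giving the maximum score of 100.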
This paper presents a novel robot signalling solution, the Robot Light Skin (RLS), an integrated signalling system that can be used on most articulated robots. An experiment was conducted to validate this concept in terms of its effect on improving operators' reaction time, hit-rate, awareness, and task performance. The results showed that participants reacted faster to the RLS and achieved a higher hit-rate. An eye tracker used in the experiment showed a reduction in diversion away from the manual task when using the RLS. Future studies should explore the effect of the RLS concept on large-scale systems and multi-robot systems.

Item Open Access The development of a human-robot interface for industrial collaborative system (Cranfield University, 2016-04) Tang, Gilbert; Webb, Phil
Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, a number of manufacturing applications involve complex tasks and inconsistent components, which prohibit the use of fully automated solutions in the foreseeable future. A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot can perform simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of human hands. Robots in such a system will operate as "intelligent assistants". In a collaborative working environment, robot and human share the same working area and interact with each other.
This level of interaction requires effective ways of communicating and collaborating to avoid unwanted conflict. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system which has been integrated using off-the-shelf components. The system should be capable of receiving input from the human user via an intuitive method, as well as indicating its status to the user effectively. The HRI was developed using a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.

Item Open Access Do speed and proximity affect human-robot collaboration with an industrial robot arm? (Springer, 2022-01-07) Story, Matthew; Webb, Phil; Fletcher, Sarah R.; Tang, Gilbert; Jaksic, Cyril; Carberry, Jon
Current guidelines for Human-Robot Collaboration (HRC) allow a person to be within the working area of an industrial robot arm whilst maintaining their physical safety. However, research into increasing automation and social robotics has shown that attributes of the robot, such as speed and proximity setting, can influence a person's workload and trust. Despite this, studies into how an industrial robot arm's attributes affect a person during HRC are limited and require further development.
Therefore, a study was conducted to assess the impact of a robot's speed and proximity setting on a person's workload and trust during an HRC task. Eighty-three participants from Cranfield University and the ASK Centre, BAE Systems Samlesbury, completed a task in collaboration with a UR5 industrial robot arm running at different speeds and proximity settings; workload and trust were measured after each run. Workload was found to be positively related to speed but not significantly related to proximity setting. No significant effect on trust was found for either speed or proximity setting. This study showed that even when operating within current safety guidelines, an industrial robot can affect a person's workload. The lack of a significant effect on trust was attributed to the robot's relatively small size and high success rate; trust may therefore be influenced by larger industrial robots. As workload and trust can have a significant impact on a person's performance and satisfaction, it is key to understand this relationship early in the development and design of collaborative work cells to ensure safety and high productivity.

Item Open Access Evaluating the use of human aware navigation in industrial robot arms (Walter de Gruyter, 2021-08-27) Story, Matthew; Jaksic, Cyril; Fletcher, Sarah R.; Webb, Philip; Tang, Gilbert; Carberry, Jonathan
Although the principles followed by modern standards for interaction between humans and robots follow the First Law of Robotics popularised in science fiction in the 1960s, the current standards regulating the interaction between humans and robots emphasise the importance of physical safety. However, they are less developed in another key dimension: psychological safety. As sales of industrial robots have been increasing over recent years, so has the frequency of human-robot interaction (HRI). The present article looks at the current safety guidelines for HRI in an industrial setting and assesses their suitability.
This article then presents a means of improving current standards by utilising lessons learned from studies into human aware navigation (HAN), which has seen increasing use in mobile robotics. It highlights limitations in current research, where the relationships established in mobile robotics have not been carried over to industrial robot arms. To understand this, it is necessary to focus less on how a robot arm avoids humans and more on how humans react when a robot is within the same space. The safety guidelines currently lag behind the technological advances; however, with further studies aimed at understanding HRI and applying it to newly developed path finding and obstacle avoidance methods, science fiction can become science fact.

Item Open Access Improved hybrid model for obstacle detection and avoidance in robot operating system framework (rapidly exploring random tree and dynamic windows approach) (MDPI, 2024-04-02) Adiuku, Ndidiamaka; Avdelidis, Nicolas P.; Tang, Gilbert; Plastropoulos, Angelos
The integration of machine learning and robotics brings promising potential to tackle the application challenges of mobile robot navigation in industry. The real-world environment is highly dynamic and unpredictable, with increasing necessities for efficiency and safety. This demands a multi-faceted approach that combines advanced sensing, robust obstacle detection, and avoidance mechanisms for an effective robot navigation experience. While hybrid methods with the default robot operating system (ROS) navigation stack have demonstrated significant results, their performance in real-time and highly dynamic environments remains a challenge. These environments are characterised by continuously changing conditions, which can impact the precision of obstacle detection systems and efficient avoidance control decision-making processes.
In response to these challenges, this paper presents a novel solution that combines a rapidly exploring random tree (RRT)-integrated ROS navigation stack and a pre-trained YOLOv7 object detection model to enhance the capability of the previously developed NAV-YOLO system. The proposed approach leverages the high accuracy of YOLOv7 obstacle detection and the efficient path-planning capabilities of RRT and the dynamic window approach (DWA) to improve the navigation performance of mobile robots in complex and dynamically changing real-world settings. Extensive simulation and real-world robot platform experiments were conducted to evaluate the efficiency of the proposed solution. The results demonstrated a high-level obstacle avoidance capability, ensuring the safety and efficiency of mobile robot navigation operations in aviation environments.

Item Open Access Mobile robot obstacle detection and avoidance with NAV-YOLO (EJournal Publishing, 2024-03-22) Adiuku, Ndidiamaka; Avdelidis, Nicolas P.; Tang, Gilbert; Plastropoulos, Angelos; Diallo, Yanis
Intelligent robotics is gaining significance in Maintenance, Repair, and Overhaul (MRO) hangar operations, where mobile robots navigate complex and dynamic environments for aircraft visual inspection. Aircraft hangars are usually busy and changing, with objects of varying shapes and sizes presenting harsh obstacles and conditions that can lead to potential collisions and safety hazards. This makes obstacle detection and avoidance critical for safe and efficient robot navigation tasks. Conventional methods suffer from computational issues, while learning-based approaches are limited in detection accuracy. This paper proposes a vision-based navigation model that integrates a pre-trained YOLOv5 object detection model into a Robot Operating System (ROS) navigation stack to optimise obstacle detection and avoidance in complex environments.
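The rapidly exploring random tree planner that several of the navigation items above build on can be illustrated with a minimal 2D sketch. The obstacle, bounds, and step size here are arbitrary assumptions; real planners such as the ROS navigation stack add costmaps, smoothing, and kinodynamic constraints on top of this core loop:

```python
import math, random

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5,
        max_iters=5000, seed=0):
    """Minimal 2D rapidly-exploring random tree.

    Grows a tree from `start` by steering the nearest node towards a
    random sample; returns a path (list of points) to `goal`, or None.
    `is_free(p)` is the collision check supplied by the caller.
    """
    rng = random.Random(seed)
    (xmin, xmax), (ymin, ymax) = bounds
    nodes, parent = [start], {start: None}
    for _ in range(max_iters):
        sample = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        t = min(1.0, step / d)           # steer at most `step` towards sample
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        if not is_free(new):
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) <= goal_tol:   # close enough: rebuild the path
            path, n = [goal], new
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

# Free space with one circular obstacle centred at (5, 5).
free = lambda p: math.dist(p, (5, 5)) > 1.5
path = rrt((1, 1), (9, 9), free, bounds=((0, 10), (0, 10)))
print(path is not None and all(free(p) for p in path[1:]))
```

RRT*, as used in the in-orbit assembly work listed further below, additionally rewires the tree to shorten paths; the sketch above is the plain RRT variant.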
The experiment was validated and evaluated in a ROS-Gazebo simulation and on a TurtleBot3 Waffle Pi robot platform. The results showed that the robot can consistently detect and avoid obstacles without colliding while navigating through different checkpoints to the target location.

Item Open Access Navigation for a mobile robot to inspect aircraft (IEEE, 2023-08-08) Mansakul, Thanavin; Fan, Ip-Shing; Tang, Gilbert
Preflight inspection is an important part of aircraft maintenance, as it affects not only safety but also expenditure. In order to reduce the risk of human error and the cost of operation, a mobile robot is introduced as an effective solution. This research examines navigation for aircraft inspection using a mobile robot. Two path planning methods, Dijkstra with DWA and A* with TEB, are compared on navigation performance in simulation and in three environments (an office, an atrium, and a hangar) to ensure that the robot is able to handle a variety of situations. In addition, an ultrasonic sensor was installed to perform obstacle avoidance and support the LIDAR should it fail. The result is that both algorithms have advantages depending on the purpose: if an accurate position is required, A* with TEB should be selected, while Dijkstra with DWA offers a smooth path and less time spent in small, complex environments. The notable navigation parameters for the mobile robot, a TurtleBot3, are presented, and limitations are detailed. The use of an autonomous mobile robot could support the operation with high precision and repeatability.

Item Open Access Reading and understanding house numbers for delivery robots using the "SVHN Dataset" (IEEE, 2024-06-05) Pradhan, Omkar N.; Tang, Gilbert; Makris, Christos; Gudipati, Radhika
Detecting street house numbers in complex environments is a challenging robotics and computer vision task that could be valuable in enhancing the accuracy of delivery robots' localisation.
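The Dijkstra planner compared in the aircraft-inspection item above operates, at its core, on an occupancy grid. A minimal hypothetical sketch with a 4-connected grid and unit edge costs (not the ROS global planner):

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:                  # rebuild the path from predecessors
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if d > dist[node]:
            continue                      # stale queue entry
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

# Toy map: the blocked middle row forces a detour through one opening.
hangar = [[0, 0, 0, 0],
          [1, 1, 0, 1],
          [0, 0, 0, 0]]
print(dijkstra_grid(hangar, (0, 0), (2, 0)))
```

A* with TEB, the alternative discussed in the same item, adds a heuristic to this expansion loop and a trajectory optimiser on top.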
The development of this technology also has positive implications for address parsing and postal services. This project focuses on building a robust and efficient system that deals with the complexities associated with detecting house numbers in street scenes. The models in this system are trained on Stanford University's SVHN (Street View House Numbers) dataset. Fine-tuning YOLO's (You Only Look Once) nano model yielded an effective detection range from 1.02 to 4.5 meters. The optimum allowance for angle of tilt was ±15°. The inference resolution was 2160 × 1620, with an inference delay of 35 milliseconds.

Item Open Access A ROS-based simulation and control framework for in-orbit multi-arm robot assembly operations (European Space Agency (ESA), 2023-10-20) Bhadani, Saksham; Dillikar, Sairaj R.; Pradhan, Omkar N.; Cotrina de los Mozos, Irene; Felicetti, Leonard; Upadhyay, Saurabh; Tang, Gilbert
This paper develops a simulation and control framework for a multi-arm robot performing in-orbit assembly. The framework considers the robot's locomotion on the assembled structure, the assembly planning, and multi-arm control. An inchworm motion is mimicked using a sequential docking approach to achieve locomotion. An RRT*-based approach is implemented to complete the sequential assembly as well as the locomotion of MARIO across the structure. A semi-centralised controller model is used to control the robotic arms for these operations. The architecture uses MoveIt! libraries, the Gazebo simulator, and Python to simulate the desired locomotion and assembly tasks. The simulation results validate the viability of the developed framework.