Cranfield Defence and Security
Browsing Cranfield Defence and Security by Supervisor "Aouf, Nabil"
Now showing 1 - 15 of 15
Item Open Access
3D automatic target recognition for missile platforms (2017-05)
Kechagias Stamatis, Odysseas; Aouf, Nabil

The quest for military Automatic Target Recognition (ATR) procedures arises from the demand to reduce collateral damage and fratricide. Although missiles with two-dimensional ATR capabilities do exist, future Light Detection and Ranging (LIDAR) missiles with three-dimensional (3D) ATR abilities should significantly improve a missile's effectiveness in complex battlefields, because 3D ATR can encode the target's underlying structure and thus reinforce target recognition. However, current military-grade 3D ATR algorithms, and the computer vision algorithms applied to military object recognition, are not optimum solutions for an ATR-capable LIDAR-based missile, primarily because of the computational and memory (storage) constraints that missiles impose. Therefore, this research initially introduces a 3D descriptor taxonomy for the Local and the Global descriptor domains, capable of exposing the processing cost of each potential option. Through these taxonomies, the optimum missile-oriented descriptor per domain is identified, which further pinpoints the research route for this thesis. In terms of 3D descriptors suitable for missiles, the contribution of this thesis is one 3D Global descriptor and four 3D Local descriptors, namely the SURF Projection Recognition (SPR), the Histogram of Distances (HoD), its processing-efficient variant (HoD-S) and its binary variant (B-HoD). These are challenged against current state-of-the-art 3D descriptors on standard commercial datasets, as well as on highly credible simulated air-to-ground missile engagement scenarios that consider various platform parameters and nuisances, including simulated scale change and atmospheric disturbances.
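The abstract names the Histogram of Distances (HoD) family without defining it. One plausible, deliberately simplified reading, a hypothetical `hod_descriptor` that histograms the pairwise point distances inside a point-cloud patch, illustrates why such a descriptor is cheap to compute and invariant to rigid motion:

```python
import math

def hod_descriptor(points, bins=16):
    """Illustrative Histogram-of-Distances style 3D descriptor: bin the
    pairwise Euclidean distances of a point-cloud patch and L1-normalise.
    This is a guess at the construction, not the thesis's exact HoD."""
    dists = [math.dist(p, q)
             for i, p in enumerate(points)
             for q in points[i + 1:]]
    d_max = max(dists) or 1.0
    hist = [0] * bins
    for d in dists:
        # scale each distance into [0, bins-1] and count it
        hist[min(int(d / d_max * bins), bins - 1)] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Two patches related by a rigid translation yield identical descriptors.
patch = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (1.0, 1.0, 1.0)]
shifted = [(x + 5.0, y - 3.0, z + 2.0) for x, y, z in patch]
assert hod_descriptor(patch) == hod_descriptor(shifted)
```

Because only inter-point distances enter the histogram, no normal estimation or reference-frame computation is needed, which is the kind of property that keeps a descriptor within missile-grade processing budgets.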
The results obtained over the different datasets show an outstanding computational improvement, on average 19 times faster than state-of-the-art techniques in the literature, while maintaining, and on some occasions improving, the detection rate, with a minimum of 90% of targets correctly classified.

Item Open Access
Kernel-based fault diagnosis of inertial sensors using analytical redundancy (2017)
Vitanov, Ivan V.; Aouf, Nabil

Kernel methods are able to exploit high-dimensional spaces for representational advantage, while only operating implicitly in such spaces, thus incurring none of the computational cost of doing so. They appear to have the potential to advance the state of the art in control and signal processing applications and are increasingly seeing adoption across these domains. Applications of kernel methods to fault detection and isolation (FDI) have been reported, but few in aerospace research, though they offer a promising way to perform or enhance fault detection. It is mostly in process monitoring, in the chemical processing industry for example, that these techniques have found broader application. This research work explores the use of kernel-based solutions in model-based fault diagnosis for aerospace systems. Specifically, it investigates the application of these techniques to the detection and isolation of IMU/INS sensor faults, a canonical open problem in the aerospace field. Kernel PCA, a kernelised non-linear extension of the well-known principal component analysis (PCA) algorithm, is implemented to tackle IMU fault monitoring. An isolation scheme is extrapolated based on the strong duality known to exist between probably the most widely practised method of FDI in the aerospace domain, the parity space technique, and linear principal component analysis. The algorithm, termed partial kernel PCA, benefits from the isolation properties of the parity space method as well as the non-linear approximation ability of kernel PCA.
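The kernel PCA step applied above to IMU fault monitoring can be reduced to a short sketch. This is not the thesis's partial kernel PCA; it is the generic front end such a monitor builds on, under stated assumptions: an RBF kernel, feature-space centring, and a power-iteration eigensolver standing in for a full eigendecomposition.

```python
import math

def rbf(x, y, gamma=2.0):
    """Gaussian (RBF) kernel: an implicit inner product in a
    high-dimensional feature space that is never formed explicitly."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def centred_gram(data, gamma=2.0):
    """Kernel (Gram) matrix of the training set, centred in feature
    space: the standard first step of kernel PCA."""
    n = len(data)
    K = [[rbf(data[i], data[j], gamma) for j in range(n)] for i in range(n)]
    row = [sum(K[i]) / n for i in range(n)]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)] for i in range(n)]

def leading_eigenpair(K, iters=3000):
    """Power iteration for the top eigenpair of K: a lightweight
    stand-in for the eigendecomposition a real monitor would use."""
    n = len(K)
    v = [float(i + 1) for i in range(n)]  # non-uniform start (ones are in the null space)
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(K[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v

# Fault-free training residuals lying near a line in 2D sensor space.
train = [(0.1 * i, 0.05 * i) for i in range(12)]
K = centred_gram(train)
lam, v = leading_eigenpair(K)

# Centring makes every row of K sum to zero; the retained component
# captures the dominant non-linear mode of the fault-free data.  A fault
# monitor would flag new samples poorly explained by this subspace.
assert all(abs(sum(r)) < 1e-9 for r in K)
assert lam > 0
```

A monitoring statistic would then compare each incoming IMU residual against the retained kernel principal subspace, with large reconstruction error signalling a fault.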
Further, a number of unscented non-linear filters for FDI are implemented, equipped with data-driven transition models based on Gaussian processes, a non-parametric Bayesian kernel method. A distributed estimation architecture is proposed which, besides fault diagnosis, can contemporaneously perform sensor fusion. It also allows faulty sensors to be decoupled from the navigation solution.

Item Open Access
Multimodal Navigation for Accurate Space Rendezvous Missions (2021-05)
Rondao, Duarte O De M A; Aouf, Nabil; Richardson, Mark A

Relative navigation is paramount in space missions that involve rendezvous between two spacecraft. It demands accurate and continuous estimation of the six-degree-of-freedom relative pose, as this stage involves close-proximity, fast-reaction operations that can last up to five orbits. This has routinely been achieved with active sensors such as lidar, but their large size, cost, power consumption and limited operational range remain a stumbling block for en masse on-board integration. With the onset of faster processing units, lighter and cheaper passive optical sensors, combined with computer vision algorithms, are emerging as the suitable alternative for autonomous rendezvous. Current vision-based solutions, however, are limited by adverse illumination conditions such as solar glare, shadowing and eclipse. These effects are exacerbated when the target carries no cooperative markers to aid the estimation process and is incapable of controlling its rotational state. This thesis explores novel model-based methods that exploit sequences of monocular images acquired by an on-board camera to accurately carry out spacecraft relative pose estimation for non-cooperative close-range rendezvous with a known artificial target. The proposed solutions tackle the current challenges of imaging in the visible spectrum and investigate the contribution of the long-wavelength infrared (or "thermal") band towards a combined multimodal approach.
As part of the research, a visible-thermal synthetic dataset of a rendezvous approach with the defunct satellite Envisat is generated from the ground up using a realistic orbital camera simulator. From the rendered trajectories, the performance of several state-of-the-art feature detectors and descriptors is first evaluated for both modalities in a tailored scenario for short and wide baseline image processing transforms. Multiple combinations, including the pairing of algorithms with their non-native counterparts, are tested. Computational runtimes are assessed on an embedded hardware board. From the insight gained, a method to estimate the pose on the visible band is derived from minimising geometric constraints between online local point and edge contour features matched to keyframes generated offline from a 3D model of the target. The combination of both feature types is demonstrated to achieve a pose solution for a tumbling target using a sparse set of training images, bypassing the need for hardware-accelerated real-time renderings of the model. The proposed algorithm is then augmented with an extended Kalman filter which processes each feature-induced minimisation output as an individual pseudo-measurement, fusing them to estimate the relative pose and velocity states at each time-step. Both the minimisation and the filtering are established using Lie group formalisms, allowing the covariance of the solution computed by the former to be automatically incorporated as measurement noise in the latter, providing an automatic weighting of each feature type directly related to the quality of the matches. The predicted states are then used to search for new feature matches in the subsequent time-step. Furthermore, a method to derive a coarse viewpoint estimate to initialise the nominal algorithm is developed based on probabilistic modelling of the target's shape.
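The automatic covariance-driven weighting of point and edge pseudo-measurements described above is the ordinary behaviour of a Kalman update. A deliberately simplified sketch (one scalar pose parameter, direct observation, illustrative numbers, none of the thesis's Lie-group machinery) shows the effect:

```python
def kalman_update(mean, var, z, r):
    """Scalar Kalman measurement update with a direct observation
    (H = 1): the gain weights the measurement by its reported
    covariance r, so noisier pseudo-measurements count for less."""
    gain = var / (var + r)
    return mean + gain * (z - mean), (1.0 - gain) * var

# Prior over one pose parameter, then two pseudo-measurements: a
# point-feature fit with a tight covariance and an edge-contour fit
# with a looser one (all numbers illustrative).
mean, var = 0.0, 1.0
mean, var = kalman_update(mean, var, 0.10, 0.01)   # point features
mean, var = kalman_update(mean, var, 0.30, 0.25)   # edge contours

# Sequential updates equal a single inverse-variance-weighted fusion,
# which is exactly the automatic weighting effect described above.
precision = 1 / 1.0 + 1 / 0.01 + 1 / 0.25
fused = (0.0 / 1.0 + 0.10 / 0.01 + 0.30 / 0.25) / precision
assert abs(mean - fused) < 1e-12 and abs(var - 1 / precision) < 1e-12
```

In the thesis the same principle operates on full pose states on a Lie group, with each feature type's minimisation covariance playing the role of r.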
The robustness of the complete approach is demonstrated for several synthetic and laboratory test cases involving two types of target undergoing extreme illumination conditions. Lastly, an innovative deep-learning-based framework is developed by processing the features extracted by a convolutional front-end with long short-term memory cells, thus proposing the first deep recurrent convolutional neural network for spacecraft pose estimation. The framework is used to compare the performance achieved by visible-only and multimodal input sequences, where the addition of the thermal band is shown to greatly improve performance during sunlit sequences. Potential limitations of this modality are also identified, such as when the target's thermal signature is comparable to Earth's during eclipse.

Item Open Access
Obstacle avoidance for Unmanned Aerial Vehicles during teleoperation (2019-09)
Courtois, Hugo; Aouf, Nabil

The use of Unmanned Aerial Vehicles (UAVs) is on the rise, for both civilian and military applications. Autonomous UAV navigation is an active research topic, but human operators still provide a flexibility that currently matches or outperforms computer-controlled aerial vehicles. For this reason, the remote control of a UAV by a human operator, or teleoperation, is an important subject of study. The challenge of UAV teleoperation comes from the loss of sensory information available to the operator, who has to rely on on-board sensors to perceive the environment and the state of the UAV. Navigation in cluttered environments or small spaces is especially hard and demanding, so a flight assistance framework could bring significant benefits to the operator. In this thesis, an intelligent flight assistance framework for the teleoperation of rotary-wing UAVs in small spaces is designed. A 3D haptic device serves as a remote control to make the UAV easier for the operator to manipulate.
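A haptic teleoperation aid of this kind commonly renders a repulsive force on the control stick as the vehicle nears obstacles. The artificial-potential-field law below is a classic illustrative form, not the thesis's own feedback law; the influence distance d0 and gain k are assumptions:

```python
import math

def repulsive_force(obstacles, position, d0=2.0, k=1.0):
    """Classic artificial-potential-field repulsion: each obstacle
    closer than the influence distance d0 pushes the UAV away along the
    line joining them, with magnitude growing as the gap closes."""
    fx = fy = fz = 0.0
    for ox, oy, oz in obstacles:
        dx, dy, dz = position[0] - ox, position[1] - oy, position[2] - oz
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        if 0.0 < d < d0:
            mag = k * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
            fz += mag * dz / d
    return fx, fy, fz

# An obstacle one metre ahead produces a force pushing the UAV back.
fx, fy, fz = repulsive_force([(1.0, 0.0, 0.0)], (0.0, 0.0, 0.0))
assert fx < 0 and fy == 0.0 and fz == 0.0
# Outside the influence distance the feedback vanishes.
assert repulsive_force([(5.0, 0.0, 0.0)], (0.0, 0.0, 0.0)) == (0.0, 0.0, 0.0)
```

Rendered on the haptic stick, such a force lets the operator literally feel nearby obstacles before a collision occurs.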
Moreover, the designed system provides benefits against three essential criteria: safety of the UAV, efficiency of the teleoperation, and workload of the operator. To leverage the 3D haptic controller, the initial obstacle avoidance algorithm proposed in this thesis is based on haptic feedback that repels the UAV away from obstacles. This method is tested by human subjects, showing safety benefits but no manoeuvrability improvements. To improve on those criteria, perception of the environment is studied using Light Detection and Ranging (LIDAR) and stereo camera data. This led to the development of a mobile map of the obstacles surrounding the UAV, built from the LIDAR and augmented with the stereo camera to improve density. This map enables a flight assistance system that analyses and corrects the user's inputs so that collisions are avoided while manoeuvrability is improved. The proposed flight assistance system is validated through experiments involving untrained human subjects in a simulated environment. The results show that it sharply reduces the number of collisions, the time required to complete the navigation task, and the workload of the participants.

Item Open Access
Robust 3D registration and tracking with RGBD sensors (2015-06-26)
Amamra, A; Aouf, Nabil

This thesis investigates the use of cheap RGBD sensors in rigid-body tracking and 3D multiview registration for augmented and virtual reality applications. RGBD sensors can be used as an affordable substitute for the more sophisticated, but expensive, conventional laser-based scanning and tracking solutions. Nevertheless, the low-cost sensing technology behind them has several drawbacks, such as limited range, significant noise and instability.
To deal with these issues, an innovative adaptation of a Kalman filtering scheme is first proposed to improve the precision, smoothness and robustness of raw RGBD outputs. It also extends the native capabilities of the sensor to capture more distant targets. The mathematical foundations of this adaptation are explained in detail, and its corrective effect is validated with real tracking as well as 3D reconstruction experiments. A Graphics Processing Unit (GPU) implementation with different optimisation levels is also proposed in order to ensure real-time responsiveness. After extensive experimentation with RGBD cameras, a significant difference in accuracy was noticed between newer and ageing sensors; this decay could not be restored with conventional calibration, so a novel correction method for worn RGBD sensors is also proposed. Another contribution is an algorithm for background/foreground segmentation of RGBD images: background subtraction is applied to the colour and depth images separately, and the resulting foreground regions are then fused for more robust detection. The three previous contributions are used in a novel approach to multiview vehicle tracking for mixed reality needs. The vehicle's position is determined in two stages: the first is a sensor-wise robust filtering algorithm able to handle the uncertainties in the system and measurement models, resulting in multiple position estimates; the second merges the independent estimates using a set of optimal weighting coefficients. The outcome of the fusion is used to determine the vehicle's orientation in the scene. Finally, a novel recursive filtering approach for sparse registration is proposed. Unlike ordinary state-of-the-art alignment algorithms, the proposed method has four advantages that are not available together in any previous solution.
It is able to deal with the inherent noise contaminating sensory data; it is robust to uncertainties in feature localisation; it combines the advantages of both the L2 and L∞ norms for higher performance and prevention of local minima; and it provides an estimated rigid-body transformation along with its error covariance. This 3D registration scheme is validated in various challenging scenarios with both synthetic and real RGBD data.

Item Open Access
Robust airborne 3D visual simultaneous localisation and mapping (2011-09-09)
Nemra, A.; Aouf, Nabil

The aim of this thesis is to present robust solutions to technical problems of airborne three-dimensional (3D) Visual Simultaneous Localisation And Mapping (VSLAM). These solutions are developed based on a stereovision system available on board Unmanned Aerial Vehicles (UAVs). The proposed airborne VSLAM enables unmanned aerial vehicles to construct a reliable map of an unknown environment and to localise themselves within this map without any user intervention. Current research challenges in airborne VSLAM include visual processing through invariant feature detectors/descriptors, efficient mapping of large environments, and cooperative navigation and mapping of complex environments. Most of these challenges require scalable representations, robust data association algorithms, consistent estimation techniques, and fusion of different sensor modalities. To deal with them, seven chapters are presented in this thesis as follows: Chapter 1 introduces UAVs, definitions, current challenges and different applications. Chapter 2 presents the main sensors used by UAVs during navigation. Chapter 3 addresses an important task for autonomous navigation, UAV localisation; in this chapter, robust and optimal approaches for data fusion are proposed with performance analysis. UAV map building is then presented in Chapter 4, which is divided into three parts.
In the first part, a new alternative imaging technique is proposed to extract and match a suitable number of invariant features. The second part presents an image mosaicing algorithm followed by a super-resolution approach. In the third part, we propose a new feature detector and descriptor that is fast and robust and detects a suitable number of features for solving the VSLAM problem. A complete airborne VSLAM solution based on a stereovision system is presented in Chapter 5. Robust data association filters, with consistency and observability analysis, are presented in this chapter as well. The proposed algorithm is validated with loop-closure detection and map management using experimental data. The airborne VSLAM is then extended to the multiple-UAV case in Chapter 6, which presents two architectures of cooperation: a centralised and a decentralised one. The former provides optimal precision in terms of UAV positions and the constructed map, while the latter is more suitable for real-time and embedded-system applications. Finally, conclusions and future work are presented in Chapter 7.

Item Open Access
Robust convex optimisation techniques for autonomous vehicle vision-based navigation (2015-09-09)
Boulekchour, M; Aouf, Nabil

This thesis investigates new convex optimisation techniques for motion and pose estimation. Numerous computer vision problems can be formulated as optimisation problems, which are generally solved either via linear techniques using the singular value decomposition, or via iterative methods under an L2-norm minimisation. Linear techniques have the advantage of offering a closed-form solution that is simple to implement; the quantity being minimised is, however, not geometrically or statistically meaningful.
Conversely, iterative L2 algorithms minimise a cost function using methods such as Levenberg-Marquardt, Gauss-Newton, gradient descent or conjugate gradient. The cost functions involved are geometrically interpretable and can be statistically optimal under an assumption of Gaussian noise. However, in addition to their sensitivity to initial conditions, these algorithms are often slow and bear a high probability of getting trapped in a local minimum or producing infeasible solutions, even for small noise levels. In light of the above, this thesis focuses on developing new techniques for finding globally optimal solutions within a convex optimisation framework. Convex optimisation techniques in motion estimation have revealed enormous advantages: convex optimisation guarantees a global minimum, and the cost function is geometrically meaningful. Moreover, robust optimisation is a recent approach to optimisation under uncertain data. In recent years the need to cope with uncertain data has become especially acute, particularly where real-world applications are concerned. In such circumstances, robust optimisation aims to recover an optimal solution whose feasibility is guaranteed for any realisation of the uncertain data. Although many researchers avoid uncertainty, owing to the added complexity of constructing a robust optimisation model and to a lack of knowledge about the nature of these uncertainties, and especially their propagation, this thesis investigates robust convex optimisation for the motion estimation problem while estimating the uncertainties at every step. First, a solution for motion estimation is developed using convex optimisation coupled with the recursive least squares (RLS) algorithm and the robust H∞ filter. In another solution, uncertainties and their propagation are incorporated in a robust L∞ convex optimisation framework for monocular visual motion estimation.
In this solution, robust least squares is combined with a second-order cone program (SOCP). A technique to improve the accuracy and robustness of the fundamental matrix is also investigated in this thesis: it uses the covariance intersection approach to fuse feature-location uncertainties, which leads to more consistent motion estimates. Loop-closure detection is crucial to improving the robustness of navigation algorithms: after long navigation in an unknown environment, detecting that a vehicle is in a location it has previously visited gives the opportunity to increase the accuracy and consistency of the estimate. In this context, we have developed an efficient appearance-based method for visual loop-closure detection based on the combination of a Gaussian mixture model with the KD-tree data structure. Deploying this technique for loop-closure detection, a robust L∞ convex pose-graph optimisation solution for unmanned aerial vehicle (UAV) monocular motion estimation is introduced as well. In the literature, most proposed solutions formulate pose-graph optimisation as a least-squares problem, minimising a cost function using iterative methods. In this work, robust convex optimisation under the L∞ norm is adopted, which efficiently corrects the UAV's pose after loop-closure detection. To round out the work in this thesis, a system for cooperative monocular visual motion estimation with multiple aerial vehicles is proposed. The cooperative motion estimation employs state-of-the-art approaches for optimisation, individual motion estimation and registration. Three-view geometry algorithms in a convex optimisation framework are deployed on board the monocular vision system of each vehicle. In addition, vehicle-to-vehicle relative pose estimation is performed with a novel robust registration solution in a global optimisation framework.
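The covariance intersection fusion used above for feature-location uncertainties has a compact closed form. The sketch below works on 2x2 covariances, picking the convex weight w by a simple grid search over the fused trace; the numbers are illustrative:

```python
def inv2(m):
    """Closed-form inverse of a 2x2 covariance matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def lincomb(m, n, s, t):
    """s*m + t*n for 2x2 matrices."""
    return [[s * m[i][j] + t * n[i][j] for j in range(2)] for i in range(2)]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def covariance_intersection(x1, P1, x2, P2, steps=100):
    """Fuse two estimates with unknown cross-correlation: the fused
    information matrix is the convex combination w*inv(P1)+(1-w)*inv(P2),
    with w chosen here by grid search to minimise the fused trace."""
    I1, I2 = inv2(P1), inv2(P2)
    best_tr, best = float("inf"), None
    for k in range(1, steps):
        w = k / steps
        P = inv2(lincomb(I1, I2, w, 1.0 - w))
        tr = P[0][0] + P[1][1]
        if tr < best_tr:
            b1, b2 = matvec(I1, x1), matvec(I2, x2)
            x = matvec(P, [w * b1[0] + (1 - w) * b2[0],
                           w * b1[1] + (1 - w) * b2[1]])
            best_tr, best = tr, (x, P)
    return best

# One feature locator is precise in x, the other in y; CI combines them
# without assuming the two errors are independent.
x1, P1 = [0.0, 0.0], [[0.1, 0.0], [0.0, 4.0]]
x2, P2 = [1.0, 1.0], [[4.0, 0.0], [0.0, 0.1]]
x, P = covariance_intersection(x1, P1, x2, P2)
assert P[0][0] + P[1][1] < min(P1[0][0] + P1[1][1], P2[0][0] + P2[1][1])
assert 0.0 <= x[0] <= 1.0 and 0.0 <= x[1] <= 1.0
```

Because the weight stays convex, the fused covariance remains consistent even when the two inputs share unknown correlated error, which is what motivates CI over a naive Kalman-style product.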
In parallel, and as a complementary solution for the relative pose, a robust non-linear H∞ solution is designed to fuse measurements from the UAVs' on-board inertial sensors with the visual estimates. The suggested contributions have been exhaustively evaluated over a number of real-image data experiments in the laboratory using monocular vision systems and range imaging devices. In this thesis, we propose several solutions towards the goal of robust visual motion estimation using convex optimisation. We show that the convex optimisation framework may be extended to include uncertainty information in order to achieve robust and optimal solutions, and we observe that convex optimisation is a practical and very appealing alternative to linear techniques and iterative methods.

Item Open Access
Robust multispectral image-based localisation solutions for autonomous systems (2019-11-21)
Beauvisage, Axel; Aouf, Nabil

With the recent increase of interest in multispectral imaging, new image-based localisation solutions have emerged; however, their application to visual odometry remains overlooked. Most localisation techniques are still developed with visible cameras only, because of the portability they offer and the wide variety of cameras available. Yet other modalities have great potential for navigation purposes. Infrared imaging, for example, provides different information about the scene and is already used to enhance visible images. This is especially true of far-infrared cameras, which can produce images at night and see hot objects such as other cars, animals or pedestrians. Therefore, the aim of this thesis is to tackle the lack of research in multispectral localisation and to explore new ways of performing visual odometry accurately with visible and thermal images. First, a new calibration pattern made of LED lights is presented in Chapter 3. Emitting both visible and thermal radiation, it can easily be seen by infrared and visible cameras.
Due to its distinctive shape, the whole pattern can be moved around the cameras and automatically identified in the different images recorded. Monocular and stereo calibration are then performed to precisely estimate the camera parameters. Next, a multispectral monocular visual odometry algorithm is proposed in Chapter 4. This generic technique is able to operate in the infrared and visible modalities, regardless of the nature of the images. Incoming images are processed at a high frame rate based on a 2D-to-2D unscaled motion estimation method, while specific keyframes are carefully selected to avoid degenerate cases and a bundle adjustment optimisation is performed on a sliding window to refine the initial estimation. The advantage of visible-thermal odometry is shown in a scenario with extreme illumination conditions, where the limitation of each modality is reached. The simultaneous combination of visible and thermal images for visual odometry is also explored. In Chapter 5, two feature matching techniques are presented and tested in a multispectral stereo visual odometry framework: one matches features between stereo pairs independently, while the other estimates unscaled motion first, before matching the features altogether. Even though these techniques require more processing power to overcome the dissimilarities between multimodal images, they have the benefit of estimating scaled transformations. Finally, the camera pose estimates obtained with multispectral stereo odometry are fused with inertial data to create a robust localisation solution, detailed in Chapter 6. The full state of the system is estimated, including position, velocity, orientation and IMU biases. It is shown that multispectral visual odometry can effectively correct drifting IMU measurements. Furthermore, it is demonstrated that such multi-sensor setups can be beneficial in challenging situations where features cannot be extracted or tracked.
In that case, inertial data can be integrated to provide a state estimate when visual odometry cannot.

Item Open Access
Robust vision based slope estimation and rocks detection for autonomous space landers (2017-06-13)
Feetham, Luke; Aouf, Nabil

As future robotic surface exploration missions to other planets, moons and asteroids become more ambitious in their science goals, there is a rapidly growing need to significantly enhance the capabilities of entry, descent and landing technology so that landings can be carried out with pin-point accuracy at previously inaccessible sites of high scientific value. As a consequence of the extreme uncertainty in the touch-down locations of current missions and the absence of any effective hazard detection and avoidance capabilities, mission designers must exercise extreme caution when selecting candidate landing sites. The entire landing uncertainty footprint must be placed completely within a region of relatively flat and hazard-free terrain in order to minimise the risk of mission-ending damage to the spacecraft at touchdown. Consequently, vast numbers of scientifically rich landing sites must be rejected in favour of safer alternatives that may not offer the same level of scientific opportunity. The majority of truly scientifically interesting locations on planetary surfaces are rarely found in such hazard-free and easily accessible locations, and so goals have been set for a number of advanced capabilities of future entry, descent and landing technology. Key among these is the ability to reliably detect and safely avoid all mission-critical surface hazards in the area surrounding a pre-selected landing location. This thesis investigates techniques for the use of a single-camera system as the primary sensor in the preliminary development of a hazard detection system capable of supporting pin-point landing operations for next-generation robotic planetary landing craft.
The requirements for such a system have been stated as the ability to detect slopes greater than 5 degrees and surface objects greater than 30 cm in diameter. The primary contribution of this thesis, aimed at achieving these goals, is the development of a feature-based, self-initialising, fully adaptive structure from motion (SFM) algorithm based on a robust square-root unscented Kalman filtering framework, and the fusion of the resulting SFM scene structure estimates with a sophisticated shape from shading (SFS) algorithm that has the potential to produce very dense and highly accurate digital elevation models (DEMs) with sufficient resolution to achieve the sensing accuracy required by next-generation landers. Such a system is capable of adapting to potential changes in the external noise environment that may result from intermittent and varying rocket motor thrust and/or sudden turbulence during descent, which may translate to variations in the vibrations experienced by the platform and introduce varying levels of motion blur affecting the accuracy of image feature tracking algorithms. Accurate scene structure estimates have been obtained with this system from both real and synthetic descent imagery, allowing the production of accurate DEMs. While further work is required to produce DEMs with the resolution and accuracy needed to determine slopes and detect small objects such as rocks at the required levels of accuracy, this thesis presents a very strong foundation upon which to build and goes a long way towards a highly robust and accurate solution.

Item Open Access
Single and cooperative 2D/3D image mosaicing (2014-06-10)
Imran, S A; Aouf, Nabil

This thesis investigates robust and fast methods for single and cooperative 2D/3D image mosaicing, which enhances the field of view of images by joining them together.
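Mosaicing of this kind maps each image into a common reference frame through a planar homography. A minimal sketch (the helper names are ours, not the thesis's) warps image corners and sizes the mosaic canvas:

```python
def apply_homography(H, pt):
    """Map an image point through a 3x3 planar homography (projective
    transform), the core geometric operation of 2D mosaicing."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def mosaic_canvas(sizes, homographies):
    """Bounding box of the mosaic: warp every image's corners into the
    common reference frame and take the union of their extents."""
    pts = []
    for (wd, ht), H in zip(sizes, homographies):
        for corner in [(0, 0), (wd, 0), (0, ht), (wd, ht)]:
            pts.append(apply_homography(H, corner))
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    return min(xs), min(ys), max(xs), max(ys)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
shift = [[1, 0, 500], [0, 1, 0], [0, 0, 1]]   # second image sits 500 px right
canvas = mosaic_canvas([(640, 480), (640, 480)], [identity, shift])
assert canvas == (0, 0, 1140, 480)
```

In practice the homographies come from the registration stage the abstract describes next, estimated from matched features rather than given.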
Image mosaicing is underpinned by the process of image registration, and a significant portion of the contributions of this work is dedicated to it. Image features are used to solve the image registration problem, with feature-to-feature matching between images yielding the inter-image transformations. We have developed a novel two-signature, distribution-based feature descriptor that combines grey-level gradients with a colour histogram. This descriptor is robust to illumination changes and shows better matching accuracy than the state of the art. Furthermore, we introduce a feature clustering technique that groups features by colour codes assigned to each of them, allowing fast and accurate feature matching because the search space is reduced. Taking feature-location uncertainty into account, we have introduced a novel information fusion technique that reduces this error by covariance intersection. The reduced-error location is then fed to an H∞ filter that accounts for system uncertainty during parameter estimation; we show that this technique outperforms costly nonlinear optimisation techniques. We have also developed a novel coupled filtering scheme, based on H∞ filtering, that estimates inter-image geometric and photometric transformations simultaneously, and this is shown to perform better than standard least-squares techniques. Furthermore, we have introduced time-varying parameter estimation using recursive techniques, which facilitates the tracking of changing inter-image transformation parameters, suitable for image mosaicing between moving platforms. A method for rapid 3D scene reconstruction is developed that uses homographic lines between images for semi-dense pixel matching; triangular meshes are then used for a complete visualisation of the scene and to fill in the gaps.
To tackle cooperative mosaicing scenarios, additional methods are presented, including descriptor compression using principal components and 3D scene merging using the trifocal tensor. The capabilities of the proposed techniques are illustrated with real-world images.

Item Open Access
Single and multiple stereo view navigation for planetary rovers (2013-10-08)
Bartolome, D R; Aouf, Nabil

This thesis deals with the challenge of autonomous navigation of the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques alone, as done in the literature, unviable, and necessitates other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's egomotion problem. The homogeneity of Mars' terrain makes the robustness of the low-level image processing a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detection of features robust to illumination changes, and unique matching and association of features, are sought-after capabilities. A solution for feature robustness against illumination variation is proposed that combines Harris corner detection with moment image representation: the first provides efficient feature detection, while the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images, and the addition of local feature descriptors guarantees the unique identification of image cues. In the second part, reliable and precise motion estimation for the Mars rover is studied, and a number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology.
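The bucketing strategy mentioned above is a standard trick that is simple to sketch: divide the image into a grid and cap the number of corners kept per cell, so features stop clumping on high-texture regions (grid size and per-cell cap below are illustrative):

```python
def bucket_features(features, img_w, img_h, grid=4, per_cell=2):
    """Keep only the strongest corners in each grid cell so that the
    surviving features are spread homogeneously across the image."""
    cells = {}
    for x, y, strength in features:
        key = (min(int(x * grid / img_w), grid - 1),
               min(int(y * grid / img_h), grid - 1))
        cells.setdefault(key, []).append((strength, x, y))
    kept = []
    for cell in cells.values():
        cell.sort(reverse=True)                 # strongest first
        kept.extend((x, y) for _, x, y in cell[:per_cell])
    return kept

# Ten corners crowded in one corner of the image plus two elsewhere:
crowd = [(float(i), float(i), 100.0 - i) for i in range(10)]
spread = [(600.0, 50.0, 5.0), (100.0, 400.0, 4.0)]
kept = bucket_features(crowd + spread, img_w=640, img_h=480)
assert len(kept) == 4        # 2 survivors from the crowded cell + 2 others
assert (600.0, 50.0) in kept and (100.0, 400.0) in kept
```

Homogeneous coverage matters for egomotion estimation because a motion solved from corners concentrated in one image region is poorly constrained elsewhere in the field of view.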
Then, linear and nonlinear optimisation techniques are explored, and alternative photogrammetric reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data. Our robust visual scheme allows good feature repeatability; because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed motion estimation solutions. The developed egomotion techniques have been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The results obtained prove the innovative methods presented here to be accurate and reliable approaches capable of solving the egomotion problem in a Mars environment.

Item Open Access Standalone and embedded stereo visual odometry based navigation solution (2015-07-17) Chermak, Lounis; Aouf, Nabil

This thesis investigates techniques for, and designs, an autonomous stereo-vision-based navigation sensor that improves stereo visual odometry for navigation in unknown environments, in particular autonomous navigation in a space mission context, which imposes challenging constraints on algorithm development and hardware requirements. For instance, the Global Positioning System (GPS) is not available in this context, so a navigation solution cannot rely on similar external sources of information. Handling this problem requires the conception of an intelligent perception-sensing device that provides precise outputs for absolute and relative 6-degrees-of-freedom (DOF) positioning. This is achieved using only images from calibrated stereo cameras, possibly coupled with an inertial measurement unit (IMU), while fulfilling real-time processing requirements. Moreover, no prior knowledge about the environment is assumed.
Robotic navigation has motivated research into different and complementary areas such as stereovision, visual motion estimation, optimisation and data fusion, and several contributions have been made in these areas. Firstly, an efficient feature detection, stereo matching and feature tracking strategy based on the Kanade-Lucas-Tomasi (KLT) feature tracker is proposed to form the basis of the visual motion estimation. Secondly, in order to cope with extreme illumination changes, a high dynamic range (HDR) imaging solution is investigated and a comparative assessment of feature tracking performance is conducted. Thirdly, a two-view local bundle adjustment scheme based on trust-region minimisation is proposed for precise visual motion estimation. Fourthly, a novel KLT feature tracker using IMU information is integrated into the visual odometry pipeline. Finally, a smart standalone stereo visual/IMU navigation sensor has been designed, integrating an innovative combination of hardware with the novel software solutions proposed above. As a result of this balanced combination of hardware and software, we achieved a 5 fps processing rate for up to 750 initial features at a resolution of 1280x960, the highest resolution reached in real time for visual odometry applications to our knowledge. In addition, the visual odometry accuracy of our algorithm matches the state of the art, with less than 1% relative error in the estimated trajectories.
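The core of KLT-style tracking, as referred to above, is solving a small linear system built from image gradients. The following is a minimal single-iteration sketch on a synthetic, smoothly varying image pair; it is purely illustrative and is not the thesis tracker (which is pyramidal, iterative and IMU-aided).

```python
import numpy as np

def lk_step(I, J):
    """One Lucas-Kanade iteration over a whole patch.

    Solves  [sum Ix^2   sum Ix*Iy] d = -[sum Ix*It]
            [sum Ix*Iy  sum Iy^2 ]      [sum Iy*It]
    with It = J - I, returning the estimated displacement d = (dx, dy).
    """
    Iy, Ix = np.gradient(I)        # np.gradient returns d/drow then d/dcol
    It = J - I
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)

# Synthetic Gaussian blob and a sub-pixel-shifted copy of it
y, x = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 5.0 ** 2))
I = blob(32.0, 32.0)
J = blob(32.4, 32.2)               # true displacement: dx = 0.4, dy = 0.2
dx, dy = lk_step(I, J)
```

In a real tracker this step is iterated per feature window over an image pyramid so that displacements larger than a pixel can be recovered.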
It is essentially a matter of handing control to the computer (the artificial intelligence), which makes decisions based on the situation as a human would, but it has not been easy to convince people that this is safe, or to continue enhancing it. These days many types of UAVs are available on the market for consumer use, for applications ranging from photography and gaming to route mapping, building monitoring and security, and UAVs are also widely used by the military for surveillance and security reasons. One of the most commonly used consumer products is the quadcopter, or quadrotor. The research carried out used modern tools (i.e., SolidWorks, Java NetBeans and MATLAB/Simulink) to model a control system for a quadcopter UAV, with a haptic control system to command the quadcopter in both a virtual simulation environment and a real-time environment. A mathematical model for controlling the quadcopter in simulation and real-time environments was introduced, and the design methodology for the quadcopter was defined. This methodology was then extended to develop virtual simulation and real-time environments for simulations and experiments. Furthermore, haptic control was implemented with the designed control system to control the quadcopter in virtual simulations and real-time experiments. Using the mathematical model of the quadcopter, PID and PD control techniques were applied to the quadcopter's altitude and motion controls as the work progressed. Firstly, the dynamic model is developed using a simple set of equations, which evolves further into a complex control and mathematical model with precise actuator functions and aerodynamic coefficients (Figure 5-7).
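A PID altitude loop of the kind described above can be sketched as follows. The gains, mass, timestep and point-mass dynamics are illustrative assumptions, not the thesis's quadcopter model.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Point-mass altitude dynamics: m*z'' = thrust - m*g (hypothetical parameters)
m, g, dt = 1.0, 9.81, 0.01
pid = PID(kp=8.0, ki=2.0, kd=5.0, dt=dt)
z, vz, target = 0.0, 0.0, 2.0
for _ in range(2000):                        # 20 s of simulated flight
    thrust = m * g + pid.update(target - z)  # hover feed-forward + PID correction
    vz += (thrust / m - g) * dt              # integrate vertical acceleration
    z += vz * dt                             # integrate altitude
```

In a haptic setup, the same loop would track an operator-commanded altitude while forces rendered back to the operator reflect the tracking error or proximity to obstacles.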
The presented results are satisfactory and show that flight experiments and simulations of quadcopter control using haptics form a novel area of research that helps operations be performed more successfully and gives the operator more control when operating in difficult environments. By using haptics, accidents can be minimised and the functional performance of both the operator and the UAV significantly enhanced. This concept and area of research in haptic control can be developed further according to the needs of specific applications.

Item Open Access Visual / acoustic detection and localisation in embedded systems (2016-10-05) Azzam, R; Aouf, Nabil

The continuous miniaturisation of sensing and processing technologies is offering an increasing variety of embedded platforms, enabling a broad range of tasks to be accomplished with such systems. Motivated by these advances, this thesis investigates embedded detection and localisation solutions using vision and acoustic sensors, with particular focus on surveillance applications using sensor networks. Existing vision-based detection solutions for embedded systems suffer from sensitivity to environmental conditions, and in the literature there seems to be no algorithm able to tackle simultaneously all the challenges inherent to real-world videos. Regarding the acoustic modality, many research works have investigated acoustic source localisation solutions in distributed sensor networks. Nevertheless, it remains a challenging task to develop an efficient algorithm that deals with the experimental issues, approaches the performance required by these systems and performs the data processing in a distributed and robust manner. The movement of scene objects is generally accompanied by sound emissions whose features vary from one environment to another.
Therefore, combining the visual and acoustic modalities offers a significant opportunity to improve detection and/or localisation on the described platforms. In light of this framework, the first part of the thesis investigates a cost-effective vision-based method that deals robustly with motion detection under static, dynamic and moving background conditions. For motion detection in static and dynamic backgrounds, we present the development and performance analysis of a spatio-temporal form of the Gaussian mixture model. The problem of motion detection in moving backgrounds, on the other hand, is addressed by accounting for registration errors in the captured images; by adopting a robust optimisation technique that takes the uncertainty in the visual measurements into account, we show that high detection accuracy can be achieved. In the second part of the thesis, we investigate solutions to the problem of acoustic source localisation using a trust-region-based optimisation technique. The proposed method shows overall higher accuracy and improved convergence compared to a line-search-based method. More importantly, we show that by characterising the errors in the measurements, a common problem for such platforms, higher localisation accuracy can be attained. The last part of this work studies the different possibilities for combining visual and acoustic information in a distributed sensor network. In this context, we first propose including the acoustic information in the visual model; the resulting augmented model provides promising improvements in the detection and localisation processes. The second solution investigated consists of fusing the measurements coming from the different sensors.
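A per-pixel Gaussian mixture background model of the kind mentioned above maintains a few weighted Gaussians per pixel and updates them online. The sketch below follows the standard online mixture update for a single pixel, with illustrative parameters and a simplified background rule; it is not the thesis's spatio-temporal variant.

```python
import math

class PixelGMM:
    """Online background model for one pixel (1-D intensity, K Gaussians)."""
    def __init__(self, k=3, alpha=0.05, match_sigmas=2.5):
        self.alpha, self.match = alpha, match_sigmas
        # (weight, mean, variance) triples, initially broad and equally weighted
        self.modes = [[1.0 / k, 128.0 + 40.0 * i, 900.0] for i in range(k)]

    def update(self, x):
        """Update the mixture with intensity x; return True if x is background."""
        hit = None
        for m in self.modes:
            if abs(x - m[1]) < self.match * math.sqrt(m[2]):
                hit = m
                break
        for m in self.modes:      # decay all weights, reinforce the matched mode
            m[0] = (1 - self.alpha) * m[0] + (self.alpha if m is hit else 0.0)
        if hit is None:
            # No mode explains x: replace the weakest mode with one centred on x
            weakest = min(self.modes, key=lambda m: m[0])
            weakest[1], weakest[2] = x, 900.0
        else:
            hit[1] += self.alpha * (x - hit[1])
            hit[2] += self.alpha * ((x - hit[1]) ** 2 - hit[2])
        total = sum(m[0] for m in self.modes)
        for m in self.modes:
            m[0] /= total
        self.modes.sort(key=lambda m: m[0], reverse=True)
        # Simplified rule: background only if x matched the dominant mode
        return hit is not None and hit is self.modes[0]

px = PixelGMM()
for _ in range(200):          # a stable background around intensity 100
    is_bg = px.update(100.0)
moving = px.update(220.0)     # a sudden bright foreground value
```

Running one such model per pixel gives a foreground mask; the spatio-temporal variant studied in the thesis additionally couples neighbouring pixels.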
An evaluation of localisation and tracking accuracy using centralised and decentralised architectures is conducted in various scenarios and experimental conditions. The results show that this fusion approach yields higher accuracy in localising and tracking an active acoustic source than using a single type of data.

Item Open Access Visual navigation in unmanned air vehicles with simultaneous location and mapping (SLAM) (2014-08-15) Li, X; Aouf, Nabil

This thesis focuses on the theory and implementation of visual navigation techniques for autonomous air vehicles in outdoor environments. The target of this study is to fuse data from, and cooperatively develop an incremental map for, multiple air vehicles under Simultaneous Location and Mapping (SLAM). Without loss of generality, two unmanned air vehicles (UAVs) are investigated for the generation of ground maps from current and a priori data. Each UAV is equipped with an inertial navigation system and external sensing elements, which may provide a mixture of visible and thermal infrared (IR) image sensors, with a special emphasis on stereo digital cameras. The corresponding stereopsis provides the crucial three-dimensional (3-D) measurements. The visual aerial navigation problems tackled here are therefore interpreted as stereo-vision-based SLAM (vSLAM), for both single- and multiple-UAV applications. To begin with, the investigation is devoted to methodologies for feature extraction. Potential landmarks are selected from airborne camera images, since distinctive points identified in the images are the prerequisite for everything that follows, and the choice of feature extraction algorithm has a large influence on feature matching and association in 3-D mapping. To this end, effective variants of the scale-invariant feature transform (SIFT) algorithm are employed in comprehensive experiments on feature extraction for both visible and infrared aerial images.
As the UAV is quite often in an uncertain location within complex and cluttered environments, dense and blurred images are practically inevitable. It thus becomes a challenge to find feature correspondences, which involves both feature matching between the first and second images of the same frame, and data association between mapped landmarks and camera measurements. A number of tests with different techniques are conducted, incorporating ideas from graph theory and graph matching. Novel approaches, which can be characterised as classification based and hypergraph transformation (HGTM) based respectively, have been proposed to solve the data association problem in stereo-vision-based navigation. These strategies are then utilised and investigated within SLAM for UAV applications so as to achieve robust matching and association in highly cluttered environments. The unknown nonlinearities in the system model, including noise, would introduce undesirable INS drift and errors; therefore, the pros and cons of various potential data filtering algorithms are appraised in order to meet the specific requirements of the applications. These filters were investigated within visual SLAM for data filtering and fusion in both single and cooperative navigation. Hence, the updated information required for constructing and maintaining a globally consistent map can be provided by a suitable algorithm, striking a compromise between computational accuracy and the load imposed by the increasing map size. The research provides an overview of the feasible filters, such as the extended Kalman filter, the extended information filter, the unscented Kalman filter and the unscented H-infinity filter. As visual intuition always plays an important role in how humans recognise objects, research on textured 3-D mapping is conducted to support both statistical and visual analysis for aerial navigation.
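The filters surveyed above all share the predict/update structure of the extended Kalman filter, which linearises a nonlinear observation about the current estimate. The following is a generic textbook EKF measurement update for a static 2-D landmark observed with range and bearing; the landmark, sensor geometry and noise values are illustrative assumptions, unrelated to the thesis's specific models.

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """Single EKF measurement update for a nonlinear observation z = h(x) + v."""
    H = H_jac(x)
    y = z - h(x)                            # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Range/bearing observation of a static 2-D landmark from a sensor at the origin
def h(x):
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def H_jac(x):
    r = np.hypot(x[0], x[1])
    return np.array([[x[0] / r, x[1] / r],
                     [-x[1] / r**2, x[0] / r**2]])

true = np.array([3.0, 4.0])                    # hypothetical landmark position
x, P = np.array([2.0, 5.0]), np.eye(2) * 4.0   # initial guess and covariance
R = np.diag([0.01, 0.001])                     # measurement noise covariance
for _ in range(20):
    x, P = ekf_update(x, P, h(true), h, H_jac, R)
```

The information-filter, unscented and H-infinity variants listed above differ in how this linearisation and gain computation are carried out, while keeping the same two-step structure.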
Various techniques are proposed to smooth textures and minimise mosaicing errors during the reconstruction of 3-D textured maps with vSLAM for UAVs. Finally, with covariance intersection (CI) techniques adopted across multiple sensors, various cooperation and data fusion strategies are introduced for distributed and decentralised UAVs in cooperative vSLAM (C-vSLAM). Despite the complex structure of the highly nonlinear system models residing in cooperative platforms, robust and accurate estimation in collaborative mapping and location is achieved through HGTM association and communication strategies. Data fusion among UAVs and estimation for visual navigation via SLAM were convincingly verified and validated on both simulated and real data sets.