Browsing by Author "Kechagias-Stamatis, Odysseas"
Now showing 1 - 17 of 17
Item Open Access: 3D automatic target recognition for future LIDAR missiles (IEEE, 2017-01-10)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Richardson, Mark A.
We present a real-time three-dimensional automatic target recognition approach appropriate for future light detection and ranging (LIDAR) based missiles. Our technique extends the speeded-up robust features method into the third dimension by solving multiple two-dimensional problems, and performs template matching based on the extreme case of a single pose per target. Evaluation on military targets shows higher recognition rates under various transformations and perturbations at lower processing time compared to state-of-the-art approaches.

Item Open Access: Automatic x-ray image segmentation and clustering for threat detection (SPIE, 2017-10-05)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Nam, David; Belloni, Carole
Firearms currently pose a known risk at the borders. The enormous number of X-ray images from parcels, luggage and freight coming into each country via rail, aviation and maritime routes presents a continual challenge to screening officers. To further improve UK capability and aid officers in their search for firearms, we suggest an automated object segmentation and clustering architecture that focuses officers' attention on high-risk threat objects. Our proposal utilizes dual-view single/dual-energy 2D X-ray imagery and blends concepts from radiology, image processing and computer vision. It consists of a triple-layered processing scheme that segments the luggage contents based on the effective atomic number of each object, followed by a dual-layered clustering procedure. The latter comprises a mild and a hard clustering phase. The former is based on a number of morphological operations drawn from the image-processing domain and aims at disjoining mildly connected objects and filtering noise.
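The mild-clustering morphology step just described can be illustrated with a binary opening (erosion followed by dilation). This is a generic sketch using a fixed 3x3 structuring element, not the authors' exact operator chain:

```python
import numpy as np

def binary_opening(mask, it=1):
    """Erosion followed by dilation with a 3x3 structuring element:
    one-pixel bridges between mildly connected objects are removed
    and small noise blobs are filtered out."""
    m = mask.astype(bool)
    H, W = m.shape
    for _ in range(it):                      # erosion: AND over the 3x3 neighbourhood
        p = np.pad(m, 1, constant_values=False)
        e = np.ones((H, W), dtype=bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                e &= p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
        m = e
    for _ in range(it):                      # dilation: OR over the 3x3 neighbourhood
        p = np.pad(m, 1, constant_values=False)
        d = np.zeros((H, W), dtype=bool)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                d |= p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
        m = d
    return m
```

Applied to a segmentation mask, two blobs joined by a thin bridge come out disjoined, which is exactly the precondition the hard clustering phase needs.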
The hard clustering phase exploits local feature matching techniques from the computer vision domain, aiming at sub-clustering the clusters obtained from the mild clustering stage. Evaluation on highly challenging single and dual-energy X-ray imagery reveals the architecture's promising performance.

Item Open Access: B-HoD: A Lightweight and Fast Binary Descriptor for 3D Object Recognition and Registration (IEEE, 2017-08-03)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Chermak, Lounis
3D object recognition and registration in computer vision applications has lately drawn much attention, as it is capable of superior performance compared to its 2D counterpart. Although a number of high-performing solutions exist, it is still challenging to further reduce processing time and memory requirements to meet the needs of time-critical applications. In this paper we propose an extension of the 3D descriptor Histogram of Distances (HoD) into the binary domain, named the Binary-HoD (B-HoD). Our binary quantization procedure, along with the proposed preprocessing step, reduces both processing time and memory requirements by an order of magnitude compared to current state-of-the-art 3D descriptors. Evaluation on two popular low-quality datasets shows its promising performance.

Item Open Access: DeepLO: Multi-projection deep LIDAR odometry for space orbital robotics rendezvous relative navigation (Elsevier, 2020-07-30)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Dubanchet, Vincent; Richardson, Mark A.
This work proposes a new Light Detection and Ranging (LIDAR) based navigation architecture that is appropriate for uncooperative relative robotic space navigation applications. In contrast to current solutions that exploit 3D LIDAR data, our architecture suggests a Deep Recurrent Convolutional Neural Network (DRCNN) that exploits multi-projected imagery of the acquired 3D LIDAR data.
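Turning 3D LIDAR returns into 2D imagery, as the multi-projection front end above does, can be sketched with an orthographic depth buffer. This is an illustrative minimal version; the paper's actual projection geometry and resolution are not specified here:

```python
import numpy as np

def depth_projection(points, res=64, axis=2):
    """Orthographic depth image of an (n, 3) point cloud along one axis.
    Repeating this for several axes yields the multi-projection stack
    that a 2D CNN can consume."""
    uv_axes = [a for a in range(3) if a != axis]
    uv = points[:, uv_axes]                  # in-plane coordinates
    depth = points[:, axis]                  # distance along the projection axis
    lo, hi = uv.min(0), uv.max(0)
    # normalise planar coordinates into pixel indices
    pix = ((uv - lo) / np.maximum(hi - lo, 1e-9) * (res - 1)).astype(int)
    img = np.full((res, res), np.inf)
    for (u, v), d in zip(pix, depth):
        img[v, u] = min(img[v, u], d)        # keep the nearest return per pixel
    img[np.isinf(img)] = 0.0                 # empty pixels -> background
    return img
```

Projecting along each of the three axes gives three such images per scan, which is the kind of multi-channel input a convolutional module can process far faster than raw 3D data.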
Advantages of the proposed DRCNN are: an effective feature representation facilitated by the Convolutional Neural Network module within the DRCNN, robust modeling of the navigation dynamics due to the Recurrent Neural Network incorporated in the DRCNN, and a low processing time. Our trials evaluate several current state-of-the-art space navigation methods on various simulated but credible scenarios involving a satellite model developed by Thales Alenia Space (France). Additionally, we evaluate real satellite LIDAR data acquired in our lab. Results demonstrate that the proposed architecture, although trained solely on simulated data, is highly adaptable and more appealing than current algorithms on both simulated and real LIDAR data scenarios, affording better odometry accuracy at lower computational requirements.

Item Open Access: Evaluating 3D local descriptors and recursive filtering schemes for LIDAR-based uncooperative relative space navigation (Wiley, 2019-09-05)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Dubanchet, Vincent
We propose a light detection and ranging (LIDAR) based relative navigation scheme that is appropriate for uncooperative relative space navigation applications. Our technique combines the encoding power of three-dimensional (3D) local descriptors, matched via a correspondence grouping scheme, with the robust rigid transformation estimation capability of the proposed adaptive recursive filtering techniques. Trials evaluate several current state-of-the-art 3D local descriptors and recursive filtering techniques on a number of both real and simulated scenarios involving various space objects, including satellites and asteroids. Results demonstrate that the proposed architecture affords a 50% odometry accuracy improvement over current solutions, while also affording a low computational burden.
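The rigid transformation estimation step in descriptor-matching pipelines like the one above can be sketched with the standard SVD (Kabsch) least-squares solution; this is a generic closed-form solver, not the authors' adaptive recursive filter:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t,
    computed from matched (n, 3) point sets via the Kabsch/SVD solution."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

Fed with correspondences surviving the grouping stage, the recovered (R, t) per scan pair is exactly the relative pose increment that the recursive filter then smooths over time.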
From our trials we conclude that the 3D descriptor histogram of distances short (HoD-S), combined with adaptive αβ filtering, is the most appealing combination for the majority of the scenarios evaluated, as it pairs high-quality odometry with a low processing burden.

Item Open Access: Evaluating 3D local descriptors for future LIDAR missiles with automatic target recognition capabilities (Taylor and Francis, 2017-08-14)
Kechagias-Stamatis, Odysseas; Aouf, Nabil
Future light detection and ranging seeker missiles incorporating 3D automatic target recognition (ATR) capabilities can improve a missile's effectiveness in complex battlefield environments. Considering the progress of local 3D descriptors in the computer vision domain, this paper evaluates a number of these on highly credible simulated air-to-ground missile engagement scenarios. The latter take into account numerous parameters that have not yet been investigated in the literature, including variable missile-target range, 6-degrees-of-freedom missile motion and atmospheric disturbances. Additionally, the evaluation process utilizes our suggested 3D ATR architecture, which, compared to current pipelines, involves more post-processing layers aimed at further enhancing 3D ATR performance. Our trials reveal that computer vision algorithms are appealing for missile-oriented 3D ATR.

Item Open Access: A furcated visual collision avoidance system for an autonomous micro robot (IEEE, 2018-07-23)
Isakhani, Hamid; Aouf, Nabil; Kechagias-Stamatis, Odysseas; Whidborne, James F.
This paper proposes a secondary reactive collision avoidance system for the micro class of robots based on a novel approach known as Furcated Luminance-Difference Processing (FLDP), inspired by the Lobula Giant Movement Detector, a wide-field visual neuron located in the lobula layer of a locust's nervous system.
This paper addresses some of the major collision avoidance challenges: obstacle proximity and direction estimation, and operation in GPS-denied environments with irregular lighting. Additionally, the approach has proven effective in detecting edges independent of background color, size, and contour. The FLDP executes a series of image enhancement and edge detection algorithms to estimate the collision threat level, which in turn determines whether the robot's field of view must be dissected; each section's response is then compared against the others to generate a simple collision-free maneuver. Ultimately, the computational load and performance of the model are assessed against an eclectic set of off-line as well as real-time real-world collision scenarios, validating the proposed model's capability to avoid obstacles more than 670 mm prior to collision while moving at 1.2 m/s, with a successful avoidance rate of 90% and processing at 120 Hz on a simple single-core microcontroller. This is sufficient to conclude the system's feasibility for real-time real-world applications that require a fail-safe collision avoidance system.

Item Open Access: Fusing deep learning and sparse coding for SAR ATR (IEEE, 2018-08-10)
Kechagias-Stamatis, Odysseas; Aouf, Nabil
We propose a multi-modal and multi-discipline data fusion strategy appropriate for Automatic Target Recognition (ATR) on Synthetic Aperture Radar imagery. Our architecture fuses a proposed Clustered version of the AlexNet Convolutional Neural Network with Sparse Coding theory, which is extended to facilitate an adaptive elastic net optimization concept.
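The sparse-coding side of such a fusion can be sketched with a plain proximal-gradient (ISTA) elastic-net solver and residual-based classification. This is a generic stand-in with fixed penalties, not the authors' adaptive elastic-net formulation:

```python
import numpy as np

def elastic_net_code(D, y, l1=0.05, l2=0.1, iters=300):
    """Sparse code x minimising 0.5*||y - D@x||^2 + l1*||x||_1 + 0.5*l2*||x||^2
    via proximal gradient descent (ISTA)."""
    x = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2 + l2            # Lipschitz constant of the smooth part
    for _ in range(iters):
        grad = D.T @ (D @ x - y) + l2 * x         # gradient of the smooth terms
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - l1 / L, 0.0)   # soft-thresholding
    return x

def classify_by_residual(class_dicts, y):
    """Assign y to the class whose dictionary reconstructs it with the
    smallest residual -- the usual sparse-coding ATR decision rule."""
    residuals = [np.linalg.norm(y - D @ elastic_net_code(D, y)) for D in class_dicts]
    return int(np.argmin(residuals))
```

In a fusion architecture, the dictionary columns would be training templates (or CNN features of them) per target class; the elastic-net residual then provides a score that can be combined with the CNN's output.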
Evaluation on the MSTAR dataset yields the highest ATR performance reported yet: 99.33% and 99.86% for the 3-class and 10-class problems, respectively.

Item Open Access: High-speed multi-dimensional relative navigation for uncooperative space objects (Elsevier, 2019-05-03)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Richardson, Mark A.
This work proposes a high-speed Light Detection and Ranging (LIDAR) based navigation architecture that is appropriate for uncooperative relative space navigation applications. In contrast to current solutions that exploit 3D LIDAR data, our architecture transforms the odometry problem from 3D space into multiple 2.5D ones and completes it by utilizing a recursive filtering scheme. Trials evaluate several current state-of-the-art 2D keypoint detection and local feature description methods, as well as recursive filtering techniques, on a number of simulated but credible scenarios involving a satellite model developed by Thales Alenia Space (France). The most appealing performance is attained by the 2D keypoint detector Good Features to Track (GFTT) combined with the feature descriptor KAZE, further combined with either the H∞ or the Kalman recursive filter. Experimental results demonstrate that, compared to current algorithms, the GFTT/KAZE combination is highly appealing, affording one order of magnitude more accurate odometry and a very low processing burden which, depending on the competitor method, may exceed one order of magnitude faster computation.

Item Open Access: Histogram of distances for local surface description (IEEE, 2016-06-09)
Kechagias-Stamatis, Odysseas; Aouf, Nabil
3D object recognition has proven superior to its 2D counterpart in numerous implementations, making it a current research topic. Local-feature-based proposals specifically, although quite accurate, limit their performance by the stability of the local reference frame or axis (LRF/A) on which the descriptors are defined.
Additionally, extra processing time is required to estimate the LRF for each local patch. We propose a 3D descriptor that overrides the necessity of an LRF/A, dramatically reducing the processing time needed; in addition, robustness to high levels of noise and non-uniform subsampling is achieved. Our approach, namely Histogram of Distances, is based on multiple L2-norm metrics of local patches, providing a simple and fast-to-compute descriptor suitable for time-critical applications. Evaluation on both high- and low-quality popular point clouds shows its promising performance.

Item Open Access: H∞ LIDAR odometry for spacecraft relative navigation (IET, 2016-01-04)
Kechagias-Stamatis, Odysseas; Aouf, Nabil
Current light detection and ranging (LIDAR) based odometry solutions used for spacecraft relative navigation suffer from quite a few deficiencies. These include an off-line training requirement and reliance on the iterative closest point (ICP) algorithm, which does not guarantee a globally optimal solution. To counter this, the authors suggest a robust architecture that overcomes the problems of current proposals by combining the concepts of 3D local feature matching with an adaptive variant of the H∞ recursive filtering process. Trials on real laser scans of an EnviSat model demonstrate that the proposed architecture affords at least one order of magnitude better accuracy compared to ICP.

Item Open Access: Local feature based automatic target recognition for future 3D active homing seeker missiles (Elsevier, 2017-12-13)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Gray, Greer Jillian; Chermak, Lounis; Richardson, Mark A.; Oudyi, F.
We propose an architecture appropriate for future Light Detection and Ranging (LIDAR) active homing seeker missiles with Automatic Target Recognition (ATR) capabilities. Our proposal enhances military targeting performance by extending ATR into the third dimension.
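The LRF-free construction behind the Histogram of Distances descriptor described above can be sketched in a few lines: because the descriptor is built purely from pairwise L2 distances within a support patch, no reference frame is needed and rotation invariance comes for free. Bin count and support radius here are illustrative, not the paper's settings:

```python
import numpy as np

def hod_descriptor(patch, bins=16, max_d=1.0):
    """LRF-free local descriptor: normalised histogram of the pairwise
    L2 distances inside an (n, 3) support patch."""
    d = np.linalg.norm(patch[:, None, :] - patch[None, :, :], axis=-1)
    d = d[np.triu_indices(len(patch), k=1)]       # unique point pairs only
    h, _ = np.histogram(d, bins=bins, range=(0.0, max_d))
    return h / max(h.sum(), 1)                    # normalise to sum to 1
```

Since pairwise distances are unchanged by any rigid motion, a rotated copy of the same patch yields an identical descriptor, which is exactly why the LRF estimation step can be skipped.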
From a military and aerospace industry point of view, this is appealing, as weapon effectiveness against camouflage, concealment and deception techniques can be substantially improved. Specifically, we present a missile seeker 3D ATR architecture that relies on the 3D local-feature-based SHOT descriptor and a dual-role pipeline with a number of pre- and post-processing operations. We evaluate our architecture on a number of missile engagement scenarios in various environmental setups, with the missile under various altitudes, obliquities, distances to the target and scene resolutions. Under these demanding conditions, the recognition performance gained is highly promising. Even in the extreme case of reducing the database entries to a single template per target, our interchangeable ATR architecture still provides highly acceptable performance. Although we focus on future intelligent missile systems, our approach can be applied to a great range of time-critical complex systems in space, air and ground environments for military, law-enforcement, commercial and research purposes.

Item Open Access: A new passive 3-D automatic target recognition architecture for aerial platforms (IEEE, 2018-07)
Kechagias-Stamatis, Odysseas; Aouf, Nabil
3-D automatic target recognition (ATR) has many advantages over its 2-D counterpart, but there are several constraints in the context of small low-cost unmanned aerial vehicles (UAVs). These limitations include the requirement for active rather than passive monitoring, high equipment costs, sensor packaging size, and processing burden. We therefore propose a new structure-from-motion (SfM) 3-D ATR architecture that exploits the UAV's onboard sensors, i.e., the visual-band camera, gyroscope, and accelerometer, and meets the requirements of a small UAV system.
We tested the proposed 3-D SfM ATR using simulated UAV reconnaissance scenarios and found that its performance was better than classic 3-D light detection and ranging (LIDAR) ATR, combining the advantages of 3-D LIDAR ATR and passive 2-D ATR. The main advantages of the proposed architecture include rapid processing, target pose invariance, a small template size, passive scene sensing, and inexpensive equipment. We implemented the SfM module under two keypoint detection, description and matching schemes, with the 3-D ATR module exploiting several current techniques. By comparing SfM 3-D ATR, 3-D LIDAR ATR, and 2-D ATR, we confirmed the superior performance of our new architecture.

Item Open Access: Performance evaluation of single and cross-dimensional feature detection and description (Institution of Engineering and Technology (IET), 2019-10-18)
Kechagias-Stamatis, Odysseas; Aoufi, Abdelkader; Richardson, Mark A.
Three-dimensional local feature detection and description techniques are widely used for object registration and recognition applications. Although several evaluations of 3D local feature detection and description methods have already been published, these are constrained to a single-dimensional scheme, i.e. either 3D methods, or 2D methods applied to multiple projections of the 3D data. However, cross-dimensional (mixed 2D and 3D) feature detection and description has yet to be investigated. Here, we evaluate the performance of both single and cross-dimensional feature detection and description methods on several 3D datasets and demonstrate the superiority of cross-dimensional over single-dimensional schemes.

Item Open Access: SAR automatic target recognition based on convolutional neural networks (IEEE, 2018-05-28)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Belloni, Carole D. L.
We propose a multi-modal multi-discipline strategy appropriate for Automatic Target Recognition (ATR) on Synthetic Aperture Radar (SAR) imagery.
Our architecture relies on a Convolutional Neural Network pre-trained in the RGB domain that is innovatively applied to SAR imagery, combined with multiclass Support Vector Machine classification. The multi-modal aspect of our architecture reinforces the generalisation capabilities of our proposal, while the multi-discipline aspect bridges the modality gap. Even though our technique is trained on a single depression angle of 17°, average performance on the MSTAR database over a 10-class target classification problem at 15°, 30° and 45° depression is 97.8%. This multi-target and multi-depression ATR capability has not yet been reported in the MSTAR database literature.

Item Open Access: Stixel Based Scene Understanding for Autonomous Vehicles (IEEE, 2017-08-03)
Wieszok, Z.; Aouf, Nabil; Kechagias-Stamatis, Odysseas; Chermak, Lounis
We propose a stereo-vision-based obstacle detection and scene segmentation algorithm appropriate for autonomous vehicles. Our algorithm is based on an innovative extension of the Stixel world that forgoes computing a disparity map. Ground plane and stixel distance estimation are improved by exploiting an online-learned color model. Furthermore, stixel height estimation is leveraged by an innovative joint membership scheme based on color and disparity information. Stixels are then used as input for semantic scene segmentation providing scene understanding, which can further serve as a comprehensive middle-level representation for high-level object detectors.

Item Open Access: Target recognition for synthetic aperture radar imagery based on convolutional neural network feature fusion (SPIE, 2018-12-04)
Kechagias-Stamatis, Odysseas
Driven by the great success of deep convolutional neural networks (CNNs), currently used in quite a few computer vision applications, we extend the usability of visual-band CNNs into the synthetic aperture radar (SAR) data domain without employing transfer learning.
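The pretrained-CNN-features-plus-multiclass-SVM pattern can be sketched as below. Random class-separated vectors stand in for real CNN activations (which would come from a frozen, RGB-pretrained network applied to SAR chips), so only the SVM classification stage is actually exercised:

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in "CNN features": three well-separated Gaussian clusters in 64-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.1, size=(40, 64)) for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 40)

# Multiclass linear SVM on the frozen features (one-vs-one under the hood).
clf = SVC(kernel="linear").fit(X, y)
```

Because the feature extractor stays frozen, only this lightweight SVM needs training on the target-domain (here SAR) data, which is what makes the cross-modality transfer cheap.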
Our SAR automatic target recognition (ATR) architecture efficiently extends the pretrained Visual Geometry Group (VGG) CNN from the visual domain into the X-band SAR data domain by clustering its neuron layers, bridging the visual-SAR modality gap by fusing the features extracted from the hidden layers, and employing a local feature matching scheme. Trials on the moving and stationary target acquisition (MSTAR) dataset under various setups and nuisances demonstrate highly appealing ATR performance, achieving 100% and 99.79% on the 3-class and 10-class ATR problems, respectively. We also confirm the validity, robustness, and conceptual coherence of the proposed method by extending it to several state-of-the-art CNNs and commonly used local feature similarity/match metrics.
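A typical local feature similarity/match metric of the kind mentioned above is cosine similarity with nearest-neighbour assignment; a minimal sketch (the threshold is illustrative):

```python
import numpy as np

def cosine_match(desc_a, desc_b, thresh=0.8):
    """Nearest-neighbour matching of two descriptor sets (rows are
    descriptors) by cosine similarity; returns (i, j) index pairs
    whose similarity clears the threshold."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                       # all pairwise cosine similarities
    nn = sim.argmax(1)                  # best match in b for each row of a
    return [(i, j) for i, j in enumerate(nn) if sim[i, j] >= thresh]
```

Swapping this function for an L2 or correlation metric is how such a pipeline can be evaluated under different similarity/match metrics without touching the feature extraction stage.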