Browsing by Author "Chermak, Lounis"
Now showing 1 - 15 of 15
Item Open Access
B-HoD: A Lightweight and Fast Binary descriptor for 3D Object Recognition and Registration (IEEE, 2017-08-03)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Chermak, Lounis
3D object recognition and registration in computer vision applications has lately drawn much attention, as it is capable of superior performance compared to its 2D counterpart. Although a number of high-performing solutions exist, it is still challenging to further reduce processing time and memory requirements to meet the needs of time-critical applications. In this paper we propose an extension of the 3D descriptor Histogram of Distances (HoD) into the binary domain, named the Binary-HoD (B-HoD). Our binary quantization procedure, along with the proposed preprocessing step, reduces both processing time and memory requirements by an order of magnitude compared to current state-of-the-art 3D descriptors. Evaluation on two popular low-quality datasets shows its promising performance.

Item Open Access
Countermeasure Leveraging Optical Attractor Kits (CLOAK): interpretational disruption of a visual-based workflow (SPIE, 2020-09-20)
Di Fraia, Marco Zaccaria; Chermak, Lounis
Due to their negligible cost, small energy footprint, compact size and passive nature, cameras are emerging as one of the most appealing sensing approaches for the realization of fully autonomous intelligent mobile platforms. In defence contexts, passive sensors such as cameras represent an important asset due to the absence of a detectable external operational signature, with at most some radiation generated by their components. This characteristic, however, makes targeting them a daunting task, as their active neutralization requires pinning a small angular diameter moving at high speed. In this paper we introduce an interpretational countermeasure acting against autonomous platforms relying on feature-based optical workflows.
We classify our approach as an interpretational disruption because it exploits the heuristics of the model used by the on-board artificial intelligence to interpret the available data. To remove the struggle of accurately pinpointing such an imperceptible target, our approach consists in passively corrupting, from a perception point of view, the whole environment with a small, sparse set of physical observables. The concrete design of these systems is developed from the response of a feature detector of interest. We define an optical attractor as the collection of pixels inducing an exceptionally strong response for a target feature detector. We also define a physical object inducing these pixel structures for defense purposes as a CLOAK: Countermeasure Leveraging Optical Attractor Kits. Using optical attractors, any optical-based algorithm relying on feature extraction can potentially be disrupted, in a completely passive and non-destructive fashion.

Item Open Access
DFSGD: Machine Learning Based Intrusion Detection for Resource Constrained Devices (2019-12)
Lee, Seo Jin; Chermak, Lounis; Richardson, Mark A.; Yoo, Paul D.; Asyhari, Taufiq
An ever-increasing number of smart and mobile devices interconnected through wireless networks such as the Internet of Things (IoT), and the large amount of sensitive network data transmitted between them, have raised security and privacy issues. An intrusion detection system (IDS) is known as an effective defence system, and machine learning (ML) and its subfield deep learning (DL) methods are often used for its development. However, IoT devices have limited computational resources, such as a limited energy source and computational power, and thus traditional IDSs that require extensive computational resources are not suitable for running on such devices. Therefore, the aim of this research is to design and develop a lightweight ML-based IDS for resource-constrained devices.
The research proposes a lightweight ML-based IDS model based on Deep Feature Learning with Linear SVM and Gradient Descent optimisation (DFSGD) to deploy and run on resource-constrained devices, reducing the number of features through feature extraction and selection using a stacked autoencoder (SAE), mutual information (MI) and a C4.5 wrapper. The DFSGD is trained on the Aegean Wi-Fi Intrusion Dataset (AWID) to detect impersonation attacks, and utilises a support vector machine (SVM) and gradient descent as the classifier and optimisation algorithm respectively. As one of the key contributions of this research, the features in the AWID dataset utilised for the development of the model were also investigated for their usability for further development of IDSs. Finally, the DFSGD was run on a Raspberry Pi to show its possible deployment on resource-constrained devices.

Item Open Access
HySim: a tool for space-to-space hyperspectral resolved imagery (International Astronautical Federation (IAF), 2023-10-06)
Felicetti, Leonard; Hobbs, Stephen; Leslie, Cameron; Rowling, Samuel; Dhesi, Mekhi; Harris, Toby; Brydon, George; Chermak, Lounis; Soori, Umair; Allworth, James; Balson, David
This paper introduces HySim, a novel tool addressing the need for hyperspectral space-to-space imaging simulations, vital for in-orbit spacecraft inspection missions. The tool fills this gap by enabling the generation of hyperspectral space-to-space images across various scenarios, including fly-bys, inspections, rendezvous, and proximity operations. HySim combines open-source tools to handle complex scenarios, providing versatile configuration options for imaging scenarios, camera specifications, and material properties, and accurately simulates hyperspectral images of the target scene.
This paper outlines HySim's features and its validation against real space-borne images, and discusses its potential applications in space missions, emphasising its role in advancing space-to-space inspection and in-orbit servicing planning.

Item Open Access
IMPACT: Impersonation attack detection via edge computing using deep auto encoder and feature abstraction (IEEE, 2020-04-02)
Lee, Seo Jin; Yoo, Paul D.; Asyhari, A. Taufiq; Jhi, Yoonchan; Chermak, Lounis; Yeun, Chan Yeob; Taha, Kamal
An ever-increasing number of computing devices interconnected through wireless networks, encapsulated in cyber-physical-social systems, and the significant amount of sensitive network data transmitted among them have raised security and privacy concerns. An intrusion detection system (IDS) is known as an effective defence mechanism, and most recently machine learning (ML) methods have been used for its development. However, Internet of Things (IoT) devices often have limited computational resources, such as a limited energy source, computational power and memory; thus, traditional ML-based IDSs that require extensive computational resources are not suitable for running on such devices. This study therefore designs and develops a lightweight ML-based IDS tailored for resource-constrained devices. Specifically, the study proposes a lightweight ML-based IDS model, namely IMPACT (IMPersonation Attack deteCTion using deep auto-encoder and feature abstraction). This is based on deep feature learning with a gradient-based linear Support Vector Machine (SVM), deployed and run on resource-constrained devices by reducing the number of features through feature extraction and selection using a stacked autoencoder (SAE), mutual information (MI) and a C4.8 wrapper. IMPACT is trained on the Aegean Wi-Fi Intrusion Dataset (AWID) to detect impersonation attacks.
Numerical results show that the proposed IMPACT achieved 98.22% accuracy with a 97.64% detection rate and a 1.20% false alarm rate, outperforming existing state-of-the-art benchmark models. Another key contribution of this study is the investigation of the features in the AWID dataset for their usability in further development of IDSs.

Item Open Access
Local feature based automatic target recognition for future 3D active homing seeker missiles (Elsevier, 2017-12-13)
Kechagias-Stamatis, Odysseas; Aouf, Nabil; Gray, Greer Jillian; Chermak, Lounis; Richardson, Mark A.; Oudyi, F.
We propose an architecture appropriate for future Light Detection and Ranging (LIDAR) active homing seeker missiles with Automatic Target Recognition (ATR) capabilities. Our proposal enhances military targeting performance by extending ATR into the third dimension. From a military and aerospace industry point of view, this is appealing, as weapon effectiveness against camouflage, concealment and deception techniques can be substantially improved. Specifically, we present a missile seeker 3D ATR architecture that relies on the 3D local feature based SHOT descriptor and a dual-role pipeline with a number of pre- and post-processing operations. We evaluate our architecture on a number of missile engagement scenarios in various environmental setups, with the missile at various altitudes, obliquities, distances to the target and scene resolutions. Under these demanding conditions, the recognition performance gained is highly promising. Even in the extreme case of reducing the database entries to a single template per target, our interchangeable ATR architecture still provides highly acceptable performance.
Although we focus on future intelligent missile systems, our approach can be applied to a wide range of time-critical complex systems in space, air and ground environments for military, law-enforcement, commercial and research purposes.

Item Open Access
NAV-Landmarks: deployable 3D infrastructures to enable CubeSats navigation near asteroids (IEEE, 2020-08-21)
Di Fraia, Marco Zaccaria; Chermak, Lounis; Cuartielles, Joan-Pau; Felicetti, Leonard; Scannapieco, Antonio Fulvio
Autonomous operations in the proximity of Near Earth Objects (NEOs) are perhaps the most challenging and demanding type of mission operation currently being considered. The exceptional variability of geometric and illumination conditions, the scarcity of large-scale surface features and the strong perturbations in their proximity require extremely robust systems. Robustness is usually introduced either by increasing the number and/or complexity of on-board sensors, or by employing algorithms capable of handling uncertainties, which are often computationally heavy. While for a large satellite this would be predominantly an economic issue, for small satellites these constraints might push the ability to accomplish challenging missions beyond the realm of technical possibility. The scope of this paper is to present an active approach that allows small satellites deployed by a mothership to perform robust navigation using only a monocular visible camera. In particular, the introduction of Non-cooperative Artificial Visual Landmarks (NAV-Landmarks) on the surface of the target object is proposed to augment the capabilities of small satellites. These external elements can effectively be regarded as an infrastructure forming an extension of the landing system. The quantitative efficiency estimation of this approach will be performed by comparing the outputs of a visual odometry algorithm, which operates on sequences of images representing ballistic descents around a small non-rotating asteroid.
These sequences of virtual images will be obtained through the integration of two simulated models, both based on the Apollo asteroid 101955 Bennu. The first is a dynamical model, describing the landing trajectory, realized by integrating over time the gravitational potential around a three-axis ellipsoid. The second model is visual, generated by introducing into Unreal Engine 4 a CAD model of the asteroid (with a resolution of 75 cm) and scattering on its surface a number N of cubes with side length L. The effect of both N and L on the navigation accuracy will be reported. While defining an optimal shape for the NAV-Landmarks is beyond the scope of this paper, prescriptions about the beacons' geometry will be provided. In particular, in this work the objects will be represented as high-visibility cubes; this shape satisfies, albeit in a non-optimal way, most of the design goals.

Item Open Access
Perception fields: analysing distributions of optical features as a proximity navigation tool for autonomous probes around asteroids (IEEE, 2021-08-19)
Di Fraia, Marco Zaccaria; Feetham, Luke; Felicetti, Leonard; Sanchez, Joan-Pau; Chermak, Lounis
This paper suggests a new way of interpreting visual information perceived by visible cameras in the proximity of small celestial bodies. At close ranges, camera-based perception processes generally rely on computational constructs known as features. Our hypothesis is that trends in the quantity of available optical features can be correlated to variations in the angular distance from the source of illumination. The discussed approach is based on treating properties of these detected optical features as readings of a field (the perception fields of the title), assumed to be induced by the coupling of the environmental conditions and the state of the sensing device. The extreme spectrum of shapes, surface properties and gravity fields of small celestial bodies heavily affects visual proximity operational procedures.
Therefore, self-contained ancillary tools providing context and an evaluation of estimators' performance, while using the fewest possible priors, are extremely significant in these conditions. This preliminary study presents an analysis of the occurrences of optical features observed around two asteroids, 101955 Bennu and (8567) 1996 HW1, in visual data simulated within Blender, a computer graphics engine. The comparison of three different feature detectors showed distinctive trends in the distribution of the detected optical features, directly correlated to the spacecraft-target-Sun angle, confirming our hypothesis.

Item Open Access
PUGTIFs: Passively user-generated thermal invariant features (IEEE, 2019-08-08)
Jackson, Edward; Chermak, Lounis
Feature detection is a vital aspect of computer vision applications, but adverse environments, distance and illumination can affect the quality and repeatability of features or even prevent their identification. Invariance to these constraints would make an ideal feature attribute. Here we propose the first exploitation of consistently occurring thermal signatures generated by a moving platform, a paradigm we define as passively user-generated thermal invariant features (PUGTIFs). In this particular instance, the PUGTIF concept is applied through the use of thermal footprints that are passively and continuously user-generated by heat differences, so that features are no longer dependent on the changing scene structure (as in classical approaches) but instead maintain spatial coherency and remain invariant to changes in illumination.
A framework suitable for any PUGTIF has been designed, consisting of three methods: first, the known footprint size is used to solve for monocular localisation and thus scale ambiguity; second, the consistent spatial pattern allows us to determine heading orientation; and third, these principles are combined in our automated thermal footprint detector (ATFD) method to achieve segmentation/feature detection. We evaluated the detection of PUGTIFs in four laboratory environments (sand, grass, grass with foliage, and carpet) and compared ATFD to typical image segmentation methods. We found that ATFD is superior to the other methods while also solving for scaled monocular camera localisation and providing user heading in multiple environments.

Item Open Access
Real-time smart and standalone vision/IMU navigation sensor (2016-07-26)
Chermak, Lounis; Aouf, Nabil; Richardson, Mark A.; Visentin, G.
In this paper, we present a smart, standalone, multi-platform stereo vision/IMU-based navigation system providing ego-motion estimation. The real-time visual odometry algorithm runs on a nano-ITX single-board computer (SBC) with a 1.9 GHz CPU and a 16-core GPU. High-resolution stereo images of 1.2 megapixels provide high-quality data. Tracking of up to 750 features is made possible at 5 fps thanks to a minimal but efficient feature detection, stereo matching and feature tracking scheme running on the GPU. Furthermore, the feature tracking algorithm benefits from the assistance of a 100 Hz IMU, whose accelerometer and gyroscope data provide inertial feature prediction, enhancing execution speed and tracking efficiency. In a space mission context, we demonstrate the robustness and accuracy of the real-time generated 6-degrees-of-freedom trajectories from our visual odometry algorithm.
Performance evaluations are comparable to ground truth measurements from an external motion capture system.

Item Open Access
Scale robust IMU-assisted KLT for stereo visual odometry solution (Cambridge University Press, 2016-08-30)
Chermak, Lounis; Aouf, Nabil; Richardson, Mark A.
We propose a novel stereo visual IMU-assisted (Inertial Measurement Unit) technique that extends the use of the KLT tracker (Kanade–Lucas–Tomasi) to large inter-frame motion. The constrained and coherent inter-frame motion acquired from the IMU is applied to detected features through a homogeneous transform using 3D geometry and stereoscopy properties. This efficiently predicts the projection of the optical flow in subsequent images. Accurate adaptive tracking windows limit tracking areas, resulting in a minimum of lost features, and also prevent tracking of dynamic objects. This new feature tracking approach is adopted as part of a fast and robust visual odometry algorithm based on the double dogleg trust region method. Comparisons with gyro-aided KLT and variant approaches show that our technique is able to maintain minimal loss of features and low computational cost even on image sequences presenting important scale change. A visual odometry solution based on this IMU-assisted KLT gives more accurate results than an INS/GPS solution for trajectory generation in certain contexts.

Item Open Access
Standalone and embedded stereo visual odometry based navigation solution (2015-07-17)
Chermak, Lounis; Aouf, Nabil
This thesis investigates techniques for, and designs, an autonomous stereo vision based navigation sensor to improve stereo visual odometry for the purpose of navigation in unknown environments, in particular autonomous navigation in a space mission context, which imposes challenging constraints on algorithm development and hardware requirements. For instance, the Global Positioning System (GPS) is not available in this context; thus, a navigation solution cannot rely on similar external sources of information.
Support in handling this problem is provided through the conception of an intelligent perception-sensing device that delivers precise outputs related to absolute and relative 6-degrees-of-freedom (DOF) positioning. This is achieved using only images from calibrated stereo cameras, possibly coupled with an inertial measurement unit (IMU), while fulfilling real-time processing requirements. Moreover, no prior knowledge about the environment is assumed. Robotic navigation has been the motivating research context for investigating different and complementary areas such as stereovision, visual motion estimation, optimisation and data fusion. Several contributions have been made in these areas. Firstly, an efficient feature detection, stereo matching and feature tracking strategy based on the Kanade-Lucas-Tomasi (KLT) feature tracker is proposed to form the basis of the visual motion estimation. Secondly, in order to cope with extreme illumination changes, a high dynamic range (HDR) imaging solution is investigated and a comparative assessment of feature tracking performance is conducted. Thirdly, a two-view local bundle adjustment scheme based on trust region minimisation is proposed for precise visual motion estimation. Fourthly, a novel KLT feature tracker using IMU information is integrated into the visual odometry pipeline. Finally, a smart standalone stereo visual/IMU navigation sensor has been designed, integrating an innovative combination of hardware as well as the novel software solutions proposed above. As a result of a balanced combination of hardware and software implementation, we achieved 5 fps frame rate processing of up to 750 initial features at a resolution of 1280x960, the highest resolution reached in real time for visual odometry applications to our knowledge.
In addition, the visual odometry accuracy of our algorithm achieves the state of the art, with less than 1% relative error in the estimated trajectories.

Item Open Access
Stixel Based Scene Understanding for Autonomous Vehicles (IEEE, 2017-08-03)
Wieszok, Z.; Aouf, Nabil; Kechagias-Stamatis, Odysseas; Chermak, Lounis
We propose a stereo vision based obstacle detection and scene segmentation algorithm appropriate for autonomous vehicles. Our algorithm is based on an innovative extension of the Stixel world which avoids computing a disparity map. Ground plane and stixel distance estimation is improved by exploiting an online-learned color model. Furthermore, stixel height estimation is leveraged by an innovative joined membership scheme based on color and disparity information. Stixels are then used as input for semantic scene segmentation providing scene understanding, which can further serve as a comprehensive mid-level representation for high-level object detectors.

Item Open Access
Thermal stereo odometry for UAVs (IEEE, 2015-07-14)
Mouats, Tarek; Aouf, Nabil; Chermak, Lounis; Richardson, Mark A.
In the last decade, visual odometry (VO) has attracted significant research attention within the computer vision community. Most works have been carried out using standard visible-band cameras. These sensors offer numerous advantages but also suffer from drawbacks such as illumination variations and limited operational time (i.e., daytime only). In this paper, we explore techniques that allow us to extend these concepts beyond the visible spectrum. We introduce a localization solution based on a pair of thermal cameras, focusing on VO, and demonstrate the accuracy of the proposed solution in daytime as well as night-time. The first challenge with thermal cameras is their geometric calibration; here, we propose a solution to overcome this issue and enable stereopsis. VO requires a good set of feature correspondences.
For this purpose we use a combination of the Fast-Hessian detector with the Fast Retina Keypoint descriptor. A range of optimization techniques can be used to compute the incremental motion; here, we propose the double dogleg algorithm and show that it presents an interesting alternative to the commonly used Levenberg-Marquardt approach. In addition, we explore thermal 3-D reconstruction and show that performance similar to the visible band can be achieved. In order to validate the proposed solution, we built an innovative experimental setup to capture various datasets under different weather and time conditions.

Item Open Access
Towards in-orbit hyperspectral imaging of space debris (2023-01-26)
Hobbs, Stephen E.; Felicetti, Leonard; Leslie, Cameron; Rowling, Samuel; Brydon, George; Dhesi, Mekhi; Harris, Toby; Chermak, Lounis; Soori, Umair
Satellites are vulnerable to space debris larger than ~1 cm, but much of this debris cannot be tracked from the ground. In-orbit detection and tracking of debris is one solution to this problem. We present some steps towards achieving this, in particular using hyperspectral imaging to maximise the information obtained. We present current work related to hyperspectral in-orbit imaging of space debris in three areas: scenario evaluation, a reflectance database, and an image simulator, with example results. Hyperspectral imaging has the potential to provide valuable additional information, such as assessments of spacecraft or debris condition and even spectral "finger-printing" of material types or use (e.g. propellant contamination). These project components are being merged to assess mission opportunities and to develop enhanced data processing methods to improve knowledge and understanding of the orbital environment.
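The spectral "finger-printing" mentioned in the final abstract can be illustrated with the spectral angle mapper (SAM), a standard hyperspectral similarity measure: a measured pixel spectrum is matched to the library material whose reference spectrum subtends the smallest angle. This is a minimal illustrative sketch, not the project's actual pipeline; the material names and 4-band spectra below are hypothetical.

```python
import math

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra.

    Smaller angles mean more similar materials. Because the angle ignores
    overall magnitude, SAM is insensitive to brightness scaling, which suits
    the variable illumination of in-orbit imaging.
    """
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp to the valid acos domain to guard against rounding error.
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify_pixel(pixel, library):
    """Return the library material whose reference spectrum is closest in angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Hypothetical 4-band reference spectra for two debris materials.
library = {
    "aluminium": [0.80, 0.82, 0.85, 0.87],
    "solar_cell": [0.10, 0.15, 0.60, 0.65],
}

# A dimmer (scaled) aluminium signature still matches aluminium,
# since SAM ignores the brightness difference.
measured = [0.40, 0.41, 0.425, 0.435]
print(classify_pixel(measured, library))  # -> aluminium
```

In a real system the library would come from a measured reflectance database such as the one described in the abstract, with many more bands per spectrum.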