Browsing by Author "Chermak, L"
Now showing 1 - 2 of 2
Item Open Access
3D Panoptic Segmentation with Unsupervised Clustering for Visual Perception in Autonomous Driving (2021-09)
Grenier, Amelie; Chermak, L

For the past decade, substantial progress has been achieved in the field of visual perception for autonomous driving applications, thanks notably to the capabilities of deep learning techniques. This work leverages stereovision and explores different methods, in particular unsupervised clustering approaches, to perform 3D panoptic segmentation for navigation purposes.

The main contribution of this work consists in the development, testing and validation of a novel framework in which geometric and semantic understanding of the scene are obtained separately at the pixel level. Combining the two for the extracted 2D visual information of the desired class provides a sparse, classified 3D point cloud, which is afterwards used for instance clustering.

Preliminary tests of the baseline version of the framework for Vehicle objects were conducted on urban driving datasets. Results demonstrate for the first time the viability of processing this type of point cloud from visual data, and reveal areas for improvement. In particular, the importance of the boundary F-score in semantic segmentation is highlighted for the first time in this application, with an increase of up to 32 percentage points in this study.

An additional contribution was made by applying distribution clustering as well as density-based clustering for instance segmentation in a vision-based 3D space representation. Results showed that DBSCAN was well suited for this application. As a result, it was proven that the presented framework can successfully provide a genuine 3D profile map representation and localisation of vehicles in an urban environment from 2D visual information only.

Furthermore, the mathematical formalisation of the link between DBSCAN's parameter selection and camera projective geometry was presented as future work and a means to demystify parameter selection.
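As a rough illustration of the instance-clustering stage this abstract describes, the sketch below runs DBSCAN over a synthetic stand-in for a semantically filtered "Vehicle" point cloud. The point data and the eps and min_samples values are illustrative assumptions, not parameters taken from the thesis.

```python
# Minimal sketch: DBSCAN instance clustering of a classified 3D point cloud.
# All values here are illustrative assumptions, not thesis parameters.
import numpy as np
from sklearn.cluster import DBSCAN

# Toy stand-in for a sparse vehicle point cloud reconstructed from stereo:
# two separated clusters plus scattered background noise.
rng = np.random.default_rng(0)
car_a = rng.normal(loc=[0.0, 0.0, 10.0], scale=0.5, size=(200, 3))
car_b = rng.normal(loc=[6.0, 0.0, 15.0], scale=0.5, size=(200, 3))
noise = rng.uniform(low=-5.0, high=20.0, size=(20, 3))
points = np.vstack([car_a, car_b, noise])

# Density-based clustering: points within `eps` of enough neighbours are
# grouped into instances; stray points are labelled -1 (noise) rather than
# being forced into a cluster.
labels = DBSCAN(eps=1.0, min_samples=10).fit_predict(points)

n_instances = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_instances} vehicle instances, {np.sum(labels == -1)} noise points")
```

Note that stereo-derived point density falls off with depth, so a single fixed eps tends to merge nearby objects or fragment distant ones; this is the parameter-selection issue the abstract links to camera projective geometry as future work.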
Item Open Access
Visual / LiDAR relative navigation for space applications: autonomous systems signals and autonomy group (Cranfield University, 2018)
Camarena, M E; Scannapieco, A F; Chermak, L

Nowadays, robotic systems are shifting towards increased autonomy. This is the case of autonomous navigation, which has been widely studied in the literature and extensively implemented for ground applications, for example on cars or motorbikes. However, autonomous navigation in space poses a number of additional and different constraints, e.g. a reduced number of features, limited power, processing capabilities and life-cycle, among others, that differentiate the problem from its ground counterpart.

In this framework, the I3DS (Integrated 3D Sensors) project intends to propose a solution for autonomous operations in space. I3DS is a joint venture between Cranfield University, Thales Alenia Space and other European industrial partners. The ambition of I3DS is to produce a standardised, modular Inspector Sensor Suite (INSES) for autonomous orbital and planetary applications in future space missions. The project is co-funded under the Horizon 2020 EU research and development programme and is part of the Strategic Research Cluster on Space Robotics Technologies.

The goal for space applications is hence to develop a LiDAR- and visual-based navigation solution able to estimate the relative pose, i.e. position and orientation, of a non-cooperative target in orbit with respect to the chaser satellite. The navigation solution also encompasses dedicated, application-oriented pre-processing of the raw data. This work responds to this need by assessing the suitability and limitations of the different pre-processing and navigation algorithms for relative navigation on both the on-board computer and the FPGA, given the specific constraints imposed by the space environment.

The data generated by the sensors require specific pre-processing in order to be converted into an optimal format for the subsequent navigation algorithms. In particular, image pre-processing includes spatial and spectral corrections, while LiDAR data pre-processing comprises point cloud downsizing and outlier removal. Therefore, in order to properly simulate the I3DS INSES, an FPGA and an on-board computer (OBC) were considered as hardware solutions for running the algorithms. The OBC was simulated using a standard desktop computer on which the LiDAR code was tested, whereas the FPGA was simulated with the Xilinx UltraZed-EG FPGA board for image pre-processing.

Different algorithms were tested and tuned to achieve the navigation solution. ICP (Iterative Closest Point), GICP (Generalised Iterative Closest Point), TICP (Trimmed Iterative Closest Point) and a Kalman filter-based registration using a Histogram of Distances descriptor were evaluated for LiDAR navigation. Stereo-based visual odometry and monocular navigation based on fiducial markers on the surface of the target satellite were the solutions for visual navigation.

Experimental tests on simulated data showed good results in terms of accuracy, with a position error below 5% for all sensors. However, the computational load on the FPGA board should be further optimised. Possible avenues, such as parallelisation on one or multiple FPGA boards, further optimisation of the algorithms and, as a last resort, decreasing the acquisition frequency, are finally discussed.
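As a hedged sketch of the LiDAR branch described above, the example below chains point cloud downsizing, statistical outlier removal and point-to-point ICP registration. It uses the Open3D library purely as a stand-in; the thesis does not name an implementation, and the voxel size, filter settings and correspondence radius are all illustrative assumptions.

```python
# Sketch of a LiDAR pipeline: downsize, remove outliers, estimate relative
# pose with ICP. Open3D and all parameter values are assumptions for
# illustration, not the thesis implementation.
import numpy as np
import open3d as o3d

def preprocess(pcd):
    # Downsize: collapse all points in each 5 cm voxel to their centroid.
    down = pcd.voxel_down_sample(voxel_size=0.05)
    # Drop points whose mean distance to their 20 nearest neighbours is more
    # than 2 standard deviations above the average (statistical filter).
    filtered, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return filtered

def relative_pose(source, target, init=np.eye(4)):
    # Point-to-point ICP: iteratively pairs nearest neighbours and solves for
    # the rigid transform minimising their squared distances.
    result = o3d.pipelines.registration.registration_icp(
        source, target,
        max_correspondence_distance=0.2,  # assumed gating radius in metres
        init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation  # 4x4 transform from source frame to target frame

# Toy demo: register a slightly displaced copy of a synthetic scan.
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(2000, 3))
target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
offset = np.eye(4)
offset[:3, 3] = [0.05, -0.02, 0.03]  # small known displacement
source = o3d.geometry.PointCloud(target)
source.transform(offset)
print(relative_pose(preprocess(source), preprocess(target)))
```

In a real chaser/target scenario the previous frame's estimate would typically seed `init`, since ICP only converges from a reasonable initial guess; the identity default here suffices only because the toy displacement is small.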