Title: ChiNet: deep recurrent convolutional learning for multimodal spacecraft pose estimation
Authors: Rondao, Duarte; Aouf, Nabil; Richardson, Mark A.
Type: Article
Date deposited: 2022-08-01
Date issued: 2022-07-22
Citation: Rondao D, Aouf N, Richardson MA. (2023) ChiNet: deep recurrent convolutional learning for multimodal spacecraft pose estimation. IEEE Transactions on Aerospace and Electronic Systems, Volume 59, Issue 2, April 2023, pp. 937-949.
ISSN: 0018-9251
eISSN: 1557-9603
DOI: https://doi.org/10.1109/TAES.2022.3193085
URI: https://dspace.lib.cranfield.ac.uk/handle/1826/18261
Language: en
Rights: Attribution-NonCommercial 4.0 International (http://creativecommons.org/licenses/by-nc/4.0/)
Keywords: Feature extraction; Pose estimation; Space vehicles; Solid modeling; Task analysis; Estimation; Convolutional neural networks

Abstract: This paper presents an innovative deep learning pipeline which estimates the relative pose of a spacecraft by incorporating the temporal information from a rendezvous sequence. It leverages the performance of long short-term memory (LSTM) units in modelling sequences of data for the processing of features extracted by a convolutional neural network (CNN) backbone. Three distinct training strategies, which follow a coarse-to-fine funnelled approach, are combined to facilitate feature learning and improve end-to-end pose estimation by regression. The capability of CNNs to autonomously ascertain feature representations from images is exploited to fuse thermal infrared data with electro-optical red-green-blue (RGB) inputs, thus mitigating the effects of artifacts from imaging space objects in the visible wavelength. Each contribution of the proposed framework, dubbed ChiNet, is demonstrated on a synthetic dataset, and the complete pipeline is validated on experimental data.
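The record contains no code; the sketch below is only a minimal, hypothetical illustration of the kind of recurrent convolutional pose regressor the abstract describes (a CNN backbone extracting per-frame features from fused RGB and thermal inputs, an LSTM modelling the rendezvous sequence, and a regression head producing a pose). The class name, fusion by channel stacking, backbone choice, and hyperparameters are assumptions, not the authors' ChiNet architecture.

```python
# Illustrative sketch only, not the paper's implementation: CNN features per frame,
# LSTM over the sequence, linear head regressing a 7-D pose (position + quaternion).
import torch
import torch.nn as nn
import torchvision.models as models


class CnnLstmPoseRegressor(nn.Module):
    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        # Assumed multimodal fusion: stack RGB (3 ch) and thermal IR (1 ch) into a
        # 4-channel image and widen the backbone's first convolution accordingly.
        backbone = models.resnet18(weights=None)
        backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.feature_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()           # keep the 512-D global feature vector
        self.backbone = backbone
        self.lstm = nn.LSTM(self.feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 7)  # 3-D translation + unit quaternion

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, seq_len, 4, H, W) fused RGB + thermal sequences
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        seq_out, _ = self.lstm(feats)          # temporal modelling of the rendezvous
        pose = self.head(seq_out)              # per-frame pose regression
        # Normalise the quaternion part so it represents a valid rotation.
        pos, quat = pose[..., :3], pose[..., 3:]
        quat = quat / quat.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        return torch.cat([pos, quat], dim=-1)


if __name__ == "__main__":
    model = CnnLstmPoseRegressor()
    dummy = torch.randn(2, 5, 4, 224, 224)     # 2 sequences of 5 fused frames
    print(model(dummy).shape)                  # torch.Size([2, 5, 7])
```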