ChiNet: deep recurrent convolutional learning for multimodal spacecraft pose estimation

Authors

Rondao, Duarte
Aouf, Nabil
Richardson, Mark A.

ISSN

0018-9251

Citation

Rondao D, Aouf N, Richardson MA. (2023) ChiNet: deep recurrent convolutional learning for multimodal spacecraft pose estimation. IEEE Transactions on Aerospace and Electronic Systems, Volume 59, Issue 2, April 2023, pp. 937-949

Abstract

This paper presents an innovative deep learning pipeline which estimates the relative pose of a spacecraft by incorporating the temporal information from a rendezvous sequence. It leverages the performance of long short-term memory (LSTM) units in modelling sequences of data for the processing of features extracted by a convolutional neural network (CNN) backbone. Three distinct training strategies, which follow a coarse-to-fine funnelled approach, are combined to facilitate feature learning and improve end-to-end pose estimation by regression. The capability of CNNs to autonomously ascertain feature representations from images is exploited to fuse thermal infrared data with electro-optical red-green-blue (RGB) inputs, thus mitigating the effects of artifacts from imaging space objects in the visible wavelength. Each contribution of the proposed framework, dubbed ChiNet, is demonstrated on a synthetic dataset, and the complete pipeline is validated on experimental data.
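The core idea described in the abstract — per-frame features from a CNN backbone fed through LSTM units, with the hidden state regressed to a relative pose at each time step — can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not ChiNet's actual architecture: the feature dimension, hidden size, and 7-parameter pose output (translation plus unit quaternion) are illustrative choices, and the "CNN features" are stand-in random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell; gates are computed from the concatenation [h, x]."""
    def __init__(self, in_dim, hid_dim):
        self.hid = hid_dim
        # one stacked weight matrix for the four gates (input, forget, cell, output)
        self.W = rng.standard_normal((4 * hid_dim, in_dim + hid_dim)) * 0.1
        self.b = np.zeros(4 * hid_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([h, x]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # update cell state
        h = o * np.tanh(c)           # emit hidden state
        return h, c

def estimate_pose_sequence(features, cell, W_out, b_out):
    """Run per-frame features through the LSTM and regress, at each step,
    a relative pose as 3 translation components + a unit quaternion."""
    h = np.zeros(cell.hid)
    c = np.zeros(cell.hid)
    poses = []
    for x in features:
        h, c = cell.step(x, h, c)
        raw = W_out @ h + b_out           # 7 raw numbers: t (3) + q (4)
        t, q = raw[:3], raw[3:]
        q = q / np.linalg.norm(q)         # project quaternion onto the unit sphere
        poses.append(np.concatenate([t, q]))
    return np.stack(poses)

# Toy rendezvous sequence: 5 frames of 16-dimensional stand-in CNN features.
feat_dim, hid_dim = 16, 32
cell = LSTMCell(feat_dim, hid_dim)
W_out = rng.standard_normal((7, hid_dim)) * 0.1
b_out = np.zeros(7)
b_out[6] = 1.0  # bias the quaternion toward identity so normalisation is well-posed

seq = rng.standard_normal((5, feat_dim))
poses = estimate_pose_sequence(seq, cell, W_out, b_out)
print(poses.shape)  # (5, 7): one pose estimate per frame
```

The point of the recurrence is that each frame's pose estimate conditions on the hidden state accumulated over the whole sequence, rather than being regressed from a single image in isolation.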

Keywords

Feature extraction, Pose estimation, Space vehicles, Solid modeling, Task analysis, Estimation, Convolutional neural networks

Rights

Attribution-NonCommercial 4.0 International
