Trajectory optimization with sparse Gauss-Hermite quadrature

Free to read from

2025-07-23

Department

SATM

Abstract

This thesis advances the trajectory optimization problem for aerospace applications by adopting numerical integration in optimal control design. A quadrature-point scheme replaces the derivatives of the optimal cost-to-go function, with the system dynamics approximated by a Gaussian quadrature rule. Based on the Differential Dynamic Programming (DDP) algorithm, a sequence of optimal control inputs is derived from quadrature points chosen sparsely with Smolyak's rule over the exponential weighting function of Gaussian quadrature, known as the Sparse Gauss-Hermite Quadrature (SGHQ). The sampling points propagated through the system dynamics yield the mean and covariance of the probability distribution of the value function, avoiding numerical differentiation in DDP. This approach improves the accuracy and robustness of the numerical computation in highly nonlinear environments despite using fewer quadrature points than a full Gauss-Hermite quadrature. Moreover, the number of sampling points is determined exactly by Smolyak's rule, whereas other sampling-point-based approaches choose it heuristically or empirically, for example by trial and error. The proposed method is implemented and validated via numerical simulation of a fixed-wing aircraft controller and of missile guidance. Considering a stochastic environment and control policy, the trajectory optimization problem can be extended to a stochastic trajectory optimization and maximum entropy problem by adding an entropy term to the deterministic trajectory optimization problem, yielding a Guided Policy Search (GPS). The entropy term prevents the control policy from falling into local minima by maximizing exploration in the given unknown environment and improves robustness to changes in that environment. The local policy is updated within the DDP framework, where SGHQ-DDP can be used to find the mean and covariance of the policy distribution by solving the soft Bellman equation. Numerical simulation demonstrates the feasibility of the SGHQ-GPS method under unknown system dynamics with a fitted model.
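
The abstract describes generating sparse quadrature points with Smolyak's rule and propagating them through the dynamics to obtain the mean and covariance used in place of numerical derivatives. As an illustration only, the following is a minimal Python sketch of one common SGHQ construction for a standard normal; it is not the thesis implementation, and the names sghq and gh_1d, the linear growth rule (i points at 1-D level i), and the N(0, 1) normalization are assumptions made for this example.

import numpy as np
from itertools import product
from math import comb, sqrt, pi

def gh_1d(n):
    # n-point Gauss-Hermite rule rescaled to integrate against the standard
    # normal N(0, 1) instead of the physicists' weight exp(-x^2).
    x, w = np.polynomial.hermite.hermgauss(n)
    return sqrt(2.0) * x, w / sqrt(pi)

def sghq(d, L):
    # Sparse Gauss-Hermite points and weights of accuracy level L in dimension d
    # via Smolyak's combination rule; duplicate points are merged by summing
    # their weights.
    grid = {}
    for q in range(max(0, L - d), L):
        coeff = (-1) ** (L - 1 - q) * comb(d - 1, L - 1 - q)
        # all multi-indices (i_1, ..., i_d) with i_j >= 1 and i_1 + ... + i_d = d + q
        for idx in product(range(1, q + 2), repeat=d):
            if sum(idx) != d + q:
                continue
            rules = [gh_1d(i) for i in idx]
            for sel in product(*[range(i) for i in idx]):
                pt = tuple(round(rules[k][0][sel[k]], 12) for k in range(d))
                wt = coeff * np.prod([rules[k][1][sel[k]] for k in range(d)])
                grid[pt] = grid.get(pt, 0.0) + wt
    pts = np.array(list(grid.keys()))
    wts = np.array(list(grid.values()))
    return pts, wts

if __name__ == "__main__":
    pts, wts = sghq(d=3, L=3)
    print(len(pts), "points, weight sum =", wts.sum())      # weights sum to ~1
    # Second moment of N(0, I) recovered by the sparse rule (should be ~identity):
    print(np.einsum("n,ni,nj->ij", wts, pts, pts))

The check at the bottom confirms that the weights sum to one and that the second moment of N(0, I) is recovered exactly; the number of points grows polynomially with the dimension, far more slowly than the 3^d points of the corresponding full tensor-product Gauss-Hermite grid.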

Keywords

Optimal Control, Trajectory Optimization, Differential Dynamic Programming, Constrained Differential Dynamic Programming, Unscented Dynamic Programming, Sparse Gauss-Hermite Quadrature, Reinforcement Learning, Guided Policy Search

Rights

© Cranfield University, 2023. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright holder.

Funder/s

Education Department - Korean Government
