Reinforcement learning for UAV path planning under complicated constraints with GNSS quality awareness

Date published

Free to read from

2025-07-21

Citation

Alyammahi A, Xu Z, Petrunin I, et al. (2025) Reinforcement learning for UAV path planning under complicated constraints with GNSS quality awareness. Engineering Proceedings, Volume 88, Issue 1, June 2025, Article number 66. European Navigation Conference 2024, 22-24 May 2024, Noordwijk, The Netherlands

Abstract

Requirements for Unmanned Aerial Vehicle (UAV) applications in low-altitude operations are escalating, demanding resilient Position, Navigation and Timing (PNT) solutions that incorporate Global Navigation Satellite System (GNSS) services. However, UAVs often operate in challenging environments with degraded GNSS performance, and practical difficulties frequently arise from dense, dynamic, complex, and uncertain obstacles. When flying in such environments, it is important to account for signal degradation caused by reflections (multipath) and obscuration (Non-Line-of-Sight (NLOS)), which can lead to positioning errors that must be minimized to ensure mission reliability. Recent works integrate GNSS reliability maps derived from pseudorange error estimation into path planning to reduce the risk of GNSS loss and PNT degradation. To accommodate multiple constraints and improve flight resilience in GNSS-degraded environments, this paper proposes a reinforcement learning (RL) approach that features GNSS signal quality awareness during path planning. The non-linear relations between GNSS signal quality, expressed as dilution of precision (DoP), geographic location, and the policy for searching sub-minima points are learned by the clipped Proximal Policy Optimization (PPO) method. Other constraints considered include static obstacles, altitude boundaries, forbidden flying regions, and operational volumes. The reward and punishment functions and the training method are designed to maximize the likelihood of successfully reaching the destination. The proposed RL approach is demonstrated on a real 3D map of Indianapolis, USA, in the Godot engine, incorporating forecasted DoP data generated by GNSS Foresight, a Geospatial Augmentation system from Spirent. Results indicate a 36% improvement in mission success rate when GNSS performance is included in the path-planning training. Additionally, the tensor size representing the UAV's DoP perception range is positively related to the mission success rate, although larger tensors increase computational complexity.
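As background to the abstract, clipped PPO maximizes the surrogate objective E_t[min(r_t(θ) Â_t, clip(r_t(θ), 1−ε, 1+ε) Â_t)], where r_t(θ) is the probability ratio between the new and old policies and Â_t is the advantage estimate. The abstract does not detail the reward and punishment terms, so the following is only a minimal, illustrative sketch of how a DoP-aware penalty might be combined with progress, obstacle, and no-fly-zone terms; all names and weights (step_reward, w_dop, dop_threshold, and so on) are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only (assumed names and weights), not the reward design from the paper.
def step_reward(prev_dist, dist, dop, collided, in_forbidden_zone, reached_goal,
                w_progress=1.0, w_dop=0.5, dop_threshold=5.0):
    # Reward progress toward the destination (positive when the distance shrinks).
    reward = w_progress * (prev_dist - dist)
    # Penalize poor GNSS geometry: the further the DoP exceeds a threshold, the larger the penalty.
    if dop > dop_threshold:
        reward -= w_dop * (dop - dop_threshold)
    # Hard penalties for hitting a static obstacle or entering a forbidden flying region.
    if collided or in_forbidden_zone:
        reward -= 100.0
    # Terminal bonus for reaching the destination.
    if reached_goal:
        reward += 100.0
    return reward

Under shaping of this kind, a trajectory that shortens the path but crosses high-DoP regions can score lower than a slightly longer trajectory through areas with good satellite geometry, which is consistent with the abstract's reported gain from including GNSS performance in training.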

Keywords

4605 Data Management and Data Science, 46 Information and Computing Sciences, 4602 Artificial Intelligence, Behavioral and Social Science, Basic Behavioral and Social Science, path planning, GNSS quality awareness, dilution of precision, reinforcement learning, clipped proximal policy optimization

Rights

Attribution 4.0 International
