Title: Explainability of AI-driven air combat agent
Authors: Saldiran, Emre; Hasanzade, Mehmet; Inalhan, Gokhan; Tsourdos, Antonios
Type: Conference paper
Date of issue: 2023-08-02
Date available: 2023-09-15
Citation: Saldiran E, Hasanzade M, Inalhan G, Tsourdos A. (2023) Explainability of AI-driven air combat agent. In: 2023 IEEE Conference on Artificial Intelligence (CAI 2023), 5-6 June 2023, Santa Clara, USA, pp. 85-86.
ISBN: 979-8-3503-3985-7; 979-8-3503-3984-0
DOI: https://doi.org/10.1109/CAI54212.2023.00044
URI: https://dspace.lib.cranfield.ac.uk/handle/1826/20220
Language: en
Rights: Attribution-NonCommercial 4.0 International (http://creativecommons.org/licenses/by-nc/4.0/)
Keywords: explainable; reinforcement learning; reward decomposition; air combat

Abstract: In safety-critical applications, it is crucial to verify and certify the decisions made by AI-driven Autonomous Systems (ASs). However, the black-box nature of the neural networks used in these systems often makes it challenging to achieve this. The explainability of these systems can help with the verification and certification process, which will speed up their deployment in safety-critical applications. This study investigates the explainability of AI-driven air combat agents via semantically grouped reward decomposition. The paper presents two use cases to demonstrate how this approach can help AI and non-AI experts evaluate and debug the behavior of RL agents.
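The abstract only names the technique; the paper's own implementation is not reproduced in this record. As a rough illustration of the general idea of semantically grouped reward decomposition, the sketch below keeps a separate Q estimate per reward group and explains an action by listing the per-group contributions that make up the total value. The group names (tracking, safety, engagement), state fields, and action set are hypothetical placeholders, not taken from the paper.

```python
# Minimal sketch of reward decomposition for explainable RL.
# All semantic groups, state fields, and actions below are illustrative assumptions.
REWARD_GROUPS = ["tracking", "safety", "engagement"]  # hypothetical semantic groups


def decomposed_reward(state, action):
    """Return one scalar reward per semantic group; their sum is the usual scalar reward."""
    # Placeholder shaping terms; a real air-combat environment would derive these
    # from relative geometry, altitude limits, weapon envelope, etc.
    tracking = -abs(state["aspect_angle"]) / 180.0
    safety = -1.0 if state["altitude"] < 500.0 else 0.0
    engagement = 1.0 if state["in_weapon_range"] and action == "fire" else 0.0
    return {"tracking": tracking, "safety": safety, "engagement": engagement}


class DecomposedQ:
    """One tabular Q estimate per reward group; the policy acts on their sum,
    while the per-group values provide the explanation."""

    def __init__(self, actions, alpha=0.1, gamma=0.99):
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.q = {g: {} for g in REWARD_GROUPS}  # q[group][(state_key, action)] -> value

    def components(self, s_key, a):
        return {g: self.q[g].get((s_key, a), 0.0) for g in REWARD_GROUPS}

    def total(self, s_key, a):
        return sum(self.components(s_key, a).values())

    def best_action(self, s_key):
        return max(self.actions, key=lambda a: self.total(s_key, a))

    def update(self, s_key, a, rewards, s_next_key):
        # Q-learning style update applied per component, bootstrapping on the
        # greedy action of the summed value.
        a_next = self.best_action(s_next_key)
        for g in REWARD_GROUPS:
            target = rewards[g] + self.gamma * self.q[g].get((s_next_key, a_next), 0.0)
            old = self.q[g].get((s_key, a), 0.0)
            self.q[g][(s_key, a)] = old + self.alpha * (target - old)

    def explain(self, s_key, a):
        parts = self.components(s_key, a)
        lines = [f"  {g:>10}: {v:+.3f}" for g, v in parts.items()]
        return f"Q({s_key}, {a}) = {self.total(s_key, a):+.3f}\n" + "\n".join(lines)


# Hypothetical single transition for illustration.
state = {"aspect_angle": 30.0, "altitude": 2000.0, "in_weapon_range": True}
rewards = decomposed_reward(state, "fire")
agent = DecomposedQ(actions=["fire", "turn_left", "turn_right"])
agent.update("merge", "fire", rewards, "post_merge")
print(agent.explain("merge", "fire"))
```

In this style of explanation, a user inspects which semantic group dominates the value of a chosen action (e.g., a large negative safety component), rather than a single opaque scalar; the paper's use cases apply the same kind of grouped breakdown to evaluate and debug the air combat agent.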