AI, Robotics and Space
Browsing AI, Robotics and Space by Author "Arana-Catania, Miguel"
Now showing 1 - 2 of 2
Item (Open Access)
Causal reinforcement learning for optimisation of robot dynamics in unknown environments (IEEE, 2024-10-29)
Dcruz, Julian Gerald; Mahoney, Sam; Chua, Jia Yun; Soukhabandith, Adoundeth; Mugabe, John; Guo, Weisi; Arana-Catania, Miguel
Autonomous operation of robots in unknown environments is challenging due to the lack of knowledge of the dynamics of their interactions, such as the objects' movability. This work introduces a novel Causal Reinforcement Learning approach to enhancing robotic operations and applies it to an urban search and rescue (SAR) scenario. Our proposed machine learning architecture enables robots to learn the causal relationships between the visual characteristics of objects, such as texture and shape, and the objects' dynamics upon interaction, such as their movability, significantly improving the robots' decision-making. We conducted causal discovery and RL experiments demonstrating the superior performance of Causal RL, with a reduction in learning times of over 24.5% in complex situations compared to non-causal models.

Item (Open Access)
Explainable reinforcement and causal learning for improving trust to 6G stakeholders (IEEE, 2025-06-01)
Arana-Catania, Miguel; Sonee, Amir; Khan, Abdul-Manan; Fatehi, Kavan; Tang, Yun; Jin, Bailu; Soligo, Anna; Boyle, David; Calinescu, Radu; Yadav, Poonam; Ahmadi, Hamed; Tsourdos, Antonios; Guo, Weisi; Russo, Alessandra
Future telecommunications will increasingly integrate AI capabilities into network infrastructures to deliver seamless, harmonised services closer to end users. However, this progress also raises significant trust and safety concerns. The machine learning systems orchestrating these advanced services will rely widely on deep reinforcement learning (DRL) to process multi-modal requirements datasets and make semantically modulated decisions, introducing three major challenges. (1) Most explainable AI research is stakeholder-agnostic, whereas in reality explanations must cater for diverse telecommunications stakeholders, including network service providers, legal authorities and end users, each with unique goals and operational practices. (2) DRL lacks prior models or established frameworks to guide the creation of meaningful long-term explanations of an agent's behaviour in a goal-oriented RL task; we introduce state-of-the-art approaches such as reward machines and sub-goal automata, which can be universally represented and easily manipulated by logic programs and verifiably learned through inductive logic programming of answer set programs. (3) Most explainability approaches focus on correlation rather than causation; we emphasise that understanding causal learning can further enhance 6G network optimisation. Together, in our judgement, these form crucial enabling technologies for trustworthy services in 6G. This review offers a timely resource for academic researchers and industry practitioners by highlighting the methodological advancements needed for explainable DRL (X-DRL) in 6G. It identifies key stakeholder groups, maps their needs to X-DRL solutions, and presents case studies showcasing practical applications. By identifying and analysing these challenges in the context of 6G case studies, this work aims to inform future research, transform industry practices, and highlight unresolved gaps in this rapidly evolving field.
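The reward machines and sub-goal automata mentioned in the second abstract can be made concrete with a small illustration. The sketch below is not taken from the paper: it is a minimal, assumed example of a reward machine for a two-sub-goal task, written in Python, where the state names, event labels and reward values are all hypothetical. It only shows the kind of finite-state, goal-oriented structure that logic programs can represent and that inductive logic programming could, in principle, learn.

```python
# Hypothetical sketch of a reward machine (not from the cited paper).
# Sub-goal order: reach the target first, then deliver the payload.
# States, events and reward values are illustrative assumptions.

class RewardMachine:
    """Finite-state machine that issues rewards based on high-level events."""

    def __init__(self):
        # (current state, observed event) -> (next state, reward)
        self.transitions = {
            ("u0", "reached_target"): ("u1", 0.1),    # first sub-goal met
            ("u1", "delivered"):      ("u_acc", 1.0), # task complete
        }
        self.state = "u0"

    def step(self, event: str) -> float:
        """Advance the machine on an observed event and return the reward."""
        next_state, reward = self.transitions.get(
            (self.state, event), (self.state, 0.0)  # unlisted events give no reward
        )
        self.state = next_state
        return reward


if __name__ == "__main__":
    rm = RewardMachine()
    for event in ["moved", "reached_target", "delivered"]:
        print(event, "->", rm.step(event), rm.state)
```

Because each transition fires on a high-level event rather than a raw observation, the machine doubles as a human-readable account of the agent's long-term behaviour, which is the explainability property the abstract attributes to such representations.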