Browsing by Author "Riley, Joshua"
Now showing 1 - 2 of 2
Item (Open Access): Assured Multi-Agent Reinforcement Learning for Safety-Critical Scenarios (Cranfield University, 2022-01-11T16:35:25Z). Riley, Joshua.

Multi-agent reinforcement learning enables a team of agents to solve complex decision-making problems in shared environments. This learning process is successful in many areas, but its inherently stochastic nature is problematic when applied to safety-critical domains. To address this limitation, we propose Assured Multi-Agent Reinforcement Learning (AMARL), which uses a model checking technique called quantitative verification. Quantitative verification provides formal guarantees of agent compliance with safety, performance, and other non-functional requirements, both while reinforcement learning takes place and after a policy has been learned. Our AMARL approach is demonstrated in three separate navigation domains containing patrolling problems: the multi-agent systems must learn to visit patrol points to satisfy mission objectives while limiting their exposure to risky areas. Different reinforcement learning algorithms are used within these domains: temporal difference learning, game theory, and direct policy search. The performance of these algorithms when combined with our approach is presented. Lastly, we demonstrate AMARL with differing system sizes in both homogeneous and heterogeneous multi-agent systems through extensive experimentation. This experimentation shows that AMARL leads to faster and more efficient performance than standard reinforcement learning while consistently meeting safety requirements.

Item (Open Access): Safe Multi-Agent Reinforcement Learning Towards the Engineering of Safe Robotic Teams (Cranfield University, 2019-11-19 15:36). Riley, Joshua.

Multi-agent systems are collections of agents working in shared environments, often with shared goals and limited resources. These systems have broad applications and are often seen as the future of automation in industry; however, an open issue within these systems is ensuring a degree of trustworthiness, so that human counterparts can be confident that the systems and their individual agents will adhere to expected behaviours even when issues occur. The need for "safety" in these systems, which is often defined in the literature in a post hoc fashion, is most crucial in sensitive operations such as military applications and search and rescue. The current state of safety in agents, learning or otherwise, shows much promise through the use of quantitative analysis methods, which provide a statistical foundation for how likely safety standards are to be met. In multi-agent systems, a large body of literature is devoted to Petri-net modelling and to using these models to constrain agent behaviour; however, Petri nets require expertise to design, and how to analyse them for safety remains an open question. This project looks further into the use of Petri nets to model multi-agent systems and constrain "unsafe" behaviour, both while the agents learn to optimise their behaviours and after this learning has concluded. The project aims to do this by increasing the accessibility of Petri nets when modelling robot teams, and by further investigating ways to analyse these Petri-net models so as to deliver a high degree of trustworthiness.
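As an illustration of the kind of set-up the first abstract describes (agents learning to patrol while limiting exposure to risk, with a learned policy accepted only if a quantitative risk bound is met), here is a minimal Python sketch. The environment layout, reward values, the 5% risk bound, and all function names are assumptions made for this sketch, not values from the thesis; the thesis relies on quantitative verification (probabilistic model checking) for the guarantee, whereas the sketch substitutes a simple Monte-Carlo estimate for brevity.

    import random

    # Illustrative 1-D patrolling environment: positions 0..4, patrol points at
    # the ends, a "risky" cell in the middle. All names and numbers here are
    # assumptions made for this sketch, not values taken from the thesis.
    PATROL_POINTS = {0, 4}
    RISK_CELL = 2
    N_STATES, ACTIONS = 5, (-1, +1)          # move left / move right

    def step(state, action):
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt in PATROL_POINTS else 0.0
        reward -= 0.5 if nxt == RISK_CELL else 0.0   # penalise exposure to risk
        return nxt, reward

    def q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
        q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
        for _ in range(episodes):
            s = random.randrange(N_STATES)
            for _ in range(20):
                a = random.choice(ACTIONS) if random.random() < eps \
                    else max(ACTIONS, key=lambda a_: q[(s, a_)])
                s2, r = step(s, a)
                q[(s, a)] += alpha * (r + gamma * max(q[(s2, a_)] for a_ in ACTIONS)
                                      - q[(s, a)])
                s = s2
        return q

    def risk_probability(q, trials=500, horizon=20):
        """Monte-Carlo stand-in for the quantitative-verification step:
        estimate the probability that the greedy policy ever enters the risk cell."""
        hits = 0
        for _ in range(trials):
            s = 0
            for _ in range(horizon):
                a = max(ACTIONS, key=lambda a_: q[(s, a_)])
                s, _ = step(s, a)
                if s == RISK_CELL:
                    hits += 1
                    break
        return hits / trials

    q = q_learning()
    # Accept the policy only if the estimated risk stays under an assumed 5% bound.
    print("policy accepted" if risk_probability(q) <= 0.05 else "policy rejected")

In the actual AMARL approach a model checker would replace the risk_probability estimate, so the acceptance decision rests on a formal guarantee rather than sampling.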
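The second abstract's central idea, using a Petri-net model to block "unsafe" agent behaviour, can be sketched with a minimal place/transition net used as a run-time guard: an agent action is permitted only if its corresponding transition is enabled. The net below (a single shared tool that at most one robot may hold at a time), together with all place and transition names, is an invented example for illustration, not a model from the project.

    # Minimal place/transition Petri net used as a run-time guard on agent actions.
    class PetriNet:
        def __init__(self, marking, transitions):
            # marking: place -> token count; transitions: name -> (inputs, outputs)
            self.marking = dict(marking)
            self.transitions = transitions

        def enabled(self, name):
            inputs, _ = self.transitions[name]
            return all(self.marking.get(p, 0) >= 1 for p in inputs)

        def fire(self, name):
            if not self.enabled(name):
                raise ValueError(f"unsafe/blocked action: {name}")
            inputs, outputs = self.transitions[name]
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] = self.marking.get(p, 0) + 1

    # Invented example: one shared tool, two robots; the net prevents both
    # robots from holding the tool at the same time.
    net = PetriNet(
        marking={"tool_free": 1, "r1_idle": 1, "r2_idle": 1},
        transitions={
            "r1_take_tool": (["tool_free", "r1_idle"], ["r1_working"]),
            "r1_return_tool": (["r1_working"], ["tool_free", "r1_idle"]),
            "r2_take_tool": (["tool_free", "r2_idle"], ["r2_working"]),
            "r2_return_tool": (["r2_working"], ["tool_free", "r2_idle"]),
        },
    )

    net.fire("r1_take_tool")            # allowed: the tool is free
    print(net.enabled("r2_take_tool"))  # False: the second robot is blocked

Because the marking always reflects which resources are held, any action whose transition is not enabled is rejected before it reaches the robots, which is the sense in which the Petri-net model constrains unsafe behaviour both during and after learning.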