Browsing by Author "Spayne, Peter"
Showing 3 of 3 results
Item Open Access
Autonomy is the answer, but what was the question? (The Institute of Marine Engineering, Science and Technology, 2024-10-21) Spayne, Peter; Lacey, Laura; Cahillane, Marie; Saddington, Alistair J.
In recent years aspirations regarding the implementation of autonomous systems have rapidly matured. Consequently, establishing the assurance and certification processes necessary for ensuring their safe deployment across various industries is critical. In the United Kingdom Ministry of Defence, distinctive duty holder structures, formed since the publication of the Haddon Cave report in 2009, are central to risk management. The objective of this research is to evaluate the duty holder construct's suitability to accommodate the unique merits of the artificial intelligence-based technology that is the beating heart of highly autonomous systems. A comprehensive literature review examined the duty holder structure and the underpinning processes that form two established concepts: i) confirming the safety of individual equipment and platforms (safe to operate); and ii) the safe operation of equipment by humans to complete the human-machine team (operate safely). Both traditional and emerging autonomous assurance methods from various domains were compared, including wider fields such as space, medical technology, automotive, software, and controls engineering. These methods were analysed, adapted, and amalgamated to formulate recommendations for a single military application. A knowledge gap was identified where autonomous systems were proposed but could not be adequately assured. Exploration of this knowledge gap revealed a notable intersection between the two operating concepts when autonomous systems were considered. This overlap informed the development of a third concept, safe to operate itself safely, envisioned as a novel means to certify the safe usage of autonomous systems within the UK's military operations. A hypothetical through-life assurance model is proposed to underpin the concept of safe to operate itself safely. At the time of writing the proposed model is undergoing validation through a series of qualitative interviews with key stakeholders: duty holders, commanding officers, industry leaders, technology accelerator organisation leaders, requirements managers, system designers, Artificial Intelligence developers, and other specialist technical experts from within the Ministry of Defence, academia, and industry. Preliminary analysis queries whether a capability necessitates the use of autonomy at all, recognising that some autonomous systems will never be certified as safe to operate themselves safely, voiding ambitious development aspirations. This highlights that autonomy is simply one of many tools available to a developer, to be used sparingly alongside traditional technology, and not a panacea to replace human resource as originally thought. This paper provides a comprehensive account of the convergence between safe to operate and operate safely, enabling the creation of the safe to operate itself safely concept for autonomous systems.
Furthermore, it outlines the methodology employed to establish this concept and makes recommendations for its integration within the duty holder construct.

Item Open Access
Autonomy is the answer; what was the question? (Cranfield University Defence and Security, 2024-11-13) Spayne, Peter; Lacey, Laura J.; Cahillane, Marie; Saddington, Alistair J.

Item Open Access
Operating itself safely: merging the concepts of ‘safe to operate’ and ‘operate safely’ for lethal autonomous weapons systems containing artificial intelligence (Taylor and Francis, 2024-12-31) Spayne, Peter; Lacey, Laura; Cahillane, Marie; Saddington, Alistair J.
The Ministry of Defence, specifically the Royal Navy, uses the ‘Duty Holder Structure’ to manage how it complies with deviations from maritime laws and health and safety regulations where military necessity requires it. The output statements ensuring compliance are ‘safe to operate’ certification for all platforms and equipment, and the ‘operate safely’ declaration for people who are suitably trained within the organisation. Together these form the Safety Case. Consider a handgun: the weapon has calibration, design, and maintenance certification to prove it is safe to operate, and the soldier is trained and qualified as competent to make predictable judgement calls on how and when to pull the trigger (operate safely). Picture those statements as separate circles drawn on a Venn diagram. As levels of autonomy and complexity are dialled up, the two circles converge. Should autonomy increase to the point that the decision to fire is under the control of an Artificial Intelligence within the weapon’s software, the two circles will overlap. This paper details research conclusions within that overlap and proposes a new methodology able to certify that an AI-based autonomous weapons system is “safe to operate itself safely” when in an autonomous state.