Operating itself safely: merging the concepts of ‘safe to operate’ and ‘operate safely’ for lethal autonomous weapons systems containing artificial intelligence

dc.contributor.authorSpayne, Peter
dc.contributor.authorLacey, Laura
dc.contributor.authorCahillane, Marie
dc.contributor.authorSaddington, Alistair J.
dc.date.accessioned2024-10-21T08:54:11Z
dc.date.available2024-10-21T08:54:11Z
dc.date.freetoread2024-10-21
dc.date.issued2024-12-31
dc.date.pubOnline2024-10-17
dc.description.abstractThe Ministry of Defence, specifically the Royal Navy, uses the ‘Duty Holder Structure’ to manage how it complies with deviations from maritime law and health and safety regulations where military necessity requires it. The output statements ensuring compliance are ‘safe to operate’ certification for all platforms and equipment, and the ‘operate safely’ declaration for people who are suitably trained within the organisation. Together these form the Safety Case. Consider a handgun: the weapon has calibration, design and maintenance certification to prove it is safe to operate, and the soldier is trained to be qualified and competent to make predictable judgement calls on how and when to pull the trigger (operate safely). Picture those statements as separate circles drawn on a Venn diagram. As levels of autonomy and complexity are dialled up, the two circles converge. Should autonomy increase to the point that the decision to fire is under the control of an Artificial Intelligence within the weapon’s software, the two circles will overlap. This paper details research conclusions within the overlap, and proposes a new methodology able to certify that an AI-based autonomous weapons system is “safe to operate itself safely” when in an autonomous state.
dc.description.journalNameDefence Studies
dc.identifier.citationSpayne P, Lacey L, Cahillane M, Saddington A. (2024) Operating itself safely: merging the concepts of ‘safe to operate’ and ‘operate safely’ for lethal autonomous weapons systems containing artificial intelligence. Defence Studies, Available online 17 October 2024
dc.identifier.eissn1743-9698
dc.identifier.elementsID555124
dc.identifier.issn1470-2436
dc.identifier.urihttps://doi.org/10.1080/14702436.2024.2415712
dc.identifier.urihttps://dspace.lib.cranfield.ac.uk/handle/1826/23097
dc.languageEnglish
dc.language.isoen
dc.publisherTaylor and Francis
dc.publisher.urihttps://www.tandfonline.com/doi/full/10.1080/14702436.2024.2415712
dc.rightsAttribution 4.0 International
dc.rights.urihttp://creativecommons.org/licenses/by/4.0/
dc.subject4408 Political science
dc.subjectAI
dc.subjectLAWS
dc.subjectSafety
dc.subjectMOD
dc.titleOperating itself safely: merging the concepts of ‘safe to operate’ and ‘operate safely’ for lethal autonomous weapons systems containing artificial intelligence
dc.typeArticle
dcterms.dateAccepted2024-10-06

Files

Original bundle
Name: Operating_itself_safely-2024.pdf
Size: 4.65 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.63 KB
Format: Plain Text