Browsing by Author "Paxton-Fear, Katie"
Now showing 1 - 3 of 3
Item (Open Access): An analysis of the writing of 'suicide cult' members
(Oxford Academic, 2021-06-03) Hodges, Duncan; Paxton-Fear, Katie

The infamous 'Heaven's Gate' cult committed a mass suicide in 1997, its members believing they would achieve salvation through bodily transformation and departure aboard UFOs. The group left a large volume of writing, available as a book and a website, which outlined its belief structure. This writing, largely by the group's leaders Ti and Do, is supplemented by 'exit statements' written by the group members. We analysed these writings and demonstrated how the texts evolve from accessible texts for recruiting individuals into the group to more complex texts for cementing the belief structure and reinforcing the ingroup. We also identify differences in the 'exit statements' that demonstrate which ideas and concepts gained traction with the group members.

Item (Open Access): Increasing the accessibility of NLP techniques for Defence and Security using a web-based tool
(Cranfield University, 2019-11-19) Paxton-Fear, Katie

As machine learning becomes more common in defence and security, there is a real risk that the low accessibility of techniques to non-specialists will hinder the process of operationalising the technologies. This poster presents a tool to support a variety of Natural Language Processing (NLP) techniques, including the management of corpora (data sets of documents used for NLP tasks) and the creation and training of models, in addition to visualising the output of the models. The aim of this tool is to allow non-specialists to exploit complex NLP techniques to understand the content of large volumes of reports.

NLP techniques are the mechanisms by which a machine can process and analyse text written by humans. These methods can be used for a range of tasks, including categorising documents, translation and summarising text. For many of these tasks the ability to process and analyse large corpora of text is key.
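As a minimal illustration of one task mentioned above, categorising documents, the sketch below assigns a report to whichever category description it shares the most vocabulary with, using cosine similarity over bag-of-words counts. This is a stdlib-only toy, not the tool's actual implementation, and the category names and descriptions are invented for illustration:

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and keep alphabetic runs; a real pipeline would also
    # remove stop words and lemmatize.
    return re.findall(r"[a-z]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words count vectors.
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical categories (not from the poster or any real taxonomy).
categories = {
    "data theft": "copied files exfiltrated data stolen documents usb",
    "sabotage": "deleted servers destroyed systems disabled backups",
}

def categorise(report: str) -> str:
    # Pick the category whose description is most similar to the report.
    report_vec = Counter(tokenize(report))
    return max(
        categories,
        key=lambda c: cosine(report_vec, Counter(tokenize(categories[c]))),
    )

print(categorise("The employee copied documents to a USB drive"))  # → data theft
```

In practice a tool like the one described would use trained models (e.g. topic models) rather than hand-written keyword lists, but the shape of the task, turning free text into vectors and comparing them, is the same.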
With current methods, the ability to manage corpora is rarely considered; instead, researchers and practitioners are left to do this manually in their file systems. To train models, researchers write ad-hoc scripts or code and compile or run them through an interpreter. These approaches can be a challenge when working in multidisciplinary fields such as defence and security and cyber security. This is even more salient when delivering research whose outputs may be operationalised, where accessibility can be a limiting factor in deployment and use.

We present a web interface that uses an asynchronous, service-based architecture to enable non-specialists to easily manage multiple large corpora and to create and operationalise a variety of different models; at this early stage we have focussed on one NLP technique, that of topic models.

This tool support has been created as part of a project considering the use of NLP to better understand reports of insider threat attacks. These are security incidents where the attacker is a member of staff or another trusted individual. Insider threat attacks are particularly difficult to defend against because of the level of access these individuals gain during the regular course of their employment. The wider use of these techniques would generate greater impact, both tactically in defending against these attacks and strategically in developing policy and procedures. Tools are available, but they are often complex and perform a single task, limiting their use.
To generate maximum impact from our research we have developed this web-based software to make the tools more accessible, especially to non-specialist researchers, customers and potential users.

Item (Open Access): Understanding insider threat attacks using natural language processing: automatically mapping organic narrative reports to existing insider threat frameworks
(Springer, 2020-07-10) Paxton-Fear, Katie; Hodges, Duncan; Buckley, Oliver

Traditionally, cyber security has focused on defending against external threats; over the last decade we have seen an increasing awareness of the threat posed by internal actors. Current approaches to reducing this risk have been based on technical controls, on psychologically understanding the insider's decision-making processes, or on sociological approaches that ensure constructive workplace behaviour. However, it is clear that these controls are not enough to mitigate the threat, with a 2019 report suggesting that 34% of breaches involved internal actors. A number of insider threat frameworks bridge the gap between these views, creating a holistic view of insider threat. These models can be difficult to contextualise within an organisation, and hence developing actionable insight is challenging. An important task in understanding an insider attack is to gather a 360-degree understanding of the incident across multiple business areas: co-workers, HR, IT, etc. can be key to understanding the attack. We propose a new approach that gathers organic narratives of an insider threat incident and then uses a computational approach to map these narratives to an existing insider threat framework. Leveraging Natural Language Processing (NLP), we exploit a large collection of insider threat reporting to create an understanding of insider threat. This understanding is then applied to a set of reports of a single attack to generate a computational representation of the attack.
This representation is then successfully mapped to an existing, manual insider threat framework.
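The core idea of the mapping step, aligning each piece of a narrative report with a stage of an existing framework, can be sketched very simply. The stage names and keyword sets below are invented simplifications, and keyword overlap is a stand-in for the paper's actual NLP approach:

```python
import re

# Hypothetical, simplified stages loosely inspired by published insider
# threat frameworks; the paper's actual framework and method differ.
FRAMEWORK = {
    "grievance": {"passed", "over", "promotion", "angry", "dispute"},
    "preparation": {"access", "downloaded", "collected", "copied"},
    "attack": {"leaked", "deleted", "sold", "exfiltrated"},
}

def map_narrative(narrative: str):
    """Assign each sentence of a narrative report to the framework stage
    whose keyword set it overlaps most (None if there is no overlap)."""
    mapping = []
    for sentence in re.split(r"(?<=[.!?])\s+", narrative.strip()):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        best = max(FRAMEWORK, key=lambda s: len(words & FRAMEWORK[s]))
        score = len(words & FRAMEWORK[best])
        mapping.append((sentence, best if score else None))
    return mapping

report = ("He was angry after being passed over for promotion. "
          "He copied client records he could access. "
          "The data was later leaked online.")
for sentence, stage in map_narrative(report):
    print(stage, "-", sentence)
```

A computational representation built from many reports, as the paper describes, would replace these keyword sets with learned associations, but the output has the same shape: narrative fragments annotated with framework stages that an analyst can review.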