PhD, EngD, MPhil and MSc by research theses (CDS)
Browsing PhD, EngD, MPhil and MSc by research theses (CDS) by Title
Now showing 1 - 20 of 338
Item Open Access
3D automatic target recognition for missile platforms (2017-05) Kechagias Stamatis, Odysseas; Aouf, Nabil

The quest for military Automatic Target Recognition (ATR) procedures arises from the demand to reduce collateral damage and fratricide. Although missiles with two-dimensional ATR capabilities do exist, future Light Detection and Ranging (LIDAR) missiles with three-dimensional (3D) ATR abilities should significantly improve the missile's effectiveness in complex battlefields. This is because 3D ATR can encode the target's underlying structure and thus reinforce target recognition. However, current military-grade 3D ATR and military computer vision algorithms used for object recognition do not offer optimum solutions in the context of an ATR-capable LIDAR-based missile, primarily because of the computational and memory (storage) constraints that missiles impose. Therefore, this research initially introduces a 3D descriptor taxonomy for the Local and the Global descriptor domains, capable of capturing the processing cost of each potential option. Through these taxonomies, the optimum missile-oriented descriptor per domain is identified, which further pinpoints the research route for this thesis. In terms of 3D descriptors suitable for missiles, the contribution of this thesis is a 3D Global descriptor and four 3D Local descriptors, namely the SURF Projection Recognition (SPR), the Histogram of Distances (HoD), its processing-efficient variant (HoD-S) and its binary variant (B-HoD). These are challenged against current state-of-the-art 3D descriptors on standard commercial datasets, as well as on highly credible simulated air-to-ground missile engagement scenarios that consider various platform parameters and nuisances, including simulated scale change and atmospheric disturbances.
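The histogram-of-distances idea named above can be illustrated with a hedged sketch: a global 3D descriptor built by histogramming Euclidean distances between randomly sampled point pairs, which is rotation-invariant by construction. This is a simplified illustration of the general technique, not the thesis's exact HoD formulation; every parameter value here is an assumption.

```python
import numpy as np

def histogram_of_distances(points, bins=32, pairs=10_000, seed=0):
    """Global 3D descriptor: normalised histogram of Euclidean distances
    between randomly sampled point pairs (rotation-invariant by design)."""
    rng = np.random.default_rng(seed)
    n = len(points)
    i = rng.integers(0, n, pairs)
    j = rng.integers(0, n, pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max() + 1e-9))
    return hist / hist.sum()  # normalise to a frequency distribution

# A cloud and a rigidly rotated copy should yield near-identical descriptors,
# since pairwise distances are unchanged by rotation.
cloud = np.random.default_rng(1).normal(size=(500, 3))
c, s = np.cos(0.7), np.sin(0.7)
rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
h1 = histogram_of_distances(cloud)
h2 = histogram_of_distances(cloud @ rot.T)
```

Binary variants such as B-HoD would additionally threshold the histogram into a bit string for cheap Hamming-distance matching.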
The results obtained over the different datasets showed an outstanding computational improvement, on average 19 times faster than state-of-the-art techniques in the literature, while maintaining, and on some occasions improving, the detection rate, with a minimum of 90% of targets correctly classified.

Item Open Access
3D conformal antennas for radar applications (2018) Fourtinon, L; Balleri, Alessio

Embedded below the radome of a missile, existing RF-seekers use a mechanically rotated antenna to steer the radiating beam in the direction of a target. The latest research is looking at replacing the mechanical antenna components of the RF-seeker with a novel 3D conformal antenna array that can steer the beam electronically. 3D antennas may offer significant advantages, such as faster beam-steering and better coverage but, at the same time, introduce new challenges resulting from a much more complex radiation pattern than that of 2D antennas. Thanks to the removal of the mechanical system, the new RF-seeker has a wider space available for the design of a new 3D conformal antenna. To take best advantage of this space, different array shapes are studied, and the impact of the position, orientation and conformation of the elements on the antenna performance is assessed in terms of directivity, ellipticity and polarisation. To facilitate this study of 3D conformal arrays, a Matlab program has been developed to compute the polarisation pattern of a given array in all directions. One of the tasks of the RF-seeker consists in estimating the position of a given target to correct the missile trajectory accordingly. Thus, the impact of the array shape on the error between the measured direction of arrival of the target echo and its true value is addressed. The Cramer-Rao lower bound is used to evaluate the theoretical minimum error. The model assumes that each element receives independently and therefore allows the potential of active 3D conformal arrays to be analysed.
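Direction-of-arrival estimation of the kind discussed in this abstract can be illustrated by a hedged sketch of phase-comparison monopulse: the target angle is recovered from the phase difference between two receiving apertures, dphi = k·d·sin(theta). The frequency, spacing and signal model below are illustrative assumptions, not the thesis's antenna.

```python
import numpy as np

wavelength = 0.03            # assumed 10 GHz carrier, metres
d = 0.05                     # assumed phase-centre separation, metres
k = 2 * np.pi / wavelength   # wavenumber

def monopulse_doa(sig_a, sig_b):
    """Estimate angle of arrival from the inter-channel phase difference.
    Valid while k*d*sin(theta) stays within (-pi, pi) to avoid ambiguity."""
    dphi = np.angle(np.vdot(sig_a, sig_b))  # vdot conjugates the first arg
    return np.arcsin(dphi / (k * d))

# Simulate a narrowband echo arriving from 5 degrees off boresight.
theta_true = np.deg2rad(5.0)
t = np.arange(256)
carrier = np.exp(1j * 0.1 * t)                              # channel A
delayed = carrier * np.exp(1j * k * d * np.sin(theta_true))  # channel B
theta_hat = monopulse_doa(carrier, delayed)
```

For a 3D conformal array the two "channels" become sums over array quadrants, which is where non-identical quadrant characteristics complicate the estimator.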
Finally, the phase monopulse estimator is studied for 3D conformal arrays whose quadrants do not have the same characteristics. A new estimator better adapted to non-identical quadrants is also proposed.

Item Open Access
3D Panoptic Segmentation with Unsupervised Clustering for Visual Perception in Autonomous Driving (2021-09) Grenier, Amelie; Chermak, L

For the past decade, substantial progress has been achieved in the field of visual perception for autonomous driving applications, thanks notably to the capabilities of deep learning techniques. This work aims to leverage stereovision and explore different methods, in particular unsupervised clustering approaches, to perform 3D panoptic segmentation for navigation purposes. The main contribution of this work consists in the development, test and validation of a novel framework in which geometric and semantic understanding of the scene are obtained separately at the pixel level. Combining the two for the extracted 2D visual information of the desired class provides a sparse classified 3D point cloud, which is then used for instance clustering. Preliminary tests of the baseline version of the framework for vehicle objects were conducted on urban driving datasets. The results demonstrate for the first time the viability of processing this type of point cloud from visual data, and reveal areas for improvement. Specifically, the importance of the boundary F-score in semantic segmentation is highlighted for the first time in this application, with an increase of up to 32 percentage points in this study. A further contribution was made by applying distribution clustering as well as density-based clustering for instance segmentation in a vision-based 3D space representation. Results showed that DBSCAN was well suited for this application.
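DBSCAN, the density-based method found well suited here, groups points that are density-reachable within a radius eps and marks sparse points as noise. A minimal self-contained sketch (illustrative parameters and synthetic data, not the thesis's implementation):

```python
import numpy as np

def dbscan(points, eps=0.8, min_pts=4):
    """Minimal DBSCAN sketch: returns -1 for noise, else a cluster id."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbours = [np.flatnonzero(row <= eps) for row in dist]
    labels = np.full(n, -1)
    cluster = 0
    for p in range(n):
        if labels[p] != -1 or len(neighbours[p]) < min_pts:
            continue  # already clustered, or not a core point
        labels[p] = cluster
        frontier = list(neighbours[p])
        while frontier:               # grow the cluster by density-reachability
            q = frontier.pop()
            if labels[q] == -1:
                labels[q] = cluster
                if len(neighbours[q]) >= min_pts:
                    frontier.extend(neighbours[q])
        cluster += 1
    return labels

# Two well-separated synthetic "vehicle" blobs plus one isolated outlier:
# expect two clusters and one noise point.
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=0.0, scale=0.1, size=(30, 3))
blob_b = rng.normal(loc=5.0, scale=0.1, size=(30, 3))
outlier = np.array([[2.5, 2.5, 2.5]])
labels = dbscan(np.vstack([blob_a, blob_b, outlier]))
```

The abstract's proposed future work, linking eps selection to camera projective geometry, reflects that point density in a stereo-derived cloud falls with depth, so a fixed eps cannot suit all ranges.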
As a result, it was proven that the presented framework can successfully provide a genuine 3D profile map representation and localisation of vehicles in an urban environment from 2D visual information only. Furthermore, the mathematical formalisation of the link between DBSCAN's parameter selection and camera projective geometry was presented as future work and a means to demystify parameter selection.

Item Open Access
Adequacy of test standards in evaluating blast overpressure (BOP) protection for the torso (2016-11-18) Whyte, Tamlin; Horsfall, Ian

The blast wave emanating from an explosion produces an almost instantaneous rise in pressure which can cause Blast Overpressure (BOP) injuries to nearby persons. BOP injury criteria are specified in test standards to relate BOP measurements in a testing environment to a risk of BOP injury. This study considered the adequacy of test standards in evaluating BOP protection concepts for the torso. Four potential BOP injury scenarios were studied to determine the likelihood of injury and the adequacy of test standards for appropriate protection concepts. In the case of vehicle blast, BOP injury is unlikely and test standards are adequate. For the scenario involving an explosive charge detonated within a vehicle, and for the close-proximity hand grenade scenario, test standards are not available. The demining scenario was identified as important because test standards are available but do not mandate the evaluation of BOP protection. A prototype South African Torso Surrogate (SATS) was developed to explore this scenario further. The SATS was required to be relatively inexpensive and robust. It was cast from silicone (selected to represent body tissue characteristics) using a torso mould containing a steel frame, and instrumented with a chest face-on pressure transducer and an accelerometer.
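One of the injury predictors applied to such surrogate measurements, the Viscous Criterion, is the peak of chest-wall velocity multiplied by chest compression normalised by chest depth, VC(t) = V(t)·C(t). A hedged sketch on a synthetic displacement pulse (all numbers are assumed example values, not data from the thesis):

```python
import numpy as np

def viscous_criterion(displacement, dt, chest_depth):
    """Peak VC (m/s) from a chest-wall displacement trace (metres)."""
    velocity = np.gradient(displacement, dt)   # V(t), m/s
    compression = displacement / chest_depth   # C(t), dimensionless
    return float(np.max(velocity * compression))

# Synthetic half-sine chest-wall pulse: 40 mm peak displacement over 10 ms,
# against an assumed 0.2 m chest depth.
t = np.linspace(0.0, 0.01, 1001)
disp = 0.04 * np.sin(np.pi * t / 0.01)
vc_max = viscous_criterion(disp, dt=t[1] - t[0], chest_depth=0.2)
```

For this pulse the peak VC is about 1.26 m/s, which is how a displacement trace from an accelerometer-instrumented surrogate would be reduced to a single injury-risk number.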
The SATS was subjected to an Anti-Personnel (AP) mine test, and the Chest Wall Velocity Predictor and the Viscous Criterion were used to predict that BOP injuries would occur in a typical demining scenario. This result was confirmed by applying the injury criteria to empirical blast predictions from the Blast Effects Calculator Version 4 (BECV4). Although limitations exist in the ability of injury criteria and measurement methods to predict BOP injuries accurately, a conservative approach should generally be taken. It is therefore recommended that the risk of BOP injuries be evaluated in demining personal protective equipment test standards.

Item Open Access
The aerodynamic interference effects of side wall proximity on a generic car model (2010-11-03) Strachan, R K; Knowles, Kevin

The flow around a generic car model, both in isolation and in proximity to a near side wall, has been investigated using experimental and computational methods. Phase one of this investigation tested a range of Ahmed generic road vehicle models with varying backlight angles in isolation, employing laser-Doppler anemometry, static pressure and aerodynamic force and moment measurements in the experimental section. Additionally, numerical simulations were conducted using a commercial Reynolds-averaged Navier-Stokes (RANS) code with the RNG k-ε turbulence model. This phase served both to extend previous knowledge of the flow around the Ahmed model and to analyse the effects of both the supporting strut and the rolling road. Phase two then used similar methods to investigate the Ahmed model in proximity to a stationary side wall. Results from phase two are compared with previous near-wall studies so that an understanding of the effects of wall proximity can be presented, an area lacking in the existing literature. It is found that the flow on the isolated model must be understood before the effects of side wall proximity can be assessed.
There is, though, in general a breakdown of any longitudinal vortices on the near-wall side of the model as the model-to-wall distance reduces, with an increase in longitudinal vortex strength on the side of the model away from the wall. There also exists a large pressure drop on the near-wall side of the model, which increases in magnitude as the model-to-wall distance reduces, before dissipating at separations where the boundary layer restricts the flow. Additionally, a pressure drop is found on the top and bottom of the model with decreasing wall distance, with the relative magnitudes of these dependent on model geometry.

Item Open Access
Aerodynamic problems of urban UAV operations (2011-09-09) Kittiyoungkun, S; Saddington, Alistair J.; Knowles, Kevin

Unmanned Air Vehicles (UAVs) are designed to operate without any onboard controllers. Consequently, they are considered for a wide range of applications. Missions in undesirable conditions, such as bad weather and/or highly unsteady gustiness, could cause an unsuccessful operation. In many ways, aerodynamics is a key feature in the performance of UAVs, influencing, for example, vehicle deformation, guidance and control. The two aims of this research are, therefore, to understand the flying conditions of UAVs in an urban environment and how flying performance is affected by such conditions. The first objective relies on understanding air flow behaviour in the lower part of the urban environment, which has the most important role in the response of UAVs. The second objective is to look at the characteristics of a three-dimensional aerofoil when it encounters an unsteady sinusoidal gust at different oscillation frequencies and freestream velocities. As the first step in studying the aerodynamic problems of UAV operations in the lower part of an atmospheric boundary layer in an urban environment, the boundary layer thickness in a suitable wind tunnel facility was the first experimental result obtained.
Experimental measurements of the mean velocity profile in a turbulent boundary layer were made for three different floor roughness conditions as well as a smooth-wall condition. As a result, three different boundary layer thicknesses were classified depending on the wall surface roughness, with a combination of roughness and turbulence generators providing a maximum thickness of 280 mm at the centre of the tunnel test section. The experimental investigations into the turbulent boundary layer over a rough wall showed that the boundary layer thickness depends on the surface roughness and differs from that obtained under the smooth-wall condition. An experimental study of a simulated urban flow regime was then carried out after the measurement of the boundary layer. Wind tunnel experiments on the airflow around single and twin buildings, including an investigation of the airflow in the gap between the buildings, were performed. Wind in the lower part of the atmospheric boundary layer is more of a micro-scale problem, in which the wind speed is increased or decreased by nearby buildings. The studies found some strong concentrated vortices caused by flow separation, essentially independent of the nature of the upstream flow and usually a direct result of the building geometry and orientation. As the measurement location moved further downstream from the back of the buildings, the concentrated vortices weakened and disappeared into the wake region. Finally, an experiment was conducted using a sinusoidal gust generator to describe the effects of wind oscillation parameters, such as oscillation amplitude, oscillation frequency and reduced frequency, under static and dynamic conditions. An evaluation was made of the onset of dynamic stall due to rapid changes in angle of attack during an unsteady pitching motion. The NACA 23012 wing profile was tested at a fixed angle of attack with varying oscillatory flow parameters.
Results demonstrate that these parameters influence the dynamic stall and the hysteresis loop of lift coefficient against angle of attack.

Item Open Access
Aerodynamics and performance enhancement of a ground-effect diffuser (2018-04) Ehirim, O H; Knowles, Kevin; Saddington, Alistair J.

This study involved experimental and equivalent computational investigations into the automobile-type 3D flow physics of a diffuser-equipped bluff body in ground effect, and novel passive flow-control methods applied to the diffuser flow to enhance the diffuser's aerodynamic performance. The bluff body used in this study is an Ahmed-like body employed in an inverted position, with the slanted section, together with the addition of side plates along both sides, forming the ramped diffuser section. The first part of the study confirmed observations reported in previous studies that the downforce generated by the diffuser in proximity to a ground plane is influenced by the peak suction at the diffuser inlet and the subsequent static pressure recovery towards the diffuser exit. Also, as the bluff body ride height is gradually reduced from high to low, the diffuser flow, as indicated by its force curve and surface flow features, undergoes four distinct flow regimes (types A to D). The type A and B regimes are reasonably symmetrical, comprising two low-pressure-core longitudinal vortices travelling along both sides of the diffuser length, and they increase downforce and drag with reducing ride height. However, below the ride heights of the type B regime, the type C and D regimes are asymmetrical because of the breakdown of one vortex; consequently, a significant loss in downforce and drag occurs. The second part of the study involved the use, near the diffuser exit, of a convex bump on the diffuser ramp surface and an inverted wing between the diffuser side plates as passive flow-control devices.
The modification of the diffuser geometry with these devices, employed individually or in combination, induced a second-stage pressure drop and recovery near the diffuser exit. This behaviour was due to the radial pressure gradient induced on the diffuser flow by the suction-surface curvature of the passive devices. As a result of this aerodynamic phenomenon, the diffuser generated additional downforce across the flow regimes, and a marginal increase in drag due to the profile drag induced by the devices.

Item Open Access
Agent-Based Modelling of Offensive Actors in Cyberspace (2021-12) Sidorenko, Tatjana; Hodges, D; Buckley, O

With the rise of the Information Age, there has also been a growing rate of attacks targeting information. In order to defend better against these attacks, being able to understand attackers and simulate their behaviour is of utmost importance. A recent approach using serious games provides an avenue to explore offensive cyber attacks in a safe and engaging environment. There exists a wide range of cyber attackers, with varying levels of expertise and differing motivations. This project provides a novel contribution in using games to allow people to role-play as malicious attackers and then using these games as inputs to a simulation. A board game has been designed that emulates a cyber environment, where players represent offensive actors in seven roles: Cyber Mercenary (low and high capability), State-backed (low and high capability), Script Kiddie, Hacktivist and Counter-culture (not motivated by finances or ideology). The facilitator, or Games Master (GM), represents the organisation under attack, and players use Technique cards to perform attacks on the organisation; all cards are sourced from existing Tactics, Techniques and Procedures (TTPs).
Alongside the game, players also provided responses to a questionnaire that captured three individual differences: Sneider's self-report, DOSPERT and Barratt's Impulsiveness Scale. There were a total of 15 players participating in 13 games, and three key groups of individual differences among players. No correlation was identified between individual Technique card pick rate and role. However, the complexity of the attack patterns (Technique card chains) was modulated by the roles and the players' individual differences. A proof-of-concept simulation has been made using an Agent-Based Modelling framework that replays the actions of a player. One aspect of future work is the exploitation of the game data as a learning model to create intelligent standalone agents.

Item Open Access
Alternative explanation of North Korea's survival: successful application of smart power (2017-02-24) Shin, D. W.; Cleary, Laura

The original contribution of this study is to demonstrate how North Korea survives by using smart power. The existing literature has offered partial explanations, but many have lost their explanatory power over time, and there seems to be no definitive answer to explain how North Korea survives. This multi-case study was designed to explore how the North uses smart power by examining its provocations from the Korean War to August 2015. The rationale for this study is to increase understanding of Pyongyang's behavior and offer recommendations to bring long-term stability to the Korean Peninsula. This study purposefully began with the Korean War because it was assumed that, without understanding the origin of North Korean provocations, it would be difficult to provide the proper temporal context for other provocations. This study reveals that Kim Il-sung and his guerrillas consolidated power and established totalitarian rule dominated by his Juche ideology (self-reliance). Subsequently, they waged a long war of reunification from 1948 to the 1980s.
Although Kim's smart power attempts failed to achieve his principal aim of reunification, when Beijing and Moscow abandoned him in the early 1990s he focused on regime survival. He bolstered his weak hand by playing the nuclear card to buy more time to ensure the hereditary succession of his son Kim Jong-il, who defied predictions that he would not survive and proclaimed Songun (military-first) to deal with the changing international environment. He demonstrated his own skill by exploiting Seoul's Sunshine Policy and successfully negotiating three nuclear agreements with the U.S. After his death, Kim Jong-un waged a reign of terror to consolidate power and manufactured crises to bolster his legitimacy and demonstrate his leadership. He also invoked his grandfather's anti-Japanese legacy and the Byungjin policy (simultaneous development of nuclear weapons and the economy) to legitimize his rule. The evidence shows he is rational, and that offers opportunities to resolve the North Korea issue.

Item Open Access
The analysis of latent fingermark chemistry using Fourier-transform infrared spectroscopy (2018-01) Johnston, A; Rogers, Prof Keith

Latent fingermarks are comprised of a complex mixture of organic and inorganic components that exhibit broad chemical variability. Fingermarks are dynamic compositions prone to degradation over time and in varying environmental conditions. The complexity of latent fingermark chemistry has led to an abundance of literature over a number of years utilizing various analytical techniques, which have endeavoured to provide a greater understanding of these complex chemical systems. In particular, a key focus has been on fingermark decomposition, and with recent advances in analytical instrumentation a more in-depth understanding of the dynamics of fingermark chemistry has been achieved; despite this, significant gaps remain in the literature.
The work presented within this thesis examines various aspects of latent fingermark chemistry that aim to address these gaps. During this research, the capabilities and limitations of Fourier-transform infrared (FTIR) spectromicroscopy were compared with the more established analytical technique of gas chromatography-mass spectrometry for the analysis of latent fingermarks. A novel approach to analysing the change in latent fingermark chemistry over time at various moderate temperatures was demonstrated. An investigation into the intermolecular interactions of lipid components within simplified analogue "fingermark" solutions was conducted, and the implications of these interactions for natural fingermark chemistry considered. Finally, the temporal degradation of illicit substances in latent fingermarks was investigated using spectroscopic imaging. The results of this study, structured in the form of four research papers, demonstrate the complexity of latent fingermark composition, variability and analysis. The use of FTIR spectromicroscopy to study in-situ, real-time changes in fingermark chemistry subjected to varying temperatures showed that total composition is affected by temperatures above 50 °C, and that oxidation mechanisms take place almost immediately after deposition, even at room temperature. The use of simplified analogue fingermark solutions to study intermolecular interactions within natural fingermarks identified two key components, squalene and cholesterol, that potentially affect downstream organic interactions post-deposition. Finally, spectroscopic imaging successfully identified and spatially mapped aged illicit substances present within latent fingermarks up to thirty days post-deposition. It was also possible to quantify the degradation of those illicit compounds over time.
Due to the different facets of this research, the results of this thesis are expected to have an impact on a broad range of disciplines, both within academia and for more practical forensic applications.

Item Open Access
Analysis of performance of automatic target recognition systems (2012-08-22) Marino, G.; Hughes, Evan J.

An Automatic Target Recognition (ATR) system is a sensor system which is usually able to recognise targets or objects based on gathered data. The application of automatic target recognition technology is a critical element of robotic warfare. ATR systems are used in unmanned aerial vehicles and cruise missiles. There are many systems able to collect data (e.g. radar sensors, electro-optic sensors, infra-red devices) which are commonly used to gather information and to detect, recognise and classify potential targets. Despite significant effort during the last decades, some problems in ATR systems have not yet been solved. This Ph.D. sought to understand the variation of the information content in an ATR system, how to measure it, and how to preserve information as it passes through the processing chain, because these questions had not previously been investigated properly. Moreover, the investigation also focused on the definition of class-separability in an ATR system and on the definition of the degree of separability. As a consequence, experiments were performed to understand how to assess the degree of class-separability and how the choice of the parameters of an ATR system can affect the final classifier performance (i.e. selecting the most reliable as well as the most information-preserving ones). The investigations of this thesis produced some important results: a definition of class-separability and of the degree of class-separability (i.e.
the requirements that a metric for class-separability has to satisfy); the definition of a new metric for assessing the degree of class-separability; and the identification of the most important parameters which affect classifier performance or reduce/increase the degree of class-separability (i.e. signal-to-clutter ratio, clutter models, and the effects of despeckling processing). In particular, the definition of metrics for assessing the presence of artefacts introduced by denoising algorithms, the ability of denoising algorithms to preserve geometrical features of potential targets, the suitability of current mathematical models at each stage of the processing chain (especially clutter models in radar systems) and the measurement of the variation of information content through the processing chain are among the most important issues investigated.

Item Open Access
Antenna performance optimisation using evolutionary algorithms (2010-11-08) Ansell, D. W.; Hughes, Evan J.

This thesis investigates the novel idea of using evolutionary algorithms to optimise control and design aspects of active array antenna systems. Active arrays differ from most mechanically scanned antennas in that they offer the ability to control the shape of their radiation pattern. As active arrays consist of a multiplicity of transmit and receive modules (TRMs), the task of optimally controlling them in order to generate a desired radiation pattern becomes difficult. The control problem is especially acute for conformal (non-planar) array antennas, which require additional phase control to achieve good radiation pattern performance. This thesis describes a number of significant advances in the optimisation of array antenna performance. Firstly, a genetic algorithm (GA) is shown to be effective at optimising both planar and conformal antenna performance. A number of examples are used to illustrate and promote the basic optimisation concept.
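The basic concept can be sketched with a toy GA that tunes the element phases of a small linear array to maximise gain in a chosen steering direction. This is a hedged, minimal illustration: the array geometry, population size, selection scheme and mutation rate are all assumptions, far simpler than the thesis's antenna models and objectives.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                          # assumed 8-element uniform linear array
steer = np.deg2rad(30.0)       # desired beam direction
k_d = np.pi                    # k*d for half-wavelength element spacing

def gain_at_steer(phases):
    """|array factor| in the steering direction for given element phases."""
    n = np.arange(N)
    return abs(np.sum(np.exp(1j * (k_d * n * np.sin(steer) + phases))))

pop = rng.uniform(-np.pi, np.pi, size=(40, N))  # random initial phase sets
for _ in range(200):
    fitness = np.array([gain_at_steer(p) for p in pop])
    parents = pop[np.argsort(fitness)[-20:]]        # keep the best 20 (elitism)
    cuts = rng.integers(1, N, size=20)
    kids = np.array([np.concatenate([parents[rng.integers(20)][:c],
                                     parents[rng.integers(20)][c:]])
                     for c in cuts])                # one-point crossover
    kids += rng.normal(0.0, 0.1, kids.shape)        # Gaussian phase mutation
    pop = np.vstack([parents, kids])

best = max(gain_at_steer(p) for p in pop)  # coherent maximum is N = 8
```

Real array optimisation replaces this single objective with pattern-wide goals (sidelobe levels, null placement), which is what motivates the multi-objective treatment described next in the abstract.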
Secondly, the techniques are advanced by applying multi-objective evolutionary optimisation algorithms to array performance optimisation. It is shown that evolutionary algorithms allow users to optimise many aspects of array performance simultaneously, without the need to fine-tune a large number of weights. The multiple-objective analysis methods shown demonstrate the advantages to be gained by holding knowledge of the Pareto-optimal solution set. Thirdly, this thesis examines the problems of optimising the design of large (many-element) array antennas. Larger arrays are often divided into smaller subarrays for manufacturing reasons and to promote the formation of difference beam patterns for monopulse operation. In the past, the partitioning has largely been left to trial-and-error or simple randomisation techniques. This thesis describes a new and novel approach for optimally subdividing both planar and conformal array antennas while also improving gain patterns in a single optimisation process. This approach contains a new method of partitioning array antennas, inspired by a biological process, which is presented and optimised using evolutionary algorithms. Additionally, the technique can be applied to any size or shape of array antenna, with the processing load dependent on the number of subarrays rather than the number of elements. Finally, the success of these new techniques is demonstrated by presenting a range of performance-optimised examples of planar and conformal array antenna installations, including examples of optimally evolved subarray partitions.

Item Open Access
The Application of Deep Learning Algorithms to Longwave Infrared Missile Seekers (2021-12) Westlake, Samuel T; James, D B

Convolutional neural networks (CNNs) have already surpassed human-level performance in complex computer vision applications, and can potentially significantly advance the performance of infrared anti-ship guided missile seeker algorithms.
However, the performance of CNN-based algorithms depends heavily on the data used to optimise them, typically requiring large sets of fully annotated real-world training examples. Across four technical chapters, this thesis addresses the challenges involved in applying CNNs to longwave infrared ship detection, recognition and identification. The absence of suitable longwave infrared training data was addressed through the synthetic generation of a large, thermally realistic dataset of 972,000 fully labelled images of military ships with varying seascapes and background clutter. This dataset, IRShips, is the largest openly available repository of such images worldwide. Configurable automated workflow pipelines significantly enhance the development of CNN-based algorithms. No such tool was available when this body of work began, so an integrated modular deep learning development environment, Deeplodocus, was created. Publicly available, it now features among the top 50% of packages on the Python Package Index repository. Using Deeplodocus, the fully convolutional one-stage YOLOv3 object detection algorithm was trained to detect ships in a highly cluttered sequence of real-world longwave infrared imagery. Further enhancement of YOLOv3 resulted in an F-score of 0.945 being achieved, representing the first time synthetic data has been used to train a CNN algorithm to successfully detect military ships in longwave infrared imagery. Benchmarking YOLOv3's detection accuracy against two alternative CNNs, Faster R-CNN and Mask R-CNN, using visual-spectrum and near-infrared data from the Singapore Maritime dataset, showed that YOLOv3 was three times faster, but 3% less accurate, than Mask R-CNN.
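The F-score used to report detection accuracy here is the harmonic mean of precision and recall, which for detection counts reduces to F = 2TP / (2TP + FP + FN). A short sketch with made-up counts (not figures from the thesis) chosen to land at 0.945:

```python
def f_score(tp, fp, fn):
    """F1 score from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example only: 945 correct detections, 50 false alarms, 60 missed ships
# gives 2*945 / (2*945 + 50 + 60) = 1890/2000 = 0.945.
score = f_score(tp=945, fp=50, fn=60)
```

Because it balances false alarms against misses in a single number, the F-score is a natural headline metric for a seeker detector, where both error types are operationally costly.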
Modifying YOLOv3 through the use of spectral domain-dependent encoding delivered state-of-the-art accuracy with respect to the near-infrared test data, while maintaining YOLOv3's considerable speed advantage.

Item Open Access
An approach to the evaluation of blast loads on finite and semi-infinite structures (2010-02-22) Rose, T. A.; Smith, P. D.

This thesis is concerned with the use of Computational Fluid Dynamics techniques, coupled with experimental studies, to establish useful relationships between explosively generated blast loads and the principal aspects of the geometry of both single buildings and many buildings, as might be found in any urban environment. A method for the treatment of blast loading problems is described which is based on a large number of numerical simulations validated by key physical experiments. The idea of using numerical simulation to investigate aspects of flow problems which are too difficult, expensive or time-consuming to consider experimentally is not new. The emphasis of this thesis, however, is not the treatment of specific problems but of whole classes of problems. Chapter 1 introduces the difficulties associated with the evaluation of blast loads on structures. It briefly describes several existing techniques and introduces the approach suggested by this study. It also contains a number of useful definitions which assist appreciation of the difficulties of numerical simulation of blast loading. Chapter 2 is in the form of a narrative and describes the process by which the solution algorithm of the program used for the blast simulations (Air3d) was selected.
The final choice, AUSMDV (a variant of the Advection Upstream Splitting Method) with MUSCL-Hancock integration (MUSCL standing for “Monotone Upstream-centred Scheme for Conservation Laws”), is essentially the combination of two methods which are “cheap” in terms of computational resources to obtain one of only moderate “expense” but which has sufficient accuracy and robustness for these demanding applications. Chapter 3 contains a description of the computational tool Air3d, and it acts as a user’s guide to the program. Chapter 3 also contains a discussion of the treatment by the program Air3d of the processes which govern the formation of spherical blast waves in air. The chapter concludes with a comparison between the results of Air3d and a commercially available program and demonstrates the efficacy of the solution algorithm adopted. Chapter 4 demonstrates the potential of the approach to obtain useful information in the main areas of application (Chapters 5 to 7) in this thesis. This is achieved by comparison of Air3d simulations with established sets of experimentally determined scaled blast parameters. Chapter 5 describes the problem of blast wave clearing, or loads on single finite structures, and uses the approach to produce a relationship which is applicable over almost the whole range of practical interest to engineers. Chapter 6 is concerned with the effect of street width and building height on the blast overpressure impulses which load the facades of a street when an explosive incident occurs in an urban setting. It considers semi-infinite straight streets and describes, in broad terms, the limits of width and height which determine the blast impulse loads. Chapter 7 contains a discussion of the blast environment behind a semi-infinite protective barrier wall when an explosive device is detonated on the near side. This problem has particular difficulties, which are discussed, and has illustrated the limits of the suggested approach. 
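Scaled blast parameters of the kind used to validate Air3d in Chapter 4 are conventionally tabulated against the Hopkinson-Cranz scaled distance Z = R / W^(1/3). As a minimal sketch of that scaling law (the standoff and charge-mass values below are illustrative, not taken from the thesis):

```python
# Hopkinson-Cranz scaled distance: Z = R / W**(1/3), with R the standoff in
# metres and W the charge mass in kilograms of TNT equivalent. Scenarios that
# share the same Z exhibit the same peak overpressure and scaled impulse.
def scaled_distance(standoff_m: float, charge_kg_tnt: float) -> float:
    return standoff_m / charge_kg_tnt ** (1.0 / 3.0)

# Illustrative values only: an 8 kg charge at 10 m and a 1 kg charge at 5 m
# share the same scaled distance, so their blast parameters correspond.
print(round(scaled_distance(10.0, 8.0), 6))  # 5.0
print(round(scaled_distance(5.0, 1.0), 6))   # 5.0
```

This cube-root scaling is what lets a large set of numerical simulations at one charge size stand in for a whole family of practical engagement geometries.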
Chapter 8 summarises the approach adopted by this study for blast load evaluation, and it describes the progress made, difficulties encountered and the limitations of the method. Recommendations are made which would improve the approach for future investigators, and the possibility of extending it for use in more varied applications is also considered. Item Open Access The Armed Forces of Australia, Britain and Canada and the impact of culture on joint, combined and multi-national operations : a methodology for profiling national and organisational cultural values and assessing their influence in the international workplace(Cranfield University, 2004-01) Stocker, Ashley; Taylor, Prof T. This study identifies the influence of national and military organisational values on the cultures of the armed forces of Australia, Britain and Canada, in order to assess the impact of culture on Joint, Combined and Multinational operations. This is achieved by:
· Defining culture, values and related concepts.
· Outlining a viable methodology to examine and profile cultural values.
· Demonstrating why values form the basis of this study.
· Reviewing the body of cross-cultural academic literature on cultural values and the military.
· Executing a measurement of values in a consistent and academically sound manner.
· Examining national influences on the culture of the armed forces of Australia, Britain and Canada.
· Examining intra-national organisational influences on the culture of the services of the armed forces of Australia, Britain and Canada.
· Examining international organisational influences on the culture of the services of the armed forces of Australia, Britain and Canada.
· Focusing on the values of the armed forces examined in this study in order to compare the findings with the results obtained from the Values Survey Module. 
· Discussing the implications of the findings of this study and demonstrating how the values of the nations and organisations that have been examined can be expected to affect future operations.
Item Open Access Arming the British Home Guard, 1940-1944(2011-09-19) Clarke, D M; Holmes, Prof E R The Second World War saw British society mobilised to an unprecedented extent to meet the threat of Total War. ‘Total Defence’ was manifest in organisations such as the ARP and Home Guard. What sets the Home Guard apart was its combatant role. This thesis examines the arms provided for the Home Guard, and concludes that its combat power has been seriously underestimated. It benefitted from huge quantities of high-quality small arms purchased from the United States, which were not issued to the Regular Army because they chambered American ammunition. What is extraordinary is that these weapons are always characterised as ancient relics, yet the oldest of them was years younger, in real and design terms, than the British Army equivalent. In 1940 Britain lacked the capacity to manufacture arms in the quantities needed to repair the losses of Dunkirk and meet the needs of the expanding armed forces. The remedy was unorthodox weaponry such as the ‘Sticky Bomb’ and the ‘Blacker Bombard’. These are always associated with the Home Guard, yet saw active service against the Afrika Korps. These unconventional weapons were more capable than many modern authors suggest, but they suffer from an impenetrable ‘orthodox view’ that characterises Home Guard weapons as ancient, whimsical and inefficient. 
This has its origins in the Local Defence Volunteers’ disappointment when the Government failed to meet its promise to arm every volunteer; their dismay at receiving foreign equipment; the way in which the media portrayed the Home Guard; and the fact that the great threats the Home Guard existed to combat – invasion and subversion – appeared to be illusory, making the Home Guard itself seem quixotic. This study strips away that conventional narrative, and exposes a Home Guard that was well equipped for its tasks – frequently better equipped than other components of Home Defence. Item Open Access Armoured vehicle manufacturing in the Gulf States challenges and future vision: a systems engineering perspective(Cranfield University, 2019) Aljeeran, Isa Khalifa Abdulla; Hameed, Amer; McCormack, John; Adcock, Rick The armoured vehicle manufacturers (AVMs) in the Gulf States encounter many difficulties related to their current performance, their customers’ circumstances and the interactions between them. The AVMs are Small and Medium Enterprises (SMEs), owned by entrepreneurs who manage their organisations intuitively, leading to likely performance degradation which affects their outputs and thus customer satisfaction. On the customers’ side, essential elements of the acquisition process are lacking: published defence strategy documents do not exist, customer needs are not precisely clarified to the developers, demand fluctuates, and customers’ individual knowledge is insufficient to contribute toward developing the intended values. Third, the interactions between AVMs and their stakeholders, the customer in particular, do not rise to the level of the product’s importance. 
These circumstances form the dynamic environment that AVMs in the Gulf States currently face, alongside others such as fierce worldwide competition, considerable changes in threats and needs, constant technological advancement, and political challenges, which combined may hinder AVMs from attaining their immediate (customer satisfaction) and future (market sustainability) goals. Therefore, this thesis aims to enable the owners/managers (entrepreneurs) of AVMs in the Arabian Gulf States to employ their resources efficiently to deliver innovative values that satisfy the needs of all of their stakeholders, customers in particular, within the dynamic environment. Dealing with the dynamic environment requires intensive planning and the execution of known managerial disciplines, such as strategy, supply chain and business-to-business (B2B) interactions, along with utilising essential tools provided by the Systems Engineering (SE) discipline. The latter has adequate means to optimise the strategy and supply chain technical tools by integrating them with the related managerial tools to enhance the development efforts. Moreover, organised interactions among the various related entities that share a well-designed network enforce the desirable integration and enhance the relationship in the B2B context, which ensures customer satisfaction, confirms the AVM market’s sustainment, strengthens the defence industry and attains arms independence. These efforts must be monitored and controlled by substantial strategies from higher national authorities to ensure that the national goals are achieved. Therefore, the author suggests a conceptual model to guide all interested parties, particularly the AVMs’ management, to enhance their performance by considering all essential managerial and technical aspects. 
The model also emphasises the importance of interactions in enforcing the application of the strategic, design, production, and test and evaluation processes to enable AVMs to enhance their product development in order to capture customer satisfaction and succeed in business. The success of the national AVMs will lead to the attainment of one of the most important national objectives, i.e. arms independence. Item Open Access Artillery and Warfare 1945-2025(2009-11-24T18:18:23Z) Bailey, J P A; Holmes, Prof E R For millennia battles were essentially affairs of linear encounter. From the 10th Century to the 20th Century, artillery generally fired directly in the two-dimensional plane, limiting potential effects. The development of indirect fire changed this two-dimensional model. Warfare became not so much a matter of linear encounter as one of engagement across and throughout an area; and artillery dominated land operations in both the First and Second World Wars as a result. Firepower was subsequently often applied in even greater weights, but its effects were frequently excessive and high-value targets proved elusive. During the Cold War in Europe, the importance of field artillery waned relative to other arms. Artillery could only regain its utility by acquiring the highest-value targets and engaging them effectively with the appropriate degree of force in time and space: true precision, as opposed to mere accuracy at a point. Improvements in target acquisition and accuracy will enable land systems once more to engage targets effectively throughout the battlespace, with implications for warfare analogous to those precipitated by the introduction of indirect fire a century ago. Land operations will become increasingly three-dimensional and Joint. 
The effects of fire will increasingly be applied in, not merely via, the third dimension, since targets themselves will increasingly be located not just on the area of a battlefield, but in the volume of three-dimensional battlespace, with values determined by considerations of the fourth dimension, time. Fire, lethal and non-lethal, will also be targeted in other, less tangible dimensions such as cyberspace, and new types of 'virtual counterfire' will also emerge in the forms of legal and moral restraint. All will be viewed through the lens of perceptions. The burgeoning of firepower from all sources now becomes the spur for changes in the relationship between the land and air components, mindful of those novel factors that will increasingly inhibit the application of that firepower. Item Open Access Assessing the evidential value of artefacts recovered from the cloud(2017-06-14) Mustafa, Z. S.; Maddison Warren, Annie; Morris, S.; Nobles, P. Cloud computing offers users low-cost access to computing resources that are scalable and flexible. However, it is not without its challenges, especially in relation to security. Cloud resources can be leveraged for criminal activities, and the architecture of the ecosystem makes digital investigation difficult in terms of evidence identification, acquisition and examination. However, these same resources can be leveraged for the purposes of digital forensics, providing facilities for evidence acquisition, analysis and storage. Alternatively, existing forensic capabilities can be used in the Cloud as a step towards achieving forensic readiness. Tools can be added to the Cloud which can recover artefacts of evidential value. This research investigates whether artefacts that have been recovered from the Xen Cloud Platform (XCP) using existing tools have evidential value. 
To determine this, the research is broken into three distinct areas: adding existing tools to a Cloud ecosystem, recovering artefacts from that system using those tools, and then determining the evidential value of the recovered artefacts. From these experiments, three key steps for adding existing tools to the Cloud were determined: the identification of the specific Cloud technology being used, the identification of existing tools, and the building of a testbed. Stemming from this, three key components of artefact recovery are identified: the user, the audit log and the Virtual Machine (VM), along with two methodologies for artefact recovery in XCP. In terms of evidential value, this research proposes a set of criteria for the evaluation of digital evidence, stating that it should be authentic, accurate, reliable and complete. In conclusion, this research demonstrates the use of these criteria in the context of digital investigations in the Cloud and how each is met. This research shows that it is possible to recover artefacts of evidential value from XCP. Item Open Access Assessing the Reliability of Digital Evidence from Live Investigations Involving Encryption(2009-11-24T17:34:14Z) Hargreaves, C. J.; Chivers, H. The traditional approach to a digital investigation when a computer system is encountered in a running state is to remove the power, image the machine using a write blocker and then analyse the acquired image. This has the advantage of preserving the contents of the computer’s hard disk at that point in time. However, the disadvantage of this approach is that the preservation of the disk is at the expense of volatile data such as that stored in memory, which does not remain once the power is disconnected. There are an increasing number of situations where this traditional approach of ‘pulling the plug’ is not ideal since volatile data is relevant to the investigation; one of these situations is when the machine under investigation is using encryption. 
If encrypted data is encountered on a live machine, a live investigation can be performed to preserve this evidence in a form that can later be analysed. However, there are a number of difficulties with using evidence obtained from live investigations that may cause the reliability of such evidence to be questioned. This research investigates whether digital evidence obtained from live investigations involving encryption can be considered reliable. To determine this, a means of assessing reliability is established, which involves evaluating digital evidence against a set of criteria: evidence should be authentic, accurate and complete. This research considers how traditional digital investigations satisfy these requirements and then determines the extent to which evidence from live investigations involving encryption can satisfy the same criteria. This research concludes that it is possible for live digital evidence to be considered reliable, but that the reliability of digital evidence ultimately depends on the specific investigation and the importance of the decision being made. However, the research provides structured criteria that allow the reliability of digital evidence to be assessed, demonstrates the use of these criteria in the context of live digital investigations involving encryption, and shows the extent to which each can currently be met.