DSDS 16
Browsing DSDS 16 by Subject "DSDS16 technical paper" (showing 5 of 5 items)
Item Open Access: Anomaly Detection for Security Imaging (Cranfield University, 2017-01-24). Andrews, Jerone.
Technical paper presented at the 2016 Defence and Security Doctoral Symposium.
Non-intrusive inspection systems are increasingly used to scan intermodal freight shipping containers at national ports, to ensure cargo conformity with customs regulations. Initially, each container is risk assessed on the basis of shipping information such as origin, destination, and manifest. If the risk is deemed sufficiently high, the container is imaged, typically by non-intrusive X-ray radiography. Finally, on the basis of the X-ray image, a human operator must decide whether the container requires physical inspection. These processes aim to minimise the number of false searches whilst maximising the number of true searches, thus facilitating the detection of suspicious cargoes with negligible interference to the flow of commerce. However, owing to the large number of containers transported each year, the number of X-ray transmission images to be visually inspected is high. Moreover, the heterogeneity within and between the X-ray images presents an appreciable visual challenge to human operators, exacerbated by overlapping, semi-transparent cargo.
Previous approaches to automated security image analysis focus on the detection of particular classes of threat. However, this mode of inspection is ineffectual when dealing with mature classes of threat, for which adversaries have refined effective concealment techniques. To detect these hidden threats, customs officers often look for anomalies of shape, texture, weight, feel or response to perturbation. Inspired by the practice of customs officers, we are developing algorithms to discover visual anomalies in X-ray images.
This paper investigates an anomaly detection framework, at X-ray image patch level, for the automated discovery of absolute, positional, and relative anomalies. The framework consists of two main components: (i) image features, and (ii) the detection of anomalies relative to those features. The development of discriminative features is problematic, since we have no a priori knowledge of the underlying distribution generating the anomalies. Therefore, we pursue features that have been optimised for a related, very general task on similar data, which we found to be useful in previous work [1, 2]. The features for each patch are then scored using a forest of isolation trees, a recently proposed machine learning algorithm for general-purpose anomaly detection. The forest is constructed under the working assumption that anomalies are 'few and different'; patches that are more readily separated from the main cluster by randomly selected criteria therefore receive higher anomaly scores. The patch-level results are then fused into an overall anomaly heat map of the entire container, to facilitate human inspection. Lastly, our system is evaluated qualitatively using illustrations of example outputs and test cases with real and contrived anomalies.
[1] Andrews, J. T., Morton, E. J., & Griffin, L. D. (2016). Detecting Anomalous Data Using Auto-Encoders. International Journal of Machine Learning and Computing, 6(1), 21.
[2] Andrews, J. T., Morton, E. J., & Griffin, L. D. (2016). Transfer Representation-Learning for Anomaly Detection. International Conference on Machine Learning, Anomaly Detection Workshop.
Biographical Notes: Jerone Andrews has an MSc in Mathematics from King's College London and an MRes in Security Science from University College London. He is currently a PhD candidate in Applied Mathematics at University College London, jointly supervised by Computer Science and Statistical Science. His main topic of interest is representation learning for anomaly detection in computer vision.
This work was supported by the Department for Transport (DfT), the Engineering and Physical Sciences Research Council (EPSRC) under CASE Award Grant 157760, and Rapiscan Systems.
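The patch-scoring pipeline described in the abstract above (per-patch features, an isolation forest score, fusion into a heat map) can be sketched in a few lines. The sketch below is illustrative only and is not the authors' code: the feature extractor is a simple placeholder standing in for the transferred representations of [1, 2], and the image, patch size and injected anomaly are invented for the example.

```python
# Minimal sketch: patch-level anomaly scoring with an Isolation Forest,
# fused into a heat map. Features and data are placeholders (assumptions).
import numpy as np
from sklearn.ensemble import IsolationForest

PATCH = 32  # patch side length in pixels (assumed)

def extract_patches(image, patch=PATCH, stride=PATCH):
    """Cut a 2-D image into non-overlapping patches, keeping their positions."""
    h, w = image.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            coords.append((y, x))
    return np.array(patches), coords

def patch_features(patches):
    """Placeholder per-patch features: mean, standard deviation, gradient energy."""
    flat = patches.reshape(len(patches), -1)
    gy, gx = np.gradient(patches.astype(float), axis=(1, 2))
    grad_energy = (gy ** 2 + gx ** 2).mean(axis=(1, 2))
    return np.column_stack([flat.mean(axis=1), flat.std(axis=1), grad_energy])

# Synthetic stand-in for an X-ray transmission image of a container.
rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.05, size=(256, 256))
image[96:128, 160:192] += 0.4          # one patch-sized region made 'different'

patches, coords = extract_patches(image)
features = patch_features(patches)

# Isolation Forest: under the 'few and different' assumption, patches that are
# isolated by fewer random splits receive higher anomaly scores.
forest = IsolationForest(n_estimators=100, random_state=0).fit(features)
scores = -forest.score_samples(features)    # larger value = more anomalous

# Fuse patch-level scores into a container-level anomaly heat map.
heat = np.zeros_like(image)
for (y, x), s in zip(coords, scores):
    heat[y:y + PATCH, x:x + PATCH] = s
print("most anomalous patch at", coords[int(np.argmax(scores))])
```

In the framework described in the paper, the placeholder statistics would be replaced by features transferred from a related task, and the heat map would be presented to a human operator rather than reduced to a single flagged patch.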
Item Open Access: Modelling the effects of temperature-dependent material properties in shear melt layers (Cranfield University, 2016-12-08). Timms, Robert.
Technical paper presented at the 2016 Defence and Security Doctoral Symposium. This paper won second place in the Technical Paper category.
The mechanisms responsible for ignition of explosive materials in response to low-energy stimuli, known as "insults" in the literature, are still not well understood. It is generally believed that explosive ignition is of thermal origin, with mechanical energy being converted into heat in localised regions, forming so-called "hot spots". When an explosive sample is subject to a mechanical insult, pre-existing or new microcracks will be placed in compression and shear. Such microcracks can grow if the local stress is great enough and, owing to friction between the solid surfaces, heat is released during the growth process. After sufficient heat release, the crack surface temperature is raised to the solid melting point and a thin sheared melt layer forms, separating the solid surfaces. This thin melt layer continues to be heated through viscous dissipation and subsequent chemical reaction, and is thought to be a prime location for hot spot generation. Mechanical insults resulting from low-speed impacts which shear an explosive have been identified as a possible ignition source. However, modelling such an ignition mechanism numerically with hydrocodes presents considerable challenges.
To supplement the numerical approach, we develop an analytical model of the shearing, melting and subsequent ignition of an explosive material. We consider the melting of a thin viscous layer of explosive material due to an applied shear in an idealised planar geometry. The model accounts for self-heating due to mechanical dissipation, and a single-step Arrhenius reaction is used to describe the heating of the explosive due to subsequent chemical reaction. A solution is sought by considering perturbations from a melt layer of uniform width. In particular, the effects of modelling the temperature dependence of the liquid viscosity and specific heat are studied. In contrast to previous work, which does not account for the temperature dependence of material properties, it is shown that allowing the viscosity to vary with temperature can lead to non-uniform mechanical heating in the layer at leading order. Such localised heating may be associated with the generation of localised hot spots which give rise to ignition.
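To make the competition between the heating terms concrete, the sketch below integrates a deliberately simplified, zero-dimensional energy balance combining viscous dissipation (with temperature-dependent viscosity) and single-step Arrhenius self-heating. It is not the paper's spatially resolved perturbation analysis, and every parameter value is an assumption chosen only so the example runs.

```python
# Lumped (0-D) illustration only: viscous dissipation plus single-step
# Arrhenius self-heating in a sheared melt layer. All values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314            # gas constant, J/(mol K)
E_a = 1.5e5          # activation energy, J/mol (assumed)
A = 5.0e12           # Arrhenius pre-exponential factor, 1/s (assumed)
Q = 2.0e6            # heat of reaction per unit mass, J/kg (assumed)
rho, c_p = 1800.0, 1500.0   # density (kg/m^3) and specific heat (J/(kg K)) (assumed)
gamma_dot = 1.0e6    # applied shear rate across the layer, 1/s (assumed)
h_loss = 2.0e4       # lumped heat-loss coefficient, W/(m^3 K) (assumed)
T_melt = 520.0       # melting temperature of the solid, K (assumed)

def viscosity(T):
    """Temperature-dependent viscosity: an assumed Arrhenius-type thinning law."""
    return 0.1 * np.exp(2000.0 * (1.0 / T - 1.0 / T_melt))

def dTdt(t, y):
    """Lumped energy balance for the melt-layer temperature."""
    T = y[0]
    viscous = viscosity(T) * gamma_dot ** 2           # mechanical dissipation
    chemical = rho * Q * A * np.exp(-E_a / (R * T))   # single-step Arrhenius reaction
    losses = h_loss * (T - T_melt)                    # conduction out of the layer
    return [(viscous + chemical - losses) / (rho * c_p)]

def ignition(t, y):
    return y[0] - 1000.0      # stop once the layer reaches 1000 K (assumed threshold)
ignition.terminal = True

sol = solve_ivp(dTdt, (0.0, 0.05), [T_melt], events=ignition, max_step=1.0e-5)
if sol.t_events[0].size:
    print(f"thermal runaway (1000 K) after {sol.t_events[0][0] * 1e3:.2f} ms")
else:
    print(f"temperature after {sol.t[-1] * 1e3:.1f} ms: {sol.y[0, -1]:.0f} K")
```

The point of the toy model is only that a temperature-dependent viscosity couples the mechanical heating rate to the thermal state; the paper resolves this spatially across the layer rather than lumping it into one temperature.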
Item Open Access: Nonlinear vibration analysis of a complex aerospace structure (Cranfield University, 2016-12-08). Cooper, Samson.
Technical paper presented at the 2016 Defence and Security Doctoral Symposium.
Complex-shaped aerodynamic structures such as deployable missiles are prone to exhibit some level of nonlinear behaviour owing to their aerodynamically tailored design and application. Aside from the aeroelastic control challenges experienced by a missile, a fundamental challenge encountered by a deployable missile is the inevitable concentrated structural nonlinearity observed around the hinges of its fins. Owing to the current design and manufacturing process, the hinge of a missile fin often consists of complex configurations, such as joints, friction and other nonlinear features, which may lead to concentrated structural nonlinearities. The nonlinearities encountered include piecewise linearity, bilinear nonlinearity, hysteresis, Coulomb friction and nonlinear damping mechanisms. These nonlinearities are frequently triggered at large vibration amplitudes, caused by high pressure loads during operational flight. Activation of these nonlinearities often affects the dynamic response of the missile and in some cases leads to structural failures in major components of the air vehicle. In this context, identifying and predicting the vibration response of such aerodynamic structures with nonlinearities may be of great advantage to the structural dynamics community.
In this paper, the nonlinear dynamic behaviour of a B61 prototype missile is examined. A two-step methodology, integrating nonlinear system identification (to estimate the nonlinear stiffness and damping mechanisms) with nonlinear finite element modelling, is adopted. The first step uses input and output data acquired from random and sine-sweep vibration tests to derive a nonlinear experimental model of the missile, developed through a white-box identification process (detection, characterisation and parameter estimation). The second step implements the parameters of the identified nonlinear system in a finite element model (FEM) of the missile to produce a nonlinear FEM. The nonlinear dynamic response of the FEM was computed using the harmonic balance (HB) method and pseudo-arclength continuation in the frequency domain. In addition, force-controlled stepped-sine experiments at several excitation levels were conducted to validate the numerical solution obtained from the nonlinear FEM computation. The results were used to understand the amplitude-dependent behaviour of the missile in a vibration-controlled environment and to predict the dynamic response of the missile in the presence of deployable hinge nonlinearity.
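As a toy illustration of the harmonic balance idea used in the second step, the sketch below applies single-term harmonic balance to a Duffing oscillator, a common stand-in for a structure with a cubic stiffness nonlinearity. The parameter values are assumed and the model is far simpler than the missile FEM and multi-harmonic continuation described above.

```python
# Single-harmonic balance for a Duffing oscillator (illustrative assumptions only):
# m*x'' + c*x' + k*x + k3*x**3 = F*cos(omega*t)
import numpy as np

m, c, k, k3 = 1.0, 0.05, 1.0, 0.5   # mass, damping, linear and cubic stiffness (assumed)
F = 0.2                              # forcing amplitude (assumed)

def response_amplitudes(omega):
    """Steady-state amplitudes a at forcing frequency omega.

    Assumes x(t) ~ a*cos(omega*t - phi) and balances only the fundamental
    harmonic, which turns the equation of motion into a cubic in u = a**2;
    its positive real roots are the harmonic-balance solution branches.
    """
    d = k - m * omega ** 2
    coeffs = [9.0 / 16.0 * k3 ** 2,          # u**3
              1.5 * k3 * d,                  # u**2
              d ** 2 + (c * omega) ** 2,     # u
              -F ** 2]                       # constant term
    u = np.roots(coeffs)
    u = u[np.isreal(u)].real
    return np.sqrt(u[u > 0.0])

# Frequency sweep: near resonance several amplitude branches coexist, the kind
# of amplitude-dependent behaviour that stepped-sine testing exposes.
for omega in np.linspace(0.5, 2.0, 7):
    amps = ", ".join(f"{a:.3f}" for a in sorted(response_amplitudes(omega)))
    print(f"omega = {omega:.2f}: amplitudes = {amps}")
```

In the paper's workflow the same balancing idea is applied with many harmonics to the identified nonlinear FEM, and pseudo-arclength continuation is used to trace the coexisting branches through the turning points that a naive frequency sweep misses.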
Item Open Access: Nuclear arms control: optimising verification processes through formal modelling (Cranfield University, 2016-12-09). Beaumont, Paul.
Technical paper presented at the 2016 Defence and Security Doctoral Symposium.
Arms control verification processes do not in practice allow the parties involved to gather complete information about each other. Instead, each must decide whether the other parties are complying with their obligations on the basis of limited information. They must also make decisions during negotiation of a verification regime about the measures to be used, and during implementation of that regime about how and when to use the tools at their disposal. Decision-making under uncertainty is therefore a core element of the arms control verification problem.
Our work aims to extend and combine mathematical modelling and verification approaches so that they can cope with the inherent lack of available data in this domain, and can potentially be used to support policy-makers in practice. Our approach is to model the beliefs of each party and the various inspection control processes in a type of software known as a Satisfiability Modulo Theories (SMT) solver. This offers a general-purpose approach to the automated analysis of mathematical models; in our case we use SMT to deal with uncertainty in (or absence of) data by expressing it as under-specification of parameters in the model. In other words, we do not have to choose values for parameters, such as the number of nuclear weapons that one of the parties holds, if we do not know them: we can pick a range of possible values, or leave the value totally unconstrained.
We demonstrate the capabilities of this approach by exploring a representative, quantitative model of an arms control process in which two parties engage in mutual nuclear arms reduction and verification activities. We show that we are able to answer pertinent questions such as: "given uncertainty in our treaty partner's initial weapon stockpile, with scheduled inspections every 6 months and 2 other unscheduled inspections per year, what timing for unscheduled inspections leads to the minimum difference between our partner's declaration and our assessment of their actual arsenal?" These new modelling and analysis methods allow for a much more sophisticated approach to modelling arms control: we have harnessed a supercomputer to analyse over 134 million possible inspection timelines, allowing the software to compute an inspection schedule over a treaty lifespan of 2 years for which performance against one or more measures of interest is optimised. The models and results can then be studied and their expected outcomes assessed to assist decision-making on proposed arms control regimes.
Biographical Notes: Paul Beaumont is a final-year postgraduate research student in the Department of Computing at Imperial College London. His PhD focuses on understanding and solving mathematical models in the absence of data, and follows on from a Master's and an undergraduate degree in Mathematics, also at Imperial. He works with colleagues from AWE and is applying his PhD techniques to the problem of nuclear arms verification.
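The following toy sketch, using the Python bindings of the Z3 SMT solver, illustrates the idea of leaving a parameter such as a stockpile under-specified and asking an optimisation question over every value consistent with the constraints. The variables, bounds and inspection model here are invented for illustration and are much simpler than the model analysed in the paper.

```python
# Toy SMT sketch of under-specified parameters in an inspection model.
# All numbers and the 'narrowing band' inspection effect are assumptions.
from z3 import Int, Optimize, And, sat

stockpile = Int("stockpile")    # partner's true arsenal: unknown, only bounded
declared = Int("declared")      # partner's declaration
assessed = Int("assessed")      # our post-inspection assessment
gap = Int("gap")                # declaration / assessment discrepancy

opt = Optimize()
opt.add(And(stockpile >= 0, stockpile <= 500))   # under-specified: no exact value chosen
opt.add(declared == 300)                          # declared figure (assumed)

# Crude stand-in for an inspection regime: each inspection is assumed to
# narrow the band within which our assessment can differ from the truth.
inspections = 4
band = 100 - 20 * inspections
opt.add(assessed >= stockpile - band, assessed <= stockpile + band)
opt.add(gap == assessed - declared)

# Worst-case question: how large can the discrepancy be over every arsenal
# consistent with the constraints?
opt.maximize(gap)
if opt.check() == sat:
    print(opt.model())
```

The paper's analysis asks richer questions of the same general form, sweeping over candidate inspection timelines and optimising measures of interest rather than a single hand-written gap variable.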
Item Open Access: Solubility and chemical interaction of nitrocellulose in plasticisers (Cranfield University, 2016-11-07). Flood, Nathan; Parker, Matthew.
Technical paper presented at the 2016 Defence and Security Doctoral Symposium.
Abstract: Nitrocellulose (NC) is commonly used as an energetic binder in explosive and propellant formulations. During the formulation and casting stages, NC can be mixed with a variety of plasticisers with the aim of tuning the mechanical properties of the charge to suit the specified requirements. Historically, there have been issues with the solubility of NC in various plasticisers, which has created manufacturing problems leading to missile failures. The research presented here, from two programmes of work funded by the WSTC (Propellant Bonding and Nitrocellulose: Degrees of Freedom), investigated how NC interacts with three plasticisers: Triacetin (TA), Diallyl Phthalate (DAP) and Nitroglycerine (NG).
Solubility was investigated using time-lapse microscopy, and the chemical interaction was investigated using Attenuated Total Reflectance Infrared Spectroscopy (ATR-FT-IR) and compared with the five modes describing the swelling and dissolution mechanisms of wood and cellulose fibres. The mechanisms observed for nitrocellulose follow the dissolution modes described for wood and cotton fibres. TA was found to have the highest solubility with respect to NC, whilst NG had the lowest; the variation in the swelling and gelation of NC has been rationalised in terms of the crystallinity within the sample. Changes in the cellulosic fibrillar substructure of NC due to nitration and processing result in changes in its crystallinity. This variation in crystallinity subsequently affects the chemical interactions of solvent and plasticiser molecules with NC and the bulk movement of these molecules through the material. ATR-FT-IR demonstrates the presence of constructive bonding interactions between NC and TA or DAP, which manifest as swelling and gelation at the bulk level. NG exhibits no apparent molecular bonding by IR measurement, and shows sorption only into the NC fibre, without the extensive swelling and gelation observed in the other regimes.