Anomaly Detection for Security Imaging

2017-01-24T15:26:27Z (GMT) by Jerone Andrews
Technical paper presented at the 2016 Defence and Security Doctoral Symposium.

Non-intrusive inspection systems are increasingly used to scan intermodal freight shipping containers at national ports, to ensure cargo conformity with customs regulations. Initially, each container is risk-assessed on the basis of shipping information such as origin, destination, and manifest. If the risk is deemed sufficiently high, the container is imaged, typically by non-intrusive X-ray radiography. Finally, on the basis of the X-ray image, a human operator must decide whether the container warrants physical inspection.

These processes aim to minimise the number of false searches whilst maximising the number of true searches, thus facilitating the detection of suspicious cargoes with negligible interference to the flow of commerce. However, owing to the large number of containers transported each year, the number of X-ray transmission images to be visually inspected is high. Moreover, the heterogeneity within and between the X-ray images poses an appreciable visual challenge to human operators, exacerbated by overlapping, semi-transparent cargo.

Previous approaches to automated security image analysis focus on the detection of particular classes of threat. However, this mode of inspection is ineffectual when dealing with mature classes of threat, for which adversaries have refined effective concealment techniques. To detect such hidden threats, customs officers often look for anomalies of shape, texture, weight, feel, or response to perturbation. Inspired by this practice, we are developing algorithms to discover visual anomalies in X-ray images.

This paper investigates an anomaly detection framework, operating at X-ray image patch level, for the automated discovery of absolute, positional, and relative anomalies.
The framework consists of two main components: (i) image features, and (ii) the detection of anomalies relative to those features.

The development of discriminative features is problematic, since we have no a priori knowledge of the underlying distribution generating the anomalies. We therefore pursue features that have been optimised for a related, very general task on similar data, which we found useful in previous works [1,2]. The features for each patch are then scored using a forest of isolation trees, a recently proposed machine learning algorithm for general-purpose anomaly detection. The forest is constructed under the working assumption that anomalies are ‘few and different’; patches that are more readily separated from the main cluster by randomly selected criteria therefore receive higher anomaly scores. The patch-level results are then fused into an overall anomaly heat map of the entire container, to facilitate human inspection. Lastly, our system is evaluated qualitatively using illustrations of example outputs and test cases with real and contrived anomalies.

[1] Andrews, J. T., Morton, E. J., & Griffin, L. D. (2016). Detecting Anomalous Data Using Auto-Encoders. International Journal of Machine Learning and Computing, 6(1), 21.
[2] Andrews, J. T., Morton, E. J., & Griffin, L. D. (2016). Transfer Representation-Learning for Anomaly Detection. International Conference on Machine Learning, Anomaly Detection Workshop.

Biographical Notes:
Jerone Andrews has an MSc in Mathematics from King’s College London, and an MRes in Security Science from University College London. He is currently a PhD candidate in Applied Mathematics at University College London, jointly supervised by Computer Science and Statistical Science.
His main topic of interest is representation-learning for anomaly detection in computer vision.

This work was supported by the Department for Transport (DfT), the Engineering and Physical Sciences Research Council (EPSRC) under CASE Award Grant 157760, and Rapiscan Systems.
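The patch-scoring step described in the abstract — scoring per-patch features with a forest of isolation trees and fusing the scores into a container-level heat map — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid dimensions, feature dimensionality, and the random placeholder features (standing in for features extracted from the X-ray patches) are all assumptions.

```python
# Hypothetical sketch: score patch features with an Isolation Forest,
# then fuse the per-patch scores into a container-level anomaly heat map.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assume the container X-ray image is tiled into a 12 x 20 grid of patches,
# each described by a 64-dimensional feature vector (placeholder values here;
# in the paper these come from features optimised on a related task).
grid_h, grid_w, feat_dim = 12, 20, 64
features = rng.normal(size=(grid_h * grid_w, feat_dim))

# Fit a forest of isolation trees on the patch features. Under the
# 'few and different' assumption, anomalous patches are isolated by
# fewer random splits than patches in the main cluster.
forest = IsolationForest(n_estimators=100, random_state=0)
forest.fit(features)

# score_samples returns higher values for inliers; negate so that
# larger values indicate more anomalous patches.
scores = -forest.score_samples(features)

# Fuse the patch-level scores back into a heat map over the container,
# ready for display to a human operator.
heat_map = scores.reshape(grid_h, grid_w)
print(heat_map.shape)
```

The heat map simply re-indexes the per-patch scores by grid position; a real system would overlay it on the X-ray image to direct the operator's attention.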