CERES
  • Communities & Collections
  • Browse CERES

Browsing by Author "Nam, David"

Now showing 1 - 3 of 3
  • Open Access
    Automatic x-ray image segmentation and clustering for threat detection
    (SPIE, 2017-10-05) Kechagias-Stamatis, Odysseas; Aouf, Nabil; Nam, David; Belloni, Carole
    Firearms currently pose a known risk at the borders. The enormous number of X-ray images from parcels, luggage and freight coming into each country via rail, aviation and maritime routes presents a continual challenge to screening officers. To further improve UK capability and aid officers in their search for firearms, we suggest an automated object segmentation and clustering architecture that focuses officers’ attention on high-risk threat objects. Our proposal utilizes dual-view single/dual-energy 2D X-ray imagery and is a blend of radiology, image processing and computer vision concepts. It consists of a triple-layered processing scheme that segments the luggage contents based on the effective atomic number of each object, followed by a dual-layered clustering procedure. The latter comprises a mild and a hard clustering phase. The former applies a number of morphological operations from the image-processing domain, aiming to disjoin mildly connected objects and filter noise. The hard clustering phase exploits local feature matching techniques from the computer vision domain, aiming to sub-cluster the clusters obtained from the mild clustering stage. Evaluation on highly challenging single- and dual-energy X-ray imagery reveals the architecture’s promising performance.
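The abstract's "mild clustering" phase rests on standard morphological operations. As a minimal sketch of that idea only (not the paper's actual pipeline), the following uses SciPy's `ndimage` module, here an assumed stand-in for whatever toolkit the authors used, to show how a morphological opening disjoins two mildly connected regions in a binary object mask:

```python
import numpy as np
from scipy import ndimage

# Toy binary "object mask": two blobs joined by a thin one-pixel bridge,
# standing in for two mildly connected objects in a segmented X-ray image.
mask = np.zeros((5, 11), dtype=bool)
mask[1:4, 1:4] = True   # object A
mask[1:4, 7:10] = True  # object B
mask[2, 4:7] = True     # thin bridge joining A and B

# Before opening, the bridge fuses the two objects into one component.
_, n_before = ndimage.label(mask)

# Morphological opening (erosion then dilation) removes the thin bridge
# and small noise while preserving the bulk of each object.
opened = ndimage.binary_opening(mask)
_, n_after = ndimage.label(opened)

print(n_before, n_after)  # 1 component before opening, 2 after
```

The same opening also filters isolated noise pixels, which matches the abstract's description of the mild phase as both disjoining objects and filtering noise.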
  • Open Access
    Towards scene understanding implementing the stixel world
    (IEEE, 2019-03-07) Grenier, Amélie; Alzoubi, Alaa; Feetham, Luke; Nam, David
    In this paper, we present our work towards scene understanding based on modeling the scene prior to understanding its content. We describe the environment representation model used, the Stixel World, and its benefits for compact scene representation. We show preliminary results of its application in a diverse environment and the limitations encountered in our experiments with imaging systems. We argue that this method was developed for an ideal scenario and does not generalise well to uncommon changes in the environment. We also found the method to be sensitive to the quality of the stereo rectification and the calibration of the optics, among other parameters, which makes it time-consuming and delicate to prepare for real-time applications. In a theoretical discussion, we suggest that pixel-wise semantic segmentation techniques can address some of these shortcomings.
  • Open Access
    Vehicle Obstacle Interaction Dataset (VOIDataset)
    (Cranfield University, 2018-10-11 13:26) Alzoubi, Alaa; Nam, David
    Vehicle-Obstacle Interaction Dataset (VOIDataset) includes 277 trajectories (sequences of x,y positions of the vehicle and the obstacle) covering three scenarios: 67 crash, 106 left-pass and 104 right-pass trajectories. The distance between the vehicle and the obstacle (the length of the trajectory) is 50 metres. The trajectories were manually annotated and used to evaluate our activity recognition method. Data were gathered in a simulation environment developed in Virtual Battlespace 3 (VBS3), with a Logitech G29 Driving Force racing wheel and pedals, using a model of a Dubai highway: a six-lane road with an obstacle in the centre lane. The experiment involved 40 participants of varying ages, genders and driving experience, who were asked to use their driving experience to avoid the obstacle. A Skoda Octavia was used in all trials, with a maximum speed of 50 km/h. We recorded the obstacle's and ego-vehicle's coordinates (the centre position of the vehicle), velocity, heading angle, and distance from each other; the generated trajectories were recorded at 10 Hz. Version 2: no change to the dataset; contact details appended for further information. Alaa Alzoubi: alaa.alzoubi@buckingham.ac.uk; David Nam: d.nam@cranfield.ac.uk
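The dataset description above fixes the sampling rate (10 Hz), the approach distance (50 m), the speed cap (50 km/h) and the recorded fields. A hypothetical record layout consistent with that description can be sketched as below; the class and field names are illustrative assumptions, not the dataset's actual schema:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative layout for one VOIDataset trajectory (field names assumed).
@dataclass
class Sample:
    t: float                          # time in seconds (10 Hz -> 0.1 s steps)
    ego_xy: Tuple[float, float]       # centre position of the ego vehicle
    obstacle_xy: Tuple[float, float]  # centre position of the obstacle
    velocity: float                   # ego-vehicle velocity
    heading: float                    # ego-vehicle heading angle
    gap: float                        # ego-obstacle distance

@dataclass
class Trajectory:
    label: str                        # "crash", "left-pass" or "right-pass"
    samples: List[Sample]

# Sanity check: covering the 50 m approach at the 50 km/h cap takes at
# least 50 / (50 / 3.6) = 3.6 s, i.e. at least ~36 samples at 10 Hz.
min_samples = round(50 / (50 / 3.6) * 10)
print(min_samples)  # 36
```

Trajectories driven below the speed cap would simply contain more samples over the same 50 m.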


Cranfield Campus
Cranfield, MK43 0AL
United Kingdom
T: +44 (0) 1234 750111
Cranfield University at Shrivenham
Shrivenham, SN6 8LA
United Kingdom
Email: researchsupport@cranfield.ac.uk (REF Compliance or Open Access queries)

Cranfield University copyright © 2002-2025