CERES

Browsing by Author "Katramados, Ioannis"

Now showing 1 - 4 of 4
  • Adaptive object placement for augmented reality use in driver assistance systems (Open Access)
    (2011-11-17) Bordes, Lucie; Breckon, Toby P.; Katramados, Ioannis; Kheyrollahi, Alireza
    We present an approach for adaptive object placement for Augmented Reality (AR) use in driver assistance systems. Combined vanishing point and road surface detection enable the real-time adaptive emplacement of AR objects within a driver's natural field of view for on-road information display. This work combines both automotive vision and multimedia production aspects of real-time visual engineering. (An illustrative vanishing-point sketch follows the listing below.)
  • CASS•E: Cranfield astrobiological stratospheric sampling experiment (Open Access)
    (2010-12-31) Naicker, L.; Grama, V. V.; Juanes-Vallejo, Clara M.; Katramados, Ioannis; Rato, Carla Cristina Pereira Salgueiro Catarino; Rix, Catherine S.; Sanchez, E.; Cullen, David C.
    CASS•E is a life detection experiment that aims to be capable of collecting microorganisms in Earth's Stratosphere. The experiment will be launched on a stratospheric balloon in collaboration with Eurolaunch through the BEXUS (Balloon-borne Experiments for University Students) programme from Esrange, Sweden, in October 2010. It essentially consists of a pump which draws air from the Stratosphere through a collection filter mechanism. Due to the low number density of microbes in the Stratosphere compared to the known levels of contamination at ground level, the experiment incorporated Planetary Protection and Contamination Control (PP&CC) protocols in its design and construction in order to confirm that any microbes detected are truly stratospheric in origin. Space-qualified cleaning and sterilisation techniques were employed throughout Assembly, Integration and Testing (AIT), as well as biobarriers which were designed to open only in the stratosphere and so prevent recontamination of the instrument after sterilisation. The material presented here covers the design and AIT of CASS•E. Copyright © 2010 by the International Astronautical Federation. All rights reserved.
  • Real-time object detection using monocular vision for low-cost automotive sensing systems (Open Access)
    (Cranfield University, 2013-02) Katramados, Ioannis; Breckon, Toby P.
    This work addresses the problem of real-time object detection in automotive environments using monocular vision. The focus is on real-time feature detection, tracking and depth estimation using monocular vision and, finally, object detection by fusing visual saliency and depth information. Firstly, a novel feature detection approach is proposed for extracting stable and dense features even in images with very low signal-to-noise ratio. This methodology is based on image gradients, which are redefined to take account of noise as part of their mathematical model. Each gradient is based on a vector connecting a negative to a positive intensity centroid, where both centroids are symmetric about the centre of the area for which the gradient is calculated. Multiple gradient vectors define a feature, with its strength being proportional to the underlying gradient vector magnitude. The evaluation of the Dense Gradient Features (DeGraF) shows superior performance over other contemporary detectors in terms of keypoint density, tracking accuracy, illumination invariance, rotation invariance, noise resistance and detection time. The DeGraF features form the basis for two new approaches that perform dense 3D reconstruction from a single vehicle-mounted camera. The first approach tracks DeGraF features in real-time while performing image stabilisation with minimal computational cost. This means that, despite camera vibration, the algorithm can accurately predict the real-world coordinates of each image pixel in real-time by comparing each motion vector to the ego-motion vector of the vehicle. The performance of this approach has been compared to different 3D reconstruction methods in order to determine their accuracy, depth-map density, noise-resistance and computational complexity. The second approach proposes the use of local frequency analysis of gradient features for estimating relative depth. This novel method is based on the fact that DeGraF gradients can accurately measure local image variance with subpixel accuracy. It is shown that the local frequency by which the centroid oscillates around the gradient window centre is proportional to the depth of each gradient centroid in the real world. The lower computational complexity of this methodology comes at the expense of depth map accuracy as the camera velocity increases, but it is at least five times faster than the other evaluated approaches. This work also proposes a novel technique for deriving visual saliency maps by using Division of Gaussians (DIVoG). In this context, saliency maps express how different each image pixel is from its surrounding pixels across multiple pyramid levels. This approach is shown to be both fast and accurate when evaluated against other state-of-the-art approaches. Subsequently, the saliency information is combined with depth information to identify salient regions close to the host vehicle. The fused map allows faster detection of high-risk areas where obstacles are likely to exist. As a result, existing object detection algorithms, such as the Histogram of Oriented Gradients (HOG), can execute at least five times faster. In conclusion, through a step-wise approach, computationally expensive algorithms have been optimised or replaced by novel methodologies to produce a fast object detection system that is aligned to the requirements of the automotive domain. (An illustrative sketch of the centroid-based gradient idea follows the listing below.)
  • Real-time visual saliency by division of Gaussians (Open Access)
    (2011-09-14) Katramados, Ioannis; Breckon, Toby P.
    This paper introduces a novel method for deriving visual saliency maps in real-time without compromising the quality of the output. This is achieved by replacing the computationally expensive centre-surround filters with a simpler mathematical model named Division of Gaussians (DIVoG). The results are compared to five other approaches, demonstrating at least six times faster execution than the current state-of-the-art whilst maintaining high detection accuracy. Given the multitude of computer vision applications that make use of visual saliency algorithms, such a reduction in computational complexity is essential for improving their real-time performance. (An illustrative sketch of the division-of-Gaussians idea follows the listing below.)
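
Illustrative sketch (vanishing-point estimation): the first item above reports combined vanishing point and road surface detection for AR object placement, but the abstract does not describe the detection method itself. The minimal Python sketch below estimates a vanishing point from pairwise intersections of Hough line segments; the function name, thresholds and the median-based aggregation are assumptions for illustration only, not the authors' algorithm.

import cv2
import numpy as np

def estimate_vanishing_point(frame_bgr):
    """Rough (x, y) vanishing-point estimate for a road-scene frame (sketch only)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    segs = [tuple(map(float, l[0])) for l in lines]   # floats avoid integer overflow below
    pts = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            x1, y1, x2, y2 = segs[i]
            x3, y3, x4, y4 = segs[j]
            d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if abs(d) < 1e-6:
                continue                              # near-parallel pair, skip
            a = x1 * y2 - y1 * x2
            b = x3 * y4 - y3 * x4
            pts.append(((a * (x3 - x4) - (x1 - x2) * b) / d,
                        (a * (y3 - y4) - (y1 - y2) * b) / d))
    if not pts:
        return None
    return tuple(np.median(np.array(pts), axis=0))    # median is robust to stray crossings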
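
Illustrative sketch (centroid-based gradients): the thesis abstract above describes DeGraF gradients as vectors connecting a negative to a positive intensity centroid within a local window, with feature strength proportional to the vector magnitude. The Python sketch below is one plausible reading of that description; the intensity-based weighting and the helper name centroid_gradient are assumptions, not the published DeGraF algorithm.

import numpy as np

def centroid_gradient(patch):
    """Gradient vector (dx, dy) and strength for a square grayscale patch (sketch only)."""
    patch = patch.astype(np.float64)
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    eps = 1e-9                                    # guards against flat (zero-weight) patches

    pos_w = patch - patch.min()                   # bright pixels pull the positive centroid
    neg_w = patch.max() - patch                   # dark pixels pull the negative centroid

    pos_c = np.array([(xs * pos_w).sum(), (ys * pos_w).sum()]) / (pos_w.sum() + eps)
    neg_c = np.array([(xs * neg_w).sum(), (ys * neg_w).sum()]) / (neg_w.sum() + eps)

    vec = pos_c - neg_c                           # points from the dark towards the bright centroid
    return vec, np.linalg.norm(vec)               # strength ~ gradient vector magnitude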
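
Illustrative sketch (Division of Gaussians): both saliency items above describe DIVoG as a replacement for centre-surround filters, comparing each pixel with its surroundings across Gaussian pyramid levels. The Python sketch below only captures that general shape with a pyramid down/up pass and an element-wise ratio; the number of levels, the ratio-based difference measure and the final normalisation are assumptions, not the published method.

import cv2
import numpy as np

def divog_like_saliency(gray, levels=4):
    """Approximate per-pixel saliency for a grayscale uint8 image (sketch only)."""
    img = gray.astype(np.float32) + 1.0           # +1 keeps the later division well defined
    down, sizes = img, []
    for _ in range(levels):                       # Gaussian pyramid: repeated downsampling
        sizes.append((down.shape[1], down.shape[0]))
        down = cv2.pyrDown(down)
    up = down
    for size in reversed(sizes):                  # reconstruct a heavily smoothed "surround"
        up = cv2.pyrUp(up, dstsize=size)
    ratio = img / up                              # each pixel versus its smoothed surround
    sal = np.maximum(ratio, 1.0 / ratio) - 1.0    # symmetric measure of local difference
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

A call such as divog_like_saliency(cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)) returns a float map in [0, 1]; it illustrates the pyramid-ratio idea only and makes no claim about matching the speed or accuracy figures reported above.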
