CERES
Browsing by Author "Yang, Lichao"

Now showing 1 - 20 of 22
  • Achieving on-site trustworthy AI implementation in the construction industry: a framework across the AI lifecycle [Open Access]
    (MDPI AG, 2024-12-25) Yang, Lichao; Allen, Gavin; Zhang, Zichao; Zhao, Yifan
    In recent years, the application of artificial intelligence (AI) technology in the construction industry has rapidly emerged, particularly in areas such as site monitoring and project management. This technology has demonstrated great potential for enhancing safety and productivity in construction. However, concerns regarding technical maturity, reliability, safety, and privacy have led to a lack of trust in AI among stakeholders and end users in the construction industry, which slows the industry's intelligent transformation, particularly for on-site AI implementation. This paper reviews frameworks for AI system design across various sectors, together with government regulations and requirements for achieving trustworthy and responsible AI. The principles for AI system design are then determined. Furthermore, a lifecycle design framework specifically tailored for AI systems deployed in the construction industry is proposed. This framework addresses six key phases: planning, data collection, algorithm development, deployment, maintenance, and archiving, and clarifies the design principles and development priorities needed at each phase to enhance AI system trustworthiness and acceptance. It provides design guidance for the implementation of AI in the construction industry, particularly for on-site applications, aiming to facilitate the intelligent transformation of the construction industry.
  • Attention mechanism enhanced spatiotemporal-based deep learning approach for classifying barely visible impact damages in CFRP materials [Open Access]
    (Elsevier, 2024-03-14) Deng, Kailun; Liu, Haochen; Cao, Jun; Yang, Lichao; Du, Weixiang; Xu, Yigeng; Zhao, Yifan
    Funding: This work was partially supported by the Royal Academy of Engineering Industrial Fellowship [grant IF2223B-110], and partially by the Science and Technology Department of Gansu Province Science and Technology Project Funding, 22YF7GA072.
    Most existing machine learning approaches for analysing thermograms mainly focus on either thermal images or pixel-wise temporal profiles of specimens. To fully leverage useful information in thermograms, this article presents a novel spatiotemporal-based deep learning model incorporating an attention mechanism. Using captured thermal image sequences, the model aims to better characterise barely visible impact damages (BVID) in composite materials caused by different impact energy levels. This model establishes the relationship between patterns of BVID in thermography and their corresponding impact energy levels by learning from spatial and temporal information simultaneously. Validation of the model using 100 composite specimens subjected to five different low-velocity impact forces demonstrates its superior performance, with a classification accuracy of over 95%. The proposed approach can contribute to the Structural Health Monitoring (SHM) community by enabling cause analysis of impact incidents based on predicting the potential impact energy levels. This enables more targeted predictive maintenance, which is especially significant in the aviation industry, where any impact incident can have catastrophic consequences.
  • Automatic reconstruction of irregular shape defects in pulsed thermography using deep learning neural network [Open Access]
    (Springer, 2022-07-25) Liu, Haochen; Li, Wenhan; Yang, Lichao; Deng, Kailun; Zhao, Yifan
    Quantitative defect and damage reconstruction play a critical role in industrial quality management. Accurate defect characterisation in Infrared Thermography (IRT), one of the most widely used Non-Destructive Testing (NDT) techniques, always demands adequate pre-knowledge, which poses a challenge to automatic decision-making in maintenance. This paper presents an automatic and accurate defect profile reconstruction method that takes advantage of deep learning Neural Networks (NN). Initially, a fast Finite Element Modelling (FEM) simulation of IRT is introduced to simulate defective specimens. A Mask Region-based Convolutional NN (Mask-RCNN) is proposed to detect and segment the defect using a single thermal frame. A dataset with a single type of defect shape is tested to validate feasibility. Then, a dataset with three mixed defect shapes is inspected to evaluate the method's capability for defect profile reconstruction, where an accuracy of over 90% on Intersection over Union (IoU) is achieved. The results are compared with several state-of-the-art post-processing methods in IRT to demonstrate the method's superiority at detailed defect corners and edges. This research provides solid evidence that deep learning algorithms can be utilised to provide accurate defect profile reconstruction in thermographic NDT, which will contribute to the research community in material degradation analysis and structural health monitoring.
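    The Intersection over Union (IoU) score used to evaluate reconstruction above is a standard segmentation metric; a minimal, library-free sketch with hypothetical axis-aligned boxes (the paper itself scores pixel masks, but the definition is the same):

    ```python
    def iou(box_a, box_b):
        # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter)

    print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
    ```

    An IoU above 0.9, as reported, means predicted and true defect regions overlap almost completely.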
  • Classification of barely visible impact damage in composite laminates using deep learning and pulsed thermographic inspection [Open Access]
    (Springer, 2023-01-31) Deng, Kailun; Liu, Haochen; Yang, Lichao; Addepalli, Sri; Zhao, Yifan
    With the increasingly comprehensive utilisation of Carbon Fibre-Reinforced Polymers (CFRP) in modern industry, defect detection and characterisation of these materials have become very important and draw significant research attention. During the past 10 years, Artificial Intelligence (AI) technologies have been attractive in this area due to their outstanding ability in complex data analysis tasks. Most current AI-based studies on damage characterisation in this field focus on damage segmentation and depth measurement, which also face the bottleneck of lacking adequate experimental data for model training. This paper proposes a new framework to understand the relationship between the Barely Visible Impact Damage features occurring in typical CFRP laminates and their corresponding controlled drop-test impact energy using a Deep Learning approach. A parametric study was conducted in which one hundred CFRP laminates with known material specifications and identical geometric dimensions were subjected to drop-impact tests at five different impact energy levels. Pulsed Thermography was then adopted to reveal the subsurface impact damage in these specimens and to record the damage patterns as temporal sequences of thermal images. A convolutional neural network was then employed to train models that classify the captured thermal images into groups according to their corresponding impact energy levels. Models trained on different time windows and sequence lengths were evaluated, and the best classification accuracy of 99.75% was achieved. Finally, to increase the transparency of the proposed solution, a saliency map is introduced to understand the learning source of the produced models.
  • Dementia classification using a graph neural network on imaging of effective brain connectivity [Open Access]
    (Elsevier, 2023-11-18) Cao, Jun; Yang, Lichao; Sarrigiannis, Ptolemaios Georgios; Blackburn, Daniel; Zhao, Yifan
    Alzheimer's disease (AD) and Parkinson's disease (PD) are two of the most common forms of neurodegenerative disease. The literature suggests that effective brain connectivity (EBC) has the potential to track differences between AD, PD and healthy controls (HC). However, how to effectively use EBC estimations for disease diagnosis remains an open problem. To deal with complex brain networks, the graph neural network (GNN) has become increasingly popular in recent years, yet the effectiveness of combining EBC and GNN techniques remains unexplored in the field of dementia diagnosis. In this study, a novel directed structure learning GNN (DSL-GNN) was developed and applied to imaging of EBC estimations and power spectrum density (PSD) features. In comparison to previous studies on GNNs, our proposed approach enhances the capability to process directional information, which builds the basis for more efficiently applying GNNs to EBC. Another contribution of this study is a new framework for applying univariate and multivariate features simultaneously in a classification task. The proposed framework and DSL-GNN are validated in four discrimination tasks, and our approach exhibited the best performance against existing methods, with the highest accuracies of 94.0% (AD vs. HC), 94.2% (PD vs. HC), 97.4% (AD vs. PD) and 93.0% (AD vs. PD vs. HC). In summary, this research provides a robust analytical framework for dealing with complex brain networks containing causal directional information and shows promising potential for the diagnosis of two of the most common neurodegenerative conditions.
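    The directed message passing that distinguishes a GNN on effective (directional) connectivity from one on undirected networks can be illustrated with a minimal, library-free sketch; the three-node graph, scalar node features, and mean aggregation here are illustrative assumptions, not the DSL-GNN itself:

    ```python
    def propagate(adjacency, features):
        # One message-passing step on a directed graph: node i averages its own
        # feature with the features of its in-neighbours (every j with edge j -> i).
        # Direction matters: an edge j -> i sends a message to i, not to j.
        n = len(features)
        updated = []
        for i in range(n):
            incoming = [features[j] for j in range(n) if adjacency[j][i]]
            updated.append((features[i] + sum(incoming)) / (1 + len(incoming)))
        return updated

    # Edges: 0 -> 1 and 2 -> 1, so only node 1 receives messages.
    adj = [[0, 1, 0],
           [0, 0, 0],
           [0, 1, 0]]
    print(propagate(adj, [3.0, 0.0, 6.0]))  # [3.0, 3.0, 6.0]
    ```

    Reversing an edge changes which node is updated, which is exactly the causal information an undirected model would discard.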
  • Driver behaviour characterization using artificial intelligence techniques in level 3 automated vehicle. [Open Access]
    (Cranfield University, 2021-09) Yang, Lichao; Zhao, Yifan; Brighton, James L.
    Autonomous vehicles free drivers from driving and allow them to engage in non-driving related activities. However, engagement in such activities can reduce drivers' awareness of the driving environment, which poses a potential risk to the take-over process at the current automation level of intelligent vehicles. It is therefore of great importance to monitor the driver's behaviour when the vehicle is in automated driving mode. This research aims to develop a computer vision-based driver monitoring system for autonomous vehicles, which characterises driver behaviour inside the vehicle cabin through visual attention and hand movement, and proves the feasibility of using such features to identify the driver's non-driving related activities. This research further proposes a system that employs both sources of information to identify driving related and non-driving related activities. A novel deep learning-based model has been developed for the classification of such activities. A lightweight model has also been developed for edge computing devices, which trades some recognition accuracy for suitability in further in-vehicle applications. The developed models outperform state-of-the-art methods in terms of classification accuracy. This research also investigates the impact of engagement in non-driving related activities on the take-over process and proposes a categorisation method to group the activities, improving the extendibility of the driver monitoring system to unevaluated activities. The findings of this research are important for the design of take-over strategies that improve driving safety during the control transition in Level 3 automated vehicles.
  • A dual-cameras-based driver gaze mapping system with an application on non-driving activities monitoring [Open Access]
    (IEEE, 2019-09-13) Yang, Lichao; Dong, Kuo; Dmitruk, Arkadiusz Jan; Brighton, James; Zhao, Yifan
    Characterisation of the driver's non-driving activities (NDAs) is of great importance to the design of the take-over control strategy in Level 3 automation. Gaze estimation is a typical approach to monitoring the driver's behaviour, since eye gaze is normally engaged with human activities. However, current eye gaze tracking techniques are either costly or intrusive, which limits their applicability in vehicles. This paper proposes a low-cost and non-intrusive dual-cameras-based gaze mapping system that visualises the driver's gaze using a heat map. The challenges introduced by complex head movement during NDAs and by camera distortion are addressed by proposing a nonlinear polynomial model to establish the relationship between face features and eye gaze on the simulated driver's view. The Root Mean Square Error of this system in the in-vehicle experiment is 7.80±5.99 pixels in the X direction and 4.64±3.47 pixels in the Y direction, at an image resolution of 1440 × 1080 pixels. The system is successfully demonstrated in evaluating three NDAs involving visual attention. This technique, acting as a generic tool to monitor the driver's visual attention, will have wide applications in NDA characterisation for the intelligent design of take-over strategies and driving environment awareness in current and future automated vehicles.
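    The per-axis Root Mean Square Error reported above is a standard accuracy metric for gaze mapping; a minimal sketch with made-up pixel coordinates (the values below are illustrative, not from the paper):

    ```python
    import math

    def rmse(predicted, actual):
        # Root Mean Square Error between predicted and ground-truth gaze positions (pixels).
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

    pred_x = [100.0, 205.0, 310.0]   # hypothetical predicted gaze x-coordinates
    true_x = [104.0, 200.0, 313.0]   # hypothetical ground truth
    print(round(rmse(pred_x, true_x), 2))  # 4.08
    ```

    Computed separately for the X and Y axes, as in the paper, this yields the two per-direction error figures.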
  • The evaluations of the impact of the pilot's visual behaviours on the landing performance by using eye tracking technology [Open Access]
    (Springer, 2023-07-09) Wang, Yifan; Yang, Lichao; Korek, Wojciech; Zhao, Yifan; Li, Wen-Chin
    Introduction. Eye tracking technology can be used to characterise a pilot's visual behaviour as well as to further analyse the workload and status of the pilot, which is crucial for tracking and predicting pilot performance and enhancing flight safety. Research question. This research aims to investigate and identify the visual-related factors that could affect the pilot's landing performance (depending on whether the landing was successful or not). Method. 23 participants performed a landing task in the Future system simulator (FSS) while wearing eye trackers. Their eye tracking parameters, including proportion of fixation count on the primary flight display (PFC on PFD), proportion of fixation count out the window (PFC on OTW), percentage change in pupil diameter (PCPD) and blink count, were used to train an XGBoost classifier according to whether they landed successfully or not. Results & Discussion. The results demonstrated that eye-movement features can be used to classify and predict a pilot's landing performance with an accuracy of 77.02%. Of the four features, PCPD and PFC on PFD are the most important for performance classification. Conclusion. It is practical to classify and predict pilot performance using eye-tracking technologies. The high importance of PCPD and PFC on PFD indicates a correlation between pilots' workload and attention distribution and their performance, which has important implications for future predictive and analytical research on performance. The prediction of performance using eye tracking suggests that pilot status monitoring has a useful application in flight deck design.
  • Fast personal protective equipment detection for real construction sites using deep learning approaches [Open Access]
    (MDPI, 2021-05-17) Wang, Zijian; Wu, Yimin; Yang, Lichao; Thirunavukarasu, Arjun; Evison, Colin; Zhao, Yifan
    Existing deep learning-based Personal Protective Equipment (PPE) detectors can only detect limited types of PPE, and their performance needs to be improved, particularly for deployment on real construction sites. This paper introduces an approach to train and evaluate eight deep learning detectors, for real application purposes, based on You Only Look Once (YOLO) architectures for six classes: helmets in four colours, person, and vest. Meanwhile, a dedicated high-quality dataset, CHV, consisting of 1330 images, is constructed by considering real construction site backgrounds, different gestures, varied angles and distances, and multiple PPE classes. The comparison among the eight models shows that YOLO v5x has the best mAP (86.55%) and YOLO v5s has the fastest speed (52 FPS) on GPU. The detection accuracy of the helmet classes on blurred faces decreases by 7%, while there is no effect on the person and vest classes. The proposed detectors trained on the CHV dataset show superior performance compared to other deep learning approaches on the same dataset. The novel multiclass CHV dataset is open for public use.
  • A full 3D reconstruction of rail tracks using a camera array [Open Access]
    (Elsevier, 2023-12-14) Wang, Yizhong; Liu, Haochen; Yang, Lichao; Durazo-Cardenas, Isidro; Namoano, Bernadin; Zhong, Cheng; Zhao, Yifan
    This research addresses limitations found in existing 3D track reconstruction studies, which often focus solely on specific rail sections or encounter deployment challenges with rolling stock. To overcome these limitations, we propose an innovative solution: a rolling-stock-embedded arch camera array scanning system. The system includes a semi-circumferential focusing vision array, an arch camera holder, and a Computer Numerical Control machine to simulate track traverse. We propose an optimal configuration that balances accuracy, full rail coverage, and modelling efficiency. Sensitivity analysis demonstrates a reconstruction accuracy within 0.4 mm when compared to Lidar-generated ground truth models. Two real-world experiments validate the system's effectiveness following essential data preprocessing. This integrated technique, when combined with rail rolling stock and robotic maintenance platforms, facilitates swift, unmanned, and highly accurate track reconstruction and surveying.
  • The identification of non-driving activities with associated implication on the take-over process [Open Access]
    (MDPI, 2021-12-22) Yang, Lichao; Semiromi, Mahdi Babayi; Xing, Yang; Lv, Chen; Brighton, James; Zhao, Yifan
    In conditionally automated driving, engagement in non-driving activities (NDAs) can be regarded as the main factor that affects the driver's take-over performance, and its investigation is of great importance to the design of an intelligent human–machine interface for a safe and smooth control transition. This paper introduces a 3D convolutional neural network-based system to recognise six types of driver behaviour (four types of NDAs and two types of driving activities) through two video feeds based on head and hand movement. Based on the interaction between driver and object, the selected NDAs are divided into active mode and passive mode. The proposed recognition system achieves 85.87% accuracy for the classification of six activities. The impact of NDAs on the driver's situation awareness and take-over quality, in terms of both activity type and interaction mode, is further investigated. The results show that, at a similar level of maximum lateral error, engagement in NDAs demands more time for drivers to accomplish the control transition, especially for active mode NDAs, which are more mentally demanding and reduce drivers' sensitivity to driving situation changes. Moreover, haptic feedback torque from the steering wheel could help to reduce the duration of the transition process and can be regarded as an effective assistance system for the take-over process.
  • The implication of non-driving activities on situation awareness and take-over performance in level 3 automation [Open Access]
    (IEEE, 2020-11-18) Yang, Lichao; Semiromi, Mahdi Babayi; Auger, Daniel J.; Dmitruk, Arkadiusz; Brighton, James; Zhao, Yifan
    The driver's take-over performance is of great importance for driving safety in conditionally automated driving, since the driver is required to respond appropriately and take control of the vehicle if there is a system failure. The engagement of different non-driving activities (NDAs), considered the main factor affecting the driver's take-over performance, has been investigated in this study from the perspectives of both the driver's situation awareness and take-over quality. The activities are divided into two groups, active interaction mode and passive interaction mode, based on the engagement between human and object. The results suggest that the engagement of NDAs could reduce the driver's situation awareness, and the driver's attention level differs for each activity. In particular, active interaction mode NDAs are more mentally demanding, and drivers are not sensitive to driving situation changes while performing such activities. In addition, there is no significant difference in the maximum lateral error with NDA engagement. However, it takes more time for drivers engaged in NDAs to achieve a safe control transition, and active interaction mode NDAs require even more time. Moreover, the transition process could benefit from steering wheel haptic feedback torque, which can be considered an effective take-over assistance system.
  • Infer thermal information from visual information: a cross imaging modality edge learning (CIMEL) framework [Open Access]
    (MDPI, 2021-11-10) Wang, Shuozhi; Mei, Jianqiang; Yang, Lichao; Zhao, Yifan
    The measurement accuracy and reliability of thermography is largely limited by the relatively low spatial resolution of infrared (IR) cameras in comparison to digital cameras. Using a high-end IR camera to achieve high spatial resolution can be costly or sometimes infeasible due to the high sample rate required. Therefore, there is a strong demand to improve the quality of IR images, particularly at edges, without upgrading the hardware, in the context of surveillance and industrial inspection systems. This paper proposes a novel Conditional Generative Adversarial Network (CGAN)-based framework to enhance IR edges by learning high-frequency features from corresponding visual images. A dual discriminator, focusing on edge and content/background, is introduced to guide the cross imaging modality learning procedure of the U-Net generator in high and low frequencies respectively. Results demonstrate that the proposed framework can effectively enhance barely visible edges in IR images without introducing artefacts, while the content information is well preserved. Unlike most similar studies, this method only requires IR images for testing, which increases its applicability to scenarios where only one imaging modality is available, such as active thermography.
  • Keypoints-based heterogeneous graph convolutional networks for construction [Open Access]
    (Elsevier, 2023-09-22) Wang, Shuozhi; Yang, Lichao; Zhang, Zichao; Zhao, Yifan
    Artificial intelligence algorithms employed for classifying excavator-related activities predominantly rely on sensors embedded within individual machinery or on computer vision (CV) techniques encompassing a large scene. Existing CV-based methods often struggle with images containing multiple excavators and other cooperating machinery. This study presents a novel framework tailored to the classification of excavator activities, accounting for both the excavator itself and the dumpers collaborating with the excavator during operations. Distinct from most existing related studies, this method centres on transformed heterogeneous graph data constructed using the keypoints of all cooperating machinery extracted from an image. The resulting model leverages the relationships between the mechanical components of an excavator in varying activation states and the associations between the excavator and the collaborating machinery. The framework commences with a novel definition of keypoints representing the different machinery relevant to the targeted activities. A customised Machinery Keypoint R-CNN method is then developed to extract these keypoints, forming the basis of the graph nodes. By considering node types, attributes and edges, a Heterogeneous Graph Convolutional Network is finally utilised for activity recognition. The results suggest that the proposed framework can effectively predict earthwork activities (with an accuracy of up to 97.5%) when the image encompasses multiple excavators and cooperating machinery. This solution holds promising potential for the automated measurement and management of earthwork productivity within the construction industry. Code and data are available at: https://github.com/gillesflash/Keypoints-Based-Heterogeneous-Graph-Convolutional-Networks.git.
  • Learning spatio-temporal representations with a dual-stream 3-D residual network for nondriving activity recognition [Open Access]
    (IEEE, 2021-07-28) Yang, Lichao; Shan, Xiaocai; Lv, Chen; Brighton, James; Zhao, Yifan
    Accurate recognition of non-driving activities (NDAs) is important for the design of an intelligent Human Machine Interface to achieve a smooth and safe control transition in conditionally automated driving vehicles. However, characteristics of such activities, such as limited-extent movement and similar backgrounds, pose a challenge to existing 3D convolutional neural network (CNN)-based action recognition methods. In this paper, we propose a dual-stream 3D residual network, named D3D ResNet, to enhance the learning of spatio-temporal representations and improve activity recognition performance. Specifically, a parallel two-stream structure is introduced to focus on the learning of short-time spatial representations and small-region temporal representations. A two-feed driver behaviour monitoring framework is further built to classify four types of NDAs and two types of driving behaviour based on the driver's head and hand movement. A novel NDA dataset has been constructed for the evaluation, on which the proposed D3D ResNet achieves 83.35% average accuracy, at least 5% higher than three selected state-of-the-art methods. Furthermore, this study investigates the spatio-temporal features learned in the hidden layers through saliency maps, which explains the superiority of the proposed model on the selected NDAs.
  • A lightweight temporal attention-based convolution neural network for driver's activity recognition in edge [Open Access]
    (Elsevier, 2023-07-06) Yang, Lichao; Du, Weixiang; Zhao, Yifan
    Low inference latency and accurate response to environment changes play a crucial role in the automated driving system, especially in the current Level 3 automated driving. Achieving the rapid and reliable recognition of driver's non-driving related activities (NDRAs) is important for designing an intelligent takeover strategy that ensures a safe and quick control transition. This paper proposes a novel lightweight temporal attention-based convolutional neural network (LTA-CNN) module dedicated to edge computing platforms, specifically for NDRAs recognition. This module effectively learns spatial and temporal representations at a relatively low computational cost. Its superiority has been demonstrated in an NDRA recognition dataset, achieving 81.01% classification accuracy and an 8.37% increase compared to the best result of the efficient network (MobileNet V3) found in the literature. The inference latency has been evaluated to demonstrate its effectiveness in real applications. The latest NVIDIA Jetson AGX Orin could complete one inference in only 63 ms.
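    Edge inference latency of the kind quoted above (63 ms per inference) is typically measured by averaging wall-clock time over repeated forward passes; a sketch with a cheap stand-in function in place of a real network, since the actual LTA-CNN model is not reproduced here:

    ```python
    import time

    def mean_latency_ms(model, sample, runs=100):
        # Average wall-clock latency of one call to `model`, in milliseconds.
        start = time.perf_counter()
        for _ in range(runs):
            model(sample)
        return (time.perf_counter() - start) / runs * 1000.0

    # Stand-in "model": an arbitrary computation used here only for illustration.
    latency = mean_latency_ms(lambda x: sum(v * v for v in x), list(range(1000)))
    print(f"{latency:.3f} ms per inference")
    ```

    In practice a few warm-up runs are usually discarded first, since the initial calls on an accelerator include one-off initialisation cost.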
  • Pattern recognition of barely visible impact damage in carbon composites using pulsed thermography [Open Access]
    (IEEE, 2021-12-13) Zhou, Jia; Du, Weixiang; Yang, Lichao; Deng, Kailun; Addepalli, Sri; Zhao, Yifan
    This paper proposes a novel framework to characterise the morphological pattern of Barely Visible Impact Damage using machine learning. Initially, a sequence of image processing methods is introduced to extract the damage contour, which is then described by a Fourier descriptor-based filter. The uncertainty associated with the damage contour under the same impact energy level is then investigated. A variety of geometric features of the contour are extracted to develop an AI model, which effectively groups the tested 100 samples impacted by 5 different impact energy levels with an accuracy of 96%. Predictive polynomial models are finally established to link the impact energy to the three selected features. It is found that the major axis length of the damage has the best prediction performance, with an R² value of up to 0.97. Additionally, impact damage caused by low energy exhibits higher uncertainty than that caused by high energy, indicating lower predictability.
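    The R² (coefficient of determination) used above to score the polynomial predictors can be computed directly from predictions and observations; the impact-energy values below are hypothetical, chosen only to make the arithmetic visible:

    ```python
    def r_squared(predicted, observed):
        # Coefficient of determination: 1 - SS_res / SS_tot.
        mean_obs = sum(observed) / len(observed)
        ss_res = sum((o - p) ** 2 for p, o in zip(predicted, observed))
        ss_tot = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - ss_res / ss_tot

    obs = [5.0, 10.0, 15.0, 20.0, 25.0]    # hypothetical true impact energies (J)
    pred = [5.5, 9.0, 15.5, 19.0, 26.0]    # hypothetical polynomial-model predictions
    print(round(r_squared(pred, obs), 3))  # 0.986
    ```

    An R² near 1, as reported for the major axis length, means the feature explains almost all of the variance in impact energy.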
  • Recognition of visual-related non-driving activities using a dual-camera monitoring system [Open Access]
    (Elsevier, 2021-03-25) Yang, Lichao; Dong, Kuo; Ding, Yan; Brighton, James; Zhan, Zhenfei; Zhao, Yifan
    For a Level 3 automated vehicle, according to the SAE International Automation Levels definition (J3016), the identification of non-driving activities (NDAs) that the driver is engaging with is of great importance in the design of an intelligent take-over interface. Much of the existing literature focuses on the driver take-over strategy with associated Human-Machine Interaction design. This paper proposes a dual-camera-based framework to identify and track NDAs that require visual attention. This is achieved by mapping the driver's gaze, using a nonlinear system identification approach, onto the object scene recognised by a deep learning algorithm. A novel gaze-based region of interest (ROI) selection module is introduced and contributes about a 30% improvement in average success rate and about a 60% reduction in average processing time compared to the results without this module. The framework has been successfully demonstrated to identify five types of NDAs requiring visual attention with an average success rate of 86.18%. The outcome of this research could be applicable to the identification of other NDAs, and the tracking of NDAs within a certain time window could potentially be used to evaluate the driver's attention level for both automated and human-driven vehicles.
  • A refined non-driving activity classification using a two-stream convolutional neural network [Open Access]
    (IEEE, 2020-06-29) Yang, Lichao; Yang, Tingyu; Liu, Haochen; Shan, Xiaocai; Brighton, James; Skrypchuk, Lee; Mouzakitis, Alexandros; Zhao, Yifan
    It is of great importance to monitor the driver's status to achieve an intelligent and safe take-over transition in the Level 3 automated driving vehicle. We present a camera-based system to recognise non-driving activities (NDAs), which may lead to different cognitive capabilities for take-over, based on a fusion of spatial and temporal information. The region of interest (ROI) is automatically selected based on the extracted masks of the driver and the object/device being interacted with. Then, the RGB image of the ROI (the spatial stream) and its associated current and historical optical flow frames (the temporal stream) are fed into a two-stream convolutional neural network (CNN) for the classification of NDAs. Such an approach is able to identify not only the object/device but also the interaction mode between the object and the driver, which enables a refined NDA classification. In this paper, we evaluated the performance of classifying 10 NDAs involving two types of devices (tablet and phone) and 5 types of tasks (emailing, reading, watching videos, web-browsing and gaming) for 10 participants. Results show that the proposed system improves the averaged classification accuracy from 61.0%, when using a single spatial stream, to 90.5%.
  • A review of digital twin technologies for enhanced sustainability in the construction industry [Open Access]
    (MDPI, 2024-04-16) Zhang, Zichao; Wei, Zhuangkun; Court, Samuel; Yang, Lichao; Wang, Shuozhi; Thirunavukarasu, Arjun; Zhao, Yifan
    Carbon emissions present a pressing challenge to the traditional construction industry, urging a fundamental shift towards more sustainable practices and materials. Recent advances in sensors, data fusion techniques, and artificial intelligence have enabled integrated digital technologies (e.g., digital twins) as a promising trend to achieve emission reduction and net-zero. While digital twins in the construction sector have shown rapid growth in recent years, most applications focus on the improvement of productivity, safety and management. There is a lack of critical review and discussion of state-of-the-art digital twins to improve sustainability in this sector, particularly in reducing carbon emissions. This paper reviews the existing research where digital twins have been directly used to enhance sustainability throughout the entire life cycle of a building (including design, construction, operation and maintenance, renovation, and demolition). Additionally, we introduce a conceptual framework for this industry, which involves the elements of the entire digital twin implementation process, and discuss the challenges faced during deployment, along with potential research opportunities. A proof-of-concept example is also presented to demonstrate the validity of the proposed conceptual framework and potential of digital twins for enhanced sustainability. This study aims to inspire more forward-thinking research and innovation to fully exploit digital twin technologies and transform the traditional construction industry into a more sustainable sector.
  • «
  • 1 (current)
  • 2
  • »

Quick Links

  • About our Libraries
  • Cranfield Research Support
  • Cranfield University

Useful Links

  • Accessibility Statement
  • CERES Takedown Policy

Contacts

Cranfield Campus
Cranfield, MK43 0AL
United Kingdom
T: +44 (0) 1234 750111
  • Cranfield University at Shrivenham
  • Shrivenham, SN6 8LA
  • United Kingdom
  • Email us: researchsupport@cranfield.ac.uk for REF Compliance or Open Access queries

Cranfield University copyright © 2002-2025