Browsing by Author "Shan, Xiaocai"
Now showing 1 - 8 of 8
Item Open Access
An adaptive pig face recognition approach using convolutional neural networks (Elsevier, 2020-04-16)
Marsot, Mathieu; Mei, Jiangqiang; Shan, Xiaocai; Ye, Liyong; Feng, Peng; Yan, Xuejun; Li, Chenfan; Zhao, Yifan
The evolution of agriculture towards intensive farming leads to an increasing demand for animal identification with high traceability, driven by the need for quality control and welfare management in agricultural animals. Automatic identification of individual animals is an important step towards individualised care in terms of disease detection and control, and improvement of food quality. For example, as feeding patterns can differ amongst pigs in the same pen, even in homogeneous groups, automatic registration shows the most potential when applied to individual pigs. In the EU, for instance, this capability is required for certification purposes. Although RFID technology has been gradually developed and widely applied to this task, chip implanting can still be time-consuming and costly in current practical applications. In this paper, a novel framework composed of computer vision algorithms, machine learning and deep learning techniques is proposed to offer a relatively low-cost and scalable solution for pig recognition. Firstly, pig faces and eyes are detected automatically by two Haar feature-based cascade classifiers and one shallow convolutional neural network to extract high-quality images. Secondly, face recognition is performed by a deep convolutional neural network. Additionally, class activation maps generated by Grad-CAM and saliency maps are utilised to visually understand how the discriminating parameters have been learned by the neural network. Applied to 10 randomly selected pigs filmed under farm conditions, the proposed method demonstrates superior performance over the state-of-the-art method, with an accuracy of 83% over 320 testing images. The outcome of this study will facilitate the real-world application of AI-based animal identification in swine production.
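The detection front end described in this abstract combines Haar feature-based cascade classifiers with a shallow CNN. As a minimal, hedged sketch of the cascade step only, the snippet below uses OpenCV with a hypothetical pre-trained cascade file (pig_face_cascade.xml); the authors' own cascades, eye detector and recognition network are not reproduced here.

```python
# Minimal sketch of a Haar-cascade face-detection front end, assuming a
# pre-trained pig-face cascade stored at "pig_face_cascade.xml" (hypothetical
# file name, not the authors' released model).
import cv2

face_cascade = cv2.CascadeClassifier("pig_face_cascade.xml")  # hypothetical model file

def extract_face_crops(image_path, size=(224, 224)):
    """Detect candidate pig faces and return resized crops for a recognition CNN."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns one (x, y, w, h) box per detected face
    boxes = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(img[y:y + h, x:x + w], size) for (x, y, w, h) in boxes]
```

In the framework described above, crops like these would then be passed to the deep recognition CNN trained on the individual pig identities.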
Item Open Access
Brain functional and effective connectivity based on electroencephalography recordings: a review (Wiley, 2021-10-20)
Cao, Jun; Zhao, Yifan; Shan, Xiaocai; Wei, Hua-Liang; Guo, Yuzhu; Chen, Liangyu; Erkoyuncu, John Ahmet; Sarrigiannis, Ptolemaios Georgios
Functional connectivity and effective connectivity of the human brain, representing statistical dependence and directed information flow between cortical regions, significantly contribute to the study of the intrinsic brain network and its functional mechanism. Many recent studies on electroencephalography (EEG) have focused on modeling and estimating brain connectivity, due to increasing evidence that it can help better understand various neurological conditions. However, there is a lack of a comprehensive, up-to-date review of EEG-based brain connectivity studies, particularly on visualization options and associated machine learning applications, aiming to translate those techniques into useful clinical tools. This article reviews EEG-based functional and effective connectivity studies undertaken over the last few years, in terms of estimation, visualization, and applications associated with machine learning classifiers. Methods are explored and discussed along several dimensions: linear or nonlinear, parametric or nonparametric, and time-based, frequency-based or time-frequency-based. This is followed by a review of brain connectivity visualization methods, grouped into heat maps, data statistics and head maps, aiming to explore the variation of connectivity across different brain regions. Finally, the current challenges of related research and a roadmap for future work are presented.
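Among the frequency-domain functional connectivity estimators covered by reviews such as the one above, magnitude-squared coherence is one of the simplest. The snippet below is an illustrative sketch only, computed with SciPy on synthetic two-channel data at an assumed sampling rate of 256 Hz; it does not reproduce any specific method from the review.

```python
# Illustrative magnitude-squared coherence between two synthetic "EEG" channels.
import numpy as np
from scipy.signal import coherence

fs = 256                                   # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)        # channel 1
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)  # channel 2

# Coherence spectrum: values between 0 and 1 at each frequency bin
f, cxy = coherence(x, y, fs=fs, nperseg=512)
print(f"peak coherence {cxy.max():.2f} at {f[cxy.argmax()]:.1f} Hz")
```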
Item Open Access
Learning spatio-temporal representations with a dual-stream 3-D residual network for nondriving activity recognition (IEEE, 2021-07-28)
Yang, Lichao; Shan, Xiaocai; Lv, Chen; Brighton, James; Zhao, Yifan
Accurate recognition of non-driving activity (NDA) is important for the design of an intelligent human-machine interface that achieves a smooth and safe control transition in conditionally automated driving vehicles. However, characteristics of such activities, such as limited-extent movement and similar backgrounds, pose a challenge to existing 3D convolutional neural network (CNN)-based action recognition methods. In this paper, we propose a dual-stream 3D residual network, named D3D ResNet, to enhance the learning of spatio-temporal representations and improve activity recognition performance. Specifically, a parallel two-stream structure is introduced to focus on the learning of short-time spatial representations and small-region temporal representations. A two-feed driver behaviour monitoring framework is further built to classify 4 types of NDAs and 2 types of driving behaviour based on the driver's head and hand movements. A novel NDA dataset has been constructed for the evaluation, on which the proposed D3D ResNet achieves 83.35% average accuracy, at least 5% above three selected state-of-the-art methods. Furthermore, this study investigates the spatio-temporal features learned in the hidden layers through saliency maps, which explains the superiority of the proposed model on the selected NDAs.

Item Open Access
A refined non-driving activity classification using a two-stream convolutional neural network (IEEE, 2020-06-29)
Yang, Lichao; Yang, Tingyu; Liu, Haochen; Shan, Xiaocai; Brighton, James; Skrypchuk, Lee; Mouzakitis, Alexandros; Zhao, Yifan
It is of great importance to monitor the driver's status to achieve an intelligent and safe take-over transition in level 3 automated driving vehicles. We present a camera-based system that recognises non-driving activities (NDAs), which may lead to different cognitive capabilities for take-over, based on a fusion of spatial and temporal information. The region of interest (ROI) is automatically selected based on the extracted masks of the driver and the object/device being interacted with. Then, the RGB image of the ROI (the spatial stream) and its associated current and historical optical flow frames (the temporal stream) are fed into a two-stream convolutional neural network (CNN) for the classification of NDAs. Such an approach is able to identify not only the object/device but also the interaction mode between the object and the driver, which enables a refined NDA classification. In this paper, we evaluated the performance of classifying 10 NDAs involving two types of devices (tablet and phone) and 5 types of tasks (emailing, reading, watching videos, web-browsing and gaming) for 10 participants. Results show that the proposed system improves the averaged classification accuracy from 61.0% when using a single spatial stream to 90.5%.

Item Open Access
A revised Hilbert-Huang transformation to track non-stationary association of electroencephalography signals (IEEE, 2021-04-28)
Shan, Xiaocai; Huo, Shoudong; Yang, Lichao; Cao, Jun; Zou, Jiaru; Chen, Liangyu; Sarrigiannis, Ptolemaios Georgios; Zhao, Yifan
The time-varying cross-spectrum method has been used to effectively study transient and dynamic brain functional connectivity between non-stationary electroencephalography (EEG) signals. The wavelet-based cross-spectrum is one of the most widely implemented methods, but it is limited by the spectral leakage caused by the finite length of the basis function, which impacts the time and frequency resolutions. This paper proposes a new time-frequency brain functional connectivity analysis framework to track the non-stationary association of two EEG signals based on a Revised Hilbert-Huang Transform (RHHT). The framework estimates the cross-spectrum of decomposed components of the EEG, followed by a surrogate significance test. The results of two simulation examples demonstrate that, within a certain statistical confidence level, the proposed framework outperforms the wavelet-based method in terms of accuracy and time-frequency resolution. A case study on classifying epileptic patients and healthy controls using interictal, seizure-free EEG data is also presented. The results suggest that the proposed method has the potential to better differentiate these two groups, benefiting from the enhanced measure of dynamic time-frequency association.
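The cross-spectrum in the item above is built from the analytic signal of decomposed EEG components. The snippet below is only a schematic of that Hilbert step on two synthetic components (assumed 256 Hz sampling); the decomposition, the revisions to the transform and the surrogate significance test described in the paper are not reproduced.

```python
# Schematic of the Hilbert step behind a Hilbert-Huang style cross-spectrum:
# instantaneous amplitude and phase of two components via the analytic signal.
import numpy as np
from scipy.signal import hilbert

fs = 256                                          # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)
c1 = np.cos(2 * np.pi * 8 * t)                    # a decomposed component of signal 1
c2 = np.cos(2 * np.pi * 8 * t + np.pi / 4)        # a decomposed component of signal 2

a1, a2 = hilbert(c1), hilbert(c2)                 # analytic signals
amp = np.abs(a1) * np.abs(a2)                     # joint instantaneous amplitude
dphi = np.angle(a1 * np.conj(a2))                 # instantaneous phase difference
cross = amp * np.exp(1j * dphi)                   # time-varying cross-term
print(f"mean phase lag: {np.angle(cross.mean()):.2f} rad")
```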
Item Open Access
Spatial–temporal graph convolutional network for Alzheimer classification based on brain functional connectivity imaging of electroencephalogram (Wiley, 2022-06-25)
Shan, Xiaocai; Cao, Jun; Huo, Shoudong; Chen, Liangyu; Sarrigiannis, Ptolemaios Georgios; Zhao, Yifan
Functional connectivity of the human brain, representing the statistical dependence of information flow between cortical regions, significantly contributes to the study of the intrinsic brain network and its functional mechanism. To fully explore its potential in the early diagnosis of Alzheimer's disease (AD) using electroencephalogram (EEG) recordings, this article introduces a novel dynamical spatial–temporal graph convolutional neural network (ST-GCN) for better classification performance. Unlike existing studies that are based on either topological brain function characteristics or temporal features of EEG, the proposed ST-GCN considers both the adjacency matrix of functional connectivity from multiple EEG channels and the corresponding dynamics of single EEG channels simultaneously. Unlike traditional graph convolutional neural networks, the proposed ST-GCN makes full use of the constrained spatial topology of functional connectivity and the discriminative dynamic temporal information represented by the 1D convolution. We conducted extensive experiments on a clinical EEG data set of AD patients and healthy controls. The results demonstrate that the proposed method achieves better classification performance (92.3%) than the state-of-the-art methods. This approach can not only help to diagnose AD but also improve understanding of the effect of normal ageing on brain network characteristics, a step towards accurately diagnosing the condition from resting-state EEG.
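As a rough, illustrative schematic of the two ingredients named above (spatial mixing through a connectivity adjacency matrix and 1D temporal convolution over each channel), the PyTorch block below uses assumed shapes (19 channels, 256 samples) and placeholder data; it is not the authors' ST-GCN architecture.

```python
# Schematic spatial-temporal block: temporal Conv1d per channel plus graph
# mixing via a functional-connectivity adjacency matrix (illustrative only).
import torch
import torch.nn as nn

class TinySTBlock(nn.Module):
    def __init__(self, n_channels=19, t_len=256, n_classes=2):
        super().__init__()
        self.temporal = nn.Conv1d(n_channels, n_channels, kernel_size=5, padding=2)
        self.theta = nn.Linear(t_len, 64)            # per-channel embedding
        self.head = nn.Linear(n_channels * 64, n_classes)

    def forward(self, x, adj):
        # x: (batch, n_channels, t_len) EEG; adj: (n_channels, n_channels) connectivity
        x = self.temporal(x)                         # 1D convolution along time
        x = torch.einsum("ij,bjt->bit", adj, x)      # spatial mixing via the graph
        x = torch.relu(self.theta(x))                # channel-wise feature transform
        return self.head(x.flatten(1))               # classification logits

model = TinySTBlock()
eeg = torch.randn(8, 19, 256)                        # placeholder batch of recordings
adj = torch.eye(19)                                  # placeholder adjacency (identity)
print(model(eeg, adj).shape)                         # torch.Size([8, 2])
```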
Item Open Access
Ultra-high-resolution time-frequency analysis of EEG to characterise brain functional connectivity with the application in Alzheimer's disease (IOP Publishing, 2022-08-11)
Cao, Jun; Zhao, Yifan; Shan, Xiaocai; Blackburn, Daniel; Wei, Jize; Erkoyuncu, John Ahmet; Chen, Liangyu; Sarrigiannis, Ptolemaios G.
Objective. This study aims to explore the potential of high-resolution brain functional connectivity based on electroencephalogram, a non-invasive low-cost technique, to be translated into a long-overdue biomarker and a diagnostic method for Alzheimer's disease (AD). Approach. The paper proposes a novel ultra-high-resolution time-frequency nonlinear cross-spectrum method to construct a promising biomarker of AD pathophysiology. Specifically, using the peak frequency estimated from a revised Hilbert–Huang transformation (RHHT) cross-spectrum as a biomarker, a support vector machine classifier is used to distinguish AD from healthy controls (HCs). Main results. With the combination of the proposed biomarker and machine learning, we achieved a promising accuracy of 89%. The proposed method performs better than the wavelet cross-spectrum and other functional connectivity measures in the temporal or frequency domain, particularly in the Full, Delta and Alpha bands. In addition, a novel visualisation approach developed from topography is introduced to represent the brain functional connectivity, with which the difference between AD and HCs can be clearly displayed. The interconnections between posterior and other brain regions are markedly affected in AD. Significance. These findings imply that the proposed RHHT approach could better track dynamic and nonlinear functional connectivity information, paving the way for the development of a novel diagnostic approach.

Item Open Access
Using interictal seizure-free EEG data to recognise patients with epilepsy based on machine learning of brain functional connectivity (Elsevier, 2021-03-12)
Cao, Jun; Grajcar, Kacper; Shan, Xiaocai; Zhao, Yifan; Zou, Jiaru; Chen, Liangyu; Li, Zhiqing; Grunewald, Richard; Zis, Panagiotis; De Marco, Matteo; Unwin, Zoe; Blackburn, Daniel; Sarrigiannis, Ptolemaios G.
Most seizures in adults with epilepsy occur rather infrequently and, as a result, the interictal EEG plays a crucial role in the diagnosis and classification of epilepsy. However, empirical interpretation of a first EEG in adult patients has a very low sensitivity, ranging between 29% and 55%. Useful EEG information remains buried within the signals of seizure-free EEG epochs, far beyond the observational capabilities of any specialised physician in this field. Unlike most existing works, which focus on either seizure data or single-variate methods, we introduce a multi-variate method to characterise sensor-level brain functional connectivity from interictal EEG data in order to identify patients with generalised epilepsy. A total of 9 connectivity features based on 5 different measures in the time, frequency and time-frequency domains have been tested. The solution has been validated by the K-Nearest Neighbour algorithm, classifying an epilepsy group (EG) versus healthy controls (HC) and subsequently against another cohort of patients characterised by non-epileptic attacks (NEAD), a psychogenic type of disorder. A high classification accuracy (97%) was achieved for EG vs HC, while revealing significant spatio-temporal deficits in the frontocentral areas in the beta frequency band. For EG vs NEAD, the classification accuracy was only about 73%, which might reflect the well-described coexistence of NEAD with epileptic attacks. Our work demonstrates that seizure-free interictal EEG data can be used to accurately distinguish patients with generalised epilepsy from HC, and that more systematic work is required in this direction, aiming to produce a clinically useful diagnostic method.
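As an illustrative sketch of the validation step named above, the snippet below fits a K-Nearest Neighbour classifier to placeholder connectivity features with scikit-learn; the feature matrix, group sizes and neighbour count are assumptions, not the study's actual data or settings.

```python
# K-Nearest Neighbour classification of connectivity features (placeholder data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 9))           # 60 recordings x 9 connectivity features (placeholder)
y = np.repeat([0, 1], 30)              # 0 = HC, 1 = EG (placeholder labels)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(knn, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```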