On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Frontiers of Information Technology & Electronic Engineering 

Accepted manuscript available online (unedited version)


VG-DOCoT: a novel DO-Conv and transformer framework via VAE-GAN technique for EEG emotion recognition


Author(s):  Yanping ZHU, Lei HUANG, Jixin CHEN, Shenyun WANG, Fayu WAN, Jianan CHEN

Affiliation(s):  Nanjing University of Information Science and Technology, School of Electronic and Information Engineering, Nanjing 210044, China

Corresponding email(s):  001520@nuist.edu.cn, 20211249221@nuist.edu.cn, 202212490689@nuist.edu.cn, wangsy2006@126.com, 002470@nuist.edu.cn, 202212490688@nuist.edu.cn

Key Words:  Emotion recognition; EEG; Depthwise over-parameterized convolutional (DO-Conv); Transformer; VAE-GAN


Yanping ZHU, Lei HUANG, Jixin CHEN, Shenyun WANG, Fayu WAN, Jianan CHEN. VG-DOCoT: a novel DO-Conv and transformer framework via VAE-GAN technique for EEG emotion recognition[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2300781

@article{FITEE.2300781,
title="VG-DOCoT: a novel DO-Conv and transformer framework via VAE-GAN technique for EEG emotion recognition",
author="Yanping ZHU and Lei HUANG and Jixin CHEN and Shenyun WANG and Fayu WAN and Jianan CHEN",
journal="Frontiers of Information Technology \& Electronic Engineering",
year="in press",
publisher="Zhejiang University Press \& Springer",
doi="10.1631/FITEE.2300781"
}

%0 Journal Article
%T VG-DOCoT: a novel DO-Conv and transformer framework via VAE-GAN technique for EEG emotion recognition
%A Yanping ZHU
%A Lei HUANG
%A Jixin CHEN
%A Shenyun WANG
%A Fayu WAN
%A Jianan CHEN
%J Frontiers of Information Technology & Electronic Engineering
%P
%@ 2095-9184
%D in press
%I Zhejiang University Press & Springer
doi="https://doi.org/10.1631/FITEE.2300781"

TY - JOUR
T1 - VG-DOCoT: a novel DO-Conv and transformer framework via VAE-GAN technique for EEG emotion recognition
A1 - Yanping ZHU
A1 - Lei HUANG
A1 - Jixin CHEN
A1 - Shenyun WANG
A1 - Fayu WAN
A1 - Jianan CHEN
JO - Frontiers of Information Technology & Electronic Engineering
SP -
EP -
SN - 2095-9184
Y1 - in press
PB - Zhejiang University Press & Springer
DO - https://doi.org/10.1631/FITEE.2300781
ER -


Abstract: 
Human emotions are intricate psychological phenomena that reflect an individual's current physiological and psychological state. Emotions strongly influence human behavior, cognition, communication, and decision-making. However, current emotion recognition methods often suffer from suboptimal performance and limited scalability in practical applications. To address this problem, a novel electroencephalogram (EEG) emotion recognition network named VG-DOCoT is proposed, which is based on depthwise over-parameterized convolutional (DO-Conv), Transformer, and VAE-GAN structures. Specifically, in preprocessing, differential entropy features are extracted from the EEG signals and mapped into temporal, spatial, and frequency representations. To enlarge the training data, VAE-GAN is employed for data augmentation. A novel convolution module, DO-Conv, replaces the traditional convolution layer to improve the network, and a Transformer structure is introduced into the framework to capture global dependencies in the EEG signals. Using the proposed model, binary classification on the DEAP dataset achieves accuracies of 92.52% for arousal and 92.27% for valence, and ternary classification on the SEED dataset (neutral, positive, and negative emotions) achieves an average prediction accuracy of 93.77%. The proposed method thus significantly improves the accuracy of EEG-based emotion recognition.
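
The abstract outlines a pipeline of differential-entropy (DE) feature maps, VAE-GAN data augmentation, DO-Conv layers, and a Transformer encoder. The sketch below is a minimal PyTorch illustration of the two trainable building blocks named in the abstract: a DO-Conv layer (a depthwise kernel folded into a conventional kernel) followed by a small Transformer encoder over spatial tokens. It is not the authors' implementation; the layer sizes, the 9x9 electrode-grid input, the five frequency bands, and the classifier head are assumptions made for illustration only.

# Minimal sketch of a DO-Conv layer and a DO-Conv + Transformer classifier.
# NOT the authors' code: shapes, depths, and defaults are illustrative assumptions.
import torch
import torch.nn as nn


class DOConv2d(nn.Module):
    """Depthwise over-parameterized 2D convolution (sketch).

    A depthwise kernel D is trained jointly with a conventional kernel W; the two
    are folded into one effective kernel before the convolution, so inference cost
    matches an ordinary Conv2d.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.k, self.padding = kernel_size, padding
        d_mul = kernel_size * kernel_size  # depth multiplier (assumed default D_mul = k*k)
        # Conventional weight W: (out_ch, in_ch, d_mul)
        self.W = nn.Parameter(torch.randn(out_ch, in_ch, d_mul) * 0.02)
        # Depthwise weight D: (in_ch, d_mul, k*k), initialized to identity so the
        # layer starts out equivalent to a standard convolution with weights W
        self.D = nn.Parameter(torch.eye(d_mul).repeat(in_ch, 1, 1))
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        # Fold W and D into one kernel: W_eff[o, c, :] = sum_d W[o, c, d] * D[c, d, :]
        w_eff = torch.einsum("ocd,cdk->ock", self.W, self.D)
        w_eff = w_eff.view(self.W.shape[0], self.W.shape[1], self.k, self.k)
        return nn.functional.conv2d(x, w_eff, self.bias, padding=self.padding)


class VGDOCoTSketch(nn.Module):
    """Toy skeleton: DO-Conv feature extractor -> Transformer encoder -> classifier."""

    def __init__(self, in_ch=5, n_classes=2, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            DOConv2d(in_ch, 32), nn.BatchNorm2d(32), nn.ReLU(),
            DOConv2d(32, dim), nn.BatchNorm2d(dim), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                       # x: (batch, bands, H, W) DE feature maps
        f = self.conv(x)                        # (batch, dim, H, W)
        tokens = f.flatten(2).transpose(1, 2)   # (batch, H*W, dim) spatial tokens
        z = self.transformer(tokens).mean(dim=1)
        return self.head(z)


if __name__ == "__main__":
    # Assumed input: DE features over 5 frequency bands mapped onto a 9x9 electrode grid.
    x = torch.randn(8, 5, 9, 9)
    print(VGDOCoTSketch(n_classes=2)(x).shape)  # torch.Size([8, 2])

The VAE-GAN augmentation stage described in the abstract is omitted here; in such a pipeline it would be applied to the DE feature maps to enlarge the training set before fitting a classifier of this form.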
