2023; 21(4): 701-714  https://doi.org/10.9758/cpn.22.1028
Acoustic and Subjective Basis of Emotional Perception in Comatose Patients: A Comparative Study
Galina V. Portnova1,2, Elena V. Proskurnina3
1Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology of the Russian Academy of Sciences, Moscow, Russia
2Department of Scientific Activities, Pushkin Institute of Russian Language, Moscow, Russia
3Laboratory of Molecular Biology, Research Centre for Medical Genetics, Moscow, Russia
Correspondence to: Elena V. Proskurnina
Laboratory of Molecular Biology, Research Centre for Medical Genetics, Moskvorechye St. 1, Moscow 115522, Russia
E-mail: proskurnina@gmail.com
ORCID: https://orcid.org/0000-0002-8243-6339
Received: September 19, 2022; Revised: February 15, 2023; Accepted: March 12, 2023; Published online: June 29, 2023.
© The Korean College of Neuropsychopharmacology. All rights reserved.

This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Objective: Acoustic stimulation in unconscious patients may improve diagnostic assessment and the effectiveness of rehabilitation procedures. We aimed to investigate the event-related potential (ERP) response to emotional auditory stimuli in comatose patients.
Methods: We measured nonlinear and linear electroencephalogram (EEG) features, performed an acoustic analysis of the stimulus parameters, and assessed subjective emotional ratings of the stimulus characteristics.
Results: Patients with better outcomes showed recognizable ERP responses and significant changes in nonlinear EEG features in response to emotional sounds, unlike patients with worse outcomes. The response of comatose patients was attributable to the acoustic features of the emotional sounds, whereas the EEG response of healthy subjects was associated with their subjective feelings. The comatose patients demonstrated differing EEG activity for neutral and emotional sounds.
Conclusion: Thus, EEG reactivity to emotional stimuli was associated with a better outcome in comatose patients. The study suggests substantial differences in the perception of emotional stimuli between the healthy and the unconscious brain.
Keywords: Event-related potential; Coma, post head injury; Patient outcome; Sounds, emotional aspects; Sounds, pitch, loudness
INTRODUCTION

Previous studies of comatose patients’ reactivity showed that unconscious patients demonstrated higher responses to emotionally charged stimuli than to neutral stimuli [1-3]. Some researchers found that unconscious patients or sleeping subjects responded to emotional stimuli such as their names or alarm sounds, whereas comparable neutral stimuli did not elicit any significant response [2,4].

Considering some previous findings, we could hypothesize that emotional perception in comatose patients is not the same as emotional perception in healthy subjects [5]. Patients are known to demonstrate emotional impairment even after mild or moderate traumatic brain injury. For example, post-traumatic emotional changes include an impaired capacity for social activity [6], impaired interpretation of non-verbal communication and increased auditory processing time, a lack of emotional attachment and empathy, a lack of warmth in social interactions, and chronic social and emotional deficits [7-10]. For comas caused by severe traumatic brain injury, we could therefore expect a significantly impaired perception of emotionally charged stimuli. According to memories reported by comatose patients, they could not distinguish whether they were awake or dreaming, or whether reality was actual or imagined; they reported only unpleasant sensations such as pain, cold, or thirst [11]. At the same time, hearing may be the last sense to be lost in comatose patients, and some patients said that they had recognized the voice of a family member and remembered a familiar voice pronouncing the patient’s name or some personal words [12-14].

To investigate the processing of emotional stimuli in comatose patients, we applied event-related potential (ERP) analysis, which allowed us to recognize the cognitive and mental functions involved in emotional perception [15]. As previously shown, some ERP components and parameters have particular prognostic value for unconscious patients with regard to their association with emotional perception and cognitive functions; these include the P300 wave [16,17], earlier positive components, negative components [18-21], and late negative components [22]. At the same time, emotional perception has been widely investigated using various electroencephalogram (EEG) parameters such as spectral power, fractal dimension, Hjorth complexity (HC), and other features [23]. Previous findings have also demonstrated changes in theta-, beta-, and alpha-rhythm dynamics during emotional stimulation [24]. These features could also have excellent prognostic value for comatose patients [25]. In addition, the fractal dimension was previously associated with the perception of emotional stimuli; an increase in this EEG parameter was previously associated with emotional arousal, empathy, fear, and other emotional states and could be revealed in patients with a mental disability or brain damage [5,26-28]. The HC was also associated with emotional states such as irritation, happiness, or fear [29].

Our approach to emotional perception in coma focused not only on the response to emotional stimuli but also on the specific emotional acoustic features that could induce the electrophysiological response. Based on previous findings [30], we hypothesized that the response to emotional stimuli could be associated with particular acoustic parameters, such as pitch or loudness [31,32]. At the same time, we assumed that it is not the average pitch and loudness of a sound that indicates its emotional charge and valence, but rather their dynamics. In particular, the rise and fall of the voice’s pitch and loudness during oral presentations can increase the interest and attention of listeners [33].

We used a spectrum of emotionally significant sounds selected to cover different levels of emotional significance and variability of acoustic indicators. For these sounds, in addition to the average ‘pitch’ and ‘loudness’ indicators studied previously [21,32], temporal dynamics (the variability of the sound over time) were taken into account [34]. This indicator is extremely important for conveying the emotional significance of sounds, including music and the intonation of speech [35]. Recently, to study the response of comatose patients to the acoustic characteristics of prosody and other emotional sounds, it is the variability of the physical characteristics of the sound that has been used more and more often [36]. Here, we applied a method of distances between the physical characteristics of sounds, which was previously used to compare 2 complex objects [37]. This method allowed us to take into consideration the minimal differences between emotionally significant sounds and to compare them with the corresponding changes in the ERP amplitudes of patients and healthy volunteers.

METHODS

Participants

The study involved 69 adult participants: 25 individuals in the control group and 56 comatose patients. The patients were recruited in the subacute phase after injury (from 14 days to 3 months). All patients had severe traumatic brain injury with Glasgow Coma Scale (GCS) scores between 4 and 8. Individuals who were younger than 18 or older than 60 years, had a history of neurological or mental illness, or showed signs of epileptiform activity on the EEG were excluded from the study. Diffuse axonal damage was confirmed by magnetic resonance imaging and multislice computed tomography in all participants. Patients with focal brain lesions (3 patients) were excluded from the study.

The outcome was estimated with the Glasgow Outcome Scale-Extended (GOS-E) 3−5 months after the study (3.6 ± 1.3 months). The GOS-E is a widely used outcome instrument for assessing recovery after severe traumatic brain injury [38]; it is an 8-point scale on which 1 is the minimal score (the patient died) and 8 is the maximal score (the patient returned to normal life). None of the patients had GOS-E = 8. Four patients with GOS-E = 2 (vegetative state) were excluded from the study. We also excluded patients with temporary improvement followed by later aggravation and those with unstable somatic status (5 patients). In the end, 12 of the 56 patients met the exclusion criteria, and 44 patients were included in the final statistical analysis. Based on the GOS-E scores, the patients were divided into 2 groups: (1) 20 patients (14 males and 6 females) in the “Coma+” subgroup with GOS-E scores from 5 to 7 (5.9 ± 0.8); (2) 24 patients (16 males and 8 females) in the “Coma−” subgroup with GOS-E scores from 2 to 4 (2.7 ± 0.6). Information on the patients’ age, GCS scores, and the time passed after the traumatic brain injury (the number of days between the date of the injury and the date of the EEG study) is summarized in Table 1.

The control group included 25 healthy volunteers (19 males and 6 females) aged 18 to 35 years (28.2 ± 7.9). All of them were right-handed, with no history of brain trauma or other neurological or psychiatric disorders. They did not use psychoactive substances or drugs, and they denied recent sleep deprivation. Information on the participants’ age is summarized in Table 1.

The ethics committee of the Institute of Higher Nervous Activity and Neurophysiology of the Russian Academy of Sciences (#06, February 2016) approved the study. All healthy volunteers and the legal representatives of each patient signed informed consent.

Stimuli

We used ~1,000-ms sounds as stimuli, downloaded from free web sound databases (Sound Jay https://www.soundjay.com, Sound Library https://www.epidemicsound.com, Freesound https://freesound.org, Soundboard https://www.soundboard.com). The sounds were presented using the Presentation software (Neurobehavioral Systems, Inc.). The raw audio files were downsampled to 44,100 Hz mono .wav files with 32-bit resolution. Before the EEG examination, we presented about 40 non-verbal sounds to 67 healthy experts (mean age 26.9 years), who rated the sounds on scales of pleasantness, arousal, fear, empathy, etc. As a result of this preliminary study, we selected sounds with the highest ratings of “pleasantness,” “empathy,” and “arousal” that were characterized by similar duration, pitch, and loudness. Wavelab 10.0 (Steinberg Media Technologies) and WavePad (NCH Software) were used to analyze the acoustic characteristics of the stimuli (Table 2).
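For illustration only, the per-stimulus acoustic summary reported in Table 2 (mean and SD of pitch and loudness) could be approximated with a short Python sketch; this is an assumption on our part (the librosa functions and the file name are illustrative), not the authors’ Wavelab/WavePad pipeline:

```python
# Illustrative sketch (not the authors' Wavelab/WavePad pipeline):
# estimate mean/SD pitch and loudness statistics for a ~1 s stimulus.
import numpy as np
import librosa

def acoustic_summary(path, sr=44100):
    y, sr = librosa.load(path, sr=sr, mono=True)        # 44.1 kHz mono, as in the study
    f0 = librosa.yin(y, fmin=80, fmax=4000, sr=sr)       # frame-wise pitch estimate, Hz
    f0 = f0[np.isfinite(f0)]
    rms = librosa.feature.rms(y=y)[0]                     # frame-wise RMS amplitude
    loud_db = 20 * np.log10(np.maximum(rms, 1e-10))       # loudness in dB (full scale)
    return {
        "mean_pitch_hz": float(np.mean(f0)),
        "sd_pitch_hz": float(np.std(f0)),
        "mean_loudness_db": float(np.mean(loud_db)),
        "sd_loudness_db": float(np.std(loud_db)),
        "max_loudness_db": float(np.max(loud_db)),
        "min_loudness_db": float(np.min(loud_db)),
    }

# print(acoustic_summary("crying.wav"))  # hypothetical file name
```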

First, the auditory paradigm was presented to the individuals of the control group. Immediately afterwards, each sound was presented once more, one by one, so that the subjects could rate the emotional valence and arousal level of the stimuli on the scales of “pleasantness” (1−9), “arousal” (1−9), and “empathy” (1−9). Each stimulus was presented 40 times. The stimuli were presented in random order with gaps of 0.7−2.0 seconds between them. The background EEGs with open and closed eyes were recorded for 2 minutes at the beginning and end of the study, so the complete EEG registration took about 30 minutes.
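As a minimal sketch of such a presentation schedule (the actual delivery used the Presentation software; the stimulus labels and the uniform jitter of the gaps below are assumptions for illustration):

```python
# Minimal sketch of a randomized, jittered presentation schedule
# (illustrative; the study used Presentation software for stimulus delivery).
import random

stimuli = ["crying", "noise", "laughter", "coughing", "scream", "birds", "barking"]
trials = stimuli * 40                  # each of the 7 sounds presented 40 times
random.shuffle(trials)                 # randomized order

# each entry: (sound to play, silent gap in seconds before the next stimulus)
schedule = [(name, round(random.uniform(0.7, 2.0), 3)) for name in trials]
```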

Electroencephalogram Registration, Processing, and Analysis

The subjects sat in a comfortable position in an armchair in an acoustically and electrically shielded chamber during the EEG recording. The comatose patients lay in a hospital bed in a resuscitation unit. The participants were instructed to remain calm, listen to the presented sounds, keep their eyes closed (to avoid visual interference), and avoid falling asleep. The stimuli were presented via earphones. The EEG was recorded using an Encephalan device (Medicom MTD) with additional polygraphic channels (these data are not presented). Nineteen AgCl electrodes (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2) were placed according to the International 10−20 system. The electrodes on the left and right mastoids served as joined references under a unipolar montage. The vertical electrooculogram (EOG) was measured with AgCl cup electrodes placed 1 cm above and below the left eye. The horizontal EOG was measured with electrodes placed 1 cm lateral to the outer canthi of both eyes. The amplifier sampling rate was 250 Hz, and the filtering was set to a 1.6−30 Hz bandpass. The electrode impedances were kept below 10 kΩ.

Eye movement artifacts were removed with an independent component analysis-based algorithm using the EEGLAB plugin for MATLAB 7.11.0 (MathWorks Inc.). Muscle artifacts were cut out through manual data inspection. The continuous resting-state EEG of each subject was filtered with a bandpass filter set to 0.5−30 Hz.

We segmented the EEGs into fixed-length epochs with a 100-ms prestimulus period and a 1,100-ms poststimulus period for each type of stimulus. As a result, we obtained ~40 EEG fragments of 1,200 ms for each subject and each type of stimulus. In total, we analyzed from 32 to 40 artifact-free EEG fragments per participant and stimulus type (38.2 ± 1.7 fragments in the ‘Coma+’ group, 36.8 ± 1.9 in the ‘Coma−’ group, and 37.9 in healthy volunteers). The number of EEG fragments used did not differ significantly between groups.
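The preprocessing and epoching steps described above were carried out in EEGLAB/MATLAB; a minimal MNE-Python sketch of the same steps (band-pass filtering, ICA-based ocular artifact removal, and segmentation into −100 to +1,100 ms epochs) might look as follows, with the file name and event codes being hypothetical:

```python
# Illustrative MNE-Python sketch of the preprocessing described above
# (the authors used EEGLAB/MATLAB; file name and event codes are hypothetical).
import mne

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)   # hypothetical file
raw.filter(l_freq=1.6, h_freq=30.0)                             # band-pass, as in the study

# ICA-based removal of ocular artifacts (assumes EOG channels are present)
ica = mne.preprocessing.ICA(n_components=15, random_state=0)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw)
ica.exclude = eog_inds
ica.apply(raw)

# Epochs: 100 ms prestimulus to 1,100 ms poststimulus for each stimulus type
events = mne.find_events(raw)                                   # assumes a stimulus channel
event_id = {"crying": 1, "noise": 2, "laughter": 3, "coughing": 4,
            "scream": 5, "birds": 6, "barking": 7}              # hypothetical codes
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1.1,
                    baseline=(None, 0), preload=True)
```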

Next, we performed ERP analysis and analysis of linear and nonlinear features of EEG fragments.

Event-related Potential Analysis

The EEG data were analyzed and processed for the ERP analyses using EEGLAB 14.1.1b, a neural electrophysiological analysis tool based on MATLAB (MathWorks). The EEG data were processed with a 1.6−30 Hz bandpass filter (finite impulse response filter). The 50-Hz power-line frequency was removed during processing. The data were re-referenced to the global average reference. We measured and analyzed the amplitudes and latencies of P50, N100, P200, P300, and N400. Based on the topographical distribution of the grand-averaged ERP activity, Fz, F3, F4, Cz, C3, and C4 were selected for the analysis of the P50 (20−120 ms) and N100 (80−180 ms) components; P200 (160−250 ms) was analyzed at the Cz, C3, C4, Pz, P3, P4, O1, and O2 electrode sites; and the P300 (250−500 ms) and N400 (250−500 ms) components were analyzed at Cz, C3, C4, Pz, P3, and P4. MATLAB software was used to visualize the ERPs for each electrode. Figure 1 presents the ERPs at the Cz electrode, which was used for the measurements of all ERP components.
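As an illustration of the component measurement (not the authors’ EEGLAB code), peak amplitude and latency within the windows listed above could be extracted from an averaged waveform as in the sketch below; the variable names and the assumption of a 250-Hz averaged trace at one electrode are ours:

```python
# Sketch of peak amplitude/latency extraction within a component window,
# assuming `erp` is the averaged waveform at one electrode (microvolts),
# sampled at 250 Hz, spanning -100 ms to +1,100 ms around stimulus onset.
import numpy as np

def peak_in_window(erp, t_min_ms, t_max_ms, polarity, sfreq=250.0, epoch_start_ms=-100.0):
    times = epoch_start_ms + np.arange(erp.size) * 1000.0 / sfreq
    mask = (times >= t_min_ms) & (times <= t_max_ms)
    segment = erp[mask]
    idx = np.argmax(segment) if polarity == "positive" else np.argmin(segment)
    return segment[idx], times[mask][idx]        # (amplitude, latency in ms)

# Example: N100 (negative peak, 80-180 ms) and P200 (positive peak, 160-250 ms) at Cz
# n100_amp, n100_lat = peak_in_window(erp_cz, 80, 180, "negative")
# p200_amp, p200_lat = peak_in_window(erp_cz, 160, 250, "positive")
```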

Nonlinear Electroencephalogram Analysis

Considering previous findings [5,28], we applied nonlinear EEG analysis to assess the emotional response to the presented sounds. We used a 12th-order Butterworth filter to bandpass filter the examined signal in the range of interest (1.6−30 Hz). Higuchi’s fractal dimension (HFD) was evaluated using the Higuchi algorithm [39].
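A compact NumPy sketch of Higuchi’s algorithm [39], as it is commonly implemented, is given below; the choice of k_max is an analysis parameter and the value shown is illustrative rather than taken from the paper:

```python
# Compact sketch of Higuchi's fractal dimension (HFD) for a 1-D EEG segment;
# k_max is an analysis choice (the value used here is illustrative).
import numpy as np

def higuchi_fd(x, k_max=10):
    x = np.asarray(x, dtype=float)
    n = x.size
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)              # subseries starting at m with step k
            if idx.size < 2:
                continue
            norm = (n - 1) / ((idx.size - 1) * k)  # length normalization factor
            lk.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        lengths.append(np.mean(lk))
    # HFD is the slope of log(L(k)) versus log(1/k)
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
    return slope
```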

HC represents a change in frequency and indicates the similarity of the signal to a pure sine wave. This parameter was calculated for the wideband 1.6−30 Hz filtered signal as follows:

HC = mobility(y′(t)) / mobility(y(t)), where mobility(y(t)) = [var(y′(t)) / var(y(t))]^(1/2), y(t) is the EEG signal, and y′(t) is its first derivative. The HC and mobility showed high similarity in our study, so we refer to them both as Hjorth parameters.
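A minimal sketch of these two Hjorth parameters, following the definitions above (the discrete difference is used to approximate the derivative; this is our illustration, not the authors’ code):

```python
# Sketch of the Hjorth parameters defined above: mobility and complexity
# of a 1-D EEG segment; the derivative is approximated by the sample difference.
import numpy as np

def hjorth_parameters(y):
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)                   # first derivative y'(t)
    ddy = np.diff(dy)                 # second derivative y''(t)
    mobility = np.sqrt(np.var(dy) / np.var(y))
    complexity = np.sqrt(np.var(ddy) / np.var(dy)) / mobility   # mobility(y') / mobility(y)
    return mobility, complexity
```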

The Method of Distances for Electroencephalogram Data, Subjective Assessment, and Acoustical Parameters of Stimuli

We hypothesized that acoustic features, such as pitch and loudness, play a crucial role in emotion perception in comatose patients [30]. Thus, we calculated distances between the presented auditory stimuli according to their physical features (the mean values and standard deviations [SD] of loudness and pitch). Next, we applied the coordinate method to calculate the distance between the acoustic features of sounds using the formula AB = [(xb − xa)² + (yb − ya)²]^(1/2) [40]. Here, A and B are the coordinates of the pitch (or loudness) mean and SD for 2 different sounds, and xa, xb, ya, and yb are the values of the pitch (or loudness) mean and SD for these sounds. Thus, AB is the “distance” between coordinates A (xa; ya) and B (xb; yb): a quantity that has magnitude (size) but no direction. In particular, the pitch distance = [(Crying mean pitch − Laughter mean pitch)² + (Crying pitch SD − Laughter pitch SD)²]^(1/2). Similarly, the distances between the pleasantness and arousal scores for the stimuli were calculated using the formula AB = [(xb − xa)² + (yb − ya)²]^(1/2), where xa, xb, ya, and yb are the average scores of “pleasantness” and “arousal” for 2 sounds. The distances for each participant were calculated separately and then averaged within each group. For example, if a participant rated the sound “crying” with scores of 6 and 2 and the sound “laughter” with scores of 2 and 6 on the “arousal” and “pleasantness” scales, respectively, then AB = [(2 − 6)² + (6 − 2)²]^(1/2) = 5.66. The program for these calculations was implemented in the C# programming language by the lab’s IT specialist mentioned in the Acknowledgements section. Finally, we obtained 21 distances between stimuli for loudness and pitch.
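The authors’ program was written in C#; an equivalent Python sketch of the same distance method, using the mean pitch and pitch SD values from Table 2 as example coordinates, could look as follows (the dictionary keys and the use of the standard library are our assumptions):

```python
# Python equivalent of the distance method described above (the authors' program
# was written in C#): Euclidean distance between stimuli in a 2-D feature space.
from itertools import combinations
from math import hypot

# Example coordinates (x, y) = (mean pitch in Hz, pitch SD in Hz), taken from Table 2.
features = {
    "crying": (1673, 114), "noise": (1000, 38), "laughter": (903, 76),
    "coughing": (1057, 121), "scream": (2210, 125),
    "birds": (1272, 98), "barking": (2219, 117),
}

distances = {
    (a, b): hypot(features[b][0] - features[a][0], features[b][1] - features[a][1])
    for a, b in combinations(features, 2)        # 7 stimuli -> 21 pairs
}

# Subjective example from the text: ratings (arousal, pleasantness)
# crying = (6, 2), laughter = (2, 6)  ->  hypot(2 - 6, 6 - 2) ~ 5.66
```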

Similarly, we calculated the distances for the ERP component amplitudes of the 2 pairs of components, P200−N100 and P300−N400, by the formula AB = [(xb − xa)² + (yb − ya)²]^(1/2), where A and B are the coordinates of the component amplitudes for 2 different sounds, and xa, xb, ya, and yb are the amplitudes for the 2 sounds. As a result, 21 distances between stimuli were calculated for each pair of components. The distances for each participant were calculated and then averaged within each group [41]. The distances between the nonlinear EEG features that corresponded to different sounds were calculated using the HFD and HC. We applied the formula AB = [(xb − xa)² + (yb − ya)²]^(1/2), where A and B are the coordinates of the HFD and HC values for 2 different sounds, and xa, xb, ya, and yb are the HFD and HC values for the 2 sounds. These parameters were previously associated with different emotional responses and show multidirectional changes during emotional perception.

Statistical Analysis

A one-way and a repeated-measures ANOVA with Bonferroni correction for multiple comparisons (p < 0.05) were used to assess the group differences in the ERP components (amplitudes and latencies) and the nonlinear EEG features. One-way repeated-measures ANOVAs with Bonferroni correction for multiple comparisons (p < 0.05) were used to determine age effects on the EEG metrics. Differences in the EEG distances between the groups were assessed with repeated-measures ANOVAs with Bonferroni correction for multiple comparisons and Student’s t test (p < 0.05). Spearman’s rank-order correlation coefficients were calculated between the distances of the ERP metrics, the acoustic features, and the subjective emotional assessments for each pair of stimuli (p < 0.05). Spearman’s rank-order correlation coefficients were also calculated to estimate the association between age and the distances of the ERP metrics (for the P200−N100 and P300−N400 complexes), the distances between acoustic features (x is the mean of pitch or loudness; y is the SD of pitch or loudness), and the distances between subjective emotional assessments for each pair of stimuli (x is pleasantness; y is arousal). The Bonferroni correction for multiple comparisons (p < 0.05) was applied.
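For the correlation step, a minimal SciPy sketch is given below; it assumes two vectors of 21 pairwise distances each (per-group averages) and applies a simple Bonferroni adjustment, with the function and variable names being hypothetical:

```python
# Sketch of the correlation step: Spearman's rho between 21 ERP-distance values
# and 21 acoustic- or subjective-distance values, with Bonferroni adjustment.
# The input vectors are placeholders for the per-group averaged distances.
import numpy as np
from scipy.stats import spearmanr

def corrected_spearman(erp_distances, feature_distances, n_comparisons):
    rho, p = spearmanr(erp_distances, feature_distances)
    p_bonf = min(p * n_comparisons, 1.0)          # Bonferroni-corrected p value
    return rho, p_bonf

# Example call (hypothetical arrays of 21 pairwise distances each):
# rho, p = corrected_spearman(p200_n100_dist, pitch_dist, n_comparisons=6)
```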

RESULTS

Distances for Electroencephalogram Data, Subjective Assessment, and Acoustical Parameters of Stimuli and Their Correlations

Comatose patients with better outcomes (Coma+) demonstrated the most prominent response to the stimuli with the highest pitch and loudness indices (Table 3). The individual correlation analysis supported the group correlations: 15 of 20 patients had significant correlations (Supplementary Table 1; available online). Significant correlations were not found in the Coma− group, except for 4 of 24 patients.

For healthy participants, the correlations between the acoustic parameters of the stimuli and the EEG response were not significant, but there was a significant correlation between the subjective assessment of the stimuli and the distances of the P300−N400 complex. Such individual significant correlations were found in 20 of 26 subjects.

The associations between the distances of the ERP components and the distances of the acoustic parameters of the stimuli in the comatose patients and healthy volunteers are depicted in Figure 2.

Event-related Potentials to Emotional Sounds

The ERP data for 7 stimuli are presented in Figure 1 and summarized in Table 4.

The control group demonstrated significant differences in N100 amplitudes depending on the stimulus type. The highest N100 amplitudes were recorded for the neutral stimulus, and the lowest values for the dog barking and screaming sounds (Stimuli*Group effect, post hoc Bonferroni test: neutral stimulus vs. barking p = 0.0008, neutral stimulus vs. screaming p = 0.0014). The patients in the Coma+ group demonstrated an inverse response for these 3 stimuli (Group effect, post hoc Bonferroni test: unpleasant stimuli for the control group vs. Coma+ p < 0.0001, unpleasant stimuli for the control group vs. Coma− p = 0.00021). The N100 amplitudes were significantly lower for the neutral sound than for crying and laughter in the control group (Stimuli*Group effect, post hoc Bonferroni test: neutral stimulus vs. crying p = 0.0063, neutral stimulus vs. laughter p = 0.0027). The coma group patients did not show significant differences in the N100 amplitudes (F(4, 180) = 17.417, p = 0.00000, η2 = 0.31).

The P200 amplitudes were significantly higher for barking, screaming, and crying than for the neutral sound only in the patients of the Coma+ group (F(6, 270) = 14.134, p = 0.00000, η2 = 0.24; Stimuli*Group effect, post hoc Bonferroni test: neutral stimulus vs. barking, screaming, and crying p < 0.0001). There were no significant differences in the early ERP components between the pleasant and unpleasant stimuli, either in the control group or in the coma groups.

The later components (N300, N400, and P300) in the control group were associated with emotional valence. The P300 amplitude was significantly higher in the control group than in the coma groups (F(2, 90) = 25.275, p = 0.00000, η2 = 0.37; Group effect, post hoc Bonferroni test: control group vs. Coma+ p < 0.0001, control group vs. Coma− p < 0.0001). However, the amplitudes for the unpleasant (barking and screaming) and pleasant (bird singing and laughter) stimuli differed significantly only in the control group (F(2, 90) = 13.946, p = 0.00001, η2 = 0.21), and the P300 amplitudes were significantly higher for the unpleasant stimuli (Stimuli*Group effect, post hoc Bonferroni test: pleasant vs. unpleasant in the control group p = 0.0001). Similarly, the N400 amplitudes were significantly higher for the pleasant stimuli than for the unpleasant ones (F(2, 90) = 10.5250, p = 0.00048, η2 = 0.14; Stimuli*Group effect, post hoc Bonferroni test: pleasant vs. unpleasant in the control group p = 0.0004). The N300 amplitudes were significantly higher for barking and screaming than for the pleasant sounds (F(4, 180) = 10.226, p = 0.00044, η2 = 0.21).

Therefore, for the later ERP components, the difference between the pleasant and unpleasant sounds was found both in the control group and in the Coma+ group. At the same time, we noticed that the pleasant stimuli did not differ from the neutral one in the comatose group. The statistical analysis confirmed that the difference between the N300 amplitudes for the pleasant stimuli (averaged bird song and laughter) and the neutral sound (noise) was not significant (t = 0.7, p = 0.46). In contrast, the differences between the P300 amplitudes (t = 3.2, p = 0.003) and the N400 amplitudes (t = 2.9, p = 0.008) for the pleasant stimuli (averaged bird song and laughter) and the neutral sound (noise) in the control group were significant.

Correlation Analysis between Components’ Amplitudes and Latencies and Subjective Assessment of Stimuli and Acoustic Parameters of Stimuli

We found a significant association between the amplitudes of the N100 and P200 components and the GCS scores in the combined patient group while they listened to the scream, barking, crying, bird singing, and coughing sounds (Table 5). Individual correlations between ERP indices and acoustic indices in patients and healthy volunteers are presented in Supplementary Tables 2 and 3 (available online), respectively.

Subjects of the control group demonstrated a significant negative correlation between the N400 amplitude and the pleasantness of the following sounds: crying, scream, and laughter, and a significant positive correlation between the P300 amplitude and the pleasantness of the following sounds: crying, scream, barking, laughter, and bird singing. In addition, the subjective arousal ratings for barking, screaming, and bird singing significantly correlated with the N400 amplitude.

Nonlinear Features

Only healthy volunteers showed a significant increase in the fractal dimension of the EEG while listening to the sounds of coughing, screaming, and crying (F(4, 180) = 30.683, p = 0.0000, η2 = 0.41; Stimuli*Group effect, post hoc Bonferroni test: resting state vs. coughing, screaming, and crying in the control group p < 0.0001). On the other hand, the patients in the Coma+ group showed a significant decrease in the HC while listening to the sounds of crying, barking, and screaming (the sounds with the highest mean pitch and loudness) (F(4, 180) = 9.6157, p = 0.00000, η2 = 0.22; Stimuli*Group effect, post hoc Bonferroni test: resting state vs. crying p = 0.0027; resting state vs. barking and screaming p < 0.0001 in Coma+) (Fig. 3).

DISCUSSION

Here, we studied the ERP and nonlinear EEG responses of comatose patients to acoustic stimuli and found that patients with better outcomes had recognizable responses to emotional sounds, unlike patients with worse outcomes. At the same time, this response was attributable to the acoustic features of the emotional sounds, whereas the EEG response of healthy volunteers correlated with subjective pleasantness, arousal, and empathy during emotional perception. Similar results were obtained using emotional sounds of longer duration and demonstrated that patients after severe traumatic brain injury have difficulties processing the emotional tone of sounds and perceive primarily the particular acoustic parameters of emotional stimuli [30]. The acoustic analysis of the stimuli showed that the emotional valence or impact of auditory stimulation might be changed by manipulating the prosodic parameters of sound, such as pitch, duration, and loudness [42-44]. Other researchers demonstrated that the ranges of pitch variation and the overall amplitudes of acoustic stimuli were solid acoustic indicators of the targeted vocal emotions, contributed to emotional recognition, and correlated with the emotional response [45-48]. A difference in the EEG response to some acoustic features, such as loudness or tempo, could be one of the neurophysiological markers of mental disease; for example, schizophrenic patients have a significantly higher loudness dependence of auditory evoked potentials than controls [49-52]. Thus, the revealed correlation between the EEG changes during the perception of emotional stimuli and the acoustic features of the stimuli indicates, first, altered emotional perception in comatose patients and, second, the necessity and potential benefit of this response for the patient’s recovery [53].

The specificity of the ERP response to emotional stimuli should be discussed. The N100−P200 waveform revealed in both patients and healthy volunteers was previously associated with the processing of emotional sounds. In particular, the P200 amplitude was previously suggested as an indicator of active differentiation and recognition of emotional prosody [45] and was associated with the potential ability of subjects to perceive particular frequencies of auditory information [54]. In our study, the differences in P200 and N100 amplitudes between short emotionally charged and neutral non-verbal sounds did not reach significance in healthy participants. In contrast, the P200 amplitude was significantly more positive and the N100 amplitude significantly more negative for emotional sounds in comatose patients with a better outcome. These results suggest that patients with better outcomes retained the ability to process the pitch values that contribute to emotional recognition [46].

The contribution of the N100−P200 waveform to emotional perception was accompanied, at the same time, by the absence of the typical response of healthy subjects to the emotional valence of stimuli, which is associated with the N400 and P300 amplitudes. Our results showed that the unpleasant stimuli induced a higher P300 amplitude in the subjects of the control group, whereas the pleasant sounds were associated with a higher N400 amplitude. These differences between pleasant and unpleasant stimuli were not detected in comatose patients, who demonstrated the N300 and N400 peaks only for some unpleasant stimuli. According to previous findings, the P300 and N400 components have been used to study the processing of positive versus negative affective vocalizations, prosody, and music [55]. The P300 amplitude was previously associated with cingulate cortex activity and emotional states [56] and attributed to the perception of pleasant and hard-to-recognize emotional states [57], whereas the N400 and N300 components responded to semantically and emotionally incongruent stimuli [58]; a higher N400 amplitude was previously associated with the perception of vocal emotional expression [59,60], and the amplitude of the N300 component reflected an attenuated response to an incongruent stimulus [61]. Considering these previous findings, we hypothesized that the patients’ response was associated only with the acoustic parameters of unpleasant stimuli, which have evolutionary priority over pleasant and neutral stimuli [62] and induced recognizable responses in comatose patients.

Finally, as mentioned, we found significant changes in some nonlinear EEG parameters that were previously associated with emotional perception [5,28] both in healthy individuals and in patients with mental or neurological diseases. In particular, previous studies demonstrated that a higher HFD was associated with different emotional states, including affect, fear, happiness, sadness, and empathy [5,26,28,63,64]. The dynamics of Hjorth complexity were previously attributed to unpleasant emotions such as irritation [5,28]; a decrease in this parameter during emotional auditory perception was more typical of individuals with mental and neurological disorders than of healthy subjects [3,5,65].

In conclusion, the EEG of healthy subjects reflects the subjective emotional assessment of the stimuli, while in comatose patients it reflects the physical parameters of the sounds. The emotional perception of comatose patients was associated with the early ERP components, namely N100−P200, whereas the emotional perception of healthy volunteers was associated with the N400 and P300 amplitudes. The comatose patients did not show the differences in EEG response between pleasant and unpleasant stimuli that are typical of healthy subjects; however, they demonstrated differing EEG activity for the neutral and emotional sounds. A higher HFD was associated with the emotional perception of healthy volunteers, whereas the emotional processing of comatose patients was associated with a decrease in HC.

This study has several limitations. The number of participants was relatively small. To mitigate this limitation, we equalized the study and control groups by age and sex, and we equalized the patient groups by GCS and by the time passed after the injury. However, the applied scale and methods of clinical assessment do not take into account the specific characteristics of the patient and the adaptive capabilities of their body. We also tried to take into account the presence of chronic concomitant diseases and the emotional and personal characteristics of the patients before the injury; however, it was not always possible to obtain a complete premorbid history. Another problem was the inability to examine the auditory thresholds in patients and to adjust the volume of stimulation in accordance with these thresholds. As a result, all subjects were presented sounds of the same volume.

ACKNOWLEDGEMENTS

We thank the engineer O. Kashevarova and Dr G. Ivanitsky for implementing the cognitive space construction method and other technical help. We are grateful to Dr. V. Podlepich for providing the patients’ database. The authors thank Mikhail Atanov and Olga Kashevarova for writing programs and helping with data analysis.

Funding

This work was supported by the Russian Academy of Sciences and the Russian Foundation for Basic Research (16-04-00092).

Conflicts of Interest

No potential conflict of interest relevant to this article was reported.

Author Contributions

Conceptualization: Galina V. Portnova. Data acquisition: Galina V. Portnova. Formal analysis: Galina V. Portnova, Elena V. Proskurnina. Funding: The State Assignment of the Ministry of Education and Science of the Russian Federation for 2021−2023. Supervision: Galina V. Portnova. Writing—original draft: Galina V. Portnova, Elena V. Proskurnina. Writing—review & editing: all authors.

Figures
Fig. 1. The ERPs for the stimuli used in the control group, Coma+, and Coma−. To visualize the ERP data, we used the Cz electrode (common to all ERP components).
ERP, event-related potential.
Fig. 2. The distances between ERP components, physical parameters of stimuli and subjective assessments of stimuli: (A1) mean loudness in dB vs. SD for loudness values; (A2) mean pitch in Hz vs. SD for pitch values; (A3) pleasantness vs. arousal; (B1) the distances for ERP components amplitude in the Coma+ group; (B2) the distances for ERP components amplitude in the control group.
ERP, event-related potential; SD, standard deviation.
Fig. 3. The correlations between the emotional assessment of the stimuli and nonlinear electroencephalogram features.
Tables

Information on the participants

Group Valid N Mean ± SD Mann–Whitney U test (U, Z, p level)
Age (yr)
Control group 25 28.48 ± 6.96 224 −0.59 0.55
Coma+ 20 32.05 ± 9.56
Control group 25 28.48 ± 6.96 228 −1.45 0.15
Coma− 24 36.92 ± 7.84
Coma+ 20 32.05 ± 9.56 205 −0.83 0.41
Coma− 24 36.92 ± 7.84
Time after injury (days)
Coma+ 20 28.81 ± 5.74 178.5 −1.45 0.15
Coma− 24 24.16 ± 7.09
GCS
Coma+ 20 5.72 ± 1.29 179 −1.44 0.15
Coma− 24 5.13 ± 1.03

SD, standard deviation; GCS, Glasgow Coma Scale.

Acoustic parameters of the stimuli

Stimulus Duration (ms) Mean pitch (SD) (Hz) Mean loudness (SD) (dB) Maximal loudness (dB) Minimal loudness (dB)
Crying 1,022 1,673 (114) −22.05 (21.5) −12.19 −66.45
Noise 1,000 1,000 (38) −24.2 (8.4) −18.41 −54.10
Laughter 1,021 903 (76) −23.4 (22.1) −6.96 −101.28
Coughing 1,003 1,057 (121) −24.66 (19.1) −11.29 −57.39
Scream 1,005 2,210 (125) −20.1 (26.5) −4.2 −105.68
Bird singing 1,018 1,272 (98) −23.28 (18.6) −18.33 −100.73
Barking 1,009 2,219 (117) −20.9 (24.5) −5.51 −68.69

The threshold was 50 dB.

SD, standard deviation.

Spearman’s rank-order correlation coefficients rs between ERP indices and indices of acoustic features or the subjective emotional assessment (p < 0.05)

Distances Control group Coma+ Coma−
rS p rS p rS p
Pitch distance (mean−SD)
Index P200−N100 −0.173 0.453 0.738 0.0001 0.229 0.163
Index P300−N400 −0.148 0.520 0.315 0.012 0.040 0.862
Loudness distance (mean−SD)
Index P200−N100 0.181 0.430 0.685 0.0006 0.421 0.057
Index P300−N400 −0.151 0.510 0.617 0.003 0.297 0.191
Subjective distance (arousal−pleasantness)
Index P200−N100 0.297 0.191 −0.070 0.762 0.136 0.555
Index P300−N400 0.715 0.0003 −0.074 0.748 0.077 0.739

In the “Coma+” subgroup, GOS-E scores were from 5 to 7; in the “Coma−” subgroup, GOS-E scores were from 2 to 4.

ERP, event-related potentials; SD, standard deviation; GOS-E, Glasgow Outcome Scale-Extended.

ERP amplitudes and latencies in response to the acoustic stimuli

Group Stimuli ERP metrics
N100 P200 N300 P300 N400
Latency Amplitude Latency Amplitude Latency Amplitude Latency Amplitude Latency Amplitude
Coma− Crying 162 −0.87 248 0.91 306 −0.05 398 0.16 411 −0.09
Noise 111 0.05 214 0.79 303 −0.42 375 −0.02 435 −0.22
Laughter 105 0.12 253 0.42 309 −0.59 382 −0.11 401 −0.03
Coughing 101 −0.16 237 −0.13 288 −0.14 416 0.09 444 −0.21
Scream 125 −1.28 222 1.54 299 −0.45 401 0.13 436 −0.05
Birds singing 131 −1.08 219 1.06 301 −0.54 406 0.04 449 −0.12
Dog barking 129 −1.16 215 1.38 279 −0.18 394 −0.03 430 −0.06
Coma+ Crying 122 −0.78 224 1.51 307 −0.26 352 −0.02 401 −0.06
Noise 129 −0.48 234 0.62 308 −0.07 354 0.05 403 −0.12
Laughter 126 −0.59 231 0.948 306 0.01 351 0.06 398 0.01
Coughing 118 −0.64 226 1.38 311 −0.21 369 0.15 409 −0.02
Scream 112 −0.95 218 1.89 339 −1.12 379 −0.88 397 −0.66
Birds singing 119 −0.72 222 1.03 307 0.02 365 −0.04 404 0.02
Dog barking 115 −0.89 219 1.78 326 −0.76 382 0.01 402 −0.07
Control group Crying 112 −2.91 219 4.12 328 −0.15 409 0.77 439 0.13
Noise 115 −3.45 211 4.45 347 −1.31 397 0.21 405 −1.02
Laughter 112 −3.12 210 4.02 339 −0.33 374 −0.49 412 −0.76
Coughing 109 −3.16 219 4.11 347 −0.54 398 0.23 415 −0.12
Scream 108 −2.82 225 3.85 327 −0.19 402 0.83 436 0.22
Birds singing 112 −3.24 216 4.19 351 −1.04 391 −0.41 408 −1.16
Dog barking 107 −2.78 223 4.05 332 −0.39 398 0.72 444 0.34

In the “Coma+” subgroup, GOS-E scores were from 5 to 7; in the “Coma−” subgroup, GOS-E scores were from 2 to 4.

ERP, event-related potential; GOS-E, Glasgow Outcome Scale-Extended.

Spearman’s rank-order correlation coefficients rs between the ERP metrics and subjective assessment in healthy subjects or values of GCS in the comatose patients

Stimulus Coma+ & Coma− (valid n = 44) Control group (valid n = 25)
A B C
P200 vs. GCS N100 vs. GCS P300 vs. pleasantness N400 vs. pleasantness P300 vs. arousal N400 vs. arousal
rs p rs p rs p rs p rs p rs p
Crying 0.4893 0.0016 −0.465 0.0028 0.4753 0.0141 −0.518 0.0068 0.2708 0.1808 −0.319 0.1122
Noise 0.0887 0.5914 −0.035 0.8324 0.2791 0.1674 −0.066 0.7474 0.3577 0.0728 −0.262 0.1955
Laughter 0.036 0.8276 −0.188 0.252 0.5898 0.0015 −0.509 0.008 0.2041 0.3173 −0.35 0.0801
Coughing −0.3 0.0638 −0.429 0.0065 0.3457 0.0837 −0.223 0.2735 0.3457 0.0837 −0.31 0.1293
Scream 0.5138 0.0008 −0.514 0.0008 0.5538 0.0033 −0.565 0.0026 0.223 0.2735 −0.677 0.0001
Birds singing 0.4755 0.0025 −0.452 0.0039 0.4298 0.0284 −0.262 0.1955 0.3591 0.0716 −0.465 0.0166
Dog barking 0.4862 0.0017 −0.558 0.0002 0.5462 0.0039 −0.319 0.1122 0.223 0.2735 −0.533 0.005

In the “Coma+” subgroup, GOS-E scores were from 5 to 7; in the “Coma−” subgroup, GOS-E scores were from 2 to 4.

ERP, event-related potential; GCS, Glasgow Coma Scale; GOS-E, Glasgow Outcome Scale-Extended.

References
  1. Puggina ACG, Silva MJP, Gatti MFZ, Graziano KU, Kimura M. The auditory perception in the patients in state of coma: A bibliographical revision. Acta Paul Enferm 2005;18:313-319.
  2. Cheng Q, Jiang B, Xi J, Li ZY, Liu JF, Wang JY. The relation between persistent coma and brain ischemia after severe brain injury. Int J Neurosci 2013;123:832-836.
  3. Portnova GV, Martynova OV, Ivanitskiĭ GA. [Age differences of event-related potentials in the perception of successive and spacial components of auditory information]. Fiziol Cheloveka 2014;40:26-35. Russian.
  4. Chen H, Chan YL, Nguyen LT, Mao Y, de Rosa A, Beh IT, et al. Moderate traumatic brain injury is linked to acute behaviour deficits and long term mitochondrial alterations. Clin Exp Pharmacol Physiol 2016;43:1107-1114.
  5. Portnova GV, Atanov MS. Nonlinear EEG parameters of emotional perception in patients with moderate traumatic brain injury, coma, stroke and schizophrenia. AIMS Neurosci 2018;5:221-235.
  6. Lezak MD. Psychological implications of traumatic brain damage for the patient's family. Rehabil Psychol 1986;31:241-250.
  7. Williams C, Wood RL. Alexithymia and emotional empathy following traumatic brain injury. J Clin Exp Neuropsychol 2010;32:259-267.
  8. Hynes CA, Stone VE, Kelso LA. Social and emotional competence in traumatic brain injury: New and established assessment tools. Soc Neurosci 2011;6:599-614.
  9. Hattiangadi N, Pillion JP, Slomine B, Christensen J, Trovato MK, Speedie LJ. Characteristics of auditory agnosia in a child with severe traumatic brain injury: A case report. Brain Lang 2005;92:12-25.
  10. Coelho CA, Grela B, Corso M, Gamble A, Feinn R. Microlinguistic deficits in the narrative discourse of adults with traumatic brain injury. Brain Inj 2005;19:1139-1145.
  11. Silva SC, Silveira LM, Marchi-Alves LM, Mendes IAC, Godoy S. Real and illusory perceptions of patients in induced coma. Rev Bras Enferm 2019;72:818-824.
  12. Olding M, McMillan SE, Reeves S, Schmitt MH, Puntillo K, Kitto S. Patient and family involvement in adult critical and intensive care settings: A scoping review. Health Expect 2016;19:1183-1202.
  13. van Tol DG, Kouwenhoven P, van der Vegt B, Weyers H. Dutch physicians on the role of the family in continuous sedation. J Med Ethics 2015;41:240-244.
  14. Pereira JM, Barradas FJDR, Sequeira RMC, Marques MCMP, Batista MJ, Galhardas M, et al. Delirium in critically ill patients: Risk factors modifiable by nurses. J Nurs Refer 2016;4:29-36.
  15. Eimer M, Holmes A. An ERP study on the time course of emotional face processing. Neuroreport 2002;13:427-431.
  16. Hauger SL, Olafsen K, Schnakers C, Andelic N, Nilsen KB, Helseth E, et al. Cognitive event-related potentials during the sub-acute phase of severe traumatic brain injury and their relationship to outcome. J Neurotrauma 2017;34:3124-3133.
  17. Davis TM, Hill BD, Evans KJ, Tiffin S, Stanley N, Fields K, et al. P300 event-related potentials differentiate better performing individuals with traumatic brain injury: A preliminary study of semantic processing. J Head Trauma Rehabil 2017;32:E27-E36.
  18. Aguado L, Valdés-Conroy B, Rodríguez S, Román FJ, Diéguez-Risco T, Fernández-Cahill M. Modulation of early perceptual processing by emotional expression and acquired valence of faces: an ERP study. J Psychophysiol 2012;26:29-41.
  19. Rochas V, Rihs TA, Rosenberg N, Landis T, Michel CM. Very early processing of emotional words revealed in temporoparietal junctions of both hemispheres by EEG and TMS. Exp Brain Res 2014;232:1267-1281.
  20. Paulmann S, Kotz SA. An ERP investigation on the temporal dynamics of emotional prosody and emotional semantics in pseudo- and lexical-sentence context. Brain Lang 2008;105:59-69.
  21. Daltrozzo J, Wioland N, Mutschler V, Kotchoubey B. Predicting coma and other low responsive patients outcome using event-related brain potentials: A meta-analysis. Clin Neurophysiol 2007;118:606-614.
  22. Bostanov V, Kotchoubey B. Recognition of affective prosody: continuous wavelet measures of event-related brain potentials to emotional exclamations. Psychophysiology 2004;41:259-268.
  23. Portnova GV, Gladun KV, Sharova EA, Ivanitskiĭ AM. [Changes of EEG power spectrum in response to the emotional auditory stimuli in patients in acute and recovery stages of TBI (traumatic brain injury)]. Zh Vyssh Nerv Deiat Im I P Pavlova 2013;63:753-765. Russian.
  24. Jadhav N, Manthalkar R, Joshi Y. Effect of meditation on emotional response: An EEG-based study. Biomed Signal Process Control 2017;34:101-113.
  25. Portnova G, Girzhova I, Filatova D, Podlepich V, Tetereva A, Martynova O. Brain oscillatory activity during tactile stimulation correlates with cortical thickness of intact areas and predicts outcome in post-traumatic comatose patients. Brain Sci 2020;10:720.
  26. Bornas X, Tortella-Feliu M, Balle M, Llabrés J. Self-focused cognitive emotion regulation style as associated with widespread diminished EEG fractal dimension. Int J Psychol 2013;48:695-703.
  27. Ruiz-Padial E, Ibáñez-Molina AJ. Fractal dimension of EEG signals and heart dynamics in discrete emotional states. Biol Psychol 2018;137:42-48.
  28. Portnova GV. Lack of a sense of threat and higher emotional lability in patients with chronic microvascular ischemia as measured by non-linear EEG parameters. Front Neurol 2020;11:122.
  29. Mehmood RM, Lee HJ. EEG based emotion recognition from human brain using Hjorth parameters and SVM. Int J Bio-Sci Bio-Technol 2015;7:23-32.
  30. Portnova GV, Atanov MS. EEG of patients in coma after traumatic brain injury reflects physical parameters of auditory stimulation but not its emotional content. Brain Inj 2019;33:370-376.
  31. Patel DA. Music, language, and the brain. Oxford Scholarship Online. Oxford University Press;2007.
  32. Shin Y, Lee S, Ahn M, Cho H, Jun SC, Lee HN. Noise robustness analysis of sparse representation based classification method for non-stationary EEG signal classification. Biomed Signal Process Control 2015;21:8-18.
  33. Pihan H. Affective and linguistic processing of speech prosody: DC potential studies. Prog Brain Res 2006;156:269-284.
  34. Piarulli A, Charland-Verville V, Laureys S. Cognitive auditory evoked potentials in coma: Can you hear me? Brain 2015;138(Pt 5):1129-1137.
  35. Everhardt M, Sarampalis A, Coler M, Başkent D, Lowie W. Speech prosody: The musical, magical quality of speech. Front Young Minds 2022;10:698575.
  36. Pruvost-Robieux E, André-Obadia N, Marchi A, Sharshar T, Liuni M, Gavaret M, et al. It's not what you say, it's how you say it: A retrospective study of the impact of prosody on own-name P300 in comatose patients. Clin Neurophysiol 2022;135:154-161.
  37. Blackburne BP, Whelan S. Measuring the distance between multiple sequence alignments. Bioinformatics 2012;28:495-502.
  38. Wilson L, Boase K, Nelson LD, Temkin NR, Giacino JT, Markowitz AJ, et al. A manual for the Glasgow Outcome Scale-Extended interview. J Neurotrauma 2021;38:2435-2446.
  39. Higuchi T. Approach to an irregular time series on the basis of the fractal theory. Phys D Nonlinear Phenom 1988;31:277-283.
  40. Cohen D. Precalculus: A problems-oriented approach. Brooks Cole;2004. p.1184.
  41. Sammon JW. A nonlinear mapping for data structure analysis. IEEE Trans Comput 1969;C-18:401-409.
  42. Cummings KE, Clements MA. Analysis of the glottal excitation of emotionally styled and stressed speech. J Acoust Soc Am 1995;98:88-98.
  43. Proverbio AM, Santoni S, Adorni R. ERP markers of valence coding in emotional speech processing. iScience 2020;23:100933.
  44. Pell MD, Rothermich K, Liu P, Paulmann S, Sethi S, Rigoulot S. Preferential decoding of emotion from human non-linguistic vocalizations versus speech prosody. Biol Psychol 2015;111:14-25.
  45. Agrawal D, Timm L, Viola FC, Debener S, Büchner A, Dengler R, et al. ERP evidence for the recognition of emotional prosody through simulated cochlear implant strategies. BMC Neurosci 2012;13:113.
  46. Nogueira W, Büchner A, Lenarz T, Edler B. A psychoacoustic "NofM"-type speech coding strategy for cochlear implants. EURASIP J Adv Signal Process 2005;2005:101672.
  47. Frick RW. Communicating emotion: The role of prosodic features. Psychol Bull 1985;97:412-429.
  48. Berlyne DE, Nicki RM. Effects of the pitch and duration of tones on EEG desynchronization. Psychon Sci 1966;4:101-102.
  49. Portnova GV, Maslennikova AV. Atypical EEG responses to nonverbal emotionally charged stimuli in children with ASD. Behav Neurol 2020;2020:2807946.
  50. Portnova G, Maslennikova A, Varlamov A. Same music, different emotions: Assessing emotions and EEG correlates of music perception in children with ASD and typically developing peers. Adv Autism 2018;4:85-94.
  51. Wyss C, Hitz K, Hengartner MP, Theodoridou A, Obermann C, Uhl I, et al. The loudness dependence of auditory evoked potentials (LDAEP) as an indicator of serotonergic dysfunction in patients with predominant schizophrenic negative symptoms. PLoS One 2013;8:e68650.
  52. Ogata S. Human EEG responses to classical music and simulated white noise: Effects of a musical loudness component on consciousness. Percept Mot Skills 1995;80(3 Pt 1):779-790.
  53. Xiong KL, Zhang JN, Zhang YL, Zhang Y, Chen H, Qiu MG. Brain functional connectivity and cognition in mild traumatic brain injury. Neuroradiology 2016;58:733-739.
  54. Spreckelmeyer KN, Kutas M, Urbach T, Altenmüller E, Münte TF. Neural processing of vocal emotion and identity. Brain Cogn 2009;69:121-126.
  55. Proverbio AM, Zani A. Time course of brain activation during graphemic/phonologic processing in reading: an ERP study. Brain Lang 2003;87:412-420.
  56. Huster RJ, Westerhausen R, Pantev C, Konrad C. The role of the cingulate cortex as neural generator of the N200 and P300 in a tactile response inhibition task. Hum Brain Mapp 2010;31:1260-1271.
  57. Lu Z, Li Q, Gao N, Yang J, Bai O. Happy emotion cognition of bimodal audiovisual stimuli optimizes the performance of the P300 speller. Brain Behav 2019;9:e01479.
  58. Maguire MJ, Magnon G, Ogiela DA, Egbert R, Sides L. The N300 ERP component reveals developmental changes in object and action identification. Dev Cogn Neurosci 2013;5:1-9.
  59. De Pascalis V, Arwari B, D'Antuono L, Cacace I. Impulsivity and semantic/emotional processing: An examination of the N400 wave. Clin Neurophysiol 2009;120:85-92.
  60. Kutas M, Federmeier KD. Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annu Rev Psychol 2011;62:621-647.
  61. Mazerolle EL, D'Arcy RC, Marchand Y, Bolster RB. ERP assessment of functional status in the temporal lobe: Examining spatiotemporal correlates of object recognition. Int J Psychophysiol 2007;66:81-92.
  62. Daltrozzo J, Wioland N, Mutschler V, Lutun P, Calon B, Meyer A, et al. Emotional electrodermal response in coma and other low-responsive patients. Neurosci Lett 2010;475:44-47.
  63. Hagerhall CM, Laike T, Küller M, Marcheschi E, Boydston C, Taylor RP. Human physiological benefits of viewing nature: EEG responses to exact and statistical fractal patterns. Nonlinear Dynamics Psychol Life Sci 2015;19:1-12.
  64. Portnova GV, Maslennikova AV, Zakharova NV, Martynova OV. The deficit of multimodal perception of congruent and non-congruent fearful expressions in patients with schizophrenia: the ERP study. Brain Sci 2021;11:96.
  65. Zhang Z, Xu S, Zhang S, Qiao T, Cao S. Learning attentive representations for environmental sound classification. IEEE Access 2019;7:130327-130339.

