
Previous studies of comatose patients’ reactivity showed that unconscious patients demonstrated stronger responses to emotionally charged stimuli than to neutral stimuli [1-3]. Some researchers found that unconscious patients or sleeping subjects responded to emotional stimuli such as their names or alarm sounds, whereas comparable neutral stimuli did not elicit any significant response [2,4].
Considering some previous findings, we could hypothesize that emotional perception in comatose patients differs from that of healthy subjects [5]. Patients are known to demonstrate emotional impairment even after a mild or moderate traumatic brain injury. For example, post-traumatic emotional changes include an impaired capacity for social activity [6], impaired interpretation of non-verbal communication and increased auditory processing time, a lack of emotional attachment and empathy, a lack of warmth in social interactions, and chronic social and emotional deficits [7-10]. For comas caused by severe traumatic brain injury, we could therefore expect a significantly impaired perception of emotionally charged stimuli. According to the memories reported by comatose patients, they could not distinguish whether they were awake or dreaming, or whether the reality was actual or imagined; they reported only unpleasant sensations such as pain, cold, or thirst [11]. At the same time, hearing may be the last sense to be lost in comatose patients, and some patients said that they had recognized the voice of a family member and remembered a familiar voice pronouncing the patient’s name or some personal words [12-14].
To investigate the processing of emotional stimuli in comatose patients, we applied event-related potential (ERP) analysis, which allowed us to identify the cognitive and mental functions involved in emotional perception [15]. As previously shown, some ERP components and parameters have particular prognostic value for unconscious patients because of their association with emotional perception and cognitive functions; these include the P300 wave [16,17], earlier positive components, negative components [18-21], and late negative components [22]. At the same time, emotional perception has been widely investigated using various electroencephalogram (EEG) parameters such as spectral power, fractal dimension, Hjorth complexity (HC), and other features [23]. Previous findings have also demonstrated the dynamics of theta, beta, and alpha rhythms during emotional stimulation [24]. These features may also have prognostic value for comatose patients [25]. In addition, the fractal dimension was previously associated with the perception of emotional stimuli; an increase in this EEG parameter was previously associated with emotional arousal, empathy, fear, and other emotional states and could be revealed in patients with a mental disability or brain damage [5,26-28]. The HC was also associated with emotional states such as irritation, happiness, or fear [29].
Our approach to emotional perception in coma focused not only on the response to emotional stimuli but also on the specific emotional acoustic features that could induce the electrophysiological response. Based on previous findings [30], we hypothesized that the response to emotional stimuli could be associated with their particular acoustic parameters, such as pitch or loudness [31,32]. At the same time, we assumed that it is not the average values of a sound’s pitch and loudness that indicate the emotional charge of a stimulus and its valence, but rather their dynamics. In particular, the rise and fall of the voice’s pitch and loudness during oral presentations can increase listeners’ interest and attention [33].
We used a spectrum of emotionally significant sounds selected to cover different emotional significance and variability of acoustic indicators. For these sounds, in addition to the previously studied average ‘pitch’ and ‘loudness’ indicators [21,32], temporal dynamics (sound variability over time) were taken into account [34]. This indicator is extremely important for conveying the emotional significance of sounds, including music and the intonation of speech [35]. Recently, the variability of the physical characteristics of sound has been used more and more often to study the responses of comatose patients to the acoustic characteristics of prosody and other emotional sounds [36]. Here, we applied the method of distances between the physical characteristics of sounds, which was previously used to compare 2 complex objects [37]. This method allowed us to take into consideration the minimal differences between emotionally significant sounds and compare them with the corresponding changes in the ERP amplitudes of patients and healthy volunteers.
The study involved two groups of adult participants: 25 individuals in a control group and 56 comatose patients. The patients were recruited in the subacute phase after injury (from 14 days to 3 months). All patients had severe traumatic brain injury with Glasgow Coma Scale (GCS) scores between 4 and 8. Individuals who were younger than 18 or older than 60 years, had a history of neurological or mental illness, or showed epileptiform activity on EEG were excluded from the study. Diffuse axonal damage was confirmed by magnetic resonance imaging and multislice computed tomography in all participants. Patients with focal brain lesions were excluded from the study (3 patients).
The outcome was estimated with the Glasgow Outcome Scale-Extended (GOS-E) 3−5 months after the study (3.6 ± 1.3 months). The GOS-E is a widely used instrument to assess recovery after severe traumatic brain injury [38]; it is an 8-point scale, where 1 is the minimal score (the patient died) and 8 is the maximal score (the patient returned to normal life). None of the patients had a GOS-E score of 8. Four patients with GOS-E = 2 (vegetative state) were excluded from the study. We also excluded patients with temporary improvement followed by later aggravation and those with unstable somatic status (5 patients). In the end, 12 of the 56 patients met the exclusion criteria, and 44 patients were included in the final statistical analysis. Based on their GOS-E scores, the patients were divided into 2 subgroups: (1) 20 patients (14 males and 6 females) in the “Coma+” subgroup with GOS-E scores from 5 to 7 (5.9 ± 0.8); (2) 24 patients (16 males and 8 females) in the “Coma−” subgroup with GOS-E scores from 2 to 4 (2.7 ± 0.6). Information on patients’ age, GCS, and the time passed after the traumatic brain injury (the number of days between the date of the injury and the date of the EEG study) is summarized in Table 1.
The control group included 25 healthy volunteers (19 males and 6 females) aged 18 to 35 years (28.2 ± 7.9). All of them were right-handed, with no history of brain trauma or other neurological or psychiatric disorders. They did not use psychoactive substances or drugs, and they denied recent sleep deprivation. Information on participants’ age is summarized in Table 1.
The ethics committee of the Institute of Higher Nervous Activity and Neurophysiology of the Russian Academy of Sciences (#06, February 2016) approved the work. All healthy volunteers and the legal representatives of each patient signed informed consent.
We used ~1,000-ms sounds as stimuli, downloaded from free web sound databases (Sound Jay https://www.soundjay.com, Sound Library https://www.epidemicsound.com, Freesound https://freesound.org, Soundboard https://www.soundboard.com). The sounds were presented using the Presentation software (Neurobehavioral Systems, Inc.). The raw audio files were resampled to 44,100 Hz and converted to mono .wav files with 32-bit resolution. Before the EEG examination, we presented about 40 non-verbal sounds to 67 healthy experts (mean age 26.9 years), who assessed the sounds on scales of pleasantness, arousal, fear, empathy, etc. As a result of this preliminary study, we selected sounds with the highest ratings of “pleasantness,” “empathy,” and “arousal” that had similar duration, pitch, and loudness. Wavelab 10.0 (Steinberg Media Technologies) and WavePad (NCH Software) were used to analyze the acoustic characteristics of the stimuli (Table 2).
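The conversion step can be sketched as follows. This is an illustrative Python snippet, not the pipeline actually used (the original preparation was done with the audio editors named above); the soundfile and SciPy packages, the function name, and the file paths are assumptions.

```python
import numpy as np
import soundfile as sf
from math import gcd
from scipy.signal import resample_poly

def to_mono_44k(src_path, dst_path, target_fs=44_100):
    """Convert an audio file to a mono 32-bit float .wav at 44.1 kHz."""
    data, fs = sf.read(src_path, always_2d=True)   # (n_samples, n_channels)
    mono = data.mean(axis=1)                       # average channels to mono
    if fs != target_fs:
        g = gcd(target_fs, fs)
        mono = resample_poly(mono, target_fs // g, fs // g)
    sf.write(dst_path, mono.astype(np.float32), target_fs, subtype="FLOAT")

# to_mono_44k("raw/crying_stereo.wav", "stimuli/crying.wav")  # hypothetical paths
```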
First, the auditory paradigm was presented to the individuals from the control group. Immediately afterward, each sound was presented once more, one by one, so that the subjects could rate the emotional valence and arousal level of the stimuli on the scales of “pleasantness” (1−9), “arousal” (1−9), and “empathy” (1−9). Each stimulus was presented 40 times. The stimuli were presented in random order with gaps of 0.7−2.0 seconds between them. Background EEGs with open and closed eyes were recorded for 2 minutes at the beginning and end of the study, so the complete EEG registration took about 30 minutes.
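The presentation schedule can be illustrated with a short sketch: each of the 7 stimuli is repeated 40 times in random order with a jittered 0.7−2.0-second gap. The stimulus labels and the random seed are illustrative; the actual sequencing was handled by the Presentation software.

```python
import numpy as np

rng = np.random.default_rng(0)   # illustrative seed
stimuli = ["crying", "laughter", "screaming", "barking",
           "bird_singing", "coughing", "neutral_noise"]   # assumed labels for the 7 stimuli

sequence = rng.permutation(np.repeat(stimuli, 40))   # 280 presentations in random order
gaps_s = rng.uniform(0.7, 2.0, size=sequence.size)   # inter-stimulus gaps in seconds
```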
During the EEG recording, the subjects sat in a comfortable position in an armchair in an acoustically and electrically shielded chamber. The comatose patients lay in a hospital bed in the intensive care unit. The participants were instructed to remain calm, listen to the presented sounds, keep their eyes closed (to avoid visual interference), and avoid falling asleep. The stimuli were presented via earphones. EEG was recorded using an Encephalan device (Medicom MTD) with the recording of polygraphic channels (these data are not presented). Nineteen AgCl electrodes (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2) were placed according to the International 10−20 system. The electrodes on the left and right mastoids served as a joint reference under a unipolar montage. The vertical electrooculogram (EOG) was measured with AgCl cup electrodes placed 1 cm above and below the left eye. The horizontal EOG was measured with electrodes placed 1 cm lateral to the outer canthi of both eyes. The amplifier sampling rate was 250 Hz, and the filter was set to a bandpass of 1.6−30 Hz. The electrode impedances were maintained below 10 kΩ.
Eye movement artifacts were removed with an independent component analysis-based algorithm implemented in the EEGLAB plugin for MATLAB 7.11.0 (MathWorks Inc.). Muscle artifacts were cut out through manual data inspection. The continuous resting-state EEG of each subject was filtered with a bandpass filter set to 0.5−30 Hz.
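For readers who wish to reproduce this cleaning step outside EEGLAB, a minimal sketch using MNE-Python is shown below; this is an assumption-based stand-in for the EEGLAB ICA pipeline, and the file name, number of components, and automatic EOG-based component selection are illustrative.

```python
import mne
from mne.preprocessing import ICA

# Illustrative file name; the original analysis used the EEGLAB plugin for MATLAB.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
raw.filter(l_freq=0.5, h_freq=30.0)            # band-pass the continuous resting-state EEG

ica = ICA(n_components=15, random_state=0)     # component count is an assumption
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw)           # components correlated with the EOG channels
ica.exclude = eog_inds
ica.apply(raw)                                 # remove ocular components from the recording
```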
We segmented the EEG into fixed-length epochs with a 100-ms prestimulus period and a 1,100-ms poststimulus period for each type of stimulus. As a result, we obtained ~40 EEG fragments of 1,200 ms for each subject and each type of stimulus. After artifact rejection, we analyzed from 32 to 40 artifact-free EEG fragments for each participant and each type of stimulus (38.2 ± 1.7 fragments in the ‘Coma+’ group, 36.8 ± 1.9 in the ‘Coma−’ group, and 37.9 in healthy volunteers). The number of EEG fragments used did not differ significantly between the groups.
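A minimal NumPy sketch of this epoching step is shown below, assuming the continuous EEG is stored as a channels-by-samples array and the stimulus onsets are given in samples; the function and variable names are illustrative.

```python
import numpy as np

FS = 250                      # amplifier sampling rate, Hz
PRE, POST = 0.1, 1.1          # 100-ms prestimulus and 1,100-ms poststimulus periods

def epoch_eeg(eeg, onsets, fs=FS, pre=PRE, post=POST):
    """Cut continuous EEG (n_channels x n_samples) into fixed-length epochs
    around each stimulus onset (given in samples)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [eeg[:, t - n_pre:t + n_post]
              for t in onsets
              if t - n_pre >= 0 and t + n_post <= eeg.shape[1]]
    return np.stack(epochs)   # (n_epochs, n_channels, 300 samples = 1,200 ms)
```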
Next, we performed ERP analysis and analysis of linear and nonlinear features of EEG fragments.
For the ERP analyses, the EEG data were analyzed and processed using EEGLAB 14.1.1b, a neural electrophysiological analysis tool based on MATLAB (MathWorks). The EEG data were processed using a 1.6−30 Hz bandpass filter (a finite impulse response filter). The 50-Hz power-line interference was removed during processing. The data were re-referenced to a global brain average reference. We measured and analyzed the amplitudes and latencies of P50, N100, P200, P300, and N400. Based on the topographical distribution of the grand-averaged ERP activity, Fz, F3, F4, Cz, C3, and C4 were selected for the analysis of the P50 (20−120 ms) and N100 (80−180 ms) components; P200 (160−250 ms) was analyzed at the Cz, C3, C4, Pz, P3, P4, O1, and O2 electrode sites; and the P300 (250−500 ms) and N400 (250−500 ms) components were analyzed at Cz, C3, C4, Pz, P3, and P4. ERPs were visualized for each electrode using MATLAB. Figure 1 presents the ERPs at the Cz electrode, which was used for the measurements of all ERP components.
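The component measurements can be sketched as follows, assuming a single-channel averaged ERP sampled at 250 Hz with a 100-ms baseline. Positive components are taken as the maximum and negative components as the minimum within the windows listed above; the function name and the peak-picking rule are illustrative choices of this sketch.

```python
import numpy as np

FS, PRE_S = 250, 0.1
WINDOWS_MS = {                      # component search windows from the text
    "P50": (20, 120), "N100": (80, 180), "P200": (160, 250),
    "P300": (250, 500), "N400": (250, 500),
}

def peak_measures(erp, component, fs=FS, pre_s=PRE_S):
    """Return peak amplitude (µV) and latency (ms) of one ERP component."""
    lo, hi = WINDOWS_MS[component]
    i0, i1 = int((pre_s + lo / 1000) * fs), int((pre_s + hi / 1000) * fs)
    segment = erp[i0:i1]
    idx = segment.argmax() if component.startswith("P") else segment.argmin()
    latency_ms = (i0 + idx) / fs * 1000 - pre_s * 1000
    return segment[idx], latency_ms
```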
Considering the previous findings [5,28], we applied nonlinear EEG analysis to assess the emotional response to the presented sounds. The signal was bandpass filtered in the range of interest (1.6−30 Hz) with a 12th-order Butterworth filter. Higuchi’s fractal dimension (HFD) was evaluated using the Higuchi algorithm [39].
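A sketch of this step is given below: a 12th-order Butterworth band-pass (implemented here as a zero-phase filter in second-order sections for numerical stability; these are implementation choices of the sketch rather than details from the text) followed by Higuchi’s fractal dimension, with an assumed kmax value.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, fs=250, lo=1.6, hi=30.0, order=12):
    """12th-order Butterworth band-pass filter (zero-phase)."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension of a 1-D signal [39]."""
    x = np.asarray(x, dtype=float)
    n = x.size
    log_l, log_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # k curves, offsets m = 0..k-1
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum()
            # normalized curve length L_m(k)
            lengths.append(diff * (n - 1) / ((idx.size - 1) * k) / k)
        log_l.append(np.log(np.mean(lengths)))
        log_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_k, log_l, 1)       # HFD is the slope of log L(k) vs log(1/k)
    return slope
```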
HC reflects the change in frequency and indicates the similarity of the signal to a pure sine wave. This parameter was calculated for the wideband 1.6−30 Hz filtered signal as the ratio of the mobility of the signal’s first derivative to the mobility of the signal itself: HC = Mobility(y′)/Mobility(y), where Mobility(y) = √[var(y′)/var(y)] and y is the filtered EEG signal.
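A corresponding sketch of the HC computation, using the signal’s first difference as a discrete approximation of the derivative, is:

```python
import numpy as np

def hjorth_complexity(x):
    """Hjorth complexity: mobility of the first derivative divided by the
    mobility of the signal itself (equal to 1 for a pure sine wave)."""
    dx = np.diff(x)                   # discrete first derivative
    ddx = np.diff(dx)                 # discrete second derivative
    mobility = lambda s, ds: np.sqrt(np.var(ds) / np.var(s))
    return mobility(dx, ddx) / mobility(x, dx)
```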
We hypothesized that acoustical features, such as pitch and loudness, play a crucial role in emotion perception in comatose patients [30]. Thus, we calculated distances between the presented auditory stimuli according to their physical features (the mean values and standard deviations [SD] of loudness and pitch). We applied the coordinate method to calculate the distance between the acoustical features of two sounds using the formula AB = √[(xb − xa)² + (yb − ya)²] [40]. Here, A (xa; ya) and B (xb; yb) are the coordinates formed by the mean and SD of pitch (or loudness) for 2 different sounds. Thus, AB is the “distance” between coordinates A and B; it has a magnitude but no direction. In particular, the pitch distance = √[(crying mean pitch − laughter mean pitch)² + (crying pitch SD − laughter pitch SD)²]. Similarly, the distances between pleasantness and arousal scores for the stimuli were calculated using the same formula, where xa, xb, ya, and yb are the average “pleasantness” and “arousal” scores for a pair of sounds. The distances were calculated for each participant separately and then averaged within each group. For example, if a participant rated the sound “crying” with scores of 6 and 2 and the sound “laughter” with scores of 2 and 6 on the “arousal” and “pleasantness” scales, respectively, then AB = √[(2 − 6)² + (6 − 2)²] = 5.66. The program for these calculations was implemented in C# by the laboratory’s IT specialist mentioned in the Acknowledgments section. Finally, we obtained 21 distances between stimuli for loudness and pitch.
Similarly, we calculated the distances between the ERP component amplitudes for the 2 pairs of components, P200−N100 and P300−N400, using the formula AB = √[(xb − xa)² + (yb − ya)²], where A and B are the coordinates of the component amplitudes for 2 different sounds and xa, xb, ya, and yb are the amplitudes for the pair of sounds. As a result, 21 distances between stimuli were calculated for each pair of components. The distances were calculated for each participant and then averaged within each group [41]. The distances between the nonlinear EEG features that corresponded to different sounds were calculated with the same formula using the HFD and HC, where A and B are the coordinates of the HFD and HC values for 2 different sounds. These parameters were previously associated with different emotional responses and showed multidirectional changes during emotional perception.
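The distance computation for any of these 2-D feature spaces (mean/SD of pitch or loudness, ERP amplitude pairs, HFD/HC values, or pleasantness/arousal scores) reduces to the same Euclidean formula. A minimal sketch is given below, with the numeric check taken from the worked arousal/pleasantness example above; only those two example ratings come from the text, and the remaining ratings, names, and dictionary layout are illustrative.

```python
import numpy as np
from itertools import combinations

def feature_distance(a, b):
    """Euclidean 'distance' AB = sqrt((xb - xa)^2 + (yb - ya)^2)."""
    (xa, ya), (xb, yb) = a, b
    return np.hypot(xb - xa, yb - ya)

# Worked example from the text: "crying" rated (arousal=6, pleasantness=2),
# "laughter" rated (arousal=2, pleasantness=6).
assert round(feature_distance((6, 2), (2, 6)), 2) == 5.66

# For 7 stimuli, all pairwise distances give 21 values per feature space.
features = {"crying": (6, 2), "laughter": (2, 6), "screaming": (7, 1),
            "barking": (5, 3), "bird_singing": (3, 7), "coughing": (4, 4),
            "neutral_noise": (2, 5)}                        # illustrative ratings
distances = {pair: feature_distance(features[pair[0]], features[pair[1]])
             for pair in combinations(features, 2)}         # len(distances) == 21
```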
A one-way ANOVA and a repeated-measures ANOVA with Bonferroni correction for multiple comparisons (p < 0.05) were used to calculate the group differences in the ERP components (amplitudes and latencies) and nonlinear EEG features. One-way repeated-measures ANOVAs with Bonferroni correction for multiple comparisons (p < 0.05) were used to determine age effects on the EEG metrics. Differences in the EEG distances between the groups were assessed with repeated-measures ANOVAs with Bonferroni correction for multiple comparisons and Student’s t-test (p < 0.05). Spearman’s rank-order correlation coefficients were calculated for the distances of the ERP metrics, acoustic features, and subjective emotional assessments for each pair of stimuli (p < 0.05). Spearman’s rank-order correlation coefficients were also calculated to estimate the association between age and the distances of the ERP metrics (for the P200−N100 and P300−N400 complexes), the distances between acoustic features (x is the mean of pitch or loudness; y is the SD of pitch or loudness), and the distances between subjective emotional assessments for each pair of stimuli (x is pleasantness; y is arousal). The Bonferroni correction for multiple comparisons (p < 0.05) was applied.
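As an illustration of the correlation step, the sketch below computes Spearman’s rank-order correlation between two sets of 21 distances and applies a Bonferroni-adjusted threshold; the data and the number of tests are placeholders, not values from the study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
erp_distances = rng.random(21)     # e.g., P300-N400 distances (placeholder data)
pitch_distances = rng.random(21)   # e.g., pitch mean/SD distances (placeholder data)

n_tests = 4                        # assumed number of comparisons per participant
rho, p = spearmanr(erp_distances, pitch_distances)
significant = p < 0.05 / n_tests   # Bonferroni-corrected threshold
print(f"rho = {rho:.2f}, p = {p:.4f}, significant after correction: {significant}")
```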
Comatose patients with better outcomes (Coma+) demonstrated the most prominent response to the stimuli with the highest pitch and loudness indices (Table 3). The individual correlation analysis supported the group correlations: 15 of 20 patients had significant correlations (Supplementary Table 1; available online). No significant correlations were found in the Coma− group, except for 4 of the 24 patients.
For healthy participants, the EEG response did not show similar correlations with the acoustic parameters of the stimuli; instead, there was a significant correlation between the subjective assessment of the stimuli and the distances of the P300−N400 complex. Such individual significant correlations were found in 20 of 26 subjects.
The associations between the distances of the ERP components and the distances of the acoustical parameters of the stimuli in the comatose patients and healthy volunteers are depicted in Figure 2.
The ERP data for 7 stimuli are presented in Figure 1 and summarized in Table 4.
The control group demonstrated significant differences in N100 amplitudes depending on the stimulus type. The highest N100 amplitudes were recorded for the neutral stimulus, and the lowest values were for the dog barking and screaming sounds (Stimuli*Group effect − Post Hoc Bonferroni test: neutral stimulus vs. barking p = 0.0008, neutral stimulus vs. screaming p = 0.0014). The patients in the Coma+ group demonstrated an inverse response for these 3 stimuli (Group effect − Post Hoc Bonferroni test: unpleasant stimuli for the control group vs. Coma+ p < 0.0001, unpleasant stimuli for the control group vs. Coma− p = 0.00021). The N100 amplitudes were significantly lower for the neutral sound compared to crying and laughter in the control group (Stimuli*Group effect − Post Hoc Bonferroni test: neutral stimulus vs. crying p = 0.0063, neutral stimulus vs. laughter p = 0.0027). The coma group patients did not show significant differences in the N100 amplitudes between stimuli (F(4, 180) = 17.417, p = 0.00000, η2 = 0.31).
The P200 amplitudes were significantly higher for barking, screaming, and crying than for the neutral sound only in patients of the Coma+ group (F(6, 270) = 14.134, p = 0.00000, η2 = 0.24; Stimuli*Group effect − Post Hoc Bonferroni test: neutral stimulus vs. barking, screaming, crying p < 0.0001). There was no significant difference in the early ERP components between the pleasant and unpleasant stimuli, either in the control group or in the coma groups.
The later components (N300, N400, and P300) in the control group were associated with emotional valence. The P300 amplitude was significantly higher in the control group compared to the coma groups (F(2, 90) = 25.275, p = 0.00000, η2 = 0.37; Group effect − Post Hoc Bonferroni test: control group vs. Coma+ p < 0.0001, control group vs. Coma− p < 0.0001). However, the amplitudes for unpleasant (barking and screaming) and pleasant (bird singing and laughter) stimuli differed significantly only in the control group (F(2, 90) = 13.946, p = 0.00001, η2 = 0.21), and the P300 amplitudes were significantly higher for the unpleasant stimuli (Stimuli*Group effect − Post Hoc Bonferroni test: pleasant vs. unpleasant in the control group p = 0.0001). Similarly, the N400 amplitudes were significantly higher for the pleasant stimuli compared to the unpleasant ones (F(2, 90) = 10.5250, p = 0.00048, η2 = 0.14; Stimuli*Group effect − Post Hoc Bonferroni test: pleasant vs. unpleasant in the control group p = 0.0004). The N300 amplitudes were significantly higher for barking and screaming than for the pleasant sounds (F(4, 180) = 10.226, p = 0.00044, η2 = 0.21).
Therefore, for the later ERP components, the difference between the pleasant and unpleasant sounds was found both in the control group and in the Coma+ group. At the same time, we noticed that the pleasant stimuli did not differ from the neutral one in the comatose group. The statistical analysis showed that the difference between the N300 amplitudes for the pleasant stimuli (averaged bird song and laughter) and the neutral sound (noise) was not significant (t = 0.7, p = 0.46). On the other hand, the differences between the P300 amplitudes (t = 3.2, p = 0.003) and the N400 amplitudes (t = 2.9, p = 0.008) for the pleasant stimuli (averaged bird song and laughter) and the neutral sound (noise) in the control group were significant.
We found a significant association between the amplitudes of the N100 and P200 components and GCS scores in the pooled patient group during listening to screaming, barking, crying, bird singing, and coughing (Table 5). Individual correlations between the ERP indices and the acoustical indices in patients and healthy volunteers are presented in Supplementary Tables 2 and 3 (available online), respectively.
Subjects of the control group demonstrated a significant negative correlation between the N400 amplitude and the pleasantness of the following sounds: crying, screaming, and laughter, and a significant positive correlation between the P300 amplitude and the pleasantness of the following sounds: crying, screaming, barking, laughter, and bird singing. In addition, the subjective arousal ratings for barking, screaming, and bird singing also correlated significantly with the N400 amplitude.
Only healthy volunteers showed a significant increase in the fractal dimension of the EEG while listening to the sounds of coughing, screaming, and crying (F(4, 180) = 30.683, p = 0.0000, η2 = 0.41; Stimuli*Group effect − Post Hoc Bonferroni test: resting state vs. coughing, screaming, and crying in the control group p < 0.0001). On the other hand, the patients in the Coma+ group showed a significant decrease in the HC while listening to the sounds of crying, barking, and screaming (the sounds with the highest mean pitch and loudness) (F(4, 180) = 9.6157, p = 0.00000, η2 = 0.22; Stimuli*Group effect − Post Hoc Bonferroni test: resting state vs. crying p = 0.0027; resting state vs. barking and screaming p < 0.0001 in Coma+) (Fig. 3).
Here, we have studied the ERP and nonlinear EEG responses of comatose patients to acoustic stimuli and found that patients with better outcomes had recognizable responses to emotional sounds, unlike patients with worse outcomes. At the same time, this response was associated with the acoustical features of the emotional sounds, whereas the EEG response of healthy volunteers correlated with subjective pleasantness, arousal, and empathy during emotional perception. Similar results were obtained using emotional sounds of longer duration and demonstrated that patients after severe traumatic brain injury have difficulty processing the emotional tone of sounds and primarily perceive the particular acoustic parameters of emotional stimuli [30]. The acoustical analysis of stimuli showed that the emotional valence or impact of auditory stimulation might be changed by manipulating the prosodic parameters of sound, such as pitch, duration, and loudness [42-44]. Other researchers demonstrated that the ranges of pitch variation and the overall amplitudes of acoustic stimuli were solid acoustic indicators of the targeted vocal emotions, contributed to emotion recognition, and correlated with the emotional response [45-48]. The difference in EEG response to some acoustical features, such as loudness or tempo, could be one of the neurophysiological markers of mental diseases; for example, patients with schizophrenia show a significantly higher loudness dependence of auditory evoked potentials than controls [49-52]. Thus, the revealed correlation between the EEG changes during the perception of emotional stimuli and the acoustical features of the stimuli indicates, first, altered emotional perception in comatose patients and, second, the necessity and potential benefit of this response for the patient’s recovery [53].
The specificity of the ERP response to emotional stimuli should be discussed. The N100−P200 waveform revealed in both patients and healthy volunteers was previously associated with the processing of emotional sounds. In particular, the P200 amplitude was previously suggested as an indicator of active differentiation and recognition of emotional prosody [45] and was associated with the ability of subjects to perceive particular frequencies of auditory information [54]. In our study, the differences in P200 and N100 amplitudes between short emotionally charged and neutral non-verbal sounds did not reach significance in healthy participants. In contrast, the P200 amplitude was significantly more positive, and the N100 amplitude significantly more negative, for emotional sounds in comatose patients with a better outcome. These results suggest that patients with better outcomes retained the ability to process the pitch values that contribute to emotion recognition [46].
The contribution of the N100−P200 waveform to emotional perception was accompanied, at the same time, by the absence of the typical response of healthy subjects to the emotional valence of stimuli associated with the N400 and P300 amplitudes. Our results showed that the unpleasant stimuli induced a higher P300 amplitude in the control group, whereas the pleasant sounds were associated with a higher N400 amplitude. These differences between pleasant and unpleasant stimuli were not detected in the comatose patients, who demonstrated the N300 and N400 peaks only for some unpleasant stimuli. According to previous findings, the P300 and N400 components were used to study the processing of positive versus negative affective vocalizations, prosody, and music [55]. The P300 amplitude was previously associated with cingulate cortex activity and emotional states [56] and attributed to the perception of pleasant and hardly recognizable emotional states [57]. The N400 and N300 components responded to semantically and emotionally incongruent stimuli [58]; a higher N400 amplitude was previously associated with the perception of vocal emotional expression [59,60], and the N300 amplitude contributed to the attenuated response to an incongruent stimulus [61]. Considering these findings, we hypothesized that the patients’ response was associated only with the acoustic parameters of unpleasant stimuli, which have evolutionary priority over pleasant and neutral stimuli [62] and induced recognizable responses in comatose patients.
Finally, as mentioned, we found significant changes in some nonlinear EEG parameters associated with emotional perception [5,28] both in healthy individuals and in patients with mental or neurological diseases. In particular, previous studies demonstrated that a higher HFD was associated with different emotional states, including affect, fear, happiness, sadness, and empathy [5,26,28,63,64]. The dynamics of Hjorth complexity were previously linked to unpleasant emotions such as irritation [5,28]; a decrease in this parameter during emotional auditory perception was more typical for individuals with mental and neurological disorders than for healthy subjects [3,5,65].
In conclusion, the EEG of healthy subjects reflects a subjective emotional assessment of the stimuli, whereas in comatose patients it reflects the physical parameters of the sounds. The emotional perception of comatose patients was associated with the early ERP components, namely N100−P200, whereas the emotional perception of healthy volunteers was associated with the amplitudes of the N400 and P300. The comatose patients did not show the differences in EEG response between pleasant and unpleasant stimuli that are typical of healthy subjects; however, they demonstrated variable EEG activity for the neutral and emotional sounds. A higher HFD was associated with the emotional perception of healthy volunteers, whereas the emotional processing of comatose patients was accompanied by a decrease in HC.
This study has several limitations. The number of participants was relatively small. To mitigate this limitation, we matched the study and control groups by age and sex, and we also matched the patient subgroups by GCS score and the time passed after the injury. However, the applied scale and methods of clinical assessment do not take into account the specific characteristics of each patient and the adaptive capabilities of their body. We also tried to take into account the presence of chronic concomitant diseases and the emotional and personal characteristics of the patients before the injury; however, it was not always possible to obtain a complete premorbid history. Another limitation was the inability to examine the auditory thresholds in patients and adjust the volume of stimulation according to these thresholds. As a result, all subjects were presented with sounds of the same volume.
We thank the engineer O. Kashevarova and Dr G. Ivanitsky for implementing the cognitive space construction method and other technical help. We are grateful to Dr. V. Podlepich for providing the patients’ database. The authors thank Mikhail Atanov and Olga Kashevarova for writing programs and helping with data analysis.
This work was supported by the Russian Academy of Sciences and the Russian Foundation for Basic Research (16-04-00092).
No potential conflict of interest relevant to this article was reported.
Conceptualization: Galina V. Portnova. Data acquisition: Galina V. Portnova. Formal analysis: Galina V. Portnova, Elena V. Proskurnina. Funding: The State Assignment of the Ministry of Education and Science of the Russian Federation for 2021−2023. Supervision: Galina V. Portnova. Writing—original draft: Galina V. Portnova, Elena V. Proskurnina. Writing—review & editing: all authors.