Effective connectivity inference in the whole-brain network using the rDCM method for investigating the distinction between emotional states in fMRI data

ABSTRACT In recent years, the regression dynamic causal modelling (rDCM) method was introduced as a new variant of dynamic causal modelling (DCM) for deriving effective connectivity in whole-brain networks from functional magnetic resonance imaging (fMRI) data. In this research, we used data obtained during stimulation with an audio movie comprising different emotional states. We applied this method to two networks consisting of ten auditory regions and forty-four whole-brain regions, respectively, to study effective connections in different emotional states and to represent the distinctions between emotions. Significant effective connections were found in emotional-processing and auditory regions, and between visual and memory-related regions. We also observed distinctive connections between pairs of emotions in both models. The greatest number of significant distinctions in the coupling between regions was represented in happiness-anger and happiness-fear for the whole-brain model, and in happiness-sadness, sadness-love, and anger-love for the auditory model.


Introduction
Studying and recognising human brain function when confronting complex and natural stimuli such as different sounds, movies, and images (Brennan et al. 2010; Alluri et al. 2012; Glerean et al. 2012; Kassam et al. 2013; Emerson et al. 2015; Nguyen et al. 2016) is of great importance in cognitive neuroscience. Emotions are complex mental states and the origin of behavioural and internal actions and reactions in humans (Pessoa 2009, 2018; Lindquist et al. 2012; Purves et al. 2012). So far, many studies have investigated emotions (Brattico et al. 2011; Goulden et al. 2012; Pohl et al. 2013; Koelsch and Skouras 2014; Zhang et al. 2014; Alkozei and Killgore 2015; Sladky et al. 2015; Morawetz et al. 2017; Mazzola et al. 2020).
By investigating emotions in the human brain and considering extensive evidence, researchers believe it is unlikely that a single comprehensive system is responsible for all aspects of emotional processing. Recent meta-analytic studies have likewise found no definite markers of specific emotions. In fact, in studies of emotion, the engagement of particular structures such as the amygdala or insula (Purves et al. 2017) seems to depend on the nature of the task, the extent of the emotions evoked, and other factors (Purves et al. 2012; Pugh et al. 2021).
Studying the human brain as a complex network of functional interactions may yield new insights into large-scale neural connections. Ghahari et al. (2019, 2020) investigated the time-varying functional connectivity among 44 regions of interest using the Spatial Distance (Ghahari et al. 2019) and Jackknife Correlation (Ghahari et al. 2020) methods to examine emotions in complex natural functional magnetic resonance imaging (fMRI) data (Hanke et al. 2014). They used temporal network measures to investigate the distinction between different emotions, and showed that the brain network pattern changes across time during the expression of different emotions.
Effective connectivity represents directed connectivity between neuronal populations (Frässle et al. 2017). In neuroimaging, Granger causality (Wen et al. 2013; Friston et al. 2013) and dynamic causal modelling (DCM) (Penny et al. 2004) are the most common methods for evaluating effective connectivity. However, dynamic causal models are currently limited to smaller graphs, usually up to 10 regions (Frässle et al. 2017, 2018). The DCM method has been utilised in many studies estimating effective connectivity from fMRI datasets in healthy subjects. Dima et al. (2011) investigated human brain function during emotional face processing and found that the effective connection from the inferior occipital gyrus (IOG) to the lateral ventral prefrontal cortex (VPFC) increases more in anger than in fear or sadness (Dima et al. 2011). Nguyen et al. (2016), for the first time, studied the integration of internal and external environments in the insular cortex during the experience of dynamic emotions. Their effective connectivity analysis showed an insular hierarchy for processing states related to internal stimuli during naturalistic emotional experiences (Hanke et al. 2014). In other research, Seok and Cheong (2019) examined effective brain connections while playing anger-inducing film clips. They found positive connections between the left superior temporal gyrus (STG) and the left insula and from the left STG to the left orbitofrontal cortex (OFC); negative connections were shown between the left OFC and the left insula. Their results also demonstrated activity in the OFC, STG, anterior insula, fusiform gyrus, amygdala, and putamen (Seok and Cheong 2019). In another study investigating effective connections during emotional face processing (Jamieson et al. 2021), more significant positive connectivity from the right amygdala to the right dorsolateral prefrontal cortex (DLPFC) was found during the processing of sad and fearful facial expressions. Only in fear was more significant negative connectivity demonstrated, from the right ventromedial prefrontal cortex (VMPFC) to the right amygdala (Jamieson et al. 2021).
Developing large-scale networks that derive effective connectivity from neuroimaging data poses a key challenge for computational neuroscience (Frässle et al. 2017). In 2017, a new version of DCM called regression dynamic causal modelling (rDCM) was presented (Frässle et al. 2017) for assessing effective connectivity in large networks. This approach transfers linear DCM to the frequency domain and reformulates it as a special case of Bayesian linear regression. The rDCM approach uses a variational Bayesian method that makes inversion very rapid compared to classic DCM. In another study, Frässle et al. (2021) applied the rDCM method to resting-state fMRI data. Farahani et al. (2019) examined effective connectivity during complex natural stimulation (Hanke et al. 2014). They used the rDCM method to infer effective connectivity during various emotional states in a relatively small network consisting of 18 different brain regions, all selected on the basis of previous studies. Ultimately, they demonstrated the distinction of effective connections between different emotions.
Exploring whole-brain effective connections during the expression of different emotions can provide more information about brain function. Furthermore, processing and understanding auditory information in the brain may lead to the creation of emotional states; therefore, studying the regions engaged in auditory processing during emotional-auditory stimulation is of great importance. So far, most studies have investigated brain function in different emotions using methods that estimate small-scale directed networks (Goulden et al. 2012; Sladky et al. 2015; Mazzola et al. 2020), functional connections (Goulden et al. 2012), or time-varying functional connections (Ghahari et al. 2019, 2020). Also, most existing studies are concerned with controlled emotional and auditory stimuli. In this research, we studied brain function during five different emotions using a complex natural emotional dataset. In this fMRI dataset (Hanke et al. 2014), the stimulus was an audio movie that simulated emotions expressed during human life. We examined the effective connections in a whole-brain model. Furthermore, considering the type of stimulation, and to understand the neural distinctions between different auditory and emotional stimuli, we investigated and identified effective connectivity between auditory regions in the human brain. To estimate the effective connections between regions, we used the rDCM method and applied it to auditory and whole-brain models.

fMRI data
The fMRI data used are available at www.studyforrest.org; we downloaded them from http://psydata.ovgu.de/studyforrest/phase1/. Data were recorded from 20 right-handed subjects (aged 21-38 years, average age 26.6 years, 12 males) during long-term stimulation with an audio movie (Forrest Gump). The native language of the participants was German. All subjects had normal auditory and visual abilities and no record of neurological or mental disorders (Hanke et al. 2014). The original movie (2 hours) was divided into eight audio segments of approximately 15 minutes each. The experiment was performed in two sessions; in each session, four segments were played separately and consecutively. Between the sessions, participants left the scanner and, on average, started the second session after 15 minutes.
Functional images were obtained using a 32-channel head coil on a whole-body 7-Tesla Siemens MAGNETOM scanner with TR = 2 s, TE = 22 ms, FoV = 224 × 224 mm, and 36 axial slices. fMRI data were recorded with a high spatial resolution of 2.75 mm³. In total, 3599 volumes were recorded for each participant (451, 441, 438, 488, 462, 439, 542, and 338 volumes for audio movie segments 1-8, respectively) (Hanke et al. 2014).

Data preprocessing
We used the preprocessed BOLD dataset (bold_dico_dico7Tad2grpbold7Tad_nl), in which the functional images were motion- and distortion-corrected and non-linearly registered to the BOLD group template image. Images were spatially smoothed with a Gaussian kernel of 4 mm FWHM and high-pass filtered with a 120 s cut-off to remove baseline signal drifts and cardiorespiratory artefacts aliased into a lower frequency range (Hanke et al. 2014). Finally, the data of two participants were excluded from the analysis due to problems with image reconstruction and distortion correction.
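As an illustration of the high-pass filtering step, a 120 s cut-off can be implemented by regressing out a discrete cosine (DCT) drift basis, in the style of SPM. The sketch below is a minimal stand-in under that assumption, not the actual pipeline code, and the function name is ours.

```python
import numpy as np

def highpass_filter(ts, tr=2.0, cutoff=120.0):
    """Remove slow drifts from one BOLD time series (shape: n_timepoints,)
    by regressing out a discrete cosine basis. `cutoff` is the drift
    period in seconds; `tr` is the repetition time in seconds."""
    n = len(ts)
    # number of cosine regressors with period longer than the cutoff
    order = int(np.floor(2.0 * n * tr / cutoff))
    t = np.arange(n)
    # DCT-II basis functions modelling the low-frequency drift
    drift = np.column_stack(
        [np.cos(np.pi * (t + 0.5) * k / n) for k in range(1, order + 1)]
    )
    beta, *_ = np.linalg.lstsq(drift, ts, rcond=None)
    return ts - drift @ beta
```

With TR = 2 s and a 120 s cut-off, only fluctuations slower than roughly 1/120 Hz are removed; faster signal components pass through unchanged because the DCT basis vectors are mutually orthogonal.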

Determining regions of interest
In this research, we used a region of interest (ROI) based approach; hence, labelled atlases were used for obtaining the ROIs' time series. First, we had to extract the mask of each ROI from an atlas. For this purpose, we used FSL software (https://fsl.fmrib.ox.ac.uk). The masks were obtained from the Harvard-Oxford (HO) atlas (Evans et al. 2012).

Extracting ROIs' time series from data
We used SPM software (https://www.fil.ion.ucl.ac.uk/spm/software/spm12) to match the extracted masks to the data so that they had the same size. We then extracted each ROI's time series by averaging, using the MarsBaR toolbox (http://marsbar.sourceforge.net) in MATLAB. The 44 ROIs were selected from the HO atlas; they are listed in Table A1 with full names and acronyms. Most of these regions were selected from auditory, visual, and emotional-processing regions, considering previous studies (Purves et al. 2012, 2017) and our aims in this research.
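In essence, the averaging step reduces each ROI to one representative time series: the mean of the BOLD signal over the voxels inside the ROI mask. A minimal sketch of that operation (the function name and the (x, y, z, time) array layout are our assumptions):

```python
import numpy as np

def roi_time_series(bold, mask):
    """Average the BOLD signal over the voxels of one ROI.

    bold : 4-D array (x, y, z, time) of preprocessed fMRI data
    mask : 3-D boolean array of the same spatial shape, True inside the ROI
    Returns a 1-D array of length n_timepoints (what MarsBaR's averaging
    does, in essence).
    """
    assert bold.shape[:3] == mask.shape
    # boolean indexing yields (n_voxels_in_roi, n_timepoints)
    return bold[mask].mean(axis=0)
```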

Creating specific stimulus for each emotion
The dataset comprised eight segments, and each segment was a composition of different emotions. We aimed to study each emotion separately. To create the input for the rDCM analysis, we had to extract from the time series the time-points specific to each emotion. To this end, considering the second-by-second labelling of the movie, the time-points of consecutive volumes of each BOLD segment in which only one emotion label existed were regarded as the specific stimulus of that emotion. Consequently, the analysis was done on the time series extracted from four segments (3, 6, 7, and 8), which contained the specific stimulation of five emotions: happiness, sadness, anger, fear, and love. The number of time-points differed between segments; the length of each emotion and the number of time-points are shown in Table 1.
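The extraction rule above (keep only runs of consecutive volumes carrying exactly one emotion label) can be sketched as follows. The per-volume label format here is hypothetical; the actual studyforrest annotation files differ in detail.

```python
def emotion_segments(labels, target):
    """Return runs of consecutive volume indices whose only emotion label
    is `target`.

    labels : list with one set of emotion labels per volume (derived from
             the second-by-second annotation of the movie; hypothetical
             format)
    target : the emotion of interest, e.g. 'happiness'
    """
    runs, current = [], []
    for i, lab in enumerate(labels):
        if lab == {target}:          # exactly one label, and it is the target
            current.append(i)
        else:
            if current:              # close the current run of volumes
                runs.append(current)
            current = []
    if current:
        runs.append(current)
    return runs
```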

rDCM method
As mentioned in the Introduction, in recent decades one of the most common methods for the estimation of effective connectivity has been DCM. The most common form of DCM is its bilinear state equation, shown in Equation (1) (Penny et al. 2004; Stephan et al. 2008). In this research, to estimate effective connections, we used the rDCM approach, which is based on four changes to the implementation of the original DCM (Frässle et al. 2017). First, the linear state equation is translated from the time to the frequency domain. Then, the haemodynamic forward model (Friston et al. 2000) is linearised. In the third step, partial independence is assumed between the connectivity parameters. Finally, a Gamma prior distribution is used for the noise precision, which significantly increases computational efficiency in whole-brain networks (for more details, refer to Lomakina (2016)).
As mentioned above, in the first step a linear DCM equation is used to describe the neuronal state, as stated in Equation (2).
To translate it into the frequency domain, the Fourier transform is applied to Equation (2), yielding Equation (3) (Frässle et al. 2017). To create the haemodynamic model mapping neuronal states to BOLD signals, a convolution with a fixed haemodynamic response function (Stephan et al. 2007; Lindquist et al. 2009) is applied to the neuronal state equation, as stated in Equation (4) (Frässle et al. 2017), where ⊗ denotes convolution and ŷ_B is the noiseless prediction of the data.
To account for the discrete nature of the data (considering the nature of computers and their discretisation), Equation (4) is transformed into its discrete form by applying the discrete Fourier transform and discretising frequency and time, as stated in Equation (5) (Frässle et al. 2017), where N is the number of data points, T is the time interval between subsequent points, Δω is the frequency interval, and m = [0, 1, . . . , N−1] is a vector of frequency indices. Equation (5) is therefore the discrete representation of the (noiseless) BOLD equation in the frequency domain.
Measured fMRI data are also affected by noise. After obtaining a relation for the discrete data (Equation (5)), the model is completed with measurement noise, as in Equation (6),
where v is a noise vector of the form given in Equation (7) (Frässle et al. 2017). To keep the likelihood function simple and remove the dependence of the noise on the connectivity parameters, a mean-field approximation is used. Under this approximation, Equation (6) can be rewritten as a standard multiple linear regression (Equation (8)), in which v_i is an independent random vector with noise precision τ_i.
This makes a very efficient (variational) inversion of the DCM possible. Here Y (the dependent variable), X (the design matrix, a set of regressors), and θ (the parameter vector) are defined in Equation (9) (Frässle et al. 2017), where y_i is the measured signal in area i and u_k is the kth experimental input to that area.
After applying these modifications to the original DCM, a variational Bayesian derivation (Friston 2002) is used to obtain the estimates. Under several assumptions on the parameters and hyper-parameters, and through an iterative scheme run until a termination condition is reached, estimates of the noise and connectivity parameters are obtained (for more details, refer to Lomakina (2016) and Frässle et al. (2017)).
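Since the numbered equations referenced in this section do not appear in the text, the following LaTeX sketch reconstructs them in the notation of Frässle et al. (2017); the exact forms in the original publication may differ in detail.

```latex
% (1) bilinear DCM neuronal state equation
\frac{dx}{dt} = \Big( A + \textstyle\sum_{j=1}^{m} u_j B^{(j)} \Big) x + C u

% (2) linear DCM used by rDCM (all B^{(j)} = 0)
\frac{dx}{dt} = A x + C u

% (3) the same equation after the Fourier transform
i\omega\, \hat{x}(\omega) = A\, \hat{x}(\omega) + C\, \hat{u}(\omega)

% (4) haemodynamics: convolving both sides with a fixed HRF h gives the
%     noiseless BOLD prediction \hat{y}_B = h \otimes x
\frac{d(h \otimes x)}{dt} = A\,(h \otimes x) + C\,(h \otimes u)

% (5) discrete frequency-domain form (DFT over N points, sampling interval T)
\hat{y}_m \, \frac{e^{2\pi i m/N} - 1}{T} = A\, \hat{y}_m + C\, (\hat{h}\hat{u})_m

% (6)-(7) adding measurement noise with precision \tau
\hat{y}^{\mathrm{obs}} = \hat{y}_B + \hat{v}, \qquad
v \sim \mathcal{N}(0, \tau^{-1} I)

% (8)-(9) per-region multiple linear regression: for region i, Y_i collects
% the transformed signal, X the regressors, and \theta_i = [A_{i\cdot}, C_{i\cdot}]
Y_i = X \theta_i + v_i, \qquad v_i \sim \mathcal{N}(0, \tau_i^{-1} I)
```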

Implementation of rDCM method
To implement the rDCM method, after extracting the time series of the ROIs and the specific stimulus of each emotion, we defined the models. In this study, we considered two models (an auditory and a whole-brain model), described below. After forming the assumed models by specifying the coupling between regions (A), the driving inputs (C), and the inputs (U), we used the rDCM method to estimate and derive the effective connections in the brain.
To apply the rDCM method, we used the rDCM toolbox (www.translationalneuromodeling.org/tapas/releases) in the MATLAB environment. For each emotion, estimates of the relevant parameters of the assumed models were obtained in several steps comprising forward and backward paths: in the forward step, the haemodynamic forward model was created, and in the backward step, the connectivity parameters were estimated by variational Bayesian derivation.
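The backward (estimation) step amounts, per region, to variational Bayesian linear regression with a Gaussian prior on the connectivity parameters and a Gamma prior on the noise precision. The following is an illustrative real-valued sketch of that core computation; the function and parameter names are ours, and this is not the TAPAS toolbox implementation.

```python
import numpy as np

def bayes_linreg(X, y, prior_prec=1.0, a0=1e-3, b0=1e-3, n_iter=16):
    """Variational Bayesian linear regression: Gaussian prior on the
    weights (precision `prior_prec`), Gamma(a0, b0) prior on the noise
    precision. Returns the posterior mean and covariance of the weights
    and the expected noise precision."""
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    tau = a0 / b0                        # initial expected noise precision
    for _ in range(n_iter):
        # Gaussian posterior over the weights given the current precision
        S = np.linalg.inv(tau * XtX + prior_prec * np.eye(p))
        mu = tau * S @ Xty
        # Gamma posterior update for the noise precision
        a = a0 + 0.5 * n
        resid = y - X @ mu
        b = b0 + 0.5 * (resid @ resid + np.trace(XtX @ S))
        tau = a / b
    return mu, S, tau
```

In rDCM the same regression is solved region by region in the frequency domain, which is why the inversion scales to whole-brain networks.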

Auditory model
Considering previous studies (Purves et al. 2012; Hu et al. 2017) and the regions related to auditory processing in the brain, we selected ten auditory regions, shown in Figure 1(a). In this model, the input entered all the regions, and the connections between the regions were full (please see Figure 1(b)). Five such models were formed, one per emotion, with the specific stimulus of that emotion entered as the input. The purpose of this model was to investigate the effective connectivity between auditory regions while emotional stimulation was applied by the audio movie, and to study how the input affects the regions. In other words, with this model we examined which auditory regions have effective connections with one another while each emotion is expressed in the brain, and which region is most active in each emotion. To define this model, we used DCM12 in SPM12.

Whole-brain model
In this model, we considered 44 regions. The input entered all the auditory regions, and the connections between the regions were full. This model examined more regions than previous studies, so new aspects of brain function under emotional stimulation could be studied. The aim of this model was to examine the effective connectivity between most brain regions while the different emotional stimuli of the audio movie were applied. With this model, we examined which regions have effective connectivity with one another during the expression of each emotion, and which region is most active in each emotion. The existence or non-existence of significant distinctions between the five emotional stimuli was also studied.
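The two model specifications differ only in which regions receive the driving input: in the auditory model all ten regions are driven, while in the whole-brain model only the auditory regions are. As a sketch (the helper name is ours):

```python
import numpy as np

def specify_model(n_regions, input_regions):
    """Binary DCM structure matrices: A (coupling) is full, so every region
    may influence every other; C (driving input) is 1 only for the regions
    that receive the stimulus."""
    A = np.ones((n_regions, n_regions), dtype=int)
    C = np.zeros((n_regions, 1), dtype=int)
    C[list(input_regions), 0] = 1
    return A, C

# auditory model: 10 regions, all driven by the emotion-specific stimulus
A_aud, C_aud = specify_model(10, range(10))
# whole-brain model: 44 regions, only the 10 auditory regions driven
# (assuming, for illustration, that they occupy the first 10 indices)
A_wb, C_wb = specify_model(44, range(10))
```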

Statistical comparisons
We used the non-parametric permutation test (Nichols and Holmes 2001) for the statistical comparisons in this article. For between-group comparisons, to express the significant distinctions in the driving inputs and the directed coupling between regions, 100,000 permutations were created in which the calculated connections of the subjects were randomly exchanged between the groups; all comparisons were two-tailed. The permutations were performed separately for each pair of emotions. For multiple comparisons, the significance level was defined as p ≤ 0.005 (Bonferroni-corrected). We considered the mean-difference and the median-difference as test statistics; the median was selected because the mean is affected by outliers. The results for the mean-difference test statistic are presented in the Supplemental Material. To determine the significance of the driving inputs and the directed coupling between regions within a specific emotion, 1,000 permutations for the auditory model and 10,000 permutations for the whole-brain model were performed. In each permutation, the order of the effective connections of each subject was randomly exchanged, and the values were then averaged over subjects. The 975th largest value of the permutation distribution (in the auditory model) and the 9750th largest value (in the whole-brain model) were used as thresholds for excitatory connections (p ≤ 0.05, two-tailed), and the 25th smallest value (auditory model) and the 250th smallest value (whole-brain model) as thresholds for inhibitory connections (p ≤ 0.05, two-tailed).
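The between-group comparison above can be sketched as a standard two-tailed permutation test: group labels are shuffled, the mean- or median-difference is recomputed, and the p-value is the fraction of permutations at least as extreme as the observed difference. The function name is ours, and this minimal sketch omits the Bonferroni step.

```python
import numpy as np

def permutation_test(x, y, stat=np.mean, n_perm=10000, seed=0):
    """Two-tailed permutation test for a difference in `stat` (e.g. mean
    or median) between two groups of per-subject connection strengths.
    Returns the observed difference and its permutation p-value."""
    rng = np.random.default_rng(seed)
    observed = stat(x) - stat(y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random group relabelling
        diff = stat(pooled[:len(x)]) - stat(pooled[len(x):])
        if abs(diff) >= abs(observed):            # two-tailed comparison
            count += 1
    return observed, count / n_perm
```

Passing `stat=np.median` gives the median-difference variant used for the main results.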

Results
In this study, considering our aims, two sets of results were obtained. First, the significant effective connections (p ≤ 0.05) in each emotion were extracted. Then, the significant distinctions (p ≤ 0.005, Bonferroni-corrected) in effective connections between pairs of emotions were expressed. From these, it can also be concluded between which emotions, and in which connections, the greatest significant distinctions exist.

Effective connectivity in each emotion
In Figure 2, the significant effective connections (p ≤ 0.05) between regions are expressed. The significant driving inputs (p ≤ 0.05) in each emotion are represented in Table 2.
Considering Figure 2, the greatest number of significant connections was found in the emotions of sadness and happiness. In each emotion, the highest connectivity strength was related to the connection from pars triangularis (F3t) to pars opercularis (F3o). Regarding these results, the reciprocal connection between F3t and F3o was common across the different emotions, and these connections seem not to depend on the specific stimulus of each emotion. However, the other connections differed between emotions in both strength and type. Considering Table 2, the significant driving inputs were almost entirely different in each emotion; only one significant driving input was found in love.

Effective connectivity between pairs of emotions

Figure 3 represents the effective connections with a significant distinction (p ≤ 0.005, Bonferroni-corrected) between each pair of emotions, using the median-difference test statistic. The colour of a connection indicates in which emotion the median value of that connection across all subjects is higher: anger, fear, happiness, love, and sadness are denoted by red, yellow, green, blue, and black, respectively.
As shown in Figure 3, in the coupling between regions (A matrix), the greatest number of significant distinctions was found in happiness-sadness. In the regions receiving the input (C matrix), the greatest number of significant distinctions was found in fear-sadness.
The results for the mean-difference test statistic are illustrated in Supplementary Fig. S.2. No significant distinction in the coupling between regions was observed in happiness-anger or happiness-fear, and no significant distinction in the regions receiving the input was observed in sadness-anger.
It is noteworthy that no significant distinction was observed between fear and anger with either the mean-difference or the median-difference test statistic. The full names of the regions are mentioned in Table A1. We used the code available at https://github.com/paul-kassebaum-mathworks/circulargraph to illustrate the connections.

Effective connectivity in each emotion
Considering the results, the greatest number of significant connections was revealed in the emotions of happiness and love.
The highest connectivity strengths in happiness were related to inhibitory connections from the cuneal cortex (CN) to the supracalcarine cortex (SCLC), from CN to the occipital pole (OP), and from the intracalcarine cortex (CALC) to the lingual gyrus (LG), and to excitatory connections from CALC to the parietal operculum cortex (PO), from PO to F3t, from the central opercular cortex (CO) to F3t, and from OP to the angular gyrus (AG). The highest connectivity strengths in love were related to inhibitory connectivity from CALC to LG and excitatory connectivity from SCLC to F3o. Figure 4(c) shows the driving inputs in happiness, love, and sadness (for the other emotions, please see Supplementary Fig. S.3C). The significant driving inputs were different in each emotion. The greatest number of significant driving inputs

Figure 3. Significant coupling between regions (A matrix) and driving inputs (C matrix) in the auditory model (by the median-difference test statistic) between the emotions of (a) anger and love, (b) fear and love, (c) fear and sadness, (d) happiness and anger, (e) happiness and love, (f) happiness and fear, (g) sadness and love, (h) happiness and sadness, (i) sadness and anger. The values of connections were Bonferroni-corrected (p ≤ 0.005). The colour of a connection indicates that the median value of that connection across all subjects in one emotion is higher than in the other. Anger, fear, happiness, love, and sadness are denoted by red, yellow, green, blue, and black, respectively. The full names of the regions are mentioned in Table A1.

Figure 5. Significant coupling between regions (A matrix) and driving inputs (C matrix) in the whole-brain model (by the median-difference test statistic) between the emotions of (a) anger and love, (b) fear and love, (c) sadness and anger, (d) happiness and love, (e) happiness and fear, (f) happiness and anger. The values of connections were Bonferroni-corrected (p ≤ 0.005). The colour of a connection indicates that the median value of that connection across all subjects in one emotion is higher than in the other. Anger, fear, happiness, love, and sadness are denoted by red, yellow, green, blue, and black, respectively. The full names of the regions are mentioned in Table A1.
was revealed in love and happiness. As shown in Figure 4(c), the strongest driving input in love was the inhibitory input to Heschl's gyrus (H), and no excitatory input was found. The strongest driving inputs in happiness were the excitatory inputs to F3o and F3t. The driving inputs in love and sadness were inhibitory, with no excitatory inputs observed.

Effective connectivity between pairs of emotions
In Figures 5 and 6, the significant distinctions (p ≤ 0.005, Bonferroni-corrected) between pairs of emotions in the effective connections and in the regions receiving the input are represented. The colour of a connection indicates that the median value of that connection across all subjects in one emotion is higher than in the other; anger, fear, happiness, love, and sadness are denoted by red, yellow, green, blue, and black, respectively. As shown in Figure 5, in the coupling between regions, the greatest number of significant distinctions was found in happiness-anger (Figure 5(f)) and happiness-fear (Figure 5(e)). In happiness-anger (Figure 5(f)), the highest level of significant difference was observed in the connections from the insular cortex (INS) to the anterior division of the inferior temporal gyrus (T3a), from H to the left amygdala (Amy.L), and from the occipital fusiform gyrus (OF) to AG. In the regions receiving the input, the greatest number of significant distinctions was found in happiness-love (Figure 5(d)) and fear-love (Figure 5(b)); the highest level of significant difference was related to the regions H and planum polare (PP) in fear-love (Figure 5(b)) and happiness-love (Figure 5(d)).
In Figure 6(a), the highest level of significant difference between happiness and sadness is observed in the connections from OF to Amy.L and from the posterior division of the parahippocampal gyrus (PHp) to Amy.L. Between sadness and love (Figure 6(c)), the highest level of significant difference was found in the connections from PHp to Amy.L, from OP to the planum temporale (PT), and from the left pallidum (Pall.L) to T3a. No significant driving input was revealed in this pair of emotions.
The results for the mean-difference test statistic are illustrated in Supplementary Figs. S.4 and S.5. In the coupling between regions, the greatest number of significant distinctions was found in happiness-fear (Fig. S.4E) and happiness-anger (Fig. S.4F). The highest level of significant difference in happiness-anger (Fig. S.4F) was related to the connections from H to Amy.L, from PHp to the right amygdala (Amy.R), and from T3a to the left putamen (Put.L). In the regions receiving the input, the greatest number of significant distinctions was found in happiness-love (Fig. S.4D) and anger-love (Fig. S.4A); the highest level of significant difference was related to the posterior division of the middle temporal gyrus (T2p), the anterior division of the middle temporal gyrus (T2a), and H in anger-love (Fig. S.4A), and to H, PP, and PT in happiness-love (Fig. S.4D). Between sadness and love (Fig. S.5C), only one significant distinction was found, in the connection from PHp to Amy.L.

Figure 6. Significant coupling between regions (A matrix) and driving inputs (C matrix) in the whole-brain model (by the median-difference test statistic) between the emotions of (a) happiness and sadness, (b) fear and sadness, (c) sadness and love. The values of connections were Bonferroni-corrected (p ≤ 0.005). The colour of a connection indicates that the median value of that connection across all subjects in one emotion is higher than in the other. Happiness, love, and sadness are denoted by green, blue, and black, respectively. The full names of the regions are mentioned in Table A1.
In happiness-sadness (Figures 6(a) and S.5A), the greatest number of significant distinctions was related to connections ending in the amygdala, especially Amy.L. Compared to the other pairs of emotions, it seems that the amygdala has an important role in creating the distinction between happiness and sadness.

Discussion
In this research, we used regression dynamic causal modelling to analyse an emotional fMRI dataset. This method is a new version of DCM that offers a computationally very efficient analysis of effective connectivity in large-scale brain networks. Using experimental data, we applied rDCM to a relatively small network containing ten regions and to a whole-brain model consisting of 44 regions. Ultimately, we found significant effective connections in emotional-processing and auditory regions, and also between visual and memory-related regions.
Although using linear models for complicated systems such as the brain does not seem fully appropriate, we could show that these models are effective for exploring brain connections during complex emotional stimulation and for studying large-scale networks. It is also possible to use the results of these linear models as a priori knowledge in other research. The whole-brain model makes it possible to present a more realistic physiological model than small models. Using the rDCM method, model inversion took less than one second in the whole-brain model. The rDCM method has no limitation on the selection of ROIs, and the speed of calculating and estimating the model parameters is higher than that of the DCM method; therefore, a whole-brain network and new connections can be considered. Also, graph theory, which has commonly been applied to examine structural and functional connections, can be utilised to investigate effective connections (Frässle et al. 2017).
Reviewing the results of the effective connections, and considering the stimulation type of the dataset used in this article, the connections between the auditory regions play an important role in both models. Furthermore, the regions engaged in emotional and visual processing are of great importance in the whole-brain model. In the auditory model, similar connections were revealed in all emotions, and the highest connectivity strength was related to the reciprocal connection between the inferior frontal gyrus (pars triangularis) and the inferior frontal gyrus (pars opercularis). However, the other connections found in each emotion differed from one another in both strength and type. The pars opercularis and pars triangularis may be related to recognising voice tone in spoken native languages and to the ability to translate from a language back to one's native language, respectively (Elmer 2016; Schremm et al. 2018). It can therefore be stated that the reciprocal connection between these two regions was created by the audio stimulus. Among the driving inputs in both models, some connections were inhibitory in one emotion while excitatory in others. In general, it can therefore be concluded that the activity of the areas involved in auditory processing differs depending on the type of each emotion.
Considering the two assumed models, the results differed to some extent between them. While the whole-brain model represented few distinctions between happiness-love and sadness-love, the auditory model showed extensive distinctions in these states. The results of applying the rDCM method to the mixed model (containing 18 regions) (Farahani et al. 2019) were also somewhat different: in the mixed model, there were distinctions between happiness-fear, happiness-anger, and happiness-love. Due to the high number of regions in the whole-brain model, and its greater realism compared to small models, the effective connections differed from those of the mixed model. For instance, in the whole-brain model, the connections ending in the amygdala were very important compared to the previous study (Farahani et al. 2019).
Our results are consistent with the findings of previous studies (Purves et al. 2012, 2017; Nguyen et al. 2016; Seok and Cheong 2019; Pugh et al. 2021; Jamieson et al. 2021). In the anger emotion, we found an increased effective connection from the insular cortex to the superior temporal gyrus (posterior division), which is compatible with previous findings (Mazzola et al. 2016; Seok and Cheong 2019). The insular cortex is associated with consciousness and plays an important role in the experience of pain and of several basic emotions, such as anger, fear, happiness, and sadness (Bushara et al. 2001, 2003; Wager 2002). Anatomically, the insular cortex can integrate information about body states into higher-order cognitive and emotional processes (Craig 2002). In happiness, we found a connection from the occipital pole to the temporal occipital fusiform cortex, which had been reported in another study (Fairhall and Ishai 2006). Farahani et al. (2019) reported that the main connections in the emotions of anger, fear, happiness, love, and sadness were those between the planum temporale and the middle temporal gyrus (posterior division), Heschl's gyrus and the superior temporal gyrus (posterior division), the inferior frontal gyrus (pars opercularis) and the insular cortex, and the insular cortex and the left hippocampus, as well as from the insular cortex to the inferior frontal gyrus (pars triangularis) and from the planum temporale to the insular cortex. They found the highest number of significant distinctions in the coupling between regions in happiness-anger, happiness-love, and happiness-fear. In the regions receiving the input, they revealed the highest number of distinctions in fear-love, fear-sadness, and happiness-sadness. Overall, our findings are largely consistent with their results.
It is worth mentioning that some connections differed from those reported in previous studies, which could be due to the different functioning of the brain as a large-scale network, and also to the type of auditory stimulation, which is associated with everyday human life and makes people empathise with the movie.
In this study, by considering 44 ROIs, we found additional effective connections beyond those mentioned in previous studies (Fairhall and Ishai 2006; Mazzola et al. 2016; Purves et al. 2017; Seok and Cheong 2019; Farahani et al. 2019). Among these is a connection from the occipital fusiform gyrus to the angular gyrus; the angular gyrus is related to language and to memory retrieval. Considering the role of visual regions in these connections, it can be interpreted that, during emotional auditory stimulation, visual and spatial imagination may lead to the appearance of such connectivity in the brain.
Another effective connection that appears in the distinction between emotions is that from the right hippocampus to Heschl's gyrus. This connection is related to the emotional part of memory, which imbues memories with emotional content. We also observed four other connections when studying the distinction between emotions: connections from the intracalcarine cortex to the angular gyrus and from the parahippocampal gyrus (posterior division) to the left and right amygdala are among these findings. In addition, further connections between auditory regions and other regions, such as the putamen and the inferior temporal gyrus (anterior division), were found. In distinguishing the connectivity between regions among emotions, the connections ending in the amygdala, especially the left amygdala, played an important role.
In this research, given the type of data, the stimulation was presented as a single block without repetition, which may limit the reliability of the results. To better derive effective connections between regions engaged in emotional processing, repeated blocks could be used. Moreover, this method was applied to data from healthy people, and its performance for neurological disorders and pathophysiological studies remains unclear.
By using the rDCM method to derive effective connections from an fMRI dataset acquired during a complex, naturalistic emotional auditory stimulation, we were able to reveal distinctions between different emotional states. We defined two models (an auditory and a whole-brain model) to investigate the human brain under realistic conditions. Ultimately, different effective connections were found within each emotion and between emotional states in each model, and our results were largely in agreement with previous studies.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
The author(s) reported there is no funding associated with the work featured in this article.