Gaussian process classification accuracies (black bars) for all 15 conditions compared with support-vector machine accuracies (shaded bars). *P < .05; † P < .10.
Increase in accuracy for the decision tree model integrating conditions (83%) compared with the median of all Gaussian process (GP) classifiers (60%) and the most accurate single-GP classifier alone (72%).
Optimal decision tree model showing variables relevant for overall classification, using gaussian process (GP) classifiers. A subject's predicted probability of being a patient (pGP), which is derived from functional magnetic resonance imaging data related to the processing of neutral facial expressions, was most informative for classification of the whole sample of subjects. Subjects in the 2 resulting subsamples could be classified best using pGP derived from data related to reward (actual large reward) and safety (anticipation of no loss). The brain maps show node-specific distribution maps: the shades of blue indicate left-branch classification; the shades of red indicate right-branch classification.
Hahn T, Marquand AF, Ehlis A, Dresler T, Kittel-Schneider S, Jarczok TA, Lesch K, Jakob PM, Mourao-Miranda J, Brammer MJ, Fallgatter AJ. Integrating Neurobiological Markers of Depression. Arch Gen Psychiatry. 2011;68(4):361-368. doi:10.1001/archgenpsychiatry.2010.178
Psychiatric disorders are, at present, diagnosed on the basis of behavioral symptoms and course of illness, according to standard classification systems such as the DSM-IV or the International Statistical Classification of Diseases, 10th Revision (ICD-10). In recent years, however, interest in biomarkers of psychiatric disorders has increased dramatically.1
Simultaneously, the development and application of powerful whole-brain pattern classification algorithms has brought single-subject classification based on neurobiological markers within reach. These procedures furnish predictions based on spatial or spatiotemporal patterns within the data while also making use of information encoded by correlations between brain regions.2 It is this multivariate nature of pattern-recognition algorithms that leads to increased sensitivity over univariate methods.3,4 Generally, pattern recognition is a field within the area of machine learning that is concerned with the automatic discovery of regularities in data through the use of computer algorithms. Using these regularities, a computer can classify data into different categories.5 In the context of neuroimaging, brain images are treated as spatial patterns and pattern-recognition approaches are used to identify statistical properties of the data that discriminate between 2 groups of subjects (eg, patients and controls) or 2 cognitive tasks. A classifier based on pattern recognition is trained by providing examples of the form < x, c>, where x represents a spatial pattern and c is the class label (eg, c = +1 for patients and c = −1 for controls). Each spatial pattern (eg, whole-brain image) corresponds to a point in the input space, and each voxel in the brain image represents 1 dimension of this space. During the training phase, the pattern-recognition algorithm finds a decision function that separates the examples in the input space according to the class label. Once the decision function is determined from the training data, it can be used to predict the class label of a new test example. There are different approaches to determine the decision function depending on the learning method used. Generally, it is important to have a decision function that classifies both the training data and the test data correctly. 
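To make the training scheme of < x, c> pairs concrete, the sketch below trains a toy nearest-class-mean rule on made-up "voxel" patterns. It is an illustrative stand-in only, far simpler than the GP and SVM classifiers used in the study; the data values are invented.

```python
# Toy sketch of pattern classification: each "image" x is a list of voxel
# values; c = +1 (patient) or -1 (control). The nearest-class-mean rule
# stands in for the far more sophisticated GP/SVM classifiers in the article.

def train(examples):
    """Compute one mean pattern per class from (x, c) training pairs."""
    means = {}
    for x, c in examples:
        sums, n = means.get(c, ([0.0] * len(x), 0))
        means[c] = ([s + v for s, v in zip(sums, x)], n + 1)
    return {c: [s / n for s in sums] for c, (sums, n) in means.items()}

def predict(means, x):
    """Assign the class whose mean pattern is closest (squared Euclidean)."""
    def dist(m):
        return sum((a - b) ** 2 for a, b in zip(x, m))
    return min(means, key=lambda c: dist(means[c]))

# Four hypothetical 3-voxel training patterns, two per class.
training = [([1.0, 0.9, 1.1], +1), ([1.2, 1.0, 0.8], +1),
            ([0.1, 0.0, 0.2], -1), ([0.0, 0.2, 0.1], -1)]
model = train(training)
print(predict(model, [1.1, 1.0, 0.9]))  # new test pattern -> +1 (patient)
```

The decision function here is the set of class means; a held-out test pattern is classified by whichever mean it falls nearest to, mirroring the train/test split described above.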
In this regard, gaussian process (GP) classifiers, recently introduced in the field of neuroimaging, have consistently shown high levels of performance.3
Using pattern-recognition algorithms suitable for functional magnetic resonance imaging (fMRI) data, Zhang et al6 demonstrated that it is possible to separate drug-addicted subjects from healthy controls. Since then, related techniques have shown potential for highly accurate single-subject classification in a number of clinical populations involving disorders such as Alzheimer disease,7,8 attention-deficit/hyperactivity disorder,9 and schizophrenia10 (for a review, see Demirci et al11). In recent years, such approaches have also shown their potential for high-accuracy classification in the context of depression: For example, Fu et al12 correctly classified depressive patients on the basis of their neural response during the presentation of sad faces (with 74% accuracy for medium-intensity sad faces and 76% accuracy for high-intensity sad faces). Corresponding to the impaired recognition of neutral facial expressions on the behavioral level,13 depressive patients could also be identified on the basis of their neural response pattern following neutral facial expressions (accuracy rate, 87%).12 In the same line of research, Marquand et al14 investigated the functional neuroanatomy of verbal working memory as a potential diagnostic biomarker for depression. They found that prediction accuracy based on fMRI data during an n-back task was highest (accuracy rate, 68%) in the 2-back condition.14 Attempting to predict treatment response, Costafreda et al15 were able to predict response to cognitive behavioral therapy with more than 78% accuracy on the basis of neural responses following the presentation of sad facial expressions. Using neutral facial expressions with the same algorithm provided a comparably high accuracy rate.15 Applying a pattern-recognition algorithm to anatomical MRI data, Costafreda et al16 found that it was possible to predict response to pharmacological treatment with an accuracy of 89%.
Despite these advances, current classification approaches are mostly based on single neurobiological markers (ie, the neural responses related to a single pathologically deviating process or symptom). Although first attempts to combine 2 sources of potentially clinically relevant neuroimaging data have been successful,17,18 to date, a method that integrates results from multiple classifiers is not generally available. Considering the fact that all psychiatric disorders are diagnosed on the basis of multiple symptoms associated with a potentially large number of neural processes, this appears conceptually unsatisfying and methodologically suboptimal. We therefore propose a principled procedure integrating information from multiple neurobiological markers, to allow for a more comprehensive, symptom-related classification of psychiatric disorders and demonstrate its utility for classification. To provide a realistic estimate of the algorithm's potential utility, we investigate a group of patients who experienced depressive episodes (category F32; n = 15), recurrent depressive disorder (category F33; n = 10), and bipolar affective disorder (category F31; n = 5) diagnosed according to the ICD-10 (DSM-IV codes 296.xx). These patients were explicitly recruited regardless of current medication and at different stages of the respective disorders (ie, presenting with varying degrees of depressive symptoms). All patients were in a depressed phase or recovering from a recent one; none showed manic symptoms.
These disorders are a prime example of the necessity to consider multiple clinical symptoms in diagnosis because they share 2 core symptoms: lowered mood and anhedonia, which are known to be related to a number of altered affective and motivational processes. Among these processes are an increased propensity to negative emotional reactions as well as a decreased motivation to seek rewards and a reduced ability to experience rewards.19- 21 In particular, studies investigating the processing of emotional cues showed that patients who had major depressive disorders preferentially paid attention to sad facial expressions rather than neutral facial expressions.22,23 Also, acutely depressed individuals attribute sadness to neutral facial expressions12 while displaying an attentional bias away from happy faces.24 Furthermore, an increased sensitivity to failure25 and a decreased sensitivity to reward26 have been observed.
In addition to these effects on the behavioral level, neuroimaging studies using fMRI have consistently identified differences between patients and controls regarding neural activity in response to emotional stimuli that constitute potentially useful neurobiological markers of depression. For example, neural activity in depressive patients, but not in controls, increased linearly in response to increasingly intense sad faces in areas known to be involved in emotional processing and in the analysis of stimulus features. In response to increasingly intense happy faces, a linear increase in activation in similar locations was observed only in healthy controls, not patients.27 Specifically investigating the neural correlates of anhedonia in depression, Keedwell et al28 showed that severity of anhedonia is positively correlated with reward-related activity in the ventral medial prefrontal cortex and negatively correlated with activation in the amygdala and the ventral striatum. Similarly, a decreased response of ventral striatal structures to rewards has consistently been observed in patients with depression.29,30 During anticipation of rewards, patients displayed increasing anterior cingulate activation with increasing magnitude of reward.31 In summary, research has provided compelling evidence showing altered affective and motivational processing on the behavioral level, and neuroimaging studies have begun to elucidate the complex neural underpinnings of pathological deviations in depression, identifying potential neurobiological markers.
On the basis of these findings, we present a 2-step procedure integrating symptom-related biomarkers of depression to allow for a highly accurate single-subject classification. Specifically, a GP classifier3 was used to classify all subjects (patients and controls) on the basis of whole-brain fMRI data from 3 independent tasks reflecting a total of 15 neural processes related to emotional processing and anhedonia. For each of the symptom-related neural processes, the GP classifier yields a participant's probability of being a patient. In the second step, we integrate these classification probabilities associated with each biomarker using a decision tree algorithm.32 We hypothesized that this combination of biomarkers would result in substantially increased classification accuracy compared with the accuracy obtained from the most informative of the biomarkers alone. Furthermore, we quantify the utility of each biomarker, derive a decision tree that models the interrelations of those markers, and discuss the resulting integrated biomarker model in the context of depression.
A total of 31 psychiatric inpatients from the Department of Psychiatry, Psychosomatics, and Psychotherapy at the University of Wuerzburg, Wuerzburg, Germany, diagnosed (according to ICD-10 criteria [DSM-IV codes 296.xx]) with recurrent depressive disorder (category F33), depressive episodes (category F32), or bipolar affective disorder (category F31) on the basis of the consensus of 2 trained psychiatrists participated in the study. One patient was excluded owing to a panic attack during the measurement procedure, leaving 30 patients for further analysis. We explicitly recruited patients who were on a variety of medications and who, at the time of the measurement procedures, presented with varying degrees of depressive symptoms (from severe to almost symptom free). Accordingly, self-report scores in the German version of the Beck Depression Inventory–Second Edition33 ranged from 2 to 42 (mean [SD] score, 19.0 [9.4]). Choosing a well-diagnosed but heterogeneous group of patients with varying degrees of depressive symptoms while not excluding medicated patients should provide a more realistic estimate of the algorithm's potential utility. Exclusion criteria were age younger than 18 years or older than 60 years, comorbidity with other currently present axis I disorders, mental retardation or mood disorder secondary to substance abuse, medical conditions, and severe somatic or neurological diseases. Patients with a bipolar affective disorder were in a depressed phase or were recovering from a recent one; none showed manic symptoms. All patients were taking standard antidepressant medications, including selective serotonin reuptake inhibitors (n = 14), tricyclic antidepressants (n = 14), tetracyclic antidepressants (n = 8), or noradrenaline and serotonin selective inhibitors (n = 8). For a detailed description of the patients' medication, see the supplementary methods in the eAppendix.
Thirty control subjects from a pool of 94 participants previously recruited from the local population (by use of advertisements)
were selected to match the patient group for sex, age, smoking status, and handedness using the optimal matching algorithm implemented in the MatchIt package34 for R (http://www.r-project.org/). For a summary of the demographic features of the matched groups, see the Table. To exclude potential axis I disorders, the German version of the Structured Clinical Interview for DSM-IV Screening Questionnaire was conducted.35 Additionally, none of the control subjects had scores on the Beck Depression Inventory–Second Edition that indicated pathological symptoms (mean [SD] score, 4.3 [4.6]).
Written informed consent was obtained from all 60 participants after a complete description of the study was provided. Our study was approved by the ethics committee of the University of Wuerzburg, and all of the procedures involved were in accordance with the latest version (fifth revision) of the Declaration of Helsinki.
We conducted 3 independent tasks. The first task consisted of passively viewing emotional faces. Sad, happy, anxious, and neutral facial expressions were used in a blocked design, with each block containing pictures of faces from 8 individuals; these pictures were obtained from the Karolinska Directed Emotional Faces database.36 Every block was randomly repeated 4 times (eAppendix, supplementary methods). The second and third tasks were modified versions of the monetary incentive delay task developed by Knutson et al37 that has been used previously.38 During each trial, participants saw 1 of 3 different shapes (presentation time, 2000 milliseconds) followed by a fixation cross as they waited a variable interval (2250-2750 milliseconds). Thereafter, they responded with a button press to a white target square that appeared for a variable length of time depending on the subject's performance. Feedback (2000 milliseconds), which followed the disappearance of the target, informed participants of whether they had reacted in time during that trial and indicated their cumulative total winnings in euros at that point. Cues signaled the possibility of winning €0.05 (n = 20; a circle with 1 horizontal line) or €1.00 (n = 20; a circle with 3 horizontal lines). The third cue (n = 20; a triangle) indicated that no money could be won during this trial. The 3 trial types were randomly ordered within the experiment, and the length of the intertrial interval was randomly jittered in steps of 83 milliseconds between 83 and 2000 milliseconds.
The third task was also adapted from Knutson et al37 and exactly mirrored the second task. However, participants started with €10.00, of which they were instructed to lose as little as possible. In contrast to the second task, cues signaled the possibility of losing €0.05 (n = 20; a square with 1 horizontal line) or €1.00 (n = 20; a square with 3 horizontal lines). The third cue (n = 20; a triangle), again, indicated that no money could be lost during this trial (eAppendix, supplementary methods).
For all 3 tasks, imaging was performed with the same parameters using a 1.5-T Magnetom Avanto total imaging matrix MRI scanner (Siemens, Erlangen, Germany) equipped with a standard 12-channel head coil. In a single session, twenty-four 4-mm-thick, interleaved axial slices (in-plane resolution, 3.28 × 3.28 mm) oriented at the anterior commissure–posterior commissure transverse plane were acquired with a 1-mm interslice gap, using a T2*-sensitive single-shot echo planar imaging sequence with the following parameters: repetition time, 2000 milliseconds; echo time, 40 milliseconds; flip angle, 90°; matrix, 64 × 64; and field of view, 210 × 210 mm2. The first 6 volumes were discarded to account for magnetization saturation effects. Stimuli were presented via MRI-compatible goggles (VisuaStim; Magnetic Resonance Technologies, Northridge, California).
After preprocessing, whole-brain data from a total of 15 conditions could be extracted from each of the 60 subjects (for a detailed description of all preprocessing and data preparation steps, see supplementary methods in the eAppendix). These data were analyzed using GP classification in accordance with Marquand et al.3 We predicted a subject's probability of being a patient (pGP) independently for each condition using leave-one-out cross-validation (see supplementary methods in the eAppendix for a detailed description of the procedure).
Additionally, we evaluated the performance of the single classifiers by converting the predictive probabilities to categorical predictions. This was achieved by applying a threshold: a subject was categorized as a patient if his or her pGP was greater than .5 and as a control if it was less than .5. Accuracy was calculated as the ratio of correct predictions to the number of cases for each GP classifier. To benchmark the single-GP classifiers, we compared GP classifier accuracy ratios to the performance of linear support vector machine (SVM) classifiers,12,14 which constitute the most widely used pattern-recognition approach in the field of neuroimaging (for a comparison of the 2 pattern-recognition approaches, see the eAppendix).
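The thresholding and accuracy computation can be sketched as follows; the pGP values and diagnostic labels below are invented for illustration.

```python
# Sketch of the thresholding step: convert each GP predictive probability
# (pGP) into a categorical prediction at .5, then score accuracy as the
# ratio of correct predictions to the number of cases.

def threshold_predictions(pgp_values, cutoff=0.5):
    """pGP > cutoff -> patient (+1); otherwise control (-1)."""
    return [+1 if p > cutoff else -1 for p in pgp_values]

def accuracy(predicted, true_labels):
    correct = sum(p == t for p, t in zip(predicted, true_labels))
    return correct / len(true_labels)

pgp = [0.81, 0.35, 0.62, 0.44, 0.90, 0.20]   # hypothetical pGP values
labels = [+1, -1, +1, +1, +1, -1]             # hypothetical true diagnoses
preds = threshold_predictions(pgp)
print(accuracy(preds, labels))  # 5 of 6 correct -> 0.8333...
```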
A decision tree algorithm32 was applied to integrate the predictive classification probabilities obtained from the leave-one-out GP classifiers. Generally, we used GP classification probabilities as predictors based on which the algorithm determines a set of if-then logical conditions that allow for the classification of subjects. In our study, all decision tree calculations were performed using the classregtree function implemented in Matlab R2008b (The Mathworks, Natick, Massachusetts) with the default parameters. To calculate the overall prediction accuracy of this approach, a leave-one-out procedure was implemented in analogy to the one used to determine the accuracy of the single-GP classifiers (eAppendix, supplementary methods).
For the purpose of determining the optimal tree model (ie, the tree that optimally classifies all participants given the same parameters used during leave-one-out cross-validation of the decision tree), the matrix containing the predictive probabilities and labels of all 60 subjects in all of the 15 conditions was taken as input to the algorithm. The generated tree was pruned such that a node was not split if at least 1 resulting leaf would contain 3 or fewer subjects.
To establish whether the observed GP classification accuracies are statistically significant, we ran each GP classifier 1000 times with randomly permuted labels and counted the number of permutations that achieved greater accuracy than the one observed with the true labels. The P value was then calculated by dividing this number by 1000.
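The permutation scheme can be sketched as below; a trivial fixed-prediction scorer stands in for a full GP leave-one-out run, and the labels are invented.

```python
import random

# Sketch of the permutation test: rescore the classifier with shuffled labels
# many times and count how often permuted accuracy exceeds the observed one.
# `classifier_accuracy` is a hypothetical stand-in for a full GP run.

def permutation_p_value(classifier_accuracy, labels, observed,
                        n_perm=1000, seed=0):
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if classifier_accuracy(shuffled) > observed:
            exceed += 1
    return exceed / n_perm  # P value as described in the article

# Stand-in classifier: accuracy of one fixed prediction against the labels.
fixed_prediction = [+1, +1, -1, -1, +1, -1, +1, -1]
def toy_accuracy(labels):
    return sum(p == t for p, t in zip(fixed_prediction, labels)) / len(labels)

labels = [+1, +1, -1, -1, +1, -1, +1, -1]
p = permutation_p_value(toy_accuracy, labels, observed=toy_accuracy(labels))
print(p)  # observed accuracy is 1.0, so no permutation can exceed it -> 0.0
```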
To test our main hypothesis of whether the combination of data sources results in substantially increased classification accuracy compared with the accuracy obtained from the most informative of the sources alone, we obtained an estimate of the expected best single-GP classification accuracy under permutation. This was done by running each GP classifier independently for all 15 conditions with randomly permuted labels and taking the maximum accuracy. Doing this 1000 times provided a distribution of maximum accuracy during permutation. The median of this distribution constitutes the best estimate for the expected maximum single-GP classification accuracy during permutation. Second, we reran each GP classifier independently for all 15 conditions with randomly permuted labels. This time, however, we calculated the accuracy of the decision tree on the basis of the predictive probabilities derived with the randomly permuted labels. Doing this 1000 times provided a distribution of decision tree accuracy during permutation. Subtracting the best estimate for the expected maximum single-GP classification accuracy during permutation calculated from this distribution created the distribution of the expected difference between decision tree accuracy and single best accuracy under permutation. Because the null hypothesis was that the decision tree does not substantially outperform the best individual classifier, the P value was then calculated by counting the number of times that this expected difference during permutation exceeded the difference between the decision tree accuracy and the single-best-GP classification observed with the true labels and dividing by 1000.
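The bookkeeping behind the two permutation distributions can be sketched as follows. The random "accuracies" are hypothetical stand-ins for the actual GP and decision tree runs; only the structure (maximum over 15 conditions, median, difference counting) mirrors the procedure described above.

```python
import random
import statistics

# Sketch of the two permutation distributions. The GP classifiers are
# replaced by a stand-in that draws a chance-level accuracy at random,
# so the numbers here are purely illustrative.

rng = random.Random(42)

def permuted_accuracy():
    """Stand-in for one GP classifier run with permuted labels."""
    return rng.uniform(0.4, 0.6)   # hypothetical chance-level accuracy

n_perm, n_conditions = 1000, 15

# Distribution of the maximum single-classifier accuracy under permutation.
max_accs = [max(permuted_accuracy() for _ in range(n_conditions))
            for _ in range(n_perm)]
expected_best_single = statistics.median(max_accs)

def permuted_tree_accuracy():
    """Stand-in for decision tree accuracy built on permuted pGP values."""
    return rng.uniform(0.4, 0.6)

# Distribution of (tree accuracy - expected best single accuracy).
diffs = [permuted_tree_accuracy() - expected_best_single
         for _ in range(n_perm)]

observed_diff = 0.83 - 0.72   # decision tree vs best single GP (article)
p_value = sum(d > observed_diff for d in diffs) / n_perm
print(p_value)  # with these stand-ins, permuted differences never reach .11
```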
To determine the brain regions that contributed most to decision tree classification, single-GP weight maps and node-specific distribution maps were generated (for a detailed description of the mapping procedures and the interpretation of the maps, see supplementary methods in the eAppendix).
Independent GP classification of the data from each of the 15 conditions revealed significant accuracies for a total of 8 conditions (Figure 1). The median accuracy for all GP classifiers was 60%, whereas the single best classifier (anticipation of no loss) performed at an accuracy of 72%.
Furthermore, we compared the GP classifiers with the standard SVM approach. Generally, both algorithms' performances were comparable, with slight advantages for the GP classifiers that reached accuracies at least as high as the SVMs in all but 1 of the 15 cases (Figure 1).
Integrating the predictive probabilities from all single-GP classifiers using a decision tree algorithm led to an accuracy of 83% (sensitivity, 80%; specificity, 87%). This constitutes an improvement in accuracy of 11% (P = .02) compared with the single best of all GP classifiers (Figure 2). The boost in accuracy compared with the median of all GP classifiers was 23%.
Investigating the optimal decision tree model (Figure 3) revealed which conditions were relevant for overall classification. The entire subject group was best classified by splitting the pGP for neutral facial expressions at .46. In the second step, subjects with a pGP for neutral facial expressions of less than .46 (left branch) were best classified by splitting the pGP for actual large reward at .39. For the subjects who were more likely to be patients based on the pGP for neutral facial expressions (right branch), the best classification was obtained on the basis of the pGP for anticipation of no loss splitting at .47.
In summary, integrating pGP using a decision tree algorithm substantially boosted classification accuracy by considering GP predictive probabilities derived from 3 conditions. Note that these conditions are not those with the highest single-GP classification accuracies. Although there are 3 conditions related to the processing of emotional facial expressions among the 4 most accurate single-GP classifiers, only the pGP for neutral facial expressions is relevant for the construction of the tree model. Furthermore, although the single-GP classifier based on actual large reward does not classify the entire sample above the chance level (Figure 1), it nonetheless contains information essential for the classification of participants into subsamples.
For all 3 biomarkers relevant for final prediction, the decision tree highlighted differences between the left and right tree branches in a diffuse network of brain regions (Figure 3; for details and an alternative mapping perspective, see supplementary results in the eAppendix). For the split on neutral facial expressions, this network included the fusiform gyrus, smaller clusters within the caudate, and frontal regions. Although regions characteristic of left-branch classification also include occipital regions, the highest coefficient scores were found in frontal areas, suggesting an important difference in these regions. Following the tree to anticipation of no loss, we again find an extended occipital-parietal cluster. This time, however, it is characteristic of left-branch classification and the lingual gyrus is again characteristic of right-branch classification. Investigating the split on actual large reward, we found that right-branch classification was characterized by an extensive parietal cluster in addition to smaller superior temporal regions and the thalamus. In this case, the cuneus and lingual gyrus are both characteristic of left-branch classification.
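The reported splits amount to a short set of if-then rules. In the sketch below, the thresholds (.46, .39, and .47) are those reported above, but the patient/control assignment at each leaf is an assumption made for illustration; the text specifies only the split variables and cut points, not the leaf labels.

```python
def tree_classify(pgp_neutral, pgp_large_reward, pgp_no_loss):
    """If-then form of the reported decision tree. Thresholds are from the
    article; the leaf labels ('patient'/'control') are illustrative
    assumptions, since the text reports only splits and cut points."""
    if pgp_neutral < 0.46:
        # Left branch: split on pGP for actual large reward at .39.
        return "patient" if pgp_large_reward >= 0.39 else "control"
    else:
        # Right branch: split on pGP for anticipation of no loss at .47.
        return "patient" if pgp_no_loss >= 0.47 else "control"

print(tree_classify(0.30, 0.20, 0.50))  # left branch, low reward pGP
print(tree_classify(0.60, 0.20, 0.90))  # right branch, high no-loss pGP
```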
In our study, we were able to provide evidence showing that neural correlates of emotional processing and anhedonia (which had previously been identified as biomarkers for depression on the group level) are also useful for single-subject classification. We replicated previous findings based on data related to emotional facial expressions13 and extended them by showing the predictive power of data derived from reward- and loss-related neural processes. Consistent with other reports,3 we also found comparable accuracies for SVM and GP classifiers underlining the general suitability of the latter method. However, we introduced a principled 2-step algorithm to integrate classification probabilities obtained from single-GP classifiers based on neurobiological markers of depression and showed that this approach leads to a substantial increase in classification accuracy.
Additionally, the tree model constructed in the second classification step provides information regarding which biomarkers are relevant for classification in which particular subsample of subjects. Specifically, neuroimaging data related to the processing of neutral facial expressions were most informative for classification of the whole sample. This result is consistent with findings by Fu et al,12 who showed that neutral facial expressions have the highest predictive power to classify depressive patients (for behavioral evidence showing altered processing of neutral facial expressions in depressive patients, see Leppänen et al13). Furthermore, even though data from sad, happy, and neutral faces all provide relatively high accuracies on the single-classifier level, incorporating information from other emotional facial expressions is no longer of significant utility after splitting the sample based on data derived from neutral facial expressions. It appears that data from sad, happy, and neutral facial expressions provide similar information so that additionally incorporating data from sad or happy facial expressions does not increase accuracies in either of the 2 subsamples resulting from the split based on data from neutral faces. However, the subjects in both new subsamples can be classified best using data related to reward (actual large reward) and safety (anticipation of no loss; Figure 3). These results fit well with previous studies showing altered processing of neutral facial expressions13 as well as reward- and loss-related deviations in depression25,26 while, for the first time, allowing for an analysis of the interrelations between those multiple biomarkers. Furthermore, it appears noteworthy that though the single-GP classifier based on actual large reward does not classify the entire sample above the chance level, it nonetheless contains information essential for the classification of participants into subsamples.
This underlines a general strength of the decision tree that subdivides the sample into a number of subsamples: information that did not possess significant predictive power for the whole data set can be of great importance in a subsample. In addition, class boundaries that are fixed at a pGP of .5 for the single classifiers can then be optimized for each subsample independently, thereby reaching an improved classification accuracy. Investigation of the neural basis of classification revealed a complex pattern of regions known to be involved in emotional processing and in the analysis of stimulus features. Although brain regions relevant for classification at each node overlapped, their sign (positive or negative) provided information not available in the previous split. This underlines the strength of the decision tree to retrieve nonredundant information, allowing efficient selection of biomarkers for future diagnostic aids.
In our study, we chose to focus on correlates of a specific pattern of symptoms (lowered mood and anhedonia) rather than attempt to directly investigate a more abstractly defined single disorder. As a result, we assessed multiple symptom-related neural processes in patients sharing current or recent depressive symptoms who had received a diagnosis of 1 of 3 distinct mood disorders (recurrent depressive disorder, depressive episodes, and bipolar affective disorder). Our results show that accurate classification is possible in such a diagnostically heterogeneous group, suggesting shared neural mechanisms related to altered affective and motivational processing in all patients who have (or have recently had) severe depressive symptoms. Underlining the stability and utility of the approach is the fact that such highly accurate classification can be obtained even if patients are medicated differently and vary greatly regarding current severity of symptoms. In this context, the question arises whether the classifier might have learned to differentiate between subjects who were taking medication and those who were not rather than between depressive individuals and controls. Although we cannot directly address this concern in our study, the fact that the patients were taking a variety of medications with different mechanisms of action makes it unlikely that the classifier could have derived a reasonable rule from drug-associated neural response patterns. Further arguing against a rule based on drug effects, the regions relevant for classification are highly similar to those found by Fu et al,12 who measured and classified unmedicated patients. Thus, these regions appear to be central to the classification of patients with depression who were or were not taking medication.
Furthermore, evidence suggests that neural responses in a number of regions become more similar to the patterns in controls following pharmacological intervention or psychotherapy.39,40 This would obviously impair classification rather than foster it. The question then arises whether the classifier learned state-related or trait-related neural deviations. Considering that our sample displayed a wide range of depression severity scores (from almost symptom free to strongly depressed), it is highly unlikely that the classifier could have learned a state-related neural deviation. Although we cannot directly investigate this issue, it thus appears most plausible to think of the identified patterns as traitlike. Further investigations are, however, needed in this area. Generally, we think that for future applications, researchers will design their tasks according to whether state or trait markers are of interest.
Although classification algorithms are not meant as a substitute for a thorough clinical examination and a proper diagnostic process, the capability of this approach to model the interrelation of multiple neurobiological markers could be of great utility, especially when investigating symptom-related neural processes rather than aiming for mere classification accuracy alone. In this context, we showed that classification specifically relies on data derived from neural mechanisms associated with neutral facial expressions as well as with reward-related and loss-related processes.
With its suitability not only for neuroimaging data but for any information independent of the level of measurement (from genetic data to psychometric ratings or expert knowledge), the proposed method of identifying, integrating, and mapping the neural processes related to multiple biomarkers of psychiatric disorders may increase our understanding of the complex interplay between neural processes, genetic effects, and subjective symptoms.
Correspondence: Tim Hahn, PhD, Department of Psychiatry, Psychosomatics, and Psychotherapy, University of Wuerzburg, Fuechsleinstr 15, 97080 Wuerzburg, Germany (Hahn_T@klinik.uni-wuerzburg.de).
Submitted for Publication: March 27, 2010; final revision received September 23, 2010; accepted October 5, 2010.
Published Online: December 6, 2010. doi:10.1001/archgenpsychiatry.2010.178
Financial Disclosure: None reported.
Funding/Support: This study was supported by a grant from the German Excellence Initiative to the Graduate School of Life Sciences, University of Wuerzburg (Dr Hahn); grants from the Deutsche Forschungsgemeinschaft (KFO 125/1-2 to Drs Lesch and Fallgatter), the Transregio-Sonderforschungsbereich (TRR58-C4 to Drs Ehlis, Lesch, and Fallgatter) and the Wellcome Trust (Drs Mourao-Miranda and Brammer).
Additional Contributions: We thank Felix Breuer, PhD, and Martin Blaimer, PhD, from the Research Center of Magnetic Resonance Bavaria in Wuerzburg for their technical support.