Overview (50 minute video)

Differences in cognition and perception are thought to emerge from differences in the dynamic activity of functional brain networks. These networks exist over a broad range of spatial and temporal scales, with different modalities or approaches revealing different subsets of networks. With electroencephalography (EEG) frequency tagging, oscillating visual or auditory inputs were presented and different cortical networks entrained to different input frequencies, revealing their functionally distinct properties. With event-related potentials (ERPs), evoked responses were measured and averaged over a series of discrete inputs, and the multiplexed ERP response was subsequently decomposed with multi-subject independent component analysis (ICA). I discuss this approach in the talk linked below and demonstrate relationships between ERP amplitudes and attention and cognitive control in healthy and clinical populations. The multi-subject approach also demonstrates considerable utility for resting EEG, revealing distinct spatiospectral maps which were linked with concurrent fMRI for improved spatiospectral resolution of human brain dynamics.

link to video

Overview (Talk Transcript)

Below is the complete transcript of my most recent talk summarizing my work. The title of the talk is “What do (scalp averaged) cortical oscillations reveal about cognition, attention, and perception?” It's organized in the following sections: Introduction, Frequency Tagging, Discrete Events, Natural Stimuli, Summary and References.

I’ll start by taking a step back to ask about the purpose and function of the brain, keeping in mind that your understanding of brain function, and your perspectives on the brain, are influenced by the measurement tool that you are using. Since I primarily use EEG, my perspective is shaped by what those signals represent and how they are modulated. I will start with a broad background on the purpose of the brain, then talk about how we think electrical signals from the cortex propagate to the scalp, where they are measured non-invasively with EEG electrodes. I’ll point out a few quantitative problems encountered in EEG, like the summation of signals on the cortex, the spatial filtering of signals through the skull-scalp boundary, and separating or decomposing those signals using approaches like wavelet analysis or independent component analysis (ICA). In the context of my previous research, I'll talk about what we know about cognition, attention, and perception from these signals.

Purpose of the Brain

Now, imagine that you have to summarize the purpose or function of the brain in a single sentence. What is its role from an evolutionary perspective? Why do we devote up to 20% of our energy to this 3 lb organ, protected within this helmet we call a skull?

I would say the answer is the following: the brain’s primary function is to integrate our internal goals with our interactions with the environment. This is what the brain has evolved to do in order to benefit the organism or the species. To accomplish this task, it makes predictions about the environment and updates those predictions with incoming stimuli.

These processes are constantly occurring, with information arriving within all senses each second. The information is processed such that the brain's limited resources are devoted toward relevant stimuli while irrelevant stimuli are ignored.

Diffusion Spectrum Imaging

The brain is very complex, and this complexity supports its various functions. The image above is a diffusion spectrum imaging picture of the brain. Cell bodies are at the edges of these pathways, and the pathways represent bundles of axons through which neurons extend from one part of the brain to another (see Hagmann et al., 2008). The picture only shows about 10-20% of the white matter pathways, and it only focuses on long-range connections; there are many more very short pathways that are not depicted here. So you can see that the brain is very complex: if you pick any single neuron here, and any neuron in another part of the brain, you wouldn’t have to travel across many synapses to find a connection between the two. This is because the brain is highly interconnected, where a single neuron can synapse onto hundreds of other neurons, and so on, so it doesn’t take very long to go from one location to another. Just as any actor can be connected to Kevin Bacon in six degrees or less, any neuron can be connected to another neuron through the pathways of the brain, except it likely takes fewer than six.

Response to a visual input

Knowing that the brain is heavily interconnected, let's look at the image above depicting what happens when you flash up a visual input. How does a simple visual flash travel throughout the brain? There is a frontal response here in the frontal eye fields 10 ms after the stimulus, followed by a large swath of frontal and parietal areas. And responses appear within the primary motor area after 115 ms even though the monkey is not performing a task, so once you present something it gets to the whole brain very quickly. And it’s not depicted here, but these areas send connections back to the early visual areas as part of feedforward and feedback interactions that are constantly occurring (Lamme & Roelfsema, 2000). These aspects of brain activity are only captured with measures that have high temporal resolution, like EEG.

The most important point to take from these images is this: The brain maintains these parallel and recursive signals among different brain areas, and one area alters and is altered by the activity of the other areas. So, an early visual area is constantly receiving information from other areas in the brain that are modifying its behavior. And even when the visual input first appears, the early visual area is immersed within that global environment, it is already influenced by all the other areas in some way, and that will shape how that information is processed.

There is no top

As a result of these dynamics, there is no real structural “top” to the brain, even though we often use that language. This notion is indicated by the first two quotes by Buzsaki from the book Rhythms of the Brain (Buzsaki, 2006). The traditional notion that there are top-down and bottom-up parts to the brain is limiting. The brain is a complex system: information travels quickly through feedforward and feedback connections, and brain areas are often coupled simultaneously through synchrony.

The language of brain dynamics is oscillations and synchrony (Engel, Fries, & Singer, 2001; Varela, Lachaux, Rodriguez, & Martinerie, 2001). Synchrony means that two different areas share common information, and that information is usually expressed as an oscillation. That is, multiple regions fluctuate together, and that allows communication and likely underlies basic processes such as cognition and perception.

These points are driven home by the third quote presented in the slide above. The quote is a bit dramatic, but it reflects our cognitive bias of imposing causal explanations on what we observe. We are biased toward putting things in boxes and seeing things in a linear cause-and-effect sense. Unfortunately, this way of thinking will only get us so far in understanding the brain. We have to be careful of these biases; they trick us into believing something is simpler than it really is.

Resolution of Neuroimaging methods

Let's review what our imaging tools are sensitive to and what they are not. This picture shows the range of spatial and temporal scales at which you can look at the brain (Gazzaniga, Ivry, & Mangun, 2013). The y-axis shows the range of spatial windows in which you can observe neural activity, going from around the scale of a synapse all the way up to the entire brain. The x-axis shows the range of temporal information in which you can observe neural activity. This goes from milliseconds and seconds all the way to days. The axes are each on a log scale. EEG is indicated by the red enclosure and functional MRI, or fMRI, is represented by the orange enclosure. You can see that they don’t overlap much in terms of their sensitivity to spatial and temporal information. That is an advantage, of course, because if EEG and fMRI completely overlapped you wouldn’t get any additional information by combining the two (for review see Bridwell & Calhoun, 2014; for an example see Bridwell, Wu, Eichele, & Calhoun, 2013). The important thing to note is that the timescale of EEG aligns with the time scales of perception, cognition, speech, and language. That is, our perceptions fluctuate along scales of 100 ms or so, and speech signals fluctuate with a similar periodicity, so these dynamics are captured with EEG.

Skull scalp boundary in EEG

Let's get into the basics of EEG and where it comes from. First, we should take a step back and appreciate the fact that we can record it at all. That is, it’s amazing that you can take someone (with proper IRB consent), put electrodes on them, and record actual brain activity within a few minutes. When Hans Berger discovered this in the 1920’s he was skeptical, which was reasonable, and he had to do a lot of verification to make sure it was actually true.

The EEG responses arise from synchronous post-synaptic potentials of pyramidal cells, oriented radially along the cortex (Nunez & Srinivasan, 2006). The EEG is primarily generated by synchronous cortical activity, not by activity occurring deep within the brain.

The signals have to pass through the skull-scalp boundary where they are recorded by an electrode, and they return back to the brain in a return current, for current conservation. On the upper right is a simulation of brain potentials as they pass through the layers of the skull and scalp. The main attenuation is at the skull: the poor conductivity at the skull-scalp boundary reduces the potential over the source and spreads the voltage tangentially across the scalp. This means that the skull acts as a low-pass spatial filter. But the spatial filter effectively also acts as a temporal filter, since it emphasizes lower-frequency EEG oscillations, which are more likely to pass through the skull-scalp boundary. Within Nunez and Srinivasan’s book Electric Fields of the Brain, which I read religiously because Ramesh was my graduate school advisor, they state that “nature has conveniently provided us with an anti-aliasing spatial filter in the form of the poorly conducting skull.” In other words, “it’s not a bug, it’s a feature”.

One consequence of the low-pass spatial filtering of the skull and scalp is that it informs how many electrodes are required to adequately sample the scalp. There are systems that go up to 256 electrodes, but that is unnecessary: there is enough smearing of potentials across the scalp that 128 electrodes is sufficient. On the other hand, if you have few electrodes, like fourteen or so with an Emotiv device, then you are likely undersampling spatially and you run into a spatial Nyquist issue. You shouldn’t interpolate voltages across the surface of the scalp with such a system.
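To get an intuition for the spatial sampling issue, here is a back-of-the-envelope sketch. The head radius and the idealized even coverage are my own assumptions for illustration, not values from the literature above; the point is only the order of magnitude of inter-electrode spacing for common montage sizes.

```python
import numpy as np

# Rough sketch (assumed numbers): treat the upper scalp as a hemisphere of
# radius ~9.2 cm and ask what center-to-center spacing N evenly distributed
# electrodes would give.
def electrode_spacing_cm(n_electrodes, radius_cm=9.2):
    area = 2 * np.pi * radius_cm**2                   # hemisphere surface area
    area_per_electrode = area / n_electrodes
    return 2 * np.sqrt(area_per_electrode / np.pi)    # diameter of an equal-area disc

for n in (14, 32, 128, 256):
    print(n, "electrodes ->", round(electrode_spacing_cm(n), 1), "cm spacing")
```

With this toy geometry, 128 electrodes land in the 2-3 cm range, roughly the scale of the skull's spatial smearing, while 14 electrodes sit near 7 cm apart, which is why interpolating scalp maps from such sparse systems is risky.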

EEG as a symphony

You can think about what you are measuring with EEG by imagining different ways of recording the activity of an orchestra. When you record the activity of a single neuron, or of local field potentials (LFPs), or ECoG (where you put electrodes directly on the cortex), it is like putting a microphone on a single instrument within the orchestra. EEG is more like a microphone placed above the concert hall: it provides a measure of the collective activity of all of the musicians. With EEG, you obtain an aggregate measure of global brain dynamics, including large-scale processes which unfold around the time scale of our perceptual experience (Nunez & Srinivasan, 2006).

It is important to acknowledge that EEG is not simply an impoverished view of single units, or LFPs, or ECoG. It is not like you have a perfect view of the cortex when you place an electrode directly on the cortex, and then you place a skull and scalp over it and it screws it up. EEG provides its own unique measure. It provides its own unique picture which you can’t necessarily get when you use the other approaches. And since EEG provides a measure of aggregate, or global, brain activity, it tells you about the state of the brain. This is described within the quote by Tononi and Edelman above, from their book A Universe of Consciousness (Edelman & Tononi, 2000).

Return to Introduction       Skip to discrete events

Inputs and Outputs

Now, we understand that the brain is a complex system. And if you ask any engineer the best way to understand a complex system they would say that you provide sine wave inputs, measure sine wave outputs, and make inferences about the system based upon the change in amplitude and phase between the inputs and outputs.

This notion is conveyed by Bendat, Piersol, and Basar in the quotes above (Basar, 1999; Bendat & Piersol, 2000), and this is the approach that I have taken in many of my experiments, including the first four experiments I'll talk about below. The approach is called frequency tagging.

Resonance

It turns out that if you apply a sine wave input to a physical system, some frequencies will drive the system better than others; these are the system's resonant frequencies. A classic example of resonance is what happened with the Millennium Bridge on opening day (Strogatz, Abrams, McRobie, Eckhardt, & Ott, 2005). The bridge was a big engineering achievement, but the 32-million-dollar project was closed the day after it opened because it swayed while thousands of individuals walked across. The swaying occurred when individuals' steps synchronized with each other. But how did their steps become synchronized? There was an initial oscillation which caused a few people to brace themselves and balance their weight slightly. The act of balancing their weight produced a subsequent oscillation of the bridge, which caused more people in turn to brace themselves and balance their weight in unison. Their steps became synchronized as a feedback loop formed, and they stepped together at the right phase and frequency to enhance the sway of the bridge.

This phenomenon highlights why resonance is such an important property of brain function. Imagine that a person on one end of the bridge wants to synchronize their stepping with a person on the other end of the bridge. This transfer of information between people is similar to the transfer of information between neurons in different parts of the brain. When the two people move in synchrony from different ends of the bridge, it is analogous to a neuron on one part of the brain synchronizing its excitability (i.e. its propensity to fire) with a neuron from another part of the brain.

The oscillating bridge binds people on the two different sides. Within EEG, low-frequency oscillations serve the same purpose. Low-frequency oscillations between 1-8 Hz are established across distant areas, coordinating the amplitudes of higher-frequency activity and supporting synchronized bursts of firing across distant cortical areas (Canolty et al., 2006). They provide a mechanism for synchronization between neurons within distant parts of the brain, which enhances the efficiency of neuronal communication. And our cognitions and perceptions likely emerge from these dynamics.
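This kind of cross-frequency coordination can be made concrete with a small simulation. Below is a minimal sketch, not an analysis from the cited study: the signals are synthetic, the 4-8 Hz and 30-50 Hz bands and all parameters are illustrative assumptions, and the coupling measure is a Canolty-style mean-vector-length index.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)                        # slow 6 Hz rhythm
gamma_mod = (1 + theta) * np.sin(2 * np.pi * 40 * t)     # gamma amplitude rides on theta
gamma_flat = np.sin(2 * np.pi * 40 * t)                  # gamma with no coupling

def bandpass(x, lo, hi, fs):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs):
    phase = np.angle(hilbert(bandpass(x, 4, 8, fs)))     # phase of the slow rhythm
    amp = np.abs(hilbert(bandpass(x, 30, 50, fs)))       # envelope of the fast rhythm
    # mean vector length: how consistently the fast envelope tracks the slow phase
    return np.abs(np.mean(amp * np.exp(1j * phase)))

mi_coupled = modulation_index(theta + gamma_mod, fs)
mi_uncoupled = modulation_index(theta + gamma_flat, fs)
```

The coupled signal yields a much larger index than the uncoupled one, which is the signature of low-frequency phase organizing high-frequency amplitude.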

These systems have natural resonant frequencies for communicating among themselves, and for communicating with other regions. For example, the visual system prefers to fluctuate between 8-12 Hz. The auditory system generates a transient 40 Hz response to tones (Tiitinen et al., 1993), and frequency tagged auditory responses peak at 40 Hz (Galambos, Makeig, & Talmachoff, 1981; Picton, John, Purcell, & Dimitrijevic, 2003). Thus, 40 Hz auditory stimuli drive the auditory system, and the output 40 Hz EEG response provides a direct interpretable measure of auditory processing.

Tetris and Science

We took advantage of the auditory system's 40 Hz resonance properties in this study, where we had people listen to 40 Hz clicks while they performed different tasks. The individuals either played an easy version of the video game Tetris, played a difficult version of Tetris, or attended to the sounds (Roth et al., 2013). The EEG spectrum is plotted on the right, and you can see that there is a peak at 40 Hz that represents the frequency of the sound presented to the subject. By focusing on the 40 Hz response, we filter or isolate EEG activity that is specific to the auditory system.
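As a sketch of how a peak like this is read out in practice, here is a toy example with simulated data (the sampling rate, amplitude, and noise level are assumptions for illustration, not values from the study): the steady-state response is taken from the single bin of the amplitude spectrum at the tagged frequency.

```python
import numpy as np

fs = 500.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Simulated recording: a 40 Hz steady-state response buried in broadband noise
eeg = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size   # single-sided amplitude spectrum

tag = np.argmin(np.abs(freqs - 40.0))         # bin at the tagged frequency
print(f"amplitude at {freqs[tag]:.1f} Hz: {amp[tag]:.2f}")  # close to the driven 0.5
```

Because the stimulus frequency is known exactly, the tagged bin stands far above the noise floor even at a modest signal-to-noise ratio, which is what makes frequency tagging such an efficient filter for modality-specific activity.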

Tetris and Science

Using this approach, we can examine the extent to which cortical responses within one sensory modality are modulated by a complex task conducted within another sensory modality. We found that 40 Hz responses were largest when individuals listened directly to the sounds or played an easy version of the Tetris task, but responses were reduced when individuals played a difficult version of the Tetris task. Why are responses to the 40 Hz auditory input reduced during the difficult Tetris task? As the Tetris task becomes more demanding, greater resources are devoted to performing the task, and fewer resources are devoted to processing auditory stimuli. This finding speaks to a practical application of these responses: the frequency tagged auditory responses provide a measure of the degree to which you are engaged within another task (Roth et al., 2013). Thus, you could potentially use these responses to probe, or track, the degree to which an individual attends to something. For example, this would be a potential approach to generate moment-to-moment measures of individuals' attention while they are driving, or monitoring radar, or programming at their computer.

Music and Science

You can go beyond a simple (and annoying) 40 Hz helicopter sound and appreciate that the brain also fluctuates to more complicated stimuli, like music. For example, if we averaged the EEG of all the individuals attending the concert depicted above, we would observe fluctuations that followed the prominent features of the music. The phase of ongoing EEG would align with the aggregate sounds from the instruments. We demonstrated this phenomenon experimentally by having individuals listen to a series of guitar notes at 4 Hz with the notes either forming a musical pattern, or forming a random pattern (Bridwell, Leslie, McCoy, Plis, & Calhoun, 2017).

Music and Science

The above plot shows the average EEG response to each individual note. Technically, averaging EEG responses across stimuli generates an event-related potential (ERP). You can see that the peaks continue to repeat, indicating that EEG responses entrained to the periodic 4 Hz inputs, as I mentioned earlier. There are four peaks within the interval of an individual note, which suggests that endogenous 8 Hz EEG responses entrained to the 4 Hz tones. This is consistent with previous studies which suggest that cortical oscillations entrain to stimuli presented at frequencies of 1 Hz or higher (Doelling & Poeppel, 2015).

The interesting finding is that responses to guitar notes are greater when the notes are presented randomly compared to when they are presented within a musical context. Why would there be larger responses to random stimuli? I mentioned earlier that the brain is constantly trying to represent the environment. It constantly processes stimuli and makes predictions for future stimuli. When guitar notes are presented with a musical pattern, the brain is better at anticipating the subsequent note (this is one of the reasons that music is so appealing). And less work is required to represent each note when it aligns with the brain's expectations. If the next note does not align with the brain’s prediction, additional resources are recruited to try to understand and represent the environment. This means that more work is required to represent a random stream of guitar notes compared to a musical stream of notes, and this contributes to the greater amplitude of ERP responses to random, unpredictable notes (Bridwell, Leslie, McCoy, Plis, & Calhoun, 2017).

What is the relevance of this finding? We are using 4 Hz inputs to tap into cortical networks that are sensitive to acoustic regularities. The sensitivity to these features may be related to healthy development, and may reveal the integrity of cognitive systems that support additional complex processes such as language and memory (Feld & Fox, 1994; Patel, 2011; Peretz, Vuvan, Lagrois, & Armony, 2015).

Approaches to EEG

I reviewed two experiments where auditory inputs were presented at a certain frequency, which entrains the auditory system. The visual system can be probed in a similar manner by flashing stimuli on the computer screen, where the stimuli enter the eyes and propagate throughout the brain within networks that are attuned to the input frequency. An example of this approach would be to have subjects perform a task at one location, e.g. asking them to detect lines which appear against a checkerboard background, while a flicker is presented in the periphery with the same orientation as the lines. There are cells within early visual areas which are sensitive to different orientations; the flickering input will excite those cells and they will become immersed within the brain systems involved in processing those stimuli. Even though EEG is a very global measure, you are still driving responses through a specific system, or network, that begins with cells tuned to the flicker features. This means that you can isolate EEG oscillations which are driven both by the frequency of the input as well as its features.

SSVEP and Perception

Using this approach, we further demonstrated how these frequency tagged oscillations are related to attention and perception. In this study, the visual system was entrained to 8 or 12 Hz flickering inputs presented at one spatial location while individuals performed a task at an attended location. The flicker features either matched the features of the targets, were neutral, or competed with the targets at the attended location.

The physical stimuli are the same for each of the topographic plots above, so the differences are driven entirely by differences in attention due to the different tasks. We found that higher responses to the flicker are related to correctly detecting the target when the target feature matches the flicker feature, but higher responses to the flicker are related to failing to detect targets when the flicker feature competes with the target at the attended location (Bridwell & Srinivasan, 2012). When you are doing a good job of representing the flicker feature, the brain’s response to the flicker is enhanced, and you are better able to detect targets with that feature. If you do a good job of representing the flicker feature (or a poor job of suppressing it), and that feature competes with the targets at the attended location, then you’re less able to detect targets at that location. This suggests that the response to the flicker reflects individuals' perceptual representation of the flicker feature.

Frequency Tagged Attentional Tuning Functions

We further demonstrate relationships between frequency tagged oscillations and attention in this study. But first, let me describe an important characteristic of attention. Imagine that you are trying to detect a tiger. You know that tigers are colored yellow and orange, and there are individual neurons within the visual system which are tuned to these colors, such that increases in activity within those cells could indicate the presence of a tiger. If the tiger happens to be hiding within yellow grass, it doesn’t make sense to identify the tiger using neurons coding the color yellow since the grass is also colored yellow (and you're not interested in detecting grass). It turns out that it is beneficial to enhance the response to neurons which represent the color orange, since the changes in activity within these neurons will be more informative about the presence of a tiger than changes in activity within neurons coding the color yellow. (More specifically, it may be advantageous to attend to neurons coding an exaggerated version of the color orange, even if it differs from the true orange color of the tiger). This is because it is more important to enhance the response of neurons that are most informative for discriminating the presence of a stimulus, as opposed to neurons which represent the features of the stimulus exactly (Bridwell, Hecker, Serences, & Srinivasan, 2013; Navalpakkam & Itti, 2007; Scolari & Serences, 2010; Verghese, Kim, & Wade, 2012).

When the yellow-orange tiger is within yellow grass, it is beneficial to embed the neurons coding orange within the networks that represent and respond to the relevant stimulus (the tiger), while keeping the neurons coding yellow separate from this system or network. We target these systems with our frequency tagged inputs, and we designed an experiment to determine whether EEG oscillations were modulated by these aspects of attention. I don’t have time to go into the details, but we measured EEG responses to a flicker of a fixed orientation while individuals attended parametrically over a range of orientations, forming an “attentional tuning curve”. Importantly, in the experiment where it is advantageous to attend to the target features (like attending to the color yellow and orange when looking for a tiger in a green field) we found that those with an enhanced response to the target feature were better at detecting targets. In the experiment where it is advantageous to attend to features that differ from the target (like only attending to the color orange when the tiger is hiding within yellow grass), we found that those with an enhanced response away from the target features were better at detecting targets (Bridwell, Hecker, Serences, & Srinivasan, 2013). These findings highlight the flexibility of attention within and across individuals. They also demonstrate that oscillations induced by the flicker reveal subtle changes in attention necessary for detecting targets in different scenarios.

Return to Frequency Tagging       Skip to Natural Stimuli

Perspectives and Approaches to EEG

I reviewed the frequency tagging approach in the previous section, where oscillating inputs are presented to the brain which can drive and entrain different networks, revealing their potential contribution to attention and perception.

Another approach is to present discrete events and to average EEG responses to each individual event. When you average across events, oscillations which are phase locked will sum together, and those that are not (whether they are signal or noise) will approach zero. The summed response is termed an event-related potential (ERP). An ERP contains the sum of a series of oscillations which follow the presentation of the individual stimulus. Psychologists are very good at assigning names and functions to these stereotypical oscillations, such as the P1, N1, P2, N2, P3, and so on. Considerable energy is given among ERP researchers to understanding the function of the various peaks. (If you're really fixated on the peaks, you might call yourself an "ERPer".)
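A small simulation makes the averaging logic concrete. In this sketch (all parameters are assumed for illustration), each trial contains a phase-locked evoked response plus a 10 Hz oscillation with random phase and noise; averaging keeps the evoked response while the non-phase-locked activity cancels.

```python
import numpy as np

fs = 250.0
t = np.arange(0, 0.8, 1 / fs)                  # 800 ms epoch
rng = np.random.default_rng(1)

# Phase-locked evoked response: a brief 10 Hz burst centered at 300 ms
evoked = np.exp(-((t - 0.3) ** 2) / 0.005) * np.sin(2 * np.pi * 10 * (t - 0.3))

n_trials = 200
trials = np.empty((n_trials, t.size))
for i in range(n_trials):
    # Ongoing 10 Hz oscillation with a different phase on every trial
    ongoing = np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
    trials[i] = evoked + ongoing + rng.normal(0, 0.5, t.size)

erp = trials.mean(axis=0)   # non-phase-locked oscillation and noise average out
```

The average `erp` closely tracks the evoked waveform even though, on any single trial, the evoked burst is buried under an ongoing oscillation of comparable amplitude, which is exactly the point Luck makes about peaks not being visible on individual trials.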

ERP amplitudes are arbitrary

After calculating an ERP, it is common to summarize the peaks by their amplitude and latency. I really like the quote from Basar’s book Brain Function and Oscillations within the slide above (Basar, 1999). He notes that the evoked response is defined by "several arbitrarily defined components … and that these arbitrarily defined components depend upon many factors, including electrode location, behavioral state, and so on" (and he could add reference electrode to that list). He goes on to say that the interpretation of the arbitrarily defined components is difficult. What I find funny about this passage is that he used the term “arbitrarily defined components” three times. And I can see where he is coming from. The concern is that the ERP is an average across many individual trials, and each trial is a series of superimposed oscillations. Averaging these superimposed oscillations smears the underlying response, obscuring the true underlying activity. In addition, the ERP peaks might not even be present on any one of the individual trials (Luck, 2004). And we assign interpretations to each peak, like “awareness of errors”, even though it is unlikely that any given peak has a single function or interpretation. And it's unlikely that there will be a peak which maps onto the title of each chapter of a Psychology textbook.

But there are blind source separation (BSS) approaches and time-frequency decompositions, such as wavelets, which can unravel these superimposed responses. With respect to BSS, Independent Component Analysis (ICA) (Delorme & Makeig, 2004) is a popular approach that I have found useful (Bridwell, Kiehl, Pearlson, & Calhoun, 2014; Bridwell, Rachakonda, Silva, Pearlson, & Calhoun, 2016; Bridwell, Steele, Maurer, Kiehl, & Calhoun, 2015; Bridwell, Wu, et al., 2013). It can decompose these oscillations in a data-driven way, which reduces the ambiguity in interpreting the peaks that are present after averaging (and smearing) across all of these separate components. The resulting amplitude and latency measures are more likely to reflect a distinct process, which provides a stronger motivation for extracting summary measures from a given peak. It makes these “arbitrary” measures a little less arbitrary.
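As a minimal illustration of the blind source separation idea (a toy example using scikit-learn's FastICA on simulated sources, not the EEG pipelines cited above), two superimposed time courses can be recovered from their mixtures at the "sensors":

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 1000)
s1 = np.sin(2 * np.pi * 7 * t)                 # one oscillatory source
s2 = np.sign(np.sin(2 * np.pi * 3 * t))        # a second, non-Gaussian source
S = np.c_[s1, s2] + 0.05 * rng.normal(size=(1000, 2))

A = np.array([[1.0, 0.5], [0.7, 1.2]])         # mixing: sources sum at the sensors
X = S @ A.T                                    # "electrode" recordings

ica = FastICA(n_components=2, random_state=0, max_iter=1000)
recovered = ica.fit_transform(X)               # unmixed component time courses
```

The recovered components match the original sources up to sign, scale, and ordering (the usual ICA ambiguities), which is why amplitude and latency measures taken on ICA components reflect a more distinct process than measures taken on the raw superimposed mixture.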

And there are extensions of ICA to multiple subjects, called group ICA, which estimate these components at the aggregate group level and then essentially filter the single-subject data in order to emphasize the group-level components (Calhoun & Adali, 2012; Calhoun, Adali, Pearlson, & Pekar, 2001; Calhoun, Liu, & Adalı, 2009; Eichele, Rachakonda, Brakedal, Eikeland, & Calhoun, 2011). I’ve found this approach very useful for both temporal and spatiospectral decompositions of EEG.

Even though there are approaches to obtain more robust measures of ERP peaks, I’m still not a fan of assigning labels to the peaks, and I’m concerned about overinterpreting their function. This probably stems from my personal concern that the brain seems to trick us into thinking we understand a phenomenon once we give it a name. As Shakespeare said, “These earthly godfathers of Heaven’s lights, that give a name to every fixed star, have no more profit of their shining nights than those that walk and know not what they are.” Fortunately, experiments may be designed so that differences in ERP peaks are interpretable even without assigning labels to the peaks, or without having to rely on the previous literature's interpretations of the peaks. I’ll show you a few examples where I’ve applied this approach to ERPs, demonstrating attention and cognitive control impairments within patients with schizophrenia and depression, respectively.

Oddball Stimuli

Before, I mentioned that the primary function of the brain is to monitor the environment for behaviorally relevant events and to respond to those events. These processes can be examined with a simple experiment called the oddball experiment. Each dot within the above slide is a discrete event or input. One set of auditory tones is presented frequently, as depicted by the blue dots, and another set of auditory tones (the red dots) is presented infrequently. It sounds like “beep beep beep beep boop”, where the “beeps” are frequent and the “boops” are infrequent "oddballs". When the brain hears the rare “boop”, a series of networks is engaged which reorients our attention to the unexpected event, generating a robust ERP response. So, by measuring this ERP response within individuals you can determine their sensitivity to environmental regularities, even within infants who passively listen to the beeps and boops.

Oddball Surprise Stimuli

But there is some added complexity. Typically we average the response to each of the oddballs, i.e. each of the rare “boops” depicted in red, assuming that they are equally unexpected. But it turns out that your expectancy of each event depends on the recent history of events. So here, I’ve calculated the level of surprise of each individual stimulus using Itti and Baldi’s surprise model (Baldi & Itti, 2010; Itti & Baldi, 2009; Itti & Baldi, 2005). It is a Bayesian model which assumes that individuals maintain a probability distribution over which potential target will appear next. The distribution is updated after each new stimulus, and the difference between the distributions before and after the stimulus reflects its level of surprise (or prediction error) - larger changes in the distribution correspond to larger levels of surprise. According to this model, and as indicated in the slide above, if you are presented with three rare targets in a row, the ERP response to each successive target is diminished as the brain begins to anticipate that targets will appear with a higher probability. In this case, the response to the frequent standard stimulus (the “beep”) that follows is actually larger than the response to a target that follows another target. This is because the brain constantly updates and regenerates predictions for stimuli based upon the recent history of events, and the frequent note (in blue) may appear rare if the recent history happens to include many of the typically infrequent notes (in red). The surprise model thus provides a more detailed measure of sensitivity to environmental regularities than a simple binary classification into “frequent” and “infrequent” categories.
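The diminishing-surprise effect for repeated targets can be sketched with a toy Bayesian observer. This is a simplified Beta-Bernoulli model with exponential forgetting, not Itti and Baldi's exact formulation (they use Gamma/Poisson models); the decay constant and pseudo-counts below are illustrative assumptions.

```python
import numpy as np
from math import lgamma

def beta_pdf(p, a, b):
    # Beta(a, b) density evaluated on a grid of probabilities p
    logc = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(logc + (a - 1) * np.log(p) + (b - 1) * np.log(1 - p))

def surprise_sequence(events, decay=0.9):
    """events: sequence of 0 (frequent standard) or 1 (rare target).
    The observer keeps a Beta belief over p(target); counts decay toward
    the flat prior so that recent history dominates. Surprise for each
    event is KL(posterior || prior) in nats, computed numerically."""
    grid = np.linspace(1e-4, 1 - 1e-4, 4000)
    dp = grid[1] - grid[0]
    a, b = 1.0, 1.0                      # pseudo-counts: targets, standards
    surprise = []
    for e in events:
        # forget: shrink counts toward the flat Beta(1, 1) prior
        a, b = 1 + decay * (a - 1), 1 + decay * (b - 1)
        a2, b2 = a + e, b + (1 - e)      # Bayesian update on the new event
        prior, post = beta_pdf(grid, a, b), beta_pdf(grid, a2, b2)
        surprise.append(np.sum(post * np.log(post / prior)) * dp)
        a, b = a2, b2
    return surprise

# Three rare targets in a row after a run of standards: each successive
# target is less surprising, mirroring the diminishing ERP response.
s = surprise_sequence([0] * 20 + [1, 1, 1])
```

With these made-up parameters the surprise of the three consecutive targets decreases monotonically, consistent with the slide's account.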

Surprise or Prediction Error Deficits in Schizophrenia

On the lowest plot here you can see the ERP response to the rare target “boop” as a function of its level of surprise. This figure indicates that higher ERP amplitudes are observed for targets with higher levels of surprise. And you’ll note that the responses are linear when surprise is plotted on the x-axis, but they are not linear when a more traditional measure of “surprise” is used: the number of preceding non-targets (middle plot) (Bridwell et al., 2014). This means that these ERP amplitudes are modulated linearly by the modeled level of surprise, which is useful since linear models are used in the majority of statistical analyses.

Surprise or Prediction Error Deficits in Schizophrenia

So we have a measure of individual’s sensitivity to these subtle regularities in the environment. And we wanted to apply this measure to patients with schizophrenia to determine whether there is an impairment in their sensitivity to these subtle regularities. The idea is that patients diagnosed with schizophrenia demonstrate complex symptoms, and impairments within basic stimulus processing may propagate up and contribute to deficits within complicated domains like attention and memory.

The figure above shows the average amplitude to each individual infrequent target, plotted according to its level of surprise. We found that the correlation between surprise and ERP amplitude was significantly larger within healthy controls than within patients diagnosed with schizophrenia (Bridwell et al., 2014). This demonstrates that patients with schizophrenia are impaired in their ability to detect subtle regularities in their environment. This impairment could be related to a more fragmented perceptual experience, or to impairments in working memory, within patients with schizophrenia.

It is important to note that these ERP amplitudes were computed from the temporal source depicted above, derived from Group ICA (Calhoun et al., 2001; Eichele et al., 2011). We didn’t have to label this peak, or associate it with ERP peaks which have previously been identified, since the experimental context provides sufficient interpretation of differences between these two populations. We show that there is a process in the brain that peaks 340 ms after the stimulus which is more sensitive to subtle acoustic regularities in healthy controls than patients with schizophrenia—this finding is interesting irrespective of the name of the peak.

The other relevant thing to note is that these differences were observed using only the rare target “boop” stimuli. This means that the response to targets can be used as a measure of environmental sensitivity in addition to the standard approach of comparing the average response to rare and frequent stimuli.

Depression and Cognitive Control ERPs

Group temporal ICA was also successfully implemented within this study looking at the relationship between ERP measures of cognitive control and depression symptoms (Bridwell et al., 2015). Individuals were instructed to press a button when an “X” appeared on the screen and withhold responses to a “K”. The “X” appeared more frequently, so individuals get in the habit of pressing a button each time a letter appears, and they often make an incorrect button press to the infrequent “K”. It turns out that there is a cascade of events within the brain in response to this error, which involves identifying that an error was made and adjusting your perception and behavior for subsequent trials (Falkenstein, 2004; Yeung, Botvinick, & Cohen, 2004).

The ERP response to these errors provides a measure of cognitive control, and cognitive control is one of the cognitive deficits common in patients with depression. In order to understand the relationship between these cognitive deficits and depression further, we determined whether cortical deficits in cognitive control (reflected in the magnitude of ERP amplitudes following errors) were related to either self-reported somatic depression symptoms (such as sadness, loss of pleasure, indecisiveness) or self-reported affective symptoms (such as negative mood or affect) (Storch, Roberti, & Roth, 2004). We found that individuals with greater somatic symptoms demonstrated a reduced ERP response 287-525 ms following an error, but we were unable to observe a statistical relationship between ERP amplitudes and affective symptoms (Bridwell et al., 2015). These findings suggest that individuals with greater somatic symptoms may have a reduced awareness of errors, and they further clarify the relationship between clinical measures of self-reported depression symptoms and cognitive control. The findings highlight the utility of focusing on clinical symptoms instead of clinical diagnoses, and may improve clinical assessment and treatment of depression.


Natural Stimuli and EEG

It’s important to start to bridge the gap between our carefully constructed experimental paradigms and more natural paradigms that better approximate the complex environment. In order to improve our understanding and interpretation of EEG responses to complex stimuli, we measured EEG within individuals while they watched movie clips (Bridwell, Roth, Gupta, & Calhoun, 2015). The issue with identifying EEG responses to movies is that it’s difficult to simplify your representation of the input. For example, do you look for brain responses that follow a pixel and assume that the brain is modulated by that pixel, or do you look at faces and assume that the brain is modulated by faces? You have to make a choice of which features to focus on, and that decision can be difficult. The alternative is to measure responses across individuals to the same inputs, i.e. the same movie clips, and then look at the correspondence across individuals. We used correlation as that measure of correspondence, an approach called intersubject correlation (ISC) (Hasson, 2004). The idea is, you have a complex input, so you don’t necessarily know what the output of the brain will be. It could be non-linear, and it will be a summation of different evoked potentials to different aspects of the stimuli, but it turns out it doesn’t matter as long as that non-linear complex response is similar across individuals. Or, at least in this analysis, when we correlate responses across individuals we are emphasizing the complex responses that are similar across subjects.
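In its simplest form, the ISC computation is just the average pairwise correlation between subjects' responses to the same stimulus. A minimal sketch follows; the array shapes and noise levels are invented, and the published analysis operates on multiple channels/components with more careful statistics.

```python
import numpy as np

def intersubject_correlation(responses):
    """responses: (n_subjects, n_timepoints) array holding each subject's
    response (one channel or component) to the same clip.
    Returns the mean Pearson correlation over all subject pairs."""
    r = np.corrcoef(responses)                       # subjects x subjects
    iu = np.triu_indices(responses.shape[0], k=1)    # upper-triangle pairs
    return r[iu].mean()

rng = np.random.default_rng(1)
shared = rng.normal(size=2000)                 # stimulus-driven signal
# Five subjects sharing the stimulus-driven component, plus subject noise:
locked = shared + 0.5 * rng.normal(size=(5, 2000))
unlocked = rng.normal(size=(5, 2000))          # no shared response
```

A shared stimulus-driven component yields a high ISC even though each subject's noise differs, while unrelated activity yields an ISC near zero.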

EEG while individuals watch movies

We had individuals watch clips from these 16 different movies. There is romance, comedy and drama, and a lot of awkward social interaction, since I think awkward social interactions drive a lot of brain responses.

We wanted to know what information could be retrieved from cortical responses to complex stimuli: first, whether we could determine which clips individuals were watching from their EEG, and second, whether their responses to the movies were related to their engagement with or preference for the content of the clips.

Intersubject correlations and movies

We had a simple way of predicting which movie clip individuals watched, based upon correlating responses across all subjects and picking the clip with the highest correlation. The dashed white line indicates chance at 6.25%, and we were above chance for all 16 clips. The ability to predict clips appeared to depend on the content of the clips, with the highest accuracy (above 50%) for an awkward dinner table scene from Silver Linings Playbook (Bradley Cooper’s character tells Jennifer Lawrence’s character “you say more inappropriate things than appropriate things”), and lower accuracies generally observed for comedic content. Overall, these findings indicate that the EEG is tracking changes in the auditory and visual features of the clip, and there is enough similarity in the fluctuations across subjects to predict which clips individuals viewed.
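The prediction step can be sketched as a nearest-template classifier: correlate the held-out subject's response with each clip's group-average response and pick the clip with the highest correlation. Everything here (shapes, noise level, four hypothetical clips) is invented for illustration.

```python
import numpy as np

def predict_clip(test_response, templates):
    """test_response: (n_timepoints,) EEG response from the held-out subject.
    templates: dict mapping clip name -> (n_timepoints,) group-average
    response. Returns the clip whose template correlates most strongly."""
    corrs = {clip: np.corrcoef(test_response, tmpl)[0, 1]
             for clip, tmpl in templates.items()}
    return max(corrs, key=corrs.get)

rng = np.random.default_rng(2)
templates = {name: rng.normal(size=1000)
             for name in ["clip_a", "clip_b", "clip_c", "clip_d"]}
# Held-out subject's response to clip_c: the template plus subject noise.
observed = templates["clip_c"] + 0.8 * rng.normal(size=1000)
```

Even with substantial subject-specific noise, the correct clip wins because its template shares the stimulus-locked fluctuations with the observed response.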

ISC's and preference

I thought this was a very elegant way, a very simple way (I guess those are related), to determine whether EEG responses were related to individuals’ preferences for the clip content. They rated each of the movies from 1 to 5 based upon whether they liked it or disliked it, and we computed the correlation between two individuals with respect to their ratings, which is Spearman’s rho on the x-axis. The average correlation between the EEG of that same pair is indicated on the y-axis. If they had similar preferences in clips then they would have similar EEG, but that wasn’t the case: individuals with similar clip preferences do not appear to demonstrate similar cortical responses.
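The preference-similarity measure for a pair of subjects is just Spearman's rho over their clip ratings; a small sketch with made-up ratings:

```python
import numpy as np
from scipy.stats import spearmanr

# Two subjects' 1-5 ratings of the same five clips (invented numbers).
ratings_a = np.array([5, 4, 2, 1, 3])
ratings_b = np.array([4, 5, 1, 2, 3])

# Rank correlation of the two rating vectors: one point on the x-axis.
rho, p_value = spearmanr(ratings_a, ratings_b)
```

In the study, this rho (one value per subject pair) was plotted against the pair's average EEG correlation, and no reliable relationship emerged.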

We should keep looking and thinking about what information these responses may provide. They could be related to your subsequent memory for the clips, or they could be broadly related to the clip content. There was a study that looked at individuals’ EEG engagement with clips of the TV series The Walking Dead and the number of Twitter tweets about those clips. They found that individuals’ EEG responses were more related to the public’s engagement with the clips than with the subjects’ own reported engagement (Dmochowski et al., 2014). So, it’s possible that they were more engaged by the clips that generated stronger EEG responses, but they aren’t very good at accurately reporting their engagement or preference. People sometimes don’t know what they want.


Summary of David Bridwell's Research

To summarize, I talked about different approaches to EEG experiments and analysis. I talked about what (scalp averaged) cortical oscillations reveal about cognition, perception, and attention. When oscillating visual or auditory inputs are presented, as in EEG frequency tagging, different cortical networks entrain to different input frequencies, revealing the attentional and perceptual properties of these networks.

With ERPs we decomposed the multiplexed response into individual peaks with group temporal ICA, and peak amplitudes demonstrated impairments in attention within patients diagnosed with schizophrenia. We also showed that individuals with a reduced error response had greater somatic symptoms, indicating a relationship between somatic depression symptoms and cognitive control.

Next, we measured EEG responses to movie clips and demonstrated that EEG fluctuations to the complex auditory and visual stimuli were similar enough across individuals to predict which movie clips they viewed.

References

Baldi, P., & Itti, L. (2010). Of bits and wows: a Bayesian theory of surprise with applications to attention. Neural Networks, 23(5), 649–666.

Basar, E. (1999). Brain Function and Oscillations, Principles and Approaches, 1. Berlin: Springer.

Bendat, J. S., & Piersol, A. G. (2000). Random Data. Analysis and measurement procedures (3rd ed.). New York: John Wiley & Sons.

Bridwell, D. A., Hecker, E. A., Serences, J. T., & Srinivasan, R. (2013). Individual differences in attention strategies during detection, fine discrimination, and coarse discrimination. Journal of Neurophysiology, 110(3), 784–794. https://doi.org/10.1152/jn.00520.2012

Bridwell, D. A., Leslie, E., McCoy, D., Plis, S., & Calhoun, V. D. (2017). Cortical Sensitivity to Guitar Note Patterns: EEG Entrainment to Repetition and Key. Frontiers in Human Neuroscience, 11(90).

Bridwell, D. A., Kiehl, K. A., Pearlson, G. D., & Calhoun, V. D. (2014). Patients with schizophrenia demonstrate reduced cortical sensitivity to auditory oddball regularities. Schizophrenia Research, 158(1–3), 189–194. https://doi.org/10.1016/j.schres.2014.06.037

Bridwell, D. A., Rachakonda, S., Silva, R. F., Pearlson, G. D., & Calhoun, V. D. (2016). Spatiospectral Decomposition of Multi-subject EEG: Evaluating Blind Source Separation Algorithms on Real and Realistic Simulated Data. Brain Topography. https://doi.org/10.1007/s10548-016-0479-1

Bridwell, D. A., & Srinivasan, R. (2012). Distinct attention networks for feature enhancement and suppression in vision. Psychological Science, 23(10), 1151–1158.

Bridwell, D. A., Steele, V. R., Maurer, J. M., Kiehl, K. A., & Calhoun, V. D. (2015). The relationship between somatic and cognitive-affective depression symptoms and error-related ERPs. Journal of Affective Disorders, 172, 89–95. https://doi.org/10.1016/j.jad.2014.09.054

Bridwell, D. A., Wu, L., Eichele, T., & Calhoun, V. D. (2013). The spatiospectral characterization of brain networks: fusing concurrent EEG spectra and fMRI maps. NeuroImage, 69, 101–111.

Bridwell, D. A., Roth, C., Gupta, C. N., & Calhoun, V. D. (2015). Cortical Response Similarities Predict which Audiovisual Clips Individuals Viewed, but Are Unrelated to Clip Preference. PLOS ONE, 10(6), e0128833. https://doi.org/10.1371/journal.pone.0128833

Buzsaki, G. (2006). Rhythms of the brain. New York: Oxford University Press.

Calhoun, V., & Adali, T. (2012). Multi-Subject Independent Component Analysis of fMRI: A Decade of Intrinsic Networks, Default Mode, and Neurodiagnostic Discovery. IEEE Reviews in Biomedical Engineering, 5, 60–72.

Calhoun, V. D., Adali, T., Pearlson, G. D., & Pekar, J. J. (2001). A method for making group inferences from functional MRI data using independent component analysis. Human Brain Mapping, 14(3), 140–151.

Calhoun, V. D., Liu, J., & Adali, T. (2009). A review of group ICA for fMRI data and ICA for joint inference of imaging, genetic, and ERP data. NeuroImage, 45(1 Suppl), S163.

Canolty, R. T., Edwards, E., Dalal, S. S., Soltani, M., Nagarajan, S. S., Kirsch, H. E., … Knight, R. T. (2006). High Gamma Power Is Phase-Locked to Theta Oscillations in Human Neocortex. Science, 313(5793), 1626–1628. https://doi.org/10.1126/science.1128115

Delorme, A., & Makeig, S. (2004). EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1), 9–21.

Dmochowski, J. P., Bezdek, M. A., Abelson, B. P., Johnson, J. S., Schumacher, E. H., & Parra, L. C. (2014). Audience preferences are predicted by temporal reliability of neural processing. Nature Communications, 5. https://doi.org/10.1038/ncomms5567

Doelling, K. B., & Poeppel, D. (2015). Cortical entrainment to music and its modulation by expertise. Proceedings of the National Academy of Sciences, 112(45), E6233–E6242. https://doi.org/10.1073/pnas.1508431112

Edelman, G. M., & Tononi, G. (2000). A universe of consciousness. New York: Basic Books.

Eichele, T., Rachakonda, S., Brakedal, B., Eikeland, R., & Calhoun, V. D. (2011). EEGIFT: group independent component analysis for event-related EEG data. Computational Intelligence and Neuroscience, 2011, 9.

Engel, A. K., Fries, P., & Singer, W. (2001). Dynamic predictions: oscillations and synchrony in top-down processing. Nature Reviews Neuroscience, 2, 704–716.

Falkenstein, M. (2004). ERP correlates of erroneous performance. In Errors, conflicts, and the brain. Current Opinions on Performance Monitoring (pp. 5–14). Leipzig: Max-Planck-Institut für Kognitions- und Neurowissenschaften.

Feld, S., & Fox, A. A. (1994). Music and language. Annual Review of Anthropology, 25–53.

Galambos, R., Makeig, S., & Talmachoff, P. J. (1981). A 40-Hz auditory potential recorded from the human scalp. Proceedings of the National Academy of Sciences, 78(4), 2643–2647.

Gazzaniga, M. S., Ivry, R. B., & Mangun, G. R. (2013). Cognitive Neuroscience: The Biology of the Mind (4th ed.). W. W. Norton & Company.

Hagmann, P., Cammoun, L., Gigandet, X., Meuli, R., Honey, C. J., Wedeen, V. J., & Sporns, O. (2008). Mapping the structural core of human cerebral cortex. PLoS Biol, 6(7), e159.

Hasson, U. (2004). Intersubject Synchronization of Cortical Activity During Natural Vision. Science, 303(5664), 1634–1640. https://doi.org/10.1126/science.1089506

Itti, L., & Baldi, P. (2009). Bayesian surprise attracts human attention. Vision Research, 49, 1295–1306.

Itti, L., & Baldi, P. (2005). A principled approach to detecting surprising events in video. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on (Vol. 1, pp. 631–637). Retrieved from http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1467327

Lamme, V. A. F., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11), 571–579.

Lee, T. W., Girolami, M., & Sejnowski, T. J. (1999). Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources. Neural Computation, 11(2), 417–441.

Luck, S. (2004). An introduction to the Event-Related Potential technique (2nd ed.). Boston: MIT Press.

Navalpakkam, V., & Itti, L. (2007). Search Goal Tunes Visual Features Optimally. Neuron, 53(4), 605–617. https://doi.org/10.1016/j.neuron.2007.01.018

Nunez, P., & Srinivasan, R. (2006). Electric Fields of the Brain: The neurophysics of EEG (2nd ed.). New York: Oxford University Press.

Patel, A. D. (2011). Why would Musical Training Benefit the Neural Encoding of Speech? The OPERA Hypothesis. Frontiers in Psychology, 2. https://doi.org/10.3389/fpsyg.2011.00142

Peretz, I., Vuvan, D., Lagrois, M.-E., & Armony, J. L. (2015). Neural overlap in processing music and speech. Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1664), 20140090–20140090. https://doi.org/10.1098/rstb.2014.0090

Picton, T. W., John, M. S., Purcell, D. W., & Dimitrijevic, A. (2003). Human auditory steady-state responses. International Journal of Audiology, 42(4), 177–219.

Roth, C., Gupta, C. N., Plis, S. M., Damaraju, E., Khullar, S., Calhoun, V. D., & Bridwell, D. A. (2013). The influence of visuospatial attention on unattended auditory 40 Hz responses. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00370

Scolari, M., & Serences, J. T. (2010). Basing Perceptual Decisions on the Most Informative Sensory Neurons. Journal of Neurophysiology, 104(4), 2266–2273. https://doi.org/10.1152/jn.00273.2010

Storch, E. A., Roberti, J. W., & Roth, D. A. (2004). Factor structure, concurrent validity, and internal consistency of the Beck Depression Inventory-Second Edition in a sample of college students. Depression and Anxiety, 19(3), 187–189. https://doi.org/10.1002/da.20002

Strogatz, S. H., Abrams, D. M., McRobie, A., Eckhardt, B., & Ott, E. (2005). Crowd synchrony on the Millennium Bridge. Nature, 438.

Tiitinen, H., Sinkkonen, J., Reinikainen, K., Alho, K., Lavikainen, J., & Naatanen, R. (1993). Selective attention enhances the auditory 40-Hz transient response in humans. Nature, 364, 59–60.

Varela, F., Lachaux, J. P., Rodriguez, E., & Martinerie, J. (2001). The brainweb: phase synchronization and large-scale integration. Nature Reviews Neuroscience, 2(4), 229–239.

Verghese, P., Kim, Y.-J., & Wade, A. R. (2012). Attention Selects Informative Neural Populations in Human V1. Journal of Neuroscience, 32(46), 16379–16390. https://doi.org/10.1523/JNEUROSCI.1174-12.2012

Yeung, N., Botvinick, M. M., & Cohen, J. D. (2004). The neural basis of error detection: conflict monitoring and the error-related negativity. Psychological Review, 111(4), 931–959.
