Biomag 2000 tutorial, Helsinki, Aug 16, 2000 (35+10 min)

Slide01. First, I want to emphasize that MEG data are usually NOT ambiguous. It is mostly quite obvious which areas are active. In this sense, the infamous inverse problem is not really the problem in the analysis of MEG data.

What I want to stress is that you really have to learn to read your signals. That is the sound basis for all analysis. Also, careful experimental design and measurement are the prerequisite for successful data analysis.

Slide02. The data sets I will show as examples are from paradigms using simple auditory stimulation, somatosensory stimulation, silent reading of single words, and reading words aloud, so the difficulty increases towards the end of the talk.

Slide03. This is our starting point: continuous MEG signals recorded when the subject heard short tone pips to his left, right, left, right, left ear, every 1 second. These signals were recorded over the left and right temporal lobes, and over the parietal and left occipital cortex. For auditory stimulation it is sometimes possible to identify evoked responses even in single trials, but not systematically. The responses are embedded in this strong background activity, which is occasionally quite rhythmic. At the end of my talk, we will return to this interesting background activity, which is noise only from the point of view of the phase-locked evoked responses we would now like to obtain. To extract the evoked responses, we will have to average brain activity over 80-100 trials, aligned with stimulus onset.
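The averaging step can be sketched in a few lines of NumPy. This is only a minimal illustration of the principle, not acquisition software; the function name, the array shapes, and the baseline handling are assumptions for the example:

```python
import numpy as np

def average_evoked(raw, events, sfreq, tmin=-0.05, tmax=0.25):
    """Average continuous MEG data over trials aligned with stimulus onset.

    raw    : (n_channels, n_samples) continuous recording
    events : sample indices of stimulus onsets
    sfreq  : sampling frequency in Hz
    tmin   : start of the epoch relative to onset (prestimulus baseline)
    tmax   : end of the epoch relative to onset
    """
    first = int(round(tmin * sfreq))          # samples before onset (negative)
    last = int(round(tmax * sfreq))           # samples after onset
    epochs = []
    for onset in events:
        if onset + first < 0 or onset + last > raw.shape[1]:
            continue                          # skip trials running off the record
        epoch = raw[:, onset + first: onset + last]
        # subtract the mean of the prestimulus baseline, channel by channel
        epoch = epoch - epoch[:, :-first].mean(axis=1, keepdims=True)
        epochs.append(epoch)
    return np.mean(epochs, axis=0)            # evoked response

```

With 80-100 trials, uncorrelated background activity is attenuated roughly by the square root of the number of trials, which is what makes the phase-locked response visible.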

Slide04. If the signals are recorded with magnetometers, like in the BTi whole-head system, or axial gradiometers, like in the CTF system, this is what we will see in response to the auditory stimulation. These data were recorded with the Neuromag Vectorview device, which has both planar gradiometers and magnetometers at the same locations, so I can nicely illustrate the different types of signals for you. There are 102 measurement locations. The measurement helmet is flattened onto a plane and we are viewing it from above, with the nose pointing upwards. We are looking at a 300-ms time interval, with a 50-ms prestimulus baseline.

A source current is here indicated by this arrow. Magnetometers, or axial gradiometers, detect two maxima, one negative and the other positive, on both sides of the current. Immediately above the source, the signal is zero. So, with magnetometers, when you do see a strong signal, the one thing you know for sure is that the source is NOT at that location. By looking at these two pairs of N100 maxima over the left and right hemispheres, we can tell that there must be source currents in both hemispheres, between the maxima.

Slide05. This slide shows the simultaneously recorded signals of the 204 planar gradiometers. The planar gradiometers detect one maximum, directly above the source current, where the field gradient is largest. Therefore, we now see only one single area of strong deflections in both hemispheres, just above the active cortical area.

In each measurement location, there are two planar gradiometers which are oriented orthogonally to each other. The upper signal of each pair is the output of the pickup coil most sensitive to current flowing longitudinally, laterally from the vertex. The lower signal is the output of the pickup coil most sensitive to latitudinal current flow. Thus, seeing that the response is mainly in the upper row tells us immediately that the current will be oriented pretty much vertically, when seen from the side, as it should be if it is generated within the sylvian fissure.

In the rest of the talk I will show data recorded with planar gradiometers. Their signals are often easier to read than those of magnetometers, especially when we get to the language data where several distinct areas are active simultaneously.

Slide06. This slide depicts the magnetic field pattern over the left hemisphere when we move through the strong N100 response. The field patterns are naturally the same, independent of whether the signals were recorded with axial or planar gradiometers. At about 60 ms, there is very little signal. Around the peak response, we can see a clearly dipolar field pattern. In these slides, the red area indicates magnetic field emerging from the brain and the blue area the re-entering field. At 150 ms after stimulus onset, we have again a clearly dipolar field pattern, but anterior and inferior to the earlier N100 pattern.

Note that the assumption that the sources can be represented by current dipoles is not just some mathematical trick to simplify the inverse calculation. When you have good-quality data, most of your strong cortical sources do produce focal dipolar field patterns.

Slide07. By scanning through the N100 response, we find a time point where the field is as closely dipolar as possible. The density curves should then be fairly symmetrical, and the line connecting the maxima should be perpendicular to the zero field line, indicated with this black curve. The center of the dipole should fall on the crossing of these lines. This way you should already have a feeling of what to expect before you calculate the dipole solution.

There is simultaneous activation in both hemispheres. To find the source of the left N100 response, we select a subset of sensors, covering the local field maxima. The selection is also depicted as the light gray squares on the helmet. This source is located in the lower lip of the sylvian fissure.

A model using this one source, plotted in light blue, allows comparison with the original signals, which are plotted in orange. This source accounts nicely for the N100 in the left hemisphere, but not so well for the later component, which had a clearly different field pattern. The right hemisphere N100 remains unexplained, as it should. We can get the right-hemisphere N100 source in the same way, by finding a clear dipolar field pattern and using another subset of sensors, as shown here...
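The subset-of-sensors fitting idea can be sketched as a least-squares search over candidate source locations. Everything here is a hypothetical input for illustration: the dictionary of candidate locations and their precomputed (n_sensors, 3) lead fields would in practice come from a conductor model such as a sphere, and real dipole fitting uses a nonlinear search rather than a grid:

```python
import numpy as np

def fit_dipole(data, lead_fields, sensor_subset):
    """Fit a single current dipole to one time point of MEG data.

    data          : (n_sensors,) measured field at the chosen latency
    lead_fields   : dict mapping candidate location -> (n_sensors, 3)
                    forward solution (hypothetical, e.g. from a sphere model)
    sensor_subset : indices of the sensors covering the local field maxima
    """
    best = None
    for loc, L in lead_fields.items():
        Ls = L[sensor_subset]                               # restrict to subset
        q, *_ = np.linalg.lstsq(Ls, data[sensor_subset], rcond=None)
        resid = data[sensor_subset] - Ls @ q
        err = resid @ resid                                 # squared residual
        if best is None or err < best[2]:
            best = (loc, q, err)
    loc, q, _ = best
    # predicted field over ALL sensors, for comparison with the original signals
    return loc, q, lead_fields[loc] @ q

```

Restricting the fit to the light-gray sensor subset corresponds to the `sensor_subset` argument; comparing the returned full-array prediction with the measured signals is the model-versus-data overlay shown on the slide.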

Slide08. In this slide, we now have both the left and right N100 sources available. If our source models are good, the source waveforms should not be dramatically affected by including only one or both sources in the multidipole model. The light green curve indicates the model where we have only the left N100 source, and the red model only has the right N100 source. Both sources are included for the orange curves. As we can see on the right, the waveforms are indeed not affected by inclusion of only one or both sources. However, the goodness-of-fit value calculated for all the 306 sensors is heavily reduced when we use only the left- or right-hemisphere source. So, there was no interaction between the sources and, according to this criterion, our model is fine for the N100 deflection.
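The goodness-of-fit value over all 306 sensors is commonly defined as the percentage of measured field variance explained by the model; a minimal sketch, assuming that standard definition:

```python
import numpy as np

def goodness_of_fit(measured, predicted):
    """Percentage of the measured field variance explained by the model.

    measured, predicted : (n_sensors,) field values at one time point
    """
    resid = measured - predicted
    return 100.0 * (1.0 - np.sum(resid ** 2) / np.sum(measured ** 2))

```

A single-hemisphere source can explain its local sensors almost perfectly yet leave the g-value over the whole array low, which is exactly why the two-dipole model is needed for the bilateral N100.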

Slide09. But, as you remember, the later deflection at around 200 ms was not explained by the N100 source. In this subject, the so-called P200 sources were located slightly anterior to the N100 sources, apparently in the upper lip of the superior temporal sulcus. The N100 and P200 sources are spatially rather close to each other and have fairly similar (but opposite) orientations of current flow. However, we can still try to include them all in the multidipole model. In the 4-dipole model, shown by these thick orange and red curves, we can see how the N100 sources are active first and then return to baseline when the P200 sources become active. The goodness-of-fit value now exceeds 80% for most of the measurement interval.

For comparison, I have also shown the previous 2-dipole model with only the N100 sources, plotted in bright yellow. The N100 waveforms are slightly affected by the inclusion of the P200 dipoles, because of the closeness of the sources. The g-value at the 100-ms peak does not depend on whether there are 2 or 4 sources in the model.

Slide10. We will now move on to a slightly more complex data set. These are responses to right median nerve stimulation. Again, we are looking at a 300-ms time interval. Obviously, there is a lot of activity over the left central sulcus, with different time behaviours, with early components close to the vertex and later components also more laterally in both hemispheres.

Slide11. Also in this case, by scanning through the field patterns one can recognize the different source areas rather nicely at distinct time points. At about 20 ms after stimulus onset, there is a dipolar field pattern forming over the left hand area in the primary somatosensory cortex. The picture becomes even clearer at about 30 ms, with a strong signal emerging from the hand area. Until about 50 ms, the field pattern remains rather unchanged. At about 80 ms, the left posterior parietal cortex produces a pronounced dipolar field pattern. At around 100 ms, it is accompanied by activations in the left and right second somatosensory cortices in the upper lip of the sylvian fissure, with the current therefore pointing upwards.

When several source areas are active rather simultaneously, one has to work a bit with the selection of the subset of sensors used for source localization. For the early sources in the upper row, almost any selection will do, because there is no other activation. For the posterior parietal source, one should exclude the left frontal sensors which see the SII activation. Similarly, for the left SII source, the sensor selection should avoid the parietal region. It may even be helpful to remove the field pattern arising from the posterior parietal source before finding the left SII source.

Similarly as for the auditory responses, the question is not where the active areas are but how to find such unequivocal field patterns that it is possible to localize the sources reliably enough.

Slide12. Here, the sources are shown on the subject's MR images. The 20-ms and 30-ms responses are generated by slightly different sources. However, they are so similar in location and orientation that they cannot both be included in the multidipole model, because they interact too much. Therefore, the strong 30-ms source represents SI activation in the model. That is why the SI waveform is negative at 20 ms and then goes positive. There is a nice sequence from SI to posterior parietal and further to the ipsi- and contralateral SII cortices, with all activations partly overlapping in time. Again, one can check for possible interactions between sources by leaving out one source at a time and seeing if it affects the other waveforms.
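The leave-one-out interaction check can be sketched as follows, assuming fixed dipole locations and orientations so that the source waveforms follow from a linear least-squares fit. The lead-field matrix `L` is a hypothetical input from the forward model:

```python
import numpy as np

def interaction_check(data, L):
    """Check for interactions between the dipoles of a multidipole model.

    data : (n_sensors, n_times) measured signals
    L    : (n_sensors, n_dipoles) lead field for the fixed dipoles

    Refits the source waveforms with each dipole left out in turn and
    reports the largest resulting relative change in the remaining
    waveforms; a large value signals interacting sources.
    """
    full = np.linalg.pinv(L) @ data                # waveforms, all dipoles in
    worst = 0.0
    for k in range(L.shape[1]):
        keep = [i for i in range(L.shape[1]) if i != k]
        sub = np.linalg.pinv(L[:, keep]) @ data    # dipole k removed
        change = np.abs(sub - full[keep]).max()
        worst = max(worst, change / (np.abs(full).max() + 1e-30))
    return full, worst

```

For well-separated sources, such as the left and right N100 dipoles earlier, the reported change stays near zero; for near-parallel sources like the 20-ms and 30-ms SI dipoles it becomes large, which is the criterion for dropping one of them.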

Slide13. For this high-quality data set, other analysis approaches will also give essentially the same sequence of activation. Here, the data were visualized with the Minimum Current Estimate analysis. From 20 to 50 ms, activity concentrates more or less to the SI cortex. At about 80 ms, both the posterior parietal cortex and the ipsilateral SII show activation. At about 100 ms, the SII activations have reached their maximum in both hemispheres.

Slide14. We now step further, to cortical activations during silent reading of single words. We had a parametric design to assess effects of lexicality and word length. The subject was shown short and long words and short and long nonwords, in a randomized sequence. Based on behavioural data, we expected that the length should not matter for real words, but should matter for nonwords.

Slide15. And here are the MEG responses in one subject, again using planar gradiometers. We are looking at a time interval of 1 second. The word is shown at time zero. Now there is activation essentially everywhere in the brain. We can of course recognize early visual activations in the posterior areas but otherwise it may not seem immediately obvious which exact areas are active. You might even think that we are in real trouble in this type of cognitive task. BUT we can do a lot about this data set as well.

In these enlarged sensor outputs we can already see some interesting effects. Over the left temporal area we have a response which is the same for short and long words but much stronger for long than short nonwords. It looks a lot like the lexicality-by-length effect we were expecting. This effect is seen mainly in the upper row in this area and we would thus expect a source rather similar to the auditory N100. In the posterior parietal cortex, we have a particularly strong response for long words. The early posterior visual response, in contrast, is similar for all stimuli.

Slide16. We might start looking for sources in the long nonword condition where we have the exceptionally strong left temporal activation. Although there are now many more simultaneously active areas than in the previous examples, we can still find a reasonably dipolar field pattern for this source. By selecting these light gray frontotemporal sensors we can avoid the unwanted effect of other active areas. The source is located in the superior temporal cortex. It is indeed rather similar to the N100 source as we could tell already from the sensor outputs. A model with this one source accounts nicely for these left temporal signals. It also helps to recognize systematic unexplained signals in other areas, systematic meaning that the same waveform can be seen in several adjacent sensors. For example, there is this early posterior deflection. Here, close to the vertex, we would expect to find a rather vertically oriented source current. In the right posterior parietal cortex, we should find a fairly horizontally oriented source. And more laterally, in the occipitotemporal cortex, another horizontally oriented source current, as the signals are mainly in the lower row.

Slide17. In this case we have four data sets which are rather similar but have some interesting differences because we have manipulated the stimuli in a systematic way. There are various approaches for modelling these data. You could analyze each data set independently and then combine the sources into a single model. You could start from the most prominent response pattern, like I just did, or you could start from the earliest systematic response. As soon as you find one source, the others will become more obvious. You could choose the sources with clearest field patterns and/or best confidence values for the combined model. Some source areas may be identified more successfully from one stimulus condition than another, like the left temporal source from the long nonword condition.

You could also start from one experimental condition, use these sources when you start to work with the next condition, and add and modify your sources while you work your way through the data sets. In fact, it would be best to analyze your data 2-3 times, using different approaches. The main thing is that you get to know your data.

Separate sets of sources for each condition may be useful if you expect to find small but systematic differences in location and orientations of current flow, say, for long words vs. long nonwords. On the other hand, a single set of sources for all experimental conditions is what you would like to have for comparing activation strengths and timing between stimuli. Always try to use the minimum number of sources which can explain the data.

Slide18. After the 2-3 analysis rounds, with different approaches, we have this model with 9 dipoles in this subject, spreading over the occipital areas, left temporal and posterior parietal, and right occipitotemporal cortex. The sources are ordered according to latency of activation. The first differences between stimuli start to appear at about 200 ms. One should always check that each individual source makes sense when comparing the source waveforms with the original signals. As an example, we could focus on this left temporal area. Here the signals in the upper and lower rows display quite different task dependence. Our solution suggests that sources 5 and 9 produce these signals. The long nonwords differ from the other stimuli in source 9 in the superior temporal cortex, while in the more inferior, rather horizontally oriented source 5, the response is strongest for the short nonwords, in agreement with the original signals.

Slide19. To check that the model is really meaningful, it is very useful to compare two stimulus conditions at a time. Here, we have responses to long words and long nonwords, which showed the most pronounced differences. There are at least two very obvious differences in the original waveforms, one over the left temporal and the other over the posterior parietal cortex. These areas apparently correspond to sources 9 and 8. Indeed, source 9 shows the strong response to nonwords whereas source 8 has the stronger activation to real words, with the time behaviour matching that of the original sensor waveforms. So, we are reasonably happy with this model.

Slide20. Now let us move to another topic: The next couple of slides will describe how one may enhance the clarity of field patterns by averaging with respect to different triggers, and how one may deal with speech artefacts. The subject again saw single words, but now he also read them aloud, prompted by a question mark. Here, the responses are averaged with respect to word onset. We are looking at a time interval of 2 seconds. If we now concentrate on the bilateral frontal activations at about 500 ms after word presentation, we can see a very nice and clear focal activation over the left hemisphere and can readily determine the source in the mouth motor cortex. However, in the right hemisphere, this is the clearest field pattern one can find, which is not too nice.

Slide21. Now, if we have recorded mouth movements with EMG electrodes placed here in the corners of the mouth, we can also average the same original data with respect to mouth movement onset. This seems like a clever thing to do, as we are apparently trying to identify sources in the mouth motor cortex. And, indeed, the frontal signals are slightly enhanced and now we see neat dipolar field patterns not only over the left motor cortex but also over the right hemisphere. Both of these sources can now be nicely localized, and the dipoles can be used as models also for the signals seen in the previous slide, which were averaged with respect to word onset.

Slide22. In most subjects, mouth movements cause serious artefacts. But one can often do something about that as well. Here, the original data were averaged with respect to speech onset, recorded with a microphone. The artefact, which concentrates along the rim of the helmet, is rather nicely time-locked with speech onset and, therefore, we can get a very clean artefact pattern. By emphasizing the artefact this way, we can then remove this disturbing field pattern from the responses averaged with respect to word onset and mouth movement onset, shown on the previous slides.
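One simple way to remove a fixed artefact topography from the averaged responses, in the spirit of signal-space projection, is to project each time sample onto the subspace orthogonal to the artefact pattern. This is a sketch under that assumption, not necessarily the exact procedure used here:

```python
import numpy as np

def remove_artefact(evoked, artefact_pattern):
    """Project a fixed artefact field pattern out of averaged responses.

    evoked           : (n_sensors, n_times) responses averaged to word or
                       mouth-movement onset
    artefact_pattern : (n_sensors,) clean artefact topography, obtained by
                       averaging the same raw data to speech onset
    """
    v = artefact_pattern / np.linalg.norm(artefact_pattern)
    # subtract the component of each time sample along the artefact topography
    return evoked - np.outer(v, v @ evoked)

```

The cleaner the speech-locked average, the better the artefact topography is estimated, which is why emphasizing the artefact by re-averaging pays off.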

Averaging the same data set with respect to different triggers can thus help to clarify both cortical sources and artefact patterns.

Slide23. I will now return to the continuous background activity which is so disturbing for the phase-locked evoked responses. But one can also obtain important information from the background activity. These spectra now show the frequency distribution in one healthy subject. Two sensors over the central sulcus are shown enlarged on top, and one occipital sensor below. The parieto-occipital activity is mainly in the 10-Hz range, called alpha rhythm, whereas the sensorimotor activity has both 10- and 20-Hz components, and is known as mu rhythm. The orange curves show the relaxed, eyes closed condition when the level of rhythmic activity is highest. The occipital rhythm is suppressed by opening the eyes and the sensorimotor rhythm by moving the left or right hand.

Slide24. The 20-Hz rhythm of the motor cortex is particularly useful. The coloured areas show source clusters of foot, hand, and mouth area 20-Hz rhythm in this subject. The dots denote the approximate sensor locations, where these signals come from. When the subject moves his left toes, the movement is followed by a transient increase of 20-Hz activity over the upper part of the right motor cortex. When the left index finger is moved, the modulation is found more laterally, over the hand area. For mouth movement, the burst is even more lateral. The task-related modulation of 20-Hz activity thus shows somatotopic organization.

Slide25. We may quantify the task-related modulation for example with a technique we call Temporal Spectral Evolution or TSE. The signal is first filtered through a passband suggested by the frequency spectrum, in this case around 20 Hz, and then its absolute value is taken. When this signal is averaged with respect to movement, one gets the average amplitude level of 20-Hz oscillations as a function of time with respect to movement onset. Note that the cortical rhythms are modulated over periods of a few seconds whereas the phase-locked evoked responses typically have a time scale of less than half a second.
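The TSE computation just described, band-pass filter, rectify, average, can be sketched directly. The filter order, passband edges, and epoch limits below are illustrative choices, not the exact parameters of the original analysis:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def tse(raw, events, sfreq, band=(15.0, 25.0), tmin=-1.0, tmax=3.0):
    """Temporal Spectral Evolution: filter, rectify, average.

    raw    : (n_channels, n_samples) continuous MEG data
    events : sample indices of movement (or stimulus) onsets
    sfreq  : sampling frequency in Hz
    band   : passband suggested by the frequency spectrum, here ~20 Hz
    """
    b, a = butter(4, [band[0], band[1]], btype="bandpass", fs=sfreq)
    filtered = filtfilt(b, a, raw, axis=1)      # zero-phase band-pass
    rectified = np.abs(filtered)                # instantaneous amplitude
    first, last = int(tmin * sfreq), int(tmax * sfreq)
    epochs = [rectified[:, t + first: t + last]
              for t in events
              if t + first >= 0 and t + last <= raw.shape[1]]
    # mean 20-Hz amplitude as a function of time relative to movement onset
    return np.mean(epochs, axis=0)

```

Because the rectified amplitude, not the raw oscillation, is averaged, the result survives averaging even though the oscillations themselves are not phase-locked to the movement; note also the seconds-long epoch, matching the slow modulation of the rhythms.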

Slide26. The 20-Hz rhythm in the motor cortex was modulated in the task where the subjects read single words aloud, which I mentioned a couple of slides back. The speech artefact is no longer a problem when we go up to the 20-Hz range. Here we have the whole-head view of one subject, showing the mean amplitude of 20-Hz oscillations from 1 second before word onset to 5 seconds after it. There is a clear local suppression of 20-Hz activity, which is generally taken to indicate that the cortex is doing heavy task-related computation. The suppression appears to concentrate approximately over the mouth areas but also over the hand area, with different time behaviours.

Slide27. In this rather simple situation, where the modulation concentrates to the motor cortex, we can model the continuous activity in pretty much the same way as for evoked responses. As the sources of the motor cortical 20-Hz activity concentrate to the hand and mouth areas, we can find representative sources in these areas, which are shown here on the subject's MRI. Thereafter, we can construct a multidipole model of the continuous activity using these four sources. These curves are 15-second intervals of the source waveforms, not the original sensor waveforms. A word was shown at each vertical line. The red curves of the mouth areas show fairly systematic suppression after word onset.

On the right, we have the modulation of 20-Hz activity in the hand and mouth areas, averaged over 10 subjects. We can see that the mouth area rhythms were strongly suppressed as soon as the word was shown, well before vocalization, whereas the weak hand area suppression started later, with the vocalization prompt.

Slide28. To conclude, the experimenter can do a lot to make data analysis easier. The first thing is to decide what one really needs to find out and choose the paradigm accordingly. For example, accurate localization is certainly needed for presurgical mapping of the central sulcus. The early somatosensory evoked fields will do a great job here as there is little activity in cortical areas other than the primary somatosensory cortex. But if one studies differences in timing, which is what MEG is really good at, for example during language tasks, extremely accurate localization is not really the issue to push. Systematic variation of stimuli is essential for functional localization, and also helpful in source modelling.

The experimenter should make sure that the data are of the highest quality. If one cannot avoid movement artefacts, they should at least be clean and clear-cut. No analysis method can save bad data. Garbage integrated is garbage.

It is essential that the experimenter knows the data throughout. It may be a good idea to use different analysis methods to look at the same data also because that makes one spend more time scrutinizing the signals and thus become thoroughly familiar with them. This makes it possible to fully understand the solutions one gets.

Finally, when a solution or model is ready, one should be able to recognize it also in the original signals. Everything should make sense in the end.

Slide29. But which tools to choose? Obviously, there are no miracle tools which would be more correct than others and this is of course the real inverse problem in MEG or EEG analysis.

User-independent tools are great for comparisons across sites and users. Clearly, one should still not assume that they are somehow more correct.

For good and clear data, any analysis tool should be fine and should give essentially the same results.

It is essential to have visual control of the solution or model, to verify which particular parts of the original data the different source areas account for. For the time being, I personally find the equivalent current dipole approach particularly attractive in this respect.

And, whichever tool one uses, one should of course be cautious in the interpretations. But this is true for all imaging techniques.