In MEG and EEG studies, the accuracy of head digitization affects the co-registration between functional and structural data, which is one of the major factors determining the spatial accuracy of MEG/EEG source imaging. Precisely digitized head-surface (scalp) points not only improve the co-registration but can also be used to deform a template MRI. Such an individualized template MRI can serve for conductivity modeling in MEG/EEG source imaging when the individual's structural MRI is unavailable. Electromagnetic tracking (EMT) systems (particularly Fastrak; Polhemus Inc., Colchester, VT, USA) have been the most common solution for digitization in MEG and EEG. However, they may occasionally suffer from ambient electromagnetic interference, which makes it challenging to achieve (sub-)millimeter digitization accuracy. The current study (i) evaluated the performance of the Fastrak EMT system under different conditions in MEG/EEG digitization, and (ii) explored the usability of two alternative EMT systems (Aurora; NDI, Waterloo, ON, Canada; and Fastrak with a short-range transmitter) for digitization. Tracking fluctuation, digitization accuracy, and robustness of the systems were evaluated in several test cases using test frames and human head models. The performance of the two alternative systems was compared against the Fastrak system. The results showed that the Fastrak system is accurate and robust for MEG/EEG digitization when the recommended operating conditions are met. The Fastrak with the short-range transmitter shows comparatively higher digitization error unless digitization is carried out very close to the transmitter. The study also shows that the Aurora system can be used for MEG/EEG digitization within a constrained range; however, some modifications would be required to make it a practical and easy-to-use digitizer. Its real-time error estimation feature could potentially improve digitization accuracy.
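As a minimal illustration of how digitization accuracy can be quantified, the error of a tracked point set can be summarized as the RMS Euclidean distance to known reference positions. This is a generic sketch with hypothetical, randomly jittered points, not the evaluation protocol of the study above:

```python
import numpy as np

def rms_digitization_error(digitized, reference):
    """RMS Euclidean distance (mm) between digitized points and
    their known reference positions; both inputs are (N, 3) arrays."""
    d = np.linalg.norm(digitized - reference, axis=1)
    return np.sqrt(np.mean(d ** 2))

# Hypothetical example: a reference point cloud with 0.5 mm of
# per-axis Gaussian jitter added to mimic tracking noise.
rng = np.random.default_rng(0)
reference = rng.uniform(-100, 100, size=(50, 3))            # mm
digitized = reference + rng.normal(0, 0.5, size=(50, 3))    # mm
print(f"RMS error: {rms_digitization_error(digitized, reference):.2f} mm")
```

With per-axis noise of 0.5 mm, the expected 3-D RMS error is about 0.87 mm, i.e. already near the millimeter scale that ambient interference can push a real EMT system past.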
Beamformers are applied for estimating spatiotemporal characteristics of neuronal sources underlying measured MEG/EEG signals. Several MEG analysis toolboxes include an implementation of a linearly constrained minimum-variance (LCMV) beamformer. However, differences in implementations and in their results complicate the selection and application of beamformers and may hinder their wider adoption in research and clinical use. Additionally, combinations of different MEG sensor types (such as magnetometers and planar gradiometers) and application of preprocessing methods for interference suppression, such as signal space separation (SSS), can affect the results in different ways for different implementations. So far, a systematic evaluation of the different implementations has not been performed. Here, we compared the localization performance of the LCMV beamformer pipelines in four widely used open-source toolboxes (MNE-Python, FieldTrip, DAiSS (SPM12), and Brainstorm) using datasets both with and without SSS interference suppression.
We analyzed MEG data that were i) simulated, ii) recorded from a static and moving phantom, and iii) recorded from a healthy volunteer receiving auditory, visual, and somatosensory stimulation. We also investigated the effects of SSS and the combination of the magnetometer and gradiometer signals. We quantified how localization error and point-spread volume vary with the signal-to-noise ratio (SNR) in all four toolboxes.
When applied carefully to MEG data with a typical SNR (3–15 dB), all four toolboxes localized the sources reliably; however, they differed in their sensitivity to preprocessing parameters. As expected, localizations were highly unreliable at very low SNR, but we found high localization error also at very high SNRs for the first three toolboxes while Brainstorm showed greater robustness but with lower spatial resolution. We also found that the SNR improvement offered by SSS led to more accurate localization.
• Different beamformer implementations are reported to sometimes yield differing source estimates for the same MEG data.
• We compared beamformers in four major open-source MEG analysis toolboxes.
• All toolboxes provide consistent and accurate results with 3–15-dB input SNR.
• However, localization errors are high at very high input SNR for the tested scalar beamformers.
• We discuss the critical differences between the implementations.
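At the core of every pipeline compared above is the same textbook LCMV spatial filter: weights computed from the sensor covariance and a source's leadfield. The following is a generic unit-gain scalar sketch, not the implementation of any of the four toolboxes; the leadfield and covariance are random stand-ins:

```python
import numpy as np

def lcmv_weights(leadfield, cov, reg=0.05):
    """Unit-gain scalar LCMV beamformer weights w = C⁻¹l / (lᵀC⁻¹l),
    with diagonal (Tikhonov) regularization of the covariance C."""
    n = cov.shape[0]
    # Regularize: C + μI with μ a fraction of the mean sensor variance.
    C = cov + reg * np.trace(cov) / n * np.eye(n)
    Ci_l = np.linalg.solve(C, leadfield)
    return Ci_l / (leadfield @ Ci_l)

# Hypothetical example: 10 sensors, random leadfield and covariance.
rng = np.random.default_rng(1)
l = rng.normal(size=10)                 # leadfield of one oriented source
noise = rng.normal(size=(1000, 10))
C = noise.T @ noise / 1000              # sample sensor covariance
w = lcmv_weights(l, C)
print("unit gain w·l =", w @ l)         # 1 by construction
```

The regularization fraction `reg` is exactly the kind of preprocessing parameter to which, as the comparison found, the toolboxes differ in sensitivity.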
Exaggerated subthalamic beta oscillatory activity and increased beta range cortico-subthalamic synchrony have crystallized as the electrophysiological hallmarks of Parkinson's disease. Beta oscillatory activity is not tonic but occurs in ‘bursts’ of transient amplitude increases. In Parkinson's disease, the characteristics of these bursts are altered especially in the basal ganglia. However, beta oscillatory dynamics at the cortical level and how they compare with healthy brain activity is less well studied. We used magnetoencephalography (MEG) to study sensorimotor cortical beta bursting and its modulation by subthalamic deep brain stimulation in Parkinson's disease patients and age-matched healthy controls. We show that the changes in beta bursting amplitude and duration typical of Parkinson's disease can also be observed in the sensorimotor cortex, and that they are modulated by chronic subthalamic deep brain stimulation, which, in turn, is reflected in improved motor function at the behavioural level. In addition to the changes in individual beta bursts, their timing relative to each other was altered in patients compared to controls: bursts were more clustered in untreated Parkinson's disease, occurring in ‘bursts of bursts’, and re-burst probability was higher for longer compared to shorter bursts. During active deep brain stimulation, the beta bursting in patients resembled healthy controls’ data. In summary, both individual bursts’ characteristics and burst patterning are affected in Parkinson's disease, and subthalamic deep brain stimulation normalizes some of these changes to resemble healthy controls’ beta bursting activity, suggesting a non-invasive biomarker for patient and treatment follow-up.
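Beta bursts such as those studied above are commonly defined by thresholding the band-limited amplitude envelope of a sensor or source signal. The sketch below shows this generic approach (band-pass filter, Hilbert envelope, percentile threshold, minimum duration); the band, threshold, and test signal are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_bursts(x, sfreq, band=(13, 30), pct=75, min_dur=0.1):
    """Return (onset, offset) sample indices of beta bursts, defined as
    segments where the Hilbert amplitude envelope of the band-passed
    signal exceeds its pct-th percentile for at least min_dur seconds."""
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    env = np.abs(hilbert(filtfilt(b, a, x)))
    above = np.r_[False, env > np.percentile(env, pct), False]
    edges = np.diff(above.astype(int))
    on, off = np.where(edges == 1)[0], np.where(edges == -1)[0]
    keep = (off - on) >= int(min_dur * sfreq)
    return list(zip(on[keep], off[keep]))

# Hypothetical example: two 0.5-s 20-Hz bursts embedded in noise.
rng = np.random.default_rng(2)
sfreq = 1000
t = np.arange(0, 5, 1 / sfreq)
x = rng.normal(0, 0.5, t.size)
for start in (1.0, 3.0):
    seg = (t >= start) & (t < start + 0.5)
    x[seg] += 2 * np.sin(2 * np.pi * 20 * t[seg])
bursts = detect_bursts(x, sfreq)
print(f"{len(bursts)} bursts detected")
```

Once onsets are available, the inter-burst intervals directly give the burst-clustering and re-burst statistics the study compares between patients and controls.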
Globally, the demand for improved health care delivery while managing escalating costs is a major challenge. Measuring the biomagnetic fields that emanate from the human brain already impacts the treatment of epilepsy, brain tumours and other brain disorders. This roadmap explores how superconducting technologies are poised to impact health care. Biomagnetism is the study of magnetic fields of biological origin. Biomagnetic fields are typically very weak, often in the femtotesla range, making their measurement challenging. The earliest in vivo human measurements were made with room-temperature coils. In 1963, Baule and McFee (1963 Am. Heart J. 55 95−6) reported the magnetic field produced by electric currents in the heart ('magnetocardiography'), and in 1968, Cohen (1968 Science 161 784−6) described the magnetic field generated by alpha-rhythm currents in the brain ('magnetoencephalography'). Subsequently, in 1970, Cohen et al (1970 Appl. Phys. Lett. 16 278-80) reported the recording of a magnetocardiogram using a Superconducting QUantum Interference Device (SQUID). Just two years later, in 1972, Cohen (1972 Science 175 664-6) described the use of a SQUID in magnetoencephalography. These last two papers set the scene for applications of SQUIDs in biomagnetism, the subject of this roadmap. The SQUID is a combination of two fundamental properties of superconductors. The first is flux quantization—the fact that the magnetic flux Φ in a closed superconducting loop is quantized in units of the magnetic flux quantum, Φ0 = h/2e ≈ 2.07 × 10−15 T m2 (Deaver and Fairbank 1961 Phys. Rev. Lett. 7 43-6, Doll and Näbauer 1961 Phys. Rev. Lett. 7 51-2). Here, h is the Planck constant and e the elementary charge. The second property is the Josephson effect, predicted in 1962 by Josephson (1962 Phys. Lett. 1 251-3) and observed by Anderson and Rowell (1963 Phys. Rev. Lett. 10 230-2) in 1963.
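The flux-quantum value quoted above follows directly from the defining SI constants, which have been exact since the 2019 redefinition; a quick numerical check:

```python
# Magnetic flux quantum Φ0 = h / 2e from the exact SI values of the
# Planck constant h and the elementary charge e.
h = 6.62607015e-34    # J s (exact)
e = 1.602176634e-19   # C  (exact)
phi0 = h / (2 * e)
print(f"Φ0 = {phi0:.4e} T m^2")  # ≈ 2.0678e-15 T m^2
```

A SQUID resolving 10−6 Φ0 in one second therefore tracks flux changes of order 2 × 10−21 T m2, which is what makes femtotesla-range biomagnetic fields measurable.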
The Josephson junction consists of two weakly coupled superconductors separated by a tunnel barrier or other weak link. A tiny electric current is able to flow between the superconductors as a supercurrent, without developing a voltage across them. At currents above the 'critical current' (maximum supercurrent), however, a voltage is developed. In 1964, Jaklevic et al (1964 Phys. Rev. Lett. 12 159-60) observed quantum interference between two Josephson junctions connected in series on a superconducting loop, giving birth to the dc SQUID. The essential property of the SQUID is that a steady increase in the magnetic flux threading the loop causes the critical current to oscillate with a period of one flux quantum. In today's SQUIDs, using conventional semiconductor readout electronics, one can typically detect a change in Φ corresponding to 10−6 Φ0 in one second. Although early practical SQUIDs were usually made from bulk superconductors, for example, niobium or Pb-Sn solder blobs, today's devices are invariably made from thin superconducting films patterned with photolithography or even electron lithography. An extensive description of SQUIDs and their applications can be found in the SQUID Handbooks (Clarke and Braginski 2004 Fundamentals and Technology of SQUIDs and SQUID Systems vol I (Weinheim, Germany: Wiley-VCH), Clarke and Braginski 2006 Applications of SQUIDs and SQUID Systems vol II (Weinheim, Germany: Wiley-VCH)). The roadmap begins (chapter 1) with a brief review of the state-of-the-art of SQUID-based magnetometers and gradiometers for biomagnetic measurements. The magnetic field noise referred to the pick-up loop is typically a few fT Hz−1/2, often limited by noise in the metallized thermal insulation of the dewar rather than by intrinsic SQUID noise. The authors describe a pathway to achieve an intrinsic magnetic field noise as low as 0.1 fT Hz−1/2, approximately the Nyquist noise of the human body. They also describe a technology to defeat dewar noise.
Chapter 2 reviews the neuroscientific and clinical use of magnetoencephalography (MEG), by far the most widespread application of biomagnetism with systems containing typically 300 sensors cooled to liquid-helium temperature, 4.2 K. Two important clinical applications are presurgical mapping of focal epilepsy and of eloquent cortex in brain-tumor patients. Reducing the sensor-to-brain separation and the system noise level would both improve spatial resolution. The very recent commercial innovation that replaces the need for frequent manual transfer of liquid helium with an automated system that collects and liquefies the gas and transfers the liquid to the dewar will make MEG systems more accessible. A highly promising means of placing the sensors substantially closer to the scalp for MEG is to use high-transition-temperature (high-Tc) SQUID sensors and flux transformers (chapter 3). Operation of these devices at liquid-nitrogen temperature, 77 K, enables one to minimize or even omit metallic thermal insulation between the sensors and the dewar. Noise levels of a few fT Hz−1/2 have already been achieved, and lower values are likely. The dewars can be made relatively flexible, and thus able to be placed close to the skull irrespective of the size of the head, potentially providing higher spatial resolution than liquid-helium based systems. The successful realization of a commercial high-Tc MEG system would have a major commercial impact. Chapter 4 introduces the concept of SQUID-based ultra-low-field magnetic resonance imaging (ULF MRI) operating at typically several kHz, some four orders of magnitude lower than conventional, clinical MRI machines. Potential advantages of ULF MRI include higher image contrast than for conventional MRI, enabling methodologies not currently available. 
Examples include screening for cancer without a contrast agent, imaging traumatic brain injury (TBI) and degenerative diseases such as Alzheimer's, and determining the elapsed time since a stroke. The major current problem with ULF MRI is that its signal-to-noise ratio (SNR) is low compared with high-field MRI. Realistic solutions to this problem are proposed, including implementing sensors with a noise level of 0.1 fT Hz−1/2. A logical and exciting prospect (chapter 5) is to combine MEG and ULF MRI into a single system in which both signal sources are detected with the same array of SQUIDs. A prototype system is described. The combination of MEG and ULF MRI allows one to obtain structural images of the head concurrently with the recording of brain activity. Since all MEG images require an MRI to determine source locations underlying the MEG signal, the combined modality would give a precise registration of the two images; the combination of MEG with high-field MRI can produce registration errors as large as 5 mm. The use of multiple sensors for ULF MRI increases both the SNR and the field of view. Chapter 6 describes another potentially far-reaching application of ULF MRI, namely neuronal current imaging (NCI) of the brain. Currently available neuronal imaging techniques include MEG, which is fast but has relatively poor spatial resolution, perhaps 10 mm, and functional MRI (fMRI) which has a millimeter resolution but is slow, on the order of seconds, and furthermore does not directly measure neuronal signals. NCI combines the ability of direct measurement of MEG with the spatial precision of MRI. In essence, the magnetic fields generated by neural currents shift the frequency of the magnetic resonance signal at a location that is imaged by the three-dimensional magnetic field gradients that form the basis of MRI. The currently achieved sensitivity of NCI is not quite sufficient to realize its goal, but it is close. 
The realization of NCI would represent a revolution in functional brain imaging. Improved techniques for immunoassay are always being sought, and chapter 7 introduces an entirely new topic, magnetic nanoparticles for immunoassay. These particles are bio-functionalized, for example with a specific antibody which binds to its corresponding antigen, if it is present. Any resulting changes in the properties of the nanoparticles are detected with a SQUID. For liquid-phase detection, there are three basic methods: AC susceptibility, magnetic relaxation and remanence measurement. These methods, which have been successfully implemented for both in vivo and ex vivo applications, are highly sensitive and, although further development is required, it appears highly likely that at least some of them will be commercialized. Chapter 8 concludes the roadmap with an assessment of the commercial market for MEG systems. Despite the huge advances that have been realized since MEG was first introduced, the number of commercial systems deployed around the world remains small, around 250 units employing about 50 000 SQUIDs. The slow adoption of this technology is undoubtedly in part due to the high cost, not least because of the need to surround the entire system in an expensive magnetically shielded room. Nonetheless, the recent introduction of automatically refilling liquid-helium systems, the ongoing reduction in sensor noise, the potential availability of high-Tc SQUID systems, the availability of new and better software and the combination of MEG with ULF MRI all have the potential to increase the market size in the not-so-distant future. In particular, there is a great and growing need for better noninvasive technologies to measure brain function. There are hundreds of millions of people in the world who suffer from brain disorders such as epilepsy, stroke, dementia or depression.
The enormous cost to society of these diseases can be reduced by earlier and more accurate detection and diagnosis. Once the challenges outlined in this roadmap have been met and the outstanding problems have been solved, the potential demand for SQUID-based health technology can be expected to increase by ten- if not hundred-fold.
Highlights
► We evaluated novel methods for artifact suppression and movement correction in MEG data recorded in 20 subjects with controlled magnetic artifacts and head movements.
► The results show that methods based on signal space separation can reliably suppress interference from nearby sources.
► Head movement correction recovered evoked responses of satisfactory quality in all test conditions.
In this paper, we study the performance of a source montage corresponding to 29 brain regions reconstructed from whole-head magnetoencephalographic (MEG) recordings, with the aim of facilitating the review of MEG data containing epileptiform discharges. Test data were obtained by superposing simulated signals from 100-nAm dipolar sources on a resting-state MEG recording from a healthy subject. Simulated sources were placed systematically at different cortical locations for defining the optimal regularization for the source montage reconstruction and for assessing the detectability of the source activity from the 29-channel MEG source montage. The signal-to-noise ratio (SNR), computed for each source from the sensor-level and source-montage signals, was used as the evaluation parameter. Without regularization, the SNR from the simulated sources was larger in the sensor-level signals than in the source montage reconstructions. Setting the regularization to 2% increased the source montage SNR to the same level as the sensor-level SNR, improving the detectability of the simulated events from the source montage reconstruction. Sources producing an SNR of at least 15 dB were visually detectable from the source-montage signals. Such sources are located closer than about 75 mm from the MEG sensors, in practice covering all areas in the grey matter. The 29-channel source montage creates more focal signals compared to the sensor space and can significantly shorten the detection time of epileptiform MEG discharges for focus localization.
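The two ingredients of the evaluation above, an SNR in decibels and a regularized inverse operator, can be written down generically. This is a textbook Tikhonov sketch with the regularization parameter set as a 2% fraction of the largest eigenvalue of the sensor-space Gram matrix; the exact regularization scheme of the source-montage software may differ:

```python
import numpy as np

def snr_db(signal, noise):
    """SNR in decibels from the mean power of signal and noise segments."""
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

def regularized_inverse(L, reg_frac=0.02):
    """Tikhonov-regularized inverse operator W = Lᵀ (L Lᵀ + λI)⁻¹ for a
    sensors × sources gain matrix L, with λ a fraction (here 2 %) of the
    largest eigenvalue of the Gram matrix L Lᵀ."""
    G = L @ L.T
    lam = reg_frac * np.linalg.eigvalsh(G).max()
    return L.T @ np.linalg.inv(G + lam * np.eye(G.shape[0]))

# Hypothetical check: a unit-amplitude sinusoid against constant-power
# noise of RMS 0.1 gives 10·log10(0.5 / 0.01) ≈ 17 dB.
t = np.arange(1000) / 1000.0
sig = np.sin(2 * np.pi * 10 * t)
print(f"SNR: {snr_db(sig, 0.1 * np.ones_like(t)):.1f} dB")
```

The 15-dB visual-detectability limit reported above corresponds to a signal power roughly 32 times the noise power in the montage channel.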
High frequency oscillations (HFOs, 80-500 Hz) in invasive EEG are a biomarker for the epileptic focus. Ripples (80-250 Hz) have also been identified in non-invasive MEG, yet detection is impeded by noise, their low occurrence rates, and the workload of visual analysis. We propose a method that identifies ripples in MEG through noise reduction, beamforming and automatic detection with minimal user effort. We analysed 15 min of presurgical resting-state interictal MEG data of 25 patients with epilepsy. The MEG signal-to-noise ratio was improved by using a cross-validation signal space separation method, and by calculating ~2400 beamformer-based virtual sensors in the grey matter. Ripples in these sensors were automatically detected by an algorithm optimized for MEG. A small subset of the identified ripples was visually checked. Ripple locations were compared with MEG spike dipole locations and the resection area if available. Running the automatic detection algorithm resulted in on average 905 ripples per patient, of which on average 148 ripples were visually reviewed. Reviewing took approximately 5 min per patient, and identified ripples in 16 out of 25 patients. In 14 patients the ripple locations showed good or moderate concordance with the MEG spikes. For six out of eight patients who had surgery, the ripple locations showed concordance with the resection area: 4/5 with good outcome and 2/3 with poor outcome. Automatic ripple detection in beamformer-based virtual sensors is a feasible non-invasive tool for the identification of ripples in MEG. Our method requires minimal user effort and is easily applicable in a clinical setting.
Objective: Magnetoencephalography (MEG) signals typically reflect a mixture of neuromagnetic fields, subject-related artifacts, external interference and sensor noise. Even inside a magnetically shielded room, external interference can be significantly stronger than brain signals. Methods such as signal-space projection (SSP) and signal-space separation (SSS) have been developed to suppress this residual interference, but their performance might not be sufficient in cases of strong interference or when the sources of interference change over time. Methods: Here we suggest a new method, extended signal-space separation (eSSS), which combines a physical model of the magnetic fields (as in SSS) with a statistical description of the interference (as in SSP). We demonstrate the performance of this method via simulations and experimental MEG data. Results: The eSSS method clearly outperforms SSS and SSP in interference suppression regardless of the extent of a priori information available on the interference sources. We also show that the method does not cause location or amplitude bias in dipole modeling. Conclusion: Our eSSS method provides better data quality than SSP or SSS and can be readily combined with other SSS-based methods, such as spatiotemporal SSS or head movement compensation. Thus, eSSS extends and complements the interference suppression techniques currently available for MEG. Significance: Due to its ability to suppress external interference to the level of sensor noise, eSSS can facilitate single-trial data analysis, exemplified in automated analysis of epileptic data. Such an enhanced suppression is especially important in environments with large interference fields.
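The statistical half of the eSSS idea, SSP, amounts to projecting the data onto the orthogonal complement of a few measured interference field patterns. The sketch below shows that core projection step only, with a random hypothetical interference topography; it is not the eSSS method itself, which additionally incorporates the SSS field model:

```python
import numpy as np

def ssp_projector(interference_vectors):
    """Build the SSP operator P = I − U Uᵀ, where the columns of U
    orthonormally span the measured interference field patterns
    (a sensors × k matrix)."""
    U, _ = np.linalg.qr(interference_vectors)
    n = interference_vectors.shape[0]
    return np.eye(n) - U @ U.T

# Hypothetical example: remove one interference pattern from 32-channel data.
rng = np.random.default_rng(3)
pattern = rng.normal(size=(32, 1))              # interference topography
brain = rng.normal(size=(32, 200))              # "brain" signals
data = brain + pattern @ rng.normal(size=(1, 200)) * 5
P = ssp_projector(pattern)
cleaned = P @ data
print("residual interference:", np.linalg.norm(pattern.T @ cleaned))
```

The projection removes the interference subspace exactly but also any brain-signal component lying in it, which is one motivation for combining it with a physical field model as eSSS does.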
Recent studies reported differential information in human magnetocardiogram and in electrocardiogram. Vortex currents have been discussed as a possible source of this divergence. With the help of physical phantom experiments, we quantified the influence of active vortex currents on the strength of electric and magnetic signals, and we tested the ability of standard source localization algorithms to reconstruct vortex currents. The active vortex currents were modeled by a set of twelve single current dipoles arranged in a circle and mounted inside a phantom that resembles a human torso. Magnetic and electric data were recorded simultaneously while the dipoles were switched on stepwise one after the other. The magnetic signal strength increased continuously for an increasing number of dipoles switched on. The electric signal strength increased up to a semicircle and decreased thereafter. Source reconstruction with unconstrained focal source models performed well for a single dipole only (less than 3-mm localization error). Minimum norm source reconstruction yielded reasonable results only for a few of the dipole configurations. In conclusion, active vortex currents might explain, at least in part, the difference between magnetically and electrically acquired data, but improved source models are required for their reconstruction.
In this paper, a new approach is presented for the assessment of a 3-D anatomical and functional model of the heart including structural information from magnetic resonance imaging (MRI) and functional information from positron emission tomography (PET) and magnetocardiography (MCG). The method uses model-based co-registration of MR and PET images and marker-based registration for MRI and MCG. Model-based segmentation of MR anatomical images results in an individualized 3-D biventricular model of the heart including functional parameters from PET and MCG in an easily interpretable 3-D form.