Research Focus
Overview

Current research activities:
- Brain responses indicate learning and experience, and those responses change with aging
- Sensory binding, gamma oscillations, and speech understanding in aging
- The interaction between musical rhythms and brain rhythms
- A new music-supported rehabilitation training for stroke patients
- Somatosensory stimulation supporting training and learning
- Auditory scene analysis
- Advancing the MEG research methods

My main research questions in recent years were "What makes speech understanding difficult in aging, and how can we overcome those difficulties?" and "What is special about music that we can apply to medicine?". At first glance, the two questions seem unrelated; however, when studying the underlying neural mechanisms, it becomes clear that both are strongly interconnected.

My research on speech understanding in noise is informed by a common complaint of elderly people: amplification with hearing aids restores sensation, specifically at the high frequencies that become more difficult to hear in older age, but it does not help to separate speech from noise and consequently does not improve understanding. Our previous research supported the hypothesis that hearing sensation and the registration of sound at the brain level are sufficiently well preserved in elderly listeners, whereas the efficacy of the neural processes that interpret the sound, which result in perception of the meaning of speech, is affected by age. These perceptual processes also seem to be more vulnerable to noise in older than in young adults. A broad part of my research is dedicated to identifying the neural mechanisms underlying auditory perception and how they are affected by aging.

One of the most intriguing findings of human neuroscience in recent decades was the insight into neuroplastic reorganization based on learning and training, even in the adult and aging brain. In early work in this area, we studied professional musicians as a model of life-long intense training and found enhanced brain function compared to non-musicians. Moreover, recent findings suggest that musicianship may have a positive impact on preserving listening abilities in advanced age. My current research addresses the question of what is special about music that facilitates neuroplasticity. Music-making seems to play an important role; we have no evidence that merely listening to music results in neuroplastic changes similar to those observed in professional musicians. One hypothesis is that the action-perception cycle during music-making requires precise coordination between brain areas of perception and sensorimotor areas, as well as brain areas of motor planning, memory, and cognition. Such strong functional connectivity between brain areas of different modalities may have synergistic effects on plastic reorganization. Specifically, precise timing in rhythm perception and rhythmic movement on the beat of musical sound may strengthen neural connections. My basic research addressed the understanding of the neural mechanisms underlying rhythm perception. We applied these concepts in a clinical trial comparing a new music-supported stroke rehabilitation training with a conventional intervention. Chronic stroke patients with unilateral impairment of arm and hand function learned to play a musical instrument. During music-making, they exercised precise movements of the impaired hand, guided by the timing structure of the musical rhythm.
Patients could adjust and correct their movements through the continuous feedback of the produced sound. We performed intense neuroimaging and behavioural testing over the time course of the training; the data are currently under evaluation.

Another hot topic in current neuroscience research is whether brain stimulation paired with a learning task could improve learning. While electric or magnetic interactions with brain function are rather unspecific, and their efficacy is discussed controversially, sensory stimulation could engage brain functions more specifically. The underlying concept is that the rhythm of the sensory stimulation synchronizes precisely with a brain rhythm. By controlling the frequency of sensory stimulation, it seems possible to interact with specific brain functions. I studied in detail the effects of auditory and somatosensory rhythmic stimulation on oscillatory brain activity. First results in stroke patients showed that neuroplastic reorganization is indeed expressed in a strengthening of high-frequency brain oscillations.

Brain responses indicate learning and experience, and those responses change with aging

My aim was to identify brain responses that are specific to perception. We want to apply such responses as objective indicators of central auditory processing, neural plasticity, and changes in aging. The auditory brain performs a hierarchy of processing steps when analyzing sound: First, the brain registers the sensation of sound. Then it analyzes detailed properties of the sound, such as pitch and timbre, temporal structure, loudness, and location of origin in space. The next step of central auditory processing is an analysis of the auditory scene, for example identifying the voice of a speaker and separating it from background noise. Finally, the brain interprets the sound, which leads to conscious perception and allows the listener to react to the perceived information. Our hypothesis is that this sequence of sensation, perception, and cognition is reflected in a sequence of auditory brain responses.

In a series of MEG studies, participants listened to sounds, and we differentially analyzed the series of subsequent positive and negative waves in the brain response following a sound event. Indeed, we found experimental evidence that perceptual learning results in neuroplastic changes of brain responses related to the representation of the meaning of sound in the auditory cortex. Healthy volunteers in our study participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable 'ba', which is an unusual speech sound for English listeners. Brain responses were recorded with MEG during two pre-training sessions and one post-training session and were compared with the behavioural performance of participants in differentiating a pre-voiced 'mba' syllable from the syllable 'ba'. We identified the sources of activity in the auditory cortex and analyzed the temporal dynamics of the auditory evoked responses. After both passive listening and active training, the amplitude of the P2 wave at a latency of 200 ms increased considerably. Two findings supported our interpretation that the recorded brain responses were related to the representation of the meaning of sound: First, the latency of the response at 200 ms after sound onset; by this time, the integration of stimulus features into an auditory object for further conscious perception is considered to be complete. Second, the center of activity in the cortex; P2 sources were localized in the anterior auditory association cortex, which has been described as part of the antero-ventral pathway for object identification. In contrast to these findings, the amplitude of the earlier N1 wave, which is related to the processing of sensory information, did not change over the time course of the study. We concluded that the P2 amplitude increase and its persistence over time constitute a neuroplastic change. The P2 gain likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability to scrutinize fine differences in pre-voicing time. Different trajectories of brain and behaviour changes suggest that the preceding P2 increase relates to brain processes that are necessary precursors of perceptual learning (Ross, Jamali, & Tremblay, 2013).
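The N1 and P2 waves described here are deflections of the averaged evoked response at roughly 100 ms and 200 ms after sound onset. As an illustration, a minimal sketch of this type of measurement, assuming epoched single-channel data in a NumPy array (channel selection, artifact rejection, and the source analysis used in the actual studies are omitted):

```python
import numpy as np

def n1_p2_amplitudes(epochs, fs, baseline_s=0.1):
    """Estimate N1 and P2 amplitudes from epoched single-channel data.

    epochs : array (n_trials, n_samples), each epoch starting
             `baseline_s` seconds before sound onset
    fs     : sampling rate in Hz
    """
    evoked = epochs.mean(axis=0)              # average over trials
    n0 = int(baseline_s * fs)                 # index of sound onset
    evoked = evoked - evoked[:n0].mean()      # baseline correction

    def window(t_lo, t_hi):
        return evoked[n0 + int(t_lo * fs): n0 + int(t_hi * fs)]

    n1 = window(0.08, 0.14).min()   # N1: negative deflection near 100 ms
    p2 = window(0.15, 0.25).max()   # P2: positive deflection near 200 ms
    return n1, p2

# Example with synthetic data: 100 trials of 0.6-s epochs at 1 kHz
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(-0.1, 0.5, 1 / fs)
wave = -np.exp(-((t - 0.1) / 0.02) ** 2) + 1.5 * np.exp(-((t - 0.2) / 0.03) ** 2)
epochs = wave + 0.5 * rng.standard_normal((100, t.size))
print(n1_p2_amplitudes(epochs, fs))
```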
A key finding of our earlier studies was that passive listening during an MEG session of one hour duration resulted in a similar amount of P2 increase as subsequent training sessions (Ross & Tremblay, 2009, Hearing Research, 248:48-59). We further investigated whether the P2 increase was specific to auditory learning and how long the observed changes would last. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language, so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared them to participants, matched in time and stimulus exposure, who did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. Notably, these effects were retained for months; P2 amplitudes were still significantly larger than at the initial measurement even six months after the training. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process. Changes in brain responses were visible before explicit learning resulted in increased behavioural performance. A further finding was the expression of a late negative wave 600-900 ms after stimulus onset, observed post-training exclusively in the group that learned to identify the pre-voiced contrast (Tremblay, Ross, Inoue, McClannahan, & Collet, 2014).

In the previous studies, we had observed that the P2 amplitude increased between MEG recordings on subsequent days. Thus, our question was whether the mere passing of time or a night of sleep is required to enable the increase in brain responses. We recorded auditory evoked fields with MEG while participants segregated and identified two simultaneously presented vowels in three sessions separated by twelve hours over two consecutive days. For one group, the first two practice sessions were scheduled in the morning and evening of the same day, with the third in the morning of the following day. The second group was tested in the evening of the first day and in the morning and evening of the subsequent day. We then examined the amplitudes and latencies of auditory cortex source waveforms as a function of sleep and the passage of time. Participants in both groups improved their auditory performance with repeated testing. Auditory learning was paralleled by an increased amplitude of the evoked brain response 200 ms after sound onset (i.e., P2 amplitude). Most importantly, the P2 increase over a night of sleep was significantly larger than the P2 increase during the day in both groups, independent of the order of MEG recordings. These neuromagnetic changes suggest that auditory learning involves sleep-dependent consolidation and that the effect of reorganization of brain function during sleep can be measured by the P2 amplitude in the MEG response (Alain, Da, He, & Ross, 2015).
Alain, C., Da, K., He, Y., & Ross, B. (2015). Sleep-dependent neuroplastic changes during auditory perceptual learning. Neurobiology of Learning and Memory, 118, 133-142.
Tremblay, K. L., Ross, B., Inoue, K., McClannahan, K., & Collet, G. (2014). Is the auditory evoked P2 response a biomarker of learning? Frontiers in Systems Neuroscience, 8, 1-13.
Ross, B. (2013). The auditory evoked P2 response indicates effects of aging on central auditory processing. Canadian Hearing Report, 8(4), 30-34.
Ross, B., Jamali, S., & Tremblay, K. L. (2013). Plasticity in neuromagnetic cortical responses suggests enhanced auditory object representation. BMC Neuroscience, 14(1), 151.

Sensory binding, gamma oscillations, and speech understanding in aging

My aims were to discover the neural circuitry that combines sound information into a meaningful object for perception, to study how noise interferes with this process, and to characterize aging-related changes. The research addressed the long-standing question of perceptual binding: How does the brain combine elements of sensation, like the line segments of a drawing, into a unique whole object, i.e., the drawing? The current view in neuroscience is that an object is unlikely to be represented by a single neuron that receives synaptic inputs from all the elements constituting the object. Instead of such a hard-wired model, sensory information is combined through synchrony in oscillatory networks. Such an oscillatory model allows fast dynamic reconfiguration and adaptation to the ever-changing sensory environment. It has been suggested that sensory binding takes place in recurrent connections between cortex and thalamus. We studied this network in detail, and how it is affected by aging.

We recorded auditory brain responses with MEG while participants listened to rhythmically modulated sound. The rhythm of the sound was gradually changed over time between 3 Hz and 60 Hz. At low frequencies, such a sound is perceived as a beating fluctuation of loudness. At faster rhythms, we cannot distinguish individual beats and perceive the sound as a vibrating flutter. Even faster rhythms are perceived as a rough sound. The change in the nature of the percept as the sound rhythm gets faster was reflected in changes of auditory cortical activity. Slow rhythms, perceived as distinct beats, elicited a series of responses for each beat. At a critical frequency around 12 Hz, beats were no longer separated, and the percept changed into a flutter sensation. At the same frequency, subsequent waves of auditory evoked responses overlapped and were no longer separated. A second level of overlapping brain responses occurred at 25 Hz, where the percept changed from flutter into roughness. The whole range of sound rhythms elicited strong brain responses in the 40-Hz range, the gamma band. The study provided insights into the cortical responses reflecting the processes of integrating successive acoustic events at different time scales for extracting complex features of natural sounds such as speech or music. Specifically, brain oscillations at 40 Hz seem to be involved in this process, as predicted earlier from theoretical considerations (Miyazaki, Thompson, Fujioka, & Ross, 2013).
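To make the stimulus concrete: a minimal sketch of a rhythmically modulated sound whose modulation rate sweeps from 3 Hz to 60 Hz, assuming a sinusoidally amplitude-modulated tone (the carrier frequency and the linear sweep profile are illustrative choices, not the published stimulus parameters):

```python
import numpy as np

fs = 44100          # audio sampling rate in Hz
dur = 30.0          # sweep duration in seconds
fc = 500.0          # illustrative carrier frequency in Hz
t = np.arange(int(fs * dur)) / fs

# Modulation rate increasing linearly from 3 Hz to 60 Hz
fm = 3.0 + (60.0 - 3.0) * t / dur
# Instantaneous modulator phase: integrate the time-varying rate
phase = 2 * np.pi * np.cumsum(fm) / fs
modulator = 0.5 * (1 + np.cos(phase))    # 100% amplitude modulation

stimulus = modulator * np.sin(2 * np.pi * fc * t)
```

The slow portion of such a sweep is heard as discrete beats, the middle range as flutter, and the fast end as roughness, matching the perceptual categories described above.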
We applied the paradigm of studying the perception of sound rhythm to the binaural case. We presented pure tones to the left and right ear; however, the frequencies differed slightly between the two ears. When the auditory brain combines both sounds, we experience a beating sensation, the binaural beat. This experimental procedure allowed us to study how the brain combines the sounds from both ears. Hearing with both ears allows the listener to localize sound in space and to separate sounds that originate from different locations in space. Therefore, binaural hearing is important for listening in a noisy environment. However, binaural hearing seems to be strongly affected by aging.

In our study, auditory cortex responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. Systematic phase differences between the bilateral responses suggest that separate representations of a sound object exist in the left and right auditory cortices. Our interpretation was that the observed difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations; however, the responses were largest at 40-Hz stimulation. We proposed that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. We concluded that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing, whereas the identification of sound location requires further interpretation and is limited by the rate of object representations (Ross, Miyazaki, Thompson, Jamali, & Fujioka, 2014).
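A binaural beat stimulus is simply a pure tone in each ear with a small frequency difference; the beat exists only after the brain combines the two inputs. A minimal sketch with illustrative parameters (a 40-Hz beat from 500-Hz and 540-Hz tones; the frequencies used in the study may differ):

```python
import numpy as np

fs = 44100                          # sampling rate in Hz
t = np.arange(int(fs * 2.0)) / fs   # 2 s of signal

f_left, f_right = 500.0, 540.0      # 40-Hz interaural frequency difference
left = np.sin(2 * np.pi * f_left * t)
right = np.sin(2 * np.pi * f_right * t)

# Two-channel stimulus: one tone per ear. Neither channel alone is
# modulated; the 40-Hz beat arises only from binaural interaction.
stereo = np.stack([left, right], axis=1)
```

Mixing the same two tones into one ear would instead produce an acoustic beat at the cochlea; the dichotic presentation is what isolates the central, binaural mechanism.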
In both studies on the perception of sound rhythms, we found the largest auditory cortex responses at 40 Hz, which supported the hypothesis that 40-Hz oscillations are strongly involved in perceptual binding. We studied in more detail how noise affects gamma oscillations and how the oscillatory neural mechanism changes with aging. Participants in our studies were engaged in a gap-detection task while listening to a 40-Hz amplitude-modulated sound presented to one ear. In half of the experimental trials, the noise of multiple people talking simultaneously was presented to the other ear, simulating listening in quiet or in noise. Analyzing the temporal dynamics of 40-Hz brain activity in the EEG, we found two different types of gamma oscillations: The first followed the time course of the stimulus with high precision, while the second showed a delay with a 200-ms time constant. Importantly, masking abolished only the second component. Our interpretation was that the first 40-Hz component is related to the representation of low-level sensory input, whereas the second is related to auditory processing of stimulus integration in thalamocortical networks. We also studied how noise interacted with the transient auditory evoked responses and found that the N1 wave was not affected by contralateral masking, while the P2 wave was strongly reduced. Our previous work, suggesting that N1 responses reflect the registration of the sensory input and that P2 is related to object representation, supports our interpretation that we identified two types of 40-Hz responses, one related to sensation and the other related to stimulus integration and binding (Ross, Miyazaki, & Fujioka, 2012).

We applied the experimental paradigm of listening to rhythmic sound with and without a contralateral masking noise to a group of older participants and compared the findings with young adults. The first component of the gamma oscillations, which was observed even when noise was presented, was of equal amplitude and temporal precision in young and older listeners. The second component, related to information binding, was affected by noise and was significantly smaller in older adults. Only the second component was correlated with behavioural performance in speech-in-noise understanding. The results support our hypothesis that the representation of sound information is well preserved in older adults, whereas the ability to integrate sound information and extract its meaning is affected by aging. The reduced amount of 40-Hz oscillations in the binding network of older listeners indicates that resources for central auditory processing are limited, which contributes to speech-in-noise understanding deficits in aging. Based on the experimental data, we refined the model of sensory binding through synchrony of gamma oscillations in a thalamocortical network (Ross & Fujioka, 2016).
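The time course of the stimulus-locked 40-Hz activity in these studies can be illustrated with a standard evoked-response analysis: average the trials, bandpass-filter around 40 Hz, and take the Hilbert envelope. A minimal sketch, assuming epoched single-channel data (the published analyses were more elaborate and included source-space projection):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_envelope(epochs, fs, f_lo=35.0, f_hi=45.0):
    """Amplitude envelope of the stimulus-locked 40-Hz response.

    epochs : array (n_trials, n_samples)
    fs     : sampling rate in Hz
    """
    evoked = epochs.mean(axis=0)        # averaging keeps phase-locked activity
    b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    narrow = filtfilt(b, a, evoked)     # zero-phase 35-45 Hz bandpass
    return np.abs(hilbert(narrow))      # instantaneous 40-Hz amplitude
```

Because averaging cancels non-phase-locked activity, this envelope isolates the component that follows the stimulus; comparing envelopes between quiet and masked trials shows which part of the 40-Hz response the contralateral noise abolishes.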
Ross, B., & Fujioka, T. (2016). 40-Hz oscillations underlying perceptual binding in young and older adults. Psychophysiology (in press, online March 2016).
Ross, B., Miyazaki, T., Thompson, J., Jamali, S., & Fujioka, T. (2014). Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing. Journal of Neurophysiology, 112, 1871-1884.
Miyazaki, T., Thompson, J., Fujioka, T., & Ross, B. (2013). Sound envelope encoding in the auditory cortex revealed by neuromagnetic responses in the theta to gamma frequency bands. Brain Research, 1506, 64-75.
Ross, B., Miyazaki, T., & Fujioka, T. (2012). Interference in dichotic listening: The effect of contralateral noise on oscillatory brain networks. European Journal of Neuroscience, 35, 106-118.

The interaction between musical rhythms and brain rhythms

We can easily move to the rhythm of sound, as in tapping to the beat of music and dancing. Interestingly, during such movements we do not react to the sound. Instead, while listening we develop an internal representation of time and then move in a predictive fashion: Finger taps commonly occur just before the next beat. The aim of our studies was to discover the neural mechanisms underlying auditory-sensorimotor interaction in rhythm perception. The studies were motivated by previous findings in Parkinson's disease patients, who could move smoothly to the rhythm of sound and even dance. Excessive beta activity, i.e., brain rhythms or oscillations at a rate of about 20 Hz, is characteristic of Parkinson's disease, and auditory stimulation could help to suppress beta activity. Our research was guided by keeping future application to stroke rehabilitation intervention in mind.

First, we studied the effects of the length of the tapping interval and of metric structure on behavioural tapping performance. Precise tapping becomes more difficult for longer tapping intervals. However, tapping at a long interval (e.g., every 1000 ms) is more precise when tapping on every second beat of a 500-ms sequence (i.e., a tap-beat ratio of 1:2) or on every fourth beat of a 250-ms sequence (i.e., a tap-beat ratio of 1:4). These findings support the hypothesis that listening to the beats establishes a precise internal representation of time (Zendel, Ross, & Fujioka, 2011).

We recorded brain responses with MEG while participants listened passively to metronome sounds at various rates. We specifically analyzed brain activity at 20 Hz (beta-band oscillations), which has been found predominantly in sensorimotor cortices. The magnitude of the beta oscillations was modulated by the rhythm of the stimulation: Beta oscillations decreased after each metronome stimulus, reached a minimum at 200 ms latency, and rebounded on a linear slope, reaching the initial magnitude just before the next beat occurred. Beta modulations were strongest in bilateral auditory cortices; however, they also involved a widespread brain network including bilateral primary sensory and motor areas, supplementary motor areas, cerebellum, and basal ganglia, which are brain areas involved in motor planning and execution. An important discovery was the time course of the beta oscillations, in which the beta rebound before the onset of the next beat seems to encode an internal representation of the predictable time interval, which supports predictive movement planning in a time-sensitive manner (Fujioka, Trainor, Large, & Ross, 2012).
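In contrast to the stimulus-locked 40-Hz envelope sketched earlier, this beta modulation is induced activity that is not phase-locked to the metronome, so power must be computed per trial before averaging. A minimal sketch of such an event-related desynchronization (ERD) time course, assuming epoched single-channel data:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_erd(epochs, fs, n_baseline, f_lo=15.0, f_hi=25.0):
    """Induced beta-power time course as percent change from baseline.

    epochs     : array (n_trials, n_samples), time-locked to the beat
    fs         : sampling rate in Hz
    n_baseline : number of samples before beat onset used as baseline
    """
    b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    narrow = filtfilt(b, a, epochs, axis=1)          # per-trial beta band
    power = np.abs(hilbert(narrow, axis=1)) ** 2     # per-trial power
    mean_power = power.mean(axis=0)                  # average after squaring
    baseline = mean_power[:n_baseline].mean()
    return 100.0 * (mean_power - baseline) / baseline
```

Negative values after the beat correspond to the beta decrease (ERD); the slow return toward zero before the next beat is the rebound described above.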
We conducted a series of experiments to test whether the observed modulation of beta oscillations is specific to auditory-motor interaction. For a short period of time, participants tapped with their right-hand fingers to accented beats or listened to accented beats without movement, for example with an accent on every second beat as in a march or on every third beat as in a waltz. Then the accents were removed from the isochronous beats, and participants continued listening without tapping while imagining the previously accented metric structure. We compared brain responses during the interval of accented beats and during listening to beats of equal loudness while imagining the previously induced meter.

We analyzed the beta-band oscillatory brain activity around 20 Hz, which was repeatedly recorded with MEG during a time interval of listening to accented beats followed by a time interval of imagining the continuation of the induced musical meter. First, we confirmed the previous result that beta activity decreased after each beat, which is termed event-related desynchronization (ERD). We demonstrated beta-band event-related desynchronization in the auditory cortex, and the amount of beta ERD was stronger for the accented downbeat than for the upbeat. This was the first demonstration of beta-band oscillations related to hierarchical and internalized timing information. Moreover, the meter representation in the beta oscillations was widespread across the brain, including sensorimotor and premotor cortices, parietal lobe, and cerebellum. The results extended the current understanding of the role of beta oscillations in the neural processing of predictive timing (Fujioka, Ross, & Trainor, 2015).

We extended the study of the role of rhythm perception in the predictive coding of movement with a more elaborate musical figure containing a shift between 'waltz' and 'march', a transition termed hemiola in Western music. While participants listened to the same metronome sequence and imagined the accents, their pattern of brain responses changed significantly just before the 'pivot' point of the metric transition from ternary to binary meter. Until 100 ms before the pivot point, brain activities were more similar to those in the simple ternary meter than to those in the simple binary meter, but the pattern was reversed afterwards. A similar transition was also observed at the downbeat after the pivot. Brain areas related to the metric transition were identified from source reconstruction of the MEG data using a beamformer and included auditory cortices, sensorimotor and premotor cortices, cerebellum, inferior/middle frontal gyrus, parahippocampal gyrus, inferior parietal lobule, cingulate cortex, and precuneus. The results strongly support the view that predictive timing processes related to auditory-motor, fronto-parietal, and medial limbic systems underlie metrical representation and its transitions (Fujioka, Fidali, & Ross, 2014).

We applied the experimental framework from our basic studies to a pilot study in preparation for a larger clinical trial of music-supported rehabilitation training in stroke. Our hypothesis was that motor rehabilitation training incorporating music playing would stimulate and enhance auditory-motor interaction in stroke patients. We examined three chronic patients who received 'music-supported therapy'. Neuromagnetic beta-band activity was remarkably alike during passive listening to a metronome and during finger tapping, with or without the metronome, for either the paretic or the non-paretic hand, suggesting a shared mechanism of the beta modulation. In the listening task, the magnitude of the beta decrease after tone onset was more pronounced at the post-training time point and was accompanied by improved arm and hand skills. These case data gave insight into the neural underpinnings of rehabilitation with music-making and rhythmic auditory stimulation (Fujioka, Ween, Jamali, Stuss, & Ross, 2012).

Fujioka, T., Ross, B., & Trainor, L. J. (2015). Beta-band oscillations represent auditory beat and its metrical hierarchy in perception and imagery. Journal of Neuroscience, 35(45), 15187-15198.
Fujioka, T., Fidali, B., & Ross, B. (2014). Neural correlates of intentional switching from ternary to binary meter in a musical hemiola pattern. Auditory Cognitive Neuroscience, 5, 1-15.
Fujioka, T., Ween, J. E., Jamali, S., Stuss, D. T., & Ross, B. (2012). Changes in neuromagnetic beta-band oscillation after music-supported stroke rehabilitation. Annals of the New York Academy of Sciences, 1252(1), 294-304.
Fujioka, T., Trainor, L. J., Large, E. W., & Ross, B. (2012). Internalized timing of isochronous sounds is represented in neuromagnetic beta oscillations. Journal of Neuroscience, 32(5), 1791-1802.
Zendel, B. R., Ross, B., & Fujioka, T. (2011). The effects of stimulus rate and tapping rate on tapping performance. Music Perception, 29(1), 65-78.

Somatosensory stimulation supporting training and learning

In our previous studies, we found that the auditory cortex responds selectively at frequencies around 40 Hz (gamma band). We employed this feature of brain function and applied auditory stimuli containing a strong 40-Hz rhythm to evoke stimulus-locked 40-Hz brain responses. This technique allowed us to record brain oscillations that are otherwise very small and hidden in noise. Similar principles should apply to the somatosensory system. Therefore, we applied vibration stimuli to the skin and recorded brain responses from sensorimotor cortices. We chose a stimulation frequency of 20 Hz because previous studies had shown the best responses of sensorimotor cortices at 20 Hz. The aims of the studies were to develop an objective measure of sensorimotor function that could be applied in training studies or rehabilitation interventions for assessing neuroplastic changes. Moreover, we were interested in studying gamma oscillations in the 40-Hz range, which had been discussed as potential indicators of learning. Finally, our aim was to improve source analysis for observing functional reorganization in primary sensory cortices during rehabilitation interventions.

For source analysis in the somatosensory cortex, we developed a procedure for stimulating the fingertip with 20-Hz vibration. We recorded 20-Hz brain activity with MEG, modelled the magnetic field with a single equivalent dipole in the brain hemisphere contralateral to the stimulated hand, and precisely localized the center of activity in the primary somatosensory cortex. We developed a signal statistic based on bootstrap resampling for estimating confidence volumes for the sources (Jamali & Ross, 2012).
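The bootstrap idea can be illustrated independently of the dipole model: resample the epochs with replacement, repeat the source fit on each resampled average, and take the spread of the fitted locations as the confidence volume. A minimal sketch, with `fit_dipole_location` standing in as a hypothetical placeholder for the actual dipole fit:

```python
import numpy as np

def bootstrap_confidence_volume(epochs, fit_dipole_location, n_boot=500, seed=0):
    """Bootstrap spread of dipole-location estimates.

    epochs             : array (n_trials, n_channels, n_samples)
    fit_dipole_location: function mapping an averaged epoch
                         (n_channels, n_samples) to an (x, y, z) location;
                         a placeholder for the actual source model
    """
    rng = np.random.default_rng(seed)
    n_trials = epochs.shape[0]
    locations = np.empty((n_boot, 3))
    for i in range(n_boot):
        idx = rng.integers(0, n_trials, n_trials)   # resample with replacement
        locations[i] = fit_dipole_location(epochs[idx].mean(axis=0))

    center = locations.mean(axis=0)
    cov = np.cov(locations.T)
    # Volume of the 95% confidence ellipsoid, assuming a Gaussian spread:
    # (4/3)*pi * sqrt(det(cov)) * chi2_{0.95,3dof}^(3/2)
    chi2_95_3dof = 7.815
    volume = 4.0 / 3.0 * np.pi * np.sqrt(np.linalg.det(cov)) * chi2_95_3dof ** 1.5
    return center, volume
```

A small confidence volume indicates that the source location is estimated reliably enough to resolve, for example, neighbouring finger representations.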
We refined the source localization procedure by stimulating two fingers simultaneously with slightly different frequencies (i.e., 19 Hz and 21 Hz instead of 20 Hz) and separated the brain responses by applying spectral analysis to the MEG data. We were able to reliably separate the somatotopically organized sensory areas responding to neighbouring fingers, which are no more than about 3 mm apart. At the same time, we shortened the required recording time (Jamali & Ross, 2013).
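This frequency tagging works because the steady-state response to each finger appears at its own stimulation frequency in the spectrum. A minimal sketch of separating the two responses in one recorded channel, assuming the epoch length is an integer number of seconds so that 19 Hz and 21 Hz fall on exact frequency bins:

```python
import numpy as np

def tagged_amplitudes(signal, fs, f_tags=(19.0, 21.0)):
    """Amplitude of the steady-state response at each tagged frequency.

    signal : array (n_samples,), an averaged steady-state recording
    fs     : sampling rate in Hz
    """
    n = signal.size
    spectrum = np.fft.rfft(signal) / n * 2          # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return {f: np.abs(spectrum[np.argmin(np.abs(freqs - f))]) for f in f_tags}

# Example: a synthetic mixture of both finger responses plus noise
fs, dur = 1000, 10
t = np.arange(fs * dur) / fs
sig = 1.0 * np.sin(2 * np.pi * 19 * t) + 0.5 * np.sin(2 * np.pi * 21 * t)
sig += 0.8 * np.random.default_rng(1).standard_normal(t.size)
print(tagged_amplitudes(sig, fs))   # close to {19.0: 1.0, 21.0: 0.5}
```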
40-Hz oscillations are elicited in addition to the 20-Hz response when the skin receptors are stimulated with 20-Hz vibration. This finding had not been considered thoroughly in the literature. Given our findings about 40-Hz activity in audition, we hypothesized that 40-Hz oscillations in the sensorimotor system may also be related to sensory binding. To test this hypothesis, we used a stimulation paradigm analogous to our auditory studies. We applied a 20-Hz vibrotactile stimulus to one finger and single touch stimuli to another finger of the same hand. Our hypothesis was that the vibrotactile stimulus would elicit 20-Hz and 40-Hz oscillations, while the touch stimulus would elicit transient somatosensory responses. An interaction between the touch stimulus and the 40-Hz response, but not the 20-Hz response, would indicate a role of 40-Hz oscillations in stimulus integration. Indeed, when the second finger was briefly touched, the 40-Hz responses were reset and recovered over a 200-ms time interval, similar to the time course of the 40-Hz response we had observed in the auditory modality. In contrast, the 20-Hz response was not affected by the concurrent stimulation, supporting the hypothesis that beta and gamma oscillations are involved in different brain functions (Ross, Jamali, Miyazaki, & Fujioka, 2013).

We studied whether short-term passive tactile stimulation at 20 Hz could improve tactile discrimination acuity and investigated whether sustained 20-Hz stimulation also modifies cortical responses and whether these changes are plastic, as indicated by differences between subsequent recording sessions. We applied vibrotactile stimuli at 20 Hz to the fingertip and recorded beta and gamma oscillations at multiples of the stimulus frequency with MEG. Consistent with the previous studies, we found neuromagnetic sources in the contralateral somatosensory cortex. The time courses of the amplitudes of the beta and gamma responses differed: While the beta responses decreased within a session but recovered after a break between two sessions, the gamma responses were consistent across repeated blocks and increased between the sessions. Specifically, the larger gamma response at subsequent sessions indicated neuroplastic reorganization. The differences between beta and gamma activities suggest that stimulus experience enhanced the temporal precision of neural activity, whereas the magnitude of the primary somatosensory response remained constant (Jamali & Ross, 2014).

We investigated whether our basic findings could serve as objective indicators of neuroplastic reorganization during rehabilitation training. We recorded 20-Hz somatosensory steady-state responses with vibrotactile stimulation of the index and ring fingers in MEG and analyzed the cortical sources displaying oscillations synchronized with the external stimuli in two groups of healthy older adults, before and after musical training or without training. In addition, we applied the same analysis in an anecdotal report of a single chronic stroke patient with hemiparetic arm and hand problems, who received music-supported therapy (MST). The results showed that healthy older adults had significant finger separation within the primary somatotopic map. Beta dipole sources were located more anteriorly than gamma sources. Most importantly, after music training, but not in the control group, the center of cortical activation shifted toward a more anterior location, and the synchrony between the stimuli and the beta and gamma oscillations increased. In the stroke patient, a normalization of somatotopic organization was observed after MST: digit separation recovered after training, and stimulus-induced gamma synchrony increased. We concluded that our stimulation paradigm captured the integrity of the primary somatosensory hand representation. Source position and the synchronization between stimuli and gamma activity seem to be objective indices that are sensitive to music-supported training. In the chronic stroke patient, we observed responsiveness to sensory stimulation, encouraging the concept of music-supported therapy. Notably, changes in somatosensory responses were observed even though the therapy did not involve specific sensory discrimination training. It seems that our protocol can be used for monitoring changes in neuronal organization during training and will improve the understanding of the brain mechanisms underlying rehabilitation (Jamali, Fujioka, & Ross, 2014).

Jamali, S., Fujioka, T., & Ross, B. (2014). Neuromagnetic beta and gamma oscillations in the somatosensory cortex after music training in healthy older adults and a chronic stroke patient. Clinical Neurophysiology, 125, 1213-1222.
Jamali, S., & Ross, B. (2014). Sustained changes in somatosensory gamma responses after brief vibrotactile stimulation. NeuroReport, 25(7), 537-541.
Jamali, S., & Ross, B. (2013). Somatotopic finger mapping using MEG: Toward an optimal stimulation paradigm. Clinical Neurophysiology, 124(8), 1659-1670.
Ross, B., Jamali, S., Miyazaki, T., & Fujioka, T. (2013). Synchronization of beta and gamma oscillations in the somatosensory evoked neuromagnetic steady-state response. Experimental Neurology, 245, 40-51.
Jamali, S., & Ross, B. (2012). Precise mapping of the somatotopic hand area using neuromagnetic steady-state responses. Brain Research, 1455, 28-39.
A new music-supported rehabilitation training for stroke patients

Playing music requires memorizing melodic sequences, planning movements in time, and receiving feedback in sound, touch, and vision. We and other researchers have shown evidence that training in music-making results in enhanced brain functions related to all of the above activities in healthy children and adults. Thus, the question arises as to how this benefit can be translated to stroke rehabilitation. Indeed, music-supported rehabilitation (MSR) therapy has recently been developed, in which stroke patients learn to play music on percussion and keyboard instruments. MSR has been shown to be more effective than conventional physical therapy in improving arm and hand skills for patients at an early stage after stroke. MSR is also an excellent candidate for rehabilitation services for chronic patients who have limited access to expensive therapies, because MSR can easily be delivered in community- or home-based settings without a certified therapist. Further, MSR adds an enjoyable context such that patients can easily engage in the training and focus their attention during it. Therefore, in this project, we systematically examined for the first time the impact of MSR on chronic stroke patients, specifically in terms of patients' behavioural and brain-function outcomes after MSR compared to those after conventional physical therapy, with randomized assignment to the interventions. For behavioural outcomes, we evaluated not only motor skills but also cognitive functions and health-related quality of life. Brain functional recovery was assessed with MEG, which has the sensitivity to detect changes in brain function in individuals. The knowledge gained from this project will help to examine the efficacy of MSR at the chronic stage of stroke recovery and reveal its underlying neural mechanisms, which, in turn, will allow us to improve rehabilitation services for stroke patients in the future.

We recruited in total 350 stroke patients in the chronic state, almost entirely from the community, and screened them for eligibility. Thirty-five patients finally enrolled in the study and completed the intervention program as well as intense behavioural testing and neuroimaging. Patients were randomly assigned to one of two streams of rehabilitation training, involving either a graded arm and hand training or the music-supported training. The training consisted of three weekly sessions of individual training over ten weeks. The music-supported training involved active music-making on percussion instruments and keyboard, allowing for a wide range of arm, hand, and finger movements, and was administered by a music therapist who adapted the task difficulty to the patient's abilities. Equivalently, the graded arm and hand training was individually administered, gradually adjusted to the patient's abilities, and involved a similar amount of interaction between patient and therapist as the MSR.

At four time points, before the rehabilitation training, at its midpoint, after the training, and after a three-month retention period, we performed intense behavioural testing and neuroimaging. We performed structural imaging with MRI and diffusion tensor imaging to assess changes in brain connectivity.
With MEG, we recorded auditory evoked responses, mismatch negativity responses to pitch and timing deviations, somatosensory evoked responses to vibrotactile stimulation, motor responses during auditory-cued and voluntary finger tapping, and resting-state activity. The imaging data are currently being analyzed and will provide detailed insights into plasticity in auditory and sensorimotor networks and their connections during the different streams of rehabilitation training.

In a first analysis of the auditory data, we found that central auditory processing deficits seem widely prevalent in stroke. Unfortunately, this is overlooked during the assessment of stroke because audiometric hearing thresholds are normal according to clinical standards, as was the case in our patients. Despite bilateral hearing, auditory responses were strongly attenuated in the lesioned hemisphere. Moreover, mismatch responses to pitch deviations were strongly reduced compared to control participants, and this was even more the case for timing deviations, which require a higher level of functional processing. A publication comparing these findings with lesion sizes is in an advanced state of preparation.

Auditory scene analysis

Understanding the neural mechanisms of the central auditory processes that analyze sound to identify a speaker in noise and understand speech is essential for our studies of how and why speech understanding becomes more difficult in older age. We performed a continuing series of studies to discover the principles of auditory scene analysis.

We used MEG source analysis to identify brain responses that are specific to the abstract representation of an auditory object rather than the sensation of elements of the acoustic input. Discovering such components of brain activity, which reflect the interpretation of sound, is important for studying how central auditory processing changes with aging (Arnott, Bardouille, Ross, & Alain, 2011).

For speech understanding in a noisy environment, listeners perceptually integrate the sounds originating from one person's voice based on the spectro-temporal characteristics of the voice as well as location information, and segregate these from the concurrent sounds of other talkers. We used MEG to study how spectral and/or spatial distances between two simultaneously presented steady-state vowels contribute to perception and to activation in the auditory cortex. Participants were more accurate in identifying both vowels when the vowels differed in their fundamental frequency f0 and location than when they differed in a single cue only or shared the same f0 and location. We concluded that during auditory scene analysis, acoustic differences among the various sources are combined linearly to increase the perceptual distance between the sound objects (Du, He, Ross, Bardouille, Wu, Li, & Alain, 2011).

We used an approach from information theory to analyze MEG signals recorded while participants were listening to audio recordings describing past personal episodes or general semantic events. Entropy measures identified that more local brain processes were involved in responses to the personal episodic recordings, whereas the general semantic recordings produced more distributed entropy (Heisz, Vakorin, Ross, Levine, & McIntosh, 2013).
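Entropy here quantifies the irregularity of the signal. As an illustration, a minimal sketch of sample entropy, one common measure of this kind (the published analysis used a related information-theoretic approach; the parameters m and r are conventional choices):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal.

    Counts pairs of length-m template matches (within tolerance r) that
    remain matches when extended to length m+1; lower values indicate a
    more regular signal.
    """
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n_templates = x.size - m   # same template count for lengths m and m+1

    def count_matches(length):
        templates = np.lib.stride_tricks.sliding_window_view(x, length)[:n_templates]
        count = 0
        for i in range(n_templates - 1):
            # Chebyshev distance from template i to all later templates
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 else np.inf

# A sine is more regular (lower entropy) than white noise
t = np.linspace(0, 10, 1000)
print(sample_entropy(np.sin(2 * np.pi * t)))                           # small
print(sample_entropy(np.random.default_rng(0).standard_normal(1000)))  # larger
```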
We showed rapid performance improvements when participants learned to identify simultaneously presented vowel stimuli and identified the involvement of specific brain areas when participants used spectral or spatial cues (Du, He, Arnott, Ross, Wu, Li, & Alain, 2015).

We demonstrated an analogue of the visual attentional blink, the failure to detect a target during the time interval immediately following the detection of a previous target, in the auditory modality, and described for the first time the involvement of oscillatory brain activity (Shen, Ross, & Alain, 2016).

Shen, D., Ross, B., & Alain, C. (2016). Temporal cue modulates alpha oscillations during auditory attentional blink. European Journal of Neuroscience (in press).
Du, Y., He, Y., Arnott, S. R., Ross, B., Wu, X., Li, L., & Alain, C. (2015). Rapid tuning of auditory "what" and "where" pathways by training. Cerebral Cortex, 25, 496-506.
Heisz, J. J., Vakorin, V., Ross, B., Levine, B., & McIntosh, A. R. (2013). A trade-off between local and distributed information processing associated with remote episodic versus semantic memory. Journal of Cognitive Neuroscience, 26(1), 41-53.
Du, Y., He, Y., Ross, B., Bardouille, T., Wu, X., Li, L., & Alain, C. (2011). Human auditory cortex activity shows additive effects of spectral and spatial cues during speech segregation. Cerebral Cortex, 21(3), 698-707.
Arnott, S. R., Bardouille, T., Ross, B., & Alain, C. (2011). Neural generators underlying concurrent sound segregation. Brain Research, 1387, 116-124.

Advancing the MEG research methods

We continuously advanced and improved the experimental procedures and data analyses for MEG.

We developed a method for re-aligning the magnetic field map of MEG recordings to a standard space. This method improves the comparison of MEG data collected from multiple participants, because individual positioning in the MEG is slightly different between sessions (Ross, Charron, & Jamali, 2011).

We compared oscillatory responses in MEG with fMRI activation using the same stimulation, to further the understanding of the relation between neural activity and the BOLD signal (Marxen, Cassidy, Dawson, Ross, & Graham, 2012).

We developed a method for the precise localization of sources in the somatosensory cortex using vibrotactile stimulation and the recording of somatosensory steady-state responses (Jamali & Ross, 2012).

We advanced MEG source analysis and demonstrated source separation of neighbouring fingers along the somatotopic organization in the primary somatosensory cortex (Jamali & Ross, 2013).

I summarized the fundamental methods of recording cortical auditory steady-state responses and reviewed current experimental paradigms in the Handbook of Clinical Neurophysiology (Ross, 2013).

We studied the possibilities and limitations of detecting synchrony between MEG signals, which is an important indicator of brain connectivity, and documented in detail the effects of noise as the most important limiting factor (Wianda & Ross, 2016).
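Synchrony between two channels is commonly quantified by a phase-locking value (PLV): the consistency of the instantaneous phase difference over time. A minimal sketch for two narrow-band signals, assuming Hilbert-transform phase extraction (additive sensor noise biases such estimates, which is the limitation examined in the paper):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two narrow-band signals of equal length.

    Returns a value between 0 (random phase relation) and 1
    (perfectly constant phase difference).
    """
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))
```

Uncorrelated noise added to either channel spreads the phase-difference distribution and pulls the PLV toward the value expected by chance, so synchrony estimates should be compared against a noise floor, for example from surrogate data.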
Wianda, E., & Ross, B. (2016). Detecting neuromagnetic synchrony in the presence of noise. Journal of Neuroscience Methods, 262, 41-55.
Ross, B. (2013). Steady-state auditory evoked responses. In G. G. Celesia (Ed.), Handbook of Clinical Neurophysiology (1st ed., Vol. 10, pp. 155-176).
Jamali, S., & Ross, B. (2013). Somatotopic finger mapping using MEG: Toward an optimal stimulation paradigm. Clinical Neurophysiology, 124(8), 1659-1670.
Jamali, S., & Ross, B. (2012). Precise mapping of the somatotopic hand area using neuromagnetic steady-state responses. Brain Research, 1455, 28-39.
Marxen, M., Cassidy, R. J., Dawson, T. L., Ross, B., & Graham, S. J. (2012). Transient and sustained components of the sensorimotor BOLD response in fMRI. Magnetic Resonance Imaging, 30(6), 837-847.
Ross, B., Charron, R. E. M., & Jamali, S. (2011). Realignment of magnetoencephalographic data for group analysis in the sensor domain. Journal of Clinical Neurophysiology, 28(2), 190-201.

Publications

Listed at Google Scholar: https://scholar.google.ca/citations?user=Adq5m_QAAAAJ&hl=en

Research Funding

2008-2011: Neural mechanisms of auditory-based remediation programs. Investigators: Claude Alain, Bernhard Ross, and Kelly Tremblay. Sponsor: Canadian Institutes of Health Research (CIHR), $85,000 per annum for 3 years.
2007-2010: Characterizing fMRI and MEG signals in the somatosensory cortex. Investigators: Simon Graham, Wilkin Chau, Randy McIntosh, Bernhard Ross, Jon Ween. Sponsor: Canadian Institutes of Health Research (CIHR), $100,000 per annum for 3 years.
2007: Magnetoencephalography as a possible diagnostic tool for traumatic brain injury: Cortical oscillations during retrieval from working memory. Investigator: Bernhard Ross. Sponsor: Department of Medicine, University of Toronto, The Dean's Fund, $10,000 for one year.
2007: Auditory cortex activation indicating audiovisual integration during listening and reading. Investigators: Bernhard Ross and Hao Luo. Sponsor: The Hearing Foundation of Canada, $24,500 for one year.
2007-2012: Improved characterization of human somatosensory cortex using simultaneous vibro-tactile stimulation. Investigator: Bernhard Ross. Sponsor: Natural Sciences and Engineering Research Council of Canada (NSERC), $26,000 per annum for five years.
2006-2011: Aging-related changes in central hearing: A neuromagnetic study. Investigators: Bernhard Ross, Terence Picton, Claude Alain. Sponsor: Canadian Institutes of Health Research (CIHR), $136,000 per annum for five years.
2006-2008: Transformation of whole-head magnetoencephalographic data onto standardized sensor positions. Investigators: Bernhard Ross, Wilkin Chau. Sponsor: Natural Sciences and Engineering Research Council of Canada (NSERC), $45,000 per annum for three years.
2005: Binaural processing in young and older adults. Investigator: Bernhard Ross. Sponsor: The Hearing Foundation of Canada, $23,500 for one year.
Select Publications