
Barry Horwitz, Ph.D.

Photo of Dr. Horwitz

Chief
Section on Brain Imaging and Modeling
Voice, Speech, and Language Branch

NIDCD/NIH
Bldg. 10, Rm. 5D39
MSC 1402
9000 Rockville Pike
Bethesda, MD 20892
Phone: (301) 594-7755
Fax: (301) 480-5625
E-mail: horwitzb@mail.nih.gov

Ph.D., University of Pennsylvania, 1972

Research Statement

The major research focus of the Brain Imaging and Modeling Section concerns ascertaining how interacting brain regions (i.e., neural networks) implement specific cognitive tasks, especially those associated with audition and language. We also study how these networks are altered in brain disorders. These issues are addressed by combining computational neuroscience techniques with neuroscientific data, especially those acquired using functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). The network analysis methods allow us to evaluate how brain operations differ between tasks, and between normal and patient populations, thus permitting us to determine which networks are dysfunctional and the role neural plasticity plays in enabling compensatory behavior to occur. Central to this research is the use of large-scale biologically realistic network models that relate neuroanatomical and neurophysiological data to the signals measured by functional brain imaging. Not only does computational modeling help interpret the meaning of functional brain imaging data, it also provides a framework to generate and quantitatively test hypotheses concerning the mechanisms by which specific cognitive tasks are implemented in the brain.
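To illustrate schematically how a model can relate simulated neural activity to a functional imaging signal, the Python sketch below convolves a burst of simulated neural activity with a conventional double-gamma hemodynamic response function to produce a BOLD-like time course. This is a generic textbook construction, not the section's own modeling framework; the parameters and variable names are illustrative.

```python
import math
import numpy as np

def gamma_pdf(t, shape, scale=1.0):
    """Gamma probability density, used here as an HRF building block."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] ** (shape - 1) * np.exp(-t[pos] / scale)
                / (math.gamma(shape) * scale ** shape))
    return out

def hrf(t):
    """Double-gamma hemodynamic response: peak near 5 s, undershoot near 15 s."""
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

dt = 0.1
t = np.arange(0.0, 32.0, dt)
# Simulated neural activity: a 2-second burst beginning at t = 5 s.
neural = ((t >= 5.0) & (t < 7.0)).astype(float)
# The BOLD-like signal is the neural time course convolved with the HRF,
# so the simulated imaging response peaks several seconds after the burst.
bold = np.convolve(neural, hrf(t))[: t.size] * dt
```

The lag between the neural burst and the peak of the convolved signal is one reason computational modeling is needed to interpret what imaging signals say about underlying neural activity.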

Research in our section is divided into three interconnected areas: (1) designing and executing neuroscientific experiments, primarily functional brain imaging studies; (2) network analyses of functional and effective connectivity between key brain regions, based on these data; and (3) developing and implementing large-scale neural models aimed at determining how the functional brain imaging signals from some of these experiments relate to the underlying cellular neural activity. These approaches are applied in particular to high-level auditory and language function. Because many of the analytic and computational methods we use were developed by our group, ongoing methodological development remains a major activity.
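As a minimal illustration of the functional connectivity analyses in (2), the Python sketch below computes a region-by-region correlation matrix from synthetic time series. This is a generic example, not the section's own methods (which also include effective connectivity modeling); the data and region structure are invented for the demonstration.

```python
import numpy as np

def functional_connectivity(ts):
    """Pairwise correlations across region time series.

    ts: array of shape (n_timepoints, n_regions), e.g. the mean
    fMRI BOLD signal extracted from each region of interest.
    """
    return np.corrcoef(ts, rowvar=False)

# Synthetic data: 3 regions, 200 time points; regions 0 and 1
# share a common driving signal, region 2 is independent noise.
rng = np.random.default_rng(0)
drive = rng.standard_normal(200)
ts = np.column_stack([
    drive + 0.5 * rng.standard_normal(200),
    drive + 0.5 * rng.standard_normal(200),
    rng.standard_normal(200),
])
fc = functional_connectivity(ts)
```

In this toy case the correlation between regions 0 and 1 is high while their correlations with region 2 are near zero, mirroring how such matrices are used to identify regions that cooperate within a task network.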

Recent Accomplishments

We have focused on using multisensory stimuli with fMRI to elucidate cognitive functions related to language processing, including naming, categorization, and recognition of speech and non-speech stimuli. For example, we examined the effects of training subjects to perceive pairs of sensory stimuli using delayed-match-to-sample (DMS) tasks. A DMS task involves the presentation of a stimulus, a delay, and the presentation of a second stimulus; the subject responds as to whether the second stimulus matches the first. A large body of data (functional, neuroanatomical, electrophysiological, and cognitive) from humans and nonhuman primates exists for this type of task. We have implemented computational neural models for both visual (Tagamets and Horwitz, 1998; Horwitz and Tagamets, 1999) and auditory (Husain et al., 2004) DMS tasks that generate simulated neural data and simulated functional neuroimaging data in agreement with experimental findings. Recently, we used MEG to study modulation of auditory cortex during the DMS task, and we are now incorporating the simulation of MEG data into our large-scale modeling framework.
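The trial structure just described can be sketched in code. The Python fragment below generates DMS trials, each with a sample stimulus, a retention delay, and a probe that matches the sample on roughly half the trials; the stimulus names and parameters are illustrative, not taken from the cited studies.

```python
import random
from dataclasses import dataclass

@dataclass
class DMSTrial:
    sample: str      # first stimulus
    delay_ms: int    # retention interval between the two stimuli
    probe: str       # second stimulus
    is_match: bool   # ground truth: does the probe match the sample?

def make_trial(stimuli, delay_ms=1000, p_match=0.5, rng=random):
    """Build one delayed-match-to-sample trial from a stimulus set."""
    sample = rng.choice(stimuli)
    if rng.random() < p_match:
        probe = sample
    else:
        probe = rng.choice([s for s in stimuli if s != sample])
    return DMSTrial(sample, delay_ms, probe, probe == sample)

# Illustrative auditory stimulus tokens.
trials = [make_trial(["tone_A", "tone_B", "tone_C"]) for _ in range(20)]
```

Comparing a subject's match/non-match responses against the `is_match` field of each trial yields the behavioral accuracy measures that accompany the imaging data in such experiments.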

Remembering associations between names and objects is fundamental to language. In one recent study, we taught subjects to associate pairs of auditory and visual stimuli, and then investigated long-term memory storage. We examined the effect of retention interval (immediate vs. one month) and language on the neural correlates of auditory–visual memory storage using fMRI and a paired associates (PA) task.

We are also interested in studying auditory disorders and their impact on language understanding using fMRI. For these studies, we examine the performance of subjects with hearing impairment and related sequelae to better understand their impact on the network of brain regions that process and construct auditory percepts. For example, one recent study compared patients with hearing loss who have tinnitus to hearing-loss patients without tinnitus.



Lab Personnel

Jason F. Smith, Ph.D., Research Fellow, 301-451-1647
Arpan Banerjee, Ph.D., Research Fellow
Jessica Gilbert, Ph.D., Postdoctoral Fellow
Mikhail Turovskiy, Technical IRTA

Selected Publications
