
Insights into Improving the Ability to Read the Human Mind

by Dr. Trupti Shirole on Feb 27 2017 10:25 AM

The capacity to monitor the brain in real time has tremendous potential for improving the diagnosis and treatment of brain disorders as well as for basic research on how the mind works.
Early this year, about 30 neuroscientists and computer programmers got together to improve their ability to read the human mind.

The hackathon was one of several that researchers from Princeton University and Intel, the largest maker of computer processors, organized to build software that can tell what a person is thinking in real time, while the person is thinking it.

The collaboration between researchers at Princeton and Intel has enabled rapid progress on the ability to decode digital brain data, scanned using functional magnetic resonance imaging (fMRI), to reveal how neural activity gives rise to learning, memory and other cognitive functions.

A review of computational advances toward decoding brain scans appears in the journal Nature Neuroscience, authored by researchers at the Princeton Neuroscience Institute and Princeton's departments of computer science and electrical engineering, together with colleagues at Intel Labs, a research arm of Intel.

Since the collaboration's inception two years ago, the researchers have whittled the time it takes to extract thoughts from brain scans from days down to less than a second, said Jonathan Cohen, co-director of the Princeton Neuroscience Institute, who is also a professor of psychology.

One type of experiment that is benefiting from real-time decoding of thoughts occurred during the hackathon. The study, designed by J. Benjamin Hutchinson, a former postdoctoral researcher in the Princeton Neuroscience Institute who is now an assistant professor at Northeastern University, aimed to explore activity in the brain when a person is paying attention to the environment, versus when his or her attention wanders to other thoughts or memories.

In the experiment, Hutchinson asked a research volunteer - a graduate student lying in the fMRI scanner - to look at a detail-filled picture of people in a crowded café. From his computer in the console room, Hutchinson could tell in real time whether the graduate student was paying attention to the picture or whether her mind was drifting to internal thoughts. Hutchinson could then give the graduate student feedback on how well she was paying attention by making the picture clearer and stronger in color when her mind was focused on the picture, and fading the picture when her attention drifted.
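A closed loop like this can be pictured as a simple program: classify each incoming brain volume, then adjust the picture accordingly. The sketch below is a hypothetical illustration, not the team's actual code; get_latest_volume, classify_attention and set_picture_opacity are placeholder callables standing in for the scanner stream, a pre-trained attention classifier and the stimulus display.

```python
import time

def neurofeedback_loop(get_latest_volume, classify_attention, set_picture_opacity,
                       tr_seconds=2.0, n_volumes=300):
    """Toy closed-loop feedback: fade the picture when the subject's mind wanders.

    The three callables are hypothetical stand-ins for the scanner interface,
    a pre-trained attention classifier and the stimulus-presentation code.
    """
    for _ in range(n_volumes):
        volume = get_latest_volume()               # one fMRI volume (3-D array of voxels)
        p_attending = classify_attention(volume)   # estimated probability the subject is on task
        # Clear, vivid picture when attention is on the scene; faded when it drifts inward.
        set_picture_opacity(p_attending)
        time.sleep(tr_seconds)                     # wait roughly one repetition time (TR)
```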

The ongoing collaboration has benefited neuroscientists who want to learn more about the brain and computer scientists who want to design more efficient computer algorithms and processing methods to rapidly sort through large data sets, according to Theodore Willke, a senior principal engineer at Intel Labs in Hillsboro, Oregon, and head of Intel's Mind's Eye Lab. Willke directs Intel's part of the collaborative team.

"Intel was interested in working on emerging applications for high-performance computing, and the collaboration with Princeton provided us with new challenges," Willke said. "We also hope to export what we learn from studies of human intelligence and cognition to machine learning and artificial intelligence, with the goal of advancing other important objectives, such as safer autonomous driving, quicker drug discovery and ealier detection of cancer."

Since the invention of fMRI two decades ago, researchers have been improving the ability to sift through the enormous amounts of data in each scan. An fMRI scanner captures signals from changes in blood flow that happen in the brain from moment to moment as we are thinking. But reading from these measurements the actual thoughts a person is having is a challenge, and doing it in real time is even more challenging.

A number of techniques for processing these data have been developed at Princeton and other institutions. For example, work by Peter Ramadge, the Gordon Y.S. Wu Professor of Engineering and professor of electrical engineering at Princeton, has enabled researchers to identify brain activity patterns that correlate to thoughts by combining data from brain scans from multiple people. Designing computerized instructions, or algorithms, to carry out these analyses continues to be a major area of research.
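One published technique from this line of work is the shared response model, which finds a common low-dimensional space in which different people's responses to the same stimulus line up. The following is only a minimal sketch using the open-source BrainIAK toolbox described later in this article, with random arrays standing in for real, preprocessed scans.

```python
import numpy as np
from brainiak.funcalign.srm import SRM  # assumes the BrainIAK toolbox is installed

# Synthetic stand-in data: 3 subjects, 1,000 voxels each, 200 timepoints.
# In a real study these would be preprocessed fMRI time series.
rng = np.random.default_rng(0)
subject_data = [rng.standard_normal((1000, 200)) for _ in range(3)]

# Fit a shared response model: learn a shared space in which the subjects'
# responses to the same stimulus are aligned with one another.
srm = SRM(n_iter=10, features=50)
srm.fit(subject_data)

# Project each subject's data into the shared space; patterns found there can
# support decoding across people rather than within a single brain.
shared = srm.transform(subject_data)
print(shared[0].shape)  # (50 features, 200 timepoints)
```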

Powerful high-performance computers help cut down the time that it takes to do these analyses by breaking the task up into chunks that can be processed in parallel. The combination of better algorithms and parallel computing is what enabled the collaboration to achieve real-time brain scan processing, according to Kai Li, Princeton's Paul M. Wythes '55 P86 and Marcia R. Wythes P86 Professor in Computer Science and one of the founders of the collaboration.
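The chunk-and-parallelize idea can be illustrated with a toy example. This sketch uses Python's standard process pool and a trivial per-chunk computation; it is not the collaboration's actual pipeline.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def analyze_chunk(chunk):
    """Stand-in per-chunk analysis: here, just each voxel's mean signal over time."""
    return chunk.mean(axis=1)

def parallel_analysis(data, n_chunks=8):
    """Split a voxels-by-timepoints array into chunks and analyze them in parallel."""
    chunks = np.array_split(data, n_chunks, axis=0)
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyze_chunk, chunks))
    return np.concatenate(results)

if __name__ == "__main__":
    # 20,000 voxels here to keep the toy example light; a real scan has
    # on the order of 100,000 voxels.
    data = np.random.rand(20_000, 200)
    print(parallel_analysis(data).shape)  # (20000,)
```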

Since the beginning of the collaboration in 2015, Intel has contributed to Princeton more than $1.5 million in computer hardware and support for Princeton graduate students and postdoctoral researchers. Intel also employs 10 computer scientists who work on this project with Princeton, and these experts work closely with Princeton faculty, students and postdocs to improve the software.

These algorithms locate thoughts within the data by using machine learning, the same technique that facial recognition software uses to identify friends in photos on social media platforms such as Facebook. Machine learning involves exposing computers to enough labeled examples that they can classify new objects they have never seen before.
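In miniature, the same classification idea looks like the sketch below, written with scikit-learn rather than the collaboration's own software; the "scans" here are random placeholder arrays, so the cross-validated accuracy will hover around chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy stand-in for a decoding study: 200 labeled scans, each a flattened
# pattern of 5,000 voxel activations, labeled 0/1 for two mental states
# (say, attending to the picture vs. mind-wandering).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5000))
y = rng.integers(0, 2, size=200)

# Train on example patterns so the classifier can label new, unseen scans.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated decoding accuracy:", scores.mean())
```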

One of the results of the collaboration has been the creation of a software toolbox, called the Brain Imaging Analysis Kit (BrainIAK), that is openly available via the Internet to any researchers looking to process fMRI data. The team is now working on building a real-time analysis service. "The idea is that even researchers who don't have access to high-performance computers, or who don't know how to write software to run their analyses on these computers, would be able to use these tools to decode brain scans in real time," said Li.

What these scientists learn about the brain may eventually help individuals combat difficulties with paying attention, or other conditions that benefit from immediate feedback.

For example, real-time feedback may help patients train their brains to weaken intrusive memories. While such "brain-training" approaches need additional validation to make sure that the brain is learning new patterns and not just becoming good at doing the training exercise, these feedback approaches offer the potential for new therapies, Cohen said. Real-time analysis of the brain could also help clinicians make diagnoses, he said.

The ability to decode the brain in real time also has applications in basic brain research, said Kenneth Norman, professor of psychology and the Princeton Neuroscience Institute. "As cognitive neuroscientists, we're interested in learning how the brain gives rise to thinking," said Norman. "Being able to do this in real time vastly increases the range of science that we can do," he said.

Another way the technology can be used is in studies of how we learn. For example, when a person listens to a math lecture, certain neural patterns are activated. Researchers could look at the neural patterns of people who understand the math lecture and see how they differ from neural patterns of someone who isn't following along as well, according to Norman.

The ongoing collaboration is now focused on improving the technology to obtain a clearer window into what people are thinking about, for example, decoding in real time the specific identity of a face that a person is mentally visualizing.

One of the challenges the computer scientists had to overcome was how to apply machine learning to the type of data generated by brain scans. A face-recognition algorithm can scan hundreds of thousands of photographs to learn how to classify new faces, but the logistics of scanning people's brains are such that researchers usually have access to only a few hundred scans per person.

Although the number of scans is small, each scan contains a rich trove of data. The software divides the brain images into little cubes, each about one millimeter wide. These cubes, called voxels, are analogous to the pixels in a two-dimensional picture. The brain activity in each cube is constantly changing.
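In code, a scan of this kind is simply a four-dimensional array: three spatial dimensions of voxels plus time. A small illustration with synthetic data follows; real scans would be loaded from scanner files, for example with the nibabel library.

```python
import numpy as np

# A toy 4-D fMRI dataset: a 64 x 64 x 40 grid of voxels sampled at 200 timepoints.
scan = np.random.rand(64, 64, 40, 200)

# Flatten the spatial grid so each row is one voxel's activity over time;
# most decoding analyses operate on this voxels-by-timepoints matrix.
n_voxels = 64 * 64 * 40
voxel_by_time = scan.reshape(n_voxels, 200)
print(voxel_by_time.shape)  # (163840, 200)
```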

To make matters more complex, it is the connections between brain regions that give rise to our thoughts. A typical scan can contain 100,000 voxels, and if each voxel can talk to all the other voxels, the number of possible conversations is immense. And these conversations are changing second by second. The collaboration of Intel and Princeton computer scientists overcame this computational challenge. The effort included Li as well as Barbara Engelhardt, assistant professor of computer science, and Yida Wang, who earned his doctorate in computer science from Princeton in 2016 and now works at Intel Labs.
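A back-of-the-envelope calculation shows why these voxel-to-voxel "conversations" are so demanding to compute. The sketch below correlates every pair of voxels in a small synthetic dataset, then counts how many pairs a full-size scan would involve.

```python
import numpy as np

# Activity for a small set of voxels over time (toy data).
n_voxels, n_timepoints = 1000, 200
data = np.random.rand(n_voxels, n_timepoints)

# Full correlation matrix: how strongly each voxel's time course tracks every other's.
corr = np.corrcoef(data)               # shape (1000, 1000)
n_pairs = n_voxels * (n_voxels - 1) // 2
print(corr.shape, n_pairs)             # 499,500 pairs for just 1,000 voxels

# At the scale of a real scan, the count explodes:
full = 100_000
print(full * (full - 1) // 2)          # roughly 5 billion voxel pairs, changing second by second
```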

Prior to the recent progress, it would take researchers months to analyze a data set, said Nicholas Turk-Browne, professor of psychology at Princeton. With the availability of real-time fMRI, a researcher can change the experiment while it is ongoing. "If my hypothesis concerns a certain region of the brain and I detect in real time that my experiment is not engaging that brain region, then we can change what we ask the research volunteer to do to better engage that region, potentially saving precious time and accelerating scientific discovery," Turk-Browne said.

One eventual goal is to be able to create pictures from people's thoughts, said Turk-Browne. "If you are in the scanner and you are retrieving a special memory, such as from childhood, we would hope to generate a photograph of that experience on the screen. That is still far off, but we are making good progress."

Source: Eurekalert

