Tag Archive: Brain Decoding

Simultaneously Uncovering the Patterns of Brain Regions Involved in Different Story Reading Subprocesses


Story understanding involves many perceptual and cognitive subprocesses, from perceiving individual words, to parsing sentences, to understanding the relationships among the story characters. We present an integrated computational model of reading that incorporates these and additional subprocesses, simultaneously discovering their fMRI signatures. Our model predicts the fMRI activity associated with reading arbitrary text passages, well enough to distinguish which of two story segments is being read with 74% accuracy. This approach is the first to simultaneously track diverse reading subprocesses during complex story processing and predict the detailed neural representation of diverse story features, ranging from visual word properties to the mention of different story characters and different actions they perform. We construct brain representation maps that replicate many results from a wide range of classical studies that each focus on one aspect of language processing, and offer new insights into which types of information are processed by the different areas involved in language processing. Additionally, this approach is promising for studying individual differences: it can be used to create single-subject maps that may potentially be used to measure reading comprehension and diagnose reading disorders.

Understanding a story involves many perceptual and cognitive subprocesses, such as perceiving individual words, composing them into sentences, and understanding the relationships among the story characters. We propose a computational model that integrates these subprocesses and simultaneously discovers their fMRI signatures. Our model predicts the fMRI activity recorded while subjects read arbitrary text passages, well enough to distinguish which of two story segments is being read with 74% accuracy. This is the first approach that can simultaneously track the subprocesses of complex story reading; it predicts detailed neural representations of diverse story features, from the visual properties of words to mentions of different story characters and the actions they perform. We construct brain representation maps that replicate results from earlier studies, each of which focused on a single aspect of language processing, and that offer new insights into which kinds of information the different language areas process. In addition, the approach is promising for studying individual differences: it can be used to build single-subject maps that may eventually help measure reading comprehension and diagnose reading disorders.
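To make the prediction-then-classification scheme concrete, here is a minimal sketch of the general approach: learn a regularized linear map from story features to voxel activity, then decide which of two candidate segments was being read by checking whose predicted activity correlates better with the observed activity. All data, dimensions, and the choice of ridge regression below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical dimensions: one story-feature vector (word properties,
# character mentions, ...) per fMRI time point, and voxel activity to predict.
n_train, n_feat, n_vox = 1000, 50, 200
X_train = rng.standard_normal((n_train, n_feat))   # story features
W_true = rng.standard_normal((n_feat, n_vox))      # unknown "true" mapping
Y_train = X_train @ W_true + rng.standard_normal((n_train, n_vox))

# Learn a regularized linear map from story features to voxel activity.
model = Ridge(alpha=1.0).fit(X_train, Y_train)

# Two held-out story segments: predict fMRI for both feature sequences and
# ask which prediction better matches the observed activity.
X_a = rng.standard_normal((20, n_feat))
X_b = rng.standard_normal((20, n_feat))
Y_obs = X_a @ W_true + rng.standard_normal((20, n_vox))  # truly segment A

def corr(u, v):
    """Pearson correlation between two flattened arrays."""
    return np.corrcoef(u.ravel(), v.ravel())[0, 1]

score_a = corr(model.predict(X_a), Y_obs)
score_b = corr(model.predict(X_b), Y_obs)
print("decoded segment:", "A" if score_a > score_b else "B")
```

Repeating this two-alternative choice over many held-out segment pairs is what yields an accuracy figure like the paper's 74%.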

Generic decoding of seen and imagined objects using hierarchical visual features


Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.

https://www.nature.com/articles/ncomms15037

Object recognition is a core function of both human and machine vision. Although brain decoding of seen and imagined objects has been achieved, prediction has been limited to the categories used for training. We propose a decoding method that works for arbitrary objects, exploiting the machine vision principle that an object category is represented by a set of features made invariant through hierarchical processing. We find that visual features, including those obtained from a deep convolutional neural network, can be predicted from fMRI patterns, and that low-level features are predicted more accurately from lower visual areas and high-level features from higher visual areas. The predicted features can be used to identify seen or imagined object categories, including categories never used during decoder training. Decoding of imagined objects further reveals a progressive recruitment of representations from higher to lower visual areas. The experiments also show that human and machine vision share a common representational mechanism.
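The identification step can be sketched in a few lines: train a regression from voxel patterns to CNN feature vectors, then label a new pattern with the candidate category whose precomputed, image-averaged feature vector correlates best with the decoded features. Because candidates only need feature vectors computed from images, the category set can extend beyond decoder training. The synthetic data, ridge regression, and dimensions below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical setup: predict a CNN feature vector from voxel patterns.
n_trials, n_vox, n_feat = 300, 500, 100
true_map = rng.standard_normal((n_vox, n_feat))
fmri = rng.standard_normal((n_trials, n_vox))
cnn_feat = fmri @ true_map + 0.5 * rng.standard_normal((n_trials, n_feat))

decoder = Ridge(alpha=10.0).fit(fmri[:250], cnn_feat[:250])
pred = decoder.predict(fmri[250:])

# Candidate categories are represented by feature vectors averaged over many
# images of each category; they need not appear in decoder training.
n_categories = 50
category_feats = rng.standard_normal((n_categories, n_feat))

def identify(pred_vec, candidates):
    # Pick the candidate whose feature vector correlates best with the
    # decoded features.
    r = [np.corrcoef(pred_vec, c)[0, 1] for c in candidates]
    return int(np.argmax(r))

labels = [identify(p, category_feats) for p in pred]
print(labels[:10])
```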

BOLD5000, a public fMRI dataset while viewing 5000 visual images


Vision science, particularly machine vision, has been revolutionized by introducing large-scale image datasets and statistical learning approaches. Yet, human neuroimaging studies of visual perception still rely on small numbers of images (around 100) due to time-constrained experimental procedures. To apply statistical learning approaches within neuroscience, the number of images used in neuroimaging studies must be significantly increased. We present BOLD5000, a human functional MRI (fMRI) study that includes almost 5,000 distinct images depicting real-world scenes. Beyond dramatically increasing image dataset size relative to prior fMRI studies, BOLD5000 also accounts for image diversity, overlapping with standard computer vision datasets by incorporating images from the Scene UNderstanding (SUN), Common Objects in Context (COCO), and ImageNet datasets. The scale and diversity of these image datasets, combined with a slow event-related fMRI design, enable fine-grained exploration into the neural representation of a wide range of visual features, categories, and semantics. Concurrently, BOLD5000 brings us closer to realizing Marr's dream of a singular vision science: the intertwined study of biological and computer vision.

https://bold5000.github.io

Vision science, and machine vision in particular, has been revolutionized by the introduction of large-scale image datasets and statistical learning methods. To date, however, neuroimaging studies of human visual perception still rely on small numbers of images (typically around 100), because experimental procedures are constrained by time. To bring statistical learning methods into neuroscience, the amount of data must grow substantially. We present BOLD5000, a human fMRI study comprising nearly 5,000 distinct images depicting real-world scenes. Beyond its scale relative to earlier fMRI datasets, BOLD5000 also emphasizes image diversity: many of its images overlap with standard computer vision datasets such as SUN, COCO, and ImageNet. Together with a slow event-related fMRI design, this scale and diversity make it possible to study neural representations of a rich range of visual features, categories, and semantics. BOLD5000 brings us closer to Marr's dream: studying biological and computer vision together.
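As a rough illustration of why a slow event-related design matters here, the sketch below builds a single design-matrix column by convolving widely spaced stimulus onsets with a canonical hemodynamic response: with long inter-trial gaps, each image's response is nearly isolated, so per-image activity estimates overlap little. The timing values and the difference-of-gammas HRF are generic assumptions, not the actual BOLD5000 protocol.

```python
import numpy as np
from scipy.stats import gamma

tr = 2.0                              # seconds per fMRI volume (assumed)
n_vols = 120
onsets = np.arange(0.0, 200.0, 10.0)  # one image every 10 s (assumed)

def canonical_hrf(t):
    # Difference-of-gammas approximation to the hemodynamic response.
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

stim = np.zeros(n_vols)
stim[(onsets / tr).astype(int)] = 1.0        # stimulus impulses on the TR grid
hrf = canonical_hrf(np.arange(0.0, 32.0, tr))
regressor = np.convolve(stim, hrf)[:n_vols]  # one design-matrix column

# With 10 s gaps, the convolved responses barely overlap, so per-image
# response amplitudes can be estimated cleanly by least squares.
print(regressor.round(2))
```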

Brain2Char: A Deep Architecture for Decoding Text from Brain Recordings

Decoding language representations directly from the brain can enable new Brain-Computer Interfaces (BCI) for high bandwidth human-human and human-machine communication. Clinically, such technologies can restore communication in people with neurological conditions affecting their ability to speak. In this study, we propose a novel deep network architecture Brain2Char, for directly decoding text (specifically character sequences) from direct brain recordings (called Electrocorticography, ECoG). Brain2Char framework combines state-of-the-art deep learning modules — 3D Inception layers for multiband spatiotemporal feature extraction from neural data and bidirectional recurrent layers, dilated convolution layers followed by language model weighted beam search to decode character sequences, optimizing a connectionist temporal classification (CTC) loss. Additionally, given the highly non-linear transformations that underlie the conversion of cortical function to character sequences, we perform regularizations on the network’s latent representations motivated by insights into cortical encoding of speech production and artifactual aspects specific to ECoG data acquisition. To do this, we impose auxiliary losses on latent representations for articulatory movements, speech acoustics and session specific non-linearities. In 3 participants tested here, Brain2Char achieves 10.6%, 8.5% and 7.0% Word Error Rates (WER) respectively on vocabulary sizes ranging from 1200 to 1900 words. Brain2Char also performs well when 2 participants silently mimed sentences. These results set a new state-of-the-art on decoding text from brain and demonstrate the potential of Brain2Char as a high-performance communication BCI.

https://arxiv.org/pdf/1909.01401.pdf

Decoding language representations directly from the brain could provide high-bandwidth brain-computer interfaces for human-human and human-machine communication. Clinically, such technology could restore communication for people with neurological damage. In this paper, we propose a deep network, Brain2Char, that decodes text (character sequences) directly from brain recordings (electrocorticography, ECoG). The Brain2Char framework combines 3D Inception layers (for extracting multiband spatiotemporal features), bidirectional recurrent and dilated convolution layers (for decoding character sequences), and a connectionist temporal classification (CTC) loss.
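A minimal PyTorch sketch of this kind of pipeline follows: a 3D convolution stands in for the Inception front end, followed by a bidirectional GRU, a dilated temporal convolution, a character-logit head, and a CTC loss. The layer sizes, electrode-grid shape, and single conv layer are illustrative simplifications, not the authors' exact architecture; the language-model-weighted beam search and auxiliary regularization losses are omitted.

```python
import torch
import torch.nn as nn

class Brain2CharSketch(nn.Module):
    def __init__(self, n_chars=30):
        super().__init__()
        # Input: (batch, 1, time, electrode_rows, electrode_cols)
        self.spatial = nn.Conv3d(1, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1))
        self.rnn = nn.GRU(16 * 8 * 8, 128, bidirectional=True, batch_first=True)
        self.dilated = nn.Conv1d(256, 256, kernel_size=3, dilation=2, padding=2)
        self.head = nn.Linear(256, n_chars + 1)  # +1 for the CTC blank symbol

    def forward(self, x):
        b, _, t, h, w = x.shape
        feat = torch.relu(self.spatial(x))              # (b, 16, t, h, w)
        feat = feat.permute(0, 2, 1, 3, 4).reshape(b, t, -1)
        feat, _ = self.rnn(feat)                        # (b, t, 256)
        feat = self.dilated(feat.transpose(1, 2)).transpose(1, 2)
        return self.head(feat).log_softmax(-1)          # (b, t, n_chars + 1)

model = Brain2CharSketch()
ecog = torch.randn(2, 1, 100, 8, 8)          # fake ECoG data on an 8x8 grid
log_probs = model(ecog).transpose(0, 1)      # CTCLoss expects (time, batch, chars)
targets = torch.randint(1, 30, (2, 20))      # fake character indices (0 = blank)
loss = nn.CTCLoss()(log_probs, targets,
                    input_lengths=torch.full((2,), 100, dtype=torch.long),
                    target_lengths=torch.full((2,), 20, dtype=torch.long))
```

The CTC loss is what lets the network align a long neural time series with a much shorter character sequence without frame-level labels.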

CogniVal: A framework for cognitive word embedding evaluation


An interesting method of evaluating word representations is by how much they reflect the semantic representations in the human brain. However, most, if not all, previous works focus only on small datasets and a single modality. In this paper, we present the first multimodal framework for evaluating English word representations based on cognitive lexical semantics. Six types of word embeddings are evaluated by fitting them to 15 datasets of eye-tracking, EEG and fMRI signals recorded during language processing. To achieve a global score over all evaluation hypotheses, we apply statistical significance testing accounting for the multiple comparisons problem. This framework is easily extensible and can incorporate other intrinsic and extrinsic evaluation methods. We find strong correlations between the results on cognitive datasets, across recording modalities, and with performance on extrinsic NLP tasks.

https://arxiv.org/pdf/1909.09001.pdf

An interesting way to evaluate a word representation is by how well it reflects the semantic representations in the human brain. Previous work, however, has focused only on small datasets and a single modality. In this paper, we present the first multimodal framework for evaluating English word representations based on cognitive lexical semantics. We evaluate six types of word embeddings by fitting them to 15 datasets of eye-tracking, EEG, and fMRI signals recorded during language processing. To obtain a global evaluation score, we apply statistical significance testing that accounts for the multiple comparisons problem. The framework is easy to extend with other intrinsic and extrinsic evaluation methods. We find strong correlations between cognitive datasets, across recording modalities, and with performance on external NLP tasks.
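The core evaluation loop can be sketched as: regress each cognitive feature on the embeddings with cross-validation, compare against a baseline in which the word-to-signal alignment is destroyed, and correct the resulting statistics for multiple comparisons. The synthetic data, ridge regression, and Bonferroni correction below are illustrative assumptions; CogniVal's actual models and correction procedure may differ.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical data: one embedding per word, one cognitive feature vector per
# word (e.g., fixation durations, or an EEG/fMRI pattern).
n_words, emb_dim, cog_dim = 500, 300, 5
emb = rng.standard_normal((n_words, emb_dim))
cog = emb[:, :cog_dim] + rng.standard_normal((n_words, cog_dim))

def fit_score(X, y):
    # Mean cross-validated R^2 of predicting a cognitive feature from embeddings.
    return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

real = [fit_score(emb, cog[:, j]) for j in range(cog_dim)]
# Baseline: shuffling embeddings breaks the word-to-signal alignment.
perm = [fit_score(emb[rng.permutation(n_words)], cog[:, j]) for j in range(cog_dim)]

# One hypothesis per (embedding, dataset) pair; correct for multiple
# comparisons, here with a simple Bonferroni adjustment over, e.g., 15 datasets.
t, p = stats.ttest_rel(real, perm)
print("significant after Bonferroni:", p * 15 < 0.05)
```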

Inducing brain-relevant bias in natural language processing models

Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain. However, these models have not been specifically designed to capture the way the brain represents language meaning. We hypothesize that fine-tuning these models to predict recordings of brain activity of people reading text will lead to representations that encode more brain-activity-relevant language information. We demonstrate that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning. We show that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants. We also show that, for some participants, the fine-tuned representations learned from both magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are better for predicting fMRI than the representations learned from fMRI alone, indicating that the learned representations capture brain-activity-relevant information that is not simply an artifact of the modality. While changes to language representations help the model predict brain activity, they also do not harm the model’s ability to perform downstream NLP tasks. Our findings are notable for research on language understanding in the brain.

http://papers.nips.cc/paper/9559-inducing-brain-relevant-bias-in-natural-language-processing-models

The way natural language processing models estimate representations of word sequences has proven helpful for understanding how the brain processes language. These models, however, were not designed specifically to capture how the brain represents linguistic meaning. We hypothesize that fine-tuning an NLP model to predict recordings of brain activity while participants read text will lead the model to learn more brain-activity-relevant language information. We show that a fine-tuned variant of BERT, a recently introduced and powerful language model, predicts brain activity better after fine-tuning, and that the relationship between language and brain activity it learns transfers across participants. Experiments with both MEG and fMRI data indicate that the learned representations capture brain-activity-relevant information rather than modality-specific artifacts. Even with these changes to its language representations, the model still performs well on downstream NLP tasks.
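Here is a minimal sketch of the fine-tuning idea, under assumed details: attach a linear head that maps BERT's representation of a text window to the brain activity recorded while a participant read it, and backpropagate the prediction error into BERT itself. The mean pooling, learning rate, example sentence, and random target below are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

n_voxels = 1000  # illustrative voxel count
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
head = nn.Linear(bert.config.hidden_size, n_voxels)
opt = torch.optim.Adam(list(bert.parameters()) + list(head.parameters()), lr=1e-5)

text = "Harry had never believed he would meet a boy who hated magic."
target = torch.randn(1, n_voxels)  # stand-in for one recorded fMRI volume

# Pool BERT's token representations for the window, predict voxel activity,
# and update both the head and BERT itself on the prediction error.
inputs = tokenizer(text, return_tensors="pt")
pooled = bert(**inputs).last_hidden_state.mean(dim=1)  # (1, hidden_size)
loss = nn.functional.mse_loss(head(pooled), target)
loss.backward()
opt.step()
```

Because the gradient flows past the head into BERT's own weights, the language representations themselves shift toward encoding brain-activity-relevant information.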

Blackbox Meets Blackbox: Representational Similarity & Stability Analysis of Neural Language Models and Brains

In this paper, we define and apply representational stability analysis (ReStA), an intuitive way of analyzing neural language models. ReStA is a variant of the popular representational similarity analysis (RSA) in cognitive neuroscience. While RSA can be used to compare representations in models, model components, and human brains, ReStA compares instances of the same model while systematically varying a single model parameter. Using ReStA, we study four recent and successful neural language models, and evaluate how sensitive their internal representations are to the amount of prior context. Using RSA, we perform a systematic study of how similar the representational spaces in the first and second (or higher) layers of these models are to each other and to patterns of activation in the human brain. Our results reveal surprisingly strong differences between language models, and give insights into where the deep linguistic processing that integrates information over multiple sentences is happening in these models. The combination of ReStA and RSA on models and brains allows us to start addressing the important question of what kind of linguistic processes we can hope to observe in fMRI brain imaging data. In particular, our results suggest that the data on story reading from Wehbe et al. (2014) contains a signal of shallow linguistic processing, but shows no evidence of the more interesting deep linguistic processing.

https://arxiv.org/abs/1906.01539

In this paper, we propose an intuitive way to analyze neural language models: representational stability analysis (ReStA). ReStA is a variant of representational similarity analysis (RSA), a popular method in cognitive neuroscience. Whereas RSA compares representations across models, model components, and human brains, ReStA compares instances of the same model while systematically varying a single model parameter, measuring how the representations change. Using ReStA, we analyze four recent and successful neural language models and evaluate how sensitive their internal representations are to the amount of prior context. Using RSA, we also systematically study how similar the representational spaces in the first and second (or higher) layers of these models are to each other and to activation patterns in the human brain. The results reveal surprisingly large differences between the language models and give insight into where deep linguistic processing, which integrates information across multiple sentences, happens in these models. Combining ReStA and RSA on models and brains lets us begin to answer which kinds of linguistic processes we can hope to observe in fMRI brain imaging data. In particular, the results suggest that the story-reading data of Wehbe et al. (2014) contain a signal of shallow linguistic processing, but show no evidence of the more interesting deep linguistic processing.
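At its core, RSA is a second-order comparison: compute a representational dissimilarity matrix (RDM) of pairwise distances between a system's responses to the same stimuli, then rank-correlate the RDMs of two systems. ReStA applies the same machinery to two instances of one model, e.g., run with different amounts of prior context. The sketch below uses random data and common default choices (correlation distance, Spearman's rho) as assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# The same stimuli, represented by two systems: model activations and
# brain activity patterns (both random here, for illustration only).
n_stimuli = 40
model_repr = rng.standard_normal((n_stimuli, 768))   # e.g., layer activations
brain_repr = rng.standard_normal((n_stimuli, 5000))  # e.g., voxel patterns

# RDM: pairwise correlation distances between responses to the stimuli.
rdm_model = pdist(model_repr, metric="correlation")  # condensed upper triangle
rdm_brain = pdist(brain_repr, metric="correlation")

# Second-order similarity: rank correlation between the two RDMs.
rho, p = spearmanr(rdm_model, rdm_brain)
print(f"RSA score: rho={rho:.3f} (p={p:.3g})")
```

For ReStA, `brain_repr` would simply be replaced by the same model's representations computed under a different setting of the varied parameter.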