This volume provides a comprehensive overview of the attentional and visual processes involved in language comprehension. Key concerns include how linguistic and non-linguistic processes jointly determine language comprehension and production, and how the linguistic system interfaces with perceptual systems and attention. Language scientists have traditionally considered language in isolation from other cognitive and perceptual systems such as attention, vision, and memory. In recent years, however, it has become increasingly clear that language comprehension must be studied within interaction contexts. The study of multimodal interactions and attentional processes during language processing has thus become an important theoretical focus guiding many research programs in psycholinguistics and related fields.
This original volume examines the interface between attentional and linguistic processes in humans from the perspectives of psycholinguistics and cognitive science. It systematically explores how autonomy and automaticity are reflected in language processing across a variety of situations. A truly mechanistic explanation of how humans process language would require a complete understanding of the interfaces language has with other cognitive systems such as attention, memory, and vision. Interdisciplinary work in this area has so far not generated a substantial theoretical position on this issue. This volume therefore examines different language processing domains, such as speaking, listening, and reading, as well as discourse and text processing, to evaluate the role attention plays in such performances, and also how linguistic input affects attentional processing. In this sense, it proposes that the attention-language interface is bidirectional. It also considers applied issues such as language disorders, bilingualism, and illiteracy, where the attention-language interface seems especially relevant as a theoretical apparatus for research investigations. The volume thus brings theoretical explanations from the language sciences and the cognitive sciences closer together. It argues that language processing is multimodal in its very essence and that many conceptual structures in language evolve out of a complex interplay among participating cognitive systems such as attention and memory, supported by vision and audition.
This book brings together chapters from investigators at the leading edge of this new research area to explore common theoretical issues, empirical findings, technical problems, and outstanding questions. It will serve as a blueprint for work on the interface of vision, language, and action over the next five to ten years.
This thesis investigates the mechanisms underlying the formation, maintenance, and sharing of reference in tasks in which language and vision interact. Previous research in psycholinguistics and visual cognition has provided insights into the formation of reference in cross-modal tasks, but the conclusions reached are largely independent, with the focus on mechanisms pertaining to either linguistic or visual processing. In this thesis, we present a series of eye-tracking experiments that aim to unify these distinct strands of research by identifying and quantifying factors that underlie the cross-modal interaction between scene understanding and sentence processing. Our results show that both low-level (image-based) and high-level (object-based) visual information interacts actively with linguistic information during situated language processing tasks. In particular, during language understanding (Chapter 3), image-based information, i.e., saliency, is used to predict the upcoming arguments of the sentence when the linguistic material alone is not sufficient to make such predictions. During language production (Chapter 4), visual attention has the active role of sourcing referential information for sentence encoding. We show that two important factors influencing this process are the visual density of the scene, i.e., clutter, and the animacy of the objects described. Both factors influence the type of linguistic encoding observed and the associated visual responses. We uncover a close relationship between linguistic descriptions and visual responses, triggered by the cross-modal interaction of scene and object properties, which implies a general mechanism of cross-modal referential coordination. Further investigation (Chapter 5) shows that visual attention and sentence processing are closely coordinated during sentence production: similar sentences are associated with similar scan patterns. This finding holds across different scenes, which suggests that coordination goes beyond the well-known scene-based effects guiding visual attention, again supporting the existence of a general mechanism for the cross-modal coordination of referential information. The extent to which cross-modal mechanisms are activated depends on the nature of the task performed. We compare the three tasks of visual search, object naming, and scene description (Chapter 6) and explore how the modulation of cross-modal reference is reflected in the visual responses of participants. Our results show that the cross-modal coordination required in naming and description triggers longer visual processing and higher scan pattern similarity than in search. This difference is due to the coordination required to integrate and organize visual and linguistic referential processing. Overall, this thesis unifies explanations of distinct cognitive processes (visual and linguistic) based on the principle of cross-modal referentiality, and provides a new framework for unraveling the mechanisms that allow scene understanding and sentence processing to share and integrate information during cross-modal processing.
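The scan pattern similarity referred to in the preceding abstract is typically computed by comparing sequences of fixated scene regions across trials. The sketch below illustrates one common approach, normalized string edit distance; the region labels and the specific metric are assumptions for illustration, not taken from the thesis itself.

# Minimal sketch of scan-pattern similarity, assuming fixations are
# encoded as sequences of labels for the scene regions they land on.
# The metric (normalized Levenshtein distance) is a common choice in
# eye-tracking research, not necessarily the thesis's exact method.

def levenshtein(a, b):
    """Edit distance between two label sequences."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def scanpattern_similarity(seq1, seq2):
    """1.0 = identical scan patterns, 0.0 = maximally different."""
    longest = max(len(seq1), len(seq2))
    if longest == 0:
        return 1.0
    return 1.0 - levenshtein(seq1, seq2) / longest

# Fixation sequences over hypothetical labeled regions of a scene.
trial_a = ["man", "dog", "dog", "leash", "man"]
trial_b = ["man", "dog", "leash", "man"]
print(scanpattern_similarity(trial_a, trial_b))  # -> 0.8

Higher values for description than for search trials would then correspond to the "higher scan pattern similarity" reported in Chapter 6.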
"Bilinguals are constantly juggling competing information from two languages as they interact with their environment (i.e., non-selective activation). As a result, both first (L1) and second language (L2) communication may be obstructed when words share orthographic form but not meaning (i.e., interlingual homographs). For example, in French CRANE refers to a skull, whereas in English it refers to a machine. Similarly, divided L1/L2 exposure weakens the integrity of lexical representations through reduced baseline activation levels of words (i.e., weaker links hypothesis; Gollan et al., 2008, 2011), making bilinguals more vulnerable to frequency effects (i.e., low frequency words being more difficult to process). While the ways in which the bilingual language system manages these challenges has been studied extensively, less is known about how they extend to and interact with other cognitive processes, such as vision. In fact, possible interactions between language and visual processing remain understudied. According to prominent models of bilingual language processing (e.g., BIA+, Dijkstra & van Heuven, 2002a; Van Heuven et al., 1998a), the language system is architecturally separate from other cognitive systems. While this is consistent with some existing models of the language-vision link (e.g., parallel-contingent independence hypothesis; Allopenna et al., 1998; Dahan et al., 2001), other models (e.g., cascaded activation model of visual-linguistic interactions; De Groot et al., 2016; Huettig, Olivers, et al., 2011) propose bidirectional links between the language and visual processing systems. Here, we capitalized on characteristics of the bilingual lexicon to investigate the language-vision link. Chapter 2 investigated the extent to which effects of non-selective activation interact with complexity of a visual referent to modulate performance on a multimodal word-image matching task. We found that cross-language referential conflict (i.e., homograph interference) was lessened when the visual referent was clearer (i.e., lower visual complexity), leading to faster responses. Thus, contrary to what is proposed by the BIA+ model, it appears that feedback from the visual processing system modulates semantic processing. Chapter 3 furthers this work by investigating the extent to which feedback from the visual system interacts with lexical processing. Using the same multimodal word-image matching task, we manipulated both lexical frequency and image visual complexity to burden both systems simultaneously. We found that the lexical and image factors individually modulated task performance, but did not statistically interact. This suggests that output from the language system can inform other processes but feedback from these processes does not modulate lexical processing. In Chapter 4, we extend these findings using eye movements measures to investigate the effects of non-selective activation and lexical frequency in the context of a visual search task, requiring the integration of both visual and linguistic information. We found that both cross-language ambiguity and low lexical frequency impeded search performance. Furthermore, we found evidence that participants were able to integrate visual information to more efficiently resolve cross-language ambiguity, but not frequency-based ambiguity. Taken together, the findings presented in this thesis establish that well-studied semantic and lexical effects extend to attentional interactions beyond bilingual language processing. 
More broadly, our findings suggest an interactive link between vision and language, although the extent to which these two processes interact may depend on the type of linguistic information involved (i.e., lexical or semantic)."
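The Chapter 3 result described above, two factors that each modulate performance without statistically interacting, is the kind of pattern typically assessed with a factorial model containing main effects and an interaction term. A minimal sketch follows; the variable names, response times, and data are illustrative assumptions, not taken from the thesis.

# Hypothetical sketch of analyzing a two-factor design
# (lexical frequency x image visual complexity) with a linear model.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "rt":         [612, 655, 700, 748, 630, 661, 705, 752],  # response times (ms), made up
    "frequency":  ["high", "high", "low", "low"] * 2,
    "complexity": ["low", "high"] * 4,
})

# rt ~ main effects + interaction; a non-significant interaction term
# is consistent with "factors individually modulate performance but
# do not statistically interact".
model = smf.ols("rt ~ C(frequency) * C(complexity)", data=data).fit()
print(model.summary())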
The brain ... There is no other part of the human anatomy that is so intriguing. How does it develop and function, and why does it sometimes, tragically, degenerate? The answers are complex. In Discovering the Brain, science writer Sandra Ackerman cuts through the complexity to bring this vital topic to the public. The 1990s were declared the "Decade of the Brain" by former President Bush, and the neuroscience community responded with a host of new investigations and conferences. Discovering the Brain is based on the Institute of Medicine conference, Decade of the Brain: Frontiers in Neuroscience and Brain Research. Discovering the Brain is a "field guide" to the brain: an easy-to-read discussion of the brain's physical structure and where functions such as language and music appreciation lie. Ackerman examines how electrical and chemical signals are conveyed in the brain; the mechanisms by which we see, hear, think, and pay attention, and how a "gut feeling" actually originates in the brain; learning and memory retention, including parallels to computer memory and what they might tell us about our own mental capacity; and the development of the brain throughout the life span, with a look at the aging brain. Ackerman provides an enlightening chapter on the connection between the brain's physical condition and various mental disorders, and notes what progress can realistically be made toward the prevention and treatment of stroke and other ailments. Finally, she explores the potential for major advances during the "Decade of the Brain," with a look at medical imaging techniques (what various technologies can and cannot tell us) and how the public and private sectors can contribute to continued advances in neuroscience. This highly readable volume will provide the public and policymakers, and many scientists as well, with a helpful guide to understanding the many discoveries that are sure to be announced throughout the "Decade of the Brain."
Paying attention is something we are all familiar with and often take for granted, yet the nature of the operations involved in paying attention is one of the most profound mysteries of the brain. This book contains a rich, interdisciplinary collection of articles by some of the pioneers of contemporary research on attention. Central themes include how attention is moved within the visual field; attention's role during visual search and the inhibition of these search processes; how attentional processing changes as continued practice leads to automatic performance; how visual and auditory attentional processing may be linked; and recent advances in functional neuroimaging and how they have been used to study the brain's attentional network.
The neural theory of visual attention (NTVA) of Bundesen, Habekost, and Kyllingsbæk (2005) was proposed as a neural interpretation of Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual attention operates via two mechanisms: dynamic remapping of the receptive fields of cortical cells, such that more cells are devoted to behaviorally important objects than to less important ones (filtering), and multiplicative scaling of the level of activation in cells coding for particular features (pigeonholing). NTVA accounts for a wide range of known attentional effects in human performance and a wide range of effects observed in the firing rates of single cells in the primate visual system, and thus provides a mathematical framework that unifies the two fields of research. In this Research Topic of Frontiers in Psychology, some of the leading theories of visual attention at the cognitive, neuropsychological, and neurophysiological levels are presented and evaluated. In addition, the Research Topic encompasses applications of the NTVA framework to various patient populations and to neuroimaging as well as genetic and psychopharmacological studies.
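For concreteness, the quantitative core that NTVA inherits from TVA is the rate equation. The following is a standard rendering of Bundesen's (1990) formulation, reproduced here as a sketch rather than quoted from this Research Topic:

\[
v(x, i) = \eta(x, i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
\qquad
w_x = \sum_{j \in R} \eta(x, j)\,\pi_j
\]

Here v(x, i) is the rate at which the categorization "object x belongs to category i" is processed; \eta(x, i) is the strength of the sensory evidence for that categorization; \beta_i is the decision bias for category i (the parameter scaled by pigeonholing); w_x is the attentional weight of object x relative to all objects z in the visual field S (the quantity implemented by filtering); and \pi_j is the pertinence of category j in the set R of relevant categories.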
This volume aims to identify, address, and solve major problems and issues in the psychology of visual perception, attention, and intentional control.