
Implicit memory refers to a change in task performance due to an earlier experience that is not consciously remembered. The topic of implicit memory has been studied from two quite different perspectives for the past 20 years. On the one hand, researchers interested in memory have set out to characterize the memory system (or systems) underlying implicit memory and to see how they relate to the systems underlying other forms of memory. The alternative framework has considered implicit memory a by-product of perceptual, conceptual, or motor systems that learn; on this view, the systems that support implicit memory are heavily constrained by pressures other than memory per se. Both approaches have yielded results that have been valuable in helping us to understand the nature of implicit memory, but the two camps have worked largely in isolation, with little collaboration. This volume is unique in explicitly contrasting these approaches, bringing together world-class scientists from both camps in an attempt to forge a new approach to understanding one of the most exciting and important issues in psychology and neuroscience. Written for postgraduate students and researchers in cognitive psychology and cognitive neuroscience, this is a book that will have an important influence on the direction of future research in this field.
Speech recognition in ‘adverse conditions’ has been a familiar area of research in computer science, engineering, and hearing sciences for several decades. In contrast, most psycholinguistic theories of speech recognition are built upon evidence gathered from tasks performed by healthy listeners on carefully recorded speech, in a quiet environment, and under conditions of undivided attention. Building upon the momentum initiated by the Psycholinguistic Approaches to Speech Recognition in Adverse Conditions workshop held in Bristol, UK, in 2010, the aim of this volume is to promote a multi-disciplinary, yet unified approach to the perceptual, cognitive, and neuro-physiological mechanisms underpinning the recognition of degraded speech, variable speech, speech experienced under cognitive load, and speech experienced by theoretically relevant populations. This collection opens with a review of the literature and a formal classification of adverse conditions. The research articles then highlight those adverse conditions with the greatest potential for constraining theory, showing that some speech phenomena often believed to be immutable can be affected by noise, surface variations, or attentional set in ways that will force researchers to rethink their theory. This volume is essential for those interested in speech recognition outside laboratory constraints.
Previous research has found that adding different forms of variability during study can affect later memory at test. For example, having words spoken by different talkers has been shown to improve recall of known and novel words (Goldinger et al., 1999; Barcroft & Sommers, 2005), and varying the cues in cue-target related word pairs has been found to improve recall of the targets (Glenberg, 1979; Bevan et al., 1966). It was unclear, however, whether benefits of variability would extend to more naturalistic stimuli, such as sentences, which have higher working memory demands. The present set of experiments investigated how talker and contextual variability, both individually and combined, affect free recall of target words that appear in semantically related sentences. Target words were sentence-final items, and all stimuli in Experiment 1 were presented auditorily and orthographically. For each participant, targets appeared in one of four conditions: the same sentence spoken three times by the same person (no variability), three different sentences spoken by the same person every time (contextual variability), the same sentence spoken once each by three different talkers (talker variability), or a different sentence spoken by a different talker at each of three exposures (combined contextual and talker variability). Conditions with contextual variability resulted in significantly worse memory performance than constant-context conditions. There was no significant effect of talker variability and no significant interaction between talker and contextual variability. Experiment 2 further investigated the unexpected negative effect of contextual variability observed in Experiment 1 by changing the presentation modality to auditory-only (all with a constant talker). The switch from combined auditory-orthographic to auditory-only presentations was designed both to decrease working memory demands and to encourage processing of the sentence as it unfolded in time.
In addition, working memory measures were collected to test two predictions: that working memory would be a significant predictor of target word recall, and that it would be a significantly better predictor in the variable-context than in the constant-context condition. No recall differences between the constant- and variable-context conditions were found, but there was a significant positive relationship between working memory and target word recall. Lastly, although the positive relationship between working memory and target word recall was stronger in the variable- than in the constant-context condition, the interaction was not statistically significant. These findings suggest that the benefits of talker and contextual variability previously found for lists of words or word pairs (e.g., Glenberg, 1979; Barcroft & Sommers, 2005) do not necessarily extend to semantically related sentences. The results are discussed with regard to working memory demands and how they may interact with variability.
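The four study conditions described above form a 2x2 crossing of talker variability and contextual variability, with three exposures per target word. The sketch below is purely illustrative of that design structure; the talker and sentence labels are invented, not taken from the experiments.

```python
# Illustrative sketch of the 2x2 design: talker variability (constant vs.
# varied) crossed with contextual variability (constant vs. varied), three
# exposures per target word. All labels are hypothetical.
from itertools import product

TALKERS = ["T1", "T2", "T3"]
SENTENCES = ["S1", "S2", "S3"]  # three carrier sentences for one target word

def exposures(vary_talker: bool, vary_context: bool):
    """Return the three (talker, sentence) exposures for one target word."""
    talkers = TALKERS if vary_talker else [TALKERS[0]] * 3
    sentences = SENTENCES if vary_context else [SENTENCES[0]] * 3
    return list(zip(talkers, sentences))

# The four conditions from Experiment 1, keyed by (vary_talker, vary_context).
design = {
    (vt, vc): exposures(vt, vc)
    for vt, vc in product([False, True], repeat=2)
}

# No variability: the same sentence by the same talker, three times.
assert design[(False, False)] == [("T1", "S1")] * 3
# Combined variability: a different talker and sentence at each exposure.
assert design[(True, True)] == [("T1", "S1"), ("T2", "S2"), ("T3", "S3")]
```

Laying the conditions out this way makes clear why the analysis can test a main effect of each factor plus their interaction: each factor is manipulated independently of the other.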
Spoken Word Recognition covers the entire range of processes involved in recognizing spoken words - both in and out of context. It brings together a number of essays dealing with important theoretical questions raised by the study of spoken word recognition - among them, how do we understand fluent speech as efficiently and effortlessly as we do? What are the mental processes and representations involved when we recognize spoken words? How do these differ from those involved in reading written words? What information is stored in our mental lexicon and how is it structured? What do linguistic and computational theories tell us about these psychological processes and representations? The multidisciplinary presentation of work by phoneticians, linguists, psychologists, and computer scientists reflects the growing interest in spoken word recognition from a number of different perspectives. It is a natural consequence of the mediating role that lexical representations and processes play in language understanding, linking sound with meaning. Following the editors' introduction, the contributions and their authors are: Acoustic-Phonetic Representation in Word Recognition (David B. Pisoni and Paul A. Luce). Phonological Parsing and Lexical Retrieval (Kenneth W. Church). Parallel Processing in Spoken Word Recognition (William D. Marslen-Wilson). A Reader's View of Listening (Dianne C. Bradley and Kenneth I. Forster). Prosodic Structure and Spoken Word Recognition (Francois Grosjean and James Paul Gee). Structure in Auditory Word Recognition (Lyn Frazier). The Mental Representation of the Meaning of Words (P. N. Johnson-Laird). Context Effects in Lexical Processing (Michael K. Tanenhaus and Margery M. Lucas). Uli H. Frauenfelder is a researcher with the Max-Planck-Institut für Psycholinguistik, and Lorraine Komisarjevsky Tyler is a professor in the Department of Experimental Psychology at the University of Cambridge.
Spoken Word Recognition is in a series that is derived from special issues of Cognition: International Journal of Cognitive Science, edited by Jacques Mehler. A Bradford Book.
Abstract: The current study investigated recognition memory for dialect variation in an experiment with separate training and test phases. In the training phase, participants were asked to identify words spoken by three female talkers from the Midland dialect region and three female talkers from the Northern dialect region. In the test phase, participants listened to another set of words and were asked to indicate whether each word was "old" (heard in the training phase) or "new" (not heard in the training phase). Half of the test words were "old" and half were "new." Of the "old" words, one-third were repeated by the same talker, one-third by a different talker from the same dialect region, and one-third by a different talker from a different dialect region. Based on previous research, it was expected that, for each original dialect, participants would be most accurate and fastest for "old" words repeated by the same talker, least accurate and slowest for "old" words repeated by a different talker from a different dialect region, and intermediate for "old" words repeated by a different talker from the same dialect region. The results indicate that episodic memory traces of spoken words retain fine-grained surface details, as found in Goldinger (1996) and Palmeri et al. (1993): responses to same-talker repetitions were generally more accurate and faster than responses to different-talker same-dialect and different-talker different-dialect repetitions. In addition, response time patterns suggest that both abstract lexical representations and episodic traces are stored in long-term memory and contribute to perception.
Finally, the significant vowel interactions provide some evidence that dialect information is implicitly coded by the listener, though further studies are needed to better understand this result.
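The test-phase analysis described above compares accuracy (and response time) across the three repetition types for "old" items. The following is a minimal, purely hypothetical sketch of how such responses could be tallied; the trial structure, field names, and data are invented for illustration, not taken from the study.

```python
# Hypothetical sketch of scoring "old" items by repetition type in an
# old/new recognition test. All names and data are illustrative.
from dataclasses import dataclass

@dataclass
class Trial:
    word: str
    is_old: bool        # presented in the training phase?
    repetition: str     # "same-talker", "same-dialect", "different-dialect",
                        # or "new" for foil items
    response_old: bool  # participant responded "old"
    rt_ms: float        # response time in milliseconds

def accuracy_by_repetition(trials):
    """Proportion of correct 'old' responses per repetition type."""
    stats = {}
    for t in trials:
        if not t.is_old:
            continue  # foils are scored separately (false alarms)
        hits, n = stats.get(t.repetition, (0, 0))
        stats[t.repetition] = (hits + int(t.response_old), n + 1)
    return {rep: hits / n for rep, (hits, n) in stats.items()}

trials = [
    Trial("cat", True, "same-talker", True, 610.0),
    Trial("dog", True, "same-talker", True, 640.0),
    Trial("sun", True, "different-dialect", False, 820.0),
    Trial("map", False, "new", False, 700.0),
]
print(accuracy_by_repetition(trials))
# prints {'same-talker': 1.0, 'different-dialect': 0.0}
```

In the study's predicted pattern, accuracy would be highest for same-talker repetitions, lowest for different-dialect repetitions, and intermediate for same-dialect repetitions.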
Contemporary Issues in Experimental Phonetics provides comprehensive coverage of a number of research topics in experimental phonetics. This book is divided into four parts. Part I describes the instrumentation systems employed in the study of speech acoustics and speech physiology. The models, aerodynamic principles, and peripheral physiological mechanisms of speech production are discussed in Part II. Part III explains the problems in specifying the acoustic characteristics of speech sounds and the suprasegmental features of speech. The speech perception process, speaker recognition, theories on the nature of the dichotic right-ear advantage, and errors in auditory perception are elaborated in the final part. The text also covers the measurement of temporal processing in speech perception and the interrelationship of speech, hearing, and language in an understanding of the total human communication process. This publication is valuable to speech and hearing scientists, speech pathologists, audiologists, psychologists, linguists, and graduate students researching experimental phonetics.