Audiovisual Speech Processing

  • Electronic book text
Edited by Gerard Bailly, Pascal Perrier and Eric Vatikiotis-Bateson

List price: US$96.00

Description

When we speak, we configure the vocal tract, which shapes the visible motions of the face and the patterning of the audible speech acoustics. Similarly, we use these visible and audible behaviors to perceive speech. This book showcases a broad range of research investigating how these two types of signals are used in spoken communication, how they interact, and how they can be used to enhance the realistic synthesis and recognition of audible and visible speech. The volume begins by addressing two important questions about human audiovisual performance: how auditory and visual signals combine to access the mental lexicon, and where in the brain this and related processes take place. It then turns to the production and perception of multimodal speech and how structures are coordinated within and across the two modalities. Finally, the book presents overviews and recent developments in machine-based speech recognition and synthesis of AV speech.

Product details

  • Format: Electronic book text
  • Publisher: Cambridge University Press
  • Imprint: Cambridge University Press (Virtual Publishing)
  • Publication city/country: Cambridge, United Kingdom
  • Illustrations: 103 b/w illus.
  • ISBN-10: 1139368672
  • ISBN-13: 9781139368674

Table of contents

1. Three puzzles of multimodal speech perception R. E. Remez
2. Visual speech perception L. E. Bernstein
3. Dynamic information for face perception K. Lander and V. Bruce
4. Investigating auditory-visual speech perception development D. Burnham and K. Sekiyama
5. Brain bases for seeing speech: FMRI studies of speechreading R. Campbell and M. MacSweeney
6. Temporal organization of cued speech production D. Beautemps, M.-A. Cathiard, V. Attina and C. Savariaux
7. Bimodal perception within the natural time-course of speech production M.-A. Cathiard, A. Vilain, R. Laboissiere, H. Loevenbruck, C. Savariaux and J.-L. Schwartz
8. Visual and audiovisual synthesis and recognition of speech by computers N. M. Brooke and S. D. Scott
9. Audiovisual automatic speech recognition G. Potamianos, C. Neti, J. Luettin and I. Matthews
10. Image-based facial synthesis M. Slaney and C. Bregler
11. A trainable videorealistic speech animation system T. Ezzat, G. Geiger and T. Poggio
12. Animated speech: research progress and applications D. W. Massaro, M. M. Cohen, M. Tabain, J. Beskow and R. Clark
13. Empirical perceptual-motor linkage of multimodal speech E. Vatikiotis-Bateson and K. G. Munhall
14. Sensorimotor characteristics of speech production G. Bailly, P. Badin, L. Reveret and A. Ben Youssef

About Gerard Bailly

Gerard Bailly is a Senior CNRS Research Director at the Speech and Cognition Department, GIPSA-Lab, University of Grenoble, where he is now Head of Department. Pascal Perrier is a Professor in the GIPSA-Lab at the University of Grenoble. Eric Vatikiotis-Bateson is Professor and Canada Research Chair in Linguistics and Cognitive Science in the Department of Linguistics at the University of British Columbia.