
Coarse-to-Fine Natural Language Processing
Description
The impact of computer systems that can understand natural language will be tremendous. To develop this capability, we need to be able to automatically and efficiently analyze large amounts of text. Manually devised rules cannot provide the coverage needed to handle the complex structure of natural language, necessitating systems that can automatically learn from examples. To handle the flexibility of natural language, it has become standard practice to use statistical models, which assign probabilities, for example, to the different meanings of a word or to the plausibility of grammatical constructions.
This book develops a general coarse-to-fine framework for learning and inference in large statistical models for natural language processing.
Coarse-to-fine approaches exploit a sequence of models which introduce complexity gradually. At the top of the sequence is a trivial model in which learning and inference are both cheap. Each subsequent model refines the previous one, until a final, full-complexity model is reached. Applications of this framework to syntactic parsing, speech recognition and machine translation are presented, demonstrating the effectiveness of the approach in terms of accuracy and speed. The book is intended for students and researchers interested in statistical approaches to Natural Language Processing.
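The sequence-of-models idea can be sketched in a few lines. This is a hypothetical toy, not the book's actual models (which are grammars, acoustic models, and translation models): a cheap coarse scorer prunes the hypothesis space, and an expensive fine scorer ranks only the survivors.

```python
# A minimal sketch of coarse-to-fine inference (hypothetical names and
# thresholds): prune candidates with a cheap coarse model, then run the
# expensive fine model only on what survives.

def coarse_to_fine(candidates, coarse_score, fine_score, keep_ratio=0.3):
    """Prune with the coarse model, then pick the best survivor with the fine model."""
    # Coarse pass: cheap scores for every candidate.
    ranked = sorted(candidates, key=coarse_score, reverse=True)
    # Keep only the top fraction; the fine model never sees the rest.
    k = max(1, int(len(ranked) * keep_ratio))
    survivors = ranked[:k]
    # Fine pass: run the expensive model only on the pruned set.
    return max(survivors, key=fine_score)

# Toy usage: the coarse score (word length) is cheap to compute; pretend
# the fine score (vowel count) is expensive.
words = ["cat", "catalog", "catastrophe", "dog"]
best = coarse_to_fine(words, coarse_score=len,
                      fine_score=lambda w: sum(c in "aeiou" for c in w))
```

The payoff is that the fine model is evaluated on a small pruned set rather than the full hypothesis space, which is where the speed gains the book reports come from.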
Slav's work Coarse-to-Fine Natural Language Processing represents a major advance in the area of syntactic parsing, and a great advertisement for the superiority of the machine-learning approach.
Eugene Charniak (Brown University)
Product details
- Paperback | 106 pages
- Dimensions: 155 x 235 x 6.86mm | 207g
- Publication date: 26 Jan 2014
- Publisher: Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
- Berlin, Germany
- Language: English
- Edition: 2012 ed.
- XXII, 106 p.
- ISBN-10: 3642427499
- ISBN-13: 9783642427497
Table of contents
1. Introduction
2. Latent Variable Grammars for Natural Language Parsing
3. Discriminative Latent Variable Grammars
4. Structured Acoustic Models for Speech Recognition
5. Coarse-to-Fine Machine Translation Decoding
6. Conclusions and Future Work
Bibliography
About Slav Petrov
Slav Petrov is a Research Scientist at Google New York. He works on problems at the intersection of natural language processing and machine learning. In particular, he is interested in syntactic parsing and its applications to machine translation and information extraction. He also teaches Statistical Natural Language Processing at New York University as an Adjunct Professor.