

Speech Recognition
In computer technology, Speech Recognition refers to the recognition of human speech by computers for the performance of speaker-initiated computer-generated functions (e.g., transcribing speech to text; data entry; operating electronic and mechanical devices; automated processing of telephone calls), a main element of so-called natural language processing through computer speech technology.
The Challenge of Speech Recognition
Writing systems are ancient, going back as far as the Sumerians of 6,000 years ago. The phonograph, which allowed the analog recording and playback of speech, dates to 1877. Speech recognition had to await the development of the computer, however, due to multifarious problems with the recognition of speech.
First, speech is not simply spoken text--in the same way that Miles Davis playing So What can hardly be captured by a note-for-note rendition as sheet music. What humans understand as discrete words, phrases, or sentences with clear boundaries are actually delivered as a continuous stream of sounds: Iwenttothestoreyesterday, rather than I went to the store yesterday. Words can also blend, with Whaddayawant? representing What do you want? Second, there is no one-to-one correlation between the sounds and letters. In English, there are slightly more than five vowel letters--a, e, i, o, u, and sometimes y and w. There are more than twenty different vowel sounds, though, and the exact count can vary depending on the accent of the speaker. The reverse problem also occurs, where more than one letter can represent a given sound. The letter c can have the same sound as the letter k, as in cake, or as the letter s, as in citrus.
History of Speech Recognition
Despite the manifold difficulties, speech recognition has been attempted for almost as long as there have been digital computers. As early as 1952, researchers at Bell Labs had developed an Automatic Digit Recognizer, or "Audrey". Audrey attained an accuracy of 97 to 99 percent if the speaker was male, and if the speaker paused 350 milliseconds between words, and if the speaker limited his vocabulary to the digits from one to nine, plus "oh", and if the machine could be adjusted to the speaker's speech profile. Results dipped as low as 60 percent if the recognizer was not adjusted.
Audrey worked by recognizing phonemes, or individual sounds that were considered distinct from each other. The phonemes were correlated to reference models of phonemes that were generated by training the recognizer. Over the next two decades, researchers spent large amounts of time and money trying to improve upon this concept, with little success. Computer hardware improved by leaps and bounds, speech synthesis improved steadily, and Noam Chomsky's idea of generative grammar suggested that language could be analyzed programmatically. None of this, however, seemed to improve the state of the art in speech recognition.
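As an illustration of that reference-model idea (a toy sketch, not a reconstruction of Audrey's actual analog circuitry), the code below averages hypothetical per-digit feature vectors into templates and assigns a new utterance to the nearest template. The feature values and digit set are invented for the example.

```python
# Toy template-matching recognizer: each digit keeps one averaged reference
# model, and an input is classified by nearest Euclidean distance.
import numpy as np

def train_templates(labeled_features):
    """Average the feature vectors recorded for each digit into one reference model."""
    return {digit: np.mean(np.stack(examples), axis=0)
            for digit, examples in labeled_features.items()}

def recognize(features, templates):
    """Return the digit whose reference template is nearest to the input features."""
    return min(templates, key=lambda d: np.linalg.norm(features - templates[d]))

# Hypothetical 3-dimensional "formant" features for two digits from one speaker.
training = {
    "one":  [np.array([0.9, 0.2, 0.1]), np.array([0.8, 0.3, 0.1])],
    "nine": [np.array([0.2, 0.7, 0.9]), np.array([0.3, 0.8, 0.8])],
}
models = train_templates(training)
print(recognize(np.array([0.85, 0.25, 0.12]), models))  # -> "one"
```

This also makes the article's caveat concrete: the templates are tied to the speaker who produced the training examples, which is why Audrey's accuracy dropped when it was not adjusted to a new voice.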
In 1969, John R. Pierce wrote a forthright letter to the Journal of the Acoustical Society of America, where much of the research on speech recognition was published. Pierce was one of the pioneers in satellite communications, and an executive vice president at Bell Labs, which was a leader in speech recognition research. Pierce said everyone involved was wasting time and money.
It would be too simple to say that work in speech recognition is carried out simply because one can get money for it. . . . The attraction is perhaps similar to the attraction of schemes for turning water into gasoline, extracting gold from the sea, curing cancer, or going to the moon. One doesn't attract thoughtlessly given dollars by means of schemes for cutting the cost of soap by 10%. To sell suckers, one uses deceit and offers glamor.
Pierce's 1969 letter marked the end of official research at Bell Labs for nearly a decade. The defense research agency ARPA, however, chose to persevere. In 1971 they sponsored a research initiative to develop a speech recognizer that could handle at least 1,000 words and understand connected speech, i.e., speech without clear pauses between each word. The recognizer could assume a low-background-noise environment, and it did not need to work in real time. By 1976, three contractors had developed six systems. The most successful system, developed by Carnegie Mellon University, was called Harpy. Harpy was slow: a four-second sentence would have taken more than five minutes to process. It also still required speakers to 'train' it by speaking sentences to build up a reference model. Nonetheless, it did recognize a thousand-word vocabulary, and it did support connected speech.
Research continued on several paths, but Harpy was the model for future success. It used hidden Markov models and statistical modeling to extract meaning from speech. In essence, speech was broken up into overlapping small chunks of sound, and probabilistic models inferred the most likely words or parts of words in each chunk, and then the same model was applied again to the aggregate of the overlapping chunks. The procedure is computationally intensive, but it has proven to be the most successful. Throughout the 1970s and 1980s research continued. By the 1980s, most researchers were using hidden Markov models, which are behind all contemporary speech recognizers. In the latter part of the 1980s and in the 1990s, DARPA (the renamed ARPA) funded several initiatives. The first initiative was similar to the previous challenge: the requirement was still a one-thousand word vocabulary, but this time a rigorous performance standard was devised. This initiative produced systems that lowered the word error rate from ten percent to a few percent. Additional initiatives have focused on improving algorithms and improving computational efficiency.
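To make the chunk-and-infer idea described above concrete, here is a toy sketch of Viterbi decoding over a hidden Markov model. The word states, the acoustic "chunk" labels, and every probability are invented for illustration; a real recognizer works with thousands of context-dependent phoneme states rather than four whole words.

```python
# Minimal Viterbi decoder: given a sequence of acoustic chunks, find the most
# probable sequence of hidden word states under a toy HMM.
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable state path for the observation sequence."""
    layer = {s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}
    for obs in observations[1:]:
        new_layer = {}
        for s in states:
            prob, path = max(
                (layer[prev][0] * trans_p[prev][s] * emit_p[s][obs], layer[prev][1])
                for prev in states
            )
            new_layer[s] = (prob, path + [s])
        layer = new_layer
    return max(layer.values())[1]

states = ["went", "to", "the", "store"]
start_p = {"went": 0.7, "to": 0.1, "the": 0.1, "store": 0.1}
trans_p = {
    "went":  {"went": 0.1,  "to": 0.7,  "the": 0.1,  "store": 0.1},
    "to":    {"went": 0.1,  "to": 0.1,  "the": 0.7,  "store": 0.1},
    "the":   {"went": 0.1,  "to": 0.1,  "the": 0.1,  "store": 0.7},
    "store": {"went": 0.25, "to": 0.25, "the": 0.25, "store": 0.25},
}
emit_p = {  # probability of hearing each acoustic chunk given the hidden word
    "went":  {"w-eh": 0.8,  "t-uw": 0.05, "dh-ah": 0.05, "s-t-ao": 0.1},
    "to":    {"w-eh": 0.05, "t-uw": 0.8,  "dh-ah": 0.1,  "s-t-ao": 0.05},
    "the":   {"w-eh": 0.05, "t-uw": 0.1,  "dh-ah": 0.8,  "s-t-ao": 0.05},
    "store": {"w-eh": 0.05, "t-uw": 0.05, "dh-ah": 0.1,  "s-t-ao": 0.8},
}
print(viterbi(["w-eh", "t-uw", "dh-ah", "s-t-ao"], states, start_p, trans_p, emit_p))
# -> ['went', 'to', 'the', 'store']
```

The decoder never needs the word boundaries of "I went to the store" to be marked in the audio; the transition and emission probabilities jointly pick out the most likely word sequence, which is what lets HMM-based recognizers handle connected speech.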
In 2001, Microsoft released a speech recognition system that worked with Office XP. It neatly encapsulated how far the technology had come in fifty years, and what the limitations still were. The system had to be trained to a specific user's voice, using the works of great authors that were provided. Even after training, the system was fragile enough that a warning was provided: "If you change the room in which you use Microsoft Speech Recognition and your accuracy drops, run the Microphone Wizard again." On the plus side, the system did work in real time, and it did recognize connected speech.
Speech Recognition Today
Technology
Current voice recognition technologies work on the ability to mathematically analyze the sound waves formed by our voices through resonance and spectrum analysis. Computer systems first record the sound waves spoken into a microphone through an analog-to-digital converter. The analog or continuous sound wave that we produce when we say a word is sliced up into small time fragments. These fragments are then measured based on their amplitude levels, the level of compression of the air released from a person's mouth. To measure the amplitudes and convert a sound wave to digital format, the industry has commonly relied on the Nyquist-Shannon theorem.
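A minimal sketch of that slicing step, assuming a 16-bit mono PCM WAV file (the file name utterance.wav is hypothetical): read the digitized samples, cut them into short overlapping fragments, and measure each fragment's amplitude.

```python
# Slice a digitized recording into short frames and measure their amplitudes.
import wave
import numpy as np

def frame_amplitudes(path, frame_ms=25, hop_ms=10):
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        samples = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    frame_len = int(rate * frame_ms / 1000)   # samples per 25 ms fragment
    hop_len = int(rate * hop_ms / 1000)       # step between fragment starts
    amplitudes = []
    for start in range(0, len(samples) - frame_len + 1, hop_len):
        frame = samples[start:start + frame_len].astype(np.float64)
        amplitudes.append(np.sqrt(np.mean(frame ** 2)))  # RMS amplitude of the fragment
    return rate, amplitudes

rate, amps = frame_amplitudes("utterance.wav")
print(f"{rate} Hz sample rate, {len(amps)} frames")
```

These per-fragment measurements are the raw material that later stages (spectrum analysis, then the probabilistic models above) operate on.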
Nyquist-Shannon Theorem
The Nyquist-Shannon theorem, developed in 1928, shows that an analog signal is most accurately recreated in digital form when it is sampled at a rate at least twice its highest frequency. Nyquist proved this was true because an audible frequency must be sampled at least once for its compression and once for its rarefaction in each cycle. For example, a 20 kHz audio signal can be accurately represented by sampling at 44.1 kHz.
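A quick numerical check of that claim (a sketch, not part of the original article): a 20 kHz tone sampled at 44.1 kHz shows its peak at the correct frequency in a discrete Fourier transform, while sampling the same tone at only 30 kHz, below twice its frequency, makes it alias to a lower frequency.

```python
# Demonstrate the sampling-rate requirement with a pure sine tone.
import numpy as np

def dominant_frequency(tone_hz, sample_rate_hz, duration_s=1.0):
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)
    samples = np.sin(2 * np.pi * tone_hz * t)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum)]

print(dominant_frequency(20_000, 44_100))  # ~20000 Hz: faithfully captured
print(dominant_frequency(20_000, 30_000))  # ~10000 Hz: aliased, since 30 kHz < 2 x 20 kHz
```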
Recognizing Commands
The most important goal of current speech recognition software is to recognize commands. This increases the functionality of speech software. Software such as Microsoft Sync is built into many new vehicles, supposedly allowing users to access all of the car's electronic accessories, hands-free. This software is adaptive. It asks the user a series of questions and utilizes the pronunciation of commonly used words to derive speech constants. These constants are then factored into the speech recognition algorithms, allowing the application to provide better recognition in the future. Current tech reviewers have said the technology is much improved from the early 1990s but will not be replacing hand controls any time soon.
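For readers who want to experiment with command recognition, the sketch below uses the third-party Python package SpeechRecognition (assumed to be installed, along with PyAudio for microphone access, and an internet connection for the cloud transcription). It is not the in-vehicle software described above, just a minimal keyword-to-command mapping over a generic transcription service.

```python
# Listen once, transcribe, and map the result to a canned command response.
import speech_recognition as sr

COMMANDS = {"call home": "dialing home", "play music": "starting playback"}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # adapt to the room's noise level
    audio = recognizer.listen(source)

try:
    heard = recognizer.recognize_google(audio).lower()  # cloud transcription
    print(COMMANDS.get(heard, f"unrecognized command: {heard}"))
except sr.UnknownValueError:
    print("could not understand the audio")
```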
Dictation
Second to command recognition is dictation. Today's market sees value in dictation software, as discussed below, in the transcription of medical records, or papers for students, and as a more productive way to get one's thoughts down as written words. In addition, many companies see value in dictation for the process of translation, in that users could have their words translated for written letters, or translated so the user could then say the words back to another party in their native language. Products of these types already exist in the market today.
Errors in Interpreting the Spoken Word
As speech recognition programs process your spoken words, their success rate is based on their ability to minimize errors. The scale on which they can do
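The standard yardstick for these errors is the word error rate mentioned earlier in the history section: the number of word substitutions, insertions, and deletions needed to turn the recognizer's output into the reference transcript, divided by the length of the reference. A minimal sketch of that computation:

```python
# Word error rate via word-level edit distance between reference and hypothesis.
def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("i went to the store yesterday",
                      "i went to this store yesterday"))  # 1/6, about 0.17
```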
