
2020 Postgraduate Entrance Exam English, Selected Foreign-Periodical Reading: How to Make the Voiceless Speak?

Posted: 2019-07-12 15:06:46  Editor: leichenchen

Improving your ability to read foreign periodicals for the postgraduate English exam takes daily, steady accumulation before results show. To help 2020 exam candidates take their English up another level, Beijing Wendu Kaoyan has compiled this selected foreign-periodical reading, "How to Make the Voiceless Speak?", for candidates' reference.


Of the many memorable things about Stephen Hawking, perhaps the most memorable of all was his conversation. The amyotrophic lateral sclerosis that confined him to a wheelchair also stopped him talking, so instead a computer synthesised what became a world-famous voice.

It was, though, a laborious process. Hawking had to twitch a muscle in his cheek to control a computer that helped him build up sentences, word by word. Others who have lost the ability to speak because of disease, or a stroke, can similarly use head or eye movements to control computer cursors to select letters and spell out words. But, at their best, users of these methods struggle to produce more than ten words a minute. That is far slower than the average rate of natural speech, around 150 words a minute.

A better way to communicate would be to read the brain of a paralysed person directly and then translate those readings into synthetic speech. And a study published in Nature this week, by Edward Chang, a neurosurgeon at the University of California, San Francisco, describes just such a technique. Speaking requires the precise control of almost 100 muscles in the lips, jaw, tongue and throat to produce the characteristic breaths and sounds that make up sentences. By measuring the brain signals that control these vocal-tract muscles, Dr Chang has been able to use a computer to synthesise speech accurately.

The volunteers for Dr Chang’s study were five people with epilepsy who had had electrodes implanted into their brains as part of their treatment. He and his colleagues used these electrodes to record the volunteers’ brain activity while those volunteers spoke several hundred sentences out loud. Specifically, the researchers tracked activity in parts of the brain responsible for controlling the muscles of the vocal tract.

To convert those signals into speech they did two things. First, they trained a computer program to recognise what the signals meant. They did this by feeding the program simultaneously with output from the electrodes and with representations of the shapes the vocal tract adopts when speaking the test sentences—data known from decades of study of voices. Then, when the program had learned the relevant associations, they used it to translate electrode signals into vocal-tract configurations, and thus into sound.
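The two-step procedure described above can be sketched as a pair of learned mappings: brain signals to vocal-tract configurations, then vocal-tract configurations to sound. The sketch below is a hypothetical illustration only; the dimensions, the linear model, and the random stand-in data are assumptions for clarity, not the study's actual recurrent neural network or recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 electrode channels, 12 vocal-tract
# articulator parameters, 16 acoustic (spectral) features.
n_samples, n_channels, n_artic, n_acoustic = 500, 64, 12, 16

# Stand-ins for the recorded data: electrode output, plus the known
# vocal-tract shapes and sounds for the spoken test sentences.
neural = rng.normal(size=(n_samples, n_channels))
true_A = rng.normal(size=(n_channels, n_artic))
articulation = neural @ true_A            # vocal-tract configurations
true_B = rng.normal(size=(n_artic, n_acoustic))
acoustics = articulation @ true_B         # resulting sound features

# Stage 1: learn brain signals -> vocal-tract configuration.
A, *_ = np.linalg.lstsq(neural, articulation, rcond=None)
# Stage 2: learn vocal-tract configuration -> acoustic features.
B, *_ = np.linalg.lstsq(articulation, acoustics, rcond=None)

# Decoding new brain activity chains the two stages together.
decoded = (neural @ A) @ B
print(np.allclose(decoded, acoustics, atol=1e-6))  # prints True
```

The point of the intermediate articulatory stage, as the article explains, is that decades of voice research make the vocal-tract-to-sound half of the problem well understood, so the program only has to learn the neural half from scratch.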

The principle proved, Dr Chang and his team went on to show that their system could synthesise speech even when a volunteer mimed sentences, rather than speaking them out loud. Although the accuracy was not as good, this is an important further step. A practical device that might serve the needs of people like Hawking would need to respond to brain signals which moved few or no muscles at all. Miming is a stepping stone to that. The team have also shown that the relationship between brain signals and speech is sufficiently similar from person to person for their approach to be employed to create a generic template that a user could fine-tune. That, too, will ease the process of making the technique practical.
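The generic-template idea can be sketched as follows: rather than learning a new user's full decoder from scratch, one learns only the small deviation from a shared template, which needs far less calibration data. Everything here (the linear decoder, the ridge penalty, the synthetic calibration set) is an assumed toy model for illustration, not the team's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_artic = 64, 12

# Hypothetical generic template: decoder weights pooled across users.
template = rng.normal(size=(n_channels, n_artic))

# A new user's true mapping deviates only slightly from the template,
# so a small calibration set suffices to fine-tune it.
user_A = template + 0.1 * rng.normal(size=template.shape)
calib_X = rng.normal(size=(80, n_channels))   # brief calibration session
calib_Y = calib_X @ user_A

# Fine-tune: fit only the residual from the template (ridge-penalised
# toward zero), rather than learning the full mapping from scratch.
lam = 1.0
residual_Y = calib_Y - calib_X @ template
delta = np.linalg.solve(calib_X.T @ calib_X + lam * np.eye(n_channels),
                        calib_X.T @ residual_Y)
tuned = template + delta

err_template = np.linalg.norm(calib_X @ template - calib_Y)
err_tuned = np.linalg.norm(calib_X @ tuned - calib_Y)
print(err_tuned < err_template)  # prints True
```

Penalising the deviation rather than the weights themselves encodes the article's observation: brain-to-speech mappings are similar enough across people that only a small per-user correction should be needed.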

So far, Dr Chang has worked with people able to speak normally. The next stage will be to ask whether his system can work for those who cannot speak. There is reason for cautious optimism here. What Dr Chang is doing is analogous to the now well-established field of using brain-computer interfaces to allow paralysed individuals to control limb movements simply by thinking about what it is they want to do. Restoring speech is a more complex task than moving limbs—but sufficiently similar in principle to give hope to those now in a position similar to that once endured by the late Dr Hawking.

 


The above is "2020 Postgraduate Entrance Exam English, Selected Foreign-Periodical Reading: How to Make the Voiceless Speak?" from Beijing Wendu Kaoyan. We hope it helps 2020 exam candidates. Best of luck on the 2020 exam!

Recommended reading:

2020 Postgraduate Exam English: Selected Foreign-Periodical Readings (full collection)

2020 Postgraduate Exam English: Bilingual Economist Readings (July) Roundup

2020 Postgraduate Exam English: Current-Affairs Reading Roundup (July)
