NEWS COMMENTARY

Researchers from UCSF use AI to translate brain signals from epilepsy patients into synthesized speech

Published: May 14, 2019
Coverage: Digital Transformation
Activities: Research
by Jerrold Wang
Average importance

The research team uses recurrent neural networks (RNNs) first to translate neural signals into simulated movements of the vocal-tract articulators and then to transform those simulated movements into speech. Compared with the incumbent approach of decoding brain signals directly into audio, the team's two-stage decoding process produces much less acoustic distortion and performs well even with limited training data. Though still in its early stages, the approach is novel; clients interested in brain-computer interfaces should monitor the team's progress.
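
To make the two-stage idea concrete, the sketch below chains two small recurrent decoders: the first maps neural features to articulator kinematics, and the second maps those kinematics to acoustic features that a vocoder could render as audio. All dimensions, layer sizes, class names, and the use of bidirectional LSTMs here are illustrative assumptions, not the published UCSF model.

import torch
import torch.nn as nn

class ArticulatoryDecoder(nn.Module):
    # Stage 1 (hypothetical): neural features -> vocal-tract articulator kinematics
    def __init__(self, n_electrodes=256, n_articulators=33, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, neural):            # neural: (batch, time, n_electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)                # (batch, time, n_articulators)

class AcousticDecoder(nn.Module):
    # Stage 2 (hypothetical): articulator kinematics -> acoustic features (e.g., spectrogram frames)
    def __init__(self, n_articulators=33, n_acoustic=80, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):        # kinematics: (batch, time, n_articulators)
        h, _ = self.rnn(kinematics)
        return self.out(h)                # a vocoder would turn these frames into audio

# Chained inference: neural signals -> simulated articulator movements -> acoustic features
stage1, stage2 = ArticulatoryDecoder(), AcousticDecoder()
neural = torch.randn(1, 200, 256)         # 200 time steps of stand-in neural features
acoustic = stage2(stage1(neural))
print(acoustic.shape)                     # torch.Size([1, 200, 80])

Routing the decoding through an intermediate articulatory representation, rather than mapping brain signals straight to audio, is the distinguishing feature of the two-stage approach described above.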

For the original news article, click here.


Further Reading

Patent from Verily outlines a predictive electronic health record (EHR) aggregation system

News Commentary | February 5, 2019

Digital tools to extract and decentralize health records are not new. Apple and Amazon have already launched products in this category. However, Verily's (formerly Google Life Sciences) patent goes one step further by proposing a predictive EHR platform that would aggregate and compile medical data ... To read more, click here.

MIT's AI model learns language with very little training

News Commentary | November 2, 2018

NLP technology has seen drastic improvements in accuracy and performance, leading to AI‑enabled voice assistant products like Siri and Alexa. To date, most NLP algorithms have required extensive training with human‑annotated sentences that highlight the structure and meaning behind words. Now, MIT ... To read more, click here.

CMU and Google jointly develop novel neural architecture that offers better performance than incumbent NLP technologies

News Commentary | January 22, 2019

This new neural architecture, called Transformer‑XL, combines the features of recurrent neural networks (RNNs), which parse words individually to determine relationships between them, and transformer networks, which use attention mechanisms to determine dependencies between groups of words. The new ...
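
As a rough illustration of combining recurrence with attention, the sketch below lets each text segment attend over a cached memory of the previous segment's hidden states. The layer choice, dimensions, and loop are assumptions made for illustration; the actual Transformer‑XL architecture additionally uses relative positional encodings and a full stack of layers.

import torch
import torch.nn as nn

d_model, n_heads, seg_len = 64, 4, 16
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

memory = None                                   # cached hidden states from the previous segment
for step in range(3):                           # process three consecutive segments
    segment = torch.randn(1, seg_len, d_model)  # stand-in for embedded tokens of one segment
    context = segment if memory is None else torch.cat([memory, segment], dim=1)
    out, _ = attn(segment, context, context)    # queries from the current segment; keys/values include the memory
    memory = out.detach()                       # carry hidden states forward without backpropagating through them
    print(step, out.shape)                      # torch.Size([1, 16, 64]) at each step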