NEWS COMMENTARY

Materials scientists find success applying a natural language processing toolkit to mine insights from published academic papers

Published: July 9, 2019
Coverage: Accelerating Materials Innovation
by Cosmin Laslau
Very important

An increasingly common approach in machine learning is to represent words, sentences, and documents as vectors in high-dimensional spaces, in part because doing so does not require hand-labeled data. Now, materials scientists from Berkeley (with backing from Toyota) have shown, in a study published in Nature, that a similar approach can model scientific knowledge by mining already-published academic papers. Impressively, when the model is trained only on papers published before a given cutoff year, its word embeddings point toward materials discoveries (for example, promising thermoelectric compounds) years before they are actually reported in the literature. So far the approach has only been backtested; the next step is to apply it prospectively to enable new scientific breakthroughs. In the meantime, interested clients can get the code and try it themselves.
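To make the idea concrete, the sketch below trains Word2Vec-style embeddings with gensim on a few toy "abstracts" and then ranks tokens by similarity to a property keyword. The corpus, hyperparameters, and query term are illustrative stand-ins, not the study's actual pipeline or released code.

```python
# Minimal sketch of the embedding idea using gensim's Word2Vec (gensim 4.x API).
# The toy "abstracts" below are placeholders; the real study trains on millions
# of tokenized materials-science abstracts.
from gensim.models import Word2Vec

abstracts = [
    "we report the thermoelectric properties of bi2te3 thin films".split(),
    "pbte exhibits a high thermoelectric figure of merit".split(),
    "the band gap of gaas was measured by photoluminescence".split(),
]

# Train skip-gram embeddings; each token becomes a dense vector whose
# neighborhood reflects the contexts it appears in.
model = Word2Vec(abstracts, vector_size=100, window=5, min_count=1, sg=1, epochs=50)

# Rank tokens by cosine similarity to a property keyword -- the basic mechanism
# by which unlabeled text can surface candidate materials.
print(model.wv.most_similar("thermoelectric", topn=5))
```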

For the original news article, click here.


Further Reading

Stanford AI researchers recreate the periodic table

News Commentary | July 11, 2018

Taking inspiration from an established NLP algorithm (Word2Vec), Stanford researchers developed a program called Atom2Vec, which ingested names of known chemical compounds and, from those relationships, determined which elements have similar properties, grouping them together. Those in the chemicals...
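As a rough illustration of that idea (not the Stanford code), the sketch below builds element vectors from a handful of made-up binary formulas: each element is described by the partner elements it co-occurs with, and an SVD of that co-occurrence matrix yields dense vectors in which chemically similar elements end up close together.

```python
# Toy sketch of the Atom2Vec idea: elements that appear in similar chemical
# "environments" get similar vectors. The formula list is illustrative only.
from collections import defaultdict
import numpy as np

formulas = [("Na", "Cl"), ("K", "Cl"), ("Na", "Br"), ("K", "Br"),
            ("Mg", "O"), ("Ca", "O"), ("Mg", "S"), ("Ca", "S")]

# Count, for every element, which partner elements it occurs with.
counts = defaultdict(lambda: defaultdict(int))
for a, b in formulas:
    counts[a][b] += 1
    counts[b][a] += 1

elements = sorted(counts)
envs = sorted({e for row in counts.values() for e in row})
matrix = np.array([[counts[el][env] for env in envs] for el in elements], dtype=float)

# An SVD turns each count row into a dense element vector (Atom2Vec keeps only
# the leading components; with this tiny matrix we keep them all).
u, s, _ = np.linalg.svd(matrix, full_matrices=False)
vectors = u * s

def cos(i, j):
    return float(vectors[i] @ vectors[j] /
                 (np.linalg.norm(vectors[i]) * np.linalg.norm(vectors[j]) + 1e-12))

print(cos(elements.index("Na"), elements.index("K")))   # high: same environments in this toy set
print(cos(elements.index("Na"), elements.index("Mg")))  # near zero: no shared environments here
```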

Deep learning pioneers win Turing Award

News Commentary | April 4, 2019

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, three of the early pioneers of deep learning, recently won the Turing Award, the computer science equivalent of the Nobel Prize. Each winner has been active in the field of deep learning for decades, and the performance gains seen in today's systems ...

CMU and Google jointly develop novel neural architecture that offers better performance than incumbent NLP technologies

News Commentary | January 22, 2019

This new neural architecture, called Transformer-XL, combines the features of recurrent neural networks (RNNs), which parse words individually to determine relationships between them, and transformer networks, which use attention mechanisms to determine dependencies between groups of words. The new ...
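For a feel of how recurrence and attention can be combined, the toy numpy sketch below shows segment-level recurrence: each new segment attends over its own states plus a cached copy of the previous segment. This is only a single-head, single-layer illustration of the general trick; the published architecture additionally uses relative positional encodings, multiple heads, and deep stacks.

```python
# Toy sketch of segment-level recurrence: the current segment's queries attend
# over keys/values built from both a cached memory and the current segment.
import numpy as np

def attend(segment, memory, wq, wk, wv):
    """One self-attention pass whose keys/values include the cached memory."""
    context = np.concatenate([memory, segment], axis=0)   # (mem_len + seg_len, d)
    q = segment @ wq                                      # queries come from current tokens only
    k = context @ wk
    v = context @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over memory + segment
    return weights @ v

rng = np.random.default_rng(0)
d, seg_len = 8, 4
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))

memory = np.zeros((0, d))                                 # empty cache before the first segment
for step in range(3):                                     # process a long sequence segment by segment
    segment = rng.normal(size=(seg_len, d))
    out = attend(segment, memory, wq, wk, wv)
    memory = segment   # simplified: the real model caches the layer's hidden states, without gradients
    print(step, out.shape)
```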