The Future of Artificial Intelligence
Scooped by Juliette Decugis

We read the paper that forced Timnit Gebru out of Google. Here’s what it says.

The company's star ethics researcher highlighted the risks of large language models, which are key to Google's business.
Juliette Decugis's insight:

Almost two years ago, in December 2020, Timnit Gebru was forced out of Google's ethical AI team, in part for exposing the dangers of large language models (Transformer-based systems such as BERT, GPT-2, and GPT-3). I found this article interesting because it effectively summarizes the four main risks Gebru's paper highlighted: environmental and financial costs; massive training sets, which inevitably receive less scrutiny for abusive language and bias; research opportunity costs; and the illusion of meaning. These risks are often lost amid the AI community's enthusiasm for breaking records on benchmark datasets, but they must be at the center of research focus.

Scooped by Juliette Decugis

Enhancing Backpropagation via Local Loss Optimization


"Posted by Ehsan Amid, Research Scientist, and Rohan Anil, Principal Engineer, Google Research, Brain Team"

Juliette Decugis's insight:
Many recent ML papers try to address the overwhelming problem of deep learning models: their computational and memory cost. Whereas much recent work has focused on sparsifying these models, LocoProp instead rethinks backpropagation, the most expensive step of neural network training.

LocoProp decomposes a model's objective function into layer-wise losses, each comparing a layer's output to the overall batch's final output, accompanied by a regularizer term (an L2 loss). Breaking the loss function down across layers permits parallelized training, smaller-order calculations, and more flexibility. Furthermore, the paper demonstrates that "the overall behavior of the combined updates closely resembles higher-order updates."
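The layer-wise idea can be sketched roughly as follows. This is a toy, hypothetical single-layer example, not the paper's implementation: the `local_update` function, the squared-error target, and the `lam`/`lr` values are all illustrative assumptions. Each layer takes a few local gradient steps toward a target activation while an L2 term anchors the new weights to the current ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(W, x, target, lam=1.0, lr=0.05, steps=10):
    """A few gradient steps on the layer-local objective
    ||W x - target||^2 + lam * ||W - W_old||^2  (illustrative only)."""
    W_old = W.copy()
    for _ in range(steps):
        pred = W @ x
        # Gradient of the fit term plus the L2 "stay close" regularizer.
        grad = 2 * np.outer(pred - target, x) + 2 * lam * (W - W_old)
        W = W - lr * grad
    return W

# Toy single linear layer pulled toward a target activation.
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
target = np.zeros(3)

before = float(np.sum((W @ x - target) ** 2))
W = local_update(W, x, target)
after = float(np.sum((W @ x - target) ** 2))
# The local fit error should decrease after the update.
```

Because each layer's objective depends only on its own inputs and targets, updates like this can in principle run in parallel across layers, which is the flexibility the post highlights.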

Potential limits of the paper: only "small" networks were used, and it "still remains to be seen how well the method generally works across tasks."

Scooped by Juliette Decugis

Meta's AI Chief Publishes Paper on Creating ‘Autonomous’ Artificial Intelligence

Yann LeCun, machine learning pioneer and head of AI at Meta, lays out a vision for AIs that learn about the world more like humans in a new study.
Juliette Decugis's insight:

In a talk at UC Berkeley this Tuesday, Yann LeCun, one of the founding fathers of deep learning, discussed approaches toward more generalizable and autonomous AI.

Current deep learning frameworks require error-driven training to learn very specific tasks and often fail to generalize even to out-of-distribution inputs on the same task. With reinforcement learning in particular, a model needs to "fail" hundreds of times before it starts learning.
As a potential path away from specialized AI, LeCun proposes a novel architecture composed of five sub-models mirroring different parts of the brain. In particular, one of the modules would resemble memory, serving as a world model. Instead of each module learning its own task-specific representation of the world, this framework would maintain a single world model usable across tasks by the different modules.
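As a rough, hypothetical sketch of that idea (the module names and the trivial `predict`/`score` bodies are my own illustrative assumptions, not LeCun's specification): several task-specific modules query one shared world model rather than each learning its own.

```python
class WorldModel:
    """Shared predictive model of the environment (learned, in a real system)."""
    def predict(self, state, action):
        # Placeholder dynamics: a real world model would predict the
        # consequences of `action`; here the toy state is unchanged.
        return state

class Actor:
    """A task-specific module that plans using the shared world model."""
    def __init__(self, world_model):
        self.world_model = world_model  # shared across tasks, not re-learned

    def score(self, state):
        # Task-specific cost/value; constant in this toy sketch.
        return 0.0

    def plan(self, state, actions):
        # Pick the action whose predicted outcome scores best.
        return max(actions,
                   key=lambda a: self.score(self.world_model.predict(state, a)))

# Two different task modules reuse one world model.
shared = WorldModel()
actor_a, actor_b = Actor(shared), Actor(shared)
chosen = actor_a.plan(state=0, actions=["left", "right"])
```

The design point is the single `shared` instance: any improvement to the world model immediately benefits every module that plans with it.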
See full paper: https://openreview.net/pdf?id=BZ5a1r-kVsf

Scooped by Juliette Decugis

AlphaGo: using machine learning to master the ancient game of Go

We are thrilled to have mastered Go and thus achieved one of the grand challenges of AI.
Juliette Decugis's insight:
In 2015, AlphaGo, created by DeepMind, became the first computer program to beat a professional player at the game of Go. By playing against itself, AlphaGo not only mastered the most complex Go moves but, more importantly, discovered new techniques unknown even to champions. AlphaGo could represent the start of the future of AI.