The Future of Artificial Intelligence
Scooped by Juliette Decugis

Enhancing Backpropagation via Local Loss Optimization

"Posted by Ehsan Amid, Research Scientist, and Rohan Anil, Principal Engineer, Google Research, Brain Team"

Juliette Decugis's insight:
Many recent ML papers aim to address a central problem of deep learning models: their computational and memory cost. While much recent work has focused on sparsifying these models, LocoProp instead rethinks backpropagation, the most expensive step of neural network training.

LocoProp decomposes a model's objective function into layer-wise losses, each comparing a layer's output against a local target derived from the batch's final output, accompanied by a regularizer term (an L2 penalty). Breaking the loss function down across layers permits parallelized training, smaller per-layer computations, and more flexibility. Furthermore, the paper demonstrates that "the overall behavior of the combined updates closely resembles higher-order updates."
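To make the idea concrete, here is a minimal numpy sketch of a LocoProp-style local update for a single linear layer. This is an illustrative assumption, not the paper's code: the function name, target construction, and hyperparameters are hypothetical. Each layer independently runs a few gradient steps on its own local loss, a squared error toward a local target plus an L2 regularizer that keeps the new weights close to the current ones.

```python
import numpy as np

def local_layer_update(W, x, target, reg=0.1, lr=0.05, steps=10):
    """Hypothetical sketch of a LocoProp-style local update.

    Minimizes the local loss  ||W @ x - target||^2 + reg * ||W - W0||^2
    by a few gradient-descent steps, where W0 is the layer's current
    weight matrix (the regularizer anchors the update to it).
    """
    W0 = W.copy()  # anchor: current weights before the local update
    for _ in range(steps):
        # Gradient of the squared fit term plus the L2 proximity term.
        grad = 2 * np.outer(W @ x - target, x) + 2 * reg * (W - W0)
        W = W - lr * grad
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))       # toy layer: 4 inputs -> 3 outputs
x = rng.normal(size=4)            # one input activation vector
target = rng.normal(size=3)       # stand-in for the layer's local target

before = np.sum((W @ x - target) ** 2)
W_new = local_layer_update(W, x, target)
after = np.sum((W_new @ x - target) ** 2)
print(after < before)  # the local fit error decreases
```

Because each layer's update depends only on its own input, output, and local target, these solves can in principle run in parallel across layers, which is the source of the flexibility noted above.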

Potential limits of the paper: only relatively small networks were tested, and it "still remains to be seen how well the method generally works across tasks."

Scooped by Juliette Decugis

AI Is Biased. Here's How Scientists Are Trying to Fix It | WIRED

Researchers are revising the ImageNet data set. But algorithmic anti-bias training is harder than it seems.
Juliette Decugis's insight:
Many fields, such as judicial systems, are already starting to use AI as a way of making supposedly more neutral decisions. Many people view AI as a better version of humans, capable of more informed judgments. But who do machines learn from? Humans.