Meta Learning Backpropagation and Improving It

Many concepts have been proposed for meta learning with neural networks
(NNs), e.g., NNs that learn to reprogram fast weights, Hebbian plasticity,
learned learning rules, and meta recurrent NNs. Our Variable Shared Meta
Learning (VSML) unifies the above and demonstrates that simple weight-sharing
and sparsity in an NN are sufficient to express powerful learning algorithms
(LAs) in a reusable fashion. A simple implementation of VSML where the weights
of a neural network are replaced by tiny LSTMs allows for implementing the
backpropagation LA solely by running the network in forward mode. It can even meta learn
new LAs that differ from online backpropagation and generalize to datasets
outside of the meta training distribution without explicit gradient
calculation. Introspection reveals that our meta learned LAs learn through fast
association in a way that is qualitatively different from gradient descent.
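To make the core mechanism more concrete, here is a minimal illustrative sketch (in PyTorch, not the authors' implementation) of a dense layer in which every scalar weight is replaced by a tiny LSTM whose parameters are shared across all connections; the LSTM states act as fast weights that are updated purely in forward mode. The class name, the two-component message format, and all sizes below are assumptions made for illustration only.

```python
# Sketch of the VSML idea, under assumed names and sizes: one tiny LSTM per
# (input, output) connection, all sharing a single set of parameters. The
# per-connection LSTM states take the role of the weights, so "learning"
# happens by updating these states in forward mode.
import torch
import torch.nn as nn


class VSMLDenseLayer(nn.Module):           # hypothetical name, not from the paper
    def __init__(self, n_in, n_out, state_size=8):
        super().__init__()
        self.n_in, self.n_out = n_in, n_out
        # One tiny LSTM shared by every connection. Its input is an assumed
        # 2-d message: [forward signal x_i, feedback signal for output unit j].
        self.cell = nn.LSTMCell(input_size=2, hidden_size=state_size)
        # Shared readout turning an LSTM state into a scalar contribution.
        self.readout = nn.Linear(state_size, 1)
        # Per-connection LSTM states (the "fast weights").
        self.register_buffer("h", torch.zeros(n_in * n_out, state_size))
        self.register_buffer("c", torch.zeros(n_in * n_out, state_size))

    def forward(self, x, feedback=None):
        # x: (n_in,) forward activations; feedback: (n_out,) optional error-like signal.
        if feedback is None:
            feedback = torch.zeros(self.n_out)
        # Broadcast the messages to all n_in * n_out connections.
        fwd = x.repeat_interleave(self.n_out).unsqueeze(1)   # (n_in*n_out, 1)
        bwd = feedback.repeat(self.n_in).unsqueeze(1)        # (n_in*n_out, 1)
        msg = torch.cat([fwd, bwd], dim=1)                   # (n_in*n_out, 2)
        # Update every connection's state with the shared LSTM parameters.
        self.h, self.c = self.cell(msg, (self.h, self.c))
        # Each connection emits a scalar; sum the contributions per output unit.
        contrib = self.readout(self.h).view(self.n_in, self.n_out)
        return contrib.sum(dim=0)                            # (n_out,)


layer = VSMLDenseLayer(n_in=4, n_out=3)
y = layer(torch.randn(4))                  # plain forward pass
y = layer(torch.randn(4), torch.randn(3))  # later step with a feedback message
```

In this sketch the shared LSTM parameters encode the learning algorithm itself, which is what gets meta learned, while the per-connection states are what change during learning on a new task.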

Read in full here:

This thread was posted by one of our members via one of our news source trackers.
