Nothing makes sense in deep learning, except in the light of evolution

Deep Learning (DL) is a surprisingly successful branch of machine learning.
The success of DL is usually explained by focusing on a particular
recent algorithm and its traits. Instead, we propose that an explanation of the
success of DL must look at the population of all algorithms in the field and
how they have evolved over time. We argue that cultural evolution is a useful
framework to explain the success of DL. In analogy to biology, we use
'development' to mean the process of converting the pseudocode or text description
of an algorithm into a fully trained model. This includes writing the
programming code, compiling and running the program, and training the model. If
all parts of the process don’t align well, then the resultant model will be
useless (if the code runs at all!). This is a constraint. A core component of
evolutionary developmental biology is the concept of deconstraints: these are
modifications to the developmental process that avoid complete failure by
automatically accommodating changes in other components. We suggest that many
important innovations in DL, from neural networks themselves to hyperparameter
optimization and AutoGrad, can be seen as developmental deconstraints. These
deconstraints help both the particular algorithm, by easing the challenges of
implementation, and the field of DL as a whole, by making it easier to generate
new ideas. We highlight how our perspective can both
advance DL and lead to new insights for evolutionary biology.
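
For a concrete feel for what a developmental deconstraint looks like in practice, here is a minimal sketch (in PyTorch, chosen purely for illustration; the paper does not prescribe a library) of automatic differentiation absorbing a change that would otherwise break the pipeline: the backward pass is re-derived automatically, so mutating the architecture never requires rewriting gradient code.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy model: swap in any architecture here and everything
# below stays unchanged -- autograd re-derives the gradients automatically.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)

x = torch.randn(16, 4)  # dummy inputs
y = torch.randn(16, 1)  # dummy targets

loss = F.mse_loss(model(x), y)
loss.backward()  # populates .grad for every parameter; no hand-derived math

print(model[0].weight.grad.shape)  # torch.Size([8, 4])
```

Without autograd, each architectural change would demand matching hand-written gradient code, and a mismatch would silently produce a useless model; the automation is what lets variation accumulate safely.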

Read in full here:

This thread was posted by one of our members via one of our news source trackers.
