arXiv:1902.06720



Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
Jaehoon Lee * 1 2 Lechao Xiao * 1 2 Samuel S. Schoenholz 1 Yasaman Bahri 1
Jascha Sohl-Dickstein 1 Jeffrey Pennington 1
Abstract
A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized model even for finite, practically sized networks.
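The linearization described in the abstract is the first-order Taylor expansion around the initial parameters, f_lin(x; θ) = f(x; θ_0) + ∇_θ f(x; θ_0) (θ − θ_0). The sketch below (not the authors' code) illustrates this idea in JAX for a wide ReLU MLP, comparing the true network output after a small parameter displacement with the linearized prediction computed via a Jacobian-vector product; the layer widths, perturbation size, and helper names (init_mlp, mlp) are illustrative assumptions.

```python
# Minimal sketch of first-order Taylor linearization around initialization.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Random MLP parameters (list of (W, b) tuples) at initialization."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, wkey, bkey = jax.random.split(key, 3)
        params.append((jax.random.normal(wkey, (din, dout)) / jnp.sqrt(din),
                       jax.random.normal(bkey, (dout,))))
    return params

def mlp(params, x):
    """Forward pass of a ReLU MLP; returns one scalar output per input."""
    for W, b in params[:-1]:
        x = jax.nn.relu(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze(-1)

key = jax.random.PRNGKey(0)
params0 = init_mlp(key, [8, 512, 512, 1])            # wide hidden layers
x = jax.random.normal(jax.random.PRNGKey(1), (4, 8))  # a small test batch

def f(params):
    return mlp(params, x)

# A small parameter displacement (standing in for a gradient-descent update).
delta = jax.tree_util.tree_map(lambda p: 1e-2 * jnp.ones_like(p), params0)

# Linearized prediction: f(θ0) plus the Jacobian-vector product J(x) · (θ - θ0).
f0, jvp_out = jax.jvp(f, (params0,), (delta,))
f_lin = f0 + jvp_out

# True network output at the displaced parameters, for comparison.
params_moved = jax.tree_util.tree_map(lambda p, d: p + d, params0, delta)
print(jnp.max(jnp.abs(mlp(params_moved, x) - f_lin)))
```

For wider hidden layers or smaller parameter displacements, the gap between the true and linearized outputs should shrink, which is the qualitative behavior the infinite-width result predicts.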

