Deep Sparse Rectifier Neural Networks
Xavier Glorot
DIRO, Université de Montréal
Montréal, QC, Canada
glorotxa@iro.umontreal.ca

Antoine Bordes
Heudiasyc, UMR CNRS 6599
UTC, Compiègne, France
and
DIRO, Université de Montréal
Montréal, QC, Canada
antoine.bordes@hds.utc.fr

Yoshua Bengio
DIRO, Université de Montréal
Montréal, QC, Canada
bengioy@iro.umontreal.ca
Abstract
While logistic sigmoid neurons are more biologically plausible than hyperbolic tangent neurons, the latter work better for training multi-layer neural networks. This paper shows that rectifying neurons are an even better model of biological neurons and yield equal or better performance than hyperbolic tangent networks in spite of the hard non-linearity and non-differentiability at zero, creating sparse representations with true zeros, which seem remarkably suitable for naturally sparse data. Even though they can take advantage of semi-supervised setups with extra unlabeled data, deep rectifier networks can reach their best performance without requiring any unsupervised pre-training on purely supervised tasks with large labeled datasets.
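The abstract contrasts the rectifier activation, max(0, x), with the hyperbolic tangent. Below is a minimal illustrative sketch (not from the paper, using NumPy and hypothetical layer sizes) of the property the abstract highlights: a rectifier layer produces exact zeros for many units, yielding a genuinely sparse representation, whereas tanh outputs are merely small and almost never exactly zero.

```python
import numpy as np

def rectifier(x):
    """Rectifier (ReLU) activation: max(0, x); hard non-linearity, non-differentiable at zero."""
    return np.maximum(0.0, x)

def tanh_activation(x):
    """Hyperbolic tangent activation, for comparison."""
    return np.tanh(x)

# Hypothetical pre-activations for one hidden layer: 4 examples, 8 hidden units.
rng = np.random.default_rng(0)
pre_activations = rng.standard_normal((4, 8))

relu_out = rectifier(pre_activations)
tanh_out = tanh_activation(pre_activations)

# The rectifier zeroes out roughly half the units (true zeros);
# tanh produces small but non-zero values for the same inputs.
print("fraction of true zeros (rectifier):", np.mean(relu_out == 0.0))
print("fraction of true zeros (tanh):     ", np.mean(tanh_out == 0.0))
```

These exact zeros are what the abstract means by "sparse representations with true zeros": units are switched off entirely rather than merely attenuated.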