We study the learning of fully connected neural networks for binary classification. For the networks of interest, we assume that the L1-norm of the incoming weights of any neuron is bounded by a constant. We further assume that there exists a neural network that separates the positive and negative samples by a constant margin. Under these assumptions, we present an efficient algorithm that learns a neural network with arbitrary generalization error ε > 0. The algorithm's sample complexity and time complexity …
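As a rough formalization of the two assumptions above (the symbols B for the weight bound, γ for the margin, w_j for the incoming weights of neuron j, and f* for the separating network are notation introduced here for illustration; they do not appear in the abstract), one may write

    \|w_j\|_1 \le B \quad \text{for every neuron } j, \qquad
    y_i \, f^*(x_i) \ge \gamma \quad \text{for every sample } (x_i, y_i),\ y_i \in \{-1, +1\},

and the learning guarantee asks for a classifier \hat{f} whose misclassification probability, Pr[sign(\hat{f}(x)) ≠ y], is at most ε.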