Neural Networks
Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks
Despite finite-dimensional ReLU neural networks being a consistent factor behind recent deep learning successes, a theory of feature …
Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew Saxe
A Linear Network Theory of Iterated Learning
Language provides one of the primary examples of humans' ability to systematically generalize — reasoning about new …
Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew Saxe
Generalisable Agents for Neural Network Optimisation
Optimising deep neural networks is a challenging task due to complex training dynamics, high computational requirements, and long …
Kale-ab Tessera, Callum Tilbury, Sasha Abramowitz, Ruan de Kock, Omayma Mahjoub, Benjamin Rosman, Sara Hooker, Arnu Pretorius
Revisiting the Role of Relearning in Semantic Dementia
Patients with semantic dementia (SD) present with remarkably consistent atrophy of neurons in the anterior temporal lobe and …
Devon Jarvis, Verena Klar, Richard Klein, Benjamin Rosman, Andrew Saxe
On The Specialization of Neural Modules
A number of machine learning models have been proposed with the goal of achieving systematic generalization: the ability to reason …
Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew Saxe
Just-in-Time Sparsity: Learning Dynamic Sparsity Schedules
Sparse neural networks have various computational benefits while often being able to maintain or improve the generalization performance …
Kale-ab Tessera, Chiratidzo Matowe, Arnu Pretorius, Benjamin Rosman, Sara Hooker
Quantisation and Pruning for Neural Network Compression and Regularisation
Deep neural networks are typically too computationally expensive to run in real-time on consumer-grade hardware and low-powered …
Kimessha Paupamah, Steven James, Richard Klein