Representation Learning
Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks
In spite of finite-dimensional ReLU neural networks being a consistent factor behind recent deep learning successes, a theory of feature …
Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew Saxe
PDF
A Linear Network Theory of Iterated Learning
Language provides one of the primary examples of humans' ability to systematically generalize: reasoning about new …
Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew Saxe
PDF
Overlooked Implications of the Reconstruction Loss for VAE Disentanglement
Learning disentangled representations with variational autoencoders (VAEs) is often attributed to the regularisation component of the …
Nathan Michlo, Richard Klein, Steven James
PDF
Supplementary Material
On The Specialization of Neural Modules
A number of machine learning models have been proposed with the goal of achieving systematic generalization: the ability to reason …
Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew Saxe
PDF
Accounting for the Sequential Nature of States to Learn Representations in Reinforcement Learning
In this work, we investigate the properties of data that cause popular representation learning approaches to fail. In particular, we …
Nathan Michlo, Devon Jarvis, Richard Klein, Steven James
PDF