
On April 17, Hui Peng successfully defended the thesis “Adaptation of learning dynamics and feature representations via the neural kernel” (advisor: John Murray).
Peng stated, “Biological and artificial systems must generalize from examples in order to learn efficiently in a complex world. We developed a mathematical framework, named the neural kernel, to understand the learning dynamics and quantify the generalization strategies of humans and artificial neural networks in multi-feature learning tasks. Our model captured human learning trajectories and revealed how the generalization strategy adapts during learning. Our framework establishes a link between neural representations and learning behavior, and provides predictions for future neural and behavioral experiments.”
Peng will be a machine learning engineer at Meta.
Thesis Abstract: The ability to learn from experience is essential for both biological and artificial agents. In complex environments where experience is sparse relative to the multitude of features, agents must generalize efficiently by focusing on the features relevant to the task. This generalization strategy, known as an inductive bias, shapes the dynamics of learning. We introduce a neural kernel framework to characterize the inductive biases of humans and artificial neural networks in category learning, linking neural representations with learning behavior. Our kernel models captured the learning trajectories of human subjects across two experiments and elucidated the learning strategies of neural networks through their feature modes. We developed methods for fitting kernels to behavioral data, revealing the adaptation of inductive bias in human subjects. We also implemented a neural network model with feature-based gain modulation, capable of adapting its representations and inductive bias. In summary, we established a novel perspective for understanding learning and generalization in relation to neural representations, providing testable predictions for future neural and behavioral experiments.
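For readers who want a concrete feel for the general idea, the following is a minimal, hypothetical sketch in Python, not the model developed in the thesis. It assumes the kernel is an inner product over weighted stimulus features and that learning follows kernel gradient descent on prediction errors, so the choice of feature weights acts as an inductive bias that shapes both the learning trajectory and how the agent generalizes to new stimuli. All function names and parameters here are illustrative assumptions.

import numpy as np

# Illustrative sketch only (not the thesis's actual model): a feature-weighted
# inner-product kernel determines how errors on training examples generalize
# to new stimuli under kernel gradient descent.

def neural_kernel(X, feature_weights):
    """Inner-product kernel over weighted features; a larger weight means the
    corresponding feature dominates generalization (an inductive bias)."""
    W = np.diag(feature_weights)
    return X @ W @ X.T

def simulate_learning(X_train, y_train, X_test, feature_weights,
                      lr=0.1, steps=200):
    """Evolve predictions on test stimuli via kernel gradient descent on the
    training errors; returns the trajectory of test predictions."""
    K_train = neural_kernel(X_train, feature_weights)
    K_test_train = X_test @ np.diag(feature_weights) @ X_train.T
    f_train = np.zeros(len(y_train))   # predictions on training examples
    f_test = np.zeros(len(X_test))     # predictions on probe stimuli
    trajectory = []
    for _ in range(steps):
        err = f_train - y_train
        f_train -= lr * K_train @ err       # training predictions follow kernel dynamics
        f_test -= lr * K_test_train @ err   # generalization is driven by the same kernel
        trajectory.append(f_test.copy())
    return np.array(trajectory)

# Toy two-feature category task: only the first feature predicts the label.
X_train = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
y_train = np.array([1., 1., -1., -1.])
X_test = np.array([[1., 0.], [0., 1.]])

# An inductive bias toward feature 0 yields fast, correct generalization on the
# first probe and little transfer along the task-irrelevant second feature.
traj = simulate_learning(X_train, y_train, X_test, feature_weights=[1.0, 0.1])
print(traj[-1])

In this toy setting, changing the feature weights changes both the speed of learning and the pattern of generalization, which is the sense in which a kernel links a representation to learning behavior; fitting such weights to behavioral data is one plausible reading of the kernel-fitting methods mentioned in the abstract.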