CogSci 2025

August 02, 2025

San Francisco, United States


keywords: concepts and categories, computer science, learning, psychology, artificial intelligence, machine learning, neural networks

Recent research has shown that people can learn more new concepts than the number of examples they are presented with. However, these results relied on strong assumptions about the skills and prior knowledge required for this kind of less-than-one-shot learning: participants had to disentangle soft labels that fuzzily map stimuli to multiple concepts, interpret continuous feature weights, and parse complex compositional statements. We propose a novel minimal paradigm that strips away these assumptions to explore how efficiently people can simultaneously learn visual and symbolic concepts. We show theoretically that it is possible to learn up to $2^{k-1}$ binary features from $k$ examples, and up to $2^{2^{k-1}}$ unique combinations of those features. We validate this empirically, showing that people may be able to learn as many as 8 novel binary features, and up to 256 concepts corresponding to unique compositions of those features, from just 4 examples.
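The counting argument behind these bounds can be sketched numerically (a minimal illustration under our reading of the abstract; the function names are ours, not the paper's): each binary feature corresponds to a labeling of the $k$ examples as having or lacking it, complementary labelings define the same feature (so the $2^k$ labelings collapse to $2^{k-1}$ features), and each concept is a unique combination of those features.

```python
def max_features(k):
    # Each candidate binary feature is a labeling of the k examples as
    # having/lacking the feature; a labeling and its complement define
    # the same feature, so 2**k labelings yield 2**(k-1) features.
    return 2 ** (k - 1)

def max_concepts(k):
    # A concept is a unique combination (subset) of the learnable
    # features, giving 2**(2**(k-1)) possible concepts.
    return 2 ** max_features(k)

print(max_features(4))  # 8 distinct binary features from 4 examples
print(max_concepts(4))  # 256 unique feature combinations
```

With $k=4$ this reproduces the figures reported in the abstract: 8 learnable binary features and 256 concepts.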
