Invited Talks
The Relational Reasoning meeting aims to bring together researchers studying
relational reasoning across disciplines (psychology, neuroscience, linguistics,
computer science), from a variety of perspectives (functional, cognitive-mechanistic,
neurological) and in different systems (nonhuman animals, humans, computer systems).
We have invited keynote speakers from different research fields to ensure the multidisciplinary
nature of the meeting.

Marcel Binz (Helmholtz Institute for Human-Centered AI, Germany)
Dr. Marcel Binz is a research scientist and deputy head of the Institute for Human-Centered AI at Helmholtz Munich. His research employs state-of-the-art machine learning methods to uncover the fundamental principles behind human cognition. He believes that to get a full understanding of the human mind, it is vital to consider it as a whole and not just as the sum of its parts. His current research goal is therefore to establish foundation models of human cognition – models that can not only simulate, predict, and explain human behavior in a single domain but also offer a unified take on our mind.
Talk - Foundation Models of Human Cognition
Abstract. Most cognitive models are domain-specific, meaning that their scope is restricted to a single type of problem. The human mind, on the other hand, does not work like this – it is a unified system whose processes are deeply intertwined. In this talk, I will present our ongoing work on foundation models of human cognition: models that can not only simulate, predict, and explain behavior in a single domain but instead offer a truly universal take on our mind. Together with a large international consortium, we have transcribed data from over 170 experiments – covering all major areas of cognitive psychology, including reinforcement learning, memory, decision-making, probabilistic reasoning, and many more – into a text-based form. We then used this data set to fine-tune a large language model, thereby aligning it to human behavior. The resulting model provides a window into human cognition and can be used for rapid prototyping of behavioral studies, to improve traditional cognitive models, and to generate new hypotheses about human information processing.

Louisa Bogaerts (Ghent University, Belgium)
Louisa Bogaerts is an Assistant Professor in the Department of Experimental Psychology at Ghent University. Her research focuses on the cognitive science of learning and language, drawing evidence from behavioural experiments, neuroimaging, and eye-tracking. Her group's current investigations focus specifically on the human ability to extract regularities from sensory input and on individual differences therein.
Talk - Is there such a thing as a 'good statistical learner'?
Abstract. There has been a surge of research investigating individual differences in the learning of statistical structure, tying them to variability in a range of cognitive (dis)abilities. Several studies have demonstrated that individuals differ robustly from one another in their statistical learning abilities, and that an individual's learning performance is relatively stable over time when measured appropriately. In this talk I will interrogate the question of whether there is a general statistical learning capacity that can sort individuals from ‘bad’ to ‘good’ statistical learners, or whether the inter-individual variability in assimilating statistical environments is only meaningful within the different cognitive domains.

Leonidas Doumas (University of Edinburgh, Scotland)
My research is focused on answering the questions: What do human mental representations look like, and how do we learn them? I employ both empirical and formal (developing process-level computational models) methods to investigate these issues. Broadly, I am interested in how humans learn structured representations, specifically relational representations (like “above”, “chases”, “ameliorates”), from unstructured examples, and use these representations in the service of solving problems. More specifically, my research has explored how children and adults learn relational concepts, how children develop the ability to reason by analogy, and how children and adults learn to recognize melody. More recently, I have started work exploring how children and adults learn to reason about mathematical operations like addition and multiplication, and developing training regimens (motivated by predictions from a computational model of relation learning I have developed) to teach college-age students fractions and how to reason from deductive syllogisms.
Talk - Towards a Theory of Human Visual Reasoning
Abstract. Humans are very good at transferring knowledge across domains. We frequently and capably use information we have learned in one domain to understand and then reason about another. One mundane, but interesting, example is our capacity to perform visual reasoning tasks with simple novel stimuli. In the synthetic visual reasoning task (SVRT; Fleuret et al., 2011), participants learn to categorise stimuli constructed from basic line drawings of abstract shapes according to relational rules (e.g., one category might consist of one object inside another; another might require the largest shape to appear between two smaller shapes). Humans learn the various categories in the SVRT within a few trials and generalise their solutions to novel instances with ease. By contrast, most successful machine vision systems rely on extensive training and fine-tuning and even then frequently fail with novel stimuli. My colleagues and I have developed a (certainly incomplete) visual reasoning pipeline based on successful theories of human object recognition, learning, and analogical reasoning. The pipeline starts from pixel images, recognises objects, learns structured (i.e., symbolic) representations of simple and eventually more complex relations from these inputs, and then uses these representations to solve the SVRT via analogy. Our results mirror both the speed of human learning and the limited training human reasoners require.

Robert Johansson (Stockholm University, Sweden)
Robert Johansson is an interdisciplinary researcher with dual PhDs: one in clinical psychology (2013) and another in computer science (2024), specializing in the development of adaptive AI systems informed by learning psychology. He has developed Machine Psychology, an approach that integrates principles from learning psychology with the Non-Axiomatic Reasoning System (NARS) to create AI systems capable of human-like relational reasoning. Currently an Associate Professor of Clinical Psychology at the Department of Psychology, Stockholm University, Sweden, Robert has extensive experience in emotion-focused therapies and in developing innovative psychological treatments, particularly through guided self-help delivered via the Internet. His interdisciplinary expertise allows him to bridge the gap between psychological science and artificial intelligence, contributing to the development of adaptive AI systems that align with human values.
Talk - Arbitrarily Applicable Relational Responding with the Non-Axiomatic Reasoning System
Abstract. This talk explores the intersection of Relational Frame Theory (RFT) and Artificial General Intelligence (AGI) through the implementation of Arbitrarily Applicable Relational Responding (AARR) within the Non-Axiomatic Reasoning System (NARS). Building on principles of learning psychology and adaptive reasoning, NARS models the flexible cognitive behaviors essential for AGI. Specifically, AARR, a hallmark of human cognitive complexity, enables the derivation and application of relationships between stimuli based on arbitrary contextual cues, facilitating analogical reasoning, abstract problem-solving, and language-like capabilities. The discussion will outline how NARS incorporates operant conditioning and relational framing to enable AARR, emphasizing its implications for advancing Machine Psychology as an interdisciplinary framework. Empirical findings from experiments on stimulus equivalence, symmetry, and functional equivalence with NARS demonstrate its capacity to replicate key aspects of human-like intelligence. By integrating learning psychology paradigms with AI, this research offers a pathway toward more general and adaptable cognitive architectures.

Andrea E. Martin (Donders Centre for Cognitive Neuroimaging, the Netherlands)
Andrea Martin is the principal investigator of the 'Language and Computation in Neural Systems' group at the Donders Centre for Cognitive Neuroimaging, Max Planck Institute, Nijmegen. Her research aims to develop a unified theory of how a physiological system like the brain can represent the wealth of expressions available to us in human language. Words can be combined (or "composed" in Linguistics) into endlessly novel phrases and sentences; theories in Philosophy and Linguistics account for the functional independence of words and sentences as a byproduct of the fact that language and thought are symbol systems: systems of structured representations that separate a ‘variable’ (e.g., the role in a sentence) from a particular value (e.g., the particular word or entity). How the brain achieves compositionality remains unaccounted for in neurobiological, psychological, and computational theories of language. Similarly, the debate about whether and how symbolic computation might be realized in the brain, or in artificial neural networks or other brain-like models, persists and remains vigorous across cognitive science and artificial intelligence. The LaCNS Group fills this foundational gap via a determinedly interdisciplinary approach. We synthesize fundamental insights from the language sciences, computation, and neuroscience. The core of our vision is to capitalize on the role of “rhythmic computation,” as seen in neural oscillations, to achieve symbolic representations in brain-like systems, and then to determine, through neuroscientific experiments, whether the brain solves the problem in a similar way.
Talk - Neural Dynamics Reflect (Linguistic) Structure
Abstract. Human language is an example of a natural system that leverages both statistical (e.g., surprisal) and structured (e.g., syntax) information. I focus on a foundational question at the heart of this issue – how can linguistic structure be encoded in a neural system that is often driven by statistics? I reconcile cognitive neuroimaging data with computational simulations to outline a theory of language representation and processing in the brain that integrates basic insights from linguistics and psycholinguistics with the currency of neural computation, population rhythmic activity.

Teresa Mulhern (South East Technological University, Ireland)
Teresa’s research has previously examined the relevance of Relational Frame Theory (RFT) to higher-order cognitive skills such as classification, language and intelligence. Her research has also focused on applying RFT to teach these skills in both neuromajority populations and neurodivergent and developmentally disabled populations and examining the effects of the acquisition of these relational repertoires on other areas such as language and education. Teresa is also a Board Certified Behaviour Analyst (Doctoral Level: BCBA-D) with over a decade of applied experience. Her interests are in the areas of language development, neurodivergence and education.
Talk - Relational Frame Theory: How Far Have We Come, and What Is Left to Explore?
Abstract. It has been close to 25 years since Hayes, Barnes-Holmes and Roche’s (2001) seminal book, the first to offer a comprehensive framework for language and cognition in accordance with Relational Frame Theory (RFT). At the time of its composition, RFT was very much in its infancy, and the ideas contained within the text, although primarily theoretical, offered interesting avenues for research, including language, intelligence, psychological wellbeing, education and social behaviours. Following this rising call to research, how many of these avenues has RFT research actually addressed? The last twenty years have witnessed a rise in both experimental and applied research within the field, but the question remains: to what else can RFT be applied? The current talk will review the research base already established by the field and the significant advancements made as a result of this research. Finally, the talk will also consider the additional directions and applications of the theory that we may take in the future.

Claire Stevenson (University of Amsterdam, the Netherlands)
Claire Stevenson's research is driven by the question: “How do people get to be so smart?” This has led to a focus on two related research themes: how intelligence develops and the creative process. She develops cognitive tasks and games and asks people and AI models to solve them. She uses tools such as error analysis and neuroscientific techniques (eye-tracking and fMRI/EEG for humans; mechanistic interpretability for AI models) to gain deeper insights into how and why humans and AI models solve these problems the way they do. She then develops and tests mathematical models of these cognitive (developmental) processes to improve our understanding of developing intelligence and creativity in humans and machines.
Talk - Learning to Solve Analogies: Why Do Children Excel Where AI Models Fail?
Abstract. Recent work with multimodal large language models concludes that analogical reasoning (using what you know about one thing to infer knowledge about a new, somehow related instance) has emerged in these systems. My work shows something different: the newest multimodal large language models (MLLMs) only appear to be "reasoning"; when challenged, they seem unable to generalize what they have learned to novel rules or unfamiliar domains. I will present a series of studies demonstrating why I think that analogical reasoning has not emerged in these systems. I will go on to discuss the difference between learning patterns and learning rules, and how disentangling these two can provide insights into why children learn to solve analogies but MLLMs currently do not.

Ivilin Peev Stoianov (Institute of Cognitive Sciences and Technologies, Italy)
Dr. Ivilin Peev Stoianov is Principal Investigator at the Institute of Cognitive Sciences and Technologies (Padova), National Research Council, Italy. With a background in computer science and expertise in computational linguistics, experimental psychology, and neuroscience, he investigates the algorithmic and computational principles underlying cognitive functions in humans and primates. His research integrates generative probabilistic modeling, artificial neural networks, and active inference to study perception, decision-making, and motor control from a computational perspective. His work has advanced theories of Bayesian non-parametric model-based reinforcement learning and dynamic hybrid active inference, as well as our understanding of the neural and computational basis of numerosity perception, spatial organization in the hippocampus, predictive spatial goal coding and reference frame transformations in the parietal cortex, and decision-making in the prefrontal cortex. His recent work focuses on hierarchical kinematic inference, motor planning, and embodied decision-making using active inference.
Talk - Space, Concepts, and Beyond: Hierarchical Generative Modeling of Spatiotemporal Events
Abstract. We recently advanced a novel computational theory of how the hippocampal formation hierarchically organizes spatiotemporal experiences into contexts. In this framework, individual items are organized into sequences based on past experiences, which are further structured into distinct contexts, or maps, according to the spatiotopic relationships among the items that compose these experiences. The model can both infer spatial contexts and generate coherent, context-dependent sequences of items. This idea aligns with the well-established role of the hippocampus in forming cognitive maps and offers hypotheses about the functional role of the generated sequences, known in the hippocampus as "replays". In this talk, I will discuss this hierarchical generative model and its extension to conceptual spaces, where individual items are related through their underlying properties, and the model generates coherent traces within these conceptual spaces.
Flash Talks and Posters
We will also accept a limited number of submitted talks (15 + 5 minutes) and invite researchers to present a research poster during the poster session on the 11th of April. To submit a talk or poster, please register via this link and indicate your preference (talk/poster) so we can contact you.