Research
Current Projects
Lifelong Multimodal Language Learning (LUMO)
DFG Individual Research Grant (2025–2029)
PIs: Jae Hee Lee and Stefan Wermter
University of Hamburg
LUMO develops lifelong-learning multimodal models that remain robust under task changes by identifying, explaining, and leveraging compositional concepts and relations. The project integrates XAI-driven analysis with tightly coupled neuro-symbolic constraints and evaluates the approach on vision–language integration and language-conditioned robotic manipulation.
Past Projects
Neurocognitive Models of Crossmodal Language Learning
DFG Collaborative Research Center (2020–2025)
PIs: Stefan Wermter and Cornelius Weber
University of Hamburg
This project investigated how neural models acquire and generalize language grounded in vision and action.
Formal Lexically Informed Logics for Searching the Web
ERC Consolidator Grant (2018–2020)
PI: Steven Schockaert
Cardiff University
This project explored how to make semantic search and reasoning robust when knowledge is incomplete, distributed, or inconsistent, by combining logical inference with vector-space semantics.
Feodor Lynen Research Fellowship
Humboldt Foundation (2016–2017)
Host: Sanjiang Li
University of Technology Sydney
This fellowship focused on constraint-based spatio-temporal reasoning and distributed algorithms for multiagent settings.
Artificial Intelligence Meets Wireless Sensor Networks
ARC Discovery Project (2015)
PI: Jochen Renz
The Australian National University
This project investigated AI methods for interpreting and reasoning over data from wireless sensor networks, with a focus on extracting usable spatio-temporal structure from noisy, distributed observations.
Spatial Cognition Research
DFG SFB/TR 8 (2009–2014)
PIs: Christian Freksa and Diedrich Wolter
University of Bremen
This work studied qualitative spatial reasoning, including its computational properties, complexity, and practical algorithms for reasoning about directions and other spatial relations.