About Me

I am a postdoctoral researcher in the Knowledge Technology Group at the University of Hamburg. My research focuses on building robust multimodal language models that generalize to new tasks while retaining previously learned knowledge. I employ explainable AI and neuro-symbolic AI both to understand such models and to guide the construction of more robust ones. Previously, I worked on symbolic approaches to AI, such as spatio-temporal reasoning and multiagent systems.

I am an associate editor of AI Communications and co-organized the International Workshop on Spatio-Temporal Reasoning and Learning in 2023 and 2024. I also regularly serve as a PC member of major AI conferences, including AAAI, IJCAI, ECAI, ACL, and EMNLP.

News

Selected Publications

Full publication list available on DBLP, Google Scholar, and Semantic Scholar.

Multimodal Learning

Explainable Artificial Intelligence

Neuro-Symbolic Artificial Intelligence

Compositional Generalization

Continual Learning

Other Machine Learning Topics

Spatio-Temporal Reasoning and Learning

Multiagent Systems

Recent Teaching Activities

  • Lecture on “Large Language Models”, University of Hamburg (2024), with Xufeng Zhao [Slides]
  • Lecture on “Transformers”, University of Hamburg (2024) [Slides]
  • Organizer, LLM+XAI Reading Group, Knowledge Technology, University of Hamburg (2024)
  • Supervisor, Neural Networks Seminar, University of Hamburg (2024)
  • Supervisor, Bio-inspired Artificial Intelligence Seminar, University of Hamburg (2024)
  • Organizer, WISDUM meeting, Knowledge Technology, University of Hamburg (2024)

Thesis Supervision

  • Concept-Based Explanation for Continual Learning Models, Priscilla Cortese, MSc (2024)
  • LeoLM: Evaluating and Enhancing German Language Proficiency in Pretrained Large Language Models, Björn Plüster, MSc (2024), now co-founder of ellamind; his LeoLM is the first comprehensive German LLM suite.
  • Continual Learning for Language-Conditioned Robotic Manipulation, Lennart Bengtson, MSc (2023)
  • Multivariate Normal Methods in Pre-trained Models for Continual Learning, Hans Hergen Lehmann, BSc (2023); an improved version was accepted at TMLR
  • Generalization of Transformer-Based Models on Visual Question Answering Tasks, Frederic Voigt, MSc (2023)
  • Improving Compositional Generalization By Learning Concepts Individually, Ramtin Nouri, MSc (2023)
  • Learning Concepts: A Developmental Lifelong Learning Approach to Visual Question Answering, Ramin Farkhondeh, BSc (2022)
  • Benchmarking Faithfulness: Towards Accurate Natural Language Explanations in Vision-Language Tasks, Jakob Ambsdorf, MSc (2022), now a PhD student at the University of Copenhagen
  • Tackling The Binding Problem And Compositional Generalization In Multimodal Language Learning, Caspar Volquardsen, BSc (2021), ICANN 2022
  • Learning Bidirectional Translation Between Robot Actions and Linguistic Descriptions, Markus Heidrich, BSc (2021)
  • Using the Reformer for Efficient Summarization, Yannick Wehr, BSc (2020)
  • Generalization in Multi-Modal Language Learning from Simulation, Aaron Eisermann, BSc (2020), IJCNN 2021
  • Commonsense Validation and Explanation, Christian Rahe, BSc (2020)

Blog Posts

Research Experience

Education

  • Dr. rer. nat. in Computer Science, University of Bremen (2009–2013)
  • Visiting PhD student, North Carolina State University (2012)
  • Visiting PhD student, University at Buffalo (2011)
  • Diplom in Mathematics, University of Bremen (2003–2009)
  • DAAD exchange student, Seoul National University (2008)

Contact

Address

Knowledge Technology Research Group
Vogt-Koelln-Str. 30
22527 Hamburg, Germany
Google Maps

Email

jae@jaeheelee.de