Wednesday 

Workshop room 2 

10:20 - 11:20 

(UTC+02)

Workshop (60 min)

Part 1/2: From Hallucination to Justification: Hands-On Explainability for LLMs

Human beings are biased and often wrong. AI learns from human-created data. Therefore, AI is biased and often wrong. This has been a critical problem across machine learning applications in recent years. To break open the black box of AI models and understand how they make decisions, the concept of explainability was introduced.

Ethics
GenAI

Then, LLMs entered the chat. They answer our questions confidently and in beautiful prose, even when they are making up data. Explainability thus becomes essential to trusting (or not) their output. But when existing explainable AI methods cannot be directly applied to these models, what do we do?

In this workshop, we will delve into explainability and its importance in the current context of LLMs and agents. Starting from traditional ML and then focusing on LLMs, we will cover the different methods that can be implemented, from well-known techniques to novel proposals stemming from our internal research. We will also introduce research-proven prompting strategies, tips, and tricks to integrate explanations into third-party LLM services that are not natively explainable.

Through guided exercises, you will get to peek under the hood of AI models, LLMs' behavior, and agents' reasoning by trying out these different techniques and seeing their benefits and limitations first-hand. You will experience the risks and challenges that generative AI and agentic AI bring when implementing explainability, and learn practical ways to tackle them.

Lucía Conde-Moreno

Lucía Conde-Moreno is a consultant software engineer at Info Support, specializing in data and AI applications. She is known as a Jack of All Trades by her colleagues (and as a Master of None by her imposter syndrome), having worked in varied roles ranging from .NET and Java developer to data scientist and platform engineer. She has worked for national and international clients in diverse fields such as finance, health care, energy, and education. She is part of the AI Champions chapter, which promotes AI-augmented engineering tools, and she is responsible for supervising internal research in subfields of AI such as explainability and computer vision. When she is not working, she is busy switching across random hobbies, from filmmaking to DJing. She holds an MSc in Computer Science and a BSc in Telecommunications Engineering.

Tessel Haagen

Tessel is a consultant in data & AI at Info Support, specializing in interpretable AI. In her daily work, she applies AI-augmented engineering to build and evaluate intelligent systems. She is known as an enthusiastic knowledge source and an insatiably curious lifelong learner. Outside of work, she channels her strategic thinking into board games (with a soft spot for complex strategy games) and escapes into fantasy novels. With an MSc in Artificial Intelligence, an MSc in Computer Science, and an MA in Linguistics, she combines a strong technical foundation with a deep interest in language and reasoning.