Room 6 

17:40 - 18:40 


Talk (60 min)

LLMs gone wild

We've all seen fancy demos of LLM apps that use your own data to answer questions... but what happens after the demo, when you put the app into production and it starts hallucinating or behaving strangely?


In this session we will explore how things can go wrong, very wrong. We will look at LLMs from both responsible AI and security perspectives, see what happens when they are exploited or used improperly, and discuss what you can do to evaluate and combat hallucination and other undesired effects.

Tess Ferrandez-Norlander

Tess is a developer/data scientist working at Microsoft. Over the past 20 years she has changed the way we do .NET debugging and developed a large number of mobile apps. A couple of years ago she moved into the world of data science and machine learning, working with many of the largest companies in Europe and beyond on really tough ML problems.

She has spoken at many conferences around the world on a wide variety of topics, including deep .NET debugging, UX, web development and machine learning. You can also find her on Twitter at @TessFerrandez.