Friday
Room 1
10:20 - 11:20
(UTC+02)
Talk (60 min)
Mirror, mirror: LLMs and the illusion of humanity
Large language models (LLMs) exploded into mainstream awareness in 2022 and have continued to fascinate us ever since.
But what is it about LLMs, compared to other similarly complex algorithms, that has so captured our imagination? And why are we so ready to believe that these models have begun to show signs of human behavior?
In this talk, we’ll delve into some of the more extraordinary claims that have been made about LLMs in the past few years, including that these models are showing signs of sentience or intelligence. We’ll discuss why humans tend to perceive such traits in these models, due to the way they mirror back a “lossy compression” of our humanity. Finally, we’ll talk about how dispelling the myth that LLMs are anything more than language models can help us apply them to their best current uses.