Keywords: Machine learning, AI, safety, uncertainty quantification
Presentation available in: English and Italian
Calibrating your confidence requires thinking of all the possible ways you can be wrong, and that's hard. AI models, especially language models like ChatGPT, suffer from dangerous overconfidence: they always provide a confident answer to any question you ask, even when they don't actually know the answer.
This has two consequences: (1) we should be more skeptical and critical when we use AI, and (2) we should find a way to teach AI models to say "I don't know", which is the goal of my PhD project.
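A minimal sketch of what this overconfidence looks like in practice (not part of the presentation materials; the toy dataset, numbers, and variable names are invented for illustration): a simple classifier is trained on two well-separated clusters of points and then asked about a point far away from anything it has ever seen. Its output probability is still close to 100%, because nothing in the model lets it say "I don't know".

    # Toy demonstration of model overconfidence (illustrative sketch only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Training data: two well-separated 2D clusters, class 0 and class 1.
    X = np.vstack([rng.normal([-2.0, 0.0], 0.5, size=(100, 2)),
                   rng.normal([+2.0, 0.0], 0.5, size=(100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    clf = LogisticRegression().fit(X, y)

    # A point far outside anything the model was trained on.
    far_away_point = np.array([[50.0, 50.0]])
    proba = clf.predict_proba(far_away_point)[0]

    print(f"Predicted class: {proba.argmax()}, confidence: {proba.max():.4f}")
    # Prints a confidence of roughly 1.0, even though the honest answer
    # for such an unfamiliar input would be "I don't know".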
My presentation will introduce the basic concepts of Deep Learning (the field behind AI development) through examples and animations. I will do my best to communicate complex mathematical concepts in an intuitive way, without relying on formalism; that intuitive approach is exactly what made me passionate about math as a kid. Curiosity, for me, was the motivation to study the hard formalism, and I believe that showing this very cool application of mathematical concepts can make students curious to learn more.
As a side effect, especially for less scientifically inclined students, this presentation will raise some awareness of the limitations of the AI they use every day (because no, they won't stop using it just because you ask them to).