Marco Zaffalon
IDSIA (USI-SUPSI) & Artificialy
IAS SEMINAR #04
Turin / OGR / Sala Duomo / 21 October 2025 / 2:00-5:00 pm CEST
We are witnessing peculiar times, with trillion-dollar investments in AI, much hype about the imminent arrival of artificial general intelligence (AGI), and widely diverging opinions about the actual power of AI even among experts. Moreover, AI applications that actually help industry are far from commonplace, and the slow pace of AI adoption appears to strengthen some of the pessimistic views: is AI just hype? Aren’t we confusing a linguistic technology (GPTs) with a cognitive revolution? Isn’t all this just statistics after all? To clarify the matter, I find it useful to focus the discussion on the concept of reliability. Without reliable AI systems, we will be severely limited in our ability to apply AI to the real world. And current (Gen)AI systems are simply unreliable. Pursuing reliability today requires a great and continuous amount of human work on top of AI systems. Can we design AI systems that are reliable by design? Neurosymbolic AI appears to delineate a main path to this goal, and among such approaches, systems based on “causal AI” look of paramount importance. After introducing these ideas with some details and examples, I will focus on AI systems based on causal inference, going into some detail while climbing the ladder of causation: seeing, doing and imagining. I will end this abstract with a couple of teaser questions for you (data) scientists: do you really think you can do AGI without knowing the difference between observational and interventional data? Can you tell what a confounder is?
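As a minimal illustration of the closing teaser (not part of the talk itself; the variables, coefficients, and numbers below are purely hypothetical), the following sketch simulates a hidden confounder Z that drives both a “treatment” X and an outcome Y. The observational slope of Y on X (seeing) differs from the slope obtained after intervening on X (doing), which is exactly the gap between observational and interventional data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder Z influences both the "treatment" X and the outcome Y.
z = rng.normal(size=n)
x_obs = 1.5 * z + rng.normal(size=n)                 # observational X listens to Z
y_obs = 2.0 * x_obs + 3.0 * z + rng.normal(size=n)   # true causal effect of X on Y is 2.0

# Seeing: the observational slope mixes the causal effect with Z's influence.
obs_slope = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)

# Doing: intervene with do(X = x), i.e. set X independently of Z, then generate Y.
x_do = rng.normal(size=n)                             # X no longer listens to Z
y_do = 2.0 * x_do + 3.0 * z + rng.normal(size=n)
do_slope = np.cov(x_do, y_do)[0, 1] / np.var(x_do)

print(f"observational slope: {obs_slope:.2f}")  # ~3.4, biased by the confounder
print(f"interventional slope: {do_slope:.2f}")  # ~2.0, the true causal effect
```

Fitting a model to the observational data alone would overestimate the effect of X; only the interventional setting recovers the causal coefficient, which is why the distinction matters for any AI system meant to act reliably in the world.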