During the BDVA – Big Data Value Association DataWeek 2026 event in Oslo, Norway, on 5–6 May 2026, TURING had the opportunity to share its perspective on how the project conceptually frames, implements, and quantifies efficient, trustworthy AI.
The session was organised as an interactive presentation together with our sibling projects PANDORA Horizon Europe Project, AI-DAPT EU Project, MANOLO Project, and RAIDO Project. Such synergies with relevant European AI and data projects are key for TURING, enabling knowledge exchange, stronger cross-project connections, and a shared contribution to the European trustworthy AI ecosystem. Despite the different contexts, levels of maturity, and technological approaches represented across the projects, an important common conclusion emerged: trustworthy AI requires explainability, and explainability must be designed with the human in the loop.
Represented by Dr Konstantinos Tserpes (ICCS – NTUA), TURING showcased its own approach to this challenge, focusing in particular on:
– Physics-informed and hybrid AI solvers, whose outputs can be interpreted against known physical laws, equations, and simulation logic, rather than treated as opaque neural predictions (see the first sketch after this list).
– Uncertainty quantification and explanation strategies, which help users understand not only what a model predicts, but also how reliable that prediction is (see the second sketch after this list).
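To make the first point concrete, here is a minimal illustrative sketch of a physics-informed loss in PyTorch, using a simple harmonic-oscillator ODE u'' + ω²u = 0 as a stand-in governing equation. The network, frequency, and initial condition are assumptions for illustration, not TURING's actual solvers.

```python
import torch
import torch.nn as nn

omega = 2.0  # assumed angular frequency; illustrative only

# Small surrogate network mapping time t -> displacement u(t)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def physics_residual(t):
    """Residual of the governing ODE u'' + omega^2 * u = 0 at times t."""
    t = t.clone().requires_grad_(True)
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, t, torch.ones_like(du), create_graph=True)[0]
    return d2u + omega ** 2 * u

t_col = torch.linspace(0.0, 1.0, 50).reshape(-1, 1)  # collocation points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss_phys = physics_residual(t_col).pow(2).mean()  # physics consistency
    u0 = net(torch.zeros(1, 1))                        # assumed u(0) = 1
    loss = loss_phys + (u0 - 1.0).pow(2).mean()
    loss.backward()
    opt.step()
# Because the loss is tied to a known equation, a large residual is itself
# an interpretable signal that a prediction violates the physics.
```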
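On the second point, one common uncertainty-quantification strategy is Monte Carlo dropout, sketched below with a small hypothetical regression network; TURING's own strategies may differ. Keeping dropout active at inference yields a spread of predictions whose standard deviation indicates how reliable each one is.

```python
import torch
import torch.nn as nn

# Hypothetical regression network with dropout; sizes are illustrative.
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def predict_with_uncertainty(x, n_samples=100):
    """Predictive mean and std over stochastic forward passes."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(8, 4)  # hypothetical batch of inputs
mean, std = predict_with_uncertainty(x)
# Reporting std alongside the mean tells users not only what the model
# predicts, but how much to trust each individual prediction.
```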
For TURING, explainability is therefore not an add-on. It is an integral part of building AI systems that are efficient, trustworthy, and usable in complex scientific and engineering domains.