Veritas@Leiden

VeritasLab: Safe AI and Automated Reasoning

The Veritas Lab advances methods to ensure that hardware and AI systems are correct, safe, and trustworthy. We develop the theoretical and algorithmic foundations of formal methods and automated reasoning for rigorous analysis of hardware circuits and neural control systems. Our primary application domains are trustworthy AI and safe autonomy: we design certified learning methods and establish formal safety guarantees for the behaviour of AI and autonomous systems, with a particular focus on certificate-based neural control and safe reinforcement learning.
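To give a flavour of certificate-based control, the sketch below is a minimal, self-contained illustration (our own toy example, not a Veritas Lab artifact): a discrete-time double integrator is stabilised by a hand-picked linear feedback gain, a quadratic Lyapunov certificate V(x) = xᵀPx is computed for the closed loop, and the decrease condition is then checked on random samples. All system matrices and gains here are assumptions chosen for illustration; sampling can only falsify a certificate, whereas the lab's research concerns formally verifying such conditions over the entire state space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system (illustrative assumption): discrete-time double integrator
# x_{k+1} = A x_k + B u_k with linear state feedback u_k = -K x_k.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
K = np.array([[3.0, 4.0]])   # stabilising gain, chosen by hand
Acl = A - B @ K              # closed-loop dynamics matrix

# Compute the certificate: solve the discrete Lyapunov equation
# Acl^T P Acl - P = -I by summing the convergent series
# P = sum_k (Acl^T)^k (Acl)^k, valid since Acl is Schur stable.
P = np.zeros((2, 2))
term = np.eye(2)
for _ in range(500):
    P += term
    term = Acl.T @ term @ Acl

def V(x):
    """Quadratic Lyapunov candidate V(x) = x^T P x."""
    return float(x @ P @ x)

# Sampling-based check: V must strictly decrease along every sampled
# closed-loop step. This can only falsify the certificate; a formal
# guarantee would discharge the decrease condition symbolically
# (e.g. with an SMT solver or interval arithmetic).
violations = 0
for _ in range(10_000):
    x = rng.uniform(-5, 5, size=2)
    if np.linalg.norm(x) > 1e-9 and V(Acl @ x) >= V(x):
        violations += 1

print("certificate violations:", violations)
```

By construction V(Acl·x) − V(x) = −‖x‖², so the sampled check finds no violations; for neural controllers the same decrease condition is verified over the whole state space rather than at samples.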