Thu, July 10, 2025
The rapid proliferation of artificial intelligence (A.I.) and large language models (LLMs) is revolutionizing our world. However, as these systems increasingly find real-world applications in controlling physical systems, such as autonomous robots, self-driving cars, and other critical infrastructure, their potential to cause harm has escalated dramatically. This stems from high error rates, a lack of robustness, hallucinations, and a class of LLM attacks known as jailbreaking. Ensuring safety in safety-critical contexts requires a paradigm shift from traditional A.I. development toward robust safety mechanisms. In this talk, I will explore how ideas from control theory can provide rigorous tools and frameworks for developing safety filters tailored to control systems with deep learning in the loop and to LLM-controlled robots, including those driven by vision-language-action (VLA) models. By leveraging tools such as integral quadratic constraints, temporal logic synthesis, and control barrier functions, I will address how our community can play a crucial role in designing A.I. safety systems that effectively mitigate risks while preserving the utility and adaptability of A.I. in real-world applications.
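Control barrier functions (CBFs) are one of the safety-filter constructions named in the abstract. The sketch below is an illustrative example, not material from the talk: it shows the standard minimally invasive CBF filter for an assumed single-integrator robot avoiding a circular obstacle, where the dynamics, barrier function, gain `alpha`, and all numerical values are assumptions made for the example, and the single-constraint quadratic program is solved in closed form rather than with a QP solver.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, obstacle, radius, alpha=1.0):
    """Minimally modify a nominal control so the CBF safety condition holds.

    Assumed setup (illustrative only): single-integrator dynamics x_dot = u,
    barrier h(x) = ||x - obstacle||^2 - radius^2 (stay outside a disk).
    The condition h_dot(x, u) + alpha * h(x) >= 0 is one linear constraint
    a @ u >= b, so the QP  min ||u - u_nom||^2  s.t.  a @ u >= b
    reduces to a projection onto a half-space.
    """
    h = np.dot(x - obstacle, x - obstacle) - radius**2  # barrier value h(x)
    a = 2.0 * (x - obstacle)                            # grad h(x); h_dot = a @ u
    b = -alpha * h                                      # constraint: a @ u >= -alpha * h
    if a @ u_nom >= b:
        return u_nom                                    # nominal control already safe
    # Shift u_nom by the smallest amount that satisfies the constraint.
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Example: a go-to-goal controller aims straight at a goal behind the
# obstacle; the filter attenuates the unsafe component of the command.
x = np.array([0.0, 0.0])
goal = np.array([4.0, 0.1])
obstacle, radius = np.array([2.0, 0.0]), 0.5

u_nom = goal - x                  # simple proportional nominal controller
u_safe = cbf_safety_filter(x, u_nom, obstacle, radius)
print("nominal:", u_nom, "filtered:", u_safe)
```

For systems with multiple constraints, input limits, or higher-order dynamics, the same filtering idea is typically posed as a small quadratic program solved at every control step rather than in closed form.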