
Autonomous systems in general, and self-driving cars in particular, hold the promise of being among the most disruptive technologies to emerge in recent years. However, if the security and resilience of these systems are not proactively addressed, they will pose a significant threat, potentially impairing our relationship with these technologies and leading society to permanently reject their adoption.
In this talk, I will focus on three problems in the context of designing resilient and verifiable autonomous systems: (i) the design of resilient state estimators in the presence of false data injection attacks, (ii) the design of resilient multi-robot motion planning in the presence of Denial-of-Service (DoS) attacks, and (iii) the formal verification of neural network-based controllers. I will argue that, although heterogeneous in nature, all of these problems have something in common: each can be formulated as a feasibility problem over a class of formulas amenable to monotone Satisfiability Modulo Convex programming (SMC for short). I will then present a new SMC decision procedure that uses a lazy combination of Boolean satisfiability solving and convex programming to either produce a satisfying assignment or determine that the formula is unsatisfiable. I will conclude with experimental results demonstrating the real-time performance and resilience of the proposed algorithms.
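To make the "lazy combination" concrete, below is a minimal, illustrative sketch in Python of how such a decision procedure might alternate between Boolean search and convex feasibility checks. It is not the procedure from the talk: the naive enumeration of assignments stands in for a real SAT solver, the convex constraints are restricted to linear ones checked with scipy.optimize.linprog, and all names (smc_solve, convex_of, and so on) are hypothetical.

```python
from itertools import product

import numpy as np
from scipy.optimize import linprog


def smc_solve(n_bools, clauses, convex_of, dim):
    """Lazily alternate Boolean search and convex feasibility checks.

    clauses   -- CNF over variables 1..n_bools (a negative literal negates)
    convex_of -- maps a Boolean variable v to a linear constraint (a, b),
                 read as: if v is true, then a @ x <= b must hold
    dim       -- dimension of the real-valued variable x
    """
    blocking = []  # clauses learned from infeasible convex subproblems
    for bits in product([False, True], repeat=n_bools):
        assign = {i + 1: bits[i] for i in range(n_bools)}

        def sat(clause):  # a clause holds if any of its literals does
            return any(assign[abs(lit)] == (lit > 0) for lit in clause)

        if not all(sat(c) for c in clauses + blocking):
            continue  # Boolean structure violated; try the next assignment
        # Collect the convex constraints activated by the true variables.
        active = [v for v, val in assign.items() if val and v in convex_of]
        if not active:
            return assign, np.zeros(dim)  # no convex constraint to check
        A = np.array([convex_of[v][0] for v in active])
        b = np.array([convex_of[v][1] for v in active])
        res = linprog(np.zeros(dim), A_ub=A, b_ub=b,
                      bounds=[(None, None)] * dim)
        if res.success:
            return assign, res.x  # satisfying assignment plus a witness x
        # Infeasible: block this combination of active constraints.
        # (The real procedure extracts a small infeasibility certificate,
        # so one learned clause prunes many Boolean assignments at once.)
        blocking.append([-v for v in active])
    return None  # no assignment survives: the formula is unsatisfiable
```

For example, requiring both x <= 0 and x >= 1 is Boolean-satisfiable but convex-infeasible, so the procedure correctly reports unsatisfiability:

```python
convex = {1: (np.array([1.0]), 0.0),    # x <= 0
          2: (np.array([-1.0]), -1.0)}  # -x <= -1, i.e. x >= 1
print(smc_solve(2, [[1], [2]], convex, dim=1))  # -> None
```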