diff --git a/_projects/constraint_aware_refinement_for_verification.md b/_projects/constraint_aware_refinement_for_verification.md
index 85937fc..52d42e5 100644
--- a/_projects/constraint_aware_refinement_for_verification.md
+++ b/_projects/constraint_aware_refinement_for_verification.md
@@ -5,25 +5,28 @@ date: 2024-09-05
 description: This project developed an approach to reduce conservativeness in reachable set calculations for neural feedback loops
 summary:
 
-featured_image: '/images/projects/CARV.png'
+featured_image: '/images/projects/carv.gif'
 
 authors:
 - nrober
 
 papers:
+- rober2024constraint
 
 active: true
 ---
 
 ### About
-Neural networks~(NNs) are becoming increasingly popular in the design of control pipelines for autonomous systems. However, since the performance of NNs can degrade in the presence of out-of-distribution data or adversarial attacks, systems that have NNs in their control pipelines, i.e., neural feedback loops~(NFLs), need safety assurances before they can be applied in safety-critical situations. Reachability analysis offers a solution to this problem by calculating reachable sets that bound the possible future states of an NFL and can be checked against dangerous regions of the state space to verify that the system does not violate safety constraints. Since exact reachable sets are generally intractable to calculate, reachable set over approximations~(RSOAs) are typically used. The problem with RSOAs is that they can be overly conservative, making it difficult to verify the satisfaction of safety constraints, especially over long time horizons or for highly nonlinear NN control policies. Refinement strategies such as partitioning or symbolic propagation are typically used to limit the overconservativeness of RSOAs, but these approaches come with a high computational cost and often can only be used to verify safety for simple reachability problems. This project presents Constraint-Aware Refinement for Verification (CARV): an efficient refinement strategy that reduces the overconservativeness of RSOAs by explicitly using the safety constraints on the NFL to refine RSOAs only where necessary.
+Neural networks (NNs) are becoming increasingly popular in the design of control pipelines for autonomous systems. However, since the performance of NNs can degrade in the presence of out-of-distribution data or adversarial attacks, systems that have NNs in their control pipelines, i.e., neural feedback loops (NFLs), need safety assurances before they can be applied in safety-critical situations. Reachability analysis offers a solution to this problem by calculating reachable sets that bound the possible future states of an NFL and can be checked against dangerous regions of the state space to verify that the system does not violate safety constraints. Since exact reachable sets are generally intractable to calculate, reachable set over-approximations (RSOAs) are typically used. The problem with RSOAs is that they can be overly conservative, making it difficult to verify the satisfaction of safety constraints, especially over long time horizons or for highly nonlinear NN control policies. Refinement strategies such as partitioning or symbolic propagation are typically used to limit the over-conservativeness of RSOAs, but these approaches come with a high computational cost and often can only be used to verify safety for simple reachability problems.
+This project presents Constraint-Aware Refinement for Verification (CARV): an efficient refinement strategy that reduces the over-conservativeness of RSOAs by explicitly using the safety constraints on the NFL to refine RSOAs only where necessary.
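+
+As a rough illustration of the constraint-aware idea (a minimal sketch only, not the CARV implementation; `coarse_step`, `refined_reach`, and the interval set representation are hypothetical placeholders), RSOAs are propagated with a cheap method, and a step is refined only when its RSOA overlaps the unsafe region:
+
+```python
+import numpy as np
+
+def intersects(box, other):
+    """Axis-aligned boxes, each given as a (lower, upper) pair of arrays."""
+    return bool(np.all(box[1] >= other[0]) and np.all(other[1] >= box[0]))
+
+def verify_safe(x0_box, unsafe_box, horizon, coarse_step, refined_reach):
+    """Certify that the RSOAs avoid unsafe_box, refining only where necessary.
+
+    coarse_step(box) -> cheap, conservative RSOA for the next step.
+    refined_reach(x0_box, t) -> tighter RSOA at step t (e.g., via partitioning
+    or symbolic propagation); both callables are hypothetical stand-ins.
+    """
+    box = x0_box
+    for t in range(1, horizon + 1):
+        box = coarse_step(box)
+        if intersects(box, unsafe_box):      # constraint at risk: refine this step
+            box = refined_reach(x0_box, t)
+            if intersects(box, unsafe_box):
+                return False                 # safety could not be certified
+    return True                              # every RSOA avoids the unsafe set
+
+# Toy usage: stable scalar dynamics x_{t+1} = 0.9 x_t with interval propagation.
+step = lambda b: (0.9 * b[0], 0.9 * b[1])
+print(verify_safe((np.array([-1.0]), np.array([1.0])),
+                  (np.array([2.0]), np.array([3.0])), 10, step,
+                  lambda b0, t: b0))         # prints True; refinement never triggers
+```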
 
-![](/images/projects/CARV.png)
+![](/images/projects/carv.gif)
 
-### Software
+### Links
 
-Coming soon.
+* [Paper](https://ieeexplore.ieee.org/abstract/document/10804193)
+* [Code](https://github.com/mit-acl/CARV)
 
 ### Sponsor
diff --git a/_projects/online-reachability-for-uncertain-systems.md b/_projects/online-reachability-for-uncertain-systems.md
index 3ae2796..5046c5f 100644
--- a/_projects/online-reachability-for-uncertain-systems.md
+++ b/_projects/online-reachability-for-uncertain-systems.md
@@ -10,6 +10,10 @@ featured_image: '/images/projects/heron_reachability_icon.png'
 authors:
 - nrober
 
+papers:
+- rober2024online
+- mahesh2025safe
+
 active: true
 ---
 
@@ -20,7 +24,7 @@ As autonomy pipelines become more complex, possibly including learned components,
 
 ![](/images/projects/heron_reachability.png)
 
-### Software
+### Links
 
-Software is in review.
+* [Paper](https://ieeexplore.ieee.org/abstract/document/10610163)
 
diff --git a/_projects/safety-filtering-for-systems-with-perception-models.md b/_projects/safety-filtering-for-systems-with-perception-models.md
new file mode 100644
index 0000000..167cf94
--- /dev/null
+++ b/_projects/safety-filtering-for-systems-with-perception-models.md
@@ -0,0 +1,36 @@
+---
+title: Safety Filtering for Systems with Perception Models Subject to Adversarial Attacks
+date: 2026-01-01
+
+description: This project developed an approach to guard systems with NN-based state estimators from adversarial attacks designed to cause safety constraint violations
+summary:
+
+featured_image: '/images/projects/guardian_cover.png'
+
+authors:
+- nrober
+- ameredit
+
+papers:
+- rober2026guardian
+
+active: true
+---
+
+### About
+Safety filtering is an effective method for enforcing constraints in safety-critical systems, but existing methods typically assume perfect state information. This limitation is especially problematic for systems that rely on neural network (NN)-based state estimators, which can be highly sensitive to noise and adversarial input perturbations. We address these problems by introducing GUARDIAN: Guaranteed Uncertainty-Aware Reachability Defense against Adversarial INterference, a safety filtering framework that provides formal safety guarantees for systems with NN-based state estimators. At runtime, GUARDIAN uses neural network verification tools to provide guaranteed bounds on the system’s state estimate given possible perturbations to its observation. It then uses a modified Hamilton-Jacobi reachability formulation to construct a safety filter that adjusts the nominal control input based on the verified state bounds and safety constraints. The result is an uncertainty-aware filter that ensures safety despite the system’s reliance on an NN estimator with noisy, possibly adversarial, input observations. Theoretical analysis and numerical experiments demonstrate that GUARDIAN effectively defends systems against adversarial attacks that would otherwise lead to a violation of safety constraints.
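+
+At a high level, the runtime logic can be sketched as follows (illustrative only, not the GUARDIAN implementation; `verified_state_bounds`, `hj_value`, and `safe_control` are hypothetical stand-ins for the NN verification tool, the precomputed Hamilton-Jacobi value function, and a safety-preserving controller):
+
+```python
+import numpy as np
+from itertools import product
+
+def guarded_control(obs, u_nominal, eps, verified_state_bounds, hj_value,
+                    safe_control, threshold=0.0):
+    """Keep the nominal input only if it is certified safe for every state
+    consistent with the (possibly attacked) observation.
+
+    verified_state_bounds(obs, eps) -> (x_lo, x_hi): guaranteed bounds on the
+        estimator output under any observation perturbation of size <= eps.
+    hj_value(x) -> scalar: precomputed reachability value; > threshold means safe.
+    """
+    x_lo, x_hi = verified_state_bounds(obs, eps)
+    # Evaluate the value function at the corners of the verified state box.
+    # (A sound filter would bound hj_value over the entire box; corners are
+    # used here purely for illustration.)
+    corners = [np.array(c) for c in product(*zip(x_lo, x_hi))]
+    values = [hj_value(x) for x in corners]
+    if min(values) > threshold:
+        return u_nominal                      # safe for every consistent state
+    x_worst = corners[int(np.argmin(values))]
+    return safe_control(x_worst)              # fall back to a safety-preserving input
+```
+
+In GUARDIAN itself, the worst case over the verified bounds is handled through the modified Hamilton-Jacobi reachability formulation rather than a corner check, so the sketch above should be read only as a conceptual outline.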
+
+### Links
+
+* [Paper](https://arxiv.org/abs/2602.06026)
+
diff --git a/images/projects/carv.gif b/images/projects/carv.gif
new file mode 100644
index 0000000..35f51ed
Binary files /dev/null and b/images/projects/carv.gif differ
diff --git a/images/projects/guardian_cover.png b/images/projects/guardian_cover.png
new file mode 100644
index 0000000..a127945
Binary files /dev/null and b/images/projects/guardian_cover.png differ
diff --git a/images/projects/guardian_trajectory.gif b/images/projects/guardian_trajectory.gif
new file mode 100644
index 0000000..39e6250
Binary files /dev/null and b/images/projects/guardian_trajectory.gif differ