13 changes: 8 additions & 5 deletions _projects/constraint_aware_refinement_for_verification.md
@@ -5,25 +5,28 @@ date: 2024-09-05
description: This project developed an approach to reduce conservativeness in reachable set calculations for neural feedback loops
summary:

featured_image: '/images/projects/CARV.png'
featured_image: '/images/projects/carv.gif'

authors:
- nrober


papers:
- rober2024constraint

active: true
---

### About
Neural networks~(NNs) are becoming increasingly popular in the design of control pipelines for autonomous systems. However, since the performance of NNs can degrade in the presence of out-of-distribution data or adversarial attacks, systems that have NNs in their control pipelines, i.e., neural feedback loops~(NFLs), need safety assurances before they can be applied in safety-critical situations. Reachability analysis offers a solution to this problem by calculating reachable sets that bound the possible future states of an NFL and can be checked against dangerous regions of the state space to verify that the system does not violate safety constraints. Since exact reachable sets are generally intractable to calculate, reachable set over approximations~(RSOAs) are typically used. The problem with RSOAs is that they can be overly conservative, making it difficult to verify the satisfaction of safety constraints, especially over long time horizons or for highly nonlinear NN control policies. Refinement strategies such as partitioning or symbolic propagation are typically used to limit the overconservativeness of RSOAs, but these approaches come with a high computational cost and often can only be used to verify safety for simple reachability problems. This project presents Constraint-Aware Refinement for Verification (CARV): an efficient refinement strategy that reduces the overconservativeness of RSOAs by explicitly using the safety constraints on the NFL to refine RSOAs only where necessary.
Neural networks (NNs) are becoming increasingly popular in the design of control pipelines for autonomous systems. However, since the performance of NNs can degrade in the presence of out-of-distribution data or adversarial attacks, systems that have NNs in their control pipelines, i.e., neural feedback loops (NFLs), need safety assurances before they can be applied in safety-critical situations. Reachability analysis offers a solution to this problem by calculating reachable sets that bound the possible future states of an NFL and can be checked against dangerous regions of the state space to verify that the system does not violate safety constraints. Since exact reachable sets are generally intractable to calculate, reachable set over-approximations (RSOAs) are typically used. The problem with RSOAs is that they can be overly conservative, making it difficult to verify the satisfaction of safety constraints, especially over long time horizons or for highly nonlinear NN control policies. Refinement strategies such as partitioning or symbolic propagation are typically used to limit the overconservativeness of RSOAs, but these approaches come with a high computational cost and often can only be used to verify safety for simple reachability problems. This project presents Constraint-Aware Refinement for Verification (CARV): an efficient refinement strategy that reduces the overconservativeness of RSOAs by explicitly using the safety constraints on the NFL to refine RSOAs only where necessary.

![](/images/projects/CARV.png)
<!-- ![](/images/projects/CARV.png) -->
![](/images/projects/carv.gif)
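
As a rough illustration of the refinement logic described above (not the CARV implementation), the sketch below propagates a cheap interval over-approximation of a toy linear closed loop and spends refinement effort, here simple box partitioning, only at steps where the coarse bound cannot rule out a constraint violation. The dynamics matrix `A`, the width-proportional padding in `step_box`, the constraint `X_MAX`, and the helpers `split` and `refined_step` are all illustrative stand-ins for the NN-in-the-loop quantities a real reachability tool would compute.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # toy closed-loop dynamics: x+ = A x
X_MAX = 1.0                  # safety constraint: first state must stay below X_MAX

def step_box(lo, hi, pad=0.05):
    """One-step interval over-approximation of x+ = A x. The width-proportional
    `pad` term mimics bound looseness that grows with the input set, as is
    typical for NN relaxations."""
    Ap, An = np.clip(A, 0, None), np.clip(A, None, 0)
    slack = pad * (hi - lo)
    return Ap @ lo + An @ hi - slack, Ap @ hi + An @ lo + slack

def split(lo, hi):
    """Bisect the box along its widest axis (a simple refinement step)."""
    axis = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[axis] + hi[axis])
    upper_lo, lower_hi = lo.copy(), hi.copy()
    upper_lo[axis], lower_hi[axis] = mid, mid
    return (lo, lower_hi), (upper_lo, hi)

def refined_step(lo, hi):
    """Refinement used only when the coarse step is inconclusive: propagate
    each half separately and take the interval hull of the results."""
    (lo1, hi1), (lo2, hi2) = (step_box(l, h) for l, h in split(lo, hi))
    return np.minimum(lo1, lo2), np.maximum(hi1, hi2)

# Check the constraint over a 10-step horizon, refining only where needed.
lo, hi = np.array([0.0, 0.3]), np.array([0.2, 0.5])   # initial state set
for t in range(10):
    new_lo, new_hi = step_box(lo, hi)                 # cheap RSOA first
    if new_hi[0] >= X_MAX:                            # coarse check inconclusive?
        new_lo, new_hi = refined_step(lo, hi)         # refine only at this step
    if new_hi[0] >= X_MAX:
        print(f"step {t}: safety could not be certified")
        break
    lo, hi = new_lo, new_hi
else:
    print("safety constraint certified over the full horizon")
```

The key point of the sketch is the control flow: the expensive refinement is triggered by the safety constraint itself, so steps whose coarse bounds already clear the constraint cost almost nothing.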

### Software
### Links

Coming soon.
* [Paper](https://ieeexplore.ieee.org/abstract/document/10804193)
* [Code](https://github.com/mit-acl/CARV)

### Sponsor

8 changes: 6 additions & 2 deletions _projects/online-reachability-for-uncertain-systems.md
@@ -10,6 +10,10 @@ featured_image: '/images/projects/heron_reachability_icon.png'
authors:
- nrober

papers:
- rober2024online
- mahesh2025safe


active: true
---
@@ -20,7 +24,7 @@ As autonomy pipelines become more complex, possibly including learned components

![](/images/projects/heron_reachability.png)

### Software
### Links

Software is in review.
* [Paper](https://ieeexplore.ieee.org/abstract/document/10610163)

36 changes: 36 additions & 0 deletions _projects/safety-filtering-for-systems-with-perception-models.md
@@ -0,0 +1,36 @@
---
title: Safety Filtering for Systems with Perception Models Subject to Adversarial Attacks
date: 2026-01-01

description: This project developed an approach to guard systems with NN-based state estimators from adversarial attacks designed to cause safety constraint violations
summary:

featured_image: '/images/projects/guardian_cover.png'

authors:
- nrober
- ameredit


papers:
- rober2026guardian

active: true
---

### About
Safety filtering is an effective method for enforcing constraints in safety-critical systems, but existing methods typically assume perfect state information. This limitation is especially problematic for systems that rely on neural network (NN)-based state estimators, which can be highly sensitive to noise and adversarial input perturbations. We address these problems by introducing GUARDIAN: Guaranteed Uncertainty-Aware Reachability Defense against Adversarial INterference, a safety filtering framework that provides formal safety guarantees for systems with NN-based state estimators. At runtime, GUARDIAN uses neural network verification tools to provide guaranteed bounds on the system’s state estimate given possible perturbations to its observation. It then uses a modified Hamilton-Jacobi reachability formulation to construct a safety filter that adjusts the nominal control input based on the verified state bounds and safety constraints. The result is an uncertainty-aware filter that ensures safety despite the system’s reliance on an NN estimator with noisy, possibly adversarial, input observations. Theoretical analysis and numerical experiments demonstrate that GUARDIAN effectively defends systems against adversarial attacks that would otherwise lead to a violation of safety constraints.

<!-- ![](/images/projects/CARV.png) -->
<p>
<img src="/images/projects/guardian_cover.png" alt="">
</p>

<p>
<img src="/images/projects/guardian_trajectory.gif" alt="" style="width:50%;">
</p>
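
The sketch below illustrates only the filtering step, under simplifying assumptions: `V` is a toy signed-distance function standing in for the Hamilton-Jacobi value function, `safe_control` is a placeholder fallback policy, and the bounds `x_lo`, `x_hi` on the state estimate are assumed to be supplied by an NN-verification tool run on the perturbed observation. It is not the GUARDIAN implementation; it only shows how a filter can act on a set of possible states rather than a point estimate.

```python
import numpy as np

def V(x):
    """Toy safety value: signed distance to an unsafe disk of radius 1 at the
    origin (a stand-in for a Hamilton-Jacobi value function; V(x) <= 0 is unsafe)."""
    return np.linalg.norm(x) - 1.0

def safe_control(x):
    """Placeholder fallback policy: steer directly away from the unsafe set."""
    return x / (np.linalg.norm(x) + 1e-9)

def worst_case_value(x_lo, x_hi, samples=200):
    """Estimate the minimum of V over the verified state box from its corners
    plus random interior points. (A real filter would compute a sound lower
    bound on V over the box rather than sample.)"""
    rng = np.random.default_rng(0)
    pts = rng.uniform(x_lo, x_hi, size=(samples, x_lo.size))
    corners = np.array(np.meshgrid(*zip(x_lo, x_hi))).T.reshape(-1, x_lo.size)
    return min(V(p) for p in np.vstack([pts, corners]))

def filter_control(u_nominal, x_lo, x_hi, margin=0.1):
    """Keep the nominal input only if every state consistent with the verified
    estimate bounds retains a safety margin; otherwise apply the fallback."""
    if worst_case_value(x_lo, x_hi) > margin:
        return u_nominal
    return safe_control(0.5 * (x_lo + x_hi))

# Example: verified bounds on the state estimate for one time step.
x_lo, x_hi = np.array([1.0, 0.1]), np.array([1.3, 0.4])
u_nominal = np.array([1.0, 0.0])
print(filter_control(u_nominal, x_lo, x_hi))
```

Because the filter reasons about the entire set of states consistent with the verified estimate bounds, an adversarial perturbation that shifts the point estimate cannot hide a nearby constraint violation from the safety check.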

### Links

* [Paper](https://arxiv.org/abs/2602.06026)
<!-- * [Code](https://github.com/mit-acl/CARV) -->
Binary file added images/projects/carv.gif
Binary file added images/projects/guardian_cover.png
Binary file added images/projects/guardian_trajectory.gif