
Commit 83f1796

Merge pull request #59 from dasc-lab/kaleb
Add Kaleb papers (2ACC1L4DC)
2 parents: 30d1f01 + d7030e8

3 files changed

Lines changed: 61 additions & 0 deletions


Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@
---
layout: papers
# specify the title of the paper
title: "A Formal gatekeeper Framework for Safe Dual Control with Active Exploration"
# specify the date it was published
date: 2026-01-21
# list the authors. if a "/people/id" page exists for the person, it will be linked. If not, the author's name is printed exactly as you typed it.
authors:
- kalebbennaveed
- devanshagrawal
- dimitrapanagou
# give the main figure location, relative to /static/
image: /images/2026_dual_gatekeeper_acc.png
# specify the conference or journal that it was published in
venue: "IEEE ACC 2026"
# link to publisher site (optional)
link:
# link to arxiv (optional)
arxiv: https://arxiv.org/abs/2510.06351
# link to github (optional)
# code: https://github.com/joonlee16/partial_resilient_leader_follower_consensus
# link to video (optional)
video:
# link to pdf (optional)
pdf: https://arxiv.org/pdf/2510.06351
# abstract
abstract: "Planning safe trajectories under model uncertainty is a fundamental challenge. Robust planning ensures safety by considering worst-case realizations, yet ignores uncertainty reduction and leads to overly conservative behavior. Actively reducing uncertainty on-the-fly during a nominal mission defines the dual control problem. Most approaches address this by adding a weighted exploration term to the cost, tuned to trade off the nominal objective and uncertainty reduction, but without formal consideration of when exploration is beneficial. Moreover, safety is enforced in some methods but not in others. We propose a framework that integrates robust planning with active exploration under formal guarantees. The key innovation and contribution is that exploration is pursued only when it provides a verifiable improvement without compromising safety. To achieve this, we utilize our earlier work on gatekeeper as an architecture for safety verification, and extend it so that it generates both safe and informative trajectories that reduce uncertainty and the cost of the mission, or keep it within a user-defined budget. The methodology is evaluated via simulation case studies on the online dual control of a quadrotor under parametric uncertainty."
# bib entry (optional). the |- is used to allow for multiline entry.
bib:
---

content/papers/2026/2026_non_uniform_exp_acc.md

Whitespace-only changes.
Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
---
layout: papers
# specify the title of the paper
title: "Provably Safe Stein Variational Clarity-Aware Informative Planning"
# specify the date it was published
date: 2026-01-22
# list the authors. if a "/people/id" page exists for the person, it will be linked. If not, the author's name is printed exactly as you typed it.
authors:
- kalebbennaveed
- Utkrisht Sahai
- Anouck Girard
- dimitrapanagou
# give the main figure location, relative to /static/
image: /images/2026_clarity_stein.png
# specify the conference or journal that it was published in
venue: "2026 L4DC"
# link to publisher site (optional)
link:
# link to arxiv (optional)
arxiv: https://arxiv.org/abs/2511.09836
# link to github (optional)
# code: https://github.com/joonlee16/partial_resilient_leader_follower_consensus
# link to video (optional)
video:
# link to pdf (optional)
pdf: https://arxiv.org/pdf/2511.09836
# abstract
abstract: "Autonomous robots are increasingly deployed for information-gathering tasks in environments that vary across space and time. Planning informative and safe trajectories in such settings is challenging because information decays when regions are not revisited. Most existing planners model information as static or uniformly decaying, ignoring environments where the decay rate varies spatially; those that model non-uniform decay often overlook how it evolves along the robot's motion, and almost all treat safety as a soft penalty. In this paper, we address these challenges. We model uncertainty in the environment using clarity, a normalized representation of differential entropy from our earlier work that captures how information improves through new measurements and decays over time when regions are not revisited. Building on this, we present Stein Variational Clarity-Aware Informative Planning, a framework that embeds clarity dynamics within trajectory optimization and enforces safety through a low-level filtering mechanism based on our earlier gatekeeper framework for safety verification. The planner performs Bayesian inference-based learning via Stein variational inference, refining a distribution over informative trajectories while filtering each nominal Stein informative trajectory to ensure safety. Hardware experiments and simulations across environments with varying decay rates and obstacles demonstrate consistent safety and reduced information deficits."
# bib entry (optional). the |- is used to allow for multiline entry.
bib:
---
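Both files leave `bib:` empty. As the inline comment notes, YAML's `|-` block-scalar indicator permits a multiline BibTeX entry in that field. A hypothetical sketch of how such an entry could be filled in (the BibTeX key and field values below are placeholders, not taken from this commit):

```yaml
# hypothetical example only: YAML |- block scalar holding a multiline
# BibTeX entry; every field value here is a placeholder, not source data
bib: |-
  @inproceedings{placeholder2026,
    title     = {Placeholder Title},
    author    = {Author One and Author Two},
    booktitle = {Placeholder Venue},
    year      = {2026}
  }
```

The `|-` form preserves line breaks inside the entry while stripping the trailing newline, which keeps the rendered BibTeX block clean.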
