
Background task: complete never-completed audit log entries#10026

Open
david-crespo wants to merge 7 commits into main from audit-log-timeout-task

Conversation


@david-crespo david-crespo commented Mar 11, 2026

Closes #8817

The most important thing about an audit log is that every event you care about is in there. This PR adds a job to make sure requests that never complete (e.g., because Nexus crashed in the middle) appear in the audit log after some timeout.

The audit log gets written in two steps. At the beginning of a request we initialize the log entry row. If this fails, the request fails. This guarantees we never miss an initialization. Then, at the end of the request, we "complete" the row with the result of the operation and a time_completed. The audit log list API response is ordered by time_completed. This lets us guarantee that if you retrieve a chunk of the audit log that is fully in the past, it will never change — you never have to fetch that chunk of the log again. But this requires that entries do not appear in the log until they are completed.

Completion can fail for a variety of hopefully very rare reasons. It seemed unreasonably complicated to guarantee completion by rolling back whatever operation happened on a failed completion. Instead, the idea was that we would run a job that cleans up incomplete rows after a long enough amount of time that it's very unlikely the request will ever complete. When we clean up such rows, we don't know what actually happened to the request, so we have to give it a special result_kind of timeout (as opposed to success or error).

/// This doesn't mean the operation itself timed out (which would be an
/// error, and I don't think we even have API timeouts) but rather that the
/// attempts to complete the log entry failed (or were never even attempted
/// because, e.g., Nexus crashed during the operation), and this entry had
/// to be cleaned up later by a background job (which doesn't exist yet)
/// after a timeout. Note we represent this result status as "Unknown" in
/// the external API because timeout is an implementation detail and makes
/// it sound like the operation timed out.
Timeout,

In the API response this becomes AuditLogEntryResult::Unknown because I feel like "timeout" is an implementation detail and makes it sound like the operation itself timed out.

AuditLogResultKind::Timeout => {
    audit::AuditLogEntryResult::Unknown
}

How often are log entries left incomplete?

Fortunately, this should be a very rare event — I checked on dogfood and there were around 200k rows in the table and there was exactly 1 incomplete disk_create call from October 2025. Need to check on the colo rack.

Config options

All three live in [background_tasks.audit_log_timeout_incomplete] in
the Nexus config (AuditLogTimeoutIncompleteConfig in nexus_config.rs):

  • period_secs — how often the background task activates (currently 600 = 10 min)
  • timeout_secs — how old an incomplete entry must be before it gets timed out (default 14400 = 4 hours)
  • max_update_per_activation — max rows updated per activation (currently 1000), following the naming convention from session_cleanup.max_delete_per_activation

On the timeout, I think 4 hours is pretty conservative. My worry is that we could have a legitimate API call that takes an hour or two because it's a post of a giant file or something. I don't know how reasonable that worry is. This timeout could probably go down to two hours or an hour.

On the interval, 10 minutes seems maybe overkill for something that happens once every 5 months? But it's just one query, basically.

Claude skill for adding background tasks

After #10009 I had Claude write the add-background-task skill and used it to make this one. It was pretty good, so I think it's probably worth checking in.

-- Supports "find stale incomplete rows ordered by time_started".
CREATE INDEX IF NOT EXISTS audit_log_incomplete_by_time_started
ON omicron.public.audit_log (time_started, id)
WHERE time_completed IS NULL;
david-crespo (Contributor Author):
The robots in my employ pointed out that this index does have a cost — every single request that comes through causes this index to be updated because the audit log entry for it has time_completed null while the operation is in progress. That's probably worth it — otherwise I think the job would not work as written because it would have to do a table scan to find the incomplete rows.

entries (may indicate Nexus crashes or bugs)";
"cutoff" => %cutoff,
"limit" => self.max_update_per_activation,
);
david-crespo (Contributor Author):
I figured this should warn! since things showing up here should be quite unusual.

-diesel::update(audit_log::table)
+let rows = diesel::update(audit_log::table)
     .filter(audit_log::id.eq(entry.id))
     .filter(audit_log::time_completed.is_null())
david-crespo (Contributor Author):
This is really important, otherwise in the bizarro world situation where the request actually does finish after the row has been timed out, completing it would change its time_completed, violating the Rules of Audit Log. See the comment above.

),
}
}

david-crespo (Contributor Author):
This task isn't changed here but it was causing the tests to be flaky for me locally depending on whether I ran one test or the whole module.

.execute_async(&*self.pool_connection_authorized(opctx).await?)
.await
.map_err(|e| public_error_from_diesel(e, ErrorHandler::Server))
}
david-crespo (Contributor Author):
This is the core of the task: for any audit log entries older than cutoff with null time_completed, mark those as timed out. In practice the ordering and limit should not matter because the number of timed out requests should be small.

david-crespo force-pushed the audit-log-timeout-task branch from 8639848 to b4bf111 on March 11, 2026 06:05
david-crespo (Contributor Author):

The numbers from colo: 5M rows in the table, only two (!!!!!) incomplete rows.

root@[fd00:1122:3344:116::3]:32221/omicron> select count(*) from omicron.public.audit_log;
   count
-----------
  4898486
(1 row)

root@[fd00:1122:3344:116::3]:32221/omicron> select count(*) from omicron.public.audit_log where time_started > now() - interval '7days';
  count
----------
  385581
(1 row)

root@[fd00:1122:3344:116::3]:32221/omicron> select count(*) from omicron.public.audit_log where time_completed is null;
  count
---------
      2
(1 row)

root@[fd00:1122:3344:116::3]:32221/omicron> select time_started, operation_id from omicron.public.audit_log where time_completed is null;
          time_started          | operation_id
--------------------------------+---------------
  2026-03-02 23:16:32.159074+00 | login_saml
  2025-10-18 15:01:21.671874+00 | login_saml
(2 rows)

@david-crespo david-crespo requested a review from hawkw March 11, 2026 18:48
session_cleanup.max_delete_per_activation = 10000
audit_log_timeout_incomplete.period_secs = 600
audit_log_timeout_incomplete.timeout_secs = 14400
audit_log_timeout_incomplete.max_update_per_activation = 1000
david-crespo (Contributor Author):
I wonder if it's even necessary to make these tunable in the config as opposed to hard coded in the task definition. I guess it's nice to be able to change them without making a code change, but will we really have to?

Member:
🤷‍♀️ i don't really care about being able to tune these in the config, but it's fine to leave them since you've already done it?

/// of the immutable record. Overwriting it would change its timestamp
/// and violate the guarantee that the past doesn't change.
#[tokio::test]
async fn test_audit_log_timeout_then_late_completion() {
david-crespo (Contributor Author):
I think this test belongs here rather than the timeout task because it's really about what datastore.audit_log_entry_complete() does with an already timed out row. I got rid of the other tests that were redundant with the one in the task file, though.

david-crespo force-pushed the audit-log-timeout-task branch from 9ed1955 to c5b7f89 on March 13, 2026 05:40
};

-let cutoff = Utc::now() - timeout_delta;
+let cutoff = Utc::now() - self.timeout;
Member:
should this fail if it's out of range?

) -> Self {
    let Ok(timeout) = TimeDelta::from_std(timeout) else {
        panic!(
            "invalid timeout {timeout:?} \
Member:
it would be nice if we said the name of the config file key here, rather than just "invalid timeout"

let audit_log_sub =
    diesel::alias!(nexus_db_schema::schema::audit_log as audit_log_sub);

let subquery = audit_log_sub
Member:
i might give this a more semantically meaningful name --- it's selecting the timed-out rows to update, right?

Comment on lines +112 to +117
/// The `time_completed IS NULL` filter is critical: once an entry has been
/// completed — whether by the normal request path or by the background
/// timeout task — it is part of the immutable audit log. Overwriting it
/// would change its `time_completed` timestamp, shifting its position in
/// the `(time_completed, id)` pagination order and violating the guarantee
/// that past query results are stable.
Member:
this doesn't feel like API docs to me --- i think it should go in the place where you're actually setting that filter?

Merging this pull request may close: Audit log: background job to clean up entries that never completed