
Conversation


@jchris jchris commented Aug 13, 2025

Summary

Implements warning logs to detect when applications hit the database with excessive parallelism, helping identify performance bottlenecks before they become critical issues.

Changes

ApplyHeadQueue Monitoring

  • Warning logs when applyHeadQueue size exceeds 5 items
  • Debug logs for all operations showing queue size and local update context
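
A minimal sketch of what this check can look like (the builder-style logger calls mirror the write-queue excerpt quoted in the review below; the size() accessor and field names here are illustrative assumptions, not the exact committed code):

// Hedged sketch; `size()` and the field names are assumptions, while the
// Debug()/Warn() builder calls mirror the write-queue excerpt below.
const size = this.applyHeadQueue.size();
this.logger.Debug().Uint("applyHeadQueueSize", size).Bool("localUpdates", localUpdates).Msg("applyHead queue operation");
if (size > 5) {
  this.logger.Warn().Uint("applyHeadQueueSize", size).Bool("localUpdates", localUpdates).Msg("High applyHeadQueue size - potential high parallelism");
}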

WriteQueue Monitoring

  • Warning logs when writeQueue size exceeds 10 items
  • Debug logs for all bulk operations showing queue size and task count
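
For reference, the corresponding check in WriteQueueImpl.bulk, as quoted verbatim in the review discussion below:

this.queue.push({ tasks, resolve, reject });
const queueSize = this.queue.length;
if (queueSize > 10) {
  this.logger.Warn().Uint("writeQueueSize", queueSize).Uint("bulkTaskCount", tasks.length).Msg("High writeQueue size - potential high parallelism");
}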

Comprehensive Test Suite

  • Queue behavior tests - Normal operations, high concurrency scenarios
  • ApplyHeadQueue specific tests - Task sorting, error handling, size tracking
  • WriteQueue specific tests - Chunking, concurrent operations, closure handling
  • Tests validate logging doesn't impact functionality or performance

Benefits

  • Early Warning System: Detect queue buildup before performance degrades
  • Performance Tuning: Help developers optimize high-concurrency access patterns
  • Debugging Aid: Correlate performance issues with queue sizes during stress testing
  • Operational Visibility: Monitor queue health in production environments

Testing

The feature can be tested by creating high-concurrency scenarios:

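// Assumes an open database, e.g. const db = fireproof("stress-test");
// the database name is illustrative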
// Trigger applyHeadQueue warnings (>5 items)
const promises = Array.from({ length: 20 }, (_, i) => 
  db.put({ id: i, data: `concurrent-${i}` })
);

// Trigger writeQueue warnings (>10 items)  
const bulkPromises = Array.from({ length: 15 }, (_, i) => {
  const docs = Array.from({ length: 5 }, (_, j) => ({ batch: i, item: j }));
  return db.bulk(docs);
});

await Promise.all([...promises, ...bulkPromises]);

Resolves

Closes #1053

Future Enhancements

  • Configurable warning thresholds
  • Metrics/telemetry integration
  • Adaptive backpressure mechanisms
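
A possible shape for the first item, borrowing the option name suggested in the review below (hypothetical, not part of this PR):

// Hypothetical options shape; `warnThresholdWriteQueue` borrows the name
// suggested in the review below. Only `chunkSize` exists today.
interface WriteQueueOpts {
  chunkSize?: number;
  warnThresholdWriteQueue?: number; // default: 10, the current hard-coded value
}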

Summary by CodeRabbit

  • New Features
    • Enhanced observability: added debug logs for queue sizes and bulk operations; warnings when thresholds are exceeded to highlight potential high concurrency.
  • Tests
    • Introduced comprehensive suites validating logging behavior, write-queue operations (push/bulk), chunking, concurrency, ordering, error propagation, and database operations under load.
  • Chores
    • Instrumentation-only update with no functional or API changes.

- Add logging to applyHeadQueue in crdt-clock.ts with warnings when size > 5
- Add logging to writeQueue in write-queue.ts with warnings when size > 10
- Include debug logs for all queue operations with size and context
- Add comprehensive tests for queue logging functionality
- Tests cover normal operations, high concurrency, and error handling

Helps identify when applications hit the database with excessive parallelism,
which can cause performance degradation. Warnings help developers optimize
their access patterns before performance issues occur.

Resolves #1053
@jchris jchris self-assigned this Aug 13, 2025

coderabbitai bot commented Aug 13, 2025

Walkthrough

Adds debug and warning logs to CRDT apply-head and write queue to report queue sizes (warn thresholds: >5 and >10). Introduces runtime tests exercising logging paths, concurrency, ordering, and error propagation with mocked workers/DB. No public API changes or algorithm modifications.

Changes

  • Runtime logging: CRDT clock (core/base/crdt-clock.ts)
    Adds queue size measurement in int_applyHead; emits debug logs always and warning logs when size > 5; includes localUpdates context. No control-flow or API changes.
  • Runtime logging: Write queue (core/base/write-queue.ts)
    In WriteQueueImpl.bulk, logs writeQueue size and bulk task count (debug always, warn when size > 10). No changes to processing logic or signatures.
  • Tests: ApplyHeadQueue logging (core/tests/runtime/apply-head-queue-logging.test.ts)
    New Vitest suite validating applyHeadQueue logging behavior with a mocked worker/logger; covers queue size, the localUpdates flag, concurrency, ordering, and error propagation.
  • Tests: DB runtime logging scenarios (core/tests/runtime/queue-logging.test.ts)
    New Vitest suite driving high-parallelism DB operations (put/bulk/allDocs/get) with a mocked logger; validates completion and result structures under concurrency.
  • Tests: WriteQueue logging and behavior (core/tests/runtime/write-queue-logging.test.ts)
    New Vitest suite for writeQueue with a mocked worker/logger; covers bulk/push, chunking, concurrency, close(), errors, mixed ops, and ordering.


Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~18 minutes

Assessment against linked issues

All four objectives from #1053 are addressed by the changes above:

  • ApplyHeadQueue: debug logs show queue size on every int_applyHead
  • ApplyHeadQueue: warning when queue size exceeds 5, including localUpdates context
  • WriteQueue: debug logs show queue size and bulk task count on every operation
  • WriteQueue: warning when queue size exceeds 10

Suggested reviewers

  • mabels



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (1)
core/base/crdt-clock.ts (1)

171-179: Async validation is not awaited; blocks may not be validated before use.

validateBlocks maps to async functions but never awaits them. This means the subsequent logic may proceed before the validations complete, causing race conditions and misleading errors later.

Apply this fix to await validation:

 async function validateBlocks(logger: Logger, newHead: ClockHead, blockstore?: BaseBlockstore) {
   if (!blockstore) throw logger.Error().Msg("missing blockstore");
-  newHead.map(async (cid) => {
-    const got = await blockstore.get(cid);
-    if (!got) {
-      throw logger.Error().Str("cid", cid.toString()).Msg("int_applyHead missing block").AsError();
-    }
-  });
+  for (const cid of newHead) {
+    const got = await blockstore.get(cid);
+    if (!got) {
+      throw logger.Error().Str("cid", cid.toString()).Msg("int_applyHead missing block").AsError();
+    }
+  }
 }
🧹 Nitpick comments (11)
core/base/crdt-clock.ts (3)

118-123: Good, targeted queue-size observability; consider throttling to prevent warn log floods.

The added warn/debug logs align with the PR objective and provide immediate visibility. Under heavy load, this can emit a warn per call. Consider throttling (e.g., once per N seconds while threshold is exceeded) to avoid excessive logs in stress scenarios and production.

Also consider:

  • Unifying field keys across modules (e.g., use "queueSize" in both apply/write queues) to simplify log querying.
  • Extracting thresholds as constants to prepare for the planned “configurable thresholds” enhancement.
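
One way to throttle, sketched under assumed names (the interval, threshold constant, and state placement are illustrative, not part of this PR):

// Illustrative throttle: warn at most once per interval while over threshold.
// APPLY_HEAD_WARN_THRESHOLD, WARN_INTERVAL_MS, and lastWarnMs are assumed names.
const APPLY_HEAD_WARN_THRESHOLD = 5;
const WARN_INTERVAL_MS = 5000;
let lastWarnMs = 0;

function maybeWarn(logger: Logger, queueSize: number) {
  if (queueSize > APPLY_HEAD_WARN_THRESHOLD && Date.now() - lastWarnMs >= WARN_INTERVAL_MS) {
    lastWarnMs = Date.now();
    logger.Warn().Uint("applyHeadQueueSize", queueSize).Msg("High applyHeadQueue size - potential high parallelism");
  }
}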

167-169: Avoid in-place sort to prevent subtle side-effects on shared arrays.

sortClockHead mutates the given array in place. Since this.head, newHead, and prevHead can be shared references, this can lead to hard-to-trace behavior. Prefer a non-mutating sort.

-function sortClockHead(clockHead: ClockHead) {
-  return clockHead.sort((a, b) => a.toString().localeCompare(b.toString()));
-}
+function sortClockHead(clockHead: ClockHead) {
+  return [...clockHead].sort((a, b) => a.toString().localeCompare(b.toString()));
+}

181-183: Use a more robust equality check for ClockHeads.

Stringifying arrays can hide ordering issues or fail if element toString formats differ. Since you already sort heads, comparing by element-wise equality is clearer and safer.

-function compareClockHeads(head1: ClockHead, head2: ClockHead) {
-  return head1.toString() === head2.toString();
-}
+function compareClockHeads(head1: ClockHead, head2: ClockHead) {
+  if (head1.length !== head2.length) return false;
+  for (let i = 0; i < head1.length; i++) {
+    if (head1[i].toString() !== head2[i].toString()) return false;
+  }
+  return true;
+}
core/base/write-queue.ts (1)

74-79: Queue-size logging is on point; consider constant or option for the threshold.

The warn on >10 and debug log per enqueue meet the PR goal. To ease future configurability:

  • Hoist threshold to a module-level const or to opts (e.g., opts.warnThresholdWriteQueue with a default).
  • Optionally throttle warnings to avoid log storms under prolonged overload.

Example minimal refactor:

+const WRITE_QUEUE_WARN_THRESHOLD = 10;
 ...
   this.queue.push({ tasks, resolve, reject });
   const queueSize = this.queue.length;
-  if (queueSize > 10) {
+  if (queueSize > WRITE_QUEUE_WARN_THRESHOLD) {
     this.logger.Warn().Uint("writeQueueSize", queueSize).Uint("bulkTaskCount", tasks.length).Msg("High writeQueue size - potential high parallelism");
   }
core/tests/runtime/queue-logging.test.ts (3)

18-31: Remove unused mock logger setup.

mockLogger is created but never used. This adds noise and can confuse future readers about the intent.

-    // Mock logger to capture log calls
-    mockLogger = {
-      warn: vi.fn().mockReturnValue({
-        Uint: vi.fn().mockReturnThis(),
-        Bool: vi.fn().mockReturnThis(),
-        Msg: vi.fn().mockReturnThis()
-      }),
-      debug: vi.fn().mockReturnValue({
-        Uint: vi.fn().mockReturnThis(),
-        Bool: vi.fn().mockReturnThis(),
-        Msg: vi.fn().mockReturnThis()
-      })
-    };
+    // Intentionally not mocking internal logger here; functional verification only

38-47: Test name implies log verification, but asserts only functional completion.

That’s fine for end-to-end sanity. Consider adding a focused test that injects a mock logger into a lower-level queue (like you did for applyHeadQueue) to assert warn/debug emission at thresholds.


85-113: Good concurrency coverage; consider asserting DB invariants more strictly.

Optionally assert:

  • No duplicates (by id) after mixed operations.
  • All results have a valid clock/head shape.

This tightens validation that logging has zero functional side-effects under load.
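
A hedged sketch of the duplicate-id check (the allDocs row shape, with a key per row, is an assumption based on the suite's usage):

const res = await db.allDocs();
const ids = res.rows.map((r) => r.key);
// No duplicate ids should survive the mixed put/bulk operations
expect(new Set(ids).size).toBe(ids.length);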

core/tests/runtime/write-queue-logging.test.ts (2)

20-38: Remove unused local logger stub or wire it through ensureLogger.

The local logger object is created but never used by the queue since writeQueue constructs its own logger via ensureLogger(sthis, ...). To reduce confusion, remove it. Alternatively, enhance mockSuperThis to include logger and ensure ensureLogger picks it up if that’s supported.


153-174: Assert processing order deterministically.

You intend to verify FIFO ordering with chunkSize: 1, but the test only asserts membership, not order. Strengthen the check:

-    // Should have processed all tasks
-    expect(calls).toHaveLength(3);
-    expect(calls).toContain("first");
-    expect(calls).toContain("second");
-    expect(calls).toContain("third");
+    expect(calls).toEqual(["first", "second", "third"]);
core/tests/runtime/apply-head-queue-logging.test.ts (2)

3-3: Remove unused import.

ensureLogger is imported but not used.

-import { ensureLogger } from "@fireproof/core-runtime";

39-68: Minor: test doesn’t truly validate queue size tracking.

The test drains the generator but doesn’t assert on intermediate sizes or warn thresholds. If feasible, add assertions against queue.size() before and after pushing and after one worker tick, or assert that logger.Warn is called when enqueuing beyond the threshold.
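
For example, a hedged sketch of the threshold assertion (assuming the mocked logger exposes a Warn spy like the Error spy used elsewhere in this suite, and that pushing without draining still triggers the log):

// Push past the >5 threshold without draining, then check the warn spy.
for (let i = 0; i < 7; i++) {
  queue.push({ newHead: [] as ClockHead, prevHead: [] as ClockHead });
}
expect(logger.Warn).toHaveBeenCalled();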

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c57d251 and 9190330.

📒 Files selected for processing (5)
  • core/base/crdt-clock.ts (1 hunks)
  • core/base/write-queue.ts (1 hunks)
  • core/tests/runtime/apply-head-queue-logging.test.ts (1 hunks)
  • core/tests/runtime/queue-logging.test.ts (1 hunks)
  • core/tests/runtime/write-queue-logging.test.ts (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
core/tests/runtime/write-queue-logging.test.ts (2)
core/types/base/types.ts (3)
  • SuperThis (137-148)
  • DocUpdate (238-243)
  • DocTypes (209-209)
core/base/write-queue.ts (1)
  • writeQueue (92-98)
core/tests/runtime/queue-logging.test.ts (3)
core/types/base/types.ts (1)
  • Database (593-626)
core/base/ledger.ts (1)
  • fireproof (343-345)
core/base/crdt.ts (1)
  • allDocs (230-237)
core/tests/runtime/apply-head-queue-logging.test.ts (3)
core/base/ledger.ts (1)
  • logger (116-118)
vendor/p-limit/index.js (2)
  • queue (6-6)
  • generator (57-60)
core/types/base/types.ts (3)
  • DocTypes (209-209)
  • ClockHead (201-201)
  • DocUpdate (238-243)

Comment on lines +112 to +148
it("should sort tasks with updates first", async () => {
const taskWithoutUpdates = {
newHead: [] as ClockHead,
prevHead: [] as ClockHead,
};

const taskWithUpdates = {
newHead: [] as ClockHead,
prevHead: [] as ClockHead,
updates: [{ id: "test", value: { test: "data" } }] as DocUpdate<DocTypes>[]
};

// Add task without updates first
const gen1 = queue.push(taskWithoutUpdates);

// Add task with updates second
const gen2 = queue.push(taskWithUpdates);

// Process both
await Promise.all([
(async () => {
let result = await gen1.next();
while (!result.done) {
result = await gen1.next();
}
})(),
(async () => {
let result = await gen2.next();
while (!result.done) {
result = await gen2.next();
}
})()
]);

// Both workers should have been called
expect(mockWorker).toHaveBeenCalledTimes(2);
});

🛠️ Refactor suggestion

The “sort tasks with updates first” test doesn’t validate order.

You currently assert only that both tasks run. To verify prioritization, assert call order using the third argument (localUpdates):

-    // Both workers should have been called
-    expect(mockWorker).toHaveBeenCalledTimes(2);
+    // Both workers should have been called
+    expect(mockWorker).toHaveBeenCalledTimes(2);
+    // Validate that the call with localUpdates=true happened before the one with localUpdates=false
+    const calls = mockWorker.mock.calls;
+    expect(calls[0][2]).toBe(true);
+    expect(calls[1][2]).toBe(false);

Note: If you keep processing both generators concurrently, ordering may be nondeterministic. For a deterministic assertion, process the queue serially.
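
A serial drain can make the order deterministic (assuming push returns an async iterable, as the manual next() loops above suggest):

// Drain gen1 fully before starting gen2 so worker call order is deterministic.
for await (const _ of gen1) {
  // consume
}
for await (const _ of gen2) {
  // consume
}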


Comment on lines +150 to +172
it("should handle worker errors gracefully", async () => {
// Make worker throw an error
mockWorker.mockRejectedValueOnce(new Error("Worker error"));

const task = {
newHead: [] as ClockHead,
prevHead: [] as ClockHead,
updates: [{ id: "test", value: { test: "data" } }] as DocUpdate<DocTypes>[]
};

const generator = queue.push(task);

// Should handle the error without crashing
await expect(async () => {
let result = await generator.next();
while (!result.done) {
result = await generator.next();
}
}).rejects.toThrow();

// Error should have been logged
expect(logger.Error).toHaveBeenCalled();
});

⚠️ Potential issue

Incorrect usage of rejects with an async function; assert the promise instead.

await expect(async () => { ... }).rejects.toThrow() passes a function to expect, which is for sync .toThrow. For rejected promises, pass the promise itself.

Apply this correction:

-    await expect(async () => {
-      let result = await generator.next();
-      while (!result.done) {
-        result = await generator.next();
-      }
-    }).rejects.toThrow();
+    const drain = (async () => {
+      let result = await generator.next();
+      while (!result.done) {
+        result = await generator.next();
+      }
+    })();
+    await expect(drain).rejects.toThrow();

Comment on lines +123 to +133
it("should handle worker errors", async () => {
// Make worker throw an error
mockWorker.mockRejectedValueOnce(new Error("Worker failed"));

const queue = writeQueue(mockSuperThis, mockWorker, { chunkSize: 32 });

const task: DocUpdate<DocTypes> = { id: "test", value: { test: "data" } };

// Should propagate the error
await expect(queue.push(task)).rejects.toThrow("Worker failed");
});

🛠️ Refactor suggestion

Error message assertion may be brittle; assert rejection type instead.

writeQueue wraps worker errors via the logger builder (.Msg("Error processing task").AsError()), which may alter the message. Asserting the exact message "Worker failed" can make the test flaky.

Use a type-based rejection assertion:

-    await expect(queue.push(task)).rejects.toThrow("Worker failed");
+    await expect(queue.push(task)).rejects.toBeInstanceOf(Error);


@mabels mabels left a comment


There is no way that I will accept this.
The Cement Logger has a debugging feature, and we planned not to use any mocks at all.
In Fireproof there are multiple layers for building better software without logs, so this should not be necessary.

@mabels
Copy link
Contributor

mabels commented Aug 13, 2025

I will pull a queue implementation (I have built several before) into @adviser/cement that has this feature built in.

I need that anyway to fix my ResolveOnce reset problem.
