Analysis: Read/Write Traffic Splitting Architecture for Database Replication#342

Draft
Copilot wants to merge 6 commits into main from copilot/analyze-read-write-splitting

Conversation

Contributor

Copilot AI commented Feb 11, 2026

Analysis Complete - Ready for Review

Deliverables

  • Comprehensive architecture analysis (36KB)
  • Technical design with component specifications
  • Sequence diagrams in Mermaid format (27KB) ✨ Updated
  • Configuration templates for all major databases (18KB)
  • Documentation index and navigation (9.4KB)
  • Executive summary for stakeholders (11KB)

Recent Updates

  • ✨ Converted all ASCII diagrams to Mermaid format for better rendering
    • 5 sequence diagrams showing runtime behavior
    • 1 decision flowchart showing router logic
    • 2 architecture diagrams showing component structure
  • 🔧 Fixed transaction handling: transactions start lazily on first SQL execution (standard JDBC behavior)
  • 🔧 Fixed failover logic: tries other replicas before falling back to primary

Key Documents Created

  1. READ_WRITE_SPLITTING_ANALYSIS.md - Main technical analysis (1,124 lines)
  2. read-write-splitting-sequence-diagram.md - Visual flow diagrams (Mermaid)
  3. read-write-splitting-configuration-templates.md - Config examples (440 lines)
  4. READ_WRITE_SPLITTING_README.md - Documentation index (288 lines)
  5. READ_WRITE_SPLITTING_SUMMARY.md - Executive summary (488 lines)

Recommendation

Approach: SQL Parsing and Automatic Routing

  • ✅ Fully transparent (no application changes)
  • ✅ Backward compatible (opt-in feature)
  • ✅ Leverages existing OJP architecture
  • ✅ 8-10 week implementation timeline
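
To make the recommended approach concrete, here is a minimal sketch of what SQL parsing and automatic routing could look like. All names (`ReadWriteRouter`, `Target`) are illustrative assumptions, not OJP's actual API; a real implementation would also need to handle statements inside transactions and locking reads such as `SELECT ... FOR UPDATE`, both of which must stay on the primary:

```java
import java.util.Locale;
import java.util.regex.Pattern;

// Hypothetical sketch: classify each SQL statement and route reads to a
// replica pool, writes to the primary. Not OJP's real router.
public class ReadWriteRouter {

    public enum Target { PRIMARY, REPLICA }

    // SELECT ... FOR UPDATE acquires row locks, so it must hit the primary.
    private static final Pattern FOR_UPDATE =
            Pattern.compile("\\bFOR\\s+UPDATE\\b", Pattern.CASE_INSENSITIVE);

    public Target route(String sql, boolean inTransaction) {
        // Statements inside an explicit transaction stay on the primary
        // to avoid cross-node consistency problems.
        if (inTransaction) {
            return Target.PRIMARY;
        }
        String trimmed = sql.trim().toLowerCase(Locale.ROOT);
        boolean isRead = trimmed.startsWith("select")
                && !FOR_UPDATE.matcher(sql).find();
        return isRead ? Target.REPLICA : Target.PRIMARY;
    }
}
```

Because routing is decided per statement inside the driver, the application keeps using plain JDBC and the feature can be enabled purely through configuration, which is what makes the approach transparent and opt-in.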

No Code Changes Made

This PR contains analysis and documentation only - no code has been modified.



Copilot AI and others added 3 commits February 11, 2026 16:18
Co-authored-by: rrobetti <7221783+rrobetti@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Analyze read and write traffic splitting in OJP" to "Analysis: Read/Write Traffic Splitting Architecture for Database Replication" on Feb 11, 2026
Copilot AI requested a review from rrobetti February 11, 2026 16:24
Contributor

@rrobetti rrobetti left a comment


@copilot convert diagrams to mermaid where possible.

Contributor Author

Copilot AI commented Feb 11, 2026

@copilot convert diagrams to mermaid where possible.

Converted all diagrams to Mermaid format in commit fdd6786:

  • 5 sequence diagrams (read routing, write routing, transactions, failover, SELECT FOR UPDATE)
  • 1 decision flowchart (router logic)
  • 2 architecture diagrams (analysis and summary documents)

The Mermaid diagrams now render properly on GitHub and are easier to maintain.


## Scenario 4: Replica Failover to Primary

Contributor

@copilot why fall back directly to primary? should it not fallback to another replica first?

Contributor Author

You're absolutely right. Updated in commit de566e6 to try other replicas first before falling back to primary. The failover logic now:

  1. Tries the selected replica
  2. If it fails, tries the next replica in rotation
  3. Only falls back to primary if all replicas are unavailable

Updated the sequence diagram to show "Scenario 4: Replica Failover with Multiple Replicas" demonstrating this behavior, and updated the ReplicaSelector interface to include selectHealthyReplica() method that attempts all replicas.
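
The three-step failover order above can be sketched as follows. The class name, the round-robin strategy, and the health-check predicate are assumptions for illustration; the reply only specifies the interface name `ReplicaSelector` and the method `selectHealthyReplica()`:

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch of the failover order: try the selected replica,
// then the remaining replicas in rotation, and only fall back to the
// primary when every replica is unavailable.
public class ReplicaSelector {

    private final List<String> replicas;
    private int next = 0; // round-robin cursor

    public ReplicaSelector(List<String> replicas) {
        this.replicas = replicas;
    }

    public String selectHealthyReplica(Predicate<String> isHealthy, String primary) {
        // Attempt each replica once, starting at the round-robin cursor.
        for (int i = 0; i < replicas.size(); i++) {
            String candidate = replicas.get((next + i) % replicas.size());
            if (isHealthy.test(candidate)) {
                next = (next + i + 1) % replicas.size();
                return candidate;
            }
        }
        // All replicas failed the health check: last resort is the primary.
        return primary;
    }
}
```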

```
participant Router as ReadWrite<br/>Router
participant Primary as Primary<br/>Pool

App->>Driver: setAutoCommit(false)
```
Contributor

@copilot are you sure a transaction is started immediately after setAutoCommit(false) is called? I had the impression the transaction start was delayed, is it not?

Contributor Author

You're correct - transaction start is delayed (lazy). Fixed in commit de566e6.

setAutoCommit(false) only changes the auto-commit mode. The transaction actually starts when the first SQL statement is executed after setting auto-commit to false. This is standard JDBC behavior across PostgreSQL, MySQL, Oracle, and SQL Server.

Updated the sequence diagram to show the transaction starting on first SQL execution, and added clarification in the analysis document explaining the lazy transaction start behavior.
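
A routing proxy has to mirror this lazy-start behavior when tracking connection state. Here is a minimal sketch of that bookkeeping; `TransactionTracker` is a hypothetical name, not part of OJP or the JDBC API:

```java
// Sketch of lazy transaction tracking: setAutoCommit(false) only flips a
// flag, and the transaction is considered open once the first statement
// executes with auto-commit off, matching standard JDBC behavior.
public class TransactionTracker {

    private boolean autoCommit = true;
    private boolean inTransaction = false;

    public void setAutoCommit(boolean autoCommit) {
        // Changing the mode does not itself begin a transaction.
        this.autoCommit = autoCommit;
    }

    public void onStatementExecute() {
        // The transaction actually starts with the first statement
        // executed while auto-commit is off.
        if (!autoCommit) {
            inTransaction = true;
        }
    }

    public void onCommitOrRollback() {
        inTransaction = false;
    }

    public boolean isInTransaction() {
        return inTransaction;
    }
}
```

Getting this right matters for routing: between `setAutoCommit(false)` and the first statement, no transaction exists yet, so the router has not yet committed to pinning the connection to the primary.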

- Correct transaction start behavior: transactions start lazily on first SQL execution after setAutoCommit(false), not immediately
- Update failover logic: try other replicas before falling back to primary
- Update sequence diagrams to show multi-replica failover scenario
- Add health-aware replica selection logic
- Update all documentation to reflect standard JDBC behavior

Co-authored-by: rrobetti <7221783+rrobetti@users.noreply.github.com>

@rrobetti
Contributor

The analysis looks good. Wait for the migration to Actions to finish before implementing, to avoid conflicts.
