⚡ Bolt: Optimize DB operations and implement blockchain verification #357
Conversation
Optimized 'upvote_issue' and 'verify_issue_endpoint' to use column projection and atomic updates, reducing memory overhead and database I/O.
Implemented 'GET /api/issues/{issue_id}/blockchain-verify' to allow cryptographic integrity verification of reports.
Consolidated database transactions in 'verify_issue_endpoint' using flush() to maintain atomicity while reducing commits.
Added BlockchainVerificationResponse schema and comprehensive tests for the new features.
Co-authored-by: RohanExploit <178623867+RohanExploit@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
✅ Deploy Preview for fixmybharat canceled.
🙏 Thank you for your contribution, @RohanExploit!
PR Details:
Quality Checklist:
Review Process:
Note: The maintainers will monitor code quality and ensure the overall project flow isn't broken.
📝 Walkthrough
Adds an async atomic upvote flow, a new blockchain integrity verification endpoint that computes and compares SHA-256 hashes, a schema for blockchain responses and a user role enum, image-processing API changes (returning image bytes), multiple tests for blockchain and verification flows, and helper/import-check tooling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API as API Server
    participant DB as Database
    Client->>API: GET /api/issues/{id}/blockchain-verify
    API->>DB: SELECT id, description, category, integrity_hash, prev_issue_id
    DB-->>API: Issue row (description, category, integrity_hash, prev_issue_id)
    alt prev_issue_id present
        API->>DB: SELECT integrity_hash AS prev_hash FROM issues WHERE id=prev_issue_id
        DB-->>API: prev_hash
    else no prev
        API->>API: prev_hash = ""
    end
    rect rgba(100,150,200,0.5)
        API->>API: computed_hash = SHA256(description || category || prev_hash)
        API->>API: compare computed_hash with stored integrity_hash
    end
    alt hashes match
        API-->>Client: {is_valid: true, current_hash, computed_hash, message}
    else mismatch
        API-->>Client: {is_valid: false, current_hash, computed_hash, message: "Integrity check failed..."}
    end
```
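The chaining rule in the diagram is easy to state as a pure function. Here is a minimal, self-contained sketch; the `"|"` field separator matches the endpoint code reviewed below, and the helper name is ours, not the repo's:

```python
import hashlib

def compute_issue_hash(description: str, category: str, prev_hash: str) -> str:
    # Chaining rule from the diagram: SHA256 over "description|category|prev_hash"
    content = f"{description}|{category}|{prev_hash}"
    return hashlib.sha256(content.encode()).hexdigest()

# Genesis report: no predecessor, so prev_hash is the empty string.
h1 = compute_issue_hash("First issue", "Road", "")
# The next report chains off the previous report's stored hash.
h2 = compute_issue_hash("Second issue", "Garbage", h1)
```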
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
No issues found across 5 files
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@backend/routers/issues.py`:
- Around line 596-636: The verification currently finds the predecessor by
querying Issue.id < issue_id (in verify_blockchain_integrity) which breaks the
chain if intermediate issues are hard-deleted; modify the data model and
verification to persist the original predecessor and use it for verification:
add a prev_issue_id column when computing/storing Issue.integrity_hash at
creation (the code that sets integrity_hash should also store prev_issue_id),
update verify_blockchain_integrity to fetch Issue.integrity_hash and
Issue.prev_issue_id for the current_issue and then look up the predecessor by
that prev_issue_id (fall back to empty string only if prev_issue_id is null), or
alternatively implement soft-delete for Issue so historical predecessors remain;
update any code paths that compute integrity_hash to set the new prev_issue_id
consistently.
- Around line 612-617: The chain verification here assumes monotonic Issue.id
ordering which can race when issues are created concurrently; update the lookup
in the block using prev_issue_hash (the run_in_threadpool lambda querying
Issue.integrity_hash with Issue.id < issue_id) to order deterministically by a
creation timestamp first (e.g., ORDER BY Issue.created_at DESC, Issue.id DESC)
so simultaneous inserts are disambiguated, and add an inline comment in the same
function describing this known limitation and recommending a stronger fix
(persisting a chain_parent / previous_issue_id on create_issue) for future work.
- Around line 248-275: The returned upvote count can race with concurrent
updates because you commit before reading; modify upvote_issue so you perform
the update, then call db.flush() (via run_in_threadpool) and read the
Issue.upvotes with db.query(...).scalar() while still in the same transaction,
and only after that call db.commit(); keep the atomic update using
db.query(Issue).update(...) and use run_in_threadpool for flush, scalar read,
and commit to ensure the value you return reflects the committed change.
In `@tests/test_blockchain.py`:
- Around line 24-63: test_blockchain_verification_success relies on
auto-incremented Issue IDs and shared DB state; ensure the test starts with a
clean issues table to avoid ordering-dependent failures by clearing the table or
using a transactional rollback fixture. Modify the test (or its fixtures) to
delete all Issue rows and commit/flush before creating test data (or run the
test inside a rollbacking transaction), referencing the Issue model and the
db_session fixture (or replace with a per-test transactional DB fixture) so the
chain lookup (Issue.id < issue_id) is deterministic.
- Around line 9-15: The test fixture db_session currently calls
Base.metadata.create_all(bind=engine) and drop_all against the application's
real engine (engine, Session, Base), which is destructive; update db_session to
create and use a dedicated in-memory SQLite Engine (e.g.,
create_engine("sqlite:///:memory:"), configure a Session bound to that engine)
and run Base.metadata.create_all and drop_all against that test engine so real
DB is untouched, then close the session; additionally replace explicit
comparisons to True/False in the tests (the assertions that reference
data["is_valid"] on the failing lines) with truthiness checks (use assert
data["is_valid"] and assert not data["is_valid"]) to satisfy E712.
🧹 Nitpick comments (3)
backend/routers/issues.py (1)
433-443: Remove unused `final_status` variable.
`final_status` is assigned on Lines 433 and 443 but never read. The `VoteResponse` schema doesn't include a status field, so this is dead code.

Proposed fix:

```diff
-    final_status = updated_issue.status if updated_issue else "open"
     final_upvotes = updated_issue.upvotes if updated_issue else 0

     if updated_issue and updated_issue.upvotes >= 5 and updated_issue.status == "open":
         await run_in_threadpool(
             lambda: db.query(Issue).filter(Issue.id == issue_id).update({
                 Issue.status: "verified"
             }, synchronize_session=False)
         )
         logger.info(f"Issue {issue_id} automatically verified due to {updated_issue.upvotes} upvotes")
-        final_status = "verified"
```

tests/test_verification_feature.py (1)
13-66: Dead mock setup: `mock_issue` at Lines 16-22 is overridden and never used.
`mock_issue` and the initial `return_value` on Line 22 are immediately overridden by the `side_effect` set on Line 47. The mock_issue object is never consumed. Remove the dead setup to reduce confusion.

Proposed cleanup:

```diff
 def test_manual_verification_upvote(client):
     # Mock DB dependency to return a fake issue
     mock_db = MagicMock()
-    mock_issue = MagicMock()
-    mock_issue.id = 1
-    mock_issue.status = "open"
-    mock_issue.upvotes = 2  # Initial upvotes
-
-    # We need to mock the query chain: db.query().filter().first()
-    mock_db.query.return_value.filter.return_value.first.return_value = mock_issue
-
-    # Mock Issue class for Issue.upvotes expression
-    # Since we can't easily mock the expression evaluation in SQLAlchemy without a real DB or complex mocks,
-    # we just verify the flow doesn't crash and calls commits/flush.
-    # We patch run_in_threadpool to execute immediately or mock it.
-
-    # Actually, we can rely on MagicMock accepting everything.

     # Override dependency
     app.dependency_overrides[get_db] = lambda: mock_db
```

tests/test_blockchain.py (1)
55-55: Use truthiness checks instead of `== True` / `== False`. Per Ruff E712, prefer idiomatic assertions.

Proposed fix:

```diff
-    assert data["is_valid"] == True
+    assert data["is_valid"] is True
-    assert data["is_valid"] == True
+    assert data["is_valid"] is True
-    assert data["is_valid"] == False
+    assert data["is_valid"] is False
```

Also applies to: 62-62, 79-79
```diff
 @router.post("/api/issues/{issue_id}/vote", response_model=VoteResponse)
-def upvote_issue(issue_id: int, db: Session = Depends(get_db)):
-    issue = db.query(Issue).filter(Issue.id == issue_id).first()
-    if not issue:
-        raise HTTPException(status_code=404, detail="Issue not found")
+async def upvote_issue(issue_id: int, db: Session = Depends(get_db)):
+    """
+    Upvote an issue.
+    Optimized: Performs atomic update without loading full model instance.
+    """
+    # Use update() for atomic increment and to avoid full model overhead
+    updated_count = await run_in_threadpool(
+        lambda: db.query(Issue).filter(Issue.id == issue_id).update({
+            Issue.upvotes: func.coalesce(Issue.upvotes, 0) + 1
+        }, synchronize_session=False)
+    )

-    # Increment upvotes atomically
-    if issue.upvotes is None:
-        issue.upvotes = 0
+    if not updated_count:
+        raise HTTPException(status_code=404, detail="Issue not found")

-    # Use SQLAlchemy expression for atomic update
-    issue.upvotes = Issue.upvotes + 1
+    await run_in_threadpool(db.commit)

-    db.commit()
-    db.refresh(issue)
+    # Fetch only the updated upvote count using column projection
+    new_upvotes = await run_in_threadpool(
+        lambda: db.query(Issue.upvotes).filter(Issue.id == issue_id).scalar()
+    )

     return VoteResponse(
-        id=issue.id,
-        upvotes=issue.upvotes,
+        id=issue_id,
+        upvotes=new_upvotes or 0,
         message="Issue upvoted successfully"
     )
```
Potential TOCTOU: upvote count returned may not match the committed value.
After db.commit() on Line 264, the session's transaction is closed. The subsequent scalar() query on Lines 267-269 starts a new implicit transaction, so a concurrent upvote could slip in between, causing the returned count to differ from the one this request committed. The impact is minor (cosmetic staleness), but it can be avoided by using flush + scalar + commit (the same pattern you correctly use in verify_issue_endpoint).

Proposed fix: flush-then-read-then-commit

```diff
-    await run_in_threadpool(db.commit)
-
-    # Fetch only the updated upvote count using column projection
-    new_upvotes = await run_in_threadpool(
-        lambda: db.query(Issue.upvotes).filter(Issue.id == issue_id).scalar()
-    )
+    await run_in_threadpool(db.flush)
+
+    # Fetch the updated upvote count within the same transaction
+    new_upvotes = await run_in_threadpool(
+        lambda: db.query(Issue.upvotes).filter(Issue.id == issue_id).scalar()
+    )
+
+    await run_in_threadpool(db.commit)
```

(A condensed sketch of this ordering appears after the AI-agent prompt below.)

🧰 Tools
+ await run_in_threadpool(db.commit)🧰 Tools
🪛 Ruff (0.14.14)
[warning] 249-249: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
In `@backend/routers/issues.py` around lines 248 - 275, The returned upvote count
can race with concurrent updates because you commit before reading; modify
upvote_issue so you perform the update, then call db.flush() (via
run_in_threadpool) and read the Issue.upvotes with db.query(...).scalar() while
still in the same transaction, and only after that call db.commit(); keep the
atomic update using db.query(Issue).update(...) and use run_in_threadpool for
flush, scalar read, and commit to ensure the value you return reflects the
committed change.
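For illustration, a condensed sketch of the ordering this prompt asks for. It assumes the app's `Issue` model and `VoteResponse` schema at the paths the tests use, keeps the review's multi-hop `run_in_threadpool` style (a later Copilot comment questions sharing one Session across hops), and is not the final implementation:

```python
from fastapi import HTTPException
from fastapi.concurrency import run_in_threadpool
from sqlalchemy import func

from backend.models import Issue          # per the repo layout
from backend.schemas import VoteResponse  # assumed location


async def upvote_issue(issue_id: int, db):
    # Atomic in-database increment; no full model instance is loaded.
    updated_count = await run_in_threadpool(
        lambda: db.query(Issue).filter(Issue.id == issue_id).update(
            {Issue.upvotes: func.coalesce(Issue.upvotes, 0) + 1},
            synchronize_session=False,
        )
    )
    if not updated_count:
        raise HTTPException(status_code=404, detail="Issue not found")

    # flush (not commit) so the read below sees our write in the same transaction
    await run_in_threadpool(db.flush)
    new_upvotes = await run_in_threadpool(
        lambda: db.query(Issue.upvotes).filter(Issue.id == issue_id).scalar()
    )
    # commit only after the read, so the returned count matches this request's write
    await run_in_threadpool(db.commit)

    return VoteResponse(id=issue_id, upvotes=new_upvotes or 0,
                        message="Issue upvoted successfully")
```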
```python
@router.get("/api/issues/{issue_id}/blockchain-verify", response_model=BlockchainVerificationResponse)
async def verify_blockchain_integrity(issue_id: int, db: Session = Depends(get_db)):
    """
    Verify the cryptographic integrity of a report using the blockchain-style chaining.
    Optimized: Uses column projection to fetch only needed data.
    """
    # Fetch current issue data
    current_issue = await run_in_threadpool(
        lambda: db.query(
            Issue.id, Issue.description, Issue.category, Issue.integrity_hash
        ).filter(Issue.id == issue_id).first()
    )

    if not current_issue:
        raise HTTPException(status_code=404, detail="Issue not found")

    # Fetch previous issue's integrity hash to verify the chain
    prev_issue_hash = await run_in_threadpool(
        lambda: db.query(Issue.integrity_hash).filter(Issue.id < issue_id).order_by(Issue.id.desc()).first()
    )

    prev_hash = prev_issue_hash[0] if prev_issue_hash and prev_issue_hash[0] else ""

    # Recompute hash based on current data and previous hash
    # Chaining logic: hash(description|category|prev_hash)
    hash_content = f"{current_issue.description}|{current_issue.category}|{prev_hash}"
    computed_hash = hashlib.sha256(hash_content.encode()).hexdigest()

    is_valid = (computed_hash == current_issue.integrity_hash)

    if is_valid:
        message = "Integrity verified. This report is cryptographically sealed and has not been tampered with."
    else:
        message = "Integrity check failed! The report data does not match its cryptographic seal."

    return BlockchainVerificationResponse(
        is_valid=is_valid,
        current_hash=current_issue.integrity_hash,
        computed_hash=computed_hash,
        message=message
    )
```
Blockchain chain breaks if intermediate issues are deleted.
The verification fetches the previous issue via Issue.id < issue_id (Line 614), but the hash was originally computed against whatever issue was last at creation time (Lines 170-171). If any issue in the chain is deleted after creation, the prev_hash lookup will resolve to a different issue, causing all subsequent verifications to fail.
Consider one of:
- Storing `prev_issue_id` alongside `integrity_hash` so verification always uses the correct predecessor.
- Soft-deleting issues instead of hard-deleting, to preserve the chain.

(A creation-side sketch follows the AI-agent prompt below.)

```shell
#!/bin/bash
# Check if there's any hard-delete logic for issues in the codebase
rg -n --type=py 'delete.*Issue|Issue.*delete|\.delete\(\)' -g '!*test*' -g '!*migration*'
```

🧰 Tools
🪛 Ruff (0.14.14)
[warning] 597-597: Do not perform function call Depends in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable
(B008)
🤖 Prompt for AI Agents
In `@backend/routers/issues.py` around lines 596 - 636, The verification currently
finds the predecessor by querying Issue.id < issue_id (in
verify_blockchain_integrity) which breaks the chain if intermediate issues are
hard-deleted; modify the data model and verification to persist the original
predecessor and use it for verification: add a prev_issue_id column when
computing/storing Issue.integrity_hash at creation (the code that sets
integrity_hash should also store prev_issue_id), update
verify_blockchain_integrity to fetch Issue.integrity_hash and
Issue.prev_issue_id for the current_issue and then look up the predecessor by
that prev_issue_id (fall back to empty string only if prev_issue_id is null), or
alternatively implement soft-delete for Issue so historical predecessors remain;
update any code paths that compute integrity_hash to set the new prev_issue_id
consistently.
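A sketch of what the creation-side change could look like under the prompt's first option. The `prev_issue_id` column and the `seal_new_issue` helper are hypothetical until the model change lands; the hash rule matches the endpoint above:

```python
import hashlib

from backend.models import Issue  # per the repo layout


def seal_new_issue(db, description: str, category: str) -> Issue:
    # Find the current chain tip once, at creation time, and persist the
    # pointer; later deletions then cannot reroute the predecessor lookup.
    tip = db.query(Issue.id, Issue.integrity_hash).order_by(Issue.id.desc()).first()
    prev_id, prev_hash = (tip.id, tip.integrity_hash or "") if tip else (None, "")

    content = f"{description}|{category}|{prev_hash}"
    issue = Issue(
        description=description,
        category=category,
        integrity_hash=hashlib.sha256(content.encode()).hexdigest(),
        prev_issue_id=prev_id,  # hypothetical column proposed by this review
    )
    db.add(issue)
    db.commit()
    return issue
```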
```python
    # Fetch previous issue's integrity hash to verify the chain
    prev_issue_hash = await run_in_threadpool(
        lambda: db.query(Issue.integrity_hash).filter(Issue.id < issue_id).order_by(Issue.id.desc()).first()
    )

    prev_hash = prev_issue_hash[0] if prev_issue_hash and prev_issue_hash[0] else ""
```
Chain lookup assumes issue ID ordering is stable; consider concurrent creation.
If two issues are created simultaneously, both might chain off the same predecessor (race in create_issue). Then Issue.id < issue_id ORDER BY id DESC here would pick the wrong predecessor for one of them, causing verification to fail. This is inherent to the chain design but worth documenting as a known limitation.
🤖 Prompt for AI Agents
In `@backend/routers/issues.py` around lines 612 - 617, The chain verification
here assumes monotonic Issue.id ordering which can race when issues are created
concurrently; update the lookup in the block using prev_issue_hash (the
run_in_threadpool lambda querying Issue.integrity_hash with Issue.id < issue_id)
to order deterministically by a creation timestamp first (e.g., ORDER BY
Issue.created_at DESC, Issue.id DESC) so simultaneous inserts are disambiguated,
and add an inline comment in the same function describing this known limitation
and recommending a stronger fix (persisting a chain_parent / previous_issue_id
on create_issue) for future work.
```python
@pytest.fixture
def db_session():
    Base.metadata.create_all(bind=engine)
    session = Session(bind=engine)
    yield session
    session.close()
    Base.metadata.drop_all(bind=engine)
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check the engine configuration in backend/database.py
rg -n 'engine|DATABASE_URL|create_engine' -C3 --type=py backend/database.py
```

Repository: RohanExploit/VishwaGuru
Length of output: 1024

🏁 Script executed:

```shell
#!/bin/bash
# Check for conftest.py files and pytest configuration
find . -maxdepth 3 -name "conftest.py" -o -name "pytest.ini" -o -name "setup.cfg" -o -name "pyproject.toml" | head -20
cat tests/conftest.py 2>/dev/null || echo "No conftest.py in tests/"
cat conftest.py 2>/dev/null || echo "No conftest.py in root/"
```

Repository: RohanExploit/VishwaGuru
Length of output: 115

🏁 Script executed:

```shell
#!/bin/bash
# Check for GitHub Actions or CI configuration
find . -maxdepth 3 -name "*.yml" -o -name "*.yaml" | grep -E "(workflow|ci|github)" | head -10
ls -la .github/workflows/ 2>/dev/null || echo "No .github/workflows/"
# Also check the actual test file to see all fixtures and dependencies
head -30 tests/test_blockchain.py
```

Repository: RohanExploit/VishwaGuru
Length of output: 1637

🏁 Script executed:

```shell
#!/bin/bash
# Check if there are other test files and their patterns
find tests/ -name "test_*.py" -o -name "*_test.py" | head -10
wc -l tests/test_*.py 2>/dev/null | tail -1
# Check if there's a requirements.txt or similar that shows pytest usage
cat requirements.txt 2>/dev/null || cat setup.py 2>/dev/null || cat pyproject.toml 2>/dev/null | head -30
```

Repository: RohanExploit/VishwaGuru
Length of output: 348

🏁 Script executed:

```shell
#!/bin/bash
# Check for project config
if [ -f pyproject.toml ]; then
  cat pyproject.toml
elif [ -f requirements.txt ]; then
  cat requirements.txt
elif [ -f setup.py ]; then
  cat setup.py
fi
# Also check one of the other test files to see if they use same pattern
head -20 tests/test_api_validation.py
```

Repository: RohanExploit/VishwaGuru
Length of output: 683

🏁 Script executed:

```shell
#!/bin/bash
# Check the E712 violations (== True/False comparisons) in test_blockchain.py
rg -n "== True|== False" tests/test_blockchain.py
```

Repository: RohanExploit/VishwaGuru
Length of output: 184
Tests use the application's real database engine — this is destructive and not isolated.
Base.metadata.create_all(bind=engine) and drop_all on lines 11 and 15 execute against the actual engine from backend.database. Since engine is configured to use DATABASE_URL (pointing to PostgreSQL in production) or the persistent file ./data/issues.db locally, tests will create and drop all tables in a real database. This corrupts dev state and risks destroying production data if tests run with production credentials.
Use a dedicated in-memory SQLite engine for test isolation:
Proposed fix: isolated test engine

```diff
+from sqlalchemy import create_engine
+from sqlalchemy.orm import sessionmaker
 from fastapi.testclient import TestClient
 import pytest
 import hashlib
 from backend.main import app
-from backend.database import get_db, Base, engine
+from backend.database import get_db, Base
 from backend.models import Issue
-from sqlalchemy.orm import Session
+
+test_engine = create_engine("sqlite:///:memory:")
+TestSessionLocal = sessionmaker(bind=test_engine)

 @pytest.fixture
 def db_session():
-    Base.metadata.create_all(bind=engine)
-    session = Session(bind=engine)
+    Base.metadata.create_all(bind=test_engine)
+    session = TestSessionLocal()
     yield session
     session.close()
-    Base.metadata.drop_all(bind=engine)
+    Base.metadata.drop_all(bind=test_engine)
```

Also fix the E712 style violations on lines 55, 62, 79: use truthiness checks (assert data["is_valid"] and assert not data["is_valid"]) instead of explicit comparisons to True/False. (An isolated-engine sketch that also handles the TestClient threading caveat appears after the AI-agent prompt below.)
🤖 Prompt for AI Agents
In `@tests/test_blockchain.py` around lines 9 - 15, The test fixture db_session
currently calls Base.metadata.create_all(bind=engine) and drop_all against the
application's real engine (engine, Session, Base), which is destructive; update
db_session to create and use a dedicated in-memory SQLite Engine (e.g.,
create_engine("sqlite:///:memory:"), configure a Session bound to that engine)
and run Base.metadata.create_all and drop_all against that test engine so real
DB is untouched, then close the session; additionally replace explicit
comparisons to True/False in the tests (the assertions that reference
data["is_valid"] on the failing lines) with truthiness checks (use assert
data["is_valid"] and assert not data["is_valid"]) to satisfy E712.
```python
def test_blockchain_verification_success(client, db_session):
    # Create first issue
    hash1_content = "First issue|Road|"
    hash1 = hashlib.sha256(hash1_content.encode()).hexdigest()

    issue1 = Issue(
        description="First issue",
        category="Road",
        integrity_hash=hash1
    )
    db_session.add(issue1)
    db_session.commit()
    db_session.refresh(issue1)

    # Create second issue chained to first
    hash2_content = f"Second issue|Garbage|{hash1}"
    hash2 = hashlib.sha256(hash2_content.encode()).hexdigest()

    issue2 = Issue(
        description="Second issue",
        category="Garbage",
        integrity_hash=hash2
    )
    db_session.add(issue2)
    db_session.commit()
    db_session.refresh(issue2)

    # Verify first issue
    response = client.get(f"/api/issues/{issue1.id}/blockchain-verify")
    assert response.status_code == 200
    data = response.json()
    assert data["is_valid"] == True
    assert data["current_hash"] == hash1

    # Verify second issue
    response = client.get(f"/api/issues/{issue2.id}/blockchain-verify")
    assert response.status_code == 200
    data = response.json()
    assert data["is_valid"] == True
    assert data["current_hash"] == hash2
```
Tests lack isolation between test cases — shared DB state can cause ordering-dependent failures.
test_blockchain_verification_success creates issues with IDs that depend on auto-increment state. If test_upvote_optimization runs first, the IDs and the "previous issue" chain lookup (Issue.id < issue_id) will resolve differently, potentially causing hash verification to fail. Each test should start with a clean slate. Consider adding a teardown that truncates the issues table, or use a transaction-rollback approach per test.
🧰 Tools
🪛 Ruff (0.14.14)
[error] 55-55: Avoid equality comparisons to True; use `if data["is_valid"]:` for truth checks. Replace with `data["is_valid"]` (E712)
[error] 62-62: Avoid equality comparisons to True; use `if data["is_valid"]:` for truth checks. Replace with `data["is_valid"]` (E712)
🤖 Prompt for AI Agents
In `@tests/test_blockchain.py` around lines 24 - 63,
test_blockchain_verification_success relies on auto-incremented Issue IDs and
shared DB state; ensure the test starts with a clean issues table to avoid
ordering-dependent failures by clearing the table or using a transactional
rollback fixture. Modify the test (or its fixtures) to delete all Issue rows and
commit/flush before creating test data (or run the test inside a rollbacking
transaction), referencing the Issue model and the db_session fixture (or replace
with a per-test transactional DB fixture) so the chain lookup (Issue.id <
issue_id) is deterministic.
Pull request overview
Optimizes issue voting/verification DB operations and adds a new endpoint to verify report integrity via a SHA-256 “blockchain-style” hash chain.
Changes:
- Refactors `/api/issues/{issue_id}/vote` and `/api/issues/{issue_id}/verify` to use atomic `update()` operations and column projection.
- Adds `GET /api/issues/{issue_id}/blockchain-verify` plus a new `BlockchainVerificationResponse` schema.
- Updates/introduces tests for the verification flow and blockchain integrity checks.
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 7 comments.
Show a summary per file
| File | Description |
|---|---|
| backend/routers/issues.py | Implements atomic update patterns for voting/verification and adds blockchain verification endpoint. |
| backend/schemas.py | Adds BlockchainVerificationResponse response model. |
| tests/test_blockchain.py | New tests for blockchain verification endpoint and upvote behavior. |
| tests/test_verification_feature.py | Updates manual verification test to align with update()-based implementation. |
| .jules/bolt.md | Documents performance learnings and transaction consolidation guidance. |
```python
    # Check that update was called to set status to verified
    # We can verify that update was called with Issue.status: "verified"
    # Since we are using mocks, we check if update was called at least twice
    # (once for upvotes, once for status)
    assert mock_db.query.return_value.filter.return_value.update.call_count >= 2
```
Copilot AI (Feb 8, 2026)
The assertion update.call_count >= 2 is too weak to prove the endpoint actually set the status to verified; it would still pass if the code accidentally performed two upvote updates or never issued the status update. Assert that update() was called with a mapping that includes Issue.status: "verified" (and ideally that the threshold condition was evaluated via the second first() result) to make this test meaningful.
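One way to tighten the assertion, sketched against the mock chain the test already builds (`mock_db` is the test's existing MagicMock; exact wiring may differ):

```python
from backend.models import Issue  # per the repo layout

# Inspect every update() call and require at least one whose positional
# mapping sets the status column to "verified", instead of counting calls.
update_mock = mock_db.query.return_value.filter.return_value.update
status_updates = [
    c for c in update_mock.call_args_list
    if c.args and isinstance(c.args[0], dict) and c.args[0].get(Issue.status) == "verified"
]
assert status_updates, "expected an update() call setting Issue.status to 'verified'"
```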
```python
    Base.metadata.create_all(bind=engine)
    session = Session(bind=engine)
    yield session
    session.close()
    Base.metadata.drop_all(bind=engine)
```
Copilot AI (Feb 8, 2026)
This fixture uses the application's global engine (which is configured from DATABASE_URL) and then calls Base.metadata.drop_all(). If a developer/CI environment points DATABASE_URL at a non-test database, running this test can drop real tables. Use a dedicated test engine (e.g., in-memory SQLite with StaticPool) or ensure the engine is explicitly a test database before calling drop_all.
```python
    # Performance Boost: Fetch only necessary columns
    issue_data = await run_in_threadpool(
        lambda: db.query(
            Issue.id, Issue.category, Issue.status, Issue.upvotes
        ).filter(Issue.id == issue_id).first()
    )
```
Copilot AI (Feb 8, 2026)
Like upvote_issue, this endpoint performs several separate run_in_threadpool() DB operations using the same synchronous SQLAlchemy Session (db). Because the threadpool can hop threads between awaits, this can use a single Session across threads (not supported by SQLAlchemy, and can break with SQLite even with check_same_thread=False). Prefer keeping all DB work in a single threadpool call or making the endpoint sync / switching to AsyncSession.
```diff
+    updated_count = await run_in_threadpool(
+        lambda: db.query(Issue).filter(Issue.id == issue_id).update({
+            Issue.upvotes: func.coalesce(Issue.upvotes, 0) + 1
+        }, synchronize_session=False)
+    )

-    # Increment upvotes atomically
-    if issue.upvotes is None:
-        issue.upvotes = 0
+    if not updated_count:
+        raise HTTPException(status_code=404, detail="Issue not found")

-    # Use SQLAlchemy expression for atomic update
-    issue.upvotes = Issue.upvotes + 1
+    await run_in_threadpool(db.commit)

-    db.commit()
-    db.refresh(issue)
+    # Fetch only the updated upvote count using column projection
+    new_upvotes = await run_in_threadpool(
+        lambda: db.query(Issue.upvotes).filter(Issue.id == issue_id).scalar()
+    )
```
Copilot AI (Feb 8, 2026)
run_in_threadpool() is invoked multiple times with the same synchronous SQLAlchemy Session (db). Session is not thread-safe, and Starlette's threadpool may execute each call on different worker threads, leading to cross-thread session/connection usage (can surface as SQLite thread errors or undefined behavior). Consider making this endpoint sync (def) and running DB work in the request thread, or wrapping all DB operations in a single run_in_threadpool call, or migrating to SQLAlchemy async sessions (AsyncSession) to avoid thread hopping.
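A sketch of the "single run_in_threadpool call" option this comment suggests. It assumes the module's existing imports and the `Issue`/`VoteResponse`/`get_db` names used elsewhere in this PR; all Session work stays on one worker thread:

```python
from fastapi import Depends, HTTPException
from fastapi.concurrency import run_in_threadpool
from sqlalchemy import func
from sqlalchemy.orm import Session

from backend.database import get_db       # per the repo layout
from backend.models import Issue
from backend.schemas import VoteResponse  # assumed location


def _upvote_sync(db: Session, issue_id: int) -> int:
    # Update, flush, read, and commit all happen on the same worker thread.
    updated = db.query(Issue).filter(Issue.id == issue_id).update(
        {Issue.upvotes: func.coalesce(Issue.upvotes, 0) + 1},
        synchronize_session=False,
    )
    if not updated:
        raise HTTPException(status_code=404, detail="Issue not found")
    db.flush()
    new_upvotes = db.query(Issue.upvotes).filter(Issue.id == issue_id).scalar()
    db.commit()
    return new_upvotes or 0


async def upvote_issue(issue_id: int, db: Session = Depends(get_db)):
    # One threadpool hop for the whole transaction: no cross-thread Session use.
    new_upvotes = await run_in_threadpool(_upvote_sync, db, issue_id)
    return VoteResponse(id=issue_id, upvotes=new_upvotes,
                        message="Issue upvoted successfully")
```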
```python
    final_status = updated_issue.status if updated_issue else "open"
    final_upvotes = updated_issue.upvotes if updated_issue else 0

    if updated_issue and updated_issue.upvotes >= 5 and updated_issue.status == "open":
        await run_in_threadpool(
            lambda: db.query(Issue).filter(Issue.id == issue_id).update({
                Issue.status: "verified"
            }, synchronize_session=False)
        )
        logger.info(f"Issue {issue_id} automatically verified due to {updated_issue.upvotes} upvotes")
        final_status = "verified"
```
Copilot AI (Feb 8, 2026)
final_status is computed/updated but never used in the response or persisted beyond the subsequent update(). This is dead state that makes the control flow harder to follow. Either remove final_status entirely or include it in the response model if callers need to know whether auto-verification happened.
```python
            Issue.id, Issue.description, Issue.category, Issue.integrity_hash
        ).filter(Issue.id == issue_id).first()
    )

    if not current_issue:
        raise HTTPException(status_code=404, detail="Issue not found")

    # Fetch previous issue's integrity hash to verify the chain
    prev_issue_hash = await run_in_threadpool(
        lambda: db.query(Issue.integrity_hash).filter(Issue.id < issue_id).order_by(Issue.id.desc()).first()
    )

    prev_hash = prev_issue_hash[0] if prev_issue_hash and prev_issue_hash[0] else ""
```
Copilot AI (Feb 8, 2026)
The verification logic infers the “previous hash” by selecting the issue with the greatest id less than issue_id. This can produce false integrity failures if issues are created concurrently (two requests can compute their integrity_hash against the same previous record, creating a fork) or if rows are deleted. To make verification deterministic, store an explicit link at creation time (e.g., previous_issue_id or previous_integrity_hash) and verify against that stored value instead of guessing by id order.
Suggested change:

```diff
         lambda: db.query(
-            Issue.id, Issue.description, Issue.category, Issue.integrity_hash
+            Issue.id,
+            Issue.description,
+            Issue.category,
+            Issue.integrity_hash,
+            Issue.previous_integrity_hash,
         ).filter(Issue.id == issue_id).first()
     )

     if not current_issue:
         raise HTTPException(status_code=404, detail="Issue not found")

-    # Fetch previous issue's integrity hash to verify the chain
-    prev_issue_hash = await run_in_threadpool(
-        lambda: db.query(Issue.integrity_hash).filter(Issue.id < issue_id).order_by(Issue.id.desc()).first()
-    )
-
-    prev_hash = prev_issue_hash[0] if prev_issue_hash and prev_issue_hash[0] else ""
+    # Use the explicitly stored previous integrity hash to verify the chain
+    prev_hash = current_issue.previous_integrity_hash or ""
```
```python
from sqlalchemy.orm import Session

@pytest.fixture
def db_session():
    Base.metadata.create_all(bind=engine)
    session = Session(bind=engine)
    yield session
    session.close()
    Base.metadata.drop_all(bind=engine)

@pytest.fixture
def client(db_session):
    app.dependency_overrides[get_db] = lambda: db_session
```
Copilot AI (Feb 8, 2026)
This test overrides get_db to return a single Session instance (db_session) that was created in the pytest thread, but requests executed via TestClient run the app in a different thread. Sharing a SQLAlchemy Session across threads is unsafe and can fail (especially with SQLite). Prefer overriding get_db with a generator that creates a fresh SessionLocal() per request (and closes it), or create a dedicated test engine/sessionmaker per test and instantiate the session inside the override.
Suggested change:

```diff
-from sqlalchemy.orm import Session
+from sqlalchemy.orm import Session, sessionmaker
+
+TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
+
+def override_get_db():
+    db = TestingSessionLocal()
+    try:
+        yield db
+    finally:
+        db.close()

 @pytest.fixture
 def db_session():
     Base.metadata.create_all(bind=engine)
-    session = Session(bind=engine)
-    yield session
-    session.close()
+    session = TestingSessionLocal()
+    try:
+        yield session
+    finally:
+        session.close()
     Base.metadata.drop_all(bind=engine)

 @pytest.fixture
 def client(db_session):
-    app.dependency_overrides[get_db] = lambda: db_session
+    app.dependency_overrides[get_db] = override_get_db
```
Fixed Render deployment failure by adding missing mandatory dependencies (python-jose and passlib) to requirements-render.txt. Finalized blockchain verification feature with high-performance column projection. Optimized 'upvote_issue' and 'verify_issue_endpoint' for better database efficiency. Ensured all core tests and new blockchain tests pass. Co-authored-by: RohanExploit <178623867+RohanExploit@users.noreply.github.com>
Fixed a critical return type mismatch in 'process_uploaded_image' that caused crashes in detection endpoints. Updated 'process_uploaded_image' to return a tuple of (PIL.Image, bytes) for better performance and compatibility. Optimized 'save_processed_image' to handle raw bytes directly. Verified that 'create_issue' and all detection endpoints now correctly handle the optimized image processing pipeline. Ensured the application starts successfully and all core tests pass. Co-authored-by: RohanExploit <178623867+RohanExploit@users.noreply.github.com>
🔍 Quality Reminder
- Fixed Render deployment failure by ensuring all mandatory dependencies are in requirements-render.txt.
- Corrected IssueCategory and UserRole Enum mixup in schemas.py.
- Improved image processing robustness in utils.py (handling RGBA mode and format preservation).
- Fixed unit test mocks for create_issue and cache invalidation.
- Added 'from __future__ import annotations' for better type hint compatibility.
- Verified stable application startup and passed all relevant backend tests.

Co-authored-by: RohanExploit <178623867+RohanExploit@users.noreply.github.com>
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@backend/requirements-render.txt`:
- Around line 19-20: Pin the crypto/auth dependencies in requirements to avoid
runtime breakage: change python-jose[cryptography] to python-jose==3.5.0 and add
cryptography>=46.0.3; for passlib[bcrypt] either pin bcrypt to a version
compatible with passlib 1.7.4 and re-run auth tests, or migrate off passlib
1.7.4 (e.g., switch to the libpass fork v1.9.3 or use bcrypt directly) and then
verify password handling semantics in the authentication flows; update the
requirements entry for passlib/bcrypt accordingly and run the test suite to
confirm fixes.
In `@backend/routers/issues.py`:
- Around line 435-445: final_status and final_upvotes are dead variables (never
used in the response) — remove them and keep only the necessary side-effect
logic: when updated_issue exists and meets the upvote threshold update the DB
via the run_in_threadpool call and call logger.info as currently done; delete
assignments to final_status/final_upvotes and any references and do not try to
add status to VoteResponse. Locate the block that checks updated_issue.upvotes
>= 5 and the run_in_threadpool/db.query(Issue)...update call and remove the
preparatory final_status/final_upvotes assignments around updated_issue to
eliminate the unused variables.
🧹 Nitpick comments (2)
check_imports.py (1)
1-36: Unused `os` import and repetitive try/except pattern.
`os` is imported on Line 2 but never used. Also, `traceback` is re-imported inside each `except` block; move it to the top-level imports. The three try/except blocks are nearly identical and could be consolidated into a loop for maintainability:

♻️ Suggested refactor

```diff
 import sys
-import os
 from pathlib import Path
+import traceback
+import importlib

 # Add project root to path
 sys.path.insert(0, str(Path(__file__).parent.absolute()))

-try:
-    print("Importing backend.main...")
-    from backend.main import app
-    print("Successfully imported backend.main")
-except Exception as e:
-    print(f"FAILED to import backend.main: {e}")
-    import traceback
-    traceback.print_exc()
-    sys.exit(1)
-
-try:
-    print("Importing backend.routers.issues...")
-    from backend.routers import issues
-    print("Successfully imported backend.routers.issues")
-except Exception as e:
-    print(f"FAILED to import backend.routers.issues: {e}")
-    import traceback
-    traceback.print_exc()
-    sys.exit(1)
-
-try:
-    print("Importing backend.routers.detection...")
-    from backend.routers import detection
-    print("Successfully imported backend.routers.detection")
-except Exception as e:
-    print(f"FAILED to import backend.routers.detection: {e}")
-    import traceback
-    traceback.print_exc()
-    sys.exit(1)
+modules = [
+    ("backend.main", "app"),
+    ("backend.routers.issues", None),
+    ("backend.routers.detection", None),
+]
+
+for module_path, attr in modules:
+    try:
+        print(f"Importing {module_path}...")
+        mod = importlib.import_module(module_path)
+        if attr:
+            getattr(mod, attr)
+        print(f"Successfully imported {module_path}")
+    except Exception as e:
+        print(f"FAILED to import {module_path}: {e}")
+        traceback.print_exc()
+        sys.exit(1)
```

backend/utils.py (1)
143-198: Return type change and format handling look correct overall.
The refactored `process_uploaded_image_sync` properly captures `original_format` before any transforms and returns the `(Image, bytes)` tuple consistently.
One minor note: `quality=85` on Line 195 is silently ignored for lossless formats (PNG, BMP, TIFF). This is harmless but could be misleading. Consider passing `quality` conditionally:

♻️ Optional: pass quality only for lossy formats

```diff
-        img_no_exif.save(output, format=fmt, quality=85)
+        save_kwargs = {"format": fmt}
+        if fmt.upper() in ("JPEG", "WEBP"):
+            save_kwargs["quality"] = 85
+        img_no_exif.save(output, **save_kwargs)
```
```text
python-jose[cryptography]
passlib[bcrypt]
```
🧩 Analysis chain
🌐 Web query:
latest secure versions python-jose cryptography 2024 2025
💡 Result:
- python-jose: latest release is 3.5.0 (May 28, 2025). [1]
- cryptography: latest release is 46.0.4 (Jan 28, 2026); the latest release in 2025 was 46.0.3 (Oct 15, 2025). [2]
Sources:
[1] python-jose (PyPI)
[2] cryptography (PyPI)
🌐 Web query:
passlib bcrypt latest version security advisories 2024 2025
💡 Result:
Latest versions (as of Feb 8, 2026)
- passlib: 1.7.4 (latest release Dec 9, 2022) (i.e., no new upstream releases in 2024–2025). [1]
- bcrypt (Python / PyPI “bcrypt”): 5.0.0 (released Sep 25, 2025). [2]
- (Related) libpass (active fork of passlib): 1.9.3 (released Oct 9, 2025). [3]
Security advisories / known vulns (2024–2025)
- passlib: No new 2024–2025 advisories surfaced in common vulnerability databases; the primary known issue referenced is PVE-2021-25921 affecting passlib < 1.4 (so not affecting 1.7.4). [1]
- bcrypt (PyPI): Safety and Snyk both report no known direct vulnerabilities affecting bcrypt 5.0.0. [4][5]
Important 2024–2025 “bcrypt + passlib” risk (compatibility / behavior changes)
Even without CVEs, there’s a practical risk: newer bcrypt releases have been incompatible with assumptions in passlib, and passlib hasn’t shipped updates to track those changes.
- bcrypt 4.1.0 was yanked specifically for "Incompatibility with assumptions made by passlib". [2]
- Downstream projects reported that passlib 1.7.4 + bcrypt 4.1.x can error at runtime (examples filed in 2024). [6][7]
- bcrypt 5.0.0 changed behavior so that passwords >72 bytes now raise ValueError (previously they were silently truncated), which can surface new failures depending on how passlib/apps handle inputs. [2]
Practical takeaway: if you must stay on passlib 1.7.4, you likely need to pin bcrypt and/or consider migrating to libpass or using bcrypt directly, then re-test authentication flows against the newer bcrypt semantics. [2][3][6]
Sources:
[1] passlib package security/version info (Safety DB) (data.safetycli.com)
[2] bcrypt PyPI changelog & release/yank notes (pypi.org)
[3] libpass PyPI release history (pypi.org)
[4] bcrypt package security info (Safety DB) (data.safetycli.com)
[5] bcrypt vulnerability status (Snyk) (security.snyk.io)
[6] Launchpad bug: “bcrypt version installed exceeds passlib support” (2024-04-22) (bugs.launchpad.net)
[7] Launchpad bug: passlib 1.7.4 with bcrypt 4.1.3 error (2024-06-10) (bugs.launchpad.net)
Citations:
- 1: https://data.safetycli.com/packages/pypi/passlib/?utm_source=openai
- 2: https://pypi.org/pypi/bcrypt?utm_source=openai
- 3: https://pypi.org/project/libpass/?utm_source=openai
- 4: https://data.safetycli.com/packages/pypi/bcrypt/?utm_source=openai
- 5: https://security.snyk.io/package/pip/bcrypt?utm_source=openai
- 6: https://bugs.launchpad.net/bugs/2063140?utm_source=openai
- 7: https://bugs.launchpad.net/bugs/2068879?utm_source=openai
Pin passlib[bcrypt] to avoid authentication failures; python-jose[cryptography] should also be pinned.
Unpinned auth/crypto dependencies create reproducibility and supply-chain risks. More critically, passlib 1.7.4 (unmaintained since 2022) has documented runtime incompatibility with bcrypt >= 4.1.0: bcrypt 4.1.0 was yanked due to incompatibility with passlib's assumptions, and bcrypt 5.0.0 enforces breaking changes (the 72-byte password limit now raises ValueError instead of silently truncating), causing reported authentication failures in production.
- For python-jose[cryptography]: pin to python-jose==3.5.0 and cryptography>=46.0.3.
- For passlib[bcrypt]: either (a) pin bcrypt to a version known compatible with passlib 1.7.4 and re-test password handling, or (b) consider migrating to the actively maintained libpass fork (1.9.3) or using bcrypt directly, then verify authentication flows against the new semantics.

(A pinned-requirements sketch follows the AI-agent prompt below.)
🤖 Prompt for AI Agents
In `@backend/requirements-render.txt` around lines 19 - 20, Pin the crypto/auth
dependencies in requirements to avoid runtime breakage: change
python-jose[cryptography] to python-jose==3.5.0 and add cryptography>=46.0.3;
for passlib[bcrypt] either pin bcrypt to a version compatible with passlib 1.7.4
and re-run auth tests, or migrate off passlib 1.7.4 (e.g., switch to the libpass
fork v1.9.3 or use bcrypt directly) and then verify password handling semantics
in the authentication flows; update the requirements entry for passlib/bcrypt
accordingly and run the test suite to confirm fixes.
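A possible requirements-render.txt entry reflecting this advice. The bcrypt pin is an assumption: 4.0.x predates the 4.1 breakage reported against passlib 1.7.4, and any pin chosen must be verified against the auth test suite:

```text
python-jose==3.5.0
cryptography>=46.0.3
passlib[bcrypt]==1.7.4
bcrypt==4.0.1  # assumed pin: passlib 1.7.4 predates bcrypt 4.1+; re-test auth flows
```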
```python
    final_status = updated_issue.status if updated_issue else "open"
    final_upvotes = updated_issue.upvotes if updated_issue else 0

    if updated_issue and updated_issue.upvotes >= 5 and updated_issue.status == "open":
        await run_in_threadpool(
            lambda: db.query(Issue).filter(Issue.id == issue_id).update({
                Issue.status: "verified"
            }, synchronize_session=False)
        )
        logger.info(f"Issue {issue_id} automatically verified due to {updated_issue.upvotes} upvotes")
        final_status = "verified"
```
final_status is assigned but never used.
final_status (Lines 435, 445) is never referenced in the return statement or anywhere else. VoteResponse has no status field, making this dead code.
Proposed fix:

```diff
-    final_status = updated_issue.status if updated_issue else "open"
     final_upvotes = updated_issue.upvotes if updated_issue else 0

     if updated_issue and updated_issue.upvotes >= 5 and updated_issue.status == "open":
         await run_in_threadpool(
             lambda: db.query(Issue).filter(Issue.id == issue_id).update({
                 Issue.status: "verified"
             }, synchronize_session=False)
         )
         logger.info(f"Issue {issue_id} automatically verified due to {updated_issue.upvotes} upvotes")
-        final_status = "verified"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
    final_upvotes = updated_issue.upvotes if updated_issue else 0

    if updated_issue and updated_issue.upvotes >= 5 and updated_issue.status == "open":
        await run_in_threadpool(
            lambda: db.query(Issue).filter(Issue.id == issue_id).update({
                Issue.status: "verified"
            }, synchronize_session=False)
        )
        logger.info(f"Issue {issue_id} automatically verified due to {updated_issue.upvotes} upvotes")
```
🧰 Tools
🪛 Ruff (0.14.14)
[error] 445-445: Local variable final_status is assigned to but never used
Remove assignment to unused variable final_status
(F841)
🤖 Prompt for AI Agents
In `@backend/routers/issues.py` around lines 435 - 445, final_status and
final_upvotes are dead variables (never used in the response) — remove them and
keep only the necessary side-effect logic: when updated_issue exists and meets
the upvote threshold update the DB via the run_in_threadpool call and call
logger.info as currently done; delete assignments to final_status/final_upvotes
and any references and do not try to add status to VoteResponse. Locate the
block that checks updated_issue.upvotes >= 5 and the
run_in_threadpool/db.query(Issue)...update call and remove the preparatory
final_status/final_upvotes assignments around updated_issue to eliminate the
unused variables.
⚡ Bolt is here with a performance boost!
💡 What:
- Optimized `upvote_issue` and `verify_issue_endpoint` in backend/routers/issues.py to use SQLAlchemy column projection (db.query(Issue.col)) and atomic `update()` statements.
- Implemented the `GET /api/issues/{issue_id}/blockchain-verify` endpoint to verify the SHA-256 integrity seal of reports.
- Consolidated `verify_issue_endpoint` to use a single transaction with `db.flush()` instead of multiple commits, reducing database round-trips.

🎯 Why:

📊 Impact:
- Reduced database I/O in `verify_issue_endpoint` by consolidating commits.

🔬 Measurement:
- pytest tests/test_blockchain.py (new) and pytest tests/test_verification_feature.py (updated).
- `verify_issue_endpoint` now performs a single commit.

PR created automatically by Jules for task 4946814769908671594 started by @RohanExploit
Summary by cubic
Speeds up issue voting and verification with atomic updates and a single transaction to reduce DB latency. Adds blockchain integrity verification and stabilizes Render deploy with robust image processing, enum fixes, and missing dependencies.
New Features
Refactors
Written for commit 2722b68. Summary will update on new commits.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests