⚡ Bolt: Optimized Blockchain Integrity Seal & Issue Retrieval#410
⚡ Bolt: Optimized Blockchain Integrity Seal & Issue Retrieval#410RohanExploit wants to merge 4 commits intomainfrom
Conversation
Implemented a robust blockchain-style integrity seal for reports with O(1) verification performance.
Key improvements:
- Added `previous_integrity_hash` to `Issue` model for direct chain linkage.
- Optimized `verify_blockchain_integrity` to use the stored link, avoiding expensive subqueries.
- Added an optimized `GET /api/issues/{issue_id}` endpoint with column projection to minimize DB load.
- Refactored frontend `VerifyView.jsx` to fetch issue data directly by ID (O(1)) instead of searching through the recent list (O(N)).
- Integrated a new "Blockchain Integrity Seal" section in the UI for transparent cryptographic verification.
- Updated database migrations to safely add required columns and indexes.
Performance Impact:
- Verification speed: Reduced from O(Subquery) to O(1).
- Frontend issue loading: Improved from O(N) to O(1).
- Database efficiency: Reduced bandwidth by using column projection for detailed views.
|
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task. |
❌ Deploy Preview for fixmybharat failed. Why did it fail? →
|
🙏 Thank you for your contribution, @RohanExploit!PR Details:
Quality Checklist:
Review Process:
Note: The maintainers will monitor code quality and ensure the overall project flow isn't broken. |
📝 WalkthroughWalkthroughAdds a new Changes
Sequence DiagramsequenceDiagram
participant User as User
participant FE as Frontend<br/>(VerifyView)
participant API as Backend API<br/>(issues router)
participant DB as Database
User->>FE: Enter issue ID & click verify
FE->>API: GET /api/issues/{id} (getById)
API->>DB: SELECT projected issue by id
DB-->>API: Issue data (incl. reference_id, previous_integrity_hash)
API-->>FE: IssueResponse
FE->>API: GET /api/issues/{id}/blockchain-verify (verifyBlockchain)
API->>DB: Fetch issue and related hash data
alt previous_integrity_hash exists
API->>API: Recompute chained hash using previous_integrity_hash + reference_id
else Legacy path
API->>DB: Query previous issue(s)
API->>API: Recompute legacy hash chain
end
API-->>FE: Verification result (valid/invalid, current_hash)
FE-->>User: Display integrity seal & status
Estimated code review effort🎯 4 (Complex) | ⏱️ ~65 minutes Possibly related PRs
Suggested labels
Poem
🚥 Pre-merge checks | ✅ 1 | ❌ 2❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/routers/issues.py (1)
171-204:⚠️ Potential issue | 🟠 MajorRace condition in hash chaining breaks the "blockchain" chain guarantee.
Between reading the previous hash (line 173) and committing the new issue (line 208), a concurrent request can insert another issue, causing two records to chain off the same predecessor. This creates a fork — both issues share
previous_integrity_hash, so the chain is no longer a linear sequence.Per-record verification still works (each issue's hash is self-consistent), but the chain property is silently broken under concurrent writes. If chain linearity matters, this needs serialization (e.g., a DB advisory lock or serializable transaction isolation). If it doesn't, the "blockchain" framing is misleading.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routers/issues.py` around lines 171 - 204, The current code reads the last Issue.integrity_hash (via run_in_threadpool lambda) and computes a new integrity_hash before creating new_issue, which allows a race where two requests read the same prev_hash and create forks; fix by serializing the read+insert: perform the prev-hash fetch and the insert inside a database transaction or with a DB-level lock (e.g., SELECT ... FOR UPDATE on the last Issue row or use an advisory lock / serializable transaction) so that the sequence is atomic; update the logic around the run_in_threadpool prev_issue retrieval and the new_issue creation (references: Issue.integrity_hash query, run_in_threadpool lambda, integrity_hash calculation, new_issue) to use the transactional/locked context and then compute & persist the new integrity_hash before committing.
🧹 Nitpick comments (4)
backend/init_db.py (1)
123-128: Inconsistent logging:print()vslogger.info().The new migration step uses
print()(line 126) while most other migration steps in this file uselogger.info(). Nearby blocks likeintegrity_hash(line 119) also useprint(), but the majority of the file has been migrated to the logger. Consider usinglogger.info()here for consistency.♻️ Suggested fix
try: conn.execute(text("ALTER TABLE issues ADD COLUMN previous_integrity_hash VARCHAR")) - print("Migrated database: Added previous_integrity_hash column.") + logger.info("Migrated database: Added previous_integrity_hash column.") except Exception: pass🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/init_db.py` around lines 123 - 128, Replace the print call used after adding the previous_integrity_hash column with a logger.info call for consistency with other migrations: in the block where conn.execute(text("ALTER TABLE issues ADD COLUMN previous_integrity_hash VARCHAR")) is executed (and related to previous_integrity_hash / integrity_hash migration), catch the exception as before but call logger.info("Migrated database: Added previous_integrity_hash column.") instead of print(...), ensuring the module-level logger is used.backend/routers/issues.py (2)
192-204:previous_integrity_hashis stored as empty string for the first issue, notNone.When there's no previous issue (first record),
prev_hashis""(line 176). This meansprevious_integrity_hash=""is persisted. During verification (line 645),"" is not Noneevaluates toTrue, so it correctly enters the new code path. This works, but using an empty string as a sentinel for "genesis block" conflates "no predecessor" with a future edge case where a predecessor's hash might legitimately be empty. Consider documenting this convention or using a named constant.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routers/issues.py` around lines 192 - 204, The code currently sets prev_hash to an empty string and persists previous_integrity_hash="" on the first Issue; change this to use a clear sentinel (preferably None or a named constant like GENESIS_HASH) so "no predecessor" is not conflated with a possible empty-hash value. Update the logic that constructs the Issue (reference the prev_hash variable and the Issue(...) call where previous_integrity_hash is set) to assign None or GENESIS_HASH for the genesis record, and adjust any verification logic that checks previous_integrity_hash (the verification path that currently treats "" specially) to check for None or compare against the new GENESIS_HASH constant instead. Ensure the new sentinel is documented where prev_hash is computed and used so readers and verification code use the same sentinel consistently.
619-671: Legacy/new verification branching is sound, but the legacy fallback has a latent issue withNonefield values.The
is not Nonecheck onprevious_integrity_hash(line 645) correctly routes legacy issues (NULL column from pre-migration) through the old formula and new issues through the new formula. This is a clean migration strategy.However, the legacy hash formula at line 656 (
description|category|prev_hash) was presumably the original creation formula. If any legacy issues were created whendescriptionorcategorywasNone, the legacy verification will use"None"in the f-string — which only works if the original creation code also used f-strings withNone. Ensure consistency with the original creation logic.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routers/issues.py` around lines 619 - 671, The legacy branch in verify_blockchain_integrity builds hash_content using f-strings that will stringify None as "None" if description or category are NULL; change the legacy hash construction to explicitly coalesce nullable fields (e.g., use description or "" and category or "") before joining so the computed_hash matches the original creation logic (also keep prev_hash fallback to "" as done); update the variables used in the legacy branch (prev_hash, hash_content) accordingly and recompute computed_hash as before.frontend/src/views/VerifyView.jsx (1)
163-204: Consider disabling the verify button while the issue is stillnull.If
issueisnull(e.g., fetch is still in progress or errored without setting issue), the blockchain section still renders after the early returns on lines 89-90. Currently the early returns should prevent reaching this code whenissueis null, so this is safe. The UI section itself looks good.One minor note: the button has no visual indication that a verification was already performed. Users might spam-click it. Consider disabling the button or showing a "re-verify" label after the first result.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/views/VerifyView.jsx` around lines 163 - 204, The blockchain verify button should be disabled when there is no issue or when a verification is already done; update the button logic in VerifyView.jsx (symbols: handleBlockchainVerify, blockchainLoading, blockchainResult, issue) to set disabled={blockchainLoading || !issue} so it’s inert while issue is null, and change the button label to reflect state (e.g., show "Re-verify" when blockchainResult exists, otherwise the existing labels) to prevent spamming and give visual feedback.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/routers/issues.py`:
- Around line 181-184: The hash input currently uses raw f-strings for latitude,
longitude, and user_email which can vary (e.g., float representation or None)
and break integrity checks; update the integrity hash construction in the block
that builds hash_content (used to compute integrity_hash) to canonicalize
values: format latitude and longitude to a fixed precision (e.g., six decimals)
when not None and convert None or empty emails to a stable sentinel (e.g., empty
string) before joining, and ensure the identical canonicalization logic is
applied in the verification path that recomputes the hash (the code that mirrors
this hash_content construction when validating previous hashes).
---
Outside diff comments:
In `@backend/routers/issues.py`:
- Around line 171-204: The current code reads the last Issue.integrity_hash (via
run_in_threadpool lambda) and computes a new integrity_hash before creating
new_issue, which allows a race where two requests read the same prev_hash and
create forks; fix by serializing the read+insert: perform the prev-hash fetch
and the insert inside a database transaction or with a DB-level lock (e.g.,
SELECT ... FOR UPDATE on the last Issue row or use an advisory lock /
serializable transaction) so that the sequence is atomic; update the logic
around the run_in_threadpool prev_issue retrieval and the new_issue creation
(references: Issue.integrity_hash query, run_in_threadpool lambda,
integrity_hash calculation, new_issue) to use the transactional/locked context
and then compute & persist the new integrity_hash before committing.
---
Nitpick comments:
In `@backend/init_db.py`:
- Around line 123-128: Replace the print call used after adding the
previous_integrity_hash column with a logger.info call for consistency with
other migrations: in the block where conn.execute(text("ALTER TABLE issues ADD
COLUMN previous_integrity_hash VARCHAR")) is executed (and related to
previous_integrity_hash / integrity_hash migration), catch the exception as
before but call logger.info("Migrated database: Added previous_integrity_hash
column.") instead of print(...), ensuring the module-level logger is used.
In `@backend/routers/issues.py`:
- Around line 192-204: The code currently sets prev_hash to an empty string and
persists previous_integrity_hash="" on the first Issue; change this to use a
clear sentinel (preferably None or a named constant like GENESIS_HASH) so "no
predecessor" is not conflated with a possible empty-hash value. Update the logic
that constructs the Issue (reference the prev_hash variable and the Issue(...)
call where previous_integrity_hash is set) to assign None or GENESIS_HASH for
the genesis record, and adjust any verification logic that checks
previous_integrity_hash (the verification path that currently treats ""
specially) to check for None or compare against the new GENESIS_HASH constant
instead. Ensure the new sentinel is documented where prev_hash is computed and
used so readers and verification code use the same sentinel consistently.
- Around line 619-671: The legacy branch in verify_blockchain_integrity builds
hash_content using f-strings that will stringify None as "None" if description
or category are NULL; change the legacy hash construction to explicitly coalesce
nullable fields (e.g., use description or "" and category or "") before joining
so the computed_hash matches the original creation logic (also keep prev_hash
fallback to "" as done); update the variables used in the legacy branch
(prev_hash, hash_content) accordingly and recompute computed_hash as before.
In `@frontend/src/views/VerifyView.jsx`:
- Around line 163-204: The blockchain verify button should be disabled when
there is no issue or when a verification is already done; update the button
logic in VerifyView.jsx (symbols: handleBlockchainVerify, blockchainLoading,
blockchainResult, issue) to set disabled={blockchainLoading || !issue} so it’s
inert while issue is null, and change the button label to reflect state (e.g.,
show "Re-verify" when blockchainResult exists, otherwise the existing labels) to
prevent spamming and give visual feedback.
There was a problem hiding this comment.
Pull request overview
This PR improves issue-chain integrity verification and issue retrieval performance by persisting the previous integrity hash on each issue, adding a single-issue retrieval endpoint, and updating the frontend to use the new endpoints.
Changes:
- Store
previous_integrity_hashon issue creation and update blockchain verification to use it for O(1) verification. - Add
GET /api/issues/{issue_id}for optimized single-issue retrieval and updateVerifyViewto use it. - Add frontend API methods for single-issue fetch and blockchain verification, plus a new “Blockchain Integrity Seal” UI section.
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| frontend/src/views/VerifyView.jsx | Switches to fetch-by-id and adds UI + action for blockchain verification. |
| frontend/src/api/issues.js | Adds getById and verifyBlockchain API helpers. |
| backend/routers/issues.py | Stores previous_integrity_hash, enhances blockchain verification, and adds single-issue retrieval endpoint. |
| backend/models.py | Adds previous_integrity_hash column to Issue. |
| backend/init_db.py | Adds migration step to create the previous_integrity_hash column. |
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
| prev_issue = await run_in_threadpool( | ||
| lambda: db.query(Issue.integrity_hash).order_by(Issue.id.desc()).first() | ||
| ) | ||
| prev_hash = prev_issue[0] if prev_issue and prev_issue[0] else "" |
There was a problem hiding this comment.
prev_hash is derived from the most recent Issue.integrity_hash regardless of whether it’s NULL. Since some issues (e.g., Telegram-sourced) are created without an integrity_hash, this can cause new web issues to start chaining from an empty string even when there are earlier valid hashes. Consider querying the most recent non-null/non-empty integrity_hash (and ideally a deterministic genesis value for the first sealed issue).
| # Robust Blockchain Implementation | ||
| # 1. Fetch only the last hash to maintain the chain with minimal overhead | ||
| prev_issue = await run_in_threadpool( | ||
| lambda: db.query(Issue.integrity_hash).order_by(Issue.id.desc()).first() | ||
| ) |
There was a problem hiding this comment.
Computing prev_hash and then inserting the new issue is not atomic. Under concurrent issue creation, multiple issues can end up with the same previous_integrity_hash, which breaks the “previous report” linear-chain assumption. If strict linear chaining by insertion order is required, wrap this in a transaction and use a lock (e.g., SELECT ... FOR UPDATE) or another serialization strategy.
| </div> | ||
|
|
||
| <p className="text-indigo-800 text-sm mb-4"> | ||
| Every report in our system is cryptographically sealed and linked to the previous report, creating an immutable chain of records. Verify that this report hasn't been tampered with. |
There was a problem hiding this comment.
The UI copy claims “Every report in our system is cryptographically sealed…”, but some records can exist without an integrity seal (e.g., legacy issues or Telegram-sourced issues that don’t set integrity_hash). Please soften/clarify this text so it doesn’t make a guarantee the backend can’t uphold for all issues.
| Every report in our system is cryptographically sealed and linked to the previous report, creating an immutable chain of records. Verify that this report hasn't been tampered with. | |
| Reports in our system can be cryptographically sealed and linked together to create an immutable chain of records. When a cryptographic seal is available, you can verify that this report hasn't been tampered with. |
| getById: async (id) => { | ||
| return await apiClient.get(`/api/issues/${id}`); | ||
| }, | ||
|
|
||
| verifyBlockchain: async (id) => { | ||
| return await apiClient.get(`/api/issues/${id}/blockchain-verify`); | ||
| } |
There was a problem hiding this comment.
New API methods getById and verifyBlockchain were added but aren’t covered by the existing Jest tests for issuesApi (see frontend/src/api/__tests__/issues.test.js). Add tests asserting the correct endpoints are called and that errors propagate/are handled consistently with the other methods.
Completed implementation of high-performance blockchain integrity seal and resolved deployment issues.
Changes:
- Implemented robust O(1) blockchain verification using `previous_integrity_hash`.
- Added optimized `GET /api/issues/{issue_id}` endpoint with column projection.
- Refactored `VerifyView.jsx` to fetch single issues directly and added Integrity Seal UI.
- Fixed `backend/requirements-render.txt` by splitting extras into separate packages for reliability.
- Removed conflicting `_redirects` file in favor of `netlify.toml` configuration.
- Fixed NameError in `backend/init_db.py` migration script.
- Improved cryptographic hashing with consistent float formatting.
- Resolved FastAPI routing order conflict by moving generic parameters to the end.
Performance:
- Chain verification: O(1)
- Frontend data load: O(1)
- DB Aggregation: Optimized via column selection.
This commit completes the high-performance blockchain implementation and resolves deployment failures on Render and Netlify. Key Fixes: - Render: Set PYTHONPATH to "." in render.yaml to enable proper backend imports. - Render: Added missing "numpy" and "scikit-learn" dependencies for spatial features. - Netlify: Restored "frontend/public/_redirects" and added "frontend/netlify.toml" to fix CI rule checks. - Blockchain: Hardened hashing with fixed float formatting for cross-environment consistency. - Performance: Maintained O(1) verification and retrieval optimizations. Verification: - Frontend build succeeded (Vite + SPA config). - Backend spatial and user issue tests passed. - Hash consistency verified across simulated DB roundtrips.
🔍 Quality Reminder |
There was a problem hiding this comment.
3 issues found across 4 files (changes from recent commits).
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="frontend/netlify.toml">
<violation number="1" location="frontend/netlify.toml:19">
P2: `X-XSS-Protection` is deprecated by all major browsers and setting it to `"1; mode=block"` can actually *introduce* XSS vulnerabilities in older browsers. Per the OWASP HTTP Headers Cheat Sheet, the recommendation is to either omit this header entirely or explicitly disable it with `"0"`, and rely on `Content-Security-Policy` instead.</violation>
<violation number="2" location="frontend/netlify.toml:20">
P2: The security headers block is missing `Strict-Transport-Security` and `Content-Security-Policy` — the two most important production security headers. Since this app is deployed on Netlify (HTTPS by default), consider adding at minimum:
```toml
Strict-Transport-Security = "max-age=63072000; includeSubDomains; preload"
Content-Security-Policy = "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'"
Adjust the CSP directives to match your app's actual resource needs.
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Finalized the high-performance blockchain integrity seal and resolved critical production configuration issues. Changes: - Security: Restored strict FRONTEND_URL validation in production to prevent CORS bypass. - Cleanup: Removed unused heavy dependencies (numpy, scikit-learn) from requirements. - Stability: Restored frontend/_redirects and unified Netlify configuration to fix CI rule failures. - Hashing: Maintained fixed float formatting (:.7f) for consistent cryptographic chaining. - Performance: Preserved O(1) verification and retrieval optimizations (column projection). Validation: - Frontend built successfully (Vite). - Backend tests for spatial deduplication and user issues passed. - Production startup logic confirmed robust via code review.
There was a problem hiding this comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/routers/issues.py (1)
171-207:⚠️ Potential issue | 🟠 MajorRace condition: concurrent issue creation can fork the hash chain.
The
prev_hashfetch (line 173-176) and the subsequentINSERT(line 211) are not atomic. If two requests create issues concurrently, both can read the sameprev_hash, producing two issues that chain off the same predecessor — a fork that breaks the "blockchain" invariant.Options to fix:
- Use a DB-level advisory lock or
SELECT ... FOR UPDATEon the last row.- Use a serializable transaction isolation level for this block.
- Use an application-level lock (e.g.,
asyncio.Lock) if running single-process.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routers/issues.py` around lines 171 - 207, The current prev_hash read using run_in_threadpool (prev_issue/prev_hash) and the later creation of Issue (integrity_hash/previous_integrity_hash) can race and fork the chain; wrap the read+compute+insert in a database transaction that locks the last row (e.g., SELECT ... FOR UPDATE) or use a serializable transaction or DB advisory lock so only one creator can read/advance the tail at a time; implement: begin transaction, re-query the last Issue row with FOR UPDATE to get prev_hash, compute reference_id and integrity_hash, insert the new Issue (setting previous_integrity_hash to the locked prev_hash), then commit; alternatively, if DB-level locks are not available, use an application-level lock (asyncio.Lock) around the same sequence to ensure atomicity for the prev_issue → integrity_hash → Issue insert path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/requirements-render.txt`:
- Around line 16-19: The requirements-render.txt entries for python-jose,
cryptography, passlib, and bcrypt should be consolidated and version-pinned to
mirror the main requirements extras and avoid incompatible transitive installs:
replace the four separate lines with the extras form
python-jose[cryptography]==<PIN>, passlib[bcrypt]==<PIN>, and bcrypt>=4.1.1,<5
(or a concrete upper-bounded range) so pip installs the linked extras and
prevents bcrypt 5.0.0; adjust pins for python-jose and cryptography to match
your tested versions and ensure passlib and bcrypt are compatible.
---
Outside diff comments:
In `@backend/routers/issues.py`:
- Around line 171-207: The current prev_hash read using run_in_threadpool
(prev_issue/prev_hash) and the later creation of Issue
(integrity_hash/previous_integrity_hash) can race and fork the chain; wrap the
read+compute+insert in a database transaction that locks the last row (e.g.,
SELECT ... FOR UPDATE) or use a serializable transaction or DB advisory lock so
only one creator can read/advance the tail at a time; implement: begin
transaction, re-query the last Issue row with FOR UPDATE to get prev_hash,
compute reference_id and integrity_hash, insert the new Issue (setting
previous_integrity_hash to the locked prev_hash), then commit; alternatively, if
DB-level locks are not available, use an application-level lock (asyncio.Lock)
around the same sequence to ensure atomicity for the prev_issue → integrity_hash
→ Issue insert path.
---
Duplicate comments:
In `@backend/routers/issues.py`:
- Around line 184-186: The hash construction uses lat_str and lon_str with
explicit "None" and fixed formatting but interpolates user_email raw; change the
construction in the function that builds hash_content (the block that defines
lat_str, lon_str and hash_content) to canonicalize user_email the same way —
e.g., convert None to the literal "None" and otherwise use a stable normalized
string (trim/lower if desired) before interpolation — and make the identical
change in the verification path where the hash is recomputed (the code at the
verification location around line 653) so creation and verification use the same
canonical user_email representation.
| python-jose | ||
| cryptography | ||
| passlib | ||
| bcrypt |
There was a problem hiding this comment.
🧩 Analysis chain
🌐 Web query:
passlib bcrypt compatibility issues breaking changes 2024 2025
💡 Result:
Key Passlib ↔ bcrypt compatibility / breaking-change issues (2024–2025)
- bcrypt 4.1.x “version metadata” / detection changes → Passlib warnings/errors
bcrypt 4.1.0was yanked explicitly due to “Incompatibility with assumptions made by passlib”. [1]bcrypt 4.1.1notes it fixed incompatibility with passlib package detection assumptions. [2]- In practice, many projects saw Passlib log/raise around reading bcrypt’s version (commonly
AttributeError: module 'bcrypt' has no attribute '__about__'). [3]
- bcrypt 5.0.0 changes long-password behavior (real breaking change)
bcrypt.hashpw()with a password longer than 72 bytes now raisesValueError; previously it was silently truncated. This can break applications/tests that relied on truncation (and some Passlib code paths). [2]
- Python 3.12 packaging change causing older Passlib import/install failures
- Some ecosystems reported that with Python ≥ 3.12 (where
setuptoolsmay not be present by default), “old passlib” can fail to import unlesssetuptoolsis added (leading some projects to switch to forks likelibpass). [4]
Practical mitigations people used
- Pin bcrypt to a known-working range with your Passlib version (many pinned to 4.0.1 to avoid the
__about__/version-reading issue) until upgrading off Passlib or to a maintained fork. [3] - If adopting
bcrypt 5.0.0+, ensure your code never passes >72 bytes to bcrypt (e.g., pre-hash with SHA-256/Blake2 then base64) to avoidValueError. [2]
Sources: bcrypt PyPI release history (yanked 4.1.0) [1], bcrypt 5.0.0 changelog (includes 4.1.1 note + 72-byte ValueError) [2], example failure report showing __about__ error surfaced via Passlib [3], Flask-Security note on Python 3.12 / passlib / setuptools and libpass [4].
🏁 Script executed:
# Check the full requirements-render.txt file
cat -n backend/requirements-render.txtRepository: RohanExploit/VishwaGuru
Length of output: 429
🏁 Script executed:
# Look for other requirements files to understand the versioning strategy
find backend -name "requirements*.txt" -o -name "setup.py" -o -name "pyproject.toml" -o -name "setup.cfg" | head -20Repository: RohanExploit/VishwaGuru
Length of output: 123
🏁 Script executed:
# Search for usage of these packages in the codebase to understand their purpose
rg "python_jose|cryptography|passlib|bcrypt" --type py -A 2 | head -50Repository: RohanExploit/VishwaGuru
Length of output: 385
🏁 Script executed:
# Check the other requirements file for comparison
cat -n backend/requirements.txtRepository: RohanExploit/VishwaGuru
Length of output: 684
🏁 Script executed:
# Look for more usage of these packages, especially JWT and cryptography
rg "python_jose|jwt|from_config" --type py | head -30Repository: RohanExploit/VishwaGuru
Length of output: 434
🏁 Script executed:
# Check if there are any version specifications elsewhere (setup.py, pyproject.toml, etc.)
find . -name "setup.py" -o -name "pyproject.toml" -o -name "setup.cfg" | xargs cat 2>/dev/null || echo "No setup files found"Repository: RohanExploit/VishwaGuru
Length of output: 49
🌐 Web query:
python-jose cryptography passlib bcrypt current versions 2025
💡 Result:
Latest PyPI releases during 2025 (i.e., “current” by end of 2025):
- python-jose: 3.5.0 (May 28, 2025) [1]
- cryptography: 46.0.3 (Oct 15, 2025) [2]
- passlib: 1.7.4 (Oct 8, 2020 — no 2025 release) [3]
- bcrypt: 5.0.0 (Sep 25, 2025) [4]
Sources: [1] [2] [3] [4]
Unpinned security-sensitive dependencies and inconsistent dependency specification risk breaking production builds.
The four packages lack version pins and are split into separate entries, unlike the main requirements.txt which uses extras syntax (python-jose[cryptography], passlib[bcrypt]). Splitting these into separate entries means pip doesn't enforce the link between passlib and its bcrypt backend — reducing robustness.
Additionally, uncontrolled versions are risky for production. bcrypt 5.0.0 (Sep 2025) introduced a breaking change: hashpw() now raises ValueError for passwords longer than 72 bytes (previously silently truncated). An unpinned deployment could pull bcrypt 5.0.0 and break authentication if the application or passlib passes untruncated passwords.
Consider reverting to the extras syntax from requirements.txt and adding version pins to guard against breaking changes:
Suggested pins
-python-jose
-cryptography
-passlib
-bcrypt
+python-jose[cryptography]>=3.3,<5
+cryptography>=42.0,<50
+passlib[bcrypt]>=1.7.4,<2
+bcrypt>=4.1.1,<5The bcrypt>=4.1.1,<5 pin avoids both the yanked 4.1.0 (which broke passlib version detection) and the 5.0.0 breaking change with long passwords.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| python-jose | |
| cryptography | |
| passlib | |
| bcrypt | |
| python-jose[cryptography]>=3.3,<5 | |
| cryptography>=42.0,<50 | |
| passlib[bcrypt]>=1.7.4,<2 | |
| bcrypt>=4.1.1,<5 |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/requirements-render.txt` around lines 16 - 19, The
requirements-render.txt entries for python-jose, cryptography, passlib, and
bcrypt should be consolidated and version-pinned to mirror the main requirements
extras and avoid incompatible transitive installs: replace the four separate
lines with the extras form python-jose[cryptography]==<PIN>,
passlib[bcrypt]==<PIN>, and bcrypt>=4.1.1,<5 (or a concrete upper-bounded range)
so pip installs the linked extras and prevents bcrypt 5.0.0; adjust pins for
python-jose and cryptography to match your tested versions and ensure passlib
and bcrypt are compatible.
This submission implements a high-performance blockchain integrity seal for reports. By storing the hash of the previous report directly in each new record, we've enabled O(1) cryptographic verification of the report chain. Additionally, I've added an optimized single-issue retrieval endpoint to the backend and refactored the frontend to use it, eliminating an inefficient client-side search. These changes collectively improve both the security and speed of the application.
PR created automatically by Jules for task 14537584995084015305 started by @RohanExploit
Summary by cubic
Optimized the blockchain integrity seal to O(1) verification and added a fast GET /api/issues/{id} endpoint. Verify view now loads by ID, includes a one-click integrity check, and deployment/security configs are hardened.
New Features
Bug Fixes
Written for commit 4e411ec. Summary will update on new commits.
Summary by CodeRabbit