# Training: DataLoader, feature cache, AMP, lazy package exports (#3)
**Open** · cropsgg wants to merge 1 commit into `pr/02-clinvar-data-loader` from
## Summary
Updates training/inference plumbing so the pipelines consume `AppSettings`: batched DNABERT encoding, an optional on-disk feature cache, DataLoader-based training, optional AMP, and lazy imports in `bloom_dnabert/__init__.py` so that importing light modules does not require every heavy dependency at import time.

**Base branch:** `pr/02-clinvar-data-loader` (stacked; merge after #2, or retarget once earlier PRs land).
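For concreteness, here is a minimal sketch of the DataLoader-plus-optional-AMP training step described above; the model, dataset, and setting names are illustrative stand-ins, not the actual pipeline code.

```python
# Illustrative sketch only: the setting names (batch_size, use_amp, ...) are
# assumptions, not the real AppSettings fields.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

use_amp = torch.cuda.is_available()            # AMP is only useful on CUDA
device = "cuda" if use_amp else "cpu"

model = nn.Linear(16, 2).to(device)            # stand-in for the classifier head
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
loss_fn = nn.CrossEntropyLoss()

dataset = TensorDataset(torch.randn(128, 16), torch.randint(0, 2, (128,)))
loader = DataLoader(dataset, batch_size=32, num_workers=0, shuffle=True)

for features, labels in loader:
    features, labels = features.to(device), labels.to(device)
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast(enabled=use_amp):   # no-op when AMP is off
        loss = loss_fn(model(features), labels)
    scaler.scale(loss).backward()              # scaled backward avoids fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```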
## What changed

- `bloom_dnabert/feature_cache.py`: fingerprinted cache for precomputed Bloom/DNABERT features when `data.feature_cache_dir` is set (see the sketch after this list).
- `bloom_dnabert/classifier.py`: `HybridClassifierPipeline` and `BloomGuidedPipeline` read batch sizes, workers, AMP, max length, etc. from settings.
- `bloom_dnabert/dnabert_wrapper.py`: batch encoding / token-level outputs aligned with the training loops.
- `bloom_dnabert/bloom_filter.py`: aligns with the config paths and seed-loading APIs used by the app/CLI.
- `bloom_dnabert/__init__.py`: lazy `__getattr__` exports for the torch/transformer-heavy submodules.
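A rough sketch of the fingerprinting idea behind the cache, assuming features are stored as NumPy arrays keyed by a hash of the inputs plus the settings that affect them; the function names and key layout here are hypothetical, not the real `feature_cache.py` API.

```python
# Hypothetical sketch of a fingerprinted on-disk feature cache; the real
# feature_cache.py API and key layout may differ.
import hashlib
import json
from pathlib import Path

import numpy as np


def _fingerprint(sequences: list[str], config: dict) -> str:
    """Stable key over the inputs and every setting that affects the features."""
    payload = json.dumps({"seqs": sequences, "cfg": config}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:16]


def get_or_compute(cache_dir: Path, sequences, config, compute_fn) -> np.ndarray:
    key = _fingerprint(sequences, config)
    path = cache_dir / f"{key}.npy"
    if path.exists():                        # cache hit: skip DNABERT encoding
        return np.load(path)
    features = compute_fn(sequences)         # cache miss: encode once, persist
    cache_dir.mkdir(parents=True, exist_ok=True)
    np.save(path, features)
    return features
```

Keying on the config as well as the sequences matters: changing the model or max length should miss the cache rather than return stale features.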
## Why this PR exists

It scales experimentation beyond notebook-style loops: reproducible batching, less redundant DNABERT work thanks to the cache, and a cleaner import graph for tests and partial installs.
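The lazy exports are most likely the module-level `__getattr__` pattern from PEP 562; here is a sketch under that assumption (the name-to-submodule mapping is illustrative, and `DNABERTWrapper` is a hypothetical class name):

```python
# Sketch of PEP 562 lazy exports for bloom_dnabert/__init__.py; the actual
# exported names and submodule mapping are assumptions.
import importlib

_LAZY = {
    "HybridClassifierPipeline": ".classifier",
    "BloomGuidedPipeline": ".classifier",
    "DNABERTWrapper": ".dnabert_wrapper",   # hypothetical class name
}


def __getattr__(name):
    if name in _LAZY:
        module = importlib.import_module(_LAZY[name], __package__)
        return getattr(module, name)        # torch loads only on first access
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```

With this pattern, `import bloom_dnabert` stays cheap; `torch` and `transformers` are only imported when a heavy class is first touched.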
## Dependencies

Stacked on #2 (`pr/02-clinvar-data-loader`); the heavy paths need `torch` and the rest of `requirements.txt`.
## How to verify

Run the classifier/bloom tests in an environment with `torch` and the dependencies from `requirements.txt` (the full `pytest tests/`).
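Beyond the full suite, a quick hypothetical smoke check for the lazy exports, assuming the bare package import really is torch-free:

```python
# Hypothetical smoke check: the bare package import should not pull torch.
import sys

import bloom_dnabert  # light import; heavy submodules load on attribute access

assert "torch" not in sys.modules, "torch was imported eagerly"

# Touching a pipeline class triggers the lazy import (and therefore needs torch).
_ = bloom_dnabert.HybridClassifierPipeline
```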