Add AI-generated unit testing from SECQUOIA fork #10
Conversation
Ickaser
left a comment
Note to self: I still need to reorganize tests across files and DRY them out a bit. Some tests are still not passing; so far that sometimes means the test is nonsense, but in a couple of cases it means there was a real bug.
Note to self: some tests are skipped because they pertain to cases where decisions must still be made (e.g. how to gracefully handle cases where the optimization has no feasible solution). All others should be passing now, but opt_Pch_Tsh and opt_Tsh still haven't been reviewed, and in general the tests should be reorganized logically across test modules (rather than ..._coverage.py, etc.). It's annoying to wait for slow tests to run when iterating frequently, but I come down against the "only run slow tests after merge" philosophy: either separate tests logically so that only the "relevant" ones run on a PR, or run all of them. At most, let a developer run the "slow" tests separately on their local machine.
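For concreteness, a minimal sketch of that marker-based split, assuming pytest markers (the `slow` marker name and the example test are illustrative, not what this PR actually defines):

```python
# conftest.py -- illustrative sketch; the "slow" marker name is an assumption.
import pytest

def pytest_configure(config):
    # Register the marker so pytest does not warn about an unknown marker.
    config.addinivalue_line("markers", "slow: long-running tests")

# In a test module, mark expensive tests so they can be deselected:
@pytest.mark.slow
def test_full_primary_drying_sweep():
    ...  # expensive end-to-end run
```

Fast local iteration would then be `pytest -m "not slow"`, while a pre-merge or CI run uses plain `pytest` to run everything.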
Thanks for the review! I have a specific question in reply:
Those warnings are more or less expected, since the optimization problem is formulated separately for each timestep and is sometimes given infeasible constraints. I reformulated some of the other basic tests to deliberately avoid infeasible regions, while other tests check that these warnings are produced under deliberately infeasible conditions. I haven't yet wrapped the ones that currently produce unhandled warnings in a warning-checking context. Is it acceptable to leave those as-is for this PR?
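As an illustration of what wrapping one of those checks might look like (assuming pytest; `solve_timestep` is a stand-in, not the actual LyoPRONTO API):

```python
import warnings
import pytest

def solve_timestep(feasible: bool) -> float:
    # Stand-in for a per-timestep optimizer call; the real code warns when
    # the problem is given infeasible constraints.
    if not feasible:
        warnings.warn("optimization infeasible at this timestep", UserWarning)
        return float("nan")
    return 0.0

def test_warns_when_infeasible():
    # pytest.warns both asserts the warning is emitted and captures it,
    # so it no longer shows up as an unhandled warning in the test summary.
    with pytest.warns(UserWarning, match="infeasible"):
        solve_timestep(feasible=False)
```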
And some other general responses:

Overlooked that, thanks.
I had separated those out into another set of optional dependencies, so that doctests (in notebooks) ran in a separate set of tests on CI; but I suppose (unlike my experience in Julia) adding a few extra dependencies doesn't really slow down testing. I'll simplify that down to a single set of test dependencies (rough sketch after these replies).
Working on those now, so it looks better. Some were questionable; some just needed to be written still.
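For reference, collapsing the two extras into one might look roughly like this in pyproject.toml (the group name and package list are assumptions, not the actual configuration):

```toml
[project.optional-dependencies]
# Hypothetical consolidated extra; the exact package list is illustrative.
test = [
    "pytest",
    "pytest-cov",
    "nbval",  # notebook checks, previously split into a separate extra
]
```

CI and developers would then both install with `pip install -e ".[test]"`.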
Pull request overview
This pull request extracts comprehensive unit testing infrastructure from PR #6, adding AI-generated test suites to establish a baseline before the API refactor in PR #9. The changes include extensive test coverage for all lyophilization modules, test data files for validation, CI/CD workflows for automated testing, and several bug fixes to the production code.
Changes:
- Added comprehensive test suite with 9 test files covering functions, calculators, optimizers, freezing, design space, and documentation notebooks
- Introduced test utilities and fixtures for consistent validation across modules
- Added reference test data files from web interface for regression testing
- Implemented CI/CD workflows for PR tests, main branch tests, slow tests, and documentation
- Fixed multiple bugs in freezing, functions, and calculator modules
- Updated project configuration for testing framework and dependencies
Reviewed changes
Copilot reviewed 34 out of 38 changed files in this pull request and generated 16 comments.
Summary per file:
| File | Description |
|---|---|
| tests/conftest.py | Shared test fixtures for vials, products, and standard setups (see the sketch after this table) |
| tests/utils.py | Helper functions for physical reasonableness validation |
| tests/test_*.py (9 files) | Comprehensive unit and integration tests for all modules |
| test_data/*.csv | Reference data from web interface for regression testing |
| lyopronto/functions.py | Bug fixes in freezing functions and interpolation logic |
| lyopronto/freezing.py | Corrected temperature ramping and time trigger calculations |
| lyopronto/calc_knownRp.py | Added time-dependent functions and better simulation limits |
| lyopronto/opt_Pch.py | Added failure handling for optimization (with issues) |
| lyopronto/opt_Tsh.py | Added optional ramp_rate parameter handling |
| .github/workflows/*.yml | New CI/CD workflows for testing and documentation |
| pyproject.toml | Updated test configuration and dependencies |
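As an illustration of the fixture pattern mentioned for tests/conftest.py (the fixture name, fields, and values are placeholders, not what the PR actually defines):

```python
# tests/conftest.py -- hypothetical sketch; names and values are placeholders.
import pytest

@pytest.fixture
def standard_vial():
    # Shared vial geometry so individual tests don't each redefine it.
    return {"Av": 3.80, "Ap": 3.14, "Vfill": 2.0}  # cm^2, cm^2, mL
```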
Answering your question: it is entirely reasonable to leave some of the warnings on for this PR; I just wanted to make you aware of them. Once the tests in CI pass, this should be good to go!
@@ -15,9 +15,10 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
Does this apply to this repo? I thought we were using an MIT license.
We put the GPL on this library maybe 2 or 3 years ago. Originally in 2019 LyoPRONTO was released without any sort of license (so, not technically open source).
…g for changing to other interpretation
PR LyoHUB#10 changed output column 6 from fraction (0-1) to percentage (0-100). This commit updates all documentation to reflect the correct unit:
- .github/copilot-instructions.md: percent_dried (0-100%)
- .github/copilot-examples.md: percent_dried (0-100%)
- docs/PHYSICS_REFERENCE.md: corrected to percentage
- docs/ARCHITECTURE.md: corrected to percent_dried (0-100%)
- tests/utils.py: corrected docstrings and assert messages
- lyopronto/pyomo_models/utils.py: corrected docstrings
- lyopronto/pyomo_models/optimizers.py: corrected docstrings
- benchmarks/adapters.py: DRYNESS_TARGET = 98.9 (not 0.989)
- benchmarks/validate.py: IDX_PERCENT, percent thresholds

This was a critical bug: using frac_dried * Lpr0 would give Lck ~100x too large (e.g., 40 cm vs 0.4 cm), causing wrong optimized solutions.
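To make the magnitude of the error concrete (example values only; the variable names follow the commit message above):

```python
# Illustrative arithmetic only; values are examples, names from the commit.
Lpr0 = 0.4             # initial product layer thickness, cm
percent_dried = 100.0  # output column 6 now reports a percentage (0-100)

# Bug: treating the percentage as if it were a fraction (0-1).
Lck_wrong = percent_dried * Lpr0            # 40.0 cm -- ~100x too large

# Fix: convert the percentage to a fraction first.
Lck_right = (percent_dried / 100.0) * Lpr0  # 0.4 cm
```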
This takes just the unit-testing scripts, etc., from #6 and splits them into a separate PR.
Ping: @bernalde. I may request your feedback on some of the structure and patterns in these unit tests, but there are some things I know for sure I would like to change about them. I intend to merge this before #9 to ensure that I don't break the API in that refactor.