Add AI-generated unit testing from SECQUOIA fork #10

Merged

Ickaser merged 88 commits into LyoHUB:main from Ickaser:just-secq-tests on Feb 3, 2026

Conversation

@Ickaser (Member) commented Jan 1, 2026

This takes just the unit testing scripts, etc. from #6 and makes it into a separate PR.

Ping: @bernalde. I may request your feedback on the structure and patterns of these unit tests, though there are some things I already know I want to change about them. I intend to merge this before #9 to make sure I don't break the API in that refactor.

@Ickaser (Member, Author) left a comment


Note to self: still need to reorganize tests across files and DRY them out a bit. Some tests are still not passing; sometimes that means the test itself is nonsense, but in a couple of cases it has pointed to a real bug.
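For reference, most of the DRYing will come from shared fixtures in `tests/conftest.py`. A minimal sketch of the pattern (fixture names and values here are illustrative, not the actual API):

```python
# tests/conftest.py -- illustrative sketch; the real fixtures for vials,
# products, and standard setups live here, but these names/values are made up.
import pytest

@pytest.fixture
def standard_vial():
    # One shared geometry definition instead of repeating it in every module.
    return {"Av": 3.80, "Ap": 3.14}  # vial and product areas, cm^2

@pytest.fixture
def standard_product(standard_vial):
    # Fixtures can depend on other fixtures, which keeps setups composable.
    return {"vial": standard_vial, "cSolid": 0.05}
```

Any test in the suite can then take `standard_vial` or `standard_product` as an argument and pytest injects it.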

@Ickaser (Member, Author) commented Jan 9, 2026

Note to self: some tests are skipped because they pertain to cases where decisions still need to be made (e.g. how to gracefully handle an optimization with no feasible solution). All others should be passing now, but `opt_Pch_Tsh` and `opt_Tsh` still haven't been reviewed, and in general the tests should be reorganized logically across test modules (rather than `..._coverage.py`, etc.).
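The skips are recorded with an explicit reason so they show up in the pytest summary; the pattern, with an illustrative test name:

```python
import pytest

@pytest.mark.skip(reason="behavior for infeasible optimizations is not yet decided")
def test_optimization_with_no_feasible_solution():
    ...
```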

It's annoying to wait for slow tests when iterating frequently, but I come down against the "only run slow tests after merge" philosophy. Either separate the tests logically so that only the relevant ones run on a PR, or run all of them; at most, let a developer deselect the "slow" tests on their local machine with pytest tests/ -k "not slow". Either way, the list of tests marked "slow" needs updating.
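The cleanest way to make that list explicit is a registered marker; note that `-k "not slow"` filters on test *names*, while a marker is deselected with `-m "not slow"`. A sketch, assuming the `slow` marker is registered under `[tool.pytest.ini_options]` in `pyproject.toml`:

```python
import pytest

# Assumes pyproject.toml registers: markers = ["slow: long-running tests"]
@pytest.mark.slow
def test_full_cycle_simulation():
    ...  # stand-in for a long-running end-to-end simulation
```

Then `pytest tests/ -m "not slow"` deselects exactly the marked tests.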

@Ickaser (Member, Author) commented Jan 28, 2026

Thanks for the review! I have a specific question in reply:

> I had the following warnings

Those warnings are more or less expected, since the optimization problem is formulated separately for each timestep and is sometimes given infeasible constraints. Some of the other basic tests I reformulated to deliberately avoid infeasible regions, while other tests check that these warnings are produced under deliberately infeasible conditions.

I didn't wrap the ones that currently produce unhandled warnings in a `pytest.warns()` check because I don't really want these warnings to occur forever; this is one area where I hope Pyomo can help. Given that, would it be reasonable to leave these tests producing warnings?
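For the tests that deliberately hit infeasible conditions, the check looks roughly like this; a self-contained sketch where the warning message and the solve function are stand-ins, not the module's real API:

```python
import warnings
import pytest

def solve_timestep(feasible):
    # Stand-in for a per-timestep optimization; hypothetical helper.
    if not feasible:
        warnings.warn("no feasible solution at this timestep", UserWarning)
        return None
    return 1.0

def test_infeasible_timestep_warns():
    with pytest.warns(UserWarning, match="no feasible solution"):
        solve_timestep(feasible=False)
```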


And some other general responses:

> This is looking good. I checked out the PR locally and have run the tests. A few comments:

> * [ ] The bash file `run_test_local.sh` is broken in this case. Maybe we can remove it from this PR

Overlooked that, thanks.

> * [ ] The packages `papermill` and `ruamel.yaml` are missing from the `dev` extras in `pyproject.toml`, so the tests that rely on them failed initially

I had separated those out into another set of optional dependencies so that the doctests (in notebooks) ran as a separate set of tests on CI; but I suppose (unlike my experience in Julia) a few extra dependencies don't really slow down testing. I'll simplify that down to a single set of dev dependencies.

> * [ ] There are 3 tests skipped on my side. `tests/test_freezing.py::TestFreezingReference::test_freezing_reference`

Working on those now. Some were questionable; some just still needed to be written.

Copilot AI left a comment

Pull request overview

This pull request extracts comprehensive unit testing infrastructure from PR #6, adding AI-generated test suites to establish a baseline before the API refactor in PR #9. The changes include extensive test coverage for all lyophilization modules, test data files for validation, CI/CD workflows for automated testing, and several bug fixes to the production code.

Changes:

  • Added comprehensive test suite with 9 test files covering functions, calculators, optimizers, freezing, design space, and documentation notebooks
  • Introduced test utilities and fixtures for consistent validation across modules
  • Added reference test data files from web interface for regression testing
  • Implemented CI/CD workflows for PR tests, main branch tests, slow tests, and documentation
  • Fixed multiple bugs in freezing, functions, and calculator modules
  • Updated project configuration for testing framework and dependencies

Reviewed changes

Copilot reviewed 34 out of 38 changed files in this pull request and generated 16 comments.

| File | Description |
| --- | --- |
| tests/conftest.py | Shared test fixtures for vials, products, and standard setups |
| tests/utils.py | Helper functions for physical reasonableness validation |
| tests/test_*.py (9 files) | Comprehensive unit and integration tests for all modules |
| test_data/*.csv | Reference data from web interface for regression testing |
| lyopronto/functions.py | Bug fixes in freezing functions and interpolation logic |
| lyopronto/freezing.py | Corrected temperature ramping and time trigger calculations |
| lyopronto/calc_knownRp.py | Added time-dependent functions and better simulation limits |
| lyopronto/opt_Pch.py | Added failure handling for optimization (with issues) |
| lyopronto/opt_Tsh.py | Added optional ramp_rate parameter handling |
| .github/workflows/*.yml | New CI/CD workflows for testing and documentation |
| pyproject.toml | Updated test configuration and dependencies |


@bernalde

Answering your question: it is entirely reasonable to leave some of the warnings on for this PR; I just wanted to make you aware of them. Once the tests in CI pass, this should be good to go!

@@ -15,9 +15,10 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.


Does this apply to this repo? I thought we were using an MIT license.

@Ickaser (Member, Author) replied:

We put the GPL on this library maybe 2 or 3 years ago. Originally in 2019 LyoPRONTO was released without any sort of license (so, not technically open source).

Ickaser merged commit 5050323 into LyoHUB:main on Feb 3, 2026
3 checks passed
bernalde added a commit to SECQUOIA/LyoPRONTO that referenced this pull request on Feb 4, 2026:
PR LyoHUB#10 changed output column 6 from fraction (0-1) to percentage (0-100).
This commit updates all documentation to reflect the correct unit:

- .github/copilot-instructions.md: percent_dried (0-100%)
- .github/copilot-examples.md: percent_dried (0-100%)
- docs/PHYSICS_REFERENCE.md: corrected to percentage
- docs/ARCHITECTURE.md: corrected to percent_dried (0-100%)
- tests/utils.py: corrected docstrings and assert messages
- lyopronto/pyomo_models/utils.py: corrected docstrings
- lyopronto/pyomo_models/optimizers.py: corrected docstrings
- benchmarks/adapters.py: DRYNESS_TARGET = 98.9 (not 0.989)
- benchmarks/validate.py: IDX_PERCENT, percent thresholds

This was a critical bug: using `frac_dried * Lpr0` with the new percentage column
would give Lck ~100x too large (e.g., 40 cm vs 0.4 cm), causing wrong optimized solutions.
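The arithmetic behind the ~100x figure, as a quick sketch (values taken from the commit message; the surrounding code is assumed):

```python
# Column 6 is now percent_dried (0-100), so rescale before computing cake length.
percent_dried = 98.9                  # was frac_dried = 0.989 before LyoHUB#10
Lpr0 = 0.4                            # initial product length, cm (example value)

Lck = (percent_dried / 100.0) * Lpr0  # correct: ~0.40 cm
bad = percent_dried * Lpr0            # treating percent as a fraction: ~40 cm
```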