Add integration tests for multi-user support and other funnel regression bugs #489
Conversation
Failed to Prepare CI environment. Please find the GitHub Action logs here.
Test summary after running integration tests
Test summary after rerunning failed integration tests
Please find the detailed integration test report here.
Please find the detailed integration test report after rerunning failed tests here.
Please find the GitHub Action logs here.
    },
]

for test_case in test_cases:
Can speed this up by creating all tasks in parallel and polling them simultaneously. We can use this design to combine more regression test cases and test them in parallel.
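The parallel design suggested here could be sketched as below: submit every task first, then poll them all concurrently. `create_task` and `get_task_state` are hypothetical stand-ins for the suite's actual TES API helpers, and the state names follow the TES task states quoted elsewhere in this PR.

```python
# Hedged sketch, not the repo's implementation: run all funnel regression
# cases in parallel instead of one-by-one in a loop.
import time
from concurrent.futures import ThreadPoolExecutor


def run_case(create_task, get_task_state, test_case, timeout=300, interval=5):
    """Create one TES task and poll it until it reaches a terminal state."""
    task_id = create_task(test_case["command"])
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = get_task_state(task_id)
        if state not in ("QUEUED", "INITIALIZING", "RUNNING"):
            return state
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish in {timeout}s")


def run_all(create_task, get_task_state, test_cases):
    """Submit and poll every test case concurrently; return final states
    in the same order as test_cases."""
    with ThreadPoolExecutor(max_workers=len(test_cases)) as pool:
        futures = [
            pool.submit(run_case, create_task, get_task_state, tc)
            for tc in test_cases
        ]
        return [f.result() for f in futures]
```

Each future maps back to its test case by position, so per-case assertions (expected state, expected exit code) can still run after `run_all` returns.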
Can we not use the pytest parametrize mark? https://docs.pytest.org/en/stable/how-to/parametrize.html
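The parametrize suggestion could look roughly like this, turning each dict in `test_cases` into its own reported test. The `run_tes_task` helper and its stubbed behavior here are illustrative assumptions, not the repo's code; the expected states come from the diff in this thread (a command that exits non-zero yields EXECUTOR_ERROR, a command the shell cannot find yields a system-level error).

```python
import pytest
from types import SimpleNamespace


def run_tes_task(command):
    """Placeholder for the suite's real task-submission helper (assumption).
    Simulates the behavior discussed in this PR: an unknown command like
    `False` surfaces as SYSTEM_ERROR, a non-zero exit as EXECUTOR_ERROR."""
    if command == ["False"]:
        return SimpleNamespace(state="SYSTEM_ERROR", exit_code=None)
    return SimpleNamespace(state="EXECUTOR_ERROR", exit_code=1)


TEST_CASES = [
    pytest.param(["echo 'This will fail' && exit 1"], 1, "EXECUTOR_ERROR",
                 id="nonzero-exit"),
    pytest.param(["False"], None, "SYSTEM_ERROR", id="command-not-found"),
]


@pytest.mark.parametrize("command,expected_exit_code,expected_state", TEST_CASES)
def test_funnel_failure_states(command, expected_exit_code, expected_state):
    result = run_tes_task(command)
    assert result.state == expected_state
    if expected_exit_code is not None:
        assert result.exit_code == expected_exit_code
```

With `pytest.param(..., id=...)`, each case shows up individually in the test report, so one failing funnel case no longer hides the others behind a single looped test.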
| "command": ["echo 'This will fail' && exit 1"], | ||
| "expected_exit_code": 1, | ||
| "expected_state": "EXECUTOR_ERROR", | ||
| }, | ||
| { | ||
| "command": ["False"], |
Could you clarify in comments or the docstring the difference between the 2 cases? Why does the TES server behave differently?
In the case of a command like `False`, we get a K8s error; this is the cause of the system error state.
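The distinction between the two cases can be reproduced locally. `echo ... && exit 1` is a command that actually runs and exits non-zero, while `False` is not an executable on a typical Linux image (command lookup is case-sensitive), so the shell cannot start it at all, which is what surfaces as a K8s-level error rather than an executor failure. A quick check, assuming a POSIX `sh`:

```python
import subprocess

# `false` exists: it runs and exits 1, i.e. the executor ran and failed.
print(subprocess.run(["sh", "-c", "false"]).returncode)  # 1

# `False` does not exist: the shell reports "command not found" and exits
# 127, i.e. nothing ever ran -- the container-level failure behind the
# system error discussed above.
print(subprocess.run(["sh", "-c", "False"],
                     stderr=subprocess.DEVNULL).returncode)  # 127
```

Exit 127 is the POSIX convention for "command not found", which is why the two cases land in different TES states even though both are "failures".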
# TODO:
# * Test the POST /ga4gh/tes/v1/tasks:cancel with a task id that is not present in the system, and expect a 404 from gen3-workflow
Nice! There's a ticket for the next tests: https://ctds-planx.atlassian.net/browse/MIDRC-1230
Are these just the ones you could think of off the top of your head, or did you go through the list on the OHSU slack board?
> Are these just the ones you could think of off the top of your head?

Yeah, these were the "might as well"s!
The style in this PR agrees with the formatter.
This formatting comment was generated automatically by a script in uc-cdis/wool.
…/gen3-code-vigil into chore/add_new_g3wf_tests
Link to JIRA ticket if there is one: MIDRC-1166
New Features
Breaking Changes
Bug Fixes
Improvements
Dependency updates
Deployment changes