Conversation
Moving it into a hook mirrors pytest's pytest_pyfunc_call, allowing a user to control how a step ends up being evaluated.
Is it possible to integrate this feature in the next release?

Reviewed 3 of 3 files at r1.

Works very well for me. Good job.
Okay, if that's the case, do you guys want to merge it in its current state? Is there documentation you might want me to update? Should I mark the before/after hooks as deprecated?

Any chance we can get something like this in for the next release?

I would also be interested in finding out the status of this PR. @bubenkoff and @olegpidsadnyi, in your opinion, what would be the remaining steps to finish this pull request?
bubenkoff left a comment:
Please document this new hook in the readme.
Any idea when/if this will get merged?

FWIW, I rebased this on master and used it with some asyncio steps. It seemed to break on example tables, however, making all calls with the literal

@bubenkoff, @youtux, @olegpidsadnyi, it seems that the only thing left to merge this improvement is to document the hook. Would it be merged if I complete that documentation?
youtux left a comment:
Yes, we would probably merge this once there are tests and documentation added.
Comment on:
    step_func(**step_func_args)
    return True
I would probably return the result of the step_func:
    return step_func(**step_func_args)
Comment on:
    reporting.before_step(request, feature, scenario, step, step_func)

Comment on:
    @pytest.mark.trylast
Not sure why this is a trylast rather than a tryfirst.
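For background on the tryfirst/trylast question (this is general pluggy behavior, not taken from the PR): a trylast implementation runs after all other implementations, so when the hook is firstresult, any user-supplied implementation gets a chance to answer before the library's default does. A small self-contained sketch, with illustrative hook and plugin names:

```python
import pluggy

hookspec = pluggy.HookspecMarker("demo")
hookimpl = pluggy.HookimplMarker("demo")


class Spec:
    @hookspec(firstresult=True)
    def call_step(self):
        """Evaluate a step; the first non-None result wins."""


class DefaultCaller:
    @hookimpl(trylast=True)  # runs after all non-trylast implementations
    def call_step(self):
        return "default"


class UserOverride:
    @hookimpl  # normal priority: runs before any trylast implementation
    def call_step(self):
        return "user"


pm = pluggy.PluginManager("demo")
pm.add_hookspecs(Spec)
pm.register(DefaultCaller())
pm.register(UserOverride())
# With firstresult=True the call stops at the first non-None return value,
# so the trylast default only answers when no other plugin does.
print(pm.hook.call_step())  # prints "user"
```

This is why a default step-caller is often registered trylast: it makes the library's behavior the fallback rather than the winner.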
Putting this up to get some feedback and see how receptive people are to the idea before I spend much more time on polishing this.
I'm currently using this library to try and bring this style of testing to our regression tests, but lots of our code is heavily built around asyncio.
I want to have control over how steps are evaluated, so I can automatically jump in when there's a coroutine and evaluate the step automatically in the current event loop. I'm currently using something like this to demo pytest-bdd internally:
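The snippet itself was not captured in this thread. A minimal sketch of what such a hook implementation could look like, assuming a firstresult step-call hook named pytest_bdd_step_call (the hook name and signature are guesses based on the description, not taken from the PR):

```python
import asyncio

import pytest


# Sketch only: the hook name and signature are assumptions based on the
# description above (a step-call hook mirroring pytest_pyfunc_call); the
# actual code from the PR was not captured in this thread.
@pytest.hookimpl(tryfirst=True)
def pytest_bdd_step_call(step_func, step_func_args):
    result = step_func(**step_func_args)
    if asyncio.iscoroutine(result):
        # Coroutine step: drive it to completion on an event loop.
        try:
            loop = asyncio.get_event_loop()
        except RuntimeError:
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
        result = loop.run_until_complete(result)
    return result
```

The idea is that the hook calls the step function as usual, and only steps in when the return value turns out to be a coroutine.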
I don't expect this to be merged as is, as it is still missing unit tests, documentation, and some quality-of-life stuff to suppress pluggy from the traceback should a step fail.
Also not sure if you want to keep the before and after call hooks, since technically they're now equivalent to having a hook wrapper around this new hook.
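The equivalence mentioned here can be sketched roughly as follows (pytest_bdd_step_call is a placeholder name, not from the PR): the old before/after hooks become the code on either side of the yield in a hookwrapper implementation.

```python
import pytest

calls = []  # stand-in for the reporting the old before/after hooks performed


# Hedged sketch: a hookwrapper runs code before and after the inner
# implementations of the (assumed) step-call hook, which reproduces the
# old before-step/after-step hook pair.
@pytest.hookimpl(hookwrapper=True)
def pytest_bdd_step_call(step, step_func):
    calls.append(("before_step", step))  # old "before step" hook behavior
    outcome = yield                      # inner hook actually calls the step
    calls.append(("after_step", step))   # old "after step" hook behavior
```

Under pluggy, the yield receives the outcome of the inner implementations; this sketch only records the before/after ordering.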