Conversation
- …nchmark, packaging and a few tests.
- …alsError on trying to access them
- …emoval of pool_score
Hi @benlonnqvist, we're doing some spring cleaning on the PR backlog and noticed that this PR is active and passing tests! Is it ready to be reviewed and merged? Thanks for the contributions!

Hi @deirdre-k, thanks for messaging! Let me update the branch to double check that #917 didn't cause any issues, add one more test, and if it passes after that, it should be good to go! Sorry about not having it tagged as a draft, I'll ping you later today/this week when it's good to go.

No worries at all, sounds great! And thanks for the quick reply 😀
mschrimpf left a comment
Required change: remove the aggregation dimension from all scores.
Recommended change: update the naming convention to reserve `-` for benchmark-metric separation only (use `.` for sub-data instead).
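To make the required change concrete, here is a minimal sketch of the before/after, using plain `xarray` rather than the actual brainscore `Score` class (the coordinate name `aggregation` and the values shown are illustrative, not taken from the PR):

```python
import xarray as xr

# old style: the score carries an 'aggregation' dimension and callers
# select the 'center' value by label
old_score = xr.DataArray([0.8, 0.05],
                         coords={'aggregation': ['center', 'error']},
                         dims=['aggregation'])
center = old_score[(old_score['aggregation'] == 'center')].values.item()

# new style: the score is a plain scalar; any uncertainty estimate can
# live in attrs instead of a dedicated dimension
new_score = xr.DataArray(0.8, attrs={'error': 0.05})

assert new_score.values.item() == center
```

With the aggregation dimension gone, downstream code no longer needs the label-based selection seen in the diffs below and can use the score value directly.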
```python
return ceiling
```
```python
@staticmethod
def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> list:
```
Suggested change:

```diff
- def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> list:
+ def compute_threshold_elevations(assemblies: Dict[str, PropertyAssembly]) -> List:
```
```python
# independent_variable is not used since we compute from thresholds, and do not need to fit them
metric = load_metric('threshold', independent_variable='placeholder')
score = metric(float(assembly.sel(subject='A').values), assembly)
print(score)
```
```python
baseline_condition='placeholder',
test_condition='placeholder')
score = metric(float(assembly.sel(subject='A').values), assembly)
print(score)
```
```python
score = float(score[(score['aggregation'] == 'center')].values)
human_thresholds.append(random_human_score)
scores.append(score)
```
Suggested change:

```diff
- score = float(score[(score['aggregation'] == 'center')].values)
- human_thresholds.append(random_human_score)
- scores.append(score)
+ human_thresholds.append(random_human_score)
+ scores.append(score.values)
```
```python
score = float(score[(score['aggregation'] == 'center')].values)
human_threshold_elevations.append(random_human_score)
scores.append(score)
```
Suggested change:

```diff
- score = float(score[(score['aggregation'] == 'center')].values)
- human_threshold_elevations.append(random_human_score)
- scores.append(score)
+ human_threshold_elevations.append(random_human_score)
+ scores.append(score.values)
```
Co-authored-by: Martin Schrimpf <mschrimpf@users.noreply.github.com>
Thanks @mschrimpf for the review. I implemented both sets of changes; pending the jenkins plugin tests, it's all good to go from my side.
PR for Psychometric Threshold metric and the Malania2007 benchmarks.
Brief todo: