Conversation
@Innixma does this look reasonable?
PGijsbers
left a comment
Prefixing neg_ is probably more sensible (since scikit-learn does it) so I agree with that choice even though it takes more space. Other than that I tested it (with constant predictor) and it looks good.
Innixma
left a comment
Apologies for late response, I was on PTO.
Looks good to me, had a comment for long-term improving code quality and extensibility.
    return float(balanced_accuracy_score(self.truth, self.predictions))

def auc(self):
    """Array Under (ROC) Curve, computed on probabilities, not on predictions"""
nit: area instead of array
    return float(r2_score(self.truth, self.predictions))

def higher_is_better(metric):
This seems a bit hacky. Better to have either a dictionary mapping or metrics as classes (example in AutoGluon).
I can't disagree with you: it IS a bit hacky.
Ideally, there should be a class for each metric. It's probably something I'll do at some point to support custom metrics or other customizations in a more satisfying way than what was done in #141.
If there's a demand for it, I'll do it.
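A minimal sketch of the metrics-as-classes idea being discussed (all names here, such as `Metric` and the `METRICS` registry, are illustrative and not the actual AutoGluon or amlb API):

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass(frozen=True)
class Metric:
    """Bundles a scoring function with its metadata, replacing ad-hoc checks."""
    name: str
    score_fn: Callable[[Sequence, Sequence], float]
    greater_is_better: bool = True

def accuracy(truth, predictions):
    return sum(t == p for t, p in zip(truth, predictions)) / len(truth)

def mse(truth, predictions):
    return sum((t - p) ** 2 for t, p in zip(truth, predictions)) / len(truth)

# Registry lookup replaces string heuristics scattered through the code.
METRICS = {
    "acc": Metric("acc", accuracy),
    "mse": Metric("mse", mse, greater_is_better=False),
}

def higher_is_better(metric_name: str) -> bool:
    return METRICS[metric_name].greater_is_better
```

A plain dictionary mapping metric names to a boolean would also work; the class form additionally gives each metric a natural home for custom scoring logic.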
#268
Only changing the `result` and `metric` columns when the metric represents an error:
- the `metric` name is prefixed with `neg_`.
- the `result` is negated.

Example:
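The convention described above can be sketched as follows (the function name and signature are illustrative, not the actual amlb code), mirroring scikit-learn's `neg_*` scorer naming:

```python
def normalize_result(metric: str, result: float, higher_is_better: bool):
    """Report error metrics as negated, neg_-prefixed scores so that a
    higher value is always better across the results table."""
    if higher_is_better:
        return metric, result
    return f"neg_{metric}", -result

# An error metric such as log loss gets prefixed and negated;
# a score metric such as AUC passes through unchanged.
print(normalize_result("logloss", 0.35, higher_is_better=False))
print(normalize_result("auc", 0.9, higher_is_better=True))
```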