Allow custom metrics to be reported in results #173
PGijsbers merged 2 commits into improv/extensions
Conversation
In general, we can expect AutoML frameworks to require different
signatures for their metric functions. We define a new signature specific
to the amlb, so that we can also report the scores. It
is denoted with a trailing underscore ("metric_"). This allows the
user to define two functions, e.g. Accuracy and Accuracy_: the former is
used by the AutoML framework, the latter by us.
No AutoML framework is going to use the format that the amlb uses, so sharing the definition makes no sense until we alter the signature in the amlb.
This PR should be merged (or rejected) before #141.
There's a difficulty here. If we take TPOT as an example, the training and the result processing are not done in the same process/venv. The lookup logic suggested in #141 is then probably too simple. I think you can still merge this PR; I will probably consider this when improving #141 though.
As far as I could tell, the custom metric could be used for optimization, but was never reported.
I set up a separate branch because I wasn't sure whether this was correct (maybe I misinterpreted the documentation in the PR).
In general, we can expect AutoML frameworks to require different signatures for their metric functions. For example, TPOT requires a scikit-learn scorer:
However, we want a uniform signature for our own score calculations. So in addition to defining the custom metric for the AutoML framework(s), one needs to be defined for the AutoML benchmark. This is a function with the same name as the metric, but with a trailing underscore. The signature is
`Callable[[amlb.results.Result], float]`. This provides the user with
`result.truth`, `result.predictions` and, if applicable, `result.probabilities`.