On page 1:
Original: "What is our classifier has 90% accuracy?"
Suggested: "What if our classifier has 90% accuracy?"
Original: "In this particular example, we would never find a positive review, which what we are trying to do in our scenario."
Suggested: "In this particular example, we would never find a positive review, which is what we are trying to do in our scenario."
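Not part of the correction itself, but to make the point of that passage concrete (the numbers and the scikit-learn calls below are my own illustration, not the book's): with 90% negative reviews, a classifier that always predicts "negative" reaches 90% accuracy yet never finds a positive review.

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical labels: 90% negative (0), 10% positive (1) -- not the book's data
y_true = np.array([1] * 10 + [0] * 90)
y_pred = np.zeros_like(y_true)   # a classifier that always predicts "negative"

print(accuracy_score(y_true, y_pred))  # 0.9 -- looks impressive
print(recall_score(y_true, y_pred))    # 0.0 -- no positive review is ever found
```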
On page 3:
Original: "So we used the probability = 0.5 that the data is classified as positive to make the decision about the classification. We can simply choose a threshold other that 0.5 to make the tradeoff between precision and recall."
Suggested: "So we used as a threshold probability = 0.5 that the data is classified as positive to make the decision about the classification. We can simply choose a threshold other than 0.5 to make the tradeoff between precision and recall."
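As an illustration of the corrected sentence (the toy data, the logistic model, and the scikit-learn calls are my own assumptions, not code from the book), sweeping the threshold t shows the precision/recall tradeoff:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Hypothetical data: y = 1 means "positive review"
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba(X)[:, 1]   # estimated P(positive) for each example

# Raising the threshold trades recall for precision; 0.5 is just the default choice
for t in (0.3, 0.5, 0.7, 0.9):
    pred = (proba >= t).astype(int)
    print(f"t={t}: precision={precision_score(y, pred, zero_division=0):.2f}, "
          f"recall={recall_score(y, pred):.2f}")
```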
On page 5:
Original: "It is not always this clear. In some cases, the precision-recall curves for two classifiers my cross. This creates regions in which one curve is better (has higher precision for the same recall) than the other."
Suggested: "It is not always this clear. In some cases, the precision-recall curves for two classifiers may cross. This creates regions in which one classifier is better (has higher precision for the same recall) than the other."
Original: "It is common to use area-under-the-curve (AUC) measures to choose one curve over another. Area under the curve calculates a measure for a range of t, so that we can pick a curve that works best over a range."
Suggested: "It is common to use area-under-the-curve (AUC) measures to choose one classifier over another. Area under the curve calculates a measure for a range of t, so that we can pick a classifier that works best over a range of thresholds."
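Again purely as an illustration (the data and the two models below are made up, not taken from the book), comparing two classifiers by the area under their precision-recall curves could look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import precision_recall_curve, auc
from sklearn.model_selection import train_test_split

# Hypothetical data; the two models stand in for the two competing classifiers
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + X[:, 1] ** 2 + rng.normal(size=1000) > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("logistic regression", LogisticRegression()),
                    ("naive Bayes", GaussianNB())]:
    proba = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    precision, recall, _ = precision_recall_curve(y_te, proba)
    # One number summarizing the whole curve over all thresholds
    print(f"{name}: PR AUC = {auc(recall, precision):.3f}")
```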
Also, the hyperlink to the image matching classifier research (great idea to link to current research, by the way!) is not clickable in the PDFs. Maybe it would be better to replace the word "here" with the URL?