@@ -16,36 +16,96 @@ class EMQ(SoftLearnerQMixin, AggregationMixin, BaseQuantifier):
     Estimates class prevalences under prior probability shift by alternating
     between expectation **(E)** and maximization **(M)** steps on posterior probabilities.
 
-    E-step:
-    .. math::
-        p_i^{(s+1)}(x) = \frac{q_i^{(s)} p_i(x)}{\sum_j q_j^{(s)} p_j(x)}
+    .. dropdown:: Mathematical Formulation
 
-    M-step:
-    .. math::
-        q_i^{(s+1)} = \frac{1}{N} \sum_{n=1}^N p_i^{(s+1)}(x_n)
+        E-step:
 
-    where
-    - :math:`p_i(x)` are posterior probabilities predicted by the classifier
-    - :math:`q_i^{(s)}` are class prevalence estimates at iteration :math:`s`
-    - :math:`N` is the number of test instances.
+        .. math::
 
-    Calibrations supported on posterior probabilities before **EM** iteration:
+            p_i^{(s+1)}(x) = \frac{q_i^{(s)} p_i(x)}{\sum_j q_j^{(s)} p_j(x)}
 
-    Temperature Scaling (TS):
-    .. math::
-        \hat{p} = \text{softmax}\left(\frac{\log(p)}{T}\right)
+        M-step:
 
-    Bias-Corrected Temperature Scaling (BCTS):
-    .. math::
-        \hat{p} = \text{softmax}\left(\frac{\log(p)}{T} + b\right)
+        .. math::
 
-    Vector Scaling (VS):
-    .. math::
-        \hat{p}_i = \text{softmax}(W_i \cdot \log(p_i) + b_i)
+            q_i^{(s+1)} = \frac{1}{N} \sum_{n=1}^N p_i^{(s+1)}(x_n)
 
-    No-Bias Vector Scaling (NBVS):
-    .. math::
-        \hat{p}_i = \text{softmax}(W_i \cdot \log(p_i))
+        where:
+
+        - :math:`p_i(x)` are posterior probabilities predicted by the classifier
+
+        - :math:`q_i^{(s)}` are class prevalence estimates at iteration :math:`s`
+
+        - :math:`N` is the number of test instances.
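+
+        Concretely, the two steps fit in a few lines of NumPy. This is an
+        illustrative sketch only, not this class's API; ``posteriors`` is
+        assumed to be the ``(N, n_classes)`` array of classifier outputs:
+
+        .. code-block:: python
+
+            import numpy as np
+
+            def em_iterate(posteriors, n_iter=100, tol=1e-6):
+                # q^{(0)}: start from the uniform prevalence vector
+                n_classes = posteriors.shape[1]
+                q = np.full(n_classes, 1.0 / n_classes)
+                for _ in range(n_iter):
+                    # E-step: reweight each posterior by the current estimate q
+                    weighted = q * posteriors
+                    p = weighted / weighted.sum(axis=1, keepdims=True)
+                    # M-step: average the adjusted posteriors over the test set
+                    q_next = p.mean(axis=0)
+                    if np.abs(q_next - q).max() < tol:  # converged
+                        return q_next
+                    q = q_next
+                return q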
+
+        The following calibrations can be applied to the posterior probabilities before the **EM** iteration:
+
+        Temperature Scaling (TS):
+
+        .. math::
+
+            \hat{p} = \text{softmax}\left(\frac{\log(p)}{T}\right)
+
+        Bias-Corrected Temperature Scaling (BCTS):
+
+        .. math::
+
+            \hat{p} = \text{softmax}\left(\frac{\log(p)}{T} + b\right)
+
+        Vector Scaling (VS):
+
+        .. math::
+
+            \hat{p} = \text{softmax}\left(w \odot \log(p) + b\right)
+
+        No-Bias Vector Scaling (NBVS):
+
+        .. math::
+
+            \hat{p} = \text{softmax}\left(w \odot \log(p)\right)
+
+        where :math:`w` and :math:`b` are class-wise scale and bias vectors
+        and :math:`\odot` denotes element-wise multiplication.
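+
+        For instance, TS is a one-liner on the log-posteriors. An illustrative
+        sketch (function name assumed; in practice :math:`T`, :math:`w` and
+        :math:`b` are fitted on held-out calibration data):
+
+        .. code-block:: python
+
+            import numpy as np
+
+            def temperature_scale(posteriors, T=1.5):
+                # TS: softmax(log(p) / T), applied row-wise. BCTS would add a
+                # bias b inside the softmax; VS/NBVS replace the scalar 1/T
+                # with a per-class weight vector w (plus a bias b for VS).
+                logits = np.log(np.clip(posteriors, 1e-12, None)) / T
+                logits -= logits.max(axis=1, keepdims=True)  # numerical stability
+                exp = np.exp(logits)
+                return exp / exp.sum(axis=1, keepdims=True)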
 
     Parameters
     ----------