@@ -10,35 +10,42 @@ Every model must inherit `inclearn.models.base.IncrementalLearner`.

 ## Papers implemented:

-:white_check_mark: --> Paper implemented & reached expected results.\
+:white_check_mark: --> Paper implemented & reached expected (or better) results.\
 :construction: --> Runnable but has not yet reached expected results.\
-:x: --> Not yet implemented or barely working.\
+:x: --> Not yet implemented or barely working.

-[1] :construction: iCaRL\
-[2] :construction: LwF\
-[3] :construction: End-to-End Incremental Learning\
+:white_check_mark: iCaRL\
+:construction: Learning without Forgetting (LwF)\
+:white_check_mark: End-to-End Incremental Learning (E2E)\
+:x: Overcoming catastrophic forgetting (EWC)

+## Results

-## iCaRL
+Every experiment has been run at least 20 times, each with a different class
+ordering. The class ordering is generated randomly from a seed; I use
+seeds 1 to 20.
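+
+As a rough illustration (not code from this repository), the class ordering
+could be a seeded permutation of the class labels, e.g.:
+
+```
+import numpy as np
+
+# Hypothetical sketch: each seed yields a different ordering of the 100
+# CIFAR100 classes; the actual mechanism lives in the repository code.
+def class_order(seed, n_classes=100):
+    rng = np.random.RandomState(seed)
+    return rng.permutation(n_classes)
+
+print(class_order(1)[:10])  # first 10 classes seen with seed 1
+```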

-![icarl](figures/icarl.png)
+```
+python3 inclearn --model <model> --seed-range 1 20 --name <exp_name> <other options>
+```

-My experiments are in green, with their means & standard deviations plotted.
-They were runned 40 times, with seed going from 1 to 40, each producing a
-different classes ordering.
+The metric used is what iCaRL defines as the `average incremental accuracy`; it
+is what is plotted on every graph. The number in parentheses is the mean of
+those average incremental accuracies. The notebook [here](results.ipynb) shows
+how it is computed.
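+
+As a rough illustration (not code from this repository), the metric can be
+computed from the accuracies measured on all seen classes after each increment:
+
+```
+# Hypothetical sketch: accuracies[i] is the accuracy over all classes seen so
+# far, evaluated after training on increment i.
+def average_incremental_accuracy(accuracies):
+    return sum(accuracies) / len(accuracies)
+
+# Dummy values for a 10-increment run, for illustration only.
+print(average_incremental_accuracy(
+    [0.88, 0.80, 0.74, 0.70, 0.66, 0.63, 0.61, 0.58, 0.56, 0.54]))  # ~0.67
+```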

-The metric used is the `average incremental accuracy`:
+I'll always state whether the results come from a paper `[paper]` or from my own runs `[me]`.

-> The result of the evaluation are curves of the classification accuracies after
-> each batch of classes. If a single number is preferable, we report the average of
-> these accuracies, called average incremental accuracy.

-~If I understood well, the accuracy at task i (computed on all seen tasks) is averaged~
-~with all previous accuracy. A bit weird, but doing so get me a curve very similar~
-~to what the papier displayed.~
+### iCIFAR100, 10-split

-EDIT: I've plot on the curve the "average incremental accuracy" but I'm not sure
-if the authors plot this metrics or simply used it in the tables results. Thus I'm
-not sure of my results validity.
+

----
+### iCIFAR100, 2-split
+
+TODO
+
+
+## TODO
+
+- [ ] Add a subparser per paper