@@ -14,7 +14,7 @@ Every model must inherit `inclearn.models.base.IncrementalLearner`.
 :construction: --> Runnable but has not yet reached the expected results.\
 :x: --> Not yet implemented or barely working.\

-[1] : :white_check_mark: iCaRL\
+[1] : :construction: iCaRL\
 [2] : :construction: LwF\
 [3] : :construction: End-to-End Incremental Learning\

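+As a rough illustration of the `IncrementalLearner` inheritance mentioned above,
+here is a hypothetical skeleton of a new model; the hook names are placeholders
+of mine, see `inclearn.models.base` for the actual interface:
+
+```python
+from inclearn.models.base import IncrementalLearner
+
+class MyModel(IncrementalLearner):
+    """Skeleton of a custom model. The hook names below are assumed
+    placeholders, not the confirmed base-class interface."""
+
+    def _train_task(self, train_loader):
+        pass  # fit the network on the current batch of classes
+
+    def _after_task(self, data_loader):
+        pass  # e.g. update exemplar memory, snapshot the old model
+
+    def _eval_task(self, data_loader):
+        pass  # return predictions over all classes seen so far
+```
+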
@@ -33,63 +33,12 @@ The metric used is the `average incremental accuracy`:
 > each batch of classes. If a single number is preferable, we report the average of
 > these accuracies, called average incremental accuracy.

-If I understood correctly, the accuracy at task i (computed on all seen tasks) is averaged
-with all the previous accuracies. A bit weird, but doing so gets me a curve very similar
-to what the paper displayed.
+~If I understood correctly, the accuracy at task i (computed on all seen tasks) is averaged~
+~with all the previous accuracies. A bit weird, but doing so gets me a curve very similar~
+~to what the paper displayed.~

----
+EDIT: I've plotted the "average incremental accuracy" on the curve, but I'm not sure
+whether the authors plotted this metric or simply used it in the tables of results. Thus
+I'm not sure about the validity of my results.
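+
+For concreteness, here is a minimal sketch of the metric as I compute it; the
+function below is only an illustration, not part of this repo's code:
+
+```python
+def average_incremental_accuracy(task_accuracies):
+    """Mean of the accuracies measured after each batch of classes,
+    each accuracy being computed over all classes seen so far."""
+    return sum(task_accuracies) / len(task_accuracies)
+
+# Example: accuracies on all seen classes after tasks 1, 2, and 3.
+print(average_incremental_accuracy([0.90, 0.81, 0.74]))  # 0.8166...
+```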
-# References
-
-[1] : iCaRL:
-
-```
-@inproceedings{icarl,
-    author = {Rebuffi, Sylvestre-Alvise and Kolesnikov, Alexander and Sperl, Georg and Lampert, Christoph H.},
-    title = {iCaRL: Incremental Classifier and Representation Learning},
-    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
-    month = {July},
-    year = {2017}
-}
-```
-
-[2] : LwF:
-
-```
-@article{lwf,
-    author = {Z. {Li} and D. {Hoiem}},
-    journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
-    title = {Learning without Forgetting},
-    year = {2018},
-    volume = {40},
-    number = {12},
-    pages = {2935-2947},
-    doi = {10.1109/TPAMI.2017.2773081},
-    issn = {0162-8828},
-    month = {December}
-}
-```
-
-[3] : End-to-End Incremental Learning:
-
-```
-@inproceedings{end_to_end_inc_learn,
-    title = {{End-to-End Incremental Learning}},
-    author = {Castro, Francisco M. and Mar{\'i}n-Jim{\'e}nez, Manuel J. and Guil, Nicol{\'a}s and Schmid, Cordelia and Alahari, Karteek},
-    url = {https://hal.inria.fr/hal-01849366},
-    booktitle = {{ECCV 2018 - European Conference on Computer Vision}},
-    address = {Munich, Germany},
-    editor = {Vittorio Ferrari and Martial Hebert and Cristian Sminchisescu and Yair Weiss},
-    publisher = {{Springer}},
-    series = {Lecture Notes in Computer Science},
-    volume = {11216},
-    pages = {241-257},
-    year = {2018},
-    month = {September},
-    doi = {10.1007/978-3-030-01258-8\_15}
-}
-```
+---