September 09, 2018
Humans can learn in a continuous manner. Old, rarely utilized knowledge can be overwritten by new incoming information, while important, frequently used knowledge is prevented from being erased. In artificial learning systems, lifelong learning has so far focused mainly on accumulating knowledge over tasks and overcoming catastrophic forgetting. In this paper, we argue that, given the limited model capacity and the unlimited new information to be learned, knowledge has to be preserved or erased selectively. Inspired by neuroplasticity and earlier work on weight regularization for lifelong learning, we propose an online method to compute the importance of the parameters of a neural network, based on the data that the network is actively applied to, in an unsupervised manner. To this end, after learning a task and whenever a new sample is fed to the network, we accumulate an importance measure for each parameter of the network, based on how sensitive the predicted output is to a change in this parameter. This results in importance weights that are data or context dependent. When learning a new task, changes to important parameters can then be penalized, effectively preventing knowledge important for previous tasks from being overwritten. Further, we show an interesting connection between a local version of our method and Hebb's rule, which is a known model for the learning process in the brain.
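The importance estimation described above can be sketched in a few lines of PyTorch. The sketch below is an illustration, not the authors' released implementation: `model`, `data_loader`, `importance`, `old_params`, and the regularization strength `lam` are all hypothetical names. For each unlabeled sample it backpropagates the squared L2 norm of the network output and accumulates the absolute parameter gradients as importance weights; when training on a new task, a quadratic penalty weighted by these importances is added to the task loss.

```python
import torch

# Assumed initialization (hypothetical, for illustration):
# importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
# old_params = {n: p.detach().clone() for n, p in model.named_parameters()}


def accumulate_importance(model, data_loader, importance, n_seen=0):
    """Accumulate per-parameter importance weights on unlabeled data.

    For every sample, measure how sensitive the squared L2 norm of the
    network output is to each parameter, and keep a running average of
    the absolute gradients. No labels are required.
    """
    model.eval()
    for x in data_loader:  # assumes batch size 1, so |grad| is per sample
        model.zero_grad()
        out = model(x)
        # Sensitivity of the learned function itself, not of a loss:
        # d ||F(x; theta)||_2^2 / d theta
        out.pow(2).sum().backward()
        n_seen += 1
        for name, p in model.named_parameters():
            if p.grad is not None:
                # Incremental mean of |gradient| over all samples seen
                importance[name] += (p.grad.abs() - importance[name]) / n_seen
    return importance, n_seen


def regularized_loss(model, task_loss, importance, old_params, lam=1.0):
    """Penalize changes to parameters important for earlier tasks."""
    penalty = sum(
        (importance[name] * (p - old_params[name]).pow(2)).sum()
        for name, p in model.named_parameters()
    )
    return task_loss + lam * penalty
```

Because the measure depends only on the model's outputs, the importance weights can be updated on whatever unlabeled data the network is applied to, which is what makes them data or context dependent.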
Written by
Mohamed Elhoseiny
Marcus Rohrbach
Francesca Babiloni
Rahaf Aljundi
Tinne Tuytelaars
Publisher
ECCV