Memory Aware Synapses: Learning what (not) to forget

September 09, 2018

Abstract

Humans can learn in a continuous manner. Old, rarely utilized knowledge can be overwritten by new incoming information, while important, frequently used knowledge is prevented from being erased. In artificial learning systems, lifelong learning so far has focused mainly on accumulating knowledge over tasks and overcoming catastrophic forgetting. In this paper, we argue that, given the limited model capacity and the unlimited new information to be learned, knowledge has to be preserved or erased selectively. Inspired by neuroplasticity and earlier work on weight regularization for lifelong learning, we propose an online method to compute the importance of the parameters of a neural network, based on the data the network is actively applied to, in an unsupervised manner. To this end, after learning a task and whenever a new sample is fed to the network, we accumulate an importance measure for each parameter of the network, based on how sensitive the predicted output is to a change in this parameter. This results in importance weights that are data or context dependent. When learning a new task, changes to important parameters can then be penalized, effectively preventing knowledge important for previous tasks from being overwritten. Further, we show an interesting connection between a local version of our method and Hebbian learning.
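To illustrate the mechanism the abstract describes, the following is a minimal numpy sketch, not the paper's implementation: a linear model stands in for the network, importance is accumulated from the gradient of the squared output norm with respect to each parameter, and a quadratic penalty discourages changes to important parameters when training on a new task. The model shape, sample count, and the value of the regularization strength `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: a linear map y = W @ x.
W = rng.normal(size=(3, 5))   # parameters after learning task A
W_star = W.copy()             # snapshot of the task-A parameters

# --- Online, unsupervised importance accumulation ---
# For each unlabeled sample, accumulate the magnitude of the gradient of
# the squared L2 norm of the output with respect to each parameter.
omega = np.zeros_like(W)
n_samples = 100
for _ in range(n_samples):
    x = rng.normal(size=5)
    # For a linear model, d||W x||^2 / dW = 2 (W x) x^T in closed form.
    grad = 2.0 * np.outer(W @ x, x)
    omega += np.abs(grad)
omega /= n_samples  # average importance over the samples seen

# --- Penalty added to the new task's loss ---
lam = 1.0  # illustrative regularization strength

def mas_penalty(W_new):
    """Penalize deviation from W_star, weighted by parameter importance."""
    return lam * np.sum(omega * (W_new - W_star) ** 2)

# Leaving the task-A parameters untouched incurs no penalty;
# perturbing them is penalized in proportion to their importance.
assert mas_penalty(W_star) == 0.0
```

During training on a new task, `mas_penalty` would simply be added to the new task's loss, so parameters the network rarely relies on remain free to change while important ones are anchored.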

AUTHORS

Written by

Rahaf Aljundi

Francesca Babiloni

Mohamed Elhoseiny

Marcus Rohrbach

Tinne Tuytelaars

Publisher

ECCV

Research Topics

Computer Vision

Related Publications

January 02, 2026

COMPUTER VISION

PhyGDPO: Physics-Aware Groupwise Direct Preference Optimization for Physically Consistent Text-to-Video Generation

Yuanhao Cai, Kunpeng Li, Menglin Jia, Jialiang Wang, Junzhe Sun, Feng Liang, Weifeng Chen, Felix Xu, Chu Wang, Ali Thabet, Xiaoliang Dai, Xuan Ju, Alan Yuille, Ji Hou

December 18, 2025

COMPUTER VISION

We Can Hide More Bits: The Unused Watermarking Capacity in Theory and Practice

Aleksandar Petrov, Pierre Fernandez, Tomáš Souček, Hady Elsahar

December 18, 2025

COMPUTER VISION

Learning to Watermark in the Latent Space of Generative Models

Sylvestre Rebuffi, Tuan Tran, Valeriu Lacatusu, Pierre Fernandez, Tomáš Souček, Tom Sander, Hady Elsahar, Alexandre Mourachko

December 18, 2025

COMPUTER VISION

Pixel Seal: Adversarial-only training for invisible image and video watermarking

Tomáš Souček, Pierre Fernandez, Hady Elsahar, Sylvestre Rebuffi, Valeriu Lacatusu, Tuan Tran, Tom Sander, Alexandre Mourachko
