June 27, 2021
Trust has become a first-order concept in AI, prompting experts to call for measures ensuring that AI is ‘trustworthy’. The danger of untrustworthy AI often culminates with Deepfake, perceived as an unprecedented threat to democracies and online trust through its potential to back sophisticated disinformation campaigns. Little work has, however, been dedicated to examining the concept of trust itself, which undermines the arguments supporting such initiatives. By investigating the concept of trust and its evolution, this paper ultimately defends a non-intuitive position: Deepfake is not only incapable of undermining online trust in this way, but also offers a unique opportunity to transition towards a framework of social trust better suited to the challenges of the digital age. Discussing the dilemmas traditional societies had to overcome to establish social trust, and the evolution of their solutions across modernity, I come to reject rational choice theories as models of trust and to distinguish an ‘instrumental rationality’ from a ‘social rationality’. This allows me to refute the argument that holds Deepfake to be a threat to online trust. In contrast, I argue that Deepfake may even support a transition from instrumental to social rationality, which is better suited for making decisions in the digital age.