Facebook and Microsoft have announced a joint challenge that aims to address the growing threat of 'deepfake' content - computer-modified videos in which one or more participants are replaced by someone else, often created from as little as a single source image.
'The camera,' the saying goes, 'never lies.' It has never been true, of course, but the explosion of affordable computing power, coupled with deep-learning algorithms that reduce the difficult part of forging footage to a single click, has made it harder than ever to trust what you see on screen. Known as 'deepfakes,' these videos - and photos, and even audio - paste one person's features over another's in a surprisingly convincing manner. The quality of the results is only likely to improve from here, while the accessibility of the technique has shifted from requiring a reasonable understanding of deep learning and a high-end GPU to a single tap in a smartphone app.
Facebook and Microsoft, both heavily involved in deep learning, have announced that they are backing a project from the Partnership on AI and academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY to find a reliable means of detecting deepfake videos: the Deepfake Detection Challenge (DFDC).
'The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer,' explains Facebook's chief technology officer Mike Schroepfer. 'The Deepfake Detection Challenge will include a data set and leaderboard, as well as grants and awards, to spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others. The governance of the challenge will be facilitated and overseen by the Partnership on AI's new Steering Committee on AI and Media Integrity, which is made up of a broad cross-sector coalition of organisations including Facebook, WITNESS, Microsoft, and others in civil society and the technology, media, and academic communities.'
'Manipulated media being put out on the internet, to create bogus conspiracy theories and to manipulate people for political gain, is becoming an issue of global importance, as it is a fundamental threat to democracy, and hence freedom,' adds Professor Philip H. S. Torr of the Department of Engineering Science at the University of Oxford. 'I believe we urgently need new tools to detect and characterise this misinformation, so I am happy to be part of an initiative that seeks to mobilise the research community around these goals — both to preserve the truth whilst pushing the frontiers of science.'
The first DFDC events are scheduled to begin in October this year, with more information available on the official website.
November 6 2020 | 17:30