“Deepfake” techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online.
Yet the industry doesn’t have a great data set or benchmark for detecting them.
More research and development in this area is needed to ensure better open source tools for detecting deepfakes.
That’s why Facebook, the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany-SUNY are coming together to build the Deepfake Detection Challenge (DFDC).
The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.
The Deepfake Detection Challenge will include a realistic data set created with paid actors, as well as grants and awards, to spur the industry to create new ways of detecting AI-manipulated media and preventing it from being used to mislead others.
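The announcement doesn't say how challenge entries will be scored, but detection benchmarks of this kind are commonly evaluated with binary cross-entropy (log loss) over per-video predictions of the probability that a clip is fake. The sketch below is an illustration under that assumption, not a detail of the DFDC; the labels and probabilities are made up.

```python
import math

def log_loss(labels, probs, eps=1e-15):
    """Binary cross-entropy over per-video predictions.

    labels: 1 if the video is fake, 0 if real.
    probs:  predicted probability that each video is fake.
    """
    total = 0.0
    for y, p in zip(labels, probs):
        p = min(max(p, eps), 1 - eps)  # clamp so log() never sees 0 or 1
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(labels)

# Hypothetical scoring run: three videos, two confident calls, one unsure.
labels = [1, 0, 1]
probs = [0.92, 0.08, 0.55]
print(f"log loss: {log_loss(labels, probs):.4f}")  # ~0.2549
```

A metric like this rewards well-calibrated confidence rather than hard real/fake calls, which matters when detector output feeds human review or moderation decisions.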
The full data set release and the DFDC launch will happen at the week-long Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, from Sunday, December 8th, through Saturday, December 14th.