A contest to spot AI-generated “deepfakes”

This article was first published on our sister site, The Internet Of All Things.


Heard of “deepfakes”? These are artificial-intelligence-generated videos in which the face of, say, a politician or a journalist is inserted into other footage.

Sometimes such deepfakes are easy to spot, but advances in the technology are making them increasingly difficult to detect.

In the fight against deepfakes, Facebook, Microsoft, the “Partnership on AI” coalition, and academics from seven universities have now launched a contest “to encourage better ways of detecting deepfakes.”

The Deepfake Detection Challenge will run from late 2019 until the spring of 2020.

Announcing this on its official blog, Facebook said:

We want to catalyze more research and development in this area and ensure that there are better open-source tools to detect deepfakes. That’s why Facebook, the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany-SUNY are coming together to build the Deepfake Detection Challenge (DFDC).

Facebook

The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.

The Deepfake Detection Challenge will include a data set and leader board, as well as grants and awards, to “spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others.”
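To give a sense of what such detection technology might look like, here is a minimal, illustrative sketch and not the challenge's actual method: a frame-level binary classifier that scores individual video frames as real or fake. The model choice (ResNet-18), the random dummy data, and the averaging of frame scores into a video-level score are all assumptions for demonstration purposes only.

```python
# Illustrative sketch only: a frame-level deepfake classifier in PyTorch.
# Not the DFDC's actual approach; model and data here are placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Binary classifier: one output logit per frame (fake vs. real)
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch standing in for preprocessed video frames (B, C, H, W) and labels
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real

# One training step on the dummy batch
model.train()
optimizer.zero_grad()
logits = model(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# At inference time, per-frame probabilities could be averaged into a video-level score
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(frames))
    video_score = probs.mean().item()
print(f"Predicted probability that the clip is a deepfake: {video_score:.2f}")
```

In practice, entries to a challenge like this would train on a large labelled dataset of real and manipulated videos and be ranked on a held-out test set via the leaderboard mentioned above.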

The governance of the challenge will be facilitated and overseen by the Partnership on AI’s new Steering Committee on AI and Media Integrity, which is made up of a broad cross-sector coalition of organisations, including Facebook, WITNESS, Microsoft, and others in civil society and the technology, media, and academic communities.


Image Credit: Facebook

