
Microsoft’s Deepfake Detection Tool Will Combat Disinformation During US Elections

Microsoft Video Authenticator will analyse videos and assign a percentage chance, or confidence score to curb doctored videos and misinformation.

The internet has connected the world to the point where almost anything you need is a click away. Spreading and sharing information is no longer a tedious task, and these technological advancements have certainly made our lives easier. By the same token, however, spreading misinformation has become child's play.


A new fad has taken over social media: videos of eminent personalities from various fields making controversial statements or singing a funny song. These videos are relatively easy to make using online tools and can deceive even the sharpest eyes.

Such manipulated videos are called deepfakes, and they have become a major concern around the world owing to the negativity and misinformation they spread.

Combating Deepfakes in US Elections With Microsoft

Microsoft recently added to its growing stack of innovative technology with an AI-assisted tool built to flag synthetic media. Called Microsoft Video Authenticator, the tool provides “a percentage chance, or confidence score” to indicate whether a piece of media has been artificially manipulated.


“In the case of a video, it can provide this percentage in real-time on each frame as the video plays,” it writes in a blog post announcing the tech. “It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
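Microsoft has not published Video Authenticator's internals, but the behaviour it describes, a manipulation confidence score emitted for every frame as the video plays, can be sketched as below. The `score_frame` heuristic here is a hypothetical stand-in for the real detector, which inspects blending boundaries and subtle greyscale artefacts.

```python
# Illustrative sketch only: the real detector's model is not public.
# score_frame is a hypothetical stand-in that returns a 0-1 probability
# that a frame was artificially manipulated.

from typing import Iterable, List

def score_frame(frame: bytes) -> float:
    """Hypothetical per-frame detector. A real detector would analyse
    blending boundaries and greyscale fading; this demo just uses the
    fraction of distinct byte values as a placeholder signal."""
    if not frame:
        return 0.0
    return len(set(frame)) / 256.0

def score_video(frames: Iterable[bytes]) -> List[float]:
    """Emit a manipulation confidence score (as a percentage) for each
    frame, mirroring the per-frame readout described in the blog post."""
    return [round(score_frame(f) * 100, 1) for f in frames]

demo_frames = [b"\x00\x01\x02" * 100, b"\xff" * 300]
print(score_video(demo_frames))  # one percentage per frame
```

In a real pipeline the per-frame scores would be streamed to the viewer during playback rather than collected into a list.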

The algorithm has been developed with the help of Microsoft’s Responsible AI team and the Microsoft AI, Ethics, and Effects in Engineering and Research (AETHER) Committee, and it is designed to identify manipulated elements that are not easily caught by the human eye. The team trained the AI using publicly available datasets such as FaceForensics++ and tested it on the DeepFake Detection Challenge Dataset.

Deepfake generation tools have advanced so much that each frame of a video can be manipulated to look real without being noticed. And while it is now very easy to create a realistic fake video using AI, identifying visual disinformation with technology remains a hard problem.

Microsoft is also releasing a tool powered by Microsoft’s Azure cloud infrastructure to let creators sign a piece of content with a certificate. There’s also a reader that can scan that certificate to verify that the content is authentic.
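The sign-and-verify flow described above can be illustrated with a minimal sketch. Note the assumptions: the real system uses certificates backed by Azure infrastructure, whereas this demo substitutes a symmetric HMAC key (`PRODUCER_KEY` is an invented stand-in), but the shape of the protocol, producer signs a hash of the content, reader recomputes and compares, is the same.

```python
# Simplified sketch of content signing and verification. The real tool
# uses certificates via Microsoft's Azure cloud; PRODUCER_KEY here is a
# stand-in for that certificate/key material.
import hashlib
import hmac

PRODUCER_KEY = b"demo-signing-key"  # assumption: placeholder key

def sign_content(content: bytes, key: bytes = PRODUCER_KEY) -> str:
    """Producer side: sign a SHA-256 hash of the content."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str,
                   key: bytes = PRODUCER_KEY) -> bool:
    """Reader side: recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_content(content, key), signature)

footage = b"Original newsroom footage"
sig = sign_content(footage)
assert verify_content(footage, sig)             # untouched content verifies
assert not verify_content(footage + b"!", sig)  # any alteration is detected
```

Because the signature covers a hash of the bytes, even a one-byte edit to the content invalidates it, which is what lets the reader tool confirm a clip is unaltered.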

Microsoft is teaming up with the San Francisco-based AI Foundation’s Reality Defender 2020 (RD2020) initiative to make the tool available to organizations involved in the democratic process this year, including news outlets such as BBC News and The New York Times as well as political campaigns.

“Video Authenticator will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 here,” Microsoft adds.

“We expect that methods for generating synthetic media will continue to grow in sophistication,” it continues. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”

Microsoft acknowledges that the detection tool is not perfect and that generative technologies may always stay a step ahead of it. Some videos will slip through, and the company will have to keep working to make the algorithm more robust. For now, Microsoft’s tool will focus on preventing deepfakes from disrupting the US elections. It would also help if Microsoft open-sourced the tool so other researchers could collaborate on countering deepfake-generation tools. Until then, a critically thinking mind is still our best defence against these realistic AI-generated deepfakes.


Prakhar Sushant
I follow business and Technology passionately. A native of Delhi, living in Bangalore. I am an avid reader, a traveller but not a tourist.


