
Microsoft’s Deepfake Detection Tool Will Combat Disinformation During US Elections

Microsoft Video Authenticator will analyse videos and assign a percentage chance, or confidence score, to help curb doctored videos and misinformation.

The Internet has connected the world to such an extent that whatever you need is available at the click of a button. Technology has made spreading and sharing data and information effortless, and these advancements have certainly made our lives easier. However, the same advancements have also made spreading misinformation child’s play.


A new fad has taken over social media: videos in which eminent personalities from various fields make controversial statements or sing a funny song. These videos are relatively easy to make using online tools and can deceive even the best-trained eyes.

Such manipulated videos are called deepfakes, and they have become a major concern around the world owing to the negativity and misinformation they spread.

Combating Deepfakes in US Elections With Microsoft

Microsoft recently added to its growing stack of innovative technology an AI-assisted tool built specifically to flag synthetic media and generate a manipulation score. The new tool, called Microsoft Video Authenticator, will provide “a percentage chance, or confidence score” to indicate whether the media has been artificially manipulated.


“In the case of a video, it can provide this percentage in real-time on each frame as the video plays,” it writes in a blog post announcing the tech. “It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
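The per-frame scoring the blog post describes can be illustrated with a small sketch. Nothing below is Microsoft’s actual model or API: `fake_probability` is a hypothetical stand-in for a trained detector, and the sketch only shows how frame-level probabilities could be turned into the kind of per-frame percentage the tool reports.

```python
# Sketch of per-frame manipulation scoring, loosely following the
# behaviour described in Microsoft's blog post. `fake_probability`
# is a placeholder; the real detector and its API are not public.

def fake_probability(frame):
    """Placeholder detector: score one frame in [0.0, 1.0].

    A real detector would look for blending boundaries and subtle
    greyscale artefacts; this stub just returns a stored value.
    """
    return frame["score"]

def score_video(frames, threshold=0.5):
    """Return a confidence score per frame and an overall verdict."""
    scores = [round(fake_probability(f) * 100, 1) for f in frames]
    flagged = any(s >= threshold * 100 for s in scores)
    return {"frame_scores": scores, "likely_manipulated": flagged}

# Example: three frames with pre-computed detector outputs.
frames = [{"score": 0.12}, {"score": 0.87}, {"score": 0.34}]
result = score_video(frames)
print(result["frame_scores"])        # percentage chance per frame
print(result["likely_manipulated"])  # any frame over the threshold?
```

In a real-time setting this loop would run as frames are decoded, updating the displayed confidence as the video plays.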

The algorithm was developed with the help of Microsoft’s Responsible AI team and the Microsoft AI, Ethics, and Effects in Engineering and Research (AETHER) Committee, and it is designed to identify manipulated elements that are not easily caught by the human eye. The team trained the AI on publicly available datasets such as FaceForensics++ and tested it on the DeepFake Detection Challenge Dataset.

Deepfake generation tools have advanced so much that each frame of a video can be imperceptibly manipulated to look real. And while it has become easy to create a realistic video using AI, identifying visual disinformation with technology remains a hard problem.

Microsoft is also releasing a tool powered by Microsoft’s Azure cloud infrastructure to let creators sign a piece of content with a certificate. There’s also a reader that can scan that certificate to verify that the content is authentic.
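The sign-and-verify idea behind that second tool can be sketched in a few lines. To be clear, the real system uses certificates and Azure infrastructure, none of which is shown here; in this sketch a keyed hash (HMAC) over the content bytes stands in for the certificate, and `verify` plays the role of the reader.

```python
import hashlib
import hmac

# Sketch of content provenance via sign-and-verify. An HMAC stands in
# for the certificate Microsoft describes; the key below is purely
# illustrative, not part of any real system.

SECRET_KEY = b"demo-key-not-a-real-certificate"

def sign(content: bytes) -> str:
    """Producer side: derive a signature to attach to the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Reader side: recompute and compare, detecting any tampering."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"breaking news footage"
tag = sign(original)
print(verify(original, tag))             # content unchanged: True
print(verify(b"doctored footage", tag))  # content altered: False
```

The point of the design is that any change to the bytes, however small, invalidates the attached signature, so a reader can tell edited media from the original.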

Microsoft is teaming up with the San Francisco-based AI Foundation’s Reality Defender 2020 (RD2020) initiative to make the tool available to organizations involved in the democratic process this year, including news outlets such as the BBC and The New York Times, as well as political campaigns.

“Video Authenticator will initially be available only through RD2020 [Reality Defender 2020], which will guide organizations through the limitations and ethical considerations inherent in any deepfake detection technology. Campaigns and journalists interested in learning more can contact RD2020 here,” Microsoft adds.

“We expect that methods for generating synthetic media will continue to grow in sophistication,” it continues. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”

Microsoft acknowledges the detection tool is not perfect and that generative techniques may always be a step ahead of it. Some videos will slip through, and the company will have to keep working to make the algorithm more robust. For now, Microsoft’s tool will focus on preventing deepfakes from disrupting the US elections. Microsoft could also open-source the tool to collaborate with other researchers on countering deepfake generators. Until then, a critically thinking mind is still our best bet against these realistic AI-generated deepfakes.


Prakhar Sushant
I follow business and technology passionately. A native of Delhi, living in Bangalore. I am an avid reader and a traveller, but not a tourist.


