How AI Can Help to Combat Deepfakes
Deepfakes splice doctored images, video, or voices together to stir controversy and sway opinion. But before you throw up your hands in despair and wonder where the world is going, other forms of technology may yet be able to counter the threat. How, for example, can AI be used to combat deepfakes?


AI Innovations
Towards the tail end of 2020, Microsoft launched its Video Authenticator, an AI-powered tool that scans a video or still image and tells viewers how likely it is to be a deepfake. The tool gives a percentage estimate, or confidence score, as it picks up tell-tale signs of manipulation that may not be discernible to the average viewer.
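To make the idea concrete, here is a minimal Python sketch of how a confidence-score detector of this kind might report its result: each frame of a clip is scored and the scores are averaged into an overall percentage. The score_frame function is a hypothetical placeholder for a trained model; this is not Microsoft's actual Video Authenticator code.

```python
# Minimal sketch: turn per-frame manipulation scores into one overall
# percentage, the way a confidence-score detector might report results.
# score_frame() is a hypothetical placeholder for a trained model;
# this is NOT Microsoft's Video Authenticator implementation.
import cv2  # pip install opencv-python


def score_frame(frame) -> float:
    """Placeholder: a real detector would return the probability (0.0-1.0)
    that this frame shows blending artefacts or other manipulation."""
    return 0.0  # swap in a trained model's prediction here


def deepfake_confidence(video_path: str) -> float:
    """Average the per-frame scores and report them as a percentage."""
    capture = cv2.VideoCapture(video_path)
    scores = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        scores.append(score_frame(frame))
    capture.release()
    return 100.0 * sum(scores) / max(len(scores), 1)


if __name__ == "__main__":
    likelihood = deepfake_confidence("clip.mp4")
    print(f"Estimated likelihood of manipulation: {likelihood:.1f}%")
```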
Elsewhere, researchers at the University of California, Berkeley and the University of Southern California are developing machine learning techniques that examine soft biometrics, such as speech patterns and facial quirks, to uncover deepfakes with a very high degree of accuracy.
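As a rough illustration of the soft-biometrics idea, the sketch below trains a simple classifier on a handful of behavioural feature vectors. The feature names and numbers are invented for this example; the researchers' actual models and features are far more sophisticated.

```python
# Illustrative sketch of soft-biometric classification: genuine vs. fake
# clips described by a few behavioural features. The feature names and
# values are invented for this example, not the researchers' real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [blink rate, head-tilt variance, lip-sync offset, pause ratio]
X_train = np.array([
    [0.30, 0.12, 0.02, 0.18],  # genuine footage of the speaker
    [0.28, 0.10, 0.03, 0.20],  # genuine
    [0.05, 0.40, 0.15, 0.02],  # known deepfake
    [0.07, 0.35, 0.12, 0.04],  # known deepfake
])
y_train = np.array([0, 0, 1, 1])  # 0 = genuine, 1 = deepfake

model = LogisticRegression().fit(X_train, y_train)

# Score an unseen clip's soft-biometric features.
new_clip = np.array([[0.06, 0.38, 0.14, 0.03]])
prob_fake = model.predict_proba(new_clip)[0, 1]
print(f"Probability this clip is a deepfake: {prob_fake:.0%}")
```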
Facebook ran a DeepFake Detection Challenge in 2019 and also released a huge set of more than 100,000 deepfake videos to help third parties develop the AI detection tools of the future.


Look Closely
In the meantime, you may be able to spot a fake video by studying it closely. Look for unnatural eye movement, unnatural facial expressions, or a lack of the emotion you would expect to accompany the activity or discussion on screen.


Expect Developments
Deepfakes may still be in their infancy, and the perpetrators are becoming more sophisticated, but you can expect the good guys to up their game too. It's reassuring to know that AI will become an increasingly powerful line of defence against harmful deepfakes.