Microsoft's VASA-1 AI Model Can Generate a Deepfake Video That Looks Real
While many of us already worry that deepfake videos are eroding trust online, Microsoft just released a research paper showing how its latest AI model, VASA-1, can generate realistic deepfake videos from just a single image and an audio clip.
Take a look at the screenshots Microsoft released as demo outputs of VASA-1:

See that? The AI model is capable of reproducing the emotions and gestures of real humans from nothing more than an image and an audio clip.
Why do I think it could lead to a disaster?
Imagine the pipeline: you generate a fake conversational script with ChatGPT, turn it into audio with one of the dozens of freely available voice-cloning tools like ElevenLabs, matching a target's voice, tone, and accent. You can find images of almost anyone online through their social media accounts, and their voice in any video they've posted on Facebook or elsewhere. Add a tool that merges all of this together, and you can generate virtually any message that looks real and is hard to detect.

That's probably a world where you can't trust anything anymore. As I explained here, this new AI model introduced by Microsoft could make things worse for ordinary people who don't know what AI is capable of.
With VASA-1 you don't even need to write a prompt. You can just take an image, upload an audio clip from anywhere, and let the AI model generate a lifelike, audio-driven talking face that looks real and is rendered in real time.

Isn't this alarming news?
Well, the good news in this announcement is that Microsoft has no plans to release an online demo or API access for what is, for now, a research-only project.

Still, the capabilities of the VASA-1 model raise concerns and hundreds of questions for people already fighting a privacy war with social media platforms, who don't want their images animated in real time without their permission, made to say things they never said.
AI is improving every day, as every major tech company and dozens of startups burn billions of dollars on research efforts, with new tools and products being introduced every now and then.

That is a good thing in itself, but at the same time concerning for the general public, because privacy is at risk for all of us.
Companies like Microsoft, Facebook, and Google already hold the personal data of millions of people, and these same companies are building such controversial AI models. That should make people think hard about their privacy, and about their data being leaked, stolen, or exploited by companies that also profit from selling user data. This could lead to a disaster, or to a new era for technology companies.
Only time will tell, and here's the official research page for you to check it on your own.