Now it appears that artificial intelligence can make anyone say anything. Literally anyone, and literally anything. And the proof is in the Mona Lisa … RAPPING?
All that’s required is a still image of a face, an audio recording of speech, singing, anything, and the new software.
Microsoft’s new product demonstrates the possibilities.
A report at CNN detailed one of the recent “advances” in computer tech, with Microsoft’s code able to “take a still image of a face and an audio clip of someone speaking and automatically create a realistic looking video of that person speaking.”
That software is called VASA-1 and the report calls the results “a bit jarring.”
“Microsoft said the technology could be used for education or ‘improving accessibility for individuals with communication challenges,’ or potentially to create virtual companions for humans. But it’s also easy to see how the tool could be abused and used to impersonate real people,” CNN documented.
“Wow. Creating videos realistically depicting people saying words they never said? What could possibly go wrong with that?” commented author and WND Managing Editor David Kupelian. “Today’s ruling elites, from the Deep State to Big Tech, are so dependent on lies and deception – while censoring and attacking unwelcome truth as ‘disinformation,’ ‘misinformation’ and ‘malinformation’ – it’s easy to imagine that before long they’ll be using technology like this to enhance their daily practice of portraying the innocent as guilty and the guilty as innocent.”
CNN noted that experts now worry the tech could “disrupt” existing industries of film and advertising, and elevate the level of “misinformation” to which consumers are subjected.
The report said Microsoft isn’t going to release the software … yet.
“The move is similar to how Microsoft partner OpenAI is handling concerns around its AI-generated video tool, Sora: OpenAI teased Sora in February, but has so far only made it available to some professional users and cybersecurity professors for testing purposes,” the report said.
Online, Microsoft researchers claimed they are “opposed” to anything that creates “misleading” content.
However, they’ve designed the code to take into account face and head movements, lip motion, expression, eye gaze, blinking and much more.
The post Deceptive new tech has people voicing words they never said appeared first on WND.
Author: Bob Unruh