YouTube made AI enhancements to videos without telling users or asking permission. As AI quietly mediates our world, what happens to our shared connection with real life?
Rick Beato’s face just didn’t look right. “I was like, ‘man, my hair looks strange’,” he says. “And the closer I looked, it almost seemed like I was wearing makeup.” Beato runs a YouTube channel with over five million subscribers, where he’s made nearly 2,000 videos exploring the world of music. Something seemed off in one of his recent posts, but he could barely tell the difference. “I thought, ‘am I just imagining things?’”
It turns out, he wasn’t. In recent months, YouTube has secretly used artificial intelligence (AI) to tweak people’s videos without letting them know or asking permission. Wrinkles in shirts seem more defined. Skin is sharper in some places and smoother in others. Pay close attention to ears, and you may notice them warp. These changes are small, barely visible without a side-by-side comparison. Yet some disturbed YouTubers say it gives their content a subtle and unwelcome AI-generated feeling.
There’s a larger trend at play. A growing share of reality is pre-processed by AI before it reaches us. Eventually, the question won’t be whether you can tell the difference, but whether it’s eroding our ties to the world around us.
“The more I looked at it, the more upset I got,” says Rhett Shull, another popular music YouTuber. Shull, a friend of Beato’s, started looking into his own posts and spotted the same strange artefacts. He posted a video on the subject that’s racked up over 500,000 views. “If I wanted this terrible over-sharpening I would have done it myself. But the bigger thing is it looks AI-generated. I think that deeply misrepresents me and what I do and my voice on the internet. It could potentially erode the trust I have with my audience in a small way. It just bothers me.”
AI is increasingly a medium that defines our lives and realities – Samuel Woolley
Shull and Beato weren’t the first to notice the problem. Complaints on social media date back to at least June, with users posting close-ups of odd-looking body parts and questioning YouTube’s intentions. Now, after months of rumours in comment sections, the company has finally confirmed it is altering a limited number of videos on YouTube Shorts, the app’s short-form video feature.
“We’re running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video),” said Rene Ritchie, YouTube’s head of editorial and creator liaison, in a post on X. “YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features.”
YouTube did not respond to the BBC’s questions about whether users will be given a choice about AI tweaking their videos.
It’s certainly true that modern smartphones come with built-in AI features that can enhance image and video quality. But that’s an entirely different affair, according to Samuel Woolley, the Dietrich chair of disinformation studies at the University of Pittsburgh in the US. “You can make decisions about what you want your phone to do, and whether to turn on certain features. What we have here is a company manipulating content from leading users that is then being distributed to a public audience without the consent of the people who produce the videos.”
Woolley argues YouTube’s choice of words feels like a misdirection. “I think using the term ‘machine learning’ is an attempt to obscure the fact that they used AI because of concerns surrounding the technology. Machine learning is in fact a subfield of artificial intelligence,” he says.
Ritchie shared additional details in a follow-up post, drawing a line between “traditional machine-learning” and generative AI – where an algorithm creates entirely new content by learning patterns in vast datasets. Woolley, however, says this isn’t a meaningful distinction here.
Regardless, the move shows how AI keeps adding steps between us and the information and media we consume, often in ways you’d never notice at first glance.
“Footsteps in the sand are a great analogy,” says Jill Walker Rettberg, a professor at the Center for Digital Narrative at the University of Bergen in Norway. “You know someone made those footprints. With an analogue camera, you know something was in front of the camera because the film was exposed to light. But with algorithms and AI, what does this do to our relationship with reality?”
In March 2025, controversy erupted over an apparent AI remaster of the ’80s sitcoms The Cosby Show and A Different World streaming on Netflix. The shows are available in high definition, despite the fact they were originally shot on videotape. The Verge called the results a “nightmarish mess of distorted faces, garbled text and misshapen backgrounds”.
What happens if people know that companies are editing content from the top down, without even telling the content creators themselves? – Samuel Woolley
It’s more than ’80s reruns and YouTube videos, however. In 2023, Samsung was caught artificially enhancing photos of the Moon taken on its newer devices. The company later issued a blog post detailing the AI behind its Moon photos.
Owners of Google Pixel smartphones get an even more dramatic feature that uses AI to fix smiles. The Best Take tool selects the most appealing expression on each person’s face from a series of group photos, compiling them into a handsome new picture of a moment that never happened in the real world. Google’s latest device, the Pixel 10, comes with a new feature that uses generative AI in its camera to allow users to zoom up to 100x – far beyond what the camera is physically able to capture.
As features like these grow more common, they raise new questions about what a photo even represents.
It isn’t a new paradigm. Thirty years ago, there was similar handwringing about the havoc Photoshop would wreak on society. Decades later, we had conversations about the harms of airbrushing models in magazines and beauty filters on social media. Perhaps AI is more of the same, but it puts these trends on steroids, according to Woolley.
Author: Thomas Germain via BBC