If you use social media you’ve probably seen them – polished fitness videos promising dramatic body transformations in weeks.
They show chiselled physiques, striking before‑and‑after images and claims that you can look years younger by following a simple routine.
The results often look too good to be true.
In many cases, they are.
A BBC investigation has uncovered misleading fitness adverts featuring AI‑generated characters that breach UK advertising rules.
Many adverts also failed to make clear that the people featured were not real.
And why were they doing it? To sell a subscription to a fitness app.
So how easy is it to tell whether the person giving fitness advice exists? And does it matter?
AI content has flooded social media feeds in the past couple of years, and videos promoting exercise and online fitness programmes are becoming increasingly common.
Many of the adverts flagged to the Advertising Standards Authority (ASA) by the BBC featured AI‑generated characters claiming to have followed workout programmes themselves. They also show transformations experts say are scientifically implausible in such short timeframes.
The videos promise users they can change their bodies in weeks, “look 20 years younger”, or “lose 40lb in one month”.
Once users engage with exercise or fitness content, algorithms quickly flood their feeds with similar material.
Prof Andy Miah, an AI expert from the University of Salford, says the trend is “huge” and those scrolling are drawn in because they are looking for advice.
“People are looking for solutions to their health, their fitness, their looks,” he says. “There’s always been an appetite for that kind of content – but now it’s incredibly hard to tell who to believe.”
Unlike human influencers, AI characters can produce content endlessly, and users cannot opt out.
“You can’t turn [AI content] off,” Prof Miah says. “It’s impossible to stop your feeds being proliferated with this material.”
He accepts there are many positive aspects to AI, but describes the current landscape as a “wild west” in terms of regulation and says some ads could be harmful.
“The claims about how quickly you can make gains are completely unrealistic,” he says. “That feeds false hope and creates damaging expectations.”
The BBC contacted the companies behind several of the adverts found to be problematic. None responded.
Many of the adverts seen by the BBC contained different AI characters but similar messaging. They included:

- A podcast-style setup with a fake instructor being interviewed about her workout, which she claimed would make women look "20 years younger" in a month
- A fake army sergeant claiming the gym doesn't work and promising "unbelievable" results in weeks by following his military workout
- Three women on a beach talking about their body transformations and showing themselves before and after. None of their bodies are real
- An AI woman giving a fake presentation about how doctors ask her for advice about fitness, who claims her routine can see people lose 40lb in 28 days – and is cheered by an AI crowd
On a beach in North Tyneside, fitness instructor David Fairlamb is putting nearly 40 people of all ages through their paces in a group training session.
He has worked in the fitness industry for 30 years – long before social media, let alone artificial intelligence.
Fairlamb, 54, believes AI has its place in fitness programmes and nutrition, but says it cannot fully replace real-life coaching.
“You cannot beat that real person, that real connection, the accountability,” he says.
When shown the AI‑generated adverts that breached advertising rules, his reaction is immediate.
“It’s so wrong. It’s so misleading. And it’s so worrying for younger kids,” he says.
“These ads talk about 28‑day transformations. I’ve been doing this for 30 years and I’m telling you now – that just doesn’t happen. You’ve got no chance.”
Fairlamb recently started working alongside his daughter Georgia Sybenga, 25, who says even people who grew up around social media struggle to tell what is real.
“Sometimes I question it myself,” she says. “Some of them, you really can’t tell.”
Both worry that constant exposure to idealised, artificial bodies can damage confidence – particularly among young people.
“They think ‘I could look like that in 30 days’,” Fairlamb says. “But that body might not even be real. For young lads, for their mental health, it’s really concerning.”
Sybenga also warns AI‑generated fitness programmes do not have the full picture.
“It doesn’t take into consideration injuries or health conditions, so… you could injure yourself,” she says.
The ASA says AI is not banned in advertising – what matters is the message.
“We don’t judge ads based on whether they contain AI. We judge them on whether they’re misleading or likely to be harmful,” Adam Davison, the ASA’s director of data science, tells BBC Sport.
He says the regulator has received about 300 complaints involving AI‑generated advertising in the past year – and the number is rising.
“One challenge is that sometimes it can be hard even for us to tell whether AI has been used in an ad,” he adds.
AI tools make it quicker and easier to produce adverts for social media, sometimes by people who are less familiar with advertising rules than traditional companies, Davison says.
The ASA does not comment on specific cases, but is taking steps against the advertisers flagged by the BBC which made claims that were “unlikely” to be substantiated.
Because the advertisers had no previous complaints against them, they were sent "advice notices" with guidance on how to comply with advertising codes. As a result, the BBC has chosen not to identify those involved.
“A big part of what the ASA does, as well as our enforcement work, is trying to educate advertisers on their responsibilities,” Davison says.
“If you’re not being careful to review the content that’s coming out of those tools then it’s very easy to have something misleading ending up being posted.”
Social media companies say AI‑generated content should always be labelled, but the BBC found multiple examples where disclaimers were hidden, unclear or missing.
We showed our findings to Meta and TikTok, but both companies declined to comment.
However, TikTok says it has labelled more than 1.3 billion AI‑generated videos to date, while Meta assesses whether something has been created by AI by relying on indicators that other companies include in their creation tools.
Many users the BBC spoke to said they would welcome the option to opt out of AI‑generated content entirely.
Meta and TikTok declined to say whether this option was under consideration.
But the scale of AI content is increasing all the time.
“I think the economics of social media and the kind of attention economy in which we live lend itself towards more AI content,” says Prof Miah.
“It’s clearly useful in many ways. But where it then misleads people to have false expectations… is where perhaps regulation needs to step in.”