A news agency investigation has uncovered dozens of podcasts on Spotify promoting illegal online pharmacies, raising serious concerns about the platform’s content moderation practices.
These podcasts were often discovered via simple searches for prescription medications like “Adderall” and “Xanax.” Although they appeared to be legitimate health and wellness shows, they were actually fronts for promoting and selling controlled substances without a valid prescription, an act that directly violates U.S. federal law.
Many of these podcasts used robotic or AI-generated voices to deliver advertisements for dangerous prescription drugs such as oxycodone, methadone, Vicodin, and Ambien.
“Listeners were frequently directed to shady pharmacy websites with promises of ‘FDA-approved delivery’ without a doctor’s approval.”
These platforms exploit the growing accessibility of generative AI tools to create convincing, yet deceptive content that bypasses standard moderation filters.
Podcast titles such as:
“My Adderall Store”
“Order Xanax 2 mg Online Big Deal On Christmas Season”
“Xtrapharma.com”
...left no ambiguity about their intentions. Many of these episodes even included direct URLs in the description, linking users to illegal online drug marketplaces.
Search terms like “Adderall” or “Xanax” consistently surfaced these drug-peddling podcasts among the top search results. Shockingly, some remained live for weeks or even months, despite their illegal nature.
“Some podcasts only vanished after they began attracting attention, suggesting the company’s system leans heavily on user reports and external scrutiny, rather than actively identifying harmful content on its own.”
This exposes a gap in Spotify's proactive moderation capabilities, especially in dealing with emerging content threats amplified by AI.
Although Spotify has removed many of the flagged podcasts, new ones continue to appear, often within days. The speed and ease of content publishing on platforms like Spotify make them ripe for exploitation.
“Spotify reaffirmed its commitment to removing illegal and spam content, noting that it uses both automated systems and human moderators to monitor podcasts.”
Despite these claims, experts believe Spotify’s current systems fall short of what is required to tackle the scale and speed of AI-generated spam.
With teen overdoses linked to online drug sales on the rise, the urgency for action has never been greater.
“Parents, advocates, and online safety groups are calling for stricter safeguards, more proactive monitoring, and greater accountability.”
This incident not only raises questions about Spotify’s role, but also underscores the wider challenge tech companies face in preventing digital drug abuse.
The emergence of drug-promoting podcasts on Spotify reveals a troubling loophole in the platform’s content moderation system. While Spotify claims to use both AI and human oversight to monitor content, the persistence of these illegal shows—often featuring AI-generated voices and direct links to illicit pharmacies—highlights the inadequacy of current safeguards.
The fact that these podcasts remained live for extended periods, sometimes being removed only after attracting public attention, raises serious concerns about the effectiveness of Spotify’s proactive enforcement. With growing cases of teen overdoses and the rising threat of online drug abuse, this issue extends beyond digital policy into real-world harm.
As generative AI becomes more accessible, bad actors are exploiting platforms with open publishing models to spread dangerous and unlawful content. To regain user trust and ensure safety, Spotify—and similar platforms—must urgently strengthen content moderation, implement real-time threat detection, and be held accountable for enabling access to harmful content.