“Nudify” Apps on Apple and Google Stores Cross 483 Million Downloads Despite Bans
News Synopsis
A new report has raised serious concerns about the continued availability of so-called “nudify” applications on platforms operated by Apple and Google, revealing that such apps have amassed hundreds of millions of downloads and significant revenue despite policies prohibiting this kind of content.
Report Highlights Scale of the Problem
According to findings by the Tech Transparency Project, dozens of mobile applications capable of generating non-consensual sexualized images remain accessible on both the Apple App Store and Google Play Store.
These apps, often marketed under terms like “nudify” or “undress,” allow users to digitally manipulate images of individuals—including celebrities—making them appear partially or fully nude. The report estimates that such applications have collectively been downloaded over 483 million times, generating approximately $122 million in revenue.
The data, supported by analytics firm AppMagic, underscores the widespread reach and profitability of these controversial tools.
Search and Discovery Mechanisms Under Fire
One of the most troubling aspects highlighted in the report is how easily users can discover these apps. Searching app stores for keywords related to nudification often surfaces the apps directly, making them simple to find and download.
In some cases, autocomplete suggestions within app store search bars even recommend similar apps, effectively guiding users toward them. Researchers argue that this raises questions about the role of platform algorithms in promoting harmful content.
Katie Paul, director of the Tech Transparency Project, criticized the platforms for not only failing to remove such apps consistently but also indirectly promoting them through search and advertising systems.
Apps Exploit Loopholes in Platform Policies
Both Apple and Google maintain strict guidelines prohibiting explicit or sexually exploitative content. Apple’s App Store rules ban overtly pornographic material, while Google Play policies explicitly prohibit apps that claim to undress individuals or objectify them.
However, the report suggests that many developers are bypassing these restrictions by presenting their apps as general-purpose image editors or AI generators. Once installed, these apps often reveal capabilities that can be misused to create inappropriate or non-consensual content.
Researchers identified at least 18 such apps on Apple’s platform and 20 on Google’s, many of which offered subscription-based models, further monetizing their controversial features.
Platform Responses and App Removals
In response to the findings, both tech giants have taken action against several flagged applications. Apple confirmed that it removed 15 apps after being alerted to their presence, while also warning additional developers to comply with its guidelines or face removal.
Google, meanwhile, stated that many of the apps mentioned in the report have been suspended from the Google Play Store for policy violations. The company also indicated that investigations into remaining apps are ongoing.
Despite these efforts, the report notes that new or modified versions of similar apps continue to appear, suggesting that enforcement remains inconsistent.
Examples Reveal Extent of Misuse
The report and subsequent investigations uncovered several troubling examples. One app, identified as a video generator, allegedly included suggestive templates that could be misused to create explicit content. Another app demonstrated the ability to digitally remove clothing from uploaded images.
In one case, an app available on Google Play allowed users to swap faces onto pre-existing video templates featuring suggestive content. Although marketed as a harmless entertainment tool, its functionality raised serious ethical concerns.
Developers of some apps have responded by removing problematic features or launching internal investigations. However, critics argue that these actions often come only after public scrutiny.
Experts Criticize ‘Opaque’ Enforcement
Experts say the issue points to deeper structural problems in how app marketplaces operate. Anne Helmond, a digital platforms researcher, described enforcement efforts as “uneven and largely opaque.”
She noted that apps framed as generic AI tools can easily pass review processes, even if they are later used for harmful purposes. Additionally, app store ranking systems often prioritize engagement, which can inadvertently boost the visibility of controversial apps.
This creates a cycle where problematic apps gain popularity, generate revenue, and remain visible despite policy violations.
Growing Global Pressure for Regulation
The proliferation of nudify apps has sparked concern among policymakers worldwide. Governments are increasingly calling for stricter oversight of digital platforms and stronger enforcement of existing rules.
In the United States, President Donald Trump recently signed the Take It Down Act, legislation that criminalizes the distribution of non-consensual sexual imagery and requires online platforms to remove it.
Similarly, the United Kingdom is preparing to introduce laws that could hold tech executives personally accountable if their platforms fail to remove such content promptly.
These developments signal a growing recognition of the risks posed by AI-based image manipulation tools and the need for stronger safeguards against their misuse.
Ethical and Social Implications of AI Misuse
Beyond regulatory concerns, the issue raises broader ethical questions about the use of artificial intelligence. Nudify apps exploit advanced AI technologies to create realistic but fabricated images, often without the consent of the individuals depicted.
Such misuse can lead to reputational damage, emotional distress, and even legal consequences for victims. As AI tools become more sophisticated, the potential for abuse is expected to increase, making it imperative for platforms to act responsibly.
The Road Ahead: Balancing Innovation and Responsibility
The controversy surrounding nudify apps highlights the challenges faced by tech companies in balancing innovation with user safety. While AI-powered tools offer immense potential, they also require careful regulation and oversight.
For Apple and Google, the task ahead involves strengthening app review processes, improving detection mechanisms, and ensuring greater transparency in enforcement actions.
As scrutiny intensifies, both companies will need to demonstrate that they can effectively uphold their policies and protect users from harmful content.