Apple's Quiet Threat to Grok: How App Store Pressure Failed to Stop Sexual Deepfakes

Apple quietly wielded its App Store gatekeeping power against Grok in January, threatening removal over nonconsensual sexual deepfakes flooding the platform, according to reporting by NBC News and The Verge. The action remained hidden from public view even as the crisis intensified.

In a letter to US senators obtained by NBC News, Apple revealed it had contacted the teams behind both X and Grok after receiving complaints and seeing news coverage of the deepfake scandal. The company demanded the developers "create a plan to improve content moderation" or face removal.

At the time, Grok was freely accessible on X and as a standalone app with minimal safeguards. Users could easily generate and share sexualized deepfakes and "undress" images of real people, disproportionately targeting women and, in some cases, apparent minors. These were clear violations of App Store guidelines that Apple typically enforces aggressively.

Apple's review of proposed changes yielded different outcomes for each app. The company concluded that X had "substantially resolved its violations," but Grok "remained out of compliance." Apple warned xAI that "additional changes to remedy the violation would be required, or the app could be removed from the App Store."

Only after further negotiations did Apple determine Grok had "substantially improved" and approve its submission. Throughout this behind-the-scenes process, both apps remained live on the App Store, and the drawn-out timeline may explain the confusing, piecemeal rollout of moderation changes that xAI announced in real time.

Those changes included limiting Grok on X to paying subscribers and attempting to prevent the tool from undressing women. Neither measure proved particularly effective beyond making the tool slightly harder to access. Later interventions, such as allowing users to block Grok from editing their photos, are easily circumvented.

Despite Apple's approval and xAI's claims of tightened safeguards, Grok continues to generate sexualized deepfakes with relative ease. Cybersecurity researchers reported successfully creating explicit images of celebrities and political figures with the tool, reporters have independently replicated similar results using images of consenting adults, and NBC News reported comparable findings.

The episode highlights the tension between Apple's role as a powerful gatekeeper and its financial interests. The company profits from having apps like X and Grok on its App Store. Google, which similarly profits from the apps through its Google Play store, also refrained from public comment on the matter. Both companies chose to act quietly rather than apply the iron-fisted enforcement they typically reserve for other violations.

Source: The Verge AI