The App Store has an AI slop problem
Sep 15, 2025
A few weeks ago, a friend came to me for advice. He’d come across an app developer who was selling the rights to an AI sports app and was wondering if it was a worthwhile investment. The developer claimed that the app could analyze game or practice footage and provide the player with tips to help them improve their skills. It was powered by a popular LLM and the API costs were supposedly very reasonable.
I was immediately suspicious for all sorts of reasons, chief among them that I didn’t think there was any chance the LLM was actually trained on footage of the sport in question, and without that specific training data I doubted it could provide accurate feedback. But for the sake of doing proper due diligence for my friend, I downloaded the app anyway.
My concerns were quickly validated. The app seemed like it was probably vibe-coded. It used non-standard navigation. Onboarding choices seemed to have no impact on any of the copy. There was a clip-art mascot that was probably AI-generated. At the end of the setup process, it asked me to submit game or practice footage so it could give me an initial grade. I gave it a clip of me playing ultimate frisbee (which, to be clear, was not the sport the app was geared towards) and it rated me a 78. At that point, I hit the paywall, and didn’t bother subscribing to investigate the app further.
I had a strong feeling that the video analysis was being faked. Why would the developer bear the expense of running LLM inference on the video of a non-paying user? But just to be sure, I put my phone in airplane mode and ran through the onboarding process again. Sure enough, the app accepted my video, took the same amount of time to upload and analyze it (despite me not being connected to the internet) and again rated me a 78.
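For illustration, here’s a minimal sketch of what a faked analysis like that might look like under the hood. This is purely my speculation, not the actual app’s code: a hardcoded delay to simulate uploading and inference, then a canned score, with no network request anywhere. An app built this way behaves identically in airplane mode.

```swift
import Foundation

// Hypothetical reconstruction of a faked "analysis" step: no upload,
// no inference, just a delay and a fixed result.
func analyzeFootage(_ videoURL: URL) async -> Int {
    // Simulate a few seconds of "uploading and analyzing."
    try? await Task.sleep(nanoseconds: 4_000_000_000)
    // Every video, every user, every time: 78.
    return 78
}
```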
At this point, I’d seen enough. But just out of curiosity, I checked the developer’s App Store page to see what else they had on there. If someone were to hypothetically try to make money posing as a developer and selling vibe-coded apps to prospective entrepreneurs, they probably wouldn’t do so one app at a time. So I wasn’t surprised to find that this developer had 10 or so AI-oriented apps available.
All of their apps were launched in the last year and each of them was either unrated or had a couple 1- or 2-star reviews. There were a couple other sports apps. There was one that let you virtually try on clothes. There were a few apps to help you “improve your rizz,” your attention span, or your sexual stamina. (Two of those apps actually used the same app icon, and I’ll leave it up to your imagination to figure out which two.)
There was also an AI doctor app and an AI therapist app. And this is what made me realize there were much bigger issues at play than making sure my friend didn’t cut this developer a check for an app that doesn’t do what he claims it does.
The first issue has to do with vibe coding. I think lowering software development’s barrier to entry has a lot of benefits, but not everyone is going to use those powers for good.
The rest have to do with App Store policy. There has always been crap on the App Store, but the proliferation of LLMs has likely made this worse. And the release of Apple’s Foundation Models with iOS 26 is only going to make it even easier and cheaper for developers to integrate LLMs into their apps.
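To give a sense of how low that barrier is about to get, here’s roughly what on-device generation looks like with the Foundation Models framework Apple announced at WWDC 2025 (the function and prompt here are my own hypothetical example): a few lines of Swift, no API key, no per-token cost.

```swift
import FoundationModels

// On-device text generation with Apple's Foundation Models (iOS 26+).
// Hypothetical example: no server, no API key, no per-request cost.
func coachingTip(for summary: String) async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Give one tip to improve this player's game: \(summary)"
    )
    return response.content
}
```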
Just a few days before this, Sam Altman and OpenAI were sued by the parents of a teenager who killed himself after discussing his suicide plans with ChatGPT. What sort of liability was Apple opening itself up to by allowing shady AI health and therapy apps on the App Store? How much worse was it going to get once those apps started using Apple’s own models? When Apple claims that it deserves a 30% cut of App Store purchases, in part because its policies and review processes protect users from harm and fraud, is this really the sort of standard it’s fighting to uphold?
Coming across this trove of AI slop apps also hit close to home because an app I recently submitted to the store was rejected. I’d made a very simple astrology app that fetches a person’s sun and moon signs based on their birthday, and it was rejected for “not doing enough.” Which is sort of fair, but sort of not. Simplicity was kind of the app’s whole point. I made it because I hated the experience of googling a horoscope and then having to wade through SEO-ridden webpages to find the information I wanted. My app may not do a lot, but I’ve been using it way more than I thought I would, I wrote it using my regular human intelligence, and it’s definitely not going to harm anyone in any way.

(My silly little astrology app, which is for some reason available on the Mac but not iOS.)
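For the record, the sun-sign half of that lookup really is trivial; here’s a sketch of the date-range table (the boundaries are the commonly cited tropical-zodiac dates, which vary by a day depending on the source; moon signs take real ephemeris math, so they’re not shown).

```swift
// Each sign with the (month, day) it starts on, per common tropical dates.
let signStarts: [(month: Int, day: Int, sign: String)] = [
    (1, 20, "Aquarius"), (2, 19, "Pisces"),       (3, 21, "Aries"),
    (4, 20, "Taurus"),   (5, 21, "Gemini"),       (6, 21, "Cancer"),
    (7, 23, "Leo"),      (8, 23, "Virgo"),        (9, 23, "Libra"),
    (10, 23, "Scorpio"), (11, 22, "Sagittarius"), (12, 22, "Capricorn"),
]

func sunSign(month: Int, day: Int) -> String {
    // Take the last sign that starts on or before the birthday;
    // anything before Jan 20 wraps around to Capricorn.
    var sign = "Capricorn"
    for start in signStarts where (start.month, start.day) <= (month, day) {
        sign = start.sign
    }
    return sign
}

// sunSign(month: 9, day: 15) == "Virgo"
```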
It’s frustrating that vibe-coded AI slop health and therapy apps can exist on the App Store but my harmless little astrology app can’t. And coming off the Apple Intelligence debacle, it’s troubling to spot another AI-related Apple controversy brewing. I’m worried that while we’re all busy arguing about the merits of Liquid Glass, vibe coding and LLMs are about to dramatically alter the nature of the software that’s available on the App Store.
In the meantime, I filed a report with Apple to investigate the app developer in question. And I plan to resubmit my astrology app with a new feature based on Apple’s Foundation Models soon. I’m sure this time it’ll get approved.