Uploading your face to a random app feels categorically different from downloading a game or signing up for a newsletter. Your face is biometric data — it can identify you, it can be used to generate fake content, and once it's in a database somewhere, you have no real way to take it back. So before you tap Upload, it's worth understanding what actually happens on the other side of that button.
AI portrait apps need your photos to generate personalised results — that part is unavoidable. What's not unavoidable is storing your face indefinitely, using it to train AI models, or sharing it with third parties. The difference is architectural, and the best apps make it explicit. Look for three things before uploading: a clear retention policy, an explicit no-training guarantee, and in-app deletion. Cherry was built with all three from day one.
Why they need your photos at all
The whole point of an AI portrait app is that the output looks like you — not a generic AI face. To achieve that, the app needs to anchor the generation to your specific features: your face shape, your eyes, the particular geometry of how everything sits together.
There are two technically different ways to do this, and the distinction matters enormously from a privacy standpoint.
Method 1: Training (the riskier approach)
Some apps — Lensa's Magic Avatars being the most prominent example — use a per-user fine-tuning approach. When you upload your 10 to 20 photos, the app doesn't just use them as input; it actually trains a small AI model on your face. That model learns to reproduce your likeness, and then the app uses that trained model to generate your portraits.
The problem: that trained model is a compressed mathematical representation of your face. It lives on the app's servers. Even if they delete your original photos afterward, the model may still encode information derived from you. The training process is also why these apps take 20 to 60 minutes — you're not waiting for generation, you're waiting for your face to be learned.
Method 2: Inference (the architecturally cleaner approach)
Other apps use a single shared model pre-trained on a large general dataset. Your photos are passed to this model as inputs at the moment of generation — the model uses them to guide the output, produces the portrait, and then your photos are no longer part of the process. No per-user model is created. Nothing about you is baked into a model that sits on a server.
This is the approach Cherry uses. Your five reference photos are input, not training material. They guide the generation, the portrait is created in seconds, and your photos remain in your private encrypted storage — not inside a model.
The one-sentence version: Training means part of you lives in a model on their server indefinitely. Inference means you were a guest input, not a permanent resident.
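To make the difference concrete, here is a deliberately simplified Python sketch. None of it is Cherry's or any other app's actual code; the function names, the data, and the "model" are hypothetical stand-ins for real training and generation pipelines. The only point it makes is where a face-derived artifact ends up in each flow.

```python
# Hypothetical illustration only -- not any real app's code or model.
# It shows what remains on the server under each architecture.

from dataclasses import dataclass, field

@dataclass
class UploadedPhotos:
    """The reference photos a user uploads."""
    files: list = field(default_factory=list)

def training_approach(photos: UploadedPhotos):
    """Method 1: fine-tune a per-user model on the uploaded face.

    The returned 'model' stands in for fine-tuned weights. In a real
    system those weights are derived from your face and typically
    persist on the provider's servers after your portraits are done.
    """
    per_user_model = {"learned_from": list(photos.files)}  # face-derived artifact
    portraits = [f"portrait_{i} (from per-user model)" for i in range(4)]
    return portraits, per_user_model

def inference_approach(photos: UploadedPhotos):
    """Method 2: pass the photos to a shared, pre-trained model at generation time.

    The photos guide a single generation call; nothing derived from them
    is written back into the model, so no per-user artifact remains.
    """
    portraits = [f"portrait_{i} (guided by {len(photos.files)} reference photos)" for i in range(4)]
    return portraits

photos = UploadedPhotos(files=["ref1.jpg", "ref2.jpg", "ref3.jpg", "ref4.jpg", "ref5.jpg"])

_, leftover = training_approach(photos)   # something encoding your face stays behind
print("training leaves behind:", leftover)

portraits = inference_approach(photos)    # only the portraits come out
print("inference leaves behind: nothing but the portraits")
```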
What an AI photo app should never do
There's a spectrum of behavior here, and most legitimate apps fall somewhere in the middle. But a few practices should be absolute dealbreakers.
1. Using your photos to train shared models without consent
Your selfies should never be fed into a model that then serves other users. If an app's privacy policy says anything like "we may use your content to improve our services" without explicitly carving out biometric photos, that's a red flag. A clause that lets your face improve their product for every other user, without an explicit opt-in, is exactly the kind of language that has triggered class-action lawsuits under Illinois BIPA.
2. Retaining your photos indefinitely
There's no legitimate reason for an AI portrait app to keep your reference photos on their servers after you've generated your portraits. The generation is done. The photos served their purpose. Any app that retains photos "until you request deletion" with no stated time limit is keeping data it no longer needs — and creating risk it shouldn't be carrying on your behalf.
3. Sharing your photos with third parties
No AI portrait app has a legitimate reason to share your face photos with advertisers, data brokers, or partner companies. If an app's privacy policy doesn't explicitly prohibit third-party sharing of user-uploaded photos, you should treat it as if they do share. The absence of a prohibition is not the same as a guarantee.
4. Generating content of you without your active request
Your reference photos should only produce portraits when you tap a style and request a generation. There is no scenario in which an AI portrait app should be running background generations, using your face in their own marketing materials, or producing derivative content outside of your explicit interaction with the app.
How to actually read an AI app's privacy policy
Nobody reads privacy policies. Companies know this and write them accordingly. Here's the shortcut: you only need to find answers to four questions, and a rough keyword scan (sketched after the list) can do the first pass for you.
How long do they keep my photos? Look for "retention period," "deletion," or "how long we keep your data." If the answer is "until you request deletion" with no stated maximum, that's indefinite storage with extra steps.
Do they use my photos to train AI? Search for "train," "improve our models," "machine learning," and "content." A privacy-respecting app will explicitly carve out biometric photos from any training clause. "We may use your content to improve our services" is not a carve-out — it's the opposite.
Do they share my photos with anyone? Look for "third parties," "partners," "affiliates," and "service providers." Sharing with cloud storage infrastructure providers is acceptable — that's how hosting works. Sharing with data partners or advertising networks is not.
Can I delete everything? There should be a clear mechanism — ideally in-app — to delete your photos and generated portraits. "Contact us to request deletion" is weaker than a delete button you can tap in 30 seconds.
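The four checks above reduce to a keyword scan you can run before committing to a full read. The sketch below is a rough illustration, not a legal tool: the phrase lists are examples rather than exhaustive, privacy_policy.txt is a placeholder for whatever text you paste in, and any hit or miss still needs a human read for context.

```python
# Rough first-pass scan of a privacy policy against the four questions above.
# Phrase lists are illustrative, not exhaustive; a careful read is still the real test.

CHECKS = {
    "retention": ["retention period", "how long we keep", "deleted automatically", "days after"],
    "training":  ["train", "improve our models", "machine learning", "improve our services"],
    "sharing":   ["third parties", "partners", "affiliates", "advertis"],
    "deletion":  ["delete your photos", "delete your data", "request deletion", "in-app"],
}

def scan_policy(text: str) -> None:
    lowered = text.lower()
    for question, phrases in CHECKS.items():
        hits = [p for p in phrases if p in lowered]
        if hits:
            print(f"{question:>9}: mentioned ({', '.join(hits)}) -- read those sections closely")
        else:
            print(f"{question:>9}: no mention found -- treat the silence as a red flag")

# "privacy_policy.txt" is a placeholder: save the policy text there, or paste it in directly.
with open("privacy_policy.txt", encoding="utf-8") as f:
    scan_policy(f.read())
```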
The laws that actually protect you
Depending on where you live, three laws are most relevant to how an AI photo app handles your face data.
Illinois BIPA (Biometric Information Privacy Act) is the strongest biometric privacy law in the United States. It requires explicit written consent before collecting biometric identifiers — which can include facial geometry derived from photos — prohibits selling biometric data, and mandates destruction schedules. Companies that violate BIPA face statutory damages of $1,000 to $5,000 per violation, which is why several AI photo apps have faced class-action suits under it.
CCPA (California Consumer Privacy Act) gives California residents the right to know what personal data is collected about them, the right to delete it, and the right to opt out of its sale. Photos qualify as personal data, and biometric data receives heightened protection as "sensitive personal information" under the CPRA amendments that took effect in 2023.
GDPR applies if you're in Europe, or if the app is designed to serve European users. Under GDPR, biometric data is a "special category" requiring explicit consent for processing. Apps that want European users have to meet this standard, and fines can reach 4% of global annual revenue, which is enough to make compliance non-optional for any serious app.
Cherry was built to comply with all three. Not because we were forced to, but because these regulations describe what reasonable behavior looks like — and we agreed with them.
The honest reason we built it this way
I'm writing this as Cherry's founder, so the conflict of interest is obvious: I want you to trust Cherry over apps with looser practices. But the argument here doesn't require you to use Cherry — it just requires you to ask the right questions before uploading your face anywhere.
We chose inference over per-user training early in development. It was the cleaner architecture from a privacy standpoint — and it turned out to also be faster and better for the user experience. Sometimes the right call is also the product-improving call.
The broader point: your face is among the most personal things you can hand over to an app. The fact that most people do it without reading a single line of the privacy policy isn't a signal that it doesn't matter. It's a signal that the defaults have been made easy enough that people don't stop to ask.
Ask anyway.
No training. No third-party sharing. GDPR, CCPA, and BIPA compliant. In-app deletion. Available on iPhone now, Android coming soon.
Download Cherry on iOS