Why AI Photo Apps Need Your Selfies — And What They Should Never Do With Them

Uploading your face to a random app feels categorically different from downloading a game or signing up for a newsletter. Your face is biometric data — it can identify you, it can be used to generate fake content, and once it's in a database somewhere, you have no real way to take it back. So before you tap Upload, it's worth understanding what actually happens on the other side of that button.

TL;DR

AI portrait apps need your photos to generate personalized results — that part is unavoidable. What's not unavoidable is storing your face indefinitely, using it to train AI models, or sharing it with third parties. The difference is architectural, and the best apps make it explicit. Look for three things before uploading: a clear retention policy, an explicit no-training guarantee, and in-app deletion. Cherry was built with all three from day one.

Why they need your photos at all

The whole point of an AI portrait app is that the output looks like you — not a generic AI face. To achieve that, the app needs to anchor the generation to your specific features: your face shape, your eyes, the particular geometry of how everything sits together.

There are two technically different ways to do this, and the distinction matters enormously from a privacy standpoint.

Method 1: Training (the riskier approach)

Some apps — Lensa's Magic Avatars being the most prominent example — use a per-user fine-tuning approach. When you upload your 10 to 20 photos, the app doesn't just use them as input; it actually trains a small AI model on your face. That model learns to reproduce your likeness, and then the app uses that trained model to generate your portraits.

The problem: that trained model is a compressed mathematical representation of your face. It lives on the app's servers. Even if they delete your original photos afterward, the model may still encode information derived from you. The training process is also why these apps take 20 to 60 minutes — you're not waiting for generation, you're waiting for your face to be learned.

Method 2: Inference (the architecturally cleaner approach)

Other apps use a single shared model pre-trained on a large general dataset. Your photos are passed to this model as inputs at the moment of generation — the model uses them to guide the output, produces the portrait, and then your photos are no longer part of the process. No per-user model is created. Nothing about you is baked into a model that sits on a server.

This is the approach Cherry uses. Your five reference photos are input, not training material. They guide the generation, the portrait is created in seconds, and your photos remain in your private encrypted storage — not inside a model.

The one-sentence version: Training means part of you lives in a model on their server indefinitely. Inference means you were a guest input, not a permanent resident.
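The distinction can be made concrete with a toy sketch. This is not any real app's code — the function names and data structures are purely illustrative — but it shows the thing that matters: what is left on the server after one user's portraits are generated under each approach.

```python
def training_flow(photos):
    """Per-user fine-tuning: a model derived from the user's face is
    created and kept in server-side state."""
    # Stand-in for fine-tuned model weights learned from the photos.
    personal_model = {"derived_from": list(photos)}
    portraits = ["portrait generated by the per-user model"]
    del photos  # the originals can be deleted afterward...
    # ...but the user-derived model persists on the server.
    return portraits, {"per_user_model": personal_model}


def inference_flow(photos, shared_model="pretrained-general-model"):
    """Inference: the photos condition a shared, pre-trained model at
    generation time; no user-specific model is created."""
    portraits = [
        f"portrait conditioned on {len(photos)} photos by {shared_model}"
    ]
    # Nothing derived from this user is added to server-side model state.
    return portraits, {"per_user_model": None}
```

Running both with the same inputs makes the asymmetry visible: after `training_flow`, the server state still contains a model derived from the user even though the photos were deleted; after `inference_flow`, it contains nothing user-specific at all.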

What an AI photo app should never do

There's a spectrum of behavior here, and most legitimate apps fall somewhere in the middle. But a few practices should be absolute dealbreakers.

1. Using your photos to train shared models without consent

Your selfies should never be fed into a model that then serves other users. If an app's privacy policy says anything like "we may use your content to improve our services" without explicitly carving out biometric photos, that's a red flag. Your face improving their product for the benefit of all users — without explicit opt-in — is the kind of clause that has triggered class-action lawsuits under Illinois BIPA.

2. Retaining your photos indefinitely

There's no legitimate reason for an AI portrait app to keep your reference photos on their servers after you've generated your portraits. The generation is done. The photos served their purpose. Any app that retains photos "until you request deletion" with no stated time limit is keeping data it no longer needs — and creating risk it shouldn't be carrying on your behalf.

3. Sharing your photos with third parties

No AI portrait app has a legitimate reason to share your face photos with advertisers, data brokers, or partner companies. If an app's privacy policy doesn't explicitly prohibit third-party sharing of user-uploaded photos, you should treat it as if it does share. The absence of a prohibition is not the same as a guarantee.

4. Generating content of you without your active request

Your reference photos should only produce portraits when you tap a style and request a generation. There is no scenario in which an AI portrait app should be running background generations, using your face in their own marketing materials, or producing derivative content outside of your explicit interaction with the app.

How to actually read an AI app's privacy policy

Nobody reads privacy policies. Companies know this and write them accordingly. Here's the shortcut: you only need to find answers to four questions.

How long do they keep my photos? Look for "retention period," "deletion," or "how long we keep your data." If the answer is "until you request deletion" with no stated maximum, that's indefinite storage with extra steps.

Do they use my photos to train AI? Search for "train," "improve our models," "machine learning," and "content." A privacy-respecting app will explicitly carve out biometric photos from any training clause. "We may use your content to improve our services" is not a carve-out — it's the opposite.

Do they share my photos with anyone? Look for "third parties," "partners," "affiliates," and "service providers." Sharing with cloud storage infrastructure providers is acceptable — that's how hosting works. Sharing with data partners or advertising networks is not.

Can I delete everything? There should be a clear mechanism — ideally in-app — to delete your photos and generated portraits. "Contact us to request deletion" is weaker than a delete button you can tap in 30 seconds.
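If you'd rather not skim a policy by hand, the four-question checklist above can be roughed out as a naive keyword scan. This is a hypothetical helper, not a legal tool — the keyword lists are illustrative and plain substring matching will miss rephrasings — but it tells you which sections of a policy deserve a close read.

```python
# Illustrative keyword lists for the four questions discussed above.
CHECKS = {
    "retention": ["retention period", "how long we keep", "deletion"],
    "training": ["train", "improve our", "machine learning"],
    "sharing": ["third parties", "partners", "affiliates", "data brokers"],
    "delete_mechanism": ["delete your", "request deletion", "in-app"],
}


def flag_policy(policy_text):
    """Return, per topic, which keywords appear in the policy text,
    so you know where to read closely."""
    text = policy_text.lower()
    return {
        topic: [kw for kw in keywords if kw in text]
        for topic, keywords in CHECKS.items()
    }
```

For example, feeding in the classic red-flag sentence "We may use your content to improve our services and share it with partners" flags both the training and the sharing topics while showing no retention language at all — exactly the combination the checklist says to worry about.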

Cherry's answer to all four questions

No training. No third-party sharing. In-app deletion. Your photos stay only as long as you want them.

Try Cherry on iOS

The laws that actually protect you

If you're in the US and you use an AI photo app, three laws are most relevant to how your face data is handled.

Illinois BIPA (Biometric Information Privacy Act) is the strongest biometric privacy law in the United States. It requires explicit written consent before collecting biometric identifiers — which can include facial geometry derived from photos — prohibits selling biometric data, and mandates destruction schedules. Companies that violate BIPA face statutory damages of $1,000 per negligent violation and up to $5,000 per intentional or reckless one, which is why several AI photo apps have faced class-action suits under it.

CCPA (California Consumer Privacy Act) gives California residents the right to know what personal data is collected about them, the right to delete it, and the right to opt out of its sale. Photos qualify as personal data, and biometric data receives heightened protection as "sensitive personal information" under the CPRA amendments that took effect in 2023.

GDPR applies if you're in Europe, or if the app is designed to serve European users. Under GDPR, biometric data is a "special category" requiring explicit consent for processing. Apps that want European users have to meet this standard — and the fines for violations are significant enough that most serious apps take it seriously.

Cherry was built to comply with all three. Not because we were forced to, but because these regulations describe what reasonable behavior looks like — and we agreed with them.

The honest reason we built it this way

I'm writing this as Cherry's founder, so the conflict of interest is obvious: I want you to trust Cherry over apps with looser practices. But the argument here doesn't require you to use Cherry — it just requires you to ask the right questions before uploading your face anywhere.

We chose inference over per-user training early in development. It was the cleaner architecture from a privacy standpoint — and it turned out to also be faster and better for the user experience. Sometimes the right call is also the product-improving call.

The broader point: your face is among the most personal things you can hand over to an app. The fact that most people do it without reading a single line of the privacy policy isn't a signal that it doesn't matter. It's a signal that the defaults have been made easy enough that people don't stop to ask.

Ask anyway.

An AI portrait app built with privacy by design

No training. No third-party sharing. GDPR, CCPA, and BIPA compliant. In-app deletion. Available on iPhone now, Android coming soon.

Download Cherry on iOS

Frequently asked questions

Do AI portrait apps store your photos?
Most do, for at least as long as it takes to generate results. Some store them indefinitely unless you manually request deletion. Look for an explicit retention period, in-app deletion, and a no-training guarantee before uploading. Cherry stores your photos only as long as you choose to keep them — and you can delete everything from inside the app at any time.
What's the difference between inference and training?
Inference means your photos are passed to a pre-existing model as inputs at the moment of generation — the model uses them to condition the output, then they're no longer needed. Training means your photos are used to update or fine-tune a model, which now contains a representation of your face. Inference is architecturally cleaner from a privacy standpoint. Cherry uses inference only.
Are AI photo apps collecting biometric data?
Under laws like Illinois BIPA and aspects of GDPR and CCPA, facial geometry data extracted from photos can qualify as biometric data subject to heightened protection. Apps that train models on your face are most likely to trigger these protections. Always check whether an app explicitly claims BIPA or GDPR compliance before uploading.
Can AI apps sell your selfies?
Technically yes, if their privacy policy permits it and you agreed by accepting their terms. Reputable AI portrait apps explicitly prohibit selling user photos to third parties. Always check the third-party sharing section of any AI app's privacy policy before uploading your face.
Does Cherry train AI models on your photos?
No. Cherry uses your reference photos only as inference input to generate portraits — they are never used to train, fine-tune, or improve any AI model. Your photos are stored in private encrypted servers in the United States and can be deleted at any time from inside the app. Cherry is explicitly compliant with GDPR, CCPA, and the Illinois Biometric Information Privacy Act (BIPA).
