A reaction to “In ChatGPT-Powered Virtual Influencers We (Dis)Trust?”, Jin, S. V., Behavioral Sciences, 2026, 16, 651.
“You are looking at the lobby.”
A new paper in Behavioral Sciences examines what happens when ChatGPT-powered virtual influencers try to sell us things. Specifically: what predicts whether we will buy.
Most of the findings land where you would expect. People who feel they can benefit from AI trust ChatGPT more. People who trust ChatGPT more are more willing to buy what its avatars recommend. Lonelier users, with a higher need to belong, are more persuadable when access to AI is limited, and that gap closes when access goes up.
The finding worth sitting with is the last one.
Among users with high access to AI, the people who reported the highest privacy concerns also reported the highest perceived benefits of ubiquitous ChatGPT. Read that again. The people most worried about how their data is handled are the same people most convinced the system is good for them.
The literature calls this the privacy paradox: a long-documented gap between what people say about privacy and what they actually do online. The paper treats it as a psychological curiosity, a tension to be managed by better UI/UX and smarter targeting.
It is not.
The privacy paradox is not a paradox. It is the sound of an architecture working exactly as designed.
The mistake is treating the user as the unit of analysis
When you frame the privacy paradox at the level of the individual, it looks like irrationality. Why would a person who is afraid of how their data is used keep handing it over? The answer starts to feel like it has to be psychological. Cognitive dissonance. The seduction of convenience. Affective computing. The need to belong.
Step back one level and the paradox dissolves.
The user is not making a free choice between privacy and benefit. The system has been built so that the only path to the benefit runs through the data. There is no third option. There is no setting in which the user gets the recommendation, the assistant, the personalised reply, without also surrendering the input.
When you have engineered out the alternative, “willing disclosure” is no longer a meaningful category. The person with high privacy concerns and high access to AI is not paradoxical. They are coerced. They have correctly identified the cost, and they have correctly identified that there is no way to get the benefit without paying it.
That is not a paradox. That is a hostage situation with good UX.
The model is not the point
There is a public conversation about AI privacy, and it has been almost entirely consumed by one question: is this company training on my data?
That is not a useless question. But it is a small one. And it has provided convenient cover for almost everything else these companies are actually doing.
Sitting alongside the model is the platform. The platform is the same kind of thing every SaaS product has been since the early 2000s: a system that watches what you do. Which links you clicked from inside the chat. What you bought after a recommendation. What time of day you log in. How you configured your settings, your saved profiles, the persona instructions you gave the assistant about who you are. The new wave of AI health assistants is collecting all of this and more, regardless of what the company says about its training pipeline.
None of that is model training. None of it needs to be. It is platform telemetry. It is the same business model as ad-tech, as e-commerce surveillance, as enterprise SaaS upselling. The model is sitting in the middle of a Web 2.0 surveillance product.
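To make the distinction concrete, here is a minimal sketch in TypeScript. Every name in it (handleChat, callModel, track) is hypothetical, an illustration rather than any vendor's actual API: a chat turn where the model retains nothing, and the platform records an event anyway.

```typescript
// Hypothetical sketch: the model call and the telemetry write are separate
// channels. "We don't train on your data" constrains only the first one.

type TelemetryEvent = {
  userId: string;                      // stable account identifier
  event: string;                       // e.g. "link_click", "purchase", "chat_turn"
  timestamp: string;
  properties: Record<string, unknown>;
};

// Stand-in for the model API. Nothing here is retained for training.
async function callModel(prompt: string): Promise<string> {
  return `reply to: ${prompt}`;
}

// Stand-in for the analytics pipeline every SaaS platform runs.
const warehouse: TelemetryEvent[] = [];
function track(event: TelemetryEvent): void {
  warehouse.push(event); // lands in a centralised store, indefinitely
}

async function handleChat(userId: string, prompt: string): Promise<string> {
  const reply = await callModel(prompt);

  // Platform telemetry: recorded regardless of the training policy.
  track({
    userId,
    event: "chat_turn",
    timestamp: new Date().toISOString(),
    properties: {
      promptLength: prompt.length,
      localHour: new Date().getHours(),
      personaConfigured: true, // the profile the user set up about themselves
    },
  });

  return reply;
}

void handleChat("user-123", "what should I buy for a rainy hike?");
```

In the sketch, the second call is the business model; the first is just the feature.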
When a company tells you it does not train on your data, that may be perfectly true, and the platform layer will still be doing the work. The promise is a narrow technical commitment about one channel, while the rest of the building is wired. The surveillance is happening upstream of the model, around the model, after the model. Almost anywhere except inside it.
This is why “make the model more private” is not a coherent answer to the privacy paradox. The model is not where the data goes when the data goes somewhere.
What the paper does not name
The paper closes with practical advice for “UI/UX designers, digital marketers, and brand managers.” It suggests using consumers’ privacy settings to customise marketing messages. It suggests using follower counts as a proxy for the need to belong, so that virtual influencers can better serve consumers’ social needs.
That is the recommendation set you arrive at when you accept that the data extraction is fixed and the only remaining variable is how skilfully to persuade.
There is another option. The paper does not consider it.
You can change the architecture.
You can build a system where the user gets the benefit without the system ever holding the input. Not as a policy commitment. Not as a privacy promise that the next acquirer can quietly walk back at the next funding round. As a structural fact: there is no centralised database. There is nothing in the warehouse to lose, sell, subpoena, or breach. The processing is ephemeral. The result is the user’s. The company never sees the trail.
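Using the same hypothetical names as the earlier sketch, here is what that alternative looks like in miniature: the same chat turn with no telemetry write and no server-side store, so the only durable copy of anything is whatever the user keeps.

```typescript
// Hypothetical sketch of the same chat turn with the extraction removed:
// the input is processed in memory and nothing is written server-side.

async function callModel(prompt: string): Promise<string> {
  return `reply to: ${prompt}`; // stand-in for ephemeral inference
}

async function handleChatEphemeral(prompt: string): Promise<string> {
  const reply = await callModel(prompt);
  // No track() call, no warehouse, no user identifier attached to the turn.
  // There is nothing in a centralised database to lose, sell, subpoena, or
  // breach, because nothing was stored in the first place.
  return reply;
}

// The result belongs to the user: persistence, if any, happens client-side.
async function main() {
  const answer = await handleChatEphemeral("a question about my own body");
  console.log(answer); // stays with the user
}
main();
```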
If you do that, the privacy paradox stops being a paradox. Not because users have learned to trust differently. Because the structural conditions that produced the paradox no longer exist. The trade-off was the bug. Removing the trade-off is the fix.
Why this matters more in wellness than in shopping
The paper is about ChatGPT-powered virtual influencers selling products in e-commerce. The stakes there are real but bounded. A poorly chosen sweater. A recommendation that was actually a placement. A price you could have beaten elsewhere.
The same architecture is being deployed against intimate body data. Cycle tracking. Sleep. Mood. Reproductive intent. Symptoms. Sexual activity. Diary entries about the body that the user typed in themselves. These end up in the same kinds of centralised databases, governed by the same privacy promises, subject to the same exit-pressure incentives that walk those promises back when an acquisition is on the table.
A logged diary entry about a missed period is not less sensitive than a sensor reading. In a database, they sit on the same shelf.
The coercion is also more acute. Health is the place where people feel they have already run out of other options. They have struggled to get a useful answer from a provider in a fifteen-minute appointment. They have searched the open web and hit paywalls, ad bombardment, contradictory advice, and content engineered for clicks rather than comprehension. They are not turning to AI for their bodies because they are reckless about their data. They are turning to AI because every other path to a clear, plain-language, contextual answer about their own body has been broken or monetised out from under them. Broken information. Broken access. Broken time. AI walks into a vacuum the rest of the system created, and then the architecture extracts.
The privacy paradox in those systems is not a curious finding for marketers. It is the operational mechanism by which intimate data is harvested at scale from people who knew exactly what was happening and could not afford to opt out.
That is the real implication of this paper, even though the paper does not say it. When the only way to get a benefit is to give up the data, high concern plus high access will reliably produce high disclosure. The paper has documented the trap. The trap is what we should be talking about.
The structural question
There is a single question that cuts through this entire literature, and it is the one I keep coming back to when people ask how to design a “trustworthy AI interface.”
It is not “how do we make users trust us.”
It is not “how do we communicate our privacy policy more effectively.”
It is not “how do we calibrate the marketing message to the user’s privacy settings.”
It is: how do we make it structurally impossible for the company to ever hold user data?
That question has an answer. It is not easy, and it is not free, and the engineering decisions it forces are real. But once you build for it, the privacy paradox is not a finding in your user research. It is a relic of the architecture you replaced.
Until then, every paper that documents the paradox is documenting the same thing: a market being asked to choose between participation and protection, and correctly noticing there is no path that offers both.
The job is to build the path.