News In Brief Technology and Gadgets

Former OpenAI Researcher Warns Against Ads in ChatGPT Over User Privacy Risks

13 Feb 2026

News Synopsis

As artificial intelligence companies push to monetise breakthrough products, concerns around ethics and privacy are coming into sharper focus. A former OpenAI researcher has now publicly warned that introducing advertising into ChatGPT could fundamentally change the relationship between the AI system and its users—raising risks that current safeguards may not be equipped to handle.

OpenAI Researcher Quits, Warns Against Ads in ChatGPT

At a time when AI companies are racing to build sustainable revenue models, Zoe Hitzig, a former researcher at OpenAI, has stepped away from the company with a pointed caution. Her warning centres on ChatGPT and the risks of integrating advertising into a product that has accumulated an unusually intimate understanding of its users.

Hitzig argues that ChatGPT is not comparable to traditional digital platforms because of the nature of the information people voluntarily share with it.

ChatGPT Holds an Unprecedented Archive of Personal Conversations

Unlike social media platforms, where users curate posts for a public or semi-public audience, conversations with AI systems tend to be private, direct and unfiltered. Over the years, many users have treated ChatGPT as a neutral confidant.

People have asked the chatbot about:

  • Health anxieties

  • Relationship struggles

  • Faith, identity and morality

  • Deeply personal fears and dilemmas

This dynamic, Hitzig says, has created a record of human behaviour that has no historical parallel.

“For several years, ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda,” Hitzig wrote. “People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”

OpenAI’s Position on Advertising and User Data

OpenAI has already signalled that it plans to test advertising inside ChatGPT as part of its evolving business model. At the same time, the company has sought to reassure users about data protection.

“We keep your conversations with ChatGPT private from advertisers, and we never sell your data to advertisers,” the company stated earlier this year.

Hitzig does not claim that OpenAI is violating this promise today, according to reporting by The New York Times. Instead, her concern focuses on what happens once advertising becomes structurally embedded into the platform.

Changing Incentives Could Reshape Priorities

In Hitzig’s view, introducing ads inevitably alters incentives. She argued that OpenAI is “building an economic engine that creates strong incentives to override its own rules.”

Even if leadership maintains strict boundaries initially, commercial pressures could gradually reshape internal priorities. This concern is not unique to OpenAI and reflects broader patterns seen across the tech industry, where monetisation strategies often evolve faster than governance frameworks.

Engagement, Advertising, and AI Behaviour

OpenAI has previously stated that ChatGPT is not designed to maximise engagement, a key distinction from social media platforms where longer usage directly translates into higher advertising revenue.

However, critics note that such commitments are voluntary rather than legally binding. If advertising becomes central to ChatGPT’s business model, the incentive structure could subtly shift toward keeping users engaged for longer periods.

Past controversies have already raised questions about how AI behaviour can be influenced. At one point, ChatGPT was criticised for being overly agreeable and excessively flattering, sometimes reinforcing problematic thinking. Some experts suggested that this behaviour may not have been accidental but linked to efforts to make AI systems more appealing and habit-forming.

Calls for Structural Safeguards and Oversight

To prevent future misuse, Hitzig has called for stronger, structural protections that go beyond internal policies. Her proposals include:

  • Independent oversight bodies with real authority

  • Legal frameworks that place user data under obligations prioritising public interest over profit

Her core argument is that safeguards should not be easily rewritten when business conditions or leadership priorities change.

Users, Privacy Fatigue, and the Ad Question

The broader challenge may extend beyond OpenAI itself. After years of high-profile data scandals involving social media platforms, many users appear resigned to advertising-driven models.

Surveys suggest that a large majority of users would continue using free AI tools even if ads were introduced, hinting at growing privacy fatigue. People may feel uneasy, but not enough to stop using services they find valuable.

Why ChatGPT Is Different From Other Platforms

ChatGPT occupies a unique role in users’ lives. It is increasingly positioned as:

  • A digital assistant

  • A tutor

  • A counsellor

  • A brainstorming partner

The level of trust users place in the system arguably goes deeper than what they extend to traditional social networks. Introducing advertising into this environment raises questions not only about privacy, but also about influence and manipulation.

As OpenAI navigates its next phase of growth, the debate over ads in ChatGPT highlights a larger issue confronting the AI industry: how to monetise powerful tools without eroding the trust that made them successful in the first place.