The Quote That Stopped Me

“Americans have always adopted technologies they distrust. They scroll platforms they believe are damaging. They hand personal data to companies they suspect will misuse it. The pattern isn’t confusion. It’s resignation. And the industry should find that more alarming than resistance.”

That line, from an analysis published on Implicator in March 2026, stayed in my head for days. Because it describes exactly what I feel — and I bet you do too.

I use AI every day. I write about AI. I build workflows with AI. And yet, I verify almost everything AI tells me. Not out of philosophical principle — out of accumulated experience with confident errors that almost caused me real problems.

900 million people use ChatGPT weekly. Yet if a survey asked each of them "do you trust the results?", most would say no: 76% of Americans say they rarely or only sometimes trust AI-generated information.

And here's the most striking part: that number is worse than last year's. Usage went up. Trust went down. The two are moving in opposite directions.

The Numbers Silicon Valley Doesn’t Want to Hear

The Quinnipiac University poll, published March 30, 2026, is devastating for the industry’s optimistic narrative. With 1,397 American adults surveyed by phone, margin of error ±3.3 points:

51% of Americans use AI for research — a 14-point jump from April 2025 (37%). Data analysis rose from 17% to 27%. Image generation from 16% to 24%. Only 27% say they’ve never used AI.

76% trust AI rarely or sometimes. Only 21% trust it “most of the time” or “almost always.” And according to YouGov, just 5% trust it “a lot.”

55% say AI will do more harm than good in their lives — an 11-point increase since April 2025. This isn’t statistical noise. It’s a trend with momentum.

70% expect AI to cause job losses. But only 30% fear for their own jobs (up from 21% in 2024). As Quinnipiac’s Professor Tamilla Triantoro observed: “Americans are more worried about what AI may do to the labor market than about what it may do to their own jobs.”

65% oppose building AI data centers in their communities.

And Verasight’s survey of 2,000 adults completes the picture: 56% report anxiety about AI’s rise. Only 42% express excitement. 37% of non-users cite distrust as their reason for avoiding AI.

Gen Z — the generation most familiar with AI — is the most pessimistic about the job market. Usage and optimism are moving in opposite directions.
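A quick note on that "margin of error ±3.3 points" figure, because it's worth seeing where numbers like that come from. Here's a back-of-the-envelope sketch: the simple textbook formula below does not include Quinnipiac's weighting adjustments (the so-called design effect), which is why the published figure is larger than the raw calculation.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Simple 95% margin of error for a proportion, in percentage points.

    Uses the worst case p = 0.5. Real polls report a larger figure
    because weighting inflates the variance (the design effect).
    """
    return z * math.sqrt(p * (1 - p) / n) * 100

# Quinnipiac's sample: 1,397 adults
simple_moe = margin_of_error(1397)
print(f"unweighted MOE: ±{simple_moe:.1f} points")

# The published ±3.3 therefore implies a design effect of roughly
# (3.3 / 2.6)^2 ≈ 1.6 — a typical inflation for a weighted phone survey.
```

In other words, the poll's stated precision already accounts for its weighting, so the gaps above (14 points, 11 points) are far too large to be sampling noise.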

Sam Altman’s Wrong Diagnosis

Sam Altman, OpenAI’s CEO, reacted to similar data by saying he’d “love to have better marketing.” For him, people simply don’t understand the incredible things AI can do.

I deeply disagree. And the research backs me up.

As one analyst wrote: “The industry frames this as a communication failure — people would trust AI more if they understood it better. This view is wrong.”

If low trust were a knowledge problem, usage and trust would move together. But they’re moving in opposite directions. More Americans are using AI more often — and more of them are arriving at negative assessments of its net impact.

This isn’t ignorance. It’s experience.

People are using AI. They're finding hallucinations in research. They're seeing outputs that look correct but aren't reliable. They're dealing with AI slop flooding the internet. They're seeing deepfakes of people they know. They're getting "customer service" replies from poorly disguised bots.

Public skepticism doesn’t come from lack of information. It comes from direct contact with the product’s failures.

The Corporate Productivity Myth

And in the corporate world, the story isn’t much different. Companies are investing billions, but the return is still a massive question mark.

A Sonar survey of over 1,100 developers found that 96% don’t fully trust that AI-generated code is functionally correct. 61% agree AI often produces code that “looks correct but isn’t reliable.” And 45% say debugging AI code now takes more time than writing it themselves.

The data point that struck me most: the ADP survey of 39,000 workers (which I mentioned in the Token Anxiety post) found that daily AI users are four times more likely to say they're not as productive as they could be. AI automated the tasks that made people feel productive, without necessarily making them more productive.

The gap between promise and delivery is uncomfortable.

Trust? Marketing promises "your source of truth." Reality delivers 76% public distrust.

Productivity? Marketing promises "save 10 hours a week." Reality shows many companies haven't seen measurable gains.

AI's nature? Marketing promises "logical intelligence." Reality delivers non-deterministic behavior and hallucinations.

Environmental impact? Marketing promises "a solution for the climate." Reality delivers record energy consumption and strained water supplies.

What’s Really Happening

After deep research, I think the use-trust paradox reveals something deeper than any poll captures:

People aren’t confused. They’re resigned. They use AI because it’s useful despite being unreliable. Just as they use social media they know is harmful. Just as they hand data to companies they know will misuse it. It’s pragmatic adoption without moral endorsement.

The industry confuses adoption with endorsement. 900 million weekly ChatGPT users is impressive. But if 76% of them don’t trust the product, that’s a glass ceiling for the entire industry. Enterprise contracts, favorable regulation, and market expansion depend on trust — not just usage.

The problem is product, not marketing. You can't rebrand your way out of a trust deficit. If AI keeps hallucinating, if it automates mediocrity, if the social and environmental cost is high, no brilliant ad campaign will change perception. As Quinnipiac noted: over 1,500 AI-related bills were introduced in state legislatures in 2026. Regulation is coming — not because politicians are anti-tech, but because the public is demanding it.

The Part That Gives Me Hope

Not everything is dark. Anthropic, with $30 billion in enterprise revenue, built trust through transparency (244-page system cards, decision not to release Mythos). C2PA is creating provenance standards. Observability tools are improving. And the developer community is becoming more mature about limitations.

Trust is built with product — not promises. Every hallucination prevented by a guardrail, every mandatory citation that anchors a response in evidence, every time the system says “I don’t know” instead of inventing — that’s what builds trust. Brick by brick. Not with keynotes.

Conclusion: Adoption Without Trust Is a Sandcastle

Sam Altman isn’t naive. He knows the product has flaws. The question is whether he truly believes marketing will fix it — or whether he’s waiting for us, the users, to simply accept the “new normal.”

I refuse to accept it. Not because I’m anti-AI — I’m clearly pro-AI, and this entire blog is evidence of that. But because accepting hallucinations and distrust as normal is lowering the bar for a technology that can be much better.

2026’s AI is incredibly useful. And incredibly untrustworthy. And the only way to fix that isn’t better marketing. It’s better product.

Share your perspective:

900 million use it. 76% don’t trust it. The paradox isn’t confusion — it’s experience. And experience is saying something the industry needs to hear.

