Trust or Suspicion? How We See AI Today

What the Pew survey reveals about AI’s limits and society’s expectations.

Artificial intelligence is no longer science fiction. But as with all powerful tools, people’s feelings about it are mixed — hopeful, wary, curious, even conflicted. A new Pew Research Center survey of 5,023 U.S. adults (June 2025) illuminates exactly how Americans perceive AI’s impact on human abilities, society, and their own lives.

As marketers, understanding these perceptions matters: it shapes trust, informs how we communicate about AI-enabled products/services, and offers clues about what kinds of uses people will accept (or push back against).

What stands out is not just what people are excited or worried about. It is the why. And if you read between the lines, the concern is not only fear of the unknown. It is a deep worry about fairness, authenticity, and control.

People are comfortable with AI until it gets personal

According to the survey, Americans are relatively open to AI when it is used for large-scale, data-heavy tasks: weather forecasting (74 percent), financial fraud detection (70 percent), and government benefits fraud detection (70 percent). These are high-volume, pattern-matching problems where AI feels additive rather than invasive.

But the moment AI moves into areas where human bias, values, or relationships come into play (hiring, matchmaking, spiritual advice), support drops off sharply. Two-thirds do not want AI involved in romantic decisions. Nearly three-quarters say no to AI giving faith-based guidance.

Marketplace recently examined how job seekers suffer from automated application systems (AAS) in an already difficult job market. These systems can reproduce bias, eliminate human judgment, and strip away context. They often act not as assistants but as gatekeepers that amplify unfairness and frustration. Many applicants feel dehumanized when their qualifications are passed over without real human review, especially when a system fails to understand nuance or background.

This observation lines up with Pew’s findings. Many people are cautious not simply because AI is powerful, but because AI in these personal, high-stakes situations feels incapable of fairness. It removes subtle judgment, it ignores context, and it can misjudge in ways a human might not.

This is not just about the what. It is about the perceived fairness of the process.

Problem-solving is not the same as critical thinking

This nuance is easy to overlook but important. Many of the tasks where people support AI use involve problem-solving: well-defined tasks with a clear objective and a verifiable right answer, such as identifying fraudulent transactions.

People are far more resistant when tasks involve critical thinking: weighing competing values, navigating ambiguity, or interpreting emotion. These scenarios cannot be reduced to right-or-wrong binaries without risking unfair or reductive outcomes.

And if we are honest, people probably have not articulated this distinction consciously. But it feels wrong when a machine evaluates your suitability for a job, your compatibility with a partner, or the “authenticity” of your faith journey.

As we design, market, or evaluate AI tools, the question should not be only “can AI do it?” but “is this a situation where human critical thinking matters more than computational precision?”

For marketers, this data offers both caution and clarity

A) AI in hiring or HR scenarios

From a brand trust perspective, using AI in employment screening, résumé parsing, or interview analysis is a reputational risk. Unless you are investing deeply in explainability, fairness auditing, and human oversight, it is likely not worth the gamble.

People want fairness, not just efficient outcomes.

Marketers promoting AI-powered HR tech should lead with transparency, emphasize human checks, and avoid “hands-off automation” language. Trust is earned through clarity and consent.

B) AI in content creation: authenticity matters

People still prefer original human content when it comes from a voice they trust. They will buy a course, read a book, or follow someone because of their insight and personality. An AI avatar as the main presenter often feels off, unless it supports existing human content.

Ray Dalio’s AI avatar is interesting because it does exactly that: it supplements content and clarifies ideas, but it does not replace the human source.

When you design content or training, lean into human storytelling, personal touch, and transparency about what was AI-assisted.

The trust equation is shifting

A full 76 percent of Americans say it is important to know whether content was made by AI or a human. At the same time, 53 percent admit they are not confident they can tell the difference.

That is a problem. But it is also an opportunity.

The brands that win in this next wave will not just be fast adopters of AI. They will be clear communicators. They will disclose how AI is used, what it does not do, and where the human role begins and ends.

This moment does not call for more automation. It calls for more stewardship.

5 Things You Can Do Based on This Post:

  1. Audit AI touchpoints in your products and campaigns. Identify where automation is visible to customers and where transparency is needed.

  2. Lead with disclosure. Clearly label AI-assisted content or recommendations. Show how humans remain in the loop.

  3. Avoid “black box” messaging. Explain how AI tools work in plain language and provide channels for feedback or appeal.

  4. Prioritize authenticity in content. Use AI to supplement rather than replace human voices, especially in learning or thought leadership.

  5. Run fairness checks. Especially in HR, hiring, or customer-facing tools, invest in bias audits and communicate those efforts to build trust.

Final thought: AI should not only be powerful, it should be principled

Yes, people are using AI. Yes, they will continue to. But adoption is not the full story. Trust is.

If you are a marketer, product owner, or content strategist, now is the time to draw the line between augmentation and replacement, between fair and opaque, between helpful and hollow.

And if you are not sure where that line is, ask your audience. They will tell you.

Blogs, Transcripts, and Show Links

Find more insights like this on the podcast and Enterprising Minds blog!

Share the Love!

If you know anyone who could benefit from this content, please share it!