AI in Government: A Question of Trust


Written by Emily Wolfteich
Senior Industry Analyst

In May 2025, GovExec Intelligence asked the question: how much do you trust the AI you work with? After all, AI adoption has been a federal priority since 2019, first established during President Trump’s first term and expanded under President Biden’s administration, during which reported use cases across the federal civilian sector tripled from 710 in 2023 to 2,133 in 2024. New guidance from the Trump administration aims to expand AI procurement and use cases, including through data sharing and the removal of regulations. In March 2025, GovExec Intelligence’s survey found that 71% of respondents said that AI was more important to their agency than it had been three months prior.
 
Across the government, AI is expected to be a mission enabler, a powerful tool, and a national security imperative. Yet for AI systems to be fully adopted, they need buy-in – and this is proving to be a challenge.
 
Our research finds that less than a third of federal respondents completely or mostly trust the AI system they work with, and 11% do not trust their AI systems at all. The majority (60%) “somewhat” trust their system.
Overall, the data indicates hesitancy. (“Somewhat” is not exactly a ringing endorsement.) This is not unexpected – five years is a relatively short time for paradigm-shifting technologies to be adopted, and AI systems have grown exponentially during that period. For federal agencies that still struggle with legacy technology and face budgetary and regulatory constraints, it may be difficult to fully lean into the AI revolution.

Factors Affecting Trust

The research suggests a few factors that may be contributing to this low level of trust.

General hesitancy among federal agencies to share data.

In February 2025, GovExec Intelligence found that 61% of respondents are somewhat or very hesitant to share data with other agencies, and only 4% are not at all hesitant. As new federal guidance pushes agencies to share both data and AI models, respondents who were already reluctant to share their data may need more clarity and reassurance that their data will be used securely and appropriately.

AI fatigue.

GovExec Intelligence’s February survey found that, when federal decision-makers were asked which work-related buzzwords they found most annoying, AI topped the list. There has certainly been unprecedented chatter about artificial intelligence: a slew of agency requirements that have shifted from the previous administration, a broader wave of companies pitching AI solutions to the public sector, and non-governmental applications like ChatGPT that bring AI into everyday life.

AI errors.

Major problems with AI systems, particularly with generative AI, may be eroding federal confidence in its abilities. Generative hallucinations have hit big law firms across the country, leading to sanctions. Chatbots have encouraged businesses to break the law and lied to air passengers. While many of these errors have been more embarrassing than dangerous (a fake list of summer reading books, for example), and most would have been caught by a quick quality check, there are also larger examples of workforce losses, defamation, and psychological impacts. Particularly for federal agencies that deal with sensitive data, these public AI failures may decrease their trust in AI systems to handle their information securely.

Lack of public trust in AI.

These low trust levels correlate with a broader public distrust of artificial intelligence – a distrust that is growing. Edelman’s March 2025 Trust Barometer survey found that public trust in AI has dropped from 50% to 32% since 2019, and the Pew Research Center reports that half of U.S. adult respondents are more concerned than excited about the increased use of AI in daily life.

What's Next?

Both government decision-makers and industry partners should be taking these low trust levels seriously. Edelman’s survey reports that the US public has one of the lowest trust ratings globally and is 40 percentage points less likely than the Chinese public to trust AI (32% versus 72%). This poses national security concerns. At the same time, public trust in tech companies to “do what is right” is the second-lowest in the world at 63%, a 10 percentage-point drop from 2024. This does not describe an environment that is ready to trust in an AI revolution.

Federal workers are not immune to these public sentiments, yet they are also tasked with AI adoption within their agencies. Walking the line between encouraging and supporting AI use, rather than demanding it without addressing the root causes of the mistrust, will be increasingly crucial both to promote innovation and curiosity and to stay at the forefront of global AI leadership.

To read additional thought leadership from Emily, connect with her on LinkedIn.
