The Integrity of the Future

by Emily Wolfteich – Senior Industry Analyst

How are we teaching AI to shape our future?

Maybe I asked it the wrong question, but what interests me about the answer I got back is that it doesn’t say much about the integrity of the data the system learns from. The volume, yes – processing enormous amounts of sometimes conflicting data and being asked to form logical pathways and conclusions from it can lead to mistakes or unpredictability. But that speaks to the processing mechanism. What about the data itself?

The AI Gold Rush

This type of investment is important. It’s expensive to develop the natural language models that AI relies on – some investors estimate around $500 million – and to power the computing that allows those systems to learn from the data. A key component of this investment, however, must be funding rigorous analysis to ensure data quality.

Without a rich, contextual, and accurate data fertilizer, what kind of flowers will we be growing?

Quality versus Quantity

1 – Accuracy and Quality

More data exists now than ever before, and the growth of Internet of Things (IoT) devices, 5G, and cloud computing means that volume is expanding exponentially. With this avalanche of data, the likelihood of inaccurate or bad data also grows – and when analyzed at scale, small mistakes become big problems, as the back-of-the-envelope sketch after this list suggests.

2 – Enterprise-wide Integration

3 – Location Intelligence

4 – Data Enrichment
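
To make the scale point from item 1 concrete, here is a minimal, hypothetical sketch in Python. The 0.1% defect rate and the bad_record_estimate helper are illustrative assumptions, not figures from any real dataset; the arithmetic is the point.

```python
# Hypothetical illustration: a "small" error rate compounds with data volume.
def bad_record_estimate(total_records: int, error_rate: float) -> int:
    """Rough count of flawed records expected at a given error rate."""
    return round(total_records * error_rate)

ERROR_RATE = 0.001  # an assumed 0.1% defect rate, for illustration only

for volume in (10_000, 10_000_000, 10_000_000_000):
    print(f"{volume:>14,} records -> ~{bad_record_estimate(volume, ERROR_RATE):,} bad records")
```

At ten thousand records, that error rate is a rounding error; at ten billion, it is ten million flawed inputs feeding every downstream model.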

Fertilizing “1,000 Flowers”

Imagine if you were asked to describe trees, but were only given information about trees that grow in Florida. You could accurately and in detail describe the taxonomy, appearance, uses and origins of all the trees that fall under that dataset. But what would be missing? What would you not know? And, importantly, how would you identify what it is that you don’t know?

If you were only ever asked about trees in Florida, of course, your knowledge would be more than sufficient. But if the questions range further, conclusions drawn from an incomplete data set will miss the mark – and you may not even realize by how much.
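
As a toy sketch of that blind spot – hypothetical data and a deliberately naive “model,” not anything from a real system – consider a lookup trained only on Florida trees. It answers every query by mapping it to the nearest thing it has seen, with no signal that the answer is out of its depth.

```python
import difflib

# Hypothetical "training data": the model's entire worldview is Florida.
FLORIDA_TREES = {
    "sabal palm":   "palm, warm climate, sandy soil",
    "live oak":     "broadleaf evergreen, warm climate",
    "bald cypress": "deciduous conifer, wetland",
}

def describe(query: str) -> str:
    """Answer confidently with the closest known tree, even when the query is far outside the data."""
    match = difflib.get_close_matches(query, FLORIDA_TREES, n=1, cutoff=0.0)[0]
    return f"sounds like {match}: {FLORIDA_TREES[match]}"

for tree in ("live oak", "sugar maple", "sitka spruce"):
    print(f"{tree:>12} -> {describe(tree)}")
```

The point is not the toy code but the failure mode: nothing in the system distinguishes “I know this” from “this merely resembles something I know,” which is exactly the gap a biased training set leaves behind.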

This is one of the biggest problems facing AI and ML developers. These systems are learning from the worldview that we are providing to them. How do we know where our own blinders are? How do we ensure that our own biases are not becoming the baseline of the decision-making of the future?

Silicon Valley’s model is “move fast and break things,” but we cannot afford to let that cavalier attitude build the language of the future. The models, programs, and applications that come out in the next few years are likely to become the building blocks of what we all use going forward, from governments to businesses to high school students. We will be using them to hire people, to communicate with each other, to make funding decisions, write opinions, triage organ recipients, determine how likely incarcerated people are to re-offend, and estimate threat levels from our adversaries. If we do not act now to ensure that these models learn and train from quality data – data that is an accurate and contextual reflection of what our world looks like – we will not only replicate inequity and discrimination but enshrine them.

