Generative AI Has
99 Problems—
and Governance
Is Many of Them
By Nancy Magoteaux
“ChatGPT write me an article on how to govern AI.”
“Sure! Here is a draft article on how to govern AI.”
Full disclaimer: that is not how I wrote this article, but
ChatGPT did help me come up with a clever title. Naturally,
that piqued my curiosity. What would ChatGPT suggest about
how to govern itself? I asked and it was happy to offer several
recommendations.
ChatGPT’s first suggestion? Define clear objectives.
Groundbreaking, right?
Let’s unpack that. What does it really mean to “govern AI”?
On occasion, a discussion of AI governance will involve mention
of guardrails, as if guardrails and governance are synonymous.
In the real world, guardrails are designed to keep vehicles from
veering off the road. They’re fixed, visible, and built with a clear
understanding of where the road ends and the danger begins.
With AI, the road is constantly shifting, and the vehicle is learning
to drive itself. So, building guardrails isn’t about setting boundaries; it’s about anticipating where those boundaries might need to be tomorrow.
Take large language models (LLMs), for example. They learn
from the information users provide and use that data to improve
future responses.1 So, one potential guardrail might be ensuring
the information you give an LLM isn’t misused to answer someone
else’s question, and the responses it generates are based on legitimate, verifiable sources.2
How would that work in practice? I don’t have a perfect
answer, and that’s part of the problem. Implementing a guardrail
that limits how LLMs use the information they receive, while also
requiring them to verify the accuracy of their outputs, isn’t just
a technical tweak. It would require a fundamental rethinking of
how these systems are trained, deployed, and governed.
Right now, most LLMs are trained on massive datasets scraped
from the internet, often with little transparency or control over
what goes in. Once trained, they don’t “remember” individual
conversations unless explicitly designed to, but they do generalize
from patterns in the data. So, if you want to prevent an LLM from
using your input to inform someone else’s output, you’d need to
either:
- Isolate user data (which limits learning and personalization), or
- Implement strict data tagging and consent protocols (which adds complexity and slows development).
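To make the second option concrete, here is a minimal sketch of what a consent-tagging rule might look like in code. The record format, field names, and function are illustrative assumptions for this article, not any vendor’s actual system: the idea is simply that each user input carries a consent flag, and the training pipeline keeps only the inputs users have agreed to share.

```python
# Illustrative sketch only: tag each user input with a consent flag,
# and let the training pipeline use only consented records.
# Names here (UserRecord, consent_to_train) are hypothetical.
from dataclasses import dataclass


@dataclass
class UserRecord:
    text: str
    consent_to_train: bool  # did the user agree to reuse of this input?


def training_corpus(records):
    """Return only the inputs users have consented to share."""
    return [r.text for r in records if r.consent_to_train]


records = [
    UserRecord("my private contract details", consent_to_train=False),
    UserRecord("a general question about zoning law", consent_to_train=True),
]
print(training_corpus(records))  # only the consented record survives
```

The rule itself is trivial; the hard part, as noted above, is the surrounding protocol — capturing consent honestly, tagging data at scale, and accepting the slower development that follows.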
Both options are feasible, but they’re not widely adopted
because they conflict with the current incentives: faster models,
cheaper training, and broader capabilities.
Verification is another challenge. LLMs don’t “know” whether
something is true. They generate plausible-sounding text based on
statistical patterns. Adding a verification layer would mean integrating external fact-checking systems, curated knowledge bases, or real-time access to vetted sources. That’s doable, but it’s expensive, and it introduces new risks. Who decides what’s “verifiable”? What happens when facts are contested?
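One way to picture such a verification layer, under the simplifying assumption that a curated knowledge base already exists, is a gate that releases a generated claim only if it matches a vetted source. The fact set and function names below are hypothetical, purely for illustration:

```python
# Illustrative sketch of a verification gate: a generated claim is
# released only if it appears in a curated set of vetted facts.
# VETTED_FACTS and answer_with_verification are hypothetical names.
VETTED_FACTS = {
    "Ohio's capital is Columbus",
}


def answer_with_verification(generated_claim: str) -> str:
    """Release a claim only if it matches the vetted source set."""
    if generated_claim in VETTED_FACTS:
        return generated_claim
    return "[unverified - withheld pending fact-check]"


print(answer_with_verification("Ohio's capital is Columbus"))
print(answer_with_verification("Ohio's capital is Cleveland"))
```

Notice that the sketch answers neither of the questions above — it just relocates them: whoever curates the vetted set is deciding what counts as “verifiable,” and contested facts either make the list or don’t.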
Even if we solved all that for ChatGPT, we’d still be left with
a fragmented ecosystem. LLMs are embedded in everything
from customer service bots to legal research tools. A meaningful
governance framework would need to apply across platforms,
industries, and jurisdictions. That’s not just a code update. It’s a
coordinated, cross-sector effort involving technologists, regulators, ethicists, and users.
8 THE REPORT | September/October 2025 | CincyBar.org