Why Trust Matters in AI, and Why We Still Don’t Have It
In AI, “trust” shouldn’t be a slogan; it should be a property of the system. Yet even today’s most capable large language models can produce confident but incorrect answers, misinterpret constraints, or generate details that were never provided. These behaviors are often called hallucinations, but that word can make them sound mysterious or rare. They’re not. They are a direct consequence of how these models work: they generate plausible continuations, not verified facts.
One promising direction the field is exploring is semiformal verification for AI systems. The idea is simple: instead of trying to fully prove a neural model correct (which is usually infeasible), we specify what the model is supposed to do and then systematically check whether its outputs line up with those expectations. In practice, that can mean structured prompts, repeated runs to test consistency, or rule-based checks over the model's outputs.
This sits between two extremes:
informal “it seems to work” demos,
full formal verification, as practiced for safety-critical software.
Semiformal approaches aim for a middle ground: practical assurance. They don’t guarantee perfection, but they make model behavior more observable, more testable, and therefore more accountable.
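To make that concrete, here is a minimal sketch of what "rule-based checks plus repeated runs" can look like. The requirement names, the generate() stub, and its canned output are illustrative placeholders rather than any particular model API; the point is only that expectations are written down as executable predicates and evaluated over multiple generations.

```python
from collections import Counter

# Rule-based requirements: each entry is a named predicate over the model's raw
# output. The names and thresholds here are illustrative, not a standard.
REQUIREMENTS = {
    "is_json_object": lambda out: out.strip().startswith("{") and out.strip().endswith("}"),
    "no_invented_urls": lambda out: "http://" not in out and "https://" not in out,
    "stays_under_length_budget": lambda out: len(out) <= 2000,
}


def generate(prompt: str) -> str:
    """Stand-in for the model under test; replace with a real API call."""
    # Canned output so the sketch runs end to end without a model.
    return '{"answer": "Paris", "confidence": "high"}'


def check_once(prompt: str) -> dict:
    """One generation, evaluated against every requirement."""
    output = generate(prompt)
    return {name: bool(pred(output)) for name, pred in REQUIREMENTS.items()}


def check_consistency(prompt: str, runs: int = 5) -> Counter:
    """Repeat the same prompt and count how often each requirement holds.

    With a real, nondeterministic model, a requirement that passes only
    sometimes is a signal to investigate, not ignore.
    """
    tally = Counter()
    for _ in range(runs):
        for name, passed in check_once(prompt).items():
            tally[name] += int(passed)
    return tally


if __name__ == "__main__":
    print(check_consistency("What is the capital of France? Reply as a JSON object."))
```

Nothing here proves the model correct. It just turns vague expectations into checks that can pass, fail, or pass only sometimes, which is exactly the kind of observable signal the middle ground is meant to produce.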
Why is that important? Because in most other branches of engineering, we don’t ship things on vibes. We write requirements, we test against them, and we make failures detectable. AI needs the same mindset. That means:
defining requirements up front,
checking model behavior against those requirements under different conditions (see the sketch after this list),
and treating unexplained variability as something to investigate, not ignore.
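As a sketch of the last two points, the snippet below asks the same underlying question under different phrasings, repeats each one, and flags disagreement among the normalised answers. The prompts, the generate() stub, and its wording-sensitive canned answers are hypothetical, chosen only so the example runs and shows a divergence being flagged.

```python
from collections import Counter


def generate(prompt: str) -> str:
    """Stand-in for the model under test; replace with a real API call."""
    # Fake, wording-sensitive answers so the sketch shows a divergence being flagged.
    return "Paris" if "capital" in prompt.lower() else "Versailles"


# The same underlying question asked under different conditions (paraphrases).
# These prompts are illustrative only.
CONDITIONS = [
    "What is the capital of France?",
    "France's capital city is which city? Answer with the city name only.",
    "Name the seat of government of France.",
]


def normalise(answer: str) -> str:
    """Crude normalisation so trivially different surface forms count as agreement."""
    return answer.strip().lower().rstrip(".")


def answers_across_conditions(conditions: list[str], runs_per_condition: int = 3) -> Counter:
    """Collect normalised answers over every condition and repeated run."""
    answers = Counter()
    for prompt in conditions:
        for _ in range(runs_per_condition):
            answers[normalise(generate(prompt))] += 1
    return answers


if __name__ == "__main__":
    answers = answers_across_conditions(CONDITIONS)
    print(answers)
    if len(answers) > 1:
        # More than one distinct answer to the same question is unexplained
        # variability: per the requirement above, investigate it, don't ignore it.
        print("FLAG: inconsistent answers across conditions:", dict(answers))
```

In a real evaluation the conditions would be drawn from the written requirements, and a flag like this would open an investigation rather than being averaged away.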
We don’t have fully trustworthy AI yet, not because it’s impossible, but because we haven’t finished building the verification, evaluation, and accountability layers around these models. Trust won’t come only from bigger models. It will come from better engineering.

