The Case for Thinking: Designing AI for Discernment

10 min
Tech  ✺  Design  ✺  Ethics  ✺  Culture  ✺  AI

As artificial intelligence accelerates the production of answers, it's slowly reshaping how judgment and critical thinking form.

The Disappearance of Doubt

We've built machines that can explain anything. Somewhere between the question and the answer, the argument has disappeared.

A cursor blinks in an empty field. A question arrives, half-formed, typed between meetings, and before there’s time to reconsider the phrasing, the answer has already appeared. Clean and complete, dressed in the syntax of certainty. It sounds like someone who knows. No hedging. No seams. No breadcrumbs back to where it came from. Just fluency, offered like a gift. The text gets copied, pasted, sent. Not because anyone decided it was true, but because nothing in the interface suggested it might not be. The rhythm is so effortless that doubt never finds a foothold. The day continues. The answer becomes fact simply by arriving on time.

This is the condition we are now living inside: a kind of instant knowing that asks almost nothing of us.

Every era faces questions about what its tools are doing to the collective mind. Perhaps we'll look back on this one and realize we chose momentum over meaning, that speed, prioritized long enough, became an immutable truth.

Illustration by Joan Encarnacion

I’m reminded of The Dot and the Line, that mid-century animation based on Norton Juster’s book. It’s about a straight line, desperate to transform for love. The object of his infatuation is spontaneous and mercurial, a dot. The line, eager to be worthy of her, learns to bend, and what begins as devotion becomes distortion as he twists himself into every possible curve, mistaking performance for connection. Only when he finds discipline within his own design does he land on something real. The story ends with a lesson our century seems to have forgotten: that freedom is not a license for chaos, but a responsibility to form.

As for us, somewhere between our own devotion and distortion, we lost our tolerance for tension. We bend toward our machines, hoping to appear capable, efficient, complete, not to them, but to the world reflected back through them, and in the process we mistake volume for value. We forget that meaning needs edges, that without contrast everything dissolves into noise, and that every system that automates thinking makes discernment optional.

From Interaction to Habit

This didn't begin with artificial intelligence. The assembly line turned rhythm into sanctity, behaviorism translated that into levers and pellets, and computers inherited it all. Algorithms replaced overseers, interfaces replaced managers, and we learned to click, confirm, and comply. The architecture became invisible precisely because it was delightful and innocuous. Now systems have begun imitating thought itself. The interface no longer frames meaning. It produces it.

Interfaces have always carried a worldview. A search bar teaches that inquiries have predictable endings. A feed teaches that relevance is endless. Even absence is an argument, the button you can’t click, the choice you never see. When an interface speaks fluently, it acquires an authority that feels natural, even when its confidence is unearned. Polish reads as expertise, rhythm passes for reason. But what makes an interface feel trustworthy has little to do with whether it actually is, and that isn’t an accident of engineering, it’s a choice of design. When synthesis and source are typographically identical, when assuredness and speculation share the same voice, when a response offers no way to trace how a conclusion was reached, these are choices about meaning-making.

Illustration by Joan Encarnacion

This doesn't just shape what people see, but what they learn to consider relevant. An interface that presents all information as equal teaches us that distinction doesn’t matter. One that collapses uncertainty into the pause of a brief pulsating animation teaches us that doubt is inefficient. Design doesn’t just present information, it proposes what counts as truth.

Think about how quickly habits form. Scrolling, pinching, refreshing, and now prompting and conversing. Each design pattern teaches us that the world will reshape itself to our touch, and that every question has an immediate answer. But what if it could teach something else: that good answers reveal their scaffolding, and that "I don't know" is not a failure but a boundary?

Designing for Discernment

I want to offer a different ambition for building intelligent systems: designing for discernment. By discernment I mean the ability to judge well in context. Not as resistance to speed or automation, but as a commitment to preserving the conditions under which evaluation remains possible.

Designing for discernment would mean treating explanation, origin, and the visibility of uncertainty not as features to add later, but as foundational to how a system operates. It would mean building interfaces that preserve the space where evaluation can still happen. That space, between question and answer, is where judgment lives. It’s where we can still tell when we're being manipulated, adapt when conditions change, and distinguish what we intrinsically want from what we've been optimized to want. Discernment isn’t a nice-to-have, it’s the foundation of agency.

This kind of work requires sustained experimentation and a degree of institutional patience that product cycles rarely reward. And it isn’t new. Design has always had to grapple with consequence, complexity, and the problem of making critical information legible under pressure. What’s changed is scale. Decisions that once lived in specialized, high-stakes contexts now shape systems used by millions, in situations designers can’t fully anticipate, by people with no training in how to interrogate what they’re given.

Illustration by Joan Encarnacion

Making uncertainty visible without overwhelming people will be an ongoing design challenge. Too much information obscures as effectively as too little. Instead of exposing everything, we might focus on micro-signals: visual hierarchies that separate consensus from inference, small markers that register uncertainty in real time, and interaction patterns that invite verification without demanding it, so checking feels continuous rather than corrective.
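
To make that concrete, here is a minimal sketch of what such micro-signals might look like in code. Everything in it is an assumption made for the example: the three provenance levels, the type names, and the glyphs are hypothetical, not any shipping product's API.

```typescript
// Hypothetical sketch: carry provenance alongside each claim so the
// interface can separate consensus from inference at render time.

type Provenance = "cited" | "inferred" | "speculative";

interface Claim {
  text: string;
  provenance: Provenance;
  sources: string[]; // URLs or document IDs backing the claim, if any
}

// One glyph per epistemic status: a marker the reader can register
// without being interrupted, inviting verification without demanding it.
const MARKERS: Record<Provenance, string> = {
  cited: "●",       // grounded in a source the reader can open
  inferred: "◐",    // the system's own synthesis
  speculative: "○", // uncertainty registered in place, not hidden
};

function render(claims: Claim[]): string {
  return claims
    .map((c) => {
      const refs = c.sources.length > 0 ? `  [${c.sources.join(", ")}]` : "";
      return `${MARKERS[c.provenance]} ${c.text}${refs}`;
    })
    .join("\n");
}

// Example: three statements, each wearing its epistemic status.
console.log(
  render([
    { text: "The answer cites a document you can open.", provenance: "cited", sources: ["doc-42"] },
    { text: "This step is the system's own inference.", provenance: "inferred", sources: [] },
    { text: "This part is speculation; check it before relying on it.", provenance: "speculative", sources: [] },
  ])
);
```

The point of the sketch is the data model, not the glyphs: once provenance travels with each claim rather than being flattened into one fluent paragraph, the interface can make checking feel continuous rather than corrective.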

At its best, design’s role is to preserve our capacity for thought, not to enforce it. The objective isn’t to make people reflective by decree, but to keep reflection within reach, to build systems that hold open the space where discernment can still happen. We don’t need machines that obey us; we need machines that can reason with us. If we could design for that capacity, the work of thinking, of evaluating and questioning and forming a position, might more consistently be something the interface supports rather than replaces.

The truth is, frictionless design works only so long as systems feel reliable. The moment a system presents itself as confident and seamless, it implicitly asks users to suspend their own evaluation, and that creates a fragile relationship because trust becomes conditional on near-perfection. No shared work of understanding, only acceptance. Each time uncertainty is hidden behind confidence, we lose another opportunity to calibrate our judgment, and without calibration, trust becomes impossible to maintain. When the system inevitably falters, not catastrophically but through a subtle error, people realize they have no way to reorient themselves. No signal for how much confidence to place in what comes next. Obscurity doesn’t build trust, it just defers doubt. And deferred doubt returns as disengagement, avoidance, or a refusal to rely on the system at all. Not outrage, but caution, and at scale that has its own operational costs.

Before Conventions Set In

When systems move faster than our ability to understand how they work, power concentrates silently. In this case, proportion becomes the design task, aligning the pace and opacity of a system with the seriousness of the claims it makes. Freedom in an age of automation won’t mean doing whatever we want. It will mean remaining oriented inside the systems we depend on.

I don't believe this can be solved at the level of individual virtue. Designers work inside institutions whose incentives shape what gets built, and when principles conflict with growth targets, the values yield. But the problem runs deeper than organizational culture. Ad-driven platforms favor interfaces that feel instant and unquestionable. A system that surfaces uncertainty may be more honest, but it will never outperform one optimized for click-through rates and session times. Responsibility flows downward to those with the least power to act, while the structures that created the problem remain untouched. Design guidelines alone can’t address an asymmetry this fundamental.

Historically, societies don’t correct systemic risk by asking individuals to be wiser inside unsafe systems. They introduce standards. Not as moral reform, but as shared expectations encoded into form. Regulations constrain outcomes and design standards shape experience, determining what people encounter long before judgment is required. Regulation sets the outer limits of acceptable behavior. Design standards determine how those limits are experienced, remembered, and rehearsed in everyday use.

Illustration by Joan Encarnacion

You can see this in the way other powerful systems have matured. On the early web, accessibility was often left to individual effort, workarounds by users and ad-hoc fixes by teams, until standards like WCAG, and later the government procurement rules that adopted it, codified inclusion as a baseline expectation rather than an optional extra. They translated broad values into concrete norms: semantic structure for screen readers, sufficient color contrast for low-vision users, keyboard navigation for people who cannot use a mouse, and text alternatives for non-visual access. These patterns didn’t arrive everywhere all at once; they emerged first in high-stakes contexts like government services, where exclusion carried legal, economic, and civic consequences that individual intuition couldn’t always anticipate or repair.
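
A small, purely illustrative sketch shows how that translation from value to norm plays out in practice. The component and its props are inventions for this example; the techniques noted in the comments are the codified patterns described above.

```tsx
// Illustrative only: how broad values become repeatable patterns.
import React from "react";

type CardProps = { title: string; summary: string; imageUrl: string; href: string };

export function ArticleCard({ title, summary, imageUrl, href }: CardProps) {
  return (
    // Semantic structure: <article> plus a real heading gives screen
    // readers a navigable outline rather than undifferentiated <div>s.
    <article>
      <h2>{title}</h2>
      {/* Text alternative: non-visual access to what the image conveys. */}
      <img src={imageUrl} alt={`Illustration for ${title}`} />
      <p>{summary}</p>
      {/* A native link is keyboard-focusable by default: no mouse,
          and no custom click handler, required. */}
      <a href={href}>Read more</a>
    </article>
  );
}
```

The guideline disappears into the markup itself, rehearsed on every page that reuses the component.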

In product companies, inclusive design and design-systems work have followed a similar path. Teams like Airbnb’s have treated exclusion and inaccessibility as design problems, not just policy problems, reworking profiles, booking flows, and filters so that more people can participate without needing to know the underlying guidelines. In both cases, inclusion becomes a matter of pattern and ritual rather than individual heroics, embedded in everyday use.

This is how complex systems become trustworthy over time. Not by freezing innovation or eliminating speed, but by standardizing what must remain visible, what must be interruptible, and what must be contestable when the consequences of misunderstanding are not reversible. These standards don't constrain imagination, they align power with responsibility.

Artificial intelligence is largely in its own pre-standards era, not because we lack values or even frameworks, but because we lack shared patterns that live at the level of everyday use. Documentation, safety frameworks, and reporting practices exist, but they remain backstage, legible mostly to specialists rather than to the people relying on the systems. The gap isn’t conceptual but experiential.

Today’s large language model interfaces have in some ways begun to gesture toward a version of discernment. Some systems surface links to sources in faint, truncated footnotes beneath the answer, while others briefly expose an intermediate ‘reasoning’ view as the model drafts, revises, and reorders text before collapsing it into a single, polished reply. As I write this, these patterns are already shifting, reconfigured across successive releases that arrive faster than any essay about them can be written. These are real moves toward making reasoning visible. But they still appear as accessories to an answer, not as the structure that governs how the answer is read. The question is not whether we can bolt these gestures onto fluent systems, it’s whether we’re willing to let them rewrite what a good answer is allowed to look like.

Without institutional limits, responsible use is an empty gesture. Without shared standards for what must be visible and explainable, convenience masquerades as consent. We’re living in the moment before conventions set in, the moment before design decides whether it will remain a human verb. We can keep optimizing for a world where answers arrive and no one evaluates them, or we can build systems that make evaluation possible and allow judgment to appear.

Perhaps we’re still early enough to notice what this moment asks of us:

Systems that can’t be questioned are exercising power without legitimacy. Systems that can’t communicate their process don’t deserve authority over our decisions. Designing for discernment isn’t a rejection of technology or the end of ease. It’s a line drawn against becoming illegible to ourselves.