The Framework

Human in Control

A design and governance principle for AI systems where humans retain real control — not just on paper.

“The competitive advantage of the AI era is not maximum automation, but maximum control with minimum friction.”

Human-in-the-Loop is not enough

Many organizations believe that having a human somewhere in the process means they have control. But there is a fundamental difference between being in the loop and being in control.

Human-in-the-Loop

  • Human is a node in the process
  • Often reduced to an “approve” button
  • Control is symbolic, not real
  • Automation sets the pace
  • No time to understand or evaluate

Human in Control

  • Human has real authority
  • Can stop, change, override
  • Understands what the system does and why
  • Sets the pace for critical choices
  • Retains responsibility and insight

Four foundational principles

A framework for AI systems where humans retain real control — built on principles that acknowledge the limits of automation.

01

Responsibility Cannot Be Automated

Responsibility requires intention, understanding, and the ability to be held accountable. Machines can execute actions, but they cannot bear responsibility. When something goes wrong with an automated decision, it is still a human who must answer.

Implications:

  • Legal accountability remains with humans, regardless of system sophistication
  • Ethical choices require human judgment and contextual understanding
  • Accountability must be designed into systems, not delegated to them

02

Autonomy Requires Proportional Control

The more a system can do on its own, the stronger the control mechanisms must be. Increased autonomy without increased control is not efficiency — it is risk.

Implications:

  • Critical decisions require human approval
  • Automation of irreversible actions requires additional safety layers
  • Control levels must scale with consequence magnitude (see the sketch below)
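
A minimal sketch of proportional control, in Python. The names here (Consequence, execute_with_proportional_control, the approval callback) are illustrative assumptions, not part of the framework: low-consequence actions run on their own, higher-consequence ones wait for a named human, and irreversible ones get an additional safety layer.

```python
from enum import Enum


class Consequence(Enum):
    LOW = 1           # easily reversible, limited impact
    HIGH = 2          # significant impact, harder to reverse
    IRREVERSIBLE = 3  # cannot be undone once executed


def execute_with_proportional_control(action, consequence, request_human_approval):
    """Run `action` only once the control level matching its consequence is met.

    `request_human_approval` is whatever channel the organization already uses
    (console prompt, ticket, four-eyes review); it returns the approver's
    identity, or None if approval was not given.
    """
    if consequence is Consequence.LOW:
        return action()  # automation may set the pace for low-stakes work

    # HIGH and IRREVERSIBLE actions wait for a human; the human sets the pace.
    approver = request_human_approval(action, consequence)
    if approver is None:
        raise PermissionError("No human approval; action not executed")

    if consequence is Consequence.IRREVERSIBLE:
        # Additional safety layer: a second, distinct approver.
        second = request_human_approval(action, consequence)
        if second is None or second == approver:
            raise PermissionError("Irreversible action needs a second, distinct approver")

    return action()


# Example: a low-consequence action runs without waiting for anyone.
execute_with_proportional_control(
    action=lambda: print("report regenerated"),
    consequence=Consequence.LOW,
    request_human_approval=lambda action, consequence: None,
)
```

The point of the extra layer for irreversible actions is that once consequences can no longer be undone, the control mechanism, not the automation, sets the pace.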

03

Explainability Is a Prerequisite for Trust

Trust in systems we do not understand is not trust — it is blind faith. Real control requires that we can understand why a system does what it does.

Implications:

  • Black-box systems are unacceptable for critical decisions
  • Humans must be able to understand and challenge system logic (a sketch follows this list)
  • Explainability is not a nice-to-have — it is a requirement
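
One way to make system logic something a human can understand and challenge is to ship every recommendation with its inputs and reasons. The record below is a hypothetical sketch, not a specific product or library, and the example values are invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class ExplainedDecision:
    """A recommendation that carries its own justification."""
    recommendation: str   # what the system proposes
    inputs: dict          # the data the proposal was based on
    reasons: list         # human-readable factors, strongest first
    confidence: float     # the system's own uncertainty, 0.0 to 1.0
    challenges: list = field(default_factory=list)  # objections raised by reviewers

    def challenge(self, objection: str) -> None:
        """Record a human objection; a challenged decision must not auto-execute."""
        self.challenges.append(objection)


decision = ExplainedDecision(
    recommendation="deny_credit",
    inputs={"income": 42_000, "existing_debt": 31_000},
    reasons=["debt-to-income ratio above policy threshold"],
    confidence=0.78,
)
decision.challenge("Debt is a student loan that policy excludes from the ratio")
```

A reviewer who disagrees with a stated reason can record a challenge and stop execution, instead of facing a bare score they can only accept or reject blindly.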

04

Control Must Be Designed, Not Assumed

Control does not emerge automatically. It must be built in from the start. A system not designed for human override will resist it.

Implications:

  • Override mechanisms must be built-in, not bolted on (see the sketch after this list)
  • Control points must be identified before the system is built
  • Human-in-the-loop without real authority is theater
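
A sketch of an override that is built in rather than bolted on, with illustrative names (ControlledAutomation, human_override): the stop signal lives inside the core loop from the first version, so halting the system is an exercise of authority, not a workaround.

```python
import threading


class ControlledAutomation:
    """An automation loop with a human override wired into its core, not added later."""

    def __init__(self):
        self._halt = threading.Event()  # the override channel exists from version one

    def human_override(self, operator: str, reason: str) -> None:
        """Stop the automation; who stopped it and why is recorded."""
        print(f"Override by {operator}: {reason}")
        self._halt.set()

    def run(self, steps):
        """Execute steps, checking for a human halt before each one."""
        for step in steps:
            if self._halt.is_set():
                print("Halted by human override; remaining steps skipped")
                return
            step()


# In production the override would arrive from another thread or an operator
# console while run() is in progress; the order here just demonstrates the effect.
automation = ControlledAutomation()
automation.human_override("j.doe", "Unexpected input distribution, pausing rollout")
automation.run([lambda: print("step 1"), lambda: print("step 2")])
```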

Domains that should never be fully automated

Some decisions require human judgment, regardless of how advanced the technology becomes. These are domains where the cost of error is too high, the context too complex, or the accountability too important to delegate.

  • Strategic choices: direction, prioritization, value trade-offs
  • Ethical trade-offs: balancing competing values and interests
  • Legal accountability: decisions with legal consequences
  • Irreversible actions: what cannot be undone or reversed

Practical application

For leaders and boards

Use this framework to evaluate AI initiatives. Ask: Where does responsibility lie? What are the control mechanisms? Can we explain what the system does? Are override capabilities built in?

For architects and builders

Design control into systems from the start. Identify decision points that require human judgment. Build explainability into the architecture. Create meaningful override mechanisms.
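
One way to start, sketched below with hypothetical field names and invented entries: declare the control points before building, so each automated decision has a consequence level, an accountable human, an approval rule, and an override mechanism on record. A gap in this registry is a gap in control.

```python
# A declared control point: the decision, who answers for it, and how it is controlled.
CONTROL_POINTS = [
    {
        "decision": "credit_limit_increase",
        "consequence": "HIGH",
        "accountable_owner": "head_of_credit_risk",
        "human_approval_required": True,
        "override_mechanism": "case_officer_console",
        "explanation_artifact": "ExplainedDecision record",
    },
    {
        "decision": "marketing_email_send_time",
        "consequence": "LOW",
        "accountable_owner": "marketing_ops_lead",
        "human_approval_required": False,
        "override_mechanism": "campaign_kill_switch",
        "explanation_artifact": "ranking feature log",
    },
]


def verify_control_points(points):
    """Flag decisions where control is assumed rather than designed."""
    issues = []
    for p in points:
        if not p.get("accountable_owner"):
            issues.append(f"{p['decision']}: no accountable human")
        if p["consequence"] != "LOW" and not p["human_approval_required"]:
            issues.append(f"{p['decision']}: high consequence without human approval")
        if not p.get("override_mechanism"):
            issues.append(f"{p['decision']}: no override mechanism")
    return issues


print(verify_control_points(CONTROL_POINTS) or "All declared control points are covered")
```

The same registry gives governance and compliance a concrete artifact to audit: every automated decision maps to a named human and a verifiable control, rather than to an assumption.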

For governance and compliance

Map automated decisions to accountability structures. Ensure explainability requirements are met. Verify that human control is real, not theatrical.

Want to discuss the framework?

I give talks and workshops on Human in Control for leaders, boards, and teams working with AI strategy and governance.