The Framework
Human in Control
A design and governance framework for AI systems where humans retain real control, not just control on paper.
Apply this framework before approving, deploying, or scaling any AI system that makes or influences decisions on behalf of your organization.
“The competitive advantage of the AI era is not maximum automation, but maximum control with minimum friction.”
Human-in-the-Loop is not enough
Many organizations believe that having a human somewhere in the process means they have control. But there is a fundamental difference between being in the loop and being in control.
Human-in-the-Loop
- Human is a node in the process
- Often reduced to an “approve” button
- Control is symbolic, not real
- Automation sets the pace
- No time to understand or evaluate
Human in Control
- Human has real authority
- Can stop, change, override
- Understands what the system does and why
- Sets the pace for critical choices
- Retains responsibility and insight
Four foundational principles
A framework for AI systems where humans retain real control — built on principles that acknowledge the limits of automation.
Responsibility Cannot Be Automated
Responsibility requires intention, understanding, and the ability to be held accountable. Machines can execute actions, but they cannot bear responsibility. When something goes wrong with an automated decision, it is still a human who must answer.
The key question:
“Who bears the consequence if this system gets it wrong?”
Implications:
- Legal accountability remains with humans, regardless of system sophistication
- Ethical choices require human judgment and contextual understanding
- Accountability must be designed into systems, not delegated to them
Autonomy Requires Proportional Control
The more a system can do on its own, the stronger the control mechanisms must be. Increased autonomy without increased control is not efficiency — it is risk.
The key question:
“Can we stop this while it’s happening, or only after?”
Implications:
- Critical decisions require human approval
- Automation of irreversible actions requires additional safety layers
- Control levels must scale with consequence magnitude
Explainability Is a Prerequisite for Trust
Trust in systems we do not understand is not trust — it is blind faith. Real control requires that we can understand why a system does what it does.
The key question:
“Can the accountable person explain why the system did what it did?”
Implications:
- Black-box systems are unacceptable for critical decisions
- Humans must be able to understand and challenge system logic
- Explainability is not a nice-to-have — it is a requirement
Control Must Be Designed, Not Assumed
Control does not emerge automatically. It must be built in from the start. A system not designed for human override will resist it.
The key question:
“Where in the process can a human actually intervene?”
Implications:
- Override mechanisms must be built-in, not bolted on
- Control points must be identified before the system is built
- Human-in-the-Loop without real authority is theater
Domains that should never be fully automated
Some decisions require human judgment, regardless of how advanced the technology becomes. These are domains where the cost of error is too high, the context too complex, or the accountability too important to delegate.
Strategic choices
Direction, prioritization, value trade-offs
Ethical trade-offs
Balancing competing values and interests
Legal accountability
Decisions with legal consequences
Irreversible actions
What cannot be undone or reversed
Regulatory alignment
Human in Control is designed to complement, not replace, established standards. The four principles align with key requirements across major AI governance frameworks.
| Principle | EU AI Act | NIST AI RMF | ISO/IEC 42001 |
|---|---|---|---|
| P1: Responsibility | Art. 14 — Human oversight obligations | GOVERN — Roles, accountability, culture | Leadership commitment, accountability structures |
| P2: Proportional control | Art. 9 — Risk management system | MANAGE — Risk response, escalation | Risk assessment, control proportionality |
| P3: Explainability | Art. 13 — Transparency requirements | MAP — Context, impact characterization | Documentation, transparency controls |
| P4: Control by design | Art. 14(4) — Ability to override | MANAGE — Controls, monitoring | Operational controls, design requirements |
HITL — HOTL — HIC: The oversight spectrum
The EU High-Level Expert Group on AI established three levels of human oversight: Human-in-the-Loop (human can intervene in each decision cycle), Human-on-the-Loop (human can intervene during design and monitoring), and Human-in-Command (human can oversee overall activity and decide when and how to use the system). This framework positions itself at the Human-in-Command level — the highest form of human oversight, where humans retain genuine authority over the system's role and reach.
Decision authority matrix
Not all decisions require the same level of human involvement. This matrix helps organizations classify decisions based on two dimensions: whether the action can be reversed, and how severe the consequences of error are.
| | Mild consequences | Severe consequences |
|---|---|---|
| Reversible action | System decides. Monitoring and logging in place. | System acts, human monitors and can intervene. |
| Irreversible action | System recommends, human approves each action. | Human decides. System supports with information. |
The matrix is a starting point, not a formula. Organizations should assess each use case individually, considering factors like regulatory requirements, organizational risk appetite, and the maturity of the technology involved.
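As a minimal sketch, the matrix can be encoded directly in code so that every automated decision is classified before it ships. The mapping below follows the table above, including the inferred placement of the two middle cells; the enum and function names are illustrative, not part of the framework.

```python
from enum import Enum

class Oversight(Enum):
    """Oversight levels from the decision authority matrix."""
    SYSTEM_DECIDES = "System decides; monitoring and logging in place"
    HUMAN_MONITORS = "System acts; human monitors and can intervene"
    HUMAN_APPROVES = "System recommends; human approves each action"
    HUMAN_DECIDES = "Human decides; system supports with information"

def required_oversight(reversible: bool, severe: bool) -> Oversight:
    """Map the matrix's two dimensions to an oversight level."""
    if reversible:
        return Oversight.HUMAN_MONITORS if severe else Oversight.SYSTEM_DECIDES
    return Oversight.HUMAN_DECIDES if severe else Oversight.HUMAN_APPROVES

# Example: credit decisions are irreversible with severe consequences.
assert required_oversight(reversible=False, severe=True) is Oversight.HUMAN_DECIDES
```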
Governance lifecycle
AI governance is not a one-time exercise. It is a continuous cycle of assessment, design, monitoring, and adaptation.
Assess
Principle 3: Explainability
- Map AI-driven decisions and their impact
- Identify where human judgment is critical
- Understand current control gaps
Design
Principle 4: Control by design
- Build control points into system architecture
- Define override mechanisms and escalation paths
- Assign accountability for each decision type (a registry sketch follows this list)
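One possible shape for that accountability assignment is a decision-type registry: each automated decision type gets an accountable owner, an oversight level, and an escalation path before the system is built. All names and roles below are invented for illustration.

```python
# Illustrative registry tying each automated decision type to an owner,
# an oversight level, and an escalation path. All names are invented.
DECISION_REGISTRY = {
    "credit_limit_increase": {
        "owner": "head_of_retail_credit",
        "oversight": "human_approves",
        "escalation": ["credit_officer", "credit_committee"],
    },
    "marketing_segment_assignment": {
        "owner": "data_product_owner",
        "oversight": "system_decides",
        "escalation": ["analytics_lead"],
    },
}

def accountable_owner(decision_type: str) -> str:
    """Fail loudly if a decision type has no designated owner (Principle 1)."""
    return DECISION_REGISTRY[decision_type]["owner"]
```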
Monitor
Principle 2: Proportional control
- Track system behavior and decision outcomes
- Verify that control mechanisms work in practice
- Monitor for drift, bias, or degraded performance (see the sketch below)
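One way to make "verify that control mechanisms work in practice" concrete is to watch the override rate: if humans start overriding the model far more often than before, model behavior and human judgment are diverging. The toy check below assumes override outcomes are logged as booleans; the alert threshold is illustrative.

```python
def override_rate(overridden: list[bool]) -> float:
    """Share of model recommendations that a human overrode."""
    return sum(overridden) / len(overridden) if overridden else 0.0

def drift_alert(baseline: list[bool], recent: list[bool],
                max_increase: float = 0.10) -> bool:
    """Alert when the recent override rate exceeds baseline by max_increase."""
    return override_rate(recent) - override_rate(baseline) > max_increase

# Example: a jump from a 5% to a 20% override rate triggers an alert.
baseline = [True] * 5 + [False] * 95
recent = [True] * 20 + [False] * 80
assert drift_alert(baseline, recent)
```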
Adapt
Principle 1: Responsibility
- Update controls as systems and risks evolve
- Incorporate lessons from incidents and near-misses
- Reassess authority levels as capabilities change
Maturity levels
Organizations differ in how far they have come with AI governance. These four levels provide a way to assess current maturity and identify what to work on next.
Reactive
No formal AI governance. Decisions about AI are ad hoc. Control is accidental, not designed.
Structured
Principles are defined. Roles exist. Control points are identified but not consistently applied.
Embedded
Control mechanisms are part of system design. Accountability is clear. Override capabilities work in practice, not just on paper.
Adaptive
Continuous learning. Controls evolve with systems. The organization can respond to new risks without starting over.
Practical application
For leaders and boards
Use this framework to evaluate AI initiatives. Ask: Where does responsibility lie? What are the control mechanisms? Can we explain what the system does? Are override capabilities built in?
For architects and builders
Design control into systems from the start. Identify decision points that require human judgment. Build explainability into the architecture. Create meaningful override mechanisms.
For governance and compliance
Map automated decisions to accountability structures. Ensure explainability requirements are met. Verify that human control is real, not theatrical.
Who owns each principle?
Not a rigid rule — a starting point so someone walks out of the meeting knowing what is theirs.
| Principle | Natural owner | Why |
|---|---|---|
| P1: Responsibility | Board / CEO | They bear the ultimate consequence |
| P2: Proportional control | CTO / Architect | They design the systems |
| P3: Explainability | Product owner / Domain lead | They must be able to explain |
| P4: Control by design | CISO / Risk owner | They verify control is real |
Applying the framework: AI-assisted credit decisions
A worked example showing how the four principles apply to a common real-world scenario.
The scenario
A bank considers automating parts of its credit assessment process. An AI model analyzes applicant data and produces a recommendation: approve, reject, or refer for manual review.
P1: Responsibility cannot be automated
Who is accountable when the model rejects a borderline applicant? The model cannot be held responsible. The bank must designate a person or role that owns the outcome of each credit decision, whether the model recommended it or not.
P2: Autonomy requires proportional control
The system may recommend, but final approval for amounts above a defined threshold requires human sign-off. The higher the stakes, the stronger the human involvement.
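A minimal sketch of that rule, assuming a single amount threshold and a three-way model verdict; the threshold value and field names are hypothetical, not taken from the framework.

```python
from dataclasses import dataclass

# Hypothetical threshold; in practice set by risk appetite and regulation.
HUMAN_SIGNOFF_THRESHOLD = 50_000

@dataclass
class CreditRecommendation:
    applicant_id: str
    amount: float
    model_verdict: str  # "approve" | "reject" | "refer"

def route(rec: CreditRecommendation) -> str:
    """Route high-stakes or uncertain cases to a human; log the rest."""
    if rec.amount >= HUMAN_SIGNOFF_THRESHOLD or rec.model_verdict == "refer":
        return "queue_for_human_review"
    return "auto_process_with_logging"

print(route(CreditRecommendation("A-1042", 120_000, "approve")))
# -> queue_for_human_review (amount exceeds the sign-off threshold)
```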
P3: Explainability is a prerequisite for trust
The model must explain which factors drove each recommendation. A credit officer who cannot understand why the system recommended rejection cannot meaningfully review or override that decision.
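One way to carry that explanation with each recommendation is a list of reason codes, for example (factor, contribution) pairs produced by the model's own scorecard or a post-hoc attribution method. The structure and factor names below are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedRecommendation:
    verdict: str  # "approve" | "reject" | "refer"
    # (factor, contribution) pairs from the model or a post-hoc method
    reasons: list[tuple[str, float]] = field(default_factory=list)

rec = ExplainedRecommendation(
    verdict="reject",
    reasons=[("debt_to_income_ratio", -0.42), ("payment_history", -0.18)],
)
for factor, contribution in rec.reasons:
    print(f"{factor}: {contribution:+.2f}")
```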
P4: Control must be designed, not assumed
Credit officers can override any recommendation, and overrides are logged — not discouraged. The system is designed so that human judgment complements model output, rather than being subordinated to it.
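A sketch of what a logged override could look like: an immutable record that keeps the model's verdict, the officer's verdict, and the rationale side by side. Field names are illustrative; persistence is left out of the sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideRecord:
    decision_id: str
    model_verdict: str
    human_verdict: str
    officer_id: str
    rationale: str
    timestamp: datetime

def log_override(decision_id: str, model_verdict: str, human_verdict: str,
                 officer_id: str, rationale: str) -> OverrideRecord:
    """Create an immutable audit entry; persistence is out of scope here."""
    return OverrideRecord(decision_id, model_verdict, human_verdict,
                          officer_id, rationale, datetime.now(timezone.utc))

record = log_override("A-1042", "reject", "approve", "officer-17",
                      "Stable income not captured by the model's features.")
```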
Decision matrix placement: Credit decisions are irreversible (a rejected applicant may go elsewhere) and carry severe consequences (financial loss, regulatory risk, reputational harm). This places them in the Human-in-Command quadrant.
Want to discuss the framework?
I give talks and workshops on Human in Control for leaders, boards, and teams working with AI strategy and governance.