MD-I - Management Development I

Chapter 0 — How to Use This Manual

Standards, Scope, and Rules of Engagement


Chapter Purpose

This chapter establishes how this manual must be used, who it applies to, and what happens if it is ignored.

This manual is not:

  • optional reading
  • leadership philosophy
  • personality guidance
  • motivational content

This manual defines the management standard.


0.1 Who This Manual Is For

This manual applies to:

  • anyone currently managing people, workflows, or outcomes
  • anyone being evaluated for a management role
  • anyone accountable for team or departmental performance

It applies regardless of:

  • tenure
  • past performance as a doer
  • title
  • personality or leadership style

0.2 Who This Manual Is NOT For

This manual is not for:

  • individual contributors who do not influence others’ work
  • people seeking authority without accountability
  • managers who want to “manage by feel”
  • leaders unwilling to be measured

Those individuals must not hold management responsibility.


0.3 How This Manual Is Structured

This manual is structured around:

  • standards, not suggestions
  • measurement, not opinion
  • validation, not attendance
  • behavior change, not awareness

Each chapter includes:

  • defined expectations
  • quantitative requirements
  • failure signals
  • validation criteria

Reading without execution is non-compliance.


0.4 How Progress Is Evaluated

Progress through MD is gated, not linear.

You do not “finish” a chapter by reading it.
You complete a chapter by producing evidence.

Evidence Includes:

  • metrics
  • artifacts (SOPs, dashboards, logs)
  • observable behavior changes
  • sustained performance trends

0.5 Enforcement and Authority

This manual carries authority when adopted by leadership.

Failure to meet MD standards may result in:

  • delayed promotion
  • removal from management role
  • reassignment to individual contributor roles
  • performance improvement plans

This is intentional.


0.6 The Manager Agreement

By participating in MD, managers agree that:

  • their performance will be measured
  • decisions must be data-backed
  • people issues require operational diagnosis
  • dependency is a failure condition
  • improvement is mandatory, not optional

0.7 Qualitative vs Quantitative Rule

This manual operates under a strict rule:

Quantitative signals drive decisions.
Qualitative input explains them.

Violation of this rule invalidates conclusions.


0.8 The MD Completion Standard

A manager is considered “MD-compliant” only when:

  • their team produces consistent results
  • metrics are visible and reviewed regularly
  • dependency on the manager is decreasing
  • improvements are versioned and measured
  • the system functions during manager absence

0.9 How to Use This Manual Day-to-Day

This manual should be used:

  • as a reference during decisions
  • as a standard during coaching
  • as a gate during promotions
  • as documentation during accountability discussions

It should not be customized per person.


Chapter 1 — The Manager Mandate

Outcomes, Not Activity


Chapter Purpose

This chapter resets what it means to be a manager.

It establishes:

  • What managers are accountable for
  • How success is measured
  • Why activity, effort, and intent are irrelevant without outcomes
  • Why managers must manage through systems and people using data

This chapter is the non-negotiable foundation for all MD levels.


1.1 What a Manager Is Accountable For

A manager is accountable for results produced by other people, operating inside systems.

Not:

  • personal output
  • effort
  • being busy
  • being liked
  • “holding things together”

Manager Accountability Has Three Pillars

  1. Results – outcomes achieved
  2. Stability – consistency over time
  3. Scalability – performance holds as volume increases

If any one is missing, management is failing.


1.2 Outcomes vs Activity

Activity Is Not Evidence

Common activity traps:

  • “I was in meetings all day”
  • “I stayed late to help”
  • “I answered a lot of questions”
  • “The team is working hard”

None of these indicate success.

Outcomes Are Evidence

Outcomes are:

  • measurable
  • comparable
  • trendable
  • repeatable

Rule:

If it cannot be measured over time, it cannot be managed.


1.3 The Manager’s Measurement Obligation

Every manager must be able to answer with numbers, not stories:

Question → Required Evidence

  • Is the team performing? → Output, quality, rework, cycle time
  • Is performance improving? → Trend data
  • Is the system stable? → Variance over time
  • Is the manager a bottleneck? → Escalations, manager-touch rate
  • Can the manager step away? → Performance delta during absence

Qualitative input is allowed only to explain quantitative signals, never to replace them.


1.4 The Manager Leverage Principle

The power of many must exceed the output of one.

A manager succeeds only when:

  • the team produces more than the manager could alone
  • performance does not collapse when the manager steps back
  • improvement continues without constant intervention

The Bus Test

If the manager is unavailable for 5 business days:

  • Does output continue?
  • Does quality remain within standards?
  • Do decisions still happen at the correct level?

If not, the manager is functioning as a hidden single point of failure.


1.5 Managers Do Not Manage Effort — They Manage Variance

Effort is invisible and subjective.
Variance is observable and measurable.

Managers manage:

  • variance in output
  • variance in quality
  • variance in cycle time
  • variance in behavior relative to standards

Rule:

Managers manage deviation from standard, not motivation.


1.6 Quantitative Management Standard

Every manager must maintain a minimum signal set for their area.

Required Core Metrics (All Managers)

These exist regardless of function:

Category → Minimum Signals

  • Volume → Work completed per period
  • Time → Cycle time / aging
  • Quality → Defect or rework rate
  • Stability → Variance week-over-week
  • Dependency → Escalations / manager-touch rate

A manager who cannot produce these metrics is not managing.
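The minimum signal set can be captured as one record per week per managed area. A minimal sketch in Python; the field names and sample values are illustrative assumptions, not a prescribed tool:

```python
from dataclasses import dataclass

@dataclass
class WeeklyKPIs:
    """One week of the minimum signal set for a managed area."""
    week: str               # period label, e.g. an ISO week
    volume: int             # work completed this period
    cycle_time_days: float  # average completion time (or aging)
    rework_rate: float      # defects/rework as a fraction of volume
    variance_pct: float     # week-over-week swing in output
    escalations: int        # items pushed up to the manager

    def is_complete(self) -> bool:
        # A record with any missing signal means the area is not being managed.
        return all(v is not None for v in vars(self).values())

week = WeeklyKPIs("2024-W10", volume=42, cycle_time_days=2.5,
                  rework_rate=0.05, variance_pct=8.0, escalations=3)
print(week.is_complete())  # True
```

One record like this per week is enough to satisfy the core metric requirement; the tooling (sheet, database, dashboard) is interchangeable.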


1.7 People + Operations Are Not Separate

Managers do not manage “people” and “work” as separate concerns.
They manage people producing work through systems.

People Decisions Must Be Data-Backed

Examples:

  • Coaching is triggered by trends, not moods
  • Delegation is evaluated by output stability
  • Motivation issues are diagnosed through performance signals
  • Accountability is based on missed standards, not personality

Rule:

Fair management is objective management.


1.8 Common Early Failure Signals

A manager is failing this chapter if:

  • Decisions start with opinions instead of metrics
  • Performance discussions lack numbers
  • The manager explains outcomes using effort
  • Problems are described as “people issues” without data
  • Improvement actions have no baseline or target

These signals must be corrected before moving forward.


1.9 Chapter Validation (Required)

A manager passes Chapter 1 only if they can produce:

  1. A written statement of:
    • the outcomes they own
    • how those outcomes are measured
  2. A live metric set showing:
    • volume
    • quality
    • time
    • variance
  3. One example decision:
    • made using data
    • explained with metrics first
    • narrative second

No metrics = no pass.

 

Unit 1 — MD-I-U01

Manager Mandate: Outcomes + Measurement Discipline

(Related to Manual Chapter 1)

Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U01
  • Related Manual Chapter: Chapter 1 — The Manager Mandate
  • Unit Name: Outcomes, Not Activity
  • Timebox: 7 days
  • Prerequisites: None
  • Primary Outcome: Manager can define what they own and prove it with metrics.
  • Pass Standard: Metrics exist, are reviewed on cadence, and are used to justify at least one decision.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Can state their outcomes in plain language (not tasks)
  • Has a minimum KPI set for their area:
    • Volume, Cycle Time, Quality/Rework, Stability/Variance, Escalations/Dependency
  • Reviews KPIs on a schedule (daily/weekly minimum)
  • Can explain a decision using metrics first, narrative second
  • Can show at least 2 consecutive weeks of data (or baseline + initial week if new)
  • Does not use “effort” or “busy” as proof of performance

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Outcomes Clarity

  1. I can list 3–5 outcomes I own that are measurable (not task lists).
  2. I can clearly explain what “success” and “failure” look like for my area using numbers.

Measurement System Exists

  3. I have a visible dashboard or tracker showing Volume for my system.
  4. I track Cycle Time or aging for work items.
  5. I track Quality (defects, rework, returns, corrections).
  6. I track Stability/Variance (week-over-week trend, not one-offs).
  7. I track Escalations/Dependency (how often work/decisions come to me).

Cadence & Use

  8. I review exceptions daily or at a consistent frequency.
  9. I review KPI trends weekly (not only when something goes wrong).
  10. My decisions can be tied to a triggering metric (documented at least once).

Integrity Standard (Critical)

  11. I do not make performance judgments without referencing the KPI set.
  12. Qualitative input is used only to explain metrics, not replace them.

Critical questions: 3–7, 9–10, 11–12
(If any critical question scores 0–1, the unit cannot pass.)

Scoring Outputs

  • Unit Score = average of all questions
  • Stage:
    • 0–1.4 Not Ready
    • 1.5–2.4 Developing
    • 2.5–3.4 Operating
    • 3.5–4.0 Leading
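The scoring rules are simple arithmetic and can be sketched directly. In this sketch the sample scores are illustrative, and the critical-question indices follow the Unit 1 critical list:

```python
def unit_score(scores):
    """Unit Score = average of all 0-4 question scores."""
    return sum(scores) / len(scores)

def stage(score):
    """Map a unit score onto the MD stage bands."""
    if score < 1.5:
        return "Not Ready"
    if score < 2.5:
        return "Developing"
    if score < 3.5:
        return "Operating"
    return "Leading"

def unit_passes(scores, critical_indices):
    """Any critical question scored 0-1 fails the unit outright."""
    return all(scores[i - 1] >= 2 for i in critical_indices)

scores = [3, 2, 3, 2, 4, 3, 2, 3, 3, 2, 3, 4]  # twelve questions, 0-4 each
print(round(unit_score(scores), 2))            # 2.83
print(stage(unit_score(scores)))               # Operating
print(unit_passes(scores, [3, 4, 5, 6, 7, 9, 10, 11]))  # True
```

Note that the pass gate and the stage bands are independent: a manager can average "Operating" and still fail on a single critical 0–1 score.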

3) Evidence Checklist (Required to Pass)

Provide the following artifacts:

  1. Outcome Statement (1 page max) including:
    • 3–5 measurable outcomes you own
    • the metrics that prove each outcome
  2. KPI Dashboard/Tracker showing at minimum:
    • Volume
    • Cycle time (or aging)
    • Quality/rework
    • Variance trend (even a simple weekly trend line)
    • Escalations/dependency
  3. Cadence Proof
    • screenshot or calendar showing daily/weekly reviews scheduled
  4. One Decision Record
    • metric → decision → expected impact → review date
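The decision record is a small fixed structure. A sketch with hypothetical field names and example values (the metric name and dates are assumptions for illustration):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """metric -> decision -> expected impact -> review date."""
    triggering_metric: str  # the signal that prompted action
    current_value: float    # value of that signal at decision time
    decision: str           # what was decided
    expected_impact: str    # the metric change the decision should produce
    review_date: date       # when the impact will be checked

rec = DecisionRecord(
    triggering_metric="rework_rate",   # illustrative metric name
    current_value=0.12,
    decision="Add an intake checklist before work is accepted",
    expected_impact="rework_rate below 0.08 within two weeks",
    review_date=date(2024, 4, 1),
)
print(rec.triggering_metric)  # rework_rate
```

A record missing any of these fields (most often the review date) does not meet the evidence standard.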

4) Step-by-Step Training Plan (7 Days)

Day 1 — Define Outcomes

  • Action: Write 3–5 outcomes you own (not tasks).
  • Format: 1-page doc.
  • Output: Outcome Statement draft.
  • Validation: Outcomes must be measurable and within your authority.

Day 2 — Define the Minimum KPI Set

  • Action: For each outcome, assign at least 1 KPI.
  • Output: Outcome → KPI mapping.
  • Validation: KPIs must be measurable weekly.

Day 3 — Build a Simple KPI Tracker (MVP)

  • Action: Create a basic dashboard/tracker (sheet/db/view).
  • Output: KPI tracker with fields and initial values.
  • Validation: Includes Volume, Time, Quality, Variance, Escalations.

Day 4 — Establish Review Cadence

  • Action: Schedule:
    • daily exception review (10–15 min)
    • weekly KPI review (30 min)
  • Output: Calendar invite(s) or documented schedule.
  • Validation: Reviews have a consistent time and owner.

Day 5 — Capture First Baseline Week

  • Action: Populate the tracker with the most recent week (or current running week).
  • Output: Week 0 baseline.
  • Validation: Data is complete enough to spot trends.

Day 6 — Make One Metric-Driven Decision

  • Action: Identify 1 metric gap and make 1 decision.
  • Output: Decision Record:
    • triggering metric
    • decision made
    • expected metric change
    • review date
  • Validation: Decision is tied directly to a metric.

Day 7 — Review + Submit Evidence

  • Action: Package evidence for manager review.
  • Output: Evidence packet (links/screenshots).
  • Validation: All required artifacts present.

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Manager’s Manager
  • Review Method: 20–30 minutes evidence review + 10 min Q&A
  • Pass Criteria (Must Meet All):
    • All evidence checklist items provided
    • All critical questions scored ≥2
    • KPI tracker includes the 5 required categories
    • One decision record exists and includes a review date
  • Fail Criteria (Any One Fails):
    • Missing KPI categories
    • No cadence proof
    • Decision not tied to a metric
    • Critical questions scored 0–1
  • Remediation Plan (If Fail):
    • Fix missing KPI categories within 48 hours
    • Add cadence events
    • Provide revised decision record
    • Re-review in 7 days

6) Carry-Forward Commitments (After Passing)

  • Weekly KPI review continues permanently
  • Exceptions log continues daily/weekly (based on volume)
  • Every improvement/change must reference baseline metrics (feeds Unit 7 later)

 

Chapter 2 — The Power of Many vs the Trap of One

Leverage, Dependency, and Manager Involvement


Chapter Purpose

This chapter defines the leverage requirement of management.

It establishes:

  • Why team output must exceed individual output
  • When manager “doing” is acceptable—and when it is failure
  • How to measure dependency objectively
  • How managers balance doing vs managing without becoming the system

This chapter exists to eliminate hero management.


2.1 The Leverage Requirement

A manager’s output is not their work.
A manager’s output is the team’s work.

If the manager’s personal production is required to meet baseline performance, the system is already failing.

Leverage Defined

Leverage exists when:

  • The team produces more than the manager could alone
  • Output remains stable without constant manager intervention
  • Quality and pace do not collapse during manager absence

Leverage is measurable, not assumed.


2.2 The Trap of One (How Managers Break Their Own Teams)

What the Trap Looks Like

  • The manager fills gaps instead of fixing causes
  • The manager “helps” the same issues repeatedly
  • The team waits for the manager to decide
  • Performance looks fine—until the manager steps away

Why It’s Dangerous

  • Weak systems are hidden, not corrected
  • Team capability stalls
  • Dependency increases silently
  • Growth becomes impossible

Rule:

If the manager is required for normal operations, the system is not operational.


2.3 Acceptable Manager “Doing” vs Failure

Managers are allowed to contribute directly only when it serves the system.

Acceptable Reasons to Do Work

  1. Surge Capacity – short-term volume spikes
  2. Stabilization – stopping active failure
  3. Training by Demonstration – showing the standard once, then delegating
  4. Temporary Coverage – defined gap with exit plan

Failure Condition

Manager “doing” becomes failure when:

  • It replaces fixing inputs, standards, or training
  • It becomes expected for normal output
  • It hides underperformance
  • It continues without a reduction plan

Rule:

Help that does not reduce future help is rescue, not management.


2.4 The Manager Involvement Dial (0–5)

Managers must consciously regulate how involved they are in execution.

Dial

Role

Description

0

Observe

Measure only

1

Coach

Review + feedback

2

Quality Gate

Approve/checkpoints

3

Assist

Temporary task help

4

Produce

Direct contributor

5

Carry

Team depends on manager

Target State

  • Dial 1–2 most of the time
  • Dial 3–4 only temporarily
  • Dial 5 is a failure state

2.5 Quantitative Signals of Dependency

Dependency is not a feeling. It is detectable.

Required Dependency Metrics

Every manager must track:

Signal

What It Indicates

Manager-touch rate

% of tasks manager touches

Escalation frequency

Decisions pushed upward

Repeat rescues

Same issue saved multiple times

Output variance

Performance swings without manager

Absence delta

Output change during manager absence

Red Flags

  • Manager touches >30% of normal work
  • Same issue rescued more than twice
  • Output drops >15% during absence
  • Escalations increase week over week

These require immediate correction.
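The red-flag thresholds can be checked mechanically each week. A sketch assuming the signals are already collected as numbers; the sample values are illustrative:

```python
def dependency_red_flags(touch_rate, repeat_rescues, absence_delta_pct,
                         escalations_by_week):
    """Apply the Chapter 2 red-flag thresholds to one week of signals."""
    flags = []
    if touch_rate > 0.30:
        flags.append("manager touches >30% of normal work")
    if repeat_rescues > 2:
        flags.append("same issue rescued more than twice")
    if absence_delta_pct < -15:
        flags.append("output drops >15% during absence")
    # Strictly rising escalations, week over week, is the fourth flag.
    if len(escalations_by_week) >= 2 and all(
            b > a for a, b in zip(escalations_by_week, escalations_by_week[1:])):
        flags.append("escalations increase week over week")
    return flags

print(dependency_red_flags(0.35, 3, -20, [4, 6, 9]))
```

Any non-empty result is a correction item for that week, not a discussion point.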


2.6 Time Allocation Standards (Managing vs Doing)

Managers must actively manage their time split.

Normal Operations (Healthy Team)

  • 70–85% Managing
  • 15–30% Doing

New Team / New Process

  • 50–70% Managing
  • 30–50% Doing
  • Doing must decrease week over week

Crisis / Turnaround

  • 40–60% Managing
  • 40–60% Doing
  • Must include a written exit plan back to Dial 1–2

Rule:

If a manager remains primarily a doer for more than 4 weeks, dependency has formed.


2.7 The Two-Lane Manager Model

Managers must operate in two lanes simultaneously.

Lane A — Production Support (Doing)

  • Temporary assistance
  • Training demonstrations
  • Stabilization actions

Lane B — Management (Non-Negotiable)

  • Standards and expectations
  • Coaching and accountability
  • Metrics review
  • System improvement
  • Ownership enforcement

Lane B must exist every week, even in crisis.

If Lane B disappears, the manager will never escape Lane A.


2.8 The Rescue Ticket Rule

Every manager intervention that saves work must be logged.

Rescue Ticket Requirements

  • What failed (input, skill, standard, capacity, tooling)
  • Who owns the fix
  • What change will prevent recurrence
  • Target date
  • Metric that proves resolution

Metric

  • Rescue recurrence rate must decrease over time

If rescues repeat, the manager is enabling failure.
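Recurrence can be computed from the rescue log itself. A sketch over a hypothetical log, using one reasonable definition of recurrence (the share of rescues that repeat an already-logged issue); the manual mandates the metric, not this exact formula:

```python
from collections import Counter

# Each ticket records what failed and who owns the fix (Rescue Ticket fields,
# abbreviated here for illustration).
rescue_log = [
    {"issue": "missing intake fields", "cause": "input",    "owner": "lead_a"},
    {"issue": "missing intake fields", "cause": "input",    "owner": "lead_a"},
    {"issue": "report formatting",     "cause": "standard", "owner": "lead_b"},
]

def recurrence_rate(log):
    """Share of rescues that are repeats of an already-logged issue."""
    counts = Counter(ticket["issue"] for ticket in log)
    repeats = sum(n - 1 for n in counts.values())
    return repeats / len(log) if log else 0.0

print(round(recurrence_rate(rescue_log), 2))  # 0.33
```

Tracked weekly, this number must trend toward zero; a flat or rising rate means rescues are not producing prevention fixes.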


2.9 Chapter Validation (Required)

A manager passes Chapter 2 only if they can demonstrate:

  1. Current manager involvement dial (0–5)
  2. Time split data (Managing vs Doing)
  3. Dependency metrics for their team
  4. At least one documented rescue → prevention fix
  5. Evidence that manager absence does not collapse output

No evidence = no pass.

 

Unit 2 — MD-I-U02

The Power of Many vs the Trap of One

Leverage, Dependency Signals, and Manager Involvement
(Related to Manual Chapter 2)

Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U02
  • Related Manual Chapter: Chapter 2 — The Power of Many vs the Trap of One
  • Unit Name: Leverage & Dependency Control
  • Timebox: 7–10 days
  • Prerequisites: MD-I-U01 (Outcomes + Measurement Discipline)
  • Primary Outcome: Manager can measure dependency, reduce rescues, and shift time from doing to managing without losing output.
  • Pass Standard: Dependency metrics exist and show improvement; manager involvement is intentionally controlled; at least one repeat rescue is eliminated through a prevention fix.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Tracks dependency signals weekly:
    • manager-touch rate
    • escalation frequency
    • repeat rescues
    • output variance
    • absence delta (even small test)
  • Can identify whether they are operating at Dial 1–2 (target) vs 3–5
  • Can explain when “doing” is acceptable vs when it becomes a crutch
  • Logs every rescue and converts at least one rescue into a prevention fix
  • Shows a measurable trend toward reduced dependency (even small delta)

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Awareness + Control

  1. I can describe the Manager Involvement Dial (0–5) and where I operate most of the time.
  2. I can explain (with examples) when manager “doing” is acceptable (surge, stabilization, training, coverage).

Dependency Measurement (Critical)

  3. I track manager-touch rate (% of tasks I touch).
  4. I track escalation frequency (how often decisions/tasks come to me).
  5. I track repeat rescues (same issue saved more than once).
  6. I track output variance week over week (stability trend).

Dependency Reduction (Critical)

  7. I use a rescue ticket/log for every rescue I perform.
  8. I can show at least one rescue that resulted in a prevention fix (standard/training/input/gate).
  9. My manager-touch rate is trending down OR is already within target range for my environment.
  10. My escalations are trending down OR are stable within a defined threshold.

Independence Testing

  11. I have run at least one controlled “step-back” test (delay response, delegate decision, or partial absence).
  12. I can show output did not collapse during the step-back test.

Critical questions: 3–6, 7–10
(If any critical question scores 0–1, the unit cannot pass.)

Scoring Outputs

  • Unit Score = average of all questions
  • Stage:
    • 0–1.4 Not Ready
    • 1.5–2.4 Developing
    • 2.5–3.4 Operating
    • 3.5–4.0 Leading

3) Evidence Checklist (Required to Pass)

Provide the following artifacts:

  1. Manager Involvement Self-Report
    • current dial range (0–5)
    • current weekly time split (Managing vs Doing)
    • target split for next 2 weeks
  2. Dependency Dashboard/Tracker showing:
    • manager-touch rate (weekly)
    • escalation frequency (weekly)
    • repeat rescues count (weekly)
    • output variance (weekly)
  3. Rescue Log / Rescue Tickets (minimum 5 entries or 2 weeks of logging)
  4. One Prevention Fix
    • rescue → root cause → fix implemented
    • metric expected to change
    • review date
  5. Step-Back Test Record
    • what was stepped back
    • duration
    • result (metrics or observable outcome)

4) Step-by-Step Training Plan (7–10 Days)

Day 1 — Establish Targets + Dial

  • Action: Choose your current dial (0–5) and set your target dial for the next 2 weeks.
  • Output: Dial statement + time split target (Managing vs Doing).
  • Validation: Target must be realistic and include reduction plan if currently high doing.

Day 2 — Define What Counts as “Touch,” “Escalation,” and “Rescue”

  • Action: Create definitions so tracking is consistent.
  • Output: Definitions document (1 page max).
  • Validation: Definitions must be objective enough that another person would log the same event.
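Once “touch” is defined objectively, the touch rate is a straightforward ratio. A sketch over a hypothetical event log, where each task records whether the manager touched it and how the interaction was classified:

```python
# Hypothetical event log; field names are illustrative, not prescribed.
events = [
    {"task": "T-101", "manager_touched": True,  "kind": "rescue"},
    {"task": "T-102", "manager_touched": False, "kind": None},
    {"task": "T-103", "manager_touched": True,  "kind": "escalation"},
    {"task": "T-104", "manager_touched": False, "kind": None},
]

def manager_touch_rate(log):
    """Fraction of tasks the manager touched, per the Day 2 definitions."""
    touched = sum(1 for event in log if event["manager_touched"])
    return touched / len(log)

print(manager_touch_rate(events))  # 0.5
```

The point of the Day 2 definitions is that two people logging the same week would produce the same `events` list, so the ratio is comparable week to week.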

Day 3 — Start the Dependency Tracker

  • Action: Add 5 metrics to your tracker:
    • manager-touch rate
    • escalations
    • rescues
    • repeat rescues
    • output variance
  • Output: Tracker with first entries.
  • Validation: Metrics must have a weekly cadence.

Days 4–6 — Run Rescue Logging + Identify Repeat Pattern

  • Action: Log every rescue and escalation.
  • Output: Rescue log with root cause category:
    • Input
    • Standard
    • Skill/Training
    • Capacity
    • Tooling
  • Validation: At least 5 entries with categories.

Day 7 — Implement One Prevention Fix

  • Action: Choose the most common repeat rescue and implement a prevention fix:
    • clarify input requirement
    • add a quality gate
    • SOP update
    • training micro-session
    • delegation decision rights
  • Output: Prevention Fix Record:
    • problem
    • fix
    • owner
    • metric targeted
    • review date
  • Validation: Fix must be operational and measurable.

Days 8–10 — Run a Step-Back Test

  • Action: Run one controlled independence test:
    • delay response to non-critical questions by 2 hours
    • delegate one decision fully with escalation rules
    • remove yourself from a routine approval for 1 cycle
  • Output: Step-Back Test Record with outcome.
  • Validation: Output did not collapse; issues were logged, not rescued silently.

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Manager’s Manager
  • Review Method: 30 minutes evidence review + 10 minute Q&A
  • Pass Criteria (Must Meet All):
    • Dependency tracker exists with at least 2 weeks of data (or 1 baseline + current week if new)
    • Rescue log exists with categories
    • One prevention fix implemented with metric target + review date
    • Step-back test completed and documented
    • Critical questions scored ≥2
  • Fail Criteria (Any One Fails):
    • No rescue logging
    • No metric visibility on dependency
    • Prevention fix is vague (“try harder,” “be more careful”)
    • Step-back test not performed
    • Critical questions scored 0–1
  • Remediation Plan (If Fail):
    • Continue rescue logging 7 more days
    • Implement a real prevention fix tied to a metric
    • Re-run step-back test
    • Re-review in 7 days

6) Carry-Forward Commitments (After Passing)

  • Rescue logging continues until repeat rescues trend down and remain down
  • Manager-touch rate is reviewed weekly and must stay within target range
  • Every rescue must produce a prevention action or a documented reason why not

 

 

Chapter 3 — Understanding the System You Manage

Systems, Signals, and Stability Before Change


Chapter Purpose

This chapter ensures managers can read, measure, and stabilize the system they are responsible for before attempting to change it.

Most management failures happen because managers:

  • change systems they do not understand
  • diagnose people when inputs are broken
  • rebuild workflows instead of stabilizing them
  • act without measurable baselines

This chapter eliminates those failures.


3.1 What a “System” Actually Is

A system is not:

  • a tool
  • a person
  • a checklist
  • a meeting

A system is:

A repeatable sequence that converts inputs into outputs under constraints, producing measurable results.

Every System Has Six Required Elements

  1. Trigger – what starts the work
  2. Inputs – what must be present to begin
  3. Steps – the transformation of work
  4. Owner(s) – accountability at each step
  5. Outputs – what “done” produces
  6. Signals – how performance is measured

If any element is missing, the system is unstable.
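The six-element rule can be checked mechanically against a system map. A sketch with an illustrative map for a hypothetical intake-review system (all names are assumptions):

```python
REQUIRED_ELEMENTS = ("trigger", "inputs", "steps", "owners", "outputs", "signals")

def missing_elements(system_map):
    """A system is unstable if any of the six required elements is absent or empty."""
    return [element for element in REQUIRED_ELEMENTS
            if not system_map.get(element)]

system_map = {
    "trigger": "ticket submitted",
    "inputs": ["request form", "priority"],
    "steps": ["validate", "assign", "review", "close"],
    "owners": {"validate": "intake lead", "review": "qa lead"},
    "outputs": "closed ticket meeting the quality standard",
    "signals": [],   # metrics not yet defined
}
print(missing_elements(system_map))  # ['signals']
```

Here the map fails on the sixth element: the system runs, but nothing measures it, which is exactly the instability this chapter targets.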


3.2 Manager Responsibility: Read Before You Change

Managers are not permitted to redesign a system they cannot explain.

Minimum Understanding Standard

A manager must be able to answer:

  • Where does work enter the system?
  • What inputs are required and validated?
  • Who owns each step?
  • What does “done” mean?
  • Where does quality get checked?
  • How is performance measured?

If the manager cannot answer these clearly and quantitatively, they are not ready to improve the system.


3.3 Inputs Are the Most Common Failure Point

Most downstream failures originate at intake.

Common Input Failures

  • incomplete information
  • unclear priority
  • missing standards
  • incorrect assumptions
  • unvalidated requests

Rule:

Managers must fix inputs before blaming execution.


3.4 Required System Metrics (Baseline Signals)

Managers must establish a baseline signal set for every system they manage.

Minimum Required Metrics

These are mandatory across all functions:

Metric → Purpose

  • Volume → How much work enters and exits
  • Cycle Time → How long work takes
  • Quality → Defects or rework
  • Stability → Variance over time
  • Escalations → Work pushed upward

A system without these metrics is not manageable.
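Stability can be quantified from weekly output alone. A sketch showing week-over-week percent change, plus the coefficient of variation as one optional summary statistic; the manual mandates the Stability signal, not this exact formula, and the weekly volumes are illustrative:

```python
from statistics import mean, pstdev

def wow_variance_pct(weekly_output):
    """Week-over-week % change in output: the raw stability signal."""
    return [round((b - a) / a * 100, 1)
            for a, b in zip(weekly_output, weekly_output[1:])]

def coefficient_of_variation(weekly_output):
    """Spread relative to the mean; lower means a more stable system."""
    return pstdev(weekly_output) / mean(weekly_output)

output = [40, 44, 38, 42]        # illustrative weekly volumes
print(wow_variance_pct(output))  # [10.0, -13.6, 10.5]
print(round(coefficient_of_variation(output), 3))
```

Either number works as the Stability metric; what matters is that it is computed the same way every week so the trend is comparable.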


3.5 Stability Before Improvement

Improvement without stability is guesswork.

Stability Defined

A system is considered stable when:

  • inputs are consistently complete
  • outputs meet defined standards
  • variance is within acceptable range
  • performance is predictable week over week

What Managers Must Do First

  • enforce input standards
  • clarify ownership
  • ensure outputs are defined
  • measure consistently

Only then may improvement begin.


3.6 When Managers Are Allowed to Build vs Improve

Managers May Build a System When:

  • no system exists
  • work is entirely ad hoc
  • outcomes are undefined
  • no ownership exists

Managers Must Improve (Not Rebuild) When:

  • a system exists but performs poorly
  • outputs are inconsistent
  • people “work around” the process
  • performance varies by person

Rule:

Rebuilding instead of improving destroys learning and hides root causes.


3.7 Diagnosing Failure: System vs People

Managers must diagnose using data, not instinct.

System Failure Signals

  • high variance
  • inconsistent inputs
  • unclear handoffs
  • frequent escalations
  • repeated rework

People Failure Signals

  • consistent misses by the same individual
  • standards are clear and documented
  • inputs are valid
  • others succeed in the same system

Rule:

Diagnose the system first. Only diagnose people after system stability is proven.


3.8 The Manager’s System Map (Required Artifact)

Every manager must maintain a system map for their area.

Minimum System Map Requirements

  • trigger
  • inputs
  • steps
  • ownership
  • outputs
  • metrics

This map does not need to be complex.
It must be accurate.


3.9 Feedback Loops Inside the System

Managers must define where feedback enters the system.

Required Feedback Loops

  • Input feedback: what gets rejected or corrected
  • Execution feedback: where work slows or breaks
  • Output feedback: defects and customer impact

Feedback without capture is noise.
Captured feedback becomes improvement data.


3.10 Chapter Validation (Required)

A manager passes Chapter 3 only if they can produce:

  1. A documented system map
  2. Defined inputs and outputs
  3. A live baseline metric set
  4. Identification of top 3 failure points
  5. Evidence that no changes were made before baselining

No baseline = no improvement authority.

 

Unit 3 — MD-I-U03

Understanding the System You Manage

System Mapping, Inputs/Outputs, and Baseline Signals
(Related to Manual Chapter 3)


Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U03
  • Related Manual Chapter: Chapter 3 — Understanding the System You Manage
  • Unit Name: System Literacy & Baselines
  • Timebox: 7–10 days
  • Prerequisites:
    • MD-I-U01 (Outcomes + Measurement Discipline)
    • MD-I-U02 (Leverage & Dependency Control)
  • Primary Outcome: Manager can clearly explain the system they manage and establish baseline metrics without changing the system.
  • Pass Standard: A complete system map exists with live baseline metrics and zero unapproved changes during the baselining period.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Can explain the system end to end without referencing tools or people
  • Has clearly defined:
    • trigger
    • inputs
    • steps
    • ownership
    • outputs
  • Has established baseline metrics:
    • volume
    • cycle time
    • quality/rework
    • variance
    • escalations
  • Has not changed the system while baselining
  • Can identify the top 3 failure points using data, not opinion

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

System Definition

  1. I can explain what triggers the system and what “done” produces.
  2. Inputs required to start work are explicitly defined and enforced.
  3. Each step in the system has a clear owner.
  4. Handoffs between steps are explicit (not assumed).

Measurement & Baselines (Critical)

  5. I track volume entering and exiting the system.
  6. I track cycle time or aging across the system.
  7. I track quality defects or rework.
  8. I track variance week over week.
  9. I track escalations tied to this system.

Discipline (Critical)

  10. I have not made process changes during the baseline period.
  11. I can explain current failures using metrics instead of stories.
  12. I can point to the top 3 failure points with data.

Critical questions: 5–9, 10–12
(If any critical question scores 0–1, the unit cannot pass.)

Scoring Outputs

  • Unit Score = average of all questions
  • Stage:
    • 0–1.4 Not Ready
    • 1.5–2.4 Developing
    • 2.5–3.4 Operating
    • 3.5–4.0 Leading

3) Evidence Checklist (Required to Pass)

  1. System Map (Required Artifact)
    Must include:
    • trigger
    • inputs
    • steps
    • owners
    • outputs
      (Diagram or structured list is acceptable)
  2. Input Definition Sheet
    • what inputs are required
    • what makes them valid
    • where invalid inputs go
  3. Baseline Metrics Snapshot
    • volume
    • cycle time / aging
    • quality / rework
    • variance
    • escalations
      (Minimum: 1 full week or 1 full cycle)
  4. Failure Point Analysis
    • top 3 failure points
    • metric evidence for each
    • impact (time, quality, volume)
  5. Change Freeze Declaration
    • written confirmation that no changes were made during baseline

4) Step-by-Step Training Plan (7–10 Days)

Day 1 — Select the System

  • Action: Choose one real, repeatable system you manage.
  • Output: System name + scope statement.
  • Validation: Must run at least weekly.

Day 2 — Define Trigger, Inputs, Outputs

  • Action: Write:
    • what starts the work
    • required inputs
    • what “done” produces
  • Output: System definition document.
  • Validation: Inputs must be objective and checkable.

Day 3 — Map the System Steps & Ownership

  • Action: Map steps from trigger to output.
  • Output: System map (diagram or list).
  • Validation: Every step has one owner.

Day 4 — Define Measurement Points

  • Action: Decide where and how to measure:
    • volume
    • time
    • quality
    • variance
    • escalations
  • Output: Measurement plan tied to steps.
  • Validation: Metrics must be captured without manual heroics.

Days 5–7 — Run Baseline (No Changes)

  • Action: Operate the system as-is.
  • Output: Baseline dataset (1 full cycle or 1 week).
  • Validation: No process changes during this period.

Day 8 — Identify Failure Points

  • Action: Analyze baseline data.
  • Output: Top 3 failure points with metric evidence.
  • Validation: Failures must be data-backed.

Days 9–10 — Package Evidence

  • Action: Assemble evidence packet for review.
  • Output: Links/screenshots/docs.
  • Validation: All required artifacts included.

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Manager’s Manager
  • Review Method: 30–40 minute evidence walkthrough
  • Pass Criteria (Must Meet All):
    • Complete system map exists
    • All baseline metrics present
    • No unapproved changes during baseline
    • Top 3 failure points identified using data
    • Critical questions scored ≥2
  • Fail Criteria (Any One Fails):
    • Missing metrics
    • Vague system definition
    • Process changes during baseline
    • Failure points described without data
  • Remediation Plan (If Fail):
    • Re-run baseline for 1 additional cycle
    • Complete missing metric capture
    • Re-submit failure analysis
    • Re-review in 7 days

6) Carry-Forward Commitments (After Passing)

  • System map remains current and versioned
  • Baseline metrics become the reference for all future improvements
  • No improvement proposals allowed without referencing this baseline

 

 

Chapter 4 — Executing the System Consistently

Operating Rhythm, Visibility, and Enforcement Without Micromanagement


Chapter Purpose

This chapter defines how managers ensure work gets done reliably, repeatedly, and independently.

Execution is not about effort or presence.
Execution is about standards, visibility, and rhythm.

This chapter exists to eliminate:

  • chaos masked as urgency
  • managers chasing work
  • inconsistent output
  • quality surprises
  • execution that collapses without supervision

4.1 What “Consistent Execution” Actually Means

Execution is consistent when:

  • work enters the system with complete inputs
  • work flows without constant clarification
  • outputs meet standards at a predictable pass rate
  • exceptions are visible early
  • results are predictable week over week

Consistency is measurable, not assumed.


4.2 The Manager’s Role in Execution

Managers do not execute work.
Managers design and enforce the conditions under which execution happens.

Manager Execution Responsibilities

  • define standards
  • enforce inputs
  • make work visible
  • review signals
  • intervene only at defined points
  • correct causes, not symptoms

Rule:

If the manager must chase work, execution design has failed.


4.3 The Operating Rhythm (Non-Negotiable)

Execution requires a fixed cadence.
Without rhythm, everything becomes reactive.

Required Cadence (Minimum)

Daily

  • Review exceptions and blockers
  • Enforce input standards
  • Address quality failures immediately

Weekly

  • Review core metrics
  • Review escalations
  • Identify repeat failures
  • Confirm priorities for next cycle

Monthly

  • Trend analysis
  • System stability review
  • Improvement backlog review

Managers who skip cadence lose control.


4.4 Required Execution Metrics

Managers must track execution using objective signals.

Core Execution KPIs (Mandatory)

  • Throughput – Work completed per period
  • Cycle Time – Speed of execution
  • Quality Gate Pass Rate – Output meeting standard
  • Exceptions – Work that breaks flow
  • Aging Work – Work stalled beyond target
  • Escalations – Decisions pushed upward

Execution without these metrics is guesswork.


4.5 Visibility Without Micromanagement

Managers should not ask, “Is it done?”
Managers should see whether it is done.

Visibility Rules

  • work status must be visible without asking
  • exceptions must surface automatically
  • aging work must be obvious
  • ownership must be clear at all times

Rule:

If a manager has to ask for status, the system is hiding information.


4.6 Quality Enforcement Through Gates

Quality must be enforced inside the system, not after failure.

Quality Gate Characteristics

  • clear criteria
  • fast to check
  • objective
  • tied to standards
  • blocks bad output from moving forward

Managers enforce quality by:

  • defining gates
  • reviewing pass/fail trends
  • correcting gate failures, not redoing work

4.7 Escalation Rules (Prevent Chaos)

Escalations are signals, not annoyances.

Proper Escalation Rules

Escalate only when:

  • inputs are invalid
  • standards conflict
  • capacity constraints exist
  • decisions exceed authority

Escalations must:

  • be logged
  • be categorized
  • reduce over time

Rising escalations indicate system failure.


4.8 The Manager Intervention Standard

Managers intervene only at defined control points.

Acceptable Interventions

  • correcting inputs
  • enforcing standards
  • clearing blockers
  • coaching on execution
  • adjusting capacity

Unacceptable Interventions

  • taking over tasks
  • reworking outputs silently
  • bypassing the system
  • rescuing work without logging the cause

Rule:

Managers intervene to strengthen the system, not replace it.


4.9 Detecting Execution Drift

Execution drift is gradual and dangerous.

Drift Signals

  • increasing exceptions
  • rising cycle time
  • declining quality
  • more “just this once” fixes
  • increased manager involvement

Managers must respond early or drift becomes collapse.


4.10 Feedback Loops Inside Execution

Execution generates feedback constantly.

Required Feedback Loops

  • daily exception review
  • weekly trend review
  • monthly stability assessment

Feedback must be:

  • captured
  • categorized
  • prioritized
  • tracked to resolution

Uncaptured feedback is wasted data.


4.11 Chapter Validation (Required)

A manager passes Chapter 4 only if they can demonstrate:

  1. A documented operating rhythm
  2. Live execution metrics
  3. Visible work status without asking
  4. Defined quality gates
  5. Logged escalations with trend data
  6. Evidence execution holds without constant manager involvement

No visibility = no control.

 

Unit 4 — MD-I-U04

Executing the System Consistently

Operating Rhythm, Visibility, Quality Gates, and Enforcement
(Related to Manual Chapter 4)


Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U04
  • Related Manual Chapter: Chapter 4 — Executing the System Consistently
  • Unit Name: Execution Control & Visibility
  • Timebox: 10–14 days
  • Prerequisites:
    • MD-I-U01 (Outcomes + Measurement Discipline)
    • MD-I-U02 (Leverage & Dependency Control)
    • MD-I-U03 (System Literacy & Baselines)
  • Primary Outcome: Manager can run the system with predictable results, visible status, and enforced standards without chasing work or doing it themselves.
  • Pass Standard: Execution metrics are visible, cadence is followed, quality gates exist, and output remains stable for two consecutive cycles.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Has a documented operating rhythm (daily/weekly)
  • Can see work status without asking
  • Tracks execution KPIs consistently:
    • throughput
    • cycle time / aging
    • quality gate pass rate
    • exceptions
    • escalations
  • Enforces at least one quality gate
  • Intervenes only at defined control points
  • Shows stable output for 2 consecutive cycles without heavy manager involvement

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Operating Rhythm

  1. I run a daily or near-daily exception review.
  2. I run a weekly execution review focused on trends, not stories.
  3. My cadence is scheduled and protected.

Visibility (Critical)

  4. Work status is visible without me asking individuals.
  5. Exceptions are logged and reviewed, not discovered late.
  6. Aging or stalled work is clearly visible.

Execution Metrics (Critical)

  7. I track throughput weekly.
  8. I track cycle time or aging weekly.
  9. I track quality defects or rework weekly.
  10. I track exceptions and escalations weekly.

Quality Enforcement (Critical)

  11. At least one quality gate exists and is enforced.
  12. Quality failures are blocked and corrected, not passed downstream.

Manager Behavior

  13. I intervene at defined control points, not ad hoc.
  14. I do not rework tasks silently.

Critical questions: 4–6, 7–12
(If any critical question scores 0–1, the unit cannot pass.)

Scoring Outputs

  • Unit Score = average of all questions
  • Stage:
    • 0–1.4 Not Ready
    • 1.5–2.4 Developing
    • 2.5–3.4 Operating
    • 3.5–4.0 Leading

3) Evidence Checklist (Required to Pass)

  1. Operating Rhythm Document
    • daily exception review
    • weekly execution review
    • meeting purpose + duration
  2. Execution Dashboard / Tracker
    Must show:
    • throughput
    • cycle time / aging
    • quality / rework
    • exceptions
    • escalations
      (Minimum: 2 consecutive cycles)
  3. Exception Log
    • date
    • category
    • owner
    • resolution status
  4. Quality Gate Definition
    • where it occurs
    • criteria
    • pass/fail outcome
    • owner
  5. Stability Evidence
    • 2 consecutive cycles with output within tolerance
    • notes on any deviations

4) Step-by-Step Training Plan (10–14 Days)

Day 1 — Define the Operating Rhythm

  • Action: Define:
    • daily exception review (10–15 min)
    • weekly execution review (30 min)
  • Output: Rhythm doc + calendar invites.
  • Validation: Cadence is realistic and repeatable.

Day 2 — Define What Counts as an Exception

  • Action: Write objective exception definitions:
    • late
    • incomplete
    • failed quality
    • blocked
  • Output: Exception definition list.
  • Validation: Another person could log the same exception.
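
Objective exception definitions can be written as checkable rules rather than judgment calls. A minimal sketch, assuming hypothetical work-item fields (`due`, `done`, `missing_fields`, `gate_passed`, `blocked_days`) and illustrative thresholds:

```python
from datetime import date

# Hypothetical objective exception rules: two people applying these
# to the same work item should log the same exceptions.
EXCEPTION_RULES = {
    "late":           lambda item: item["due"] < date.today() and not item["done"],
    "incomplete":     lambda item: item["done"] and item["missing_fields"] > 0,
    "failed_quality": lambda item: item["done"] and not item["gate_passed"],
    "blocked":        lambda item: item["blocked_days"] >= 1,
}

def log_exceptions(item):
    """Return every exception category this work item triggers."""
    return [name for name, rule in EXCEPTION_RULES.items() if rule(item)]
```

The design point is that each rule is a yes/no test on recorded data, so logging does not depend on who is looking.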

Day 3 — Build Execution Dashboard (MVP)

  • Action: Extend existing KPI tracker to include execution KPIs.
  • Output: Live dashboard with current data.
  • Validation: Metrics update without manual heroics.

Days 4–6 — Run the Cadence

  • Action: Operate daily + weekly reviews exactly as defined.
  • Output: Logged exceptions and review notes.
  • Validation: Reviews occur on schedule.

Day 7 — Install One Quality Gate

  • Action: Identify the highest-defect point and add a gate.
  • Output: Quality gate checklist or rule.
  • Validation: Gate blocks bad output.

Days 8–11 — Enforce Without Doing

  • Action: Enforce standards without reworking tasks.
  • Output: Logged corrections and escalations.
  • Validation: Manager-touch rate does not spike.

Days 12–14 — Validate Stability

  • Action: Review 2 full cycles of execution data.
  • Output: Stability assessment:
    • within tolerance?
    • exceptions trending down or stable?
  • Validation: Output holds without increased manager involvement.
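
The "within tolerance" check above can be made explicit. A sketch, assuming per-cycle output counts and a hypothetical 10% tolerance (set whatever tolerance your standard defines):

```python
def within_tolerance(cycle_outputs, tolerance=0.10):
    """Check whether output across cycles stays within a fractional
    tolerance of the first cycle's output (10% assumed here)."""
    baseline = cycle_outputs[0]
    return all(abs(c - baseline) / baseline <= tolerance
               for c in cycle_outputs)
```

Two consecutive cycles passing this check is the stability evidence the unit requires.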

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Manager’s Manager
  • Review Method: 30–40 minute walkthrough of dashboard + logs
  • Pass Criteria (Must Meet All):
    • Cadence exists and is followed
    • Execution KPIs visible for 2 cycles
    • Quality gate exists and is enforced
    • Exceptions logged consistently
    • Critical questions scored ≥2
  • Fail Criteria (Any One Fails):
    • Missing execution metrics
    • No quality gate
    • Cadence not followed
    • Manager reworking tasks silently
  • Remediation Plan (If Fail):
    • Re-run cadence for 1 additional cycle
    • Add missing metrics or gates
    • Re-review in 7 days

6) Carry-Forward Commitments (After Passing)

  • Daily exception review continues
  • Weekly execution review continues
  • Quality gate remains enforced
  • Execution KPIs remain visible and reviewed

 

 

Chapter 5 — Leading People Inside the System

Standards, Coaching, Motivation, Delegation, and Accountability


Chapter Purpose

This chapter defines the manager’s responsibility with people inside a measurable operating system.

People leadership is not emotional management.
It is performance leadership.

This chapter exists to:

  • eliminate “managing by feel”
  • prevent favoritism and inconsistency
  • replace babysitting with ownership
  • ensure people grow while systems remain stable

5.1 The Manager’s Goal With People

The manager’s goal is not comfort, happiness, or control.
The manager’s goal is:

To develop people who can consistently produce outcomes inside the system without dependency on the manager.

Success is measured when:

  • people know the standard
  • people hit the standard
  • people improve over time
  • people solve problems at the right level

5.2 Standards Before Expectations

Expectations are verbal.
Standards are observable.

What a Standard Includes

  • definition of “done”
  • quality criteria
  • time constraints
  • ownership
  • escalation rules

Rule:

If a standard cannot be measured, it cannot be enforced fairly.

Managers must convert expectations into standards before addressing performance.


5.3 People Performance Is Measurable

Managers must evaluate people using signals, not impressions.

Core People Performance Metrics

  • Output per period – Reliability
  • Quality / rework rate – Skill + attention
  • SOP adherence – Discipline
  • Responsiveness – Ownership
  • Escalation frequency – Judgment
  • Improvement trend – Coachability

These metrics are used to:

  • coach
  • train
  • delegate
  • correct
  • promote

They are not used for punishment in isolation.


5.4 Coaching for Performance (Not Comfort)

Coaching exists to close the gap between:

  • current performance
  • required standard

Effective Coaching Characteristics

  • data-backed
  • specific
  • timely
  • focused on behavior and output
  • tied to a clear improvement action

Coaching Flow

  1. Show the metric
  2. Restate the standard
  3. Identify the gap
  4. Agree on corrective action
  5. Set review date

Rule:

Coaching without data becomes opinion. Opinion becomes conflict.


5.5 Motivation Through Clarity and Momentum

Motivation is a byproduct of competence and progress, not speeches.

The Four Motivation Levers

  1. Clarity – people know what good looks like
  2. Enablement – they have tools and training
  3. Recognition – standards met are acknowledged
  4. Accountability – standards missed are addressed

If any lever is missing, motivation degrades.


5.6 Delegation With Ownership (Not Task Dumping)

Delegation is not assigning tasks.
Delegation is transferring outcomes with authority.

Proper Delegation Includes

  • desired outcome
  • success criteria
  • decision rights
  • escalation thresholds
  • review cadence

Delegation Failure Signals

  • repeated questions
  • constant escalation
  • inconsistent output
  • manager redoing work

These signal a delegation design issue, not a people problem.


5.7 Accountability Without Emotion

Accountability is enforcing standards consistently.

Accountability Is Triggered By

  • missed standards
  • trend deviation
  • repeated failure signals

Accountability Is Not

  • anger
  • lectures
  • personality judgments
  • public embarrassment

Rule:

Fair accountability requires objective standards and consistent application.


5.8 The Manager’s Role in Conflict

Most conflict is caused by:

  • unclear standards
  • competing priorities
  • invisible work
  • inconsistent enforcement

Managers must diagnose system causes before interpersonal ones.

Conflict Resolution Order

  1. Clarify standards
  2. Clarify ownership
  3. Review metrics
  4. Adjust system
  5. Address behavior if needed

5.9 The 1:1 Meeting (Required Tool)

1:1s are for performance and development, not status.

Required 1:1 Agenda

  • metric review
  • wins and misses
  • skill development
  • upcoming risks
  • support needed

1:1s without metrics become vent sessions.


5.10 Feedback Loops for People Development

Managers must run explicit feedback loops.

Required Loops

  • weekly performance check
  • monthly trend review
  • quarterly capability assessment

Feedback must be:

  • logged
  • tracked
  • revisited

Untracked feedback does not improve performance.


5.11 Chapter Validation (Required)

A manager passes Chapter 5 only if they can demonstrate:

  1. Clear performance standards for their team
  2. People metrics tied to those standards
  3. At least one documented coaching cycle
  4. Delegation with decision rights
  5. Accountability actions based on data
  6. Evidence of improvement over time

No standards = no accountability.
No metrics = no coaching authority.

 

Unit 5 — MD-I-U05

Leading People Inside the System

Standards, Coaching, Motivation, Delegation, and Accountability
(Related to Manual Chapter 5)


Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U05
  • Related Manual Chapter: Chapter 5 — Leading People Inside the System
  • Unit Name: People Leadership & Performance Control
  • Timebox: 10–14 days
  • Prerequisites:
    • MD-I-U01 (Outcomes + Measurement Discipline)
    • MD-I-U02 (Leverage & Dependency Control)
    • MD-I-U03 (System Literacy & Baselines)
    • MD-I-U04 (Execution Control & Visibility)
  • Primary Outcome: Manager can lead people using standards, metrics, and coaching—without emotion, favoritism, or babysitting.
  • Pass Standard: Clear performance standards exist; coaching is data-backed; delegation includes decision rights; improvement trends are visible.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Has clear performance standards for their team
  • Evaluates people using metrics, not impressions
  • Runs consistent, metric-driven 1:1s
  • Delegates outcomes with authority and escalation rules
  • Coaches for improvement with documented follow-up
  • Enforces accountability consistently and fairly
  • Shows measurable improvement in at least one person or team metric

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Standards & Clarity (Critical)

  1. Performance standards are documented and measurable.
  2. Each team member knows what “good” looks like for their role.

Measurement & Fairness (Critical)

  3. I use objective metrics to assess performance.
  4. Performance conversations start with data, not opinion.

Coaching Discipline

  5. I run regular 1:1s focused on performance and development.
  6. Coaching conversations end with a specific improvement action.
  7. I follow up to confirm improvement occurred.

Delegation & Ownership (Critical)

  8. Delegated work includes outcome, authority, and escalation rules.
  9. Team members make decisions without routing everything to me.

Accountability

  10. I address performance gaps early using standards and metrics.
  11. I apply accountability consistently across people.

Motivation & Growth

  12. Improvements and standards met are recognized.
  13. At least one performance metric is trending positively.

Critical questions: 1–4, 8–9
(If any critical question scores 0–1, the unit cannot pass.)

Scoring Outputs

  • Unit Score = average of all questions
  • Stage:
    • 0–1.4 Not Ready
    • 1.5–2.4 Developing
    • 2.5–3.4 Operating
    • 3.5–4.0 Leading

3) Evidence Checklist (Required to Pass)

  1. Performance Standards Document
    • role-level standards
    • measurable criteria
    • quality and time expectations
  2. People Metrics Snapshot
    • output per person or role
    • quality/rework by person or role
    • responsiveness or ownership metric
  3. 1:1 Agenda Template + Logs
    • metric review
    • coaching notes
    • action items
    • follow-up dates
  4. Delegation Record
    • outcome delegated
    • decision rights
    • escalation thresholds
  5. Coaching Cycle Example
    • metric → coaching → action → follow-up result

4) Step-by-Step Training Plan (10–14 Days)

Day 1 — Define Performance Standards

  • Action: Write clear standards for each role you manage.
  • Output: Performance standards doc.
  • Validation: Standards must be measurable.

Day 2 — Align Metrics to Standards

  • Action: Map 2–3 metrics per role to standards.
  • Output: Standards → Metrics mapping.
  • Validation: Metrics are objective and trackable.

Day 3 — Build People Metrics View

  • Action: Add people metrics to your dashboard.
  • Output: People metrics snapshot.
  • Validation: Metrics update regularly.

Days 4–6 — Run Metric-Driven 1:1s

  • Action: Conduct at least one 1:1 per person using metrics.
  • Output: 1:1 logs with actions and follow-up dates.
  • Validation: Conversations reference data.

Day 7 — Delegate With Authority

  • Action: Delegate one outcome with clear decision rights.
  • Output: Delegation record.
  • Validation: Team member owns decisions within scope.

Days 8–11 — Coach for Improvement

  • Action: Coach one gap using metrics.
  • Output: Coaching cycle record.
  • Validation: Follow-up shows change or escalation.

Days 12–14 — Review Trends

  • Action: Review people metrics trends.
  • Output: Improvement or corrective plan.
  • Validation: Trend direction is clear.

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Manager’s Manager
  • Review Method: 30–40 minute evidence walkthrough
  • Pass Criteria (Must Meet All):
    • Performance standards exist and are measurable
    • People metrics are visible
    • Metric-driven 1:1s occurred
    • Delegation includes decision rights
    • At least one coaching cycle shows movement
    • Critical questions scored ≥2
  • Fail Criteria (Any One Fails):
    • No documented standards
    • Coaching without data
    • Delegation without authority
    • Inconsistent accountability
  • Remediation Plan (If Fail):
    • Rewrite standards
    • Re-run 1:1s using metrics
    • Redo delegation with authority
    • Re-review in 7 days

6) Carry-Forward Commitments (After Passing)

  • Metric-driven 1:1s continue weekly/bi-weekly
  • People metrics reviewed monthly
  • Delegation outcomes reviewed regularly
  • Coaching cycles documented and closed

 

Chapter 6 — Preventing Dependency & Building Ownership

Eliminating Rescue Behavior and Creating Self-Sustaining Teams


Chapter Purpose

This chapter defines how managers prevent teams from becoming dependent on them while still supporting performance.

Dependency is one of the most expensive management failures:

  • it hides system weaknesses
  • it caps team capacity
  • it burns out managers
  • it collapses when the manager is absent

This chapter exists to ensure teams can perform, decide, and improve without constant manager intervention.


6.1 Dependency Is a System Failure, Not a Personality Issue

Dependency forms when:

  • managers solve problems instead of fixing causes
  • standards are unclear or unenforced
  • decision rights are centralized
  • rescues replace accountability

Rule:

If the team depends on the manager to meet baseline output, the system is broken—even if results look acceptable.


6.2 The Difference Between Support and Rescue

Support (Allowed)

  • removes blockers
  • reinforces standards
  • teaches decision-making
  • improves inputs or training
  • reduces future intervention

Rescue (Failure)

  • silently fixes work
  • bypasses the system
  • replaces ownership
  • repeats without prevention
  • creates expectation of saving

Rule:

Help that does not reduce future help creates dependency.


6.3 Quantitative Dependency Signals

Managers must track dependency using data.

Required Dependency Metrics

  • Manager-touch rate – Reliance on manager
  • Repeat rescues – Unfixed root causes
  • Escalation recurrence – Poor decision delegation
  • Output variance – Stability without manager
  • Absence delta – True system ownership

Red Flag Thresholds

  • Manager touches >25–30% of routine work
  • Same rescue repeats more than twice
  • Output drops >15% during absence
  • Escalations rise week over week

These require immediate corrective action.
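
The red-flag thresholds above reduce to a weekly check. A sketch with hypothetical input shapes (fractions for rates, a per-category rescue count, weekly escalation counts oldest first); the >25% cut uses the lower bound of the stated 25–30% range:

```python
def dependency_red_flags(touch_rate, rescue_counts, absence_drop, escalations):
    """Return the dependency red flags triggered this week.

    touch_rate:    fraction of routine work the manager touched (0-1)
    rescue_counts: dict of rescue category -> times repeated
    absence_drop:  fractional output drop during manager absence (0-1)
    escalations:   weekly escalation counts, oldest first
    """
    flags = []
    if touch_rate > 0.25:  # lower bound of the 25-30% threshold
        flags.append("manager-touch rate above threshold")
    if any(n > 2 for n in rescue_counts.values()):
        flags.append("same rescue repeated more than twice")
    if absence_drop > 0.15:
        flags.append("output dropped >15% during absence")
    rising = len(escalations) >= 2 and all(
        a < b for a, b in zip(escalations, escalations[1:]))
    if rising:
        flags.append("escalations rising week over week")
    return flags
```

Any non-empty result is the trigger for immediate corrective action.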


6.4 The Rescue Ticket System (Non-Negotiable)

Every rescue must generate a rescue ticket.

Rescue Ticket Requirements

  • What failed (input, skill, standard, capacity, tooling)
  • Why it failed (root cause)
  • Who owns the fix
  • What change prevents recurrence
  • Metric to verify resolution
  • Review date

Rule:

If it isn’t logged, it didn’t happen—and it will happen again.
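
The ticket requirements above map directly to a structured record. A sketch with hypothetical field names mirroring the list (not a prescribed schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RescueTicket:
    """One logged rescue, per the requirements above."""
    what_failed: str     # input, skill, standard, capacity, or tooling
    root_cause: str      # why it failed
    fix_owner: str       # who owns the fix
    prevention: str      # change that prevents recurrence
    verify_metric: str   # metric that verifies resolution
    review_date: date

    CATEGORIES = ("input", "skill", "standard", "capacity", "tooling")

    def __post_init__(self):
        # Force every ticket into one of the five failure categories
        # so repeat patterns can be counted later.
        if self.what_failed not in self.CATEGORIES:
            raise ValueError(f"what_failed must be one of {self.CATEGORIES}")
```

Categorizing at logging time is what makes the repeat-rescue analysis in the unit plan possible.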


6.5 Delegation as Dependency Prevention

Delegation must reduce manager load over time.

Delegation Must Include

  • outcome definition
  • success criteria
  • decision rights
  • escalation thresholds
  • review cadence

Delegation Failure Signals

  • constant clarification requests
  • frequent upward decisions
  • inconsistent outputs
  • manager rework

These indicate a delegation design issue, not laziness.


6.6 Teaching Problem-Solving (Not Providing Answers)

Managers must teach how to think, not just what to do.

Manager Coaching Questions

  • What standard applies?
  • What data do you have?
  • What options exist?
  • What decision can you make?
  • When would you escalate?

Rule:

If the manager answers before the team thinks, ownership dies.


6.7 Removing Yourself Without Breaking the System

Managers must deliberately test independence.

Independence Tests

  • delay response to non-critical questions
  • temporarily step back from approvals
  • assign ownership with full authority
  • simulate short absences

Success Criteria

  • output remains stable
  • decisions happen at the right level
  • escalations stay within thresholds

6.8 Correcting Dependency Once It Exists

If dependency is already present, managers must:

  1. Stop rescuing silently
  2. Reinforce standards
  3. Clarify decision rights
  4. Fix inputs or training gaps
  5. Track reduction in rescues

Dependency is removed by system repair, not pressure.


6.9 Feedback Loops for Ownership

Ownership must be reviewed explicitly.

Required Ownership Reviews

  • weekly rescue review
  • monthly dependency trend review
  • quarterly independence test

Ownership without review decays.


6.10 Chapter Validation (Required)

A manager passes Chapter 6 only if they can demonstrate:

  1. Dependency metrics with trends
  2. Logged rescue tickets
  3. Reduction in repeat rescues
  4. Delegation with decision rights
  5. Stable output during manager step-back
  6. Evidence of increasing team ownership

No reduction in dependency = no pass.

 

Unit 6 — MD-I-U06

Preventing Dependency & Building Ownership

Rescue Control, Decision Rights, and Independence Testing
(Related to Manual Chapter 6)


Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U06
  • Related Manual Chapter: Chapter 6 — Preventing Dependency & Building Ownership
  • Unit Name: Ownership & Independence
  • Timebox: 10–14 days
  • Prerequisites:
    • MD-I-U01 through MD-I-U05
  • Primary Outcome: Manager reduces dependency signals and proves the team can operate with less manager intervention.
  • Pass Standard: Dependency metrics trend down and at least one independence test is passed without performance collapse.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Logs every rescue and escalation
  • Differentiates support from rescue
  • Delegates outcomes with decision rights
  • Runs at least one independence (step-back) test
  • Shows a measurable reduction in dependency signals
  • Does not silently fix work

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Dependency Awareness (Critical)

  1. I can explain the difference between support and rescue.
  2. I log every rescue I perform.

Dependency Measurement (Critical)

  3. I track manager-touch rate weekly.
  4. I track escalation frequency weekly.
  5. I track repeat rescues by category.

Dependency Reduction (Critical)

  6. At least one repeat rescue has been eliminated via a prevention fix.
  7. Dependency metrics are trending down or stable within tolerance.

Ownership & Delegation

  8. Delegation includes decision rights and escalation thresholds.
  9. Team members resolve issues without routing them to me.

Independence Testing

  10. I have run at least one independence test.
  11. Output did not collapse during the test.
  12. Failures during the test were logged, not rescued.

Critical questions: 1–7
(If any critical question scores 0–1, the unit cannot pass.)


3) Evidence Checklist (Required to Pass)

  1. Dependency Tracker
    • manager-touch rate
    • escalations
    • rescues
    • repeat rescues (weekly)
  2. Rescue Log
    • date
    • issue
    • category (input / standard / skill / capacity / tooling)
    • prevention action
  3. Prevention Fix Record
    • root cause
    • fix implemented
    • metric targeted
    • review date
  4. Delegation Records
    • outcome
    • decision rights
    • escalation rules
  5. Independence Test Record
    • what was tested
    • duration
    • result
    • issues logged

4) Step-by-Step Training Plan (10–14 Days)

Day 1 — Define Rescue vs Support

  • Action: Write clear definitions.
  • Output: Rescue vs Support definitions doc.
  • Validation: Definitions are objective.

Day 2 — Start Rescue Logging

  • Action: Log all rescues and escalations.
  • Output: Rescue log entries.
  • Validation: Nothing is fixed silently.

Days 3–6 — Identify Repeat Pattern

  • Action: Categorize rescues and find the most common cause.
  • Output: Repeat rescue analysis.
  • Validation: Pattern is data-backed.

Day 7 — Implement One Prevention Fix

  • Action: Fix the root cause.
  • Output: Prevention fix record.
  • Validation: Fix changes inputs, standards, training, or decision rights.

Days 8–10 — Run Independence Test

  • Action: Step back from one approval or decision area.
  • Output: Independence test log.
  • Validation: Output remains within tolerance.

Days 11–14 — Review Dependency Trend

  • Action: Review metrics trend.
  • Output: Dependency trend summary.
  • Validation: Downward or stable trend.

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Manager’s Manager
  • Pass Criteria:
    • Dependency tracker active
    • Rescue log with categories
    • One prevention fix implemented
    • Independence test completed
    • Critical questions ≥2
  • Fail Criteria:
    • Silent rescues
    • No prevention fix
    • Independence test skipped
  • Remediation Plan (If Fail):
    • Extend logging for 7 days
    • Repeat the independence test

6) Carry-Forward Commitments (After Passing)

  • Rescue logging continues until repeat rescues trend down consistently
  • Independence tests run quarterly
  • Delegation authority remains explicit

 

 

Chapter 7 — Improving Systems Without Restarting

Disciplined Change, Feedback Loops, and Version Control


Chapter Purpose

This chapter defines how managers improve systems without destabilizing execution.

Most organizations fail here by:

  • rebuilding instead of refining
  • changing too many variables at once
  • acting without baselines
  • confusing motion with improvement

This chapter ensures improvement is measured, controlled, and cumulative.


7.1 Why Restarting Is a Management Failure

Restarting destroys:

  • baselines
  • learning
  • trust in standards
  • team momentum

Rule:

If a manager cannot explain what changed and why using metrics, the change is invalid.

Restarting is often a sign of:

  • impatience
  • lack of measurement
  • discomfort with incremental progress

7.2 The Improvement Prerequisites (Non-Negotiable)

A manager may not improve a system unless all are true:

  • the system is stable (Chapter 3)
  • execution is consistent (Chapter 4)
  • dependency is controlled (Chapter 6)
  • baseline metrics exist

No baseline = no improvement authority.


7.3 The Catalyst Improvement Loop

All improvement follows this loop—no exceptions:

Baseline → Diagnose → Hypothesis → Change → Measure → Decide → Standardize

Skipping steps invalidates results.
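The loop can be expressed as an ordered gate that refuses out-of-order steps. A minimal sketch, not a prescribed tool; the stage names simply mirror the loop above:

```python
# Minimal sketch of the improvement loop as an ordered gate.
# A step is accepted only if every earlier step has been completed.
LOOP = ["baseline", "diagnose", "hypothesis", "change",
        "measure", "decide", "standardize"]

def next_allowed_step(completed):
    """Return the only step that may run next, given completed steps."""
    for step in LOOP:
        if step not in completed:
            return step
    return None  # loop finished; change is standardized

def record_step(completed, step):
    """Append a step, refusing skips: skipping steps invalidates results."""
    allowed = next_allowed_step(completed)
    if step != allowed:
        raise ValueError(f"cannot run '{step}' before '{allowed}'")
    return completed + [step]

state = record_step([], "baseline")
state = record_step(state, "diagnose")
# record_step(state, "change")  # would raise: 'hypothesis' comes first
```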


7.4 Diagnosing With Data (Not Opinion)

Improvement starts with diagnosis, not ideas.

Required Diagnostic Inputs

  • trend data (minimum 4 weeks)
  • variance analysis
  • failure categorization
  • impact ranking

Diagnostic Questions

  • Where does variance exceed tolerance?
  • Which failures repeat?
  • Which step constrains throughput?
  • What metric moved first?

Rule:

Diagnose patterns, not events.


7.5 Improvement Hypotheses Must Be Measurable

Every proposed change must include:

  • target metric
  • expected direction of change
  • magnitude (estimate)
  • review date

Example

  • Metric: cycle time
  • Baseline: 6.2 days
  • Hypothesis: standardizing intake reduces cycle time
  • Target: ≤5.0 days
  • Review: 14 days

Ideas without hypotheses are noise.
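One way to keep hypotheses honest is to refuse any proposal missing a required field. A minimal sketch using the example values above; field names are illustrative, and the comparison assumes a lower-is-better metric:

```python
# A proposed change is only a hypothesis if all four fields are present.
REQUIRED = ("metric", "baseline", "target", "review_days")

def is_hypothesis(proposal):
    """True only if the proposal carries every required field."""
    return all(proposal.get(k) is not None for k in REQUIRED)

def met_target(result, proposal):
    """Compare the post-change value to the target (lower is better here)."""
    return result <= proposal["target"]

intake_fix = {"metric": "cycle_time_days", "baseline": 6.2,
              "target": 5.0, "review_days": 14}

assert is_hypothesis(intake_fix)
assert not is_hypothesis({"metric": "cycle_time_days"})  # an idea, not a hypothesis
assert met_target(4.8, intake_fix)
```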


7.6 One Change at a Time (Change Control)

Managers must isolate variables.

Change Rules

  • change one variable at a time
  • document before/after
  • hold change long enough to measure
  • roll back if results degrade

Rule:

Multiple simultaneous changes eliminate attribution.


7.7 Feedback Loops That Drive Improvement

Feedback must be captured, categorized, and reviewed.

Required Feedback Sources

  • input rejections
  • quality gate failures
  • escalations
  • cycle time outliers
  • customer impact

Feedback Review Cadence

  • weekly: identify candidates
  • monthly: approve changes
  • quarterly: retire obsolete standards

Feedback without cadence becomes backlog rot.


7.8 Version Control for Processes and SOPs

Processes and SOPs must be versioned.

Required Versioning Fields

  • version number
  • change summary
  • reason for change
  • affected metrics
  • effective date
  • owner approval

Versioning Rules

  • no silent changes
  • no parallel versions
  • current version clearly marked

Unversioned change is chaos.
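The required fields can be enforced mechanically. A sketch assuming a simple list of version records rather than any particular document system; the field names paraphrase the list above:

```python
# Each SOP version must carry the six required fields; exactly one
# version may be marked current ("no parallel versions").
FIELDS = ("version", "change_summary", "reason", "affected_metrics",
          "effective_date", "approved_by")

def validate_history(history):
    """Reject silent or parallel changes; return the current version."""
    for record in history:
        missing = [f for f in FIELDS if f not in record]
        if missing:
            raise ValueError(f"silent change: v{record.get('version')} missing {missing}")
    current = [r for r in history if r.get("current")]
    if len(current) != 1:
        raise ValueError("exactly one version must be marked current")
    return current[0]

history = [
    {"version": "1.0", "change_summary": "initial SOP", "reason": "baseline",
     "affected_metrics": [], "effective_date": "2024-01-05", "approved_by": "owner"},
    {"version": "1.1", "change_summary": "standardized intake form",
     "reason": "reduce cycle time", "affected_metrics": ["cycle_time_days"],
     "effective_date": "2024-02-01", "approved_by": "owner", "current": True},
]
assert validate_history(history)["version"] == "1.1"
```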


7.9 Measuring Improvement Effectiveness

A change is successful only if:

  • target metric improves
  • no other critical metric degrades
  • improvement sustains over time

Required Validation Window

  • minimum 2 cycles
  • preferably 30 days for stability

Short-term gains without stability are rejected.
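The three success conditions translate directly into a check. A sketch that assumes every metric shown is lower-is-better; the guardrail structure is illustrative:

```python
# A change succeeds only if the target metric improves, no guardrail
# metric degrades, and the gain holds across the validation window.
def change_succeeded(target_before, target_after_window, guardrails):
    """
    target_after_window: target-metric values over >=2 post-change cycles.
    guardrails: {metric: (before, after)} for other critical metrics.
    All metrics here are lower-is-better.
    """
    improved = all(v < target_before for v in target_after_window)
    sustained = len(target_after_window) >= 2
    no_degradation = all(after <= before for before, after in guardrails.values())
    return improved and sustained and no_degradation

# Cycle time improved and held; rework did not degrade.
assert change_succeeded(6.2, [5.1, 4.9], {"rework_rate": (0.08, 0.07)})
# One good cycle is not enough; short-term gains are rejected.
assert not change_succeeded(6.2, [4.9], {"rework_rate": (0.08, 0.07)})
```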


7.10 When to Stop Improving

Managers must know when to stop.

Stop improving when:

  • metrics are within tolerance
  • marginal gains cost more than value
  • instability is introduced

Rule:

Optimization without constraint awareness creates fragility.


7.11 Chapter Validation (Required)

A manager passes Chapter 7 only if they can demonstrate:

  1. A stable baseline metric set
  2. A documented diagnosis
  3. A measurable improvement hypothesis
  4. One controlled change
  5. Before/after metric comparison
  6. Versioned documentation update
  7. Evidence execution remained stable

No versioning = no improvement credit.

 

Unit 7 — MD-I-U07

Improving Systems Without Restarting

Baselines, Hypotheses, Feedback Loops, and Versioned Change
(Related to Manual Chapter 7)


Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U07
  • Related Manual Chapter: Chapter 7 — Improving Systems Without Restarting
  • Unit Name: Disciplined Improvement
  • Timebox: 10–14 days
  • Prerequisites:
    • MD-I-U01 through MD-I-U06
  • Primary Outcome: Manager can improve a system using data, controlled change, and feedback loops—without restarting or destabilizing execution.
  • Pass Standard: One improvement is implemented using a baseline → hypothesis → change → measure cycle, with documented results and versioned updates.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Uses baseline metrics before proposing change
  • Diagnoses problems using patterns, not anecdotes
  • Writes measurable improvement hypotheses
  • Changes one variable at a time
  • Uses feedback loops to validate impact
  • Versions SOPs/processes instead of rewriting them
  • Rolls back changes that don’t improve metrics

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Improvement Discipline (Critical)

  1. I do not change a system without baseline metrics.
  2. I diagnose issues using trend data, not single events.

Hypothesis & Measurement (Critical)

  3. Each proposed change includes a target metric and expected direction of change.
  4. I define a review date before implementing a change.

Change Control (Critical)

  5. I change only one variable at a time.
  6. I hold changes long enough to measure impact.

Feedback Loops

  7. I use input, execution, and output feedback to inform improvements.
  8. Feedback is logged and reviewed on cadence.

Versioning & Stability

  9. SOPs/processes are versioned with change notes.
  10. Execution remains stable during improvement.

Critical questions: 1–6
(If any critical question scores 0–1, the unit cannot pass.)


3) Evidence Checklist (Required to Pass)

  1. Baseline Metrics Snapshot
    • volume
    • cycle time
    • quality / rework
    • variance
    • escalations
  2. Diagnosis Summary
    • problem statement
    • supporting data
    • why this issue matters
  3. Improvement Hypothesis
    • metric targeted
    • baseline value
    • expected change
    • review date
  4. Change Record
    • description of change
    • date implemented
    • variable changed
  5. Before/After Comparison
    • baseline vs post-change metrics
    • impact summary
  6. Versioned SOP / Process Update
    • version number
    • change summary
    • effective date

4) Step-by-Step Training Plan (10–14 Days)

Day 1 — Select One Improvement Target

  • Action: Choose one system issue backed by baseline data.
  • Output: Improvement target statement.
  • Validation: Issue must be recurring and measurable.

Day 2 — Diagnose Using Data

  • Action: Analyze trends and variance.
  • Output: Diagnosis summary.
  • Validation: Diagnosis references metrics, not opinions.

Day 3 — Write Improvement Hypothesis

  • Action: Define hypothesis with target metric and review date.
  • Output: Hypothesis document.
  • Validation: Hypothesis is falsifiable.

Day 4 — Approve Change Scope

  • Action: Confirm only one variable will change.
  • Output: Change scope declaration.
  • Validation: Scope is narrow and controlled.

Days 5–9 — Implement Change + Capture Feedback

  • Action: Implement change and log feedback daily.
  • Output: Change record + feedback log.
  • Validation: No additional changes introduced.

Days 10–12 — Measure Impact

  • Action: Compare post-change metrics to baseline.
  • Output: Before/after comparison.
  • Validation: Metrics show directionality (positive, neutral, or negative).

Days 13–14 — Decide & Version

  • Action: Decide to keep, adjust, or roll back change.
  • Output: Decision record + versioned update.
  • Validation: Decision references metrics.

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Manager’s Manager
  • Review Method: 30–40 minute evidence walkthrough
  • Pass Criteria (Must Meet All):
    • Baseline metrics existed before change
    • Hypothesis documented with target + review date
    • One variable changed
    • Before/after metrics compared
    • SOP/process versioned
    • Critical questions scored ≥2
  • Fail Criteria (Any One Fails):
    • Change without baseline
    • Multiple variables changed
    • No versioning
    • Execution instability ignored
  • Remediation Plan (If Fail):
    • Roll back change
    • Re-run baseline
    • Rewrite hypothesis
    • Re-review in 7 days

6) Carry-Forward Commitments

  • All future changes follow the improvement loop
  • SOPs remain versioned
  • Feedback logs are maintained
  • Improvements are reviewed monthly

 

 

Chapter 8 — Managing Performance & Accountability

Objective Triggers, Fair Enforcement, and Corrective Action


Chapter Purpose

This chapter defines how managers address performance early, fairly, and effectively using data.

Performance management fails when:

  • issues are addressed too late
  • feedback is vague or emotional
  • standards are unclear
  • accountability is inconsistent

This chapter exists to ensure performance decisions are defensible, repeatable, and corrective, not personal.


8.1 Performance Is a Pattern, Not a Moment

Single events do not define performance.
Trends do.

Rule:

Managers act on patterns measured over time, not isolated incidents—unless risk is immediate.


8.2 The Three Performance Categories

Managers must correctly classify issues before acting.

Category 1 — Skill Gap

Signals

  • high effort
  • improving trend
  • inconsistent quality
  • learning-related errors

Response

  • training
  • coaching
  • SOP clarification
  • increased review cadence

Category 2 — Execution Gap

Signals

  • standards known
  • inputs valid
  • inconsistent adherence
  • repeatable misses

Response

  • clear expectations
  • accountability actions
  • shorter feedback loops
  • documented corrective plans

Category 3 — Behavior Gap

Signals

  • refusal to follow standards
  • missed commitments
  • avoidance of ownership
  • negative impact on others

Response

  • immediate documentation
  • formal accountability
  • potential role reassessment

Rule:

Misclassifying the category leads to ineffective or unfair action.


8.3 Objective Performance Triggers

Managers must define triggers that require action.

Required Triggers (Examples)

  • quality below standard for 2 consecutive cycles
  • missed deadlines above threshold
  • rework rate exceeding tolerance
  • escalation frequency increasing
  • SOP non-adherence trend

Triggers must be:

  • documented
  • visible
  • known to the team
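Triggers like these are easy to express as data, which keeps them documented and visible. A sketch of the first example trigger; the threshold values are illustrative, not prescribed standards:

```python
# Fire a trigger when quality sits below standard for two consecutive cycles.
def quality_trigger(scores, standard, consecutive=2):
    """scores: quality per cycle, oldest first. True when the last
    `consecutive` cycles are all below standard."""
    recent = scores[-consecutive:]
    return len(recent) == consecutive and all(s < standard for s in recent)

history = [0.97, 0.93, 0.91]                             # quality per cycle
assert quality_trigger(history, standard=0.95)           # two cycles below 0.95
assert not quality_trigger([0.97, 0.91], standard=0.95)  # only one cycle below
```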

8.4 The Performance Management Flow

All performance actions follow this sequence:

Detect → Classify → Coach → Measure → Escalate (if needed)

Skipping steps creates confusion and resentment.


8.5 Coaching vs Correction vs Escalation

Managers must choose the correct response.

Situation → Action

  • First trend deviation → Coaching
  • Repeated deviation → Correction
  • No improvement → Escalation
  • Behavior refusal → Immediate escalation

Rule:

Coaching without measurement is encouragement.
Correction without standards is punishment.


8.6 Documentation as Protection (Not Punishment)

Documentation exists to:

  • establish facts
  • track trends
  • protect fairness
  • support decisions

Documentation is required when a performance trigger fires, when coaching becomes correction, and when any formal accountability action is taken.

Lack of documentation transfers risk to the manager.


8.7 Performance Improvement Plans (PIPs)

PIPs are structured recovery plans, not threats.

PIP Must Include

  • specific performance gaps
  • measurable targets
  • required actions
  • support provided
  • review cadence
  • consequences of non-improvement

PIPs must be:

  • time-bound
  • metric-driven
  • reviewed regularly

8.8 Accountability Without Bias

Managers must enforce standards consistently.

Bias Risks

  • favoritism toward high performers
  • avoiding conflict
  • rescuing instead of correcting
  • moving standards for individuals

Rule:

Inconsistent enforcement destroys trust faster than strict enforcement.


8.9 Feedback Loops for Performance

Performance management must close the loop.

Required Feedback Loops

  • weekly progress check during correction
  • documented metric review
  • clear “pass / continue / escalate” decisions

Open-ended accountability is failure.


8.10 Chapter Validation (Required)

A manager passes Chapter 8 only if they can demonstrate:

  1. Clear performance standards
  2. Defined performance triggers
  3. Correct classification of issues
  4. At least one documented coaching cycle
  5. Metric-based follow-up
  6. Evidence of improvement or justified escalation

No triggers = no accountability authority.

 

Unit 8 — MD-I-U08

Managing Performance & Accountability

Objective Triggers, Coaching vs Correction, and Fair Enforcement
(Related to Manual Chapter 8)


Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U08
  • Related Manual Chapter: Chapter 8 — Managing Performance & Accountability
  • Unit Name: Performance Control & Accountability
  • Timebox: 10–14 days
  • Prerequisites:
    • MD-I-U01 through MD-I-U07
  • Primary Outcome: Manager can identify performance issues early, classify them correctly, and apply coaching or correction using data—not emotion.
  • Pass Standard: At least one performance issue is managed through the full detect → classify → act → review cycle with documented evidence.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Uses objective performance triggers
  • Correctly classifies issues as skill, execution, or behavior
  • Applies coaching, correction, or escalation appropriately
  • Documents performance actions consistently
  • Shows improvement or justified escalation
  • Enforces standards evenly across the team

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Detection & Triggers (Critical)

  1. I have documented performance triggers tied to metrics.
  2. Performance issues are identified through trends, not single events.

Classification (Critical)

  3. I can correctly classify issues as skill, execution, or behavior.
  4. My response changes based on the classification.

Action Discipline

  5. Coaching actions include measurable goals and review dates.
  6. Corrective actions are documented and time-bound.

Fairness & Consistency (Critical)

  7. Standards are enforced consistently across people.
  8. Performance actions are defensible with data.

Closure

  9. I review performance actions to confirm improvement.
  10. I escalate when data shows no improvement.

Critical questions: 1–4, 7–8
(If any critical question scores 0–1, the unit cannot pass.)


3) Evidence Checklist (Required to Pass)

  1. Performance Trigger List
    • defined thresholds
    • linked metrics
  2. Performance Case Record (At Least One)
    • issue detected
    • data trend
    • classification
    • action taken
  3. Coaching or Correction Documentation
    • goals
    • actions
    • review dates
  4. Follow-Up Evidence
    • before/after metrics
    • outcome (improved / escalated)
  5. Consistency Check
    • evidence similar issues are treated similarly

4) Step-by-Step Training Plan (10–14 Days)

Day 1 — Define Performance Triggers

  • Action: Define metric-based triggers for action.
  • Output: Performance trigger list.
  • Validation: Triggers are objective and known.

Day 2 — Identify One Live Performance Case

  • Action: Select a real, current issue.
  • Output: Performance case selection.
  • Validation: Issue shows a measurable trend.

Day 3 — Classify the Issue

  • Action: Classify as skill, execution, or behavior.
  • Output: Classification rationale.
  • Validation: Classification matches evidence.

Day 4 — Choose Correct Action Path

  • Action: Decide coaching vs correction vs escalation.
  • Output: Action plan with metrics.
  • Validation: Action aligns with classification.

Days 5–9 — Execute Action

  • Action: Carry out coaching or correction.
  • Output: Documented actions + check-ins.
  • Validation: Actions are timely and documented.

Days 10–12 — Review Results

  • Action: Review metrics post-action.
  • Output: Outcome summary.
  • Validation: Improvement or justified escalation.

Days 13–14 — Close or Escalate

  • Action: Close case or escalate formally.
  • Output: Closure or escalation record.
  • Validation: Decision is data-backed.

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / HR / Manager’s Manager
  • Review Method: 30–40 minute evidence review
  • Pass Criteria:
    • Performance triggers exist
    • Correct classification demonstrated
    • Actions documented and reviewed
    • Fair enforcement shown
    • Critical questions scored ≥2
  • Fail Criteria:
    • Action without data
    • Misclassification
    • No follow-up
  • Remediation:
    Re-run performance case with proper documentation.

6) Carry-Forward Commitments

  • Performance triggers remain active
  • Coaching/correction logs maintained
  • Metrics reviewed monthly for early detection

 

 

Chapter 9 — Scaling the Team Without Breaking It

Capacity, Growth Pressure, and Stability Under Load


Chapter Purpose

This chapter defines how managers scale output, people, and responsibility without degrading quality or burning out the team.

Scaling fails when:

  • volume increases faster than capacity
  • headcount is added without fixing flow
  • managers absorb load personally
  • systems aren’t ready for growth

This chapter ensures growth is planned, measured, and sustainable.


9.1 Scaling Is a Capacity Problem First

Growth is not primarily a motivation or effort problem.
It is a capacity and flow problem.

Rule:

If you cannot quantify capacity, you cannot scale safely.

Managers must understand:

  • current throughput
  • constraints
  • load variability
  • time-to-train

9.2 Required Capacity Metrics

Managers must track capacity using objective signals.

Core Capacity KPIs

Metric → What It Reveals

  • Throughput per role → True capacity
  • Cycle time → Constraint location
  • Work-in-progress (WIP) → Load pressure
  • Utilization → Overload risk
  • Overtime / surge rate → Unsustainable load
  • Rework rate → Quality under pressure
  • Training ramp time → Hiring lag

Scaling decisions without these metrics are guesses.
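The capacity math behind these KPIs is simple division. A sketch with illustrative numbers; the 0.85 overload threshold is an assumption, not a mandated standard:

```python
# Capacity model: utilization = demand / throughput capacity.
def utilization(demand_per_week, throughput_per_week):
    """Fraction of capacity consumed; >1.0 means over capacity."""
    return demand_per_week / throughput_per_week

def scale_signal(util, overload=0.85):
    """Crude readiness signal: act before the system saturates."""
    if util > 1.0:
        return "over capacity: reduce intake or add capacity"
    if util > overload:
        return "at risk: fix constraints before volume grows"
    return "headroom available"

# Team of 3, each completing ~40 items/week, facing 126 items/week of demand.
util = utilization(126, 3 * 40)
assert round(util, 2) == 1.05
assert scale_signal(util).startswith("over capacity")
```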


9.3 Detecting When the System Is at Capacity

Capacity limits show up before collapse.

Capacity Warning Signals

  • rising cycle time
  • growing WIP
  • increased defects
  • more escalations
  • manager-touch rate increases
  • “always behind” sentiment

Managers must act before standards degrade.


9.4 The Order of Operations for Scaling

Managers must scale in the correct order.

Scaling Sequence

  1. Stabilize execution (Chapters 3–4)
  2. Remove constraints (process fixes)
  3. Reduce rework (quality)
  4. Increase leverage (delegation, ownership)
  5. Add capacity (people/tools)

Rule:

Adding people to a broken system multiplies chaos.


9.5 Adding Volume Without Adding People

Before requesting headcount, managers must test:

  • intake control
  • batching
  • prioritization
  • WIP limits
  • automation or tooling improvements
  • skill cross-training

Validation

Managers must demonstrate:

  • where capacity was freed
  • what constraint remains
  • why headcount is the correct next move

9.6 Adding People Without Breaking the System

Hiring introduces risk.

Hiring Risks

  • training drag
  • quality dips
  • increased manager load
  • inconsistent standards

Manager Responsibilities During Hiring

  • protect standards
  • assign mentors
  • adjust throughput expectations
  • track ramp metrics
  • avoid absorbing work indefinitely

9.7 Training and Ramp Metrics

Managers must quantify onboarding.

Required Ramp Metrics

Metric → Purpose

  • Time to first output → Initial productivity
  • Time to standard → Full contribution
  • Error rate during ramp → Training quality
  • Manager time per hire → Hidden cost
  • Retention after ramp → Fit and enablement

Managers who cannot track ramp time cannot scale predictably.
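The time-based ramp metrics reduce to date arithmetic. A sketch using the standard library; the dates are invented for illustration:

```python
from datetime import date

# Two of the required ramp metrics, computed from milestone dates.
def days_between(start, end):
    return (end - start).days

hire_start = date(2024, 3, 4)
first_output = date(2024, 3, 8)   # first accepted work item
at_standard = date(2024, 4, 15)   # sustained output within tolerance

time_to_first_output = days_between(hire_start, first_output)
time_to_standard = days_between(hire_start, at_standard)

assert time_to_first_output == 4
assert time_to_standard == 42
```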


9.8 Protecting Stability During Growth

Growth increases stress on systems.

Stability Protections

  • temporarily tighten quality gates
  • slow intake if needed
  • increase review cadence
  • pause non-critical improvements
  • communicate priorities clearly

Rule:

Growth pressure is not an excuse to lower standards.


9.9 Scaling Decision Review (Required)

Before scaling decisions are approved, managers must present:

  • current capacity metrics
  • constraint analysis
  • attempted non-headcount fixes
  • forecasted demand
  • ramp plan with metrics
  • risk mitigation plan

Opinion-based scaling decisions are rejected.


9.10 Feedback Loops During Scale

Scaling requires tighter feedback loops.

Required Cadence During Growth

  • weekly capacity review
  • weekly quality review
  • bi-weekly ramp review
  • monthly stability assessment

Scaling without increased feedback is blind expansion.


9.11 Chapter Validation (Required)

A manager passes Chapter 9 only if they can demonstrate:

  1. Current capacity metrics
  2. Identified constraints
  3. Evidence of process-first scaling
  4. Headcount justification (if applicable)
  5. Defined ramp metrics
  6. Stable execution during growth

No capacity math = no scale approval.

 

Unit 9 — MD-I-U09

Scaling the Team Without Breaking It

Capacity, Constraints, and Growth Discipline
(Related to Manual Chapter 9)


Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U09
  • Related Manual Chapter: Chapter 9 — Scaling the Team Without Breaking It
  • Unit Name: Capacity & Growth Readiness
  • Timebox: 10–14 days
  • Prerequisites:
    • MD-I-U01 through MD-I-U08
  • Primary Outcome: Manager can quantify capacity, identify constraints, and scale output without degrading quality or creating dependency.
  • Pass Standard: Capacity model exists; constraints are identified; one scale decision (volume or headcount) is justified with data and executed without instability.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Can quantify current throughput capacity
  • Identifies the constraint limiting output
  • Distinguishes volume scaling from headcount scaling
  • Protects quality and standards during growth
  • Tracks ramp time and training load when adding people
  • Shows stable execution under increased load

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Capacity Quantification (Critical)

  1. I can quantify throughput per role or system.
  2. I can state current capacity vs demand using numbers.

Constraint Awareness (Critical)

  3. I can identify the primary constraint limiting output.
  4. I track WIP or load pressure signals.

Scaling Discipline (Critical)

  5. I attempt process fixes before adding people.
  6. I can justify headcount requests with data.

Quality Protection

  7. Quality and rework are monitored during growth.
  8. Standards are enforced even under pressure.

Ramp & Training

  9. I track time-to-first-output for new people.
  10. I track time-to-standard for new people.

Stability

  11. Output remains within tolerance during scale.
  12. Dependency does not spike during growth.

Critical questions: 1–6
(If any critical question scores 0–1, the unit cannot pass.)


3) Evidence Checklist (Required to Pass)

  1. Capacity Model
    • throughput per role/system
    • current demand
    • utilization estimate
  2. Constraint Analysis
    • identified bottleneck
    • evidence (metrics)
  3. Scale Decision Record
    • volume increase OR headcount change
    • justification with data
    • risk assessment
  4. Stability Metrics
    • throughput
    • cycle time
    • quality
    • dependency signals (pre/post)
  5. Ramp Metrics (If Hiring)
    • time to first output
    • time to standard
    • manager time per hire

4) Step-by-Step Training Plan (10–14 Days)

Day 1 — Quantify Capacity

  • Action: Calculate throughput per role/system.
  • Output: Capacity model v1.
  • Validation: Uses real data, not estimates.

Day 2 — Identify the Constraint

  • Action: Analyze cycle time, WIP, and delays.
  • Output: Constraint statement.
  • Validation: Constraint is data-backed.

Day 3 — Test Non-Headcount Levers

  • Action: Attempt at least one:
    • intake control
    • prioritization
    • batching
    • WIP limits
  • Output: Lever test record.
  • Validation: Results measured.

Day 4 — Decide Scale Path

  • Action: Choose volume-only scale or headcount scale.
  • Output: Scale decision record.
  • Validation: Decision references capacity math.

Days 5–9 — Execute Scale Safely

  • Action: Increase volume or add capacity carefully.
  • Output: Daily/weekly metrics during scale.
  • Validation: Quality and dependency monitored.

Days 10–12 — Review Impact

  • Action: Compare pre/post metrics.
  • Output: Scale impact summary.
  • Validation: No hidden degradation.

Days 13–14 — Lock Learnings

  • Action: Update capacity model and standards.
  • Output: Updated documentation.
  • Validation: Learnings are captured.

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Ops Lead
  • Review Method: 30–40 minute evidence walkthrough
  • Pass Criteria:
    • Capacity model exists
    • Constraint identified
    • Scale decision justified with data
    • Stability maintained
    • Critical questions scored ≥2
  • Fail Criteria:
    • Scaling without capacity math
    • Quality or dependency spikes ignored
  • Remediation:
    Roll back scale and re-run capacity analysis.

6) Carry-Forward Commitments

  • Capacity reviewed monthly
  • Constraints revisited quarterly
  • Ramp metrics tracked for all new hires
  • Scale decisions documented

 

 

Chapter 10 — The Manager Operating Rhythm

Cadence, Reviews, and Decision Discipline


Chapter Purpose

This chapter defines the non-negotiable rhythm managers use to control work, develop people, and improve systems.

Without a rhythm:

  • managers chase problems
  • decisions become reactive
  • metrics are reviewed too late
  • improvement stalls
  • dependency creeps back in

This chapter ensures management is repeatable, measurable, and sustainable.


10.1 Rhythm Is the Manager’s Primary Tool

Managers do not manage by memory, urgency, or availability.
Managers manage through cadence.

Rule:

If it isn’t reviewed on a schedule, it isn’t managed.

Rhythm creates:

  • predictability
  • early detection
  • accountability
  • momentum

10.2 The Four Management Cadences

Managers operate at four time horizons.
Each has a distinct purpose and metric set.


Daily — Control & Stability

Purpose: Keep work flowing and prevent small issues from compounding.

Required Reviews

  • exceptions
  • blockers
  • quality gate failures
  • aging work

Required Metrics

  • daily throughput
  • exceptions count
  • urgent escalations

Manager Actions

  • remove blockers
  • enforce input standards
  • redirect priorities
  • log issues for later analysis

Rule:

Daily cadence is for control, not problem-solving.


Weekly — Performance & Improvement

Purpose: Detect trends and decide what changes.

Required Reviews

  • performance trends
  • improvement proposals
  • coaching needs
  • open action items

Required Metrics

  • throughput trend
  • cycle time trend
  • quality trend
  • escalation trend
  • manager-touch rate

Manager Actions

  • coach based on data
  • approve or reject improvement proposals
  • adjust capacity or priorities
  • assign owners

Monthly — Stability & Capability

Purpose: Ensure systems and people are improving together.

Required Reviews

  • system stability
  • capability growth
  • recurring failure patterns
  • improvement effectiveness

Required Metrics

  • variance vs tolerance
  • rework reduction
  • skill development indicators
  • ramp metrics (if hiring)

Manager Actions

  • recalibrate standards
  • adjust training focus
  • retire ineffective changes
  • plan next improvement cycle

Quarterly — Direction & Readiness

Purpose: Prepare the team for future demand.

Required Reviews

  • capacity vs forecast
  • system health
  • talent readiness
  • risk assessment

Required Metrics

  • capacity utilization
  • constraint analysis
  • trend durability
  • dependency reduction over time

Manager Actions

  • approve scale plans
  • prioritize strategic improvements
  • reset goals and standards
  • align with leadership direction

10.3 The Manager’s Time Allocation Standard

Managers must protect time for managing.

Target Time Split (Normal Operations)

  • Managing: 70–85%
  • Doing: 15–30%

Required Management Activities

  • metric review
  • coaching
  • planning
  • system improvement
  • ownership enforcement

Rule:

If managing time is not scheduled, doing will consume it.
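The 70–85% managing target can be checked from a simple weekly time log. A sketch; the logged hours are illustrative:

```python
# Weekly hours logged by activity type.
def managing_share(log):
    """Fraction of logged hours spent managing (vs doing)."""
    total = sum(log.values())
    return log.get("managing", 0) / total

def within_target(share, low=0.70, high=0.85):
    """True when the managing share sits inside the target band."""
    return low <= share <= high

week = {"managing": 30, "doing": 10}   # illustrative hours
share = managing_share(week)
assert share == 0.75
assert within_target(share)
```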


10.4 Meetings With Purpose (No Status Theater)

Meetings exist to:

  • review metrics
  • make decisions
  • assign ownership
  • close loops

Meeting Rules

  • metrics visible before discussion
  • decisions recorded
  • owners assigned
  • follow-up scheduled

Meetings without decisions are waste.


10.5 Decision Discipline

Managers must separate:

  • review
  • decision
  • execution

Decision Requirements

Every decision must include:

  • triggering metric
  • options considered
  • decision owner
  • expected impact
  • review date

Undocumented decisions decay.
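The decision requirements form a record schema, so a log can reject undocumented decisions at the point of entry. A sketch; the field names paraphrase the list above and the sample record is invented:

```python
# Every decision must carry the five required fields.
DECISION_FIELDS = ("triggering_metric", "options_considered",
                   "owner", "expected_impact", "review_date")

def log_decision(log, decision):
    """Append a decision only if every required field is filled in."""
    missing = [f for f in DECISION_FIELDS if not decision.get(f)]
    if missing:
        raise ValueError(f"undocumented decision: missing {missing}")
    log.append(decision)
    return log

log = []
log_decision(log, {
    "triggering_metric": "cycle time trend up 3 weeks",
    "options_considered": ["tighten intake", "add WIP limit"],
    "owner": "team lead",
    "expected_impact": "cycle time back within tolerance",
    "review_date": "2024-05-01",
})
assert len(log) == 1
```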


10.6 Feedback Loops Across Cadences

Feedback must flow upward and forward.

Feedback Integration

  • daily issues feed weekly review
  • weekly trends inform monthly improvement
  • monthly results shape quarterly planning

Broken feedback loops cause repeat failures.


10.7 Detecting Rhythm Breakdown

Managers must watch for rhythm failure signals.

Breakdown Signals

  • skipped reviews
  • metrics reviewed inconsistently
  • surprises at month-end
  • decisions made outside cadence
  • repeated “fire drills”

These indicate loss of control.


10.8 Resetting the Rhythm

If rhythm breaks:

  1. Reinstate daily control
  2. Rebuild weekly reviews
  3. Freeze improvement changes
  4. Stabilize execution
  5. Resume cadence gradually

Rhythm recovery precedes improvement.


10.9 Chapter Validation (Required)

A manager passes Chapter 10 only if they can demonstrate:

  1. A documented operating cadence
  2. Scheduled metric reviews
  3. Time allocation tracking
  4. Recorded decisions with follow-ups
  5. Evidence cadence is being followed
  6. Improved stability over time

No cadence = no management.

 

Unit 10 — MD-I-U10

The Manager Operating Rhythm

Cadence, Reviews, Decision Discipline, and Time Allocation
(Related to Manual Chapter 10)


Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U10
  • Related Manual Chapter: Chapter 10 — The Manager Operating Rhythm
  • Unit Name: Cadence & Management Discipline
  • Timebox: 10–14 days
  • Prerequisites:
    • MD-I-U01 through MD-I-U09
  • Primary Outcome: Manager runs a consistent operating cadence that controls execution, develops people, and drives improvement—without constant firefighting.
  • Pass Standard: Documented cadence is followed for at least two cycles; decisions are metric-triggered and documented; managing vs doing time is within target range.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Operates on daily / weekly / monthly / quarterly cadences
  • Reviews the right metrics at each cadence
  • Documents decisions with owners and review dates
  • Protects time for managing (not just doing)
  • Uses cadence outputs to drive coaching, improvement, and scale
  • Detects issues early (no end-of-month surprises)

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Cadence Structure (Critical)

  1. I have a defined daily, weekly, monthly, and quarterly cadence.
  2. Each cadence has a clear purpose and metric set.

Execution of Cadence (Critical)

  3. Cadence events are scheduled and consistently run.
  4. Metrics are reviewed on schedule, not reactively.

Decision Discipline (Critical)

  5. Decisions are triggered by metrics, not urgency.
  6. Decisions are documented with owner and review date.

Time Allocation

  7. My time split aligns with managing vs doing targets.
  8. Managing time is intentionally protected on my calendar.

Feedback Integration

  9. Daily issues feed weekly review topics.
  10. Weekly trends inform monthly improvement decisions.

Stability & Control

  11. I rarely experience “surprise” failures.
  12. Cadence prevents recurring fire drills.

Critical questions: 1–6
(If any critical question scores 0–1, the unit cannot pass.)


3) Evidence Checklist (Required to Pass)

  1. Operating Cadence Document
    • daily / weekly / monthly / quarterly
    • purpose of each
    • metrics reviewed
  2. Calendar Proof
    • scheduled cadence events
    • attendance/ownership clear
  3. Metric Review Artifacts
    • screenshots or logs from at least 2 cycles
  4. Decision Log
    • triggering metric
    • decision made
    • owner
    • review date
  5. Time Allocation Snapshot
    • managing vs doing estimate (weekly)

4) Step-by-Step Training Plan (10–14 Days)

Day 1 — Define the Cadence

  • Action: Document all four cadences with purpose and metrics.
  • Output: Operating cadence doc.
  • Validation: Metrics align with outcomes owned.

Day 2 — Schedule and Protect Time

  • Action: Schedule cadence events on calendar.
  • Output: Calendar screenshots.
  • Validation: Events are realistic and recurring.

Days 3–6 — Run Cadence (Cycle 1)

  • Action: Execute daily and weekly reviews as defined.
  • Output: Review notes and metrics snapshots.
  • Validation: Reviews happen on time.

Day 7 — Log Decisions

  • Action: Document at least one decision made during cadence.
  • Output: Decision record.
  • Validation: Decision references a metric.
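The decision record required above can be sketched as a small structure. This is a minimal sketch; the field names are illustrative assumptions, not a mandated schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """One decision-log entry. Field names are illustrative, not prescribed."""
    triggering_metric: str  # which metric triggered the decision
    observed_value: float   # the value that crossed the trigger
    decision: str           # action taken
    owner: str              # single accountable owner
    review_date: date       # when the decision is re-checked

def is_complete(rec: DecisionRecord) -> bool:
    # Mirrors the unit's validation rule: the entry must reference a metric,
    # name an owner, and carry a review date.
    return all([rec.triggering_metric, rec.decision, rec.owner, rec.review_date])

rec = DecisionRecord("rework rate (%)", 12.0,
                     "Add quality gate before handoff", "A. Lee", date(2025, 1, 31))
print(is_complete(rec))  # True
```

Any log format works as long as every entry would pass a completeness check like this one.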

Days 8–11 — Run Cadence (Cycle 2)

  • Action: Repeat cadence and adjust only if necessary.
  • Output: Second cycle artifacts.
  • Validation: Cadence consistency improves.

Days 12–14 — Review Time Allocation & Stability

  • Action: Assess managing vs doing time and surprise rate.
  • Output: Time allocation snapshot + stability notes.
  • Validation: Managing time ≥70% in normal ops.
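The time allocation snapshot above is simple arithmetic over a week of calendar blocks. A minimal sketch, with illustrative hours and categories (the 70% target comes from the validation rule above):

```python
# Each block: (label, category, hours). Categories are an assumption;
# use whatever "managing" vs "doing" definition your team has agreed.
blocks = [
    ("1:1 coaching",            "managing", 3.0),
    ("weekly KPI review",       "managing", 1.5),
    ("daily exception reviews", "managing", 2.5),
    ("planning & decision log", "managing", 2.0),
    ("covering a ticket queue", "doing",    3.0),
]

managing = sum(h for _, kind, h in blocks if kind == "managing")
total = sum(h for _, _, h in blocks)
managing_share = managing / total

print(f"managing share: {managing_share:.0%}")  # 9.0 / 12.0 -> 75%
assert managing_share >= 0.70, "below the MD-I managing-time target"
```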

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Ops Lead
  • Review Method: 30–40 minute cadence walkthrough
  • Pass Criteria:
    • All cadences defined and scheduled
    • Metrics reviewed for at least 2 cycles
    • Decisions logged with follow-ups
    • Managing vs doing within target range
    • All critical questions score ≥2
  • Fail Criteria:
    • Cadence exists only on paper
    • Metrics reviewed inconsistently
    • Decisions undocumented
  • Remediation:
    Re-run cadence for one additional cycle with evidence.

6) Carry-Forward Commitments

  • Cadence remains active and visible
  • Decision log maintained
  • Time allocation reviewed monthly
  • Cadence outputs feed improvement and people development

 

 

Chapter 11 — Common Manager Failures & Anti-Patterns

Early Warning Signals, Root Causes, and Corrections


Chapter Purpose

This chapter exists to make management failure observable and correctable, not personal or political.

Most management failures are not sudden—they are patterns that go unaddressed.

This chapter:

  • names the most common failure modes
  • defines measurable warning signals
  • explains why they happen
  • provides corrective action paths

11.1 The Hero Manager (The Doer in Disguise)

What It Looks Like

  • Manager steps in “temporarily” to keep things moving
  • Output depends on the manager’s presence
  • Team performance drops during manager absence
  • Manager is praised for “saving” work

Quantitative Signals

  • Manager-touch rate >30%
  • Repeat rescues for same issue
  • Output variance during absence >15%
  • Manager doing >50% of execution for multiple weeks
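The signals above are computable from ordinary completion data. A minimal sketch of the manager-touch check, with illustrative weekly counts and the 30% threshold taken from the list above (exact touch definitions vary by team):

```python
def manager_touch_rate(items_completed: int, manager_touched: int) -> float:
    """Share of completed items the manager directly worked on or reworked."""
    if items_completed == 0:
        return 0.0
    return manager_touched / items_completed

HERO_TOUCH_THRESHOLD = 0.30  # from the quantitative signals above

# (items completed, manager-touched) per week — illustrative data
weekly = [(40, 15), (38, 14), (42, 16)]
rates = [manager_touch_rate(c, t) for c, t in weekly]

# Sustained touch rate above threshold is the hero-manager signal.
hero_risk = all(r > HERO_TOUCH_THRESHOLD for r in rates)
print([f"{r:.0%}" for r in rates], "hero-pattern risk:", hero_risk)
```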

Root Cause

  • Lack of trust in systems
  • Poor delegation design
  • Avoidance of accountability conversations

Correction

  • Enforce rescue ticket system
  • Reduce manager-touch targets weekly
  • Fix inputs, standards, or training gaps
  • Reassign ownership explicitly

11.2 The Babysitter Manager

What It Looks Like

  • Policing attendance and behavior
  • Constant reminders and follow-ups
  • Low ownership across the team
  • Manager spends time “checking” instead of improving

Quantitative Signals

  • High escalation frequency
  • Low decision-making at team level
  • Stable activity, low outcomes
  • Little to no improvement trends

Root Cause

  • No clear standards
  • No decision rights
  • Fear of losing control

Correction

  • Define measurable standards
  • Delegate outcomes with authority
  • Enforce accountability using metrics
  • Remove manager from routine approvals

11.3 The Best Doer Promotion

What It Looks Like

  • High performer promoted to manager
  • Continues doing work instead of managing
  • Team capability stagnates
  • Manager burnout increases

Quantitative Signals

  • Personal output remains high
  • Team output flat or declining
  • Delegation failure patterns
  • Coaching and improvement cycles absent

Root Cause

  • Promotion based on output, not management capability
  • No MD readiness gate applied

Correction

  • Remove manager from production gradually
  • Require MD-I completion
  • Reassign technical ownership
  • Measure team output independent of manager

11.4 Managing by Opinion (Metric Avoidance)

What It Looks Like

  • Decisions justified with stories or tone
  • Inconsistent enforcement
  • Performance conversations feel subjective
  • Conflicts escalate emotionally

Quantitative Signals

  • Missing or inconsistent KPIs
  • No trend data
  • Decisions lack documented triggers
  • Frequent disagreement on “what’s happening”

Root Cause

  • Discomfort with measurement
  • Lack of system literacy
  • Fear of accountability

Correction

  • Reestablish core metrics
  • Freeze opinion-based changes
  • Require data in all decisions
  • Retrain on Chapters 1–4

11.5 Overcorrecting (Change Thrash)

What It Looks Like

  • Constant changes to process
  • No stable baseline
  • Team confusion
  • “This is the new way” fatigue

Quantitative Signals

  • Multiple changes per period
  • No before/after comparisons
  • Execution variance increases
  • Quality declines after changes

Root Cause

  • Impatience
  • No improvement discipline
  • Desire to “fix everything”

Correction

  • Enforce change control
  • Require hypotheses and targets
  • Roll back unmeasured changes
  • Reinstate Chapter 7 discipline

11.6 Avoiding Performance Management

What It Looks Like

  • Managers delay hard conversations
  • Problems persist unaddressed
  • Team morale declines
  • High performers resent inconsistency

Quantitative Signals

  • Repeated performance misses
  • No documentation
  • No trend improvement
  • Rising escalations or rework

Root Cause

  • Conflict avoidance
  • Lack of standards
  • Fear of being disliked

Correction

  • Define objective triggers
  • Document coaching cycles
  • Apply standards consistently
  • Escalate when data demands it

11.7 Scaling Without Readiness

What It Looks Like

  • Volume increases suddenly
  • Quality drops
  • Manager absorbs extra work
  • Burnout spreads

Quantitative Signals

  • Rising WIP
  • Cycle time inflation
  • Increased defects
  • Manager-touch rate spikes

Root Cause

  • Capacity not measured
  • Growth decisions made emotionally
  • No ramp planning

Correction

  • Pause intake if needed
  • Reestablish capacity metrics
  • Fix constraints before adding people
  • Apply Chapter 9 scaling discipline

11.8 Early Warning Dashboard (Required)

Managers must be able to see failure forming.

Required Early Warning Signals

  • Manager-touch rate trend
  • Escalation trend
  • Rework trend
  • Output variance
  • Cycle time variance
  • Improvement backlog age

If these are not visible, failure will be late and expensive.
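The required signals above reduce to a simple trend check: flag any metric whose recent direction is the wrong one. A minimal sketch with illustrative data (signal names mirror the list; the slope rule is an assumption, not an MD mandate):

```python
def trend_slope(series):
    """Average per-period change across the series."""
    return (series[-1] - series[0]) / (len(series) - 1)

# Each signal: (recent weekly values, bad direction: +1 means rising is bad)
signals = {
    "manager_touch_rate": ([0.22, 0.26, 0.31, 0.34], +1),
    "escalations":        ([6, 7, 9, 11],             +1),
    "rework_rate":        ([0.08, 0.08, 0.07, 0.07],  +1),
    "cycle_time_days":    ([3.1, 3.0, 3.2, 3.1],      +1),
}

warnings = [name for name, (series, bad_dir) in signals.items()
            if trend_slope(series) * bad_dir > 0]
print("early warnings:", warnings)
```

Here the touch rate and escalation trends would surface weeks before output visibly breaks.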


11.9 Chapter Validation (Required)

A manager passes Chapter 11 only if they can:

  1. Identify at least two applicable anti-patterns
  2. Show quantitative evidence of risk or absence
  3. Explain root causes objectively
  4. Define corrective actions tied to metrics
  5. Demonstrate active correction or prevention

Denial is failure.

 

Unit 11 — MD-I-U11

Common Manager Failures & Anti-Patterns

Early Warning Signals, Root Causes, and Correction
(Related to Manual Chapter 11)


Unit Header

  • Program: MD
  • Level: MD-I (baseline)
  • Unit #: MD-I-U11
  • Related Manual Chapter: Chapter 11 — Common Manager Failures & Anti-Patterns
  • Unit Name: Failure Detection & Self-Correction
  • Timebox: 7–10 days
  • Prerequisites:
    • MD-I-U01 through MD-I-U10
  • Primary Outcome: Manager can objectively detect failure patterns in themselves or their system and take corrective action early.
  • Pass Standard: At least one real or simulated failure pattern is identified using data, root-caused, and corrected with measurable impact.

1) Quick Standard (What “Good” Looks Like)

A Level I manager:

  • Can name common manager anti-patterns without defensiveness
  • Uses early warning metrics to detect drift
  • Diagnoses root cause (system, people, discipline)
  • Applies corrective action before performance collapses
  • Shows measurable stabilization or improvement after correction

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

Awareness (Critical)

  1. I can identify the most common manager failure patterns (hero, babysitter, opinion-based, overcorrector).
  2. I can recognize which patterns I am most at risk of.

Early Detection (Critical)

  3. I track early warning signals (dependency, variance, escalations).
  4. I review early warning signals on a defined cadence.

Diagnosis & Action (Critical)

  5. I diagnose failure causes using data, not excuses.
  6. I take corrective action before output collapses.

Discipline & Recovery

  7. I can pause changes and re-stabilize when needed.
  8. I document corrective actions and outcomes.

Learning & Prevention

  9. I update standards or systems to prevent recurrence.
  10. I can explain what changed and why using metrics.

Critical questions: 1–6
(If any critical question scores 0–1, the unit cannot pass.)


3) Evidence Checklist (Required to Pass)

  1. Early Warning Dashboard
    • manager-touch rate
    • escalation trend
    • output variance
    • rework trend
  2. Failure Pattern Identification
    • which anti-pattern appeared
    • evidence (metrics, behaviors)
  3. Root Cause Analysis
    • system cause
    • people cause (if any)
    • discipline gaps
  4. Corrective Action Record
    • action taken
    • owner
    • expected metric change
    • review date
  5. Post-Correction Metrics
    • before/after comparison

4) Step-by-Step Training Plan (7–10 Days)

Day 1 — Review Anti-Patterns

  • Action: Review documented manager failure patterns.
  • Output: Personal risk assessment (top 1–2 risks).
  • Validation: Risks tied to real signals.

Day 2 — Build Early Warning View

  • Action: Assemble early warning metrics in one view.
  • Output: Early warning dashboard.
  • Validation: Metrics update weekly or faster.

Day 3 — Identify a Live or Recent Failure Pattern

  • Action: Select one real or recent issue.
  • Output: Failure pattern summary.
  • Validation: Pattern supported by data.

Day 4 — Diagnose Root Cause

  • Action: Separate system vs people vs discipline causes.
  • Output: Root cause analysis.
  • Validation: Diagnosis references metrics.

Days 5–7 — Apply Corrective Action

  • Action: Implement correction (stabilize, clarify standards, reduce doing, etc.).
  • Output: Corrective action record.
  • Validation: Action addresses root cause.

Days 8–10 — Measure Recovery

  • Action: Review metrics post-correction.
  • Output: Recovery comparison.
  • Validation: Trend stabilizes or improves.
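The recovery comparison above can be a two-line calculation. A minimal sketch, assuming "stabilizes or improves" means the mean moves the right way without variance widening (illustrative data and tolerance, not MD-mandated numbers):

```python
from statistics import mean, pstdev

before = [14, 16, 13, 17, 15]   # e.g. weekly escalations pre-correction
after  = [12, 11, 12, 10, 11]   # same metric post-correction

improved = (mean(after) < mean(before)
            and pstdev(after) <= pstdev(before) * 1.1)  # allow 10% noise
print(f"mean {mean(before):.1f} -> {mean(after):.1f}, stabilized: {improved}")
```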

5) Manager Review Gate (Pass/Fail)

  • Reviewer: Department Lead / Ops Lead
  • Review Method: 30-minute evidence walkthrough
  • Pass Criteria:
    • Failure pattern correctly identified
    • Early warning signals used
    • Root cause diagnosis sound
    • Corrective action taken
    • Metrics show stabilization or improvement
    • All critical questions score ≥2
  • Fail Criteria:
    • Failure identified by hindsight only
    • No early warning signals tracked
    • Action taken without diagnosis
  • Remediation:
    Repeat detection and correction with a different pattern.

6) Carry-Forward Commitments

  • Early warning dashboard reviewed weekly
  • Anti-patterns discussed quarterly
  • Corrective actions logged and reviewed
  • Prevention updates made to systems or standards

 

 

Chapter 12 — MD Validation & Readiness Standard

Pass / Fail Criteria, Evidence, and Ongoing Compliance


Chapter Purpose

This chapter defines how managers earn, maintain, and retain management responsibility.

MD is not completed by reading.
MD is completed by demonstrated, sustained capability.

This chapter exists to:

  • remove ambiguity from manager readiness
  • prevent premature promotions
  • protect the business from weak management
  • ensure standards persist after certification

12.1 MD Is a Gate, Not a Course

MD is not:

  • professional development content
  • leadership theory
  • optional training
  • a one-time event

MD is a management eligibility gate.

Rule:

Authority to manage people and systems is earned and maintained through evidence.


12.2 The MD Readiness Standard (Non-Negotiable)

A manager is considered MD-compliant only if all five dimensions are met.

The Five MD Dimensions

  • System Control: Stable execution metrics
  • People Leadership: Measured coaching & growth
  • Leverage: Reduced dependency signals
  • Improvement: Versioned, measured change
  • Discipline: Operating rhythm followed

Failure in any one dimension is overall failure.


12.3 Required Evidence (What Must Exist)

Managers must produce artifacts, not explanations.

Mandatory Evidence Set

  1. System Map (Chapter 3)
  2. Live KPI Dashboard (Chapters 1–4)
  3. Operating Rhythm Schedule (Chapter 10)
  4. People Metrics & Coaching Logs (Chapter 5)
  5. Dependency Metrics & Rescue Logs (Chapter 6)
  6. Improvement Log with Versions (Chapter 7)
  7. Performance Documentation (Chapter 8)
  8. Capacity Metrics (Chapter 9)

No artifacts = no compliance.


12.4 Pass / Fail Criteria

Pass Criteria

A manager passes MD when:

  • execution is stable for a sustained period
  • performance decisions reference data
  • dependency trends downward
  • improvements are controlled and measurable
  • team output holds during manager step-back

Fail Criteria

A manager fails MD if:

  • metrics are missing or ignored
  • manager is required for baseline output
  • rescues repeat without prevention
  • standards are inconsistently enforced
  • cadence is not maintained

Failure requires corrective action, not debate.


12.5 Validation Review Process

Initial Validation

  • conducted after required chapters
  • evidence reviewed against standards
  • gaps identified explicitly

Ongoing Validation

  • quarterly review minimum
  • metrics trend review
  • re-certification if role or scope changes

Rule:

Management readiness expires if not maintained.


12.6 MD Levels and Scope Control

MD validation applies by level.

  • MD-I: team-level systems and people
  • MD-II: multi-system or supervisor-of-supervisors
  • MD-III: cross-functional ownership
  • MD-IV: department-wide accountability

Passing MD-I does not grant authority for MD-II scope.


12.7 Failure Handling & Correction

Failure is handled through:

  • targeted remediation
  • scoped coaching
  • temporary authority reduction
  • reassignment if necessary

MD exists to protect the business, not punish individuals.


12.8 Promotion & Hiring Guardrail

MD is the only approved gate for:

  • manager promotions
  • external manager hires
  • expanded scope

No MD validation = no manager seat.


12.9 Leadership Accountability

Executives are accountable for:

  • enforcing MD standards
  • refusing exceptions
  • supporting managers with tools and data
  • removing managers who cannot comply

Allowing non-compliant managers is an executive failure.


12.10 MD as a Living System

MD itself must be reviewed and improved.

Required MD Feedback Loops

  • manager feedback
  • business outcome correlation
  • failure analysis
  • standard updates (versioned)

MD without iteration becomes outdated policy.


12.11 Chapter Validation (Final)

A manager completes MD only when:

  • all prior chapters are validated
  • evidence is reviewed and approved
  • scope is explicitly granted
  • expectations for maintenance are clear

Completion is documented and visible.

 

Unit 12 — MD-I-U12

MD Validation & Readiness Standard

Certification, Scope Control, and Ongoing Compliance
(Related to Manual Chapter 12)


Unit Header

  • Program: MD (Manager Development)
  • Level: MD-I (baseline certification)
  • Unit #: MD-I-U12
  • Related Manual Chapter: Chapter 12 — MD Validation & Readiness Standard
  • Unit Name: Management Readiness & Certification
  • Timebox: 5–7 days (review-focused)
  • Prerequisites:
    • MD-I-U01 through MD-I-U11 (all units passed)
  • Primary Outcome: Manager proves they meet the Catalyst management standard and are eligible to hold a management seat at MD-I scope.
  • Pass Standard: All five MD dimensions are met with evidence; authority is explicitly granted and scoped.

1) Quick Standard (What “Good” Looks Like)

A certified MD-I manager:

  • Produces stable results through systems and people
  • Uses metrics to drive decisions at all times
  • Is not a bottleneck or hidden dependency
  • Improves systems without restarting them
  • Operates on a predictable management cadence
  • Can step back without performance collapse
  • Maintains standards without exception or favoritism

2) Self-Assessment Questionnaire (0–4)

Score each 0–4.

System Control (Critical)

  1. Execution metrics are stable over time.
  2. Work flows without constant manager intervention.

People Leadership (Critical)

  3. Team members perform against clear standards.
  4. Coaching and accountability are metric-driven.

Leverage & Dependency (Critical)

  5. Manager-touch rate is within target range.
  6. Team performs during manager step-back.

Improvement Discipline (Critical)

  7. Improvements follow baseline → hypothesis → measure.
  8. Changes are versioned and controlled.

Operating Discipline (Critical)

  9. Management cadence is followed consistently.
  10. Decisions are documented and reviewed.

Critical questions: All
(Any score 0–1 = automatic fail.)


3) Evidence Checklist (Required to Pass)

The manager must submit a complete MD Evidence Packet containing:

A) System & Execution

  • System map (current)
  • Execution dashboard (minimum 4 weeks or 2 full cycles)
  • Quality gates and exception logs

B) People & Ownership

  • Performance standards
  • People metrics snapshot
  • Coaching and delegation records

C) Leverage & Dependency

  • Dependency tracker (manager-touch, rescues, escalations)
  • Rescue logs with prevention fixes
  • Independence test records

D) Improvement Discipline

  • Baseline metrics
  • Improvement hypothesis + results
  • Versioned SOP/process updates

E) Operating Rhythm

  • Cadence document
  • Calendar proof
  • Decision log
  • Time allocation snapshot

No partial packets allowed.


4) Step-by-Step Certification Process (5–7 Days)

Day 1 — Evidence Assembly

  • Action: Compile all unit artifacts into a single packet.
  • Output: MD Evidence Packet.
  • Validation: Packet is complete and organized.

Day 2 — Self-Scoring & Risk Review

  • Action: Score self-assessment honestly.
  • Output: Self-score + top 2 risk areas.
  • Validation: Risks are real, not defensive.

Days 3–4 — Formal MD Review

  • Action: Review packet with reviewer(s).
  • Output: Review notes and gap list (if any).
  • Validation: Evidence drives discussion, not opinion.

Day 5 — Gap Resolution (If Needed)

  • Action: Address minor gaps.
  • Output: Updated evidence.
  • Validation: Gaps are closed with data.

Days 6–7 — Certification Decision

  • Action: Grant or deny MD-I certification.
  • Output: Certification record with scope.
  • Validation: Decision is documented.

5) Manager Review Gate (Pass/Fail)

  • Review Panel:
    • Department Lead
    • Operations / People Lead
  • Review Method: 60–90 minute structured review

Pass Criteria (Must Meet All)

  • All prior units passed
  • Evidence packet complete
  • All critical dimensions score ≥2
  • No unresolved dependency or stability risks
  • Manager absence does not collapse output

Fail Criteria (Any One Fails)

  • Missing evidence
  • Metric avoidance
  • Manager still required for baseline output
  • Inconsistent standards enforcement

6) Certification Outcome

If Passed

  • Status: MD-I Certified
  • Authority Granted:
    • Team-level management
    • System ownership at defined scope
  • Next Path:
    • MD-II eligibility (expanded scope)

If Failed

  • Status: Not Certified
  • Action:
    • Targeted remediation plan
    • Re-review in 30–60 days
  • Note: Authority may be reduced or removed during remediation

7) Ongoing Compliance (Non-Negotiable)

MD certification is not permanent.

Ongoing Requirements

  • Quarterly metric review
  • Dependency trends remain within tolerance
  • Cadence remains active
  • Standards remain enforced
  • Scope reviewed when role changes

Rule:

Management authority expires if not maintained.


8) Promotion & Hiring Guardrail

MD certification is the only approved gate for:

  • manager promotions
  • external manager hires
  • scope expansion

No MD = no manager seat.

 

Appendix A — MD-I-U01 One-Pager

Manager Mandate: Outcomes, Not Activity

Purpose:
Define what a manager owns and force all decisions to be outcome- and metric-driven.

When to use this:

  • New manager onboarding
  • Promotion consideration
  • Chaos, finger-pointing, or “busy but nothing’s moving”

THE CORE IDEA (Non-Negotiable)

  • Managers are accountable for outcomes, not effort.
  • If it’s not measured, it’s not managed.
  • Activity ≠ progress.
  • Metrics come before opinions.
  • Qualitative input explains data — never replaces it.

WHAT “GOOD” LOOKS LIKE

  • Manager can state 3–5 outcomes they own.
  • Each outcome has at least one metric.
  • Metrics are reviewed on cadence.
  • Decisions reference metrics explicitly.
  • Performance is discussed without emotion.

METRICS THAT MATTER

Minimum required signals:

  • Volume (work completed)
  • Cycle Time / Aging
  • Quality / Rework
  • Variance (week over week)
  • Escalations / Dependency

No manager operates without these.


COMMON FAILURE MODES

  • Talking about effort instead of results
  • Managing by feel or intuition
  • Metrics exist but aren’t reviewed
  • Decisions justified with stories
  • “Everyone’s working hard” used as evidence

HOW TO IMPLEMENT (Straight-Line)

  1. Write 3–5 outcomes you own (no tasks).
  2. Assign at least 1 metric per outcome.
  3. Build a simple KPI tracker (sheet, dashboard, view).
  4. Schedule daily exception + weekly KPI reviews.
  5. Capture one real decision tied to a metric.
  6. Stop using effort-based language in reviews.

SELF-CHECK (2 Minutes)

Answer YES or NO:

  • Can I explain success and failure using numbers?
  • Do I know last week’s metrics without looking?
  • Can I justify a decision using data?
  • Do I review metrics even when nothing is “wrong”?

Any NO = this unit is not complete.


OUTPUT ARTIFACTS

  • Outcome statement (1 page)
  • KPI dashboard or tracker
  • Scheduled review cadence
  • One documented metric-driven decision

  • Before: None (this is the foundation)
  • After:
    • Unit 2 — Leverage & Dependency
    • Unit 3 — System Literacy

HOW THIS APPENDIX SHOULD BE USED

  • As a desk reference
  • As a manager coaching tool
  • As a promotion gate reminder
  • As a reset tool when chaos appears

 

Appendix B — MD-I-U02 One-Pager

The Power of Many vs the Trap of One

Purpose:
Ensure managers multiply team output instead of becoming the hidden engine.

When to use this:

  • Manager burnout
  • Team waits for approvals
  • “I’ll just do it myself” patterns

THE CORE IDEA (Non-Negotiable)

  • Managers must increase output through others.
  • Doing work repeatedly is a failure signal.
  • Dependency is measurable.
  • Help must reduce future help.
  • Teams must perform without the manager present.

WHAT “GOOD” LOOKS LIKE

  • Manager operates mostly at Dial 1–2.
  • Manager-touch rate is declining or stable.
  • Escalations are intentional, not constant.
  • Output holds during manager step-back.

METRICS THAT MATTER

  • Manager-touch rate
  • Escalation frequency
  • Repeat rescues
  • Output variance
  • Absence delta

COMMON FAILURE MODES

  • Hero management
  • Silent rescues
  • Centralized decisions
  • “Temporary” doing that never ends

HOW TO IMPLEMENT

  1. Define what counts as a “touch” and “rescue.”
  2. Track dependency weekly.
  3. Log every rescue.
  4. Identify repeat rescue cause.
  5. Fix the cause (not the task).
  6. Run a step-back test.

SELF-CHECK

  • Does work stall without me?
  • Do I rescue the same issue repeatedly?
  • Can my team decide without me?

Any YES = dependency exists.


OUTPUT ARTIFACTS

  • Dependency tracker
  • Rescue log
  • Prevention fix record
  • Step-back test notes

  • Before: Unit 1
  • After: Unit 3, Unit 6

Appendix C — MD-I-U03 One-Pager

Understanding the System You Manage

Purpose:
Force managers to understand systems before changing them.

When to use this:

  • Inconsistent results
  • Blaming people
  • Constant “process fixes”

THE CORE IDEA

  • Systems fail before people.
  • Inputs determine outputs.
  • You cannot improve what you can’t explain.
  • Baselines come before changes.

WHAT “GOOD” LOOKS LIKE

  • System has a clear trigger and output.
  • Inputs are defined and enforced.
  • Ownership is explicit.
  • Baseline metrics exist.
  • No changes during baselining.

METRICS THAT MATTER

  • Volume
  • Cycle time
  • Quality / rework
  • Variance
  • Escalations

COMMON FAILURE MODES

  • Rebuilding instead of diagnosing
  • Changing during baseline
  • Tool-focused thinking
  • Vague ownership

HOW TO IMPLEMENT

  1. Pick one repeatable system.
  2. Define trigger, inputs, output.
  3. Map steps and owners.
  4. Define measurement points.
  5. Run baseline unchanged.
  6. Identify top 3 failure points.

SELF-CHECK

  • Can I explain the system end-to-end?
  • Are inputs enforced?
  • Do I have baseline data?

OUTPUT ARTIFACTS

  • System map
  • Input definition
  • Baseline metrics
  • Failure analysis

  • Before: Unit 2
  • After: Unit 4, Unit 7

Appendix D — MD-I-U04 One-Pager

Executing the System Consistently

Purpose:
Make execution predictable without manager chasing.

When to use this:

  • Firefighting
  • Surprise failures
  • Micromanagement

THE CORE IDEA

  • Rhythm controls execution.
  • Visibility replaces supervision.
  • Quality must be gated.
  • Managers enforce, not do.

WHAT “GOOD” LOOKS LIKE

  • Daily exception review.
  • Weekly execution review.
  • Work status visible without asking.
  • Quality gates block bad output.
  • Stable output for 2 cycles.

METRICS THAT MATTER

  • Throughput
  • Cycle time
  • Quality pass rate
  • Exceptions
  • Escalations

COMMON FAILURE MODES

  • Status meetings without metrics
  • Fixing work silently
  • No quality gates
  • Late discovery of problems

HOW TO IMPLEMENT

  1. Define daily + weekly cadence.
  2. Define what counts as an exception.
  3. Build execution dashboard.
  4. Run cadence consistently.
  5. Install one quality gate.
  6. Enforce without reworking.

SELF-CHECK

  • Can I see status instantly?
  • Do exceptions surface early?
  • Does output hold without me?

OUTPUT ARTIFACTS

  • Cadence doc
  • Execution dashboard
  • Exception log
  • Quality gate

  • Before: Unit 3
  • After: Unit 5

Appendix E — MD-I-U05 One-Pager

Leading People Inside the System

Purpose:
Replace babysitting with standards-based people leadership.

When to use this:

  • Performance feels subjective
  • Inconsistent accountability
  • Team confusion

THE CORE IDEA

  • People succeed inside systems.
  • Standards come before expectations.
  • Coaching is data-driven.
  • Delegation includes authority.
  • Fairness requires metrics.

WHAT “GOOD” LOOKS LIKE

  • Clear role standards.
  • Metric-driven 1:1s.
  • Delegation with decision rights.
  • Coaching cycles close.
  • Improvement trends visible.

METRICS THAT MATTER

  • Output per role
  • Quality / rework
  • SOP adherence
  • Escalations per person
  • Improvement trend

COMMON FAILURE MODES

  • Managing by personality
  • Avoiding hard conversations
  • Task dumping instead of delegation

HOW TO IMPLEMENT

  1. Define role standards.
  2. Align metrics to standards.
  3. Add people metrics to dashboard.
  4. Run metric-based 1:1s.
  5. Delegate outcomes with authority.
  6. Coach gaps with follow-up.

SELF-CHECK

  • Are standards measurable?
  • Do I start conversations with data?
  • Do people decide without me?

OUTPUT ARTIFACTS

  • Standards doc
  • People metrics
  • 1:1 logs
  • Delegation record

  • Before: Unit 4
  • After: Unit 6, Unit 8

Appendix F — MD-I-U06 One-Pager

Preventing Dependency & Building Ownership

Purpose:
Eliminate rescue behavior and create independent teams.

When to use this:

  • Constant questions
  • Manager overload
  • Team hesitation

THE CORE IDEA

  • Rescue creates weakness.
  • Support strengthens systems.
  • Ownership requires decision rights.
  • Independence must be tested.

WHAT “GOOD” LOOKS LIKE

  • Rescues logged.
  • Repeat rescues declining.
  • Decisions happening at the right level.
  • Output stable during step-back.

METRICS THAT MATTER

  • Manager-touch rate
  • Rescues / repeat rescues
  • Escalations
  • Output variance

COMMON FAILURE MODES

  • Silent fixing
  • Answering too quickly
  • Centralized authority

HOW TO IMPLEMENT

  1. Define rescue vs support.
  2. Log all rescues.
  3. Find repeat pattern.
  4. Fix root cause.
  5. Delegate authority.
  6. Run independence test.

SELF-CHECK

  • Do people wait for me?
  • Am I rescuing repeatedly?
  • Can I step back safely?

OUTPUT ARTIFACTS

  • Rescue log
  • Dependency tracker
  • Prevention fix
  • Step-back test

  • Before: Unit 5
  • After: Unit 7

Appendix G — MD-I-U07 One-Pager

Improving Systems Without Restarting

Purpose:
Enable disciplined improvement without chaos.

When to use this:

  • Constant process changes
  • “This isn’t working” reactions

THE CORE IDEA

  • Baselines precede change.
  • Hypotheses replace opinions.
  • One variable at a time.
  • Version everything.

WHAT “GOOD” LOOKS LIKE

  • Baseline metrics exist.
  • Hypotheses documented.
  • Before/after comparison.
  • SOPs versioned.
  • Execution stays stable.

METRICS THAT MATTER

  • Target improvement metric
  • Variance
  • Side-effect metrics

COMMON FAILURE MODES

  • Restarting processes
  • Changing multiple things
  • No rollback discipline

HOW TO IMPLEMENT

  1. Select data-backed problem.
  2. Diagnose trend.
  3. Write hypothesis.
  4. Change one variable.
  5. Measure impact.
  6. Decide + version.

SELF-CHECK

  • Did I baseline first?
  • Can I explain what changed and why?
  • Did execution stay stable?

OUTPUT ARTIFACTS

  • Baseline snapshot
  • Hypothesis
  • Change record
  • Versioned SOP

  • Before: Unit 6
  • After: Unit 8, Unit 9

Appendix H — MD-I-U08 One-Pager

Managing Performance & Accountability

Purpose:
Address performance early, fairly, and objectively.

When to use this:

  • Repeated misses
  • Avoided conversations
  • Team resentment

THE CORE IDEA

  • Performance is a pattern.
  • Classification matters.
  • Coaching ≠ correction.
  • Documentation protects fairness.

WHAT “GOOD” LOOKS LIKE

  • Triggers defined.
  • Issues classified correctly.
  • Actions documented.
  • Improvement or escalation occurs.

METRICS THAT MATTER

  • Triggered thresholds
  • Improvement trend
  • Rework / misses

COMMON FAILURE MODES

  • Waiting too long
  • Emotional reactions
  • Inconsistent enforcement

HOW TO IMPLEMENT

  1. Define performance triggers.
  2. Identify live case.
  3. Classify issue.
  4. Apply correct action.
  5. Review results.
  6. Close or escalate.

SELF-CHECK

  • Do I act on trends?
  • Can I justify actions with data?
  • Is enforcement consistent?

OUTPUT ARTIFACTS

  • Trigger list
  • Performance case record
  • Follow-up evidence

  • Before: Unit 5
  • After: Unit 9

Appendix I — MD-I-U09 One-Pager

Scaling the Team Without Breaking It

Purpose:
Scale capacity without sacrificing quality or people.

When to use this:

  • Growth pressure
  • Hiring requests
  • Backlogs

THE CORE IDEA

  • Scaling is a capacity problem.
  • Fix flow before adding people.
  • Constraints dictate growth.
  • Ramp time is real cost.

WHAT “GOOD” LOOKS LIKE

  • Capacity quantified.
  • Constraint identified.
  • Scale decision justified.
  • Stability maintained.

METRICS THAT MATTER

  • Throughput per role
  • WIP
  • Cycle time
  • Quality
  • Ramp time

COMMON FAILURE MODES

  • Hiring into broken systems
  • Ignoring training drag

HOW TO IMPLEMENT

  1. Quantify capacity.
  2. Identify constraint.
  3. Test non-headcount levers.
  4. Decide scale path.
  5. Monitor stability.
  6. Update capacity model.
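The capacity math in steps 1–2 can be made explicit. A minimal sketch of a capacity model; the role names, throughput figures, and 50% ramp discount are illustrative assumptions:

```python
# Quantify capacity per role and locate the constraint (the lowest-capacity stage).
# Role names, throughput numbers, and the ramp discount are illustrative.

weekly_demand = 50  # units entering the system per week

# Units per person per week, and current headcount, per stage.
roles = {
    "intake": {"per_person": 30, "headcount": 2},
    "build":  {"per_person": 12, "headcount": 3},
    "review": {"per_person": 20, "headcount": 2},
}

def stage_capacity(role):
    return role["per_person"] * role["headcount"]

capacities = {name: stage_capacity(r) for name, r in roles.items()}
constraint = min(capacities, key=capacities.get)  # growth is dictated by this stage

# Ramp time is a real cost: a new hire at 50% productivity during ramp
# adds only half a head of capacity at first.
ramped_build = capacities["build"] + roles["build"]["per_person"] * 0.5

print("constraint:", constraint, "capacity:", capacities[constraint])
print("build capacity with one ramping hire:", ramped_build)
```

Note what the model shows: adding headcount anywhere except the constraint stage changes nothing, which is why non-headcount levers on the constraint are tested first.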

SELF-CHECK

  • Can I explain capacity math?
  • Did quality hold?
  • Did dependency stay flat?

OUTPUT ARTIFACTS

  • Capacity model
  • Scale decision record
  • Stability metrics

  • Before: Unit 7
  • After: Unit 10

Appendix J — MD-I-U10 One-Pager

The Manager Operating Rhythm

Purpose:
Make management repeatable instead of reactive.

When to use this:

  • Fire drills
  • Missed trends
  • Burnout

THE CORE IDEA

  • Rhythm creates control.
  • Cadence beats urgency.
  • Decisions require triggers.
  • Time spent managing must be protected.

WHAT “GOOD” LOOKS LIKE

  • Daily / weekly / monthly cadence.
  • Metrics reviewed on schedule.
  • Decisions logged.
  • Few surprises.

METRICS THAT MATTER

  • Execution KPIs
  • Decision follow-ups
  • Managing vs doing time

COMMON FAILURE MODES

  • Meetings without outcomes
  • Decisions without follow-up

HOW TO IMPLEMENT

  1. Define cadence.
  2. Schedule events.
  3. Run 2 cycles.
  4. Log decisions.
  5. Review time allocation.
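Step 5 (review time allocation) implies a simple ratio of managing to total hours. A sketch using an illustrative weekly time log (activities and hours are assumptions):

```python
# Split a week's logged hours into managing vs doing, per the
# "managing vs doing time" metric. Activities and hours are illustrative.

time_log = [
    ("weekly KPI review",      "managing", 1.5),
    ("1:1s",                   "managing", 3.0),
    ("exception review",       "managing", 2.5),
    ("filling in on tickets",  "doing",    6.0),
    ("rescue: fixed report",   "doing",    2.0),
]

def managing_share(log):
    """Fraction of logged time spent managing rather than doing."""
    total = sum(hours for _, _, hours in log)
    managing = sum(hours for _, kind, hours in log if kind == "managing")
    return managing / total

share = managing_share(time_log)
print(f"managing share: {share:.0%}")  # answers "Am I mostly managing?"
```

A falling share over several weeks is the quantitative version of the "mostly managing" self-check question.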

SELF-CHECK

  • Are reviews consistent?
  • Are decisions documented?
  • Am I mostly managing?

OUTPUT ARTIFACTS

  • Cadence doc
  • Calendar proof
  • Decision log

  • Before: Unit 9
  • After: Unit 11

Appendix K — MD-I-U11 One-Pager

Common Manager Failures & Anti-Patterns

Purpose:
Detect and correct failure early.

When to use this:

  • Drift
  • Repeated issues
  • Defensive reactions

THE CORE IDEA

  • Failure is predictable.
  • Early signals matter.
  • Correction beats blame.
  • Metrics remove defensiveness.

WHAT “GOOD” LOOKS LIKE

  • Anti-patterns recognized early.
  • Root cause identified.
  • Corrective action taken.
  • Metrics stabilize.

METRICS THAT MATTER

  • Dependency trend
  • Variance
  • Escalations
  • Rework

COMMON FAILURE MODES

  • Denial
  • Overcorrection
  • Opinion-based decisions

HOW TO IMPLEMENT

  1. Review early warning metrics.
  2. Identify pattern.
  3. Diagnose root cause.
  4. Apply correction.
  5. Measure recovery.
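Step 1 (review early warning metrics) can be automated as a consecutive-deterioration flag. A sketch; the streak length and the "higher is worse" direction are illustrative assumptions:

```python
# Flag a metric as an early warning when it worsens several periods in a row.
# The streak threshold and worsening direction are illustrative assumptions.

def worsening_streak(values, higher_is_worse=True):
    """Length of the current run of consecutive worsening periods."""
    streak = 0
    for prev, cur in zip(values, values[1:]):
        worse = cur > prev if higher_is_worse else cur < prev
        streak = streak + 1 if worse else 0
    return streak

def early_warning(values, periods=3, higher_is_worse=True):
    """True when the metric has worsened for `periods` consecutive checks."""
    return worsening_streak(values, higher_is_worse) >= periods

escalations = [2, 3, 5, 6, 8]  # weekly escalation counts trending up
print(early_warning(escalations))  # flags the trend before it becomes a crisis
```

Because the rule fires on a trend rather than a single bad week, it removes the defensiveness the core idea warns about: the dashboard, not a person, raises the flag.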

SELF-CHECK

  • Would I catch this early again?
  • Did metrics improve?

OUTPUT ARTIFACTS

  • Early warning dashboard
  • Correction record

  • Before: Unit 10
  • After: Unit 12

Appendix L — MD-I-U12 One-Pager

MD Validation & Readiness Standard

Purpose:
Formally determine management readiness.

When to use this:

  • Promotion
  • Hiring
  • Scope expansion

THE CORE IDEA

  • Management authority is earned.
  • Evidence replaces opinion.
  • Certification is scoped.
  • Compliance is ongoing.

WHAT “GOOD” LOOKS LIKE

  • All MD dimensions met.
  • Evidence packet complete.
  • Team performs independently.
  • Cadence sustained.

METRICS THAT MATTER

  • Stability
  • Dependency
  • Improvement results

COMMON FAILURE MODES

  • Partial evidence
  • Metric avoidance

HOW TO IMPLEMENT

  1. Assemble MD evidence packet.
  2. Self-score honestly.
  3. Conduct formal review.
  4. Grant or deny certification.
  5. Schedule re-validation.

SELF-CHECK

  • Could someone else verify this?
  • Would output hold without me?

OUTPUT ARTIFACTS

  • MD evidence packet
  • Certification record

  • Before: Units 1–11
  • After: MD-II

 

MD-I-U01 — Execution Checklist

Manager Mandate: Outcomes, Not Activity

(Catalyst MD Appendix Handout — Checklist Artifact)

Checklist Header

  • Program: MD (Manager Development)
  • Level: MD-I
  • Unit #: MD-I-U01
  • Unit Name: Outcomes, Not Activity
  • Related Manual Chapter: Chapter 1 — The Manager Mandate
  • Manager (Owner): ______________________________
  • Reviewer: _____________________________________
  • Review Date: ____ / ____ / ______

Section 1 — Preconditions (Hard Gate)

Stop immediately if any item below is unchecked.

☐ Scope of responsibility is clearly defined (team/system you manage)
☐ At least one real system/process is identified as your management scope
☐ A baseline period is defined (Week 0) or baseline data exists
☐ You have access to required data sources (sheet, CRM, ERP, dashboard, etc.)
☐ Reviewer agrees on the minimum KPI categories required for your scope

If stopped, remediation required:



Section 2 — Required Actions (Non-Negotiable)

Complete all actions exactly as defined.

A) Outcomes Defined (not tasks)

☐ 3–5 outcomes are written in plain language (no task lists)
☐ Each outcome is within the manager’s authority to influence
☐ “Success” and “failure” are defined for each outcome using numbers

B) Minimum KPI Set Built

☐ Volume metric defined and tracked
☐ Cycle Time / Aging metric defined and tracked
☐ Quality / Rework metric defined and tracked
☐ Variance tracked week over week (trend, not single point)
☐ Escalations / Dependency tracked (how often work/decisions come to manager)
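The variance requirement above (trend, not single point) can be satisfied by recording week-over-week deltas for each KPI. A minimal sketch; the metric name and sample values are illustrative assumptions:

```python
# Week-over-week trend check for one KPI series (oldest value first).
# The metric name and sample values are illustrative assumptions.

def week_over_week_deltas(values):
    """Change from each week to the next."""
    return [b - a for a, b in zip(values, values[1:])]

def trend(values):
    """Classify the most recent direction: 'up', 'down', or 'flat'."""
    deltas = week_over_week_deltas(values)
    if not deltas:
        return "flat"
    if deltas[-1] > 0:
        return "up"
    if deltas[-1] < 0:
        return "down"
    return "flat"

rework_rate = [0.12, 0.10, 0.11, 0.08]  # weekly quality/rework KPI
print([round(d, 2) for d in week_over_week_deltas(rework_rate)])  # [-0.02, 0.01, -0.03]
print(trend(rework_rate))  # "down" — a single point would hide this
```

Reviewing the delta series rather than the latest number is what makes the weekly KPI review metrics-first instead of narrative-first.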

C) Cadence Installed

☐ Daily (or near-daily) exception review is scheduled
☐ Weekly KPI review is scheduled
☐ KPI reviews are structured around metrics first, narrative second

D) Metrics Used for a Real Decision

☐ At least one decision was made using a triggering metric
☐ The decision includes an expected metric impact and a review date


Section 3 — Evidence to Produce

If the evidence does not exist, the action is considered incomplete.

Outcome Statement (1 page max)

  • outcomes + success/failure definitions + KPI mapping

KPI Tracker / Dashboard (link or screenshot) showing the 5 minimum KPI categories

  • includes Week 0 baseline or current week + prior snapshot

Cadence Proof

  • calendar screenshots or scheduled events (daily exception + weekly KPI)

Decision Record (minimum 1)

  • triggering metric → decision → expected impact → review date
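The decision record format above (triggering metric → decision → expected impact → review date) maps directly onto a small log structure. A sketch; the field values are illustrative, not a real case:

```python
from dataclasses import dataclass
from datetime import date

# One row of the decision log required by this checklist.
# Field names mirror the record format above; sample values are illustrative.
@dataclass
class DecisionRecord:
    triggering_metric: str   # which KPI crossed its threshold
    observed_value: float    # the value that triggered the decision
    decision: str            # what was changed
    expected_impact: str     # predicted metric movement, in numbers
    review_date: date        # when the impact will be checked

record = DecisionRecord(
    triggering_metric="rework rate",
    observed_value=0.15,
    decision="add peer review step before handoff",
    expected_impact="rework rate below 0.10 within 3 weeks",
    review_date=date(2025, 3, 1),
)
print(record.triggering_metric, "->", record.decision)
```

Keeping records in this shape makes Section 4's validation trivial: each row already contains the metric, the action, and the date the claim comes due.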

Evidence location / links:




Section 4 — Validation Questions (Data-Backed)

All answers must be supported by the evidence above.

Can you explain success/failure for your scope without using “busy,” “effort,” or “hard work”?
☐ Yes ☐ No

Do you have the minimum KPI set visible today (Volume, Time, Quality, Variance, Escalations)?
☐ Yes ☐ No

Are KPI reviews scheduled and actively running (not just planned)?
☐ Yes ☐ No

Is there at least one documented decision tied directly to a metric?
☐ Yes ☐ No

If any answer is “No,” identify the blocking factor:



Section 5 — Failure Signals (Early Warnings)

If any are present, corrective action is required immediately.

☐ Outcomes are written as tasks (activity lists)
☐ Metrics exist but are not reviewed on schedule
☐ Decisions are justified by stories instead of KPIs
☐ Quality is discussed without defect/rework data
☐ “We’re working hard” is used as performance evidence
☐ Escalations/dependency are not tracked

Detected signals (if any):



Section 6 — Outcome & Gate Decision

Select one outcome only.

☐ PASS — Standard Met
Advance to MD-I-U02 (Leverage & Dependency Control)

☐ HOLD — Remediation Required
Required fixes + deadline:


☐ FAIL — Standard Not Met
Management authority restricted pending remediation.


Section 7 — Sign-Off

  • Manager Signature: _________________________ Date: __________
  • Reviewer Signature: ________________________ Date: __________

 

MD-I-U02 — Execution Checklist

Leverage & Dependency Control

Section 1 — Preconditions (Hard Gate)

☐ MD-I-U01 passed
☐ Team/system scope defined
☐ Baseline output metrics visible
☐ Manager authority and decision rights documented


Section 2 — Required Actions

☐ Definitions exist for: manager touch, escalation, rescue
☐ Dependency metrics tracked weekly
☐ All rescues logged (no silent fixing)
☐ At least one repeat rescue identified
☐ Root cause categorized (input / standard / skill / capacity / tooling)
☐ One prevention fix implemented
☐ One step-back (independence) test executed


Section 3 — Evidence to Produce

☐ Dependency tracker
☐ Rescue log with categories
☐ Prevention fix record
☐ Step-back test summary


Section 4 — Validation Questions

☐ Did output hold during step-back?
☐ Did dependency decrease or stabilize?
☐ Was a repeat rescue eliminated?


Section 5 — Failure Signals

☐ Manager repeatedly doing work
☐ Same issues resurfacing
☐ Decisions routing upward


Section 6 — Gate Decision

☐ PASS → MD-I-U03
☐ HOLD → Reduce dependency
☐ FAIL → Authority restricted



MD-I-U03 — Execution Checklist

System Literacy & Baselines

Section 1 — Preconditions

☐ MD-I-U02 passed
☐ One repeatable system selected
☐ No active process changes


Section 2 — Required Actions

☐ Trigger defined
☐ Inputs defined and enforced
☐ Steps mapped end-to-end
☐ Ownership defined per step
☐ Output clearly defined
☐ Measurement points defined
☐ Baseline run completed (unchanged system)


Section 3 — Evidence

☐ System map
☐ Input definition sheet
☐ Baseline metrics snapshot
☐ Top 3 failure points with data


Section 4 — Validation

☐ Can system be explained end-to-end?
☐ Were no changes made during baseline?


Section 5 — Failure Signals

☐ Changing process during baseline
☐ Blaming people without data


Section 6 — Gate

☐ PASS → MD-I-U04
☐ HOLD → Re-baseline



MD-I-U04 — Execution Checklist

Execution Control & Visibility

Section 1 — Preconditions

☐ MD-I-U03 passed
☐ System map + baseline exist


Section 2 — Required Actions

☐ Daily exception review defined
☐ Weekly execution review defined
☐ Exceptions objectively defined
☐ Execution KPIs tracked
☐ Quality gate installed
☐ Gate enforced without silent rework


Section 3 — Evidence

☐ Cadence schedule
☐ Execution dashboard (2 cycles)
☐ Exception log
☐ Quality gate definition


Section 4 — Validation

☐ Is work status visible without asking?
☐ Did output remain stable for 2 cycles?


Section 5 — Failure Signals

☐ Firefighting
☐ Silent rework
☐ No quality gates


Section 6 — Gate

☐ PASS → MD-I-U05
☐ HOLD → Stabilize execution



MD-I-U05 — Execution Checklist

People Leadership & Standards

Section 1 — Preconditions

☐ MD-I-U04 passed
☐ Execution stable


Section 2 — Required Actions

☐ Role standards documented
☐ Metrics aligned to standards
☐ People metrics visible
☐ Metric-driven 1:1s conducted
☐ Delegation includes authority
☐ Coaching cycle executed


Section 3 — Evidence

☐ Standards doc
☐ People metrics snapshot
☐ 1:1 logs
☐ Delegation record
☐ Coaching record


Section 4 — Validation

☐ Are standards measurable?
☐ Did at least one metric improve?


Section 5 — Failure Signals

☐ Babysitting
☐ Opinion-based coaching


Section 6 — Gate

☐ PASS → MD-I-U06
☐ HOLD → Clarify standards



MD-I-U06 — Execution Checklist

Ownership & Independence

Section 1 — Preconditions

☐ MD-I-U05 passed


Section 2 — Required Actions

☐ Rescue vs support defined
☐ Rescues logged
☐ Repeat pattern identified
☐ Root cause fixed
☐ Decision rights delegated
☐ Independence test run


Section 3 — Evidence

☐ Rescue log
☐ Dependency trend
☐ Prevention fix
☐ Independence test record


Section 4 — Validation

☐ Did dependency decrease?
☐ Did output hold?


Section 5 — Failure Signals

☐ Silent rescues
☐ Escalation dependence


Section 6 — Gate

☐ PASS → MD-I-U07
☐ HOLD → Reduce dependency



MD-I-U07 — Execution Checklist

Disciplined Improvement

Section 1 — Preconditions

☐ MD-I-U06 passed
☐ Stable baseline exists


Section 2 — Required Actions

☐ Problem selected via data
☐ Diagnosis completed
☐ Hypothesis written
☐ One variable changed
☐ Feedback logged
☐ Before/after measured
☐ SOP versioned


Section 3 — Evidence

☐ Baseline snapshot
☐ Hypothesis doc
☐ Change record
☐ Versioned SOP


Section 4 — Validation

☐ Did metrics change as expected?
☐ Did execution remain stable?


Section 5 — Failure Signals

☐ Restarting systems
☐ Multiple changes at once


Section 6 — Gate

☐ PASS → MD-I-U08
☐ HOLD → Re-baseline



MD-I-U08 — Execution Checklist

Performance & Accountability

Section 1 — Preconditions

☐ MD-I-U07 passed


Section 2 — Required Actions

☐ Performance triggers defined
☐ Live case selected
☐ Issue classified correctly
☐ Coaching or correction applied
☐ Follow-up conducted
☐ Improvement or escalation documented


Section 3 — Evidence

☐ Trigger list
☐ Performance case record
☐ Follow-up metrics


Section 4 — Validation

☐ Was action data-driven?
☐ Was enforcement consistent?


Section 5 — Failure Signals

☐ Delayed action
☐ Emotional decisions


Section 6 — Gate

☐ PASS → MD-I-U09
☐ HOLD → Re-run case



MD-I-U09 — Execution Checklist

Capacity & Growth Readiness

Section 1 — Preconditions

☐ MD-I-U08 passed


Section 2 — Required Actions

☐ Capacity quantified
☐ Constraint identified
☐ Non-headcount levers tested
☐ Scale decision documented
☐ Stability monitored
☐ Capacity model updated


Section 3 — Evidence

☐ Capacity model
☐ Constraint analysis
☐ Scale decision record
☐ Stability metrics


Section 4 — Validation

☐ Did quality hold?
☐ Did dependency hold steady?


Section 5 — Failure Signals

☐ Hiring into broken system
☐ Ignoring ramp cost


Section 6 — Gate

☐ PASS → MD-I-U10
☐ HOLD → Re-analyze capacity



MD-I-U10 — Execution Checklist

Manager Operating Rhythm

Section 1 — Preconditions

☐ MD-I-U09 passed


Section 2 — Required Actions

☐ Daily cadence defined
☐ Weekly cadence defined
☐ Monthly cadence defined
☐ Metrics aligned to cadence
☐ Decisions logged
☐ Managing time protected


Section 3 — Evidence

☐ Cadence doc
☐ Calendar proof
☐ Decision log
☐ Time allocation snapshot


Section 4 — Validation

☐ Are surprises rare?
☐ Are decisions reviewed?


Section 5 — Failure Signals

☐ Cadence skipped
☐ Meetings without outcomes


Section 6 — Gate

☐ PASS → MD-I-U11
☐ HOLD → Reinforce cadence



MD-I-U11 — Execution Checklist

Failure Detection & Correction

Section 1 — Preconditions

☐ MD-I-U10 passed


Section 2 — Required Actions

☐ Early warning metrics visible
☐ Failure pattern identified
☐ Root cause diagnosed
☐ Corrective action taken
☐ Recovery measured


Section 3 — Evidence

☐ Early warning dashboard
☐ Correction record
☐ Before/after metrics


Section 4 — Validation

☐ Did trend stabilize or improve?


Section 5 — Failure Signals

☐ Denial
☐ Overcorrection


Section 6 — Gate

☐ PASS → MD-I-U12
☐ HOLD → Apply correction



MD-I-U12 — Execution Checklist

MD Validation & Certification

Section 1 — Preconditions

☐ MD-I-U01–U11 passed


Section 2 — Required Actions

☐ Evidence packet assembled
☐ Self-assessment completed
☐ Formal review conducted
☐ Scope defined
☐ Certification decision documented


Section 3 — Evidence

☐ MD evidence packet
☐ Certification record


Section 4 — Validation

☐ Can output hold without manager?
☐ Are all MD dimensions met?


Section 5 — Failure Signals

☐ Missing evidence
☐ Metric avoidance


Section 6 — Gate

☐ CERTIFIED — MD-I
☐ NOT CERTIFIED — Remediation required