Stop Managing Development by Ratio. Start Managing It by Results.
“70-20-10” has become one of the most quoted ideas in talent development: 70% learning from experience, 20% from relationships, 10% from formal training. It’s tidy, memorable, and easy to communicate.
It’s also easy to misuse.
When leaders treat 70-20-10 like a compliance target (“Did we hit the 70?”), development turns into math instead of movement. People don’t grow because you allocated the right percentage of “learning calories.” They grow because they had the right experiences, supported by the right feedback, aimed at a real business outcome.
Why 70-20-10 gets repeated (and what it was actually meant to do)
The Center for Creative Leadership (CCL) still describes 70-20-10 as a research-based guideline emerging from their Lessons of Experience research and highlights a key nuance: “All experiences aren’t created equal,” so you need to go beyond the simple ratio. (CCL)
In other words: the value isn’t the numbers—it’s the reminder that development is bigger than courses.
The problem: the percentages often get treated like facts
Credible critiques point out that the exact percentages have never been strongly validated as a universal truth.
- Jefferson and Pollock (ATD) note that the phrase doesn’t even appear in Lessons of Experience and explain the numbers as a conceptual/theoretical summary of executives’ retrospective reflections—not a scientific formula. (ATD)
- Richard Harding’s peer-reviewed “debate” article argues that the model’s seductive precision should raise caution: real-world organizational learning rarely fits neat, immutable ratios, and he cites concerns that the assumption is frequently repeated “as if it is fact.” (Open Research Online)
So yes—experiential learning matters, and the model can be a helpful design prompt. But treating 70-20-10 as a universal rule can lead to shallow decisions, like cutting formal learning because “it’s only 10% anyway.”
A better approach: design development like you design strategy
If you’re a business leader, you already know the playbook: start with the outcome, align the work, measure what matters. Development should be no different.
Here’s a practical way to build programs (or team development plans) that actually move performance.
1) Start with the business problem (not the learning mix)
Ask:
- What business metric must improve? (cycle time, win rate, quality, retention, customer escalations)
- What behaviors and decisions drive that metric?
- Where are we seeing breakdowns (skills, systems, incentives, leadership habits)?
If you can’t answer those questions, no ratio will save you.
2) Build specific on-the-job experiences that force the capability
“Experience” isn’t “time served.” It’s challenge + stakes + reflection.
Examples of high-value experiences:
- A cross-functional project with real delivery dates
- Owning a customer segment or renewal portfolio
- Leading a post-mortem on a high-impact failure
- Running a change initiative with measurable adoption goals
CCL’s point that “all experiences aren’t created equal” captures the difference between generic job exposure and intentional growth assignments. (CCL)
3) Surround experiences with coaching and feedback (this is where learning sticks)
This isn’t soft stuff—coaching has real evidence behind it.
A workplace coaching meta-analysis found positive effects on organizational outcomes overall, highlighting that coaching can improve skills, attitudes, and results. (ResearchGate) A more recent meta-analysis of psychologically informed coaching also reports meaningful effects on outcomes such as goal attainment and self-efficacy. (ResearchGate)
Practical mechanisms that work in business settings (a simple logging sketch follows this list):
- Weekly 15-minute manager coaching (one obstacle, one commitment, one feedback point)
- Peer “deal reviews” or “case consults” (structured, not casual)
- Shadowing + reverse-shadowing with a debrief
- 360-style inputs used carefully (they can help, but aren’t a magic wand) (ResearchGate)
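If you want those mechanisms to show up in your measures later, capture them as data from the start. Here’s a minimal Python sketch of a weekly check-in log, assuming you record one entry per conversation; the CoachingCheckIn fields and the coaching_frequency helper are illustrative names, not a standard tool.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CoachingCheckIn:
    """One weekly 15-minute entry: one obstacle, one commitment, one feedback point."""
    employee: str
    session_date: date
    obstacle: str        # the one thing blocking performance this week
    commitment: str      # the one action committed to before next week
    feedback_point: str  # the one specific, behavioral observation shared

def coaching_frequency(log: list[CoachingCheckIn], employee: str, weeks: int = 12) -> float:
    """Share of the last `weeks` weeks with at least one logged check-in (0.0 to 1.0)."""
    cutoff = date.today() - timedelta(weeks=weeks)
    weeks_with_checkins = {
        c.session_date.isocalendar()[:2]  # deduplicate to (year, ISO week) pairs
        for c in log
        if c.employee == employee and c.session_date >= cutoff
    }
    return min(len(weeks_with_checkins) / weeks, 1.0)
```

One record per conversation is enough to compute the “manager coaching frequency and quality” indicator used in the measurement section below.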
4) Use formal learning as an amplifier, not the centerpiece
Formal learning is still valuable when it’s:
- tightly connected to the work people must do next week
- short enough to apply immediately
- reinforced by managers and peers
This lines up with what transfer research has shown for decades: transfer depends not just on training content, but also on work-environment factors like supervisory support and the opportunity to use the new behavior. (Flip Tools)
So instead of asking, “Is this 10% or 20%?”, ask: “What knowledge or tool removes friction so people can perform in the real work?”
Measure results, not ratio compliance
If you want development that earns budget and credibility, measure it the way you’d measure any strategic initiative (a small tracking sketch follows this section):
Lagging indicators (business impact)
- win rate, cycle time, quality defects, customer escalations, retention, internal mobility
Leading indicators (behavior change)
- frequency of targeted behaviors (e.g., weekly pipeline reviews, customer call cadence)
- adoption metrics (usage data, process compliance where it matters)
- manager coaching frequency and quality (simple rubric)
And don’t skip the basics:
- Are people getting chances to apply what they learned?
- Are managers reinforcing it?
- Are incentives aligned?
Transfer research explicitly calls out the work environment—support, constraints, and opportunity to perform—as a major factor in whether learning shows up on the job. (Flip Tools)
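To make that concrete, here’s a minimal sketch that pairs a leading indicator (a targeted behavior) with a lagging one (a business metric). It assumes a hypothetical weekly behavior export and quarterly win rates; every field name and number here is illustrative.

```python
# Hypothetical data: one row per rep per week, plus a quarterly lagging metric.
weekly_log = [
    {"rep": "alice", "week": "2025-W14", "did_pipeline_review": True},
    {"rep": "alice", "week": "2025-W15", "did_pipeline_review": True},
    {"rep": "bhavin", "week": "2025-W14", "did_pipeline_review": False},
    {"rep": "bhavin", "week": "2025-W15", "did_pipeline_review": True},
]
win_rate_by_rep = {"alice": 0.31, "bhavin": 0.22}  # lagging indicator

def behavior_rate(log: list[dict], rep: str) -> float:
    """Leading indicator: share of logged weeks in which the targeted behavior happened."""
    rows = [r["did_pipeline_review"] for r in log if r["rep"] == rep]
    return sum(rows) / len(rows) if rows else 0.0

# Report leading and lagging numbers side by side: you want the behavior to move
# first, then check whether the business metric follows in later quarters.
for rep, win_rate in sorted(win_rate_by_rep.items()):
    print(f"{rep}: behavior rate {behavior_rate(weekly_log, rep):.0%}, win rate {win_rate:.0%}")
```

The point isn’t the code; it’s that both kinds of indicators live in one place, so reviews discuss behavior change and business impact together.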
A simple template you can steal
If you’re designing a development plan for a role or team, write it on one page (a plain-data sketch follows the list):
- Business outcome (metric + target)
- Critical work moments (3–5 situations where performance matters most)
- Stretch experiences (projects/assignments tied to those moments)
- Coaching & peer system (cadence + structure)
- Targeted learning (tools/frameworks only)
- Measures (leading + lagging)
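If your team keeps plans in a shared tool or version control, the same one-pager can live as plain data. A minimal sketch, with entirely hypothetical field names and example values:

```python
# A hypothetical one-page development plan as plain data, so it can be
# reviewed, versioned, and reported on like any other operating artifact.
development_plan = {
    "business_outcome": {"metric": "renewal rate", "target": "85% -> 90% by Q4"},
    "critical_work_moments": [
        "executive renewal conversation",
        "pricing objection on a flat budget",
        "post-incident customer call",
    ],
    "stretch_experiences": [
        "own a 20-account renewal portfolio",
        "lead the post-mortem on a churned strategic account",
    ],
    "coaching_and_peer_system": {
        "cadence": "weekly 15-minute manager check-in",
        "peer_structure": "biweekly structured case consults",
    },
    "targeted_learning": ["negotiation framework, applied to live renewals"],
    "measures": {
        "leading": ["check-in frequency", "call-plan usage"],
        "lagging": ["renewal rate", "customer escalations"],
    },
}
```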
That’s the blend. No magic ratio required.
Bottom line
70-20-10 is useful when it reminds you: most development happens in the flow of work—and relationships matter.
But the numbers aren’t the point. The point is to design experiences that solve business problems, support them with coaching and feedback, add targeted learning where needed, and measure outcomes.
Ratios don’t develop people. Well-designed experiences do.
Further reading (for leaders who want receipts):
- CCL’s updated guidance on going beyond the 70-20-10 rule (CCL)
- The ATD critique on the model’s origins and “where’s the evidence?” (ATD)
- Harding’s scholarly critique of the ratio’s seductive precision (Open Research Online)
- Baldwin & Ford on why transfer depends on design, trainee characteristics, and the work environment (Flip Tools)
- Coaching meta-analyses showing positive effects on workplace outcomes (ResearchGate)