
Two MIT research streams landed on the same conclusion from different angles.
Project Iceberg quantifies how much of the economy already sits within the technical reach of current AI. MIT NANDA explains why most organizations still fail to turn that capability into measurable business value. Together, they frame the real story of 2025: the constraint is operating design, not model access.
1) The headline number is a capability exposure map
Project Iceberg introduces the Iceberg Index, a skills-centered measure of technical exposure: the overlap between what today's AI tools can technically do and the skills that make up occupations.
What stands out in the report is scale.
- About 11.7% of U.S. wage value, around $1.2T, sits in roles where AI tools can technically perform meaningful portions of the skill mix today.
- In parallel, the analysis reports that current AI systems can technically perform about 16% of the classified labor tasks in the report's underlying skill taxonomy.
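A quick sanity check that the headline figures hang together: $1.2T at 11.7% implies a wage base of roughly 1.2 / 0.117 ≈ $10.3T, which is in line with total annual U.S. wages.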
The leadership takeaway: treat this as an exposure map for prioritization, budgeting, and sequencing.
2) Adoption is visible, transformation remains rare
MIT NANDA's 2025 State of AI in Business report quantifies the gap between tool trials and workflow-level impact.
- In the study's framing, 95% of organizations see zero measurable return from their GenAI efforts.
- Over 80% explored or piloted general-purpose tools, and nearly 40% report deployment, yet those deployments mostly show up as individual productivity gains, not P&L movement.
- Enterprise-grade, task-specific systems face a much steeper funnel: 60% of organizations evaluated them, 20% reached pilot, and only 5% reached production.
This is why many leadership teams feel progress while finance teams struggle to see durable ROI.
3) This is an execution and integration problem
The report points to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations as core failure drivers.
Translated into operator language: teams buy a tool, yet the workflow stays the same, the interfaces stay the same, the QA paths stay the same, and the system has little ability to learn from feedback inside the process.
4) The winners treat AI as operating infrastructure
MIT NANDA frames the divide as approach-driven, with leading buyers demanding process-specific customization and evaluating tools on business outcomes rather than software benchmarks.
In practice, that means leaders design AI into the system of work, as the sketch after this list illustrates:
- Decision rights for humans remain explicit
- Inputs and systems of record are defined
- Review, escalation, and audit paths are designed up front
- Feedback loops exist so the system improves over time inside the workflow
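A minimal sketch of those four properties in a generic case-handling workflow, in Python. The confidence threshold, role name, and audit-log shape are illustrative assumptions, not anything prescribed by either report.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Assumption: below this confidence, the case escalates to a named human role.
ESCALATION_THRESHOLD = 0.85

@dataclass
class Decision:
    case_id: str
    output: str
    confidence: float
    decided_by: str                      # "model" or an explicit human role
    audit_log: list = field(default_factory=list)

def route(case_id: str, output: str, confidence: float) -> Decision:
    """Explicit decision rights: the model drafts, a human owns low-confidence calls."""
    decision = Decision(case_id, output, confidence, decided_by="model")
    stamp = datetime.now(timezone.utc).isoformat()
    decision.audit_log.append((stamp, "model_draft", confidence))
    if confidence < ESCALATION_THRESHOLD:
        decision.decided_by = "ops_reviewer"        # designed-in escalation path
        decision.audit_log.append((stamp, "escalated_to_human", confidence))
    return decision

# A low-confidence case escalates, and the audit trail records why.
d = route("INV-1042", "approve refund", confidence=0.62)
print(d.decided_by)   # ops_reviewer
print(d.audit_log)
```

The point of the sketch is the structure, not the model call: decision rights, escalation, and audit live in the workflow code, so swapping the model does not erase the operating design.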
5) Failed rollouts create organizational drag
The report highlights a pattern: pilots launch easily, production success remains rare, and skepticism rises when tools feel misaligned with real work.
Leaders can treat this as a change management and credibility problem, not a tooling problem. Early workflow wins, tight QA, and clear accountability protect organizational trust.
6) The real disruption is job architecture
Project Iceberg frames workforce change at the skill and task layer: roles restructure as AI absorbs portions of the work and elevates oversight, integration, and coordination skills.
So the execution target becomes job design, not job titles.
A practical playbook leaders can run this quarter
Step 1 Pick 3 to 5 workflows that move money
Examples: quote to cash, procurement to pay, customer support resolution, financial close, sales operations, recruiting operations.
Selection filter: high volume, repeatable decisions, clear quality criteria, strong data exhaust.
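One way to make that filter operational is a simple weighted score. A minimal sketch, assuming 1-to-5 ratings per criterion; the weights and candidate scores are illustrative placeholders, not data from either report.

```python
# Rate each candidate workflow 1-5 on the selection filter criteria.
# Weights are an assumption; tune them to your cost and risk profile.
WEIGHTS = {"volume": 0.3, "repeatability": 0.3, "quality_criteria": 0.2, "data_exhaust": 0.2}

candidates = {
    "quote_to_cash":      {"volume": 5, "repeatability": 4, "quality_criteria": 4, "data_exhaust": 5},
    "procurement_to_pay": {"volume": 4, "repeatability": 5, "quality_criteria": 4, "data_exhaust": 4},
    "recruiting_ops":     {"volume": 3, "repeatability": 3, "quality_criteria": 2, "data_exhaust": 3},
}

def score(ratings: dict) -> float:
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

# Rank and keep the top 3 to 5; the rest wait for the next cycle.
for name, ratings in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(ratings):.1f}")
```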
Step 2 Map tasks and failure modes
Inventory the tasks, inputs, systems touched, handoffs, and where errors propagate into cost, risk, or cycle time.
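A sketch of that inventory as structured records rather than a document, so failure modes can be sorted and counted; the field names and the example row are assumptions about what a useful map captures.

```python
from dataclasses import dataclass

@dataclass
class TaskRecord:
    task: str
    inputs: list[str]        # documents, fields, upstream outputs
    systems: list[str]       # systems of record touched
    handoff_to: str          # next owner in the workflow
    failure_mode: str        # how errors show up downstream
    error_lands_in: str      # "cost", "risk", or "cycle_time"

inventory = [
    TaskRecord(
        task="validate vendor invoice",
        inputs=["invoice PDF", "PO number"],
        systems=["ERP", "procurement portal"],
        handoff_to="AP approver",
        failure_mode="mismatched PO propagates into payment",
        error_lands_in="cost",
    ),
]

# Sort redesign effort by where errors land, so the expensive ones go first.
for rec in inventory:
    print(f"{rec.task} -> errors land in {rec.error_lands_in}")
```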
Step 3 Redesign the workflow around human decision points
Define what needs judgment, what needs validation, and what can run under measured error thresholds.
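Those three categories translate directly into routing rules. A minimal sketch; the lane assignments and the 2% error budget are illustrative assumptions.

```python
from enum import Enum

class Lane(Enum):
    JUDGMENT = "human decides"                 # needs judgment
    VALIDATION = "AI drafts, human approves"   # needs validation
    AUTONOMOUS = "runs under error budget"     # measured error threshold

# Assumption: which task types go in which lane is a design decision per workflow.
LANES = {
    "credit_exception": Lane.JUDGMENT,
    "contract_summary": Lane.VALIDATION,
    "address_normalization": Lane.AUTONOMOUS,
}
ERROR_BUDGET = 0.02  # autonomous work pauses if observed error rate exceeds 2%

def dispatch(task_type: str, observed_error_rate: float) -> str:
    lane = LANES.get(task_type, Lane.JUDGMENT)  # unknown work defaults to human judgment
    if lane is Lane.AUTONOMOUS and observed_error_rate > ERROR_BUDGET:
        return "paused: error budget exceeded, route to human review"
    return lane.value

print(dispatch("address_normalization", observed_error_rate=0.031))
```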
Step 4 Instrument outcomes like an operator
Track cycle time, defect rate, exception rate, rework, and cost per unit of work. Tie those metrics to P&L owners.
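A minimal sketch of that instrumentation over a per-unit event log; the event shape and the numbers are placeholders, not benchmarks.

```python
# Each completed unit of work logs one event; operator metrics roll up from there.
events = [
    {"id": 1, "cycle_hours": 4.0, "defect": False, "exception": False, "reworked": False, "cost": 12.0},
    {"id": 2, "cycle_hours": 9.5, "defect": True,  "exception": True,  "reworked": True,  "cost": 31.0},
    {"id": 3, "cycle_hours": 3.5, "defect": False, "exception": False, "reworked": False, "cost": 11.0},
]

n = len(events)
metrics = {
    "avg_cycle_hours": sum(e["cycle_hours"] for e in events) / n,
    "defect_rate":     sum(e["defect"] for e in events) / n,
    "exception_rate":  sum(e["exception"] for e in events) / n,
    "rework_rate":     sum(e["reworked"] for e in events) / n,
    "cost_per_unit":   sum(e["cost"] for e in events) / n,
}
# Report to the P&L owner on a fixed cadence, not as a one-off readout.
print(metrics)
```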
Step 5 Build learning loops into the system
Store feedback, label outcomes, create playbooks, and use review protocols that help the system improve over time. MIT NANDA flags this ability to learn inside the workflow as a separator between stalled pilots and scaled value.
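A minimal sketch of that loop, assuming reviewed cases receive outcome labels; the label names and the three-occurrence threshold for promoting a pattern into the playbook are assumptions.

```python
from collections import Counter

feedback_store = []  # in practice, a durable table keyed to case IDs

def record_feedback(case_id: str, outcome_label: str, reviewer_note: str) -> None:
    """Label each reviewed outcome so the workflow accumulates training signal."""
    feedback_store.append({"case": case_id, "label": outcome_label, "note": reviewer_note})

record_feedback("INV-1042", "wrong_vendor_match", "model confused subsidiaries")
record_feedback("INV-1043", "wrong_vendor_match", "same subsidiary confusion")
record_feedback("INV-1044", "correct", "")
record_feedback("INV-1051", "wrong_vendor_match", "again on subsidiary names")

# Promote any failure label seen three or more times into the review playbook.
counts = Counter(f["label"] for f in feedback_store if f["label"] != "correct")
playbook_updates = [label for label, count in counts.items() if count >= 3]
print(playbook_updates)  # ['wrong_vendor_match'] -> add a subsidiary-matching check
```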
The strategic gap that will create the next unicorn cohort
Project Iceberg suggests capability exposure is already broad across administrative, financial, and professional services. MIT NANDA shows most enterprises still fail to translate that into scaled workflow impact. The gap between those two realities is where new category leaders get built.
Where do you see the biggest gaps in your org today: workflow selection, task mapping, governance, or measurement?

