
OpenAI published The State of Enterprise AI 2025 report in December 2025, based on two sources: de-identified, aggregated usage data from enterprise customers and a survey of 9,000 workers across nearly 100 enterprises. Many companies still present AI as a collection of use cases and experiments. Boards continue to ask for roadmap decks. The report’s data indicates a different reality within high-performing deployments: usage is shifting from prompts toward systems, where AI becomes an integral part of how work flows through the enterprise.
The three signals that matter more than the buzz
1. Volume is rising, yet the bigger change is depth
Weekly messages in ChatGPT Enterprise grew roughly 8x over the past year. The average worker sends about 30 percent more messages.
That sounds like adoption. The deeper story is what people do with the tool.
2. Work is moving into repeatable workflows
Structured workflows such as Projects and Custom GPTs increased 19x year to date. That suggests teams are packaging repeatable work into shareable assets, then scaling that behavior across functions.
3. Reasoning usage explodes when AI enters production systems
Average API reasoning token consumption per organization increased about 320x in the past 12 months. In plain terms, more companies route real processes through higher intensity model calls, which usually shows up when AI moves from individual assistance into operational integration.
Where growth concentrates
The report highlights rapid growth across industries, particularly in technology, healthcare, and manufacturing.
The geographic story is equally direct. Australia, Brazil, the Netherlands, and France each exceed 140 percent year-over-year growth in their business customer bases. International API customer growth has exceeded 70 percent over the past six months. Japan has the largest number of corporate API customers outside the United States.
Productivity gains exist, and they vary by behavior
Across surveyed enterprises, 75 percent of workers report faster or higher-quality output. Time saved averages 40-60 minutes per day. Heavy users report more than 10 hours per week.
This creates a management reality. Productivity gains concentrate among people who operationalize AI across many tasks, not among those who try it occasionally.
The frontier gap is the real headline
OpenAI reports a widening split inside enterprises. Frontier workers send 6x as many messages as the median employee. Frontier firms send 2x as many messages per seat as the median enterprise.
That gap signals competitive separation inside the same company. Some teams build an AI operating cadence. Other teams keep AI as an optional tool.
The pitfalls the data implies, even when teams feel busy
Enterprise AI programs often stall for predictable reasons.
A roadmap deck replaces operational instrumentation. Leadership observes activity but lacks evidence related to cycle time, error rates, throughput, and cost.
Projects grow, yet ownership stays ambiguous. Teams create Custom GPTs, then quality and policy controls drift across functions.
Usage expands, yet systems integration lags. When AI stays at the prompt layer, the organization captures convenience, not compounding operational advantage.
The report itself points to organizational readiness and implementation as the constraint, rather than model capability.
An operator playbook to turn usage into an operating advantage
Assign workflow ownership by function
Pick a business-critical workflow per function and name an owner who can change the workflow end-to-end. Example areas include customer support triage, pricing approvals, contract review, finance close, sales qualification, and incident response.
Instrument three metrics that leadership can govern
Usage depth. Messages per seat, Projects adoption, Custom GPT reuse.
Time and quality. Minutes saved per task class, rework rates, escalation rates, and defect rates.
Risk and control. Data handling compliance, policy hits, approval gates, and audit logs.
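The three metric families above can be sketched as a simple aggregation over per-seat usage records. This is an illustrative sketch only: the record fields and thresholds are hypothetical, not drawn from the report or from any OpenAI analytics API.

```python
from dataclasses import dataclass

@dataclass
class SeatUsage:
    # Hypothetical per-seat record; all field names are illustrative.
    seat_id: str
    messages: int          # usage depth
    uses_projects: bool    # usage depth
    minutes_saved: float   # time and quality
    tasks_reworked: int    # time and quality
    tasks_total: int       # time and quality
    policy_hits: int       # risk and control

def governance_metrics(seats: list[SeatUsage]) -> dict:
    """Roll up the three metric families leadership can review each cycle."""
    n = len(seats)
    total_tasks = sum(s.tasks_total for s in seats) or 1
    return {
        # Usage depth
        "messages_per_seat": sum(s.messages for s in seats) / n,
        "projects_adoption": sum(s.uses_projects for s in seats) / n,
        # Time and quality
        "avg_minutes_saved": sum(s.minutes_saved for s in seats) / n,
        "rework_rate": sum(s.tasks_reworked for s in seats) / total_tasks,
        # Risk and control
        "policy_hits": sum(s.policy_hits for s in seats),
    }

seats = [
    SeatUsage("a1", 120, True, 45.0, 2, 40, 0),
    SeatUsage("b2", 30, False, 10.0, 5, 25, 1),
]
print(governance_metrics(seats))
```

The point of the sketch is the shape, not the numbers: each family reduces to a handful of figures a leadership team can track quarter over quarter.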
Scale Projects and Custom GPTs where repeatable work dominates
Treat Projects as a packaging layer for knowledge, prompts, tools, and evaluation. Treat Custom GPTs as productized entry points for workflows. Promote the assets that demonstrate measurable lift, then retire the rest through a simple lifecycle policy.
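A simple lifecycle policy like the one described can be reduced to a single decision rule. The thresholds and names below are assumptions for illustration, not figures from the report:

```python
def lifecycle_decision(measured_lift_minutes: float, weekly_reuses: int) -> str:
    """Hypothetical promote/monitor/retire rule for a Project or Custom GPT.

    Thresholds are illustrative; a real policy would calibrate them
    against the organization's own instrumentation.
    """
    if measured_lift_minutes >= 15 and weekly_reuses >= 10:
        return "promote"   # invest: ownership, documentation, evaluation
    if weekly_reuses >= 3:
        return "monitor"   # keep measuring before deciding
    return "retire"        # archive to limit sprawl and policy drift
```

Even a rule this crude forces the useful question: which assets have measured lift at all, and which exist only because someone once built them.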
Study frontier behavior, then encode it into standard operating procedures
Find the teams and individuals already operating at frontier intensity inside your environment. Map their task patterns, tool choices, review habits, and handoff steps. Convert that behavior into templates, checklists, and training that spreads across the median.
The strategic takeaway
The report describes a transition. AI starts as a personal productivity tool. It becomes a production dependency when workflows, controls, and measurements mature together. The winners treat this as operating design, with clear ownership and instrumentation, then scale what works.

