
While the top 1% of Meta’s AI engineers make a killing, the rest find their jobs hanging by a thread
When Meta cut around 600 roles across its AI division in October 2025, including teams inside the historic FAIR lab that created PyTorch, it sent a clear signal about how frontier AI talent markets really work.
On one side sits a small group of engineers and researchers who receive multimillion-dollar offers from top labs. On the other side sit hundreds of colleagues who discover that their work is now “non-core” as leadership refocuses on superintelligence bets.
The gap between those two groups is growing.
Meta’s restructuring shows the new AI hierarchy
Meta’s cuts hit FAIR, product AI, and AI infrastructure teams, while the new TBD Lab, focused on next-generation foundation models, continued to hire.
In his memo, Chief AI Officer Alexandr Wang framed the layoffs as a way to create smaller, more “load-bearing” teams where each person has a wider scope and impact.
A few patterns stand out.
- Frontier model work sits at the top of the hierarchy
- Researchers and architects with rare, proven impact remain protected
- Many applied, infrastructure, and “legacy” research roles carry more risk
Meta is not alone. Across the ecosystem, labs are willing to pay aggressively for the very best people working on the very best models, even while they reduce overall AI headcount.
Why top labs overpay the top 1%
In large model races, small performance gains compound over time.
A model that is 1% better on key benchmarks today usually reflects a better architecture, cleaner data curation, or a smarter training strategy. Those advantages tend to carry over and compound across future generations of the system. The team that holds that edge for two or three years can steer the direction of the entire product line and, in some cases, the market.
Executives inside these labs think in terms of:
- Lead on key benchmarks for long enough, and you own developer mindshare
- Own developer mindshare, and you shape the ecosystem around your tooling
- Shape the ecosystem, and you pull more talent and data into your orbit
In that context, paying a small group of researchers two or three million dollars per year becomes a rational trade for leadership. The marginal cost of a few elite packages is tiny compared with the value of owning the top model in a category.
The result is a barbell structure.
- At one end, a very small group of elite researchers and engineers who work on core model breakthroughs and set technical direction
- At the other end, a larger group of applied and support staff whose work is easier to reassign, automate, or consolidate
When pressure arrives, it lands on the second group.
The model that is 1% better today
Scaling laws and industry experience both suggest that performance improvements do not arrive in a smooth, linear way. The teams that continuously push models a little further on context length, reliability, or multimodal capability create a gap that grows wider over time.
That is why leadership teams talk about:
- A model that reduces hallucinations by a few percentage points
- A recommendation system that lifts engagement by low single-digit percentages
- An inference stack that cuts unit economics just enough to support a bigger rollout
These numbers look small in isolation. Over twelve to twenty-four months, they compound into:
- Higher usage and retention
- Lower marginal costs
- More training data and user feedback
- More capital support for the next training run
This feedback loop defines the race toward advanced general systems. The group that sustains even a modest performance edge for several years can pull far ahead.
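A back-of-envelope illustration of that compounding (the 2 percent monthly figure below is an assumption chosen for the sketch, not a number reported by Meta or any lab):

\[
(1 + 0.02)^{24} \approx 1.61
\]

An edge that looks like two percent in any single month becomes a gap of roughly 60 percent over two years, before counting the extra data, users, and capital it attracts.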
In that race, the incentive is simple.
Top labs pay top dollar for the absolute best talent, even when it means taking costs out elsewhere.
Why costs shift onto “the rest” of the org
Budgets are not infinite, even at global platforms.
Meta, for example, has poured billions into AI infrastructure and has invested more than ten billion dollars into external partners such as Scale AI to accelerate its efforts.
When boards ask for both “best in class AI” and disciplined spending, executives reach for a familiar playbook:
- Protect and over-invest in a small, talent-dense core
- Consolidate overlapping teams and legacy research groups
- Automate or standardize parts of the stack that feel more like infrastructure than innovation
This is how companies can simultaneously:
- Announce hundreds of AI layoffs
- Continue to recruit senior researchers from OpenAI, Google DeepMind, and Apple with multimillion-dollar offers
- Describe the changes as an efficiency and focus initiative rather than a retreat from AI
For most AI engineers, this creates a new reality. Talent markets value:
- Direct contribution to model quality or safety
- Clear ownership of critical infrastructure at scale
- Rare combinations of skills, such as systems engineering plus frontier model understanding
Roles that sit far from those leverage points face more volatility.
What this means for AI engineers and leaders
For individual contributors, the message is uncomfortable but clear.
- Skills converge toward two clusters that remain resilient:
  - Frontier research and architecture
  - Deep systems, infrastructure, or safety work tightly coupled to those models
- Generic “AI developer” profiles carry more risk as commoditized tooling improves
- Work that can be performed by a larger pool of engineers in many locations attracts more cost pressure
For CEOs, CHROs, and CTOs, the bigger lesson lies in how they design teams.
- Use over-investment intentionally for a very small group of people who genuinely shift the frontier
- Build stable, skilled product and platform teams around those cores, with clear charters and ownership
- Review every “middle” role that neither advances the model frontier nor owns critical delivery
The aim is a team shape where:
- A small group defines the model edge
- A broader group builds durable products, infrastructure, and integrations
- Everyone can explain their direct leverage on revenue, cost, safety, or long-term advantage
The new social contract of AI work
AI once looked like a simple growth story. Demand grew, headcount followed, and the narrative of lifetime security for machine learning engineers took hold.
Meta’s restructuring shows a more complex picture.
Top labs will continue to compete fiercely for the very best people who can move model quality forward. They will keep paying at levels that feel extraordinary for a small group. At the same time, they will treat most other roles as variable, even inside famous research units.
For leaders, the practical question is not whether this pattern continues. The question is how to build organizations that acknowledge it honestly:
- Concentrate outsized compensation where it truly changes outcomes
- Offer clear growth paths, reskilling, and internal mobility for strong performers outside the frontier core
- Communicate that “AI job security” depends on proximity to real leverage, not just the label on a team
The model that is one step ahead today shapes the market two or three years from now. The people who build that model capture most of the upside. The rest of the organization lives with the tradeoffs that make that possible.

