Speaking in Beijing on Tuesday, Ekkehard Ernst, the International Labour Organization's chief macroeconomist, challenged the prevailing narrative on AI and employment, arguing that "algorithmic collusion" poses a graver risk to workers than mass job displacement. The claim warrants scrutiny: it reframes a genuine concern, but the ILO has not yet published the data to support it.
Dispatch
BEIJING, 28 March 2026 — The South China Morning Post reported Ernst's remarks at an unspecified Beijing event. Ernst made two distinct claims: first, that the "robot apocalypse" narrative has been oversold, and second, that AI's real threat lies in wage erosion through algorithmic coordination.
The threat to employment posed by artificial intelligence was not a "robot apocalypse" that would steal jobs, but "algorithmic collusion" that could quietly erode wages and workplace safety.
Ekkehard Ernst, International Labour Organization, reported by South China Morning Post, 28 March 2026

Ernst cited an Anthropic study released in March 2026 to support his first point—that real-world AI adoption lags theoretical capability. He also offered comparative data on youth unemployment:
I don't think that we are anywhere close to major disruption of labour markets.
Ekkehard Ernst, reported by South China Morning Post, 28 March 2026
Ernst compared China's youth jobless rates (16.1 per cent for 16- to 24-year-olds, 7.2 per cent for 25- to 29-year-olds) to European figures exceeding 20 per cent, arguing that China's figures were not exceptional and that economic slowdown, not AI, drove youth unemployment [1].
No contrasting source has yet offered a detailed rebuttal. The story remains largely uncontested in the public record, a silence that is itself notable: either labour economists find the claim plausible, or it has not yet drawn serious scrutiny.
---
What's Really Happening
---

The Real Stakes
For workers: If Ernst is correct, the danger is not a jobs cliff but a slow wage squeeze in sectors where AI is already embedded—software, customer service, logistics, administrative support. Confirmed: AI is being deployed in hiring and wage-setting systems in these sectors [1]. Projected: if adoption accelerates without regulation, entry-level wages in high-AI-adoption roles could compress by 10–20 per cent over five years, according to some labour economists (though no single study has yet quantified this precisely).
For policymakers: Ernst's framing shifts the policy debate from "How do we retrain displaced workers?" to "How do we prevent wage cartels disguised as algorithms?" This is a harder problem. Job displacement can be addressed through education and transition support. Wage suppression through algorithmic coordination requires either transparency mandates (forcing companies to disclose how AI sets pay) or antitrust enforcement (treating coordinated algorithm behavior as collusion). Neither is currently standard practice in most jurisdictions. China's government, which has been aggressive on tech regulation, has not yet targeted algorithmic wage-setting. The European Union's AI Act (2024) does not explicitly address this scenario [2].
For employers: The unstated implication is that companies using AI for hiring and compensation already have a competitive advantage—lower wage bills, less negotiating friction. If regulation follows Ernst's logic, that advantage disappears. Companies that have already embedded AI into compensation systems face potential forced audits or redesigns.
---
Industry Context
The tech sector's own experience with AI hiring tools offers a cautionary precedent. Amazon's experimental recruiting algorithm, developed from 2014, was found to systematically downgrade female candidates [3]. The system was not deliberately programmed to discriminate; it learned patterns from historical hiring data that reflected human bias. Amazon scrapped it before it ever drove production hiring decisions. The lesson: algorithmic systems can suppress opportunities (and, by extension, wages) without explicit instruction to do so.
Ernst's concern about wage suppression through AI is not new—it echoes warnings from economists including Daron Acemoglu (MIT) and others who have argued that "so-so automation" (technologies that displace workers without delivering productivity gains large enough to create new high-wage roles) poses a greater long-term threat than dramatic job loss [4]. What is new is the framing of the mechanism: not robots replacing workers, but algorithms suppressing what workers can earn.
---
Impact Radar
---
Watch For
1. ILO publishes detailed research on algorithmic wage suppression. Ernst's remarks are commentary; the ILO's credibility depends on releasing peer-reviewed data. If no study appears within 12 months, his hypothesis remains untested. Watch for publication in the ILO's World Employment and Social Outlook series or its research portal.
2. A major labour lawsuit targets algorithmic wage-setting. If a worker or workers' group sues an employer for using AI to suppress wages below market rates, it would be the first legal test of Ernst's concern. No such case has been filed as of March 2026. A filing would signal that lawyers and unions view this as a credible threat.
3. Regulatory action on algorithmic transparency in hiring and compensation. The EU AI Act requires transparency for high-risk systems, but does not explicitly cover wage-setting. If the EU, UK, or a major economy introduces explicit rules requiring companies to disclose how AI influences compensation, it validates Ernst's framing and shifts the policy baseline.
---
Bottom Line
Ernst has reframed an important question—from "Will AI destroy jobs?" to "Will AI suppress wages?"—but he has not yet provided the evidence to support his answer. The implementation gap he cites is real and well-documented. The wage suppression risk he describes is plausible and worth monitoring, but it remains a hypothesis, not a finding. Policymakers and investors should take the concern seriously and demand that the ILO publish the data. Until then, Ernst's warning is a useful thought experiment, not a diagnosis.
---