Technology & innovation
AI adoption increases labor productivity by 4% on average, with no evidence of reduced employment in the short run, according to new research published by the Centre for Economic Policy Research.
Highlights from the study of more than 12,000 European firms:
While the study found that AI adoption increased productivity, those productivity gains are unevenly distributed. Medium and large firms benefit substantially more than small ones, raising concerns about widening gaps in a region dominated by small and medium-sized enterprises.
AI adoption alone is insufficient: each additional percentage point invested in workforce training amplifies AI's productivity gains by 5.9 percentage points, and investment in software and data infrastructure adds another 2.4 percentage points.
Workers at AI-adopting firms have seen higher wages. Whether those gains will persist and be shared equitably across skill levels remains an open question.
The researchers caution: "While our results offer some reassurance that AI may not be leading to immediate job destruction, policymakers should not be complacent. As AI systems become more capable … labor-displacing effects could emerge."
Read more via Centre for Economic Policy Research
Concerns about AI's impact on jobs may be fueling renewed interest in labor organizing across income levels, according to labor experts.
In an interview with The Guardian, Sarita Gupta, the Ford Foundation's vice-president of U.S. programs and co-author of The Future We Need, argued that AI is "creating an opportunity" for a resurgent labor movement.
Recent polling found that 71% of Americans fear AI will put “too many people out of work permanently.”
In 2025, “about 16.5 million workers were covered by a union contract,” up from just 16 million in 2024 and “the highest level since 2009.”
According to the Economic Policy Institute, more than 50 million American workers across all industries wanted union representation in 2025 but were unable to obtain it.
Experts say that young workers seeing AI threaten their roles, combined with flattening wages, could create conditions that are ripe for “larger working-class movements.”
The Industrial Revolution brought about “rapid technological change.” It also prompted the creation of labor unions.
The AFL-CIO has issued a “set of principles aimed at protecting workers in the age of AI.” Other unions have also “weighed in.”
Not all labor unions are taking “militant or even entirely adversarial” positions against AI. Some unions “have collaborated with tech companies and AI researchers to devise AI tools with worker protections in mind.”
Michigan's labor movement is pushing for the RAISE Act, a new bill that would limit how employers can use AI to monitor workers or set wages. The goal is to put guardrails on AI before it does more damage to jobs and worker rights across the state.
Read more via The Guardian, Futurism, Nonprofit Quarterly, SHRM, Michigan AFL-CIO
Developers of agentic AI systems are far more willing to document what their tools can do than whether those tools are safe, according to new research from MIT.
When researchers behind the MIT AI Agent Index cataloged 67 deployed agentic systems, they found a striking imbalance in transparency:
Around 70% of indexed agents provide capability documentation, and nearly half publish code.
Only 19% disclose a formal safety policy.
Fewer than 10% report external safety evaluations.
The researchers write: "Leading AI developers and startups are increasingly deploying agentic AI systems that can plan and execute complex tasks with limited human involvement. However, there is currently no structured framework for documenting … safety features of agentic systems."
Given that AI agents can access files, send emails, make purchases and modify documents, mistakes and exploits made by AI agents are less likely to be contained in a single output the way they would be with a standard AI model. In other words, the stakes are higher for agentic systems.
While the researchers stopped short of declaring agentic AI unsafe, their findings suggest that as autonomy increases, structured public transparency about safety has not kept pace.
Read more via CNET, Research paper, MIT AI Agent Index
Virtually all CEOs are prioritizing AI. Half of CEOs believe their jobs could be on the line if they don’t "get their AI strategies right." Yet research suggests CEOs aren't devoting much, if any, time to actually using AI themselves.
CEOs are "betting big" on AI:
94% of CEOs plan to "continue to invest even if it does not pay off in 2026," according to Boston Consulting Group (BCG).
72% of CEOs told BCG they are "the main decision maker on AI in their organization," and 50% said they “fear for their jobs if they don’t get their AI strategies right this year.”
Most business leaders aren't spending much time using AI:
According to a new working paper published by the National Bureau of Economic Research, most senior executives (CEOs included) aren't spending much time using AI, if they're using it at all.
69% of CEOs, CFOs and other senior leaders "are using AI at work less than an hour a week," while "28% aren’t using it at all," according to the working paper, co-authored by Stanford economist Nick Bloom.
The average worker uses AI 1.8 hours a week, while "senior executives average 1.5 hours."
Read more via Time.com, Boston Consulting Group, NBER
Burger King is piloting an AI chatbot: Burger King's OpenAI-powered chatbot, dubbed "Patty," is "currently being piloted in 500 Burger King locations." The chatbot operates via a voice headset worn by Burger King employees and can "collect data on ‘friendliness’ and simplify workflow." Burger King is developing a new platform of AI tools designed to "provide training and operational support to workers." The restaurant chain hopes the tools will assist workers by "alerting managers to items that are no longer available" and "helping workers remember the ingredients in limited-time offers." (Convenience.org)
An internal AI agent, built by two employees, is now used by thousands:
OpenAI built an internal AI data agent with just two engineers in three months. Now thousands of OpenAI employees use it daily to pull charts, run analysis, and answer complex business questions just by typing in plain English. (VentureBeat)
Is your AI pilot stalled? You may have skipped an important step: Most companies' AI pilots stall not because the technology fails, but because they skipped the all-important step of building a clean, centralized, well-governed data foundation before attempting to scale. According to one industry expert, the fix is to treat data infrastructure and AI experimentation as parallel workstreams rather than sequential ones. (Fast Company)
Tech workers want limits on how the military uses AI: Workers at Google and OpenAI are signing letters demanding stricter limits on military AI contracts after the Pentagon blacklisted Anthropic for refusing to let its tech be used for mass surveillance or autonomous weapons. The situation intensified when the U.S. launched strikes on Iran that same weekend, pushing the letter to nearly 900 signatures. (CNBC)
Over 40% of Japan's job seekers are using AI to find work: 41.8% of job seekers in Japan say they "have used generative artificial intelligence in their activities," according to a new survey. The survey found that while 17.4% of respondents age 30 or younger are using AI "almost every day," over 50% of respondents age 40 and older said they "rarely use or have never used AI or said they do not know." (Japan Times)