Technology & innovation
A large-scale experiment published in Harvard Business Review found that anthropomorphizing AI, including giving it a name, job title, or place on an org chart, reduced accountability, increased escalation, and lowered the quality of human review.
When AI was framed as an employee rather than a tool, personal accountability among managers fell by 9 percentage points, while accountability attributed to the AI rose by 8 points.
Managers reviewing work framed as coming from an AI employee caught 18% fewer errors than those reviewing work framed as coming from an AI tool.
"If you want people to feel like they will lose their job to AI, or can be easily replaced by AI, then put it on the org chart."
Requests for additional managerial review increased by 44% under the AI employee framing, suggesting the designation undermines rather than builds reviewer confidence.
Managers in companies that framed AI as a teammate were 13% more likely to report uncertainty about their professional identity and 7% more likely to express concern about job security.
Anthropomorphizing AI did not meaningfully increase adoption intent; adoption followed managerial encouragement more than framing.
Read more via Harvard Business Review
74% of companies report that candidates are now using AI in their job search, while most organizations are still working to scale AI effectively across their own hiring workflows, according to a survey of more than 400 U.S. talent acquisition leaders by iCIMS and Aptitude Research.
69% of companies report using AI in some capacity in talent acquisition, but only 18% say they are using it broadly across hiring processes.
Screening is the most widely adopted use case (58%), followed by candidate communication (54%), assessments (50%), and sourcing (46%).
When conflicts arise between AI recommendations and human judgment, recruiters override AI in 58% of organizations.
82% of companies say transparency and explainability in AI systems are important, yet 45% report they do not have a formal AI governance framework in place.
46% of companies say they are using or planning to use agentic AI to support talent acquisition.
Read more via iCIMS
"Skillfishing," or applicants using AI tools to fake skills and game applicant tracking systems, is making candidate screening harder, and employers are responding with more rigorous interview processes and, in some cases, a return to in-person assessment.
Generative AI tools have made it easier than ever for applicants to load resumes with the right keywords, creating what one recruiter called an "explosion of convenience" that has complicated screening for employers.
Skills verification questions, including mini-cases and situational prompts that ask candidates to demonstrate their abilities early in the process, are becoming more common in pre-screen interviews.
Some companies have moved back to in-person interviews as a validation measure; probationary periods and temporary contracts may also make a return.
One management professor cautioned that probationary periods are not an ideal solution and should include clear expectations, regular feedback, and support for minor skills gaps.
Read more via HR Dive
As employers roll out AI use requirements, employment attorneys are warning HR teams to prepare for religious objection claims, an area of law with little case precedent and significant legal exposure.
Religious discrimination claims have surged in recent years, accelerated first by COVID-era vaccine and masking mandates, and attorneys say AI mandates are now triggering a similar wave.
Religious objections to AI tend to fall into two categories: concerns about AI's environmental impact and concerns about the loss of human autonomy, particularly with agentic AI.
Employees do not need backing from an organized religion for a claim to be legally valid; courts have been consistently reluctant to challenge the sincerity of a stated religious belief.
"It bears repeating that there may be some requests from people who share those beliefs out of left field, and we've got to treat those seriously."
Following the Supreme Court's 2023 Groff v. DeJoy decision, employers face a higher burden to show that an accommodation poses undue hardship, defined as "substantial in the overall context of the employer's business."
Attorneys advise HR teams to audit their accommodation policies, designate a consistent internal point of contact for requests, and document every step of the evaluation process.
The Trump administration, which has taken a hands-off approach to AI regulation since taking office, is now discussing an executive order that would create a government working group to examine oversight procedures for new AI models, including a formal review process before public release, according to U.S. officials.
The policy shift was triggered in part by Anthropic's announcement of a new model called Mythos, which the company described as too powerful to release publicly due to its ability to identify software security vulnerabilities.
White House officials briefed executives from Anthropic, Google, and OpenAI on the discussions last week; a review process being developed in Britain, which requires AI models to meet certain safety standards, is one possible model.
The administration is also evaluating whether new AI models could yield capabilities useful to the Pentagon and U.S. intelligence agencies; some officials are pushing for a review system that would give the government first access to models without blocking their release.
Some tech executives have pushed back, arguing that government oversight will slow U.S. innovation relative to China.
Read more via The New York Times
As corporate AI spending rises, employers are monitoring employee AI activity in unprecedented detail, but most still can't demonstrate a clear return on investment, according to a 2026 survey of 100 senior enterprise AI leaders by ModelOp.
More than two-thirds of enterprises rely on estimates like time saved or projected cost reductions rather than measured financial results to assess AI's ROI, a gap ModelOp calls the "AI value illusion."
64% of companies say AI is driving innovation, but only 39% report a measurable impact on earnings, according to McKinsey.
"Tokenmaxxing" has emerged as employees race to inflate AI usage metrics to signal productivity, but experts warn that prompt volume is "a very poor proxy for productivity."
It is easier to measure AI agents than human workers; Salesforce reported its platform generated 2.4 billion AI "work units," with AI agents handling 129 million customer service tasks in a single quarter.
Meta is testing internal systems to track employee mouse movements, clicks, and keystrokes, ostensibly to train AI systems rather than evaluate individual performance.
Read more via CNBC
A commentary from Yale's Chief Executive Leadership Institute argues that the debate over AI-driven job destruction is being framed incorrectly: the disruption is not showing up as mass layoffs but as a steady narrowing of entry-level opportunities, with recent college graduates bearing the brunt.
Unemployment among recent graduates has climbed to nearly 6%, rising twice as fast as the rest of the workforce since 2022; among computer science and computer engineering graduates specifically, unemployment now runs at 7.0% and 7.8% respectively.
A Stanford study found a 16% decline in early-career employment across the most AI-exposed occupations since late 2022; software development job postings have fallen 53% over the same period, according to Indeed.
Hiring has slowed to levels last seen in 2010, when unemployment was nearly 10%; companies are not firing at scale but are getting more output from existing workers, reducing the need for new hires.
Only 10% of employers surveyed said graduates were sufficiently prepared for AI-enabled workplaces; critical thinking and complex problem-solving ranked as the most sought-after capabilities by a wide margin.
The share of U.S. workers who say it is a good time to find a job has fallen from roughly 70% in 2022 to just 28% recently, with college graduates now more pessimistic than those without degrees.
"The greatest risk will not be a sudden wave of layoffs. It will be a labor market in which fewer entry-level jobs are created, making it harder for workers to gain experience and advance over time."
Read more via Yale School of Management
Nvidia CEO Jensen Huang highlighted a new partnership with Corning as evidence that the AI infrastructure buildout is creating demand well beyond the technology sector, including for electricians, construction workers, and data center specialists.
As part of the deal, Corning will increase optical manufacturing capacity in the U.S. tenfold, building three new facilities in Texas and North Carolina that the company says will create more than 3,000 jobs.
Huang said the next generation of AI infrastructure will require vast amounts of optical connectivity as computing demands outpace what copper wires can handle.
"We're going through the single largest infrastructure buildout in human history. Artificial intelligence is going to become fundamental infrastructure all over the world, and surely here in the United States."
A Bristol Myers Squibb facility in Devens, Mass., was the only U.S. manufacturer recognized this year by the World Economic Forum and McKinsey for cutting-edge technology adoption, a distinction that highlights how far most American factories still have to go.
Of the 223 factories that have made the World Economic Forum's Global Lighthouse Network list since 2018, 14 have been in the United States; 99 are in China.
At the Devens plant, AI monitors variables like temperature, oxygen levels, and pH during drug production and alerts technicians to problems in real time, boosting drug output for clinical trials and commercial use by about 40%.
"China is scaling faster," said one McKinsey partner involved in the initiative. "They have technologists in the factories — hundreds of them — while in the U.S. we're competing for that same talent with Silicon Valley."
Bristol Myers Squibb is also cutting more than 1,000 positions and trimming $2 billion in costs by the end of 2027, with the CEO acknowledging that AI could adversely affect some employees.
AI-generated fitness influencers are selling impossible results on social media. A BBC investigation found misleading fitness ads featuring AI-generated characters promising transformations experts call scientifically implausible, including losing 40 pounds in 28 days and looking "20 years younger" in a month. The fake instructors were created to sell app subscriptions, and many ads failed to disclose that the people featured weren't real. One AI expert described the current landscape as a "wild west" in terms of regulation. (BBC)
AI is also being used to impersonate struggling small businesses online. An ABC News investigation identified dozens of online retailers using AI-generated images, videos, and backstories to pose as retiring craftspeople or mom-and-pop shops closing after decades in business, then shipping low-quality goods from overseas. Experts say the scale and speed at which these sites can be created and taken down makes them difficult to police. (ABC News)
A Chinese court ruled that companies cannot fire workers simply to replace them with AI. The Hangzhou Intermediate People's Court found that a tech firm illegally terminated a quality assurance employee after automating his role, ruling that AI implementation does not meet the legal standard for contract termination. The decision builds on a similar ruling from December. (Fortune)
Google DeepMind employees in the UK have voted to unionize over the company's military AI contracts. Workers cited Google's removal of a pledge not to use AI for weapons development and surveillance as the catalyst and are seeking to block the lab's technology from being used by the U.S. and Israeli militaries. The effort follows a broader Pentagon deal with seven AI companies, including Google, to use their models on classified networks. (Wired)