Technology & innovation
Leading AI models aren't as good at offering an "unbiased, outside perspective" as many leaders assume. New research published in Harvard Business Review tested leading AI models on thousands of strategic decisions and found the models consistently gravitate toward whatever sounds good in modern business culture rather than what makes sense for a specific situation. The researchers call it "trendslop."
Across almost every model tested, including ChatGPT, Claude, Gemini, and others, AI consistently favored differentiation over cost leadership, augmentation over automation, and long-term thinking over short-term, regardless of context.
"Leaders might assume that LLMs are able to offer a kind of unbiased, outside perspective … Unfortunately, that might be a mistake … On strategy, LLMs might be more akin to a freshly minted MBA or junior consultant, parroting what's popular rather than what's right for a particular situation."
Better prompting didn't fix it, and neither did adding more company context. The biases persisted even when researchers provided detailed industry-specific scenarios.
AI models are trained on internet text that reflects popular business culture, so they essentially predict the most socially desirable answer rather than the most strategically sound one.
Read more via Harvard Business Review
While 83% of HR decision-makers say AI helps employees work faster, 67% say it is also "creating new points of friction and mistrust" between employers and employees, according to a new MetLife survey.
61% of employees are worried about the ethical and safety risks of AI, including bias and lack of accountability. 59% fear it will make their jobs obsolete, and 24% feel they are competing with AI at work.
"Workslop," defined as AI-generated content that looks polished but lacks substance, is adding to the friction. 53% of workers admit to sharing "workslop," while 40% say they've received it from colleagues in the past month.
"It shifts the burden from the sender to the receiver," said one researcher, making collaboration harder and eroding trust between coworkers.
Read more via CNBC
AI use among physicians jumped from 47% to 63% in less than a year, with one in three now using it daily, according to a new Doximity survey of more than 3,100 U.S. physicians.
Only 5% of U.S. physicians said they have no interest in using AI at all.
The most common uses are literature search (35%), voice-based documentation and AI scribes (29%), and administrative tasks like drafting prior authorization letters and summarizing patient records.
75% of physician AI users said it has already reduced their administrative workload and improved job satisfaction, and 69% said it has contributed to better patient care and outcomes.
90% of physicians believe AI has the potential to reduce "pajama time," the hours spent on documentation after hours at home. Nearly a quarter say it already has.
Despite the enthusiasm, 71% cited accuracy and reliability as their top concern, with legal uncertainty, patient privacy and ethical considerations also ranking high.
Read more via Fierce Healthcare
After years of hype and heavy investment, several recent surveys are pointing in the same direction: workplace AI use is not growing the way the industry expected, and in some measures is actually declining.
U.S. Census Bureau data shows the share of large companies using AI to produce goods and services ticked down from 12% to 11% between surveys. Among mid-sized companies, the share reporting no AI use in the prior two weeks rose from 74% in March to 81% in the most recent survey.
A Stanford economist tracking generative AI use at work found usage dropped from 46% of respondents in June to 37% by September.
Executives are increasingly pointing to "AI fatigue" among employees as a factor, and a December 2024 EY survey found more than half of senior executives felt they were failing in their role of supporting AI adoption at their companies.
The stakes are high: the industry is expected to spend $5 trillion on AI infrastructure through 2030, a figure that requires significant growth in both enterprise and consumer AI revenue to justify.
Read more via Futurism
The Department of Labor launched a free AI literacy course delivered entirely by text message: Workers can enroll by texting "READY" to 20202 and complete the seven-day course in about 10 minutes a day, no laptop or internet connection required. The "Make America AI-Ready" initiative was developed in partnership with education technology company Arist as part of the Trump administration's broader AI workforce strategy. (U.S. Department of Labor)
A federal court ruling is a wake-up call for employees using AI at work: A federal judge in New York ruled in February that a defendant's searches on an AI platform were not protected by attorney-client privilege or work product doctrine, because the communications were made to a third party without the involvement of a lawyer. Legal experts say the ruling is a reminder that sensitive information shared with AI tools is not confidential, and that businesses should move quickly to establish AI acceptable use policies to limit legal and reputational exposure. (National Law Review)
A healthcare AI platform is letting doctors ask complex questions in plain English: Lumeris expanded its "Tom" primary care AI platform with a new feature called "Ask Tom," which pulls together clinical, claims, pharmacy and social determinants data and lets health system leaders pose complex analytical questions and get immediate answers and visualizations. The launch comes as nearly 100 million Americans lack access to primary care and the U.S. faces a projected shortage of nearly 90,000 physicians over the next decade. (Fierce Healthcare)