Technology & innovation
A new status game called "tokenmaxxing" has taken hold at some tech companies. Employees compete to consume as many AI tokens as possible, sometimes spending thousands of dollars a day, as employers increasingly tie AI usage to performance reviews.
One OpenAI engineer processed enough text to fill Wikipedia 33 times in a single week. A single Anthropic user ran up a $150,000 Claude Code bill in one month.
Meta, OpenAI and other companies have introduced internal leaderboards tracking employee token consumption, and some managers are rewarding heavy AI users and flagging those who lag behind.
The explosion in token use is being driven by agentic coding tools that run unsupervised for hours, spawning subagents and processing billions of tokens while workers sleep.
Anthropic more than doubled its revenue projections in two months this year largely because of the growth of its agentic coding tools. OpenAI said its Codex tool tripled weekly active users since the start of the year.
The obvious problem: leaderboards measure quantity, not quality. Several tech workers told the Times they worried colleagues were burning through billions of tokens largely for bragging rights.
Read more via The New York Times
With just a dozen AI tools and $20,000, Matthew Gallagher and his brother launched telehealth provider Medvi back in September 2024. In its first full year, the company generated $401 million in revenue and is profitable.
Gallagher used tools including ChatGPT, Claude, Grok, Midjourney and Runway to build and run the business, outsourcing medical, pharmacy and compliance functions to third-party telehealth platforms.
The model works partly because Gallagher isn't building the underlying technology: he's using AI to run a middleman business in an established market, letting other companies handle doctors, prescriptions and shipping.
Early shortcuts included AI-generated before-and-after photos and a customer service chatbot that sometimes hallucinated drug prices, which Gallagher chose to honor. He has since replaced some of those elements with real customer content.
Despite generating $65 million in net profit last year, Gallagher has no plans to hire.
OpenAI CEO Sam Altman, who had predicted a one-person billion-dollar company would eventually emerge, said upon hearing about Medvi that he would "like to meet the guy."
As AI makes it easier for candidates to submit polished applications at scale, employers are finding resumes and cover letters increasingly unreliable and are redesigning their hiring processes to put more emphasis on in-person interaction and practical assessment. More than 40% of employers have extended probation periods because they can no longer reliably assess candidates' real skills during the application process, according to data from HR platform Deel.
L'Oréal has declared interviews "AI-free zones," requiring at least one face-to-face conversation before any hire. EY has trained more than 20,000 interviewers to probe candidates' thinking rather than their prepared answers, looking for how people make decisions rather than what they've done.
The problem cuts both ways: many candidates say they turned to AI partly because automated screening was rejecting their applications with little human review, creating a cycle of AI generating applications and AI filtering them out.
Some employers are adding friction early to filter out low-commitment applicants. An advertising agency that asked candidates to submit a 60-second video saw 40% drop out immediately, dramatically improving the quality of the remaining pool.
Not everyone is banning AI from the process. McKinsey is piloting a case study that requires candidates to use its own AI tool, testing whether applicants can use the technology effectively rather than just lean on it.
"AI has widened the gap between how candidates present themselves and how they perform," said one HR platform executive. "Employers are telling us they can only understand real capability once someone starts the job."
Read more via Financial Times
New research from Forrester finds that despite widespread AI deployments, employees' actual understanding of how to use the tools has barely moved. The firm calls the results "alarming" and places the blame squarely on employers rather than workers.
Understanding of prompt engineering, a core skill for using tools like Microsoft 365 Copilot and Google Workspace, grew by just 4 percentage points over the past year, from 22% to 26%.
Only half of organizations that have deployed AI applications offer any training for non-technical employees. "It's you, the employer, who hasn't yet cultivated a learning and engagement environment sufficient to helping employees attain these skills," said Forrester's lead researcher.
Workers with low AI proficiency often avoid the tools entirely or use them incorrectly, creating a net productivity loss as they redo AI-generated work. Many also don't know when to question AI outputs or how to use the tools ethically.
The gap between leadership expectations and employee reality is stark: 96% of C-suite leaders expected AI to increase output, while 77% of employees said AI tools actually increased their workload, according to a separate Culture Amp report.
Read more via HR Dive, Forrester
A new ISACA survey of 681 digital trust professionals in Europe found that 59% didn't know how quickly their organization could stop an AI system during a security incident, and only 21% said they could do so within 30 minutes. The findings point to a widening gap between how quickly companies are deploying AI and how prepared they are to govern or control it.
Fewer than half of respondents (42%) said they were confident their organization could investigate and explain a serious AI incident to leadership or regulators. Only 11% were completely confident.
One in five respondents didn't know who would be ultimately accountable if AI caused harm, and only 38% pointed to the board or an executive.
A third of respondents said their employer didn't require staff to disclose when AI had been used in work products, limiting visibility into how deeply AI is shaping decisions and outputs.
The findings arrive as the EU AI Act tightens requirements around explainability and accountability, meaning companies may soon need not just technical controls but documented processes and designated responsible parties.
Read more via IT Brief
Anthropic accidentally leaked part of Claude Code's internal source code: The AI startup confirmed Tuesday that a "release packaging issue caused by human error" exposed part of the source code for its popular coding assistant, though the company said no customer data or credentials were involved. The leaked code quickly spread online, with one post linking to it accumulating more than 21 million views. The incident was Anthropic's second data blunder in under a week, following a separate incident in which descriptions of an upcoming AI model were found in a publicly accessible data cache. (CNBC, The Wall Street Journal)
An AI machine in China is sorting used clothes by fabric composition 50 times faster than a human worker: A Chinese recycling company called DataBeyond has developed a machine that can process two tons of textile waste per hour, a job that takes two human workers two days at lower accuracy. The technology, named one of Time magazine's Best Inventions of 2025, has already cut the share of textiles sent to landfill or incineration at one facility from 50% to 30%. The facility's sales director said the ultimate goal is a fully automated "dark factory" running 24 hours without human workers. (Associated Press)
OpenAI shut down Sora, its AI video generation tool: The company abruptly pulled the plug on Sora, its hyped video-generation product, just weeks before it was set to begin licensing the technology to Hollywood studios including Disney, which had invested $1 billion in OpenAI partly based on the promise of the product. The decision caught Disney executives off guard, with many learning about it less than an hour before the announcement. The core problem: Sora was consuming too many AI chips and wasn't profitable, and OpenAI needed those computing resources for a new model and its push into agentic AI tools for enterprise customers, where it has been trailing rival Anthropic. CEO Sam Altman told staff the Sora team will now focus on longer-term projects including robotics. (The Wall Street Journal)