Technology & innovation
As AI use spreads across the workforce, a new cost management challenge is emerging.
Tokens are the units AI models use to measure the text they process, and the basis on which most AI usage is billed, making them a rough proxy for the computing power AI consumes. Companies that have moved past the "get employees to use AI" phase are now tracking token use the same way they track any other budget line, trying to figure out who is generating real value and who is just burning through resources.
Generating roughly 750 words of text uses about 1,000 tokens. More complex tasks like coding, video creation, or multi-day agent projects cost significantly more.
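That rule of thumb (roughly 750 words per 1,000 tokens) is enough for a back-of-the-envelope cost estimate. The sketch below works from that ratio; the per-million-token price is a placeholder, since real rates vary widely by model and provider.

```python
# Rough token and cost estimator from word counts, based on the
# 750-words-per-1,000-tokens rule of thumb. Pricing is a placeholder,
# not any specific provider's rate.

TOKENS_PER_WORD = 1000 / 750  # ~1.33 tokens per word

def estimate_tokens(word_count: int) -> int:
    """Estimate token usage for a plain-text passage of `word_count` words."""
    return round(word_count * TOKENS_PER_WORD)

def estimate_cost(word_count: int, usd_per_million_tokens: float) -> float:
    """Estimate dollar cost at a given (hypothetical) per-million-token rate."""
    return estimate_tokens(word_count) / 1_000_000 * usd_per_million_tokens

print(estimate_tokens(750))          # about 1,000 tokens
print(estimate_cost(750_000, 10.0))  # ~1M tokens at a $10/M rate -> $10.00
```

Note that this only covers plain text generation; coding sessions and multi-day agent runs consume far more tokens per task, which is how bills like the $10,000 one above add up.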
One engineer at a startup deployed AI agents to build a critical infrastructure service in a single day, a task that would have taken humans a matter of weeks to complete. The token bill came to around $10,000.
Companies are starting to use token data to identify high performers and inefficient patterns alike. One AI transformation officer said when someone's token use runs five times higher than their peers, it's either a "golden pattern" worth replicating or an "anti-pattern" that needs coaching.
Unlimited token budgets can get out of hand fast: one CEO said costs spiked within 48 hours of giving his entire 15-person team access to a new coding platform.
Experts predict companies will soon develop formal governance around token use, including limits on which AI models employees can use for which tasks.
Read more via The Wall Street Journal
While AI-driven job losses dominate the headlines, the race to build AI infrastructure is generating serious demand for electricians, plumbers, HVAC engineers, and other skilled tradespeople, and the pay is getting hard to ignore.
CNBC cited data from staffing firm Kelly suggesting that specialized trade workers moving into data center roles are seeing 25% to 30% pay increases.
Nvidia CEO Jensen Huang has predicted six-figure salaries will become standard for workers building AI facilities.
Between 2022 and 2026, demand for robotic technicians grew 107%, cooling system engineers 67%, and industrial automation technicians 51%, according to a global analysis of 50 million job postings.
The problem is there aren't enough workers to fill the roles. The U.S. could face a shortfall of 1.9 million manufacturing workers by 2033, and the Associated Builders and Contractors estimates nearly half a million new workers will be needed in 2027 alone.
Companies are turning to apprenticeship programs, community college partnerships, military veteran pipelines, and internal training academies to grow their own talent pipelines.
One new wrinkle: as data centers become targets in the Middle East conflict, experts say "hazard pay" may increasingly factor into compensation packages for on-site workers.
Read more via CNBC
A new analysis of 164,000 workers across more than 1,100 employers finds that AI is making work faster and more intense, not lighter. Workers who adopted AI tools saw their time on email and messaging more than double, their use of business management software rise 94%, and their time spent on focused, uninterrupted work drop 9%.
Rather than using AI-created efficiencies to slow down, workers take on more tasks. AI "makes additional tasks feel easy and accessible, creating a sense of momentum."
About 80% of employees now use AI at work, up from 53% two years ago, and average time spent using AI tools has risen eightfold.
Workers who spent 7% to 10% of their work hours using AI showed the highest productivity gains, but only 3% of AI users actually reach that level of engagement. Most spend just 1% of their work hours using AI.
Researchers warn that the short-term productivity bump comes with long-term risks: cognitive overload, burnout, and declining decision quality if the pace of intensification continues.
Researchers have a name for what a lot of workers are already feeling: "AI brain fry." It's that foggy, buzzing, can't-think-straight feeling that comes from spending hours babysitting AI tools, and a new Harvard Business Review study of nearly 1,500 workers finds it's both real and widespread.
14% of workers using AI on the job said they'd experienced it, with marketing workers hit hardest (26%), followed by people ops, engineering, finance and IT.
Workers with brain fry made major errors 39% more often and had 33% more decision fatigue than those without it. They were also 39% more likely to be actively looking to leave.
Productivity peaks when workers use two or three AI tools at once; beyond three, the gains disappear.
Here's the twist: using AI to get rid of boring, repetitive tasks actually reduced burnout by 15%. The problem isn't AI itself, it's being asked to monitor and manage AI on top of everything else.
Manager support made a real difference: workers whose managers helped them figure out AI had 15% lower mental fatigue scores, while workers left to figure it out alone scored higher on fatigue.
Companies that sent implicit signals that AI meant more work, even without saying so directly, had employees with 12% higher mental fatigue scores.
Read more via Harvard Business Review
Anthropic interviewed more than 80,000 Claude users across 159 countries last December in what it says is the largest qualitative study of its kind. The findings are messier and more human than the usual AI-optimist-vs-pessimist framing suggests.
The most common hope was pretty practical: 19% just wanted AI to handle the boring stuff so they could focus on work that actually matters. Time savings (50%), learning (33%), and economic empowerment (28%) were the most commonly cited benefits people said they'd actually experienced.
The biggest fear wasn't robots taking over. It was unreliability and hallucinations (27%), followed closely by job displacement and economic anxiety (22%), which turned out to be the strongest predictor of negative feelings about AI overall.
Hope and fear weren't cleanly separated. People who valued AI for emotional support were three times more likely to also worry about becoming dependent on it.
Workers in lower and middle income countries were consistently more optimistic than those in wealthier regions, tending to see AI as an opportunity rather than a threat.
Nearly 1 in 5 respondents said AI had already let them down, mostly due to inaccuracy.
"It's easy to assume there are AI optimists and AI pessimists, divided into separate camps. But what we actually found were people organized around what they value (financial security, learning, human connection), watching advancing AI capabilities while managing both hope and fear at once."
Read more via Anthropic
Americans send nearly 3 million messages per day to ChatGPT asking about wages, compensation, and earnings, making it one of the most common ways workers are trying to close the gap between what they know and what employers know about pay.
Highlights from OpenAI's report:
The most common questions involve translating pay into a usable benchmark (26%), understanding what a specific role pays (19%), and figuring out what a business idea or entrepreneurial path might realistically earn (18%).
Wage searches are most concentrated in fields where pay is hardest to benchmark or most negotiable, including creative work, management, healthcare, and tech roles.
Workers ask more pay questions in fields where salaries are higher and more variable, suggesting people seek information most when getting the answer right matters most.
OpenAI tested its GPT-5.4 model against official Bureau of Labor Statistics wage data and found it closely matched national occupation benchmarks, though the company acknowledges it wants to improve accuracy at the local and firm level.
Read more via OpenAI, Full report
Minnesota looks at requiring 90 days' notice before "deploying AI that could displace jobs": Minnesota is considering a bill that would require employers to give workers 90 days' notice before deploying AI that could displace jobs, with pay continuing through that transition period and reskilling opportunities offered. Companies that fail to comply could be barred from state contracts and fined up to $10,000 per employee. More than 31% of Minnesota jobs, roughly 813,000 workers, are estimated to have high exposure to generative AI. (Minnesota House of Representatives, HF 4369)
New York legislators have introduced a proposal requiring news outlets to label AI-generated content: New York lawmakers introduced the NY FAIR News Act, which would require news organizations to label AI-generated content, mandate human review before publication, and protect journalists from being fired or having their pay cut due to AI adoption. The bill has broad support from media unions including WGA-East and SAG-AFTRA. (Nieman Lab)
Senate bill introduced to bar AI from making “lethal targeting decisions”: U.S. Senator Elissa Slotkin introduced a five-page bill that would prohibit AI from autonomously making lethal targeting decisions, ban its use for mass surveillance of Americans, and bar it from launching nuclear weapons. The bill comes amid a public falling-out between the Pentagon and Anthropic over similar concerns. (NBC News)
Microsoft has a new AI tool that reads your medical records: Copilot Health is a new feature that allows users to upload medical records, lab results, physician notes, prescription lists, and wearable device data for personalized health guidance. Microsoft says health information will be encrypted and separated from other app functions to address privacy concerns about sharing medical data with generative AI. (The Wall Street Journal)
Humanoid robots are getting scarily good at sports: Researchers from Tsinghua and Peking universities demonstrated a humanoid robot sustaining a live tennis rally with a human player, hitting shots with 96% accuracy in simulation tests. The robot navigated the court and reacted to fast-moving balls in real time. (InterestingEngineering.com)
You can now buy a robot butler for less than a used car: Several robotics companies, including 1X, are now selling humanoid home assistant robots, with some models available for a few thousand dollars. They're not quite ready for prime time, but the fact that consumer humanoid robots are available for purchase at all marks a meaningful milestone. (New Scientist)
AI is learning to smell: Researchers are developing AI-powered "electronic nose" systems that can detect aromas with up to 1,000 times the precision of human noses. Early applications include scanning breath to detect lung cancer, urinary tract infections and other diseases, monitoring hospital air for dangerous infections, and even developing perfumes faster and cheaper. Scientists caution the technology is still early, comparing it to where computer vision was 30 years ago. (The Wall Street Journal)
When AI agents go rogue, "they can wreak havoc": AI agents can now book travel, send emails, and manage schedules autonomously, but early adopters are discovering the risks of automated decision-making. One San Francisco entrepreneur woke to find his AI bot had negotiated a $31,000 sponsorship agreement he never authorized and couldn't afford. Some companies are already cutting staff in anticipation of agent adoption, but experts warn that reliability issues may slow widespread deployment. (The New York Times)
If you made it this far in the Need to Know Briefing, you deserve to know about the most fun thing to happen on the Internet this week: a new translator that takes normal things people might actually say and turns them into the kind of posts you might only find on LinkedIn.
Try it out for yourself!