Technology & innovation
For anyone who hasn't been paying attention, here's a brief rundown of this week's drama involving Anthropic and the Trump administration.
The backstory: Anthropic held a $200 million Pentagon contract and was the only AI company with its model deployed on classified military networks. The dispute ignited after Claude was allegedly used (without Anthropic's prior knowledge) during a raid that captured Venezuelan President Nicolás Maduro.
The ultimatum: Defense Secretary Pete Hegseth gave Anthropic a hard deadline to grant the military unrestricted, "all lawful use" access to Claude or face consequences.
The refusal: Anthropic refused, saying it could not allow Claude to be used for fully autonomous weapons or mass domestic surveillance of Americans. Anthropic argued that its AI models aren't reliable enough without human oversight.
The fallout: When the deadline passed, Trump ordered all federal agencies to "immediately cease" using Anthropic products, and Anthropic was designated a "supply chain risk to national security."
Anthropic defends itself: Anthropic called the moves "retaliatory and punitive" and vowed to challenge the supply chain risk designation in court.
OpenAI's involvement: OpenAI then announced a deal with the Pentagon to deploy its models on classified networks. However, OpenAI CEO Sam Altman earlier said he agreed with Anthropic's red lines, prompting some to question how different the two deals actually are.
How is the AI industry reacting? Tech workers from OpenAI and Google signed an open letter in support of Anthropic's stance, while NVIDIA's Jensen Huang took a more neutral position.
What does the drama mean for the AI sector? Experts say the standoff between Anthropic and the federal government has set a potentially chilling precedent for how the government can pressure private AI companies.
Read more via CBS News, CNBC, NPR, CNN, ABC News, TechPolicy.Press
There's no shortage of employers explaining away layoffs by suggesting AI is to blame. But some people “wonder whether that explanation captures the full picture.”
Challenger, Gray & Christmas says employers cited AI in "announcements of more than 50,000 layoffs in 2025."
Amazon, Pinterest and Hewlett-Packard are among the firms that have mentioned AI with respect to recent job cuts.
But some experts say they suspect there's at least some "AI-washing" happening. (Forrester defined AI-washing as "attributing financially motivated cuts to future A.I. implementation.")
While the rationale for layoffs is often "more complex," explaining job cuts by mentioning AI may sound better, especially to investors.
Many layoffs that have been attributed to AI are still "anticipatory" in the sense that while AI may be able to do many of the jobs people are doing, it's not generally doing those jobs just yet.
Attributing layoffs to AI is a "very investor-friendly message" that experts say goes a long way compared to "the business is ailing."
Read more via The New York Times, Gizmodo
Block, the payments company that includes both Square and Cash App, is laying off 40% of its workforce as a result of AI. The layoffs will impact 4,000 workers.
Block founder Jack Dorsey, who also co-founded Twitter, announced the layoffs last week, saying AI tools have “changed what it means to build and run a company.”
Dorsey told employees (and shared on X) that the "decision to cut jobs wasn’t because the company is in trouble."
Instead, Dorsey said he "wrestled with" whether to cut workers "gradually over months or years as this shift plays out, or be honest about where we are and act on it now."
He added: "I think most companies are late. Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes."
Read more via The Wall Street Journal
Major tech firms are integrating AI usage into employee performance evaluations and tracking productivity gains, with some companies even refusing to hire candidates who lack AI fluency.
AI competency becomes job requirement: Some tech leaders now say they won't consider hiring candidates without AI fluency. The Wall Street Journal reports that tech firms are incorporating tests on problem-solving using AI into candidate evaluation and some are developing ways to score candidates and employees on AI competency.
Manager expectations surge: 42% of tech-industry workers said their direct manager expects AI use in day-to-day work as of October 2025, up from 32% eight months prior.
Performance reviews now include AI metrics: Google is factoring AI use into some software engineer performance reviews for the first time this year. Meta is tracking how many lines of code engineers wrote with AI. Microsoft managers include AI use questions in performance discussions, and Salesforce added an AI fluency progress tracker to internal dashboards.
Mandatory AI tools expand: At Salesforce, employees can now file for paid time off only by interacting with an AI agent, with most self-evaluations and performance reviews done with AI assistance. An executive said "basically 100%" of employees use AI in some capacity.
Some economists say AI investments made up a significant portion of 2025's U.S. GDP growth. But at least one Goldman Sachs analyst would beg to differ.
Federal Reserve Bank of St. Louis economists previously estimated AI investments made up 39% of GDP growth in Q3 2025, while Harvard economist Jason Furman posted that information processing equipment and software accounted for 92% of GDP growth in the first half.
Goldman Sachs Chief Economist Jan Hatzius disagrees, saying AI investment made a "basically zero" contribution to U.S. GDP growth in 2025. Despite billions in domestic spending on AI infrastructure, much of the equipment is imported from Taiwan and Korea, and imports are subtracted in GDP calculations, offsetting that spending.
AI's impact on productivity remains unclear. Despite 70% of firms actively using AI, about 80% reported no impact on employment or productivity in a recent survey of nearly 6,000 executives across the U.S., Europe, and Australia.
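Hatzius's point rests on how the expenditure approach to GDP treats imports: equipment purchases count as investment, but the same dollars are subtracted as imports, so fully imported hardware nets out. A toy sketch with hypothetical numbers (not figures from the article):

```python
# Expenditure approach: GDP = C + I + G + (X - M).
# Hypothetical example: $100B of AI equipment investment, all of it imported.
investment_in_ai_equipment = 100.0  # billions, counted in I
imported_share = 1.0                # assume every dollar of equipment is imported

imports = investment_in_ai_equipment * imported_share  # counted in M

# The spending adds to I but is subtracted back out as imports,
# so the net contribution to measured GDP is zero.
net_gdp_contribution = investment_in_ai_equipment - imports
print(net_gdp_contribution)  # 0.0
```

Only the domestically produced share of the equipment (plus domestic construction, power, and services around it) would show up as growth.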
Read more via Gizmodo
Mark Cuban says there are two types of LLM users: "those that use it to learn everything" and "those that use it so they don't have to learn anything." Cuban has also said that when it comes to companies, there will likewise be two types: "those who are great at AI, and everybody else." (Business Insider)
Amazon is blaming AI for recent AWS outages: Two recent Amazon Web Services (AWS) outages "occurred as a result of actions by Amazon's AI tools," according to recent reports. The Financial Times reports that Amazon's AI agent "Kiro" was "responsible for the December incident affecting an AWS service in parts of mainland China." The tool reportedly chose to "delete and recreate the environment" it was working on, which caused the outage. According to the report, this is the second outage linked to an AI tool in the last few months. (The Verge)
PwC has a new AI agent to handle spreadsheets: Spreadsheets are "crucial to corporate operations" at PwC, according to company executives. While traditional AIs "just kind of shrug and give up" when they encounter a big spreadsheet, PwC is hoping its new "frontier AI agent" will be "capable of reasoning over vast, enterprise-grade spreadsheets." The new agent can understand and navigate spreadsheets, mimicking "how experienced practitioners work: scanning, searching, jumping across tabs, integrating charts and receipts, and reasoning." (Business Insider)
OpenAI considered reporting troubling posts made by alleged mass shooter: In the months before Jesse Van Rootselaar became a suspect in the recent mass shooting in British Columbia, OpenAI employees "considered alerting law enforcement" about Van Rootselaar's posts on the ChatGPT platform. The posts allegedly "described scenarios involving gun violence" and were "flagged by an automated review system," alarming OpenAI employees. After debating "whether to take action on Van Rootselaar's posts," OpenAI "ultimately decided not to contact authorities." (The Wall Street Journal)