Technology & innovation
It wasn't long ago that you could divide companies into two groups -- organizations that had implemented AI and those that had not. Over the past year, AI adoption has surged, and the number of organizations that have not implemented AI has dwindled.
Organizations can now be divided not by whether they have adopted AI, but by how they are redesigning and rebuilding their organizations around it, according to a new survey of 9,000 knowledge workers conducted by Asana.
AI adoption has seen "explosive growth":
70% of knowledge workers now use AI on a weekly basis, up from 52% in 2024 and 36% in 2023.
88% of executives use AI weekly. The same is true for 79% of managers and 62% of individual contributors.
Despite widespread adoption, AI is "automating chaos" at many organizations:
84% of knowledge workers are experiencing digital exhaustion.
Just 22% of workers say "information moves quickly between teams" and just 30% report effective cross-functional collaboration.
Organizations can now be divided into "AI Scalers" and "Nonscalers":
29% of organizations are AI Scalers: AI Scalers are organizations that have successfully implemented AI organization-wide with established measurement and optimization. They don't just use AI -- they've rebuilt work around it, creating a transformation engine that turns AI into an organizational multiplier.
71% of organizations are Nonscalers: These are organizations “trapped in pilot purgatory, layering AI onto broken workflows without the infrastructure needed for real transformation.”
AI Scalers see more productivity gains from AI than Nonscalers:
91% of AI Scalers report seeing productivity gains from AI, compared to just 61% of Nonscalers.
Workers at AI Scalers are 1.3 times more likely than their peers at Nonscalers to understand how to use AI agents.
What are AI Scalers doing that Nonscalers aren't?
AI Scalers are redesigning work instead of just adding AI to "broken processes":
AI Scalers are 3.5 times more likely to fully redesign workflows to integrate AI.
AI Scalers are training their workers:
AI Scalers are 2.3 times more likely to provide formal AI training.
78% of AI Scaler employees have a strong grasp of AI, versus only 42% of employees at Nonscalers.
AI Scalers are creating measurement systems and frameworks:
AI Scalers are 122% more likely to have usage policies.
77% of AI Scalers encourage open discussion about AI failures (vs. 41% of Nonscalers).
AI Scalers are more focused on tracking performance of AI tools.
Read more via Asana
Despite the "sea of tech talent," many tech companies say they cannot “find the workers they want.”
The supply of tech talent should be ample:
"There has rarely, if ever, been so much tech talent available in the job market," according to The Wall Street Journal.
From 2013 until 2022, the number of computer-science degrees awarded by U.S. colleges "more than doubled."
According to the Labor Department, “businesses will employ 6% fewer computer programmers in 2034 than they did last year.”
Employers say there "aren’t enough people with the most in-demand skills":
Employers are looking for "AI savants." They are waiting for "dream candidates with deep backgrounds in machine learning," leaving many “AI-related roles … unfilled for weeks or months.”
The difference between an "AI all-star" and an AI-savvy worker is … significant:
Workers who are familiar enough with AI to utilize ChatGPT or who have taken an AI certificate course are not close to reaching the level of the elusive “AI all-star.”
Experts say there are – “at most” – "hundreds of people in the world" who actually qualify as “AI all-stars.”
"Maybe you’re learning how to work more efficiently with the aid of ChatGPT and its robotic brethren. Perhaps you’re taking one of those innumerable AI certificate courses. You might as well be playing pickup basketball at your local YMCA in hopes of being signed by the Los Angeles Lakers. The AI minds that companies truly covet are almost as rare as professional athletes."
Jobs meant for AI all-stars are getting thousands of applicants:
AI engineering and machine learning roles, advertised with base salaries nearing $500,000, are attracting thousands of applicants each week.
However, the pool of applicants often consists of people "who claim to be AI literate" but do not approach the "AI all-star" level.
Experts say companies are engaged in an "AI arms race," where the goal is to hire a single engineer “who can do the work of ten.”
Read more via The Wall Street Journal
On September 29, California Governor Gavin Newsom signed the "Transparency in Frontier Artificial Intelligence Act" (SB-53) into law. Newsom praised the legislation as “carefully designed to enhance online safety by installing commonsense guardrails on the development of frontier artificial intelligence models.”
AI companies, such as Anthropic and Nvidia, will be required to “publish public documents on how they are ensuring safety and to report any dangerous circumstances.”
Companies that fail to comply are subject to "fines of up to $1 million per violation."
The bill specifically targets "big AI companies," requiring companies with "more than $500 million in revenue" to "assess the risk that their technology could become autonomous and resist human control, or aid in the development of bioweapons."
(California is home to 32 of the top 50 AI companies in the world.)
Will California's new law impact other states or federal regulations?
Newsom has been outspoken about California leading the way by “enacting… first-in-the-nation frontier AI safety legislation.”
Newsom says the bill "should serve as an example of what federal-level AI policy could look like."
Some critics say the bill's passage “could place responsibility for AI regulation on individual states rather than Congress.”
Read more via Tech.co, State of California
The rapid adoption of generative AI tools like Anthropic’s Claude and ChatGPT is fundamentally changing how software engineers work, automating routine coding tasks and prompting fierce competition among industry leaders.
Chatbots that can generate code are becoming essential tools for software engineers, automating repetitive tasks and allowing developers to focus on bigger ideas.
In the San Francisco Bay Area, experts say AI coding assistants represent "the most competitive space in the industry right now," as companies such as OpenAI, Anthropic, and others battle to produce the best AI coders.
Some have used the term "vibe-coding" to describe the practice of AI coding assistants doing “the grunt work while human software developers work through big ideas.”
As AI becomes increasingly capable at software development, the trend “has raised fears of job loss in software careers.”
For now, AI tools "accelerate coding but depend on skilled professionals’ oversight." Experts say AI “still struggles with deeper architectural decisions and nuanced code quality.”
Read more via Associated Press
Could an AI-powered nursing robot reduce human nurses' workload?
Nurabot is an "autonomous, AI-powered nursing robot" designed by Taiwan-based Foxconn to assist nurses with repetitive or physically demanding tasks, such as delivering medication or guiding patients around the ward. According to news reports, Nurabot was developed over a ten-month period and "has been undergoing testing at a hospital in Taiwan since April 2025." Foxconn is preparing the robot for commercial launch early next year. (CNN)
Nearly one million London jobs could be impacted by AI, according to new research by LiveCareer UK. The nearly one million roles likely to be impacted include 200,000 telemarketers, 150,000 bookkeepers, and more than 95,000 data-entry specialists. Other "at-risk" roles include "fast food and warehouse workers, retail cashiers, paralegals and proofreaders." (SIA)
AI agents that "update themselves" can "unlearn" their own safeguards. New research suggests AI systems that "rewrite their own code and workflows" could "erode their own safeguards." While autonomous AI agents can "learn on the job," they can also "unlearn how to behave safely," according to researchers. (Emerge)