Technology & innovation
According to Gallup, workplace AI adoption continues to climb, with 45% of U.S. employees reporting they used AI at least a few times in the past year, up from 40% in the second quarter of 2025.
Highlights from Gallup's survey of over 23,000 employed adults:
AI usage trends:
Frequent use (a few times weekly or more) increased from 19% to 23% between Q2 and Q3 2025.
Daily AI use grew modestly from 8% to 10% during the same period.
Knowledge workers lead adoption: 76% of employees in technology or information systems, 58% in finance, and 57% in professional services used AI at work at least a few times yearly.
Frontline workers lag behind: Only 33% of retail employees, 37% in healthcare, and 38% in manufacturing reported similar AI usage rates.
Organizational AI implementation remains unclear to many workers:
37% of employees said their organization has implemented AI technology to improve productivity, efficiency and quality.
40% said their organization had not implemented AI.
23% said they did not know whether their organization had adopted AI, a share larger than the 10% of daily AI users but smaller than the 45% who use AI at least yearly, suggesting some employees use personal AI tools without awareness of any organizational AI strategy.
Individual contributors (26%) were significantly more likely than managers (16%) and leaders (7%) to report uncertainty about their organization's AI implementation status.
How employees are using AI:
Top use cases include consolidating information (42%), generating ideas (41%), and learning new things (36%).
More than six in 10 workplace AI users reported using chatbots or virtual assistants, making them the most common AI tool.
AI writing and editing tools were the second most popular (36%), followed by AI coding assistants (14%).
Frequent users were more likely to employ specialized tools: 22% used coding assistants compared to 8% of less frequent users, while 18% used data science or analytics tools versus 8% of occasional users.
Read more via Gallup
Young workers in the most AI-exposed occupations experienced a significant employment decline between 2022 and 2025, while employment for less exposed or more experienced workers has remained steady or increased, according to the Dallas Federal Reserve.
Dallas Fed economists cited a recent Stanford University study that used ADP data. The study found that workers ages 22 to 25 in occupations with the highest AI exposure have experienced a 13% employment decline since 2022, just as employment for “less exposed or more experienced workers has been steady or even increasing.”
How AI exposure was measured:
AI exposure was measured on a scale of 0 (no exposure) to 1 (full exposure), and each occupation was then assigned to one of three categories.
Least AI exposure: cashiers, janitors and building cleaners, laborers and freight/stock/material movers
Moderate AI exposure: driver/sales workers and truck drivers, retail salespersons, elementary and middle school teachers
Most AI exposure: first-line supervisors of retail sales workers, secretaries and administrative assistants, customer service representatives
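The binning described above can be sketched as a simple classification over the 0-to-1 scale. Note that the exposure scores and the equal-width cutoffs below are illustrative assumptions; the actual scores and category boundaries come from the Stanford/ADP study, not from this sketch.

```python
# Hypothetical exposure scores on the 0-to-1 scale described above.
exposure = {
    "cashiers": 0.15,                          # example of least exposure
    "truck drivers": 0.45,                     # example of moderate exposure
    "customer service representatives": 0.85,  # example of most exposure
}

def category(score: float) -> str:
    """Assign an occupation to one of three exposure categories.

    Uses equal-width bins for illustration; the study's actual
    cutoffs may differ.
    """
    if score < 1 / 3:
        return "least AI exposure"
    if score < 2 / 3:
        return "moderate AI exposure"
    return "most AI exposure"

for job, score in exposure.items():
    print(f"{job}: {category(score)}")
```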
How AI is impacting young workers entering the labor market:
Job-finding rates for young workers entering the labor force have declined most for those seeking high AI-exposure jobs.
Since November 2022, job-finding rates among young labor market entrants have held steady only for jobs with low AI exposure, while declining for groups with higher exposure.
Since November 2023, the job-finding rate for young workers in the most AI-exposed group has fallen by more than 3 percentage points.
Why unemployment data may understate AI's impact:
The Current Population Survey records unemployed workers' occupations as their most recent job, not the occupation they're seeking.
This means that if AI adoption prevents a recent graduate from ever finding work in their field, they are never recorded as unemployed in that profession, making AI's impact on specific occupations harder to detect in unemployment statistics.
While AI's effect on labor outcomes has appeared subtle so far, its future impact could be substantially greater, according to experts.
Read more via Dallas Fed
The impact of generative AI on employee creativity "remains uneven," according to Harvard Business Review.
Generative AI can enhance employee creativity, but a recent survey found that only 26% of employees who use AI report improvements in their creativity.
New research published in the Journal of Applied Psychology reveals that AI tools boost creative performance primarily for employees with strong metacognition—the ability to plan, monitor, and refine their thinking.
Researchers studied 250 employees at a technology consulting firm in China, randomly assigning each either a ChatGPT account for daily work or a control condition without AI access. One week later, creativity was assessed through manager evaluations and independent raters.
Employees with stronger metacognition became more creative when using AI, generating ideas judged as more novel and useful.
For employees with weaker metacognition, AI made little difference to creative performance.
AI boosts creativity by expanding employees' knowledge base and freeing mental capacity for complex problem-solving—but only when employees actively evaluate, question, and refine AI outputs rather than accepting first suggestions.
Employees low in metacognition are more likely to accept AI's first answer, rely on default outputs, and fail to check whether suggestions are accurate or relevant.
Read more via Harvard Business Review
Multiple pending lawsuits claim that AI-powered employment screening systems produce discriminatory outcomes against protected groups, according to JD Supra. These cases highlight risks for all employers using automated decision-making tools, regardless of whether bias was intentional.
How do AI tools introduce bias?
AI tools can produce skewed results through the historical data used for training, the specific criteria selected for evaluation, and the relative importance assigned to different factors. Historical inequities embedded in training data can be perpetuated by AI systems.
There is also the possibility that seemingly neutral evaluation criteria could "function as proxies for protected traits, such as zip code correlating with race, or employment gaps correlating with disability or caregiving responsibilities."
How the 80/20 rule is used to signal potential discrimination:
Under employment discrimination law, AI systems may be evaluated using the 80/20 rule, also known as the four-fifths rule.
When a protected group's success rate falls below 80% of the comparison group's rate, further investigation is warranted.
The 80/20 rule is not a "definitive test of discrimination," but it "functions as a screening mechanism or warning indicator that may warrant closer review of a particular practice or decision-making process."
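The screening check described above amounts to a simple rate comparison: divide the protected group's success rate by the comparison group's rate and flag the result if it falls below 0.80. A minimal sketch, using hypothetical applicant numbers:

```python
def impact_ratio(protected_rate: float, comparison_rate: float) -> float:
    """Ratio of the protected group's success rate to the comparison group's."""
    return protected_rate / comparison_rate

# Hypothetical screening outcomes: 30 of 200 protected-group applicants
# advanced (15%), versus 50 of 200 in the comparison group (25%).
protected = 30 / 200   # 0.15
comparison = 50 / 200  # 0.25

ratio = impact_ratio(protected, comparison)
flagged = ratio < 0.80  # below the 80% threshold: warrants closer review

print(f"impact ratio = {ratio:.2f}, flag for review: {flagged}")
# impact ratio = 0.60, flag for review: True
```

As the article notes, a flag here is a screening signal, not a finding of discrimination; it indicates the practice merits closer review.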
U.S. courts are currently "scrutinizing" AI systems to determine "whether they contribute to discriminatory results":
In one pending class-action suit, a job applicant "alleges that … AI-based screening tools systematically rejected him across more than 100 applications."
In another pending suit, an applicant says "the employer relied on an AI-powered applicant tracking system that embedded historical bias by using data points functioning as proxies for race, resulting in his candidacy being downgraded and eliminated before advancing in the hiring process."
What employers should do to reduce legal exposure:
Legal experts say organizations could be unaware of discriminatory patterns until facing regulatory investigation or legal action.
Proactive measures can mitigate legal exposure. Employers should consider undertaking a systematic evaluation of where automation influences employment decisions, independent review of vendor-provided tools, and ongoing monitoring protocols to identify emerging bias patterns.
Read more via JD Supra
California launches pilot for AI assistant to help state employees: The state of California has launched a new pilot project involving an AI-powered assistant that could "make state employees’ daily work easier and more efficient." Poppy is a new digital assistant that is "powered by ChatGPT and other publicly available generative AI tools." Poppy will assist state employees in finding information in "California’s dense catalog of policies" and will help employees "easily find answers to complex state government questions." (GovTech, State of California)
Ford announces new AI assistant: Ford is in the process of developing a new AI assistant that will be available on the "newly revamped Ford app in early 2026." Ford is planning "native, in-vehicle integration" in 2027. Ford made the announcement at the recent Consumer Electronics Show, where the company also "teased a next-generation of its BlueCruise advanced driver assistance system that is both cheaper to make and more capable — ultimately leading to eyes-off driving in 2028." (TechCrunch, Ford)