Technology & innovation
Gen Z workers are the least likely to believe AI will advance their careers or improve their efficiency at work, even as they begin those careers amid simultaneous technological disruption and economic uncertainty, according to LinkedIn's first AI Confidence Survey.
Daily AI usage has surged: U.S. employees are now 2.4 times more likely to report using AI tools daily or weekly on the job compared to a year and a half ago.
Generational confidence gap: Gen Z is the least likely of any generation to believe AI will help them grow their careers or improve their efficiency at work, compared to millennials, Gen Xers and baby boomers. They are also the least likely to say they feel supported in learning the technology or to plan to build new AI skills proactively. (Gen X is most likely to say they feel supported in building AI skills.)
Workers learn AI without employer support: 54% of U.S. employees plan to proactively learn new AI skills in the next six months, and nearly half agree AI makes them more efficient and will help career growth. However, only 45% say they feel supported in gaining AI skills and knowledge.
Executive-worker disconnect persists: 67% of U.S. executives feel confident employees will learn new AI tools in the next six months, and 70% believe AI is already making their workforce more efficient—contrasting sharply with workers' uncertainty about organizational support.
Surge in roles requiring AI literacy: In the U.S., roles requiring AI literacy grew 70% year-over-year as digital and data skills become foundational across functions.
Job seekers embrace AI by necessity: Active job seekers of all ages are significantly more likely to use AI daily or weekly than those not searching for work, and are more confident in acquiring new AI skills, using tools to rewrite resumes, prepare for interviews, and accelerate skill-building.
Read more via LinkedIn
The new AI start-up, Humans&, says AI “should empower people rather than replace them.”
Humans&'s founders include former Anthropic, xAI and Google employees.
They say the idea started with the goal of creating "software that facilitated collaboration between people — like an A.I. version of an instant messaging app — while also helping with internet searches and other tasks that suit machines."
While Anthropic's model is trained to "work autonomously," Humans& says it thinks of "machines and humans as complementary."
Humans& has so far raised "$480 million in seed funding" from Nvidia, Amazon founder Jeff Bezos and Google Ventures, among others. The start-up is "valued at $4.48 billion, even though it has only about 20 employees and launched just three months ago."
Read more via The New York Times
According to a "months-old but until now overlooked study," large language models are “incapable of carrying out computational and agentic tasks beyond a certain complexity.”
The research paper was authored by former SAP CTO Vishal Sikka.
AI agents are "models designed to autonomously carry out tasks without human intervention."
Some employers have already begun replacing human workers with AI agents, only to "quickly" realize "the agents … weren’t anywhere near good enough to replace the outgoing humans, perhaps because they hallucinated so much and could barely complete any of the tasks given to them."
Sikka told Wired that AI agents should not be trusted to do important things like run nuclear power plants, despite the hype from "AI boosters."
AI experts say that stronger, external guardrails "can filter out the hallucinations" that make AI agents unreliable.
Sikka, too, says that "hallucinations can be reined in." He believes that while AI agents have "this inherent limitation," there are ways to "build components around LLMs that overcome those limitations.”
Read more via Futurism, Wired
Are autonomous snow blowers the future of snow removal? Tom Moloughney, host of the State of Charge YouTube channel and senior editor at InsideEVs, used a $4,999, 230-pound autonomous snow blower made by Yarbo to "clear his long New Jersey driveway" during last week's storm. (ABC7, Business Insider)
AI chatbot delusions are causing real problems for real people: According to "dozens of doctors and therapists" interviewed by The New York Times, AI chatbots have "led their patients to psychosis, isolation and unhealthy habits." Physicians are currently "navigating how to treat problems caused or exacerbated by A.I. chatbots." In some cases, clinicians reported that patients had positive experiences communicating with chatbots, but "they also said the conversations deepened their patients’ feelings of isolation or anxiety," with "more than 30" of those interviewed noting "cases resulting in dangerous emergencies like psychosis or suicidal thoughts." (The New York Times)
ChatGPT erased two years of one professor's work: A professor of plant sciences at Germany's University of Cologne recently revealed in a Nature article that "ChatGPT cost him two years of work, including grant applications, teaching materials, and publication drafts." The professor claims to have "signed up for OpenAI's ChatGPT Plus plan" and used it to draft emails, grant applications, lectures and more. When the professor opted to disable the platform's "data consent" option, he found that all of his previous chats were "permanently deleted and the project folders were emptied." He claims "large parts of [his] work were lost forever." (Windows Central)
Will AI soon be writing federal transportation regulations? The Trump administration says it is planning to "use artificial intelligence to write federal transportation regulations," according to reporting by ProPublica. According to the report, a plan was presented to Department of Transportation staff that outlines the potential of AI to "revolutionize the way" the agency drafts regulations. (ProPublica)