Technology & innovation
Maybe it's already been sent to you, or maybe you've seen it mentioned on social media. AI startup founder Matt Shumer's article (titled “Something Big Is Happening”) has well over 80 million views. In the piece, Shumer discusses how he thinks AI will impact the labor market and how quickly he thinks it will happen.
Shumer asks readers to think back to February 2020, when the COVID outbreak first made the news, but when it seemed like a distant danger unlikely to impact our everyday lives.
To Shumer, that is where we are now in terms of AI. Most of us recognize that change is afoot, but it doesn't necessarily feel entirely imminent or threatening.
"I think we're in the 'this seems overblown' phase of something much, much bigger than Covid."
Shumer says AI is advancing at a pace that few understand, and he says tech workers have watched AI go from being a "helpful tool" to being something that “does my job better than I do” in under a year.
Shumer expects everyone working in knowledge jobs to have the very same experience – “not in ten years,” but potentially in less than one year from now.
He warns that testing the free versions of generative AI platforms won't help with understanding just how far AI has come.
Shumer says that anyone who hasn't tested AI recently will find that “what exists today” will be “unrecognizable.”
"In 2022, AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54. By 2023, it could pass the bar exam. By 2024, it could write working software and explain graduate-level science. By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI."
Shumer says that predictions of AI eliminating “50% of entry-level white-collar jobs within one to five years” are “conservative” and that the disruptions caused by AI will be “different from every previous wave of automation.”
After his piece went viral, Shumer told CNBC the article “wasn’t meant to scare people.” Instead, Shumer “encouraged people in the workforce to start seriously using AI tools.”
Read Shumer's entire piece via LinkedIn, CNBC
If you've spent any time on the Internet over the past couple of weeks, chances are you've heard of Clawdbot (or Moltbot, or OpenClaw). Interest in Clawdbot has been surging and “online chatter about the tool reached viral status — at least, as viral as an open-source AI tool can be.”
What's it actually called?
Before we discuss what it is, it's worth mentioning that this new viral AI assistant has changed its name multiple times. First, it was Clawdbot, until Anthropic asked that the name be changed due to confusion around its association with Anthropic's Claude AI. Clawdbot changed its name to Moltbot before changing it again to OpenClaw.
So, what is it?
OpenClaw describes itself as "AI that actually does things." The best way to understand it is that OpenClaw is an open-source AI personal assistant that runs locally on your device.
It can check your calendar, your email, or even send messages through your own applications.
Experts call it an "impressive example of agentic AI," meaning it's a tool that can act autonomously and complete multi-step actions on behalf of the user.
OpenClaw "remembers everything you've ever told it," and can “proactively take personalized action.”
Sounds great. Is there anything to worry about?
OpenClaw "succeeds where other AI agents have failed," and it does so because "it has full system access to your device."
OpenClaw can “read and write files, run commands, execute scripts, and control your browser.”
"OpenClaw is the hottest thing in the AI world. It also opens a can of worms around privacy and security if not installed correctly."
Read more via Mashable, TechCrunch, LinkedIn
Decisions to reduce workforce size are being driven primarily by expectations of what AI will eventually accomplish rather than demonstrated productivity gains, according to a recent survey by the Return on AI Institute.
Highlights from the survey of 1,006 global executives:
Future potential is driving current cuts: Most organizations have already trimmed staff or curtailed hiring based on what they expect AI to deliver. 39% of organizations had low to moderate workforce reductions, while 21% made substantial cuts. 29% of organizations said they are maintaining lower hiring volumes than previously normal. Only 2% of respondents attribute major workforce reductions to AI systems actually performing work.
Leaders are struggling to measure generative AI returns: Nearly half (44%) identified generative AI as the hardest technology to evaluate economically.
Preemptive workforce decisions are creating organizational risks: Cutting staff before automation materializes could signal to remaining workers that their positions are vulnerable, making them less willing to experiment with AI tools.
Read more via Harvard Business Review
On February 11, the U.S. House of Representatives' Workforce Protections Subcommittee held a hearing to examine the ways in which AI tools can enhance workplace safety and what safeguards are needed.
Highlights from the hearing:
Human oversight is critical: Expert witnesses emphasized that "having a human in the loop" is essential when deploying AI safety tools. Elected officials warned that "employers should be wary of delegating responsibility for worker safety to AI."
AI tracking of workers could pose new risks: AI tracking of worker activity could create physical and/or psychosocial hazards due to fear of surveillance, loss of privacy, or anxiety about job loss.
AI technology offers legitimate safety benefits: Witnesses detailed AI and safety technologies that can help protect workers, including wearables that detect heat stress, sensors, AI-enabled cameras, exoskeletons, predictive analytics, and automation-assisted technologies.
Funding and standards needed: Elected officials stressed the need for "strong safety standards" to be maintained and for safety-related agencies to be fully funded.
Read more via Safety and Health
Heineken says AI is partly to blame for workforce cuts: Heineken is cutting 6,000 jobs, and its CEO says AI is "partly" behind the plan to cut up to 7% of its workforce. The company is hoping to "boost efficiency through productivity savings from AI." (CNBC)
An AI security company interviewed a fake candidate: An AI security company recently "inadvertently interviewed a deepfaked candidate for a security researcher role." Evoke Security CEO Jason Rebholz said the saga began after he posted on LinkedIn to "promote a few vacant roles at his AI security company." Rebholz then "received a message from an individual who claimed to know just the right person for an open security researcher position." Things got weird during a scheduled video call, and Rebholz recorded a portion of the interaction. A deepfake detection company "determined there was a 95% chance that face-swapping technology was used." Rebholz called the experience "a wake-up call to the industry that deepfake attacks can happen to businesses of all sizes." (LinkedIn, IT Brew)
An AI agent designed to help executives make better business decisions: Building material company Cemex is putting an "AI-powered financial agent" to work as part of an effort to help executives make "better informed decisions." The agent has been trained using “thousands of internal, often confidential, economic and financial data points.” Cemex has already granted access to the AI agent to "about 100 senior leaders." Users interact with the AI agent via "natural language chat." (Microsoft)
NOTE: There will be no Need to Know Briefing published Monday, February 23. The next issue will be published Monday, March 2.