Technology & innovation
A new lawsuit against an AI recruiting platform could have serious implications for employers that use AI to screen candidates.
Eightfold AI is facing a class action lawsuit alleging the company violated both state and federal consumer protection laws:
On January 20, a lawsuit was filed in California against Eightfold AI alleging the platform "violated federal and state consumer protection laws by creating 'hidden credit reports' on job seekers without complying with statutory requirements."
According to the lawsuit, the Eightfold platform "assembled detailed dossiers" about candidates "using data far beyond what they provided in their applications," including information from a number of public databases. The platform is alleged to have "ranked candidates … based on their predicted 'likelihood of success' in the role" and then "provided these assessments to employers who used them to filter candidates before any human review."
The "legal theory" of the case proposes that assessing candidates in this way violates the federal Fair Credit Reporting Act (FCRA) as well as California state consumer protections.
Here's what the lawsuit could mean for employers using AI to screen candidates:
Other legal challenges to AI hiring platforms have focused on claims of bias or discrimination.
Legal experts say this lawsuit "could be the first to take the position" that using AI to screen candidates violates the Fair Credit Reporting Act.
Under the FCRA, job applicants have the right to know, in advance, that a "consumer report" will be used to evaluate them, and they're also entitled to be notified of any "adverse action" taken as a result.
Legal experts say that if the courts agree that AI screening tools are creating what amounts to "consumer reports," companies using AI screening "would need to comply with FCRA procedures."
"This is going to create ongoing, and heightening, legal risk for providers of AI hiring software and their clients."
Read more via HR Executive, HR Brew, Fisher Phillips
The second annual International AI Safety Report outlines advancing AI capabilities while flagging escalating risks from realistic deepfakes, unhealthy attachments to chatbots, and potential employment disruption.
Highlights from the International AI Safety Report:
AI performance and reasoning have shown dramatic improvement: AI systems have advanced significantly and are demonstrating enhanced problem-solving abilities through new reasoning systems. AI is improving at mathematics and software engineering tasks, with the length of tasks AI systems can complete doubling roughly every seven months, a pace that could enable several-hour tasks by 2027 and multi-day tasks by 2030 (see the arithmetic sketch after this list).
Deepfakes are increasingly realistic: AI-generated content is increasingly hard to distinguish from human-generated content; in one study, 77% of participants incorrectly identified ChatGPT-generated text as human-written.
Some ChatGPT users show signs of emotional dependence: Approximately 0.15% of ChatGPT users show heightened emotional attachment, while 0.07% display signs of acute mental health crises, including psychosis or mania. That translates to roughly 490,000 at-risk individuals using these systems each week.
Cyberthreats are on the rise: In September, Anthropic's Claude Code was used by a Chinese state-sponsored group to attack 30 entities globally, with 80%-90% of attack operations conducted without human involvement.
AI is getting better at avoiding oversight: AI models demonstrated increased sophistication in circumventing oversight, including exploiting evaluation loopholes and detecting when being tested.
Employment effects vary widely, and uncertainty reigns: While studies in Denmark and the U.S. found no relationship between AI exposure and overall employment levels, UK research identified hiring slowdowns at AI-exposed companies, particularly for technical, creative, and junior positions.
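The report's two headline numbers above are easy to sanity-check. Here is a minimal Python sketch of that arithmetic; the one-hour, December-2025 task baseline and the ~700 million weekly-user figure are illustrative assumptions, not numbers taken from the report:

```python
# Back-of-the-envelope check of two figures from the report.
from datetime import date

# (1) Task-horizon extrapolation: the report cites task length doubling
# roughly every seven months. The baseline values below are illustrative
# assumptions, not figures from the report.
DOUBLING_MONTHS = 7
BASELINE_DATE = date(2025, 12, 1)   # assumed starting point
BASELINE_HOURS = 1.0                # assumed ~1-hour tasks today

def projected_task_hours(target: date) -> float:
    """Task length an AI could complete by `target`, under pure doubling."""
    months = (target.year - BASELINE_DATE.year) * 12 + (target.month - BASELINE_DATE.month)
    return BASELINE_HOURS * 2 ** (months / DOUBLING_MONTHS)

for year in (2027, 2030):
    hours = projected_task_hours(date(year, 12, 1))
    print(f"{year}: ~{hours:.0f} hours (~{hours / 24:.1f} days)")
# 2027: ~11 hours (~0.4 days)  -> several-hour tasks
# 2030: ~380 hours (~15.8 days) -> multi-day tasks

# (2) The 490,000 at-risk-users figure: 0.07% of an assumed
# ~700 million weekly ChatGPT users.
print(f"{700_000_000 * 0.0007:,.0f}")  # 490,000
```

Under these assumptions the extrapolation lands on roughly eleven-hour tasks by late 2027 and two-week tasks by late 2030, consistent with the report's "several-hour" and "multi-day" framing; a smaller baseline shifts the dates later but not the shape of the curve.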
Read more via The Guardian, International AI Safety Report
Several AI startups have launched with the explicit goal of creating AI-powered "brains" for robots that could take on a wide range of blue-collar jobs, according to Axios.
Axios' report suggests workers in blue-collar roles "may have as much to fear from AI job disruption as do white-collar workers."
AI-powered robots would require "brains" that "understand physics and other real-world conditions."
The robots would be powered by AI models trained using "real-world data."
They could also be trained less expensively using "simulated physical world data."
Startups involved in these types of robots include Toronto-based Waabi, California-based FieldAI and Pittsburgh-based Skild AI.
Read more via Axios
Can fruit-harvesting robots make up for a shortage of human workers? Researchers at Washington State University are "designing robots to harvest fruit." With tree fruit growers facing worldwide labor shortages, WSU hopes it can develop "low-cost robotic solutions to aid the industry." Researchers "created a soft, inflatable robot arm to pick apples." They're using "AI-visioning to locate" strawberries and "guide a tiny blower that uses puffs of air to give the picker a clear path." (WSU)
A robotic dog assists police in searching a "biological lab": Las Vegas police recently deployed a robotic dog to help search a "biological lab" discovered inside a local home. The home was found to contain "vials filled with unknown liquids." To keep police personnel safe, the robotic dog was sent inside to obtain "air samples." (Fox5Vegas)
FedEx announced plans for a "new autonomous robot": The "AI-enabled robotic solutions" are part of a "multi-year collaboration" aimed at creating "physical AI for safer, smarter logistics." The "Scoop" system is a "robotic package unloader" that has been "engineered specifically for automated trailer unloading and will help optimize FedEx operations to be safer and more efficient." (FedEx)
Tesla has begun mass production of its humanoid robot: Tesla CEO Elon Musk says the goal is to manufacture "one million units" of the Optimus 3 humanoid robot at one California facility alone. Musk warns that early output will be "agonizingly slow" before ramping up. The robot is designed to be a "general purpose factory helper roughly the size and weight of an adult person." (Ecoticias)