Technology & innovation
While many employers are "beginning to incorporate" AI into their investigations, experts say too much reliance on AI in workplace investigations can present obstacles and open employers up to risk.
While AI can play an important role in the investigative process, human oversight remains critical, according to an expert interviewed by HR Dive.
How can AI be helpful in investigations?
AI can play a role in identifying the people an employer should speak with and in "brainstorming potential questions for interviews."
AI's potential role in transcription remains controversial: models are "not yet reliably accurate enough to transcribe witness interviews word-for-word," and their "tendency to produce hallucinations" could prove problematic.
AI can also “double check any blind spots the investigative team may have.”
“I just want to emphasize that [AI] should be a tool … At the end of the day, the individual investigator is the decision maker, and that’s a role with great responsibility,” the expert told HR Dive.
Privacy issues need to be considered:
Publicly accessible AI platforms, such as ChatGPT, pose a risk for employers because they "could leave sensitive information exposed to the general public."
Henderson notes that "even small tasks such as asking a chatbot to generate questions for a specific witness are problematic."
Using a paid AI platform that protects data is an alternative, but Henderson says even then, any investigative data that includes "personally identifying information" should be removed.
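One way to act on that advice is to strip obvious identifiers from text before it leaves the organization's systems. Below is a minimal illustrative sketch, not any vendor's actual tooling; the regex patterns, placeholder labels, and sample text are all assumptions, and a real investigation would need far more thorough redaction (names, addresses, employee IDs, and so on).

```python
import re

# Illustrative patterns for a few common identifier types.
# These are assumptions for the sketch, not a complete PII taxonomy.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str, known_names=()) -> str:
    """Replace identifiers with placeholder tags before the text
    is sent to any third-party AI platform."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    # Known witness/party names are swapped for neutral role labels.
    for i, name in enumerate(known_names, start=1):
        text = text.replace(name, f"[PERSON_{i}]")
    return text

note = "Contact Jane Doe at jane.doe@example.com or 555-123-4567."
print(redact(note, known_names=["Jane Doe"]))
# → Contact [PERSON_1] at [EMAIL] or [PHONE].
```

Keeping the placeholder tags (rather than deleting the text outright) preserves the structure of the notes, so an investigator can still ask the tool useful questions about the redacted material.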
Read more via HR Dive
According to reporting by The New York Times, Amazon's automation team intends to automate 75% of the company's operations, allowing it to avoid a significant number of new hires.
By 2027, Amazon "could avoid hiring more than 160,000 U.S. workers it would normally need" based on automation opportunities, according to the Times' reporting on "leaked" company plans.
The leaks further reveal that "robotic automation could potentially keep the company’s U.S. headcount steady even as sales are expected to double by 2033."
The company's automation team is working toward an "ultimate goal to automate 75 percent of the company’s operations."
Amazon is already using "about a thousand" robots at its "most advanced warehouse in Shreveport, Louisiana," a move that allows the company to “employ roughly 25% fewer workers than it would have without automation.”
Read more via Gizmodo, The New York Times
Dollar Tree is replacing "older legacy systems" with AI-enabled platforms, according to CEO Mike Creedon. At the company's annual investor meeting, Dollar Tree executives discussed plans to use AI-powered technology to optimize product placement within its stores. Dollar Tree is also using AI to optimize HVAC efficiencies across its stores. (Retail Dive, Supermarket News, Brainbox)
An AI detection system "mistook a student's bag of chips for a gun." The system, used by a Baltimore County, Maryland school, thought a student's "crumpled-up chip bag looked like a firearm." The system "worked how it was meant to," according to school officials. Critics, including the student approached by arriving officers, say they do not think a "chip bag should be mistaken for a gun at all." (WMAR)
AI is “delivering real-time stats and smarter broadcasts” for this year's Major League Baseball World Series. Broadcasters and their production teams typically spend many hours before each game gathering statistics and information. This year, FOX Sports and Google Cloud have joined forces to create FOX Foresight, a new AI platform “trained on data from many seasons of major league play.” Broadcasters can now immediately ask the tool questions that previously would have taken minutes – or even hours – to answer. (Google)
Two federal judges say members of their staffs used AI to prepare court orders that were “error-ridden.” U.S. District Court judges in New Jersey and Mississippi said the AI-assisted court orders “did not go through” the correct processes, and that their policies prohibit the use of generative AI in legal research or for drafting opinions and orders. In one instance, a law school intern “used OpenAI's ChatGPT to perform legal research without authorization or disclosure.” (Fox News)