Nevada will become the first state to pilot a generative AI system designed to make unemployment claim decisions, marketed as a way to speed up appeals and tackle the nation’s overwhelming backlog of cases. It’s a risky, first-of-its-kind experiment in integrating AI into higher-level decision making.
Google is behind the program’s tech, which runs transcripts of unemployment appeals hearings through Google’s AI servers, analyzing the data in order to provide claim decisions and benefit recommendations to “human referees,” Gizmodo reported. Nevada’s Board of Examiners approved the contract on behalf of its Department of Employment, Training and Rehabilitation (DETR) in July, despite broader legal and political pushback against integrating AI into bureaucracy.
Christopher Sewell, director of DETR, told Gizmodo that humans will still be heavily involved in unemployment decision making. “There’s no AI [written decisions] that are going out without having human interaction and that human review. We can get decisions out quicker so that it actually helps the claimant,” said Sewell.
But Nevada legal groups and scholars have argued that any time saved by gen AI would be canceled out by the time required to conduct a thorough human review of each claim decision. Many have also raised concerns about the possibility of private, personal information (including tax information and Social Security numbers) leaking through Google’s Vertex AI studio, even with safeguards in place. Others are wary of the type of AI itself, known as retrieval-augmented generation (RAG), which has been found to produce incomplete or misleading answers to prompts.
Across the country, AI-based tools have been quietly rolled out or tested across various social services agencies, with gen AI integrating itself further into the administrative ecosystem. In February, the federal Centers for Medicare and Medicaid Services (CMS) ruled against using AI (including generative AI or algorithms) as a decision maker in determining patient care or coverage. This followed a lawsuit from two patients who alleged their insurance provider used a “fraudulent” and “harmful” AI model (known as nH Predict) that overrode physician recommendations.
Axon, a police technology and weapons manufacturer, introduced its first-of-its-kind Draft One — a generative large language model (LLM) that assists law enforcement in writing “faster, higher quality” reports — earlier this year. Still in a trial period, the technology has already sounded alarms, raising concerns about the AI’s ability to parse the nuance of tense police interactions and its potential to further erode transparency in policing.