The White House issued draft rules today that would require federal agencies to evaluate and constantly monitor algorithms used in health care, law enforcement, and housing for potential discrimination or other harmful effects on human rights.
Once in effect, the rules could force changes in US government activity dependent on AI, such as the FBI’s use of face recognition technology, which has been criticized for failing to take steps Congress called for to protect civil liberties. The new rules would require government agencies to assess existing algorithms by August 2024 and stop using any that don’t comply.
“If the benefits do not meaningfully outweigh the risks, agencies should not use the AI,” the memo says. But the draft memo carves out an exemption for models that deal with national security and allows agencies to effectively issue themselves waivers if ending use of an AI model “would create an unacceptable impediment to critical agency operations.”
The draft rules were released by the White House Office of Management and Budget two days after President Biden signed an executive order that laid out a government-wide plan to increase use of AI while also seeking to prevent harm from the technology. The need to keep people safe from AI was a major theme, with the order’s provisions including reporting requirements for the developers of large AI models and compute clusters.
The proposed OMB rules would make testing and independent evaluation of algorithms bought from private companies a requirement of federal contracts, something the office can do through its role of keeping federal agencies in line with presidential priorities. They would require government agencies to evaluate and monitor both algorithms already in use and any acquired in the future for negative impacts on privacy, democracy, market concentration, and access to government services.
The draft memo would require testing and evaluation of algorithms to be done by people with no direct involvement in a system’s development and encourage external “red teaming” tests of generative AI models. It also instructs the leaders of federal agencies to explore ways they can use generative AI such as OpenAI’s ChatGPT “without imposing undue risk.”
Biden’s AI executive order requires the OMB to provide its guidance to federal agencies in the next five months. The office is inviting public comment on the draft policy until December 5.
“The framework enables a set of binding requirements for federal agencies to put in place safeguards for the use of AI so that we can harness the benefits and enable the public to trust the services the federal government provides,” says Jason Miller, OMB’s deputy director for management.
The draft memo highlights certain uses of AI where the technology can harm rights or safety, including health care, housing, and law enforcement—all situations where algorithms have in the past resulted in discrimination or denial of services.
Examples of potential safety risks mentioned in the OMB draft include automation for critical infrastructure like dams and self-driving vehicles like the Cruise robotaxis that were shut down last week in California. The company is under investigation by federal and state regulators after a pedestrian who had been struck by another vehicle was dragged 20 feet by one of its robotaxis. Examples in the draft memo of how AI could violate citizens’ rights include predictive policing, AI that can block protected speech, plagiarism- or emotion-detection software, tenant-screening algorithms, and systems that can impact immigration or child custody.
According to OMB, federal agencies currently use more than 700 algorithms, though the inventories agencies have provided are incomplete. Miller says the draft memo requires federal agencies to share more about the algorithms they use. “Our expectation is that in the weeks and months ahead, we're going to improve agencies' abilities to identify and report on their use cases,” he says.
Vice President Kamala Harris mentioned the OMB memo alongside other responsible AI initiatives in remarks today at the US Embassy in London, a trip made for the UK’s AI Safety Summit this week. She said that while some voices in AI policymaking focus on catastrophic risks, like the role AI could someday play in cyberattacks or the creation of biological weapons, bias and misinformation are already being amplified by AI and affecting individuals and communities daily.
Merve Hickok, author of a forthcoming book about AI procurement policy and president of the nonprofit Center for AI and Digital Policy, welcomes how the OMB memo would require agencies to justify their use of AI and assign specific people responsibility for the technology. That’s a potentially effective way to ensure AI doesn’t get hammered into every government program, says Hickok, who is also a lecturer at the University of Michigan.
But the provision of waivers could undermine those mechanisms, she fears. “I would be worried if we start seeing agencies use that waiver extensively, especially law enforcement, homeland security, and surveillance,” she says. “Once they get the waiver it can be indefinite.”