“The framework enables a set of binding requirements for federal agencies to put in place safeguards for the use of AI so that we can harness the benefits and enable the public to trust the services the federal government provides,” says Jason Miller, OMB’s deputy director for management.
The draft memo highlights certain uses of AI where the technology can harm rights or safety, including health care, housing, and law enforcement—all situations where algorithms have in the past resulted in discrimination or denial of services.
Examples of potential safety risks mentioned in the OMB draft include automation for critical infrastructure like dams and self-driving vehicles like the Cruise robotaxis that were shut down last week in California and are under investigation by federal and state regulators after a pedestrian, struck by another vehicle, was dragged 20 feet by a robotaxi. Examples of how AI could violate citizens’ rights in the draft memo include predictive policing, AI that can block protected speech, plagiarism- or emotion-detection software, tenant-screening algorithms, and systems that can impact immigration or child custody.
According to OMB, federal agencies currently use more than 700 algorithms, although inventories provided by federal agencies are incomplete. Miller says the draft memo requires federal agencies to share more about the algorithms they use. “Our expectation is that in the weeks and months ahead, we’re going to improve agencies’ abilities to identify and report on their use cases,” he says.
Vice President Kamala Harris mentioned the OMB memo alongside other responsible AI initiatives in remarks today at the US Embassy in London, during a trip for the UK’s AI Safety Summit this week. She said that while some voices in AI policymaking focus on catastrophic risks, like the role AI could someday play in cyberattacks or the creation of biological weapons, bias and misinformation are already being amplified by AI and affecting individuals and communities daily.
Merve Hickok, author of a forthcoming book about AI procurement policy and president of the nonprofit Center for AI and Digital Policy, welcomes how the OMB memo would require agencies to justify their use of AI and assign specific people responsibility for the technology. That’s a potentially effective way to ensure AI doesn’t get hammered into every government program, says Hickok, who is also a lecturer at the University of Michigan.
But the provision of waivers could undermine those mechanisms, she fears. “I would be worried if we start seeing agencies use that waiver extensively, especially law enforcement, homeland security, and surveillance,” she says. “Once they get the waiver it can be indefinite.”