Automated technologies are increasingly common in everyday scenarios: sorting mail, finding the best route to drive to the office, or accessing specific services. Automated technologies, including those that use artificial intelligence (AI), raise new questions about how to ensure such systems are accountable, protect civil rights, and work for the people they are intended to serve.
To that end, the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (AI BOR) was released by the White House Office of Science and Technology Policy (OSTP) on October 4, 2022. The AI BOR is designed around five principles intended to safeguard all Americans from the adverse impacts of automated systems: 1) Safe and Effective Systems, 2) Algorithmic Discrimination Protections, 3) Data Privacy, 4) Notice and Explanation, and 5) Human Alternatives, Consideration, and Fallback.
“VA is committed to employing the latest automated technologies, like AI, to provide the most important needs to our Veterans and their families,” said VA’s Chief Information Officer Kurt DelBene. “However, it is important to ensure that these technologies are used in a safe and effective manner so that they in no way harm those we seek to serve.”
In the AI space, the agency has developed and released its comprehensive VA AI Strategy (PDF). VA is committed to taking concrete steps to increase Veteran and stakeholder trust and confidence in AI by working to educate VA leaders and researchers on the principles of transparency, bias mitigation, and understandable AI. A number of pilot efforts are also yielding promising early results, which are cited in the AI BOR White House Fact Sheet.
“Here at VA, we bring together a diverse set of policy, functional, and mission leaders to work through complex challenges, including how best to support ethical and Veteran-centered innovation and operational impact with data analytics and AI,” said VA’s Chief Data Officer Kshemendra Paul. “Our pilot efforts are accelerating our impact, from which we can continue to strengthen as a learning enterprise.”
Several pilot efforts are ongoing through the VA National Artificial Intelligence Institute (NAII). For example, the NAII tested an artificial intelligence institutional review board (IRB) module that incorporated questions focused on the unique risks that AI poses.
The AI IRB module went through its first pilot test this year at one of the VA NAII’s AI Network sites, the Tibor Rubin VA Medical Center in Long Beach, California. The AI IRB module resulted in the rejection of a proposed industry-sponsored AI study that lacked transparency in how the model worked and put Veterans’ privacy at risk. The AI IRB module will now be piloted at AI Network sites nationwide.
“Long Beach is proud to be part of the AI Network and to participate in such pilots that have real, tangible effects,” said Long Beach Director Walt Dannenberg. “Protecting our Veterans’ health care information and privacy is always a priority, and we are glad to implement measures, like the AI IRB module, to safeguard their information.”
Tibor Rubin VA Medical Center is one of four members of the growing AI Network, a group of AI researchers and practitioners at VA medical centers across the country who help advance AI research and maximize the benefits of AI for Veterans. The Kansas City, Tampa, and Washington, D.C., VA Medical Centers are also members of the AI Network, and other facilities are set to join soon.
The AI Network is also piloting hospital-level AI oversight policies and governing committees with the goal of incorporating lessons learned from the field to inform how national policy and oversight should be applied.
Implementing the principles of the AI BOR alongside VA’s existing measures will ensure that Veterans are protected while still benefiting from cutting-edge automated technology.
To learn more about how VA protects Veterans by using trustworthy AI and hear more about how VA is improving Veteran care with AI, join the AI@VA Community.