On Feb. 8, the U.S. Department of Commerce announced the formation of a consortium to research new guidelines and standards for artificial intelligence innovation. Launched under the National Institute of Standards and Technology, the consortium comprises more than 200 companies, universities, civil society organizations and nonprofits.
“As we think about the societal impacts of AI technologies, it’s good to have a dialogue that presents varied interests,” Nitesh Chawla, director of the Lucy Family Institute for Data and Society and a professor of computer science and engineering, said. “What I believe this consortium is trying to do is to create that space.”
Chawla said developers and users are often not in the same room.
“We don’t have a good, complete understanding of what could go wrong [and] what could go right,” Chawla said. “How do we measure these things? How do we monitor these AI tools as we develop them and they go out in the wild?”
Chawla said it is important for Notre Dame to have a critical voice in the conversation around AI because of the University’s mission.
“We are excited to be part of [the consortium],” Chawla said. “And as a university, it’s important for us to have a seat on the table when we think about AI as a force for good. How do we use AI as a force for good?”
The consortium was prompted by an executive order signed in October 2023, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
“Today we are in a race to build [and] deploy these technologies,” Chawla said. “But we must think about how we do it in a responsible way. How do we do it in a safe way?”
Chawla emphasized the need for establishing industry standards in artificial intelligence.
“There are no industry standards right now,” he said. “We often feel like we are building the plane as we are flying it.”
Jeffrey Rhoads, vice president for research and a professor of aerospace and mechanical engineering, said in a statement that the University was excited to join the U.S. AI Safety Institute Consortium.
“We know that to manage AI risks, we first have to measure and understand them,” Rhoads said. “It is a grand challenge that neither technologists nor government agencies can tackle alone. Through this new consortium, Notre Dame researchers will have a place at the table where they can live out Notre Dame’s mission to seek discoveries that yield benefits for the common good.”
When asked about his attitude toward AI, Chawla said he was highly optimistic.
“This is an amazing time in AI … what we can do — the impact it will have is phenomenal,” Chawla said. “But we need to recognize that we need to think about guardrails. We need to think about measurements. We need to think about safety. We need to think about trust.”
The consortium has already begun hosting meetings. Members of Chawla’s team recently attended a meeting on red-teaming efforts, and he said he plans to host an event at Notre Dame in the coming months.