As part of a symposium on “Envisioning Federal AI Legislation” organized by the Law School’s Journal of Legislation, U.S. Sen. Todd Young, a Republican from Indiana, participated in a fireside chat moderated by the American Enterprise Institute’s (AEI) Adam White. Young spoke to the audience in the McCartan Courtroom about the emergence of artificial intelligence (AI) and its implications for regulation and national security.
The senator has been the subject of recent controversy on campus, following Notre Dame College Republicans’ withdrawal of their endorsement after Young said he would not vote for Trump in 2024.
Young said that to regulate AI, lawmakers first need to understand it, relying on expert knowledge.
“We can’t get command of every issue. That would be unrealistic. Not every member of Congress knows exactly how airplanes work, but we still regulate them,” he said.
“Airplanes might be the wrong example,” White quipped in response, referring to recent headlines surrounding Boeing planes.
Young, a member of the bipartisan Senate AI Caucus, discussed how efforts with Senators Chuck Schumer (D-NY), Mike Rounds (R-SD) and Martin Heinrich (D-NM) have constituted a “mutual exploration of the truth.”
He explained that without committee-room cameras present, there was less pandering.
“Less showboating too. Something I tend to enjoy, the less, not the showboating,” Young said to laughs.
Young struck an optimistic note about the possibilities that AI can bring.
“You think of how we can make very rapid advancements in almost every realm,” he said, mentioning the potential to eradicate diseases like cancer and Alzheimer’s disease. He also discussed the deployment of autonomous technologies on land, at sea and in the air, and the prospect of ending traffic through autonomous vehicles.
Young said that in creating an appropriate regulatory framework, the government should be careful not to overregulate despite “very valid” concerns.
“Imagine a world in which those things don’t happen because we don’t appreciate the opportunity cost of overregulating,” he said.
Young also spoke to the marvels of generative AI, discussing a recent dinner with a colleague.
“Someone was referencing the speech they wanted to deliver in a few days, and I, in less than a second, drafted a speech that was far better. And the person was like ‘Can you send that to me?’ I would not counsel using this. Folks always find out, right?” Young said.
Young mentioned that the CEO of the Indiana-based medicine company Eli Lilly and Company told him that “almost every innovation that [they] have right now, every drug in [their] pipeline, would not be possible but for existing AI technology.”
He stated that his interest in AI is a national security priority, an extension of his work on the CHIPS and Science Act, which funded domestic research and production of semiconductors.
In response to a question from White on why the private sector couldn’t handle these concerns, Young said some of White’s AEI colleagues did not understand market failure and the need for the government to step in.
“It’s a global pandemic. What do you want us to wait for, until the market fails? The market has failed! The market has failed, there’s no substitute for a semiconductor,” he said.
His critique extended to the Wall Street Journal editorial page for the same blind spot.
“Today, you will be hard-pressed to go back and find a Wall Street Journal editorial that they place on their opinions page from an independent observer … who admits that we had to [reshore] capacity, or comes up with a viable alternative solution,” Young said.
“Our economic history demonstrates the benefit-to-cost ratio of investing in university-led research and DOE research is off the charts. And that too, has been a reality that not everyone will admit. It seems a quirky, libertarian impulse to deny that,” he added.
Young criticized both the Trump and Biden administrations for their trade policies.
“That’s a lot of tariff pancaking on top of computer chips. It will starve us as we become increasingly used to embedding these technologies,” he said.
Young also discussed the “black swan” risks of AI, or unforeseen threats.
“There, there's some threats out there that are being discussed. I mean, if AI technologies enable one to more effectively, more easily develop superbugs,” he said, mentioning a bill he has proposed with Sen. Maria Cantwell (D-WA) to establish an AI Safety Institute to develop standards for AI.
Young answered a law student’s question about whether he’d be comfortable with AI writing law.
“How do you know I haven’t?” he joked. “Yeah, of course I would. But a human being would have to study whatever output the algorithm produced before, you know, introducing the legislation before it’s voted on, and before it certainly becomes law.”
Young said that he allows staff to use ChatGPT as a tool for producing internal memos while reminding them that they remain responsible for the memos’ content.
In response to a question posed by professor John Behrens about the many different concepts that “AI” refers to, Young said the terms are malleable.
“If you’re … asking if among colleagues, we’ve established working definitions to have precise debates, no. It’s the United States and, you know, at some level, nobody’s in charge,” he said.