Conference to explore ethical issues with artificial intelligence
Mariah Rush | Wednesday, September 19, 2018
A two-day conference examining ethical issues in artificial intelligence (AI) usage will begin Wednesday evening in O’Neill Hall and continue as an all-day event Thursday in McKenna Hall.
The conference, “Artificial Intelligence and Business Ethics: Friends or Foes?” is sponsored by the Mendoza College of Business and the Chase Manhattan Lecture Series, an endowment that supports the study of the ethical responsibilities of business. The conference will feature seven speakers from various business backgrounds and four panelists.
Conference organizer and professor Tim Carone said the conference centers on artificial intelligence’s role in business ethics and the extent to which its decision making has positive or negative consequences.
“The idea is that artificial intelligence is software that actually makes a decision. In the past, humans always made decisions about things. Now, when it comes to things like self-driving cars or drones, artificial intelligence software makes decisions that humans used to, and what we’ve been seeing is that it’ll act in ways that are not anticipated or thought of at the time,” Carone said. “So that’s one of the big things — can we understand that? Can we identify when we see it and determine if it’s good or bad or ugly? The other part of the problem is that when AI is making decisions, you can’t interrogate them and say, ‘Why did you make that decision to turn left instead of right?’ And that’s a huge gap in our understanding, and someone’s going to be talking about that as well at the conference.”
Students and corporate representatives alike will be attending the conference, which examines the potential bias that may arise from the use of AI. Carone said he has asked speakers to be “edgy and provocative” with their talks.
Carone said he saw a need for this first-time event after observing advances in AI technology that allowed software to interpret data on its own, particularly in critical moments when objectivity was needed.
“We’ve actually gotten to the point now where the software models are capable of making decisions in critical business processes, and we know in the past when that’s happened that the models we create are based on our data, and we know our data has problems,” Carone said. “There’s missing data, there’s biased data, and therefore that bias could potentially make the decisions based on that data highly biased or racist decisions. And if AIs are going to be used even more in the corporation and in managing these business processes, we kind of have to worry about how these things show up. It may take a long time to figure out that an AI has actually been making biased decisions.”
Carone said he advises acknowledging the ethical issues that come with using such advanced technology and discussing how to manage the risk.
“So, we have to go in eyes wide open and assume we’ll have ethical issues, and we need to understand how we identify and how we manage that risk. It’s a difficult field, and there hasn’t been a lot of work in understanding how to deal with ethics issues as they arise with this kind of software running companies,” Carone said.
Carone said he hopes to bring awareness to Notre Dame’s campus through the Mendoza College of Business and to conduct research on how to address these potential problems.
“The goal really is: can we identify some areas where Mendoza can start doing research, or identify the areas of research needed to address this — how to manage a corporation,” Carone said. “But it’s really what research can we identify and how to move forward.”