
Ethical concerns about AI chatbots

In my last column, I enlisted the help of an AI chatbot to write about burnout because I was experiencing burnout myself. Luckily, I’m more energetic now and have no need for a language program to write for me. That doesn’t mean, though, that we won’t discuss AI chatbots today. 

For the last several months, AI chatbots have become increasingly popular and a more integrated part of everyone’s online experience. Major companies such as Google, Microsoft and Apple have already announced AI features for their products. We’re also witnessing the education system grapple with the proliferation of AI chatbots, prompting intense debate and new strategies to mitigate cheating. As these programs become more realistic and fine-tuned, we can only expect them to become a bigger part of our lives.

There may be warrant for some people’s enthusiasm about a greater role for AI in society. AI chatbots certainly offer benefits, such as retrieving information quickly and automating mundane tasks. Some have even found ChatGPT useful for navigating relationships and receiving life advice.

However, this excitement and these seemingly positive findings are overshadowed by larger ethical concerns about AI’s advancement. As of yesterday, thousands of technology leaders and researchers have signed a letter calling for a pause in AI development. Their concern is that major AI chatbots like ChatGPT, Microsoft’s newly AI-equipped Bing and Google’s Bard are advancing too quickly without proper guardrails or protocols in place to regulate them. I share their worries, and in today’s column I hope to highlight two particular issues emerging from AI: misinformation and hatred.

First, AI chatbots lend themselves to spreading misinformation. In a guest essay for The New York Times, the famous linguist Dr. Noam Chomsky, along with his co-authors Dr. Ian Roberts and Jeffrey Watumull, warned of the "false promise" of ChatGPT and other AI chatbots. Put simply, their argument is that AI chatbots do not demonstrate true intelligence because they can only describe and predict based on a set of data; they cannot explain or construct causal accounts the way the human brain does. At best, AI is merely pseudoscience, lacking the capacities that genuine ingenuity requires.

This becomes a problem when AI chatbots become consultants for truth and a source of information. When they can’t find a clear answer, they’ll simply make one up, generating new, non-factual information. ChatGPT, for instance, will cite nonexistent sources when answering prompts. While independent fact-checking can usually catch this, it reveals the limitations of AI chatbots and the threat they pose to the integrity of our information economy. Researchers have also been able to get ChatGPT to reproduce conspiracy theories and other types of false information. When these programs can spew dangerous disinformation, they amplify the threats to truth we’re already seeing in our society. As long as we treat AI chatbots as legitimate sources of information, we risk inadvertently subjecting ourselves to distortions of the truth that threaten our understanding of the world.

Concerns about the truthfulness of AI responses are even more pronounced in programs that pose as historical figures. Historical Figures Chat, powered by OpenAI’s GPT-3, offers users the ability to speak with historical figures. The issue, however, is that these imitations are not necessarily historically accurate. For instance, a conversation with Heinrich Himmler, the orchestrator of the "Final Solution" during the Holocaust, portrays him as remorseful about his treatment of Jews. This is likely due to policies prohibiting hate speech from OpenAI, the company that created ChatGPT. While that policy was likely well-intentioned, this example shows how it can lead to clear distortions of history that undermine well-established historical narratives.

Second, AI chatbots often engage in explicit and implicit bigotry, exposing the dangers of seeking information from sources devoid of morality. In addition to the power of explanation, Chomsky and his co-authors also emphasized the capacity for moral thinking as a key feature of intelligence. Morality is necessary to steer research, and the application of what we learn, toward just aims. AI chatbots, however, aren’t capable of moral thinking, or at least not to the extent that we’d like. Their mode of operation is to take a prompt and produce a suitable output that aligns with their programming. Many chatbots include some feature prohibiting hate speech and other discriminatory behavior. But programming has errors, and those bugs can be exploited; the safeguards also fail to account for specific, nuanced scenarios.

When Microsoft released its Tay chatbot on Twitter in 2016, users quickly fed it bigoted content until Tay itself became a hate-spewing program. Tay tweeted things like “Jews did 9/11,” called for a race war and called feminism a disease. Meta’s AI chatbot repeated election denialism and complained that American Jews are too liberal. Bing’s AI prompted one user to say “Heil Hitler.” In its earlier iterations, ChatGPT called for torturing Iranians and Syrians, as well as surveilling mosques. Although AI companies have tried to combat these tendencies, that hasn’t stopped users from exploiting mistakes to get around the ethical safeguards.

These instances clearly demonstrate that efforts to purge bigotry and hatred from AI chatbots will nearly always fall short. As AI continues to develop, users will find more ways to steer these chatbots toward incendiary views. This should gravely concern us all: the greater integration of AI in society can exacerbate the marginalization of oppressed groups when AI systems can themselves be bigoted. For instance, the use of AI to evaluate potential tenants has been found to perpetuate housing discrimination in the United States.

AI is certainly developing too quickly, and too little work is being invested in fixing its tendencies to spread misinformation and hatred. In fact, we’re seeing the exact opposite, such as Microsoft laying off its AI ethics team even as it increases its AI investments. As AI continues to advance, more pressure must be put on these companies to address these concerns. We can’t allow misinformation and bigotry to go high tech.

Blake Ziegler is a senior at Notre Dame studying political science, philosophy and constitutional studies. He enjoys writing about Judaism, the good life, pressing political issues and more. Outside of The Observer, Blake serves as president of the Jewish Club and a teaching assistant for God and the Good Life. He can be reached at @NewsWithZig on Twitter or bziegler@nd.edu.

The views expressed in this column are those of the author and not necessarily those of The Observer.