
Generative AI-related Honor Code violations rise as Notre Dame students learn new uses

Built by the artificial intelligence (AI) company OpenAI, ChatGPT rocked the world when it became available in late fall 2022, offering chatbot technology with a capability the public had never seen before.

In a single, free program, ChatGPT could tell jokes about rabbits, write love poems for a significant other and compose proper university mission statements. After OpenAI’s grand entrance into the chatbot scene, other large tech companies, including Google and Microsoft, raced to develop their own versions of the technology. With a tool that seemed able to perform whatever task a user could think of, the possibilities felt endless.

But these generative AI technologies had become so powerful that many educators feared they could also do a student’s homework, or worse, ace a final exam.

According to Ardea Russo, the director of the Office of Academic Standards, 10% of academic Honor Code violations at Notre Dame were related to generative AI in the Spring 2023 semester, when the technology was still fairly new to student users. By the Fall 2023 semester, that figure had jumped to 30%.

The world of academic dishonesty, an age-old problem for universities, has been transformed.

Passing Business Analytics without knowing how to code

Students majoring in Business Analytics at Notre Dame would typically find it impossible to get by without learning how to code — until now, perhaps.

A graduating senior in the major spoke on condition of anonymity to The Observer about their heavy use of GPT-4, OpenAI’s most powerful, subscription-based generative AI service. 

The student said they have used the service to complete coding assignments in R or Python for “anything from weekly homework to full-blown final projects.”

“I’m not skilled in the synthesis of the projects, but I can read them,” the student said. “I don’t know if I could do them at all, because I can’t really code.”

There are hazards, the student acknowledged. Sometimes an AI’s output will include an error such as “[INSERT BLANK HERE],” which could spell an Honor Code violation if not edited out of the student’s assignment submission. 
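That particular hazard is easy to screen for mechanically. As a minimal, hypothetical sketch (not a tool the student described using), a few lines of Python could flag leftover bracketed placeholders in a file before submission:

```python
import re
import sys

# Flags leftover AI placeholders such as "[INSERT BLANK HERE]":
# runs of capital letters, spaces or underscores inside square brackets.
PLACEHOLDER = re.compile(r"\[[A-Z][A-Z _]+\]")

def find_placeholders(text: str) -> list[str]:
    """Return placeholder-looking tokens left in a submission."""
    return PLACEHOLDER.findall(text)

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        for token in find_placeholders(f.read()):
            print(f"Leftover placeholder: {token}")
```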

But the student has never been caught: because coding assignments typically have only one or two right answers, an AI-generated solution doesn’t stand out from anyone else’s.

Despite not knowing how to code, the student does not think their education has been deficient.

“It’s just kind of a trend for where data analytics and coding is going,” the student said. “Learning to code in the language itself is going to be an outdated medium.”

Updating academic policies

Notre Dame has worked quickly to update its academic policies to better suit the changing academic landscape. Upon returning to campus this semester, students received an email from Russo reminding them of the Honor Code and its recent updates.

In the email, Russo told students that editing written work using AI technologies is “not recommended,” and reminded them that “use of generative AI in a way that violates an instructor’s articulated policy, or using it to complete coursework in a way not expressly permitted by the faculty member, will be considered a violation of the Honor Code.”

Russo said that the University has built a degree of flexibility into its Honor Code, allowing professors to set their own policies, with the University’s policies taking effect in the absence of policies set by a professor.

“I think the flexibility makes sense for what different types of classes are trying to accomplish,” Russo said.

Unlike traditional plagiarism, where work copied from another student or academic can be detected and proved if the original is found, the use of generative AI is difficult to detect in a student’s work.

But sometimes, detection is possible. AI can reveal itself in subtle ways, Russo explained. 

Professors can compare past essays side-by-side with a suspicious submission to spot work written in an entirely different style. Sometimes the AI “hallucinates,” and the professor can catch its mistakes. And in other instances, an AI-generated submission is simply unlike anything a “typical undergraduate” would produce.
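As a rough illustration of the kind of side-by-side comparison Russo describes, the sketch below scores a suspicious essay against a student’s past writing using character n-gram TF-IDF similarity. It is purely hypothetical: neither Russo nor the University describes using automated stylometry, and real stylometric analysis relies on far richer features.

```python
# Purely illustrative: score a suspicious essay against a student's past
# writing with character n-gram TF-IDF, a crude proxy for writing style.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_essays = [
    "In my first essay, I argued that campus housing policy ...",
    "My midterm paper examined the history of the parish system ...",
]
suspicious = "This analysis interrogates the epistemic foundations of ..."

vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 5))
vectors = vectorizer.fit_transform(past_essays + [suspicious])

# Similarity of the new essay to each past essay; a very low average
# would be one (weak) signal that the writing style has changed.
scores = cosine_similarity(vectors[-1], vectors[:-1])
print(f"Mean similarity to past work: {scores.mean():.2f}")
```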

A professor’s suspicion doesn’t amount to an immediate Honor Code violation; instead, it leads to a conversation with the student to find out more about the situation. In setting consequences for AI-related violations, Russo said the University takes into account the forthrightness of a student in coming forward to confess their offenses.

And if a suspected student continues to deny a violation after those initial conversations, the professor can opt to take the case to an Honor Code hearing, where a University panel decides the matter by vote.

And despite the race to update policies for a changing world of technology, Russo stressed that punishment isn’t the goal of enforcing Notre Dame’s rules on generative AI.

“It’s not like anyone wakes up in the morning just wanting to catch cheaters,” she said. “We care about students learning, and if you’re outsourcing your work to ChatGPT, then you aren’t. And we want you to take your Notre Dame degree and represent us as a well-educated member of society.”

Changing world of academia

Susan Blum, an anthropology professor at Notre Dame, said the problem of how best to manage AI technology in the classroom is not confined to Notre Dame but is prevalent across the country.

“It always takes us back to what the goals of education are, and all the really smart writing professors that I read talk about the fact that writing really is for thinking and learning,” Blum said. “But if we emphasize that writing is for producing a product, then outsourcing that task gets you the product, but it doesn’t get you the thinking.”

Rethinking the goals of education is a responsibility for everyone in education, Blum said, one that will “challenge what we assume.” Students in particular must question their goals in obtaining an education.

“If your incentive is to get a decent grade with the minimal amount of engagement, you know, AI will get you part of the way there. You won’t learn anything, but you’ll get the work done,” Blum said. “So if learning matters, you can only use it so much.”

Desync is a start-up founded by two Notre Dame students that combines the ambitions of entrepreneurs with the utility of AI bots. Courtesy of Mark Evgenev.

AI to be a better person

Some students use AI to get by, and others, like Mark Evgenev, use it for everything.

Evgenev is a junior at Notre Dame majoring in science business who in January 2023 had an epiphany upon encountering ChatGPT: the technology is “the future.”

Evgenev set to work. Funded by Notre Dame, he took over 200 hours of courses on AI prompting through the online learning platform Udemy and joined Collaborative Dynamics, a community of more than 10,000 people learning how to prompt AI effectively. Through this work, he began to develop his own styles and theories of prompting.

Evgenev has also co-founded an AI-centered company, desync, which aspires to lower barriers to entrepreneurship by matching start-up companies with the right investors. 

“When it comes to fundraising, a lot of people get it really, really wrong,” Evgenev said. “They start spamming all the investors they can find, and they just get ignored.”

Desync therefore maintains a database of 220,000 investors, which its AI sorts through to match a founder with the 100 best-fitting investors, to whom the bot can then send an ideal pitch by email.
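Desync has not published how its matching works, so the sketch below is only a guess at the general shape of such a pipeline: each investor is scored by overlap with a founder’s profile, and the top 100 are kept. All names, fields and the scoring rule are assumptions for illustration, not desync’s actual code.

```python
from dataclasses import dataclass

# Hypothetical sketch of an investor-matching step; the fields, sample
# data and keyword-overlap scoring are illustrative guesses.
@dataclass
class Investor:
    name: str
    email: str
    focus: set[str]  # sectors and stages this investor backs

def top_matches(founder_profile: set[str],
                investors: list[Investor],
                k: int = 100) -> list[Investor]:
    """Rank investors by keyword overlap with the founder, keep the top k."""
    return sorted(investors,
                  key=lambda inv: len(inv.focus & founder_profile),
                  reverse=True)[:k]

# In desync's terms, `investors` would be a 220,000-row database, and each
# of the 100 matches would then receive a tailored pitch email from the bot.
founder = {"fintech", "seed", "b2b"}
investors = [Investor("A. Capital", "a@example.com", {"fintech", "seed"})]
for inv in top_matches(founder, investors):
    print(f"Pitch {inv.name} at {inv.email}")
```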

While Evgenev finds himself increasingly invested in the field of AI, he is not very worried about its implications for the education of serious students.

“We got into one of the best schools in the country, and just using this to cheat and get through classes, that’s not going to get you anywhere,” Evgenev said.

Instead, Evgenev thinks students should use AI to expand what they learn in their classes: to practice on more problems in order to master a given skill, or as an explainer for concepts they find difficult to understand.

“It’s really an opportunity to learn more rather than do less,” he said.

Evgenev aims to become a subject-matter expert in AI, and he is already working over 100 hours every week between schoolwork and running his company. The amount of work, he said, is “roughly insane,” but it’s part of his personality.

“When I do things, I’m either all in, or I’m not in,” he said.

Evgenev said he has used AI successfully to help him improve “across every single level of life” over the last year.

“I have AI as a nutritionist. I have AI as a psychologist — if I have any problems I could just have that conversation and feel a tiny bit better. I can have AI for athletics, AI for personal development (to learn) how to actually talk to people better,” Evgenev said. 

The goal in this AI-engrained lifestyle, he said, is to “be better as a person.”