FOX31 spoke to Shaun Schafer, a professor who sits on the Generative AI Task Force at the Metropolitan State University of Denver. Schafer said ChatGPT is a generative artificial intelligence program: you feed it information, such as a prompt or a question, and it produces an answer.
Can teachers find out if a paper was written using ChatGPT?
So, how does this play a role in education? Could some students take the “easy way out” when it comes to writing papers?
This is something a lot of teachers and professors are having to deal with whether they’re ready to or not.
You could say something like, “Write a three-page paper about Abraham Lincoln’s career.” FOX31’s Carly Moore tried this, and in just over a minute, ChatGPT wrote a decent paper covering different periods of Lincoln’s life. It was not, however, three pages long.
Schafer said it typically produces about B to C-grade papers right now.
What ChatGPT writes, he said, is usually mostly accurate factually but isn’t very dynamic.
So how would a teacher know if someone used ChatGPT for a paper? Teachers can paste text into an AI-detection website, like ZeroGPT, but these tools aren’t always accurate.
“ChatGPT [detectors have] about a 1% to 2% false positive rate right now. There’s not one out there that doesn’t have that. So that means if you did 1,000 papers, all of a sudden you’ve got this problem that you’re accusing people of being wrong who are not and did not use GPT,” said Schafer. At that rate, 10 to 20 of every 1,000 papers would be flagged even though the students never used the tool.
He said the generative AI programs keep improving based on feedback they get from users.
The more people use it, the more accurate it will be.
Some school districts, like New York City Public Schools, have already instituted an all-out ban. Schafer doesn’t believe that’s realistic.
What stance are local universities taking on AI use in the classroom?
How can universities walk the line of exploring new technology without potentially impacting creativity?
AI and ChatGPT are something MSU has talked a lot about. They’ve created a task force to make recommendations on how to deal with it in the classroom.
Schafer, a member of the task force and deputy interim provost, said the university felt it needed a task force to keep up with an ever-changing industry.
Basically, the task force makes recommendations to the provost and then those recommendations go to faculty and deans.
One of the things that will be new for students as they come back this fall is that every course syllabus will include rules describing how students can and can’t use AI in that class.
The rules could differ by area of study or even by assignment type, ranging from very restricted use to assignments that require students to use AI.
“We talk a lot about what the rules of academic misconduct are. Basically, if you present work as your own work, it has to be your own work, not AI-generated,” said Schafer. “At what point am I cheating? At what point does that become the next level? That is the really hard part. You can take a real ultimatum of ‘No AI may be used in this, and if any AI is used or I suspect any, you’re out.’ You can do that, but the reality is that’s not very practical. We’ve recognized that people are going to use it, and so one of the things we’re trying to come up with is what the best spectrum of rules is that you can have to deal with that.”
Schafer said the best instructors are already having conversations about what is acceptable, but he said it’s going to get harder and harder to tell the difference.