
Robot Did My Coursework: AI Poses New Questions, Risks in Academia

I often think about what the first students of St Andrews would think of the University today. I picture a fifteenth-century academic fainting at the sight of cars, or Tesco, or the lack of people being burnt at the stake. Nevertheless, it is obvious that a 600-year-old institution has progressed alongside technological advancement. Access to printed texts, the invention of the graphing calculator, the use of PCs, etc., have all drastically affected the University's approach to its established pedagogy.


The newest invention that is sure to revolutionise academia is a program called ChatGPT. It is an artificial intelligence program created by OpenAI, which has the ability to produce text in a human-like manner.



The “chatbot” is powered by a vast array of sources from across the internet and has the ability to constantly learn and develop. ChatGPT is the successor to earlier models like GPT-3 and other programs developed by tech company OpenAI. The company proudly states that its mission is to ensure that “artificial general intelligence benefits all of humanity”.


OpenAI was started by a group of tech giants: Sam Altman, Peter Thiel, Reid Hoffman, Jessica Livingston, and of course, Elon Musk (whose name I am now just going to assume is loosely attached to any tech company I may encounter). The program’s colloquially conversational manner, alongside its accessibility, separates it from its predecessors. From “write a screenplay about a frog wizard just looking for love” to “rewrite this code in a different language”, ChatGPT thinks and produces work comparable to, and in some cases better than, what a human might produce. This brings up a burning question: what effect will ChatGPT have on academia?


Whether it’s students using the software to write essays or teachers using it to develop lesson plans, ChatGPT will categorically affect aspects of traditional learning. Globally, teachers are already moving assessments from online back to in-person and shifting away from essays towards projects that engage more with students’ conceptual and critical thinking abilities. Many universities are rewriting their academic honour codes to explicitly outlaw the program. In the United States, ChatGPT has already been banned in school districts like New York and Los Angeles. St Andrews is yet to release a prescriptive response as to how the chatbot should be used, if at all. Surely having another source write a piece that you pass off as your own is a violation of academic integrity, but to what extent is unclear.

Few quotes truly stick with me through the years, but one has: my ethically questionable coach turning to ten-year-old me and saying, “if you’re not cheating, you’re not trying”. It is one I have found truly powerful.


Plagiarism has always been an issue within universities, and the idea of having another entity complete a piece of writing for you is not new. There have always been people that you can send your essay question to, and as long as you buy them a meal deal and promise that you won’t tell anyone, they will write a decently good piece for you (I’ll put his email at the bottom). Regardless of whether or not I hired someone to write this article or used ChatGPT for most of it, the ethical use of Artificial Intelligence is something that universities need to start incorporating into their understanding of academic integrity.


The chatbot is perhaps unlike any other form of academic dishonesty because of its accessibility.


ChatGPT is free and easy to use (that might sound like an advertisement, but it is not). The program has the ability to simplify complex concepts and reframe them in understandable ideas and language, which can make assessments far easier for any user. In the last few months alone, the bot has taken an MBA exam issued by a Wharton Business School professor and the United States Medical Licensing Exam.


The latter typically takes medical students five years of school and studying to pass. So the chatbot is easy to use, academically capable, and you don’t have to buy it a meal deal. Although I believe it is implicitly banned through the honour code, I am sure there are students reading this asking themselves: why not just use ChatGPT?


OpenAI’s program was immediately met with responses from a number of plagiarism-detection software companies. A staple of St Andrews, Turnitin, has already begun to develop means of testing whether papers were written by artificial intelligence or by humans. A student at Princeton created an app called GPTZero which can tell the difference (yes, he is only twenty-two, and, yes, that has really put my life into perspective).


However, ChatGPT is not a magical, all-knowing genie; it does have some flaws of its own.


The program’s knowledge is largely sourced from the internet, so using ChatGPT as a search engine is not very reliable. Beyond the logistical nightmare of people attempting to develop a system for citing ChatGPT, it should not be used as a source of facts.


There have been multiple cases in which it has given false information, and to use it as a search engine is the same as using Chrome or Safari without checking the name of any of the websites you click on. It would be great to think that humans have created an almighty being who awaits our every command, but I do not think that is the case here.


St Andrews, alongside many other institutions, is now approaching a time when it is necessary to set a precedent for how Artificial Intelligence should be used in academic practice. The University could ban the use of the chatbot, which quite frankly would likely be ineffectual, or it could choose to integrate it into the curriculum. I know there have already been cases in Computer Science and Philosophy lectures where professors have displayed ChatGPT to help illustrate a point or allude to the implications that it may have in their field.


Invariably, artificial intelligence of this kind will become an increasingly prominent part of our lives and our academic institutions, but whether the University will condemn or condone it remains uncertain.


I am sure the invention of the computer or the calculator caused a stir in universities. Some people must have thought their fate was sealed and academia was about to come crumbling down. Now they are both deeply ingrained in all academic classes, and no one is saying that we are worse off for their presence.


While artificial intelligence’s rapidly progressing abilities are a daunting prospect, if the technology is not approached with an open mind and some degree of curiosity, institutions will be accepting a wilful ignorance.


I decided to take my questions directly to the source and interview ChatGPT itself. I started by asking what it thinks of the moral permissibility of its use in examinations, before I quickly became derailed and asked it for some self-help tips (I do not recommend treating the program like a therapist; it cares little for your feelings).


However, before its feigned attempt at convincing me that I’m the problem, the chatbot acknowledged its potential for misuse in academia, while maintaining that it is designed for the purpose of helping people.


When asked for his thoughts on ChatGPT, Professor Ian Gent of the St Andrews computer science department said, “I’ve been in AI for half its history and this is the first time I’ve really bought into the hype of the latest AI fad. I think if anything ChatGPT is being underhyped, not overhyped”.


The program undeniably has the capacity to effect change, and academia is not immune to that, regardless of whether it is banned.


The University has the unique opportunity to modernise its approach to teaching.


The robots are not taking over, teachers are not becoming obsolete, and I still haven't found a viable therapist.


However, academia and the world around it are bound to change, as they should, and artificial intelligence will become more prominent and capable each year.


Universities have the ability to utilise this new tool to reassess their approach to teaching and develop alongside the inevitable technological growth.




Illustration: Lauren McAndrew
