By Jashodhara Jindal, Student, Pathways School Noida
The use of AI-generated content is causing turmoil in academia. The question of whether utilizing AI-generated content constitutes plagiarism has become entangled in polemics, with valid arguments on both sides. Nevertheless, plagiarism, or copying without attribution, remains a serious breach of academic integrity. OpenAI’s ChatGPT has become ubiquitous in recent months; trained on an extensive dataset and employing natural language processing techniques, it generates responses that closely mimic human language. It is already being widely used to complete academic assignments, reports, research, and more. ChatGPT-generated content is considered plagiarism by some because ChatGPT does not credit its sources. However, whether copying from ChatGPT itself constitutes plagiarism is a contentious topic.
OpenAI, the owner of ChatGPT, assigns to the user “all its right, title, and interest in the output” in its terms of use. The company does not claim copyright over its generated output; in fact, it relinquishes all legal rights regarding the ownership and possession of that output to the user. The user can hardly be accused of plagiarizing material for which they own the rights and title.
ChatGPT employs a large language model to generate its responses, drawing on a training dataset that includes web pages, essays, magazine articles, internet content, and books. While some of these works may be protected by copyright, proving infringement or plagiarism may be challenging, since OpenAI maintains that the model does not copy-paste from its sources. The gap, perhaps in technology but more likely in intent, is that OpenAI discloses neither its sources nor the degree to which individual texts contribute to the final output. OpenAI places the responsibility on users to ensure that their use of the output adheres to laws and guidelines.
When the renowned political pundit Fareed Zakaria faced plagiarism accusations years before AI chatbots gained prominence, his former editor Michael Kinsley explored the fine line between ordinary research and borrowing without attribution. Kinsley suggested that several contemporary writers are essentially “collecting and rearranging stuff that is the work of other people.” Perhaps ChatGPT is merely streamlining this process. However, when it comes to students, it is debatable whether this process should be made more efficient.
There are alternative, less problematic ways to use AI chatbots. Search engines are already common tools for research. ChatGPT can expand a student’s reading list significantly, saving time and effort by generating summaries of individual pieces. It can also offer multiple outline suggestions and serve as a debate sparring partner for practice, exposing students to diverse viewpoints and helping them absorb extensive reading material. None of these applications, on its own, should be considered academic dishonesty; rather, they are tools for broadening perspectives and enhancing productivity. Artificial Intelligence, when used judiciously, can be a force multiplier for human ingenuity.
A prominent argument in favor of AI-generated content is that the output reflects the user’s skill in providing high-quality prompts. A simple experiment can validate or invalidate this notion.
I conducted an experiment using ChatGPT for a history assignment I received in eleventh grade. Using one low-quality prompt, “Write essay on Hitler rise to power 500 words,” and one high-quality prompt, “Write an essay on Hitler’s rise to power in Germany. Include all essential points such as the Beer Hall Putsch, propaganda, post-World War effects, economy, etc., and make sure to include necessary details. 500 words,” I generated two essays on Adolf Hitler’s rise to power. I then submitted these essays to my history teacher for evaluation under the International Baccalaureate History Paper 2 grading criteria. He found that both essays were of a similar grade level and couldn’t identify which essay was generated using which prompt. Although this is an isolated result, it strongly suggests that for a student seeking to complete their homework with minimal effort, ChatGPT serves as more of an autonomous engine for plagiarism than a productivity-enhancing tool.
Academic institutions are rightfully cautious of ChatGPT. The pillars of education emphasize individual effort and critical thinking. Submitting AI-generated assignments undermines the learning process. Importantly, students engage in critical thinking when they write; the cognitive process occurring in their minds is as valuable as, if not more valuable than, the output itself. If all thinking, analyzing, and composing are outsourced to a chatbot, the fundamental purpose of the exercise is defeated.
With the disruption caused by ChatGPT and other AI chatbots, an evolution of testing methods may be on the horizon. Lilian Edwards, a prominent academic in Internet Law, recently remarked, “At the moment, it’s looking a lot like the end of essays as an assignment for education.” Banning ChatGPT may become increasingly challenging to enforce as its usage becomes harder to detect. Establishing guidelines for its ethical and responsible use could prove more productive, serving students, teachers, and examiners alike.