Statistician · Social scientist · Educator

A robot writing a book. This image was generated using Lexica, an AI bot that runs on an image generation model called Stable Diffusion.

In the 1970s, a revolutionary technology was finally making its way into households and classrooms after decades of development. This device took just seconds to complete tasks that humans spent years learning to perform by hand. Naturally, a heated debate ensued over how to respond to this new technology. The first instinct of many parents and teachers was to ban it, fearing that young minds would become lazy and dependent on it. One common argument reported in surveys was that students would no longer be able to perform basic life skills if they ever lost access to the device. “What do they do when the battery runs out?” griped one professor to the New York Times in 1975. “I have yet to be convinced that handing them a machine and teaching them how to push the button is the right approach.”

As you may have guessed, this controversial new device was the calculator, a technology so common nowadays that most people are rarely more than an arm’s length from several devices that contain one. As for classroom use, our society seems to have reached a consensus: children should learn to do basic arithmetic on their own, but calculators should be allowed in more advanced math classes. Older students need to be able to use calculators so they can move on to developing higher-level reasoning skills without being slowed down by tedious calculations. As another professor argued to the Times in the early days of the calculator: “If a teacher finds it necessary to ban calculators, then what he’s teaching is trivial.” Fifty years later, this rings truer than ever: doing basic arithmetic by hand just isn’t that important now that calculators are everywhere, but quickly solving a calculus problem or estimating a statistical model using calculators of some sort is a highly lucrative skill.

A remarkably similar debate has sprung up in recent months over a new advancement. This time, the technology in question automates writing rather than arithmetic. AI bots such as ChatGPT can write entire essays from a prompt. Granted, ChatGPT can’t magically produce an A+ paper at the snap of a finger. As a language model, ChatGPT can only mimic language it was fed from the real world (mostly webpages and books), so it cannot think for itself or produce deep insights. As one professor points out in a viral video, essays produced by ChatGPT exhibit an unusual combination of perfect grammar and rather shallow ideas. College-level writing assignments usually require critical thinking, creativity, and citations of sources, none of which ChatGPT can handle (for now). As a result, an essay written entirely by AI is unlikely to earn more than a C. This gives students a natural incentive to write their own papers with a little help from AI, rather than the other way around. AI is especially powerful for smaller tasks: coming up with research topics and arguments, generating outlines for papers, rewording text to be more or less formal, drafting bits and pieces of text, and who knows how many other uses.

Here’s where the discussion on AI gets uncomfortable for some. Let’s say a student does use ChatGPT to write a whole essay and turns it in with few or no edits. What if this essay actually earns an A? Then, what if the professor runs it through an AI detector and discovers that the student used AI to write the whole essay? Who is at fault? Technically, the student, unless the professor explicitly authorized this kind of AI use. At a deeper level, though, the student is not the one in the wrong here. The professor is the one who gave an assignment with so little real-world value that it can be automated in seconds. The student is paying thousands of dollars in tuition to learn a task that a robot can breeze through and still get an A. To echo what that professor said about calculators in 1975: if a teacher finds it necessary to ban AI, then what they’re teaching is trivial.

So far, most educators seem to be treating AI use as cheating. Some school districts have outright banned ChatGPT, and some universities have already charged students with academic misconduct for using it. One professor even told the New York Post that he feels “abject terror” over ChatGPT after realizing a student used it to write an essay for his class. To be frank, this frenzy to suppress all AI use in classrooms is a knee-jerk reaction built on the insecurity of educators who don’t want to deal with the fact that the skills they teach are no longer useful. If a course can be aced with AI and minimal human input, then it is not worth taking. But it’s so much easier for educators to ban AI than it is for them to reevaluate their entire curricula, teaching styles, and assessment techniques for a world where many tasks will be automated in the coming years. 

Additionally, many people have yet to process the implications of AI for the value of writing your own original sentences. We often treat any use of language that a person didn’t write themselves as plagiarism, but plagiarism actually means stealing someone else’s work. Until now, there was no need to distinguish the two: stealing someone else’s work was the only possible way to use language that you didn’t write on your own. Another person had to put in the time, effort, creativity, and critical thinking to create those words. Now there is another way: an AI bot wrote it. The whole purpose of many AI bots is to generate unique, copyright-free content that the user can include in their own work, with proper attribution of course. As long as students credit any AI bots they use, there is no need to consider it plagiarism. It’s not stealing; it’s using AI exactly as it is intended to be used.

But it still feels wrong for a student to use AI-generated language in their essay, right? Let’s break that down a bit more. Why does it feel wrong? Largely because we equate writing with thinking. But even when we use AI, we still have to do the bulk of the thinking. Words are a means of conveying thought; the ideas contained within the words are where the value really lies. People often forget this because all we’ve ever known is a world where we have to do both the thinking and the writing. It’s hard to imagine a future where we no longer have to do all our writing on our own, just as it was hard in the 1970s to imagine a future where we no longer have to do all our arithmetic on our own. Think of it like typing and spellcheck: people generally still know how to write by hand and spell. We usually only handwrite for less formal purposes, so it’s fine if our spelling and grammar aren’t top-tier. For more formal purposes, we use technology to maximize our writing quality. AI is the next generation of technology for this purpose. It’s not that we won’t be able to form sentences anymore; we’ll just be able to form them better and more quickly.

As time goes on, AI will only become more widely available, more powerful, and more integrated into daily life, just like calculators. Microsoft, a major investor in the company that developed ChatGPT, is looking to add the same technology to Microsoft Word, Outlook, Bing, and its cloud services. Google has a similar AI system that it may soon roll out into its products too. It’s not yet clear exactly what this will look like, but plan on seeing AI assistants incorporated into nearly every app you use within the next few years. The predictive text on your phone will begin suggesting sentences and paragraphs rather than just the next word. Anywhere you can type, you will probably have an AI tool that can generate text from a prompt. Not long from now, using AI when you write will be so normal that writing without AI will feel like writing by hand does today.

Of course, just as children need to learn basic arithmetic, they still need to learn basic writing skills. But by high school and college, banning AI doesn’t help anyone; it actively hurts. If school is supposed to prepare students for their future, then school is where they need to learn how to use the tools that will be at their disposal. Rather than banning AI from classrooms, let’s teach students how to use it responsibly and properly, and then raise our standards accordingly. Just as calculators free us from robotic tasks so we can move on to tasks requiring higher-level thinking, so does AI, in ways we are only beginning to imagine.