AI writing tools may have existed earlier than is commonly known, but writing generators became publicly available online around 2017. These programs typically use a combination of natural language processing and machine learning to generate text based on the user’s input.
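Real writing generators rely on large neural language models, but the core idea of “predict the next word given the user’s input” can be sketched with a toy Markov-chain model. This is only an illustration, not how any commercial tool is actually built; the function names here are hypothetical.

```python
import random
from collections import defaultdict

def build_model(text, order=1):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, seed, length=10):
    """Extend the user's seed by repeatedly sampling a plausible next word."""
    out = list(seed)
    for _ in range(length):
        key = tuple(out[-len(seed):])
        choices = model.get(key)
        if not choices:  # no known continuation: stop generating
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the door"
model = build_model(corpus)
print(generate(model, ("the",), length=5))
```

Even this toy shows why generated text needs human review: the model only recombines patterns it has seen, with no notion of coherence or factual accuracy.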
While we are on the topic of writing, we want our student readers to know that if they feel burnt out, overwhelmed, or simply need a break from their schoolwork, they can visit Studyfy to get professional help at fair and negotiable rates.
Studyfy provides all kinds of help for students and is among the most professional essay editing services out there, with hundreds of seasoned expert writers. Homework, essays, reports, you name it: a writer will be at your service once you’ve clicked a button.
Moving on, here’s our take on AI tools.
The AI’s Capabilities Are Limited
The AI tools accessible to the public online are typically free or subscription-based. Their capabilities remain quite limited, which is why accusing students who use them of academic dishonesty can be unfair. Let us clarify what we mean:
Most of these AI tools, like Jarvis.ai, can improve text, suggest changes, or rewrite passages, much like Grammarly.
Most AI tools cannot write a complete piece from scratch based on a user’s input; they can only make suggestions about what to write.
The text that these tools generate usually needs to be rewritten, heavily edited, and reorganized, since many of their suggestions appear incoherent without modification.
As of now, it is difficult to know whether the examples and information an AI provides come from credible sources. Some AI writing generators cite the source of the information they draw on, but most do not.
So, in that sense, the AI’s limited capability does not necessarily pose a threat to writing ethics. It is more comparable to a digital assistant that offers additional ideas for content, ideas that need heavy modification before they can be used. However, AI does pose a bigger threat to other aspects of writing.
Relying On AI Can Potentially Dull People Down
If students rely too heavily on AI writing tools and do not moderate their use over a long period, they risk becoming lazy and habituated to the assistance. The brain is like a muscle that needs to be stimulated by learning.
We already observe heavy reliance on technology in cities and communities across the globe. We rely on our phones for directions, communication, work, and entertainment. Billions of people worldwide have a growing phone dependency, often without realizing how serious a problem it has become.
Confirm this by riding public transport or visiting a public space, and you are bound to see many people glued to their phone screens (or tablets, computers, etc.). Many people could use a dopamine detox, because that is exactly what phones and most smart devices offer: a superficial hit of dopamine, instant gratification at our fingertips.
That is where the ethical issue of using technology might arise. It is paradoxical in nature: the more automated things become, the easier life gets for most people, but if we prolong our use of these shortcuts and smart apps, what will happen to our own capabilities in the long run? Are we relying on technology to the point of regression?
There have been discussions of banning AI tools on academic campuses, such as at the Victoria University of Wellington. Still, this type of tech is simply too easy to access online, and there are plenty of alternatives. Thus, perhaps the way to make the use of AI tools ethical is to teach students how to use them properly.
However, how this would work in practice is still unknown and undecided, though there are reasonable limits that schools could agree upon. For instance, educators could meet to classify which school activities and assignments warrant the use of AI tools and which do not.
Teachers could brainstorm together to identify when AI tools are appropriate and when an assignment requires a student’s own extended effort. However, this is a major move and must be fully agreed upon by all sides.
We have simplified everything, from harvesting our fields to washing our clothes, so viewed from a broader perspective, what is happening to the way we learn mirrors what has already happened to the way we commute: it is being automated and revolutionized.
However, with revolution and progress come many dangers, as we have already stated. Laziness and regression are potential threats if we rely too heavily on technology, though this claim should be taken with a grain of salt.
Automation and our progress could also lead us to transform the way we learn, the way we handle education in schools, the way we work with machines, and so forth. We could be stepping into an unprecedented era, where students and the majority of the populace will be handling the management, supervision, and creation of AI and robots.
It is best not to pass final judgment on a future we cannot yet fully grasp. The realm of possibility is still vast. As for ethics, it is a matter of perspective and a matter of common sense. If students still have to work on their papers at the end of the day, perhaps it is too far-fetched to call using these tools downright plagiarism.