Tired of the AI conversation, when are we going to talk about school?

What is artificial intelligence, and what was it made for? Is it a corporate tool used to control us, or a resource that we will benefit from? Will it ruin our ability to think critically? Maybe it’s your friend, perhaps your enemy, and if you’re a student, it’s likely just something you use for those online assignments.


See, in almost every AI conversation I’ve been a part of, there is always some finger-pointing. It’s as if there must be an obvious negative, something unfathomably terrible that comes from this kind of power. I mean, with this kind of advancement, it’s only natural to be skeptical.


We do know of a few negatives so far, one of the most detrimental being in the context of education. AI has had a huge impact on academia for everyone involved, partly because of how pressing the issue is; learning and growing isn’t something we can just pause, so educators are scrambling to unite on an answer. And while the fingers are definitely pointed, we might not be pointing them in the right direction.


When conversations are had about AI in school, I am never invited. I can’t speak for the conversations that may or may not have been held by the people who decide what’s important for me to learn, but I can speak for what I haven’t heard. I haven’t heard of any invitation to an open discussion where my opinions could create change. Instead, I have received threats of academic investigation if I dare to use AI at all. While each instructor polices plagiarism in their own way, it almost always lands on the student to be responsible for any failure of the new technology at hand.


It’s clear that schools believe the student should bear the repercussions when their work is flagged for plagiarism, even if it was not plagiarized, and that is likely because that option is close to free. It surprisingly costs far less time, money, and human resources to churn through pools of academic investigations than it does to change the system we have. Or does it?


It’s very easy to believe the explanation we are given: that the current AI detection tools are not good enough, and that’s just how it is. While that may be true, it distracts us from what does work, from what is good enough: changing the way these schools test in the first place, before detection tools even need to be considered.


Testing and exams in schools used to be a hot topic for discussion in the community, so what happened? First we were hit with the monumental coronavirus, which took everyone’s attention entirely. This gave schools their first step towards shifting the blame for poor education quality; in fact, the point was made for them by the public: ‘students will cheat if they can get away with it.’ Second to COVID was the public release of free-to-use generative AI, which only further promoted the narrative that students do not care about their education and will cheat if given the chance. In combination, these dramatically shifted the focus of the testing issue from school to student, and now that there was an excuse, the pointed fingers were turned away for good.


While academic institutions can publicly shame AI as the reason students graduate with a poorer education than before, attention should be drawn to the lack of legal action being taken against OpenAI. From that observation, one might conclude that these institutions are, in fact, quietly benefiting quite a bit.


While the school seems interested in the development of AI detection tools, it leaves instructors to scramble and rework their curriculum in ways that are even less accessible for themselves and their students. The solutions that arise to detect plagiarism are, quite frankly, either ineffective or anti-human, yet despite knowing this, schools push for these tools to be prioritized rather than reforming testing systems designed ages ago. Like many of the recent changes at Dalhousie, it’s hard to see how this can be for lack of funding when those funds just seem to be moved around elsewhere. What you see in the end is a trickle down the line, where the school’s minor inconveniences become the staff’s problems, and the staff’s problems become the students’ nightmares.


Unfortunately, the AI issue is not the only example of students getting the short end of the stick. This passing-off of problems can be seen in our education quality, student support systems, campus accessibility services, and just about everything we interact with, as students and faculty alike. The burdens of our system should not fall on students, and certainly not on their teachers. This is something our schools refuse to see.


If you’re skeptical, imagine universities as corporations. There are many measures of success, but on the ever-tipping scale of producing well-educated students versus staying monetarily afloat, money will win over student wellbeing every time. Rather than putting money and resources into fixing these issues, they can point the finger of poor student education at AI instead of at the non-inclusive ways they test students in the first place. The way we test has always been one of the biggest accessibility issues imaginable, and with the rapid decline of testing options at Dalhousie, disguised as a way to “combat AI”, we are moving dangerously close to a world where every student is treated like a criminal.


The AI issue is the accessibility issue. If we want a positive resolution to at least one of these problems, we need to stop relying on the same old detection and testing methods that no longer fit the situation we’re in. The way things have evolved no longer supports online assignments, but that doesn’t mean we should turn to handwritten testing either. Exams need to become a choice between multiple options, not a single one. If students could choose between, say, a written and an oral exam, it would not only benefit those who communicate their knowledge better in one format than the other, but also make it near impossible to use AI to complete at least one of those tasks.


Why don’t we use AI the way it was always intended to be used? Rather than fighting the inevitable, let it be a tool we can use to solidify and question our ideas and better our understanding of the topics presented in class. This would not only support students’ own curiosity, but also show their instructors the effort they are willing to put into exploring these topics outside of class time. And if we’re talking accessibility: options that require a bit more supervision (like an oral exam, presentation, or group assignment) might give certain students the opportunity to display their knowledge fully, rather than by chance on an exam designed to have an average of 70%.


If money is an issue, it is because we are putting it in the wrong places. Stop investing in anti-AI software when AI will soon be, if it isn’t already, part of how we interact with the world. Stop removing teaching assistants from the classroom and replacing them with machines that grade multiple-choice exams worth 100% of a student’s grade in a course. Who do these kinds of exams benefit? They don’t benefit me. They don’t benefit my friends with learning disabilities, communication differences, neurodivergence, and poor accommodation options. They don’t even benefit the usual high achievers, whose grades now have a ridiculous distribution across their courses because no two instructors use the same method. One-size-fits-all no longer fits anyone, and that realization is being made everywhere. The system is old; it’s time to listen to those who live in it every day and make the changes we deserve.


This should have been taken seriously back when it was an accommodation issue, but maybe AI is the push we need to create a better academic environment for everyone. If the classroom doesn’t have these resources, then the school needs to make room for them to be implemented. If the school doesn’t have that room, then we need to start questioning why. This is long overdue, so please, when do we start?


Written by Maria Hamlin, Published April 2, 2025
