AI in education – Boasts in the machine

Illustration of a robotic humanoid figure holding an oversized pencil and filling in a checklist, representing AI in education

We’re told that AI could elevate or destroy us, observes Gareth Sturdy – but the reality is more prosaic than some would have you believe…

by Gareth Sturdy

Have you heard about the school that let its Y5 students design its curriculum?

The kids wrote schemes of work and individual lesson plans for their teachers by comparing information across lots of websites, but without any deep knowledge of, or personal engagement with the subjects involved.

Okay, I’m fibbing. That didn’t happen. Y5s may well be capable of Googling and contrasting the results they find, but it would be absurd to see this as a substitute for the knowledge, informed reasoning and practical skills wielded by teachers when designing a curriculum.

And yet, this is how the artificial intelligence (AI) behind chatbots such as ChatGPT works in practice – with a mentality similar to that of a 10-year-old blankly surfing the net. So why are so many voices trying to persuade us that AI in education is the biggest thing to ever happen to schools, with the potential to fundamentally transform what they do?

Febrile climate

Earlier this year, Prime Minister Rishi Sunak launched a £100m taskforce to accelerate the adoption of AI across the UK. Education was high among said taskforce’s priorities.

At around the same time, however, some of the Silicon Valley moguls who have been instrumental in building AI platforms – including the likes of Elon Musk and Steve Wozniak – penned an open letter warning us all of the technology’s potentially calamitous impacts.

They even called for a temporary moratorium on further development.

The Oxford academic Toby Ord recently told The Spectator that around half of AI researchers harbour similar fears about human extinction stemming from the use of AI. Indeed, one of AI’s foremost pioneers, Geoffrey Hinton, went as far as quitting Google over his concerns regarding the existential risk that machine learning poses to humanity.

What are teachers to make of this febrile climate, in which we herald intelligent machines as both saviours of education and destroyers of civilisation?

The fearful response to AI seems of a piece with the apocalyptic mindset we’ve been encouraged to adopt in response to all contemporary global challenges, be they viral, climate-related, military or economic.

Using AI in education

Yet spending just a few minutes toying with ChatGPT or Google Bard should be enough to persuade even the most sceptical of how ingenious these tools are, and of the myriad uses to which they could be put in education.

There can be little doubt that AI in education is going to improve the standard of learning resources and free up valuable teacher time. Just as with any disruptive technological development, jobs could be at risk. But on balance, the future will be better with AI.

“On balance, the future will be better with AI”

That said, let’s not get carried away. AI is, at least for now, really, really dumb. John Warner, English teacher and author of Why They Can’t Write, has previously argued that we should be careful in how we talk about the data handling carried out by machine learning algorithms.

Change in consciousness

Warner makes the point that what we’re seeing from them isn’t genuine reading or writing. The so-called ‘Large Language Models’ that currently drive AI don’t actually know anything. They’ve yet to experience any change in consciousness through their learning.

Elsewhere, the musician Nick Cave has written that AI, “Can’t inhabit the true transcendent artistic experience. It has nothing to transcend! It feels such a mockery of what it is to be human.”

All AI can presently do is identify patterns it has seen before and copy them. There is no imagination at work. No new ideas are being generated; only reworkings of what’s already been. AI bots are mere plagiarists, a pastiche of intelligence. Or as the technology writer Andrew Orlowski memorably put it in The Telegraph, “ChatGPT – the parrot that has swallowed the internet and can burp it back up again.”

Simulated understanding

AI cannot impart meaning to anything. Meaning can only ever reside in a human mind. This is crucial for education, which is the creation of meaning by another name.

If platforms like ChatGPT have any use at all, it’s only because human beings have previously assigned meaning somewhere inside them.

The engineers currently fretting about how intelligent AI could become might be better off paying more attention to just how, well, artificial it still is.

These issues have been hotly debated over many years, ever since Alan Turing first proposed the ‘Turing Test’ in his 1950 paper, ‘Computing Machinery and Intelligence’.

To answer the question of whether machines could think, he hypothesised a game played between a person and an unseen machine. If the player can’t tell that their opponent isn’t human, the machine passes the test.

Strong vs weak AI

Thirty years later, in 1980, a paper titled ‘Minds, Brains and Programs’ by the philosopher John Searle boiled the question down to focus on whether a machine could ever truly understand a language – a position he called ‘Strong AI’ – or merely simulate understanding, which he dubbed ‘Weak AI’.

He concluded that machines of sufficient complexity could be devised to pass the Turing Test by manipulating symbols, just as the ELIZA project created by computer scientist and MIT professor Joseph Weizenbaum appeared to do, way back in 1966.
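The trick ELIZA pulled off can be seen in just a few lines of code. The sketch below is not Weizenbaum’s original script but a minimal, modern illustration of the same idea: a handful of pattern-matching rules (the rules and responses here are invented for the example) that rearrange the user’s own words into a reply, with no grasp of what any of those words mean.

```python
import re

# Toy ELIZA-style rules: a regex pattern paired with a response template.
# The program only shuffles the user's symbols around; it understands nothing.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

DEFAULT = "Please, go on."

def respond(utterance: str) -> str:
    """Return a canned reflection of the user's own words, ELIZA-style."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the captured phrase back, trimming trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I need a holiday"))   # Why do you need a holiday?
print(respond("I am confused"))      # How long have you been confused?
print(respond("Hello there"))        # Please, go on.
```

To a casual user the replies can feel attentive, even therapeutic – which is precisely Searle’s point: convincing symbol manipulation requires no understanding at all.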

As Searle noted, the machine wouldn’t need to understand the symbols it was manipulating in order to give an illusion of cognition sufficient to pass the Turing Test. But passing the test, he argued, is not understanding: no computer could achieve Strong AI merely by running a program.

To truly come to terms with the role of AI in education, we need to invoke Searle’s distinction between genuine understanding and the mere simulation of understanding needed to pass a test.

He suggested that the difference between them lies in intentionality – the human quality which always directs mental states towards a transcendent end.

Demoting the teachers

What end are we seeking when we educate? What is a student’s real intention when they learn? Do programs like ChatGPT produce knowledge, or a mere simulacrum of it, resulting from mindless rule-following? It’s the answers to these kinds of questions that will ultimately determine the use of AI in education.

Machines aren’t going to ‘take over’ our schools, whatever spooky stories their creators like to frighten themselves with, because they simply can’t.

If kids are using ChatGPT to cheat on their assignments, that should just tell us that we’re setting the wrong sorts of tests. Education isn’t a Turing Test. AI is weak. Machines will always be dependent on humans for any supposed ‘learning’ they achieve.

“If kids are using ChatGPT to cheat on their assignments, that should just tell us that we’re setting the wrong sorts of tests”

There is, however, one genuine risk. The more teachers come to rely on AI, the more likely it is that AI will reshape and define the meaning teachers give to their own role.

If we end up reducing education to a matter of machine learning, then it follows that teachers might begin to approach their vocation more mechanistically – almost as an automatic process of efficient information transfer perhaps best suited to a production line or call centre.

In such a milieu, the experience of becoming an educated mind, and the struggle and delight involved in that expansion of intellect, could start to seem increasingly irrelevant, rather than what they actually are: the point of the whole exercise.

The threat to education here isn’t posed by machine intelligence superseding that of teachers. It will come from teachers demoting themselves into becoming mere machines themselves. This will cheapen the ideal of learning and undervalue the meaning we assign to education itself.

Gareth Sturdy (@stickyphysics) is a former teacher now working in edtech
