New Zealand Principal Magazine

Editorial

Liz Hawes · 2025 Term 1 March Issue · Editorial

We tend to think of Artificial Intelligence as a new phenomenon, coinciding with the emergence of ChatGPT and the like, but in fact it was invented in the early 1950s.

In 1950, Alan Turing, a mathematician, computer scientist and logician, published ‘Computing Machinery and Intelligence’. This paper laid the foundations for what became known as Artificial Intelligence, and Turing was later described as ‘the man who made machines think’.

It was John McCarthy, however, who actually coined the term Artificial Intelligence, or AI, in 1955. McCarthy, a PhD graduate in mathematics from Princeton University, used the term in his proposal for a summer research workshop that brought together the world’s leading thinkers in computing, and it stuck. By 1965 he was Founding Director of the Stanford Artificial Intelligence Laboratory, researching machine intelligence, graphical interactive computing and autonomous vehicles.

In fact the 1950s and 1960s were quite prolific years for AI. This is when many computer languages were invented and the idea of ‘robots’ was being explored by the creative artists of the world through film and fiction. By the early 1960s, the first industrial robots were operating on car assembly lines, and in the 1970s the first mobile, intelligent robots and early autonomous vehicles were built. Jumping forward, who could forget the famous chess match in 1997, when ‘Deep Blue’, an IBM-developed AI programme, beat the world chess champion Garry Kasparov?

Throughout the 1980s, 1990s, and early 2000s we saw language-translation programmes, computer-generated conversation in human language, human-level reasoning, drawing programmes, the first driverless cars, chatbots, speech-recognition software and NASA landing two rovers on Mars that could navigate without human intervention. There’s nothing new about AI.

What has changed are the motivations behind advancing AI. By the 2000s, social media sites like Twitter and Facebook, and the Google search engine, were using AI to target consumers with advertising, deploying algorithms that harvest data about users’ behaviour and preferences. The CEOs of these platforms were deliberately manipulating and exploiting their users to build their own wealth to obscene levels. And they have successfully achieved that goal.

Shameless exploitation aside, current social media platforms have also led young people, in particular, to struggle to distinguish reality from unreality, truth from fiction. Such is the sophistication of editing tools and the ease of their use, and such is the pressure on young people to live up to idealised images. Further, since anybody can be a publisher on the internet and on social media, we no longer have the journalistic checks and balances to fact-check what is written. This further muddies the waters, allowing ‘fake news’ and mistruths to thrive.

It is in this social context of manipulation, doubt and striving for unrealistic perfection that ChatGPT arrives. Drawing on knowledge accumulated across the entire internet, it can source whatever information a person wants to write about and, what’s more, will write it in the format and style you prescribe, with or without tables, graphs and illustrations. So what student, for example, would not be tempted to have a night off and instruct ChatGPT to write their assignment for them?

I recently met with some colleagues of mine who work as tutors at Massey University. Initially, they were right on to ‘cheating’ students and already had software to check whether ChatGPT had been used in the writing of an assignment. But very quickly that software could no longer detect the cheating, because ChatGPT had lifted another notch. Now my tutor friends cannot be sure whether an assignment was written by an AI programme or by the student. Cheating in schools and universities is not the only problem, although the inability to distinguish cheats from non-cheats will surely diminish the value of university credentials over time, bringing a whole new set of problems. What is the meaning of a university degree when an employer has no idea of the veracity of the work completed? How will they know which applicants have real degrees and which have AI-generated ones? It will be impossible to tell.

AI programmes can also effectively corrupt the music scene. When a programme reconfigures the music of an existing artist, calls the result a new piece of music and sells it, the intellectual property and copyright of the original artist are effectively stolen. Because let’s remember, AI does not create. It draws on information already in existence, albeit an enormous quantity of it, and turns it into a different form as requested.

The same issues exist for the painters, the novelists, the poets and the inventors. But for my tutor friends, there is a much bigger and more serious problem created by this new generation of AI: the fear that, in the end, there will be no motivation or incentive for anyone to create anything new. Everything will be AI-generated from the existing bank of information. We will become stuck, as if in a time warp.

Perhaps it is time to pause and have a rethink about this generation of AI, before we forget how to.
