Recently, a member of the VICTVS DevOps (software development) team asked an AI-powered image generator to create a picture showing what the VICTVS office would have looked like in the 1950s. The result is below.
At first glance, this is an excellent image, one that makes you immediately ask ‘how the hell did it do that?’. The colours and style are period-appropriate; the ubiquitous suits in the office, the lack of diversity, even the lamps on the desks are exactly what you would expect to see. On the surface, this seems like a miracle. Humans have evolved to the point that we have created machines that can generate, in mere seconds, art that would take a person significantly longer. The effort required to imagine a prompt such as ‘make me an image representing…’ is clearly a tiny fraction of that required to actually sit down and make that image from scratch. But the machines make it look so easy.
That is, until you look closer.
A 1950s smartphone?
A 1950s WAN / server?
A 1950s laptop?
No idea what’s happening here…
At VICTVS we like to make sure that we are as close to the cutting edge as we can be, but had we existed in the 1950s, it’s hard to believe we would have managed to be 40 years ahead of the tech game…
Understanding why these and other errors occur in generative AI systems is, of course, not simple. The volume of data being accessed (i.e. the whole of the Internet) and the lack of quality control over that data (it doesn’t take long to find total nonsense in cyberspace) mean that anyone tasked with answering a question using this data may well end up quoting, referencing, or incorporating erroneous source material. This seems to be what AI is doing: making assumptions, joining dots, and filling in the blanks with best guesses. The irony, of course, is that this is a very human thing to do, but it’s the sort of thing that we expect of small children. This is a point Mo Gawdat makes in his book Scary Smart, where he explains that current AI systems should be regarded as children finding their way through an incomprehensibly complex world while, at the same time, being asked endless questions by carbon-based life forms who like to laugh when the systems get it wrong. As the machines continue to create answers, images, results, and assessments, they will need to be corrected when they make mistakes, just like a child, so that they may continue to improve.
The point of this article is not to condemn current AI systems, but to highlight that the current generation of algorithms is in its infancy. Its training has barely begun. To imagine these systems as fully formed automata capable of exceeding human capabilities in all respects is a mistake. The companies behind these systems are of course keen to extol their virtues through slick marketing, suggesting that their latest ‘AI’ bot can act as your digital assistant (like Clippy…) to sort out your bad habits, bad admin, inefficiencies, lack of ideas, lack of creativity, and lack of time. But just because someone says that something can do something does not mean that it actually can. It is therefore important to be realistic in our expectations of current AI technology, to acknowledge and learn about its limitations, and to ensure that if we choose to use AI in assessment design or execution, those assessments are not compromised as a result.
The age of AI is here. Whether that is a good thing for humanity or not will be shown in the course of time. For now, we should all try to maintain sensible expectations of this brand-new technology and ensure that we don’t fall for marketing hyperbole or get dragged in by the allure of ‘shiny new thing’ syndrome. Current AI is good, but it is not a panacea.
For clarity, this article was NOT written using ChatGPT or any other LLM. All errors are my own.