Clickbait? Perhaps, but I have a point to make.
There is a blog I follow which regularly posts about good people. People who make a difference to their world. Kindness. They change lives. And it makes for a refreshing read in a world which often appears to be so full of shit we could be drowning in the stuff. I also now see on social media – at least on Facebook, which is the only one I follow other than Instagram – AI-generated posts about good people. I know they're AI-generated, because the signs are all there. I don't intend to list the signs, as most people are aware of them already. These AI-generated posts seem to fall into the same few categories. There is the rough biker with the heart of gold adopting a defenceless little girl. The retiree who's lost his wife and finds meaning in life by spreading love through his community. There are one or two others, but they all seem to fall into a few predictable categories. And you read these long tear-jerkers and reach the end and you go 'Ah, isn't that lovely.' Or you're meant to, anyway. But they are AI-generated, the people don't exist (although the originals may have been based on real people), and these things did not really happen. But does this matter?
I think it does, for several reasons. AI invents stuff. When this is not the intention of the user, the results are known as 'AI hallucinations'. If it can't find what it's been asked to find, it will sometimes make something up instead. Equally, it may draw data from untrustworthy sources. Then there are AI programs which are designed to make stuff up. Once we understand that, then when we read something we know is AI-generated, we don't necessarily believe it. And since we don't believe the characters or the narrative, the message it is designed to deliver is rejected. We all know that kindness is a good thing, but being told so by a computer program that has clearly fabricated the vehicle of delivery diminishes the message.
It is the exact opposite of 'Don't shoot the messenger': here the message is rejected because the messenger is flawed.
And the more we read these posts, knowing they are AI-generated but still happy to take them completely at face value, the more we help to normalise them. The more we accept AI into our lives and accept these fabrications.
So there is more than one type of AI program. Many of those that are really good at inventing stuff, and there are quite a few, are designed specifically to write books. They advertise themselves as producing books 'in minutes, not months'. A few clicks of a button and hey presto! I've written a book! I'll get back to this at some point, but are these people authors? No. They're not. They're frauds. But this brings me back to those original posts, which someone has created using an AI program similar to the book-writing programs, to deliberately invent the contents.
And so to the more important point, the point where the hallucinations, and even more importantly the deliberately fabricated material, really matter.
AI is, as we've seen, designed to invent stuff. Okay, that's a simplification, but the point is that it's designed to give the user exactly what they ask for. If someone asks it to write a piece justifying theft, or infanticide, for example (not asking whether it can be justified, but telling it to actually do so), it will do that, citing either nasty stuff it's dug up from some remote hole on the internet or, more likely, completely inventing stuff because the real justifications don't exist. And it will look reasonably believable, perhaps writing something along the lines of 'the Cornell University experiments of 1983–84 by Taylor and Whickham et al demonstrate that…' etc. etc. And the casual reader will think 'Oh, I never realised that. So perhaps there's something in it after all.' But these citations will be made up.
And to go slightly off topic for a moment, there are the illustrations. AI-generated photos are still usually recognisable as such, but they're getting much better. Ones that have been subtly manipulated are now very hard to detect. The implications should be obvious: can we now believe anything we see or are told?
This is not to suggest AI is an unmitigated evil. Its champions will point to advances in, for example, medicine and materials science, which are very real and extremely important. But the issues of misinformation and, as frequently cited, intellectual property theft, to say nothing of the potential to completely destroy careers in the literary and artistic worlds, are also very real.
So how do we fight this? I'm afraid I've no idea. The genie is out of the bottle and I see no way it's going back in again. Short of burning down the internet we are stuck with it, and over the next year or so (or less – who knows?) it's going to get harder and harder to tell truth from complete (and possibly dangerous) crap. While the programs are becoming better at presenting the genuine data they are asked to present, the ones inventing stuff are getting better at making it appear real. All we can do is be aware of this, and be cautious and critical. And perhaps we could go back to getting our facts from books which, although not infallible, are far more likely to be accurate. Publishers are still the gatekeepers there, and they tend to do a pretty good job. Research stuff properly. Rather than accepting important medical information, for example, from Joe Bloggs on Facebook, look it up on a respectable site, like the NHS (in the UK).
Maybe just stay off the internet more.
Which is probably a good idea anyway.



