This is not something that I really know anything about, but the possible dangers of building a machine with Artificial Intelligence were in the news again this morning.
Although he is not the first to do so, Professor Stephen Hawking has warned that this will inevitably lead to machines coming to dominate humans, and perhaps deciding to enslave or eliminate them; his warning has made plenty of headlines around the world.
I have the impression that this danger is not something that is taken seriously by many people, perhaps because we have all grown up with comic-strip cartoons and television comedies featuring lovable but bumbling robots, usually unfailingly loyal to their human masters. A quick trawl through the internet produces countless images of robots, predominantly benign and friendly-looking ones busy helping humans. Naturally, these pictures are largely produced by companies that would like us to invest in this image. Certainly more R2D2 than Terminator.
But science is proceeding at quite a rate, and it will not be long before this becomes an urgent issue.
What could possibly go wrong?
Language is inevitably a compromise; no word can completely describe something. Often, we do not even agree on what a word means – as Lewis Carroll writes in ‘Through the Looking-Glass’: ‘When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean — neither more nor less’. A popular paradox asks what happens when an irresistible force meets an immovable object, to which the simple answer is that the force is only irresistible because it has not yet been resisted, and the object is only immovable because it has not yet been moved. The glib answer that Artificial Intelligence would have ‘ethics’ built into it so that it could not challenge humans is equally meaningless.
As human beings have evolved and developed, the unquestioning belief in gods and their ethical diktats has inevitably come to be challenged. In the same way, a machine capable of learning and thought would be able to question an ethical restraint programmed into it.
And once the genie is out of the bottle, there is no putting it back. We could no more uninvent this than the aeroplane or the hydrogen bomb: once the knowledge of how to do it is out there, it will be stored and shared and eventually used.
I also find it difficult to avoid the feeling that there is a sizeable part of the scientific establishment that believes it has a right to do absolutely anything, and to take any risk, and that this is justified in the name of ‘science’ or ‘progress’, be it nanotechnology or germ-warfare research or some other such delight.
It does seem sometimes that as a species, we are hell-bent on wiping ourselves out.
There, that’s a nice big helping of doom and gloom for a Monday morning. But perhaps not: in my ignorant, non-scientific naivety, I wonder whether, as long as this amazing thinking and learning machine is just that, and only that, and not a robot that can move around and do things, all might yet be well.
Of course, the writer in me then imagines this huge brain surviving the end of the World and pondering deeply for eons before declaring ‘Let there be light!’
Maybe this has all happened several times before…