Artificial Intelligence

This is not something that I really know anything about, but the possible dangers of building a machine with Artificial Intelligence were in the news again this morning.

Although he is not the first to issue such a warning, Professor Stephen Hawking's statement that this will inevitably lead to machines coming to dominate humans, and perhaps deciding to enslave or eliminate them, has made plenty of headlines around the world.

I have the impression that this danger is not taken seriously by many people, perhaps because we have all grown up with comic-strip cartoons and television comedies featuring lovable but bumbling robots, usually unfailingly loyal to their human masters. A quick trawl through the internet produces countless images of robots, predominantly benign and friendly-looking ones busy helping humans. Naturally, these pictures are largely produced by companies that would like us to invest in this image. Certainly, more R2-D2 than Terminator.

But science is proceeding at quite a rate, and it will not be long before this becomes an urgent issue.


What could possibly go wrong?

Language is inevitably a compromise; no word can completely describe something. Often, we do not even agree on what a word means – as Lewis Carroll writes in ‘Through the Looking-Glass’: ‘When I use a word,’ Humpty Dumpty said in rather a scornful tone, ‘it means just what I choose it to mean – neither more nor less’. Consider the popular paradox ‘what happens when an irresistible force meets an immovable object?’, to which the simple answer is that the force is only irresistible because it has not yet been resisted, and the object is only immovable because it has not yet been moved. The glib answer that Artificial Intelligence would have ‘ethics’ built into it so that it could not challenge humans is equally meaningless.

As human beings have evolved and developed, the unquestioning belief in gods and their ethical diktats has inevitably come to be challenged. In the same way, a machine capable of learning and thought would be able to question an ethical restraint programmed into it.

And once the genie is out of the bottle, there is no putting it back – any more than we could uninvent the aeroplane or the hydrogen bomb. Once the knowledge of how to do it is out there, it will be stored and shared and eventually used.

I also find it difficult to avoid the feeling that there is a sizeable part of the scientific establishment that believes it has a right to do absolutely anything, and to take any risk, and that this is justified in the name of ‘science’ or ‘progress’, be it nanotechnology or germ warfare research or some other such delight.

It does seem sometimes that as a species, we are hell-bent on wiping ourselves out.

There, that’s a nice big helping of doom and gloom for a Monday morning. But perhaps not; in my ignorant non-scientific naivety, I wonder if as long as this amazing thinking and learning machine is just that, and that only, and not a robot that can move around and do things, all might yet be well.

Of course, the writer in me then imagines this huge brain surviving the end of the World and pondering deeply for eons before declaring ‘Let there be light!’

Maybe this has all happened several times before…

39 thoughts on “Artificial Intelligence”

  1. The rise of artificial intelligence is something of a worry, although from what I have read I believe we are still many, many years from AI technology being capable of taking over the world. But I love your final thoughts at the end… it is something I often wonder myself.


  2. Have you ever considered, Mick, that human beings might be the result of an experiment in artificial intelligence conducted by a previous race? The experiment was so successful that humanity wiped out its creators and replaced them. (As you say, this has probably happened many times before.) Now humanity is returning the favour by designing its own successors.

    It’s always seemed to me that human beings are shoddily designed, the product of a sub-committee working on a small budget. We are makeshift, synthetic creatures. How else to explain hiccoughs? (Steve Jobs would have made a better fist of it.) No doubt the products of AI will be designed with better quality control. No more hiccoughs, appendectomies or limp heads on Monday morning!


    1. Certainly a few design faults, John. Who was it coined the phrase Soft Machine to describe humans? It was a SciFi writer, but the name slips my mind and I can’t be bothered to Google it now. I guess there have been lots of stories about those experiments – 2001 comes to mind, but I’m sure there are plenty more. When I get around to attempting a SciFi story, I might well have a go myself.


      1. No, the one in which he presents this same concept. One universe ends in entropy leaving behind a multi-dimensional computer that finally solves the problem of creating something from nothing and says, “Let there be light!”. I’ll try to find the title for you. And 2001 was by Arthur C. Clarke….


  3. Great piece, Mick. Of course, much of the science fiction world has either applied or rejected Asimov’s three laws of robotics, which are supposed to ensure we are never overruled by AI. I’m a ‘rejecter’. History proves that if it’s possible, it will be done. We only need one of the many extremist groups to get hold of the technology in the future and do their usual destructive worst, and we could have war declared on us by such devices.
    I touch on the dangers of AI in my novella, The Methuselah Strain, and, in common with many science fiction writers, included this threat as a way of encouraging people to think about the possibility and do something about it before the genie escapes!
    Thanks for an interesting topic.


    1. I thought of you when I wrote this piece, Stuart! Yes, the 3 laws of robotics. I couldn’t remember if it was 3 or 5 (it was a long time ago when I read Foundation and others – in my teens) but I always felt that if we were to ever make robots like humans, then the robots would be able to decide not to obey those laws.
      I do fear the worst, personally. As I said about parts of the scientific community in my piece, there just seem to be some people there who are utterly determined that they should be able to try anything they want, regardless of risk. I don’t know whether it might be because they have been cocooned from the real world for a long time, or because it is an example of power corrupting and absolute power corrupting absolutely.


  4. As an old programmer, I’m not particularly worried about AI, at least for another 20 years or so. Even then, I expect that as in genetic science and cloning, the vast majority of developers will behave ethically. A much more pressing concern would be what nefarious non-artificial programmers can and will do in the interim – you have heard of hackers, right? You’d probably have a better chance of ethical behavior from an AI than you would from some of their “real intelligence” overlords.


      1. I’m sure that would be attempted. However, I suspect anything sophisticated enough to be a true AI would also be sophisticated enough to recognize outside influences, and would be more resistant to external hacking than non-AI programs might be.

        The question is, what is true AI? The ability to pass a Turing test? Self awareness?


  5. The singularity – that’s the expression coined for when AI becomes so advanced that our condition is fundamentally altered. And as for the crap design of humans, well here’s the late great Robin Williams’ take on one element of that design

    PS if you haven’t come across Oliver look up his recent evisceration of Trump – the man is a genius even if he comes from Birmingham


  6. I share your concerns about science gone wild. It’s a double-edged sword, and we always seem to reach beyond the benefits into the cutting edge of self-destruction. I wonder how much of robotics research is funded by the militaries of the world?


  7. I think most of us are really scared that this is not the right thing to invent, but a few (a very small minority) press on towards the world’s doomsday regardless. Sad, though. As you have written, it’s always better to have machines that make work easier, not ones that dominate humans in any way. It’s a bit scary. Hope, hope, hope that all will be well. Have a great week ahead, Mick.


  8. I think the main barrier against creating artificial intelligence is probably just the complexity of the task, but who knows how long that will remain the case? It’s possible that such intelligence will be malevolent, and this is something I worry about, but I feel it is only part of the issue.

    Humans are capable of conscious thought, but for the most part, we don’t all behave toward one another in calculating, utterly mercenary ways. The threat of a jail sentence is not the main reason for this. A stronger check on our behavior is emotion. Most of us are swayed by such things as compassion, loyalty, love, shame, pity, and so on.

    If humans can create artificial consciousness, they can probably also create artificial emotions. Our thinking machines might be programmed to care about us. I don’t find this any more fantastic than imagining machines that can think in the first place.

    I worry about the morality of such a development, though. A manufactured emotion is still real to the experiencer. If artificial beings can think and feel, then they have interests. In other words, it matters to them how they are treated. But if they have interests, then they must surely also have rights and protections.

    It would be reckless and immoral to create thinking and feeling beings and then treat them like glorified toasters. After all, even dogs and cats have legal protections in most industrialized countries. These are the kinds of moral issues that may not have been thought about much outside the pages of science fiction.


    1. I certainly hadn’t thought along those lines, Bun. Much food for thought, there. The first thing that comes to mind is that the emotions programmed into the machine would have to be both subtle and complex. An intelligent machine driven by raw emotion strikes me as every bit as potentially dangerous as one driven entirely by cold logic, mainly because it would still be employing that logic, but might come up with, for example, a solution to the problem of food shortages on Earth that included wiping out all animal life that did not directly contribute to the human food chain. Since we do not yet entirely understand the effect that our own emotions have on our behaviour, things could still go horribly wrong.


      1. Yes, I think you are right about emotions. They prompt all sorts of behavior in us, good and bad. There is a lot of work at the moment on trying to create robots that can recognize and respond to emotion appropriately. I’ve even had a short conversation with one of them, a robot called Pepper.

        At this stage, the whole field is laughably primitive. Humans are nowhere near creating artificial consciousness or complex emotions. I worry that this will not always be the case, though. I completely agree that we should be thinking through the implications now, and not when these developments are just around the corner.

