Sunday, February 17, 2008

Kurzweil and Human Equivalence

Following on from the matter of alien minds, I notice the energetic Ray Kurzweil is now promising us 'human-level artificial intelligence' by 2029 - why not 2030, I wonder? I seem to recall this is about 16 years sooner than his last forecast. He may be right, but as I mention in the book review, it seems unlikely because, after 50 years of AI research, we have made no progress in the direction of intelligent machines. People like Kurzweil are over-impressed by computing power. Thinking that faster processor speeds and terabytes of memory will give us 'human equivalence' is like thinking Michelangelo's David is only a matter of having a big enough block of marble. Then there's the problem of the meaning of 'human equivalence'. What is it? An intelligent machine may be intelligent in ways we cannot begin to understand. There's no reason to assume that ours is the only possible kind of intelligence. In fact, on balance, I hope it's not.

12 comments:

  1. They can be as clever as they like, but what do they do when you pull out the plug?

  2. When you're not looking, they reach out a robotic arm, using standby power, and plug themselves back in.

  3. Yeah, right.
    Well don't pay the electric bill, that'll fox 'em.

  4. I suspect that the point of equality between human and machine intelligence will be achieved by 2029, but not through advances in nanotechnology or neural networks. It is simply that we are becoming dumber with each passing generation.

  5. The Canadian sci-fi writer Peter Watts has argued that AI would not be anything like us, or any biological species, since we are the product of evolution and it would not be. What we consider basic drives - like self-preservation - may not be present in any machine intelligence.

  6. Computer hardware - faster processors, etc. - isn't the problem. You can cluster slow processors into something hugely powerful; that's the basis of Google's IT, for example. Nope, the problem imho lies in software. Even something relatively simple can require very, very complicated software, and outfits like Microsoft and Apple have found that the code needed simply to run what we have today can soon balloon out of control and become unmanageable.

    So more of the same, essentially, isn't going to cut it. We'll need entirely new tools. That, of course, depends on what you mean by "intelligence", something of a tricky question since our knowledge of the brain is so limited. We're still somewhere in the Middle Ages with that. As to whether computers could ever be self-aware - whether something which wasn't there to begin with, what we tend to call mind, could arise from myriad interconnected processes - who knows? I'll bet absolutely no one. I reckon it will be a very long time before we stumble on the right tools to take us in the general direction of that, let alone make progress along the road.

  7. ours isn't the only kind of intelligence. you'd think they'd start off with something less ambitious, like a hamster. or george bush.

  8. We've got about 5 billion humans and counting. Why on earth, one might imagine, should we be interested in more artificial versions, even if some genuine archetypes are so deluded as to what it is to be one in the first place that they think this would be some kind of triumph?
    All that this is probably really about is that people with an utterly degraded notion of being human, which they imagine to be scientifically based, wish to create a simulacrum of allegedly identical nature, thus confirming themselves in the supposed truth of their Lilliputian understanding of human consciousness. The attempt of eunuchs to take revenge upon life.

  9. To put it another way:
    It's not about the elevation of the machine to the human. It's about the denigration of the human to the machine. Applied materialism.

  10. Surely computers are already more intelligent than humans, in terms of their capacity for rational computation? They're just not conscious, for precisely that reason. There was an article in the paper about some experiment with chimps (cherchez le singe) whereby numbers were randomly displayed on a screen one at a time, and the chimps then had to replicate the sequence from memory; apparently the chimps were better at it than people.

    I would suggest that the chimps' lesser consciousness allowed them to do this, because consciousness is about a sense of the whole, not the parts - the forest, not the trees - and when information is random it has no greater wholeness to it, so the chimps' brains were not hampered by a relentless attempt to integrate and systematize the information. The human brain attempts to synthesize information and is thus hampered when that information is too random to treat as a whole.

    A similar effect presumably applies to autistic savants, who can process information extremely effectively while displaying little self-awareness: they might have highly effective neural networks in specific brain modules, but lack the efficient connections between modules needed to generate whole-brain consciousness. Ordinary autistics might not have either, to any great degree.

    In so far as this is the case, making computers conscious would diminish their value - we need them to be the ultimate autistic savants.

  11. I agree. We should forget about human intelligence and just concentrate on devising computers that can do more and more of the work we don't want to do or can't do. I can't think of any good reason why we'd want machines to be more human. In fact, the less human they are the better. Otherwise, we'll have strikes and unions and time off in lieu and god knows what!

  12. As a Thufir Hawat-style Mentat i can do everything a supercomputer can do and also kill using the point of a blade rather than the edge (the mark of a true gentleman). A computer, as Michael Smith points out, can remember everything because it doesn't discriminate. Human consciousness is necessarily about forgetting, being selective, patterning.

    i suspect part of the quest for AI is an attempt to be as God is. We already in the West mainly live in a man-made environment (nothing quite as ugly as the man-made, is there?), effectively living within our own creation. When we can create life as God is supposed to have, why then we will surely be at last divine.

    Nice idea except that if it were possible, which i doubt, 'twould be in the hands not of God or even a Socrates or Buddha, but of men like George Bush and Bill Gates, or the owners of Walmart or McDonald's. The results, needless to say, would be killing machines, monsters able to commit horrors without qualms at the behest of their masters.
