This is heady stuff. Vinge predicted that the singularity would probably be achieved by 2023 – so we are now less than a decade away from the transformation of our society into something unrecognisable.
The question, though, is whether it will really happen. There are many enthusiastic explanations of why it’s inevitable, which I will not bother to elaborate here – I’m sure that singularity proponents will not hesitate to put me right if I’m completely off base with my reservations. [7] Basically, I’m with Steven Pinker, who in 2008 said:
"There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.” [8]
Now, I don’t think I’d put it quite as dismissively as this – but then I’m also not a professional psychologist, so perhaps he knows things I don’t. Basically, my objection to the predicted singularity revolves around three points:
First, it assumes a unified definition of intelligence – this is certainly not wise.
What is intelligence? Is it raw processing power? In that case, by some definitions computers already outpace us, since they can perform so many calculations so quickly. Clearly this isn’t what singularitarians [9] mean, or at least isn’t all that they mean. So what do they mean by intelligence? From what I have read on the subject, the central idea appears to be a dualistic conception of intelligence – an assumption that “intelligence” is something separate from, and existing within the boundaries of, the thing that has it – a sort of mystical substance that makes us intelligent. This is the core of the claim: the idea that intelligent entities will emerge within the data streams of the world. For a variety of reasons, I don’t think that dualism is a viable concept when it comes to consciousness, [10] so any theory that seems to depend on dualism in this sense is suspect.
Second, it seems to assume that these theorized intelligences (granting the concept to start with) will spring fully formed into the world.
In reality, the capacity for intelligent action is one thing, but actual intelligent action appears to be another – one that requires experience. This is an issue for anything that is expected to interact with the real world, because no matter how quickly “experience” might be accumulated in a digital space, there are limits to how much this can accelerate learning about the real world, which steadfastly resists our attempts to speed things up. While artificial intelligences might well emerge in the future, it seems unlikely, given these simple physical limitations, that they will be able to achieve the kind of explosive development and evolution that is predicted.
Finally, and by no means the least of my objections, is the concept that it is in fact possible for an artificial intelligence to approximate a human being.
There have been some amazing advances in AI over the last few years, and I’m as enthusiastic as anyone regarding the potential for this field. But I think we need to be realistic. What we call intelligence and consciousness refers to what we ourselves experience. There’s really no way for us to know whether a given AI construct is really experiencing consciousness [11] or just successfully simulating the outer appearance of it. More importantly, we have to consider what results in consciousness among human beings – and again we get back to the issue of dualism.
If we assume we can strike the spark of consciousness in a machine, then in essence we are ignoring the fact that there’s (apparently) no magical consciousness organ in humans – what we call consciousness can only reasonably be seen as encompassing the whole organism, including not only the brain but the various branches of our nervous systems, and probably other dimensions like hormonal signalling and maybe even interaction with the non-human elements of our bodies. [12] While it’s easy to imagine ways in which this kind of dynamic system might be simulated in silicon, we don’t really have any idea how it works in ourselves – and there’s no clear indication of an imminent breakthrough in this area – so it’s hard to see how it’s possible to predict whether, let alone when, we will be able to achieve the same sort of effect in an artificial “mind”.
So I’m sceptical, to say the least.
Singularity, as far as it goes, is a very interesting concept for science fiction, but for the time being at least it seems to go no further than that.
But perhaps I’ll be proven wrong in 2023. [13]
###
1. At least according to Stanislaw Ulam, who reported on a conversation with von Neumann in a retrospective published in the May 1958 issue of the Bulletin of the American Mathematical Society (Vol. 64, No. 3)
2. Yes, that Vinge
3. Read it here: http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html
4. In 1993 he specified 30 years, but I suspect he didn’t intend that estimate to be taken as gospel.
5. He was thinking mainly of computer artificial intelligence, including machine enhancement of human intelligence, but one wonders – might there be other avenues that would lead to a similar singularity? Biological engineering, for example? (not in the sense of enhancing human intelligence, but in the sense of creating a runaway technology in the same vein as Vinge’s ideas regarding computing)
6. An interesting fictional exploration of what the singularity might be like is Charles Stross’s Accelerando, which is available online through his website/blog (please consider buying an official copy). I first came across this in a fragment published as a stand-alone in Asimov’s (I think), which I really liked, but my feelings on the whole book are mixed. Say what you will about it, though, it’s a stimulating read.
7. I cheerfully confess to being little more than an interested amateur – my arguments against are purely armchair musings. If you have hard, scientific arguments that contradict me I want to hear them!
8. This was from his participation in the IEEE Spectrum special report on the singularity in 2008 – I have no idea whether he still holds this view.
9. Yes, singularitarians – that is a real word.
10. Indeed, I’m not completely convinced of consciousness – but that’s another discussion.
11. I take it back, let’s sort of discuss it here: can we really detect consciousness in each other? And if not, does that call into question our ability to detect consciousness in ourselves? An investigation into the concept of the philosophical zombie can raise serious questions. See here for a bibliography if you’re interested in reading further.
12. Yes, really.
13. Or perhaps 2045, which is the other date being bandied around. How the heck do they come by these numbers?