Wednesday, April 29, 2009

The Singularity

As far as I can tell, when someone refers to himself as a "futurist," his job mostly involves serving as a prophet for the coming singularity. If you haven't heard already, "singularity," in this context, refers to the moment when machine intelligence surpasses that of the human brain. After this event, when computers begin to program and improve themselves, it becomes impossible to predict what comes next, since an ever-greater intelligence will be propelling technology forward. The newest Big Think episode gives a good example of this thesis from one of its most famous proponents:

[Embedded video: Big Think interview with Ray Kurzweil]

Kurzweil is convinced that by 2029 computers will be smarter than people. I can only guess how he arrived at this number, but he seems pretty confident about it. It's fun to think about, but it is obviously a controversial idea. His analysis is partly an extrapolation of Moore's law, which will eventually hit its physical limits when circuits can no longer be made smaller. Kurzweil gets around this problem by appealing to the possibility of quantum computing, but the fact that his timeline depends on someone inventing a technology that doesn't yet exist makes you wonder how he can be so confident about a year as precise as 2029. I'm sure there are better arguments against it, but most scientists don't take this sort of thing seriously enough to put the time into critiquing it.
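
To get a feel for the arithmetic behind that kind of extrapolation, here's a toy sketch in Python. The starting figure, the doubling period, and the brain-equivalence threshold are all placeholder assumptions of mine, not Kurzweil's numbers; the only point is how quickly steady doubling compounds.

    # Toy Moore's-law-style projection. All figures are placeholder
    # assumptions for illustration, not Kurzweil's estimates.
    BASE_YEAR = 2009           # starting point of the projection
    BASE_OPS_PER_SEC = 1e15    # rough stand-in for a top 2009 supercomputer
    DOUBLING_PERIOD = 2.0      # assume capacity doubles every two years
    BRAIN_OPS_PER_SEC = 1e16   # one commonly cited (and disputed) brain estimate

    def projected_ops(year):
        """Operations per second projected under steady doubling."""
        doublings = (year - BASE_YEAR) / DOUBLING_PERIOD
        return BASE_OPS_PER_SEC * 2 ** doublings

    for year in (2009, 2019, 2029):
        ratio = projected_ops(year) / BRAIN_OPS_PER_SEC
        print(f"{year}: ~{projected_ops(year):.1e} ops/sec "
              f"({ratio:,.1f}x the assumed brain figure)")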

But that aside, this idea is about as close to the "promised land" as a secular person can get (unless Arnold Schwarzenegger has something to do with it, but I don't see why superintelligent machines would necessarily become evil). Almost all difficult questions would become easy; we could just ask the machine. Economies could be coordinated, conflicts resolved, and diseases, maybe even old age itself, could be cured.

But in the end, life would be so fundamentally altered that the thought of it is a little revolting. It's impossible to know if we should be rooting for it or not.

2 comments:

  1. When I hear people talking about an impending singularity my knee-jerk reaction is to write them off as overly optimistic and naive for several reasons.

    1) We don't understand the brain and the mind well enough to know what it would take to re-create them within the next 20 years. We probably won't be able to accurately model a brain for a while, let alone create something that behaves as oddly as its organic counterpart - recall that the brain is not a collection of electronic switches but an organic entity, and there are undoubtedly unknown peculiarities to its organic-ness that we're going to run into.

    2) We don't have a good enough understanding of what intelligence is to say that an artificial brain can be "as intelligent" as a person. While complexity and the ability to do particular tasks are things we can measure, "intelligence" is an abstract notion that has yet to be grounded in something physical.

    I'm not saying that creating a human mind, or its equivalent, will never happen. I don't see any reason why it's impossible in principle, but science tends to work by continuously running into things you didn't know were there. That's the beauty of discovery!

    My prediction: we will certainly see complex and brain-like objects in our lifetime, but no singularity.

    Also to keep in mind: technology does not drive itself; people drive it. Even an artificial intelligence won't act like Skynet if we refuse to network it and retain control over it. I think we'll see a combination of technology incorporated into our own bodies (as transhumanists often predict) and artificial intelligence, but for the latter it will be more "Robot stories" than "Terminator".

  2. Obviously choosing a particular year for the singularity is bizarre, but the idea that computing power will keep accelerating at roughly the same pace doesn't seem like much of a stretch. Sure, there are physical limits to transistor size, etc., but haven't we already overcome a number of 'limits' to processor speed? What about multi-core chips, what about large cheap clusters, what about...

    Anyway, to say that this increased computing power will bring about The Singularity is dreaming and fantasy (aren't dreams of immortality the oldest ones?), but there will obviously be serious repercussions for how we live our lives and what it means to be human. A couple of days ago I was talking about this with a friend of mine (in my program) who is working on "computationally-intensive analysis" in econometrics. I don't know where increased computing power will lead us, but I find it easy to believe the discipline will be shaken to its core by the results of it (or become increasingly irrelevant).

    What does an empirical economist do when programs mining ever greater stores of data can come up with all the hypotheses and lines of correlation on their own (something like the toy sketch below, scaled up enormously)? Just stick to deciding which are causal and which are not?

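    Just to make that concrete, a brute-force "hypothesis miner" isn't much more than the sketch below: made-up variables, an arbitrary cutoff, and a loop over every pair. A real exercise would scale this up enormously and worry much more about spurious hits.

        # Toy "hypothesis mining": test every pair of made-up variables for
        # correlation and report the ones that clear an arbitrary cutoff.
        from itertools import combinations
        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        data = {
            "income": rng.normal(size=n),
            "schooling": rng.normal(size=n),
            "health": rng.normal(size=n),
        }
        # Plant one real relationship so the miner has something to find.
        data["spending"] = 0.8 * data["income"] + rng.normal(scale=0.5, size=n)

        CUTOFF = 0.3  # arbitrary threshold for "interesting"
        for a, b in combinations(data, 2):
            r = np.corrcoef(data[a], data[b])[0, 1]
            if abs(r) > CUTOFF:
                print(f"candidate hypothesis: {a} ~ {b} (r = {r:.2f})")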