If you're unfamiliar with the "technological singularity" or "strong AI":
http://en.wikipedia.org/wiki/Technological_singularity
http://en.wikipedia.org/wiki/Strong_ai
http://io9.com/5534848/what-is-the-singularity-and-will-you-live-to-see-it
The world is in a state of transition, and the road ahead holds many unknowns. This blog is an attempt to think clearly about the future: particularly the promise and peril of new technologies, and how people will change with them. To do this, one must consider a wide range of possibilities. I'm going to start by examining some issues with one very popular vision of the future, the technological singularity.
I find the singularity concept both intriguing and frustrating. It's intriguing because it offers hope for a "nerd rapture", and since AIs are the main agents involved, humans have to do far less of the work. But the singularity is just as frustrating, because its very definition constrains the range of thought deemed relevant about the future. The whole point of the singularity is that we can't understand what happens during it, or what existence looks like afterward. We also don't know when it will occur, how long it will take, or in what way it will change the world. That shroud of unknowing seems to make coherent discussion of it pointless. But there is one central element of the singularity we can think critically about: strong artificial intelligence.
Whether it ever becomes "strong" or not, AI has a huge part to play in the future. Even though general-purpose, human-level AI has proven harder to crack than researchers expected, AI is making great strides and encroaching further every day into areas of intelligence we consider part of the human domain. AI may not look like a walking, talking person yet, but it is developing in numerous fields: playing games, recognizing faces and locations (e.g. in smart missiles), maneuvering robots, making music, predicting the stock market, forecasting the weather, and rendering King Kong's CG hair so it waves realistically in the wind.
Not all of those may seem like they take intelligence, but they all feature impressive pattern recognition and creation. Our own brains operate with the help of countless subconscious processes that help us recognize faces, walk, talk, do math, write poetry, and drive a car. It's a large collection of these "agents" that makes up our brain's processes and sub-processes, and thus our minds. Looking at any individual process of the brain may not uncover anything that looks "intelligent", but put them together and there you go.
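This "collection of agents" picture (loosely echoing Marvin Minsky's Society of Mind) can be sketched as a toy. The sketch below is a minimal illustration, not a claim about how brains actually work: each agent is a dumb, narrow handler, and only their combination starts to resemble behavior.

# Toy "society of agents": no single part is intelligent,
# but a dispatcher routing between them produces rudimentary behavior.

def greet_agent(text):
    # Knows exactly one thing: how to respond to a greeting.
    return "Hello!" if "hello" in text.lower() else None

def math_agent(text):
    # Handles only trivial "a + b" questions, nothing else.
    parts = text.replace("?", "").split("+")
    try:
        return str(sum(int(p) for p in parts))
    except ValueError:
        return None

def mind(text):
    """Dispatcher: tries each narrow agent until one responds."""
    for agent in (greet_agent, math_agent):
        response = agent(text)
        if response is not None:
            return response
    return "I don't know."

print(mind("hello there"))   # Hello!
print(mind("2 + 3?"))        # 5
print(mind("write a poem"))  # I don't know.

No individual function here looks "intelligent", which is exactly the point: whatever intelligence the whole has lives in the combination, not the parts.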
Strong AI is a key component of any singularity. The singularity is synonymous with the concept of an "intelligence explosion": a meteoric rise in the quantity and quality of intelligence. This rise is driven by strong AI improving on itself in an exponential feedback loop, far surpassing human capabilities and transforming the world through almost miraculous technological innovation. Another source of increased intelligence may be enhanced human beings, bolstered to super-normal cognitive capacity with the help of neural implants, smart drugs, and enriching information technologies such as smartphones and the Internet. An intelligence explosion may result from a combination of these two sources, though AI has the potential advantage of running on better hardware.
There are a few assumptions being made here regarding strong AI and the singularity:
1. Strong AI is possible. I wouldn't bet against it, but it might take a long time to develop.
2. The power of AI will improve rapidly along an exponential curve as it uses each level of increased intelligence to jump to the next. We don't know enough about the nature of intelligence to say it actually can increase like that. Perhaps leaps in intelligence will be difficult even for superhuman minds, and AI may improve only slowly or run into hard problems (the toy sketch after this list contrasts these scenarios).
3. It is possible for even a superintelligent being to improve technology considerably faster than humans already are. Developing technology takes intelligence, but it also requires supporting technologies and often great resources. A new processor, for example, takes many millions of dollars and large research labs to develop. While strong AI could certainly help with the man-hours involved, I find it hard to imagine it doing exponentially better than large groups of skilled humans, especially enhanced humans integrated with information technology.
4. We won't hit a wall in the exponential growth of technology. Such walls might eventually be surpassed, but they could flatten the growth rate of new technology for a long while.
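To make assumptions 2 and 4 concrete, here's a toy simulation. It's a minimal sketch with made-up numbers, not a model of real AI: the gain and ceiling parameters are pure assumptions, chosen only to contrast an unbounded feedback loop with one that runs into a wall.

# Toy model of recursive self-improvement. All parameters are
# illustrative assumptions, not predictions.

def simulate(gain, ceiling=None, steps=20):
    """Each step, the system converts a fraction ('gain') of its current
    intelligence into improvement; an optional 'ceiling' imposes
    diminishing returns as a hypothetical hard limit is approached."""
    intelligence = 1.0  # human-level baseline, in arbitrary units
    history = [intelligence]
    for _ in range(steps):
        improvement = gain * intelligence
        if ceiling is not None:
            improvement *= 1 - intelligence / ceiling  # damping near the wall
        intelligence += improvement
        history.append(intelligence)
    return history

explosion = simulate(gain=0.5)              # assumption 2: unbounded feedback
plateau = simulate(gain=0.5, ceiling=50.0)  # assumption 4: feedback with a wall
print(round(explosion[-1]), round(plateau[-1]))  # ~3325 vs. ~50

The point isn't the specific numbers; it's that the very same feedback loop produces radically different futures depending on whether a wall exists, and that is exactly what we don't know.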
Now, I'm not pessimistic (just look at my tag line), and I would be happier than most if those assumptions held true. But I do believe the future must be explored openly. So it bugs me that AI is often treated in science fiction as a kind of black box, a literal deus ex machina where a genie pops out of a box and fixes (or annihilates) everything. The advent of strong AI is too frequently the point where the discussion ends. I have seen its association with the singularity shield it from discussion.
It seems safe to say that the perspective and capacities of superhuman AI could be beyond our comprehension. Thus, many think we cannot coherently speculate about its actions or what direction it (or they) might take technology. But thinking like this only limits how we imagine the future, and that imagining is too important to constrain with intellectual insecurities. Is imagining the future a worthwhile endeavor? That's the next post.
1 comment:
The model for the word "singularity" itself is somewhat incompatible with the definition that's been taken from it. This singularity means impossible-to-predict developments in hardware, software, and integration. But an actual gravitational singularity is merely hard to predict from the current model, not impossible; Hawking's quantum-mechanical predictive work, for instance, does exist. Maybe they wanted the mathematical singularity? The way geometric growth curves toward infinity is implied by Moore's Law, and there have been human developments that became popular to the point of meaninglessness (language, rule of law, Starbucks). If this definition holds, then while the future may be hard to predict, it is not impossible. We can know the general shape of the future if not specific details.
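To make that distinction concrete: geometric growth of the Moore's Law kind is exponential, which stays finite at every finite time, whereas a true mathematical singularity blows up at a finite moment. A quick sketch, where f_0, T, C, and t_c are just illustrative constants:

    f(t) = f_0 * 2^(t/T)   ->  infinity only as t -> infinity   (exponential: finite at every finite t)
    g(t) = C / (t_c - t)   ->  infinity as t -> t_c             (hyperbolic: blows up at the finite time t_c)

So taking the word in its strict mathematical sense would require faster-than-exponential (e.g. hyperbolic) growth, not Moore's Law alone.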
We'll still hit plateaus in that development; much like how African culture is still resisting the Enlightenment, there will be impediments to the smooth operation of the growth function. Societal support for the ideas underpinning the technology will limit the resources made available to developing it. But hitting that wall doesn't mean the growth ends there, just that for a while the growth will go in new directions to pick up more public support. And then it will resume. Maybe the plan never goes smooth, but it still goes.
And why does everybody think the creation of strong AI is going to be the singularity? Of itself, that will not change anything. The hundred tiny ways that breakthrough will change how we function and direct ourselves are what we really can't see past. That it will create markets for things we do not yet want is likely to be the bulk of the paradigm shift, just as happened after the industrial revolution, after television and the mass market, and after the development of the internet.
But the basic human needs for connection, purpose, enjoyment, understanding, health and comfort aren't going anywhere. Like the quantum mechanics of human technology, they have been beneath all previous singularities, and only the increasing level at which these needs are met has changed. I think these are the ways that we can begin predicting the shape of the future.