Thursday, May 27, 2010

Singularity Part 3: Consciousness and Machines

I have seen numerous debates about the singularity hinge on the question of whether machines could ever become conscious. Specifically, one side takes the position that the singularity will never happen because we will never be able to create conscious machines to make it happen. That position conflates intelligence and consciousness, which I believe is a mistake. First, let me point out that the success of a singularity, fueled by superhuman AI, does not depend on whether the AI is conscious. It depends only on intelligence, and it seems very likely that a machine could have intelligence without consciousness.

Understanding and generating consciousness is a much harder problem than generating intelligence. For one thing, we have ready ways to measure intelligence. It's easy to imagine an android that could interact with you as an intellectual equal. Think C-3PO or Data. Such a robot would be recognizably intelligent. But is it conscious? Can non-biological matter ever have consciousness? Could you even tell if it did? No one knows. It may be that conscious awareness is an intrinsic property of sufficient intelligence, emerging from the complex pattern of information and algorithms of a self-reflective system. But again: who the hell knows?

Consciousness may well be the hardest problem in science because of its subjective nature. Our existence itself is consciousness. Yet we have no working theory of how the brain generates it. Given our current ignorance, speculating about machine consciousness is pure fantasy.

I have to repeat that conscious machines are not needed for a singularity. I'm all for sentient machines, of course. I'm just pointing out that this is an immensely difficult issue, and arguing about its likelihood is pointless at the moment. Still, theoretical discussions about the moral and existential issues of conscious machines are interesting exercises, and may help us prepare for a time when we know more about this.

I should note that, conversely, having consciousness doesn't necessarily mean having intelligence, as [insert name of stupid celebrity here, possibly a Baldwin] demonstrates.

Next blog: uploading the mind.

--
Brian

Singularity Part 2: Imagineering, or Where's my Jetpack?

First, Disney didn't make up the word imagineering, so I can use it how I want.

About my first post, I want to say that not every singularitarian (what a word!) thinks the same way about the future or lets the singularity cloud their vision. But some people do use the concept to cast doubt on whether predictions about the future can ever be realistic or meaningful, and I have seen it shut down futurist discussions.

Imagining the future is sometimes dismissed as a futile endeavor. A post-singularity world might be so different from our own that conceiving of it is no more possible than a caveman coming up with the idea of Facebook. But that is not a productive assumption. And even if the far future is unimaginable, we can still make useful predictions a good deal into the future about where technology and society are headed based on current trends. A singularity event doesn't necessarily change where any trend is heading; it may only accelerate it on its way. Superhuman AI still has to work within the laws of physics, after all. So, for the sake of envisioning the future, I think strong AI should be seen as an accelerant on the road toward transformation, not a roadblock to our imagination.

Meaningful attempts at futurism require critical thinking and healthy skepticism grounded in contemporary knowledge of science and technology. Also required is a grasp of humanity's driving forces and the directions they're taking us. Something can't just sound cool, like flying cars or jetpacks; it has to make sense in physical and social terms. The cliché question in discussions about the future is why we don't have those things yet. But we do have them! They're just expensive and dangerous, which is hardly a surprise. Engineers eighty years ago pointed out how impractical those technologies were. The energy required to move a car through the air is far greater than that needed to roll it along the ground. And who would you trust to fly one? As for jetpacks? Come on! Was anyone ever serious about those? If Boba Fett's jetpack accidentally killed him, what hope do the rest of us have with one strapped to our backs? (He's dead, get over it.)

On the other end of the "common sense folly" of making predictions, no one really foresaw what the Internet would become. Or just how the pill would change society and liberate women. No one on record, that I'm aware of, even foresaw personal automobiles or their impact. Futurism is not an exact science. But it is fun, and educated speculation can help us prepare for the future. Things may not happen exactly the way we foresee, but it's better than nothing. Just because the future is fuzzy doesn't mean you have to head there blindfolded. And we have never had a clearer view of how things are going to turn out. This is no time to retreat into "the future is unknowable."

So why risk looking dumb in 30 years' time when you could just keep your mouth shut? Because predictions are necessary to guide our path and foresee pitfalls. Here's an example: automation. That's something we desperately need to be examining and predicting. I've been thinking about automation for a while and will try to write about it more extensively soon. But the gist of it is this:

Automation will take more and more jobs as robots and other automating technologies improve. Every year robots get better and cheaper, with capabilities now expanding into diverse areas of work: agriculture, docking, manufacturing, warehousing, construction, flying planes, programming, and more. Companies love to replace human workers with machines, and it makes perfect economic sense for them to do so. Automation will take menial labor jobs from millions of people all over the world in the decades to come. So what do those millions of unemployed, undereducated people with limited skill sets do? Where will new opportunities for them come from? How will so many people adjust to fill positions that will likely require education, creativity, and skills that can't be replicated by increasingly sophisticated automation?

It's not a hopeless situation, but it's not being dealt with openly in America. Instead, anxiety over automation generates popular nightmares: Skynet of the Terminator series, the Borg of Star Trek, the Cylons of Battlestar Galactica, that dreadful Surrogates movie, the machines from The Matrix, and dozens of dystopian mechanized futures like Brazil. These are not generally constructive criticisms of automation. The message gleaned from most of them is that mechanization will rob us of our humanity or try to destroy us, leaving us no choice but to nip it in the bud right now. But widespread automation will continue; it is simply too cost-effective and powerful to stop. We need to be thinking openly and critically about how automation will play out, not just indulging our fears with childish apocalyptic scenarios.

Next post: what's there to say about conscious machines.

--
Brian

Thursday, May 20, 2010

Singularity Part 1: Strong AI as Deus ex Machina

If you're unfamiliar with the "technological singularity" or "strong AI":
http://en.wikipedia.org/wiki/Technological_singularity
http://en.wikipedia.org/wiki/Strong_ai
http://io9.com/5534848/what-is-the-singularity-and-will-you-live-to-see-it


The world is in a state of transition and the road ahead holds many unknowns. This blog is an attempt to think clearly about the future—particularly, the promise and peril of new technologies and how people will change with them. To do this, one must consider a wide range of possibilities. I'm going to start by examining some issues regarding one very popular vision of the future, the technological singularity.

I find the singularity concept both intriguing and frustrating. It's intriguing because it gives hope for a "nerd rapture", and since AIs are the main agents involved, humans have to do far less work. But the singularity is just as frustrating, because its very definition constrains the range of thought deemed relevant about the future. The whole point of the singularity is that we can't understand what happens during it, or what existence looks like after it. We also don't know when it will occur, how long it will take to change the world, or in what way. The singularity's shroud of unknowing seems to make talking coherently about it futile. But there is one central element of the singularity we can think critically about: strong artificial intelligence.

Whether it ever becomes "strong" or not, AI has a huge part to play in the future. Even though general-purpose, human-level AI has proven harder to crack than researchers expected, AI is making great strides and encroaching further every day into areas of intelligence we consider part of the human domain. AI may not look like a walking, talking person yet, but it is developing in numerous fields: game playing, pattern recognition of faces and locations (i.e. smart missiles), maneuvering robots, making music, predicting the stock market, forecasting the weather, and creating King Kong's CG hair and waving it realistically in the wind.

Not all of those may seem like they take intelligence, but they all feature impressive pattern recognition and creation. Our own brains operate with the help of countless subconscious processes that help us recognize faces, walk, talk, do math, write poetry, and drive a car. It's a large collection of these "agents" that makes up our brain's processes and sub-processes, and thus our minds. Looking at any individual process of the brain may not uncover something that looks "intelligent", but put them together and there you go.

Strong AI is a key component for the occurrence of a singularity. The singularity is synonymous with the concept of an "intelligence explosion", a meteoric rise in the quantity and quality of intelligence. This rise comes from strong AI improving itself in an exponential feedback loop, far surpassing human capabilities and transforming the world through almost miraculous technological innovation. Another source of increased intelligence may be enhanced human beings, bolstered to super-normal cognitive capacity with the help of neural implants, smart drugs, and enriching information technologies such as smartphones and the Internet. An intelligence explosion may result from a combination of these two sources, though AI potentially has the better hardware.

There are a few assumptions being made in regards to strong AI and the singularity:
1. Strong AI is possible. I wouldn't bet against it, but it might take a long time to develop.
2. The power of AI will improve rapidly on an exponential curve as it uses each level of increased intelligence to jump to another level. We don't know enough about the nature of intelligence to say that it actually can increase like that. Perhaps leaps in intelligence will be difficult even for superhuman minds, and AI may only improve slowly or encounter difficult problems.
3. It is possible for even a superintelligent being to improve technology considerably faster than humans already are. The development of technology takes intelligence, but it also requires supporting technologies and often great resources. The development of a new processor, for example, requires many millions of dollars and large research labs. While strong AI could certainly help with the man-hours involved, I find it hard to imagine it doing exponentially better than large groups of skilled humans, especially enhanced humans integrated with information technology.
4. We won't hit a wall in the exponential growth of technology. Such walls might eventually be surpassed, but they could flatten the growth rate of new technology along the way.
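To make assumption 2 concrete, here's a toy model of recursive self-improvement. Everything in it is invented for illustration ("intelligence" as a single number, improvement as a simple function of the current level), but it shows how much hinges on whether gains compound or diminish:

```python
# Toy model of assumption 2: does recursive self-improvement explode or fizzle?
# The quantities and growth rules here are made up purely for illustration.

def run(improve, intelligence=1.0, generations=30):
    """Iterate self-improvement and return the whole trajectory."""
    history = [intelligence]
    for _ in range(generations):
        intelligence += improve(intelligence)
        history.append(intelligence)
    return history

# Optimistic assumption: each gain is proportional to current intelligence,
# so growth compounds exponentially (the "intelligence explosion").
explosion = run(lambda i: 0.2 * i)

# Pessimistic assumption: each new level is harder to reach than the last,
# so gains shrink and the curve flattens out instead of exploding.
plateau = run(lambda i: 1.0 / i)

print(f"compounding returns after 30 generations: {explosion[-1]:.1f}")
print(f"diminishing returns after 30 generations: {plateau[-1]:.1f}")
```

Nothing about the real world is settled by a toy like this, of course. The point is that "exponential feedback loop" is itself an assumption about the shape of the improvement function, and nobody yet knows what that shape is.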

Now, I'm not a pessimist (just look at my tag line), and I would be happier than most if those assumptions proved true. But I do believe the future must be explored openly. So it bugs me that AI is often treated in science fiction as a kind of black box, a literal deus ex machina where a genie pops out and fixes (or annihilates) everything. The advent of strong AI is too frequently the point where the discussion ends. I have seen its association with the singularity shield it from discussion.

It seems safe to say that the perspective and capacities of superhuman AI could be beyond our comprehension. Thus, many think we cannot coherently speculate about its actions or what direction it (or they) might take technology. But thinking like this only limits how we imagine the future, and that imagining is too important to constrain with intellectual insecurities. Is imagining the future a worthwhile endeavor? That's the next post.