Thursday, May 27, 2010

Singularity Part 3: Consciousness and Machines

I have seen numerous debates about the singularity hinge on the question of whether machines could ever become conscious. Specifically, one side takes the position that the singularity will never happen because we will never be able to create conscious machines to make it happen. That position conflates intelligence with consciousness, which I believe is an error. First, let me point out that the success of a singularity fueled by superhuman AI does not depend on whether the AI is conscious. It depends only on intelligence, and it seems very likely that a machine could have intelligence without consciousness.

Understanding and generating consciousness is a much harder problem than generating intelligence. For one thing, we have ready ways to measure intelligence; we have nothing comparable for consciousness. It's easy to imagine an anthropomorphic robot that could interact with you as an intellectual equal. Think C-3PO or Data. Such a robot would be recognizably intelligent. But is it conscious? Can non-biological matter ever have consciousness? Could you even tell if it did? No one knows. It may be that conscious awareness is an intrinsic property of sufficient intelligence, emerging from the complex patterns of information and algorithms of a self-reflective system. But again: who the hell knows?

Consciousness may well be the hardest problem in science because of its subjective nature: our entire existence is experienced through consciousness. Yet we have no working theory of how the brain generates it. Given our current ignorance, speculating about machine consciousness is pure fantasy.

I have to repeat that conscious machines are not needed for a singularity. I'm all for sentient machines, of course. I'm just pointing out that this is an immensely difficult issue, and arguing about its likelihood is pointless at the moment. Still, theoretical discussions about the moral and existential issues of conscious machines are interesting exercises, and may help us prepare for a time when we know more about this.

I should note that, conversely, having consciousness doesn't necessarily mean having intelligence, as [insert name of stupid celebrity here, possibly a Baldwin] demonstrates.

Next blog: uploading the mind.

--
Brian

1 comment:

flamingsword said...

Putting together enough different kinds of intelligence will eventually lead to consciousness in machines, hopefully on a much shorter timeline than the one evolutionary biology took.

There are people who argue pretty convincingly that dogs are very nearly conscious, and that dolphins should have human rights based on their ability to make altruistic decisions grounded in moral reasoning. With machines we may get sapience akin to that of cephalopods, since there would be no biological basis for pack behaviors. Alternatively, they could develop much more elegant protocols than we have by being spawned in an interactive network of other intelligences.

Ultimately, I see no reason why machines could not reproduce most of what we do, and they would have the capacity to discover things we can't do with our flimsy jello brains. It should look enough like consciousness to be able to tell anyone who didn't think it was conscious to go fuck themselves.