Monday, May 16, 2011
The short answer: PROBABLY!
The rapture hits May 21st according to billboards around Dallas, probably related to this nonsense:
http://en.wikipedia.org/wiki/2011_end_times_prediction
But what if it isn't nonsense? And what if it's only the souls of millions of Christians that disappear, leaving their bodies? Demonic forces might then take over their bodies and brains to do Satan's bidding! Be prepared around May 21st for the hordes of ravenous Christian zombies that will undoubtedly descend upon the sinners of the world. Or maybe they'll just take over their everyday lives, leaving the rest of us unaware of the changeover? Whatevs.
--
Brian
Saturday, May 14, 2011
CTHULHU IS THE INTERNET!
Holy crap guys! I'm getting married today, so this is odd timing, but I just realized that the Internet is the worldly manifestation of Cthulhu! It is slowly consuming our minds and causing our bodies to wither away from inaction, as our consciousness is sucked into its ever expanding spiral of PRON, WAREZ, AND DOOM! Check these "facts" out:
1. Cthulhu has a whole bunch of tentacles to suck out souls or something. The Internet is full of tentacles! It's made of tentacles! Bonus fact: Worse than tentacles, the "web" is radiating through our bodies all of the time thanks to cell phone networks and wifi! If you live in a major city, porn, cat videos, and snuff films penetrate your brain all day long! We are already submerged deep in a sea of Lovecraftian nightmarescapes!
2. Cthulhu will bring madness and horror to all, causing laughing fits at the horrors of the world as our minds degrade and cave in to the worst depravities imaginable. I.e. 4chan.org
3. Millions of netizens worldwide worship Cthulhu! Check this out: http://www.cracked.com/funny-1911-cthulhu/
Anyway, it's way too late to stop Cthulhu's Net, so have a nice apocalypse.
--
Brian
Tuesday, March 15, 2011
Transcending the Human Form: Robot Bodies VS Virtual Proxies
My last post about AI was perhaps overly simplistic, especially in regards to the complexity of the brain, but I had other points to make. I hope to see the emergence of strong AI in my lifetime, but it's difficult to guess what that will require, much less its consequences. One thing I am much more certain about is the expansion of humanity's capabilities through computers and cybernetics.
Our brains interface with the world through our bodies. It's an ancient interface that has served us well, but it is increasingly restrictive in our evolving high-tech world. Our bodies confine and even imprison us, especially when something goes wrong with them. By improving the body, we enhance the way we communicate, our ability to access and manipulate information, and our ability to deal with the physical world.
Cybernetics and bio-engineering will augment our natural bodies, but what about adopting new bodies entirely? Currently we have no alternatives, but that may soon change. Placing our brains inside robot bodies is one appealing way of transcending the limitations of our flesh. Who doesn't think robot bodies are cool? They would have better senses, durability, endurance, and a finer ability to manipulate the physical environment than any natural body.
But I believe that the true future of humanity is not in the physical world at all, but in the virtual. The full potential of the human brain can only truly unfold in virtual reality. VR offers greater freedom than the physical world in almost every way. Inhabiting virtual bodies, or "proxies" as I call them in Living Outside, our brains will finally be able to express themselves to their ultimate extent.
I want to compare robots with proxies to demonstrate that simulated living is superior to even the best physical existence. But first I want to clarify a few issues.
The Brain Interface
To inhabit any synthetic body with full immersion (all the senses and motor controls of the natural body) will require brain implants. These implants will either be integrated into the brain itself, such as the somatosensory and motor cortices, or connected to the brain's input/output connections: the 12 pairs of cranial nerves and the spinal cord.
The input/output option will likely be accomplished long before cyberization of the brain, and will be sufficient to allow the brain to inhabit and control a robot or virtual proxy with our full complement of senses. Cyberizing these connections will be much less invasive than fiddling with the brain itself, and significantly more straightforward. Axonal frequencies will be far easier to decode and stimulate than complex cognitive functions. Incidentally, the bandwidth of the brain's connections to the body is somewhere around 1 gigabyte/second, which is very manageable.
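If you want to see where a figure like that might come from, here's a rough back-of-envelope calculation in Python. The fiber count, firing rate, and bits-per-spike values are round numbers I'm assuming purely for illustration, not measured facts:

# Hedged estimate of the brain's input/output bandwidth.
# Every parameter below is an assumed round number, for illustration only.
total_nerve_fibers = 5e6    # assumed: cranial nerve axons (optic dominates) plus spinal cord
max_firing_rate_hz = 300    # assumed peak firing rate per axon
bits_per_spike = 5          # assumed information carried by each spike's timing

bits_per_second = total_nerve_fibers * max_firing_rate_hz * bits_per_spike
gigabytes_per_second = bits_per_second / 8 / 1e9
print(f"~{gigabytes_per_second:.1f} GB/s")  # lands on the order of 1 gigabyte/second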
However it's done, the signals sent from and to a robot or virtual body will be electronic in nature, translated from the sensors of the robot, or the simulated senses of the proxy, into nerve impulses which the brain can interpret, and vice versa. This means that if your brain is hooked up to a robot, there is no reason why you can't route it to a proxy instead. Conversely, if your brain can be hooked up to a virtual body, you should be able to telepresence into a robot.
I want to note that telepresence into a nearby robot should work about as well as actually having your brain inside it. This is because neural signals move extremely slowly compared to the speed of electronics, so the brain shouldn't notice the distance. If all you want to do is run around in a robot, there should be no good reason to remove your brain from your birth body to do so, and you should be able to effectively "become" a robot by wirelessly redirecting the input/output pathways of your nervous system.
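To put some rough numbers on that claim, here's a quick sanity check in Python. The conduction velocity, path length, and network distance are assumed round figures, but they show why a nearby robot shouldn't feel laggier than your own legs:

# Compare the lag built into your own nerves to the lag of telepresencing
# into a nearby robot. All values are assumed round numbers.
nerve_conduction_m_per_s = 100.0   # roughly, for fast myelinated axons
brain_to_foot_m = 1.0              # signal path inside the body, roughly
natural_latency_ms = brain_to_foot_m / nerve_conduction_m_per_s * 1000   # ~10 ms

signal_speed_m_per_s = 2e8         # light in optical fiber, about two-thirds of c
robot_distance_m = 100_000         # a robot 100 km away
telepresence_latency_ms = robot_distance_m / signal_speed_m_per_s * 1000  # ~0.5 ms

print(natural_latency_ms, telepresence_latency_ms)
# The trip to a robot across town adds far less delay than your own
# nervous system already imposes between brain and foot.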
The Brain Vat Option
Brain vats are another option for handling a removed brain. They have negative connotations, due to associations with mad scientists and existential thought experiments. However, I think that brain vats are a more likely scenario than putting brains inside moving robot bodies. Stationary containers designed to protect and sustain cyberized brains are safer, more economical, and less technically challenging than robot bodies. They're also better in case something goes wrong. Imagine how easy brain surgery would be on a brain in a vat.
Mobile robot bodies with onboard brains will require dependable life support systems and well protected brain compartments. Facilities housing brains would benefit from economies of scale, possibly holding hundreds or thousands of brains each. And you can always telepresence into a robot from a brain vat if you really want to run around in one. You can take bigger risks with such a robot than if it had your brain inside it.
Those things established, I will now compare proxy and robot bodies to show that proxies are superior in most respects:
Cost: Advantage Proxies
For either option, there would be an initial cost of implanting the neural interface. Beyond this, a robot which could physically sustain a brain would obviously be extremely expensive, though its costs would go down with time. Robots also require parts, maintenance, upgrades, repairs, electricity, and storage space. Acquiring additional robot bodies will bring their own costs. I have a hard time imagining robots like this becoming cheap, but who knows?
Proxies, on the other hand, would be cheap, if not free, beyond the initial cost of neural implants. You could potentially acquire thousands of different proxies, and modify them however you want, for little or no additional cost. Virtual proxies and environments will eventually require negligible electricity costs.
Immersive virtual reality requires much less technological advancement than robots, and will occur sooner. Inhabitable virtual proxies are much more likely to be developed, period. If you can hook a brain up to a robot, virtual reality is a certainty. But the advent of virtual reality says much less about the feasibility of inhabitable robots. Beyond the brain implants needed for full immersion, proxies require only realistic virtual worlds, which the movie and game industries are already pursuing intensely, and simulated bodies to explore them. I don't know how likely a brainbot is in my lifetime, but I'm fairly certain I'll see full immersion virtual reality if I live another 60 years.
Convenience: Advantage Proxies
Using proxies, you could teleport instantly to any physical place that is virtualized. That is, any place that has sufficient sensors to allow it to be simulated for the purposes of a virtual proxy. Robots could go to any space they could travel to, including places with no sensors. However, they can't teleport, and so it will take time and energy to move them from one place to another. I can't underscore enough the convenience and power of being able to instantly appear in countless locations all over the world.
Proxies never have to be cleaned, recharged, or fine tuned. If something goes wrong with them, it's easy to restore or replace them.
Flight: Advantage Proxies
Proxies could fly like nothing else! The experience of flying as a proxy will eventually become realistic, and could be enhanced from there. Depending on the rules of their virtual environment, proxies could hover or fly anytime they wanted. They could rocket to 700 mph, feeling the extreme acceleration and air pressure, and stop on a dime. They could zip through the sky like Superman. They could dart around a crowded room in a blur without worrying about hurting anyone.
Flying robots won't be nearly as free. They'll be expensive and require lots of energy. Even with a jetpack, robots will never be able to maneuver and accelerate like a proxy, and so the experience they offer will be less thrilling. The only extra thrill from putting your brain inside a flying robot would be from the danger to your own life. I believe that a perfectly simulated flying proxy experience would hold a similar sense of danger for proxy users. A realistic simulation should trick a user's brain even better than a roller coaster does, even when they know perfectly well they're in no danger.
In any case, there would be real danger in flying in a robot body, as a physical crash landing would at least give your brain a good jolt, even with good brain suspension. Good luck if your life support system gets damaged and you're stuck hanging off the side of a building. Putting my brain in a flying robot is not my idea of a good time, and telepresencing into a flying robot would be a waste of fuel. The sensations of flight produced by the sensors of a robot could be copied and simulated virtually. It would eventually be an identical experience to the user, so why not just use a proxy? Also, flying robots are dangerous to everyone around and underneath them, and their flight would have to be strictly controlled, especially in populated areas.
Interacting with physical environments: Mixed, Robots Overall
Robot bodies will have some advantages when interacting with a physical environment. Of course, first they have to get to the location, which again, costs energy and time. A user could telepresence into a robot at the desired location, though they will have to either own the robot or make some arrangement to borrow one. I should note that there are already an increasing number of robots that are controlled remotely to accomplish specific tasks, in labs, construction, surgery, chatting, and many other activities.
Robots can manipulate a physical environment, something a proxy couldn't do by itself. A robot can pick up a physical object and look underneath it. Proxies could interact with virtualized objects and environments, but while this experience could be as real to the proxy user as if they were really there, their actions would make no physical change without assistance from people or machines present at that location.
A robot's sensors would allow its user greater control in examining an environment. A physical space with sufficient sensors could be simulated to create a nearly identical experience for a proxy user, though there could be blindspots. A robot would be more consistent in the experience it delivered, but it would potentially have more limitations. A well constructed virtualized environment would allow a proxy to explore and manipulate objects much more easily than a robot could. Proxies could check out something on a high shelf, sit on a chandelier and tinkle its crystals, shrink down to hide under a couch, or examine locked rooms.
To mitigate some of their disadvantages, proxy users might remotely control special surveillance bots designed to move around a physical environment to better virtualize it. The "bugs" in Living Outside, Episode 3 are examples of these.
Robots could possibly eat, drink, and smoke if they had synthetic digestive tracts and lungs, systems a robot would seemingly need anyway to keep a brain alive. This possibility is interesting, but I have no idea when such tech might be developed and become common. Again, brain vats seem more likely.
The consumption of physical goodies is something that a proxy user could only do on their own end, where their brain was located. Proxy users could not drink the liquor in their friend's house, for example. They could have a perfect simulated experience of swallowing it, but it wouldn't get them drunk. Drugs similarly would have to be imbibed in the proxy user's physical space, and friends couldn't share them.
Interacting with physical people: Mixed, Robots Overall
Here's another area where robots have advantages, though not as many as you might think at first. People with sufficient brain implants could simulate the experience of a virtual person's presence. They could feel a proxy's hug, have sex with them, and otherwise interact with them as if they were really there. It would, in essence, be the interaction of two proxies. The extent of this interaction would be greater than that possible with a robot, because of the greater flexibility allowed by the virtual.
People with non-implant augmented reality technologies could interact with a proxy to a degree satisfactory for most purposes. They could see proxies with bionic contacts or feel a hug from a proxy through haptic clothing. Those without personal augmented reality would have a harder time perceiving a proxy, and their interaction would be limited. They could see and talk to them through a screen, but they wouldn't be able to touch them. For the unaugmented, interaction with robots would be superior. Though I wonder how many unaugmented people there will be by this time frame.
Intimate physical interaction with an unimplanted person would require robots. Proxies couldn't give a physical massage, or have physical sex. Telepresence into a robot that was relatively nearby would confer these same advantages without the need to remove your brain from your organic body, although lag would become an issue over longer distances. Lag could also be an issue for proxies.
Assorted things: Advantage Proxies
So many things that are completely trivial to implement for a virtual body would be quite difficult, expensive, or prohibitively impractical for a robot body.
Even if a robot body could fly, could it hover safely through a crowd? Could it stand on the ceiling and walk around upside down? Could it grow giant, or shrink to doll size? Change form and appearance at will? Turn completely invisible or incorporeal? Phase through walls? Taste with its knees? Blow up? What about turning into a tentacle monster, or copying someone's appearance? Could it teleport to China on a whim?
For a proxy, upgrades and adding components are a simple matter of a software update or tinkering with settings. Upgrading robots will not be so easy.
Interacting with virtual environments: Complete Advantage Proxies
A robot can't go into virtual environments, except by proxy. And virtual environments are seriously where the future is at. Their scope, quality, and diversity will far exceed anything found in the physical world. They will be more beautiful, more engaging, and far more interactive than anything we could build here, at very little cost. I've been exploring these possibilities while writing Living Outside, and they are pretty exciting.
Many people say, "Where is virtual reality? They've been promising it for decades!" I have two responses to that.
First, we already have VR! Look at cutting edge games like Crysis 2. That's virtual reality. The player just isn't immersed in its environment yet. True, it's not as good as people want. Computer technology isn't yet capable of delivering an immersive VR experience. That will require much more intensive processing power to simulate a realistic environment for all the bodily senses. We are only now approaching photorealistic graphics and physics engines of the required sophistication. We've also started working on the tactile elements of virtual objects, such as their texture and weight. We are quickly approaching a time when the average person's computer will be able to simulate realistic, fully immersive worlds, but the hardware isn't there yet.
Second, it will take some effort to create the interface that will completely immerse users in the amazing simulated realities we are creating. Decent audiovisual VR gear is around the corner. Soon, VR capable glasses that also provide untethered augmented reality will have enough utility for people to start paying attention to virtual reality again. Eventually, they will be small, lightweight, and stylish. But full immersion VR, featuring senses like touch, acceleration, taste, and smell, will require brain implants to be fully convincing. That's much further away, but it's being worked on. The applications of full immersion simulated reality are just too incredible for the necessary tech not to be developed, if it is plausible at all. Just think of the immense market forces that will drive the development of realistic virtual sex, or convincing telepresence.
Conclusion: We All Win (hopefully)!
As I said at the beginning, robots are awesome. I like to imagine that people will get a robot body to encase and protect their brains while they frolic in the liberated expanse of virtual reality. They would be useful for moving your brain in case of emergencies, if nothing else. That sounds like an ideal setup to me. But once people get used to proxies, they're not going to be very interested in robot bodies, except as infrastructure. They might take over one remotely every now and then for the novelty, but the experience will be inferior overall, and I don't think people are going to want to risk their brains by moving them around when they don't have to. Telepresence into robots for specific functions will undoubtedly be common, but for most purposes, like socializing, proxies win hands down.
Our current bodies are biological proxies. They are the only tools our brains have, at the moment, to interact with the world. Soon, we will enhance, and even replace them, with the help of technology. By replace, I don't necessarily mean remove our brains from them. We can liberate our brains from our flesh by rerouting our nervous system so that our brains can effectively inhabit other proxies, virtual and otherwise, by telepresence. And through those new forms, we will connect with other human beings like never before, in both quantity and quality.
People should stop saying, "I want my robot body!" and start saying "I want my virtual body!"
I try to envision virtual reality, virtualized environments, and proxies in Living Outside, so if you're interested in this stuff check it out. Episode 3 is particularly relevant to this discussion.
--
Brian
Thursday, March 3, 2011
Some Thoughts On Artificial Intelligence
I am only writing very limited artificial intelligence into my hard sci-fi story Living Outside, since I have chosen to focus on how other technologies could transform our lives. I am wary about making predictions regarding strong AI and its implications, but I had some thoughts recently about its development and I thought I would post about it so my brain would shut up.
The development of strong AI will require considerable processing power and data storage, as well as economic pressures to spur its funding.
I'm not going into data storage, since that's going to be less of a problem. So let's start with the needed processing power. The computational speed of the human brain is often given an upper limit of 10 petaflops (10 quadrillion operations a second), so that's what I'll use. Why is the computational power of the brain important? Because while the entire 10 petaflops may not be necessary to produce artificial intelligence, our brains show that it is sufficient to support intelligence. For comparison, Tianhe-I, the world's current fastest supercomputer, has attained 2.5 petaflops, already within a factor of four of that upper limit.
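For the curious, here's one common way such an estimate gets made, sketched in Python. Every number is an assumed round figure, and real estimates vary by orders of magnitude:

# One back-of-envelope estimate of the brain's processing rate.
# All values are assumed round numbers; this is not a measurement.
synapses = 1e14             # ~100 billion neurons times ~1,000 synapses each
avg_firing_rate_hz = 100    # assumed average signaling rate
ops_per_synaptic_event = 1  # treat each synaptic event as one "operation"

ops_per_second = synapses * avg_firing_rate_hz * ops_per_synaptic_event
print(f"{ops_per_second / 1e15:.0f} petaflops")  # -> 10 petaflops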
Of course, the brain is analog as well as digital, and is massively parallel, so computer architecture might have to change a great deal before it can emulate it in any meaningful way. I've read that memristors behave like neurons in many ways, so they might be able to more efficiently simulate neural systems if their potential pans out. Or we might try a different approach, such as evolutionary algorithms using more traditional systems. But let's go with 10 petaflops as a signpost for now.
Currently, only the most powerful supercomputers are in the petaflop range, and they're not likely to be used for AI research. The Tianhe-I supercomputer is currently involved in oil exploration and aircraft simulation. Other common uses for supercomputers are predicting weather patterns, simulating proteins, and researching missile technology. In other words, things with commercial and military applications.
Wouldn't the development of strong AI be lucrative? Maybe, if it could be developed using current hardware. But renting time on supercomputers is expensive, and research into general AI is not that well funded, especially when compared to something like drug development. Research into general AI isn't certain to turn a profit or yield practical applications; in that respect, it's closer to pure science research.
Incidentally, Intel is building a $5 billion factory to manufacture chips with a 14nm transistor process. It's supposed to be ready in 2013. Developing and manufacturing that one process will cost about as much as the Large Hadron Collider.
AI research should take off as the necessary hardware becomes affordable and artificial intelligence is seen to hold commercially viable applications. This may happen relatively soon, if processing power continues along the exponential growth curve it's been following since before the 1960s. Computational power per dollar has, on average, doubled every 1.5 years. Sometime in the next decade, we will probably reach the end of Moore's Law as it's commonly stated: the doubling of silicon transistors on an integrated circuit every 18 months or so. But replacements for the traditional silicon architecture are being heavily researched. Developments in nanotubes, graphene, molybdenite, memristors, multiple cores, various optimizations, and 3D architecture should help keep the party going for some time.
The exponential growth of computer technology has yielded amazing results. Right now, you can buy a Radeon HD 5970 graphics card for $700. It can reach speeds of 0.928 teraflops, which is about a trillion (double precision) operations a second. A trillion calculations a second is crazy, but to match the goal of 10 petaflops would require more than 10,000 of these cards, at a cost of over $7 million for the cards alone. Many supercomputers, like Tianhe-I, actually use arrays of thousands of graphics cards to perform calculations, so using a graphics card as an example is apt.
If current exponential trends continue, a single graphics card 20 years from now would perform in the neighborhood of 10 petaflops. That's because over the course of 20 years, doubling every 1.5 years, the computational power of processors would double roughly 13 times, a multiplication of about 10,000x (2^13.3).
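Here's the arithmetic behind those two paragraphs in Python, using the card's price and speed as quoted above, so you can check my math or plug in your own numbers:

# Today's card versus the 10 petaflop target, plus a naive 20-year projection.
card_flops = 0.928e12   # Radeon HD 5970, double precision, as quoted above
card_price = 700        # dollars
target_flops = 10e15    # 10 petaflops

cards_needed = target_flops / card_flops
print(round(cards_needed), round(cards_needed * card_price))  # ~10,776 cards, ~$7.5 million

doublings = 20 / 1.5                                 # ~13.3 doublings in 20 years
future_card_flops = card_flops * 2 ** doublings
print(f"{future_card_flops / 1e15:.1f} petaflops")   # a single card lands right around 10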
Will the growth of processor power continue at the current rate for 20 years? Who knows? Can you even make a graphics card with petaflop performance? Theoretically yes, but we'll just have to wait and see what the practical and economic limitations are. In any case, I'm sure we will see impressive gains in computer power over the next two decades that should make strong AI much more viable.
But for now, grant that the average person in the year 2030 (or whenever it may happen) has a petaflop computer. What would an individual need that sort of power for? Such a computer could simulate completely realistic, full immersion virtual environments. What else could you need from a computer? How about an artificial best friend?
20 years from now, if we have anywhere near that level of hardware infrastructure, AI research should have taken off. This will result both from the affordability of the hardware and from the potential commercial market for AI. If researchers were to develop and run a human level AI on an expensive supercomputer today, it might turn out to have incredible commercial applications. It might go nowhere productive. Or it might result in an above average, billion dollar artificial lab assistant. It's a risky project. But when researchers have relatively cheap access to the processing power and storage capacity they need, and when there is a huge demand for AI because the average consumer has the hardware to run some version of it, I'm betting we'll see considerable money and research going into it.
People say that strong AI is hard, and they're right. But I think developing and distributing the hardware that AI will run on has got to be more difficult overall. I believe that there will come a time when the average person has a computer capable of running some kind of general artificial intelligence. Like today, much of that computer's potential will be unused. Most people only use some fraction of their computer's power, possibly even if they're running virtual worlds on it. Untapped computer resources create a vacuum that could be filled by AI. I think the largest part of this story will be seen by history as the development of the hardware infrastructure. Once the hardware is in place, the software, however difficult it is to develop, can be distributed worldwide overnight to millions of consumers wanting their very own AI.
There has been a lot of progress toward strong AI already. The hardware groundwork is being laid, not to mention the narrow but amazing AI functions being developed such as Google's search engine and Watson's understanding of natural language. Like the human mind, an artificial mind will be composed of numerous such subprocesses. And then there's robotics. The body, processing hardware, and subalgorithms of AI are already being built. Putting them all together will be hard, but every part of the equation matters and is being worked on.
Note that I never mentioned consciousness. Intelligence and consciousness are not the same thing, and it is unclear right now whether machines can be conscious. I strongly suspect that consciousness is a much harder problem than intelligence. One day there might be a machine with superhuman intelligence and not a shred of consciousness.
Or, you know, I could be way off about all this.
Finally, what uses will AI have? I'll make some guesses for fun. Research and engineering are two obvious uses. Companionship, perhaps. Powering virtual lovebots? Education: "Hey computer, do my taxes and then teach me to play Go." Also, helping you organize your life. And ultimately I could see a companion AI becoming your intelligent interface with the world, like the agents in Living Outside, but more involved.
--
Brian