I'm only writing very limited artificial intelligence into my hard sci-fi story Living Outside, since I've chosen to focus on how other technologies could transform our lives. I'm wary of making predictions about strong AI and its implications, but I had some thoughts recently about its development, and I figured I'd post them so my brain would shut up.
The development of strong AI will require considerable processing power and data storage, as well as economic pressures to spur its funding.
I'm not going into data storage, since that's going to be less of a problem. So let's start with the needed processing power. The computational speed of the human brain is often estimated at an upper limit of 10 petaflops (10 quadrillion operations per second), so that's the figure I'll use. Why does the brain's computational power matter? Because while the full 10 petaflops may not be necessary to produce artificial intelligence, our brains demonstrate that it is sufficient to support intelligence. For comparison, Tianhe-I, currently the world's fastest supercomputer, has attained 2.5 petaflops, already a quarter of the brain's estimated ceiling.
Of course, the brain is analog as well as digital, and massively parallel, so computer architecture might have to change a great deal before it can emulate the brain in any meaningful way. I've read that memristors behave like neurons in many ways, so they might be able to simulate neural systems more efficiently if their potential pans out. Or we might try a different approach, such as evolutionary algorithms running on more traditional systems (a toy sketch follows below). But let's go with 10 petaflops as a signpost for now.
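To make "evolutionary algorithms" a little more concrete, here's a minimal, purely illustrative sketch in Python: score a population of candidate solutions, keep the fittest, and fill the next generation with mutated copies. The fitness function and every parameter here are invented for the example; evolving anything resembling intelligence would be astronomically harder than evolving a short vector of numbers.

```python
import random

# Toy evolutionary algorithm: evolve a vector of numbers toward a target.
# The target, population size, and mutation rate are all arbitrary; the
# point is just the mutate-and-select loop.

TARGET = [0.5, -1.2, 3.3, 0.0, 2.7]  # arbitrary "ideal" genome

def fitness(genome):
    # Higher is better: negative squared distance from the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Return a copy of the genome with small Gaussian noise on each gene.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, generations=200):
    population = [[random.uniform(-5, 5) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Sort by fitness, keep the top 20%, and refill the population
        # with mutated copies of the survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 5]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

print(evolve())  # ends up close to TARGET after enough generations
```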
Currently, only the most powerful supercomputers are in the petaflop range, and they're not likely to be used for AI research. The Tianhe-I supercomputer is currently involved in oil exploration and aircraft simulation. Other common uses for supercomputers are predicting weather patterns, simulating proteins, and researching missile technology. In other words, things with commercial and military applications.
Wouldn't the development of strong AI be lucrative? Maybe, if it could be developed using current hardware. But renting time on supercomputers is expensive, and research into general AI is not well funded, especially compared to something like drug development. It's uncertain whether general AI research will turn a profit or yield practical applications; in that respect, it's closer to pure science.
Incidentally, Intel is building a $5 billion factory to manufacture chips with a 14nm transistor process. It's supposed to be ready in 2013. Developing and manufacturing that one process will cost about as much as the Large Hadron Collider.
AI research should take off as the necessary hardware becomes affordable and artificial intelligence is seen to hold commercially viable applications. This may happen relatively soon, if processing power continues along the exponential growth curve it has been following since before the 1960s. Computational power per dollar has, on average, doubled every 1.5 years. Sometime in the next decade, we will probably reach the end of Moore's Law in its original sense: the doubling of silicon transistors on an integrated circuit every 18 months. But replacements for traditional silicon architecture are being heavily researched. Developments in nanotubes, graphene, molybdenite, memristors, multiple cores, various optimizations, and 3D architectures should help keep the party going for some time.
The exponential growth of computer technology has yielded amazing results. Right now, you can buy a Radeon HD 5970 graphics card for $700. It can reach speeds of 0.928 teraflops, which is about a trillion (double-precision) operations a second. A trillion calculations a second is crazy, but matching the 10 petaflop goal would take roughly 10,800 of these cards, at a cost of about $7.5 million for the cards alone. Many supercomputers, like Tianhe-I, actually use arrays of thousands of graphics cards to perform calculations, so a graphics card is an apt example.
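Here's that back-of-the-envelope math as a quick Python script, using the card price and throughput quoted above:

```python
# How many Radeon HD 5970s would it take to reach 10 petaflops?
CARD_TFLOPS = 0.928   # double-precision throughput of one card
CARD_PRICE = 700      # USD per card, the retail price quoted above
TARGET_PFLOPS = 10    # estimated upper bound for the human brain

cards_needed = TARGET_PFLOPS * 1000 / CARD_TFLOPS  # 1 petaflop = 1000 teraflops
total_cost = cards_needed * CARD_PRICE

print(f"cards needed: {cards_needed:,.0f}")    # ~10,776 cards
print(f"cost of cards: ${total_cost:,.0f}")    # ~$7.5 million
```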
If current exponential trends continue, a single graphics card 20 years from now would perform right around 10 petaflops. Over 20 years, at one doubling every 1.5 years, computational power doubles about 13 times (20 / 1.5 ≈ 13.3), a multiplication of roughly 10,000; one more doubling, at about 21 years out, takes a single card well past the 10 petaflop mark.
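And the same extrapolation as a script. The 1.5-year doubling period is the historical figure mentioned earlier, and the starting point is the HD 5970's 0.928 teraflops; whether the trend actually holds that long is anyone's guess:

```python
# Extrapolate single-card performance, assuming the historical
# doubling of computational power per dollar every 1.5 years holds.
START_TFLOPS = 0.928    # Radeon HD 5970, double precision, today
DOUBLING_YEARS = 1.5

for years in (10, 15, 20, 21):
    tflops = START_TFLOPS * 2 ** (years / DOUBLING_YEARS)
    print(f"{years:2d} years out: {tflops / 1000:6.2f} petaflops")
# 10 years out:   0.09 petaflops
# 15 years out:   0.95 petaflops
# 20 years out:   9.58 petaflops
# 21 years out:  15.20 petaflops
```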
Will the growth of processor power continue at the current rate for 20 years? Who knows? Can you even make a graphics card with petaflop performance? Theoretically yes, but we'll just have to wait and see what the practical and economic limitations are. In any case, I'm sure we will see impressive gains in computer power over the next two decades that should make strong AI much more viable.
But for now, grant that the average person in the year 2030 (or whenever it may happen) has a petaflop computer. What would an individual need that sort of power for? Such a computer could simulate completely realistic, full-immersion virtual environments. What else could you need from a computer? How about an artificial best friend?
Twenty years from now, if we have anywhere near that level of hardware infrastructure, AI research should have taken off, both because the hardware will be affordable and because of the potential commercial market for AI. If researchers were to develop and run a human-level AI on an expensive supercomputer today, it might turn out to have incredible commercial applications. It might go nowhere productive. Or it might result in an above-average, billion-dollar artificial lab assistant. It's a risky project. But when researchers have relatively cheap access to the processing power and storage capacity they need, and when there is huge demand for AI because the average consumer has the hardware to run some version of it, I'm betting we'll see considerable money and research going into it.
People say that strong AI is hard, and they're right. But I think developing and distributing the hardware that AI will run on has to be more difficult overall. I believe there will come a time when the average person has a computer capable of running some kind of general artificial intelligence. As with today's machines, much of that computer's potential will go unused; most people use only a fraction of their computer's power, possibly even when they're running virtual worlds on it. Untapped computer resources create a vacuum that AI could fill. I think history will see the development of the hardware infrastructure as the largest part of this story. Once the hardware is in place, the software, however difficult it is to develop, can be distributed worldwide overnight to millions of consumers wanting their very own AI.
There has been a lot of progress toward strong AI already. The hardware groundwork is being laid, not to mention the narrow but amazing AI functions being developed, such as Google's search engine and Watson's understanding of natural language. Like the human mind, an artificial mind will be composed of numerous such subprocesses. And then there's robotics. The body, processing hardware, and subalgorithms of AI are already being built. Putting them all together will be hard, but every part of the equation matters, and every part is being worked on.
Note that I never mentioned consciousness. Intelligence and consciousness are not the same thing, and it is unclear right now whether machines can be conscious. I strongly suspect that consciousness is a much harder problem than intelligence. One day there might be a machine with superhuman intelligence and not a shred of consciousness.
Or, you know, I could be way off about all this.
Finally, what uses will AI have? I'll make some guesses for fun. Research and engineering are two obvious uses. Companionship, perhaps. Powering virtual lovebots? Education: "Hey computer, do my taxes and then teach me to play Go." Also, helping you organize your life. And ultimately I could see a companion AI becoming your intelligent interface with the world, like the agents in Living Outside, but more involved.
--
Brian