
Tuesday, December 7, 2010

Star Trek! Observations About Things That Don't Matter

I just finished watching the first generation Star Trek movies (1-8), after I watched the original TV series, and I have a few observations. First, to reiterate my last post, almost all red shirts killed were security officers. It makes sense for them to die. Shut up about it. Instead of saying, "Uh-oh, he's wearing a red shirt!" say, "Uh-oh, he's a security officer on a dangerous mission!"

Second, Star Trek III: The Search for Spock has some serious plot holes. Spock's dead, but he left his spirit in McCoy for some reason. OK. So now they need to put his spirit back in his body. So Spock's dad gets all pissy because Kirk jettisoned his body onto a planet as part of a funeral. Go get his body and bring it back, says dad. So they go to the planet and find that he's been magically reborn as a baby who grew into a full-grown adult, because the planet was magic due to the Genesis Project or whatever. They bring the full-grown Spock body back, put his spirit back in it, and all is back to normal.

EXCEPT that NO ONE expected him to be reborn from the Genesis Project. So Spock's dad is upset because Kirk hasn't brought back his rotting body, which would obviously be of no help in resurrecting Spock. He's "Vulcan mad" at Kirk because Kirk did the only thing that could possibly have revived Spock, although no one could have known it. So... the plot is a deadlocked contradiction. There was no reason to get his body, except to find it miraculously restored to life. What was Spock's dad going to do if Kirk returned with Spock's corpse? Bring it back to life? With what technology? This whole ritual of putting souls back into dead bodies appears only in this movie, I believe, although there are instances of katras being transferred between people. Am I missing something?

Also, the Klingons (led by Christopher Lloyd) were incredibly impressed with the Genesis Project's destructive capacity, since it can destroy and recreate an entire planet's biosphere. But why? It's expensive and really hard to do. It was well established in the original show that the Enterprise's phasers (and presumably the Klingon Bird of Prey's) could annihilate a planet's population in minutes. Whatever.

Before I get to more weird observations, I want to note how incredibly progressive the original show was. It was the mid-'60s, and Star Trek featured several black characters, most of them doctors or scientists. Uhura was a black woman on the bridge of the Enterprise, and she did more than handle communications. Frequently, when the computers on the bridge went awry, it was her and Spock rewiring them under the consoles.

Chekov was another remarkable character. During the Cold War, Roddenberry made the Enterprise's primary tactical officer a Russian. Not only was he Russian, but most of his lines were about how awesome Russia was. Yet he was a trusted member of the crew.

And of course, Spock represented geeks and the difficulty they sometimes have with emotions.

Star Trek also: promoted birth control; fought superstition; spoke against dogmatic religion and nationalism; presented a utopian future without money or capitalism; spoke critically of the use of nuclear weapons; spoke for peace; and advocated reaching out to even the most vicious enemies.

That said, most of the women in Star Trek are '60s stereotypes: weak-minded and dependent.

Here's the primary technical issue I have with the show: the speed of the Enterprise and the scope of the distances involved were not serious concerns to the people making the show. Obviously, no one expected that this cheap B-grade sci-fi show would become what it did. So it's not really that shocking when, in early episodes, the Enterprise zooms off to visit other galaxies, and sometimes flees an entire galaxy because of aliens in one solar system! Later, the show was confined to a small section of the Milky Way galaxy.

The speed of the warp drive's various levels is also erratic. And the show and movies play fast and loose with the difference between warp speed and sublight speeds.

In season 3, episode 3, "The Paradise Syndrome," Spock and McCoy have to leave a planet to stop a giant meteor from hitting it. Spock says that the meteor is going to hit the planet in two months unless they alter its path within a few hours. So they travel at warp 9 for "several hours" to reach it. Warp 9 is established as 1,500 times the speed of light. Assuming that they traveled for only 2 hours, that would be 3,000 light-hours, or more than 20,000 AU. Each astronomical unit is equal to the average distance between the Earth and the Sun. That distance would take about 125 days to traverse at the speed of light, and meteors travel at a tiny fraction of that speed.
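Here's a quick back-of-envelope check of those numbers in Python. It's only a sketch: the warp 9 multiplier and the 2-hour travel time are the assumptions stated above, not canon constants, and the AU conversion is rounded.

# Sanity check of the "Paradise Syndrome" travel numbers (illustrative only).
WARP_9_MULTIPLE = 1500        # times the speed of light, per the figure quoted above
TRAVEL_HOURS = 2              # "several hours," taken as 2 for a conservative estimate
AU_IN_LIGHT_SECONDS = 499.0   # one astronomical unit is roughly 499 light-seconds

distance_light_hours = WARP_9_MULTIPLE * TRAVEL_HOURS            # 3,000 light-hours
distance_au = distance_light_hours * 3600 / AU_IN_LIGHT_SECONDS  # roughly 21,600 AU
days_at_light_speed = distance_light_hours / 24                  # roughly 125 days

print(f"distance covered: {distance_light_hours} light-hours (~{distance_au:,.0f} AU)")
print(f"time to cross at light speed: ~{days_at_light_speed:.0f} days")

Running it gives roughly 21,600 AU and about 125 days, which is where the figures in the paragraph above come from.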

This sort of thing happens a lot in the show, and in some of the movies. In the first Star Trek movie, a giant cloud surrounding a spaceship is first seen in Klingon space and then is approaching Earth within days, apparently without ever going to warp.

Another weird thing about the show is Hodgkin's Law of Parallel Planetary Development, summarized as follows:

"The theory was that similar planets with similar environments and similar populations tended to gravitate toward similar biological developments over time. Although initially applicable only to biology it was later expanded to include a tendency to move toward similar sociological developments as well with sentient beings."
From here: http://memory-alpha.org/wiki/Hodgkin%27s_Law_of_Parallel_Planetary_Development

Most "aliens" in Star Trek looked completely human. Most spoke English. Many planets had independently "evolved" English. Some looked exactly like time periods from Earth. "They're going through the 1920's right now". In other words, it was an excuse to use the studio's sets to recreate different time periods in Earth's history. It was cheap and convenient, and allowed for lessons about how humans would evolve beyond contemporary 60's barbarities.

From episode 43, "Bread and Circuses," Spock remarks on the discovery that the aliens on a planet they are visiting speak English, "Complete Earth parallel. The language here is English." They needed some way to talk to aliens. This was before the Next Generation's universal translator, remember. In one case, even the continents looked exactly like Earth's.

They played pretty loose even with Hodgkin's law. If there was parallel development, shouldn't there be more people speaking Chinese? Or Hindi? And why are cave dwellers speaking English? Blame it on American cultural bias and convenience.

The thing I took away is that audiences today are a lot more sophisticated than they used to be. I have fun nitpicking TOS, but it took shortcuts out of necessity to tell stories about how society is and how it could change for the better. Technical details are secondary to that. If you can say of any TV show that it changed human culture for the better, surely Star Trek is in that category.

--
Brian

Friday, July 2, 2010

Aside: Star Trek Redshirts And Other Stuff

I've been watching the original Star Trek, and have been thinking about the "redshirt" phenomenon. It has become a recurring pop culture joke that crewmen wearing a red top tend to die. They even played with it in the recent reboot movie. But I don't get why this is a joke. It is true that 75% of all Enterprise crewmen who died in the original Star Trek wore red uniforms. However, that's because red is the color of security officers, the people you would expect to get killed. They are the ones patrolling areas, holding positions, guarding prisoners and doing other dangerous stuff. You would hope for an even higher percentage of red.

Where exactly is the joke here? People assigned dangerous, combat-related jobs tend to die more? I guess people might not know what a red uniform signifies. Actually, it can also mean engineering and tactical, but check out the list above: a "redshirt" almost always means security officer. I think there's a perception that red = ensign, which is untrue.

Spock, McCoy, Kirk and ensign Ricky go down to the planet. Guess who's not coming back? Let's see... the guy who's not one of the main stars? Would people prefer that science officers be killed in greater numbers?

Anyway... another thing that bothers me is the ever-popular Murphy's Law: "anything that can go wrong, will go wrong." The purpose of Murphy's Law is to remind us that considering a wide range of contingencies is a good precaution. That's all well and good, but the statement, as it stands, is either false or defines "can go wrong" so narrowly as to be a tautology. According to Murphy's Law, "what can go wrong" is identical in meaning to "what will go wrong."

This is not as bad as Finagle's Law, "Anything that can go wrong, will—at the worst possible moment." Really? Sad, sad pessimism.

Another annoyance: after someone says, "it's always in the last place you look," people love to say, "Duh! Why would you look anywhere else after you found it?" But the full sentiment behind the saying is, "it's always in the last place you would think to look." That is also a false statement, but it expresses the common experience of finding something only after you've exhausted your other options. If it were just "the last place you looked," no one would comment on it. So, here are two useful expressions that are false if read literally, and both annoy me. But that's human language, I guess.

--
Brian

Thursday, June 24, 2010

The Significance of Consciousness

It's a cliche to look up at the stars and comment on how small and insignificant we are. This bothers me because conscious beings like ourselves are arguably the only significant thing in the universe. Now, when I say "conscious beings," I am including the possibility of aliens and machine sentience. And of course the mental lives of a multitude of animals are worthwhile as well. Everything else in the universe is only significant in that it creates and sustains life, gives us resources, gives us something to ponder and inspires us to reach heavenward.

Is the universe really "big"? In terms of a physical dimension, only conscious beings give it any coherent scale. It's certainly got a lot going on, but so does the human body. And without beings like us, what point would there be in this, or any universe? It would be just a bunch of dust swirling around. Could the universe be a computer working on its creator's problems? Could it be innately conscious? Could stars be conscious in some way? I doubt it, but even these possibilities point out that consciousness is what is ultimately important.

The human (or possibly alien) brain is probably the most complex naturally occurring structure. True, it is physically minuscule compared to the Milky Way. A child is a speck on a speck whirling around a speck out near the rim of one galaxy among the billions that we know of. And yet, one child's life is worth more than an infinite number of galaxies in a universe where life will never occur. Now, doesn't that make us special? Relative to rocks and hot gas, yes. But there's still the issue of our place in civilization, in relation to the human race and life as a whole, and, gods willing, in the galactic federation.

On a nerdy note, I want to say that I have always been bothered by the part in Watchmen where Dr. Manhattan claims that lifeless Mars is superior to Earth life. It's a bunch of pretty rocks! And they're only pretty because he's looking at them. And that crystal thing of his is way less interesting than all the crap going on here. Get over yourself Dr. Manhattan!

Next: Stuff I Find Suspicious

--
Brian

Singularity Part 4: Uploading Minds

I have several times encountered an odd notion: won't we have to be able to "upload our minds" before the singularity can take place? "Uploading a mind" is frequently taken to mean transferring a person's unique consciousness into a machine. Moving consciousness into a machine, or from one person to another, is a common trope of mainstream science fiction, which hasn't helped the confusion.

More rational science fiction features a more realistic (and coherent) form of "uploading" a mind. Due to the unfortunate connotation of the term "uploading," I think "copying" is a better term. "Copying" a mind (or brain) means storing detailed brain scans in a computer. Often, but not always, this is followed by running a simulation of the recorded brain state, possibly giving that simulation a virtual body in a simulated environment or in a robot, etc.

Obviously, such a simulation based on your brain would not be "you". Even if consciousness could be generated by a machine, it is uncertain how comparable it would be to human consciousness. Besides the problem of machine consciousness, such a simulation could exist alongside you. No matter how closely it may resemble your thinking, memory, or emotions, you can't exist in two places at once.

So you can make a copy of yourself, and that may be fun and useful, and it might even keep your “mind” and memories and relationships alive after you die, but it will not exactly be you. In other words, shooting yourself after you make a perfect copy of your brain would be a waste. Even if a truly in-depth scan required the destruction of your brain, examining it layer by layer, it would not move your consciousness into a machine.

Finally, I don't see how the ability to copy or upload our minds is a precondition for the plausibility of the singularity, but it does seem to come up a lot for some reason, so there you go.

Next blog: Looking up at the stars.

--
Brian

Thursday, May 27, 2010

Singularity Part 3: Consciousness and Machines

I have seen numerous debates about the singularity hinge on the question of whether machines could ever become conscious. Specifically, one side takes the position that the singularity will never happen because we will never be able to create conscious machines to make it happen. That position conflates intelligence and consciousness, which I believe is a mistake. First, let me point out that the success of a singularity fueled by superhuman AI does not depend on whether the AI is conscious. It depends only on intelligence, and it seems very likely that a machine could have intelligence without consciousness.

Understanding and generating consciousness is a much harder problem than generating intelligence. For one thing, there are ready ways to measure intelligence. It's easy to imagine an anthropomorphic robot that could interact with you as an intellectual equal. Think C-3PO or Data. Such a robot would be recognizably intelligent. But is it conscious? Can non-biological matter ever have consciousness? Could you even tell if it did? No one knows. It may be that conscious awareness is an intrinsic property of sufficient intelligence, emerging from the complex pattern of information and algorithms of a self-reflective system. But again: who the hell knows?

Consciousness may well be the hardest problem in science because of its subjective nature. Our existence itself is consciousness. Yet, we have no working theory of how the brain generates consciousness. With our current ignorance, speculating about machine consciousness is pure fantasy.

I have to repeat that conscious machines are not needed for a singularity. I'm all for sentient machines, of course. I'm just pointing out that this is an immensely difficult issue, and arguing about its likelihood is pointless at the moment. Still, theoretical discussions about the moral and existential issues of conscious machines are interesting exercises, and may help us prepare for a time when we know more about this.

I should note that, conversely, having consciousness doesn't necessarily mean having intelligence, as [insert name of stupid celebrity here, possibly a Baldwin] demonstrates.

Next blog: uploading the mind.

--
Brian

Singularity Part 2: Imagineering or Where's my Jetpack?

First, Disney didn't make up the word imagineering, so I can use it how I want.

About my first post, I want to say that not every singularitarian (what a word!) thinks the same way about the future or lets the singularity cloud their vision. I just want to point out that some people use it to cast doubt on whether predictions about the future can ever be considered realistic or meaningful, and I have seen it shut down futurist discussions.

Imagining the future is sometimes seen as a futile endeavor. A post-singularity world might be so different from our own that our conceiving of it is no more possible than a caveman coming up with the idea of Facebook. But that is not a productive assumption. And even if the future is unimaginable, we can still make useful predictions a good deal into the future regarding where technology and society are headed, based on current trends. A singularity event doesn't even necessarily change where any trend is heading; it may only accelerate it on its way. Superhuman AI still has to work within the laws of physics, after all. So, I think that for the sake of envisioning the future, strong AI should be seen as an expedient on the road toward transformation, not just the constructor of a roadblock to our imagination.

Meaningful attempts at futurism require critical thinking and healthy skepticism grounded in contemporary knowledge of science and technology. Also required is a grasp of humanity's driving forces and the directions they're taking us. Something can't just sound cool, like flying cars or jetpacks; it has to make sense in physical and social terms. The cliche thing to ask during discussions about the future is why we don't have those things yet. But we do have them! They're just expensive and dangerous, which is hardly a surprise. Engineers eighty years ago pointed out how impractical those technologies were. The energy required to move a car through the air is far greater than that needed to roll it along the ground. And who would you trust to fly one? As for jetpacks? Come on! Was anyone ever serious about those? If Boba Fett's jetpack accidentally killed him, what hope do the rest of us have with one strapped to our backs? (He's dead, get over it.)

On the other end of the "common sense folly" of making predictions, no one really foresaw what the Internet would become. Or just how the pill would change society and liberate women. No one on record, that I'm aware of, even foresaw personal automobiles or their impact. Futurism is not an exact science. But it is fun, and educated speculation can help us prepare for the future. It may not happen exactly the way we foresee it, but it's better than nothing. Just because the future is fuzzy doesn't mean you have to head there blindfolded. And we have never had a clearer view of how things are going to turn out. This is not the time to bury our heads in thinking like, "the future is unknowable".

So why look dumb in 30 years' time if you can just keep your mouth shut? Because predictions are necessary to guide our path and foresee pitfalls. Here's an example: automation. That's something we desperately need to be examining and predicting. I've been thinking about automation for a while and will try to write about it soon in a more extensive way. But the gist of it is this:

Automation will be taking more and more jobs as robots and other automating technologies improve. Every year robots get better and cheaper, with capabilities now expanding into diverse areas of work: agriculture, docking, manufacturing, warehousing, construction, flying planes, programming, and more. Companies love to replace human workers with machines, and it makes perfect economic sense for them to do so. Automation will be taking menial labor jobs from millions of people all over the world in the decades to come. So what do those millions of unemployed, undereducated people with limited skill sets do? Where will new opportunities for these people come from? How will so many people adjust to fill positions that will likely require education, creativity, and skills that can't be replicated by increasingly sophisticated automation?

It's not a hopeless situation, but it's not being dealt with openly in America. Instead, anxiety over automation generates popular nightmares such as: Skynet of the Terminator series, the Borg of Star Trek, the Cylons of Battlestar Galactica, that retarded Surrogates movie, the machines from The Matrix, and dozens of dystopic mechanized futures like Brazil. These are not generally constructive criticisms of automation. The message gleaned from most of them is that mechanization will rob us of our humanity or try to destroy us, leaving us with no choice but to nip it in the bud right now. But widespread automation will continue; it is simply too cost-effective and powerful to stop. We need to be thinking openly and critically about how automation will play out, not just exercising our fears with childish apocalyptic scenarios.

Next post: what's there to say about conscious machines.

--
Brian

Thursday, May 20, 2010

Singularity Part 1: Strong AI as Deus ex Machina

If you're unfamiliar with the "technological singularity" or "strong AI":
http://en.wikipedia.org/wiki/Technological_singularity
http://en.wikipedia.org/wiki/Strong_ai
http://io9.com/5534848/what-is-the-singularity-and-will-you-live-to-see-it


The world is in a state of transition and the road ahead holds many unknowns. This blog is an attempt to think clearly about the future—particularly, the promise and peril of new technologies and how people will change with them. To do this, one must consider a wide range of possibilities. I'm going to start by examining some issues regarding one very popular vision of the future, the technological singularity.

I find the singularity concept to be both intriguing and frustrating. It's intriguing because it gives hope for a "nerd rapture." And since AIs are the main agents involved, humans have to do far less work. But the singularity is just as frustrating, because its very definition constrains the range of thought deemed relevant about the future. The whole point of the singularity is that we can't understand what happens during it, or what existence looks like after it. We also don't know when it will occur, how long it will take to change the world, or in what way. The singularity's shroud of unknowing seems to make talking coherently about it moot. But there is one central element of the singularity that we can think critically about: strong artificial intelligence.

Whether it ever becomes "strong" or not, AI has a huge part to play in the future. Even though general-purpose, human-level AI has proven more difficult to crack than researchers expected, AI is making great strides and encroaching further every day into areas of intelligence that we consider part of the human domain. AI may not look like a walking, talking person yet, but it is developing in numerous fields: game playing, pattern recognition of faces and locations (e.g., smart missiles), maneuvering robots, making music, predicting the stock market, forecasting the weather, and creating King Kong's CG hair and waving it realistically in the wind.

Not all of those may seem like they take intelligence, but they all feature impressive pattern recognition and creation. Our own brains operate with the help of countless subconscious processes that help us recognize faces, walk, talk, do math, write poetry, and drive a car. It's a large collection of these “agents” that makes up our brain processes and sub-processes, and thus our minds. Looking at any individual process of the brain may not uncover something that looks “intelligent,” but put them together and there you go.

Strong AI is a key component for the occurrence of a singularity. The singularity is synonymous with the concept of an “intelligence explosion,” a meteoric rise in the quantity and quality of intelligence. This rise comes from strong AI improving on itself in an exponential feedback loop, far surpassing human capabilities and transforming the world through almost miraculous technological innovation. Another source of increased intelligence may be enhanced human beings, bolstered to super-normal cognitive capacity with the help of neural implants, smart drugs, and enriching information technologies such as smart phones and the Internet. An "intelligence explosion" may result from a combination of these two sources. AI has potentially better hardware, of course.
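To make the "feedback loop" idea concrete, here is a toy sketch in Python. It is purely illustrative: the starting level, the improvement rate, and the assumption that gains compound at all are made up for the example, and whether they hold is exactly what the list below questions.

# Toy model of an "intelligence explosion": each generation of AI improves itself
# in proportion to its current ability. All numbers are arbitrary illustrations.
intelligence = 1.0        # 1.0 = roughly human-level, in arbitrary units
improvement_rate = 0.5    # fraction of current ability converted into improvement

for generation in range(1, 11):
    intelligence *= 1 + improvement_rate    # self-improvement compounds each cycle
    print(f"generation {generation}: intelligence = {intelligence:.2f}")

# If gains compound like this, growth is exponential (about 57x after 10 generations).
# Whether real gains in intelligence can compound this way is an open question.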

There are a few assumptions being made with regard to strong AI and the singularity:
1. Strong AI is possible. I wouldn't bet against it, but it might take a long time to develop.
2. The power of AI will improve rapidly on an exponential curve as it uses each level of increased intelligence to jump to another level. We don't know enough about the nature of intelligence to say that it actually can increase like that. Perhaps leaps in intelligence will be difficult even for superhuman minds, and AI may improve only slowly or encounter difficult problems.
3. It is possible for even a superintelligent being to improve technology considerably faster than humans already are. The development of technology takes intelligence, but it also requires the development of supporting technologies and often great resources. The development of a new processor, for example, requires many millions of dollars and large research labs. While strong AI could certainly help with the man-hours involved, I find it hard to imagine that it could do exponentially better than large groups of skilled humans, especially enhanced humans integrated with information technology.
4. We won't hit a wall in the exponential growth of technology. Such walls might eventually be surpassed, but they could flatten the growth rate of new tech in the meantime.

Now, I'm not pessimistic (just look at my tag line), and I would be happier than most if those assumptions were true. But I do believe that the future must be explored openly. So it bugs me that AI is often treated in science fiction as a type of black box, a deus ex machina where a genie pops out of a box and fixes (or annihilates) everything. The advent of strong AI is too frequently the point where the discussion ends. I have seen its association with the singularity shield it from discussion.

It seems safe to say that the perspective and capacities of superhuman AI could be beyond our comprehension. Thus, many think we cannot coherently speculate about its actions or what direction it (or they) might take technology. But thinking like this only limits how we imagine the future, and that imagining is too important to constrain with intellectual insecurities. Is imagining the future a worthwhile endeavor? That's the next post.