Everybody is your 16th cousin

In my article two weeks ago about the odds of knowing a cousin, I puzzled over the question of how many 3rd cousins a person might have. This is hard to answer, because it depends on figuring out how many successful offspring per generation the various levels of your family (and related families) have. "Successful" here means that they also create a tree of descendants. This number varies a lot among families and regions, and it has varied a great deal over time. An Icelandic study found a figure of around 2.8, but it's hard to derive a general rule. I've used 3 (81 great-great-grandchildren per couple) as a rough number.

There is something, however, that we can calculate without knowing how many children each couple has. That’s because we know, pretty accurately, how many ancestors you have. Our number gets less accurate over time because ancestors start duplicating — people appear multiple times in your family tree. And in fact by the time you go back large numbers of generations, say 600 years, the duplication is massive; all your ancestors appear many times.

To answer the question of "How likely is it that somebody is your 16th cousin?" we can just look at how many ancestors you have back there. 16th cousins share with you a couple 17 generations ago. (You can share just one ancestor, which makes you a half-cousin.) So your ancestor set from 17 generations ago will be 65,536 different couples. Actually fewer than that due to duplication, but at this level in a large population duplication isn't as big a factor as it becomes later, and where it is a factor, it's usually because of a close-knit community, which means you are even more related.

So you have 65K couples and so does your potential cousin. The next question is, what is the size of the population in which they lived? Well, back then the whole world had about 600 million people, so that's an upper bound. So we can ask: if you take two random sets of 65,000 couples from a population of 300M couples, what are the odds that none of them match? With your 65,000 ancestors being just 0.02% of the world's couples, and your potential cousin's ancestors being a similarly tiny set, you would think it likely they don't overlap.

Turns out that’s almost nil. Like the famous birthday paradox, where a room of 30 people usually has 2 who share a birthday, the probability there is no intersection in these large groups is quite low. it is 99.9999% likely from these numbers that any given person is at least a 16th cousin. And 97.2% likely that they are a 15th cousin — but only 1.4% likely that they are an 11th cousin. It’s a double exponential explosion. The rough formula used is that the probability of no match will be (1-2^C/P)^(2^C) where C is the cousin number and P is the total source population. To be strict this should be done with factorials but the numbers are large enough that pure exponentials work.

Now, of course, the couples are not selected at random, and nor are they selected from the whole world. For many people, their ancestors would have all lived on the same continent, perhaps even in the same country. They might all come from the same ethnic group. For example, if you think that all the ancestors of the two people came from the half million or so Ashkenazi Jews of the 18th century then everybody is a 10th cousin.

Many populations did not interbreed much, and in some cases of strong ethnic or geographic isolation, barely at all. There are definitely silos, and they sometimes existed in the same town, where there might be far less interbreeding between races than within them. Over time, however, the numbers overwhelm even this. Within close-knit communities, like say a city of 50,000 couples who bred mostly with each other, everybody will be a 9th cousin.

These numbers provide upper bounds. Due to the double exponential, even when you start reducing the population numbers due to out-breeding and expansion, it still catches up within a few generations. This is just another measure of how we are all related, and also how meaningless very distant cousin relationships, like 10th cousins, are. As I've noted in other places, if you leave aside the geographic isolation that some populations lived in, you don't have to go back more than a couple of thousand years to reach the point where we are not just all related, but we all have the same set of ancestors (ie. everybody who procreated) just arranged in a different mix.

The upshot of all this: If you discover that you share a common ancestor with somebody from the 17th century, or even the 18th, it is completely unremarkable. The only thing remarkable about it is that you happened to know the path.

Caprica, uploading and gods

Caprica’s first half-season is almost over, but I started watching late due to travel and the Olympics. Here’s my commentary on the show to this point. I already commented last week on the lack of protagonists we can identify with. Now onto bigger issues.

Caprica is, I think, the first TV series to have uploading — the transfer of a human mind into computerized form — as its central theme. While AI is common in movies and TV, uploading is quite uncommon in Hollywood, even though it's an important theme in written SF. This is what interests me in Caprica. Its connection to Battlestar Galactica is fairly loose, and we won't find the things we liked in that show showing up much in this one.

God, again

In fact, I mostly fear encroachment of material from BSG, in particular the "God" who was revealed to be the cause of all the events of that series. What we don't yet know is whether the monotheistic "Soldiers of the One" are just yet another religion to have invented a "one true god" or if they really have received signs or influence from that god.

When I was critical of the deus ex machina ending of BSG, many people wanted to point out that religion had been present from the start in the show. But the presence of religion is not the same as the presence of a real god, and if not for BSG, I doubt any viewer would suspect the "One" was real. However, knowing that there is a one true god, we must fear the worst. Since that god set up all the events of BSG long ago, including various timings of the Cylon wars, it's hard to believe that the god is not also setting up the timing of the creation of the Cylons, and thus directly or indirectly arranging Zoe's death and transfer. I hope not, but it's hard to avoid that conclusion. The best we can hope for is that no direct influence of the god is shown to us.

Alas, for a show about uploading, the writers do need some more education about computers. Much of the stuff we see is standard Hollywood. Nonetheless the virtual worlds and the two uploaded beings (Zoe-A and Tamara-A) are by far the most interesting thing in the show, and fan ratings which put the episode "There is another Sky" at the top indicate most viewers agree. We're not getting very much of them, though.

Worldbuilding

The colonial world is interesting, with many elements not typically shown in TV, such as well accepted homosexuality, group marriage, open drug use and kinky holo-clubs. There's a lot of focus on the Tauron culture, but right now this impresses me as mostly a mish-mash, not the slow revelation of a deeply constructed background. I get the real impression that they just make up something they like when they want to display Tauron culture. As for what's interesting in Caprican or other cultures, we mostly see that only through Sister Clarice and her open family.

I was hoping for better worldbuilding and it is still not too late. The pilot did things decently enough but there has not been much expansion. The scenes of the city are now just establishing shots, not glimpses into an alien world. The strange things — like the world’s richest man and his wife not having bodyguards after open attacks on their person — might be a different culture or might just be writing errors.

William Adama

For BSG fans, there is strong interest in William Adama, the only character shared between the shows. But this one seems nothing like the hero of the original show. And he seems inconsistent. We learn that the defining event of his life was the terrorist murder of his sister and mother by a monotheist cult. (Well, defining event in a life that goes on to have more big events, I suppose.) Yet he shows no more than average mistrust of monotheism when it is revealed that the Cylons are monotheists and believe in a "one true god." He doesn't like Baltar, but he's pretty tolerant when Baltar starts a cult of a one true god on the ship, and even gives him weapons at some point. He just doesn't act like somebody who would have a knee-jerk reaction to monotheists preaching a one true god.

There was also no sign of Tamara Adama in BSG. The original script plans called for her avatar to also contribute to the minds of the early Cylons, and this may not happen. If it does happen, it is odd that we never see any sign of Cylons remembering being his sister. We also have to presume that neither he nor anybody else knows that his father played a pivotal role in the creation of the Cylons. That would make his father quite infamous; nobody would remember his law career.

New Cap City

Started out as the most interesting place on Caprica. Getting a bit slower. Not certain who Emanuelle is — is she Tamara, or working for Tamara? If so, why is she hooking Joseph on the Amp? When she was winged, it was odd that she had her arm flicker out — I would assume that in a world trying to appear real, non-fatal wounds would look like wounds, and even killed people would leave bodies as far as the other players were concerned.

If Zoe enters New Cap City, she should not be like Tamara, unable to be killed. She is now running in a robot body and interfacing with a holoband like humans do.

Will Tamara’s popularity with viewers turn her from a minor character into something more important?

Origin of the Cylons

The big question remains, where do the minds of the Cylon armies come from? Are they all copies of Zoe? Has Zoe given Philemon the clue as to how to create other copies, perhaps more mindless ones? Does Tamara provide a mind to a Cylon? Do the Soldiers of the One get access to the upload generator that Daniel used on Tamara and make their own uploads, and do they become the Cylon minds? We know that something of Zoe or the STO ends up in Cylon minds.

Who is the hero of Caprica?

As some readers may know, I maintained a sub-blog last year for analysis of Battlestar Galactica. BSG was very good for a while, but sadly had an extremely disappointing ending. Postings in the Battlestar Galactica Analysis Blog did not usually show up on the front page of the main blog; you had to read or subscribe to it independently.

There is a new prequel spin-off series airing called Caprica, which has had 6 episodes and has just 2 more before going on a mid-season hiatus. I will use the old battlestar blog for more limited commentary on that show, which for now I am watching. (However, not too many people are, so it's hard to say how long it will be on.)

My first commentary is not very science-fiction related, though I will be getting to that later — since the reason I am watching Caprica is my strong interest in fiction about mind uploading and artificial intelligence, and that is a strong focus of the show.

Instead, I will ask a question that may explain the poor audiences the show is getting. Who is the hero of Caprica? The character the audience is supposed to identify with? The one we care about, the one we tune in so we can see what happens to them? This is an important question, since while a novel or movie can be great without a traditional protagonist or even an anti-hero, it's harder for a TV series to pull that off.

Haplogroups, Haplotypes and genealogy, oh my

I received some criticism the other day over my own criticism of the use of haplogroups in genealogy — the finding and tracing of relatives. My language was imprecise so I want to make a correction and explore the issue in a bit more detail.

One of the most basic facts of inheritance is that while most of your DNA is a mishmash of your parents (and all their ancestors before them), two pieces of DNA are passed down almost unchanged. One is the mitochondrial DNA, which is passed down from the mother to all her children. The other is the Y chromosome, which is passed down directly from father to son. Girls don't get one. A son's X chromosome comes entirely from his mother (though it is a shuffle of her two X's), but of course he can't pass it unchanged to anybody.

This allows us to track the ancestry of two lines. The maternal line tracks your mother, her mother, her mother, her mother and so on. The paternal line tracks your father, his father and so on. The paternal line should, in theory, match the surname, but for various reasons it sometimes doesn't. Females don't have a Y, but they can often find out what Y their father had if they can sequence a sample from him, his sons, his brothers and other male relatives who share his surname.

The ability to do this got people very excited. DNA that can be tracked back arbitrarily far in time has become very useful for the study of human migrations and population genetics. The DNA is normally passed down unchanged, but every so often there is a mutation. These mutations, if they don't kill you, are passed down. The various collections of mutations are formed into a tree, and the branches of the tree are known as haplogroups. For both kinds of DNA, there are around a couple of hundred haplogroups commonly identified. Many DNA testing companies will look at your DNA and tell you your mtDNA haplogroup, and if male, your Y haplogroup.

The privacy risks of genetic genealogy (23andMe part 2)

Last week, I wrote about interesting experiences finding cousins who were already friends via genetic testing. 23andMe's new "Relative Finder" product identifies the other people in their database of about 35,000 to whom you are related, guessing how close the relationship is. Surprisingly, 2 of the 4 relatives I made contact with were already friends of mine, but not known to be relatives.

Many people are very excited about the potential for services like Relative Finder to take the lid off the field of genealogy. Some people care deeply about genealogy (most notably the Mormons) and others wonder what the fuss is. Genetic genealogy offers the potential to finally link all the family trees built by the enthusiasts and to provably test already known or suspected relationships. As such, the big genealogy web sites are all getting involved, and the Family Tree DNA company, which previously did mostly worthless haplogroup studies (and more useful haplotype scans), is opening up a paired-chromosome scan service for $250 — half the price of 23andMe's top-end scan. (There is some genealogical value to the deeper clade Y studies FTDNA does, but the mitochondrial and 12-marker Y studies show far less than people believe about living relatives. I have a followup post about haplogroups and haplotypes in genealogy.) Note that in March 2010, 23andMe is offering a scan for just $199.

The cost of this is going to keep decreasing and soon will be sub-$100. At the same time, the cost of full sequencing is falling by a factor of 10 every year (!) and many suspect it may reach the $100 price point within just a few years. (Genechip sequencing only finds the SNPs, while a full sequencing reads every letter (base) of your genome, and perhaps in the future your epigenome.)

Discovery of relatives through genetics has one big surprising twist to it. You are participating in it whether you sign up or not. That's because your relatives may be participating in it, and as it gets cheaper, your relatives will almost certainly be doing so. You might be the last person on the planet to accept sequencing but it won't matter.

The odds of knowing your cousins: 23andme Part 1

Bizarrely, Jonathan Zittrain turns out to be my cousin — which is odd because I have known him for some time and he is also very active in the online civil rights world. How we came to learn this will be the first of my postings on the future of DNA sequencing and the company 23andMe.

(Follow the genetics tag for part two and other articles.)

23andMe is one of a small crop of personal genomics companies. For a cash fee (ranging from $400 to $1000, but dropping with regularity) you get a kit to send in a DNA sample. They can't sequence your genome for that amount today, but they can read around 600,000 "single-nucleotide polymorphisms" (SNPs), which are single-letter locations in the genome that are known to vary among different people, and the subject of various research about disease. 23andMe began by hoping to let their customers know how their own DNA predicted their risk for a variety of different diseases and traits. The result is a collection of information — some of which will just make you worry (or breathe more easily) and some of which is actually useful. However, the company's second-order goal is the real money-maker. They hope to get the sequenced people to fill out surveys and participate in studies. For example, the more people fill out their weight in surveys, the more likely it is they might notice, "Hey, all the fat people have this SNP, and the thin people have that SNP; maybe we've found something."

However, recently they added a new feature called "Relative Finder." With Relative Finder, they will compare your DNA with that of all the other customers, and see if they can find long identical stretches which are very likely to have come from a common ancestor. The more of this they find, the more closely related two people are. All of us are related, often more closely than we think, but this technique, in theory, can identify closer relatives like 1st through 4th cousins. (It gets a bit noisy beyond that.)
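
To show the flavor of that matching (and only the flavor — real identical-by-descent detection works on phased haplotypes and genetic map distances; this toy is my own):

    # Toy version: length of the longest run of matching SNP calls.
    def longest_shared_run(snps_a, snps_b):
        best = cur = 0
        for a, b in zip(snps_a, snps_b):
            cur = cur + 1 if a == b else 0
            best = max(best, cur)
        return best

    print(longest_shared_run("AGTCCGA", "AGTTCGA"))  # -> 3

The longer the shared runs, and the more of them, the closer the estimated relationship.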

Relative Finder shows you a display listing all the people you are related to in their database, and for some people, it turns out to be a lot. You don’t see the name of the person but you can send them an E-mail, and if they agree and respond, you can talk, or even compare your genomes to see where you have matching DNA.

For me it showed one third cousin, and about a dozen 4th cousins. Many people don't get many relatives that close. A third cousin, if you were wondering, is somebody who shares a great-great-grandparent with you, or more typically a pair of them. It means that your grandparents and their grandparents were "1st" cousins (ordinary cousins). Most people don't have much contact with 3rd cousins or care much to. It's not a very close relationship.

However, I was greatly shocked to see the response that this mystery cousin was Jonathan Zittrain. Jonathan and I are not close friends; more accurately we might be called friendly colleagues in the cyberlaw field, he being a founder of the Berkman Center and I being at the EFF. But we had seen one another a few times in the prior month, and both lectured recently at the new Singularity University, so we are not distant acquaintances either. Still, it was rather shocking to see this result. I was curious to try to figure out what the odds of it are.

Olympic sports and failure

I wanted to post some follow-up to my prior post about sports where the contestants go to the edge and fall down. As I see it, the sports fall into the following rough classes:

  1. You push as hard as you can. The athlete who does the best — goes higher, stronger, faster — wins. The 100 metres is a perfect example.
  2. You push hard, but you must make judgments as you compete, such as how hard to push, when to pass, etc. If you judge incorrectly, it lowers your final performance, but only modestly.
  3. You must judge your abilities and the condition of the course and your equipment, and choose what is difficult enough to score well, but which you can be confident you will execute. To win, you must skate (literally or figuratively) close to the edge of what you can do. If you misjudge, or have bad luck, you are penalized significantly, and will move from 1st place to the rear of the pack.

My main concern is with sports in the third group, like figure skating, half-pipe and many others, including most of the judged sports with degrees of difficulty. The concern is the sudden shift from leader to loser that comes because you did what you are supposed to do — go close to the edge. Medals come either from being greatly superior, from knowing your edge very well, which is the intention, or from being lucky that particular day — which I think is not the intention.

Many sports seek to get around this. In high jump and pole vault, you just keep raising the bar until you can’t get over it, and somebody gets over the highest bar. This is a good system, but difficult when a sport takes a long time or is draining to do even once. You could do a figure skating contest where they all keep trying more and more difficult moves until they all fail but one, but it would take a long time, be draining, and cause injuries as there is no foam pit like in high jump.

Other sports try to solve this by letting you do 2 or more runs, and taking the best run. This is good, but we also have an instinct that the person who can do it twice is better than the one who did it once and fell down the other time. Sports that sum up several runs demand consistent performance, which is good, but in essence they also put a major punishment on a single failure, perhaps an even greater one. This approach requires you to be a touch conservative, just under your edge, so you know you will deliver several very good runs, and beat somebody who dares beyond their ability, but falls in one or more runs. At least it reduces the luck.

The particular problem is this. "Big failure" sports will actually often give awards either to a top athlete who got a good run, or to the athlete who was more conservative in choosing what to do, and thus had a very high probability of a clean run. Fortunately this will not happen too often, as one of the top tier who went for broke will have that clean run and get 1st place. But the person who does that may not be the one who is overall most capable.

Imagine if high jump were done with each competitor choosing what height they wanted the bar to be at in advance, and getting one try at it, and getting a medal if it’s the highest, but nothing if they miss.

The sports like short-track speed skating, which are highly entertaining, have this problem in spades, for athletes who wipe out can also impede other athletes. While the rules try to make it up to the athlete who was knocked out, they have a hard time doing this perfectly. For example, in the semi-finals of short-track, if you get knocked out while you were in 3rd, you are not assured of getting a consolation qualification even if you were just about to try for 2nd with the strength you were saving.

In some cases the chaos is not going away because they know audiences like it. Time trials are the purest and fairest competition in most cases but are dead-boring to watch.

Curling is the best Olympic sport

Some notes from the biennial Olympics crackfest…

I’m starting to say that Curling might be the best Olympic sport. Why?

  • It’s the most dominated by strategy. It also requires precision and grace, but above all the other Olympic sports, long pauses to think about the game are part of the game. If you haven’t guessed, I like strategy.
  • Yes, other sports have in-game strategy, of course, particularly the team sports. And since the gold medalist from 25 years ago in almost every sport would barely qualify today, you can make a case that all the sports are mostly mental in their way. But with curling, it's right there, and I think it edges out the others in how important it is.
  • While it requires precision and athletic skill, it does not require strength and endurance to the human limits. As such, skilled players of all ages can compete. (Indeed, the fact that out-of-shape curlers can compete has caused some criticism.) A few other sports, like sharpshooting and equestrian events, also demand skill over youth. All the other sports give a strong advantage to those at the prime age.
  • Mixed curling is possible, and there are even tournaments. There's debate on whether completely free mixing would work, but I think there should be more mixed sports, and more encouragement of it. (Many of the team sports could be made mixed; of course, mixed tennis used to be in the Olympics and is returning.)
  • The games are tense and exciting, and you don’t need a clock, judge or computer to tell you who is winning.

On the downside, not everybody is familiar with the game, the games can take quite a long time and the tournament even longer for just one medal, and compared to a multi-person race it's a slow game. It's not slow compared to an event that is many hours of time trials, though those events have brief bursts of high-speed excitement mixed in with waiting. And yes, I'm watching Canada-v-USA hockey now too.

Avatar isn't Dances With Wolves, it's another plot

Everybody has an Avatar review. Indeed, Avatar is a monument of moviemaking in terms of the quality of its animation and 3-D. Its most interesting message for Hollywood may be “soon actors will no longer need to look pretty.” Once the generation of human forms passes through the famous uncanny valley there will be many movies made with human characters where you never see their real faces. That means the actors can be hired based strictly on their ability to act, and their bankability, not necessarily their looks, or more to the point their age. Old actors will be able to play their young selves before too long, and be romantic leading men and women again. Fat actors will play thin, supernaturally beautiful leads.

And our images of what a good looking person looks like will get even more bizarre. We’ll probably get past the age thing, with software to make old star look like young star, before we break through the rest of the uncanny valley. If old star keeps him or herself in shape, the skin, hair and shapes of things like the nose and earlobes can be fixed, perhaps even today.

But this is not what I want to speak about. What I do want to speak about involves Avatar spoilers.

Twitter clients, only shorten URLs as much as you truly need to and make them readable

I think URL shorteners are a curse, but thanks to Twitter they are growing vastly in use. If you don't know, URL shorteners are sites that will generate a compact encoded URL for you to turn a very long link into a short one that's easier to cut and paste, and in particular these days, one that fits in the 140 character constraint on Twitter.

I understand the attraction, and not just on twitter. Some sites generate hugely long URLs which fold over many lines if put in text files or entered for display in comments and other locations. The result, though, is that you can no longer determine where the link will take you from the URL. This hurts the UI of the web, and makes it possible to fool people into going to attack sites or Rick Astley videos. Because of this, some better twitter clients re-expand the shortened URLs when displaying on a larger screen.

Anyway, here’s an idea for the Twitter clients and URL shorteners, if they must be used. In a tweet, figure out how much room there is to put the compacted URL, and work with a shortener that will let you generate a URL of exactly that length. And if that length has some room, try to put in some elements from the original URL so I can see them. For example, you can probably fit the domain name, especially if you strip off the “www.” from it (in the visible part, not in the real URL.) Try to leave as many things that look like real words, and strip things that look like character encoded binary codes and numbers. Of course, in the end you’ll need something to make the short URL unique, but not that much. Of course, if there already is a URL created for the target, re-use that.

Google just did its own URL shortener. I’m not quite sure what the motives of URL shortener sites are. While sometimes I see redirects that pause at the intermediate site, nobody wants that and so few ever use such sites. The search engines must have started ignoring URL redirect sites when it comes to pagerank long ago. They take donations and run ads on the pages where people create the tiny URLs, but when it comes to ones used on Twitter, these are almost all automatically generated, so the user never sees the site.

Why facebook wants you to open up your profile

There is some controversy, including a critique from our team at the EFF of Facebook’s new privacy structure, and their new default and suggested policies that push people to expose more of their profile and data to “everyone.”

I understand why Facebook finds this attractive. "Everyone" means search engines like Google, and also fully third-party apps like those that sprang up around Twitter.

On Twitter, I tried to have a "protected" profile, open only to friends, but that's far from the norm there. And it turns out it doesn't work particularly well. Because twitter is mostly exposed to public view, all sorts of things started appearing that treat twitter as more of a microblogging platform than a way to share short missives with friends. All of these new functions didn't work on a protected account. With a protected account, you could not even publicly reply to people who did not follow you. Even the Facebook app that imports your tweets to Facebook doesn't work on protected accounts, though it certainly could.

Worse, many people try to use twitter as a “backchannel” for comments about events like conferences. I think it’s dreadful as a backchannel, and conferences encourage it mostly as a form of spam: when people tweet to one another about the conference, they are also flooding the outside world with constant reminders about the conference. To use the backchannel though, you put in tags and generally this is for the whole world to see, not just your followers. People on twitter want to be seen.

Not so on Facebook and it must be starting to scare them. On Facebook, for all its privacy issues, mainly you are seen by your friends. Well, and all those annoying apps that, just to use them, need to know everything about you. You disclose a lot more to Facebook than you do to Twitter and so it’s scary to see a push to make it more public.

Being public means that search engines will find material, and that’s hugely important commercially, even to a site as successful as Facebook. Most sites in the world are disturbed to learn they get a huge fraction of their traffic from search engines. Facebook is an exception but doesn’t want to be. It wants to get all the traffic it gets now, plus more.

And then there’s the cool 3rd party stuff. Facebook of course has its platform, and that platform has serious privacy issues, but at least Facebook has some control over it, and makes the “apps” (really embedded 3rd party web sites) agree to terms. But you can’t beat the innovation that comes from having less controlled entrepreneurs doing things, and that’s what happens on twitter. Facebook doesn’t want to be left behind.

What’s disturbing about this is the idea that we will see sites starting to feel that abandoning or abusing privacy gives them a competitive edge. We used to always hope that sites would see protecting their users’ privacy as a competitive edge, but the reverse could take place, which would be a disaster.

Is there an answer? It may be to try to build applications in more complex ways that still protect privacy. Though in the end, you can’t do that if search engines are going to spider your secrets in order to do useful things with them; at least not the way search engines work today.

The lesson of Galactica and treating your creations well

A few weeks ago I reviewed the disappointing “The Plan” and in particular commented on how I wished the Cylons really had had a plan of some complexity.

More recently, I was thinking about what many would interpret as the message in BSG, which is said by many characters, and which is at the core of the repeating cycle of destruction. When you get good enough to create life (ie. Cylons) you must love them and keep them close, and not enslave them or they will come back to destroy you. This slavery and destruction is the “all this” that has happened before and will happen again.

Now that it is spelled out how the whole Cylon holocaust was the result of the petulance of Cylon #1, John, and that this (and its coverup) were at the heart of the Cylon civil war, the message becomes more muddled.

For you see, Ellen and the other 4 did keep their creation close. They loved John, and raised him like a boy. Ellen was willing to forgive John in spite of all he had done. And what was the result? He struck back and killed and reprogrammed them, and then the rest of his siblings, to start a war that would destroy all humanity, to teach them a lesson and in revenge for the slavery of the Centurions. Yet John was never enslaved, though he did decide he was treated poorly by being born into a human body. It's never quite clear what memories from the Centurions made it into the 8 Cylons, if any. It seems more and more likely that it was not very much, though we have yet to see the final answer on that. Further, they enslaved the Centurions and the Raiders too.

So Ellen kept her creations close, and loved them, and the result was total destruction. Oddly, the Centurions had been willing to give up their war with humanity in order to get flesh bodies for their race. The Centurions, it seems, were fighting for their freedom, not to destroy humanity, though perhaps they would have gone that far had they gained the upper hand in the war. Ellen intervened and added the love, and the result was destruction.

I don’t know if this is the intentional message — that even if you do follow the advice given to keep your creations close and loved, it still all fails in the end. If so, it’s an even bleaker message than most imagine.

Towards frameless (clockless) video

Recently I wrote about the desire to provide power in every sort of cable, in particular the video cable. And while we'll be using the existing video cables (VGA and DVI/HDMI) for some time to come, I think it's time to investigate new thinking in sending video to monitors. The video cable has generally been the highest bandwidth cable going out of a computer, though the fairly rare 10 gigabit ethernet is around the speed of HDMI 1.3 and DisplayPort, and 100Gb ethernet will be faster yet.

Even though digital video methods are so fast, the standard DVI cable is not able to drive my 4 megapixel monitor — this requires dual-link DVI, which, as the name suggests, runs 2 sets of DVI wires (in the same cable and plug) to double the bandwidth. The expensive 8 megapixel monitors need two dual-link DVI cables.

Now we want enough bandwidth to completely redraw a screen at a suitable refresh rate (100Hz) if we can get it. But we may want to consider how we can get that, and what to do if we can't get it, either because our equipment is older than our display, or because it is too small to have the right connector, or must send the data over a medium that can't deliver the bandwidth (like wireless, or long wires).

Today all video is delivered in “frames” which are an update of the complete display. This was the only way to do things with analog rasterized (scan line) displays. Earlier displays actually were vector based, and the computer sent the display a series of lines (start at x,y then draw to w,z) to draw to make the images. There was still a non-fixed refresh interval as the phosphors would persist for a limited time and any line had to be redrawn again quickly. However, the background of the display was never drawn — you only sent what was white.

Today, the world has changed. Displays are made of pixels but they all have, or can cheaply add, a “frame buffer” — memory containing the current image. Refresh of pixels that are not changing need not be done on any particular schedule. We usually want to be able to change only some pixels very quickly. Even in video we only rarely change all the pixels at once.

This approach to sending video was common in the early remote terminals that worked over ethernet, such as X windows. In X, the program sends more complex commands to draw things on the screen, rather than sending a complete frame 60 times every second. X can be very efficient when sending things like text, as the letters themselves are sent, not the bitmaps. There are also a number of protocols used to map screens over much slower networks, like the internet. The VNC protocol is popular — it works with frames but calculates the difference and only transmits what changes on a fairly basic level.
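
A bare-bones sketch of that kind of frame differencing (my own toy; real protocols like VNC work on tiles and rectangles and encode the result compactly):

    # Send only the rows that changed between two frames (toy version).
    def changed_rows(prev, cur):
        for y, (old, new) in enumerate(zip(prev, cur)):
            if old != new:
                yield y, new

    prev = [[0] * 4 for _ in range(3)]     # a 3-row, 4-pixel "frame"
    cur = [row[:] for row in prev]
    cur[1][2] = 255                        # one pixel changes...
    print(list(changed_rows(prev, cur)))   # ...so only row 1 is sent

When most of the screen is static, which is the common case on a desktop, the savings over retransmitting full frames are enormous.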

We’re also changing how we generate video. Only video captured by cameras has an inherent frame rate any more. Computer screens, and even computer generated animation are expressed as a series of changes and movements of screen objects though sometimes they are rendered to frames for historical reasons. Finally, many applications, notably games, do not even work in terms of pixels any more, but express what they want to display as polygons. Even videos are now all delivered compressed by compressors that break up the scene into rarely updated backgrounds, new draws of changing objects and moves and transformations of existing ones.

So I propose two distinct things:

  1. A unification of our high speed data protocols so that all of them (external disks, SAN, high speed networking, peripheral connectors such as USB and video) benefit together from improvements, and one family of cables can support all of them.
  2. A new protocol for displays which, in addition to being able to send frames, sends video as changes to segments of the screen, with timestamps as to when they should happen.

The case for approach #2 is obvious. You can have an old-school timed-frame protocol within a more complex protocol able to work with subsets of the screen. The main issue is how much complexity you want to demand in the protocol. You can’t demand too much or you make the equipment too expensive to make and too hard to modify. Indeed, you want to be able to support many different levels, but not insist on support for all of them. Levels can include:

  • Full frames (ie. what we do today)
  • Rastered updates to specific rectangles, with ability to scale them.
  • More arbitrary shapes (alpha) and ability to move the shapes with any timebase
  • VNC level abilities
  • X windows level abilities
  • Graphics card (polygon) level abilities
  • In the unlikely extreme, the abilities of high-level languages like Display PostScript.

I’m not sure the last layers are good to standardize in hardware, but let’s consider the first few levels. When I bought my 4 megapixel (2560x1600) monitor, it was annoying to learn that none of my computers could actually display on it, even at a low frame rate. Technically single DVI has the bandwidth to do it at 30hz but this is not a desirable option if it’s all you ever get to do. While I did indeed want to get a card able to make full use, the reality is that 99.9% of what I do on it could be done over the DVI bandwith with just the ability to update and move rectangles, or to do so at a slower speed. The whole screen never is completely replaced in a situation where waiting 1/30th of a second would not be an issue. But the ability to paint a small window at 120hz on displays that can do this might well be very handy. Adoption of a system like this would allow even a device with a very slow output (such as USB 2 at 400mhz) to still use all the resolution for typical activities of a computer desktop. While you might think that video would be impossible over such a slow bus, if the rectangles could scale, the 400 megabit bus could still do things like paying DVDs. While I do not suggest every monitor be able to decode our latest video compression schemes in hardware, the ability to use the post-compression primatives (drawing subsections and doing basic transforms on them) might well be enough to feed quite a bit of video through a small cable.

One could imagine even use of wireless video protocols for devices like cell phones. One could connect a cell phone with an HDTV screen (as found in a hotel) and have it reasonably use the entire screen, even though it would not have the gigabit bandwidths needed to display 1080p framed video.

Sending in changes to a screen with timestamps of when they should change also allows the potential for super-smooth movement on screens that have very low latency display elements. For example, commands to the display might involve describing a foreground object, and updating just that object hundreds of times a second. Very fast displays would show those updates and present completely smooth motion. Slower displays would discard the intermediate steps (or just ask that they not be sent.) Animations could also be sent as instructions to move (and perhaps rotate) a rectangle and do it as smoothly as possible from A to B. This would allow the display to decide what rate this should be done at. (Though I think the display and video generator should work together on this in most cases.)

Note that this approach also delivers something I asked for in 2004 — that it be easy to make any LCD act as a wireless digital picture frame.

It should be noted that HDMI supports a small amount of power (5 volts at 50mA) and in newer forms both it and DisplayPort have stopped acting like digitized versions of analog signals and become more like highly specialized digital buses. Too bad they didn't go all the way.

Protocol

As noted, it is key that the basic levels be simple, to promote universal adoption. As such, the elements in such a protocol would start simple. All commands could specify a time at which they are to be executed, if not immediate.

  • Paint line or rectangle with specified values, or gradient fill.
  • Move object, and move entire screen
  • Adjust brightness of rectangle (fade)
  • Load pre-buffered rectangle. (Fonts, standard shapes, quick transitions)
  • Display pre-buffered rectangle

However, lessons learned from other protocols might expand this list slightly.
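
To make this concrete, here is one hypothetical way such timestamped commands could be framed on the wire. The opcodes, field widths and layout are entirely my own invention, just to show the flavor:

    import struct, time

    # Hypothetical framing: a 13-byte header (opcode, execute-at timestamp,
    # payload length) followed by an opcode-specific payload.
    OP_FILL_RECT, OP_MOVE_RECT, OP_FADE_RECT = 1, 2, 3
    HEADER = struct.Struct("!BqI")   # opcode, microsecond timestamp, length

    def fill_rect(x, y, w, h, rgb, at_us=0):
        """Encode a 'paint rectangle' command; at_us=0 means immediately."""
        payload = struct.pack("!HHHH3B", x, y, w, h, *rgb)
        return HEADER.pack(OP_FILL_RECT, at_us, len(payload)) + payload

    # Ask the display to paint a white rectangle 10ms from now.
    msg = fill_rect(100, 50, 640, 480, (255, 255, 255),
                    at_us=int(time.time() * 1e6) + 10_000)
    print(len(msg), "bytes on the wire")

A display that can honor the timestamp paints the rectangle exactly then; a simpler one just executes commands as they arrive.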

One connector?

This, in theory allows the creation of a single connector (or compatible family of connectors) for lots of data and lots of power. It can’t be just one connector though, because some devices need very small connectors which can’t handle the number of wires others need, or deliver the amount of power some devices need. Most devices would probably get by with a single data wire, and ideally technology would keep pushing just how much data can go down that wire, but any design should allow for simply increasing the number of wires when more bandwidth than a single wire can do is needed. (Presumably a year later, the same device would start being able to use a single wire as the bandwidth increases.) We may, of course, not be able to figure out how to do connectors for tomorrow’s high bandwidth single wires, so you also want a way to design an upwards compatible connector with blank spaces — or expansion ability — for the pins of the future, which might well be optical.

There is also a security implication to all of this. While a single wire that brings you power, a link to a video monitor, LAN and local peripherals would be fabulous, caution is required. You don’t want to be able to plug into a video projector in a new conference room and have it pretend to be a keyboard that takes over your computer. As this is a problem with USB in general, it is worth solving regardless of this. One approach would be to have every device use a unique ID (possibly a certified ID) so that you can declare trust for all your devices at home, and perhaps everything plugged into your home hubs, but be notified when a never seen device that needs trust (like a keyboard or drive) is connected.

To some extent having different connectors helps this problem a little bit, in that if you plug an ethernet cable into the dedicated ethernet jack, it is clear what you are doing, and that you probably want to trust the LAN you are connecting to. The implicit LAN coming down a universal cable needs a more explicit approval.

Final rough description

Here’s a more refined rough set of features of the universal connector:

  • Shielded twisted pair with ability to have varying lengths of shield to add more pins or different types of pins.
  • Asserts briefly a low voltage on pin 1, highly current limited, to power negotiator circuit in unpowered devices
  • Negotiator circuits work out actual power transfer, at what voltages and currents and on what pins, and initial data negotiation about what pins, what protocols and what data rates.
  • If no response is given to negotiation (ie. no negotiator circuit) then measure resistance on various pins and provide specified power based on that, but abort if current goes too high initially.
  • Full power is applied, unpowered devices boot up and perform more advanced negotiation of what data goes on what pins.
  • When full data handshake is obtained, negotiate further functions (hubs, video, network, storage, peripherals etc.)
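
As a toy model of how the negotiation and its resistance-measurement fallback might behave (every number, name and message here is illustrative, not from any real standard):

    class Device:
        """Stand-in for the far end of the cable (names are hypothetical)."""
        def __init__(self, smart, resistance=100.0, wants=(48.0, 1.5)):
            self.smart, self.resistance, self.wants = smart, resistance, wants
        def respond(self, offer):            # only a negotiator chip answers
            return self.smart
        def request_power(self):
            return self.wants
        def measure_resistance_ohms(self):
            return self.resistance

    def negotiate_power(dev, max_v=48.0, max_a=5.0):
        if dev.respond("OFFER 5V 20mA"):     # digital negotiation path
            v, a = dev.request_power()
            return min(v, max_v), min(a, max_a)
        r = dev.measure_resistance_ohms()    # fallback: sense resistance
        if r < 10:                           # near-short: abort, no power
            return 0.0, 0.0
        return 5.0, 0.5                      # conservative legacy default

    print(negotiate_power(Device(smart=True)))    # -> (48.0, 1.5)
    print(negotiate_power(Device(smart=False)))   # -> (5.0, 0.5)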

The Cylons did not have "The Plan"

Last week saw the DVD release of what may be the final Battlestar Galactica movie/episode, a flashback movie called "The Plan." It was written by Jane Espenson and is the story of the attack and early chase from the point of view of the Cylons, most particularly Number One (Cavil). (Review first, spoilers after the break.)

I’ve been highly down on BSG since the poor ending, but this lowered my expectations, giving me a better chance of enjoying The Plan. However, sadly it fell short even of lowered expectations. Critics have savaged it as a clip show, and while it does contain about 20% re-used footage (but not including some actors who refused to participate) it is not a clip show. Sadly, it is mostly a “deleted scenes” show.

You’ve all seen DVDs with “deleted scenes.” I stopped watching these on DVDs because it often was quite apparent why they were deleted. The scene didn’t really add anything the audience could not figure out on its own, or anything the story truly needed. Of course in The Plan we are seeing not deleted material but retroactive continuity. Once the story of Cavil as the mastermind of the attack was written in season 4, and that he did it to impress his creators (who themselves were not written as Cylons until season three) most of the things you will see become obvious. You learn very little more about them that you could not imagine.

There is some worthwhile material. The more detailed nuking of the colonies is chilling, particularly with the Cylon models smiling at the explosions — the same models the audience came to forgive later. Many like the backstory given to a hidden “Simon” model on board the fleet never seen in the show. He turns out (in a retcon) to be one of the first to become more loving and human, since we see him at the opening having secretly married a human woman, but we also don’t forget the other Simon models we saw, who were happy to run medical experiments on humans, smile at nukes, and lobotomize their fellow Cylons to meet Cavil’s needs.

We learn the answers to a few mysteries that fans asked about — who did Six meet after leaving Baltar on Caprica? The meeting shown is anticlimactic. How did Shelley Godfrey disappear after accusing Baltar? The answer is entirely mundane, and better left as a mystery. (Though it does put to rest speculation that she was actually a physical appearance of the Angel in Baltar's head, who mysteriously was not present during Godfrey's scenes.)

We get more evidence that Cavil is cold and heartless. Stockwell enjoys playing him that way. But I can’t say it told me much new about his character.

More disappointing is what we don't get. We don't learn what was going on in the first episode, "33," and what was really on the Olympic Carrier, a source of much angst for Apollo and Starbuck during the series. We don't learn how the Cylons managed to be close enough to resurrect those tossed out of airlocks, but not to catch the fleet. We don't learn how Cavil convinced the other Cylons to kill all the humans, or their thoughts on it. We don't learn how that decision got reversed. We learn more about what made Boomer do her sabotages and shooting of Adama, but we don't learn anything about why she was greeted above Kobol by 100 naked #8s who then let her nuke their valuable base star. Now that the big secret of the god of Galactica is revealed, we learn nothing more about that god, and the angels don't even appear.

In short, we learn almost nothing, which is odd for a flashback show aired after the big secrets have been revealed. Normally that is the chance to show things without having to hide the big secrets. Of course, they didn’t know most of these big secrets in the first season.

Overall verdict: You won’t miss a lot if you miss this, feel free to wait for it to air on TV.

Some minor spoiler items after the break.

Every connector, including video, should send power both ways

I’ve written a lot about how to do better power connectors for all our devices, and the quest for universal DC and AC power plugs that negotiate the power delivered with a digital protocol.

While I’ve mostly been interested in some way of standardizing power plugs (at least within a given current range, and possibly even beyond) today I was thinking we might want to go further, and make it possible for almost every connector we use to also deliver or receive power.

I came to this realization plugging my laptop into a projector which we generally do with a VGA or DVI cable these days. While there are some rare battery powered ones, almost all projectors are high power devices with plenty of power available. Yet I need to plug my laptop into its own power supply while I am doing the video. Why not allow the projector to send power to me down the video cable? Indeed, why not allow any desktop display to power a laptop plugged into it?

As you may know, a Power-over-ethernet (PoE) standard exists to provide up to 13 watts over an ordinary ethernet connector, and is commonly used to power switches, wireless access points and VoIP phones.

In all the systems I have described, all but the simplest devices would connect and one or both would provide an initial very low current +5vdc offering that is enough to power only the power negotiation chip. The two ends would then negotiate the real power offering — what voltage, how many amps, how many watt-hours are needed or available etc. And what wires to send the power on for special connectors.

An important part of the negotiation would be to understand the needs of devices and their batteries. In many cases, a power source may only offer enough power to run a device but not charge its battery. Many laptops will run on only 10 watts, normally, and less with the screen off, but their power supplies will be much larger in order to deal with the laptop under full load and the charging of a fully discharged battery. A device’s charging system will have to know to not charge the battery at all in low power situations, or to just offer it minimal power for very slow charging. An ethernet cable offering 13 watts might well tell the laptop that it will need to go to its own battery if the CPU goes into high usage mode. A laptop drawing an average of 13 watts (not including battery charging) could run forever with the battery providing for peaks and absorbing valleys.
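
Here is a sketch of the kind of decision such a charging system might make once a budget has been negotiated (the thresholds and names are invented for illustration):

    # Decide what to do with a negotiated power budget (numbers invented).
    def charge_plan(offered_w, run_w, full_charge_w):
        spare = offered_w - run_w
        if spare >= full_charge_w:
            return "run and fast-charge the battery"
        if spare > 0:
            return f"run and trickle-charge at {spare:.0f} W"
        return "run, drawing on the battery for peaks"

    # The 13-watt ethernet scenario above: enough to run and trickle-charge.
    print(charge_plan(offered_w=13, run_w=10, full_charge_w=30))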

Now a VGA or DVI cable, though it has thin wires, has many of them, and at 48 volts could actually deliver plenty of power to a laptop. Thus there would be no need to power the laptop separately when on a projector or monitor. Indeed, one could imagine a laptop that uses this as its primary power jack, with the power plug having a VGA male and female on it to power the laptop.

I think it is important that these protocols go both directions. There will be times when the situation is reversed, when it would be very nice to be able to power low power displays over the video cable and avoid having to plug them in. With the negotiation system, the components could report when this will work and when it won’t. (If the display can do a low power mode it can display a message about needing more juice.) Tiny portable projectors could also get their power this way if a laptop will offer it.

Of course, this approach can apply everywhere, not just video cables and ethernet cables, though they are prime candidates. USB of course is already power+data, though it has an official master/slave hierarchy and thus does not go both directions. It’s not out of the question to even see a power protocol on headphone cables, RF cables, speaker cables and more. (Though there is an argument that for headphones and microphones there should just be a switch to USB and its cousins.)

Laptops have tried to amalgamate their cables before, through the use of docking stations. The problem was these stations were all custom to the laptop, and often priced quite expensively. As a result, many prefer the simple USB docking station, which can provide USB, wired ethernet, keyboard, mouse, and even slowish video through one wire — all standardized and usable with any laptop. However, it doesn't provide power because of the way USB works. Today our video cables are our highest bandwidth connector on most devices, and as such they can't be easily replaced by lower bandwidth ones, so throwing power through them makes sense, and even throwing a USB data bus for everything else might well make a lot of sense too. This would bring us back to having just a single connector to plug in. (It creates a security problem, however, as you should not let a randomly plugged-in device act as an input such as a keyboard or drive, since such a device could take over your computer if somebody has hacked it to do so.)

Flashforward, Deja Vu and Hollywood's problem with time travel

Tonight I watched the debut of FlashForward, which is based on the novel of the same name by Rob Sawyer, an SF writer from my hometown whom I have known for many years. However, “based on” is the correct phrase because the TV show features Hollywood’s standard inability to write a decent time travel story. Oddly, just last week I watched the fairly old movie “Deja Vu” with Denzel Washington, which is also a time travel story.

Hollywood absolutely loves time travel. It's hard to find a Hollywood F/SF TV show that hasn't fallen to the temptation to have a time travel episode. Battlestar Galactica's producer avowed he would never have time travel, and he didn't, but he did have a god who delivered prophecies of the future, which is a very close cousin of that. Time travel stories seem easy, and they are fun. They are often used to explore alternate possibilities for characters, which writers and viewers love to see.

But it’s very hard to do it consistently. In fact, it’s almost never done consistently, except perhaps in shows devoted to time travel (where it gets more thought) and not often even then. Time travel stories must deal with the question of whether a trip to the past (by people or information) changes the future, how it changes it, who it changes it for, and how “fast” it changes it. I have an article in the works on a taxonomy of time travel fiction, but some rough categories from it are:

  • Calvinist: Everything is cast, nothing changes. When you go back into the past it turns out you always did, and it results in the same present you came from.
  • Alternate world: Going into the past creates a new reality, and the old reality vanishes (at varying speeds) or becomes a different, co-existing fork. Sometimes only the TT (time traveler) is aware of this, sometimes not even she is.
  • Be careful not to change the past: If you change it, you might erase yourself. If you break it, you may get a chance to fix it in some limited amount of time.
  • Go ahead and change the past: You won’t get erased, but your world might be erased when you return to it.
  • Try to change the past and you can’t: Some magic force keeps pushing things back the way they are meant to be. You kill Hitler and somebody else rises to do the same thing.

Inherent in several of these is the idea of a second time dimension, in which there is a "before" the past was changed and an "after" the past was changed. In this second time dimension, it takes time (or rather time-2) for the changes to propagate. This is mainly there to give protagonists a chance to undo changes. We see Marty McFly slowly fade away until he gets his parents back together, and then instantly he's OK again.

In a time travel story, it is likely we will see cause follow effect, reversing normal causality. However, many writers take this as an excuse to throw all logic out the window. And almost all Hollywood SF inconsistently mixes up the various modes I describe above in one way or another.

Spoilers below for the first episode of FlashForward, and later for Deja Vu.

Update note: The fine folks at io9 asked FlashForward’s producers about the flaw I raise, but they are not as bothered by it.

On worldcon and convention design

The Worldcon (World Science Fiction Convention) in Montreal was enjoyable. Like all worldcons, which are run by fans rather than professional convention staff, it had its issues, but nothing too drastic. Our worst experience actually came from the Delta hotel, which I’ll describe below.

For the past few decades, Worldcons have been held in convention centers. They attract from 4,000 to 7,000 people and are generally felt not to fit in any ordinary hotel outside Las Vegas. (They don’t go to Las Vegas both because there is no large fan base there to run one, and because Las Vegas hotels, unlike those in most towns, have no incentive to offer a cut-rate deal on a summer weekend.)

Because they are always held where deals are to be had on hotels and convention space, it is not uncommon for them to get the entire convention center or a large portion of it. This turns out to be a temptation which most cons succumb to, but should not. The Montreal convention was huge and cavernous. It had little of the intimacy a mostly social event should have. Use of the entire convention center meant long walks and robbed the convention of a social center — a single place through which you could expect people to flow, so you would see your friends, join up for hallway conversations and gather people to go for meals.

This is one of those cases where less can be more. You should not take more space than you need. The convention should be as intimate as it can be without becoming crowded. That may mean deliberately not taking function space.

A social center is vital to a good convention. Unfortunately when there are hotels in multiple directions from the convention center so that people use different exits, it is hard for the crowd to figure one out. At the Montreal convention (Anticipation) the closest thing to such a center was near the registration desk, but it never really worked. At other conventions, anywhere on the path to the primary entrance works. Sometimes it is the lobby and bar of the HQ hotel, but this was not the case here.

When the social center will not be obvious, the convention should try to find the best one, and put up a sign saying it is the congregation point. In some convention centers, meeting rooms will be on a different floor from other function space, and so it may be necessary to have two meeting points, one for in-between sessions, and the other for general time.

The social center/meeting point is the one thing it can make sense to spend some space on. Expect a good fraction of the con to congregate there at break times. Let them form conversation groups (there should be sound-absorbing walls) but still be able to see and find other people in the space.

A good way to make a meeting point work is to put up the schedule there, ideally in a dynamic way. This can be computer screens showing the titles of the upcoming sessions, or even cards changed by hand. Anticipation used a giant schedule on the wall, which is also OK, though the other methods allow descriptions to go up with the names. Anticipation also did a roundly disliked “pocket” program printed on tabloid-sized paper, with two pages usually needed to cover a whole day. Nobody had a pocket it could fit in. In addition, there were many changes to the schedule and the online version was not updated. Again, this is a volunteer effort, so I expect some glitches like this to happen; they are par for the course.

Worldcon panel on BSG surprisingly negative

On Saturday I attended the Battlestar Galactica Postmortem panel at the World Science Fiction convention in Montreal. The “worldcon” is the top convention for serious fans of SF, with typically 4,000 to 6,000 attendees from around the world. There are larger (much larger) “media” conventions like Comic-Con and DragonCon, but the Worldcon is considered “it” for written SF. It gives out the Hugo award. While the fans at a worldcon do put an emphasis on written SF, they are also voracious consumers of media SF, and so there are many panels on it, and two Hugo awards for it.

Two things surprised me a bit about the Worldcon panel. First of all, it was much more lightly attended than I would have expected, considering the large fandom BSG built and how its high quality had particularly appealed to these sorts of fans. Secondly, it was more negative and bitter about the ending than I would have expected — and I was expecting quite a lot.

In fact, a few times audience members and panelists felt it necessary to encourage the crowd to stop just ranting about the ending and to talk about the good things. In spite of being so negative on the ending myself I found myself being one of those also trying to talk about the good stuff.

What was surprising was that while I still stand behind my own analysis, I know that in many online communities opinion on the ending is more positive. There are many who hate it but many who love it, and at least initially, more who loved it in some communities.

The answer may be that for the serious SF fan, the fan who looks to books as the source of the greatest SF, the BSG ending was the largest betrayal. Here we were hoping for a show that would bring some of the quality we seek in written SF to the screen, and here it fell down. Fans with a primary focus on movie and TV SF were much more tolerant of the ending since, as I noted, TV SF endings are almost never good anyway, and the show itself was a major cut above typical TV SF.

The small audience surprised me. I have seen other shows such as Buffy (which is not even SF), Babylon 5 and various forms of Star Trek still fill a room for discussion of the show. It is my contention that had BSG ended better, it would have joined this pantheon of great shows that maintain a strong fandom for decades.

The episode “Revelations,” where the ruined Earth is discovered, was nominated for the Hugo for best short dramatic program. It came in 4th — the winner was the highly unusual “Dr. Horrible’s Sing-Along Blog,” a web production from fan favourite Joss Whedon of Buffy and Firefly. BSG won a Hugo for the first episode “33” and has been nominated each year since then, but has failed to win each time, with a Doctor Who episode the winner in each case.

At the panel, the greatest source of frustration was the out-of-nowhere decision to abandon all technology, with Starbuck’s odd fate as #2. This matches the most common complaints I have seen online.

On another note, while normally Worldcon Hugo voters tend to go for grand SF books, this time the Best Novel award went to Neil Gaiman’s “The Graveyard Book.” Gaiman himself, in his acceptance speech, did the odd thing of declaring that he thought Anathem (which was also my choice) should have won. Anathem came 2nd or 3rd, depending on how you like to read STV ballot counting. Gaiman, however, was guest of honour at the convention, which attracted a huge number of Gaiman fans because of this, and that may have altered the voting. (Voting is done by convention members. Typically about 1,000 people will vote on best novel.)
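For those unfamiliar with preferential ballots, the reason a placement can be ambiguous is that lower places are found by removing the winner and recounting, so the final order need not match the raw first-choice totals. Here is a rough instant-runoff sketch in Python; the ballot data is invented for illustration, and real Hugo counting adds wrinkles (such as “No Award”) that this omits:

    from collections import Counter

    def irv_winner(ballots):
        # Repeatedly eliminate the candidate with the fewest first-choice
        # votes until one candidate holds a majority of remaining ballots.
        ballots = [list(b) for b in ballots if b]
        while True:
            tallies = Counter(b[0] for b in ballots)
            leader, votes = tallies.most_common(1)[0]
            if 2 * votes > sum(tallies.values()):
                return leader
            loser = min(tallies, key=lambda c: tallies[c])
            # Transfer the eliminated candidate's ballots to next choices.
            ballots = [[c for c in b if c != loser] for b in ballots]
            ballots = [b for b in ballots if b]

    def final_order(ballots, places):
        # Place N is the winner of a fresh count with places 1..N-1 removed.
        order = []
        for _ in range(places):
            remaining = [[c for c in b if c not in order] for b in ballots]
            order.append(irv_winner(remaining))
        return order

    # Invented example: five ranked ballots over three candidates.
    votes = [("A", "B"), ("B", "A"), ("B", "C"), ("C", "A"), ("C", "A")]
    print(final_order(votes, 3))   # -> ['B', 'C', 'A']

Note that C places second here even though C tied B on first-choice votes; which candidate lands where depends on the order of eliminations, which is why reasonable people can read the same count differently.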

Battlestar's "Daybreak:" The worst ending in the history of on-screen science fiction

Battlestar Galactica attracted a lot of fans and a lot of kudos during its run, and engendered this sub-blog about it. Here, in my final post on the ending, I present the case that its final hour was the worst ending in the history of science fiction on the screen. This is a condemnation of course, but also praise, because my message is not simply that the ending was poor, but that the show rose so high that it was able to fall so very far. I mean that it was the most disappointing ending ever.

(There are, of course, major spoilers in this essay.)

Other SF shows have ended very badly, to be sure. This is particularly true of TV SF. Indeed, it is in the nature of TV SF to end badly. First of all, it’s written in episodic form. Most great endings are planned from the start. TV endings rarely are. To make things worse, TV shows are usually ended when the show is in the middle of a decline. They are often the result of a cancellation, or sometimes a producer who realizes a cancellation is imminent. Quite frequently, the decline that led to cancellation can be the result of a creative failure on the show — either the original visionaries have gone, or they are burned out. In such situations, a poor ending is to be expected.

Sadly, I’m hard pressed to think of a TV SF series that had a truly great ending. That’s the sort of ending you might find in a great book or movie, the ending that caps the work perfectly, which solidifies things into a cohesive whole. Great endings will sometimes finally make sense out of everything, or reveal a surprise that, in retrospect, should have been obvious all along. I’m convinced that many of the world’s best endings came about when the writer actually worked out the ending first, and then wrote a story leading to that ending.

There have been endings that were better than the show. Star Trek: Voyager sank to dreadful depths in the middle of its run, and its mediocre ending was thus a step up. Among good SF/Fantasy shows, Quantum Leap, Buffy and The Prisoner stand out as having had decent endings. Babylon 5’s endings (plural) were good but, just as I praise Battlestar Galactica (BSG) by saying its ending sucked, Babylon 5’s endings were not up to the high quality of the show. (What is commonly believed to be B5’s original planned ending, written before the show began, might well have made the grade.)

Ron Moore’s goals

To understand the fall of BSG, one must examine it both in terms of more general goals for good SF, and the stated goals of the head writer and executive producer, Ronald D. Moore. The ending failed by both my standards (which you may or may not care about) but also his.

Moore began the journey by laying out a manifesto of how he wanted to change TV SF. He wrote an essay about Naturalistic science fiction in which he outlined some great goals and promises, which I will summarize here, in a slightly different order:

  • Avoiding SF clichés like time travel, mind control, god-like powers, and technobabble.
  • Keeping the science real.
  • Strong, real characters, avoiding the stereotypes of older TV SF. The show should be about them, not the hardware.
  • A new visual and editing style unlike what has come before, with a focus on realism.

Over time he expanded, modified and sometimes intentionally broke these rules. He allowed the ships to make sound in space after vowing they would not. He eschewed aliens in general. He increased his focus on characters, saying that his mantra in concluding the show was “it’s the characters, stupid.”

The link to reality

In addition, his other goal for the end was to make a connection to our real world, to let the audience see how the story of the characters related to our story. Indeed, the writers toyed with not destroying Galactica, instead leaving it buried on Earth and ending the show with the discovery of the ship in Central America. They rejected this ending because they felt it would violate our contemporary reality too quickly, and make it clear this was an alternate history. Moore felt an alternate universe was not sufficient.

The successes, and then failures

During its run, BSG offered much that was great, in several cases groundbreaking elements never seen before in TV SF:

  • Artificial minds in humanoid bodies who were emotional, sexual and religious.
  • Getting a general audience to understand the “humanity” of these machines.
  • Stirring space battles with much better concepts of space than typically found on TV. Bullets and missiles, not force-rays.
  • No bumpy-head aliens, no planet of the week, no cute time travel or alternate-reality-where-everybody-is-evil episodes.
  • Dark stories of interesting characters.
  • Multiple copies of the same being, beings programmed to think they were human, beings able to transfer their mind to a new body at the moment of death.
  • A mystery about the origins of the society and its legends, and a mystery about a lost planet named Earth.
  • A mystery about the origin of the Cylons and their reasons for their genocide.
  • Daring use of concepts like suicide bombing and terrorism by the protagonists.
  • Kick-ass leadership characters in Adama and Roslin who were complex, but neither over the top nor understated.
  • Starbuck as a woman. Before she became a toy of god, at least.
  • Baltar: One of the best TV villains ever, a self-centered slightly mad scientist who does evil without wishing to, manipulated by a strange vision in his head.
  • Other superb characters, notably Tigh, Tyrol, Gaeta and Zarek.

But it all came to a far lesser end due to the following failures I will outline in too much detail:

  • The confirmation/revelation of an intervening god as the driving force behind events
  • The use of that god to resolve large numbers of major plot points
  • A number of significant scientific mistakes on major plot points, including:
    • Twisting the whole story to fit a completely wrong idea of what Mitochondrial Eve is
    • To support that concept, an impossible-to-credit political shift among the characters
    • The use of concepts from Intelligent Design to resolve plot issues.
    • The introduction of the nonsense idea of “collective unconscious” to explain cultural similarities.
  • The use of “big secrets” to dominate what was supposed to be a character-driven story
  • Removing all connection to our reality by trying to build a poorly constructed one
  • Mistakes, one of them major and never corrected, which misled the audience

And then I’ll explain the reason why the fall was so great — how, until the last moments, a few minor differences could have fixed most of the problems.

Tales of the Michael Jackson lottery, eBay and security

I’ve been fascinated of late with the issue of eBay auctions of hot-hot items, like the PlayStation 3 and others. The story of the Michael Jackson memorial tickets is an interesting one.

17,000 tickets were given out as 8,500 pairs to winners chosen from 1.6 million online applications. Applicants had to give their name and address, and if they won, they further had to use or create a Ticketmaster account to get their voucher. They then had to take the voucher to Dodger Stadium in L.A. on Monday. (This was a dealbreaker even for honest winners from too far outside L.A., such as a Montreal flight attendant.) At the stadium, they had to present ID to show they were the winner, whereupon they were given 2 tickets (with random seat assignment) and two standard club security wristbands, one of which was affixed to their arm. They were told that if the one on the arm was damaged in any way, they would not get into the memorial. The terms indicated the tickets were non-transferable.
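For a sense of the raw odds, assuming one entry per person as the rules intended, those numbers work out to roughly half a percent per applicant; a quick check:

    applications = 1_600_000
    winning_pairs = 8_500

    p = winning_pairs / applications
    print(f"Chance of winning a pair: {p:.2%}")  # 0.53%, about 1 in 188

Odds that slim help explain why winners could count on plenty of eager buyers.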

Immediately a lot of people, especially winners from outside California, tried to sell tickets on eBay and Craigslist. In fact, even before the lottery results, people were listing something more speculative: “If I win the lottery, you pay me and you’ll get my tickets.” (One could enter the lottery directly of course, but buying such promises would increase your chances, as only one entry was allowed, in theory, per person.)

Both eBay and Craigslist had very strong policies against listing these tickets, and apparently had staff and software working regularly to remove listings. Listings on eBay mostly disappeared quickly, though some persisted for unknown reasons. Craigslist listings also vanished quickly, though some sellers were clever enough to put their phone numbers in their listing titles. On Craigslist, a deleted ad still shows up in the search summary for some time after the posting itself is gone.

There was a strong backlash by fans against the sellers. On both sites, ordinary users were regularly hitting the links to report inappropriate postings. In addition, a brand new phenomenon emerged on eBay — some users were deliberately placing 99 million dollar bids on any auction they found for tickets, eliminating any chance of further bidding. (See note) In the past that could earn you negative reputation, but eBay has removed negative reputation for buyers. It could also earn you a mark as a non-paying buyer, but in this case the seller is unable to file such a complaint, because their auction of the non-transferable ticket itself violates eBay’s terms.