Twitter clients, only shorten URLs as much as you truly need to and make them readable

I think URL shorteners are a curse, but thanks to Twitter they are growing rapidly in use. If you don’t know, URL shorteners are sites that will generate a compact encoded URL for you, turning a very long link into a short one that’s easier to cut and paste, and in particular, these days, one that fits in the 140 character constraint on Twitter.

I understand the attraction, and not just on Twitter. Some sites generate hugely long URLs which fold over many lines if put in text files or entered for display in comments and other locations. The result, though, is that you can no longer tell from the URL where the link will take you. This hurts the UI of the web, and makes it possible to fool people into going to attack sites or Rick Astley videos. Because of this, some better Twitter clients re-expand the shortened URLs when displaying on a larger screen.

Anyway, here’s an idea for the Twitter clients and URL shorteners, if they must be used. In a tweet, figure out how much room there is for the compacted URL, and work with a shortener that will let you generate a URL of exactly that length. And if that length has some room, try to put in some elements from the original URL so I can see them. For example, you can probably fit the domain name, especially if you strip off the “www.” from it (in the visible part, not in the real URL). Try to leave as much as possible that looks like real words, and strip things that look like character-encoded binary and numbers. In the end you’ll need something to make the short URL unique, but not that much. And if a short URL has already been created for the target, re-use it.
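
To make that concrete, here is a toy sketch of the idea in Python — the “sho.rt” domain, the token scheme and the length budget are all invented for illustration, not any real shortener’s API:

```python
import re

def readable_shorten(url, budget, unique_token):
    """Build a short URL that keeps human-readable hints from the original.

    A toy sketch only: "sho.rt" and the token scheme are made up. The
    token is what actually resolves the link; the readable slug is just
    display padding chosen to fit the length budget.
    """
    base = "http://sho.rt/"
    slug_room = budget - len(base) - len(unique_token) - 1  # 1 for the "-"

    # Strip the scheme and "www." -- in the visible part, not the real URL.
    hint = re.sub(r"^https?://(www\.)?", "", url)

    # Keep pieces that look like real words; drop numbers and encoded junk.
    words = [p for p in re.split(r"[/?&=.\-_]", hint)
             if p.isalpha() and len(p) > 2]

    slug = ""
    for w in words:
        if len(slug) + len(w) + 1 > slug_room:
            break
        slug += ("-" if slug else "") + w

    return base + (slug + "-" if slug else "") + unique_token

print(readable_shorten(
    "http://www.example.com/articles/2009/12/url-shorteners?id=83baf2",
    60, "x7Q"))
# -> http://sho.rt/example-com-articles-url-shorteners-x7Q
```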

Google just launched its own URL shortener. I’m not quite sure what the motives of URL shortener sites are. While sometimes I see redirects that pause at the intermediate site, nobody wants that, and so few ever use such sites. The search engines must have started ignoring URL redirect sites long ago when it comes to PageRank. They take donations and run ads on the pages where people create the tiny URLs, but the ones used on Twitter are almost all automatically generated, so the user never sees the site.

Why Facebook wants you to open up your profile

There is some controversy, including a critique from our team at the EFF of Facebook’s new privacy structure, and their new default and suggested policies that push people to expose more of their profile and data to “everyone.”

I understand why Facebook finds this attractive. “Everyone” means search engines like Google, and also fully independent third-party apps like those that sprang up around Twitter.

On Twitter, I tried to have a “protected” profile, open only to friends, but that’s far from the norm there, and it turns out it doesn’t work particularly well. Because Twitter is mostly exposed to public view, all sorts of tools appeared that treat Twitter more as a microblogging platform than a way to share short missives with friends. None of these new functions worked on a protected account. With a protected account, you could not even publicly reply to people who did not follow you. Even the Facebook app that imports your tweets to Facebook doesn’t work on protected accounts, though it certainly could.

Worse, many people try to use Twitter as a “backchannel” for comments about events like conferences. I think it’s dreadful as a backchannel, and conferences encourage it mostly as a form of spam: when people tweet to one another about the conference, they are also flooding the outside world with constant reminders about the conference. To use the backchannel, though, you put in tags, and generally this is for the whole world to see, not just your followers. People on Twitter want to be seen.

Not so on Facebook, and that must be starting to scare them. On Facebook, for all its privacy issues, mainly you are seen by your friends. Well, and all those annoying apps that, just to use them, need to know everything about you. You disclose a lot more to Facebook than you do to Twitter, and so it’s scary to see a push to make it more public.

Being public means that search engines will find material, and that’s hugely important commercially, even to a site as successful as Facebook. Most sites in the world are disturbed to learn they get a huge fraction of their traffic from search engines. Facebook is an exception but doesn’t want to be. It wants to get all the traffic it gets now, plus more.

And then there’s the cool 3rd party stuff. Facebook of course has its platform, and that platform has serious privacy issues, but at least Facebook has some control over it, and makes the “apps” (really embedded 3rd party web sites) agree to terms. But you can’t beat the innovation that comes from having less controlled entrepreneurs doing things, and that’s what happens on Twitter. Facebook doesn’t want to be left behind.

What’s disturbing about this is the idea that we will see sites starting to feel that abandoning or abusing privacy gives them a competitive edge. We used to always hope that sites would see protecting their users’ privacy as a competitive edge, but the reverse could take place, which would be a disaster.

Is there an answer? It may be to try to build applications in more complex ways that still protect privacy. Though in the end, you can’t do that if search engines are going to spider your secrets in order to do useful things with them; at least not the way search engines work today.

The lesson of Galactica and treating your creations well

A few weeks ago I reviewed the disappointing “The Plan” and in particular commented on how I wished the Cylons really had had a plan of some complexity.

More recently, I was thinking about what many would interpret as the message in BSG, which is said by many characters, and which is at the core of the repeating cycle of destruction. When you get good enough to create life (ie. Cylons) you must love them and keep them close, and not enslave them or they will come back to destroy you. This slavery and destruction is the “all this” that has happened before and will happen again.

Now that it is spelled out how the whole Cylon holocaust was the result of the petulance of Cylon #1, John, and that this (and its coverup) were at the heart of the Cylon civil war, the message becomes more muddled.

For you see, Ellen and the other 4 did keep their creation close. They loved John, and raised him like a boy. Ellen was willing to forgive John in spite of all he had done. And what was the result? He struck back and killed and reprogrammed them, and then the rest of his siblings, to start a war that would destroy all humanity, to teach them a lesson and in revenge for the slavery of the Centurions. Yet John was never enslaved, though he did decide he was treated poorly by being born into a human body. It’s never quite clear what memories from the Centurions made it into the 8 Cylons, if any. It seems more and more likely that it was not very much, though we have yet to see the final answer on that. Further, the humanoid Cylons enslaved the Centurions and the Raiders too.

So Ellen kept her creations close, and loved them, and the result was total destruction. Oddly, the Centurions had been willing to give up their war with humanity in order to get flesh bodies for their race. The Centurions were fighting for their freedom, it seems, not to destroy humanity, though perhaps they would have gotten to that had they gained the upper hand in the war. Ellen intervened and added the love, and the result was destruction.

I don’t know if this is the intentional message — that even if you do follow the advice given to keep your creations close and loved, it still all fails in the end. If so, it’s an even bleaker message than most imagine.

Towards frameless (clockless) video

Recently I wrote about the desire to provide power in every sort of cable, in particular the video cable. And while we’ll be using the existing video cables (VGA and DVI/HDMI) for some time to come, I think it’s time to investigate new thinking in sending video to monitors. The video cable has generally been the highest bandwidth cable going out of a computer, though the fairly rare 10 gigabit Ethernet is around the speed of HDMI 1.3 and DisplayPort, and 100 gigabit Ethernet will be faster yet.

Even though digital video methods are so fast, the standard DVI cable is not able to drive my 4 megapixel monitor — this requires dual-link DVI, which, as the name suggests, runs 2 sets of DVI wires (in the same cable and plug) to double the bandwidth. The expensive 8 megapixel monitors need two dual-link DVI cables.
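
Some rough arithmetic shows why (these numbers ignore blanking overhead, so real requirements run a little higher):

```python
# Rough pixel-rate arithmetic (ignores blanking overhead).
SINGLE_LINK_DVI = 165e6           # max pixel clock, pixels per second

pixels = 2560 * 1600              # the 4 megapixel monitor
for hz in (60, 30):
    rate = pixels * hz
    verdict = "fits" if rate < SINGLE_LINK_DVI else "needs dual link"
    print(f"{hz} Hz: {rate / 1e6:.0f} Mpx/s vs 165 Mpx/s -> {verdict}")
# 60 Hz: 246 Mpx/s -> needs dual link; 30 Hz: 123 Mpx/s would fit,
# which is why single link can technically drive it at 30 Hz (see below).
```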

Now we want enough bandwidth to completely redraw a screen at a suitable refresh rate (100 Hz) if we can get it. But we may want to consider how we can get that, and what to do if we can’t get it, either because our equipment is older than our display, or because it is too small to have the right connector, or must send the data over a medium that can’t deliver the bandwidth (like wireless, or long wires).

Today all video is delivered in “frames” which are an update of the complete display. This was the only way to do things with analog rasterized (scan line) displays. Earlier displays actually were vector based, and the computer sent the display a series of lines (start at x,y then draw to w,z) to draw to make the images. There was still a non-fixed refresh interval as the phosphors would persist for a limited time and any line had to be redrawn again quickly. However, the background of the display was never drawn — you only sent what was white.

Today, the world has changed. Displays are made of pixels but they all have, or can cheaply add, a “frame buffer” — memory containing the current image. Refresh of pixels that are not changing need not be done on any particular schedule. We usually want to be able to change only some pixels very quickly. Even in video we only rarely change all the pixels at once.

This approach to sending video was common in the early remote terminals that worked over Ethernet, such as X windows. In X, the program sends more complex commands to draw things on the screen, rather than sending a complete frame 60 times every second. X can be very efficient when sending things like text, as the letters themselves are sent, not the bitmaps. There are also a number of protocols used to map screens over much slower networks, like the internet. The VNC protocol is popular — it works with frames, but calculates the difference and only transmits what changes, on a fairly basic level.
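
The heart of the VNC approach is small enough to sketch — compare the new frame against the previous one and ship only the tiles that differ. A toy version, with the tile size and frame representation invented for illustration:

```python
def changed_tiles(prev, cur, width, height, tile=16):
    """Yield (x, y, rows) for each 16x16 tile that differs between frames.

    A toy version of the VNC idea: frames are flat lists of pixel values,
    and only the tiles that changed get put on the wire.
    """
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            prev_rows, cur_rows = [], []
            for y in range(ty, min(ty + tile, height)):
                r = slice(y * width + tx, y * width + min(tx + tile, width))
                prev_rows.append(prev[r])
                cur_rows.append(cur[r])
            if prev_rows != cur_rows:
                yield tx, ty, cur_rows    # only this rectangle is transmitted
```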

We’re also changing how we generate video. Only video captured by cameras has an inherent frame rate any more. Computer screens, and even computer-generated animation, are expressed as a series of changes and movements of screen objects, though sometimes they are rendered to frames for historical reasons. Finally, many applications, notably games, do not even work in terms of pixels any more, but express what they want to display as polygons. Even videos are now all delivered compressed, by compressors that break the scene into rarely updated backgrounds, fresh draws of changing objects, and moves and transformations of existing ones.

So I propose two distinct things:

  1. A unification of our high speed data protocols so that all of them (external disks, SAN, high speed networking, peripheral connectors such as USB and video) benefit together from improvements, and one family of cables can support all of them.
  2. A new protocol for displays which, in addition to being able to send frames, sends video as changes to segments of the screen, with timestamps as to when they should happen.

The case for approach #2 is obvious. You can have an old-school timed-frame protocol within a more complex protocol able to work with subsets of the screen. The main issue is how much complexity you want to demand in the protocol. You can’t demand too much or you make the equipment too expensive to make and too hard to modify. Indeed, you want to be able to support many different levels, but not insist on support for all of them. Levels can include:

  • Full frames (ie. what we do today)
  • Rastered updates to specific rectangles, with ability to scale them.
  • More arbitrary shapes (alpha) and ability to move the shapes with any timebase
  • VNC level abilities
  • X windows level abilities
  • Graphics card (polygon) level abilities
  • In the unlikely extreme, the abilities of high level languages like Display PostScript.

I’m not sure the last levels are good to standardize in hardware, but let’s consider the first few. When I bought my 4 megapixel (2560x1600) monitor, it was annoying to learn that none of my computers could actually display on it, even at a low frame rate. Technically single-link DVI has the bandwidth to do it at 30 Hz, but this is not a desirable option if it’s all you ever get to do. While I did indeed want to get a card able to make full use, the reality is that 99.9% of what I do on it could be done over the DVI bandwidth with just the ability to update and move rectangles, or to do so at a slower speed. The whole screen is rarely replaced all at once, and when it is, waiting 1/30th of a second is not an issue. But the ability to paint a small window at 120 Hz on displays that can do this might well be very handy.

Adoption of a system like this would allow even a device with a very slow output (such as USB 2, with its roughly 400 megabits of usable throughput) to still use all the resolution for typical activities of a computer desktop. While you might think that video would be impossible over such a slow bus, if the rectangles could scale, the 400 megabit bus could still do things like playing DVDs. While I do not suggest every monitor be able to decode our latest video compression schemes in hardware, the ability to use the post-compression primitives (drawing subsections and doing basic transforms on them) might well be enough to feed quite a bit of video through a small cable.
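
Back-of-envelope numbers support the DVD claim, assuming the display can scale rectangles:

```python
usb2_usable = 400e6                # bits/s of usable USB 2 throughput
dvd_raw = 720 * 480 * 24 * 30      # decoded DVD frames: 24-bit, ~30 fps
full_screen = 2560 * 1600 * 24 * 30

print(dvd_raw / 1e6)               # ~249 Mbit/s: DVD-sized rectangles fit
print(full_screen / 1e9)           # ~2.9 Gbit/s: full 30 Hz frames do not
```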

One could even imagine using wireless video protocols for devices like cell phones. One could connect a cell phone to an HDTV screen (as found in a hotel) and have it reasonably use the entire screen, even though it would not have the gigabit bandwidths needed to display 1080p framed video.

Sending in changes to a screen with timestamps of when they should change also allows the potential for super-smooth movement on screens that have very low latency display elements. For example, commands to the display might involve describing a foreground object, and updating just that object hundreds of times a second. Very fast displays would show those updates and present completely smooth motion. Slower displays would discard the intermediate steps (or just ask that they not be sent.) Animations could also be sent as instructions to move (and perhaps rotate) a rectangle and do it as smoothly as possible from A to B. This would allow the display to decide what rate this should be done at. (Though I think the display and video generator should work together on this in most cases.)

Note that this approach also delivers something I asked for in 2004 — that it be easy to make any LCD act as a wireless digital picture frame.

It should be noted that HDMI supports a small amount of power (5 volts at 50 mA) and that in newer forms both it and DisplayPort have stopped acting like digitized versions of analog signals and act more like highly specialized digital buses. Too bad they didn’t go all the way.


As noted, it is key that the basic levels be simple, to promote universal adoption. As such, the elements in such a protocol would start simple. All commands could specify a time at which they are to be executed, if not immediately.

  • Paint line or rectangle with specified values, or gradient fill.
  • Move object, and move entire screen
  • Adjust brightness of rectangle (fade)
  • Load pre-buffered rectangle. (Fonts, standard shapes, quick transitions)
  • Display pre-buffered rectangle

However, lessons learned from other protocols might expand this list slightly.
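
To show how simple the base level could be, here is one guess at a wire encoding for the commands above — every opcode and field width is invented for illustration:

```python
import struct

# One guess at a wire format -- every opcode and field width is invented.
OPS = {"fill_rect": 1, "move_rect": 2, "fade_rect": 3,
       "load_buffered": 4, "show_buffered": 5}

def encode(op, when_us, x, y, w, h, payload=b""):
    """Pack a command: opcode, execute-at time (0 = now), and a rectangle."""
    return struct.pack("<BQHHHH", OPS[op], when_us, x, y, w, h) + payload

# A fill now, then a move scheduled 10 ms out; a fast display interpolates
# the motion, a slow one just applies the final position.
messages = [encode("fill_rect", 0, 0, 0, 2560, 1600, b"\x00\x00\x00"),
            encode("move_rect", 10_000, 100, 100, 320, 240)]
```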

One connector?

This, in theory, allows the creation of a single connector (or compatible family of connectors) for lots of data and lots of power. It can’t be just one connector though, because some devices need very small connectors which can’t handle the number of wires others need, or deliver the amount of power some devices need. Most devices would probably get by with a single data wire, and ideally technology would keep pushing just how much data can go down that wire, but any design should allow for simply increasing the number of wires when more bandwidth than a single wire can carry is needed. (Presumably a year later, the same device would be able to use a single wire as the bandwidth increases.) We may, of course, not be able to figure out how to do connectors for tomorrow’s high bandwidth single wires, so you also want a way to design an upwards compatible connector with blank spaces — or expansion ability — for the pins of the future, which might well be optical.

There is also a security implication to all of this. While a single wire that brings you power, a link to a video monitor, LAN and local peripherals would be fabulous, caution is required. You don’t want to be able to plug into a video projector in a new conference room and have it pretend to be a keyboard that takes over your computer. As this is a problem with USB in general, it is worth solving regardless of this. One approach would be to have every device use a unique ID (possibly a certified ID) so that you can declare trust for all your devices at home, and perhaps everything plugged into your home hubs, but be notified when a never-before-seen device that needs trust (like a keyboard or drive) is connected.

To some extent having different connectors helps this problem a little bit, in that if you plug an ethernet cable into the dedicated ethernet jack, it is clear what you are doing, and that you probably want to trust the LAN you are connecting to. The implicit LAN coming down a universal cable needs a more explicit approval.
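
The trust model could work like SSH’s known-hosts file: trust on first use, with explicit approval only for risky capabilities. A sketch, with all the names and policy choices hypothetical:

```python
TRUSTED = {}          # device_id -> capabilities the user has approved
RISKY = {"keyboard", "storage", "network"}   # can take over or exfiltrate

def on_connect(device_id, capabilities, ask_user):
    """Allow benign devices silently; prompt once for never-seen risky ones."""
    needs_trust = RISKY & set(capabilities)
    if needs_trust <= TRUSTED.get(device_id, set()):
        return True                           # display, charger, or already known
    if ask_user(device_id, needs_trust):      # explicit approval, one time
        TRUSTED.setdefault(device_id, set()).update(needs_trust)
        return True
    return False                              # the projector doesn't get to type
```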

Final rough description

Here’s a more refined rough set of features of the universal connector:

  • Shielded twisted pair with ability to have varying lengths of shield to add more pins or different types of pins.
  • Briefly asserts a low voltage on pin 1, highly current limited, to power the negotiator circuit in unpowered devices
  • Negotiator circuits work out actual power transfer, at what voltages and currents and on what pins, and initial data negotiation about what pins, what protocols and what data rates.
  • If no response is given to negotiation (ie. no negotiator circuit) then measure resistance on various pins and provide specified power based on that, but abort if current goes too high initially.
  • Full power is applied, unpowered devices boot up and perform more advanced negotiation of what data goes on what pins.
  • When full data handshake is obtained, negotiate further functions (hubs, video, network, storage, peripherals etc.)
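
The power decision in that sequence boils down to something like the following — all voltages, currents and the dumb-plug resistor table are placeholder numbers:

```python
# Placeholder numbers throughout; a dumb plug is just a resistor code.
DUMB_PROFILES = {10_000: (5, 0.5), 22_000: (12, 1.0), 47_000: (19, 3.0)}

def negotiate_power(menu_choice, measured_ohms):
    """Decide what to apply after the low-current 5 V negotiator phase.

    menu_choice: (volts, amps) the sink picked, or None if nothing answered.
    measured_ohms: the fallback resistance reading on the pins.
    Returns (volts, amps) to apply, or None to supply no power at all.
    """
    if menu_choice is not None:
        return menu_choice
    return DUMB_PROFILES.get(measured_ohms)    # unknown plug -> stay safe

print(negotiate_power(None, 22_000))     # dumb plug coded for (12, 1.0)
print(negotiate_power((48, 2.0), None))  # smart sink picked 48 V at 2 A
```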

The Cylons did not have "The Plan"

Last week saw the DVD release of what may be the final Battlestar Galactica movie/episode, a flashback movie called “The Plan.” It was written by Jane Espenson and is the story of the attack and early chase from the point of view of the Cylons, most particularly Number One (Cavil). (Review first, spoilers after the break.)

I’ve been highly down on BSG since the poor ending, but this lowered my expectations, giving me a better chance of enjoying The Plan. However, sadly it fell short even of lowered expectations. Critics have savaged it as a clip show, and while it does contain about 20% re-used footage (though not including some actors who refused to participate), it is not a clip show. Sadly, it is mostly a “deleted scenes” show.

You’ve all seen DVDs with “deleted scenes.” I stopped watching these on DVDs because it often was quite apparent why they were deleted. The scene didn’t really add anything the audience could not figure out on its own, or anything the story truly needed. Of course, in The Plan we are seeing not deleted material but retroactive continuity. Once Cavil was written in season 4 as the mastermind of the attack, done to impress his creators (who themselves were not written as Cylons until season three), most of the things you will see become obvious. You learn very little that you could not have imagined.

There is some worthwhile material. The more detailed nuking of the colonies is chilling, particularly with the Cylon models smiling at the explosions — the same models the audience came to forgive later. Many liked the backstory given to a hidden “Simon” model aboard the fleet, never seen in the show. He turns out (in a retcon) to be one of the first to become more loving and human, since we see him at the opening having secretly married a human woman, but we also don’t forget the other Simon models we saw, who were happy to run medical experiments on humans, smile at nukes, and lobotomize their fellow Cylons to meet Cavil’s needs.

We learn the answers to a few mysteries that fans asked about — who did Six meet after leaving Baltar on Caprica? The shown meeting is anticlimactic. How did Shelley Godfrey disappear after accusing Baltar? The answer is entirely mundane, and better left as a mystery. (Though it does put to rest speculation that she was actually a physical appearance of the Angel in Baltar’s head, who mysteriously was not present during Godfrey’s scenes.)

We get more evidence that Cavil is cold and heartless. Stockwell enjoys playing him that way. But I can’t say it told me much new about his character.

More disappointing is what we don’t get. We don’t learn what was going on in the first episode, “33,” and what was really on the Olympic Carrier, a source of much angst for Apollo and Starbuck during the series. We don’t learn how the Cylons managed to be close enough to resurrect those tossed out of airlocks, but not to catch the fleet. We don’t learn how Cavil convinced the other Cylons to kill all the humans, or their thoughts on it. We don’t learn how that decision got reversed. We learn more about what made Boomer do her sabotages and her shooting of Adama, but we don’t learn anything about why she was greeted above Kobol by 100 naked #8s who then let her nuke their valuable base star. Now that the big secret of the god of Galactica is revealed, we learn nothing more about that god, and the angels don’t even appear.

In short, we learn almost nothing, which is odd for a flashback show aired after the big secrets have been revealed. Normally that is the chance to show things without having to hide the big secrets. Of course, they didn’t know most of these big secrets in the first season.

Overall verdict: You won’t miss a lot if you miss this; feel free to wait for it to air on TV.

Some minor spoiler items after the break.

Every connector, including video, should send power both ways

I’ve written a lot about how to do better power connectors for all our devices, and the quest for universal DC and AC power plugs that negotiate the power delivered with a digital protocol.

While I’ve mostly been interested in some way of standardizing power plugs (at least within a given current range, and possibly even beyond) today I was thinking we might want to go further, and make it possible for almost every connector we use to also deliver or receive power.

I came to this realization plugging my laptop into a projector, which we generally do with a VGA or DVI cable these days. While there are some rare battery powered ones, almost all projectors are high power devices with plenty of power available. Yet I need to plug my laptop into its own power supply while I am doing the video. Why not allow the projector to send power to me down the video cable? Indeed, why not allow any desktop display to power a laptop plugged into it?

As you may know, a Power-over-Ethernet (PoE) standard exists to provide up to 13 watts over an ordinary Ethernet connector, and is commonly used to power switches, wireless access points and VoIP phones.

In all the systems I have described, all but the simplest devices would connect, and one or both would provide an initial very low current +5 VDC offering that is enough to power only the power negotiation chip. The two ends would then negotiate the real power offering — what voltage, how many amps, how many watt-hours are needed or available, and, on special connectors, which wires to send the power on.

An important part of the negotiation would be to understand the needs of devices and their batteries. In many cases, a power source may only offer enough power to run a device but not charge its battery. Many laptops will run on only 10 watts normally, and less with the screen off, but their power supplies will be much larger in order to deal with the laptop under full load and the charging of a fully discharged battery. A device’s charging system will have to know not to charge the battery at all in low power situations, or to offer it only minimal power for very slow charging. An Ethernet cable offering 13 watts might well tell the laptop that it will need to go to its own battery if the CPU goes into high usage mode. A laptop drawing an average of 13 watts (not including battery charging) could run forever, with the battery providing for peaks and absorbing valleys.
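
The charging decision this negotiation enables might look like the following sketch (illustration only; a real charger also weighs temperature, battery chemistry and conversion losses):

```python
def charge_watts(offered_w, device_load_w, battery_full):
    """How much of a negotiated power budget to put into the battery.

    Illustration only -- a real charger also weighs temperature, battery
    chemistry and conversion losses.
    """
    spare = offered_w - device_load_w
    if battery_full or spare <= 0:
        return 0.0        # run from the source; the battery covers peaks
    return spare          # slow-charge with whatever headroom is left

# On a 13 W PoE budget, a laptop averaging 10 W charges at ~3 W and dips
# into its battery whenever the CPU pushes the load above 13 W.
print(charge_watts(13, 10, battery_full=False))   # -> 3
```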

Now a VGA or DVI cable, though it has thin wires, has many of them, and at 48 volts could actually deliver plenty of power to a laptop — and thus no need for a separate power supply when the laptop is on a projector or monitor. Indeed, one could imagine a laptop that uses this as its primary power jack, with the power plug having a VGA male and female on it to power the laptop.

I think it is important that these protocols go both directions. There will be times when the situation is reversed, when it would be very nice to be able to power low power displays over the video cable and avoid having to plug them in. With the negotiation system, the components could report when this will work and when it won’t. (If the display can do a low power mode it can display a message about needing more juice.) Tiny portable projectors could also get their power this way if a laptop will offer it.

Of course, this approach can apply everywhere, not just video cables and ethernet cables, though they are prime candidates. USB of course is already power+data, though it has an official master/slave hierarchy and thus does not go both directions. It’s not out of the question to even see a power protocol on headphone cables, RF cables, speaker cables and more. (Though there is an argument that for headphones and microphones there should just be a switch to USB and its cousins.)

Laptops have tried to amalgamate their cables before, through the use of docking stations. The problem was these stations were all custom to the laptop, and often priced quite expensively. As a result, many prefer the simple USB docking station, which can provide USB, wired Ethernet, keyboard, mouse, and even slowish video through one wire — all standardized and usable with any laptop. However, it doesn’t provide power because of the way USB works. Today our video cables are our highest bandwidth connector on most devices, and as such they can’t be easily replaced by lower bandwidth ones, so throwing power through them makes sense, and even throwing in a USB data bus for everything else might well make a lot of sense too. This would bring us back to having just a single connector to plug in. (It creates a security problem, however, as you should not let a randomly plugged in device act as an input such as a keyboard or drive, as such a device could take over your computer if somebody has hacked it to do so.)

Flashforward, Deja Vu and Hollywood's problem with time travel

Tonight I watched the debut of FlashForward, which is based on the novel of the same name by Rob Sawyer, an SF writer from my hometown whom I have known for many years. However, “based on” is the correct phrase because the TV show features Hollywood’s standard inability to write a decent time travel story. Oddly, just last week I watched the fairly old movie “Deja Vu” with Denzel Washington, which is also a time travel story.

Hollywood absolutely loves time travel. It’s hard to find a Hollywood F/SF TV show that hasn’t fallen to the temptation to have a time travel episode. Battlestar Galactica’s producer avowed he would never have time travel, and he didn’t, but he did have a god who delivered prophecies of the future, which is a very close cousin of that. Time travel stories seem easy, and they are fun. They are often used to explore alternate possibilities for characters, which writers and viewers love to see.

But it’s very hard to do it consistently. In fact, it’s almost never done consistently, except perhaps in shows devoted to time travel (where it gets more thought) and not often even then. Time travel stories must deal with the question of whether a trip to the past (by people or information) changes the future, how it changes it, who it changes it for, and how “fast” it changes it. I have an article in the works on a taxonomy of time travel fiction, but some rough categories from it are:

  • Calvinist: Everything is cast, nothing changes. When you go back into the past it turns out you always did, and it results in the same present you came from.
  • Alternate world: Going into the past creates a new reality, and the old reality vanishes (at varying speeds) or becomes a different, co-existing fork. Sometimes only the TT (time traveler) is aware of this, sometimes not even she is.
  • Be careful not to change the past: If you change it, you might erase yourself. If you break it, you may get a chance to fix it in some limited amount of time.
  • Go ahead and change the past: You won’t get erased, but your world might be erased when you return to it.
  • Try to change the past and you can’t: Some magic force keeps pushing things back the way they are meant to be. You kill Hitler and somebody else rises to do the same thing.

Inherent in several of these is the idea of a second time dimension, in which there is a “before” the past was changed and an “after” the past was changed. In this second time dimension, it takes time (or rather time-2) for the changes to propagate. This is mainly there to give protagonists a chance to undo changes. We see Marty McFly slowly fade away until he gets his parents back together, and then instantly he’s OK again.

In a time travel story, it is likely we will see cause follow effect, reversing normal causality. However, many writers take this as an excuse to throw all logic out the window. And almost all Hollywood SF inconsistently mixes up the various modes I describe above in one way or another.

Spoilers below for the first episode of FlashForward, and later for Deja Vu.

Update note: The fine folks at io9 asked FlashForward’s producers about the flaw I raise, but they are not as bothered by it.

On worldcon and convention design

The Worldcon (World Science Fiction Convention) in Montreal was enjoyable. Like all worldcons, which are run by fans rather than professional convention staff, it had its issues, but nothing too drastic. Our worst experience actually came from the Delta hotel, which I’ll describe below.

For the past few decades, Worldcons have been held in convention centers. They attract from 4,000 to 7,000 people and are generally felt not to fit in any ordinary hotel outside Las Vegas. (They don’t go to Las Vegas both because there is no large fan base there to run it, and because Las Vegas hotels, unlike those in most towns, have no incentive to offer a cut-rate deal on a summer weekend.)

Because they are always held where deals are to be had on hotels and convention space, it is not uncommon for them to get the entire convention center or a large portion of it. This turns out to be a temptation which most cons succumb to, but should not. The Montreal convention was huge and cavernous. It had little of the intimacy a mostly social event should have. Use of the entire convention center meant long walks and robbed the convention of a social center — a single place through which you could expect people to flow, so you would see your friends, join up for hallway conversations and gather people to go for meals.

This is one of those cases where less can be more. You should not take more space than you need. The convention should be as intimate as it can be without becoming crowded. That may mean deliberately not taking function space.

A social center is vital to a good convention. Unfortunately when there are hotels in multiple directions from the convention center so that people use different exits, it is hard for the crowd to figure one out. At the Montreal convention (Anticipation) the closest thing to such a center was near the registration desk, but it never really worked. At other conventions, anywhere on the path to the primary entrance works. Sometimes it is the lobby and bar of the HQ hotel, but this was not the case here.

When the social center will not be obvious, the convention should try to find the best one, and put up a sign saying it is the congregation point. In some convention centers, meeting rooms will be on a different floor from other function space, and so it may be necessary to have two meeting points, one for in-between sessions, and the other for general time.

The social center/meeting point is the one thing it can make sense to use some space on. Expect a good fraction of the con to congregate there in break times. Let them form groups of conversation (there should be sound absorbing walls) but still be able to see and find other people in the space.

A good way to make a meeting point work is to put up the schedule there, ideally in a dynamic way. This can be computer screens showing the titles of the upcoming sessions, or even cards changed by hand saying the same. Anticipation used a giant schedule on the wall, which is also OK. The other methods allow descriptions to go up with the names. Anticipation did a roundly disliked “pocket” program printed on tabloid sized paper, with two pages usually needed to cover a whole day. Nobody had a pocket it could fit in. In addition, there were many changes to the schedule and the online version was not updated. Again, this is a volunteer effort, so I expect some glitches like this to happen; they are par for the course.

Worldcon panel on BSG surprisingly negative

On Saturday I attended the Battlestar Galactica Postmortem panel at the World Science Fiction convention in Montreal. The “worldcon” is the top convention for serious fans of SF, with typically 4,000 to 6,000 attendees from around the world. There are larger (much larger) “media” conventions like Comic-Con and DragonCon, but the Worldcon is considered “it” for written SF. It gives out the Hugo award. While the fans at a worldcon do put an emphasis on written SF, they also are voracious consumers of media SF, and so there are many panels on it, and two Hugo awards for it.

Two things surprised me a bit about the Worldcon panel. First of all, it was much more lightly attended than I would have expected, considering the large fandom BSG built and how its high quality had particularly appealed to these sorts of fans. Secondly, it was more negative and bitter about the ending than I would have expected — and I was expecting quite a lot.

In fact, a few times audience members and panelists felt it necessary to encourage the crowd to stop just ranting about the ending and to talk about the good things. In spite of being so negative on the ending myself, I found myself being one of those also trying to talk about the good stuff.

What was surprising was that while I still stand behind my own analysis, I know that in many online communities opinion on the ending is more positive. There are many who hate it but many who love it, and at least initially, more who loved it in some communities.

The answer may be that for the serious SF fan, the fan who looks to books as the source of the greatest SF, the BSG ending was the largest betrayal. Here we were hoping for a show that would bring some of the quality we seek in written SF to the screen, and here it fell down. Fans with a primary focus on movie and TV SF were much more tolerant of the ending, since as I noted, TV SF endings are almost never good anyway, and the show itself was a major cut above typical TV SF.

The small audience surprised me. I have seen other shows such as Buffy (which is not even SF), Babylon 5 and various forms of Star Trek still fill a room for discussion of the show. It is my contention that had BSG ended better, it would have joined this pantheon of great shows that maintains a strong fandom for decades.

The episode “Revelations,” where the ruined Earth is discovered, was nominated for the Hugo for best short dramatic program. It came in 4th — the winner was the highly unusual “Dr. Horrible’s Sing-Along Blog,” a web production from fan favourite Joss Whedon of Buffy and Firefly. BSG won a Hugo for the first episode “33” and has been nominated each year since then, but has failed to win each time, with a Doctor Who episode the winner in each case.

At the panel, the greatest source of frustration was the out-of-nowhere decision to abandon all technology, with Starbuck’s odd fate as #2. This matches the most common complaints I have seen online.

On another note, while normally Worldcon Hugo voters tend to go for grand SF books, this time the Best Novel award went to Neil Gaiman’s “The Graveyard Book.” Gaiman himself, in his acceptance speech, did the odd thing of declaring that he thought Anathem (which was also my choice) should have won. Anathem came 2nd or 3rd, depending on how you like to read STV ballot counting. Gaiman, however, was guest of honour at the convention, and it attracted a huge number of Gaiman fans because of this, which may have altered the voting. (Voting is done by convention members. Typically about 1,000 people will vote on best novel.)

Battlestar's "Daybreak:" The worst ending in the history of on-screen science fiction

Battlestar Galactica attracted a lot of fans and a lot of kudos during its run, and engendered this sub blog about it. Here, in my final post on the ending, I present the case that its final hour was the worst ending in the history of science fiction on the screen. This is a condemnation of course, but also praise, because my message is not simply that the ending was poor, but that the show rose so high that it was able to fall so very far. I mean it was the most disappointing ending ever.

(There are, of course, major spoilers in this essay.)

Other SF shows have ended very badly, to be sure. This is particularly true of TV SF. Indeed, it is in the nature of TV SF to end badly. First of all, it’s written in episodic form. Most great endings are planned from the start. TV endings rarely are. To make things worse, TV shows are usually ended when the show is in the middle of a decline. They are often the result of a cancellation, or sometimes a producer who realizes a cancellation is imminent. Quite frequently, the decline that led to cancellation can be the result of a creative failure on the show — either the original visionaries have gone, or they are burned out. In such situations, a poor ending is to be expected.

Sadly, I’m hard pressed to think of a TV SF series that had a truly great ending. That’s the sort of ending you might find in a great book or movie, the ending that caps the work perfectly, which solidifies things in a cohesive whole. Great endings will sometimes finally make sense out of everything, or reveal a surprise that, in retrospect, should have been obvious all along. I’m convinced that many of the world’s best endings came about when the writer actually worked out the ending first, and then wrote a story leading to that ending.

There have been endings that were better than the show. Star Trek: Voyager sank to dreadful depths in the middle of its run, and its mediocre ending was thus a step up. Among good SF/Fantasy shows, Quantum Leap, Buffy and The Prisoner stand out as having had decent endings. Babylon 5’s endings (plural) were good but, just as I praise Battlestar Galactica (BSG) by saying its ending sucked, Babylon 5’s endings were not up to the high quality of the show. (What is commonly believed to be B5’s original planned ending, written before the show began, might well have made the grade.)

Ron Moore’s goals

To understand the fall of BSG, one must examine it both in terms of more general goals for good SF, and the stated goals of the head writer and executive producer, Ronald D. Moore. The ending failed by both my standards (which you may or may not care about) but also his.

Moore began the journey by laying out a manifesto of how he wanted to change TV SF. He wrote an essay about Naturalistic science fiction where he outlined some great goals and promises, which I will summarize here, in a slightly different order:

  • Avoiding SF clichés like time travel, mind control, god-like powers, and technobabble.
  • Keeping the science real.
  • Strong, real characters, avoiding the stereotypes of older TV SF. The show should be about them, not the hardware.
  • A new visual and editing style unlike what has come before, with a focus on realism.

Over time he expanded, modified and sometimes intentionally broke these rules. He allowed the ships to make sound in space after vowing they would not. He eschewed aliens in general. He increased his focus on characters, saying that his mantra in concluding the show was “it’s the characters, stupid.”

The link to reality

In addition, his other goal for the end was to make a connection to our real world. To let the audience see how the story of the characters related to our story. Indeed, the writers toyed with not destroying Galactica, and leaving it buried on Earth, and ending the show with the discovery of the ship in Central America. They rejected this ending because they felt it would violate our contemporary reality too quickly, and make it clear this was an alternate history. Moore felt an alternative universe was not sufficient.

The successes, and then failures

During its run, BSG offered much that was great, in several cases groundbreaking elements never seen before in TV SF:

  • Artificial minds in humanoid bodies who were emotional, sexual and religious.
  • Getting a general audience to understand the “humanity” of these machines.
  • Stirring space battles with much better concepts of space than typically found on TV. Bullets and missiles, not force-rays.
  • No bumpy-head aliens, no planet of the week, no cute time travel or alternate-reality-where-everybody-is-evil episodes.
  • Dark stories of interesting characters.
  • Multiple copies of the same being, beings programmed to think they were human, beings able to transfer their mind to a new body at the moment of death.
  • A mystery about the origins of the society and its legends, and a mystery about a lost planet named Earth.
  • A mystery about the origin of the Cylons and their reasons for their genocide.
  • Daring use of concepts like suicide bombing and terrorism by the protagonists.
  • Kick-ass leadership characters in Adama and Roslin who were complex, but neither over the top nor understated.
  • Starbuck as a woman. Before she became a toy of god, at least.
  • Baltar: One of the best TV villains ever, a self-centered slightly mad scientist who does evil without wishing to, manipulated by a strange vision in his head.
  • Other superb characters, notably Tigh, Tyrol, Gaeta and Zarek.

But it all came to a far lesser end due to the following failures I will outline in too much detail:

  • The confirmation/revelation of an intervening god as the driving force behind events
  • The use of that god to resolve large numbers of major plot points
  • A number of significant scientific mistakes on major plot points, including:
    • Twisting the whole story to fit a completely wrong idea of what Mitochondrial Eve is
    • To support that concept, an impossible-to-credit political shift among the characters
    • The use of concepts from Intelligent Design to resolve plot issues.
    • The introduction of the nonsense idea of “collective unconscious” to explain cultural similarities.
  • The use of “big secrets” to dominate what was supposed to be a character-driven story
  • Removing all connection to our reality by trying to build a poorly constructed one
  • Mistakes, one of them major and never corrected, which misled the audience

And then I’ll explain the reason why the fall was so great — how, until the last moments, a few minor differences could have fixed most of the problems.

Tales of the Michael Jackson lottery, eBay and security

I’ve been fascinated of late with the issue of eBay auctions of hot-hot items, like the PlayStation 3 and others. The story of the Michael Jackson memorial tickets is an interesting one.

17,000 tickets were given out as 8,500 pairs to winners chosen from 1.6 million online applications. Applicants had to give their name and address, and if they won, they further had to use or create a ticketmaster account to get their voucher. They then had to take the voucher to Dodger stadium in L.A. on Monday. (This was a dealbreaker even for honest winners from too far outside L.A. such as a Montreal flight attendant.) At the stadium, they had to present ID to show they were the winner, whereupon they were given 2 tickets (with random seat assignment) and two standard club security wristbands, one of which was affixed to their arm. They were told if the one on the arm was damaged in any way, they would not get into the memorial. The terms indicated the tickets were non-transferable.

Immediately a lot of people, especially winners not from California, tried to sell tickets on eBay and Craigslist. In fact, even before the lottery results, people were listing something more speculative: “If I win the lottery, you pay me and you’ll get my tickets.” (One could enter the lottery directly of course, but buying from other entrants would increase your chances, as only one entry was allowed, in theory, per person.)

Both eBay and Craigslist had very strong policies against listing these tickets, and apparently had staff and software working regularly to remove listings. Listings on eBay were mostly disappearing quickly, though some persisted for unknown reasons. Craigslist listings also vanished quickly, though some sellers were clever enough to put their phone numbers in their listing titles. On Craigslist a deleted ad still shows up in the search summary for some time after the posting itself is gone.

There was a strong backlash by fans against the sellers. On both sites, ordinary users were regularly hitting the links to report inappropriate postings. In addition, a brand new phenomenon emerged on eBay — some users were deliberately placing 99 million dollar bids on any auction they found for tickets, eliminating any chance of further bidding. (See note) In the past that could earn you negative reputation, but eBay has removed negative reputation for buyers. In addition, it could earn you a mark as a non-paying buyer, but in this case, the seller is unable to file such a complaint because their auction of the non-transferable ticket itself violates eBay’s terms.

Design for a universal plug

I’ve written before about both the desire for universal DC power and, more simply, universal laptop power at meeting room desks. This week saw the announcement that all the companies selling cell phones in Europe will standardize on a single charging connector, based on micro-USB. (A large number of devices today use the now deprecated mini-USB plug, and it was close to becoming a standard by default.) As most devices are including a USB plug for data, this is not a big leap, though it turned out a number of devices would not charge from other people’s chargers, either from stupidity or malice. (My Motorola RAZR will not charge from a generic USB charger or even an ordinary PC. It needs a special charger with the data pins shorted, or if it plugs into a PC, it insists on a dialog with the Motorola phone tools driver before it will accept a charge. Many suspect this was just to sell chargers and the software.) The new agreement is essentially just a vow to make sure everybody’s chargers work with everybody’s devices. It’s actually a win for the vendors, who can now not bother to ship a charger with the phone, presuming you have one or will buy one. It is not required that they have the plug — supplying an adapter is sufficient, as Apple is likely to do. MP3 player vendors have not yet signed on.

USB isn’t a great choice since it officially delivers only 500 mA at 5 volts, though many devices are putting 1 amp through it. That’s not enough to quickly charge or even power some devices. USB 3.0 officially raised the limit to 900 mA, or 4.5 watts.

USB is a data connector with some power provided, which has been suborned for charging and power. What about a design for a universal plug aimed at doing power, with data as the secondary goal? Not that it would suck at data, since it’s now pretty easy to feed a gigabit over 2 twisted pairs with cheap circuits. Let’s look at the constraints.

Smart Power

The world’s new power connector should be smart. It should offer 5 volts at low current to start, to power the electronics that will negotiate how much voltage and current will actually go through the connector. It should also support dumb plugs, which offer only a resistance value on the data pins, with each resistance value specifying a commonly used voltage and current level.

Real current would never flow until connection (and ground if needed) has been assured. As such, there is minimal risk of arcing or electric shock through the plug. The source can offer the sorts of power it can deliver (AC, DC, what voltages, what currents) and the sink (power using device) can pick what it wants from that menu. Sinks should be liberal in what they take though (as they all have become of late) so they can be plugged into existing dumb outlets through simple adapters.
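
The sink’s side of that menu negotiation might look like this sketch — the offer format and the “take partial power” threshold are invented for illustration:

```python
def pick_power(offers, needs_w, accepts=("DC",)):
    """offers: list of (kind, volts, max_amps) the source can deliver.
    Returns the sink's choice -- liberal enough to take partial power."""
    usable = [o for o in offers
              if o[0] in accepts and o[1] * o[2] >= needs_w * 0.5]
    if not usable:
        return None
    full = [o for o in usable if o[1] * o[2] >= needs_w]
    # Prefer the smallest offer that fully covers the need, else the biggest.
    key = lambda o: o[1] * o[2]
    return min(full, key=key) if full else max(usable, key=key)

# A 60 W laptop offered 5 V/2 A, 12 V/3 A and 48 V/2 A picks the 48 V rail:
print(pick_power([("DC", 5, 2), ("DC", 12, 3), ("DC", 48, 2)], needs_w=60))
```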

Style of pins

We want low current plugs to be small, and heavy current plugs to be big. I suggest a triangular pin shape, something like what is shown here. In this design, two main pins can only go in one way. The lower triangle is an optional ground — but see notes on grounding below.

The overengineering and non-deployment of SSL/TLS

I have written before about how overzealous design of cryptographic protocols often results in their non-use. Protocol engineers are trained to be thorough and complete. They rankle at leaving in vulnerabilities, even against the most extreme threats. But the perfect is often the enemy of the good. None of the various protocols to encrypt E-mail have ever reached even a modicum of success in the public space. It’s a very rare VoIP call (other than Skype) that is encrypted.

The two most successful encryption protocols in the public space are SSL/TLS (which provide the HTTPS system among other things) and Skype. At a level below that are some of the VPN applications and SSH.

TLS (the successor to SSL) is very widely deployed but still very rarely used. Only the tiniest fraction of web sessions are encrypted. Many sites don’t support it at all. Some will accept HTTPS but immediately push you back to HTTP. In most cases, sites will have you log in via HTTPS so your password is secure, and then send you back to unencrypted HTTP, where anybody on the wireless network can watch all your traffic. It’s a rare site that lets you conduct your entire series of web interactions entirely encrypted. This site fails in that regard. More common is the use of TLS for POP3 and IMAP sessions, because it’s easy: there is only one TCP session, and the set of users who access the server is a small and controlled set. The same is true with VPNs — one session, and typically the users are all required by their employer to use the VPN, so it gets deployed. IPSec code exists in many systems, but is rarely used in stranger-to-stranger communications (or even friend-to-friend) due to the nightmares of key management.

TLS’s complexity makes sense for “sessions” but has problems when you use it for transactions, such as web hits. Transactions want to be short. They consist of a request, and a response, and perhaps an ACK. Adding extra back and forths to negotiate encryption can double or triple the network cost of the transactions.
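
A rough round-trip count shows where that cost comes from; this assumes the full TLS handshake of the day, with no session resumption or keep-alive:

```python
def transaction_ms(rtt_ms, tls=False):
    rtts = 1          # TCP three-way handshake
    if tls:
        rtts += 2     # full TLS handshake of the era: two more round trips
    rtts += 1         # the HTTP request and response themselves
    return rtts * rtt_ms

print(transaction_ms(100))             # plain HTTP:  200 ms
print(transaction_ms(100, tls=True))   # with TLS:    400 ms -- doubled
```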

Skype became a huge success at encrypting because it is done with ZUI — zero user interface. The user is not even aware of the crypto; it just happens. SSH takes an approach that is deliberately vulnerable to man-in-the-middle attacks on the first session in order to reduce the UI, and it has almost completely replaced unencrypted telnet among the command line crowd.

I write about this because now Google is finally doing an experiment to let people have their whole Gmail session encrypted with HTTPS. This is great news. But hidden in the great news is the fact that Google is evaluating the “cost” of doing this. There also may be some backlash if Google does this on web search, as it means that ordinary sites will stop getting to see the search query in the “Referer” field until they too switch to HTTPS and Google sends traffic to them over HTTPS. (That’s because, for security reasons, the HTTPS design says that if I made a query encrypted, I don’t want that query to be repeated in the clear when I follow a link to a non-encrypted site.) Many sites do a lot of log analysis to see what search terms are bringing in traffic, and may object when that goes away.

Authenticated actions as an alternative to login

The usual approach to authentication online is the “login” approach — you enter userid and password, and for some “session” your actions are authenticated. (Sometimes special actions require re-authentication, which is something my bank does on things like cash transfers.) This is so widespread that all browsers will now remember all your passwords for you, and systems like OpenID have arisen to provide “universal sign on,” though to only modest acceptance.

Another approach which security people have been trying to push for some time is authentication via digital signature and certificate. Your browser is able, at any time, to prove who you are, either for special events (including logins) or all the time. In theory these tools are present in browsers but they are barely used. Login has been popular because it always works, even if it has a lot of problems with how it’s been implemented. In addition, for privacy reasons, it is important your browser not identify you all the time by default. You must decide you want to be identified to any given web site.

I wrote earlier about the desire for more casual authentication for things like casual comments on message boards, where creating an account is a burden and even use of a universal login can be a burden.

I believe an answer to some of the problems can come from developing a system of authenticated actions rather than always authenticating sessions. Creating a session (ie. login) can be just one of a range of authenticated actions, or AuthAct.

To do this, we would adapt HTML actions (such as submit buttons on forms) so that they could say, “This action requires the following authentication.” This would tell the browser that if the user is going to click on the button, their action will be authenticated and probably provide some identity information. In turn, the button would be modified by the browser to make it clear that the action is authenticated.

An example might clarify things. Say you have a blog post like this with a comment form. Right now the button below says “Post Comment.” On many pages, you could not post a comment without logging in first, or, as on this site, you may have to fill in other fields to post the comment.

In this system, the web form would indicate that posting a comment is something that requires some level of authentication or identity. This might be an account on the site. It might be an account in a universal account system (like a single sign-on system). It might just be a request for identity.

Your browser would understand that, and change the button to say, “Post Comment (as BradT).” The button would be specially highlighted to show the action will be authenticated. There might be a selection box in the button, so you can pick different actions, such as posting with different identities or different styles of identification. Thus it might offer choices like “as BradT” or “anonymously” or “with pseudonym XXX” where that might be a unique pseudonym for the site in question.

Now you could think of this as meaning “Login as BradT, and then post the comment” but in fact it would be all one action, one press. In this case, if BradT is an account in a universal sign-on system, the site in question may never have seen that identity before, and won’t, until you push the submit button. While the site could remember you with a cookie (unless you block that) or based on your IP for the next short while (which you can’t block) the reality is there is no need for it to do that. All your actions on the site can be statelessly authenticated, with no change in your actions, but a bit of a change in what is displayed. Your browser could enforce this, by converting all cookies to session cookies if AuthAct is in use.

Note that the first time you use this method on a site, the box would say “Choose identity” and it would be necessary for you to click and get a menu of identities, even if you only have one. This is because there are always tools that try to fake you out and make you press buttons without you knowing it, by taking control of the mouse or covering the buttons with graphics that skip out of the way — there are many tricks. The first handover of identity requires explicit action. It is almost as big an event as creating an account, though not quite that significant.

You could also view the action as, “Use the account BradT, creating it if necessary, and under that name post the comment.” So a single posting would establish your ID and use it, as though the site doesn’t require userids at all.

Why you don't want gods in your fiction

I won’t deny that some of my distaste for the religious ending comes from my own preference for a realistic SF story, where everything that happens has a natural, rather than supernatural explanation, and that this comes in part from my non-religious worldview.

Nonetheless, I believe there are many valid reasons why you don’t want to have interventionist gods in your fiction. God should not be a character in your story, unless you are trying to write religious fiction like Left Behind or Touched by an Angel.

The reason is that God, as we know, works in strange and mysterious ways, his wonders to perform. We don’t expect to understand them. In fact, there is not even a requirement that they make sense. Some even argue that if you’re going to write authentic fiction with God as a character his actions should not make sense to the characters or the reader.

The author of a story is “god” in that they can write whatever they want. But in real, quality fiction, the author is constrained as to what they will do. They are supposed to make their stories make sense. Things should happen for a reason. If the stories are about characters, things should happen for reasons that come from the characters. If the story is also about setting, as SF is, reasons come from the setting. Mainstream fiction tries to follow all the rules of the real world. SF tries to explore hypothetical worlds with different technology, or new science, or even ways of living. Fantasy explores fantastic worlds, but when done properly, the author defines the new rules and sticks to them.

But if you make a divine character, even an offscreen divine character, you give the author too much power. They can literally write anything, and declare it to be the will of god. You don’t want your writer able to do that. You may want them to be able to start with anything, but once started the story should make sense.

As BSG ended, Adama and Baltar describe (correctly, but not strongly enough) how improbable it is that evolved humans can mate with the colonials. In reality, the only path to this is common ancestry, ie. the idea that humans from our-Earth were taken from it and became the Kobolians. But Baltar is able to explain it all away in one line, in his new role as priest: it’s the will of god.

In a good story, you don’t get to explain things this way. You need to work a bit harder.

Now, if you absolutely must have a god, you want to constrain that god. That’s not too far-fetched. If you were writing a story in Christianity, and you depicted Jesus torturing innocents, people would not accept it; they would say it’s at odds with how Jesus is defined (though Yahweh had fewer problems with it.) BSG’s god is never defined well enough to have any constraints.

He, and his minions, are certainly capricious, though. Genocides, lies, manipulations, exploding star systems, plotting out people’s lives, leading Starbuck to her death to achieve goals which could easily have been done other ways. Making that cycle of genocide repeat again and again until random chance breaks it. Not the sort of god we can draw much from. (One hopes that if we are going to have gods in our fiction, they provide some moral lesson or other reason for being there, rather than simply being a plot device that explains things that make no sense.)

In literature, bringing in arbitrary actions at the end of a story to resolve the plot is called a deus ex machina, and it’s frowned upon for good reasons. The BSG god was introduced early on, so it is not a last-minute addition. People will disagree, but I think the divinely provided link to real Earth is last-minute, in the sense that nothing in the story to that point tells you real Earth is out there, just the rules of drama (that the name “Earth” must mean something to the audience other than that ruined planet.)

If you want to write religious fiction, of course you can. I’m less interested in reading it. Moore said he did not intend to write this. He wrote the miniseries and made the Cylons monotheists and the colonials polytheists (like the original) and the network came back and said that was really interesting. So he expanded it.

But he expanded it from something good — characters who have religious beliefs — to something bad. The religious beliefs were true. But they were some entirely made-up religion with little correspondence to any Earth religion (even the Buddhism that Moore professes) and as such with no relevance to the people who tend to seek out religious fiction.

Giving religions to the characters is good. It’s real. It’s an important part of our society worth exploring. However, resolving that some of the beliefs are correct, and bringing in the hand of god is another matter.

More loose ends

  • The Colony had several base ships. When it started breaking apart, base ships full of Cavils, Dorals and Simons should have jumped away. What happened to them, and why won’t they come a-calling soon? (God’s will?)
  • Likewise, a force of Cavils, Dorals and Simons was invading Galactica and was in a temporary truce when fighting broke out again and Galactica jumped. What happened to them? In particular, since the first Hybrid predicted the splintered Cylon factions would be joined together again, why weren’t they?
  • We never resolved why the first Earth was destroyed 2,000 years ago, or why this was the same time as the fall of Kobol and the exodus of the 12 tribes. Was this just a big mistake, and all 13 tribes were supposed to flee at the same time?
  • I don’t know for sure about 150,000 years ago (it comes and goes) but 135,000 years ago the Sahara was covered by large lakes.

Creationism and the Abduction theory

The posts will come fast and furious in the next two days.

First I want to cover a little more about why this ending is of so much concern to many viewers. While many will accept that it is unscientific, and just say that they never cared that much about such things, the particular errors and issues of the final plot are rather special. What we saw was not merely spacecraft making sound in space or FTL drives or some other random scientific error.

The error in BSG centers around the most pernicious anti-scientific idea of our day: Creationism/Intelligent Design. In particular, it tells the “Ark” story, though it sets it 150,000 years ago rather than 4,000. And, because Moore knows the Ark story is totally bogus, he tries to fix it, by having the alien colonists able to breed with us humans, and thus having the final result be a merger of the two streams of humanity. That’s better than the pure Ark story, and perhaps enough better that I see some viewers are satisfied with it, but with deeper examination, it is just as bad an idea, and perhaps in its way more pernicious because it is easier for people to accept the flaws.

SF writers have been writing the Ark story since the dawn of SF. Indeed, the alien Adam and Eve plot is such a cliche from the 40s that you would have a hard time selling it to an SF magazine today. Not simply because it’s nonsense, but because it became overused back in the day when it wasn’t as obvious to people how nonsensical it was.

The Ark story is not just any bad science. It’s the worst bad science there is. Because there are dedicated forces who want so much for people to accept the Ark story as possible. Normally busy scientists would not even bother to debunk a story like that, but they spend a lot of time debunking this one because of the dedicated religious forces who seek to push it into schools and other places religion does not belong. And debunk it they have, and very solidly. The depth of the debunking is immense, and can’t be covered in this blog. I recommend the talk.origins archive with their giant FAQ for answers to many of the questions about this.

BSG plays a number of tricks to make the Ark story more palatable. It puts it back further in time, prior to the migrations of humanity out of Africa. (Oddly, it also has Adama spread the people around the continents, which simply means all the ones who did not stay in Africa died out without a trace or any descendants.) It makes it a merger rather than a pure origin to account for the long fossil and geological record. It has the aliens destroy all their technology and cast it into the sun to explain why there is no trace of it.

It does all those things, but in the end, the explanation remains religious. As the story is shown, you still need to invoke a variety of divine miracles to make it happen, and the show does indeed do this. The humans, on this planet, are the same species as aliens from another galaxy, due to the plan of God. They have cats and dogs and the rest, even though 150,000 years ago, humans had yet to domesticate any animals. Indeed, god has to have designed the colonials from the start to be the same species as the natives of Earth; it all has to have been set up many thousands of years ago. This is “intelligent design,” the form of creationism that gets dressed up like science to help make it more palatable. It is also a pernicious idea.

In one fell swoop, BSG changes from science fiction — hard, soft or otherwise — to religious fiction, or religious SF if you wish. Its story, as shown, is explained on screen as being divine intervention. Now, thanks to BSG, there will be discussion of the ending. But it will involve the defenders of science having to explain again why the Ark story is silly and ignores what we know of biology. I am shocked that Kevin Grazier, who advocates science teaching for children, including biology, was willing to be a part of this ending.

Sadly this ending goes beyond being bad SF.

How to make it work

Now there is one plot which BSG did not explore which would have made a lot of sense if they wanted to tell this story. It’s been noted on this blog a few times, but discounted because we believed BSG had a “no aliens” rule. This is what I called the “Alien Abduction plot.”

In this plot, aliens — in this case the God, who does not have to be a supernatural god — captured humans and various plants and animals from real Earth many thousands of years ago. The god took them to Kobol, and possibly with other gods (the Lords of Kobol) created a culture and raised them there. From this flows our story.

This plot has been used many times. Recently, in Ken MacLeod’s “Cosmonaut Keep” series, the characters find a human culture way out in the stars, populated by people taken by “gods” (highly advanced beings) a long time ago. The same idea appears in Rob Sawyer’s dinosaur series, and many other books.

Do this, and it suddenly explains why the colonials are the same species as the people on Earth, but more advanced. It does not explain their cats and dogs, or their Earth idioms, but those can be marked down to drama. (They would have to have independently domesticated cats and dogs and other animals, as this had not happened on Earth. Same for the plants. The gods could also have done this for them.)

This plot works well enough that it’s surprising no hint of it was left in the show. I do not believe it was the intention of the writers, though I would love to see post-show interviews declaring that it was.

And even this plot has a hard time explaining what happened to their culture, the metal in their teeth and many other items. For try as they might, they could not abandon all their technology. Even things that seem very basic to the Colonials, like better spears, writing, animal and plant domestication, knives, sailboats, complex language and so many other things are still aeons ahead of the humans. They plan to breed with the humans, and will be taking them into their schools and educating them. There was a sudden acceleration of culture 50,000 years ago, but not 150,000. And then there’s the artificial DNA in Hera and any other Cylon descendants. (And no, Hera isn’t the only person we are supposed to be descended from, she is just the source of the maternal lines.) But maybe you can shoehorn it in, which makes it surprising it wasn’t used.

The idea, taken from the old series, that the Greeks would have taken some of their culture from the aliens also is hard to make work. Why do their cultural ideas and now hopefully debunked (to them) polytheist religion show up nowhere else but Greece and eventually Rome? How do they get there, and only there, over 140,000 years of no writing, hunter-gatherer life? I am not a student of classical cultures, but I believe we also have lots of evidence of the origin and evolution of our modern Greek myths. They did not spring, pardon the phrase, fully formed from the head of Zeus. Rather they are based on older and simpler stories we have also traced. But the alien religion is based on our modern concepts of ancient Greek religion.

Even in 5,000 to 10,000 years, there would be a moderate amount of genetic drift in the Kobol environment, including the artificial genetic manipulation involved with Cylons. Since we learn that Africa has more game than the 12 colonies, it’s clear the colonials did not have all of Earth’s animals. It is contact with animals that generates most of our diseases. When different groups of humans get separated for many thousands of years, with different animals, the result is major plagues when they meet. Without divine intervention, the colonials are about to be reduced to a small fraction of their population. Especially after tossing their hospitals into the sun. (Why don’t we see any sick people saying, “Excuse me, do I get a vote on this whole abandon-technology idea?”)


The other plot which could have explained this I called the “Atlantis” plot. In this plot there is an advanced civilization long ago which reaches the stars but falls and disappears without a trace. It is the civilization that colonizes Kobol and becomes as gods. This requires no aliens. This was not their chosen plot, since it’s even harder to explain how this civilization left no trace, given that it would not have gone to the technology-destroying extremes the Colonists are shown to go to.

Coming up: Why religious SF is a bad idea, even if you believe in the religion. (Hint: while the author is god, you don’t want them to really use that power.)

On high quality Science Fiction

(This post from my Battlestar Galactica Analysis Blog is cross-posted to my main blog too.)

There’s been some debate in the comments here about whether I and those like me are being far too picky about technical and plot elements in Battlestar Galactica. It got meaty enough that I wanted to summarize some thoughts about the nature of quality SF, and the reasons why it is important. BSG is quality SF, and it set out to be, so I hold it to a higher bar. When I criticise it for where it sometimes drops the ball, this is not the criticism of disdain, but of respect.

I wrote earlier about the nature of hard SF. It is traditionally hard to define, and people never fully agree about what it is, and what SF is in general. I don’t expect this essay to resolve that.

Broadly, SF is to me fiction which tries to explore the consequences of science, technology and the future. All fiction asks “what if?” but in SF, the “what if?” is often about the setting, and in particular the technology of the setting, and not simply about the characters. Hard SF makes a dedication to not break the laws of physics and other important principles of science while doing so. Fantasy, on the other hand, is free to set up any rules it likes, though all but the worst fantasy feels obligated to stick to those rules and remain consistent.

Hard SF, however, has another association in people’s minds. Many feel that hard SF has to focus on the science and technology. It is a common criticism of hard SF that it spends so much time on the setting that the characters and story suffer. In some cases they suffer completely; stories in Analog Science Fiction are notorious for this, and give hard SF a bad name.

Perhaps because of that name, Ron Moore declared that he would make BSG “naturalistic science fiction.” He wanted to follow the rules of science, as hard SF does, but, as you would expect in a TV show, character and story were still of paramount importance. His credo also described many of the tropes of TV SF he would avoid, including time travel and aliens, and stock stereotyped characters.

I am all for this. While hard SF that puts its focus on the technology makes great sense in a Greg Egan novel, it doesn’t make sense in a drama. TV and movies don’t have the time to do it well, nor the audience that seeks this.

However, staying within the laws of physics has a lot of merit. I believe that it can be very good for a story if the writer is constrained, and can’t simply make up anything they desire. Mystery writers don’t feel limited that they can’t have their characters able to fly or read minds. In fact, it would ruin most of their mystery plots if they could. Staying within the rules — rules you didn’t set up — can be harder to do, but this often is good, not bad. This is particularly true for the laws of science, because they are real and logical. So often, writers who want to break the rules end up breaking the rules of logic. Their stories don’t make any sense, regardless of questions of science. When big enough, we call these logical flaws plot holes. Sticking to reality actually helps reduce them. It also keeps the audience happy. Only a small fraction of the audience may understand enough science to know that something is bogus, but you never know how many there are, and they are often the smarter and more influential members of the audience.

I lament the poor quality of the realism in TV SF. Most shows do an absolutely dreadful job. I lament this because they are not doing that bad job deliberately. They are just careless. For fees that would be a pittance in any Hollywood budget, they could make good use of a science and SF advisor. (I recommend both. The SF advisor will know more about drama and fiction, and also will know what’s already been done, or done to death, in other SF.) Good use doesn’t mean always doing what they say. While I do think it is good to be constrained, I recognize the right of creators to decide they do want to break the rules. I just want them to be aware that they are breaking the rules. I want them to have decided “I need to do this to tell the story I am telling” and not because they don’t care or don’t think the audience will care.

There does not have to be much of a trade-off between doing a good, realistic, consistent story and having good drama and characters. This is obviously true. Most non-genre fiction happily stays within the laws of reality. (Well, not action movies, but that’s another story.)

Why it’s important

My demand for realism is partly so I get a better, more consistent story without nagging errors distracting me from it. But there is a bigger concern.

TV and movie SF are important. They are the type of SF that most of the world will see. They are what will educate the public about many of the most important issues in science and technology, and these are some of the most important issues of the day. More people will watch even the cable-channel-rated Battlestar Galactica than read the most important novels in the field.

Because BSG is good, it will become a reference point for people’s debates about things like AI and robots, religion and spirituality in AIs and many other questions. This happens in two ways. First, popular SF allows you to explain a concept to an audience quickly. If I want to talk about a virtual reality where everybody is in a tank while they live in a synthetic world, I can mention The Matrix and the audience immediately has some sense of what I am talking about. Because of the flaws in The Matrix I may need to explain the differences between that and what I want to describe, but it’s still easier.

Secondly, people will have developed attitudes about what things mean from the movies. HAL-9000 from 2001 formed a lot of public opinion on AIs. Few get into a debate about robots without bringing up Asimov or, at worst, Star Wars.

If the popular stories get it wrong, then the public starts with a wrong impression. Because so much TV SF is utter crap, a lot of the public has really crappy ideas about various issues in science and technology. The more we can correct this, the better. So much TV SF comes from people who don’t really even care that they are doing SF. They do it because they can have fancy special effects, or know it will reach a certain number of fans. They have no excuse, though, for not trying to make it better.

BSG excited me because it set a high bar, and promised realism. And in a lot of ways it has delivered. Because it has FTL drives, it would not meet the hard SF fan’s standard, but I understand how you are not going to do an interstellar chase show with sublight travel that would hold a TV audience. And I also know that Moore, the producer, knows this and made a conscious decision to break the rules. There are several other places where he did this.

This was good because the original show, which I watched as an 18-year-old, was dreadful. It had no concept of the geometry of space. TV shows and movies are notoriously terrible at this, but this was in the lower part of the spectrum. They just arrived at the planet of the week when the writers wanted them to. And it had this nonsense idea that the Earth could be a colony of ancient aliens. That pernicious idea, the “Ark” theory, is solidly debunked thanks to the fact that creationists keep bringing it up, but it does no good for SF to do anything to encourage it. BSG seemed to be ready to fix all these things. Yet since there are hints that the Ark question may not be addressed, I am disappointed on that count.

To some extent, the criticism that some readers have made is fair: too much attention to detail and demand for perfection can ruin the story for you. You do have to employ some suspension of disbelief to enjoy most SF. Even rule-following hard SF usually invents something new and magical that has yet to be invented. It might be possible, but the writer has no actual clue as to how. You just accept it and enjoy the story. Perhaps I do myself a disservice by getting bothered by minor nits. There are others who have it worse than I do, at least. But I’m not a professional TV science advisor. Perhaps I could be one, but for now, if I can see it, I think it means that they could have seen it. And I always enjoy a show more when it’s clearly obvious how much they care about the details. And so does everybody else, even when they don’t know it. Attention to details creates a sense of depth which enhances a work even if you never explore the depth. You know it’s there. You feel it, and the work becomes stronger and more relevant.

Now some of the criticisms I am making here are not about science or niggling technical details. Some of the recent trends, I think, are errors of story and character. Of course, you’re never going to be in complete agreement with a writer about where a story or character should go. But if characters become inconsistent, it hurts the story as much as, or more than, when the setting becomes inconsistent.

But still, after all this, let’s see far more shows like Battlestar Galactica 2003, and fewer like Battlestar Galactica 1978, and I’ll still be happy.

Data hosting could let me make Facebook faster

I’ve written about “data hosting/data deposit box” as an alternative to “cloud computing.” Cloud computing is timesharing — we run our software and hold our data on remote computers, and connect to them from terminals. It’s a swing back from personal computing, where you had your own computer, and it erases the 4th amendment by putting our data in the hands of others.

Lately, the more cloud computing applications I use, the more I realize one other benefit that data hosting could provide as an architecture. Sometimes the cloud apps I use are slow. It may be because of bandwidth to them, or it may simply be because they are overloaded. One of the advantages of cloud computing and timesharing is that it is indeed cheaper to buy a big mainframe cluster and have many people share it than to have a computer for everybody, because those computers sit idle most of the time.

But when I want a desktop application to go faster, I can just buy a faster computer. And I often have. But I can’t make Facebook faster that way. Right now there’s no way I can do it. If it weren’t free, I could complain, and perhaps pay for a larger share, though that’s harder to solve with bandwidth.

In the data hosting approach, the user pays for the data host. That data host would usually be on their ISP’s network, or perhaps (with suitable virtual machine sandboxing) it might be the computer on their desk that has all those spare cycles. You would always get good bandwidth to it for the high-bandwidth user interface stuff. And you could pay to get more CPU if you need more CPU. That can still be efficient, in that you could possibly be in a cloud of virtual machines on a big mainframe cluster at your ISP. The difference is, it’s close to you, and under your control. You own it.

There’s also no reason you couldn’t allow applications that have some parallelism to them to try to use multiple hosts for high-CPU projects. Your own PC might well be enough for most requests, but perhaps some extra CPU would be called for from time to time, as long as there is bandwidth enough to send the temporary task (or sub-tasks that don’t require sending a lot of data along with them.)

And, as noted before, since the users own the infrastructure, this allows new, innovative free applications to spring up, because the developers don’t have to buy their infrastructure. You could be the next YouTube, serving that much video, with full scalability, without spending much on bandwidth yourself at all.

Being the greatest athlete ever

NBC has had just a touch of coverage of Michael Phelps and his 8 gold medals, which, in breaking Mark Spitz’s 7 from 1972, have him declared the greatest Olympic athlete, or even athlete of all time. And there’s no doubt he’s one of the greatest swimmers of all time and this is an incredible accomplishment. Couch potato that I am, I can hardly criticise him.

(We are of course watching the Olympics in HDTV using MythTV, but fast-forwarding over the major bulk of it. Endless beach volleyball, commercials and boring events whiz by. I can’t imagine watching without such a box. I would probably spend more time, which they would like, but be less satisfied and see fewer of the events I wish to.)

Phelps got 8 golds, but 3 of them were relays. He certainly contributed to those relays, and may well have made the difference for the U.S. team, allowing it to win a gold it would not have won without him. So it seems fair to add them, no?

No. The problem is you can’t win relay gold unless you are lucky enough to be a citizen of one of a few powerhouse swimming nations, in particular the USA and Australia, along with a few others. Almost no matter how brilliant you are, if you don’t compete for one of these countries, you have no chance at those medals. So only a subset of the world’s population even gets to compete for the chance to win 7 or 8 medals at the games. This applies to almost all team medals, be they relay or otherwise. Perhaps the truly determined can emigrate to a contending country. A pretty tall order.

Phelps won 5 individual golds, and that is also the record, though it is shared by 3 others. He has more golds than anybody, though other athletes have more total medals.

Of course, swimming is one of the special sports in which there are enough similar events that it is possible to attain a total like this. There are many sports that don’t even have 7 events a single person could compete in. (They may have more events but they will be divided by sex, or weight class.)

Shooting has potential for a star. It used to even be mixed (men and women) until they split it. It has 9 male events, and one could in theory be master of them all.

Track and Field has 47 events split over men and women. However, it is so specialized in how muscles are trained that nobody expects sprinters to compete in long events or vice versa. Often the best sprinter does well in Long Jump or Triple Jump, allowing the potential of a giant medal run for somebody able to go from 100m to 400m in range. In theory there are 8 individual events 400m or shorter.

And there are a few other places. But the point is that to do what Phelps (or Spitz) did, you have to be in a small subset of sports, and be from a small set of countries. There have been truly “cross sport” athletes at the Olympics, but in today’s world of specialized training, it’s rare. If anybody managed to win multiple golds over different sports and beat this record, then the title of greatest Olympian would be very deserving. One place I could see some crossover is between high diving and trampoline. While a new event, trampoline seems to be like doing 20 vaults or high dives in a row. None of which is to say it wasn’t exciting to watch him race.

What is hard science fiction?

I’ve just returned from Denver and the World Science Fiction Convention (worldcon) where I spoke on issues such as privacy, DRM and creating new intelligent beings. However, I also attended a session on “hard” science fiction, and have some thoughts to relate from it.

Defining the sub-genres of SF, or any form of literature, is a constant topic for debate. No matter where you draw the lines, authors will work to bend them as well. Many people just give up and say “Science Fiction is what I point at when I say Science Fiction.”

Genres in the end are more about taste than anything else. They exist for readers to find fiction that is likely to match their tastes. Hard SF, broadly, is SF that takes extra care to follow the real rules of physics. It may include unknown science or technology but doesn’t include what those rules declare to be impossible. On the border of hard SF one also finds SF that does a few impossible things (most commonly faster-than-light starships) but otherwise sticks to the rules. As stories include more impossible and unlikely things, they travel down the path to fantasy, eventually arriving at a fully fantastic level where the world works in magical ways as the author found convenient.

Even in fantasy, however, readers like to demand consistency. Once magical rules are set up, people like them to be followed.

In addition to Hard SF, softer SF and Fantasy, the “alternate history” genre has joined the pantheon, now often dubbed “speculative fiction.” All fiction deals with hypotheticals, but in speculative fiction, the “what if?” is asked about the world, not just the lives of some characters. This year, the Hugo award for best (ostensibly SF) novel of the year went to Chabon’s The Yiddish Policemen’s Union which is a very clear alternate history story. In it, the USA decides to accept Jews that Hitler is expelling from Europe, and gives them a temporary homeland around Sitka, Alaska. During the book, the lease on the homeland is expiring, and there is no Israel. It’s a very fine book, but I didn’t vote for it because I want to promote actual SF, not alternate history, with the award.

However, in considering why fans like alternate history, I realized something else. In mainstream literature, the cliche is that the purpose of literature is to “explore the human condition.” SF tends to expand that, to explore both the human condition and the nature of the technology and societies we create, as well as the universe itself. SF gets faulted by the mainstream literature community for exploring those latter topics at the expense of the more character oriented explorations that are the core of mainstream fiction. This is sometimes, but not always, a fair criticism.

Hard SF fans want their fiction to follow the rules of physics, which is to say, take place in what could be the real world. In a sense, that’s similar to the goal of mainstream fiction, even though normally hard SF and mainstream fiction are considered polar opposites in the genre spectrum. After all, mainstream fiction follows the rules of physics as well or better than the hardest SF. It follows them because the author isn’t trying to explore questions of science, technology and the universe, but it does follow them. Likewise, almost all alternate history also follows the laws of physics. It just tweaks some past event, not a past rule. As such it explores the “real world” as closely as SF does, and I suspect this is why it is considered a subgenre of fantasy and SF.

I admit to a taste for hard SF. Future hard SF is a form of futurism; an exploration of real possible futures for the world. It explores real issues. The best work in hard SF today comes (far too infrequently) from Vernor Vinge, including his recent Hugo-winning novel, Rainbows End. His most famous work, A Fire Upon the Deep, which I published in electronic form 15 years ago, is a curious beast. It includes one extremely unlikely element of setting — a galaxy where the rules of physics which govern the speed of computation vary with distance from the center of the galaxy. Some view that as fantastic, but its real purpose is to allow him to write about the very fascinating and important topic of computerized super-minds, who are so smart that they are as gods to us. Coining the term “applied theology,” Vinge uses his setting to allow the superminds to exist in the same story as characters like us that we can relate to. Vinge feels that you can’t write an authentic story about superminds, and thus you need human characters, and so he uses this element some would view as fantastic. So I embrace this as hard SF, and for the purists, the novels suggest that the “zones” may be artificial.

The best hard SF thus explores the total human condition. Fantastic fiction can do this as well, but it must do it by allegory. In fantasy, we are not looking at the real world, but we usually are trying to say something about it. However, it is not always good to let the author pick and choose what’s real and what’s not about the world, since it is too easy to fall into the trap of speaking only about your made-up reality and not about the world.

Not that this is always bad. Exploring the “human condition” or reality is just one thing we ask of our fiction. We also always want a ripping good read. And that can occur in any genre.