Submitted by brad on Mon, 2009-08-17 14:32.
The Worldcon (World Science Fiction Convention) in Montreal was enjoyable. Like all worldcons, which are run by fans rather than professional convention staff, it had its issues, but nothing too drastic. Our worst experience actually came from the Delta hotel, which I’ll describe below.
For the past few decades, Worldcons have been held in convention centers. They attract from 4,000 to 7,000 people and are generally felt to not fit in any ordinary hotel outside Las Vegas. (They don’t go to Las Vegas both because there is no large fan base there to run it, and because Las Vegas hotels, unlike those in most towns, have no incentive to offer a cut-rate deal on a summer weekend.)
Because they are always held where deals are to be had on hotels and convention space, it is not uncommon for them to get the entire convention center or a large portion of it. This turns out to be a temptation which most cons succumb to, but should not. The Montreal convention was huge and cavernous. It had little of the intimacy a mostly social event should have. Use of the entire convention center meant long walks and robbed the convention of a social center — a single place through which you could expect people to flow, so you would see your friends, join up for hallway conversations and gather people to go for meals.
This is one of those cases where less can be more. You should not take more space than you need. The convention should be as intimate as it can be without becoming crowded. That may mean deliberately not taking function space.
A social center is vital to a good convention. Unfortunately when there are hotels in multiple directions from the convention center so that people use different exits, it is hard for the crowd to figure one out. At the Montreal convention (Anticipation) the closest thing to such a center was near the registration desk, but it never really worked. At other conventions, anywhere on the path to the primary entrance works. Sometimes it is the lobby and bar of the HQ hotel, but this was not the case here.
When the social center will not be obvious, the convention should try to find the best one, and put up a sign saying it is the congregation point. In some convention centers, meeting rooms will be on a different floor from other function space, and so it may be necessary to have two meeting points, one for in-between sessions, and the other for general time.
The social center/meeting point is the one thing it can make sense to use some space on. Expect a good fraction of the con to congregate there in break times. Let them form groups of conversation (there should be sound absorbing walls) but still be able to see and find other people in the space.
A good way to make a meeting point work is to put up the schedule there, ideally in a dynamic way. This can be computer screens showing the titles of the upcoming sessions, or even cards changed by hand. Anticipation used a giant schedule on the wall, which is also OK, though the other methods allow descriptions to go up with the names. Anticipation did a roundly disliked “pocket” program printed on tabloid-sized paper, with two pages usually needed to cover a whole day. Nobody had a pocket it could fit in. In addition, there were many changes to the schedule and the online version was not updated. Again, this is a volunteer effort, so I expect some glitches like this to happen; they are par for the course.
Submitted by brad on Wed, 2009-08-12 12:54.
On Saturday I attended the Battlestar Galactica Postmortem panel at the World Science Fiction convention in Montreal. The “worldcon” is the top convention for serious fans of SF, with typically 4,000 to 6,000 attendees from around the world. There are larger (much larger) “media” conventions like ComicCon and DragonCon, but the Worldcon is considered “it” for written SF. It gives out the Hugo award. While the fans at a worldcon do put an emphasis on written SF, they are also voracious consumers of media SF, and so there are many panels on it, and two Hugo awards for it.
Two things surprised me a bit about the Worldcon panel. First of all, it was much more lightly attended than I would have expected, considering the large fandom BSG built and how its high quality had particularly appealed to these sorts of fans. Secondly, it was more negative and bitter about the ending than I would have expected, and I was expecting quite a lot.
In fact, a few times audience members and panelists felt it necessary to encourage the crowd to stop just ranting about the ending and to talk about the good things. In spite of being so negative on the ending myself I found myself being one of those also trying to talk about the good stuff.
What was surprising was that while I still stand behind my own analysis, I know that in many online communities opinion on the ending is more positive. There are many who hate it but many who love it, and at least initially, more who loved it in some communities.
The answer may be that for the serious SF fan, the fan who looks to books as the source of the greatest SF, the BSG ending was the largest betrayal. Here we were hoping for a show that would bring some of the quality we seek in written SF to the screen, and here it fell down. Fans with a primary focus on movie and TV SF were much more tolerant of the ending, since as I noted, TV SF endings are almost never good anyway, and the show itself was a major cut above typical TV SF.
The small audience surprised me. I have seen other shows such as Buffy (which is not even SF), Babylon 5 and various forms of Star Trek still fill a room for discussion of the show. It is my contention that had BSG ended better, it would have joined this pantheon of great shows that maintains a strong fandom for decades.
The episode “Revelations,” where the ruined Earth is discovered, was nominated for the Hugo for best short dramatic program. It came in 4th; the winner was the highly unusual “Dr. Horrible’s Sing-Along Blog,” a web production from fan favourite Joss Whedon of Buffy and Firefly. BSG won a Hugo for the first episode “33” and has been nominated each year since then but has failed to win, with a Doctor Who episode the winner in each case.
At the panel, the greatest source of frustration was the out-of-nowhere decision to abandon all technology, with Starbuck’s odd fate a #2. This matches the most common complaints I have seen online.
On another note, while normally Worldcon Hugo voters tend to go for grand SF books, this time the Best Novel award went to Neil Gaiman’s “The Graveyard Book.” Gaiman himself, in his acceptance speech, did the odd thing of declaring that he thought Anathem (which was also my choice) should have won. Anathem came 2nd or 3rd, depending on how you like to read STV ballot counting. Gaiman, however, was guest of honour at the convention, and it attracted a huge number of Gaiman fans because of this, which may have altered the voting. (Voting is done by convention members. Typically about 1,000 people will vote on best novel.)
Submitted by brad on Mon, 2009-07-13 13:38.
Battlestar Galactica attracted a lot of fans and a lot of kudos during its
run, and engendered this sub blog about it. Here, in my final post on the ending, I present
the case that its final hour was the worst ending in the history of science fiction on
the screen. This is a condemnation of course, but also praise, because
my message is not simply that the ending was poor, but that the show rose so high that it was able to fall
so very far. I mean it was the most disappointing ending ever.
(There are, of course, major spoilers in this essay.)
Other SF shows have ended very badly, to be sure. This is particularly true of TV SF.
Indeed, it is in the nature of TV SF to end badly. First of all, it’s written in
episodic form. Most great endings are planned from the start. TV endings
rarely are. To make things worse, TV shows are usually ended when the show is
in the middle of a decline. They are often the result of a cancellation, or
sometimes a producer who realizes a cancellation is imminent. Quite frequently,
the decline that led to cancellation can be the result of a creative failure
on the show — either the original visionaries have gone, or they are burned
out. In such situations, a poor ending is to be expected.
Sadly, I’m hard pressed to think of a TV SF series that had a truly great
ending. That’s the sort of ending you might find in a great book or movie, the
ending that caps the work perfectly, which solidifies things in a cohesive
whole. Great endings will sometimes finally make sense out of everything, or
reveal a surprise that, in retrospect, should have been obvious all along.
I’m convinced that many of the world’s best endings came about when the writer actually
worked out the ending first, then wrote a story leading to that ending.
There have been endings that were better than the show. Star Trek: Voyager
sank to dreadful depths in the middle of its run, and its mediocre ending was
thus a step up. Among good SF/Fantasy shows, Quantum Leap,
Buffy and The Prisoner stand out as having had decent endings. Babylon 5’s endings (plural)
were good but, just as I praise Battlestar Galactica (BSG) by saying its ending sucked, Babylon 5’s
endings were not up to the high quality of the show. (What is commonly believed
to be B5’s original planned ending, written before the show began, might
well have made the grade.)
Ron Moore’s goals
To understand the fall of BSG, one must examine it both in terms of more general
goals for good SF, and the stated goals of the head writer and executive producer,
Ronald D. Moore. The ending failed by both my standards (which you may or may not care about) but also his.
Moore began the journey by laying out a manifesto of how he wanted to change TV
SF. He wrote an essay about Naturalistic science fiction where he outlined
some great goals and promises, which I will summarize here, in a slightly different order:
- Avoiding SF clichés like time travel, mind control, god-like powers, and technobabble.
- Keeping the science real.
- Strong, real characters, avoiding the stereotypes of older TV SF. The show should be about them, not the hardware.
- A new visual and editing style unlike what has come before, with a focus on realism.
Over time he expanded, modified and sometimes intentionally broke these rules. He allowed the ships
to make sound in space after vowing they would not. He eschewed aliens in general. He increased his
focus on characters, saying that his mantra in concluding the show was “it’s the characters, stupid.”
The link to reality
In addition, his other goal for the end was to make a connection to our real world. To
let the audience see how the story of the characters related to our story. Indeed, the
writers toyed with not destroying Galactica, and leaving it buried on Earth, and
ending the show with the discovery of the ship in Central America.
They rejected this ending because they felt it would violate our contemporary reality too quickly,
and make it clear this was an alternate history. Moore felt an alternative universe
was not sufficient.
The successes, and then failures
During its run, BSG offered much that was great, in several cases groundbreaking elements never seen before in TV SF:
- Artificial minds in humanoid bodies who were emotional, sexual and religious.
- Getting a general audience to understand the “humanity” of these machines.
- Stirring space battles with much better concepts of space than typically found on TV. Bullets and missiles, not force-rays.
- No bumpy-head aliens, no planet of the week, no cute time travel or alternate-reality-where-everybody-is-evil episodes.
- Dark stories of interesting characters.
- Multiple copies of the same being, beings programmed to think they were human, beings able to transfer their mind to a new body at the moment of death.
- A mystery about the origins of the society and its legends, and a mystery about a lost planet named Earth.
- A mystery about the origin of the Cylons and their reasons for their genocide.
- Daring use of concepts like suicide bombing and terrorism by the protagonists.
- Kick-ass leadership characters in Adama and Roslin who were complex, but neither over the top nor understated.
- Starbuck as a woman. Before she became a toy of god, at least.
- Baltar: One of the best TV villains ever, a self-centered slightly mad scientist who does evil without
wishing to, manipulated by a strange vision in his head.
- Other superb characters, notably Tigh, Tyrol, Gaeta and Zarek.
But it all came to a far
lesser end due to the following failures I will outline in too much detail:
- The confirmation/revelation of an intervening god as the driving force behind events
- The use of that god to resolve large numbers of major plot points
- A number of significant scientific mistakes on major plot points, including:
- Twisting the whole story to fit a completely wrong idea of what Mitochondrial Eve is
- To support that concept, an impossible-to-credit political shift among the characters
- The use of concepts from Intelligent Design to resolve plot issues.
- The introduction of the nonsense idea of “collective unconscious” to explain cultural similarities.
- The use of “big secrets” to dominate what was supposed to be a character-driven story
- Removing all connection to our reality by trying to build a poorly constructed one
- Mistakes, one of them major and never corrected, which misled the audience
And then I’ll explain the reason why the fall was so great — how, until the last moments, a few
minor differences could have fixed most of the problems.
Submitted by brad on Tue, 2009-07-07 13:21.
I’ve been fascinated of late with the issue of eBay auctions of hot-hot items, like the PlayStation 3 and others. The story of the Michael Jackson memorial tickets is an interesting one.
17,000 tickets were given out as 8,500 pairs to winners chosen from 1.6 million online applications. Applicants had to give their name and address, and if they won, they further had to use or create a Ticketmaster account to get their voucher. They then had to take the voucher to Dodger stadium in L.A. on Monday. (This was a dealbreaker even for honest winners from too far outside L.A., such as a Montreal flight attendant.) At the stadium, they had to present ID to show they were the winner, whereupon they were given 2 tickets (with random seat assignment) and two standard club security wristbands, one of which was affixed to their arm. They were told that if the one on the arm was damaged in any way, they would not get into the memorial. The terms indicated the tickets were non-transferable.
Immediately a lot of people, especially winners not from California, tried to sell tickets on eBay and Craigslist. In fact, even before the lottery results, people were listing something more speculative: “If I win the lottery, you pay me and you’ll get my tickets.” (One could enter the lottery directly of course, but buying such a promise would increase your chances, as only one entry was allowed, in theory, per person.)
Both eBay and Craigslist had very strong policies against listing these tickets, and apparently had staff and software working regularly to remove listings. Listings on eBay were mostly disappearing quickly, though some persisted for unknown reasons. Craigslist listings also vanished quickly, though some sellers were clever enough to put their phone numbers in their listing titles. On Craigslist a deleted ad still shows up in the search summary for some time after the posting itself is gone.
There was a strong backlash by fans against the sellers. On both sites, ordinary users were regularly hitting the links to report inappropriate postings. In addition, a brand new phenomenon emerged on eBay — some users were deliberately placing 99 million dollar bids on any auction they found for tickets, eliminating any chance of further bidding. (See note) In the past that could earn you negative reputation, but eBay has removed negative reputation for buyers. In addition, it could earn you a mark as a non-paying buyer, but in this case, the seller is unable to file such a complaint because their auction of the non-transferable ticket itself violates eBay’s terms.
Submitted by brad on Wed, 2009-07-01 18:49.
I’ve written before about both the desire for universal DC power and, more simply, universal laptop power at meeting room desks. This week saw the announcement that all the companies selling cell phones in Europe will standardize on a single charging connector, based on micro-USB. (A large number of devices today use the now deprecated mini-USB plug, and it was close to becoming a standard by default.) As most devices are including a USB plug for data, this is not a big leap, though it turned out a number of devices would not charge from other people’s chargers, either from stupidity or malice. (My Motorola RAZR will not charge from a generic USB charger or even an ordinary PC. It needs a special charger with the data pins shorted, or if it plugs into a PC, it insists on a dialog with the Motorola phone tools driver before it will accept a charge. Many suspect this was just to sell chargers and the software.)

The new agreement is essentially just a vow to make sure everybody’s chargers work with everybody’s devices. It’s actually a win for the vendors, who can now not bother to ship a charger with the phone, presuming you have one or will buy one. The phones are not required to have the plug itself; supplying an adapter is sufficient, as Apple is likely to do. MP3 player vendors have not yet signed on.
USB isn’t a great choice since it officially delivers only 500mA at 5 volts (2.5 watts), though many devices are putting 1 amp through it. That’s not enough to quickly charge or even power some devices. USB 3.0 officially raised the limit to 900mA, or 4.5 watts.
USB is a data connector with some power provided, which has been suborned for charging and power. What about a design for a universal plug aimed at doing power, with data being the secondary goal? Not that it would suck at data, since it’s now pretty easy to feed a gigabit over 2 twisted pairs with cheap circuits. Let’s look at the constraints.
The world’s new power connector should be smart. It should offer 5 volts at low current to start, to power the electronics that will negotiate how much voltage and current will actually go through the connector. It should also support dumb plugs, which offer only a resistance value on the data pins, with each resistance value specifying a commonly used voltage and current level.
Real current would never flow until connection (and ground if needed) has been assured. As such, there is minimal risk of arcing or electric shock through the plug. The source can offer the sorts of power it can deliver (AC, DC, what voltages, what currents) and the sink (power using device) can pick what it wants from that menu. Sinks should be liberal in what they take though (as they all have become of late) so they can be plugged into existing dumb outlets through simple adapters.
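As a sketch of how that negotiation might play out in logic form (every name, resistance value and power profile below is my own illustration, not any real or proposed standard):

```python
# Hypothetical sketch of the smart-plug negotiation described above.
# Dumb plugs identify themselves with a resistor across the data pins;
# each resistance maps to a commonly used voltage/current profile.
# All values here are invented for illustration.
DUMB_PLUG_PROFILES = {
    10_000: (5.0, 0.5),    # 10 kOhm -> 5 V, 0.5 A (USB-style)
    22_000: (12.0, 2.0),   # 22 kOhm -> 12 V, 2 A
    47_000: (19.0, 3.5),   # 47 kOhm -> 19 V, 3.5 A (laptop-style)
}

def negotiate(source_menu, sink_prefs):
    """The source offers a menu of (volts, max_amps) it can deliver; the
    sink (power-using device) picks the best profile it can accept.
    Real current flows only after a profile is agreed on."""
    for volts, amps in sink_prefs:            # sink's preferences, best first
        for s_volts, s_amps in source_menu:
            if s_volts == volts and s_amps >= amps:
                return (volts, amps)
    return (5.0, 0.1)                          # fall back to the low-power rail

# A laptop that wants 19 V / 3.5 A, with 12 V / 2 A as a fallback:
chosen = negotiate([(5.0, 0.5), (12.0, 2.0), (19.0, 3.5)],
                   [(19.0, 3.5), (12.0, 2.0)])
```

The point of the fallback profile is that even a mismatched pairing still gets the safe 5-volt electronics rail, never an arc or a surprise voltage.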
Style of pins
We want low current plugs to be small, and heavy current plugs to be big. I suggest a triangular pin shape, something like what is shown here. In this design, two main pins can only go in one way. The lower triangle is an optional ground — but see notes on grounding below.
Submitted by brad on Fri, 2009-06-26 13:10.
I have written before about how overzealous design of cryptographic protocols often results in their non-use. Protocol engineers are trained to be thorough and complete. They rankle at leaving in vulnerabilities, even against the most extreme threats. But the perfect is often the enemy of the good. None of the various protocols to encrypt E-mail have ever reached even a modicum of success in the public space. It’s a very rare VoIP call (other than Skype) that is encrypted.
The two most successful encryption protocols in the public space are SSL/TLS (which provide the HTTPS system among other things) and Skype. At a level below that are some of the VPN applications and SSH.
TLS (the successor to SSL) is very widely deployed but still very rarely used. Only the tiniest fraction of web sessions are encrypted. Many sites don’t support it at all. Some will accept HTTPS but immediately push you back to HTTP. In most cases, sites will have you log in via HTTPS so your password is secure, and then send you back to unencrypted HTTP, where anybody on the wireless network can watch all your traffic. It’s a rare site that lets you conduct your entire series of web interactions entirely encrypted. This site fails in that regard. More common is the use of TLS for POP3 and IMAP sessions, because it’s easy (there is only one TCP session) and the set of users who access the server is small and controlled. The same is true with VPNs — one session, and typically the users are all required by their employer to use the VPN, so it gets deployed.
IPSec code exists in many systems, but is rarely used in stranger-to-stranger communications (or even friend-to-friend) due to the nightmares of key management.
TLS’s complexity makes sense for “sessions” but has problems when you use it for transactions, such as web hits. Transactions want to be short. They consist of a request, and a response, and perhaps an ACK. Adding extra back and forths to negotiate encryption can double or triple the network cost of the transactions.
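A rough back-of-envelope of that cost (the round-trip counts are simplified: a classic full TLS handshake adds roughly two round trips before the HTTP request can even be sent):

```python
# Simplified model of the network cost of one short web transaction.
# A full TLS handshake (no session resumption) costs about two extra
# round trips on top of TCP setup; real stacks vary, this is a sketch.
def transaction_time(rtt_ms, handshake_rtts):
    tcp_setup = rtt_ms                   # TCP three-way handshake, ~1 RTT
    tls_setup = handshake_rtts * rtt_ms  # TLS negotiation, 0 for plain HTTP
    request = rtt_ms                     # request out, response back
    return tcp_setup + tls_setup + request

rtt = 100                          # ms, a plausible wide-area round trip
plain = transaction_time(rtt, 0)   # plain HTTP
tls = transaction_time(rtt, 2)     # full TLS handshake first
```

Under these assumptions the encrypted transaction takes twice as long as the plain one, which is exactly the doubling-or-tripling effect described above; session resumption and keep-alive connections exist largely to amortize that setup.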
Skype became a huge success at encrypting because it is done with a ZUI (zero user interface) — the user is not even aware of the crypto. It just happens. SSH takes an approach that is deliberately vulnerable to man-in-the-middle attacks on the first session in order to reduce the UI, and it has almost completely replaced unencrypted telnet among the command line crowd.
I write about this because now Google is finally doing an experiment to let people have their whole gmail session be encrypted with HTTPS. This is great news. But hidden in the great news is the fact that Google is evaluating the “cost” of doing this. There also may be some backlash if Google does this on web search, as it means that ordinary sites will stop getting to see the search query in the “Referer” field until they too switch to HTTPS and Google sends traffic to them over HTTPS. (That’s because, for security reasons, the HTTPS design says that if I made a query encrypted, I don’t want that query to be repeated in the clear when I follow a link to a non-encrypted site.) Many sites do a lot of log analysis to see what search terms are bringing in traffic, and may object when that goes away.
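That Referer rule is simple enough to state in a few lines. This is a simplified sketch of the behaviour the HTTP specification recommends (the function name is my own):

```python
# Sketch of the classic HTTP Referer rule: a browser should not send a
# Referer header when following a link from a secure (HTTPS) page to a
# non-secure (HTTP) one, so encrypted queries don't leak in the clear.
def referer_to_send(from_url, to_url):
    if from_url.startswith("https://") and not to_url.startswith("https://"):
        return None          # secure -> insecure: suppress the referrer
    return from_url          # otherwise the referring URL is sent

# An encrypted search result click to a plain-HTTP site carries no query:
leaked = referer_to_send("https://google.com/search?q=secret",
                         "http://example.com/page")
```

So an HTTPS search engine linking to HTTP sites really does go dark in their logs, which is the log-analysis objection described above.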
Submitted by brad on Wed, 2009-06-10 16:58.
The usual approach to authentication online is the “login” approach — you enter userid and password, and for some “session” your actions are authenticated. (Sometimes special actions require re-authentication, which is something my bank does on things like cash transfers.) This is so widespread that all browsers will now remember all your passwords for you, and systems like OpenID have arisen to provide “universal sign on,” though to only modest acceptance.
Another approach which security people have been trying to push for some time is authentication via digital signature and certificate. Your browser is able, at any time, to prove who you are, either for special events (including logins) or all the time. In theory these tools are present in browsers but they are barely used. Login has been popular because it always works, even if it has a lot of problems with how it’s been implemented. In addition, for privacy reasons, it is important your browser not identify you all the time by default. You must decide you want to be identified to any given web site.
I wrote earlier about the desire for more casual authentication for things like casual comments on message boards, where creating an account is a burden and even use of a universal login can be a burden.
I believe an answer to some of the problems can come from developing a system of authenticated actions rather than always authenticating sessions. Creating a session (ie. login) can be just one of a range of authenticated actions, or AuthAct.
To do this, we would adapt HTML actions (such as submit buttons on forms) so that they could say, “This action requires the following authentication.” This would tell the browser that if the user is going to click on the button, their action will be authenticated and probably provide some identity information. In turn, the button would be modified by the browser to make it clear that the action is authenticated.
An example might clarify things. Say you have a blog post like this one, with a comment form. Right now the button below says “Post Comment.” On many pages, you could not post a comment without logging in first, or, as on this site, you may have to fill in other fields to post the comment.
In this system, the web form would indicate that posting a comment is something that requires some level of authentication or identity. This might be an account on the site. It might be an account in a universal account system (like a single sign-on system). It might just be a request for identity.
Your browser would understand that, and change the button to say, “Post Comment (as BradT).” The button would be specially highlighted to show the action will be authenticated. There might be a selection box in the button, so you can pick different actions, such as posting with different identities or different styles of identification. Thus it might offer choices like “as BradT” or “anonymously” or “with pseudonym XXX” where that might be a unique pseudonym for the site in question.
Now you could think of this as meaning “Login as BradT, and then post the comment” but in fact it would be all one action, one press. In this case, if BradT is an account in a universal sign-on system, the site in question may never have seen that identity before, and won’t, until you push the submit button. While the site could remember you with a cookie (unless you block that) or based on your IP for the next short while (which you can’t block) the reality is there is no need for it to do that. All your actions on the site can be statelessly authenticated, with no change in your actions, but a bit of a change in what is displayed. Your browser could enforce this, by converting all cookies to session cookies if AuthAct is in use.
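As a sketch of what stateless verification of such an authenticated action might look like on the server side (everything here is hypothetical; a real system would use public-key signatures from the browser rather than the shared-secret HMAC used here for brevity):

```python
import hashlib
import hmac

# Hypothetical "AuthAct"-style stateless authentication: every submitted
# action carries the identity and a signature over the action itself, so
# the server needs no login session and no cookie to trust the request.

def sign_action(identity, action, body, key):
    """Signature binding an identity to one specific action and payload."""
    msg = f"{identity}|{action}|{body}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_action(identity, action, body, signature, key):
    """Server-side check: recompute and compare in constant time."""
    expected = sign_action(identity, action, body, key)
    return hmac.compare_digest(expected, signature)

key = b"demo-shared-secret"   # stand-in for real key material
sig = sign_action("BradT", "post_comment", "Nice article!", key)
ok = verify_action("BradT", "post_comment", "Nice article!", sig, key)
```

Because the signature covers the specific action and body, the site can accept “post this comment as BradT” as a single trusted event, with no session state to create, expire, or steal.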
Note that the first time you use this method on a site, the box would say “Choose identity” and it would be necessary for you to click and get a menu of identities, even if you only have one. This is because there are always tools that try to fake you out and make you press buttons without you knowing it, by taking control of the mouse or covering the buttons with graphics that skip out of the way — there are many tricks. The first handover of identity requires explicit action. It is almost as big an event as creating an account, though not quite that significant.
You could also view the action as, “Use the account BradT, creating it if necessary, and under that name post the comment.” So a single posting would establish your ID and use it, as though the site doesn’t require userids at all.
Submitted by brad on Sat, 2009-03-21 15:28.
I won’t deny that some of my distaste for the religious ending comes from my own preference for a realistic SF story, where everything that happens has a natural, rather than supernatural explanation, and that this comes in part from my non-religious worldview.
Nonetheless, I believe there are many valid reasons why you don’t want to have interventionist gods in your fiction. God should not be a character in your story, unless you are trying to write religious fiction like Left Behind or Touched by an Angel.
The reason is that God, as we know, works in strange and mysterious ways, his wonders to perform. We don’t expect to understand them. In fact, there is not even a requirement that they make sense. Some even argue that if you’re going to write authentic fiction with God as a character his actions should not make sense to the characters or the reader.
The author of a story is “god” in that they can write whatever they want. But in real, quality fiction, the author is constrained as to what they will do. They are supposed to make their stories make sense. Things should happen for a reason. If the stories are about characters, things should happen for reasons that come from the characters. If the story is also about setting, as SF is, reasons come from the setting. Mainstream fiction tries to follow all the rules of the real world. SF tries to explore hypothetical worlds with different technology, or new science, or even ways of living. Fantasy explores fantastic worlds, but when done properly, the author defines the new rules and sticks to them.
But if you make a divine character, even an offscreen divine character, you give the author too much power. They can literally write anything, and declare it to be the will of god. You don’t want your writer able to do that. You may want them to be able to start with anything, but once started the story should make sense.
As BSG ended, Adama and Baltar describe (correctly, but not strongly enough) how improbable it is that evolved humans can mate with the colonials. In reality, the only path to this is common ancestry, i.e. the idea that humans from our-Earth were taken from it and became the Kobolians. But Baltar is able to explain it all away in one line with his new role as priest: it’s the will of god.
In a good story, you don’t get to explain things this way. You need to work a bit harder.
Now, if you absolutely must have a god, you want to constrain that god. That’s not too far-fetched. If you were writing a story within Christianity, and you depicted Jesus torturing innocents, people would not accept it; they would say it’s at odds with how Jesus is defined (though Yahweh had fewer problems with it). BSG’s god is never defined well enough to have any constraints.
He, and his minions, are certainly capricious though: genocides, lies, manipulations, exploding star systems, plotting out people’s lives, leading Starbuck to her death to achieve goals which could easily have been done other ways. Making that cycle of genocide repeat again and again until random chance breaks it. Not the sort of god we can draw much from. (One hopes that if we are going to have gods in our fiction, they provide some moral lesson or other reason for being there, rather than simply being a plot device that explains things that make no sense.)
In literature, bringing in the arbitrary actions at the end of a story to resolve the plot is called a Deus ex Machina and it’s frowned upon for good reasons. The BSG god was introduced early on, so is not a last minute addition. People will disagree, but I think the divinely provided link to real Earth is last minute, in the sense that nothing in the story to that point tells you real Earth is out there, just the rules of drama (that the name “Earth” means something to the audience other than that ruined planet.)
If you want to write religious fiction, of course you can. I’m less interested in reading it. Moore said he did not intend to write this. He wrote the
miniseries and made the Cylons monotheists and the colonials polytheists (like the original) and the network came back and said that was really interesting. So he expanded it.
But he expanded it from something good — characters who have religious beliefs — to something bad. The religious beliefs were true. But they were some entirely made-up religion with little correspondence to any Earth religion (even the Buddhism that Moore professes) and as such with no relevance to the people who tend to seek out religious fiction.
Giving religions to the characters is good. It’s real. It’s an important part of our society worth exploring. However, resolving that some of the beliefs are correct, and bringing in the hand of god is another matter.
More loose ends
- The Colony had several base ships. When it started breaking apart, base ships full of Cavils, Dorals and Simons should have jumped away. What happened to them, and why won’t they come a-calling soon? (God’s will?)
- Likewise, a force of Cavils, Dorals and Simons was invading Galactica and was in a temporary truce when fighting broke out again and Galactica jumped. What happened to them? In particular, since the first Hybrid predicted the splintered Cylon factions would be joined together again, why weren’t they?
- We never resolved why the first Earth was destroyed 2,000 years ago, or why this happened at the same time as the fall of Kobol and the exodus of the 12 tribes. Was this just a big mistake, and were all 13 tribes supposed to flee at the same time?
- I don’t know for sure about 150,000 years ago (it comes and goes) but 135,000 years ago the Sahara was covered by large lakes.
Submitted by brad on Sat, 2009-03-21 03:56.
The posts will come fast and furious in the next two days.
First I want to cover a little more about why this ending is of so much concern to many viewers. While many will accept that it is unscientific, and just say that they never cared that much about such things, the particular errors and issues of the final plot are rather special. What we saw was not merely spacecraft making sound in space or FTL drives or some other random scientific error.
The error in BSG centers around the most pernicious anti-scientific idea of our day: Creationism/Intelligent Design. In particular, it tells the “Ark” story, though it sets it 150,000 years ago rather than 4,000. And, because Moore knows the Ark story is totally bogus, he tries to fix it, by having the alien colonists able to breed with us humans, and thus having the final result be a merger of the two streams of humanity. That’s better than the pure Ark story, and perhaps enough better that I see some viewers are satisfied with it, but with deeper examination, it is just as bad an idea, and perhaps in its way more pernicious because it is easier for people to accept the flaws.
SF writers have been writing the Ark story since the dawn of SF. Indeed, the alien Adam and Eve plot is such a cliche from the 40s that you would have a hard time selling it to an SF magazine today. Not simply because it’s nonsense, but because it became overused back in the day when it wasn’t as obvious to people how nonsensical it was.
The Ark story is not just any bad science. It’s the worst bad science there is. Because there are dedicated forces who want so much for people to accept the Ark story as possible. Normally busy scientists would not even bother to debunk a story like that, but they spend a lot of time debunking this one because of the dedicated religious forces who seek to push it into schools and other places religion does not belong. And debunk it they have, and very solidly. The depth of the debunking is immense, and can’t be covered in this blog. I recommend the talk.origins archive with their giant FAQ for answers to many of the questions about this.
BSG plays a number of tricks to make the Ark story more palatable. It puts it back further in time, prior to the migrations of humanity out of Africa. (Oddly, it also has Adama spread the people around the continents, which simply means all the ones who did not stay in Africa died out without a trace or any descendants.) It makes it a merger rather than a pure origin to account for the long fossil and geological record. It has the aliens destroy all their technology and cast it into the sun to explain why there is no trace of it.
It does all those things, but in the end, the explanation remains religious. As the story is shown, you still need to invoke a variety of divine miracles to make it happen, and the show does indeed do this. The humans on this planet are the same species as the aliens from another galaxy, due to the plan of God. They have cats and dogs and the rest, even though 150,000 years ago, humans had yet to domesticate any animals. Indeed, god has to have designed the colonials from the start to be the same species as the natives of Earth; it all has to have been set up many thousands of years ago. This is “intelligent design,” the form of creationism that gets dressed up like science to help make it more palatable. It is also a pernicious idea.
In one fell swoop, BSG changes from science fiction — hard, soft or otherwise — to religious fiction, or religious SF if you wish. Its story, as shown, is explained on screen as being divine intervention. Now, thanks to BSG, there will be discussion of the ending. But it will involve the defenders of science having to explain again why the Ark story is silly and ignores what we know of biology. I am shocked that Kevin Grazier, who advocates science teaching for children, including biology, was willing to be a part of this ending.
Sadly this ending goes beyond being bad SF.
How to make it work.
Now there is one plot which BSG did not explore which would have made a lot of sense if they wanted to tell this story. It’s been noted on this blog a few times, but discounted because we believed BSG had a “no aliens” rule. This is what I called the “Alien Abduction plot.”
In this plot, aliens — in this case the God, who does not have to be a supernatural god — captured humans and various plants and animals from real Earth many thousands of years ago. The god took them to Kobol, and possibly with other gods (the Lords of Kobol) created a culture and raised them there. From this flows our story.
This plot has been used many times. Recently in Ken Macleod’s “Cosmonaut Keep” series the characters find a human culture way out in the stars, populated by people taken by “gods” (highly advanced beings) a long time ago. The same idea appears in Rob Sawyer’s dinosaur series, and many other books.
Do this, and it suddenly explains why the colonials are the same species as the people on Earth, but more advanced. It does not explain their cats and dogs, or their Earth idioms, but those can be marked down to drama. (They would have to have independently domesticated cats and dogs and other animals, as this had not happened on Earth. Same for the plants. The gods could also have done this for them.)
This plot works well enough that it’s surprising no hint of it was left in the show. I do not believe it was the intention of the writers, though I would love to see post-show interviews declaring that it was.
And even this plot has a hard time explaining what happened to their culture, the metal in their teeth and many other items. For try as they might, they could not abandon all their technology. Even things that seem very basic to the Colonials, like better spears, writing, animal and plant domestication, knives, sailboats and complex language, are still aeons ahead of the humans. They plan to breed with the humans, and will be taking them into their schools and educating them. There was a sudden acceleration of culture 50,000 years ago, but not 150,000. And then there’s the artificial DNA in Hera and any other Cylon descendants. (And no, Hera isn’t the only person we are supposed to be descended from; she is just the source of the maternal lines.) But maybe you can shoehorn it in, which makes it surprising it wasn’t used.
The idea, taken from the old series, that the Greeks would have taken some of their culture from the aliens also is hard to make work. Why do their cultural ideas and now hopefully debunked (to them) polytheist religion show up nowhere else but Greece and eventually Rome? How do they get there, and only there, over 140,000 years of no writing, hunter-gatherer life? I am not a student of classical cultures, but I believe we also have lots of evidence of the origin and evolution of our modern Greek myths. They did not spring, pardon the phrase, fully formed from the head of Zeus. Rather they are based on older and simpler stories we have also traced. But the alien religion is based on our modern concepts of ancient Greek religion.
Even in 5,000 to 10,000 years, there would be a moderate amount of genetic drift in the Kobol environment, including the artificial genetic manipulation involved with Cylons. Since we learn that Africa has more game than the 12 colonies, it’s clear the colonials did not have all of Earth’s animals. It is contact with animals that generates most of our diseases. When different groups of humans get separated for many thousands of years, with different animals, the result is major plagues when they meet. Without divine intervention, the colonials are about to be reduced to a small fraction of their population. Especially after tossing their hospitals into the sun. (Why don’t we see any sick people saying, “Excuse me, do I get a vote on this whole abandon technology idea?”)
The other plot which could have explained this I called the “Atlantis” plot. In this plot there is an advanced civilization long ago which reaches the stars but falls and disappears without a trace. It is the civilization that colonizes Kobol and becomes as gods. This requires no aliens. This is not their chosen plot, since it’s even harder to explain how this civilization left no trace, since it would not have gone to the technology destroying extremes the Colonists are shown to do.
Coming up: Why religious SF is a bad idea, even if you believe in the religion. (Hint: while the author is god, you don’t want them to really use that power.)
Submitted by brad on Mon, 2009-02-23 21:03.
(This post from my Battlestar Galactica Analysis Blog is cross-posted to my main blog too.)
There’s been some debate in the comments here about whether I and those like me are being far too picky about technical and plot elements in Battlestar Galactica. It got meaty enough that I wanted to summarize some thoughts about the nature of quality SF, and the reasons why it is important. BSG is quality SF, and it set out to be, so I hold it to a higher bar. When I criticise it for where it sometimes drops the ball, this is not the criticism of disdain, but of respect.
I wrote earlier about the nature of hard SF. It is traditionally hard to define, and people never fully agree about what it is, and what SF is in general. I don’t expect this essay to resolve that.
Broadly, SF is to me fiction which tries to explore the consequences of science, technology and the future. All fiction asks “what if?” but in SF, the “what if?” is often about the setting, and in particular the technology of the setting, and not simply about the characters. Hard SF makes a dedication to not break the laws of physics and other important principles of science while doing so. Fantasy, on the other hand, is free to set up any rules it likes, though all but the worst fantasy feels obligated to stick to those rules and remain consistent.
Hard SF, however, has another association in people’s minds. Many feel that hard SF has to focus on the science and technology. It is a common criticism of hard SF that it spends so much time on the setting that the characters and story suffer. In some cases they suffer completely; stories in Analog Science Fiction are notorious for this, and give hard SF a bad name.
Perhaps because of that name, Ron Moore declared that he would make BSG be Naturalistic Science Fiction. He declared that he wanted to follow the rules of science, as hard SF does, but as you would expect in a TV show, character and story were still of paramount importance. His credo also described many of the tropes of TV SF he would avoid, including time travel, aliens and stock stereotyped characters.
I am all for this. While hard SF that puts its focus on the technology makes great sense in a Greg Egan novel, it doesn’t make sense in a drama. TV and movies don’t have the time to do it well, nor the audience that seeks this.
However, staying within the laws of physics has a lot of merit. I believe that it can be very good for a story if the writer is constrained, and can’t simply make up anything they desire. Mystery writers don’t feel limited because their characters can’t fly or read minds. In fact, it would ruin most of their mystery plots if they could. Staying within the rules — rules you didn’t set up — can be harder to do, but this often is good, not bad. This is particularly true for the laws of science, because they are real and logical. So often, writers who want to break the rules end up breaking the rules of logic. Their stories don’t make any sense, regardless of questions of science. When big enough, we call these logical flaws plot holes. Sticking to reality actually helps reduce them. It also keeps the audience happy. Only a small fraction of the audience may understand enough science to know that something is bogus, but you never know how many there are, and they are often the smarter and more influential members of the audience.
I lament the poor quality of the realism in TV SF. Most shows do an absolutely dreadful job. I lament this because they are not doing that bad job deliberately. They are just careless. For fees that would be a pittance in any Hollywood budget, they could make good use of a science and SF advisor. (I recommend both. The SF advisor will know more about drama and fiction, and also will know what’s already been done, or done to death, in other SF.) Good use doesn’t mean always doing what they say. While I do think it is good to be constrained, I recognize the right of creators to decide they do want to break the rules. I just want them to be aware that they are breaking the rules. I want them to have decided “I need to do this to tell the story I am telling” and not broken them because they don’t care or don’t think the audience will care.
There does not have to be much of a trade-off between doing a good, realistic, consistent story and having good drama and characters. This is obviously true. Most non-genre fiction happily stays within the laws of reality. (Well, not action movies, but that’s another story.)
Why it’s important
My demand for realism is partly so I get a better, more consistent story without nagging errors distracting me from it. But there is a bigger concern.
TV and movie SF are important. They are the type of SF that most of the world will see. They are what will educate the public about many of the most important issues in science and technology, and these are some of the most important issues of the day. More people will watch even the cable-channel-rated Battlestar Galactica than read the most important novels in the field.
Because BSG is good, it will become a reference point for people’s debates about things like AI and robots, religion and spirituality in AIs and many other questions. This happens in two ways. First, popular SF allows you to explain a concept to an audience quickly. If I want to talk about a virtual reality where everybody is in a tank while they live in a synthetic world, I can mention The Matrix and the audience immediately has some sense of what I am talking about. Because of the flaws in The Matrix I may need to explain the differences between that and what I want to describe, but it’s still easier.
Secondly, people will have developed attitudes about what things mean from the movies. HAL-9000 from 2001 formed a lot of public opinion on AIs. Few get into a debate about robots without bringing up Asimov, or at worst case, Star Wars.
If the popular stories get it wrong, then the public starts with a wrong impression. Because so much TV SF is utter crap, a lot of the public has really crappy ideas about various issues in science and technology. The more we can correct this, the better. So much TV SF comes from people who don’t really even care that they are doing SF. They do it because they can have fancy special effects, or know it will reach a certain number of fans. They have no excuse, though, for not trying to make it better.
BSG excited me because it set a high bar, and promised realism. And in a lot of ways it has delivered. Because it has FTL drives, it would not meet the hard SF fan’s standard, but I understand how you are not going to do an interstellar chase show with sublight travel that would hold a TV audience. And I also know that Moore, the producer, knows this and made a conscious decision to break the rules. There are several other places where he did this.
This was good because the original show, which I watched as an 18 year old, was dreadful. It had no concept of the geometry of space. TV shows and movies are notoriously terrible at this, but this was in the lower part of the spectrum. They just arrived at the planet of the week when the writers wanted them to. And it had this nonsense idea that the Earth could be a colony of ancient aliens. That pernicious idea, the “Ark” theory, is solidly debunked thanks to the fact that creationists keep bringing it up, but it does no good for SF to do anything to encourage it. BSG seemed to be ready to fix all these things. Yet since there are hints that the Ark question may not be addressed, I am disappointed on that count.
To some extent, the criticism that some readers have made is fair: too much attention to detail and demand for perfection can ruin the story for you. You do have to employ some suspension of disbelief to enjoy most SF. Even rule-following hard SF usually invents something new and magical that has yet to be invented. It might be possible, but the writer has no actual clue as to how. You just accept it and enjoy the story. Perhaps I do myself a disservice by getting bothered by minor nits. There are others who have it worse than I do, at least. But I’m not a professional TV science advisor. Perhaps I could be one, but for now, if I can see it, I think it means that they could have seen it. And I always enjoy a show more when it’s clearly obvious how much they care about the details. And so does everybody else, even when they don’t know it. Attention to detail creates a sense of depth which enhances a work even if you never explore the depth. You know it’s there. You feel it, and the work becomes stronger and more relevant.
Now some of the criticisms I am making here are not about science or niggling technical details. Some of the recent trends, I think, are errors of story and character. Of course, you’re never going to be in complete agreement with a writer about where a story or character should go. But if characters become inconsistent, it hurts the story as much or more as when the setting becomes inconsistent.
But still, after all this, let’s see far more shows like Battlestar Galactica 2003, and fewer like Battlestar Galactica 1978, and I’ll still be happy.
Submitted by brad on Thu, 2009-01-15 18:46.
I’ve written about “data hosting/data deposit box” as an alternative to “cloud computing.” Cloud computing is timesharing — we run our software and hold our data on remote computers, and connect to them from terminals. It’s a swing back from personal computing, where you had your own computer, and it erases the 4th amendment by putting our data in the hands of others.
Lately, the more cloud computing applications I use, the more I realize one other benefit that data hosting could provide as an architecture. Sometimes the cloud apps I use are slow. It may be because of bandwidth to them, or it may simply be because they are overloaded. One of the advantages of cloud computing and timesharing is that it is indeed cheaper to buy a mainframe cluster and have many people share it than to have a computer for everybody, because those computers sit idle most of the time.
But when I want a desktop application to go faster, I can just buy a faster computer. And I often have. But I can’t make Facebook faster that way. Right now there’s no way I can do it. If it weren’t free, I could complain, and perhaps pay for a larger share, though that’s harder to solve with bandwidth.
In the data hosting approach, the user pays for the data host. That data host would usually be on their ISP’s network, or perhaps (with suitable virtual machine sandboxing) it might be the computer on their desk that has all those spare cycles. You would always get good bandwidth to it for the high-bandwidth user interface stuff. And you could pay to get more CPU if you need more CPU. That can still be efficient, in that you could possibly be in a cloud of virtual machines on a big mainframe cluster at your ISP. The difference is, it’s close to you, and under your control. You own it.
There’s also no reason you couldn’t allow applications that have some parallelism to them to try to use multiple hosts for high-CPU projects. Your own PC might well be enough for most requests, but perhaps some extra CPU would be called for from time to time, as long as there is bandwidth enough to send the temporary task (or sub-tasks that don’t require sending a lot of data along with them.)
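The fan-out idea above can be sketched in code. This is a minimal, hypothetical illustration (not part of the original post): the `Host` class and host names are made-up stand-ins, and the "hosts" here are simulated locally with threads, where a real data-hosting system would dispatch subtasks to remote machines the user owns or rents.

```python
# Sketch: a client-side dispatcher that keeps work on the user's own
# data host(s), fanning parallel subtasks out across whatever hosts
# the user has paid for. Hosts are simulated locally here.
from concurrent.futures import ThreadPoolExecutor


class Host:
    """A user-owned compute host (hypothetical; simulated in-process)."""

    def __init__(self, name):
        self.name = name

    def run(self, task, data):
        # A real host would receive the task over the network and
        # execute it in a sandboxed virtual machine.
        return task(data)


def dispatch(task, data_chunks, hosts):
    """Round-robin data chunks across the user's hosts in parallel."""
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        futures = [
            pool.submit(hosts[i % len(hosts)].run, task, chunk)
            for i, chunk in enumerate(data_chunks)
        ]
        return [f.result() for f in futures]


# Example: one CPU-heavy subtask split across the user's PC plus two
# extra virtual machines rented from the ISP (names are illustrative).
def square_sum(xs):
    return sum(x * x for x in xs)

hosts = [Host("my-pc"), Host("isp-vm-1"), Host("isp-vm-2")]
results = dispatch(square_sum, [[1, 2], [3, 4], [5, 6]], hosts)
print(sum(results))  # prints 91
```

The point of the sketch is the ownership model, not the threading: because the user supplies the hosts, adding CPU is the user’s decision, just as buying a faster desktop is today.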
And, as noted before, since the users own the infrastructure, this allows new, innovative free applications to spring up because they don’t have to buy their infrastructure. You can be the next youtube, eating that much bandwidth, with full scalability, without spending much on bandwidth at all.
Submitted by brad on Mon, 2008-08-18 21:22.
NBC has had just a touch of coverage of Michael Phelps and his 8 gold medals, which in breaking Mark Spitz’s 7 from 1972 has him declared the greatest Olympic athlete, or even athlete of all time. And there’s no doubt he’s one of the greatest swimmers of all time and this is an incredible accomplishment. Couch potato that I am, I can hardly criticise him.
(We are of course watching the Olympics in HDTV using MythTV, but fast-forwarding over the major bulk of it. Endless beach volleyball, commercials and boring events whiz by. I can’t imagine watching without such a box. I would probably spend more time, which they would like, but be less satisfied and see fewer of the events I wish to.)
Phelps got 8 golds, but 3 of them were relays. He certainly contributed to those relays, may well have made the difference for the U.S. team, and allowed it to win gold it would not have won without him. So it seems fair to count them, no?
No. The problem is you can’t win relay gold unless you are lucky enough to be a citizen of one of a few powerhouse swimming nations, in particular the USA and Australia, along with a few others. Almost no matter how brilliant you are, if you don’t compete for one of these countries, you have no chance at those medals. So only a subset of the world’s population even gets to compete for the chance to win 7 or 8 medals at the games. This applies to almost all team medals, be they relay or otherwise. Perhaps the truly determined can emigrate to a contending country. A pretty tall order.
Phelps won 5 individual golds, and that is also the record, though it is shared by 3 others. He has more golds than anybody, though other athletes have more total medals.
Of course, swimming is one of the special sports in which there are enough similar events that it is possible to attain a total like this. There are many sports that don’t even have 7 events a single person could compete in. (They may have more events but they will be divided by sex, or weight class.)
Shooting has potential for a star. It used to even be mixed (men and women) until they split it. It has 9 male events, and one could in theory be master of them all.
Track and Field has 47 events split over men and women. However, it is so specialized in how muscles are trained that nobody expects sprinters to compete in long events or vice versa. Often the best sprinter does well in Long Jump or Triple Jump, allowing the potential of a giant medal run for somebody able to go from 100m to 400m in range. In theory there are 8 individual events 400m or shorter.
And there are a few other places. But the point is that to do what Phelps (or Spitz) did, you have to be in a small subset of sports, and be from a small set of countries. There have been truly “cross sport” athletes at the Olympics but in today’s world of specialized training, it’s rare. If anybody managed to win multiple golds over different sports and beat this record, then the title of greatest Olympian would be very deserving. One place I could see some crossover is between high-diving and Trampoline. While a new event, Trampoline seems to be like doing 20 vaults or high dives in a row. And not that it wasn’t exciting to watch him race.
Submitted by brad on Tue, 2008-08-12 16:27.
I’ve just returned from Denver and the World Science Fiction Convention (worldcon) where I spoke on issues such as privacy, DRM and creating new intelligent beings. However, I also attended a session on “hard” science fiction, and have some thoughts to relate from it.
Defining the sub-genres of SF, or any form of literature, is a constant topic for debate. No matter where you draw the lines, authors will work to bend them as well. Many people just give up and say “Science Fiction is what I point at when I say Science Fiction.”
Genres in the end are more about taste than anything else. They exist for readers to find fiction that is likely to match their tastes. Hard SF, broadly, is SF that takes extra care to follow the real rules of physics. It may include unknown science or technology but doesn’t include what those rules declare to be impossible. On the border of hard SF one also finds SF that does a few impossible things (most commonly faster-than-light starships) but otherwise sticks to the rules. As stories include more impossible and unlikely things, they travel down the path to fantasy, eventually arriving at a fully fantastic level where the world works in magical ways as the author found convenient.
Even in fantasy however, readers like to demand consistency. Once magical rules are set up, people like them to be followed.
In addition to Hard SF, softer SF and Fantasy, the “alternate history” genre has joined the pantheon, now often dubbed “speculative fiction.” All fiction deals with hypotheticals, but in speculative fiction, the “what if?” is asked about the world, not just the lives of some characters. This year, the Hugo award for best (ostensibly SF) novel of the year went to Chabon’s The Yiddish Policemen’s Union which is a very clear alternate history story. In it, the USA decides to accept Jews that Hitler is expelling from Europe, and gives them a temporary homeland around Sitka, Alaska. During the book, the lease on the homeland is expiring, and there is no Israel. It’s a very fine book, but I didn’t vote for it because I want to promote actual SF, not alternate history, with the award.
However, in considering why fans like alternate history, I realized something else. In mainstream literature, the cliche is that the purpose of literature is to “explore the human condition.” SF tends to expand that, to explore both the human condition and the nature of the technology and societies we create, as well as the universe itself.
SF gets faulted by the mainstream literature community for exploring those latter topics at the expense of the more character oriented explorations that are the core of mainstream fiction. This is sometimes, but not always, a fair criticism.
Hard SF fans want their fiction to follow the rules of physics, which is to say, take place in what could be the real world. In a sense, that’s similar to the goal of mainstream fiction, even though normally hard SF and mainstream fiction are considered polar opposites in the genre spectrum. After all, mainstream fiction follows the rules of physics as well or better than the hardest SF. It follows them because the author isn’t trying to explore questions of science, technology and the universe, but it does follow them. Likewise, almost all alternate history also follows the laws of physics. It just tweaks some past event, not a past rule. As such it explores the “real world” as closely as SF does, and I suspect this is why it is considered a subgenre of fantasy and SF.
I admit to a taste for hard SF. Future hard SF is a form of futurism; an exploration of real possible futures for the world. It explores real issues. The best work in hard SF today comes (far too infrequently) from Vernor Vinge, including his recent Hugo-winning novel, Rainbows End. His most famous work, A Fire Upon the Deep, which I published in electronic form 15 years ago, is a curious beast. It includes one extremely unlikely element of setting — a galaxy where the rules of physics which govern the speed of computation vary with distance from the center of the galaxy. Some view that as fantastic, but its real purpose is to allow him to write about the very fascinating and important topic of computerized super-minds, who are so smart that they are as gods to us. Coining the term “applied theology,” Vinge uses his setting to allow the superminds to exist in the same story as characters like us that we can relate to. Vinge feels that you can’t write an authentic story about superminds, and thus need to have human characters, and so uses this element some would view as fantastic. So I embrace this as hard SF, and for the purists, the novels suggest that the “zones” may be artificial.
The best hard SF thus explores the total human condition. Fantastic fiction can do this as well, but it must do it by allegory. In fantasy, we are not looking at the real world, but we usually are trying to say something about it. However, it is not always good to let the author pick and choose what’s real and what’s not about the world, since it is too easy to fall into the trap of speaking only about your made-up reality and not about the world.
Not that this is always bad. Exploring the “human condition” or reality is just one thing we ask of our fiction. We also always want a ripping good read. And that can occur in any genre.
Submitted by brad on Thu, 2008-07-31 18:26.
I often rant here about the need for better universal power supply technology. And there is some progress. On a recent trip to Europe, I was astounded how much we took in the way of power supply gear. I am curious at what the record is for readers here. I suggested we have a contest at a recent gathering. I had six supplies, and did not win.
Here’s what the two of us had on the German trip in terms of devices. There were slightly fewer supplies, due to the fact several devices charged from USB, which could be generated by laptops or dedicated wall-warts.
- My laptop, with power supply. (Universal, able to run from plane, car or any voltage)
- Her laptop, with power supply.
- My unlocked GSM phone, which despite its mini-USB port needs a dedicated charger, so that was brought
- My CDMA phone, functioning as a PDA, charges from mini-USB
- Her unlocked GSM phone, plus motorola charger
- Her CDMA Treo, as a PDA, with dedicated charger
- My Logger GPS, charges from mini-USB
- My old bluetooth GPS, because I had just bought the logger, charges from mini-USB
- My Canon EOS 40D, with plug in battery charger. 4 batteries.
- Her Canon mini camera, with different plug in battery charger. 2 batteries.
- Canon flash units, with NiMH AA batteries, with charger and power supply for charger.
- Special device, with 12v power supply.
- MP3 player and charger
- Bluetooth headset, charges from same Motorola charger. Today we would have two!
- External laptop battery for 12 hour flight, charges from laptop charger
- Electric shaver — did not bring charger as battery will last trip.
- 4 adapters for Euro plugs, and one 3-way extension cord. One adapter has USB power out!
- An additional USB wall-wart, for a total of 3 USB wall-warts, plus the computers.
- Cigarette lighter to USB adapter to power devices in car.
That’s the gear that will plug into a wall. There was more electronic gear, including USB memory sticks, flash cards, an external wi-fi antenna, headsets, and I’ve probably forgotten a few things.
Submitted by brad on Mon, 2008-06-23 14:34.
My most important essay to date
Today let me introduce a major new series of essays I have produced on “Robocars” — computer-driven automobiles that can drive people, cargo, and themselves, without aid (or central control) on today’s roads.
It began with the DARPA Grand Challenges convincing us that, if we truly want it, we can have robocars soon. And then they’ll change the world. I’ve been blogging on this topic for some time, and as a result have built up what I hope is a worthwhile work of futurism laying out the consequences of, and path to, a robocar world.
Those consequences, as I have considered them, are astounding.
- It starts with saving a million young lives every year (45,000 in the USA) as well as preventing untold injury and suffering.
- It saves trillions of dollars wasted on congestion, accidents and time spent driving.
- Robocars can solve the battery problem of the electric car, making the electric car attractive and inexpensive. They can do the same for many other alternate fuels, too.
- Electric cars are cheap, simple and efficient once you solve the battery/range problems.
- Switching most urban driving to electric cars, especially ultralight short-trip vehicles, means a dramatic reduction in energy demand and pollution.
- It could be enough to wean the USA off of foreign oil, with all the change that entails.
- It means rethinking cities and manufacturing.
- It means the death of old-style mass transit.
All thanks to a Moore’s law driven revolution in machine vision, simple A.I. and navigation sponsored by the desire for cargo transport in war zones. In the way stand engineering problems, liability issues, fear of computers and many other barriers.
At 33,000 words, these essays are approaching book length. You can read them all now, but I will also be introducing them one by one in blog posts for those who want to space them out and make comments. I’ve written so much because I believe that of all the computer projects available to us in the modest term, none could bring more good to the world than robocars. While certain longer term projects like A.I. and Nanotech will have grander consequences, Robocars are the sweet spot today.
I have also created a new Robocars topic on the blog which collects my old posts, and will mark new ones. You can subscribe to that as a feed if you wish. (I will cease to use the self-driving cars blog tag I was previously using.)
If you like what I’ve said before, this is the big one. You can go to the:
Master Robocar Index (Which is also available via robocars.net.)
or jump to the first article:
The Case for Robot Cars
You may also find you prefer to be introduced to the concept through a series of stories I have developed depicting a week in the Robocar world. If so, start with the stories, and then proceed to the main essays.
A Week of Robocars
These are essays I want to spread. If you find their message compelling, please tell the world.
Submitted by brad on Wed, 2008-05-21 18:23.
Recently, I wrote about the data deposit box, an architecture where applications come to the data rather than copying your personal data to all the applications.
Let me examine some more of the pros and cons of this approach:
The biggest con is that it does make things harder for application developers. The great appeal of the Web 2.0 “cloud” approach is that you get to build, code and maintain the system yourself. No software installs, and much less portability testing (browser versions) and local support. You control the performance and how it scales. When there’s a problem, it’s in your system, so you can fix it. You design it how you want, in any language you want, for any OS you want. All the data is there, and there are no rules. You can update the software at any time; the only pieces outside your control are the user’s browser and plugins.
The next con is the reliability of users’ data hosts. You don’t control them. If a user’s data host is slow or down, you can’t fix that. If you want the host to serve data to the user’s friends, it may be slow for those other people. The host may not be located in the same country as the person requesting data from it, making things slower.
The last con is also the primary feature of data hosting. You can’t get at all the data. You have to get permissions, and do special things to get at data. There are things you just aren’t supposed to do. It’s much easier, at least right now, to convince the user to just give you all their data with few or no restrictions, and just trust you. Working in a more secure environment is always harder, even if you’re playing by the rules.
Those are pretty big cons. Especially since the big “pro” — stopping the massive and irrevocable spread of people’s data — is fairly abstract to many users. It is the fundamental theorem of privacy that nobody cares about it until after it’s been violated.
But there’s another big pro — cheap scalability. If users are paying for their own data hosting, developers can make applications with minimal hosting costs. Today, building a large cloud app that will get a lot of users requires a serious investment in providing enough infrastructure for it to work. YouTube grew by spending money like water for bandwidth and servers, and so have many other sites. If you have VCs, it’s relatively inexpensive, but if you’re a small time garage innovator, it’s another story. In the old days, developers wrote software that ran on users’ PCs. Running the software didn’t cost the developer anything, but trying to support it on a thousand different variations of the platform did.
With a data hosting architecture, we can get the best of both worlds. A more stable platform (or so we hope) that’s easy to develop for, with no duty to host most of its operations. Because there is no UI in the data hosting platform, it’s much simpler to make it portable. People joked that Java became “write once, debug everywhere” for client apps, but for server code it’s much closer to its original vision. The UI remains in the browser.
For applications with money to burn, we could develop a micropayment architecture so that applications could pay for your hosting expenses. Micropayments are notoriously hard to get adopted, but they do work in more restricted markets. Applications could send payment tokens to your host along with the application code, allowing your host to give you bandwidth and resources to run the application. It would all be consolidated in one bill to the application provider.
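To make the token idea concrete, here is a toy sketch of how a data host might redeem payment tokens and consolidate the charges into one bill per application provider. Everything here — the class, the token format, the field names — is invented for illustration, not a proposed standard.

```python
# Toy sketch: a data host redeems payment tokens that arrive with
# application code, and bills each application provider once.
# All names and structures are invented for illustration.

class DataHostBilling:
    def __init__(self):
        self._ledger = {}  # application provider -> resource units owed

    def redeem(self, token):
        """Accept a payment token accompanying application code and
        record who ultimately pays for the resources it buys."""
        provider = token["provider"]
        self._ledger[provider] = self._ledger.get(provider, 0) + token["resource_units"]
        return token["resource_units"]  # units the app may now consume

    def monthly_bill(self, provider):
        """Consolidated total for one provider — the 'one bill'."""
        return self._ledger.get(provider, 0)

billing = DataHostBilling()
billing.redeem({"provider": "photo-app.example", "resource_units": 5})
billing.redeem({"provider": "photo-app.example", "resource_units": 3})
print(billing.monthly_bill("photo-app.example"))  # 8
```

The point of the design is that the user never sees a charge: the host meters resource use against redeemed tokens and settles with the provider directly.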
Alternately, we could develop a system where users allow applications to cache results from their data host for limited times. That way the application providers could pay for reliable, globally distributed resources to cache the results.
For example, say you wanted to build Flickr in a data hosting world. Users might host their photos, comments and resized versions of the photos in their data host, much of it generated by code from the data host. Data that must be aggregated, such as a search index based on tags and comments, would be kept by the photo site. However, when presenting users with a page filled with photo thumbnails, those thumbnails could be served by the owner’s data host, but this could generate unreliable results, or even missing results. To solve this, the photo site might get the right to cache the data where needed. It might cache only for users who have poor hosting. It might grant those who provide their own premium hosting with premium features since they don’t cost the site anything.
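The cache-fallback behavior described above can be sketched in a few lines. This is only an illustration of the design, with invented function names: try the photo owner’s data host first, and fall back to the photo site’s own cached copy when the host is slow or unreachable.

```python
# Toy sketch of the thumbnail fallback: serve from the owner's data
# host when it is up, from the site's cache when it is not.
# All names are invented for illustration.

def fetch_thumbnail(photo_id, data_host_fetch, site_cache):
    """data_host_fetch: callable returning thumbnail bytes, raising
    OSError on failure. site_cache: dict the photo site is permitted
    to keep for reliability."""
    try:
        thumb = data_host_fetch(photo_id)
        site_cache[photo_id] = thumb  # refresh the cache while the host is up
        return thumb
    except OSError:
        # Host down or slow: serve the cached copy so the page still renders
        return site_cache.get(photo_id)

cache = {"p1": b"cached-bytes"}

def flaky_host(photo_id):
    raise OSError("data host unreachable")

print(fetch_thumbnail("p1", flaky_host, cache))  # b'cached-bytes'
```

A site could apply this selectively, as suggested above — caching only for users whose hosting is unreliable, and skipping the cache (and its cost) for users with premium hosting.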
As such, well funded startups could provide well-funded quality of service, while no-funding innovators could get going relying on their users. If they became popular, funding would no doubt become available. At the same time, if more users buy high quality data hosting, it becomes possible to support applications that don’t have and never will have a “business model.” These would, in effect, be fee-paid apps rather than advertising or data harvesting funded apps, but the fees would be paid because the users would take on the costs of their own expenses.
And that’s a pretty good pro.
Submitted by brad on Fri, 2008-05-09 16:11.
I learned today that there is an exhibit about my father in the famous creation museum near Cincinnati. This museum is a multi-million dollar project set up by creationists as a pro-bible “natural history” museum that shows dinosaurs on Noah’s Ark, how the flood carved the Grand Canyon and much more. It’s all complete bollocks, and a number of satirical articles about it have been written, including the account by SF writer John Scalzi.
While almost all this museum is about desperate attempts to make the creation story sound like natural history, it also has the “Biblical Authority Room.” This room features my father, Charles Templeton in two sections. It begins with this display on bible enemies which tells the story of how he went to Princeton seminary and lost his faith. (Warning: Too much education will kill your religion.)
However, around the corner is an amazing giant alcove. It shows a large mural of photos and news stories about my father as a preacher and later. On the next wall is an image of a man (clearly meant to be him though the museum denied it) digging a grave with the tombstone “God is Dead.” There are various other tombstones around for “Truth,” “God’s Word” and “Genesis.” There is also another image of the mural showing it a bit more fully.
Next to the painting is a small brick alcove which for the life of me looks like a shrine.
In it is a copy of his book Farewell to God along with a metal plaque with a quote from the book about how reality is inconsistent with the creation story. (You can click on the photo, courtesy Andrew Arensburger, to see a larger size and read the inscription.)
I had heard about this museum for some time, and even contemplated visiting it the next time I was in the area, though part of me doesn’t want to give them $20. However now I have to go. But I remain perplexed that he gets such a large exhibit, along with the likes of Darwin, Scopes and Luther.
Today, after all, only older people know of his religious career, though at his peak he was one of the most well known figures in the field. He and his best friend, Billy Graham, were taking the evangelism world by storm, and until he pulled out, many people would have bet that he, rather than Graham, would become the great star. You can read his memoir here online.
But again, this is all long ago, and a career long left behind. But there may be an explanation, based on what he told me when he was alive.
Among many fundamentalists, there is a doctrine of “Once Saved, Always Saved.” What this means is that once Jesus has entered you and become your personal saviour, he would never, ever desert you. It is impossible for somebody who was saved to fall. This makes apostasy a dreadful sin, for it creates a giant contradiction. For many, the only way to reconcile this is to decide that he never was truly saved after all. That it was all fake. Only somebody who never really believed could fall.
Except that’s not the case here. He had the classic “religious experience” conversion, as detailed in his memoir. He was fully taken up with it. And more to the point, unlike most, when much later he truly came to have doubts, he debated them openly with his friends, like Graham. And finally decided that he couldn’t preach any more after decades of doing so, giving up fame and a successful career with no new prospects. He couldn’t do it because he could not feel honest preaching to people when he had become less sure himself. Not the act of somebody who was faking it all along.
However, this exhibit in the museum doesn’t try to paint it that way. Rather, it seems to be a warning that too much education by godless scientists can hurt your faith.
So there may be a second explanation. As a big-time preacher, with revival meetings filling sporting arenas, my father converted a lot of people to Christianity. He was one of the founders of Youth for Christ International, which is today still a major religious organization. I meet these converts from time to time. I can see how, if you came to your conversion through him, my father’s renunciation of it must be very hurtful — especially when combined with the once-saved-always-saved doctrine. So I have to wonder if somebody at the Creation Museum isn’t one of his converts, and thus wanted to tell the story of a man that many of the visitors to the museum will have forgotten.
Here are some other Charles Templeton links on my site:
Right now I’m in the process of scanning some of his books and will post when I have done this.
Submitted by brad on Fri, 2008-05-09 00:14.
I’m scanning my documents on an ADF document scanner now, and it’s largely pretty impressive, but I’m surprised at some things the system won’t do.
Double page feeding is the bane of document scanning. To prevent it, many scanners offer methods of double feed detection, including ultrasonic detection of double thickness and detection when one page is suddenly longer than all the others (because it’s really two.)
There are a number of other tricks they could use, I think. A paper feeder that used air suction or gecko-foot van der Waals pluckers on both sides of a page, pulling the two sides in different directions, could help not just detect but eliminate double feeds.
However, the most the double feed detectors do is signal an exception and stop the scan, which means work re-feeding pages and a need to stand by the machine.
But many documents have page numbers, and we’re going to OCR them anyway, and the OCR engine is pretty good at detecting page numbers (mostly out of a desire to remove them). It seems to me a good approach would be to look for gaps in the page numbers, especially combined with the other signs of a double feed. Then don’t stop the scan, just keep going, and report to the operator which pages need to be scanned again. Those pages would be re-scanned, their numbers extracted, and they would be inserted in the right place in the final document.
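The gap-detection step is simple enough to sketch. This is a minimal illustration, assuming the OCR stage has already produced one extracted page number per sheet (with None where no number was found); the function name is invented.

```python
# Minimal sketch of page-number gap detection after OCR.
# ocr_page_numbers: one entry per scanned sheet, None where the OCR
# engine found no page number (blank or unnumbered page).

def find_missing_pages(ocr_page_numbers):
    """Return page numbers that never came through the scanner,
    which suggests a double feed swallowed them."""
    missing = []
    last = None
    for num in ocr_page_numbers:
        if num is None:
            continue  # unnumbered sheet; draw no conclusion from it
        if last is not None and num > last + 1:
            # Pages last+1 .. num-1 are absent from the scan
            missing.extend(range(last + 1, num))
        last = num
    return missing

# Sheets came through as pages 1, 2, 3, 6, 7 — a double feed ate 4 and 5:
print(find_missing_pages([1, 2, 3, 6, 7]))  # [4, 5]
```

As noted above, this only catches what the numbering scheme allows: unnumbered blank pages and per-chapter numbering would need extra handling, but a check like this would flag most double feeds without stopping the scan.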
Of course, it’s not perfect. Sometimes page numbers are not put on blank pages, and some documents number only within chapters. So you might not catch everything, but you could catch a lot of stuff. Operators could quickly discern the page numbering scheme (though I think the OCR could do this too) to guide the effort.
I’m seeking a maximum convenience workflow. I think to do that the best plan is to have several scanners going, and the OCR after the fact in the background. That way there’s always something for the operator to do — fixing bad feeds, loading new documents, naming them — for maximum throughput. Though I also would hope the OCR software could do better at naming the documents for you, or at least suggesting names. Perhaps it can, the manual for Omnipage is pretty sparse.
While some higher-end scanners do figure out the size of the page (at least the length) on their own, I am not sure why this isn’t a trivial feature for every ADF scanner. My $100 Strobe sheetfed scanner does it. That my $6,000 (retail) FI-5650 needs extra software seems odd to me.
Submitted by brad on Mon, 2008-05-05 20:08.
I’ve been ranting of late about the dangers inherent in “Data Portability” which I would like to rename as BEPSI to avoid the motherhood word “portability” for something that really has a strong dark side as well as its light side.
But it’s also important to come up with an alternative. I think the best alternative may lie in what I would call a “data deposit box” (formerly “data hosting.”) It’s a layered system, with a data layer and an application layer on top. Instead of copying the data to the applications, bring the applications to the data.
A data deposit box approach has your personal data stored on a server chosen by you. That server’s duty is not to exploit your data, but rather to protect it. That’s what you’re paying for. Legally, you “own” it, either directly, or in the same sense as you have legal rights when renting an apartment — or a safety deposit box.
Your data box’s job is to perform actions on your data. Rather than giving copies of your data out to a thousand companies (the Facebook and Data Portability approach) you host the data and perform actions on it, programmed by those companies who are developing useful social applications.
As such, you don’t join a site like Facebook or LinkedIn. Rather, companies like those build applications and application containers which can run on your data. They don’t get the data, rather they write code that works with the data and runs in a protected sandbox on your data host — and then displays the results directly to you.
To take a simple example, imagine a social application wishes to send a message to all your friends who live within 100 miles of you. Using permission tokens provided by you, it is able to connect to your data host and ask it to create that subset of your friend network, and then e-mail a message to that subset. It never sees the friend network at all.
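The example above can be sketched as a tiny in-process simulation. None of this is a real protocol — the class, the token format and the method names are all invented to show the shape of the idea: the friend list stays inside the data host, and the application gets back only a derived result.

```python
# Toy stand-in for a user's data deposit box. The friend list lives
# inside the host; applications receive only derived results.
# All names and the token format are invented for illustration.

class DataHost:
    def __init__(self, friends):
        # (name, email, miles_away) tuples; never exposed to applications
        self._friends = friends

    def notify_friends_within(self, token, radius_miles, message):
        """Run the distance query and 'send' the mail host-side; the
        calling application learns only how many friends were reached."""
        if "notify_friends" not in token["grants"]:
            raise PermissionError("token does not permit this action")
        recipients = [f for f in self._friends if f[2] <= radius_miles]
        # A real host would deliver `message` to each recipient here.
        return len(recipients)

host = DataHost([("Ann", "ann@example.com", 40),
                 ("Bob", "bob@example.com", 250),
                 ("Cem", "cem@example.com", 90)])
token = {"grants": {"notify_friends"}}
print(host.notify_friends_within(token, 100, "Party Saturday!"))  # 2
```

The permission token is the key design choice: it scopes the application to one action ("query friends and mail them") rather than granting it a copy of the data, so revoking the token revokes everything.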
Submitted by brad on Fri, 2008-04-25 14:00.
I’ve spoken about the Web 2.0 movement that is now calling itself “data portability.” There are now web sites and format specifications, and plans are underway to make it possible to quickly export the personal data you put on one social networking site to another. While that sounds like a good thing — we like interoperability, cooperation and low barriers to entry for new players — I sometimes seem like a lone voice warning about some of the negative consequences.
I know I’m not going to actually stop the data portability movement, and nor is that really my goal. But I do have a challenge for it: Switch to a slightly negative name. Data portability sounds like motherhood, and this is definitely not a motherhood issue. Deliberately choosing a name that includes the negative connotations would make people stop and think as they implement such systems. It would remind them, every step of the way, to consider the privacy implications. It would cause people asking about the systems to query what they have done about the downsides.
And that’s good, because otherwise it’s easy to put on a pure engineering mindset and say, “what’s the easiest way we can build the tools to make this happen?” rather than “what’s a slightly harder way that mitigates some of the downsides?”
A name I dreamed up is BEPSI, standing for Bulk Export of Personal and Sensitive Information. This is just as descriptive, but reminds you that you’re playing with information that has consequences. Other possible names include EBEPSI (Easy Bulk Export…) or OBEPSI (One-click Bulk Export…) which sounds even scarier.
It’s rare for people to do something so balanced, though. Nobody likes to be reminded there could be problems with what they’re doing. They want a name that sounds happy and good, so they can feel happy and good. And I know the creator of dataportability.org thinks he’s got a perfectly good name already so there will be opposition. But a name like this, or another similar one, would be the right thing to do. Remind people of the paradoxes with every step they take.