Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.

This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see the menu on the right), and some are sub blogs, like Robocars, photography and Going Green. Try my home page for more info and contact data.

Battlestar's "Daybreak:" The worst ending in the history of on-screen science fiction

Battlestar Galactica attracted a lot of fans and a lot of kudos during its run, and engendered this sub blog about it. Here, in my final post on the ending, I present the case that its final hour was the worst ending in the history of science fiction on the screen. This is a condemnation of course, but also praise, because my message is not simply that the ending was poor, but that the show rose so high that it was able to fall so very far. I mean it was the most disappointing ending ever.

(There are, of course, major spoilers in this essay.)

Other SF shows have ended very badly, to be sure. This is particularly true of TV SF. Indeed, it is in the nature of TV SF to end badly. First of all, it’s written in episodic form. Most great endings are planned from the start. TV endings rarely are. To make things worse, TV shows are usually ended when the show is in the middle of a decline. They are often the result of a cancellation, or sometimes a producer who realizes a cancellation is imminent. Quite frequently, the decline that led to cancellation can be the result of a creative failure on the show — either the original visionaries have gone, or they are burned out. In such situations, a poor ending is to be expected.

Sadly, I’m hard pressed to think of a TV SF series that had a truly great ending. That’s the sort of ending you might find in a great book or movie, the ending that caps the work perfectly, which solidifies things in a cohesive whole. Great endings will sometimes finally make sense out of everything, or reveal a surprise that, in retrospect, should have been obvious all along. I’m convinced that many of the world’s best endings came about when the writer actually worked out the ending first, and then wrote a story leading to that ending.

There have been endings that were better than the show. Star Trek: Voyager sank to dreadful depths in the middle of its run, and its mediocre ending was thus a step up. Among good SF/Fantasy shows, Quantum Leap, Buffy and The Prisoner stand out as having had decent endings. Babylon 5’s endings (plural) were good but, just as I praise Battlestar Galactica (BSG) by saying its ending sucked, Babylon 5’s endings were not up to the high quality of the show. (What is commonly believed to be B5’s original planned ending, written before the show began, might well have made the grade.)

Ron Moore’s goals

To understand the fall of BSG, one must examine it both in terms of more general goals for good SF, and the stated goals of the head writer and executive producer, Ronald D. Moore. The ending failed not only by my standards (which you may or may not care about) but also by his.

Moore began the journey by laying out a manifesto of how he wanted to change TV SF. He wrote an essay about Naturalistic science fiction where he outlined some great goals and promises, which I will summarize here, in a slightly different order:

  • Avoiding SF clichés like time travel, mind control, god-like powers, and technobabble.
  • Keeping the science real.
  • Strong, real characters, avoiding the stereotypes of older TV SF. The show should be about them, not the hardware.
  • A new visual and editing style unlike what has come before, with a focus on realism.

Over time he expanded, modified and sometimes intentionally broke these rules. He allowed the ships to make sound in space after vowing they would not. He eschewed aliens in general. He increased his focus on characters, saying that his mantra in concluding the show was “it’s the characters, stupid.”

The link to reality

In addition, his other goal for the ending was to make a connection to our real world, to let the audience see how the story of the characters related to our story. Indeed, the writers toyed with not destroying Galactica, but leaving it buried on Earth, and ending the show with the discovery of the ship in Central America. They rejected this ending because they felt it would violate our contemporary reality too quickly, and make it clear this was an alternate history. Moore felt an alternate universe was not sufficient.

The successes, and then failures

During its run, BSG offered much that was great, in several cases groundbreaking elements never seen before in TV SF:

  • Artificial minds in humanoid bodies who were emotional, sexual and religious.
  • Getting a general audience to understand the “humanity” of these machines.
  • Stirring space battles with much better concepts of space than typically found on TV. Bullets and missiles, not force-rays.
  • No bumpy-head aliens, no planet of the week, no cute time travel or alternate-reality-where-everybody-is-evil episodes.
  • Dark stories of interesting characters.
  • Multiple copies of the same being, beings programmed to think they were human, beings able to transfer their mind to a new body at the moment of death.
  • A mystery about the origins of the society and its legends, and a mystery about a lost planet named Earth.
  • A mystery about the origin of the Cylons and their reasons for their genocide.
  • Daring use of concepts like suicide bombing and terrorism by the protagonists.
  • Kick-ass leadership characters in Adama and Roslin who were complex, but neither over the top nor understated.
  • Starbuck as a woman. Before she became a toy of god, at least.
  • Baltar: One of the best TV villains ever, a self-centered slightly mad scientist who does evil without wishing to, manipulated by a strange vision in his head.
  • Other superb characters, notably Tigh, Tyrol, Gaeta and Zarek.

But it all came to a far lesser end due to the following failures I will outline in too much detail:

  • The confirmation/revelation of an intervening god as the driving force behind events
  • The use of that god to resolve large numbers of major plot points
  • A number of significant scientific mistakes on major plot points, including:
    • Twisting the whole story to fit a completely wrong idea of what Mitochondrial Eve is
    • To support that concept, an impossible-to-credit political shift among the characters
    • The use of concepts from Intelligent Design to resolve plot issues
    • The introduction of the nonsense idea of “collective unconscious” to explain cultural similarities
  • The use of “big secrets” to dominate what was supposed to be a character-driven story
  • Removing all connection to our reality by trying to build a poorly constructed one
  • Mistakes, one of them major and never corrected, which misled the audience

And then I’ll explain the reason why the fall was so great — how, until the last moments, a few minor differences could have fixed most of the problems.

Two wheeled robocars and the Twill

I have mostly written about 3 and 4 wheeled Robocars, even when the vehicles are narrow and light. Having 3 or 4 wheels of course means stability when stopped or slow, but I have also been concerned that even riding a 2 wheeled vehicle like a motorcycle requires a lot of rider participation. It is necessary to lean into turns. It’s disconcerting being the stoker on a tandem bicycle or the passenger on a motorcycle, compared to being a car passenger. You certainly don’t imagine yourself reading a book in such situations.

On the other hand, 3 and 4 wheeled vehicles have their disadvantages. They must have a wide enough wheelbase to be stable because they can’t easily lean. In addition, for full stability you want to keep their center of gravity as low as you can. The extra width means a lot more drag, unless you have a design like the Aptera Motors entrant in the Progressive 100mpg X-prize, which puts the wheels out to the sides.

I recently met Chris Tacklind, who has a design-stage startup called Twill Tech. They have not produced a vehicle yet, but their concepts are quite interesting. Their planned vehicle, the Twill, has two wheels but uses computer control to allow it to stay stable when stopped. It does this by slight motions of the wheels, the same way that pro cyclists will do a track stand. They believe they can make a 2 wheeled electric motorcycle that can use this technique to stay stable when stopped, though it would need to extend extra legs when parked.
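
Keeping a two-wheeler balanced at a stop is essentially the inverted pendulum problem. As a minimal sketch of the idea (my own illustration, not Twill's actual controller; the IMU and motor calls and the gains are hypothetical), a PD loop can read the lean angle and command small wheel motions to move the contact patch back under the center of mass:

    # Sketch of track-stand stabilization as an inverted-pendulum PD loop.
    # Hypothetical interfaces; signs and gains depend on vehicle geometry.

    KP, KD = 40.0, 8.0          # proportional and damping gains (illustrative)

    def wheel_command(lean_angle, lean_rate):
        """Map lean state (rad, rad/s) to a small wheel velocity (m/s)."""
        return KP * lean_angle + KD * lean_rate   # move toward the fall

    # while vehicle_stopped:                # run at a few hundred Hz
    #     angle, rate = imu.read_lean()     # hypothetical IMU call
    #     motor.set_velocity(wheel_command(angle, rate))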

This is intended to be an enclosed vehicle, both for rider comfort and lower drag. The seat is very different from a motorcycle seat, in that you do not sit astride the vehicle, but in a chair similar to a spacecraft’s zero-G chair.

In addition, the vehicle is designed to have the rear wheel on a lever arm so that it can stand almost upright when stopped and then slope down low, with the rider reclined, at higher speeds. The reclined position is necessary for decent drag numbers at speed — the upright human creates a lot of the drag in a bicycle or motorcycle. However, the upright position when slow or stopped allows for much easier entry and exit of the vehicle. As everybody knows, really low cars are harder to get in and out of. Twill is not the first company to propose a vehicle which rises and lowers. For example, the MIT CityCar plan proposes this so the vehicles can stack for parking. Even without stacking, such designs can park in a much smaller space.

Tales of the Michael Jackson lottery, eBay and security

I’ve been fascinated of late with the issue of eBay auctions of hot-hot items, like the PlayStation 3 and others. The story of the Michael Jackson memorial tickets is an interesting one.

17,000 tickets were given out as 8,500 pairs to winners chosen from 1.6 million online applications. Applicants had to give their name and address, and if they won, they further had to use or create a Ticketmaster account to get their voucher. They then had to take the voucher to Dodger Stadium in L.A. on Monday. (This was a dealbreaker even for honest winners from too far outside L.A., such as a Montreal flight attendant.) At the stadium, they had to present ID to show they were the winner, whereupon they were given 2 tickets (with random seat assignment) and two standard club security wristbands, one of which was affixed to their arm. They were told if the one on the arm was damaged in any way, they would not get into the memorial. The terms indicated the tickets were non-transferable.

Immediately a lot of people, especially winners not from California, tried to sell tickets on eBay and Craigslist. In fact, even before the lottery results, people were listing something more speculative: “If I win the lottery, you pay me and you’ll get my tickets.” (One could enter the lottery directly of course, but buying from other entrants would increase your chances, as only one entry was allowed, in theory, per person.)

Both eBay and Craigslist had very strong policies against listing these tickets, and apparently had staff and software working regularly to remove listings. Listings on eBay mostly disappeared quickly, though some persisted for unknown reasons. Craigslist listings also vanished quickly, though some sellers were clever enough to put their phone numbers in their listing titles. On Craigslist a deleted ad still shows up in the search summary for some time after the posting itself is gone.

There was a strong backlash by fans against the sellers. On both sites, ordinary users were regularly hitting the links to report inappropriate postings. In addition, a brand new phenomenon emerged on eBay — some users were deliberately placing 99 million dollar bids on any auction they found for tickets, eliminating any chance of further bidding. (See note) In the past that could earn you negative reputation, but eBay has removed negative reputation for buyers. It could also earn you a mark as a non-paying buyer, but in this case, the seller is unable to file such a complaint because their auction of the non-transferable ticket itself violates eBay’s terms.

A standard OS mini-daemon, saving power and memory

On every system we use today (except the iPhone) a lot of programs want to be daemons — background tasks that sit around to wait for events or perform certain regular operations. On Windows things seem to be the worst, which is why I wrote before about how Windows needs a master daemon. A master daemon is a single background process that uses a scripting language to perform most of the daemon functions that other programs are asking for. A master daemon will wait for events and fire off more full-fledged processes when they happen. Scripts would allow detection of connections on ports, updated software versions becoming available, input from the user and anything else that becomes popular.

(Unix always had a simple master daemon for internet port connections, called inetd, but today Linux systems tend to be full of always-running daemons.)

Background tasks make a system slow to start up, and take memory. This is becoming most noticeable on our new, lower powered devices like smartphones. So much so that Apple made the dramatic decision to not allow applications to run in the background. No multitasking is allowed. This seriously restricts what the iPhone can do, but Apple feels the increase in performance is worth it. It is certainly true that Windows Mobile (which actually makes it hard to terminate a program once you start it running) very quickly bloats down and becomes unusable.

Background tasks are also sucking battery life on phones. On my phone it’s easy to leave Google Maps running in the background by mistake, and then it will sit there constantly sucking down maps, using the network and quickly draining the battery. I have not tried all phones, but Windows Mobile on my HTC is a complete idiot about battery management. Once you start up the network connection, you seem to have to take it down manually, and if you don’t, you can forget about your battery life. Often is the time you’ll pull the phone out to find it warm and draining. I don’t know if the other multitasking phones, like the Android, Pre and others, have this trouble.

The iPhone’s answer is too draconian. I think the answer lies in a good master daemon, where programs can provide scripts in a special language to get the program invoked on various events. Whatever is popular should be quickly added to the daemon if it’s not too large. (The daemon itself can be modular so it only keeps in ram what it really needs.)

In particular, the scripts should say how important quick response time is, and whether the woken code will want to use the network. Consider an e-mail program that wants to check for new e-mail every 10 minutes. (Ideally it should have IMAP push but that’s another story.)

The master daemon scheduler should realize the mail program doesn’t have to connect exactly every 10 minutes, though that is what a background task would do. It doesn’t mind if it’s off by even a few minutes. So if there are multiple programs that want to wake up and do something every so often, they can be scheduled to be loaded only one or two at a time, to conserve memory and CPU. So the e-mail program might wait a few minutes for something else to complete. In addition, since the e-mail program wants to use the network, groups of programs that want to use the network could be executed in order (or even, if appropriate, at the same time) so that the phone ends up setting up a network connection (on session based networks), running all the network daemons, and then closing it down.
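
To make the coalescing concrete, here is a toy scheduler (entirely a sketch; the Task fields and network hooks are invented for illustration). Each task declares its period and how much slack it tolerates, and any task whose deadline falls within its slack gets pulled into the current batch, so the radio comes up once per batch:

    import heapq, itertools, time

    # Toy coalescing scheduler: a sketch of the idea, not a real phone API.
    class Task:
        def __init__(self, name, period, slack, uses_network, run):
            self.name, self.period, self.slack = name, period, slack
            self.uses_network, self.run = uses_network, run

    def run_scheduler(tasks):
        tie = itertools.count()                 # tiebreaker for the heap
        queue = [(time.time() + t.period, next(tie), t) for t in tasks]
        heapq.heapify(queue)
        while queue:
            due, _, task = heapq.heappop(queue)
            time.sleep(max(0.0, due - time.time()))
            batch = [task]
            # Grab every other task due within its own slack, so one wakeup
            # (and one radio session) serves several programs.
            while queue and queue[0][0] - time.time() <= queue[0][2].slack:
                batch.append(heapq.heappop(queue)[2])
            if any(t.uses_network for t in batch):
                pass                            # bring the network up once here
            for t in batch:
                t.run()
                heapq.heappush(queue, (time.time() + t.period, next(tie), t))
            # ...and tear the network down again before sleeping.

    # e.g. mail check every 10 minutes, tolerating a few minutes of drift:
    # run_scheduler([Task("mail", 600, 180, True, check_mail), ...])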

The master daemon could also centralize event notifications coming from the outside. Programs that want to be woken up for such events (such as incoming emails or IMs) could register to be woken up on various events on ports. If the wireless network doesn’t support that, it might allow notifications to come in via SMS that a new task awaits. When this special SMS comes in, the network connection would be brought up, and the signalled task would run, along with other tasks that want to do a quick check of the network. As much of this logic as possible should be in the daemon script, so that the full program is only woken up if that is truly needed.

The daemon would of course handle all local events (key presses, screen touches) and also events from other sensors, like the GPS (wake me up if we get near here, or more than 100 meters from there, etc.) It would also detect gestures with the accelerometer. If the user shakes the phone or flips it in a certain way, a program might want to be woken up.

And of course, it should be tied to the existing daemon that handles incoming calls and SMSs. Apps should be able to (if given permission) take control of incoming communications, to improve what the regular phone does.
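
Pulling these registration ideas together, a program's script might boil down to a small declarative manifest like this (an invented format purely for illustration; none of these keys are a real API):

    # Hypothetical manifest a mail app hands to the master daemon.
    # The full program is only loaded when one of these events fires.
    MAIL_APP_EVENTS = {
        "listen_port": {"port": 993, "wake": "mailapp --new-connection"},
        "sms_wakeup":  {"match": "MAILWAKE", "wake": "mailapp --fetch"},
        "periodic":    {"every_s": 600, "slack_s": 180, "network": True,
                        "wake": "mailapp --poll"},
        "near_place":  {"lat": 37.40, "lon": -122.08, "radius_m": 100,
                        "wake": "mailapp --location-reminder"},
        "gesture":     {"kind": "shake", "wake": "mailapp --quick-ui"},
    }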

This system could give the illusion of a full multitasking phone without the weight of it. Yes, loading in an app upon an event might be slightly slower than having it sitting there in ram. But if there is spare ram, it would of course be cached there anyway. An ideal app would let itself be woken up in stages, with a small piece of code loading quickly to give instant UI response, and the real meat loading more slowly if need be.

While our devices are going to get faster, this is not a problem which will entirely go away. The limiting factors in a portable device are mostly based on power, including the power to keep the network radios on. And applications will keep getting bigger the faster our CPUs get and the bigger our memories get. So this approach may have more lifetime than you think.

Review of Downfall / Der Untergang

Last month I released a parody video for the film “Downfall” (known as Der Untergang in German). Having purchased the movie, I also watched it of course, and here is my review. At least in my case, the existence of the parody brought some new sales for the film. There are “spoilers” of a sort in this review, but of course you already know how it ends; indeed, as history, you may know almost everything that happens in it, though unless you are a detailed student of these events you won’t know all of it.

The movie, which deals with Hitler’s last days in the bunker, is dark and depressing. And there is the challenge of making some of the nastiest villains of the 20th century the protagonists. This caused controversy, because people don’t like seeing Hitler and his ilk humanized even in the slightest. Hitler in this film is in some ways as you might expect him. Crazy, brutal and nasty. He’s also shown being kind to some friends, to Eva, to his dog, his secretaries and a few others. He has to be human or the film becomes just caricature, and not much of a drama. Goebbels gets little humanity, and his wife, who has the most disturbing scene in the film, has a very twisted sort of humanity.

While we have only a limited idea of what Hitler was like at this time, I feel the movie actually still made him a madman caricature. The real Hitler must have been highly charismatic and charming. He inspired people to tremendous loyalty, and got them to do horrible things for him, including taking their own lives at the end as we’re shown several times. The Nazis who were recruited by Hitler in his early days all spoke warmly of his charm, but none of this comes through in the film. We don’t like to think of him that way.

The movie is told in large part from the viewpoint of Frau Traudl Junge, one of Hitler’s private secretaries, who escaped the bunker and died a few years ago. The real Junge appears in the film, apologizing for how she just got caught up in the excitement of being Hitler’s secretary, and how she wished she never went down that road. Like all the people who were there, she says she was unaware of what was really going on. Considering she typed Hitler’s last testament, where he blames the Jews for the war, and other statements he dictated to her, it’s not something she could have been totally unaware of. Junge asks Eva Braun about Hitler’s brutality as a contrast to his nicer moods and she explains, “that’s when he’s being the Führer!” suggesting she compartmentalized the lover and the dictator as two different men.

During the movie the Soviets are bombing Berlin, and Hitler refuses surrender, in spite of urging from his generals and pleas for the civilians. Even Himmler, whose dastardly evil side is not shown in this film, is the “smart one” encouraging Hitler to leave Berlin, and who “betrays” Hitler in trying to negotiate a surrender. As in any war movie, when you see people being blown up by bombs and shot from their point of view, your instinct is to sympathise, and it’s easy to forget it is the allies who are doing the bombing, and the people dying are the ones who stuck with Hitler to the end. Some of them are “innocent,” including many of the citizens of Berlin, but many are not. Their loyalty may seem redeeming but they are giving that loyalty (and have reached a level of trust from Hitler) in a world where many in Germany wanted him out, where a number had been executed for plots to be rid of him.

A few Nazis get favourable treatment. Speer, for example. A scene from his memoirs, which is probably false, has Speer telling Hitler that he has disobeyed his “Nero” scorched-earth orders. This scene appears in Speer’s later memoirs but is denied in earlier ones, making it likely to be an invented memory. To give Speer credit, of course, he did disobey the orders, and he was the only top Nazi to own up, even partially, for what he did. Junge herself comes off as perfectly innocent and loyal. General Mohnke and SS Doctor Ernst-Günther Schenck (both of whom died moderately recently) get positive treatments.

The most disturbing scene involves Frau Goebbels executing her own children. There are conflicting stories on this, though the one piece of documentation, her last letter, makes it somewhat credible. Movie directors “like” such scenes, as they are incredibly chilling and nightmare-inducing. While Hitler was losing his grip on reality, the others were not, and these horrors are all a result of how much they embraced their bizarre ideology. Frau Goebbels could have sent her children to safety, but she felt there was no point in them living in the world that was to come. Still, this scene will give you nightmares, along with a number of other gruesome suicides, even if you know in your mind that the people suiciding have done such incredibly nasty things.

But this is a part of history worth understanding. And it is worth trying to understand — though we may never do so — how human beings, not as different from us as we would like to believe, could have been such monsters. The movie is well made, and powerful, if depressing and disturbing at the same time.

Design for a universal plug

I’ve written before about both the desire for universal dc power and more simply universal laptop power at meeting room desks. This week saw the announcement that all the companies selling cell phones in Europe will standardize on a single charging connector, based on micro-USB. (A large number of devices today use the now deprecated mini-USB plug, and it was close to becoming a standard by default.) As most devices are including a USB plug for data, this is not a big leap, though it turned out a number of devices would not charge from other people’s chargers, either from stupidity or malice. (My Motorola RAZR will not charge from a generic USB charger or even an ordinary PC. It needs a special charger with the data pins shorted, or if it plugs into a PC, it insists on a dialog with the Motorola phone tools driver before it will accept a charge. Many suspect this was just to sell chargers and the software.)

The new agreement is essentially just a vow to make sure everybody’s chargers work with everybody’s devices. It’s actually a win for the vendors, who can now not bother to ship a charger with the phone, presuming you have one or will buy one. They are not required to have the plug — supplying an adapter is sufficient, as Apple is likely to do. MP3 player vendors have not yet signed on.

USB isn’t a great choice since it officially delivers only 500 mA at 5 volts, though many devices are putting 1 amp through it. That’s not enough to quickly charge or even power some devices. USB 3.0 officially raised the limit to 900 mA, or 4.5 watts.

USB is a data connector with some power provided, which has been co-opted for charging and power. What about a design for a universal plug aimed at doing power, with data being the secondary goal? Not that it would suck at data, since it’s now pretty easy to feed a gigabit over 2 twisted pairs with cheap circuits. Let’s look at the constraints:

Smart Power

The world’s new power connector should be smart. It should offer 5 volts at low current to start, to power the electronics that will negotiate how much voltage and current will actually go through the connector. It should also support dumb plugs, which offer only a resistance value on the data pins, with each resistance value specifying a commonly used voltage and current level.

Real current would never flow until connection (and ground if needed) has been assured. As such, there is minimal risk of arcing or electric shock through the plug. The source can offer the sorts of power it can deliver (AC, DC, what voltages, what currents) and the sink (power using device) can pick what it wants from that menu. Sinks should be liberal in what they take though (as they all have become of late) so they can be plugged into existing dumb outlets through simple adapters.
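
Here is a sketch of that negotiation, with invented voltages and resistor codes just to make the flow concrete (an illustration of the proposal, not any existing standard):

    # Source side of the smart-plug handshake (illustrative sketch).
    SAFE_START = ("DC", 5.0, 0.1)       # volts, amps: trickle to run the electronics

    SOURCE_MENU = [                     # what this source can deliver
        ("DC", 5.0, 2.0),
        ("DC", 12.0, 5.0),
        ("DC", 19.0, 4.7),
        ("AC", 120.0, 15.0),
    ]

    DUMB_PLUG_CODES = {                 # resistance on data pins -> fixed offer
        10_000: ("DC", 5.0, 1.0),
        22_000: ("DC", 12.0, 3.0),
    }

    def negotiate(connection_sound, sink_choice=None, plug_resistance=None):
        """Return the power to enable on the pins."""
        if not connection_sound:        # continuity (and ground) not yet assured:
            return SAFE_START           # stay at trickle power, so no arcing/shock
        if plug_resistance is not None: # dumb plug: no electronics, just a resistor
            return DUMB_PLUG_CODES.get(plug_resistance, SAFE_START)
        if sink_choice in SOURCE_MENU:  # smart sink picked from the offered menu
            return sink_choice
        return SAFE_START               # be liberal: fall back to the safe default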

Style of pins

We want low current plugs to be small, and heavy current plugs to be big. I suggest a triangular pin shape, something like what is shown here. In this design, two main pins can only go in one way. The lower triangle is an optional ground — but see notes on grounding below.

Panoramas of Israel

Back in March, I took my first trip to the Middle East, to attend Yossi Vardi’s “Kinnernet” unconference on the shores of Lake Kinneret, also known as the Sea of Galilee. This is an invite-only conference and a great time, but being only 2 days long, it’s hard to justify 2 days of flying just to go to it. So I also conducted a tour of sites in Israel and a bit of Jordan.

Israel is another one of the fascinating must-do countries for an English speaker, not simply for its immense history and impressive scenery, but because it is fascinating politically, and a large segment of the population speaks English. There are other countries which are interesting politically and culturally, but you will only get to speak to that segment of the population that has learned English.

Israel is a complex country and of course one can’t understand it on a visit, since many of the natives will admit to not understanding it. Most of the people I associated with, being high-tech internet people, seemed to be on the less aggressive side, if I can call them that; people opposed to the settlers, for example, and eager for land-for-peace or two-state solutions. During my trip Gaza was in turmoil and I did not visit it. I drove through West Bank areas a couple of times but only to get from A to B — though many Israelis expressed shock that I would be willing to do that. (On our way back from Jordan, on the outskirts of Jericho, we saw a lone Haredi, wearing black hat and black coat, hitch-hiking after dark on the side of the road. Our car was full, but our driver, who was not much afraid of the West Bank, did agree that was a man of particular bravery or foolishness.)

The Israelis have come to accept, like fish in water, many things that to an outsider seem shocking. Having two very different levels of rights for large sections of the population. Having your car, and then later your bag, searched as you do something as simple as visiting a shopping mall. The presence of soldiers with machine guns slung on their backs almost everywhere you look. Being on the bus that simply shuttles all day along a 400 foot trip between the Jordan and Israel border stations, and having to go through a 20 minute security inspection even though it’s been in view of the Israel station the whole time. Showing ID cards all the time.

The latter is of course not unexpected but disturbing. Israelis are taught more than anybody else in school about the dangers of a society with too much identity information on its people, and which requires them to carry and show papers. So they would have been the last to accept this, but they have. It shows how extreme their situation is more than some of the other less subtle signs. If more buildings fall in the USA, we’ll become more and more like Israel.

And yet the people, both Israelis and Arabs, are all intensely friendly and gregarious. (The same whether I would reveal my Jewish ancestry or not. I do not, however, look Jewish.) Famously brusque but still warm hearted.

The food in Israel is much better than I expected. It starts with the extremely fresh ingredients grown in the warm climate. The falafel stands on the sides of the streets put anything elsewhere to shame, and I became addicted to the fresh squeezed juices also found everywhere.

In Jerusalem, around my hotel near King George and Jaffa, I experienced an amazing contrast. On Thursday night the streets were packed full of young people, starting their weekend. On Friday night, Shabbat was observed so strictly in that area that you could hear nothing but the chirping of birds and a few distant cars. In Tel Aviv, and among the high-tech crowd, Shabbat was hard to detect.

The old city of Jerusalem is a great trip, and the Muslim quarter, which is the most lively, is not nearly so dangerous or scary, even after hours, as Israelis described it to be. Along it is the “Stations of the Cross” route which gets Christians all excited, even though it’s clearly not the original route, which was not dotted with hundreds of Muslim-run souvenir shops. Seeing an internet cafe, I joked, “And here, at station 5.5, is where Jesus stopped to check his E-mail and twitter about how tired he was.” Jerusalem, and the rest of Israel, is packed full of Christians on “holy land” tours. A friend described it as like Houston, in that it was full of Texans.

I have a very large gallery of panoramas of Israel, along with a second page of panos and a yet-to-be-processed gallery of regular photos to come. Also to come is the 2-day trip into Jordan to see Petra. I’m particularly pleased with the first one that I show here, a 360 degree view of the male section of the Western Wall (Wailing Wall) just before Shabbat. Check out the full sized version.

The overengineering and non-deployment of SSL/TLS

I have written before about how overzealous design of cryptographic protocols often results in their non-use. Protocol engineers are trained to be thorough and complete. They rankle at leaving in vulnerabilities, even against the most extreme threats. But the perfect is often the enemy of the good. None of the various protocols to encrypt E-mail have ever reached even a modicum of success in the public space. It’s a very rare VoIP call (other than Skype) that is encrypted.

The two most successful encryption protocols in the public space are SSL/TLS (which provide the HTTPS system among other things) and Skype. At a level below that are some of the VPN applications and SSH.

TLS (the successor to SSL) is very widely deployed but still very rarely used. Only the tiniest fraction of web sessions are encrypted. Many sites don’t support it at all. Some will accept HTTPS but immediately push you back to HTTP. In most cases, sites will have you log in via HTTPS so your password is secure, and then send you back to unencrypted HTTP, where anybody on the wireless network can watch all your traffic. It’s a rare site that lets you conduct your entire series of web interactions entirely encrypted. This site fails in that regard. More common is the use of TLS for POP3 and IMAP sessions, both because it’s easy (there is only one TCP session) and because the set of users who access the server is small and controlled. The same is true with VPNs — one session, and typically the users are all required by their employer to use the VPN, so it gets deployed. IPSec code exists in many systems, but is rarely used in stranger-to-stranger communications (or even friend-to-friend) due to the nightmares of key management.

TLS’s complexity makes sense for “sessions” but has problems when you use it for transactions, such as web hits. Transactions want to be short. They consist of a request, and a response, and perhaps an ACK. Adding extra back and forths to negotiate encryption can double or triple the network cost of the transactions.
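
You can see the per-transaction setup cost directly by timing a bare TCP connect against the TLS handshake layered on top of it. A quick sketch using Python's standard library (absolute numbers depend on your distance to the host):

    import socket, ssl, time

    HOST = "www.google.com"             # any HTTPS host will do

    t0 = time.time()
    sock = socket.create_connection((HOST, 443))
    tcp_ms = (time.time() - t0) * 1000  # TCP connect (plus DNS lookup)

    t1 = time.time()
    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(sock, server_hostname=HOST)  # handshake happens here
    tls_ms = (time.time() - t1) * 1000  # the extra round trips TLS adds
    tls.close()

    print("TCP connect: %.0f ms, TLS handshake added: %.0f ms" % (tcp_ms, tls_ms))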

Skype became a huge success at encrypting because it is done with a ZUI — zero user interface; the user is not even aware of the crypto. It just happens. SSH takes an approach that is deliberately vulnerable to man-in-the-middle attacks on the first session in order to reduce the UI, and it has almost completely replaced unencrypted telnet among the command line crowd.

I write about this because now Google is finally doing an experiment to let people have their whole Gmail session encrypted with HTTPS. This is great news. But hidden in the great news is the fact that Google is evaluating the “cost” of doing this. There also may be some backlash if Google does this on web search, as it means that ordinary sites will stop getting to see the search query in the “Referer” field until they too switch to HTTPS and Google sends traffic to them over HTTPS. (That’s because, for security reasons, the HTTPS design says that if I made a query encrypted, I don’t want that query to be repeated in the clear when I follow a link to a non-encrypted site.) Many sites do a lot of log analysis to see what search terms are bringing in traffic, and may object when that goes away.

Secrets of the "Clear" airport security line

Yesterday it was announced that “Clear” (Verified ID Pass) the special “bypass the line at security” card company, has shut its doors and its lines. They ran out of money and could not pay their debts. No surprise there, they were paying $300K/year rent for their space at SJC and only 11,000 members used that line.

As I explained earlier, something was fishy about the program. It required a detailed background check, with fingerprint and iris scan, but all it did was jump you to the front of the line — which you get for flying in first class at many airports without any background check. Their plan, as I outline below, was to also let you use a fancy shoe and coat scanning machine from GE, so you would not have to take them off. However, the TSA was only going to allow those machines once it was verified they were just as secure as existing methods — so again no need for the background check.

To learn more about the company, I attended a briefing they held a year ago for a contest they were holding: $500,000 to anybody who could come up with a system that sped up their lines at a low enough cost. I did have a system, but also wanted to learn more about how it all worked. I feel sorry for those who worked hard on the contest who presumably will not be paid.

Features for high-end digital cameras

I’m really enjoying my Canon EOS 5D Mark II, especially its ability to shoot at 3200 ISO without much noise, allowing it to be used indoors, handheld without flash. But as fine as this (and other high end) cameras are, I still see a raft of features missing that I hope will appear in future cameras.

Help me fix my mistakes

A high end camera has full manual settings, which is good. But even the best of us make mistakes with these settings, mistakes the camera should know about and warn us about. It should not stop us from making shots, or in many circumstances try to correct the mistakes. But it should notice them, and beep when I take a picture, and show the mistake on the display with menu options to correct it, to always correct it, or to not warn me again about it for a day or forever.

I wrote earlier about the general principle of noticing when we’ve left the camera in an odd mode. If we put the camera into incandescent white balance in the evening and then a day later the camera notices we’re shooting in a sunny environment, it should know and alert us, or even fix it. This is true of a variety of settings that are retained through a non-shooting period, including exposure compensation, white balance, shooting modes, ISO changes and many others. The camera should learn over time what our “normal” modes are that we do like to leave the camera in, and not warn us about them, but warn us about other unusual things.

Many things will be obvious to the camera. If I shoot in manual mode and then later take another shot in manual mode that’s obviously way overexposed or underexposed, I probably just forgot, and would not mind the reminder. The reminder might also offer to delete the bad shot.

There are many things the camera can detect, including big blobs of sensor dust. Lenses left in manual focus should be noticed after a long gap of time, and especially if the lens has been removed and returned to the camera. Again, this should not impinge on the UI much — just a beep and a chance to see what the problem was on the screen.
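
For the exposure case, the check is cheap. A sketch of the idea (not any camera's firmware; the thresholds are invented) that flags a shot when a big fraction of pixels pile up at either end of the histogram:

    import numpy as np

    def exposure_warning(gray, clip_fraction=0.30):
        """gray: 2-D array of 8-bit luminance values from the shot."""
        n = gray.size
        blown = np.count_nonzero(gray >= 250) / n     # pixels at the top end
        crushed = np.count_nonzero(gray <= 5) / n     # pixels at the bottom end
        if blown > clip_fraction:
            return "likely overexposed: check manual settings"
        if crushed > clip_fraction:
            return "likely underexposed: check manual settings"
        return None    # no beep needed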

Add bluetooth and other communications protocols to the camera

Let the camera talk to other devices. One obvious method would be bluetooth. With that:

  • Let the camera use bluetooth microphones and headsets when it records video and annotations, and let me hear the camera’s beeps and audio in a bluetooth headset so as not to disturb others.
  • Let the camera talk to a Bluetooth GPS or GPS equipped phone to get geolocation data for photos.
  • Let the camera be controlled via bluetooth from a laptop, and let it upload photos to a computer as it currently can do over USB.
  • Let me use my phone or any other bluetooth remote as a remote control for the camera — indeed, on a smart phone, let me go so far as to control all aspects of the camera and see the live preview.
  • Start making bluetooth controlled flash modules to replace the infrared protocols — it’s more reliable and won’t trigger other people’s flashes. Build simple bluetooth modules that can connect to the hotshoe or IR of existing flashes to convert them to this new system.
  • Bluetooth would also allow keyboards (and even mice) for fancier control of the camera, and configuration of parameters that today require software on a PC. A bluetooth mouse, with its wheels (like the camera’s wheels), could make an interesting remote control.

With Bluetooth 3.0, which can go 480 megabits, this is also a suitable protocol for downloading photos or live tethering. Wireless USB (also 480 megabits at short range) is another contender.

Let it be a USB master as well as slave, so it can also be connected to USB GPSs and other peripherals people dream up, including cell phones, most of which can now be a USB slave. This would also allow USB microphones, speakers and video displays.

Finally, add a protocol (USB or just plain IP) to the hot shoe to make this happen. (See below.)

Make more use of the microphone

I’ve always liked the idea of capturing a few seconds of sound around every still photo. This can be used for mood, or it can be used for notes on the photo. Particularly if we can do speech-to-text on the audio later, so that I can put captions on photos right then and there. This would work especially well if I can get a bluetooth headset with high quality microphone audio, something that is still hard to do right now.

If your camera can shoot video, it can of course be used as an audio recorder by putting on the lens cap, but why not just offer a voice recorder mode once you have gone to the trouble to support a good microphone?

Treat the camera as a software platform

Let other people write code to run on the camera. Add-on modules and new features. For low-end, deliberately crippled cameras this might not be allowed, but if I’m paying more for my camera than a computer, I should be able to program it, or download other people’s interesting programs.

Furthermore, let this code send signals to other devices, over USB, the flash shoe, and even bluetooth. Consider including a few general purpose digital read/write pins for general microcontroller function, or make a simple module to allow that.

Letting others write code for your product has a cost — you must define APIs and support them. But the benefits are vast, and would generate great loyalty to the camera to do this first. I imagine software for panorama taking, high-dynamic range photography, timelapse, automatic exposure evaluation and much more — perhaps even the mistake-detection described above.

Create a fancy new hotshoe with data flow and power flow

The hotshoe should include a generalized data bus like two-way USB or just IP over something. Make all peripherals, including flashes, speak this protocol for control. But also allow the unit on the flash hot shoe to control the camera — this will be a two way street.

In the hotshoe, include pins for power — both to access the power from the camera, and to allow hotshoe devices to assist powering the camera and to charge the battery. This would allow the creation of low-powered flashes which are small and don’t need a battery because they draw from the camera battery. Not big, but suitable for fill flash and other purposes. The 5D has no built-in flash and I miss the fill-flash of the on-camera flash of the 40D. Obviously you don’t want devices sucking all the battery, and some might have their own batteries, but I would rather carry two camera batteries than have to carry a camera battery and then another battery type and charger type for my flash!

One could make a hotshoe device that holds more camera batteries, as an alternative to the battery grip. But hotshoe devices, with their data link, could do much more than control flashes. They could include fancy audio equipment, even a controller for the servo motors of a rotating pano-head or pan and scan tripod. Hotshoe devices could include wifi or bluetooth if it’s not already in the camera. Or GPS location.

The hotshoe would offer 5V USB-style power to start, but on approval, switch the power lines to high-current direct battery access, to allow extra power devices, and even battery chargers or AC adapters.

Support incremental download

Perhaps some cameras do this but I have not seen it. Instead of deleting photos from cards, just let things cycle through, and have the downloader only fetch the new photos, and mark the ones fetched as ready for deletion when needed. It’s always good to have your photos in multiple places — why delete them from the card before you need to? Possibly make the old photos semi-invisible. And, as I have asked before, when a photo is deleted, don’t delete it, but move it to a recycle bin where I can undelete. Of course, as space is needed, purge things from that bin in order. Though still call it delete, so that when rent-a-cops try to make you delete photos, you can fake it.
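
A downloader along those lines is simple. Here is a sketch (a hypothetical tool, not an existing downloader) that copies only photos it has not seen before, and logs what has been fetched so the card knows what is safe to recycle later:

    import os, shutil

    def incremental_download(card_dir, archive_dir, fetched_log):
        """Copy new photos off the card; never delete anything here."""
        seen = set()
        if os.path.exists(fetched_log):
            with open(fetched_log) as f:
                seen = set(f.read().split())
        with open(fetched_log, "a") as log:
            for name in sorted(os.listdir(card_dir)):
                if name in seen:
                    continue                       # already safely archived
                shutil.copy2(os.path.join(card_dir, name), archive_dir)
                log.write(name + "\n")             # mark ready-for-recycling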

Put an Arca-swiss style plate on the bottom of the camera

Serious photographers have all settled on this plate, and have one stuck to the bottom of their camera, which is annoying when the camera is on your neck. Put these dovetails right into the base of the camera, with a standard tripod hole in the center (something the add-on plates often can’t quite offer, as they must put the screw in the center). I pay $50 for every new camera to get a custom plate. Just build it in. Those with other QR systems can still connect to the 1/4-20 tripod hole.

Consider a new format between jpeg and raw

The jpeg compression is good enough that detail is not lost. What is lost is exposure range. Raw format preserves everything, but is very large, and slower and harder to use when organizing photographs — its main value is in post-processing. A 12 bit jpeg standard exists but is not widely used; if cameras started offering it, I expect we would see support for it proliferate, even faster than support for raw has done.

Show me the blurries

A feature I have been requesting for some time. After I take a photo, let one of the review modes offered provide a zoom-in of something that is supposed to be in focus. That could be the best focus point, or simply the most contrasty part of the photo. If, when I see the most contrasty part of the photo, it’s still blurry, I can know I didn’t focus right or hold the camera steady enough. If using focus points, the wheel could rotate around the focus points that were supposed to be in focus, so I can see what was probably my subject and how well it was shot.
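
A standard trick for scoring this is the variance of the Laplacian: sharp regions have strong second derivatives. A sketch of how a camera might pick the most contrasty tile to zoom to (assuming OpenCV on a grayscale NumPy array; the grid size is arbitrary):

    import cv2

    def sharpest_tile(gray, grid=8):
        """Return (row, col) of the most contrasty tile and its sharpness score."""
        h, w = gray.shape
        best, best_score = None, -1.0
        for r in range(grid):
            for c in range(grid):
                tile = gray[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid]
                score = cv2.Laplacian(tile, cv2.CV_64F).var()  # focus measure
                if score > best_score:
                    best, best_score = (r, c), score
        return best, best_score
    # Zoom the review display to the best tile; if even its score is low,
    # the shot is probably blurred and worth a warning.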

Have a good accelerometer, and use it

Most cameras have a basic accelerometer to know if the camera is in portrait mode. (Oddly, they don’t all use it to know how to display photos on the screen.) But you can do much more. For example, you should be able to tell if the camera is on a tripod or handheld, based on how steady it is. That knowledge can be used to enable or disable the image stabilizer. It can also be used to add stability, by offering to delay the shutter release until the camera is being held steady when doing longer exposures. (Nikon had a feature called BSS, where it would shoot several long exposure shots, and retain the one that was least blurry. This should be a regular feature for all cameras.) Knowing the camera is stable on a tripod should also allow automatic exposure controllers to make more use of longer exposures if they need to in low light, though of course with moving subjects you still need manual control. (The camera should also be able to tell if the subjects are moving if it knows the camera itself is stable.)
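
Tripod detection itself can be just a jitter threshold, as in this sketch (illustrative numbers; a real camera would tune the window and threshold per body):

    def on_tripod(accel_magnitudes, threshold=1e-4):
        """accel_magnitudes: recent accelerometer readings in g over ~0.5 s."""
        mean = sum(accel_magnitudes) / len(accel_magnitudes)
        variance = sum((a - mean) ** 2 for a in accel_magnitudes) / len(accel_magnitudes)
        return variance < threshold   # steady enough: disable IS, allow long exposures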

Like new phones, also have a compass, and record the direction of all photos, to add to GPS data. This would allow identification of subjects. It would also allow “panorama” modes that know when you have rotated the camera sufficiently for the next overlapping shot. Finally, the accelerometer should offer me a digital level on the screen so I can quickly level the camera.

Embrace your inner eBook

I wrote about this last month — realize we are using cameras to do more than just take pictures.

Use the battery to power AC startup surge in an RV

Many RVs come with generators, and the air conditioner is the item that demands it be a high power generator. The generator needs to be big enough to run the AC, and in theory let you do other things like microwave when you run it. It also has to be big enough to handle the surge that the AC motor takes when the AC starts up.

This surge is huge, and will often overload a generator, particularly external generators that are commonly used on smaller RVs. To fix this problem, there’s been a bit of effort to develop “soft start” electric motor technologies that start up motors slowly, and store charge in a big capacitor in order to provide the surge.

However, the RV also has a deep cycle battery and (if a motorhome) an engine starting battery. Both these batteries can usually deliver 100 or more amps in a burst. (The engine starting battery can deliver several hundred.)

Today, high-power inverters have gotten much cheaper; even those that can deliver 500 to 1,000 watts (and peak to far more) are getting cheap. I have wondered why it has not become standard to include a high power inverter in any RV so that small 110v appliances can be run off the battery for short times, rather than firing up the generator. To microwave something for 30 seconds requires starting the generator, which is quite wasteful, and also noisy. Of course, what runs off the battery should still run on 12 volts, and some things (like the fridge in electric mode) should not run off an inverter. Short microwave bursts, and a few hours of flatscreen TV watching, can run off an inverter.

And so my proposal is that such an inverter also be available to provide surge power to the AC compressor when it starts, even if the generator or shore power is on. The extra 1000 or so watts the inverter can provide would allow the use of a smaller, cheaper generator. This requires an inverter that can sync to the phase of the incoming AC, and of course safety circuits to assure that power is not fed back into the shore power port when it is disconnected.

Today, the big trend in generators is actually to have them use such high-power inverters. The generators are thus free to generate dirty power, and to run at whatever RPM is best for them at the time. The inverter cleans up the power and puts out clean, constant voltage. There are modest losses but overall it’s a win, as you get a generator that is much more efficient and quiet, and better quality power. Many suspect that RV generators will switch to that approach. In this case, it becomes much easier to have an integrated inverter generator able to also draw from the battery for its surges. No need for grid tie logic in this case.

To wit, one could see a system where a 2kW inverter generator, able to boost to 3.5kW by adding in the battery, could be enough for a typical RV, even with a decent sized AC. You might have to have a circuit that says “If the microwave or other big load is on, don’t start the compressor,” but that would only be an issue if you wanted to microwave something for a long time on high. Note that in a proper AC the compressor is not running all the time, so the AC would not be off — it would just not be doing on-cycles during the microwave use.
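
The interlock is simple arithmetic. A sketch using the 2kW generator and 3.5kW combined figures from above (the surge and microwave wattages are invented for the example):

    GENERATOR_W        = 2000   # continuous generator rating
    INVERTER_BOOST_W   = 1500   # battery-backed assist from the inverter
    COMPRESSOR_SURGE_W = 3000   # startup surge; running draw settles far lower

    def compressor_may_start(current_load_w):
        """Allow the AC compressor to start only if the surge fits the headroom."""
        headroom = GENERATOR_W + INVERTER_BOOST_W - current_load_w
        return headroom >= COMPRESSOR_SURGE_W

    # With an 1100 W microwave running, headroom is 2400 W, so the
    # controller waits for the microwave rather than starting the compressor.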

There would probably be some 110v plugs in the RV which are marked “On under shore or generator power only” vs “always on,” or possibly switches to control if they are on the inverter or not, since there are loads you would want to make sure stay off if running only on battery. A little more complexity to the internal wiring, but a big saving on generator size and a better dry camping experience. It also means a more usable RV when plugging into a 15 amp external shore power line. In many RVs, plugging into 15 amps is not enough to start the AC, and certainly not enough to run the AC and another device. The power control system would want to know if it’s plugged into 15A, 20A or the normal 30A. And it would also want to notice if something is drawing too much battery power and shut it off before the battery gets too low.

Obviously as well, the 12 volt converter and battery charger must only run off true shore power or the generator, never off the inverter!

Anti-atrocity system with airdropped video cameras

Our world has not rid itself of atrocity and genocide. What can modern high-tech do to help? In Bosnia, we used bombs. In Rwanda, we did next to nothing. In Darfur, very little. Here’s a proposal that seems expensive at first, but is in fact vastly cheaper than the military solutions people have either tried or been afraid to try. It’s the sunlight principle.

First, we would mass-produce a special video recording “phone” using the standard parts and tools of the cell phone industry. It would be small, light, and rechargeable from a car lighter plug, or possibly more slowly through a small solar cell on the back. It would cost a few hundred dollars to make, so that relief forces could airdrop tens or even hundreds of thousands of them over an area where atrocity is taking place. (If they are $400/pop, even 100,000 of them is 40 million dollars, a drop in the bucket compared to the cost of military operations.) They could also be smuggled in by relief workers on a smaller scale, or launched over borders in a pinch. Enough of them so that there are so many that anybody performing an atrocity will have to worry that there is a good chance that somebody hiding in bushes or in a house is recording it, and recording their face. This fear alone would reduce what took place.

Once the devices had recorded a video, they would need to upload it. It seems likely that in these situations the domestic cell system would not be available, or would be shut down to stop video uploads. However, that might not be true, and a version that uses existing cell systems might make sense, and be cheaper because the hardware is off the shelf. It is more likely that some other independent system would be used, based on the same technology but with slightly different protocols.

The anti-atrocity team would send aircraft over the area. These might be manned aircraft (presuming air superiority) or they might be very light, autonomous UAVs of the sort that are already getting cheap. These UAVs can be small, and not that high-powered, because they don’t need to do that much transmitting — just a beacon and a few commands and ACKs. The cameras on the ground will do the transmitting. In fact, the UAVs could quite possibly be balloons, again within the budget of aid organizations, not just nations.

Authenticated actions as an alternative to login

The usual approach to authentication online is the “login” approach — you enter userid and password, and for some “session” your actions are authenticated. (Sometimes special actions require re-authentication, which is something my bank does on things like cash transfers.) This is so widespread that all browsers will now remember all your passwords for you, and systems like OpenID have arisen to provide “universal sign on,” though to only modest acceptance.

Another approach which security people have been trying to push for some time is authentication via digital signature and certificate. Your browser is able, at any time, to prove who you are, either for special events (including logins) or all the time. In theory these tools are present in browsers but they are barely used. Login has been popular because it always works, even if it has a lot of problems with how it’s been implemented. In addition, for privacy reasons, it is important your browser not identify you all the time by default. You must decide you want to be identified to any given web site.

I wrote earlier about the desire for more casual authentication for things like casual comments on message boards, where creating an account is a burden and even use of a universal login can be a burden.

I believe an answer to some of the problems can come from developing a system of authenticated actions rather than always authenticating sessions. Creating a session (i.e. login) can be just one of a range of authenticated actions, or AuthAct.

To do this, we would adapt HTML actions (such as submit buttons on forms) so that they could say, “This action requires the following authentication.” This would tell the browser that if the user is going to click on the button, their action will be authenticated and probably provide some identity information. In turn, the button would be modified by the browser to make it clear that the action is authenticated.

An example might clarify things. Say you have a blog post like this with a comment form. Right now the button below says “Post Comment.” On many pages, you cannot post a comment without logging in first, or, as on this site, you may have to fill in other fields to post the comment.

In this system, the web form would indicate that posting a comment is something that requires some level of authentication or identity. This might be an account on the site. It might be an account in a universal account system (like a single sign-on system). It might just be a request for identity.

Your browser would understand that, and change the button to say, “Post Comment (as BradT).” The button would be specially highlighted to show the action will be authenticated. There might be a selection box in the button, so you can pick different actions, such as posting with different identities or different styles of identification. Thus it might offer choices like “as BradT” or “anonymously” or “with pseudonym XXX” where that might be a unique pseudonym for the site in question.

Now you could think of this as meaning “Login as BradT, and then post the comment” but in fact it would be all one action, one press. In this case, if BradT is an account in a universal sign-on system, the site in question may never have seen that identity before, and won’t, until you push the submit button. While the site could remember you with a cookie (unless you block that) or based on your IP for the next short while (which you can’t block) the reality is there is no need for it to do that. All your actions on the site can be statelessly authenticated, with no change in your actions, but a bit of a change in what is displayed. Your browser could enforce this, by converting all cookies to session cookies if AuthAct is in use.
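
On the server side, stateless verification could look something like this sketch (an illustration of the idea, not a deployed protocol; a real system would use a public-key signature from the sign-on provider, where this uses an HMAC stand-in for brevity):

    import hashlib, hmac

    def verify_action(identity, action, body, nonce, signature, key_lookup):
        """Accept iff the signature covers exactly this identity and action."""
        key = key_lookup(identity)      # bytes, e.g. from the sign-on system
        message = "\n".join([identity, action, body, nonce]).encode()
        expected = hmac.new(key, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    # A press of "Post Comment (as BradT)" would submit, in one request:
    #   identity="BradT", action="post_comment", body=<the comment>,
    #   nonce=<fresh server value to block replays>, signature=<as above>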

Note that the first time you use this method on a site, the box would say “Choose identity,” and it would be necessary for you to click and get a menu of identities, even if you only have one. This is because there are always tools that try to fake you out and make you press buttons without knowing it, by taking control of the mouse or covering the buttons with graphics that skip out of the way — there are many tricks. The first handover of identity requires explicit action. It is almost as big an event as creating an account, though not quite that significant.

You could also view the action as, “Use the account BradT, creating it if necessary, and under that name post the comment.” So a single posting would establish your ID and use it, as though the site didn’t require userids at all.

ClariNet history and the 20th anniversary of the dot-com

Twenty years ago (this Monday), on June 8th, 1989, I publicly launched ClariNet.com, my electronic newspaper business, which would be delivered using USENET protocols (there was no HTTP yet) over the internet.

ClariNet was the first company created to use the internet as its platform for business, and as such this event has a claim to being the birth of the “dot-com” concept that so affected the world in the two intervening decades. There are other definitions and other contenders, which I discuss in the article below.

In those days, the internet consisted of regional networks, mostly non-profit cooperatives, linked by the government-funded “NSFNet” backbone. That backbone had a no-commercial-use policy, but I found a way around it. In addition, a nascent commercial internet was arising with companies like UUNet and PSINet, and the seeds of internet-based business were growing. There was no web, of course. The internet’s community lived in e-mail and USENET. Those, and FTP file transfer, were the means of publishing. When Tim Berners-Lee coined the term “the web” a few years later, he called all of these the web, with HTML/HTTP a new addition and glue connecting them.

I decided I should write a history of those early days, where the seeds of the company came from and what it was like before most of the world had even heard of the internet. It is a story of the origins and the early perils and successes, not so much of the boom times that came in the mid-90s. It also contains a few standalone anecdotes, such as the story of how I accidentally implemented a system so reliable that even those authorized to shut it down failed to do so (which I call “M5 reliability” after the Star Trek computer), stories of too-early eBook publishing, and more.

There’s also a little bit about some of the other early internet and e-publishing businesses such as BBN, UUNet, Stargate, public access unix, Netcom, Comtex and the first Internet World trade show.

Extra, extra, read all about it: The history of ClariNet.com and the dawn of the dot-coms.

Apple blocks iPhone App because EFF blog points to my Downfall Parody

Last week, I posted a pointer to my parody of a famous clip from the movie Downfall and I hope you enjoyed it. While the EFF itself didn’t make this video, I do chair the foundation and they posted a pointer to it on the “Deep Links” blog. All well and good.

Some time earlier, a developer put together an iPhone app which would display the EFF blog feed. This wasn’t an EFF effort, but the EFF gave them permission to put the logo in the app.

Recently, Apple’s App Store team evaluated the app. They pulled up the EFF blog feed and played the video, presumably using the built-in YouTube player app which Apple provides for the iPhone. And in the subtitles I wrote, at one point when Hitler was particularly angry, the fake text had him say “fucking.” This is quite mild compared to most of the Downfall parodies on YouTube, and indeed many other videos there. I debated taking it out, but it’s appropriate for the character to use strong, angry language at that point in his rant. And it’s funny to see Hitler swear in English, so I left it in.

The App Store team — dare I call them the Apple App Store content Nazis, or is that too meta? — declared the app unsuitable for the iPhone store. Note that the app doesn’t contain any dirty words; the EFF blog rarely contains them, and didn’t contain them in this case, only a pointer to the video. Of course, the EFF, as a free speech organization, is not about to declare that its blog will be free of bad words in the future, though they are fairly unlikely.

Yet this, it seems, is what Apple is protecting its users from. Apple claims that it needs to control what Apps you can install on an iPhone. You need to “jailbreak” the iPhone to install other apps, and Apple says you don’t have the right to do that. Sometimes such walled gardens start off with what you may agree are good intentions, such as stopping malicious apps, or assuring a quality experience with a product. But always, it seems, it devolves to this.

You can also read the EFF Deep Links article on this bizarre denial. Apple seems to have become a parody of itself. How long before we see a Downfall clip where Hitler is an Apple app store evaluator, or a fake Steve Jobs? Of course, that had better not contain any upsetting words, even in links.

Gallery of my favourite panoramas

While I have over 30 galleries of panoramic photos up on the web, a while ago I decided to generate some pages of favourites as an introduction to the photography. I’m way behind on putting up galleries from recent trips to Israel, Jordan, Russia and various other places, but in the meantime you can enjoy these three galleries:

My Best Panoramas — favourites from around the world

Burning Man Sampler — different sorts of shots from each year of Burning Man

Giant Black Rock City Shots — Each year I shoot a super-large photo of the whole of Black Rock City. This gallery shows that shot for each year.

As always, I recommend you put your browser in full-screen mode (F11 in Firefox) to get the full width when clicking on the panos.

Hitler tries a DMCA takedown

New Update, April 2010: Yes, even this parody video has been taken down through the YouTube Content-ID takedown system — just as my version of Hitler says he will do at the end. I filed a dispute, and it seems that you can now watch it again on YouTube, at least until Constantin responds, as well as on Vimeo. I have a new post about the takedown with more details. In addition, YouTube issued an official statement to which I responded.

Unless you’ve been under a rock, you have probably seen a parody clip that puts new subtitles on a scene of Hitler ranting and raving from the 2004 German movie Downfall (Der Untergang). Some of these videos have gathered millions of views, with Hitler complaining about how he’s been banned from X-box live, or nobody wants to go to Burning Man, or his new camera sucks. The phenomenon even rated a New York Times article.

It eventually spawned meta-parodies, where Hitler would rant about how many Hitler videos were out on the internet, or how much they sucked. I’ve seen at least four of these. Remarkably, one of them, called Hitler is a Meme, was pulled from YouTube by the studio, presumably via a DMCA takedown. A few others have also been pulled, though many remain intact. (More on that later.)

Of course, I had to do my own. I hope, even if you’ve seen a score of these, that this one will still give you some laughs. If you are familiar with the issues of DRM, DMCA takedowns, and the copyright wars, I can assure you, based on the reviews of others, that you will enjoy it quite a bit. Of course, since it criticises YouTube as well as the studio, I have put it on YouTube. But somehow I don’t think they would be willing to try a takedown — not on so obvious a fair use as this one, and not on the chairman of the most noted legal foundation in the field. But it’s fun to dare them.

(Shortly I may also provide the video in some higher quality locations. I do recommend you click on the “HQ” button if you have the bandwidth.)


On ultralight vehicles vs. large mass transit vehicles

One of the questions raised by the numbers showing that U.S. transit does not compete well on energy efficiency was how transit can fare so poorly. Our intuition, as well as what we are taught, tells us that a shared vehicle must be more efficient than a private one. And indeed a well-shared vehicle certainly is better than a solo driver in one of today’s oversized cars and light trucks.

But this is a consequence of many factors, and surprisingly, shared transportation is not an inherent winner. Let’s consider why.

We have tended to build our transit on large, heavy vehicles. This is necessary to provide large capacities at rush hour and to use fewer drivers. But a transit system must serve the public at all times if it is to be effective. If you ride transit, you need to know you can get back, at other than rush hour, without a hugely long wait. One answer would be to use big vehicles at rush hour and small ones in the off-peak hours, but no transit agency is willing to pay for multiple sets of vehicles. Another would be to run half-size vehicles twice as often, but again, no agency wants to pay for that or to double the number of drivers. It’s not a cost-effective use of capital or the operating budget, they judge.

Weight

The urban vehicle of the future, as I predict it, is a small, one-person vehicle resembling a modern electric tricycle with a fiberglass shell. It will be fancier than that, with a nicer seat, better suspension and other amenities, but chances are it will still weigh very little: quite possibly less than its passenger, at 100 to 200 lbs.

Transit vehicles weigh a lot. A city bus comes in around 30,000 lbs. At its average load of 9 passengers, that’s over 3,000 lbs of bus per passenger. Even full-up with 60 people (standing room), it’s 500 lbs per passenger — better than a modern car with its average of 1.5 occupants, but still much worse than the ultralight.
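For concreteness, here is the arithmetic in a few lines of Python; the 3,500 lb car and 150 lb trike are my assumed figures for comparison, not from the numbers above:

```python
# Pounds of vehicle hauled around per passenger carried
bus_lbs = 30_000
print(bus_lbs / 9)      # average bus load of 9:     ~3,333 lbs per rider
print(bus_lbs / 60)     # standing-room load of 60:     500 lbs per rider
print(3_500 / 1.5)      # assumed 3,500 lb car, 1.5 riders: ~2,333 lbs
print(150 / 1)          # assumed 150 lb ultralight trike:     150 lbs
```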

Can airports do paging as well as a restaurant?

I have a lot of peeves about airports, like almost everybody. One of them is the constant flow of public address announcements. They make it hard for many people to read, work or concentrate. It’s certainly hard to sleep, and often even hard to have a phone call with the announcements in the background.

One solution to this is the premium airline lounges. These are announcement-free, but you must watch the screens regularly to track any changes. And of course they cost a lot of money, and may be far from your gate.

Some airlines have also improved things by putting up screens at the gates that list the status of standby passengers and people waiting for upgrades. This also saves them a lot of questions at the gate, which is good.

But it’s not enough. Yet even a cheap restaurant often has a solution: it gives you a special pager programmed to summon you when your table or food is ready. It vibrates (never beeps), and the pagers are designed to stack on top of one another for recharging.

Airports could do a lot better. Yes, they could hand you an electronic pager instead of, or in addition to, a boarding pass. This could be used to signal you anywhere in the airport. It could have an active RFID tag to let you walk through an automatic gate onto the plane with no need for even a gate agent, depositing the pager as you board.

Each pager could also know where it is in the airport. Thus a signal could go out at the start of boarding, and if your pager is not at the gate, it could tell the airline where you are. If you’re in the security line, it might tell you to show the pager to somebody who can get you through faster (though of course if you make this a regular thing, that has other downsides).

Electronic panorama head with rotation sensor

In my quest for the ideal panorama head, I have recently written up some design notes and reviews. I found that the automatic head I tried, the beta version of the Gigapan, turned out to be too slow for my tastes. I can shoot by hand much more quickly.

Manual pano heads come either with a smooth-turning rotator with markers, or with a detent system that offers click-stops at intervals like 15, 20 or 30 degrees. Click-stops are great in theory — easy to turn, much less chance of error, more exact positioning. But they turn out to have their problems.

First, unless you shoot with just one lens, no single interval is perfect. I used to shoot all my large panos with a 10 degree interval, which most detent systems didn’t even want to support. Your best compromise is to pick a series of focal lengths that are multiples. So if you shoot with, say, a 50mm and a near-25mm lens, you can use a 15 degree interval, and just go two clicks for 30 degrees and so on. (It’s not quite this simple — you need more overlap at the wider focal lengths.)
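Where do those numbers come from? A rough sketch: assuming a full-frame camera held in portrait orientation (24 mm across the frame) and roughly 45% overlap between frames, both figures picked here purely for illustration, the click interval falls out of simple trigonometry:

```python
import math

def interval_deg(focal_mm, overlap=0.45, frame_mm=24.0):
    """Angular step between frames for a given focal length."""
    fov = math.degrees(2 * math.atan(frame_mm / (2 * focal_mm)))
    return fov * (1 - overlap)   # step = field of view minus the overlap

print(round(interval_deg(50)))   # -> 15 degrees: one click
print(round(interval_deg(25)))   # -> 28 degrees: two clicks give 30
```

Note that two 15 degree clicks give 30 degrees, slightly wider spacing than the ~28 the short lens ideally wants, which is exactly why the wider focal lengths need the extra overlap budget.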

Changing the click stops is a pain on some rotators — it involves taking the rotator apart, which is too much trouble no matter how easy they make it. The new Nodal Ninja rotators and some others use a fat rotator with a series of pins. This is good, but the rotator alone is $200.

Click stops have another downside. You want them to be firm, but when they are, the “click” sets up vibrations in the assembly, which has a long lever arm, especially with a telephoto lens. Depending on the assembly, it can take a few seconds for those vibrations to die down.

So here’s a proposal that might be a winner: electronic click stops. The rotator ring would have fine sensor marks on it, read by a standard index photosensor hooked up to an inexpensive microcontroller. The microcontroller in turn would have a small piezo speaker and/or a couple of LEDs. The speaker would issue a beep when the camera was in the right place, and also a sub-tone that changes as you approach the right spot — a “warmer/colder” signal to let you find it quickly. LEDs could blink faster and faster as you get warmer, and go solid when on the right spot. They would also warn you if you drifted too far from the spot before shooting.
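As a sanity check on how little is involved, here is a minimal sketch of that warmer/colder loop, assuming the Raspberry Pi Pico port of MicroPython. The pin numbers, mark density and tone mapping are placeholders, and a single sensor as shown cannot sense direction; a real build would pair two sensors in quadrature:

```python
from machine import PWM, Pin

TICKS_PER_REV = 720          # index marks on the rotator ring (assumed)
ticks = 0

def on_mark(pin):            # interrupt: count marks as the head turns
    global ticks
    ticks += 1

sensor = Pin(16, Pin.IN, Pin.PULL_UP)
sensor.irq(trigger=Pin.IRQ_RISING, handler=on_mark)
piezo = PWM(Pin(17))

def feedback(interval_ticks):
    """Rising 'warmer' tone near the next stop, steady tone on it."""
    err = ticks % interval_ticks
    err = min(err, interval_ticks - err)      # distance to nearest stop
    if err == 0:
        piezo.freq(880)                       # on the stop: steady beep
        piezo.duty_u16(32768)
    else:
        piezo.freq(220 + 40 * (interval_ticks // 2 - err))
        piezo.duty_u16(8000)                  # quieter hunting tone

while True:
    feedback(TICKS_PER_REV // 24)             # 15-degree stops: 30 ticks
```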

Now this alone would be quite useful, and of course, fully general as it could handle any interval desired. Two more things are needed — a way to set the interval, and optionally a way to ease the taking of the photos.

To set the interval, you might first reset the device by giving it a quick 360 degree spin. It would give a distinctive beep when ready. Then you would look through the viewfinder and move by the desired interval, and your interval would be set. If doing a multi-row panorama, you would have two angle sensors and would do this twice. You could have a button for this, but I am interested in avoiding buttons. Now you would be ready to shoot. It would give a special signal after you had shot 360 degrees, or the width of the first row in a multi-row. Other modes could be set with other large motions of the rotator, such as moving it back and forth twice quickly, or other highly atypical rotations.
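Here is a sketch of that button-free gesture logic. The `read_ticks` and `beep` hooks are hypothetical stand-ins for the hardware above, and the timing thresholds are guesses:

```python
import time

FAST_SPIN_S = 1.0     # the arming spin must finish within this (assumed)
PAUSE_S = 1.5         # stillness that ends the set gesture (assumed)

def wait_for_interval(read_ticks, beep, ticks_per_rev=720):
    """Button-free interval setting: a fast full spin arms the device,
    then the next move-and-hold-still records the interval in ticks."""
    start, t0 = read_ticks(), time.time()
    while read_ticks() - start < ticks_per_rev:   # wait for a full 360
        if time.time() - t0 > FAST_SPIN_S:        # too slow: restart window
            start, t0 = read_ticks(), time.time()
        time.sleep(0.01)
    beep()                                        # distinctive "ready" beep
    base = read_ticks()
    last, still_since = base, time.time()
    while time.time() - still_since < PAUSE_S:    # until the user holds still
        now = read_ticks()
        if now != last:
            last, still_since = now, time.time()
        time.sleep(0.01)
    return last - base                            # ticks per frame
```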

(If you want buttons, an interesting way to do this is to have an IR sensor and accept commands from existing remotes, such as a universal TV remote set to a Sony TV, or some other tiny remote control that is readily available. Then you can have all the buttons and modes you want.)

We might need one button (for on/off), and since “off” could be a long press-and-hold, the button could also be used for interval setting and panorama starting.

The next issue is automatic shooting, or shot detection. The sensor, since it will be finely tuned, will be able to tell when you’ve stopped at the proper stop. When all movement ceases, it could take the shot for you by any of several methods. It might also be useful to have you fire the shutter manually, but via a button on the panohead rather than the camera’s own shutter or cable release. First of all, this would let the head know you had taken the shot, so it could warn you about any shot that was missing. It could also know if you bumped the head or moved it during a shot — when doing long exposures there is a risk of this, especially if you are too eager for the next shot. Secondly, you should always be using a cable release anyway, so building one into the pano head makes some sense. However, this need not be included in the simplest form of the product.

One very cheap way for the pano head to fire the shutter is infrared. Many cameras, though sadly not all, will let you control the shutter with infrared. Digital SLRs stopped doing this for a while, but now Canon at least has reversed course and supports infrared remote on the 5D Mark II. I think we can expect to see more of this in future. Another way is with a custom cable into the camera’s cable-release port. The non-standard connectors, such as the Canon N3, can now be bought, but this does mean carrying various connector adapters and plugging them in.

A third way is via USB. This is cheap and the connector is standard, but not all cameras will fire via USB. Fortunately, more and more microcontroller chipsets are getting USB built in. The open source libgphoto2 library can control a lot of cameras. Of course, if you have a fancy controller, you can do much more with USB, such as figure out the camera’s field of view from the EXIF data, but that’s beyond the scope of a simple system like this.
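As a sketch of the “shoot when movement ceases” idea over USB, here is one way it might look, shelling out to the gphoto2 command-line tool (a front end to libgphoto2). `read_ticks` is the same hypothetical encoder reader as in the sketches above:

```python
import subprocess
import time

def fire_when_still(read_ticks, settle_s=0.5):
    """Wait for the head to sit still (no new encoder ticks), then shoot."""
    last, t = read_ticks(), time.time()
    while time.time() - t < settle_s:      # require settle_s of stillness
        now = read_ticks()
        if now != last:
            last, t = now, time.time()
        time.sleep(0.02)
    # --capture-image triggers one exposure on the USB-attached camera
    subprocess.run(["gphoto2", "--capture-image"], check=True)
```

The same stillness test doubles as the bump detector mentioned above: if ticks arrive while the shutter is open, the head knows the frame may be spoiled.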

The fourth way is a shutter servo, again beyond the scope of a small system like this. In addition, all these methods demand more UI, and that means more buttons, and eventually even a screen, if an LED and speaker can’t tell you all you need. In this case, though, what’s called for is a button you can use to fire the shutter, and which you can press and hold before starting a pano to ask for auto-firing.

The parts cost of all this is quite small, especially in any bulk. Cheaper than a machined detent system, in fact. In smaller volumes, a pre-assembled microcontroller board could be used, such as the Arduino or its clones. The only custom part might be the optical rotary encoder disk, but a number of vendors make these in various sizes.

I’ve talked about this system being cheap, but it has another big advantage: it can be small. It’s also not out of the question that it could be retrofitted onto existing pano heads, as just about everybody already carries a ballhead or pan/tilt head. For a retrofit, one would glue an index-mark tape around the outside of the existing head near where it turns, and mount the sensor and other electronics on the other part. The result is a panohead that effectively weighs nothing, because you are already carrying it.

Update: I am working on even more sophisticated plans that could yield a panohead which is the strongest, smallest, fastest, most versatile and lightest all at the same time — and among the less expensive too. But I would probably want some partners if I were to manufacture it.
