Holding an election over SMS

In 2004, I described a system that would allow secure voting over an insecure internet and PC. Of late, I have been pondering the question of how to build a “turn-key democracy kit” — a suite of tools and services that could be used by a newly born democracy to smoothly create a new state. We’ve seen a surprising number of new states and revolutions in the last few years, and I expect we’ll see more.

One likely goal after any revolution is to quickly hold some sort of meaningful election so that it’s clear the new regime has popular support and is not just another autocracy replacing the old one. You don’t have time to elect a full government (and may not want to while passions are still high) but at some point you need some sort of government that is accountable to the people to oversee the transition to a stable democracy.

This may create a need for a quick, cheap, simple and reliable election. Even though I am generally quite opposed to the use of voting machines, particularly voting machines which only record results in digital form, there are a number of advantages to digital voting over cell phones and PCs in a new country, at least in a country that has a digital or mobile phone infrastructure established enough so that everybody, even if they don’t have a phone, knows someone who has one.

Consider:

  • In a new country, fresh out of autocracy, powerful forces will oppose the election. They will often try to prevent it or block voters.
  • A common technique is intimidation, scaring people away from voting with threats of violence around polling places.
  • The attacks against digital voting systems tend to require both sophistication and advanced planning.
  • For a revolutionary election, the digital voting systems may well be brought in and operated by disinterested foreign parties, backed by the U.N. or other agencies.
  • An electronic system is also immune to problems like boxes of ballots disappearing or being stuffed or altered.

It may be judged that the risks of corruption of a digital or partially digital election may be less than the risks of a traditional polling place election in a volatile area. It may also be hard to build and operate trustable polling places in remote locations, and do it quickly.

The big issue I see is maintaining the secret ballot. It is difficult to protect the secret ballot with remote voting, and much easier in polling-station voting. If the secret ballot is not adequately protected, forces could use intimidation to make sure people vote the right way, or in some cases to buy votes. I am not sure I have a really good solution to this and welcome input; this is an idea in the making.

Japan, and nuclear disasters

The images from Japan are shocking and depressing, and what seemed at first to be an example of the difference between a 1st world and 3rd world earthquake has produced a five-figure death toll. But the nerd and engineer in me has to wonder about some of the things I’ve seen.

Youtube disaster?

While there has been some remarkable footage, some of it in HD, I was surprised at how underdocumented things were, considering Japan’s reputation as the most camera-carrying nation of the world — and the place where all the best cameras come from. I had expected this would be the “Youtube disaster” where sites like YouTube would fill with direct observer HD videos from every town, but most of what was uploaded there in the first few days was stuff copied from the TV (in fact, due to DRM, often camcorders pointed at TVs.) Of course, the TV networks were getting videos from private individuals, but we saw the same dramatic videos over and over again, particularly the one from the destroyed village of Miyako where the water swept boats and cars over the seawall and under a bridge.

Yes, there was a lot of individual reporting, but I expected a ton, an unprecedented amount, and I expected to see it online first, not on the news first.

Cell phone shutdown

Japan is also one of the world’s most connected countries, with phones for all. Not a lot has emerged about the loss of cell phone service. Some reports suggest some areas of the network were switched into texting-only mode for civilians to leave capacity for emergency workers. Other reports say that landlines were often up when cell lines were down. The world still awaits Klein Gilhousen’s plan to allow cell phones to text peer to peer which I reported on in 2005.

Nuclear plant worst case

The public is now fully aware of many of the issues with nuclear reactors which require active stabilization using external electricity. A lot had to happen to get to the pump shutdown:

  • The reactors themselves were auto-shutdown after the earthquake. Wise, though in theory the subsequent problems would not have happened if one reactor had remained up and powering the plant.
  • The quake or tsunami shut off the external power. A week later it’s still not up. It seems that restoring it should have been a top priority for TEPCO. Was the line that badly destroyed, or did they just not prioritize it?
  • The backup generators were damaged by the tsunami, all 19 of them. I have to admit, most people would think having 19 backup generators is a very nice amount of backup. But this teaches that if you have lots of backups, you have to think about what might affect all of them. 1 backup generator or 100, they all would have failed if unable to withstand the wave.
  • The batteries supposedly lasted for 8 hours. This does not seem unreasonable. But they either did not realize that they had to get something else going in the 8 hours, or expected other power. Their procedure manuals should have had a “what to do if you have only 8 hours of battery left” contingency, but I can believe they didn’t because it seemed so unlikely.
  • That said, I believe the best backup plan has a fallback that involves emergency-level external resources. In particular, I have heard of no talk of sending a ship with a few hundred meters of cable to the docks there, one of which appears to be under 100m from reactor #1 and presumably the internal power grid. Many ships have big generators onboard or can deliver them.
  • Failing that, a plan for helicopter delivery of a generator and fuel in case all other channels are out.
  • Apparently they did bring in a backup generator by truck, but it was incompatible, and they are still without power.
  • It’s a hard question to consider whether they should have restarted a reactor while on batteries. There would not be enough time for a full post-quake, post-tsunami inspection of the reactor. On the other hand, they clearly didn’t realize just how bad it was to lose all power, and/or probably presumed they would get power before too long.
  • Everybody has now figured out the problem with spent fuel storage without containment in a zone where the chamber might crack and drain. Had nobody worried about that before? Most reactors don’t store all their spent fuel this way, but some do, and I have to presume work is underway to address this.

Robots

Japanese skill in robotics is world-leading. I’ve seen examples of some of that going on, but I’m surprised that they haven’t moved just about every type of robot that might be useful in the nuclear situation to near the nuclear plant. If they should ever have a situation where they must evacuate the plant again, as they did on Wednesday, it could be useful to have robots there, even if only to act as remote cameras to see what is happening in the reactors or control rooms.

There are also remote manipulator robots, and I am surprised no media organization has managed to get some sort of camera robot in the plant to report. Of course, keeping the robot powered is an issue. Few robots are actually able to hook themselves up to power easily, but a number of the telepresence robots can do that.

Many of the “work in danger zone” robots have been built for military applications, and the Japanese don’t have that military need so perhaps they are not so common in Japan. But they do have stair climbers, telepresence and basic manipulators. Even if the robots can do very little it would make the public feel better to know that something is there.

The Chernobyl cleanup was in part done by remote control bulldozers that the Russians made.

Future of Nuclear Power

The reactor failure is causing much public examination of nuclear power. This disaster does show just how bad the older designs are, but it also makes us question why companies were running them when it’s been known for decades that those designs were a poor idea. Obviously investors will not be keen on saying, “Oh, we made mistakes back then, let’s write off the billions.”

There is also an argument that a technology can’t develop without going through a phase where it is less well understood and designs are not as safe as can be. Would we have developed newer, safer designs if nobody had been able to build the older ones?

I have been seeing tons of ads on CNN by the coal, gas and oil industries about how wonderful their technology is, in spite of the fact that these technologies have caused quite a large number of deaths, tons of pollution, and now the fear of greenhouse gases.

According to one agency in Europe, I found a quote that the world’s nuclear plants had generated 64.6 trillion kWh in the period up to 2009, or about 6.4 x 10^16 watt-hours. A watt-hour of coal produces about a gram of CO2. A watt-hour from the coal and gas plants at the US average is less than that; let’s call it 0.7 grams/watt-hour more than nuclear (there is some CO2 output from the full lifecycle of the nuclear industry). (Correcting from the original, where I had used euro-billion = 10^12, which can’t be right.)

That’s about 4 * 10^16 grams of CO2 not put into the air by the nuclear industry. I’m looking for figures to see what that means, but one that I found says that the whole atmosphere of the planet has 2.7 x 10^18 grams of CO2 in it.
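
A quick back-of-the-envelope check of that arithmetic, using only the rough figures quoted above (this is simple mass accounting, not a climate model):

```python
# Back-of-the-envelope CO2 arithmetic using the rough figures quoted above.
nuclear_wh = 64.6e12 * 1000          # 64.6 trillion kWh expressed in watt-hours
co2_saved_per_wh = 0.7               # grams of CO2 avoided per watt-hour vs. the coal/gas mix
atmosphere_co2_grams = 2.7e18        # quoted total CO2 in the atmosphere, in grams

co2_avoided = nuclear_wh * co2_saved_per_wh
print(f"CO2 not emitted: {co2_avoided:.1e} grams")                       # ~4.5e16 g
print(f"Naive fraction of atmospheric CO2: {co2_avoided / atmosphere_co2_grams:.2%}")
# Note: turning this into an actual ppm or warming difference needs a carbon-cycle
# model (ocean and biosphere uptake), which is exactly the question posed below.
```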

The number I would like to see is what difference those 10^16 grams of CO2 have made to the total PPM of CO2 in the atmosphere, which is to say, how much did those nuclear plants retard global warming according to accepted climate models. Anybody have info?

To solve the world’s energy needs, while we eventually would like to develop economical solar plants, biofuels that don’t use cropland, geothermal, fusion and other sources, right now it seems that there is no choice but to build lots more nuclear if we want to stop burning so much coal. Other choices are coming but are not assured yet. If this disaster scares the public away from newer reactor designs which go to a safe state without active support or human intervention, I think that would be a mistake.

I hope that Japan is able to recover as quickly as possible, and that more of the missing are found alive. Someday something like this is going to happen here in the Bay Area — though probably not a 9.0, but possibly an 8 — and it won’t be pretty.

Erin go Brad -- registering Irish citizenship

It’s St. Paddy’s day but I can celebrate a little harder this time. Two days ago, I got my notice of entry into Ireland’s Foreign Birth Registry, declaring me an Irish citizen. I’m able to do that because I have 3 Irish grandparents (2 born in Ireland.) Irish law declares that anybody born to somebody born in Ireland is automatically Irish. That made my father, whose parents were both born there, an Irish citizen even though he never got a passport. Because my father was an Irish citizen (though not born on the island), that also gives me the right to claim it, though I had to do the paperwork; it is not automatic. If I had children after this, they could also claim it, but if I had any before this registration, they would not.

I decided to do this for a few reasons. First, it will allow me to live, work and travel freely in Ireland or anywhere else in the E.U. The passport control lines for Canadians are not usually that long, but it’s nicer to not be quizzed. But in the last few years, I have encountered several situations where it would have been very useful to have a 2nd passport:

  • On a trip to Russia, I discovered there was a visa war between Canada and Russia, and Russia was making Canadians wait 21 days for a visa while the rest of the world waited 6 days or less. I had to change a flight over that and barely made my conference. It would have been handy to use an Irish passport then. (Update: Possibly not. Russia and others require you to use the passport which allows residence, and you must apply where you live. So my Irish documents are no good at the San Francisco consulate, as I don’t live there under the Irish passport.)
  • Getting stamps in your passport for Israel or its border stations means some other countries won’t let you in. Israelis will stamp a piece of paper for you but resent it, and you can lose it. A 2nd passport is a nice solution. (For frequent visitors, I believe Canada and the USA both offer a 2nd passport valid only for travel to Israel.)
  • Described earlier, last year I lost my passport in Berlin. While I got tremendous service in passport replacement, this was only because my mother was in hospital. Otherwise I would have been stuck, unable to travel. With 2 passports, you can keep them in two places, carry one and leave one in the hotel safe etc. While Canada does have an emergency temporary passport, some countries only offer you a travel document to get you home, and you must cancel any other travel on your trip.
  • On entry to Zimbabwe, I found they charged Canadians $75 per entry, while most other nations paid $30 for 1 and $45 for two. Canada is charging Zimbabweans $75 so they reciprocate. Stupid External Affairs; I bet far more Canadians go to Zimbabwe than the other way around.
  • On entry to Zambia, it was $50 to transit for most countries but free/no-visa for the Irish. I got my passport 1 week after this, sigh. Ireland has a visa abolition deal.
  • Argentina charges a $150 “reciprocity fee” to US and Canadian passports, good for 10 years. Free for Irish, though. Yay!

All great reasons to have two passports. I don’t have one yet, though. (Update: I got it in June) Even though I presume that the vast majority of those who do the Irish foreign birth registry immediately want a passport, it doesn’t work that way. After a 21-month wait, I have my FBR certificate, which I now must mail back to the same consulate that sent it, along with several of the same documents I used in getting the FBR, like my original birth certificate. While it makes huge sense to do them together, it doesn’t work that way.

ICANN prepares to auction off the English language

ICANN is meeting in San Francisco this week. And they’re getting closer to finally implementing a plan they have had in the works for some time to issue new TLDs, particularly generic top level domains.

Their heart is in the right place, because Verisign’s monopoly on “.com” — which has become the de facto only space where everybody wants a domain name, if they can get it — was a terrible mistake that needs to be corrected. We need to do something about this, but the plan of letting other companies get generic TLDs which are ordinary English words, with domains like “.sport” and “.music” (as well as .ibm and .microsoft) is a great mistake.

I have an odd ambivalence. This plan will either fail (as the others like .travel, .biz, .museum etc appear to have) or it will succeed at perpetuating the mistake. Strangely it is the trademark lawyers who know the answer to this. In trademark law, it was wisely ruled centuries ago that nobody gets ownership of generic terms. But some parties will offer the $185,000 fee to own .music precisely because they hope it will give them a monopoly on naming of music related internet sites. Like all monopolies these TLDs will charge excessive fees and give poor customer service. They’ll also get to subdivide the monopoly selling domains like rock.music or classical.music. And while .music will compete with .com, the new TLDs will largely not compete with one another — ie. nobody will be debating whether to go with .music or .sport, and so we won’t get the competition we truly need.

I’ve argued this before, but I have just prepared two new essays in my DNS sub-site:

Since I don’t like either of the two main consequences, what do I propose? Well for years I have suggested we should instead have truly competitive TLDs which can compete on everything — price, policies, service, priority and more. They should each start on an equal footing so they are equal competitors. That means not giving any one a generic name that has an intrinsic value like “.music.” People will seek out the .music domain not because the .music company is good or has good prices, they will seek it out because they want to name a site related to music, and that’s not a market.

Instead I propose that new TLDs be what trademark people call “coined terms” which are made up words with no intrinsic meaning. Examples from the past include names like Kodak, Xerox and Google. Today, almost every new .com site has to make up a coined term because all the generics are taken. If the TLDs are coined terms, then the owners must build the value in them by the sweat of their brow (or with money) rather than getting a feudal lordship over an existing space. That means they can all compete for the business of people registering domains, and competition is what’s good for the market and the users.

Sadly the .com monopoly remains (along with the few other generic TLDs.) The answer there is to announce a phase out. All .com sites with generic meanings should get new names in the new system, but after a year or two they’ll get redirects as long as they want to pay. (Their new registrar will manage this and set the price.) All HTTP requests, in particular, would get an HTTP Redirect Permanent (301) so the browser shows the new name. E-mail MX would be provided but all sent email would use the new name. All old links and addresses would still work forever, but users would switch advertising and everything else to the new names at a reasonable pace. Yes, people who invested lots of money in trying to own words like “drugstore.com” lose some of that value, but it’s value they should never have been sold in the first place. (Companies with unique strings like microsoft.com could avoid the switch, but not non-unique ones like apple.com or ibm.com)
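
As a rough illustration of the redirect mechanics, using only the Python standard library (the old and new domain names are invented examples, not a proposal for specific sites):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping from retired generic .com names to their new coined-term names.
RENAMED = {
    "drugstore.com": "rxacme.example",   # made-up example names
}

class PermanentRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        old_host = self.headers.get("Host", "").split(":")[0]
        new_host = RENAMED.get(old_host)
        if new_host:
            # HTTP 301 Moved Permanently, so browsers, links and search engines
            # learn and display the new name while old links keep working.
            self.send_response(301)
            self.send_header("Location", f"http://{new_host}{self.path}")
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), PermanentRedirect).serve_forever()
```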

Check out the essays for the real details. Of course, at this point the forces of the “stakeholders” at ICANN are so powerful that I am tilting at windmills. They will go ahead even though it’s the wrong answer. And once done, it will be as hard to undo as .com is. But the right answer should still be proclaimed.

The "Forgetful Broker" is needed for Data Deposit Box

For some time I’ve been advocating a concept I call the Data Deposit Box as an architecture for providing social networking and personal data based applications in a distributed way that tries to find a happy medium between the old PC (your data live on your machine) and the modern cloud (your data live on 3rd party corporate machines) approach. The basic concept is to have a piece of cloud that you legally own (a data deposit box) where your data lives, and code from applications comes and runs on your box, but displays to your browser directly. This is partly about privacy, but mostly about interoperability and control.

This concept depends on the idea of publishing and subscribing to feeds from your friends (and other sources.) Your friends are updating data about themselves, and you might want to see it — ie. things like the Facebook wall, or Twitter feed. Feeds themselves would go through brokers just for the sake of efficiency, but would be encrypted so the brokers can’t actually read them.
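
As a minimal sketch of the “encrypted feed through a broker” idea, here is what publishing and reading might look like using the widely used Python cryptography package (key distribution to friends is the hard part and is left out):

```python
from cryptography.fernet import Fernet

# The feed owner generates a key and shares it with friends out of band
# (how keys get to friends is omitted here; it is the hard part).
feed_key = Fernet.generate_key()
owner = Fernet(feed_key)

# The owner publishes an encrypted update; the broker only ever relays ciphertext.
update = b'{"status": "At the cafe, come join me"}'
ciphertext = owner.encrypt(update)
broker_queue = [ciphertext]          # stand-in for the broker's relay queue

# A friend holding the key pulls from the broker and decrypts locally.
friend = Fernet(feed_key)
print(friend.decrypt(broker_queue[0]))
```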

There is a need for brokers which do see the data in certain cases, and in fact there is a need for some types of data never to be shown to your friends at all.

Crush

One classic example is the early social networking application the “crush” detector. In this app you get to declare a crush on a friend, but this is only revealed when both people have a mutual crush. Clearly you can’t just be sending your crush status to your friends. You need a 3rd party who gets the status of both of you, and only alerts you when the crush is mutual. (In some cases applications like this can be designed to work without the broker knowing your data, through the cryptographic technique known as blinding.)
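
As a toy sketch, here is the broker logic in its naive (non-blinded) form, where the broker does see the declarations; a truly forgetful broker would store only blinded values it cannot map back to people:

```python
# Naive mutual-crush broker: reveals a crush only when it is reciprocated.
# In this toy version the broker sees the raw declarations; a blinded scheme
# would have it store only opaque values it cannot link back to identities.
crushes = set()

def declare_crush(who: str, on: str) -> bool:
    """Record who -> on; return True (notify both parties) only if the crush is mutual."""
    crushes.add((who, on))
    return (on, who) in crushes

assert declare_crush("alice", "bob") is False   # nothing is revealed yet
assert declare_crush("bob", "alice") is True    # mutual: now both get notified
```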

Time for the fourth screen -- the always on wall computer

In media today, it’s common to talk about three screens: Desktop, mobile and TV. Many people watch TV on the first two now, and tools like Google TV and the old WebTV try to bring interactive, internet style content to the TV. People like to call the desktop the “lean forward” screen where you use a keyboard and have lots of interactivity, while the TV is the “lean back” couch-potato screen. The tablet is also distinguishing itself a bit from the small screen normally found in mobile.

More and more people also find great value in having an always-on screen where they can go to quickly ask questions or do tasks like E-mail.

I forecast we will soon see the development of a “fourth screen” which is a mostly-always-on wall panel meant to be used with almost no interaction at all. It’s not a thing to stare at like the TV (though it could turn into one) nor a thing to do interactive web sessions on. The goal is to have minimal UI and be a little bit psychic about what to show.

One could start by showing stuff that’s always of use. The current weather forecast, for example, and selected unusual headlines. Whether each member of the household has new mail, and if it makes sense from a privacy standpoint, possibly summaries of that mail. Likewise the most recent status from feeds on twitter or Facebook or other streams. One could easily fill a screen with these things so you need a particularly good filter to find what’s relevant. Upcoming calendar events (with warnings) also make sense.

Some things would show only when important. For example, when getting ready to go out, I almost always want to see the traffic map. Or rather, I want to see it if it has traffic jams on it; there’s no need to show it when it’s green — if it’s not showing I know all is good. Likewise I may not need to see the weather if it’s forecast sunny, or if it’s already raining right now, but if it’s clear now and going to rain later I want to see that. Many city transit systems have a site that tracks when the next bus or train will come to my stop — I want to see that, and perhaps at morning commute time even get an audio alert if something unusual is up or if I need to leave right now to catch the streetcar. A view from the security camera at the door should only show if somebody is at the door.
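
A small sketch of that kind of rule, with placeholder data sources; the idea is simply that each widget carries its own test for whether it deserves screen space right now:

```python
# Each widget pairs a relevance test with a renderer; the wall panel only
# shows the widgets whose test passes. The input dicts are placeholder data.
def widgets_to_show(traffic, weather, door_camera):
    shown = []
    if any(seg["speed_mph"] < 25 for seg in traffic["segments"]):
        shown.append("traffic map")            # only when there is an actual jam
    if not weather["raining_now"] and weather["rain_expected_later"]:
        shown.append("weather")                # clear now but rain later: worth showing
    if door_camera["person_detected"]:
        shown.append("front door camera")
    return shown

example = widgets_to_show(
    traffic={"segments": [{"speed_mph": 62}, {"speed_mph": 18}]},
    weather={"raining_now": False, "rain_expected_later": True},
    door_camera={"person_detected": False},
)
print(example)   # ['traffic map', 'weather']
```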

There are so many things I want to see that we will need some UI for the less popular ones. But it should be a simple UI, with no need to find a remote (though if I have a remote — any remote — it should be able to respond to it.) Speech commands would be good to temporarily see other screens and modes. A webcam (and eventually a Kinect-style sensor) for gestural UI would be nice, letting me swipe or wave to get other screens.

Google car demo, Toyota Vision and mind-driving

A few recent Robocar updates for you:

Google took its car down to the TED conference in Long Beach and did a few demo drives for people. In this mashable story you can catch some videos, inside and outside, of the car driving around a cone-based course on top of a parking lot near TED.

Toyota recently released a video with a vision of future transportation, including lots of self-driving cars in a city of the future. This short animated video has trucks in platoons and call-on-demand cars that come to your location and drive you around the city, or let you disengage and self-drive outside the automatic lane. In Toyota’s city there are special lanes which have guide markers and also inductive powering of the electric cars. While the powering may be valuable, I believe that the special infrastructure vision is an old one, and there are already several demonstrations of driving on existing roads without modification.

One such demonstration comes from the “Made in Germany” team which certainly likes to come up with demos to attract attention. In this case they combined a headband that reads EEG/EMG signals with the controls of their robocar for what they call Brain Driver. The car does most of the driving, but signals from the headband can be used to tell it to go left or right at intersections, or accelerate and brake. My general experience of such EEG headbands has indicated that getting 5 unambiguous signals like that quickly is a tough job from pure EEG, so I am curious if they added some EMG (muscle) to it.

The brain driver is just a demo, but it does show one interesting technology, which is the ability for robocar technology to allow a vehicle to be driven through a very, very simple user interface — ie. just a few buttons or a joystick. I suspect that for people so disabled that they can only communicate via EEG — that’s majorly disabled — it will be better to wait for a full robocar technology that doesn’t require any human input for the driving part.

(Disclaimer: Google is a consulting client of mine.)

Ride-sharing apps instead of Bus Rapid Transit?

You may have heard of Bus Rapid Transit — a system to give a bus line a private or semi-private right-of-way, along with bus stops that are more akin to stations than bus shelters (with ticket-taking machines and loading platforms for multiple doors.) The idea is to make bus transit competitive with light-rail (LRT) in terms of speed and convenience. Aside from getting caught in slow traffic, buses also are slow to board. BRT is hoped to be vastly less expensive than light rail — which is not hard because LRT (which means light capacity rail, not lightweight rail) has gotten up to $80 to $100M per mile. When BRT runs down the middle of regular roads, it gets signal timing assistance to help it have fewer stops. It’s the “hot new thing” in transit. Some cities even give it bits of underground or elevated ROW (the Boston Silver Line) and others just want to wall off the center of a road to make an express bus corridor. Sometimes BRT gets its own highway lane or shares a special carpool lane.

At the same time just about anybody who has looked at transit and the internet has noticed that as the buses go down the street, they travel with tons of cars carrying only one person and lots of empty seats. Many have wondered, “how could we use those empty private car seats to carry the transit load?” There are a number of ride-sharing and carpooling apps on web sites and on smartphones, but success has been modest. Drivers tend to not want to take the time to declare their route, and if money is offered, it’s usually not enough to counter the inconvenience. Some apps are based on social networks so friends can give rides to friends — great when it works but not something you can easily do on demand.

But one place I’ve seen a lot of success at this is the casual carpooling system found in a number of cities. Here it’s very popular to cross the Oakland-SF Bay Bridge, which has a $6 toll to cross into SF. It used to be free for 3-person carpools, now it’s $2.50, but the carpools also get a faster lane for access to the highly congested bridge both going in and out of SF.

Almost all the casual carpool pickup spots coming in are at BART (subway) stations, which are both easy for everybody to get to, and which allow those who can’t get a carpool to just take the train. There is some irony that it means that the carpools mostly take people who would have ridden BART trains, not people who would have driven, the official purpose of carpool subsidies. In the reverse direction the carpools are far fewer with no toll to be saved, but you do get a better onramp.

People drive the casual carpools because they get something big for it — saving over $1,000/year, and hopefully a shorter line to the bridge. This is the key factor to success in ride share. The riders are saving a similar amount of money in BART tickets, even more if they skipped driving.

Let’s consider what would happen if you put in the dedicated lane for BRT, but instead of buses created an internet mediated carpooling system. Drivers could enter the dedicated lane only if:

  • They declared their exit in advance to the app on their phone, and it’s far enough away to be useful to riders.
  • They agree to pick up riders that their phone commands them to.
  • They optionally get a background check that they pay for so they can be bonded in some way to do this. (Only the score of the background check is recorded, not the details.)

Riders would declare their own need for a ride, and to what location, on their own phones, or on screens mounted at “stops” (or possibly in nearby businesses like coffee shops.) When a rider is matched to a car, the rider will be informed and get to see the approach of their ride on the map, as well as a picture of the car and plate number. The driver will be signaled and told by voice command where to go and who to pick up. I suggest calling this Carpool-Rapid-Transit or CRT.
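
A minimal sketch of the matching step, purely as illustration (positions along the corridor, seat counts and the bonding check are simplified stand-ins for what a real CRT dispatcher would track):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Driver:
    driver_id: str
    declared_exit: float           # position along the corridor, e.g. km from its start
    seats_free: int
    bonded: bool                   # passed the optional background check
    riders: List[str] = field(default_factory=list)

def match_rider(rider_id: str, destination: float, drivers: List[Driver]) -> Optional[Driver]:
    """Assign the rider to the first eligible car still on the corridor."""
    for d in drivers:
        if d.bonded and d.seats_free > 0 and d.declared_exit >= destination:
            d.riders.append(rider_id)
            d.seats_free -= 1
            return d               # rider sees this car's photo/plate; driver is told whom to pick up
    return None                    # no car available: the rider waits or takes the fallback bus

drivers = [Driver("car-42", declared_exit=12.0, seats_free=2, bonded=True)]
print(match_rider("rider-7", destination=8.5, drivers=drivers))
```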

Watson, game 2

Not much new to report after the second game of the Watson Jeopardy Challenge. I’ve added a few updates to yesterday’s post on Watson and the result was as expected, though Watson struggled a lot more in this game than in the prior round, deciding not to answer many questions due to low confidence and making a few mistakes. In a few cases it was saved by not buzzing fast enough even though it had over 50% confidence, as it would have answered slightly wrong.

Some quick updates from yesterday you will also find in the comments:

  • Toronto’s 2nd busiest airport, the small Island airport, has the official but rarely used name of Billy Bishop. Bishop was one of the top flying aces of WWI, not WWII. Watson’s answer is still not clear, but that it made mistakes like this is not surprising. That it made so few is surprising.
  • You can buzz in as soon as Trebek stops speaking. If you buzz early, you can’t buzz again for 0.2 seconds. Watson gets an electronic signal when it is time to buzz, and then physically presses the button. The humans get a light, but they don’t bother looking at it, they try timing when Trebek will finish. I think this is a serious advantage for Watson.
  • This IBM Blog Post gives the details on the technical interface between Watson and the game.
  • Watson may have seemed confident with its large bet of $17,973. But in fact the bet was fixed in advance (the arithmetic is worked through in the sketch after this list):
    • Had Jennings bet his whole purse (and got it right) he would have ended up with $41,200.
    • If Watson had lost his bet of 17,973, he would have ended up with $41,201 and bare victory.
    • Both got it right, and Jennings bet low, so it ended up being $77,147 to $24,000.
    • Jennings’ low bet was wise as it assured him of 2nd place and a $300K purse instead of $200K. Knowing he could not beat Watson unless Watson bet stupidly, he did the right thing.
    • Jennings still could have bet more and gotten 2nd, but there was no value to it; the purse is always $300K.
    • If Watson had wanted to 2nd guess, it might have realized Jennings would do this and bet appropriately but that’s not something you can do more than once.
    • As you might expect, the team put a bunch of thought into the betting algorithm as that is one thing computers can do perfectly sometimes. I’ve often seen Jeopardy players lose from bad betting.
  • It still sure seemed like a program sponsored by IBM. But I think it would have been nice if the PI of DeepQA was allowed up on stage for the handshake.
  • I do wish they had programmed a bit of a sense of humour into Watson. Fake, but fun.
  • Amusingly Watson got a category about computer keyboards and didn’t understand it.
  • Unlike the human players who will hit the buzzer before they have formed the answer in their minds, in hope that they know it, Watson does not hit unless it has computed a high confidence answer.
  • Watson would have bombed on visual or audio clues. The show has a rule allowing those to be removed from the game for a disabled player, and these rules were applied!
  • A few of the questions had some interesting ironies based on what was going on. I wonder if that was deliberate or not. To be fair, I would think the question-writers would not be told what contest they were writing for.
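
To make the betting logic concrete, here is a small sketch of the arithmetic above (the pre-bet purses are inferred from the figures in the list: $41,200 is double Jennings’ purse, and Watson’s purse follows from its final score minus the bet):

```python
# Final Jeopardy betting arithmetic, using the purse figures reported above.
jennings_purse = 20_600            # doubling this gives his $41,200 best case
watson_purse = 59_174              # 77,147 final score minus the 17,973 bet

jennings_best_case = 2 * jennings_purse                 # 41,200 if he bet everything and was right
watson_bet = watson_purse - (jennings_best_case + 1)    # lose the bet and still win by $1
print(watson_bet)                                       # 17,973
print(watson_purse - watson_bet)                        # 41,201: bare victory even on a miss
print(watson_purse + watson_bet)                        # 77,147: the actual final score
```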

Watson, come here, I want you

The computer science world, along with the game show world, is abuzz over the showdown between IBM’s “Watson” question-answering system and the best human players to play the game Jeopardy. The first game has been shown, with a crushing victory by Watson (in spite of a tie after the first half of the game.)

Tomorrow’s outcome is not in doubt. IBM would not have declared itself ready for the contest without being confident it would win, and they wouldn’t be putting all the advertising out about the contest if they had lost. What’s interesting is how they did it and what else they will be able to do with it.

Dealing with a general question has long been one of the hard problems in AI research. Watson isn’t quite there yet but it’s managed a great deal with a combination of algorithmic parsing and understanding combined with machine learning based on prior Jeopardy games. That’s a must because Jeopardy “answers” (clues) are often written in obfuscated styles, with puns and many idioms, exactly the sorts of things most natural language systems have had a very hard time with.

Watson’s problem is almost all understanding the question. Looking up obscure facts is not nearly so hard if you have a copy of Wikipedia and other databases on hand, particularly one parsed with other state-of-the-art natural language systems, which is what I presume they have. In fact, one would predict that Watson would do the best on the hardest $2,000 questions because these are usually hard because they refer to obscure knowledge, not because it is harder to understand the question. I expect that an evaluation of its results may show that its performance on hard questions is not much worse than on easy ones. (The main thing that would make easy questions easier would be the large number of articles in its database confirming the answer, and presumably boosting its confidence in its answer.) However, my intuition may be wrong here, in that most of Watson’s problems came on the high-value questions.

Its confidence is important. If it does not feel confident it doesn’t buzz in. And it has a serious advantage at buzzing in, since you can’t buzz in right away on this game, and if you’re an encyclopedia like the two human champions and Watson, buzzing in is a large part of the game. In fact, a fairer game, which Watson might not do as well at, would involve randomly choosing which of the players who buzz in in the first few tenths of a second gets to answer the question, eliminating any reaction time advantage. Watson gets the questions as text, which is also a bit unfair, unless it is given them one word at a time at human reading speed. It could do OCR on the screen but chances are it would read faster than the humans. Its confidence numbers and results are extremely impressive. One reason it doesn’t buzz in is that even with 3,000 cores it takes 2-6 seconds to answer a question.

Indeed a totally fair contest would not have buzzing in time competition at all, and just allow all players who buzz in to answer and get or lose points based on their answer. (Answers would need to be in parallel.)

Watson’s coders know by now that they probably should have coded it to receive wrong answers from other contestants. In one instance it repeated a wrong answer, and in another case it said “What is Leg?” after Jennings had incorrectly answered “What is missing an arm?” in a question about an Olympic athlete. The host declared it right, but the judges reversed that, saying it would have been right had a human said it as a follow-up to the wrong answer, but was a wrong answer without that context. This was edited out. Also edited out were 4 crashes by Watson that made the game take 4 hours instead of 30 minutes.

It did not happen in what aired so far, but in the trials, another error I saw Watson make was declining to answer a request to be more specific on an answer. Watson was programmed to give minimalist answers, which often the host will accept as correct, so why take a risk. If the host doesn’t think you said enough he asks for a more specific answer. Watson sometimes said “I can be no more specific.” From a pure gameplay standpoint, that’s like saying, “I admit I am wrong.” For points, one should say the best longer phrase containing the one-word answer, because it just might be right. Though it has a larger chance of looking really stupid — see below for thoughts on that.

The shows also contain total love-fest pieces about IBM which make me amazed that IBM is not listed as a sponsor for the shows, other than perhaps in the name “The IBM Challenge.” I am sure Jeopardy is getting great ratings (just having their two champs back would do that on its own but this will be even more) but I have to wonder if any other money is flowing.

Being an idiot savant

Watson doesn’t really understand the Jeopardy clues, at least not as a human does. Like so many AI breakthroughs, this result comes from figuring out another way to attack the problem different from the method humans use. As a result, Watson sometimes puts out answers that are nonsense “idiot” answers from a human perspective. They cut back a lot on this by only having it answer when it has 50% confidence or higher, and in fact for most of its answers it has very impressive confidence numbers. But sometimes it gives such an answer. To the consternation of the Watson team, it did this on the Final Jeopardy clue, where it answered “Toronto” in the category “U.S. Cities.”

Air New Zealand "Cuddle Class"

Some years ago I made the proposal that airlines sell half of a middle seat at half price or less so that two coach passengers could assure they would have an empty middle next to them.

I learned a while ago about one approach to this plan, a new “cuddle class” from Air New Zealand also known as the skycouch. It’s a row of 3 coach seats that folds down into a very narrow and short bed for two. The idea is that couples can book the whole row for 2.5x the cost of one seat, ie. the empty middle is being sold at a pretty reasonable half-price, or 1/4 price per person.

As I noted earlier, that alone would be worthwhile. Many people would gladly pay 25% more for an aisle or window with a guarantee that nobody was in the middle, and would get together with other solo voyagers to do this. Air New Zealand has for some time offered what it calls the “Twinseat” which is the ability to buy (for a fairly low price around $60) an assured empty adjacent seat “subject to availability.” This is something different — it’s simply saying that, if there are going to be empty middles on the plane anyway, the people who pay more at the gate will get those next to them. You can’t assure it on a flight unless you make sure you take a flight that won’t fill up.

This skycouch seat however has armrests that really go all the way up, and a footrest that comes up to make the whole thing a platform. Frankly, since 3 seats is only 4.5’ long and the bed is narrower than a twin bed, you need a couple that sleeps together very comfortably while spooning. While everybody likes doing that for a little while, it’s fewer who can do that for a whole night. One person could buy the whole row, I guess, but at 2.5x it starts to approach a nice business class seat, many of which now lie flat. (Mind you I’m picky enough that I don’t sleep that well in the business class flat seats, and I have yet to want to pay for the 1st class ones.)

It’s nice to see the innovation, though. I mean some airlines even have coach armrests that don’t go up all the way when reclined, and that’s a real pain for couples who want to relax together even in the old seating designs.

What would be more interesting, if less romantic, would be a way to have a portable platform that could be installed on top of this row to turn it into two bunkbeds. From a physical standpoint, you could have 4 slots for poles, some reinforcing straps to form X braces on the poles, and a board with inflatable mattress on the top, such boards packed somewhere compactly in the ceiling when not in use. The poles would have to go up and hold a net and bars to stop the top bunkmate from falling out. But the hard part would be making this strong enough to qualify as safe in an emergency landing, since an emergency might arise while these are still assembled, though they would all be dismantled well before landing and they would only be used on flights 10 hours and up. If there were a section of these you could help it along by having no recline in these seats so the seat backs are solid and able to support the upper berth.

In this case, you could have strangers happily paying 125% of the base ticket price for one of these bunks. Lot of work to set up and tear down, though. Probably need a weight limit in the upper bunk. If you can do it at all.

Definition of pixels for the world's biggest photos

I shoot lots of large panoramas, and the arrival of various cheaper robotic mounts to shoot them, such as the Gigapan Epic Pro and the Merlin/Skywatcher (which I have) has resulted in a bit of a “mine’s bigger than yours” contest to take the biggest photo. Some would argue that the stitched version of the Sloan Digital Sky Survey, which has been rated at a trillion pixels, is the winner, but most of the competition has been on the ground.

Many of these photos have got special web sites to display them, such as Paris 26 gigapixels; the rest are usually found at the Gigapan.org site where you can even view the gigapans sorted by size to see which ones claim to be the largest.

Most of these big ones are stitched with AutopanoPro, which is the software I use, or the Gigapan stitcher. The largest I have done so far is smaller, my 1.4 gigapixel shot of Burning Man 2010 which you will find on my page of my biggest panoramas which more commonly are in the 100mp to 500mp range.

The Paris one is pretty good, but some of the other contenders provide a misleading number, because as you zoom in, you find the panorama at its base is quite blurry. Some of these panoramas have even just been expanded with software interpolation, which is a complete cheat, and some have been shot at mixed focal length, where sections of the panorama are sharp but others are not. I myself have done this, for example in my Gigapixel San Francisco from the end of the Golden Gate I shot the city close up, but shot the sky and some of the water at 1/4 the resolution because there isn’t really any fine detail in the sky. I think this is partially acceptable, though having real landscape features not at full resolution should otherwise disqualify a panorama. However, the truth is that sections of sky perhaps should not count at all, and anybody can make their panorama larger by just including more sky all the way to the zenith if they choose to.

There is a difficult craft to making such large photos, and there are also aesthetic elements. To really count the pixels for the world’s largest photos, I think we should count “quality” pixels. As such, sky pixels are not generally quality pixels, and distant terrain lost in haze also does not provide quality pixels. The haze is not the technical fault of the photographer, but it is the artistic fault, at least if the goal is to provide a sharp photo to explore. You get rid of haze only through the hard work of being there at the right time, and in some cities you may never get a chance.

Some of the shots are done through less than ideal lenses, and many of them are done using tele-extenders. These extenders do get more detail but the truth is a 2x tele-extender does not provide 4 times as many quality pixels. A common lens today is a 400mm with a 2x extender to get 800mm. Fairly expensive, but a lot cheaper than a quality 800mm lens. I think using that big expensive glass should count for more in the race to the biggest, even though some might view it as unfair. (A lens that big and heavy costs a ton and also weighs a lot, making it harder to get a mount to hold it and to keep it stable.) One can get very long mirror “lens” setups that are inexpensive, but they don’t deliver the quality, and I don’t believe work done with them should score as high as work with higher quality lenses. (It may be the case that images from a long telescope, which tend to be poor, could be scaled down to match the quality of a shorter but more expensive lens, and this is how it should be done.)

Ideally we should seek an objective measure of this. I would propose:

  • There should be a sufficient number of high contrast edges in the image — sharp edges where the intensity goes from bright to dark in the space of just 1 or 2 pixels. If there are none of these, the image must be shrunk until there are.
  • The image can then be divided up into sections and the contrast range in each evaluated. If the segment is very low contrast, such as sky, it is not counted in the pixel count. Possibly each block will be given a score based on how sharp it is, so that background items which are hazy count for more than nothing, but not as much as good sharp sections.
  • I believe that to win a pano should not contain gross flaws. Examples of such flaws include stripes of brightness or shadow due to cloud movement, big stitching errors and checkerboard patterns due to bad overlap or stitching software. In general that means manual exposure rather than shots where the stitcher tries to fix mixed exposures unless it does it undetectably.
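
A rough sketch of how such a measure might be computed; this is a minimal illustration assuming a grayscale luminance array, and the block size and contrast threshold are arbitrary placeholders that a real standard would have to pin down:

```python
import numpy as np

def quality_pixels(image: np.ndarray, block: int = 256, contrast_threshold: float = 60.0) -> int:
    """Count pixels in blocks that contain real detail (image is 2-D luminance, 0-255).

    A block counts as "quality" if it has a sharp edge somewhere: a strong
    brightness change within a pixel or two. Featureless sky and haze fail this test.
    """
    count = 0
    h, w = image.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = image[y:y + block, x:x + block]
            if tile.shape[0] < 2 or tile.shape[1] < 2:
                continue                       # skip degenerate edge tiles
            # Per-pixel gradient magnitude approximates edge sharpness.
            gy, gx = np.gradient(tile.astype(float))
            if np.max(np.hypot(gx, gy)) >= contrast_threshold:
                count += tile.size
    return count

# Usage idea: shrink the panorama until quality_pixels(img) stops dropping sharply,
# then report that count rather than the raw stitched dimensions.
```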

Some will argue with the last one in particular, since for some the goal is just to get as many useful pixels as possible for browsing around. Gigapixel panoramas after all are only good for zooming around in with a digital viewer. No monitor can display them and sometimes even printing them 12 feet high won’t show all their detail, and people rarely do that. (Though you can see my above San Francisco picture as the back wall of a bar in SF.) Still, I believe it should be a minimum bar that when you look at the picture at more normal sizes, or print it out a few feet in size, it still looks like an interesting, if extremely sharp, picture.

Ideally an objective formula can be produced for how much you have to shrink what is present to get a baseline. It’s very rare that any such panorama not contain a fair number of segments with high contrast edges and lines in them. For starters, one could just put in the requirement that the picture be shrunk until you have a frame that just about anybody would agree is sharp like an ordinary quality photo when viewed 1:1. Ideally lots of frames like that, all over the photo.

Under these criteria a number of the large shots on gigapan fall short. (Though not as short as you think. The gigapan.org zoom viewer lets you zoom in well past 1:1, so even sharp images are blurry when zoomed in fully. On my own site I set maximum zoom at 200%.)

These requirements are quite strict. Some of my own photos would have to be shrunk to meet these tests, but I believe the test should be hard.

Blind man drives, sort of, with a robocar

A release from the National Federation of the Blind reports a blind person driving and avoiding obstacles on the Daytona speedway. They used a car from the TORC team at Virginia Tech, one of the competitors in the Darpa Grand Challenges. In effect, the blind driver replaced the “drive by wire” component of a robocar with a more intelligent and thinking human also able to feel acceleration and make some judgements. As the laser and other sensors in the car detected obstacles and turns, the computer sent audio and vibratory signals to the driver to turn, speed up or slow down.

While this demo is pretty simple, it was part of a larger project the NFB has to encourage computer and robotic technologies to let the blind do what the sighted can do. In my robocar roadmap I outlined a number of bodies who might promote and lobby for robocar technology, in particular the blind, so it’s good to see that step underway. They did it as well in 2009 with a simpler dune buggy.

This car did not use the fancy and expensive 64 line Velodyne LIDAR sensor that has become the norm on most other working robocars. The Virginia Tech team (Victor Tango) was the only one of the 6 teams to complete the Urban Challenge not to use that LIDAR. The car shown isn’t nearly as decorated with sensors as Victor Tango was, at least from looking at it visually, indicating good improvements in their system.

Another pedal-powered monorail: Skyride

Last year I wrote about an interesting but simple pedal powered monorail/PRT system called Shweeb which had won a prize/investment from Google. Recent announcements show they are not alone in this concept. Scott Olson, the original developer of the Rollerblade, has founded a company called Skyride Technologies to build their own version of a pedal powered suspended monorail.

You will find much that is similar between the two concepts, though they were developed independently. I will have to give Skyride the nod for picking the better name, though. Skyride offers both pedaling and a rowing-machine style interface, the latter aimed both at the disabled and those seeking a different kind of workout.

At present, the Skyride car is also open to the air, which has both advantages and disadvantages when it comes to cooling, drag, and exposure to the elements. Skyride also does not seem to offer the “bumper” system in the wheel cartridge which Shweeb claims will allow vehicles to safely hit one another and then push one another in trains.

Both are confined to prototype tracks for now, though the Shweeb one is an amusement ride that is open to the public. Both have plans to solve the most important problem in turning this into a real transportation system for campuses or urban areas, namely a switch that lets the vehicle smoothly and safely change tracks. Switching has always been an issue in monorails — not that it can’t be solved, but it’s just a little harder than changing lanes in a car. Rail systems sometimes put the switching in the track (that’s what regular heavy rail does) but that’s not very practical if you are going to have very frequent small vehicles. You want in-vehicle switching but with no risk of derailing.

While this concept is interesting, and even more fun if they can prove it works and then add some automation, I am not sure it will ever become a really big space. Still, having 2 companies will no doubt spur a bit more innovation.

Working on Robocars at Google

As readers of this blog surely know, for several years I have been designing, writing about, and forecasting how the technology of self-driving “robocars” will develop in the coming years. I’m pleased to announce that I have recently become a consultant to the robot car team working at Google.

Of course all that work will be done under NDA, and so until such time as Google makes more public announcements, I won’t be writing about what they or I are doing. I am very impressed by the team and their accomplishments, and to learn more I will point you to my blog post about their announcement and the article I added to my web site shortly after that announcement. It also means I probably won’t blog in any detail about certain areas of technology, in some cases not commenting on the work of other teams because of conflict of interest. However, as much as I enjoy writing and reporting on this technology, I would rather be building it.

I have been delivering my philosophical message about robocars for years, but it should be clear that I am simply consulting on the project, not setting its policies or acting as a spokesman.

My primary interest at Google is robocars, but many of you also know my long history in online civil rights and privacy, an area in which Google is often involved in both positive and negative ways. Indeed, while I was chairman of the EFF I felt there could be a conflict in working for a company which the EFF frequently has to either praise or criticise. I will be recusing myself from any EFF board decisions about Google, naturally.

My phone should know when I start a trip

Every day I get into my car and drive somewhere. My mobile phone has a lot of useful apps for travel, including maps with traffic and a lot more. And I am usually calling them up.

I believe that my phone should notice when I am driving off from somewhere, or about to, and automatically do some things for me. Of course, it could notice this if it ran the GPS all the time, but that’s expensive from a power standpoint, so there are other ways to identify this:

  • If the car has bluetooth, the phone usually associates with the car. That’s a dead giveaway, and can at least be a clue to start looking at the GPS.
  • Most of my haunts have wireless, and the phone associates with the wireless at my house and all the places I work. So it can notice when it disassociates and again start checking the GPS. To get smart, it might even notice the MAC addresses of wireless networks it can’t see inside the house, but which it does see outside or along my usual routes.
  • Of course moving out to the car involves jostling and walking in certain directions (it has a compass.)
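
As a minimal sketch of how those low-power signals might be fused before turning on the GPS (the MAC address, SSIDs and step-count threshold here are made-up placeholders, not a real phone API):

```python
# Cheap trip-start heuristic: only wake the GPS when the low-power signals agree.
# All identifiers and thresholds below are illustrative placeholders.
CAR_BLUETOOTH = "aa:bb:cc:dd:ee:ff"      # hypothetical MAC of the car's hands-free unit
KNOWN_WIFI = {"HomeNet", "OfficeNet"}    # SSIDs the phone normally sits on

def probably_starting_trip(connected_bt: set, visible_wifi: set, steps_last_minute: int) -> bool:
    if CAR_BLUETOOTH in connected_bt:
        return True                                   # paired with the car: a dead giveaway
    left_known_wifi = not (KNOWN_WIFI & visible_wifi) # dropped off all the usual networks
    walking = steps_last_minute > 40                  # jostling and walking toward the car
    return left_known_wifi and walking                # enough suspicion to start sampling GPS

print(probably_starting_trip(set(), {"CafeGuest"}, steps_last_minute=55))   # True
```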

Once it thinks it might be in the car, it should go to a mode where my “in the car” apps are easy to get to, in particular the live map of the location with the traffic displayed, or the screen for the nav system. Android has a “car mode” that tries to make it easy to access these apps, and it should enter that mode.

It should also now track me for a while to figure out which way I am going. Depending on which way I head and the time of day, it can probably guess which of my common routes I am going to take. For regular commuters, this should be a no-brainer. This is where I want it to be really smart: Instead of me having to call up the traffic, it should see that I am heading towards a given highway, and then check to see if there are traffic jams along my regular routes. If it sees one, then it should beep to signal that, and if I turn it on, I should see that traffic jam. This way if I don’t hear it beep, I can feel comfortable that there is light traffic along the route I am taking. (Or that if there is traffic, it’s not traffic I can avoid with alternate routes.)

This is the way I want location based apps to work. I don’t want to have to transmit my location constantly to the cloud, and have the cloud figure out what to do at any given location. That’s privacy invading and uses up power and bandwidth. Instead the phone should have a daemon that detects location “events” that have been programmed into it, and then triggers programs when those events occur. Events include entering and leaving my house or places I work, driving certain roads and so on.
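
Here is a minimal sketch of such an on-device event daemon; the trigger identifiers and handlers are hypothetical, the point being that matching happens locally against a downloaded list rather than by streaming my location to the cloud:

```python
from typing import Callable, Dict, List

# On-device event daemon: apps register triggers, and the daemon matches them
# locally against whatever the radios already see, so no location leaves the phone.
class LocationEventDaemon:
    def __init__(self):
        self.handlers: Dict[str, List[Callable[[], None]]] = {}

    def register(self, trigger: str, handler: Callable[[], None]) -> None:
        """Trigger is a locally stored identifier, e.g. a Wi-Fi BSSID, Bluetooth MAC or place id."""
        self.handlers.setdefault(trigger, []).append(handler)

    def observe(self, seen_identifiers: List[str]) -> None:
        """Called whenever the radios report what is currently nearby."""
        for ident in seen_identifiers:
            for handler in self.handlers.get(ident, []):
                handler()

daemon = LocationEventDaemon()
daemon.register("wifi:02:00:00:aa:bb:cc", lambda: print("home network event: switch profile"))
daemon.register("place:bad-reputation-store", lambda: print("warning: this store has a bad reputation"))
daemon.observe(["wifi:02:00:00:aa:bb:cc"])
```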

And yes, for tools like shopkick, they can even be entering stores I have registered. And as I blogged at the very beginning of this blog many years ago, we can even have an event for when we enter a store with a bad reputation. The phone can download a database of places and wireless and Bluetooth MACs that should trigger events, and as such the network doesn’t need to know my exact location to make things happen. But most importantly, I don’t want to have to know to ask if there is something important near me, I want the right important things to tell me when I get near them.

TVs should be universal, not remote controls

Like me, you probably have a dozen “universal” remote controls gathered over the years. With each new device and remote you go through a process to try to figure out special codes to enter into the remote to train it to operate your other devices. And it’s never very good, except perhaps in the expensive remotes with screens and macros.

The first universal remotes had to do this because they were made after the TVs and other devices, and had to control old ones. But the idea’s been around for decades, and I think we have it backwards. It’s not the remote that should work with any TV, it’s the TV that should work with any remote. I’m not even sure in most cases we need to have the remote come with the TV, though I know they like designing special magic buttons and layouts for each new remote.

It would be trivial for any TV or other device that displays video to figure out exactly what sort of remote you are pointing at it, and then figure out what to do with all its buttons. Since these devices now all have USB plugs and internet connections, they can even get their data updated. With the TV in a remote-setup mode (which you must of course reach with the few keys on the TV itself), a few button presses from any remote should let the TV figure out what it's seeing. If it can't tell the difference, it can ask on the screen for specific buttons until you see a picture of your remote on the screen and confirm.

If it can’t figure it out, it can still program the codes from any device by remembering. This would let it prompt you “push the button you want to change the channel” and you would push it and it would know. You could also tweak any remotes. But most people would see the very simple interface of “press these keys and we’ll figure out which you have.” Also makes it easy to have more than one device of the same type. But in particular makes it easy to not have so many “modes” where you have to tell the remote you want to control the TV now, then the satellite box, then the stereo, then the dvd player. Instead just tell the TV “ignore the buttons I am about to press” (for example the volume buttons) and tell the stereo to obey them. Or program a button to do different things on different devices — not a macro where a smart remote sends all the codes needed to tell the TV and stereo to switch inputs while turning on the DVD player, but just each box responding in its own way.

For outlying cases, you could tell the user to program their universal remote for some well-established old device. Every universal remote out there can control a Sony TV, for example. That ensures the TV will see a set of codes it already knows.

The TVs and other devices might as well recognize all the infrared keyboards out there while they are at it.

Of course, as TVs figure out how to do this, the remotes can change. They can become a bit more standardized, and instead of trying to figure everything out, they can be the dumb device and the AV equipment can be the smart device. It’s the AV equipment that has storage, a screen, audio and so much more.

You can also train devices to understand that different remotes belong to different people. For example, the adult remote can be different from the child's remote, and only the adult remote (which is kept private) can see the Playboy channel. The child's remote can also be limited to a number of hours of TV, as I first suggested six years ago at the birth of this blog.

You can even fix the annoying problem of most remote protocols: "on" and "off" are the same button. This makes it very hard to do things like macro control because you can't be sure what that code will do. You can have a "turn everything off" button that really works (I presume some of the remotes out there use hidden non-toggle codes when they can), or codes to do things like switch on the DVD player if it's not already on, switch the video and audio inputs to it, and start playing — something many systems have tried to do but rarely do well.
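As a tiny illustration of why discrete codes matter, here is what a state-safe "watch a DVD" activity looks like once every device accepts explicit power and input commands rather than a single power toggle. The device names and code strings are hypothetical.

```python
# Sketch of an activity built from discrete (non-toggle) codes; names are invented.
def watch_dvd(send):
    """send(device, code) transmits one discrete command to one device."""
    send("tv", "POWER_ON")        # safe to repeat: an already-on TV stays on
    send("tv", "INPUT_HDMI1")
    send("receiver", "POWER_ON")
    send("receiver", "INPUT_DVD")
    send("dvd", "POWER_ON")
    send("dvd", "PLAY")

# With toggle codes, the same sequence could turn an already-on TV off.
watch_dvd(lambda device, code: print(device, code))
```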

There are a few things to tweak to make sure "IR blasters" work properly. (These are IR outputs found on DVRs which send commands to cable and satellite boxes to change their channel and so on. They are a horrible kludge, and the best way to get rid of them is the new protocols that connect the devices over IP, or the new IP over HDMI 1.4, or failing that the badly-done Anynet.)

But the key point here is this: Remotes put the smarts in the wrong place.

Comparing electricity to a gallon of gasoline

The “burning” question for electric cars is how to compare them with gasoline. Last month I wrote about how wrong the EPA’s 99mpg number for the Nissan Leaf was, and I gave the 37mpg number you get from the Dept. of Energy’s methodology. More research shows the question is complex and messy.

So messy that the best solution is for electric cars to publish their efficiency in electric terms, which means a number like "watt-hours/mile." The EPA measured the Leaf at about 330 watt-hours/mile (or 0.33 kWh/mile if you prefer). For those who really prefer an mpg-style number, where higher is better, you could use miles/kWh.

Then you would get local power companies to publish local "kWh to gallon of gasoline" figures for the particular mix of power plants in that area. This also is not very easy, but it accounts for the local variation. The DoE or EPA could also come up with a national average kWh/gallon number, and car vendors could use that if they wanted, but frankly that national number is poor enough that most would not want to use it in above-average states like California. In addition, the number in other countries is much better than in the USA.
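To show how simple the math becomes once the car publishes watt-hours/mile and the conversion factor is published separately, here is a worked example. The 33.7 kWh of heat per gallon and the roughly 12.3 kWh "delivered from the grid per gallon" figures are illustrative round numbers, not official ones.

```python
# Worked mpg-equivalent example; conversion factors are illustrative.
LEAF_WH_PER_MILE = 330          # the EPA measurement quoted above

def mpg_equivalent(wh_per_mile, kwh_per_gallon):
    """Miles per 'gallon' of electricity, given a kWh-per-gallon conversion."""
    return kwh_per_gallon * 1000 / wh_per_mile

# Perfect-conversion style (all of the gallon's heat becomes electricity):
print(round(mpg_equivalent(LEAF_WH_PER_MILE, 33.7)))   # ~100 mpg-e(perfect)

# Heat-energy style (only ~1/3 of the heat survives generation + transmission):
print(round(mpg_equivalent(LEAF_WH_PER_MILE, 12.3)))   # ~37 mpg-e(heat)
```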

The local mix varies a lot. Nationally it’s about 50% coal, 20% gas, 20% nuclear and 10% hydro with a smattering of other renewables. In some places, like Utah, New Mexico and many midwestern areas, it is 90% or more coal (which is bad.) In California, there is almost no coal — it’s mostly natural gas, with some nuclear, particularly in the south, and some hydro. In the Pacific Northwest, there is a dominance by hydro and electricity has far fewer emissions. (In TX, IL and NY, you can choose greener electricity providers which seems an obvious choice for the electric-car buyer.)

Understanding the local mix is a start, but there is more complexity. Let's look at some of the different methods, starting with an executive summary for the 330 wh/mile Nissan Leaf and the national average grid:  read more »

  • Theoretical perfect conversion (EPA method): 99 mpg-e(perfect)
  • Heat energy formula (DoE national average): 37 mpg-e(heat)
  • Cost of electricity vs. gasoline (untaxed): 75 mpg-e($)
  • Pollution, notably PM2.5 particulates: Hard to calculate, could be very poor. Hydrocarbons and CO: very good.
  • Greenhouse Gas emissions, g CO2 equivalent: 60 mpg-e(CO2)

Designing a better, faster, secure, vastly cheaper airport with proto-robocars

Like just about everybody, I hate the way travel through airports has become. Airports get slower and bigger and more expensive, and for short-haul flights you can easily spend more time on the ground at airports than you do in the air. Security rules are a large part of the cause, but not all of it.

In this completely rewritten essay, I outline the design of a super-cheap airport with very few buildings, based on a fleet of proto-robocars. I call them "proto" because these are cars we know how to build today, which navigate on prepared courses on pavement, in controlled situations and without civilian cars to worry about.

In this robocar airport, which I describe first in a narrative and then in detail, there are no terminal buildings or gates. Each plane just parks on the tarmac and robotic stairs and ramps move up and dock to all its doors. (Catering trucks, fuel trucks and luggage robots also arrive.) The passengers arrive in a perfect boarding order in robocars that dock at the ramps/steps to let them get on the plane through every entrance. Luggage is handled by different robots, and is checked and picked up not in carousels and check-in desks, but at curbs, parking lots, rental car centers and airport hotels.

The change is so dramatic that (even with security issues) people could arrive at airports for flights under 20 minutes before take-off, and get out even faster. Checked luggage would add time, but not much. I also believe you could build a high capacity airport for a tiny fraction of the cost of today’s modern multi-billion dollar edifices. I believe the overall experience would also be more pleasant and more productive for all.

This essay is a long one, but I am interested in feedback. What will work here, and what won’t? Would you love to fly through this airport or hate it? This is an airport designed not to give you a glorious building in which to wait but to get you through it without waiting most of the time.

The airport gets even better when real robocars, that can drive on the streets to the airport, come on the scene.

Give me your feedback on The Robocar Airport.

Key elements of the design include:  read more »

Where will 3-D cameras like Kinect lead?

This year, I bought Microsoft Kinect cameras for the nephews and niece. At first they will mostly play energetic Xbox games with them, but my hope is they will start to play with the things coming from the Kinect hacking community — the videos of the top hacks are quite interesting. At first, MS wanted to lock down the Kinect and threatened the open source developers who reverse engineered the protocol and released drivers. Now Microsoft has official open drivers.

This camera produces a VGA colour video image combined with a Z (depth) value for each pixel. This makes it trivial to isolate objects in the view (like people and their hands and faces), and splitting foreground from background is easy. The camera is $150 today (when even a simple one-line LIDAR cost a fortune not long ago) and no doubt cameras like it will be cheap $30 consumer items in a few years' time. As I understand it, the Kinect works using a mixture of triangulation — the sensor being in a different place from the emitter — combined with structured light (sending out arrays of dots and seeing how they are bent by the objects they hit). An earlier report that it used time-of-flight is disputed, and the simpler approach implies it will get cheaper fast. Right now it doesn't do close up or very distant, however. While projection takes power, meaning it won't be available full time in mobile devices, it could still show up eventually in phones for short-duration 3-D measurement.
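As a quick illustration of why per-pixel depth makes segmentation so easy, here is a sketch that blanks everything more than about 1.5 metres away. The frame here is simulated; a real one would come from a Kinect driver such as libfreenect.

```python
# Depth-threshold foreground/background split on a simulated Kinect-style frame.
import numpy as np

def split_foreground(depth_mm, rgb, max_depth_mm=1500):
    """Return the colour image with everything beyond max_depth_mm blanked."""
    mask = (depth_mm > 0) & (depth_mm < max_depth_mm)   # 0 = no depth reading
    out = rgb.copy()
    out[~mask] = 0
    return out

# Simulated 480x640 frame: a person ~1.2 m away in front of a wall ~3 m away.
depth = np.full((480, 640), 3000, dtype=np.uint16)
depth[100:400, 200:440] = 1200
rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)

foreground_only = split_foreground(depth, rgb)   # instant "green screen"
```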

I agree with those that think that something big is coming from this. Obviously in games, but also perhaps in these other areas.

Gestural interfaces and the car

While people have already made “Minority Report” interfaces with the Kinect, studies show these are not very good for desktop computer use — your arms get tired and are not super accurate. They are good for places where your interaction with the computer will be short, or where using a keyboard is not practical.

One place that might make sense is in the car, at least before the robocar. Fiddling with the secondary controls in a car (such as the radio, phone, climate system or navigation) is always a pain and you’re really not supposed to look at your hands as you hunt for the buttons. But taking one hand off the wheel is OK. This can work as long as you don’t have to look at a screen for visual feedback, which is often the case with navigation systems. Feedback could come by audio or a heads up display. Speech is also popular here but it could be combined with gestures.

A Gestural interface for the TV could also be nice — a remote control you can’t ever misplace. It would be easy to remember gestures for basic functions like volume and channel change and arrow keys (or mouse) in menus. More complex functions (like naming shows etc.) are best left to speech. Again speech and gestures should be combined in many cases, particularly when you have a risk that an accidental gesture or sound could issue a command you don’t like.

I also expect gestures to possibly control what I am calling the “4th screen” — namely an always-on wall display computer. (The first 3 screens are Computer, TV and mobile.) I expect most homes to eventually have a display that constantly shows useful information (as well as digital photos and TV) and you need a quick and unambiguous way to control it. Swiping is easy with gesture control so being able to just swipe between various screens (Time/weather, transit arrivals, traffic, pending emails, headlines) might be nice. Again in all cases the trick is not being fooled by accidental gestures while still making the gestures simple and easy.
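Here is a toy sketch of recognizing that "swipe to the next screen" gesture from a stream of tracked hand positions (say, the hand joint of a Kinect skeleton). The thresholds are arbitrary, but they illustrate the point: a deliberate swipe is fast and mostly horizontal, which helps reject accidental movements.

```python
# Toy swipe detector over tracked hand positions; thresholds are arbitrary.
def detect_swipe(samples, min_dx=0.40, max_dy=0.15, max_duration_s=0.6):
    """samples: list of (t_seconds, x_metres, y_metres) for one hand."""
    if len(samples) < 2:
        return None
    t0, x0, y0 = samples[0]
    t1, x1, y1 = samples[-1]
    if t1 - t0 > max_duration_s or abs(y1 - y0) > max_dy:
        return None                     # too slow or too vertical: ignore
    if x1 - x0 > min_dx:
        return "swipe_right"
    if x0 - x1 > min_dx:
        return "swipe_left"
    return None

print(detect_swipe([(0.0, 0.10, 1.00), (0.2, 0.35, 1.02), (0.4, 0.60, 1.01)]))
# -> "swipe_right": advance the wall display to the next screen
```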

In other areas of the car, things like assisted or automated parking, though not that hard to do today, become easier and cheaper.

Small scale robotics

I expect an explosion in hobby and home robotics based on these cameras. Forget about Roombas that bump into walls; finally, cheap robots will be able to see. They may not identify what they see precisely, though the 3-D will help, but they won't miss objects and will have a much easier time doing things like picking them up or avoiding them. LIDARs have been common in expensive robots for some time, but having cheap depth sensing will generate new consumer applications.

Mobile

There will be some gestural controls for phones, particularly when they are used in cars. I expect things to be more limited here, with big apps to come in games. However, history shows that most of the new sensors added to mobile devices cause an explosion of innovation so there will be plenty not yet thought of. 3-D maps of areas (particularly when range is longer which requires power) can also be used as a means of very accurate position detection. The static objects of a space are often unique and let you figure out where you are to high precision — this is how the Google robocars drive.

Security & facial recognition

3-D will probably become the norm in the security camera business. It also helps with facial recognition in many ways (both by isolating the face and allowing its shape to play a role) and with recognition of other things like gait, body shape and animals. Face recognition might become common at ATMs or security doors, and be used when logging onto a computer. It also makes "presence" detection reliable, allowing computers to see how and where people are in a room and even a bit of what they are doing, without having to do object recognition. (Though as the Kinect hacks demonstrate, these cameras help object recognition as well.)

Face recognition is still error-prone of course, so its security uses will be limited at first, but it will get better at telling people apart.

Virtual worlds & video calls

While some might view this as gaming, we should also see these cameras heavily used in augmented reality and virtual world applications. It makes it easy to insert virtual objects into a view of the physical world and have a good sense of what’s in front and what’s behind. In video calling, the ability to tell the person from the background allows better compression, as well as blanking of the background for privacy. Effectively you get a “green screen” without the need for a green screen.

You can also do cool 3-D effects by getting an easy and cheap measurement of where the viewer's head is. Moving a 3-D viewpoint in a generated or semi-generated world as the viewer moves her head creates a fun 3-D effect without glasses, and now it will be cheap. (It only works for one viewer, though.) Likewise, in video calls you can drop the other party into a different background and have them move within it in 3-D.
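The head-tracking trick is almost embarrassingly simple once the camera hands you the head position: treat the screen as a window and move the rendered scene's camera with the viewer's head. A sketch, with an arbitrary scale factor:

```python
# Head-tracked parallax sketch: shift the virtual camera with the viewer's head.
def virtual_camera_offset(head_x_m, head_y_m, scale=1.0):
    """Map the tracked head position (relative to the screen centre) to a
    camera translation, so the scene appears to sit behind the screen."""
    return (head_x_m * scale, head_y_m * scale)

# Viewer leans 20 cm to the right and 5 cm up:
print(virtual_camera_offset(0.20, 0.05))
# -> (0.2, 0.05): render from that shifted viewpoint so the viewer can
#    "look around" objects in the virtual scene
```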

With multiple cameras it is also possible to build a more complete 3-D model of an entire scene, with textures to paint on it. Any natural scene can suddenly become something you can fly around.

Amateur video production

Some of the above effects are already showing up on YouTube. Soon everybody will be able to do it. The Kinect's firmware already does "skeleton" detection, mapping out the positions of the limbs of a person in the view of the camera. That's good for games, but it also allows motion capture for animation on the cheap. It also allows interesting live effects, like distorting the body or making light sabres glow. Expect people in their own homes to be making their own Avatar-like movies, at least on a smaller scale.

These cameras will become so popular we may need to start worrying about interference by their structured light. These are apps I thought of in just a few minutes. I am sure there will be tons more. If you have something cool to imagine, put it in the comments.

Happy Seasons to all! and a Merry New Year.