Rise of the selfie drones. Is tethered a good idea?

At CES, there were a couple of “selfie drones.” The Nixie is designed to be worn on your wrist, taken off, thrown, and then it returns to you after taking a photo or video. There was also the Zano which is fancier and claims it will follow you around, tracking you as you mountain bike or ski to make a video of you just as you do your cool trick.

The selfie is everywhere. In Rome, literally hundreds of vendors tried to sell me selfie sticks in all the major tourist areas, even with a fat Canon DSLR hanging from my neck. It’s become the most common street vendor gadget. (The blue LED wind up helicopters were driving me nuts anyway.)

I had also been thinking about this, and came up with a design that’s not as capable as these, but might be better. My selfie drone would be tethered. You would put down the base, which would have the batteries and a retractable cord. Up would fly the camera drone, which would track your phone to get a great shot of you. (If it were for me, it would also offer a panorama mode where it spun around at the top shooting a pano, with you or without you.)

This drone could not follow you as you do a sport, of course, or get above a certain height. But unlike the free-flying designs, it would not get lost over the cliff in the winds, as I think might happen to a number of these free selfie drones. It turns out that cliffs and lookout points are common places to want to take these photos; they are exactly where you need a high view to capture you and what’s below you.

Secondly, with the battery on the ground, and only a short tether wire needed, you can have a much better camera as payload. Only needing a short flight time and not needing to carry the batteries means more capabilities for the drone.

It’s also less dangerous, and is unlikely to come under regulation because it physically can’t fly beyond a certain altitude or distance from the base. It could not shoot you from water or from over the edge of the cliff as the other drones could if you were willing to risk them.

My variation would probably be a niche. Most selfies are there to show off where you were, not to be top quality photos. Only more serious photographers would want one capable of hauling up a quality lens. Because mine probably wants a motor in the base to reel it back in (so you don’t have to wind the cables) it might even cost more, not less.

The pano mode would be very useful. In so many pano spots, the view is fantastic but is blocked by bushes and trees, and the spectacular pano shot is only available if you go up enough. For daytime, a tethered drone would probably do fine. I’m still waiting on the Panono — a camera-studded ball from Berlin that was funded on Kickstarter. You throw the ball up, and it figures out when it is at the top of its flight and shoots the panorama all at once. Something like that could also be carried by a tethered drone, and it has the advantage of not moving between shots, as a spinning drone would be at risk of doing.

This is another thing I’ve wanted for a while. After my first experiments in airplane and helicopter based panoramas showed you really want to shoot everything all at once, I imagined decent digital cameras getting cheap enough to buy 16 of them and put them in a circle. Sadly, once cameras got that cheap, there were always better cameras that I had decided I needed which were too expensive to buy for that purpose.

An instant online debate for everybody (“YouTube” debate)

In continuation of my series on fixing politics I would like to address the issue of debates. Not just presidential debates, but all levels.

The big debates are a strange animal. You need to get the candidates to agree to come, and so a big negotiation takes place which inherently waters down the debate. Usually only the two major candidates appear in Presidential debates, and they put in rules that stop the candidates from actually debating one another. Most debates outside the big ones get little attention, and they are a lot of work.

I propose the creation, on an online video site — YouTube is an obvious choice but it need not be there — of a suite of tools to allow the creation of a special online video debate. Anybody, in any race, could create a debate using these tools, and do it easily.

To run a debate, some group with some reputation — press, or even election officials — would use the system to create a new debate. They would then gather some initial questions, and invite candidates — usually all candidates in the race, there being no reason to exclude anybody (as you’ll see below.) The initial questions could be in video, coming from press or voters as desired.

The first round of questions would be released to the candidates. They would then be able to record video answers to those questions, in addition to opening statements. They could record answers of any length, or even record answers of multiple lengths, or answers with logical stopping points marked at different lengths. They could also write written answers or record just audio, which is much less work.

After this, candidates could look at what the other candidates said, and then record responses, again in varying lengths if they like. They could then record responses to the responses, and so on. They could record a response to a specific candidate’s statements, or a response applying to more than one, as they wish.

The system could also let candidates ask questions of other candidates, and those candidates could elect to answer or not answer. They could also agree in advance that they will trade answers, ie. “I will answer one of yours if you will answer one of mine.”

This process would create a series of videos, and we then get to the next part of the tool, which would allow the voter to program what sort of debate they want.

For example, a voter could say:

  • I want a debate between the Republican and Democrat, initial answers limited to around 2 minutes, follow-ups to one minute, up to 2 each.
  • I want a debate between the Republican, Democrat and Libertarian, with follow-ups and videos until I hit “next”
  • I want a debate between all candidates on Climate Change (or any other issue that’s been put in the debate)
  • I want a debate on foreign policy among the top candidates as ranked by feedback scores/Greenpeace/etc.

The voter could have exactly the debate they wanted, and candidates could go back and forth rebutting one another as long as they wanted. Candidates would be able to get statistics on the length of answers that voters are looking for, and know how long a response to give. Typically they would do one short and one long, but they could also make a long response that is structured so it can be stopped reasonably at several different points when the voter gets bored.
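
To make the idea concrete, here is a minimal sketch (in Python, with hypothetical field names) of how a voter’s “debate program” might be applied to the pool of recorded answers. It assumes each answer was uploaded in one or more cuts of different lengths, as described above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Clip:
    candidate: str                        # e.g. "Republican", "Green"
    topic: str                            # question or issue tag, e.g. "Climate Change"
    seconds: int                          # length of this particular cut of the answer
    responding_to: Optional[str] = None   # None for an initial answer, else the candidate rebutted

@dataclass
class DebateSpec:
    candidates: List[str]         # which candidates the voter wants to hear
    topics: Optional[List[str]]   # None means every topic in the debate
    max_initial: int = 120        # ~2 minutes for initial answers
    max_followup: int = 60        # ~1 minute for follow-ups and rebuttals

def build_playlist(clips: List[Clip], spec: DebateSpec) -> List[Clip]:
    """For each candidate/topic/response slot, keep the longest recorded cut
    that still fits the voter's length limits."""
    best = {}
    for clip in clips:
        if clip.candidate not in spec.candidates:
            continue
        if spec.topics and clip.topic not in spec.topics:
            continue
        limit = spec.max_initial if clip.responding_to is None else spec.max_followup
        if clip.seconds > limit:
            continue
        key = (clip.candidate, clip.topic, clip.responding_to)
        if key not in best or clip.seconds > best[key].seconds:
            best[key] = clip
    return list(best.values())
```

The first bullet above would then just be `DebateSpec(candidates=["Republican", "Democrat"], topics=None, max_initial=120, max_followup=60)`.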

Sure, the Republican might decide not to respond to the Green Party candidate’s view on Climate Change. If the viewer asked for a Republican-Green debate, the system would just say “the candidate offered no response.” Voters who wanted could even accept seeing material from other voters.

Candidates would inevitably repeat themselves across answers, so software would convert the answers to text (or campaigns would provide the captions) and the system could automatically skip things you’ve already seen, quickly popping up the text for a few seconds instead. If desired, campaign workers could spend a fair bit of time tuning just what to show based on the history of the viewer’s watching.

For the Presidential debates, building a well-crafted set of videos would take time, but probably less time than the immense prep and rehearsal they do for those debates. On the other hand, they get to do multiple takes, so they don’t need to rehearse, just say it until it feels right. It does mean you don’t get to see the candidate under pressure — there is no Rick Perry saying he will close 3 agencies and only being able to name 2. As such it may not substitute fully for that, but it would allow a low-effort debate at every level of contest, and bring the candidates in front of more voters.

Is Apple building a robocar? Maybe, maybe not

There is great buzz about some sensor-laden vehicles being driven around the USA which have been discovered to be owned by Apple. The vehicles have cameras and LIDARs and GPS antennas, and many are wondering: is this an Apple self-driving car? See also the speculation from Cult of Mac.

Here’s a video of the vehicle driving around the East Bay (50 miles from Cupertino) but they have also been seen in New York.

We don’t see the front of the vehicle, but it sure has plenty of sensors. On the front and back you see two Velodyne 32E LIDARs. These are 32-plane LIDARs that cost about $30K. You see two GPS antennas and what appear to be cameras in all directions. You don’t see the front in these pictures, which is where the most interesting sensors will be.

So is this a robocar, or is this a fancy mapping car? Rumours about Apple working on a car have been swirling for a while, but one thing to contradict that has been the absence of sightings of cars like this. You can’t have an active program without testing on the roads. There are ways to hide LIDARS (and Apple is super secretive so they might) and even cameras to a degree, but this vehicle hides little.

Most curious are the Velodynes. They are tilted down significantly. The 32E unit sees from about 10 degrees up to 30 degrees down. Tilting them this much means you don’t see out horizontally, which is not at all what you want if this is for a self-driving car. These LIDARs are densely scanning the road close around the car, and higher things in the opposite direction. The rear LIDAR will be seeing out horizontally, but it’s placed just where you wouldn’t place it to see what’s in front of you. A GPS antenna is blocking the direct forward view, so if the goal of the rear LIDAR is to see ahead, it makes no sense.
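
To see why the tilt matters, here is a rough back-of-the-envelope sketch. The mount height and the tilt angle are illustrative guesses (only the roughly +10 to -30 degree vertical field of the 32E comes from the spec); the point is just how quickly a tilted unit loses the horizon.

```python
import math

def highest_visible_point(distance_m, mount_height_m=1.8, top_beam_deg=10.7, tilt_down_deg=20.0):
    """Height (in metres above the road) of the highest point a tilted Velodyne 32E
    can see at a given distance ahead.  Mount height and tilt are illustrative guesses."""
    effective_top = math.radians(top_beam_deg - tilt_down_deg)   # e.g. 10.7 - 20 = -9.3 degrees
    return mount_height_m + distance_m * math.tan(effective_top)

for d in (5, 10, 20, 40):
    print(f"{d:>3} m ahead: sees up to {highest_visible_point(d):.1f} m above the road")
# With a ~20 degree downward tilt the unit sees nothing above sensor height even a few
# metres out, and only the road surface beyond roughly 11 m: fine for scanning pavement,
# but useless for spotting a car or pedestrian ahead.
```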

We don’t see the front, so there might be another LIDAR up there, along with radars (often hidden in the grille) and these would be pretty important for any research car.

For mapping, these strange angles and blind spots are not an issue. You are trying to build a 3D and visible light scan of the world. What you don’t see from one point you get from another. For street mapping, what’s directly in front and behind are generally road and not interesting, but what’s to the side is really interesting.

Also on the car is an accurate encoder on the wheel to give improved odometry. Both robocars and mapping cars are interested in precise position information.

Arguments this is a robocar:

  • The Velodynes are expensive, high end and more than you need for mapping, though if cost is no object, they are a decent choice.
  • Apple knows it’s being watched, and might try to make their robocar look like a mapping car
  • There are other sensors we can’t see

Arguments it’s a mapping car:

  • As noted, the Velodynes are tilted in a way that really suggests mapping. (Ford uses tilted ones but paired with horizontal ones.)
  • The cameras are aimed at the corners, not forward as you would want
  • They are driving in remote locations, which eventually you want to do, but initially you are more likely to get to the first stage close to home. Google has not done serious testing outside the Bay Area in spite of their large project.
  • The lack of streetview is a major advantage Google has over Apple, so it is not surprising they might make their own.

I can’t make a firm conclusion, but this leans toward it being a mapping car. Seeing the front (which I am sure will happen soon) will tell us more.

Another option is it could be a mapping car building advanced maps for a different, secret, self-driving car.

Would Bitcoin fall off a cliff if it dropped to $100 or $150?

Bitcoin’s been on a long decline over the past year, and today is around $220 per coin. The value has always been based on speculation about Bitcoin’s future value, not its present value, so it’s been very hard to predict and investment in the coins has been risky.

Some thinking led me to a scary conclusion. Recent news has revealed that a number of “cloud mining” companies have shut down after the price drop. Let me explain why.

Over time, essentially all bitcoin mining has moved to specialized ASIC hardware. The hardware is priced so that you can make a decent but not ridiculous profit with it. Most of the bitcoins mined go into paying for mining hardware and electricity — much less goes into profit for the miners. In the past, the electricity was the big cost, but mining hardware got fast enough and expensive enough that most of the cost of mining has been paying off your mining hardware, with electricity dropping to being 20% or less of the cost.

In other words, most of the 3600 btc/day mining revenues of the bitcoin system have been going into the people making mining chips and rigs, but that’s another story.

With the drop in price, electricity is back up to being half your cost. That puts a squeeze on the cost of mining equipment. With cloud mining, as with Amazon Web Services, you rented mining equipment and power by the hour. People who bought their mining equipment will still run it as long as the revenue is more than the operating cost. For cloud mining, you need the revenue to exceed the operating and capital cost, because the capital costs are amortized into the operating cost. While cloud mining companies could cut their fees to cut their losses, some have instead just left the business. As noted, those who bought mining equipment are running it now at less profit, but as long as the mining brings in more than the electricity cost, it’s still worth running — the mining gear is all paid for, and even though you will never make back your money, it’s worse if you shut it off.

You can get a good analysis of the cost and profitability of mining rigs at this mining calculator.
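
For a sense of the arithmetic such a calculator does, here is a small sketch. The rig specs are hypothetical and the difficulty figure is only approximately that of early 2015; the expected-reward formula (hashes per day divided by difficulty times 2^32, times the block reward) is the standard one.

```python
def daily_mining_profit(hashrate_ghs, power_watts, btc_price_usd,
                        difficulty, block_reward_btc=25.0,
                        electricity_usd_per_kwh=0.10):
    """Rough daily profit of a mining rig: expected block reward share minus power cost."""
    hashes_per_day = hashrate_ghs * 1e9 * 86400
    btc_per_day = hashes_per_day / (difficulty * 2**32) * block_reward_btc
    revenue = btc_per_day * btc_price_usd
    power_cost = power_watts / 1000 * 24 * electricity_usd_per_kwh
    return revenue - power_cost, revenue, power_cost

# Example: a hypothetical 1000 GH/s rig drawing 600 W, difficulty ~44.5 billion (early 2015):
for price in (220, 150, 100):
    margin, rev, cost = daily_mining_profit(1000, 600, price, 44.5e9)
    print(f"${price}/BTC: revenue ${rev:.2f}/day, electricity ${cost:.2f}/day, margin ${margin:.2f}/day")
# The margin is positive at $220 and $150 but goes negative at $100, which is the squeeze
# described above.
```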

What if a panic dropped a bitcoin under $100?

It’s not out of the question that a sudden panic might drop Bitcoin quickly down to $100. It probably won’t happen, but it certainly could. At that point, with current generation mining equipment, most miners would see their revenue drop below the cost of electricity. If they are rational and strictly profit-oriented, they cry into their beer and turn off the mining rig. The cloud miners have effectively already done that, some other miners have done the same sooner than they expected, and the network hashrate (the measure of how much mining power there is) has had minor sustained drops for the first time in years.

(It’s worse than this. Even at $150, all but the most recent mining rigs become unprofitable to keep turned on, so a major shutdown would happen with a much smaller price drop. New mining equipment expected to ship in the next few months is profitable at even lower prices, though.)

The way Bitcoin works, when they turn off the rig, it doesn’t mean more coins for the other miners. Bitcoin sets the reward rate with a “difficulty” number that makes the Bitcoin lottery problem harder the more mining capacity is out there. Your reward rate is a strict function of the difficulty and the power of your miners.

Every 2016 blocks, the difficulty adjusts based on how much capacity seems to be mining. Under normal operations, 2016 blocks is two weeks, as long as people are mining at the rate seen in the 2 weeks prior to setting the current difficulty. If large volumes of miners shut off their rigs as non-productive, the mining rate would crash. The wait for a new difficulty could be not just two weeks if this happened at the wrong time, but 4 weeks if half the miners shut down, or 8 weeks if 3/4 of them left. In terms of the Bitcoin world, it’s effectively forever, and long before that, confidence in the coin price would probably drop further, causing more miners to shut off their rigs. Only dedicated fans willing to lose money to preserve the system would keep mining.
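
The arithmetic behind those figures is simple; here is a short sketch of it.

```python
def days_to_next_retarget(blocks_since_retarget, hashrate_fraction_remaining):
    """Days until the next difficulty adjustment if a fraction of the hashrate suddenly
    shuts off.  Difficulty only adjusts every 2016 blocks, and with less hashrate each
    block takes proportionally longer than the intended 10 minutes."""
    blocks_left = 2016 - blocks_since_retarget
    minutes_per_block = 10 / hashrate_fraction_remaining
    return blocks_left * minutes_per_block / (60 * 24)

# Worst case: the shutdown happens right after a retarget (all 2016 blocks still to go).
for fraction in (1.0, 0.5, 0.25):
    print(f"{int(fraction*100)}% of hashrate left: "
          f"{days_to_next_retarget(0, fraction):.0f} days to the next adjustment")
# Prints 14, 28 and 56 days, matching the 2, 4 and 8 week figures above.
```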

In such a panic, the Bitcoin Foundation and others might propose an emergency modification of the Bitcoin software base that can perform an emergency reduction of the difficulty number. Alternately they could propose bumping the mining reward back to 50 coins instead of 25. This would still take days, which I think is too long. But if they did, it’s a sticky issue. As soon as you drop the difficulty enough, all those miners come back online, and now the difficulty is too low. To do it right, an estimate would have to be made of how much mining capacity is cost effective, and the difficulty set so that only some of the miners come back online, a number tied to that difficulty. For example, one might look at the various mining rigs out there, and set the difficulty such that some are (barely) profitable while others are not. Problem is, the profitability depends on the price of a bitcoin, which will be wildly fluctuating. It’s not clear how to solve this.

If the electricity cost exceeds the reward, but you still want bitcoins for future investment, the rational thing is not to mine, but to just buy bitcoins on the exchanges and keep the price up.

What would happen after such a collapse? Could it be stopped?

The collapse would probably spread to altcoins, but some might survive and become successors to Bitcoin. In addition, there are many people devoted to Bitcoin who would continue to mine, even at a loss, to get it back on its feet. After all, in the early years of Bitcoin, all mining was at a loss, though it turned into a huge bonanza later and was a wise idea in hindsight. With the large number of well funded companies in the space, we could see companies willing to maintain unprofitable mining for some time if the alternative is the destruction of the thing they’ve based their business on. They might even buy up the rigs of failed miners, or pay them to mine. Perhaps, if they are ready, they could heed the warning in this message and make contracts with enough miners to say, “we’ll pay you to keep mining if a collapse happens.”

Alternately, Bitcoin users and boosters could just start deliberately leaving large transaction fees in their transactions to make the cost of mining worthwhile again. While hard to sustain long term, it is in their interest to spend their bitcoins to keep the mining system going, since those coins would probably drop immensely in value if it falls down. It also keeps faith in the mining system, since if the coin owners ran the miners themselves, they might corrupt the network with that much power. It should be noted that it’s always been part of the plan for Bitcoin that higher transaction fees would arise as the coinbase rewards dropped, but not this early, and in response to the reward dropping in BTC, not in dollars.

The subsidy would have to be enough to overcome losses and provide a modest or even very small profit. The network pays out 3600 bitcoins/day in mining rewards (or $360K at $100/bitcoin.) The subsidy might be more in the range of $50K or $100K per day — affordable enough to keep the network alive for the up to 14 days needed to reach the next difficulty adjustment.

Another idea would be to develop a way to make the difficulty more dynamic, or provide some mechanism for an emergency reduction. (An emergency increase would mean something was really wrong and would probably also mean somebody had more than half the mining capacity, another must-not-happen.)

What sort of events could cause such a huge drop, to 45% of the current value? That’s not been seen in a short time, but a big political event, such as a suggestion the USA or EU might forbid or impede Bitcoin, could do it. But there are many other things that can cause panic. A shutdown of exchanges (a common technique in stock market panics) would probably do little, as there are exchanges all over the world and not all of them will shut down. A call to miners to sacrifice might work, at least for a while, to allow time to fix the problem.

Latent mining capacity

Mining rigs are shut down all the time as non-profitable, but in the past that’s always been because newer, better rigs were out there dominating the mining space and pushing up the difficulty. It would be a new idea to have rigs shut down because the dollar price dropped. When such rigs shut down, they would not be permanently useless, and unless torn down, they would be able to restart at any time. For example, if the difficulty dropped (because they all shut down) they would all start running again, and blocks would come out faster than intended. Then, 2016 blocks later, the difficulty would be recalculated up again — and they would stop again. Miners would also start and stop based on the day’s price as well, and the price might even swing around the expected rises and drops in difficulty. This seems like it would be chaos.

Once the electricity cost dominates, the important metric in mining equipment is not gigahashes/second, but gigahashes per joule. At 10 cents/kwh, you need around 2 gigahashes/joule to beat the electricity cost with $100 bitcoins and today’s difficulty number. At today’s $220 bitcoins, 0.9 gigahash/joule will do. Most miners are under 2, but there are some that do close to 3, and there is the promise of 5. If the trends in the rest of computing are an indicator, operations per joule will eventually level off, even as transistor counts continue to increase. If that happens we will stop seeing big increases in mining power and the upward spiral would end.
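
The break-even efficiency quoted above follows from the same standard reward formula; here is a sketch of the calculation, with the difficulty again approximated at its early-2015 level.

```python
def breakeven_gh_per_joule(btc_price_usd, difficulty,
                           block_reward_btc=25.0, electricity_usd_per_kwh=0.10):
    """Minimum mining efficiency (GH per joule) at which revenue covers electricity.
    Revenue per hash = block_reward * price / (difficulty * 2**32);
    electricity cost per joule = ($/kWh) / 3.6e6."""
    usd_per_hash = block_reward_btc * btc_price_usd / (difficulty * 2**32)
    usd_per_joule = electricity_usd_per_kwh / 3.6e6
    return (usd_per_joule / usd_per_hash) / 1e9

# With difficulty around 44.5 billion (approximate early-2015 value):
for price in (220, 100):
    print(f"${price}/BTC: need about {breakeven_gh_per_joule(price, 44.5e9):.1f} GH/J to pay for the power")
# Roughly 0.9 GH/J at $220 and 2.1 GH/J at $100, the figures given above.
```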

Uber and Google are not breaking up quite yet

After yesterday’s story about Uber and CMU, a lot of speculation has flown that Uber will now be at odds with Google, both about building robocars and also on providing network taxi service, since another rumour said Google plans to launch an Uber-like “ride share” service.

Since then, the Uber blog post and this interview with Uber folks tell a slightly different story. Uber is funding a research center at CMU, and giving lots of grants to academics. Details are not fully available, but typically this means being at an early research stage. With these research labs, academics are keen to publish all they do, so little gets done in secret. In many cases the sponsor gets a licence to the technology but it’s often not exclusive. If Uber wanted to build their own car, chances are they would do it in a more private lab.

Rumours that David Drummond would resign from the Uber board also have not panned out. Google has invested hugely in Uber (already a good return at the present valuation) and Google Maps offers you an Uber if you ask it for directions somewhere — it’s actually one of the easier interfaces for ordering one.

Rumours around Google’s efforts suggest that Big G has been testing a “ride share” app with employees and plans to launch it. Google has denied that, and says it loves Uber and Lyft. Further news revealed the rumours were about an internal carpooling system, not involving the self-driving cars. I could imagine confusion because Uber and others call themselves “ride sharing,” which is a bit of a fabrication to not look like a taxi, while a carpooling app would be real ride sharing. (UberPool is real ride sharing.) Google, which has a terrible undersupply of parking, is very keen on getting employees to ride its bus system and to carpool.

That said, Google has talked about the same thing I talk about — the true goal of robocar technology being the creation of a mobility on demand taxi service, like Uber but at a much lower cost. Google has not said that they would provide that themselves, or who they would partner with if they did it. Most people have presumed it might be Uber but I don’t think that’s at all assured.

At the same time, Uber has assured its drivers they are not going away for the foreseeable future. I suspect that’s an equivocation, and just means that we can’t see very far in the future right now!

Will robocars use V2V at all?

I commonly see statements from connected car advocates that vehicle to vehicle (V2V) and vehicle to infrastructure communications are an important, even essential technology for robocar development. Readers of this blog will know I disagree strongly, and while I think I2V will be important (done primarily over the existing mobile data network) I suspect that V2V is only barely useful, with minimal value cases that have a hard time justifying its cost.

Of late, though, my forecast for V2V grows even more dismal, because I wonder if robocars will implement V2V with human-driven cars at all, even if it becomes common for ordinary cars to have the technology because of a legal mandate.

The problem is security. A robocar is a very dangerous machine. Compromised, it can cause a lot of damage, even death. As such, security will have a very strong focus in development. You don’t want anybody breaking into the computer systems of your car or anybody else’s. You really don’t want it.

One clear fact that people in security know — a very large fraction of computer security breaches caused by software faults have come from programs that receive input data from external sources, in particular when you will accept data from anybody. Internet tools are the biggest culprits, and there is a long history of buffer overflows, injection attacks and other trouble that has fallen on tools which will accept a message from just anyone. Servers (which openly accept messages from outside) are at the greatest risk, but even client tools like web browsers run into trouble because they go to vast numbers of different web sites, and it’s not hard to trick people into sending them to a random web site.

We work very hard to remove these vulnerabilities, because when you’re writing a web tool, you have no choice. You must accept input from random strangers. Holes still get found, and we pay the price.

The simplest strategy to improve your chances is to go deaf. Don’t receive inputs from outside at all. You can’t do that in most products, but if you can close off a channel without impeding functionality it’s a good approach. Generally you will do the following to be more secure:

  1. Be a client, which means you make communications requests, you do not receive them.
  2. You only connect to places you trust. You avoid allowing yourself to be directed to connect to other things
  3. You use digital signatures and encryption to assure that you really are talking to your trusted server.

This doesn’t protect you perfectly. Your home server can be compromised — it often will be running in an environment not as locked down as this. In fact, if it becomes your relay for messages from outside, as it must, it has a vector for attack. Still, the extra layer adds some security.
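
As a minimal sketch of the pattern in the list above (dial out only, to one trusted host, and verify it cryptographically), here is what the car-side client might look like. The host name and CA file are hypothetical.

```python
import socket
import ssl

# The car only ever dials out to its maker's server, and only trusts a certificate
# authority it ships with; it never listens for inbound connections.
TRUSTED_HOST = "updates.example-carmaker.com"   # hypothetical
CA_BUNDLE = "/etc/car/maker-ca.pem"             # hypothetical pinned CA certificate

def fetch_from_home_server(request: bytes) -> bytes:
    """Send one request to the trusted server over verified TLS and return the reply."""
    context = ssl.create_default_context(cafile=CA_BUNDLE)
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED
    with socket.create_connection((TRUSTED_HOST, 443), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=TRUSTED_HOST) as tls:
            tls.sendall(request)
            return tls.recv(65536)
```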

Uber to research robocars?

Rumours reported in TechCrunch suggest Uber is opening a robocar lab in Pittsburgh and hiring up to 50 CMU folks to staff it.

Update: On the Uber blog we now see it’s more funding of research labs at CMU, on many topics

That’s a major step, if true. People have often pointed out how well Uber is poised to make use of robocar technology to bring computer-summoned taxi service to the next level. If Uber did not exist, I would surely be building it to get that advantage. Many have assumed that since Google is a major investment partner in Uber, they would partner on this technology, but this suggests otherwise.

I write about Uber a lot here not just because of interest in what they do today, but because it teaches us a lot about how people will view robocars in the future. Uber’s interface is very similar to what you might see for a robocar service, and the experience is fairly similar, just much more expensive. UberX is $1.30/mile plus 26 cents/minute with $2.20 flag drop. The Black service is $3.75/mile and 65 cents/minute with an $8 flag drop. I expect robocar taxi service to be cheaper than 50 cents/mile with minimal per-minute charges. The flag drop is not yet easy to calculate. What richer people do with Uber teaches us what the whole public will do with robocars.
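
Plugging those published rates into a simple fare formula shows the gap. The trip length is illustrative, and the robotaxi line uses only the rough projection above (under 50 cents/mile, minimal per-minute charge, flag drop unknown and shown here as zero).

```python
def fare(miles, minutes, per_mile, per_minute, flag_drop):
    """Simple metered fare: flag drop plus distance and time charges."""
    return flag_drop + miles * per_mile + minutes * per_minute

trip_miles, trip_minutes = 8, 25   # an illustrative cross-town trip
print("UberX:    $%.2f" % fare(trip_miles, trip_minutes, 1.30, 0.26, 2.20))
print("Black:    $%.2f" % fare(trip_miles, trip_minutes, 3.75, 0.65, 8.00))
# Robotaxi projection from the text: under 50 cents/mile, minimal per-minute charge.
print("Robotaxi ~$%.2f" % fare(trip_miles, trip_minutes, 0.50, 0.00, 0.00))
```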

Uber lets you say where you are going but doesn’t demand it. That’s one thing I suspect will be different with your robotaxi, because it’s really nice if they can send you a vehicle chosen for the trip you have in mind. Ie. a small, efficient car without much range for short, single person trips. Robotaxi services will offer you the ability to not say your destination — but they will probably charge more for it, and that means most people will be willing to say their destination.

Uber does not hide their desire to get rid of all their drivers, which sounds like a strange strategy, but the truth is that cab driving is not something most people view as a career. It’s a quick source of money with no special skills, something people do until something better comes along, or in the gaps in their day to make extra cash. Unlike people losing jobs to robots on a factory line, nobody is particularly upset at the idea.

Uber starts to improve their surge pricing public relations

Uber’s gotten a lot of bad press over its surge pricing system. As prices soared during Storm Sandy and a hostage crisis in Sydney, people saw it as price gouging when times are tough.

I’ve always thought the public reaction to price gouging in times of scarcity and emergency was irrational. While charging double or triple for food, rides or generators does mean that the rich get more access to them, it also does at least a partial job of assuring that people who truly need or want things the most get access over those who need them less. I do not quite understand why the alternative — keeping prices flat, and allocating items to whoever gets there first — is so broadly preferred.

Uber has promoted another reason to have surge pricing. They argue that as they raise the prices, it causes an increase in supply. Unlike generators, where there are only so many in the stores during a storm, doubling the price of a ride can mean a sudden influx of rides, both from people in the area and even those who rush in from outside to make the extra buck. I suspect that does happen, but Uber also makes more money and poorer people are priced out of the market, which has been a PR nightmare.

For the recent snowstorm that didn’t end up being too bad in NY, Uber announced some new policies — a cap of 2.8x on the price increase, and donation of all proceeds to the Red Cross. The mayor of New York even declared the surge-pricing was illegal.

It’s an interesting start, but what do they mean by all proceeds? If they’re not increasing the income of the drivers — many of whom are low enough income that the double-time or more rates can make a real difference — then they are defeating the whole point of this.

Here are some potential ideas I was thinking about for how to play surge pricing:

  • Keep Uber’s fee during a surge the same. Ie. it’s always 20% of the rack rate, not of the surged price. So Uber makes no extra money (except from the extra volume); only the drivers do.
  • To get really extreme, Uber could reduce its cut as volume increases, so they don’t even make money from the increased volume.
  • They could just donate all their cut (which may be what they mean when they say all proceeds.)
  • The extra could be split between drivers and a charity. You get more drivers, and they make more, but good deeds are also done.

Another option would be to do something like a “buy one give one” as we’ve seen in physical products. This would mean that during the surge, riders could elect to pay more to get priority (and to attract drivers.) But if the surge is for 2x, they might pay 3x, and the overage would go to provide a regular priced ride (1x) for somebody else, while still paying the driver 2x.
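
The arithmetic works out neatly: one premium ride funds one subsidized ride. A small sketch, with the fare and multipliers purely illustrative:

```python
def buy_one_give_one(base_fare, surge_multiplier, premium_multiplier):
    """Rider pays the premium rate; the driver gets the surge rate; the overage funds a
    regular-priced ride for someone else, whose driver is also paid the surge rate."""
    rider_pays = base_fare * premium_multiplier
    driver_gets = base_fare * surge_multiplier
    subsidy_pool = rider_pays - driver_gets
    subsidy_needed = base_fare * (surge_multiplier - 1)   # top-up for the subsidized ride's driver
    return rider_pays, driver_gets, subsidy_pool, subsidy_needed

rider, driver, pool, needed = buy_one_give_one(20.00, 2.0, 3.0)
print(f"Rider pays ${rider:.0f}, driver gets ${driver:.0f}, ${pool:.0f} goes to the subsidy pool")
print(f"Each subsidized ride needs a ${needed:.0f} top-up, so one premium ride funds one subsidized ride")
```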

The tricky part is how to make sure the subsidized rides only go to those who can’t afford to pay the surge price. The subsidized rides will presumably still be in short supply. You want them to go only to those who truly need them. Options might include:

  • Offer subsidies primarily for those who use UberX almost exclusively. Use a lot of black car and you don’t get a subsidy. (Yes, some people use black car on expense account and UberX on personal rides, including myself, so this is not perfect.)
  • Require a declaration of low income. Subject those who declare low income to random audits after the fact, pulling up credit scores or asking them to actually demonstrate the low income. If they lied, charge them the full amount plus a penalty for all subsidized rides they took.
  • Drivers could also elect to subsidize, and say they will drive for 1x, or any other amount, to really increase the supply of subsidized rides and the amount of subsidy. They might get a tax donation receipt for doing so if Uber could set up the tax structures properly with a non-profit. (A non-profit would probably need to work over all companies or be fully independent of the company.)

As already happens with the surge system, the surcharge and subsidy would be adjusted to try to make demand match supply.

You could even offer rides to those in need for 0.5x, a flat fee, or even nothing, though nothing is very easy to abuse.

Singularity University summer GSP now free (for those who get in.) Wanna come? Wanna speak?

As some of you may know, I have been working as chair of computing and networking at Singularity University. The most rewarding part of that job is our ten week summer Graduate Studies Program. GSP15 will be our 7th year of it. This program takes 80 students from around the world (typically over 30 countries and only 10-15% from North America) and gives them 5 weeks of lectures on technology trends in a dozen major fields, and then 5 weeks of forming into teams to try to apply that knowledge and thinking to launch projects that can seriously change the world. (We set them the goal of having the potential to help a billion people in 10 years.)

The classes have all been fantastic, and many of the projects have gone on to be going concerns. A lot of the students come in with one plan for their life and leave with another.

It’s about to get better. One big problem has been that the program is expensive. Last year we charged almost $30,000 (that includes room and board) and most of the scholarships were sponsored competitions in different countries and regions. This limits who can come.

Larry Page and Google helped found Singularity U in 2009, and they have stepped up massively this year with a scholarship fund that assures that all accepted students will attend free of charge. Students will either get in through one of the global contests, or be accepted by the admissions team and given a full scholarship. It means we’ll be able to select from the best students in the world, regardless of whether they can afford the cost.

In spite of the name, SU is not really about “the singularity” and not anything like a traditional university. The best way to figure it out is to read the testimonials of the graduates.

Students come in many age ranges — we have had early 20s to late 50s, with a mix of backgrounds in technology, business, design and art. Show us you’re a rising star (or a star that has done it before and is ready to do it again even bigger) and consider applying.

Speaking at SU

In the rest of the year we do a lot of shorter programs, from a couple of days to a week, aimed at providing a compressed view of the future of technology and its implications to a different crowd — typically corporate, entrepreneur and investor based. As that grows, we need more speakers, and I’m particularly interested in finding new folks to add related to computing and networking technologies. We do this all over the planet, which can be a mix of rewarding and draining, though about half the events are in Silicon Valley. There are 3 things I am looking for:

  • The chops and expertise in your field to do a cutting edge talk — why do we start listening to you?
  • Great speaking skills — why do we keep listening to you?
  • All else being equal, I seek more great female and minority speakers to reverse Silicon Valley’s imbalances, which we suffer as well.

Is this you, or do you have somebody to recommend? Contact me for more details. While top-flight people generally have some of their own work to talk about, and I do use speakers sometimes on very specific topics, the ideal speaker is a great teacher who can cover many topics for audiences who are very smart but not always from engineering backgrounds.

Our next public event is March 12-14 in Seville, Spain — if you’re in Europe try to make it.

UMich team works on perception and localization using cameras

Some new results from the NGV Team at the University of Michigan describe different approaches for perception (detecting obstacles on the road) and localization (figuring out precisely where you are.) Ford helped fund some of the research so they issued press releases about it and got some media stories. Here’s a look at what they propose.

Many hope to be able to solve robotics (and thus car) problems with just cameras. While LIDAR is going to become cheap, it is not yet, and cameras are much cheaper. I outline many of the trade-offs between the systems in my article on cameras vs lasers. Everybody hopes for a computer vision breakthrough to make vision systems reliable enough for safe operation.

The Michigan lab’s approach is a special machine vision one. They map the road in advance in 3D and visible light by using a mapping car equipped with lots of expensive LIDAR and other sensors. They build a 3D representation of the road similar to what you need for a video game engine, and from that, with the use of GPUs, they can indeed create a 2D image of what a camera should see from any given point.

The car goes out into the world and its actual camera delivers a 2D frame of what it sees. Their system then compares that with generated 2D images of what the camera should see until it finds the closest match. Effectively, it’s like you looking out a window and then going into a video game and wandering around looking for a place that looks like what you see out that window, and then you know where the window is.

Of course it is not “wandering,” and they develop efficient search algorithms to quickly find the location that looks most like the real world image. We’ve all seen video games images, and know they only approximate the real world, so nothing will be an exact match, but if the system is good enough, there will be a “most similar” match that also corresponds with what other sensors, like your GPS and your odometer/dead reckoning system, tell you about where you probably are.
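
Here is a minimal sketch of that search loop. The renderer is just a stand-in for the GPU rendering of the prior 3D map, and normalized correlation stands in for whatever match score the team actually uses; both are assumptions made for illustration.

```python
import numpy as np

def localize(camera_image, gps_prior, render_expected_view,
             search_radius_m=2.0, step_m=0.25):
    """Try candidate poses around the GPS/dead-reckoning prior and keep the one whose
    rendered view best matches the live camera frame.

    render_expected_view(pose) is a stand-in for the GPU renderer: given a candidate
    (x, y, heading) it returns the 2D image the camera *should* see per the prior 3D map."""
    best_pose, best_score = None, -np.inf
    x0, y0, heading = gps_prior
    offsets = np.arange(-search_radius_m, search_radius_m + step_m, step_m)
    live = (camera_image - camera_image.mean()) / (camera_image.std() + 1e-9)
    for dx in offsets:
        for dy in offsets:
            pose = (x0 + dx, y0 + dy, heading)
            rendered = render_expected_view(pose)
            synth = (rendered - rendered.mean()) / (rendered.std() + 1e-9)
            score = float((live * synth).mean())   # higher means the two views agree more
            if score > best_score:
                best_pose, best_score = pose, score
    return best_pose, best_score
```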

Localization with cameras has been done before, and this is a new approach taking advantage of new generations of GPUs, so it’s interesting. The big challenge is simulating the lighting, because the real world is full of different lighting, high dynamic range, and shadows. The human system has no problem understanding a stripe on the road as it moves through the shadow of a tree, but computer systems have a pretty tough time with that. Sun shadows can be mapped well with GPUs, but shadows from things like the moving limbs of trees are not possible to simulate, and neither are the shadows of other vehicles and road users. At night, light and shadows come from car headlights and urban lights. The team is optimistic about how well they will handle these problems.

The much larger challenge is object perception. Once you have a simulation of what the camera should see, you can notice when there are things present that are not in the prediction — like another car or pedestrian, or a new road sign. (Right now their system mostly is looking at the ground.) Once you identify the new region, you can attempt to classify it using computer vision techniques, and also by watching it move against the expected background.

This is where it gets challenging, because the bar is very high. To be used for driving it must effectively always work. Even if you miss 1 pedestrian in a million you have a real problem because there are billions of pedestrians encountered by a billion drivers every day. This is why people love LIDAR — if something (other than a mirror or sheet of glass) sufficiently large is sufficiently close to you, you’re going to get laser returns from it, and not from what’s behind it. It has the reliability number that is needed. The challenge of vision systems is to meet that reliability goal.

This work is interesting because it does a lot without relying on AI “computer vision” techniques. It is not trying to look at a picture and recognize a person. Humans are able to look at 2D pictures with bizarre lighting and still tell you not just what the things in the picture are, but often how far away they are and what they are doing. While we can be fooled in a 2D image, once you have a moving dynamic world, humans are generally reliable enough at spotting other things on the road. (Though of course, with 1.2 million dead each year, and probably 50 million or more accidents, the majority because somebody was “not looking,” we are far from perfect.)

Some day, computer vision will be as good at recognizing and understanding the world as people are — and in fact surpass us. There are fields (like identifying traffic signs from photos) where they already surpass us. For those not willing to wait until that day, new techniques in perception that don’t require full object understanding are always interesting.

I should also point out that while lowering cost is of course a worthwhile goal, it is a false goal at this time. Today, maximal safety is the overriding goal, and as such, nobody will actually release a vehicle to consumers without LIDAR just to save the estimated 2017 cost of LIDAR, which will be sub-$500. Only later, when cameras get so good they completely replace LIDAR’s safety capabilities for less money, would anyone release such a system to save cost. On the other hand, improving cameras to be used together with LIDAR is a real goal: superior safety, not lower cost.

Might the first, supervised robocars be... well... boring?

Let me confess a secret fear. I suspect that the first “autopilot” functions on cars are going to be a bit boring.

I’m talking about offerings like traffic jam assist from Mercedes, super cruise from Cadillac and others, and the faster highway assist versions which combine ADAS functions like lane-keeping and adaptive cruise control to keep the car in its lane and a fixed distance from the car in front of you. This is what Tesla has promoted and what scrappy startup “Cruise” plans to offer as a retrofit later this year. It is, in NHTSA’s flawed “levels” document, what could be called supervision type 2.

Some of them also offer lane change, if you approve the safety of the change.

All these products will drive your car, slow or fast, on highways, but they require your supervision. They may fail to find the lane in certain circumstances, because the lane markings are badly painted, or confusing, or just missing, or the light is wrong. When they do, they’ll kick out and insist you drive. They’ll really insist, and you are expected to be behind the wheel, watching and ready to grab it quickly — ideally even noticing the failure before the system does.

Some will kick out quite rarely. Others will do it several times during a typical commute. But the makers will insist you be vigilant, not just to cover their butts legally, but because in many situations you really do need to be vigilant.

Testing shows that operators of these cars get pretty confident, especially if the system is not kicking out very often. They do things they are told not to do. Pick up things to read. Do e-mails and texts. This is no surprise — people are texting even now when the car isn’t driving for them at all.

To reduce that, most companies are planning what they call “countermeasures” to make sure you are paying attention to the road. Some of them make you touch the wheel every 8 to 10 seconds. Some will have a camera watching your eyes that sounds an alarm if you look away from the road for too long. If you don’t keep alert, and ignore the alarms, the cars will either come to a stop in the middle of the freeway, or perhaps even just steer wildly and run off the road. Some vendors are talking about how to get the car to pull off safely to the side of the road.
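
None of the vendors have published their exact logic, but the escalation just described might be sketched roughly like this; the timeouts and the callbacks are hypothetical.

```python
import time

TOUCH_TIMEOUT_S = 8   # must touch the wheel or look at the road this often
ALARM_GRACE_S = 4     # how long the alarm sounds before the car gives up

def countermeasure_loop(driver_touched_wheel, eyes_on_road, sound_alarm, execute_fallback):
    """Escalation sketch: nag the driver, then fall back to stopping or pulling over.
    The four arguments are callbacks supplied by a (hypothetical) vehicle platform."""
    last_attention = time.monotonic()
    alarm_started = None
    while True:
        now = time.monotonic()
        if driver_touched_wheel() or eyes_on_road():
            last_attention, alarm_started = now, None
        elif now - last_attention > TOUCH_TIMEOUT_S:
            if alarm_started is None:
                alarm_started = now
                sound_alarm()
            elif now - alarm_started > ALARM_GRACE_S:
                # Driver is ignoring the alarm: stop in lane or pull to the shoulder.
                execute_fallback()
                return
        time.sleep(0.1)
```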

There is debate about whether all this will work, whether the countermeasures or other techniques will assure safety. But let’s leave that aside for a moment, and assume it works, and people stay safe.

I’m now asking the harder question, is this a worthwhile product? I’ve touted it as a milestone — a first product put out to customers. That Mercedes offered traffic jam assist in the 2014 S-Class and others followed with that and freeway autopilots is something I tell people in my talks to make it clear this is not just science fiction ideas and cute prototypes. Real, commercial development is underway.

That’s all true, and I would like these products. What I fear, though, is whether it will be that much more useful or relaxing than adaptive cruise control (ACC.) You probably don’t have ACC in your car. Uptake on it is quite low — as an individual add-on, usually costing $1,000 to $2,000, only 1-2% of car buyers get it. It’s much more commonly purchased as part of a “technology package” for more money, and it’s not clear what the driving force behind the purchase is.

Highway and traffic jam autopilot is just a “pleasant” feature, as is ACC. It makes driving a bit more relaxing, once you trust it. But it doesn’t change the world, not at all.

I admit to not having this in my car yet. I’ve sat in the driver’s seat of Google’s car some number of times, but there I’ve been on duty to watch it carefully. I got special driver training to assure I had the skills to deal with problem situations. It’s very interesting, but not relaxing. Some folks who have commuted long term in such cars have reported it to be relaxing.

A Step to greater things?

If highway autopilot is just a luxury feature, and doesn’t change the world, is it a stepping stone to something that does? From a standpoint of marketing, and customer and public reaction, it is. From a technical standpoint, I am not so sure.

Camera mounting -- beyond the tripod screw and dovetail plate

For many decades, cameras have come with a machine screw socket (1/4”-20) in the bottom to mount them on a tripod. This is slow to use and easy to get loose, so most photographers prefer to use a quick-release plate system. You screw a plate on the camera, and your tripod head has a clamp to hold those plates. The plates are ideally custom made so they grip an edge on the camera to be sure they can’t twist.

There are different kinds of plates, but in the middle to high end, most people have settled on a metal dovetail plate first made by Arca Swiss. It’s very common with ball-heads, but still rare on pan-heads and lower end tripods, which use an array of different plate styles, including rectangles and hexagons.

The plates have issues — they add weight to your camera and put protruding or semi-sharp edges on the bottom. They sometimes block doors on the bottom of the camera. If they are not custom, they can twist, and if they are custom they can be quite expensive. They often have tripod holes but those must be off-center.

Arca style dovetails are quite sturdy, but must be metal. With only the 2 sides clamped they can slide to help you position the camera. It is hard, but not impossible to make them snap in, so they usually are screwed and unscrewed which takes time and work and often involves a knob which can get in the way of other things. They are 38mm wide, and normally the dovetails are parallel to the sensor plane, though for strength the plates on big lenses are sometimes perpendicular, which is not an issue for most ball heads.

It’s time the camera vendors accepted that the tripod screw is a legacy part and moved to some sort of quick release system standardized and built right into the cameras. The dovetail can probably be improved on if you’re going to start from scratch, and I’m in favour of that, but for now it is almost universal among serious photographers so I will discuss how to use that.

I have seen a few products like this — for example the E-mount to EOS adapter I bought includes a tripod wedge which has both a screw and ARCA dovetails. (Considering the huge difference in weight between my mirrorless cameras and old Canon glass, this mount is a good idea.)

The screens

Many cameras are deep enough that a 38mm wide dovetail (with tripod hole) could be built into the base of the camera. You would have to open the clamp fully to insert unless you wanted the dovetails to run the entire length, which you don’t, but I think most photographers would accept that to have something flush. It would expand the size of the camera slightly, perhaps, but much less than putting on a plate does — and everybody with high end cameras puts on a plate.

Today, though, many cameras have flip-up screens. They are certainly very handy. As people want their screens as big as possible, this can be an issue as the screen goes down flush with the bottom. If there’s a clamp on the bottom, it can block your screen from getting out. One idea would be to design clamps that taper away at the back, or to accept the screen won’t go down all the way.

The smaller cameras

A lot of new cameras are not 38mm deep, though. Putting plates on them is even worse as they stick out a lot. While again, a new design would help solve this problem, one option would be to standardize on a narrower dovetail, and make clamps that have an adapter that can slide in, seat securely so it won’t pop when the pressure is applied, and hold the narrower plate. That, or have a clamp with a great deal of travel, but that tends to take a lot of time to adjust. (I will note that there are 2 larger classes of dovetails used for heavy telescopes, known as the Vixen and the Losmandy “D”. Some Vixen clamps are actually able to grab an Arca plate, even though they are not as deep, because of the valley often formed with the dovetail and the top of the plate.)

It’s also possible to have a 2 level clamp that can grab a smaller plate but there must be a height gap, which may or may not work.

Narrower plates would be used only on smaller and lighter cameras, where not as much strength is needed. However, here again it might be time to design something new.

A locking pin

For some time, camcorders have established a pattern of having a small hole forward of the tripod screw for a locking pin. This allows a much sturdier mount that can’t twist with no need to grab edges of the camera body. Still cameras could do well to establish pin positions — perhaps one forward, and one to the side. All they have to do is have small indentations for these pins, which typically come spring-loaded on the plates so you can still use them if the hole is not there. (The camcorder pin is placed forward of the tripod hole, but often “forward” is in the direction of the rails.)

For small cameras, it would be necessary to put the dovetail rails perpendicular to the sensor, and they would be very short. That’s OK because those cameras are small and light. The clamp screws would need to be flush with the top of the clamp. (This is sometimes true but not always.)

The presence of a pin would allow small, generic clamps to sturdily hold many cameras. For larger cameras, bigger plates would be available. The cost and size of plates would go down considerably.

The tripod leg screw

The world also standardized on using a bigger machine screw — 3/8”-16 thread — to connect tripod legs to tripod heads. This is a stronger screw, but could also use improvement. The fact that it takes time to switch tripod heads is not that big a deal for most photographers, but the biggest problem is there is no way, other than friction, to lock it, and many is the time that I have turned my tripod head loose from my legs. Here, some sort of clamp or retractable pin would be good, but frankly another clamp (quick release or not) might make sense, and it could become a standard for heavier duty cameras as well.

Something entirely new

I would leave it to a professional mechanical engineer to design something new, but I think a great system would scale to different sizes, so that one can have variants of it for small, light devices, and variants for big, heavy gear, with a way that the larger clamps could easily adapt to hold some of the smaller sizes. I would also design it to be backwards compatible if practical — it is probably easy to leave a 1/4-20 hole in the center, and it may even be possible in the larger sizes to have dovetails that can be gripped by such clamps.

Robocar Parking

In my earlier article on robocar challenges I gave very brief coverage to the issue of parking. Challenged on that, I thought it was time to expand.

The word “parking” means many things, and the many classes of parking problems have varying difficulties.

The taxi doesn’t park

One of the simplest solutions to parking involves robotaxi service. Such vehicles don’t really park, at least not where they dropped you off. They drop you off and go to their next customer. If they don’t have another ride, they can deliberately go to a place where they know they can easily park to wait. They don’t need to tackle a parking space that’s challenging at all.

Simple non-crowded lots

Parking in basic parking lots — typical open ground lots that are not close to full — is a pretty easy problem. So easy in fact, that we’ve seen a number of demonstrations, going back to Junior 3 and Audi Piloted Parking. Cars in the showroom now will identify parking spots for you (and tell you if you fit.) They have done basic parallel parking (with you on the brakes) for several years, and are now starting to do it even with you out of the car (but watching from a distance.) At CES, VW showed the special case of parking in your own garage or driveway, where you show the car where it’s going to go.

The early demos required empty parking lots with no pedestrians, and even no other moving cars, but today reasonably well-behaved other cars should not be a big problem. That’s the thing about non-crowded lots: People are not hunting or competing for spaces. The robocars actually would be very happy to seek out the large empty sections at the back of most parking lots because you aren’t going to be walking out that far, the car is going to come get you.

The biggest issue is the question of pedestrians who can appear out from behind a minivan. The answer to this is simply that vehicles that are parking can and do go slow, and slow automatically gives you a big safety boost. At parking lot speed, you really can stop very quickly if a pedestrian appears out of nowhere. The car, after all, is not in a hurry, and can slow itself when close to minivans, or if it has noticed pedestrians who are moving near it and have disappeared behind vehicles. Out at the back of a parking lot, nobody cares if you go 5 km/h, or even right down the center of the lane to assure there are no surprises.

To the right we see a picture of Junior 3 entering a parking lot, hunting for a space and taking it — in 2009.


Mapping is still desirable for parking lots. This is particularly true because parking lots, not being public roads, set up their own sets of rules and put up signs meant only for humans. They may direct traffic to be one-way in certain areas in nonstandard ways. They may have gates where you have to pay or insert tickets. Parking spots will be marked reserved for certain cars (electric vehicle, expectant mother, wheelchair, employee of the month, CEO, customers of company X) with signs meant for humans.

It’s not necessarily super hard to map a parking lot, just time consuming to encode all these rules. Unlike roads, which everybody drives, any given parking lot likely only serves the people who live, work or shop next to it — you will never park in 95% of the lots in your city, though you will drive most of its main roads. Somebody has to pay for the cost of that mapping — either because lots of people want to use the lot, or because the owner of the lot wants to encourage robocars. Fortunately, with the robocars doing things like using the least popular spots, or even valet parking as described below, there is a strong incentive for the owner of a lot to get it mapped and keep it mapped. Only lots that never fill up would have no incentive, and those lots can often be parked in without a map.

While you want trained mappers to confirm the geometry of a parking lot, coding in the signs and special rules is a task easily left to the parking lot owner. If the lot manager forgets to tag the CEO’s space as reserved, nobody is hurt (except the lot manager when the CEO arrives.)
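
What that owner-maintained layer might look like is sketched below. The field names and tagging scheme are hypothetical, but the kinds of rules (one-way aisles, gates, reserved spots, a robo-valet zone) are the ones described above.

```python
# Hypothetical rules file a lot owner might maintain on top of the surveyed geometry.
LOT_RULES = {
    "lot_id": "acme-hq-north",
    "one_way_aisles": {"aisle-3": "northbound", "aisle-7": "southbound"},
    "gates": [{"id": "entry-1", "type": "ticket"}],
    "reserved_spots": {
        "A-01": "wheelchair",
        "A-02": "expectant-mother",
        "B-14": "ev-charging",
        "C-01": "CEO",
    },
    "robo_valet_zone": {"rows": ["F", "G"], "server": "https://valet.example.com/acme-hq-north"},
}

def may_park(spot_id: str, vehicle_tags: set) -> bool:
    """A spot is fair game unless it is reserved for a tag the vehicle doesn't carry."""
    reservation = LOT_RULES["reserved_spots"].get(spot_id)
    return reservation is None or reservation in vehicle_tags
```

If the lot manager forgets to tag the CEO’s space, the car simply treats it as open, which is exactly the low-stakes failure described above.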

Robocar parking mistakes are easy to fix. Robocars can put a phone number or URL on the back where you can go to complain about a robocar that is parked badly or blocking things. As long as that doesn’t happen too often, the cost of the support desk is manageable. The folks at the support desk can look out with the robot’s sensors and tell it to move. It’s not like finding a human driven car blocking something, where you have to find the owner. In a minute, the robocar will be gone.

More crowded lots

The challenge of parking lots, in spite of the low speeds, is that they don’t have well defined rules of the road. People ignore the arrows on the ground. They pause and wait for cars to exit. In really crowded lots, cars follow people who are leaving at walking speed, hoping to get dibs on their spot. They wait, blocking traffic, for a spot they claim as theirs. People fight for spots and steal spots. People park badly and cross over the lines.

As far as I know, nobody has tried to solve this challenge, and so it remains unsolved. It is one of the few problems in robocars that actually deserves the label of “AI,” though some think all driving is AI.

Even so, in the grand scheme of things, my intuition is that this is not one of the grand unsolved challenges of AI. Parking lots don’t have legalized rules of the road, but they do have rules and principles, and we all learn them the more we park. Creating a system that can do well with these rules using various AI tools seems like a doable challenge when the time comes. My intuition is that it’s a lot easier than winning on Jeopardy. This system will be able to take advantage of a couple of special abilities of the robocars:

  • They will be able to park and exit spots quickly and efficiently. They won’t be like the people you always see who do a 5 point turn to exit their parking spot when you (but not they) can see they still have 5 feet of room behind them.
  • In general, they will be superb parkers, centering themselves as well as possible inside spots.
  • They don’t need room to open their doors, so they can park right next to walls and pillars.
  • Yes, they could also park right next to badly parked cars which have encroached into other spaces and thus made a space no human can use. There is a risk of course that the bad parker, who finds they can’t get in one side, might retaliate. (I’ve had a guy rip my mirror off in revenge.) In this case, though, they will have a photo of the licence plate and a sensor record of the revenge taking place!
  • In the event of problems or deadlock, they are open to the idea of just giving up and parking somewhere farther away that is easier to park in. Unlike humans they could drive as quickly in reverse as forward to back out of situations.

In spite of all this, the cars will want to avoid the full parking lots where the chaos happens. If there is another lot not far away, they will just go there, and require a couple minutes more advance notice from their master when summoned to pick them up. If there is nowhere nearby to park, the car will tell its passenger that she has to do the parking.

Robo-valet zones

Even in the most crowded lots, there is the potential to easily create zones of the parking lot that are marked:

“Robot Valet Parking only. All other cars may be blocked in or towed. No pedestrians.”

In the car’s map, it will indicate what server is handling the robo-valet section, though it is possible to have it work without any communication at all.

In the most basic version the car would ask permission to enter the lot. The database might even assign it a spot, but generally it would just enter and take any spot. By “any spot”, I mean any piece of pavement, ignoring the lines on the ground. At first the cars would choose spots that let them have an unblocked path to leave. As soon as too many cars arrive to do that, they would switch to a more dense, valet pattern that blocks in some cars (the ones who said they were leaving latest.) It would report where it parked to the database, as well as how to send it a message, and when it expects to leave.

Other cars would arrive. Eventually one would block in your car. If the database has given them a way to communicate (probably over the internet, though if they had V2V they could use that) they might discuss who plans to leave first, and the cars would adjust themselves to put the cars that will leave sooner at the front. This is strongly in the interests of the cars. If you plan to be there a while, you want to go to the back so you don’t have to keep moving to let cars behind you out. But it still works, just not as well, if the cars just take any available spot.

When it’s time to leave, the cars could try to send a message over the data networks to the cars in front of them, but a simpler approach might be to just nudge slightly forward — a few cm will do it. This will cause the car in the direction of the nudge to notice, and it too would nudge forward, and so on, and so on until the front car moves out, and then all the cars in that row can move out, including your car, which leaves the lot. Then the other cars can move in to fill the spot. If they have a database which maps the cars in that section, they could try to be clever in how they re-fill the empty column to minimize movement.
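
To make the sort-by-departure and nudge-and-exit ideas concrete, here is a minimal sketch of one densely packed column of valet-parked robocars. All of the names (ValetColumn, Car, release and so on) are hypothetical, invented for illustration; nothing here is an actual robocar or parking-lot API, and a real lot would of course handle many columns, re-filling, and the "move the space" trick as well.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Car:
    plate: str
    departs_at: float  # expected departure time (hours from now)


@dataclass
class ValetColumn:
    """One densely packed column of parked robocars; index 0 is the open (exit) end."""
    cars: List[Car] = field(default_factory=list)

    def admit(self, car: Car) -> None:
        # Cars that expect to leave soonest park nearest the exit end,
        # so fewer cars have to shuffle later. (In the blog's scheme the cars
        # negotiate this among themselves; here we just keep the column sorted.)
        self.cars.append(car)
        self.cars.sort(key=lambda c: c.departs_at)

    def release(self, plate: str) -> List[str]:
        """Return the sequence of moves needed to let one car out of the column."""
        idx = next(i for i, c in enumerate(self.cars) if c.plate == plate)
        moves = []
        # Each car between the leaver and the exit end gets "nudged" and pulls out.
        for blocker in self.cars[:idx]:
            moves.append(f"{blocker.plate} pulls out of the column")
        moves.append(f"{plate} drives out of the lot")
        # The cars that pulled out back into the column, filling the freed space.
        for blocker in self.cars[:idx]:
            moves.append(f"{blocker.plate} re-parks one space deeper")
        del self.cars[idx]
        return moves


if __name__ == "__main__":
    column = ValetColumn()
    for plate, hours in [("ROBO-1", 5.0), ("ROBO-2", 1.0), ("ROBO-3", 3.0)]:
        column.admit(Car(plate, hours))
    # ROBO-2 (soonest departure) ends up at the exit end, so letting it out
    # later needs no shuffling; releasing ROBO-3 only moves one blocker.
    for move in column.release("ROBO-3"):
        print(move)
```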

There are even faster algorithms if you leave a few empty spaces. Robocars have the ability to move in concert to “move the space” and put it next to a car that wants to exit. It’s more efficient, but not needed.

The database becomes more useful if a human driver ignores the signs and tries to park in the lot. That’s because the database is the simplest way of spotting a vehicle that’s not supposed to be there. As a first step, the cars in the lot could start flashing their lights and honking their horns at the interloper, or even speak human language messages out a speaker. “Hey, this is the robot valet lot, you are blocking me in! We’re calling a tow truck to come remove you if you don’t leave.” Some idiots may still try, and the robots could arrange so that almost all of them can still get out, and if not, they might call that tow truck.

The robo-valet section can be at the back of the parking lot, or the top of a structure — those places the humans park in last. The owner of the lot has a huge incentive to do this, since they can make much more efficient use of their land with the tight valet-dense parking. All the owner has to do is register the lot section in a database — a database that a company like Google would probably be happy to offer for free to benefit their cars.

Human valets could also park cars in this area. They would just need to use an app on their smartphone that tells them where to park and allows them to register that they did it. The robots will want the human-parked cars to park at the back, because the robots can move out of the way when it’s time for the human-parked car to be driven back out.

The main requirements for this parking area would be that it be reachable from the outside without going through a zone of chaos, and that it then be possible to also reach the pickup/dropoff point for passengers without the risk of getting stuck in chaos. Larger lots tend to have entrance lanes without spots on them that serve this purpose.

Pedestrians will still enter the lot, in spite of the sign. Just go extra slow if they are there, and perhaps talk to them and ask them to leave. While you won’t actually present a danger to them at your low speed, they probably will heed the advice of 3000lb robots. Perhaps tell them they have 15 seconds to put down their weapon.

Robotic sign?

To get really clever, the sign marking the border of the Robo-Valet area might itself be on a small robot. Thus, when the robo-valet area gets full, the sign can move to expand the area if space is available. You could expand even into areas occupied by human-parked cars — just know that they are there and don’t block them in — or move out of their way when needed. Eventually they leave and only robocars enter.

When the demand goes down, the sign can easily move to shrink the valet area.

The world needs standardized LEDs which adjust brightness

I’m sure, like me, you have lots of electronic gadgets that have status LEDs on them. Some of these just show the thing is on, some blink when it’s doing things. Of late, as blue LEDs have gotten cheap, it has been very common to put disturbingly bright blue LEDs on items.

These become much too bright at night, and can be a serious problem if the device needs to be in a bedroom or hotel room, which things like laptops, phone and camera chargers and many other devices need to do. I end up putting small pieces of electrical tape over these blue LEDs.

I call upon the factories of Shenzhen and elsewhere to produce low cost, standardized status LEDs. These LEDs will come with an included photosensor that measures the light in the room, and adjusts the LED so that it is just visible at that lighting level. Or possibly turns it off in the dark, because do we really need to know that our charger is on after we’ve turned off the lights?

Of course, one challenge is that the light from the LED gets into the photosensor. For most LEDs, the answer is pretty easy — put a filter that blocks out the colour of the LED over the photosensor. If you truly need a white LED, you could make a fancy circuit that turns it off for a few milliseconds every so often (the eye won’t notice that) and measures the ambient light while it’s off. All of this is very simple, and adds minimally to the cost. (In fact, the way you adjust the brightness of an LED is typically to turn it on and off very fast.)
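
As a rough illustration of the blank-and-measure control loop, here is a small Python sketch. The read_ambient() and set_led_duty() helpers are stand-ins I have invented; on a real device they would wrap an ADC channel for the photosensor and a PWM channel for the LED. The thresholds and scaling are likewise assumptions, chosen only to show the idea: briefly blank the LED, sample the room light, then set the duty cycle so the LED is just visible at that level (or off in the dark).

```python
import time

# Hypothetical hardware helpers. Stubbed out here so the loop runs anywhere;
# real firmware would read an ADC and drive a PWM output instead.
def read_ambient() -> int:
    """Return room brightness on a 0..1023 scale (stubbed for illustration)."""
    return 300

def set_led_duty(duty: int) -> None:
    print(f"LED duty set to {duty}/1023")

BLANK_SECONDS = 0.002    # LED off while sampling; far too brief for the eye to notice
MAX_DUTY = 1023
DARK_THRESHOLD = 20      # below this, the room is dark: just turn the LED off

def ambient_to_duty(ambient: int) -> int:
    """Scale LED brightness so it is just visible at the current light level."""
    if ambient < DARK_THRESHOLD:
        return 0                          # lights are out; nobody needs the status LED
    # Keep the LED a small, fixed fraction above the ambient level.
    return min(MAX_DUTY, ambient // 4 + 10)

def update_led() -> None:
    set_led_duty(0)                       # blank the LED so its own light can't fool the sensor
    time.sleep(BLANK_SECONDS)
    ambient = read_ambient()
    set_led_duty(ambient_to_duty(ambient))

if __name__ == "__main__":
    update_led()                          # firmware would call this every few seconds
```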

Get these made and make it standard that all our gear uses them for status LEDs. Frankly, I think it would be a good idea even for consumer goods that don’t get into our bedrooms. My TV rooms and computer rooms don’t need to look like Christmas scenes.

Detroit Auto Show and more news

Robocar news continues after CES with announcements from the Detroit Auto Show (and a tiny amount from the TRB meeting.)

Google doesn’t talk a lot about their car, so an address by Chris Urmson at the Detroit Auto Show generated a lot of press. Notable statements from Chris included:

  • A timeline of 2 to 5 years for deployment of a vehicle
  • Public disclosure that Roush of Michigan acted as contract manufacturer to build the new “buggy” models — an open secret since May
  • A list of other partners involved in building the car, such as Continental, LG (batteries), Bosch and others.
  • A restatement that Google does not plan to become a car manufacturer, and feels working with Detroit is the best course to make cars
  • A statement that Chris does not believe regulation will be a major barrier to getting the vehicles out, and they work regularly to keep NHTSA informed
  • A few more details about Google’s own LIDAR, indicating that units are the size of coffee cups. (You will note the new image of the buggy car does not have a Velodyne on the roof.)
  • More indication that things like driving in snow are not in the pipeline for the first vehicles

Almost all of this has been said before, though the date forecasts are moved back a bit. That doesn’t surprise me. As Google-watchers know, Google began by doing extensive, mostly highway based testing of modified hybrid cars, and declared last May that they were uncomfortable with the safety issues of doing a handoff to a human driver, and also that they have been doing a lot more on non-highway driving. This culminated with the unveiling of the small custom built buggy with no steering wheel. The shift in direction (though the Lexus cars are still out there) will expand the work that needs to be done.

Car company announcements out of the Detroit show were minor. The press got all excited when one GM executive said they “would be open to working with Google.” While I don’t think it was actually an official declaration, Google has said many times they have talked to all major car companies, so there would be no reason for GM to go out to the press to say they want to talk to Google. Much PR over nothing, I suspect.

Ford, on the other hand, actually backtracked and declared “we won’t be first” when it comes to this technology. I understand their trepidation. Being first does not mean being the winner in this game. But neither does being 2nd — there will be a time after which the game is lost.

There were concept vehicles displayed by Johnson Controls (a newcomer) and even a Chinese company which put a fish tank in the rear of the car. You could turn the driver’s seat around and watch your fish. Whaa?

In general, car makers were pushing their dates towards 2025. For some, that was a push back from 2020, for others a push forward from 2030, as both of those numbers have been common in predictions. I guess now that it’s 2015, 2020 is just too realistic a number to make an uncertain prediction about.

Earlier, Boston Consulting Group released a report suggesting robocars would be a $42B market in 2025 — the car companies had better get on it. With the global ground transportation market in the range of $7 trillion by my guesstimate, that’s a drop in the bucket (about 0.6% of it), but also a huge number.

News from the Transportation Research Board annual meeting has been sparse. The combined conference of the TRB and AUVSI on self-driving cars in the summer has been the go-to conference of late, and other things usually happen at the big meeting. Released research suggested 10% of vehicles could be robocars in 2035 — a number I don’t think is nearly aggressive enough.

There also was tons of press over the agreement between NASA Ames and Nissan’s Sunnyvale research lab to collaborate. Again, not a big surprise, since they are next door to one another, and Martin Sierhuis, the director of the research lab, made his career over at NASA. (Note of disclosure: I am good friends with Martin, and Singularity U is based at the NASA Research Park.)

Day 3 of CES -- BMW and robots

Day 3 at CES started with a visit to BMW’s demo. They were mostly test driving new cars like the i3 and M series cars, but for a demo, they made the i3 deliver itself along a planned corridor. It was a mostly stock i3 electric car with ultrasonic sensors — and the traffic jam assist disabled. When one test driver dropped off the car, they scanned it, and then a BMW staffer at the other end of a walled course used a watch interface to summon that car. It drove empty along the line waiting for test drives, and then, unfortunately, a staffer got in to finish the drive to the parking spot where the test driver would actually get in.

Also on display were BMW’s collision avoidance systems in a much more fully equipped research car with LIDARs, radar, etc. This car has some nice collision avoidance, including obstacle detection — the demo was to deliberately drive into an obstacle, and the vehicle hits the brakes for you. More gently than the Volvo I did this in a couple of years ago.

More novel is detection of objects you might hit from the side or back in low speed operations. If it looks like you might sideswipe or back into a parking column or another car, the vehicle hits the brakes on you (harder) to stop it from happening.

Insurers will like this — low speed collisions in parking lots are getting to be a much larger fraction of insurance claims. The high speed crashes get all the attention, but a lot of the payout is in low speed.

I concluded with a visit to my favourite section of CES — Eureka Park, where companies get small lower cost booths, with a focus on new technology. Also in the Sands were robotics, 3D printing, health, wearables and more — never enough time to see it all.

I have added 12 more photos to my gallery, with captions — check the last part out for notes on cool products I saw, from self-tightening belts and regenerating roller skates to phone-charging camping pots.

CES Day 2 Gallery and notes

After a short Day 1 at CES, a fuller Day 2 covered the usual equipment (cameras, TVs, audio and the like) plus visits to several car booths.

I’ve expanded my captioned gallery of notable things with cars and other technology.

Lots of people were making demonstrations of traffic jam assist — simple self-driving at low speeds among other cars. All the demos were of a supervised traffic jam assist. This style of product (as well as supervised highway cruising) is the first thing that car companies are delivering (though they are also delivering various parking assist and valet parking systems.)

This makes sense as it’s an easy problem to solve. So easy, in fact, that many of them now admit they are working on making a real traffic jam assist, which will drive the jam for you while you do e-mail or read a book. This is a readily solvable problem today — you really just have to follow the other cars, and you are going slow enough that, short of a catastrophic error like going full throttle, you aren’t going to hurt people no matter what you do, at least on a highway where there are no pedestrians or cyclists. As such, a full auto traffic jam assist should be the first product we see from car companies.

None of them will say when they might do this. The barrier is not so much technological as corporate — concern about liability and image. It’s a shame, because frankly the supervised cruise and traffic jam assist products are just in the “pleasant extra feature” category. They may help you relax a bit (if you trust them) as cruise control does, but they give you little else. A “read a book” level system would give people back time, and signal the true dawn of robocars. It would probably sell for lots more money, too.

The most impressive car is Delphi’s, a collaboration with folks out of CMU. The Delphi car, a modified Audi SUV, has no fewer than 6 4-plane LIDARs and an even larger number of radars. It helps if you make the radars, as otherwise this is an expensive bill of materials. With all the radars, the vehicle can look left and right, and back left and back right, as well as forward, which is what you need for dealing with intersections where cross traffic doesn’t stop, and for changing lanes at high speed.

As a refresher: Radar gives you great information, including speed on moving objects, and sucks on stationary ones. It goes very far and sees through all weather. It has terrible resolution. LIDAR has more resolution but does not see as far, and does not directly give you speed. Together they do great stuff.

For notes and photos, browse the gallery

CES Day 1 -- Mercedes concept

A reasonable volume of robocar related stuff here at CES. I just had a few hours today, and went to see the much touted Mercedes F015 “Luxury in Motion.” This is a concept and not a planned vehicle, but it draws together a variety of ideas — most of which we’ve seen before — with some new explorations.

The vehicle has a long wheelbase design to allow it to have a very large passenger compartment, which features just 4 bucket seats, the front two of which can rotate to create face to face seating. (In addition, they can rotate to make it easier to get into the car.) We’ve seen a number of face to face concepts and designs, and I’ve been interested from the start in the idea of making car travel more social and better for both families and co-workers. As a plus, rear facing seats, though less comfortable for some fraction of the population, are going to be safer in a front end collision.

The vehicle features a bevy of giant touchscreens. We see a lot of this, but I actually will note that we don’t have this at our desks or in our homes. I suspect passengers in robocars will prefer the tablets they already have, though there is the issue that looking down at a tablet generates motion sickness sometimes.

The interior has an odd mix of carpet and hardwood, perhaps trying to be more like a living room.

More interesting, though not on display, are the vehicle’s systems for communicating with pedestrians and other road users. These include LEDs that can indicate if the car is self-driving (boring, and something I pushed to have removed from the Nevada law,) but more interesting are indicators that help to tell pedestrians the vehicle has seen them. One feature, which is only likely to work at night, laser-projects a crosswalk in front of the vehicle when it stops, to tell a pedestrian it sees them and is expecting them to cross in front. It can also make LED words at the back for other cars (something that I think is illegal in some jurisdictions.)

Also interesting has been the press reaction. Wired thinks it’s bonkers and not designed very well. The bonkers part is because the writer thinks it de-emphasizes driving too much. Of course, those of that stripe are quite upset at Google’s car with no controls. Other writers have liked the design, and find it quite superior to Google’s non-threatening design, suggesting the Google design is for regulators and the Mercedes design is for customers. Google plans to get approval for their car and operate it, while Mercedes is just using the F015 as a concept.

I have a gallery of several pictures of the car which I will add to during the week. In the gallery you will also see:

Audi Piloted Driving prototype

Audi drove one of their cars from the Bay Area to CES, letting press take 100 mile stints. It also helped them learn things about different conditions. One prototype is in the booth, I will go out to see the real car outdoors tomorrow.


TRW was showing off their technology with a transparent model showing where they had put an array of radars to make 360 degree radar and camera coverage. No LIDAR, but they will probably get one eventually. Radar’s resolution is low, but they believe that by fusing the radar and the camera views they can get very good perception of the road.


There are more for me to see tomorrow. Ford showed more of their ADAS systems and also their Focus which has 4 of the 32-plane Velodyne LIDARs on it. Toyota showed only a hydrogen fuel cell car. Valeo has some interesting demos I will want to see — they have promised a good traffic jam assist. While they have not said so, I think the most interesting car company robocar function will be a traffic jam assist which does not require supervision — i.e. you can read. While no car company is ready to have the driver out of the loop at high speeds, doing it at traffic jam speeds is much easier, because mainly you just have to follow the other cars, and you stop self-driving if the jam opens up. Several companies are working on a product like this and I suspect it will be the first real robocar product to reach the market that is actually practical. The “super cruise” products which drive while you watch are pleasant, but not much more world-changing than adaptive cruise control. When the car can give people time back, even if it’s only the traffic jam time, then something interesting starts happening.

Fixing the sad state of in-flight entertainment (your own or the airline's)

When Southwest started using tablets for in-flight entertainment, I lauded it. Everybody has been baffled by just how incredibly poor most in-flight video systems are. They tend to be very slow, with poor interfaces and low resolution screens. Even today it’s common to face a small widescreen that takes a widescreen film, letterboxes it and then pillarboxes it, with only an option to stretch it and make it look wrong. All this driven by a very large box in somebody’s footwell.

I found out one reason why these systems are so outdated. Apparently, all seatback screens have to be safety tested, to make sure that if you are launched forward and hit your head on the screen, it is not more dangerous than it needs to be. Such testing takes time and money, so these systems are only updated every 10 years. The process of redesigning, testing and installing takes long enough that it’s pretty sure the IFE system will seem like a dinosaur compared to your phone or tablet.

One airline is planning to just safety test a plastic case for the seatback into which they can insert different panels as they develop. Other airlines are moving to tablets, or providing you movies on your own tablet, though primarily they have fallen into the Apple walled garden and are doing it only for the iPad.

The natural desire is just to forget the airline system and bring your own choice of entertainment on your own tablet. This is magnified by the hugely annoying system which freezes the IFE system on every announcement. Not just the safety announcements. Not just the announcements in your language, but also the announcement that duty free shopping has begun in English, French and Chinese. While a few airlines let you start your movie right after boarding, you don’t want to do it, as you will get so many interruptions until the flight levels off that it will drive you crazy. The airline provided tablet services also do this interruption, so your own tablet is better.

In the further interests of safety, new rules insist you can only use the airline’s earbud headphones during takeoff and landing, not your nice noise-cancelling headphones. But you didn’t pick up earbuds since you have the nicer ones. The theory is, your nice headphones might make you miss a safety announcement when landing, even though they tend to block background noise and actually make speech clearer.

One of the better IFE systems is the one on Emirates. This one, I am told, knows who you are, and if you pause a show on one flight, it picks up there on your next flight. (Compare that to so many systems that often forget where you were in the film on the same flight, and also don’t warn you if you won’t be able to finish the movie before the system is turned off.)

Using your own tablet

It turns out to be no picnic using your own tablet.

  • You have to remember to pre-load the video, of course
  • You have to pay for it, which is annoying if:
    • The airline is already paying for it and providing it free in the IFE
    • You have it on Netflix/etc. and could watch it at home at no cost
    • You wish to start a movie one day and finish it on another flight, but don’t want to pay to “own” the movie. (Because of this I mostly watch TV shows, which only have a $3 “own” price and no rental price.)

How to fix this:

  1. IFE systems should know who I am, know my language, know if I have already seen the safety briefing, and not interrupt me for anything but new or plane-specific safety announcements in my chosen language.
  2. Like the Emirates systems, they should know where I am in each movie, as well as my tastes.
  3. How to know the language of the announcement? Well, you could have a button for the FA to push, but today software is able to figure out the language pretty reliably, so an automated system could learn the languages and the order in which they are done on that flight (see the sketch after this list). Software could also spot phrases like “Safety announcement” at the start of a public address, or there could be a button.
  4. Netflix should, like many other services, allow you to cache material for offline viewing. The material can have an expiration date, and the software can check when it’s online to update those dates, if you are really paranoid about people using the cache as a way to watch stuff after it leaves Netflix. Reportedly Amazon does this on the Kindle Fire.
  5. Online video stores (iTunes, Google Play, etc.) should offer a “plane rental” which allows you to finish a movie after the day you start it. In fact, why not have that ability for a week or two on all rentals? It would not let you restart, only let you watch material you have not yet viewed, plus perhaps a minute ahead of that.
  6. Perhaps I am greedy, but it would be nice if you could do a rental that lets 2 or more people in a household watch independently, so I watch it on my flight and she watches it on hers.
  7. If necessary, noise-cancelling headphones should have a “landing mode” that mixes in more outside sound, and a little airplane icon on them, so that we can keep them on during takeoff and landing. Or get rid of this pretty silly rule.
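
On the language point in item 3, here is a minimal sketch of how an IFE system might decide whether to interrupt a movie. It assumes the cabin audio has already been run through speech-to-text, and it uses the third-party langdetect package purely as an example of off-the-shelf language identification. The announcement strings, SAFETY_PHRASES and PASSENGER_LANGUAGE are all invented for illustration.

```python
from langdetect import detect  # pip install langdetect

# Invented examples of transcribed cabin announcements.
ANNOUNCEMENTS = [
    "Duty free shopping is now available in the forward cabin.",
    "Mesdames et messieurs, la vente hors taxes est maintenant ouverte.",
    "Safety announcement: please fasten your seat belts, we are expecting turbulence.",
]

PASSENGER_LANGUAGE = "en"
SAFETY_PHRASES = ("safety announcement", "fasten your seat belt", "brace")

def should_interrupt(transcript: str) -> bool:
    """Pause the movie only for safety announcements in the passenger's chosen language."""
    if detect(transcript) != PASSENGER_LANGUAGE:
        return False                       # a repeat in another language: let the movie run
    text = transcript.lower()
    return any(phrase in text for phrase in SAFETY_PHRASES)

for announcement in ANNOUNCEMENTS:
    print(should_interrupt(announcement), "-", announcement[:40])
```

Run on the three samples above, only the last one interrupts; duty-free pitches and repeats in other languages play on under the passenger's movie.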

Choosing your film

There’s a lot of variance in the quality of in-flight films. Air Canada seems particularly good at choosing turkeys. Before they close the doors, I look up movies — if I can get the IFE system to work with all the announcements — in review sites to figure out what to watch. In November, at Dublin Web Summit, I met the developers of a travel app called Quicket, which specialized in having its resources offline. I suggested they include ratings for the movies on each flight — the airlines publish their catalog in advance — in the offline data, and in December they had implemented it. Great job, Quicket.

Let me be a bit late for the plane, occasionally.

One of air travel’s great curses is that you have to leave for the airport a long time before your flight. Airlines routinely “recommend” you be there 2 or 3 hours ahead, and airport ride companies often take it to heart and want to pick you up many hours before even short flights. The curse is strongest on short flights, where you can easily spend as much as twice the time getting to the flight as you spend in the air.

The reality, though, is that it’s not nearly that strict. I often arrive much later. I’ve missed 3 flights in my life — in two cases because cheap airlines literally had nobody at the counter past their cutoff deadline, and once because United’s automated bag check line was very long (I got there before the deadline) but their computer is fully strict on the deadline while humans usually are not. In all cases, I got on another flight, and the time lost to these missed flights is vastly less than the time gained by not being at the airport so early.

But it’s getting harder. Airlines are getting stricter, and in a few cases offering no flexibility.

The big curse is that many of the delays can’t be predicted. It may almost always take 20 minutes to get to the airport, but every so often traffic will make it 40. Security is usually only 5-10 minutes but there are times when it’s 30. Car rental return, parking shuttles, called taxis and Ubers can have unexpected delays. Parking lots can be full (as happened to me this xmas after Uber failed me.) Immigration can range from 2 minutes to 1.5 hours if you have to go to secondary screening. While in theory you could research this, sometimes at strange airports you are surprised to find it’s a 30 minute walk and people-mover ride to your gate.

If you ever fly privately, though, you will discover a different world, where even if you’re just a guest you can arrive a very short time before your flight. (If you’re the owner, of course, it doesn’t take off until you get there.) But there are many options that can speed your trip through the airport without needing to fly a private jet:

  • Tools like Google Now track traffic and warn you when you need to leave earlier to get to the airport
  • If you take a cab to the airport, you eliminate the delays of parking and car return
  • Though rarer today, ability to check bags in advance at remote locations helps a lot
  • Curb checking of bags is great, as of course is online check-in sent to your phone
  • (Not checking bags is of course better, and any savvy flyer avoids it whenever they can, but sometimes you can’t.)
  • Premium passengers get check-in gates with minimal lines, and premium security lines
  • If you have a Global Entry or Nexus card, you can skip the immigration/customs line
  • TSA PRE, “Clear” and premium passenger security lines provide a no-wait experience. Of course nobody should ever have to wait, ever.
  • Failing that, offering appointments at security for a predictable security trip can remove the time risk
  • Sometimes they also let people who are at risk of missing a flight skip past the security line (and some other lines)
  • In some cases, premium passengers are shuttled in vehicles within the terminal or on the tarmac
  • Business class passengers can board as late as they want (or as early) and still get a place in the bins on most flights

In addition, I believe that if you wanted to get your checked bag cleared quickly by the TSA for money, it could happen. Of course, we can’t have everybody do this all the time, or so I presume, because it would require too much in the way of resources. But what if we allowed you to do this occasionally, when factors beyond your control have made you late?

What is proposed is that every so often — perhaps one time in twenty — when factors like traffic, long security lines or other things mostly beyond your control made you late, you could invoke an urgent need, and still make your flight.

This would allow you to budget a more reasonable time to arrive at the airport.

What does this all add up to? It should be possible, at an extra cost, to get a quick trip through the airport. Say that cost is $200 (I don’t think it’s that much, but say that it is.) You could pay $10 extra per flight for “insurance” and be able to invoke an urgent trip every so often when things go wrong (at one urgent trip in twenty flights, the $200 service averages out to that $10 premium). It’s worth it to pay every trip because it gives you a benefit on every trip — you leave later, knowing you will make it even if traffic, security lines or similar factors would delay you too much.

Some of the services you might get would include:

  • Somebody meets your car at the curb, takes your keys, and then parks it or returns it to the car rental facility
  • Another employee meets you and checks in your bags at the curb. Your bags are put in a special urgent queue in TSA inspection. If need be a staffer walks it through.
  • A golf cart takes you to security if it’s not close, and you get to the front of the line.
  • If your gate is far, another golf cart or escort takes you there

The natural question is, “why wouldn’t you want this all the time?” And indeed you would, and a large fraction of passengers would pay a fairly high fee to get this when they need it. Airlines might make it just part of the service with high-priced tickets or super-elite flyers, and I see no reason that should not happen. The price can be set so that the demand matches the supply, based on the cost of having extra employees to handle urgent passengers.

When it comes to more “public” resources like TSA screening, they have a simple rule. You can give premium services to premium passengers if what you do also speeds up the line for ordinary passengers. A simple implementation of this is to just pay for an extra screening station for the premium passengers, because now you don’t butt in line and in fact, by not being in the regular line at all, you speed it up for all in it. You don’t need to be so extravagant, however. For example, the “TSA PRE” line, which allows a faster trip through the X-ray (you don’t have to take anything out, or remove your shoes in this line) speeds up everybody because we all wait behind people doing that. If you can show that the amount you speed up the whole process is greater than the delay you add by letting premium passengers jump the queue, it is allowed.

But as fancy as these services sound, with extra staff, they are really not that expensive. Perhaps just 20 minutes of employee time for most of it — more if they are driving your car to a parking lot for you. (Note that this curb hand-off is forbidden by most airports because car rental companies already would like to offer it to their top customers but it is believed that would be too popular and increase traffic. Special permission would need to be arranged.)

For the “insurance” approach, a few techniques could assure it was not being abused. The frequency of use is one of them, of course, but you could also give people an app for their phones. This app, using GPS and knowing a flight is coming, would know when you left for the airport. In fact, it could give you alerts as to when to leave based on information about traffic, parking and security wait times. If you left at the reasonable departure deadline, you would get the urgent service if traffic or other surprise factors made you late. If you left after that deadline, you would not be assured the fast track path.
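
Here is a minimal sketch of the insurance logic such an app might use, under my own assumptions: it adds up live estimates for the drive, parking, security and the walk to the gate, derives a "reasonable departure deadline", and grants the fast-track only if you actually left home by that deadline and have not exhausted your quota. All of the names, numbers and the quota rule are invented for illustration, not a description of any real service.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class LiveEstimates:
    """Invented inputs; a real app would pull these from traffic, parking and
    airport-queue data sources."""
    drive: timedelta
    parking_or_dropoff: timedelta
    security_wait: timedelta
    walk_to_gate: timedelta


BUFFER = timedelta(minutes=10)   # small cushion the passenger always budgets


def departure_deadline(boarding_ends: datetime, est: LiveEstimates) -> datetime:
    """Latest reasonable time to leave home, given the current estimates."""
    total = est.drive + est.parking_or_dropoff + est.security_wait + est.walk_to_gate
    return boarding_ends - total - BUFFER


def qualifies_for_fast_track(left_home: datetime,
                             boarding_ends: datetime,
                             est_at_departure: LiveEstimates,
                             urgent_uses_this_year: int,
                             allowed_uses: int = 3) -> bool:
    """Grant the urgent service only if the passenger left on time and still has
    quota left -- i.e. the lateness came from factors beyond their control."""
    if urgent_uses_this_year >= allowed_uses:
        return False
    return left_home <= departure_deadline(boarding_ends, est_at_departure)


if __name__ == "__main__":
    est = LiveEstimates(drive=timedelta(minutes=25),
                        parking_or_dropoff=timedelta(minutes=10),
                        security_wait=timedelta(minutes=15),
                        walk_to_gate=timedelta(minutes=10))
    boarding_ends = datetime(2015, 1, 20, 9, 40)
    print("Leave by:", departure_deadline(boarding_ends, est))       # 08:30
    print("Fast track?", qualifies_for_fast_track(
        left_home=datetime(2015, 1, 20, 8, 25),
        boarding_ends=boarding_ends,
        est_at_departure=est,
        urgent_uses_this_year=1))                                    # True
```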

What would be better would be an app that actually works with all the airport functions you will interact with — check in, the gate, bag check, passenger screening, parking lots, rental cars, traffic etc. Their databases could know their state, any special conditions, and not only recommend a time to leave that will work, but even make appointments for you and tell you when to leave for them. Then your phone could guide you through the airport and do all the hard work. It would provide an ID to get you your appointment at security. It might tell you not to drive your own car and take a car service instead, if that’s easier than having your car parked for you. It would coordinate for all the passengers using the system to make sure they flow through the airport in a well regulated manner, with no surprises, so that people don’t have to try to get there hours in advance.