Future proofing video with high-res stills

On Saturday I wrote about how we’re now capturing the world so completely that people of the future will be able to wander around it in accurate VR. Let’s go further and see how we might shoot the video resolutions of the future, today.

Almost everybody has a 1080p HD camera with them — nearly all phones and pocket cameras shoot it. HD looks great, but the future’s video displays will do 4K, 8K and full eye-resolution VR, and so our video today will look blurry the way old NTSC video looks blurry to us. In a bizarre twist, in the middle of the 20th century, everything was shot on film at a resolution comparable to HD. But from the 70s to the 90s our TV shows were shot on NTSC tape, and thus dropped in resolution. That’s why you can watch Star Trek in high-def but not “The Wire.”

I predict that complex software in the future will be able to do a very good job of increasing the resolution of video. One way it will do this is through making full 3-D models of things in the scene using data from the video and elsewhere, and re-rendering at higher resolution. Another way it will do this is to take advantage of the “sub-pixel” resolution techniques you can do with video. One video frame only has the pixels it has, but as the camera moves or things move in a shot, we get multiple frames that tell us more information. If the camera moves half a pixel, you suddenly have a lot more detail. Over lots of frames you can gather even more.
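To make the sub-pixel idea concrete, here is a minimal “shift-and-add” super-resolution sketch in Python with NumPy. It assumes the per-frame sub-pixel shifts are already known — in a real system, estimating those shifts from the video is the hard part:

```python
import numpy as np

def shift_and_add(frames, offsets, scale):
    """Naive shift-and-add super-resolution.

    frames  : list of low-res 2-D arrays (all the same shape)
    offsets : per-frame (dy, dx) sub-pixel shifts, in low-res pixels
    scale   : upsampling factor (e.g. 2 doubles the resolution)
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Each low-res sample lands on the high-res grid at a position
        # determined by its sub-pixel shift; different shifts fill in
        # different high-res cells, which is where the extra detail comes from.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1
    hits[hits == 0] = 1          # leave unobserved high-res cells at zero
    return acc / hits
```

A frame shifted by half a pixel fills in exactly the high-res cells that an unshifted frame misses — which is the “suddenly you have a lot more detail” effect described above.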

This will already happen with today’s videos, but what if we help them out? For example, if you have still photographs of the things in the video, this will allow clever software to fill in more detail. At first, it will look strange, but eventually the uncanny valley will be crossed and it will just look sharp. Today I suspect most people shooting video on still cameras also shoot some stills, so this will help, but there’s not quite enough information if things are moving quickly, or new sides of objects are exposed. A still of your friend can help render them in high-res in a video, but not if they turn around. For that the software just has to guess.

We might improve this process by designing video systems that capture high-res still frames as often as they can and embed them in the video. Storage is cheap, so why not?

A typical digital video/still camera has 16 to 20 million pixels today. When it shoots 1080p HD video, it combines those pixels, so that 6 to 10 still pixels go into every video pixel. Ideally this is done by hardware right in the imaging chip, but it can also be done, to a lesser extent, in software. A few cameras already shoot 4K, and this will become common in the next couple of years. In this case, they may just use the pixels one for one, since it’s not so easy to map a 16-megapixel 3:2 still array into a 16:9, 8-megapixel 4K image. You can’t just combine 2 pixels per pixel.
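The “combining pixels” step is just binning: averaging blocks of sensor pixels into one video pixel. A toy sketch of the idea (real cameras do this in hardware on the chip, and deal with the aspect-ratio crop first):

```python
import numpy as np

def bin_pixels(sensor, k):
    """Combine k x k blocks of sensor pixels into one output pixel
    by averaging -- a crude stand-in for on-chip pixel binning."""
    h, w = sensor.shape
    h, w = h - h % k, w - w % k          # crop so the dimensions divide evenly
    return sensor[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
```

Averaging blocks is also why binned video is cleaner in low light: each output pixel pools the signal (and averages out the noise) of several sensor pixels.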

Most still cameras won’t shoot a full-resolution video (i.e. a 6K or 8K video) for several reasons:

  • As designed, you simply can’t pull that much data off the chip per unit time. It’s a huge amount of data. Even with today’s cheap storage, it’s also a lot to store.
  • Still camera pipelines are built to compress JPEGs, but to record video you want a video compression algorithm, even if you can afford the storage for full-resolution frames.
  • Nobody has displays to show 6K or 8K video, and only a few people have 4K displays — though this will change — so demand is not high enough to justify these costs.
  • When you combine pixels, you get less noise and can shoot in lower light. That’s why your camera can make a decent night-time video without blurring, but it can’t shoot a decent still in that lighting.

What is possible is a sensor which is able to record video (at the desired 30fps or 60fps rate) and also pull off full-resolution stills at some lower frame rate, as long as the scene is bright enough. That frame rate might be something like 5 or even 10 fps as cameras get better. In addition, hardware compression would combine the stills and the video frames to eliminate the great redundancy, though only to a limited extent because our purpose is to save information for the future.

Thus, if we hand the software of the future an HD video along with 3 to 5 frames/second of 16-megapixel stills, I am confident it will be able to make a very decent 4K video from it most of the time, and often a decent 6K or 8K video. As noted, a lot of that can happen even without the stills, but they will just improve the situation. Those situations where it can’t — fast-changing objects — are also situations where video gets blurred and we are tolerant of lower resolution.

It’s a bit harder if you are already shooting 4K. To do this well, we might like a 38 megapixel still sensor, with 4 pixels for every pixel in the video. That’s the cutting edge in high-end consumer gear today, and will get easier to buy, but we now run into the limitations of our lenses. Most lenses can’t deliver 38 million pixels — not even many of the high-end professional photographer lenses can do that. So it might not deliver that complete 8K experience, but it will get a lot closer than you can from an “ordinary” 4K video.

If you haven’t seen 8K video, it’s amazing. Sharp has been showing their one-of-a-kind 8K video display at CES for a few years. It looks much more realistic than 3D videos of lower resolution. 8K video can subtend over 100 degrees of viewing angle at one pixel per minute of arc, which is about the resolution of the sensors in your eye. (Not quite, as your eye also does sub-pixel tricks!) At 60 degrees — which is more than any TV is set up to subtend — it’s the full resolution of your eyes, and provides an actual limit on what we’re likely to want in a display.

And we could be shooting video for that future display today, before the technology to shoot that video natively exists.

Near-perfect virtual reality of recent times and tourism

Recently I tried the Facebook/Oculus Rift Crescent Bay prototype. It has more resolution (I will guess 1280 x 1600 per eye or similar) and runs at 90 frames/second. It also has better head tracking, so you can walk around a small space with some realism — but only a very small space. Still, it was much more impressive than the DK2 and a sign of where things are going. I could still see a faint screen-door effect; they were annoyed that I could see it.

We still have a lot of resolution gain left to go. The human eye sees about a minute of arc, which means about 5,000 pixels for a 90 degree field of view. Since we have some ability for sub-pixel resolution, one might argue that 10,000 pixels of width are needed to reproduce the world. But that’s not that many Moore’s law generations from where we are today. The graphics rendering problem is harder, though with high frame rates, if you can track the eyes, you need only render full resolution where the fovea of the eye is. This actually gives a boost to onto-the-eye systems like a contact lens projector or the rumoured Magic Leap technology, which may project with lasers onto the retina, as they actually need to render far fewer pixels. (Get really clever, and realize the optic nerve only has about 600,000 neurons, and in theory you can get full real-world resolution with half a megapixel if you do it right.)
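The angular-resolution arithmetic behind these numbers is easy to check in a couple of lines of Python:

```python
def pixels_for_fov(fov_degrees, arcmin_per_pixel=1.0):
    """Pixels of width needed for a given angular resolution across a
    field of view (flat-screen approximation; ignores optics and fovea)."""
    return fov_degrees * 60 / arcmin_per_pixel

print(pixels_for_fov(90))   # 5400 -- roughly the "about 5,000 pixels" above
print(7680 / 60)            # an 8K display spans ~128 degrees at one arcmin/pixel
```

The second line is why 8K "can subtend over 100 degrees of viewing angle at one pixel per minute of arc": 7,680 horizontal pixels divided by 60 arcminutes per degree gives about 128 degrees.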

Walking around Rome, I realized something else — we are now digitizing our world, at least the popular outdoor spaces, at a very high resolution. That’s because millions of tourists are taking billions of pictures every day of everything from every angle, in every lighting. Software of the future will be able to produce very accurate 3D representations of all these spaces, both with real data and reasonably interpolated data. They will use our photographs today and the better photographs tomorrow to produce a highly accurate version of our world today.

This means that anybody in the future will be able to take a highly realistic walk around the early 21st century version of almost everything. Even many interiors will be captured in smaller numbers of photos. Only things that are normally covered or hidden will not be recorded, but in most cases it should be possible to figure out what was there. This will be trivial for fairly permanent things, like the ruins in Rome, but even possible for things that changed from day to day in our highly photographed world. A bit of AI will be able to turn the people in photos into 3-D animated models that can move within these VRs.

It will also be possible to extend this VR back into the past. The 20th century, before the advent of the digital camera, was not nearly so photographed, but it was still photographed quite a lot. For persistent things, the combination of modern (and future) recordings with older, less frequent and lower resolution recordings should still allow the creation of a fairly accurate model. The further back in time we go, the more interpolation and eventually artistic interpretation you will need, but very realistic seeming experiences will be possible. Even some of the 19th century should be doable, at least in some areas.

This is a good thing, because as I have written, the world’s tourist destinations are unable to bear the brunt of the rising middle class. As the Chinese, Indians and other nations get richer and begin to tour the world, their greater numbers will overcrowd those destinations even more than the waves of Americans, Germans and Japanese that already mobbed them in the 20th century. Indeed, with walking chairs (successors of the BigDog Robot) every spot will be accessible to everybody of any level of physical ability.

VR offers one answer to this. In VR, people will visit such places and get the views and the sounds — and perhaps even the smells. They will get a view captured at the perfect time in the perfect light, perhaps while the location is closed for digitization and thus empty of crowds. It might be, in many ways, a superior experience. That experience might satisfy people, though some might find themselves more driven to visit the real thing.

In the future, everybody will have had a chance to visit all the world’s great sites in VR while they are young. In fact, doing so might take no more than a few weekends, changing the nature of tourism greatly. This doesn’t alter the demand for the other half of tourism — true experience of the culture, eating the food, interacting with the locals and making friends. But so much commercial tourism — people being herded in tour groups to major sites and museums, then eating at tour-group restaurants — can be replaced.

I expect VR to reproduce the sights and sounds and a few other things. Special rooms could also reproduce winds and even some movement (for example, the feeling of being on a ship.) Right now, walking is harder to reproduce. With the Oculus Rift Crescent Bay you could only walk 2-3 feet, but one could imagine warehouse-sized spaces or even outdoor stadia where large amounts of real walking might be possible if the simulated surface is also flat. Simulating walking over rough surfaces and stairs offers real challenges. I have tried systems where you walk inside a sphere but they don’t yet quite do it for me. I’ve also seen a system where you are held in place and move your feet in slippery socks on a smooth surface. Fun, but not quite there. Your body knows when it is staying in one place, at least for now. Touching other things in a realistic way would require a very involved robotic system — not impossible, but quite difficult.

Also interesting will be immersive augmented reality. There are a few ways I know of that people are developing:

  • With a VR headset, bring in the real world with cameras, modify it and present that view to the screens, so they are seeing the world through the headset. This provides a complete image, but the real world is reduced significantly in quality, at least for now, and latency must be extremely low.
  • With a semi-transparent screen, show the augmentation with the real world behind it. This is very difficult outdoors, and you can’t really stop bright items from the background mixing with your augmentation. Focus depth is an issue here (and is with most other systems.) In some plans, the screens have LCDs that can go opaque to block the background where an augmentation is being placed.
  • CastAR has you place retroreflective cloth in your environment, and it can present objects on that cloth. They do not blend with the existing reality, but replace it where the cloth is.
  • Projecting into the eye with lasers from glasses, or on a contact lens can be brighter than the outside world, but again you can’t really paint over the bright objects in your environment.

Getting back to Rome, my goal would be to create an augmented reality that let you walk around ancient Rome, seeing the buildings as they were. The people around you would be converted to Romans, and the modern roads and buildings would be turned into areas you can’t enter (since we don’t want to see the cars, and turning them into fast chariots would look silly.) There have been attempts to create a virtual walk through ancient Rome, but being able to do it in the real location would be very cool.

Fixing money in politics: Free, open "campaign in a box"

I’m waiting at CDG in Paris, so it’s time to add a new article to my series about fixing money in politics by looking at another thing campaigns spend money on (and thus raise money for), namely management of their campaigns.

A modern campaign is a complex thing. And yes, most of the money is spent on advertising, GOTV, events and staff. But there’s also a lot of logistics, and a fair amount of software.

In the USA, each big election, both major parties rebuild an election software system largely from scratch. It’s actually the right strategy. By the time the next election comes in 4 years, the internet and our hardware and software tools will have changed so much that trying to modify the old legacy system would be a mistake. So they avoid it, at some cost.

There may be a Presidential election in the USA every 4 years, but around the world, there’s an election somewhere every week or two. So a general “campaign in a box” software package would find regular use, and get regular updating. I propose that this could be done as open source software. Campaigns have reason to be suspicious of any black-box software they might be given, but open source software would let them verify the security of the software, and let them improve it for the world.

There’s only one catch. When one party comes up with a great new tool, they want to keep it as their advantage. They don’t want to give it to the other side. They don’t want to let the other side, or sometimes even the public, see just how they do things. This might counter the virtues of open source. One could imagine a rule that did not require changes to be published until the end of the current election, but that still gives the tools to the “enemy” in the next election. But you get their tools, so it may be a decent exchange. Big computer companies have been happy with this trade.

In the box would be tools for full management of campaign staff and volunteers, events, advertising, GOTV and more. Yes, even though I recently ranted about the damage caused by GOTV, you can’t put political bias into these tools if they are to work. You have to give the campaigns what they want, even if they want tools to spam, run negative ads and do GOTV. But giving them a nice web site can always help.

The real goal is to make it easier and cheaper to run a campaign. With good software, including good tools for building political ads online and on YouTube, it becomes possible to run a small campaign with more volunteers and less money, so that candidates feel they can get elected without raising huge sums and becoming beholden.

Election in a Box

Campaign in a box could extend beyond tools for campaigns. It could be part of “Election in a Box,” which could provide a suite of open source tools to help both small and large organizations and political jurisdictions run elections well. Not necessarily digital voting or online voting, as I spoke about earlier in the New Democracy topic, but all the other logistics of an election. There are also good designs for open source voting machines which have a donated computer help produce a paper ballot which can be examined by the voter, and then inserted into a scanner to help count it for audited voting.

It could also include tools for doing online candidate debates on sites like YouTube. Imagine a platform where candidates make video clips of themselves answering a set of questions or talking on a set of issues, and then allowing them to make response videos to any other candidate’s video, and to make response videos in turn. This would allow any voter to say, “I want to see a debate between these 3 candidates on these 4 issues” and you could keep watching back and forth until you got bored. Software to do this could bump up the political discourse, perhaps. At least the debates could be a little more engaged and real, and minor parties could participate if people want to see them. Pundits could tell people, “Hey, watch what the Libertarian says in the Health Care question.”
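The response-video platform described above is, structurally, just a tree of clips plus a filtered traversal. Here is a hypothetical sketch — all the names (`Clip`, `respond`, `debate_thread`) are mine for illustration, not any real platform’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    """One candidate's video on one issue; response videos form a tree."""
    candidate: str
    issue: str
    replies: list = field(default_factory=list)

    def respond(self, reply):
        """Attach a response video and return it, so chains can be built."""
        self.replies.append(reply)
        return reply

def debate_thread(clip, candidates, issues):
    """Walk the reply tree in thread order, keeping only the viewer's
    chosen candidates and issues -- the 'these 3 candidates on these
    4 issues' query from the text."""
    if clip.candidate in candidates and clip.issue in issues:
        yield clip
    for reply in clip.replies:
        yield from debate_thread(reply, candidates, issues)

# A viewer watching one back-and-forth on one issue:
root = Clip("Libertarian", "Health Care")
root.respond(Clip("Green", "Health Care")).respond(Clip("Libertarian", "Health Care"))
picks = [c.candidate for c in debate_thread(root, {"Libertarian", "Green"}, {"Health Care"})]
print(picks)   # ['Libertarian', 'Green', 'Libertarian']
```

Because minor-party clips live in the same tree, including the Libertarian in the health-care thread is just a matter of adding their name to the filter set.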

Election in a box would also be very valuable to small countries and newly formed countries that don’t have the experience and tools to build such systems on their own.

We in the open source community have done so much to generate and support great software that has been given free to the world for huge economic gain. Perhaps similar effort can save a lot of money for politicians, and make them raise less of it.

Are today's challenges of making robocars dealbreakers?

There’s been a lot of press recently about an article in Slate by Lee Gomes which paints a pessimistic picture of the future of robocars, and particularly Google’s project. The Slate article is a follow-on to a similar article in MIT Tech Review.

Gomes and others seem to feel that they and the public were led to believe that current projects were almost finished and ready to be delivered any day, and they are disappointed to learn that these vehicles are still research projects and prototypes. In a classic expression of the Gartner Hype Cycle, there are now predictions that the technology is very far away.

Both predictions are probably wrong. Fully functional robocars that can drive almost everywhere are not coming this decade, but nor are they many decades away. But more to the point, less-functional robocars are probably coming this decade — much sooner than these articles expect, and these vehicles are much more useful and commercially viable than people may expect.

There are many challenges facing developers, and those challenges will keep them busy refining products for a long time to come. Most of those challenges either already have a path to solution, or constrain a future vehicle only in modest ways that still allow it to be viable. Some of the problems are in the “unsolved” class. It is harder to predict when those solutions will come, of course, but at the same time one should remember that many of the systems in today’s research vehicles were in this class just a few years ago. Tackling hard problems is just what these teams are good at doing. This doesn’t guarantee success, but neither does it mean you should bet against it.

And very few of the problems seem to be in the “unsolvable without human-smart AI” class — at least, none of them bars highly useful operation.

Gomes’ articles have been the major trigger of the press coverage, so I will go over those issues in detail here first. Later, I will produce an article covering even more challenges than those listed, and what people hope to do about them. Still, the critiques read almost as though they expected Google and others, rather than making announcements like “Look at the new milestone we are pleased to have accomplished,” to instead say, “Let’s tell you all the things we haven’t done yet.”

Gomes begins by comparing the car to the Apple Newton, but forgets that a few years after the Newton fizzled we had the success of the Palm Pilot, and a decade after that Apple came back with the world-changing iPhone. Today, the pace of change is much faster than it was then.

Here are the primary concerns raised:

Maps are too important, and too costly

Google’s car, and others, rely on a clever technique that revolutionized the DARPA challenges. Each road is driven manually a few times, and the scans are then processed to build a super-detailed “ultramap” of all the static features of the road. This is a big win because big server computers get to process the scans in as much time as they need, and see everything from different angles. Then humans can review and correct the maps and they can be tested. That’s hard to beat, and you will always drive better if you have such a map than if you don’t.

Any car that could drive without a map would effectively be a car that’s able to make an adequate map automatically. As things get closer to that, making maps will become cheaper and cheaper.

Naturally, if the road differs from the map, due to construction or other changes, the vehicle has to notice this. That turns out to be fairly easy. Harder is assuring it can drive safely in this situation. That’s still a much easier problem than being able to drive safely everywhere without a map, and in the worst case, the problem of the changed road can be “solved” by just the ability to come to a safe stop. You don’t want to do that super often, but it remains the fail-safe out. If there is a human in the car, they can guide the vehicle in this. Even if the vehicle can’t figure out where to go to be safe, the human can. Even a remote human able to look at transmitted pictures can help the car with that — not live steering, but strategic guidance.

This problem only happens to the first car to encounter the surprise construction. If that car is still able to navigate (perhaps with human help,) the map can be quickly rebuilt, and if the car had to stop, all unmanned cars can learn to avoid the zone. They are unmanned, and thus probably not in a hurry.

The cost of maps

In the interests of safety, a lot of work is put into today’s maps. It’s a cost that somebody like Google or Mercedes can afford if they need to (after all, Google’s already scanned every road in many countries multiple times), but it would be high for smaller players.

Is Carpool cheating the answer?

A recent newspaper column where people complained about carpool cheats got me thinking — could cheating actually be a solution to some carpool problems?

For many years, the wisdom was that carpool lanes were helping traffic and the environment, but that wisdom has been changing, and it is now seen that the lanes actually hurt (at least the traffic) in many cases. As such, the new approach is to build “managed lanes” and in particular the High-Occupancy-Toll (HOT) lanes which let solo drivers pay to use the lane. In addition, low emission cars and motorcycles usually get to use the lanes solo.

Why does this help? It turns out that a typical configuration of 3 solo lanes and one carpool lane performs badly when the carpool lane is well under capacity. The ideal road would have all 4 lanes running just under 100% capacity (around 2,000 cars per hour per lane, or 8,000 for the whole road.) At rush hour, however, the lanes often collapse into stop-and-go congestion, which can drop a lane as low as 1,300 vehicles/hour.

Carpool approaches suggest that if you have one carpool lane running at less than capacity (and thus congestion free and highly attractive) that you will make people choose to carpool. Each carpool takes a car or two off the road, which is a win for congestion (and the environment.)

Consider one carpool situation, where the carpool lane is running free at 50% of capacity, and the other 3 lanes are at 100% of capacity. You’re now moving 7,000 vehicles/hour instead of 8,000, but that would be OK if it’s because you took more than 1,000 vehicles off the road.

Unfortunately that’s not even remotely true. The vast majority of the carpools on the road are natural carpools that would have happened anyway: couples or families travelling together, and “kidpools,” which in almost all cases take no car off the road. The permitted solo drivers in low-emission vehicles and motorcycles don’t remove cars, but are greener. The number of “induced” carpools — carpools created because of the attractive travel time offered by the carpool lane — is quite low. Perhaps as low as 10%, but likely not more than 20%. HOV-3 lanes may have more induced carpools.

To make it worse, consider a carpool lane at 70% usage (good) but the 3 other lanes in congestion, and now getting 1,500 vehicles per hour. We’ve dropped our road to just 5,900 cars per hour. And at 20% induced carpools we only took 280 cars off the road, for a total of 6,180 instead of our ideal of 8,000. There is a zone of congestion where moving another 500 cars from the solo lanes to the carpool lane would relieve the congestion in the solos, and we would get closer to our 8,000.
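The arithmetic in these scenarios can be wrapped in a tiny model. This is just the example above restated in Python, with the lane capacity and percentages taken from the text:

```python
CAPACITY = 2000   # vehicles/hour for one free-flowing lane

def road_throughput(carpool_utilization, solo_flow_per_lane, induced_share):
    """Flow for a road with 1 carpool lane + 3 solo lanes.

    carpool_utilization : fraction of the carpool lane's capacity in use
    solo_flow_per_lane  : vehicles/hour actually moving in each solo lane
    induced_share       : fraction of carpools that really removed a car
    Returns (vehicles physically moving, effective flow counting removed cars).
    """
    carpool = carpool_utilization * CAPACITY
    moving = carpool + 3 * solo_flow_per_lane
    removed = induced_share * carpool        # cars the lane took off the road
    return moving, moving + removed

# Carpool lane at 70%, solo lanes congested at 1,500 each, 20% induced:
moving, effective = road_throughput(0.70, 1500, 0.20)
print(moving, effective)   # roughly 5900 and 6180, versus the ideal 8000
```

Playing with the inputs shows the HOT-lane logic: shifting 500 paying solo drivers into the underused carpool lane raises both numbers without needing a single new carpool.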

That’s what HOT lanes are about. By charging a fee, they move solo drivers who are willing to pay to use the underutilized carpool lane, and we remove them from the solos, increasing their throughput as well. It’s a win-win-win. HOT lanes adjust the price — if the carpool lane is starting to fill up, the price jacks up. The goal is to keep the carpool lane enough below 100% capacity that it flows smoothly, which is good for flow and also what makes it attractive in the first place to make those induced carpools.

With HOT, you can have 1,000 carpoolers and 900 paying solos and also 200 induced carpools so the lane is now delivering the equivalent of 2,100 vehicles/hour and everybody wins. Letting efficient solos use the lane doesn’t involve money, but subsidizes efficient vehicles.

Without HOT, the bizarre conclusion is that cheaters are helping move traffic along. Cheaters only cheat when the carpool lane is going really well — ie. underutilized — and the solo lanes are getting congested. Cheaters take some load off the solo lanes and make use of the wasted capacity. They will not cheat if the carpool lane is not beating the solo lanes by a nice margin. If the carpool lane gets overloaded, they are going to leave it — why risk the ticket?

I should note that I have never, ever deliberately cheated in the carpool lane. (Like most, once or twice I have forgotten what time it was for a minute or two.) I am not trying to justify cheating, and in fact one concern is that some cheaters will read this and imagine they are doing a service. Cheaters are helping the system, but in a completely unfair and inappropriate way.

Legitimizing Cheating

One reason we don’t have more HOT lanes, now that people realize that they are better, is that it costs a lot of money to put them in. Part of that money is for infrastructure — gantries, transponders, signs with prices, enforcement teams, operations teams. The biggest cost comes from the fact that generally people like to make HOT lanes truly separate from the main lanes, with a double line, and entry/exit only allowed at certain points. That means restriping or even new construction.

Many of the world’s transit systems work on an honour system. You have to buy a ticket, but nothing checks this. Instead, if you are caught on board without a ticket, you pay a fat fine. The fine is often calculated to balance the enforcement level, so that a regular cheater will be caught enough that it’s more expensive to cheat than to buy tickets. But often not a lot more expensive, as it turns out.

What if HOT lanes were the same way? Go ahead and cheat! Install random enforcement stations with cameras, and enforce enough so that any regular “cheater” gets fines which are calculated to collect as much or more money than the tolls.

The obvious flaw here is that this only works for the regular cheater. It’s too random, and an occasional lane user (or tourist) would be taking a big gamble, without enough use to balance it out. So we can add payment by cell phone to even things out.

Online payment

Before leaving, or after arriving, tell your phone or browser you will be using or did use the lane. (The reason to do it in advance is you will get a better price.) Your phone can show you the price, and some road signs will display it as well. This gives you a token which includes the time and your licence plate. If you get a fine notice, you can nullify it by providing the token.

(If you don’t care about privacy, you could register the licence plate directly. But I do care about privacy.)
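One way to issue tokens without the system having to remember them is a keyed hash (HMAC) over the plate and a coarse time window: the authority can recompute the MAC when a driver presents the token to nullify a fine. This is a sketch of one possible design, not any agency’s actual scheme, and the key and window format are invented for illustration:

```python
import hmac
import hashlib

# Hypothetical secret held only by the tolling authority.
SERVER_KEY = b"tolling-authority-secret"

def issue_token(plate, window):
    """Stateless token: an HMAC over plate + time window (e.g. one hour).
    Nothing needs to be stored server-side; the MAC can be recomputed."""
    msg = f"{plate}|{window}".encode()
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()

def token_valid(plate, window, token):
    """Check a token presented to nullify a fine notice."""
    return hmac.compare_digest(issue_token(plate, window), token)

token = issue_token("ABC123", "2015-01-05T08")   # hour-granularity window
assert token_valid("ABC123", "2015-01-05T08", token)
assert not token_valid("XYZ999", "2015-01-05T08", token)
```

A real deployment would need more care (key rotation, advance vs. retroactive pricing encoded in the window), but the point stands: the token can prove payment without the system keeping a travel log.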

This works with minimal new infrastructure, as long as you don’t need to reconfigure the lanes. Payment via phone would be set to be cheaper than the average you would pay through random fines, so most people would do it.

Enforcement can involve cameras, which may or may not be recording. You need enough of them that people can’t just briefly switch out of the carpool lane before coming to a camera, so this has some infrastructure cost. The camera would record a photo of the front seats of your car, and your plate. In isolated carpool lanes this works better.

This is aimed at places where 2 is a carpool. It means something controversial: carpoolers must share the front seat. And that means no kidpooling with children small enough to be required to ride in the back seat. Some people will hate that (parents) and some will love it (those who feel that kidpooling is unfair because it almost never causes an induced carpool.) This controversy can be somewhat mitigated by offering a discount to people who declare they are kidpooling (or better, multi-family kidpooling), with occasional checks.

It’s also an issue for taxis, Uber and people with chauffeurs. Forcing the latter to pay won’t bother many people. Taxis can be given special status. Ad-hoc taxis, like Uber, can be told, “hey, just have the passenger ride in the front if you want a free entry.” Is that such a big burden? If so, alternate systems can be set up, including requesting a token over the smartphone which can be compared to audited records of fares.

The camera stations could also photograph in through the sides of vehicles. Tinted side windows would not get to be carpools. This is harder than just doing the front, and harder to hide. And there would still be occasional live human observers, to the extent that cost allows.

To avoid the risk of people using phones while driving, we simply allow you to buy a retroactive token within a day of your trip. (You don’t learn about your fine for a couple of days.) You could do that on the web, on a smartphone, by text (retroactive only) or even at any convenience store or gas station that has a payment machine. (This idea is not new. A decade ago I drove a toll road in Melbourne which lets you buy a toll pass at a gas station after you drive the road.)

Or, of course, just pay the fines if they are not that much more expensive, on average, than buying tokens.

Even carpoolers could register that they carpooled, in case a problem comes up. Users will want to register an e-mail address or app address with the system under their plate to get notices of fines. If you don’t, notices would come by postal mail. If somebody else registers your plate and you don’t, it might delay notice of fines but you would fix this after the first one. If the typical toll is $3, and the fine is $300, you probably would get a fine notice you need to nullify perhaps every 75 uses on average. This makes paying cheaper. The smartphone app would also notice when you travel the route and remind you.
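The calibration above is simple expected-value arithmetic, using the $3 toll, $300 fine and one-fine-per-75-uses numbers from the text:

```python
def expected_cost_per_trip(fine, catch_rate):
    """Average cost per trip for someone who never buys a token."""
    return fine * catch_rate

TOLL = 3.00
FINE = 300.00
CATCH_RATE = 1 / 75          # one fine notice per ~75 uses

cost_unregistered = expected_cost_per_trip(FINE, CATCH_RATE)
print(cost_unregistered)     # about $4 per trip, so the $3 token stays cheaper
```

Tuning either the fine or the enforcement rate moves that $4 figure, which is the lever the operator uses to keep honest pre-payment the rational choice.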

To protect privacy, the system would not remember tokens it issues, and it would erase all images once it was confirmed the car was legit (carpool, allowed vehicle or had a token.) Only the images of non-carpools who did not respond with their token would be retained for issuing fines to their car.

There can be problems with photo enforcement if it is dark (as it is during winter for portions of rush hour) or in places where the sun is at just the wrong angle. The latter can be fixed because we know just where the sun will be. The former is more challenging. Cameras would need to be placed in line with suitable street lights, and have larger lenses. During the day, used cell phones in rainproof cases with tiny solar panels could do the job at low cost.

Live public test in Singapore

In late August, I visited Singapore to give an address at a special conference announcing a government-sponsored collaboration involving their Ministry of Transport, the Land Transport Authority and A*STAR, the government-funded national R&D centre. I got a chance to meet the minister and sit down with officials to talk about their plans, and 6 months earlier I got the chance to visit A*STAR and also the car project at the National University of Singapore. At the conference, there were demos of vehicles, including one from Singapore Technologies, which primarily does military contracting.

Things are moving fast there. This week, the NUS team, which has made much progress, announced a live public demo of their autonomous golf carts. They will be running the carts over a course with 10 stops in the Singapore Chinese and Japanese Gardens. The public will be able to book rides online, and then come and summon and direct the vehicles with their phones. The vehicles will have a touch tablet where the steering wheel would normally go. Rides will be free. Earlier, they demonstrated not just detecting pedestrians but driving around them (if they stay still), but I don’t know if this project includes that.

This is not the first such public demo - the CityMobil2 demonstration in Sardinia ran in August, on a stretch of beachfront road blocked to cars but open to bicycles, service vehicles and pedestrians. That project limited itself to unacceptably slow speeds and offered only a linear route.

The Singapore project will also mix with pedestrians, but the area is closed to cars and bicycles. There will be two safety officers on bicycles riding behind the golf carts, able to shut them down if any problem presents, and speed will also be limited.

Singapore is interesting because they have a long history of transportation innovation, and good reason for it. As a city-state, it’s almost all urban, and transportation is a real problem. That’s why congestion charging was first developed in Singapore, along with other innovations. Every vehicle in Singapore has a transponder, and they use them not just for congestion tolling, but to pay for parking seamlessly in almost all parking lots and a few other tricks.

In spite of this history of innovation, Singapore is also trending conservative — this might dampen truly fast innovation, but this joint project is a good start. I advised them that, in my view, private projects will be able to move faster than public-sector ones.

The NUS project is a collaboration with MIT, involving professor Emilio Frazzoli. Their press release has more details, including maps showing that the route is non-linear but the speed is slow.

Tesla, Audi and other recent announcements

Some recent announcements have caused a lot of press stir, and I have not written much about them, partly because of my busy travel schedule, but also because there is less news than we might imagine.

Tesla is certainly an important company to watch. As the first successful start-up car company in the USA, they are showing they know how to do things differently, taking advantage of the fact that they don’t have a baked-in knowledge of “how a car company works” the way other companies do. Tesla’s announcements of plans for more self-driving are important. Unfortunately, the announcements around the new dual-motor Model S involve offerings quite similar to what can already be found in cars from Mercedes, Audi and a few others: namely advanced ADAS, and the combination of lane-keeping and adaptive cruise control to provide a hands-off cruise control where you must still keep your eyes on the road.

One notable feature demonstrated by Tesla is automatic lane change, which you trigger by hitting a turn signal. That’s a good interface, but it must be made clear to people that they still have the duty to check that it’s safe to change lanes. It’s not that easy for a robocar’s sensors, especially the limited sensor package in the Tesla, to see a car coming up fast behind you in the next lane. On some highways relative speeds can get pretty high. You’re not likely to be hit by such cars, but in some cases that’s because they will probably brake for you, not because you did a fully safe lane change.

Much more interesting are Elon Musk’s predictions of a real self-driving car in 5 to 6 years. He means one where you can read a book, or even, as he suggests, go to sleep. Going to sleep is one of the greatest challenges, almost as hard as operating unmanned or carrying a drunk or disabled person. You won’t likely do that just with cameras — but 5 to 6 years is a good amount of time for a company like Tesla.

Another unusual thing about Tesla is that while they are talking about robocars a lot, they have also built one of the finest driver’s cars ever made. The Model S is great fun to drive, and has what I sometimes call a “telepathic” interface — the motors have so much torque that you can almost think about where you want to go and the vehicle makes it happen. (Other examples of telepathic interfaces include touch-typing and a stickshift.) In some ways it is the last car that people might want to automate. But it’s also a luxury vehicle, and that makes self-driving desirable too.

Audi Racing

Another recent announcement creating buzz is Audi’s self-driving race car on a test track in Germany. Audi has done racing demos several times now. They are at once important and unimportant. It definitely makes sense to study how to control a car in extreme, high-performance situations. Understanding the physics of the tires so fully that you can compete in racing will teach lessons of use in danger situations (like accidents) or certain types of bad weather.

At the same time, real-world driving is not like racing, and nobody is going to be doing race-like driving on ordinary streets in their robocar. 99.9999% of driving consists of “staying in your lane” and some other basic maneuvers, so racing is fun and sexy but not actually very high on the priority list. (Not that teams don’t deserve to spend some of their time on a bit of fun and glory.) The real work of building robocars involves putting them through all the road situations you can, both real and, in some cases, simulated on a track or in a computer.

Google first showed its system to many people by having it race figure-8s on the roof parking lot at the TED conference. The car followed a course through a group of cones at pretty decent speed and wowed the crowd with the tight turns. What most of the crowd didn’t know was that the cones were largely there for show. The car was guiding itself from its map of all the other physical things in the parking lot — line markers, pavement defects and more. The car is able to localize itself fine from those things. The cones just showed the public that it really was following the planned course. At the same time, making a car do that is something that was accomplished decades ago, and is used routinely to run “dummy cars” on car company test tracks.

A real demo turns out to be very boring, because that’s how being driven should be. I’m not saying it’s bad in any way to work on racing problems. The only error would be forgetting that the real-world driving problems are higher priority and success in them is less dramatic but more impressive in the technical sense.

This doesn’t mean we won’t see more impressive demos soon. Many people have shown off automatic braking. Eventually we will see demos of how vehicles respond in danger situations — accidents, pedestrians crossing into the road and the like. A tiny part of driving but naturally one we care about. And we will want them to understand the physics of what the tires and vehicle are capable of so that they perform well, but not so they can find the most efficient driving line on the track.

There was some debate about having a new self-driving car contest like the DARPA grand challenges, and a popular idea was man vs. machine, including racing. That would have been exciting. We asked ourselves whether a robot might have an advantage because it would have no fear of dying. (It might have some “fear” of smashing its owner’s very expensive car.) It turns out this happens on the racetrack fairly often with new drivers who try to get an edge by driving like they have no fear and will win all games of chicken. When this happens, the other drivers get together to teach that new driver a lesson: a lesson about cooperation and reciprocation in passing and drafting. So the robots would need to be programmed with that as well, or their owners would find a lot of expensive crashes and few victories.
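As a toy illustration of why reciprocation matters, here is a hedged sketch of repeated chicken; the payoffs and strategy names are invented for illustration, not drawn from any real racing data:

```python
# Toy model of the racetrack "lesson": repeated chicken, where a
# driver who always forces the issue gets punished by the group.

SWERVE, DARE = "swerve", "dare"
PAYOFF = {                      # (my move, their move) -> my payoff
    (SWERVE, SWERVE): 0,
    (SWERVE, DARE):  -1,        # lose a little position
    (DARE,   SWERVE): 1,        # win the pass
    (DARE,   DARE):  -20,       # crash: expensive for both
}

def play(me, opponent, rounds=100):
    """Run repeated chicken; return my total payoff."""
    total, history = 0, []
    for _ in range(rounds):
        my_move = me(history)
        their_move = opponent([(b, a) for a, b in history])  # their view
        total += PAYOFF[(my_move, their_move)]
        history.append((my_move, their_move))
    return total

def fearless(history):          # the new driver: never backs down
    return DARE

def enforcer(history):          # cooperates until you dare, then punishes
    if history and history[-1][1] == DARE:
        return DARE
    return SWERVE

# The fearless robot wins the first pass, then crashes every round
# after the others start reciprocating: -1979 over 100 rounds,
# versus 0 for mutual cooperation.
print(play(fearless, enforcer))
print(play(enforcer, enforcer))
```

The numbers are arbitrary, but the structure is the point: once other drivers reciprocate, "no fear" is a losing strategy, which is why robots would need cooperation programmed in.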

Robocar Retirement

Here’s an interview with me in the latest Wall Street Journal on the subject of robocars and seniors.

This has always been a tricky question. Seniors are not early adopters, so the normal instinct would be to expect them to fear a new technology as dramatic as this one. Look at the market for simplified cell phones aimed at seniors who can’t imagine why they would want a smartphone. Not all are like this, but enough are to raise the question.

Sometimes this barrier is broken. Pictures of grandchildren in e-mail brought grandparents online, as did video calls with them. Necessity overcomes the fear of change.

As people get older, they start losing driving ability. They die more often in accidents, eventually surpassing the rates of reckless teens, partly because they are more fragile, and partly because they make mistakes that cause other people to hit them. Many seniors report trouble with vision at night, and stop driving at night. In some cases, they get their licences taken away by the state — though the AARP and others fight this, so it’s rare — or their kids take away their keys when things get really dangerous. And the kids become a taxi service for their parents.

The boomer generation, which took over the suburbs and exurbs, has nice houses with minimal transit. Some find themselves leaving those homes because they can’t drive any more and will become shut-ins if they don’t do something.

The robocar offers answers to many of these problems. Safe transportation for those with disabilities. (Eventually even mild dementia.) Inexpensive taxi transportation anywhere, including those low-transit suburbs. And a chance to video chat with the grandchildren while on the way.

It’s no surprise that retirement communities are discussed as an early deployment zone for robocars. In those communities, you have a controlled street environment — often with heavy use of NEVs/golf carts already. You have people losing the ability to drive who have limited mobility needs. If they can get to basic shopping and a few other locations (including transit hubs to travel further) they can do pretty well.

Until the robocar came along, we were all doomed to lose the freedom cars gave us. This is no longer going to happen.

Talking soon on robocars and insurance

I’ve been on the road a lot, talking in places like Singapore, Shenzhen and Hong Kong, and visiting Indonesia, whose driving chaos is an eye-opener. In a bit over 10 hours I will speak at Swiss Re’s conference on robocars and insurance in Zurich. While the start will be my standard talk, in the latter section we will have some new discussion of liability and insurance.

A live stream of the event should be available; I talk at 8:45am Central European Summer Time.

A lot of news while I’ve been on the road — driving permits in California, new projects, and the Singapore effort whose announcement I attended. And lots of non-news that got people very excited, like the “revelation” that Google’s car doesn’t drive in snow (nobody thought it could), or on all roads (nobody even suggested this), or that it was forced to add a steering wheel for testing (this was always planned; Google participated in the hearings that wrote those laws.) And lots of car company announcements from the ITS World Congress (a conference that 2 years ago barely acknowledged the presence of self-driving cars.)

More to come later.

Short Big Think video piece on Privacy vs. Security

There’s another video presentation by me that I did while visiting Big Think in NYC.

This one is on The NSA, Snowden and the “tradeoff” of Privacy and Security.

Earlier, I did a 10 minute piece on Robocars for Big Think that won’t be news to regular readers here but was reasonably popular.

Increasing voter turnout with compulsory voting and (gasp) electronic voting

Earlier this year, I started a series on fixing U.S. democracy. Today let me look at the problem I identified as #3: Voter turnout and the excessive power of GOTV.

In a big political campaign, fundraising is king, and most of the money goes to broadcast advertising. But a lot of that advertising, a lot of the other money, and most of the volunteer effort goes to something else called GOTV or “Get Out the Vote.” Come to help a campaign and it’s likely that’s what you will be asked to do.

US elections have terrible turnout. Under 50% in the 1996 Presidential election, and only 57% in more recent contested elections. In off-years and local elections, the turnout is astonishingly low. Turnout is very low in certain minorities as well.

Because turnout is so low, the most cost effective way to gain a vote for your side is to convince somebody who weakly supports you to show up at the polls on election day. Your ads may pretend to attempt to sway people from the other side, or the small number of “undecideds,” but a large fraction of the ads are just trying to make sure your supporters take the trouble to vote. Most of them won’t, but those you can get count as much as any other vote you get. So you visit and phone all these mild supporters, you offer them rides to the polling place, you do everything legal you can to identify them and get them out, and in some cases, to scare the supporters of your opponent.

Is this how a nation should elect its leaders? By who can do the best job at getting the lukewarm supporters to make the trip on election day? It seems wrong. I will go even further, and suggest that the 45% or more who don’t vote are in some sense “disenfranchised.” Clearly not in the strong sense of that word, where we talk about voter suppression or legal battles. But something about the political system has made them feel it is too much of a burden to vote, and so they don’t. Those who do care find that hard to credit; they think of non-voters as just lazy or apathetic, and wonder if we really want to hear the voice of such people.

GOTV costs money, and as such, it is a large factor in what corrupts our politics. If GOTV becomes less effective, it can help reduce the influence of money in politics. It’s serious work. Many campaigns send out people to canvass the neighbourhoods not to try to sway you, but just to figure out who is worth working on for GOTV.

Compulsory voting

Many countries in the world make voting compulsory. If your name is not checked off at the polling place, you get fined. Australia is often given as an example of this, with 91% turnout, though countries like Austria and New Zealand do better without compulsory voting. But it does seem to make a difference.

Even ASIC miners of Bitcoins face security threats

Last month I wrote about paradoxes involving bitcoin and other cryptocurrency mining. In particular, I pointed out that while many people are designing alternative coins so that they are hard to mine with ASICs — and thus can be more democratically mined by people’s ordinary computers or GPUs — this generates a problem. If mining is done on ordinary computers, it becomes worthwhile to break into ordinary computers and steal their resources for mining. This has been happening, even with low powered NAS box computers which nobody would ever bother to mine on if they had to pay for the computer and its electricity. The attacker pays nothing, so any mining capacity is good.

Almost any. In Bitcoin, ASIC mining is so productive that it’s largely a waste of time to mine with ordinary CPUs even if you get them for free, since there is always some minor risk in stealing computer time. While ordinary computers are very hard to secure, dedicated ASIC mining rigs are very simple special purpose computers, and you can probably secure them.

But in a recently revealed attack, thieves stole bitcoins from miners by attacking not the ASIC mining rigs, but their internet connections. The rigs may be simple, but the computers their data flows through, and the big network routers, are less so. Using BGP redirection, it is suspected, the thieves simply connected the mining rigs to a different mining pool than the one they thought they had joined. And so the rigs worked away, mining hard, and sometimes winning the bitcoin lottery, not for their chosen pool but for the thieves’ pool.

It’s not hard to imagine fixes for this particular attack. Pools and rigs can authenticate more strongly, and pools can also work to keep themselves more secure.
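As a hedged sketch of what “authenticate more strongly” could look like, here is a toy message-authentication scheme using a pre-shared secret; the message format and key handling are invented for illustration and are not the real Stratum mining protocol:

```python
# Sketch: a rig and its pool tag every message with an HMAC over a
# pre-shared secret. A BGP hijacker who redirects the connection to a
# different pool cannot forge valid tags without that secret.
import hashlib
import hmac
import json

SECRET = b"pre-shared-rig-pool-secret"  # provisioned out of band

def sign(message: dict) -> dict:
    """Wrap a message with an authentication tag."""
    payload = json.dumps(message, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"msg": message, "tag": tag}

def verify(envelope: dict) -> bool:
    """Check the tag; reject anything not signed with the secret."""
    payload = json.dumps(envelope["msg"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["tag"])

work = sign({"job": "mine-block", "pool": "legit.pool.example"})
print(verify(work))   # True: genuine pool message passes

forged = {"msg": {"job": "mine-block", "pool": "thief.pool.example"},
          "tag": "0" * 64}
print(verify(forged))  # False: hijacker can't compute the tag
```

The hard part in practice is not the cryptography but distributing and protecting the secret, which is why the fix also requires pools to keep themselves secure.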

But we are shown one of the flaws of almost all digital money systems. If your computer can make serious money just by computing, or it can spend money on your behalf without need for a 2nd factor authentication, then it becomes very worthwhile for people to compromise your system and steal your computer time or your digital money. Bitcoin makes this even worse by making transactions irrevocable and anonymous. For many uses, those are features, but they are also bugs.

For the spending half, there is much effort in the community to build more secure wallets that can’t just spend your money if somebody takes over your computer. They rely on using multiple keys, and keeping at least one key on a more secure, even offline, computer. Doing this is very hard, or rather, doing it with a pleasant and happy user interface is super hard. If you’re going to compete with PayPal, that’s a challenge. If somebody breaks into my PayPal account and transfers away the money there, I can go to PayPal and they can reverse those transactions, possibly even help track down the thieves. That’s bad news if a merchant was scammed, but very good news for me.

One could design alternate currencies with chargebacks or refundability, but Bitcoin is quite deliberate in its choice not to have those. It was designed to be like cash. The issue is that while you could probably get away with keeping your cash in your mattress and keeping a secure house, this is a world where somebody can build robots that go into all the houses they can find and pull the cash out of the mattresses without anybody seeing.

Do we need to ban the password?

OK, I’m not really much of a fan of banning anything, but the continued reports of massive thefts of password databases from web sites are not slowing down. Whether or not Hold Security’s recent report of discovering a Russian ring that got a billion account records from huge numbers of websites is true, we should imagine that it is.

As I’ve written before, there are two main kinds of password-using sites: the sites that keep a copy of your password (i.e. any site that can e-mail you your password if you forget it) and the sites that keep an encrypted/hashed version of your password (these can only reset your password via e-mail if you forget it.) The latter class is vastly superior, though it’s still an issue when a database of hashed passwords is stolen, as it makes it easier for attackers to mount brute-force attacks.

Sites that are able to e-mail you a lost password should be stamped out. While I’m not big on banning, it may make sense to have a rule requiring that any site which is going to remember your password in plain form put a big warning on the password setting page and login page:

This site is going to store your password without protection. There is significant risk attackers will someday breach this site and get your ID and password. If you use these credentials on any other site, you are giving access to these other accounts to the operators of this site or anybody who compromises this site.

Sites which keep a hashed password (including the Drupal software running this blog, though I no longer do user accounts) probably should have a lesser warning too. If you use a well-crafted password unlikely to be checked in a brute-force attack, you are probably OK, but only a small minority do that. Such sites still have a risk if they are taken over, because a compromised site can see any passwords typed by people logging in while it’s under the attacker’s control.
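For illustration, here is a minimal sketch of the superior, hashed approach using Python’s standard library; the parameter choices are illustrative, and a real site should use a vetted password-hashing library:

```python
# Sketch: storing a salted, stretched hash instead of the password.
# The site can verify a login and reset a forgotten password, but it
# can never e-mail the original password back -- it doesn't have it.
import hashlib
import hmac
import os

def store_password(password: str):
    salt = os.urandom(16)  # unique per user, defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 salt, 100_000)  # iterations slow brute force
    return salt, digest    # keep these; discard the password itself

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("password123", salt, digest))                   # False
```

Even here, a stolen database lets attackers grind away at weak passwords offline, which is why the iteration count and per-user salt matter.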

Don’t feel too guilty for re-using passwords. Everybody does it. I do it, in places where it’s no big catastrophe if the password leaks. It’s not the end of the world if one blog site has the multi-use password I use on another blog site. With hundreds of accounts, there’s no way not to re-use with today’s tools. For my bank accounts and other accounts that could do me harm, I keep better hygiene, and so should you.

But in reality we should not use passwords at all. Much better technology has existed for many decades, but it’s never been built in a way to make it easy to use. In particular it’s been hard to make it portable — so you can just go to another computer and use it to log into a site — and it’s been impossible to make it universal, so you can use it everywhere. Passwords need no more than your memory, and they work for almost all sites.

Even our password security is poor. Most sites use your password just to create a session cookie that keeps you authenticated for a long session on the site. That cookie is even easier to steal than a password at most sites.

The Neighbourhood Elevator and a new vision of urban density

I’ve been musing more on the future of the city under the robocar, and many visions suggest we’ll have more sprawl. Earlier I have written visions of Robocar Oriented Development and outlined all the factors urban planners should look at.

In the essay linked below, I introduce the concept of a medium density urban neighbourhood that acts like a higher density space thanks to robocars functioning like the elevators in the high-rises of high density development.

Read The Neighbourhood Elevator and 21st century urban density.

Robocar News: UK Legalization, MobilEye IPO, Baidu, new Lidar, Nissan pullback, FBI Weapons, Navia, CityMobil2

A whole raft of recent robocar news.

UK to modify laws for full testing, large grants for R&D

The UK announced that robocar testing will be legalized in January, similar to actions by many US states; the UK is the first major country to do so. Of particular interest is the promise that fully autonomous vehicles, like Google’s no-steering-wheel vehicle, will have regulations governing their testing. Because the US states that wrote regulations did so before seeing Google’s vehicle, their laws still have open questions about how to test vehicles like it.

Combined with this are large research grant programs, on top of the £10M prize project to be awarded to a city for a testing project, and the planned project in Milton Keynes.

Jerusalem’s MobilEye going public in largest Israeli IPO

The leader in doing automated driver assist using cameras is Jerusalem’s MobilEye. This week they’re going public, to a valuation near $5B and raising over $600 million. MobilEye makes custom ASICs full of machine vision processing tools, and uses those to make camera systems to recognize things on the road. They have announced and demonstrated their own basic supervised self-driving car with this. Their camera, which is cheaper than the radar used in most fancy ADAS systems (but also works with radar for better results) is found in many high-end vehicles. They are a supplier to Tesla, and it is suggested that MobilEye will play a serious role in Tesla’s own self-driving plans.

As I have written, I don’t believe cameras are even close to sufficient for a fully autonomous vehicle which can run unmanned, though they can be a good complement to radar and especially LIDAR. LIDAR prices will soon drop to the low thousands of dollars, and people taking the risk of deploying the first robocars would be unwise to skip LIDAR, and the safety it adds, just to save a few thousand dollars for early adopters.

Chinese search engine Baidu has robocar (and bicycle) project

Baidu is the big boy in Chinese search — sadly a big beneficiary of Google’s wise and moral decision not to be collaborators on massive internet censorship in China — and now it’s emulating Google in a big way by opening its own self-driving car project.

Various stories suggest a vehicle which involves regular handoff between a driver and the car’s systems, something Google decided was too risky. Not many other details are known.

Also rumoured is a project with bicycles. Unknown if that’s something like the “bikebot” concept I wrote about 6 years ago, where a small robot would clamp to a bike and use its wheels to deliver the bicycle on demand.

Why another search engine company? Well, one reason Google was able to work quickly is that it is the world’s #1 mapping company, and mapping plays a large role in the design of robocars. Baidu says it is their expertise in big data and AI that’s driving them to do this.

Velodyne has a new LIDAR

The Velodyne 64-plane LIDAR, which is seen spinning on top of Google’s cars and most of the other serious research cars, is made in small volumes and costs a great deal of money — $75,000. David Hall, who runs Velodyne, has regularly said that in volume it would cost well under $1,000, but we’re not there yet. He has released a new LIDAR with just 16 planes. The price, while not finalized, will be much higher than $1K but much lower than $75K (or even the $30K for the 32-plane version found on Ford’s test vehicle and some others.)

As a disclaimer, I should note I have joined the advisory board of Quanergy, which is making 8 plane LIDARs at a much lower price than these units.

Nissan goes back and forth on dates

Conflicting reports have come from Nissan on their dates for deployment. At first, it seemed they had predicted fairly autonomous cars by 2020. A later announcement by CEO Carlos Ghosn suggested it might be even earlier. But new reports suggest the product will be less far along, and need more human supervision to operate.

FBI gets all scaremongering

Many years ago, I wrote about the danger that autonomous robots could be loaded with explosives and sent to an address to wreak havoc. That is a concern, but what I wrote was that the greater danger could be the fear of that phenomenon. After all, car accidents kill more people every month in the USA than died at the World Trade Center 13 years ago, and far surpass war and terrorism as forms of violent death and injury in most nations for most of modern history. Nonetheless, an internal FBI document, released through a leak, has them pushing this idea, along with the more bizarre idea that such cars would let criminals multitask more and not have to drive their own getaway cars.

The two cultures of robocars

I have many more comments pending on my observations from the recent AUVSI/TRB Automated Vehicles Symposium, but for today I would like to put forward an observation I made about two broad schools of thought on the path of the technology and the timeline for adoption. I will call these the aggressive and conservative schools. The aggressive school is represented by Google, Induct (and its successors) and many academic teams, the conservative school involves car companies, most urban planners and various others.

The conservative (automotive) view sees this technology as a set of wheels that has a computer.

The aggressive (digital) school sees this as a computer that has a set of wheels.

The conservative view sees this as an automotive technology, and most of them are very used to thinking about automotive technology. For the aggressive school, where I belong, this is a computer technology, and will be developed — and change the world — at the much faster pace that computer technologies do.

Neither school is probably entirely right, of course. It won’t go as gung-ho as a smartphone, suddenly in every pocket within a few years of release, being discarded when just 2 years old even though it still performs exactly as designed. Nor will it advance at the speed of automotive technology, a world where electric cars are finally getting some traction a century after being introduced.

The conservative school embraces the 4 NHTSA Levels or 5 SAE levels of technology, and expects these levels to be a path of progress. Car companies are starting to sell “level 2” and working on “level 3” and declaring level 4 or 5 to be far in the future. Google is going directly to SAE level 4.

The two cultures do agree that the curve of deployment is not nearly-instant like a smartphone. It will take some time until robocars are a significant fraction of the cars on the road. What they disagree on is how quickly that has a big effect on society. In sessions I attended, the feeling that the early 2020s would see only a modest fraction of cars being self-driving meant to the conservatives that they would not have that much effect on the world.

In one session, it was asked how many people had cars with adaptive cruise control (ACC). Very few hands went up, and this is no surprise — the uptake of ACC is quite low, and almost all of it is part of a “technology package” on the cars that offer it. This led people to believe that if ACC, now over a decade old, could barely get deployed, we should not expect rapid deployment of more complete self-driving. And this may indeed be a warning for those selling super-cruise style products which combine ACC and lanekeeping under driver supervision, which is the level 2 most car companies are working on.

To counter this, I asked a room how many had ridden in Uber or its competitors. Almost every hand went up this time — again no surprise. In spite of the fact that Uber’s cars represent an insignificant fraction of the deployed car fleet. In the aggressive view, robocars are more a service than a product, and as we can see, a robocar-like service can start affecting everybody with very low deployment and only a limited service area.

This dichotomy is somewhat reflected in the difference between SAE’s Level 4 and NHTSA’s. SAE Level 4 means full driving (including unmanned) but in a limited service area or under other limited parameters. This is what Google has said they will make, and it is what you see planned for services in campuses and retirement communities. This is where it begins, and it grows one region at a time. NHTSA’s levels falsely convey the idea that you slowly move to fully automated mode and then immediately do it over a wide service area. Real cars will vary in what level of supervision they need (the levels) over different times, streets and speeds, existing at all the levels at different times.

Follow the conservative model and you can say that society will not see much change until 2030 — some even talk about 2040. I believe that is an error.

Another correlated difference of opinion lies around infrastructure. Those in the aggressive computer-based camp wish to avoid the need to change the physical infrastructure. Instead of making the roads smart, make the individual cars smart. The more automotive camp has also often spoken of physical changes as being more important, and also believes there is strong value in putting digital “vehicle to vehicle” radios in even non-robocars. The computer camp is much more fond of “virtual infrastructure” like the detailed ultra-maps used by Google and many other projects.

It would be unfair to claim that the two schools are fully stratified. There are researchers who bridge the camps. There are people who see both sides very well. There are “computer” folks working at car companies, and car industry folks on the aggressive teams.

The two approaches will also clash when it comes to deciding how to measure the safety of the products and how they should be regulated, which will be a much larger battle. More on that later.

Robotics: Science and Systems and Automated Vehicles Symposium this week

It’s a big week for Robocar conferences.

In Berkeley, yesterday I attended and spoke at the “Robotics: Science and Systems” conference which had a workshop on autonomous vehicles. That runs to Wednesday, but overlapping and near SF Airport is the Automated Vehicles Symposium — a merger of the TRB (Transportation Research Board) and AUVSI conferences on the same topic. 500 are expected to attend.

Yesterday’s workshop was pretty good, with even a bit of controversy.

Yesterday saw:

  • Ed Olson on more of the lessons from aviation on handoff between automation and manual operation. This keeps coming up as a real barrier to some of the vehicle designs that have humans share the chores with the system.
  • Jesse Levinson of Stanford’s team showed some very impressive work in automatic calibration of sensors, and even fusion of LIDAR and camera data, aligning them in real time in spite of movement and latency. This work will make sensors faster, more reliable and make fusion accurate enough to improve perception.
  • David Hall, who runs Velodyne, spoke on the history of their sensors, and his plans for more. He repeated his prediction that in large quantities his sensor could cost only $300. (I’m a bit skeptical of that, but it could cost much, much less than it does today.) David made the surprising statement that he thinks we should make dedicated roads for the vehicles. (Surprising not just because I disagree, but because you could even get by without much LIDAR on such roads.)
  • Marco Pavone of Stanford showed research they did on taxi models from New York and Singapore. The economics look very good. Dan Fagnant also presented related research assuming an on-demand semi-shared system with pickup stations in every TAZ. It showed minimal vacant miles but also minimal successful rideshare. The former makes sense when travel is TAZ to TAZ (TAZs are around a square mile), but I would have thought there would be more rideshare. The conclusion is that vehicle miles traveled (VMT) go up due to empty miles, but that rideshare can partially compensate, though not as much as some might hope.
  • Ken Laberteaux of Toyota showed his research on the changing demographics of driving and suburbs. Conclusion: we are not moving back into the city; suburbanization is continuing. Finding good schools continues to drive people out unless they can afford private school or are childless.

The event had a 3-hour lunch break, where most went to watch some sporting event from Brazil. The Germans at the conference came back happier.

Some good technical talks presented worthwhile research

  • Sheng Zhao and a team from UC Riverside showed a method to get cm accuracy in position and even in pose (orientation) from cheap GPS receivers, by using improved math on phase-matching GPS. This could also be combined with cheap IMUs. Most projects today use very expensive IMUs and GPSs, not the cheap ones you find in your cell phone. This work may lead to being able to get reliable data from low cost parts.
  • Matthew Cornick and a team from Lincoln Lab at MIT showed very interesting work on using ground penetrating radar to localize. With GPR, you get a map of what’s below the road — you see rocks and material patterns down several feet. These vary enough, like the cracks and lines on a road, that you can map them and then find your position in that map — even if the road is covered in snow. While the radar units today are bulky, this offers the potential for operation in snow.
  • A team from Toyota showed new algorithms to speed up the creation of the super-detailed maps needed for robocars. Their algorithms are good at figuring out how many lanes there are and when they start and stop. This could make it much cheaper to build the ultramaps needed in an automatic way, with less human supervision.

The legal and policy sessions got more heated.

  • Bryant Walker Smith laid out some new proposals for how to regulate and govern torts about the vehicles.
  • Eric Feron of Georgia Tech made proposals for how to do full software verification. Today, formally proving and analysing code for correctness takes 0.6 hours per line of code — not practical for the 50 million line (or more) software systems in cars today. Feron argues it can be made cheaper, and should be done. Note that fully half the cost of developing the 787 aircraft was software verification!

The final session, on policy included:

  • Jane Lappin on how DoT is promoting research.
  • Steve Shladover on how we’re all way too optimistic on timelines, and that coming up with tests to demonstrate superior safety to humans is very far away, since humans run 65,000 hours between injury accidents.
  • Myself on why regulation should keep a light touch, and we should not worry too much about the Trolley Problem — which came up a couple of times.
  • Raj Rajkumar of CMU on the success they have had showing the CMU/GM car to members of congress.

Now on to the AVS tomorrow.

Solar freaking roadways? Do the math

In the last few months, I have found myself asked many times about a concept for solar roadways. Folks from Idaho proposing them have gotten a lot of attention with FHWA funding, a successful crowdfunding and even an appearance at Solve for X. Their plan is hexagonal modules with strong glass, with panels and electronics underneath, LED lights, heating elements for snow country and a buried conduit for power cables, data and water runoff. In addition, they hope for inductive charging plates for electric vehicles.

This idea has come up before, but since these folks built a small prototype, they generated tremendous attention. But they haven’t spoken at all about the cost, and that concerns me, because with all energy projects, the financial math is 99% of the issue. That’s true of infrastructure projects as well.

There are two concepts here. The first is: can you make a cost-effective manufactured road panel? Roads are quite expensive today, but they are just asphalt, gravel and other industrial materials whose cost is measured in the range of $50 to $100 per ton. A chart from Florida suggests that basic rural asphalt roads cost about $9 per square foot, all-in, including labour and grading (it’s flat there), and about $4 per square foot for milling and resurfacing. Roadway modules could be factory made (by robots), though they would still require more labour to install, and I think it is a very tall order for a manufactured surface not to cost a great deal more, even an order of magnitude more, than plain road. Paved roads need maintenance, and that’s expensive. It is proposed that these panels would be cheaper to maintain, as you just swap them out, but I am again skeptical of this math. Indeed, one of the major barriers to proposals for electric roads (which can charge cars) is that putting anything in the road makes it prohibitively more expensive to maintain.
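To make the comparison concrete, here is a back-of-envelope sketch using the figures above. The 10x premium for manufactured panels is a hypothetical assumption standing in for the “order of magnitude” worry, not a measured number:

```python
# Dollar figures come from the Florida chart cited above; the modular
# premium is a hypothetical assumption for illustration.
ASPHALT_PER_SQFT = 9.0       # rural asphalt road, all-in ($/sq ft)
MODULAR_MULTIPLIER = 10      # assumed cost premium for manufactured panels

modular_per_sqft = ASPHALT_PER_SQFT * MODULAR_MULTIPLIER
lane_mile_sqft = 12 * 5280   # one 12-foot lane, one mile long

extra = (modular_per_sqft - ASPHALT_PER_SQFT) * lane_mile_sqft
print(f"Extra cost per lane-mile at {MODULAR_MULTIPLIER}x: ${extra:,.0f}")
# → Extra cost per lane-mile at 10x: $5,132,160
```

At that premium, every lane-mile carries roughly $5 million of extra cost that the embedded solar would have to pay back before it could beat panels mounted beside the road.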

I won’t say this is impossible — but it’s all about the math. We need to see math that would show that the modular manufactured pavement approach can compete. I’m happy for that math to include future technologies, like robot assembly and placement (though realize that we’ll probably see road construction with simpler materials also done by robots even sooner.) Let’s see the numbers, how cheap can it get?

All of this is without the solar panels inside (or the electronics.) Because the solar panels have their own math. The only synergy is this: If the modular roadway can be made so that it costs only a bit more than other approaches, it offers us “free land” to put the panels, and it’s connected land in long strips to run power wires.

How valuable is free land? Well, cropland in the USA costs an average of about 10 cents per square foot. 23 cents in California. 3 cents/square foot in the rural west. Much more, of course, in urban places. The land is not that important, so the other value comes from having a nice, manufactured place in which to put solar panels.

Today solar panels are still costly. They are just getting down (primarily thanks to cheap Chinese money) to our grid price. Trends suggest they will get lower and become cost effective as a variable source of power. But until they get really, really cheap, you want to use them most efficiently.

To use solar panels at their best, you don’t want to lay them flat (except in the tropics); rather, you want to tilt them just a bit below the angle of your latitude. Conventional wisdom also points them south, though it’s actually better for the grid and most people’s power demands if you point them south-west, losing a few percent of their output but getting more of it to match peak demand. Putting them flat costs you 20 to 30% of their output. (You can also have them motorized and gain even more, but it’s usually not cost-effective, and will become less cost-effective as panels get cheaper and motors don’t.)
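A crude first-order model illustrates the flat-mounting penalty: if annual direct-beam capture scales roughly with the cosine of (latitude minus tilt), which is a simplification I am assuming rather than a full insolation model, then a flat panel loses about 1 − cos(latitude):

```python
import math

def flat_panel_loss(latitude_deg: float) -> float:
    """Approximate fraction of output lost by mounting flat
    instead of tilted near the latitude angle."""
    return 1 - math.cos(math.radians(latitude_deg))

# Roughly the latitudes of Phoenix, San Francisco and Minneapolis.
for lat in (33, 37, 45):
    print(f"latitude {lat}: about {flat_panel_loss(lat):.0%} lost by lying flat")
```

Even this toy model lands in the 20 to 30% range for mid-latitude cities, consistent with the figure above.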

To use solar panels at their best, you also want to put them where it’s very sunny. And you want to put them first where the local power comes from coal. When you have gotten rid of most of the coal, you can start putting them elsewhere. You can put panels in less sunny places which have power from hydro, nuclear or natural gas, but you’re really wasting your money. The ideal places are Arizona and New Mexico, with tons of sun and lots of coal. And lots of cheap, fairly low-value land.

To be fair, the biggest cost of the panels will soon be the hardware they are mounted in, along with the wires and electronics to connect them, and so perhaps these road modules could compete by being cheap hardware for that. But it seems not too likely.

In cities, rooftops provide another source of free land, much of it slanted about right and pointed in roughly the right direction, and at lower cost than tearing up roads. But to be fair, right now one of the bigger cost elements is getting permits to do the construction and electrical work. Roads are far from bureaucracy-free, but at least they scale — you get permits for a big project all at once, not one house at a time. But we can solve that problem for houses if we really want to as well.

So my challenge to the solar roadway team is to show us the math. No, we don’t need to see what it cost to make your prototypes. I am sure they are very expensive, but that’s beside the point. I want to see a plan for how low the cost can go in theory, even assuming future technologies. And compare that to how low the cost for the alternatives can go in theory. And then factor in how things don’t get to that theoretical point due to bureaucracy, unions and other practicalities. Compare panels in the road to panels by the side of the road, tilted and not being driven over. Look at what paved roads cost in practice to what they could cost in theory to get an idea of how close you can actually get, or come up with a really convincing reason why one approach is immune from the problems of another.

And if that math says yes, go at it. But if it doesn’t, focus on where the math tells you to go.

The paradox of Bitcoin proof-of-work mining

Everybody knows about bitcoin, but fewer know what goes on under the hood. Bitcoin provides the world a trustable ledger for transactions without trusting any given party such as a bank or government. Everybody can agree with what’s in the ledger and what order it was put there, and that makes it possible to write transfers of title to property — in particular the virtual property called bitcoins — into the ledger and thus have a money system.

Satoshi’s great invention was a way to build this trust in a decentralized way. Because there are rewards, many people would like to be the next person to write a block of transactions to the ledger. The Bitcoin system assures that the next person to do it is chosen at random. Because the winner is chosen at random from a large pool, it becomes very difficult to corrupt the ledger. You would need 6 people, chosen at random from a large group, to all be part of your conspiracy. That’s next to impossible unless your conspiracy is so large that half the participants are in it.
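To put a number on that intuition, here is a small sketch (a simplified model of mine, not part of the Bitcoin protocol): if each block’s author is an independent draw weighted by mining power, a conspiracy holding a fraction p of the power writes any particular run of 6 blocks with probability p to the 6th:

```python
def six_block_odds(p: float) -> float:
    """Chance a conspiracy with fraction p of the mining power authors
    6 consecutive blocks, treating each block as an independent draw."""
    return p ** 6

for p in (0.01, 0.10, 0.50):
    print(f"{p:.0%} of mining power: {six_block_odds(p):.2e}")
```

At 1% of the power the odds are one in a trillion per run; even at 50% they are only about 1.6%, which is why a useful attack essentially requires controlling around half the network.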

How do you win this lottery to be the next randomly chosen ledger author? You need to burn computer time working on a math problem. The more computer time you burn, the more likely it is you will hit the answer. The first person to hit the answer is the next winner. This is known as “proof of work.” Technically, it isn’t proof of work, because you can, in theory, hit the answer on your first attempt, and be the winner with no work at all, but in practice, and in aggregate, this won’t happen. In effect, it’s “proof of luck,” but the more computing you throw at the problem, the more chances of winning you have. Luck is, after all, an imaginary construct.
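In miniature, the scheme looks like this sketch (real Bitcoin hashes a block header with double SHA-256 against a 256-bit target; the string format and the tiny difficulty here are assumptions for illustration):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Try nonces until the hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest   # the winning lottery ticket
        nonce += 1

nonce, digest = mine("ledger block with transactions", difficulty=4)
print(nonce, digest)
```

Each extra zero digit multiplies the expected work by 16, yet a lucky first nonce could win instantly — which is exactly the “proof of luck” point above.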

Because those who win are rewarded with freshly minted “mined” bitcoins and transaction fees, people are ready to burn expensive computer time to make it happen. And in turn, they assure the randomness and thus keep the system going and make it trustable.

Very smart, but also very wasteful. All this computer time is burned to no other purpose. It does no useful work — and there is debate about whether it inherently can’t do useful work — and so a lot of money is spent on these lottery tickets. At first, existing computers were used, and the main cost was electricity. Over time, special purpose computers (dedicated processors or ASICs) became the only effective tools for the mining problem, and now the cost of these special processors is the main cost, and electricity the secondary one.

Money doesn’t grow on trees or in ASIC farms. The cost of mining is carried by the system. Miners get coins and will eventually sell them, wanting fiat dollars or goods, and affecting the price. Markets being what they are, over time the cost of being a bitcoin miner and the reward converge. If the reward gets too much above the cost, people will invest in mining equipment until it normalizes. The miners get real, but not extravagant, profits. (Early miners got extravagant profits not because of mining but because of the appreciation of their coins.)

What this means is that the cost of operating Bitcoin is mostly going to the companies selling ASICs, and to a lesser extent the power companies. Bitcoin has made a funnel of money — about $2M a day — that mostly goes to people making chips that do absolutely nothing and fuel is burned to calculate nothing. Yes, the miners are providing the backbone of Bitcoin, which I am not calling nothing, but they could do this with any fair, non-centralized lottery whether it burned CPU or not. If we can think of one.

(I will note that some point out that the existing fiat money system also comes with a high cost, in printing and minting and management. However, this is not a makework cost, and even if Bitcoin is already more efficient doesn’t mean there should not be effort to make it even better.)

CPU/GPU mining

Naturally, many people have been bothered by this for various reasons. A large fraction of the “alt” coins differ from Bitcoin primarily in the mining system. The first round of coins, such as Litecoin and Dogecoin, use a proof-of-work system which was much more difficult to solve with an ASIC. The theory was that this would make mining more democratic — people could do it with their own computers, buying off-the-shelf equipment. This has run into several major problems:

  • Even if you did it with your own computer, you tended to need to dedicate that computer to mining in the end if you wanted to compete
  • Because people already owned hardware, electricity became a much bigger cost component, and that waste of energy is even more troublesome than ASIC buying
  • Over time, mining for these coins moved to high-end GPU cards. This, in turn, caused mining to be the main driver of demand for these GPUs, drying up the supply and jacking up the prices. In effect, the high-end GPU cards became like the ASICs — specialized hardware being bought just for mining.
  • In 2014, vendors began advertising ASICs for these “ASIC proof” algorithms.
  • When mining can be done on ordinary computers, it creates a strong incentive for thieves to steal computer time from insecure computers (i.e. all computers) in order to mine. Several instances of this have already become famous.

The last point is challenging. It’s almost impossible to fix. If mining can be done on ordinary computers, then they will get botted. In this case a thief will even mine at a rate that can’t pay for the electricity, because the thief is stealing your electricity too.

The tide of surveys gets worse -- "would you please rate our survey?"

Five years ago, I posted a rant about the excess of customer service surveys we’re all being exposed to. You can’t do any transaction these days, it seems, without being asked to do a survey on how you liked it. We get so many surveys that we now just reject these requests unless we have some particular problem we want to complain about — in other words, we’re back to what we had with self-selected complaints. The value of surveys is now largely destroyed, and perversely, as the response rates drop and the utility diminishes, that just pushes some companies to push even harder on getting feedback, creating a death spiral.

A great example of this death spiral came a few weeks ago when I rode in an Uber and the driver had a number of problems. So this time I filled out the form to rate the driver and leave comments. Uber’s service department is diligent, and actually read it, and wrote me back to ask for more details and suggestions, which I gave.

That was followed up with:

Hi Brad Templeton,

We’d love to hear what you think of our customer service. It will only take a second, we promise. This feedback will allow us to make sure you always receive the best possible customer service experience in future.

If you were satisfied in how we handled your query, simply click this link.

If you weren’t satisfied in how we handled your ticket, simply click this link.

A survey on my satisfaction with the survey process! Ok, to give Uber some kudos, I will note:

  • They really did try to make this one simple, just click a link. Though one wonders, had I clicked I was unsatisfied, would there have been more inquiry? Of course I was unsatisfied — because they sent yet another survey. The service was actually fine.
  • At least they addressed me as “Hi Brad Templeton.” That’s way better than “Dear Brad” like the computer sending the message pretending it’s on a first-name basis with me. Though the correct salutation should be “Dear Customer” to let me know that it is not a personally written message for me. The ability to fill in people’s names in form letters stopped being impressive or looking personal in the 1970s.

This survey-on-a-survey is nice and short, but many of the surveys I get are astoundingly long. They must be designed, one imagines, to make sure nobody who values their time ever fully responds.

Why does this happen? Because we’ve become so thrilled at the ability to get high-volume feedback from customers that people feel it is a primary job function to get that feedback. If that’s your job, then you focus on measuring everything you can, without thinking about how the measurement (and over-measurement) affects the market, the customers and the very things you are trying to measure. Heisenberg could teach these folks a lesson.

To work, surveys must be done on a small sample of the population, chosen in a manner to eliminate bias. Once chosen, major efforts should be made to assure people who are chosen do complete the surveys, which means you have to be able to truthfully tell them they are part of a small sample. Problem is, nobody is going to believe that when your colleagues are sending a dozen other surveys a day. It’s like over-use of antibiotics. All the other doctors are over-prescribing and so they stop working for you, even if you’re good.
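The standard margin-of-error formula shows just how small that sample can be; the 95% confidence level and ±5% margin below are example choices of mine:

```python
import math

def sample_size(margin: float, z: float = 1.96, p: float = 0.5) -> int:
    """Respondents needed to estimate a proportion within `margin`,
    at the confidence level implied by z (1.96 = 95%), worst case p = 0.5."""
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

print(sample_size(0.05))  # → 385 responses, no matter how many customers you have
```

About 385 completed responses suffice regardless of whether you have ten thousand customers or ten million, which is why surveying everyone, every transaction, buys so little statistically.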

The only way to stop this is to bring the hammer down from above. People higher up, with a focus on the whole customer experience, must limit the feedback efforts, and marketing professionals need to be taught, in school and continuing education, just why there are only so many surveys they can do.
