Submitted by brad on Thu, 2014-08-07 18:49.
Last month I wrote about paradoxes involving bitcoin and other cryptocurrency mining. In particular, I pointed out that while many people are designing alternative coins so that they are hard to mine with ASICs — and thus can be more democratically mined by people’s ordinary computers or GPUs — this generates a problem. If mining is done on ordinary computers, it becomes worthwhile to break into ordinary computers and steal their resources for mining. This has been happening, even with low powered NAS box computers which nobody would ever bother to mine on if they had to pay for the computer and its electricity. The attacker pays nothing, so any mining capacity is good.
Almost any. In Bitcoin, ASIC mining is so productive that it’s largely a waste of time to mine with ordinary CPUs even if you get them for free, since there is always some minor risk in stealing computer time. While ordinary computers are very hard to secure, dedicated ASIC mining rigs are very simple special purpose computers, and you can probably secure them.
But in a recently revealed attack, thieves stole bitcoins from miners by attacking not the ASIC mining rigs but their internet connections. The rigs may be simple, but the computers their data flows through, and the big network routers, are less so. Using BGP redirection, it is suspected, the thieves simply connected the mining rigs to a different mining pool than the one they thought they had joined. And so the rigs worked away, mining hard, and sometimes winning the bitcoin lottery, not for their chosen pool but for the thieves’ pool.
It’s not hard to imagine fixes for this particular attack. Pools and rigs can authenticate more strongly, and pools can also work to keep themselves more secure.
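What stronger authentication could look like is easy to sketch. Assuming a pre-shared key provisioned out of band (the key, message format and function names here are all hypothetical, not any real pool protocol), a rig that verifies an HMAC on each work assignment will reject messages from an impostor pool even when its traffic has been redirected:

```python
import hashlib
import hmac

# Hypothetical pre-shared secret between one rig and its pool,
# provisioned out of band (an assumption, not real pool practice).
SHARED_KEY = b"rig-and-pool-pre-shared-secret"

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Pool attaches an HMAC tag to each work assignment."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Rig checks the tag; an impostor pool's forged messages fail here."""
    return hmac.compare_digest(sign(message, key), tag)
```

A hijacker controlling the route but not the key can redirect packets, but cannot produce a tag the rig will accept.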
But we are shown one of the flaws of almost all digital money systems. If your computer can make serious money just by computing, or it can spend money on your behalf without need for a 2nd factor authentication, then it becomes very worthwhile for people to compromise your system and steal your computer time or your digital money. Bitcoin makes this even worse by making transactions irrevocable and anonymous. For many uses, those are features, but they are also bugs.
For the spending half, there is much effort in the community to build more secure wallets that can’t just spend your money if somebody takes over your computer. They rely on using multiple keys, and keeping at least one key in a more secure, even offline computer. Doing this is very hard, or rather doing it with a pleasant and happy user interface is super hard. If you’re going to compete with PayPal it’s a challenge. If somebody breaks into my PayPal account and transfers away the money there, I can go to PayPal and they can reverse those transactions, possibly even help track down the thieves. It’s bad news if a merchant was scammed but very good news for me.
One could design alternate currencies with chargebacks or refundability, but Bitcoin is quite deliberate in its choice not to have those. It was designed to be like cash. The issue is that while you could probably get away keeping your cash in your mattress and keeping a secure house, this is a world where somebody can build robots that can go into all the houses it can find and pull the cash out of the mattresses without anybody seeing.
Submitted by brad on Thu, 2014-08-07 15:17.
Ok, I’m not really much of a fan of banning anything, but the continued reports of massive thefts of password databases from web sites are not slowing down. Whether the recent Hold Security report of discovering a Russian ring that got a billion account records from huge numbers of websites is true or not, we should imagine that it is.
As I’ve written before, there are two main kinds of password-using sites: the sites that keep a copy of your password (i.e. any site that can e-mail you your password if you forget it) and the sites that keep an encrypted/hashed version of your password (these can reset your password for you via e-mail if you forget it). The latter class is vastly superior, though it’s still a problem when a database of hashed passwords is stolen, as it makes it easier for attackers to mount brute-force attacks.
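The hashed class can be sketched in a few lines. This is an illustrative example using salted PBKDF2, not any particular site’s scheme: the site stores a random salt and a slow hash, never the password itself, so a stolen database forces attackers into per-account brute force.

```python
import hashlib
import hmac
import os

def store_password(password: str, iterations: int = 100_000):
    """What a hashing site keeps: a random salt and a slow hash."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest  # the password itself is never stored

def check_password(password: str, salt: bytes, digest: bytes,
                   iterations: int = 100_000) -> bool:
    """Re-derive the hash from the typed password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

A site built this way can verify your login and reset your password, but it can never e-mail the password back to you, because it does not have it.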
Sites that are able to e-mail you a lost password should be stamped out. While I’m not big on banning, it may make sense to require that any site which is going to remember your password in plain form display a big warning on the password-setting page and login page:
This site is going to store your password without protection. There is significant risk attackers will someday breach this site and get your ID and password. If you use these credentials on any other site, you are giving access to these other accounts to the operators of this site or anybody who compromises this site.
Sites which keep a hashed password (including the Drupal software running this blog, though I no longer do user accounts) probably should have a lesser warning too. If you use a well-crafted password unlikely to be found by a brute-force attack, you are probably OK, but only a small minority do that. Such sites still carry a risk if they are taken over, because a compromised site can see any passwords typed by people logging in while it is under the attacker’s control.
Don’t feel too guilty for re-using passwords. Everybody does it. I do it, in places where it’s no big catastrophe if the password leaks. It’s not the end of the world if one blog site has the multi-use password I use on another blog site. With hundreds of accounts, there’s no way to avoid re-use with today’s tools. For my bank accounts and other accounts that could do me harm, I keep better hygiene, and so should you.
But in reality we should not use passwords at all. Much better technology has existed for many decades, but it’s never been built in a way to make it easy to use. In particular it’s been hard to make it portable — so you can just go to another computer and use it to log into a site — and it’s been impossible to make it universal, so you can use it everywhere. Passwords need no more than your memory, and they work for almost all sites.
Even our password security is poor. Most sites use your password just to create a session cookie that keeps you authenticated for a long session on the site. That cookie’s even easier to steal than a password at most sites.
Submitted by brad on Wed, 2014-07-30 13:01.
A whole raft of recent robocar news.
UK to modify laws for full testing, large grants for R&D
The UK announced that robocar testing will be legalized in January, similar to actions by many US states, but the first major country to do so. Of particular interest is the promise that fully autonomous vehicles, like Google’s no-steering-wheel vehicle, will have regulations governing their testing. Because the US states that wrote regulations did so before seeing Google’s vehicle, their laws still have open questions about how to test faster versions of it.
Combined with this are large research grant programs, on top of the £10M prize project to be awarded to a city for a testing project, and the planned project in Milton Keynes.
Jerusalem’s MobilEye going public in largest Israeli IPO
The leader in doing automated driver assist using cameras is Jerusalem’s MobilEye. This week they’re going public, to a valuation near $5B and raising over $600 million. MobilEye makes custom ASICs full of machine vision processing tools, and uses those to make camera systems to recognize things on the road. They have announced and demonstrated their own basic supervised self-driving car with this. Their camera, which is cheaper than the radar used in most fancy ADAS systems (but also works with radar for better results) is found in many high-end vehicles. They are a supplier to Tesla, and it is suggested that MobilEye will play a serious role in Tesla’s own self-driving plans.
As I have written, I don’t believe cameras are even close to sufficient for a fully autonomous vehicle which can run unmanned, though they can be a good complement to radar and especially LIDAR. LIDAR prices will soon drop to the low thousands of dollars, and people taking the risk of deploying the first robocars would be unwise not to use LIDAR to improve their safety just to save early adopters a few thousand dollars.
Chinese search engine Baidu has robocar (and bicycle) project
Baidu is the big boy in Chinese search — sadly a big beneficiary of Google’s wise and moral decision not to be collaborators on massive internet censorship in China — and now it’s emulating Google in a big way by opening its own self-driving car project.
Various stories suggest a vehicle which involves regular handoff between a driver and the car’s systems, something Google decided was too risky. Not many other details are known.
Also rumoured is a project with bicycles. Unknown if that’s something like the “bikebot” concept I wrote about 6 years ago, where a small robot would clamp to a bike and use its wheels to deliver the bicycle on demand.
Why another search engine company? Well, one reason Google was able to work quickly is that it is the world’s #1 mapping company, and mapping plays a large role in the design of robocars. Baidu says it is their expertise in big data and AI that’s driving them to do this.
Velodyne has a new LIDAR
The Velodyne 64-plane LIDAR, which is seen spinning on top of Google’s cars and most of the other serious research cars, is made in small volumes and costs a great deal of money — $75,000. David Hall, who runs Velodyne, has regularly said that in volume it would cost well under $1,000, but we’re not there yet. He has released a new LIDAR with just 16 planes. The price, while not finalized, will be much higher than $1K but much lower than $75K (or even the $30K for the 32-plane version found on Ford’s test vehicle and some others.)
As a disclaimer, I should note I have joined the advisory board of Quanergy, which is making 8-plane LIDARs at a much lower price than these units.
Nissan goes back and forth on dates
Conflicting reports have come from Nissan on their dates for deployment. At first, it seemed they had predicted fairly autonomous cars by 2020. A later announcement by CEO Carlos Ghosn suggested it might be even earlier. But new reports suggest the product will be less far along, and need more human supervision to operate.
FBI gets all scaremongering
Many years ago, I wrote about the danger that autonomous robots could be loaded with explosives and sent to an address to wreak havoc. That is a concern, but what I wrote was that the greater danger could be the fear of that phenomenon. After all, car accidents kill more people every month in the USA than died at the World Trade Center 13 years ago, and far surpass war and terrorism as forms of violent death and injury in most nations for most of modern history. Nonetheless, an internal FBI document, released through a leak, has them pushing this idea along with the more bizarre idea that such cars would let criminals multitask more and not have to drive their own getaway cars.
Submitted by brad on Wed, 2014-07-23 15:32.
I have many more comments pending on my observations from the recent AUVSI/TRB Automated Vehicles Symposium, but for today I would like to put forward an observation I made about two broad schools of thought on the path of the technology and the timeline for adoption. I will call these the aggressive and conservative schools. The aggressive school is represented by Google, Induct (and its successors) and many academic teams, the conservative school involves car companies, most urban planners and various others.
The conservative (automotive) view sees this technology as a set of wheels that has a computer.
The aggressive (digital) school sees this as a computer that has a set of wheels.
The conservative view sees this as an automotive technology, and most of them are very used to thinking about automotive technology. For the aggressive school, where I belong, this is a computer technology, and will be developed — and change the world — at the much faster pace that computer technologies do.
Neither school is probably entirely right, of course. It won’t go as gung-ho as a smartphone, suddenly in every pocket within a few years of release, being discarded when just 2 years old even though it still performs exactly as designed. Nor will it advance at the speed of automotive technology, a world where electric cars are finally getting some traction a century after being introduced.
The conservative school embraces the 4 NHTSA Levels or 5 SAE levels of technology, and expects these levels to be a path of progress. Car companies are starting to sell “level 2” and working on “level 3” and declaring level 4 or 5 to be far in the future. Google is going directly to SAE level 4.
The two cultures do agree that the curve of deployment is not nearly-instant like a smartphone. It will take some time until robocars are a significant fraction of the cars on the road. What they disagree on is how quickly that has a big effect on society. In sessions I attended, the feeling that the early 2020s would see only a modest fraction of cars being self-driving meant to the conservatives that they would not have that much effect on the world.
In one session, it was asked how many people had cars with adaptive cruise control (ACC). Very few hands went up, and this is no surprise — the uptake of ACC is quite low, and almost all of it comes as part of a “technology package” on the cars that offer it. This led people to believe that if ACC, now over a decade old, could barely get deployed, we should not expect rapid deployment of more complete self-driving. And this may indeed be a warning for those selling super-cruise style products which combine ACC and lane-keeping under driver supervision, which is the level 2 most car companies are working on.
To counter this, I asked a room how many had ridden in Uber or its competitors. Almost every hand went up this time — again no surprise. In spite of the fact that Uber’s cars represent an insignificant fraction of the deployed car fleet. In the aggressive view, robocars are more a service than a product, and as we can see, a robocar-like service can start affecting everybody with very low deployment and only a limited service area.
This dichotomy is somewhat reflected in the difference between SAE’s Level 4 and NHTSA’s. SAE Level 4 means full driving (including unmanned) but in a limited service area or under other limited parameters. This is what Google has said they will make, this is what you see planned for services in campuses and retirement communities. This is where it begins, and it grows one region at a time. NHTSA’s levels falsely convey the idea that you move slowly toward fully automated operation but then deploy it immediately over a wide service area. Real cars will vary in the level of supervision they need (the levels) over different times, streets and speeds, existing at all the levels at different times.
Follow the conservative model and you can say that society will not see much change until 2030 — some even talk about 2040. I believe that is an error.
Another correlated difference of opinion lies around infrastructure. Those in the aggressive computer-based camp wish to avoid the need to change the physical infrastructure. Instead of making the roads smart, make the individual cars smart. The more automotive camp has also often spoken of physical changes as being more important, and also believes there is strong value in putting digital “vehicle to vehicle” radios in even non-robocars. The computer camp is much more fond of “virtual infrastructure” like the detailed ultra-maps used by Google and many other projects.
It would be unfair to claim that the two schools are fully stratified. There are researchers who bridge the camps. There are people who see both sides very well. There are “computer” folks working at car companies, and car industry folks on the aggressive teams.
The two approaches will also clash when it comes to deciding how to measure the safety of the products and how they should be regulated, which will be a much larger battle. More on that later.
Submitted by brad on Mon, 2014-07-14 13:59.
It’s a big week for Robocar conferences.
In Berkeley, yesterday I attended and spoke at the “Robotics: Science and Systems” conference which had a workshop on autonomous vehicles. That runs to Wednesday, but overlapping and near SF Airport is the Automated Vehicles Symposium — a merger of the TRB (Transportation Research Board) and AUVSI conferences on the same topic. 500 are expected to attend.
Yesterday’s workshop was pretty good, with even a bit of controversy.
- Ed Olson on more of the lessons from aviation on handoff between automation and manual operation. This keeps coming up as a real barrier to some of the vehicle designs that have humans share the chores with the system.
- Jesse Levinson of Stanford’s team showed some very impressive work in automatic calibration of sensors, and even fusion of LIDAR and camera data, aligning them in real time in spite of movement and latency. This work will make sensors faster, more reliable and make fusion accurate enough to improve perception.
- David Hall, who runs Velodyne, spoke on the history of their sensors, and his plans for more. He repeated his prediction that in large quantities his sensor could cost only $300. (I’m a bit skeptical of that, but it could cost much, much less than it does today.) David made the surprising statement that he thinks we should make dedicated roads for the vehicles. (Surprising not just because I disagree, but because you could even get by without much LIDAR on such roads.)
- Marco Pavone of Stanford showed research on taxi models from New York and Singapore. The economics look very good. Dan Fagnant also presented related research assuming an on-demand, semi-shared system with pickup stations in every TAZ. It showed minimal vacant miles but also minimal successful rideshare. The former makes sense when it’s TAZ to TAZ (TAZs are around a square mile), but I would have thought there would be more rideshare. The conclusion is that VMT goes up due to empty miles, but that rideshare can partially compensate, though not as much as some might hope.
- Ken Laberteaux of Toyota showed his research on the changing demographics of driving and suburbs. Conclusion: we are not moving back into the city; suburbanization is continuing. Finding good schools continues to drive people out unless they can afford private school or are childless.
The event had a 3-hour lunch break, where most went to watch some sporting event from Brazil. The Germans at the conference came back happier.
Some good technical talks presented worthwhile research
- Sheng Zhao and a team from UC Riverside showed a method to get cm accuracy in position and even in pose (orientation) from cheap GPS receivers, by using improved math on phase-matching GPS. This could also be combined with cheap IMUs. Most projects today use very expensive IMUs and GPSs, not the cheap ones you find in your cell phone. This work may lead to being able to get reliable data from low cost parts.
- Matthew Cornick and a team from Lincoln Lab at MIT showed very interesting work on using ground penetrating radar to localize. With GPR, you get a map of what’s below the road — you see rocks and material patterns down several feet. These vary enough, like the cracks and lines on a road, and so you can map them, and then find your position in that map — even if the road is covered in snow. While the radar units are today bulky, this offers the potential for operations in snow.
- A team from Toyota showed new algorithms to speed up the creation of the super-detailed maps needed for robocars. Their algorithms are good at figuring out how many lanes there are and when they start and stop. This could make it much cheaper to build the ultramaps needed in an automatic way, with less human supervision.
The legal and policy sessions got more heated.
- Bryant Walker Smith laid out some new proposals for how to regulate and govern torts about the vehicles.
- Eric Feron of Georgia Tech made proposals for how to do full software verification. Today, formally proving and analysing code for correctness takes 0.6 hours per line of code — it’s not practical for the 50-million-line (or more) software systems in cars today. Feron argues it can be made cheaper, and should be done. Note that fully half the cost of developing the 787 aircraft was software verification!
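The arithmetic behind that impracticality claim is stark. A quick back-of-envelope, assuming a 2,000-hour work year:

```python
# Cost of formal verification at today's rate, applied to a
# modern automotive code base (figures from the talk above).
hours_per_line = 0.6
lines_of_code = 50_000_000

total_hours = hours_per_line * lines_of_code   # 30 million hours
person_years = total_hours / 2_000             # 15,000 person-years
```

Fifteen thousand person-years of verification effort for a single car platform is why the rate per line has to come down by orders of magnitude before this is practical.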
The final session, on policy, included:
- Jane Lappin on how DoT is promoting research.
- Steve Shladover on how we’re all way too optimistic on timelines, and how coming up with tests that demonstrate superior safety to humans is very far away, since humans average 65,000 hours between injury accidents.
- Myself on why regulation should keep a light touch, and we should not worry too much about the Trolley Problem — which came up a couple of times.
- Raj Rajkumar of CMU on the success they have had showing the CMU/GM car to members of congress.
Now on to the AVS tomorrow.
Submitted by brad on Sat, 2014-07-12 11:29.
In the last few months, I have found myself asked many times about a concept for solar roadways. Folks from Idaho proposing them have gotten a lot of attention with FHWA funding, a successful crowdfunding and even an appearance at Solve for X. Their plan is hexagonal modules with strong glass, with panels and electronics underneath, LED lights, heating elements for snow country and a buried conduit for power cables, data and water runoff. In addition, they hope for inductive charging plates for electric vehicles.
This idea has come up before, but since these folks built a small prototype, they generated tremendous attention. But they haven’t spoken at all about the cost, and that concerns me, because with all energy projects, the financial math is 99% of the issue. That’s true of infrastructure projects as well.
There are two concepts here. The first is: can you make a cost-effective manufactured road panel? Roads are quite expensive today, but they are just asphalt, gravel and other industrial materials whose cost is measured in the range of $50 to $100 per ton. A chart from Florida suggests that basic rural asphalt roads cost about $9 per square foot, all-in, including labour and grading (it’s flat there), and about $4 per square foot for milling and resurfacing. Roadway modules could be factory-made (by robots) but would still require more labour to install, and I think it is a very tall order for a manufactured surface not to cost a great deal more, even an order of magnitude more, than plain road. Paved roads need maintenance, and that’s expensive. It is proposed that these panels would be cheaper to maintain since you just swap them out, but I am again skeptical of this math. Indeed, one of the major barriers to proposals for electric roads (which can charge cars) is that putting anything in the road makes it prohibitively more expensive to maintain.
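To make the scale concrete, here is a rough sketch using the Florida figure. The 10x multiplier for manufactured modules is purely an assumed illustration for comparison, not a measured cost:

```python
# Back-of-envelope cost of one lane-mile of road surface.
lane_width_ft = 12
mile_ft = 5_280
sqft_per_lane_mile = lane_width_ft * mile_ft       # 63,360 sq ft

asphalt_cost = 9 * sqft_per_lane_mile              # ~$570k per lane-mile, all-in
module_cost = 10 * asphalt_cost                    # ~$5.7M if modules cost 10x
extra_per_lane_mile = module_cost - asphalt_cost   # ~$5.1M of extra cost to justify
```

Every lane-mile of premium over asphalt has to be paid back by the value of the panel features, which is why the cost multiplier is the number that matters most.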
I won’t say this is impossible — but it’s all about the math. We need to see math that would show that the modular manufactured pavement approach can compete. I’m happy for that math to include future technologies, like robot assembly and placement (though realize that we’ll probably see road construction with simpler materials also done by robots even sooner.) Let’s see the numbers, how cheap can it get?
All of this is without the solar panels inside (or the electronics.) Because the solar panels have their own math. The only synergy is this: If the modular roadway can be made so that it costs only a bit more than other approaches, it offers us “free land” to put the panels, and it’s connected land in long strips to run power wires.
How valuable is free land? Well, cropland in the USA costs an average of about 10 cents per square foot. 23 cents in California. 3 cents/square foot in the rural west. Much more, of course, in urban places. The land is not that important, so the other value comes from having a nice, manufactured place in which to put solar panels.
Today solar panels are still costly. They are just getting down (primarily thanks to cheap Chinese money) to our grid price. Trends suggest they will get lower and become cost effective as a variable source of power. But until they get really, really cheap, you want to use them most efficiently.
To use solar panels at their best, you don’t want to lay them flat (except in the tropics); rather, you want to tilt them just a bit below the angle of your latitude. Conventional wisdom also points them south, though it’s actually better for the grid and most people’s power demands if you point them south-west, losing a few percent of their output but getting more of it to match peak demand. Putting them flat costs you 20 to 30% of their output. (You can also have them motorized and gain even more, but it’s usually not cost-effective, and will become less so as panels get cheaper and motors don’t.)
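The flat-panel penalty is basically a cosine factor. A toy model, looking only at solar noon on the equinox and ignoring diffuse light and seasonal variation, shows where the 20 to 30% figure comes from:

```python
import math

def flat_output_fraction(latitude_deg: float) -> float:
    """Fraction of a latitude-tilted panel's output that a flat
    panel captures, in a simplified noon-on-the-equinox model:
    the sun sits (90 - latitude) degrees above the horizon, so a
    flat panel loses a cos(latitude) factor."""
    return math.cos(math.radians(latitude_deg))

# At 37 degrees latitude (roughly central California), a flat
# panel captures about 80% of a tilted panel's output in this
# model, i.e. roughly a 20% loss.
```

Real annual losses depend on climate and diffuse light, but the shape of the penalty — growing with latitude, zero at the equator — holds.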
To use solar panels at their best, you also want to put them where it’s very sunny. Finally you want to first put them where the local power comes from coal. When you have gotten rid of most of the coal, you can start putting them elsewhere. You can put panels in less sunny places which have power from hydro, nuclear or natural gas, but you’re really wasting your money. The ideal places are Arizona and New Mexico, with tons of sun and lots of coal. And lots of cheap, fairly low-value land.
To be fair, the biggest cost of the panels will soon be the hardware they are mounted in, along with the wires and electronics to connect them, and so perhaps these road modules could compete by being cheap hardware for that. But it seems not too likely.
In cities, rooftops provide another source of free land, much of it slanted about right and pointed in roughly the right direction. With lower cost than tearing up roads. But to be fair, right now one of the bigger cost elements is getting permits to do the construction and electrical work. Roads are far from bureaucracy-free, but at least it scales — you get permits for a big project all at once, not one house at a time. But we can solve that problem for houses if we really want to as well.
So my challenge to the solar roadway team is to show us the math. No, we don’t need to see what it cost to make your prototypes. I am sure they are very expensive, but that’s beside the point. I want to see a plan for how low the cost can go in theory, even assuming future technologies. And compare that to how low the cost for the alternatives can go in theory. And then factor in how things don’t get to that theoretical point due to bureaucracy, unions and other practicalities. Compare panels in the road to panels by the side of the road, tilted and not being driven over. Look at what paved roads cost in practice to what they could cost in theory to get an idea of how close you can actually get, or come up with a really convincing reason why one approach is immune from the problems of another.
And if that math says yes, go at it. But if it doesn’t, focus on where the math tells you to go.
Submitted by brad on Sat, 2014-06-28 10:47.
Everybody knows about bitcoin, but fewer know what goes on under the hood. Bitcoin provides the world a trustable ledger for transactions without trusting any given party such as a bank or government. Everybody can agree with what’s in the ledger and what order it was put there, and that makes it possible to write transfers of title to property — in particular the virtual property called bitcoins — into the ledger and thus have a money system.
Satoshi’s great invention was a way to build this trust in a decentralized way. Because there are rewards, many people would like to be the next person to write a block of transactions to the ledger. The Bitcoin system assures that the next person to do it is chosen at random. Because the winner is chosen at random from a large pool, it becomes very difficult to corrupt the ledger. You would need 6 people, chosen at random from a large group, to all be part of your conspiracy. That’s next to impossible unless your conspiracy is so large that half the participants are in it.
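The math behind that is simple. A conspiracy holding fraction q of the mining power wins any given block with probability q, so it wins six in a row (the usual confirmation depth) with probability q^6 — ignoring more sophisticated strategies such as selfish mining, which this sketch does not model:

```python
def six_in_a_row(q: float) -> float:
    """Probability a conspiracy with fraction q of the hash power
    wins the next six blocks consecutively (independent trials)."""
    return q ** 6

# A conspiracy with 10% of the hash power: one chance in a million.
# Only as q approaches half the network does the chance get serious.
```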
How do you win this lottery to be the next randomly chosen ledger author? You need to burn computer time working on a math problem. The more computer time you burn, the more likely it is you will hit the answer. The first person to hit the answer is the next winner. This is known as “proof of work.” Technically, it isn’t proof of work, because you can, in theory, hit the answer on your first attempt, and be the winner with no work at all, but in practice, and in aggregate, this won’t happen. In effect, it’s “proof of luck,” but the more computing you throw at the problem, the more chances of winning you have. Luck is, after all, an imaginary construct.
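The lottery above can be sketched in a few lines. This is illustrative only: real Bitcoin double-hashes an 80-byte block header against a network-set target, while the data layout here is made up.

```python
import hashlib

def mine(data: bytes, difficulty_bits: int, max_nonce: int = 1_000_000):
    """Try nonces until SHA-256(data + nonce) falls below the target.
    Lower target (more difficulty bits) means more expected attempts,
    but any single attempt can win -- hence 'proof of luck'."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # this attempt won the lottery
    return None  # no luck within the budget
```

Doubling the difficulty bits roughly doubles the expected number of attempts, which is how the network tunes the block rate as mining power grows.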
Because those who win are rewarded with freshly minted “mined” bitcoins and transaction fees, people are ready to burn expensive computer time to make it happen. And in turn, they assure the randomness and thus keep the system going and make it trustable.
Very smart, but also very wasteful. All this computer time is burned to no other purpose. It does no useful work — and there is debate about whether it inherently can’t do useful work — and so a lot of money is spent on these lottery tickets. At first, existing computers were used, and the main cost was electricity. Over time, special purpose computers (dedicated processors or ASICs) became the only effective tools for the mining problem, and now the cost of these special processors is the main cost, and electricity the secondary one.
Money doesn’t grow on trees or in ASIC farms. The cost of mining is carried by the system. Miners get coins and will eventually sell them, wanting fiat dollars or goods, and affecting the price. Markets being what they are, the cost of being a bitcoin miner and the reward converge over time. If the reward gets too much above the cost, people will invest in mining equipment until it normalizes. The miners make real, but not extravagant, profits. (Early miners got extravagant profits not because of mining but because of the appreciation of their coins.)
What this means is that the cost of operating Bitcoin is mostly going to the companies selling ASICs, and to a lesser extent the power companies. Bitcoin has made a funnel of money — about $2M a day — that mostly goes to people making chips that do absolutely nothing and fuel is burned to calculate nothing. Yes, the miners are providing the backbone of Bitcoin, which I am not calling nothing, but they could do this with any fair, non-centralized lottery whether it burned CPU or not. If we can think of one.
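The roughly $2M/day figure follows from the 2014 block subsidy; the exchange rate below is an assumption of roughly that era, not a quoted price:

```python
# Daily mining revenue at mid-2014 parameters.
block_reward_btc = 25        # subsidy per block after the 2012 halving
blocks_per_day = 24 * 6      # one block roughly every 10 minutes
btc_price_usd = 560          # assumed mid-2014 exchange rate

daily_btc = block_reward_btc * blocks_per_day        # 3,600 BTC/day
daily_usd = daily_btc * btc_price_usd                # ~$2M/day, before fees
```

Since competition pushes mining cost toward this revenue, nearly that whole sum flows to chip makers and power companies every day.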
(I will note that some point out that the existing fiat money system also comes with a high cost, in printing and minting and management. However, this is not a makework cost, and even if Bitcoin is already more efficient doesn’t mean there should not be effort to make it even better.)
Naturally, many people have been bothered by this for various reasons. A large fraction of the “alt” coins differ from Bitcoin primarily in the mining system. The first round of coins, such as Litecoin and Dogecoin, use a proof-of-work system which was much more difficult to solve with an ASIC. The theory was that this would make mining more democratic — people could do it with their own computers, buying off-the-shelf equipment. This has run into several major problems:
- Even if you did it with your own computer, you tended to need to dedicate that computer to mining in the end if you wanted to compete
- Because people already owned hardware, electricity became a much bigger cost component, and that waste of energy is even more troublesome than ASIC buying
- Over time, mining for these coins moved to high-end GPU cards. This, in turn, caused mining to be the main driver of demand for these GPUs, drying up the supply and jacking up the prices. In effect, the high-end GPU cards became like the ASICs — specialized hardware being bought just for mining.
- In 2014, vendors began advertising ASICs for these “ASIC proof” algorithms.
- When mining can be done on ordinary computers, it creates a strong incentive for thieves to steal computer time from insecure computers (i.e. all computers) in order to mine. Several instances of this have already become famous.
The last point is challenging. It’s almost impossible to fix. If mining can be done on ordinary computers, then they will get botted. In this case a thief will even mine at a rate that can’t pay for the electricity, because the thief is stealing your electricity too.
Submitted by brad on Tue, 2014-06-24 16:25.
Five years ago, I posted a rant about the excess of customer service surveys we’re all being exposed to. You can’t do any transaction these days, it seems, without being asked to do a survey on how you liked it. We get so many surveys that we now just reject these requests unless we have some particular problem we want to complain about — in other words, we’re back to what we had with self-selected complaints. The value of surveys is now largely destroyed, and perversely, as the response rates drop and the utility diminishes, that just pushes some companies to push even harder on getting feedback, creating a death spiral.
A great example of this death spiral came a few weeks ago when I rode in an Uber and the driver had a number of problems. So this time I filled out the form to rate the driver and leave comments. Uber’s service department is diligent, and actually read it, and wrote me back to ask for more details and suggestions, which I gave.
That was followed up with:
Hi Brad Templeton,
We’d love to hear what you think of our customer service. It will only take a second, we promise. This feedback will allow us to make sure you always receive the best possible customer service experience in future.
If you were satisfied in how we handled your query, simply click this link.
If you weren’t satisfied in how we handled your ticket, simply click this link.
A survey on my satisfaction with the survey process! Ok, to give Uber some kudos, I will note:
- They really did try to make this one simple, just click a link. Though one wonders, had I clicked I was unsatisfied, would there have been more inquiry? Of course I was unsatisfied — because they sent yet another survey. The service was actually fine.
- At least they addressed me as “Hi Brad Templeton.” That’s way better than “Dear Brad” like the computer sending the message pretending it’s on a first-name basis with me. Though the correct salutation should be “Dear Customer” to let me know that it is not a personally written message for me. The ability to fill in people’s names in form letters stopped being impressive or looking personal in the 1970s.
This survey-on-a-survey is nice and short, but many of the surveys I get are astoundingly long. They must be designed, one imagines, to make sure nobody who values their time ever fully responds.
Why does this happen? Because we’ve become so thrilled at the ability to get high-volume feedback from customers that people feel it is a primary job function to get that feedback. If that’s your job, then you focus on measuring everything you can, without thinking about how the measurement (and over-measurement) affects the market, the customers and the very things you are trying to measure. Heisenberg could teach these folks a lesson.
To work, surveys must be done on a small sample of the population, chosen in a manner to eliminate bias. Once chosen, major efforts should be made to assure people who are chosen do complete the surveys, which means you have to be able to truthfully tell them they are part of a small sample. Problem is, nobody is going to believe that when your colleagues are sending a dozen other surveys a day. It’s like over-use of antibiotics. All the other doctors are over-prescribing and so they stop working for you, even if you’re good.
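The statistics back up how small that sample can be. The standard margin-of-error calculation below is my own illustration, not from the post, using the usual worst-case proportion:

```python
import math

def sample_size(margin, confidence_z=1.96, p=0.5):
    """Respondents needed for a given margin of error at ~95% confidence,
    using the standard worst-case proportion p = 0.5."""
    return math.ceil(confidence_z**2 * p * (1 - p) / margin**2)

# A reading of customer satisfaction to within +/-5 points needs only ~385
# completed surveys, whether you have ten thousand customers or ten million.
n = sample_size(0.05)
```

The punchline is that the required sample barely depends on the size of the customer base, which is exactly why blanketing every customer with surveys buys nothing but fatigue.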
The only way to stop this is to bring the hammer down from above. People higher up, with a focus on the whole customer experience, must limit the feedback efforts, and marketing professionals need to be taught firmly, in school and continuing education, just why there are only so many surveys they can do.
Submitted by brad on Tue, 2014-06-24 09:45.
Some recent press and talks:
Earlier in June I sat down with “Big Think” for an interview they have titled “Robocars 101” explaining some of the issues around the cars.
I also did a short interview on NPR’s “All Things Considered” not long after Google’s new car was announced. What you might find interesting is how I did it. I was at a friend’s house in Copenhagen and went into a quiet room where they called me on my cell phone. However, I also started a simple audio recorder app on my phone. When we were done, I shared the mp3 of a better sample from the same microphone with them, which they mixed in.
As a result, the interview sounds almost like it was done in-studio instead of over an international cell phone call.
Videos of my talks at Next Berlin and at Dutch Media Future Week 2014 are also up. And a shortened talk at Ontario Centers for Excellence Discovery 2014 in Toronto May 12. There we had the Governor General of Canada as our opening act. :-) That’s just 3 of the 11 events I was at on that trip.
Completely off the Robocar track is a short interview with CNBC where I advise people to invest in Bitcoin related technology, not in bitcoins.
Submitted by brad on Sun, 2014-06-22 20:51.
So far it’s been big players like Google and car companies with plans in the self-driving space. Today, a small San Francisco start-up named Cruise, founded by Kyle Vogt (a founder of the web video site Justin.tv) announces their plans to make a retrofit kit that will adapt existing cars to do basic highway cruise, which is to say, staying in a lane and keeping pace behind other cars while under a driver’s supervision.
I’ve been following Cruise since its inception. This offering has many similarities to the plans of major car companies, but there are a few key differences:
- This is a startup, which can be more nimble than the large companies, and having no reputation to risk, can be bolder.
- They plan to make this as a retrofit kit for a moderate set of existing cars, rather than custom designing it to one car.
They’re so dedicated to the retrofit idea that the Audi A4 they are initially modifying does not even have drive-by-wire brakes like the commonly used hybrid cars. Their kit puts sensors on the roof, and puts a physical actuator on the brake and another physical actuator on the steering wheel — they don’t make use of the car’s own steering motor. They want a kit that can be applied to almost any car the market tells them to target.
They won’t do every car, though. All vendors have a strong incentive to only support cars they have given some solid testing to, so most plans don’t involve retrofit at all, and of course Google has now announced their plans to design a car from scratch. Early adopters may be keen on retrofit.
I rode in the car last week during a demo at Alameda air station, a runway familiar to viewers of Mythbusters. There they set up a course of small orange cones, which are much easier to see than ordinary lane markings, so it’s hard to judge how well the car does on lane markings. It still has rough edges, to be sure, but they don’t plan to sell until next year. In the trial, due to insurance rules, it kept under 40mph; it handled that speed fine, though it drifted a bit in wider parts of the “lane.”
On top is an aerodynamic case around a sensor pack which is based on stereo cameras and radar from Delphi. Inside is just a single button in the center arm console to enable and disable cruise mode. You take the car to the lane and push the button.
All stuff we’ve seen before, and not as far along, but the one key difference — being a nimble startup — may make all the difference. Only early adopters will pay the $10,000 for a product where you must (at least for now) still watch the road, but that may be all that is needed.
Submitted by brad on Sun, 2014-06-22 11:30.
On my recent wanderings in Europe, I became quite enamoured by Google’s
latest revision of transit directions. Google has had transit directions for
some time, but they have recently improved them, and linked them in more cities
to live data about where transit vehicles actually are.
The result is not a mere incremental improvement, it’s a game-changing increase
in the utility of decent transit. In cities like Oslo and London, the tool
gives the user the ability to move with transit better than a native. In the
past, using transit, especially buses, as a visitor has always been so frustrating
that most visitors simply don’t use it, in spite of the much lower cost compared
to taxis. Transit, especially when used by an unfamiliar visitor, is slow and
complex, with long waits, missed connections and confusion about which bus
or line to take during shorter connections, as well as how to pay.
Not so any more. With a superhuman ability, your phone directs you to transit stops
you might not figure out from a map, where the right bus usually appears quite quickly.
Transfers are chosen to be quick as well, and directions are given as to which direction to
go, naming the final destination as transit signs often do, rather than the compass direction. It’s optimized by where the vehicles actually are and predicted to be, and this
will presumably get even better.
By making transit “just work” it becomes much more useful, and gives us a taste of the
robocar taxi world. That world is even easier, of course — door to door with no
connections and no need for you to even follow directions. But while Uber also shows us
that world well in user experience, Uber is expensive, as are cabs, while transit is closer
in cost to the anticipated robocar cost of well below $1/mile.
It also helps to have transit systems with passes or contactless pay cards, to avoid the hassles of payment.
Why does this work so well? In the transit-heavy cities, it turns out there are often 2, 3 or even 4 ways to get to your destination via different transit lines and connections. The software is able to pick among them in a way even a native couldn’t, and one is often leaving soon, and it finds it for you.
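The choice among several live options can be sketched in a few lines. This is my own toy illustration, not Google’s algorithm; the lines, departure times and travel times are invented, where a real router would use live vehicle feeds:

```python
# Toy version of picking among several transit options using live departures.
# Times are minutes on a shared clock; all data below is invented.

def best_option(options, now):
    """Pick the route with the earliest arrival: wait for the next live
    departure of each line, then add its travel time."""
    def arrival(opt):
        departures = [d for d in opt["departures"] if d >= now]
        return min(departures) + opt["travel_min"] if departures else float("inf")
    return min(options, key=arrival)

options = [
    {"line": "Bus 31",  "departures": [12, 27, 42], "travel_min": 25},
    {"line": "Metro A", "departures": [15, 20, 25], "travel_min": 18},
    {"line": "Tram 7",  "departures": [11, 31],     "travel_min": 30},
]
# At minute 10, Tram 7 leaves first (11) but Metro A arrives first (15 + 18 = 33).
choice = best_option(options, now=10)
```

Note how the answer flips as the clock advances, which is exactly the time-dependence complained about below: ask two minutes later and a different line wins.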
In some cities, there is not live data, so it only routes based on schedules. This cuts
the utility greatly. From a user experience standpoint, it is often better to give people
a wait they expect than to do a better job but not give accurate expectations.
What’s clear now is that transit agencies should have done this a lot sooner. Back in the 1980s
a friend of mine built one of the first systems which tracked transit vehicles and gave
you a way to call to see when the bus would come, or in some cases signs on the bus stops.
Nice as those were, they are nothing compared to this. There is not much in this technology
that could not have been built some time ago. In fact, it could have been built even
before the smartphone, with people calling in by voice and saying, “I am at the corner of X and
Y and I need to get to Z” with a human helper. The cost would have actually been worth it
because by making the transit more useful it gets more riders.
That might be too expensive, but all this needed was the smartphone with GPS and a
data connection, and it is good that it has come.
In spite of this praise, there is still much to do.
- Routing is very time dependent. Ask at 1:00 and you can get a very different answer than you get asking at 1:02. And a different one at 1:04. The product needs a live aspect that updates as you walk and time passes.
- The system never figures out you are already on the bus, and so always wants to route you as though you were standing on the road. Often you want to change plans or re-look up options once you are on the vehicle, and in addition, you may want to do other things on the map.
- Due to how rapidly things change, the system also needs to display when multiple options are equivalent. For example, it might say, “Go to the train platform and take the B train northbound.” Then due to how things have changed, you see a C train show up — do you get on it? Instead, it should say, “Take a B, C or E train going north towards X, Y or Z, but B should come first.”
- For extra credit, this should get smarter and combine with other modes. For example, many cities have bikeshare programs that let you ride a bike from one depot to another. If the system knew about those it could offer you very interesting routings combining bikes and transit. Or if you have your own bike and transit lines allow it on, you could use that.
- Likewise, you could combine transit with cabs, getting a convenient route with low walking but with much lower cab expense.
- Finally, you could also integrate with one-way car share programs like car2go or DriveNow, allowing a trip to mix transit, car, bike and walking for smooth movement.
- Better integration with traffic is needed. If the buses are stuck in traffic, it’s time to tell you to take another method (even cycling or walking) if time is your main constraint.
- Indoor mapping is needed in stations, particularly underground ones. Transit agencies should have beacons in the stations or on the tracks so phones can figure out where they are when GPS is not around. Buses could also have beacons to tell you if you got on the right one.
- The systems should offer an alert when you are approaching your stop. Beacons could help here too. For a while the GPS map has allowed the unfamiliar transit rider to know when to get off, but this can make it even better.
- This is actually a decent application for wearables and things like Google glass, or just a bluetooth earpiece talking in your ear, watching you move through the city and the stations and telling you which way to go, and even telling you when you need to rush or relax.
- In some cities going onto the subway means loss of signal. There, storing the live model for relevant lines in a cache would let the phone still come up with pretty good estimates when offline for a few minutes.
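The “equivalent options” item from the list above can be sketched as follows. The lines, stops and wait times are invented for illustration:

```python
# Sketch of the "equivalent options" idea: when several lines cover the same
# leg, tell the rider all of them rather than naming just one.
# Lines, stops and times below are invented.

def equivalent_departures(options, origin, destination):
    """List every line that serves the leg in the right direction,
    sorted so the first expected arrival comes first."""
    usable = [o for o in options
              if origin in o["stops"] and destination in o["stops"]
              and o["stops"].index(origin) < o["stops"].index(destination)]
    usable.sort(key=lambda o: o["departs_in"])
    return [o["line"] for o in usable]

options = [
    {"line": "B", "stops": ["Centre", "North Sq", "Airport"], "departs_in": 3},
    {"line": "C", "stops": ["Centre", "North Sq"],            "departs_in": 5},
    {"line": "E", "stops": ["Centre", "North Sq", "Harbour"], "departs_in": 8},
    {"line": "D", "stops": ["South", "Centre"],               "departs_in": 1},
]
# Yields the advice "Take a B, C or E train towards North Sq; B should come first."
advice = equivalent_departures(options, "Centre", "North Sq")
```

Line D is excluded even though it stops at Centre, because it runs the wrong way for this leg, which is the kind of direction-awareness the post asks for.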
A later stage product might let you specify a destination and a time, and then it will buzz you when it’s time to start walking, and guide you there, through a path that might include walking, bike rides, transit lines and even carshare or short cab rides for a fast, cheap trip with minimal waiting, even when the transit isn’t all that good.
Submitted by brad on Mon, 2014-06-09 19:48.
I’m in the home stretch of a long international trip — photos to follow — but I speak tomorrow at Lincoln Center on how computers (and robocars) will change the worlds of finance. In the meantime, Google’s announcement last month has driven a lot of news in the Robocar space worthy of reporting.
On the lighter side, this video from the Conan O’Brien show highlights the issues around people’s deep fear of being injured by machines. While the video is having fun, this is a real issue that will dominate the news when the first accidents and injuries happen. I cover that in detail in my article about accidents but the debate will be a major one.
Nissan announced last year that it would sell self-driving cars in 2020. Now that Tesla has said 2016, Google has said civilians will be in their small car within a year and Volvo has said the same will happen in Sweden by 2017, Nissan CEO Carlos Ghosn has said they might do it 2 years earlier.
As various locations rush to put in robocar laws, in Europe they are finally getting around to modifying the Vienna convention treaty, which required a human driver. However, the new modifications, driven by car companies, still call for a steering wheel that a driver can use to take over (as do some of the US state laws.) These preclude Google’s new design, but perhaps with a bit of advance warning, this can be fixed. Otherwise, changing it again will be harder. Perhaps the car companies — none of whom have talked about anything like Google’s car with no controls — will be happy with that.
The urban test course at the University of Michigan, announced not very long ago, is almost set to open — things are moving fast, as they will need to if Michigan is to stay in the race. Google’s new prototype, by the way, is built in Michigan. Google has not said who but common speculation names not a major car company, but one of their big suppliers.
The Ernst & Young auto research lab (in Detroit) issued a very Detroit style forecast for autonomous vehicles which said their widespread use was 2 decades away. Not too surprising for such a group. Consultants are notoriously terrible at predictions for exponential technology. Their bad smartphone predictions are legendary (and now erased, of course.) A different study predicts an $87 billion market — but the real number is much larger than that.
This article where top car designers critique Google’s car illustrates my point from last week about how people with car company experience are inclined to just not get it. But at the same time some of the automotive press do get it.
Submitted by brad on Sat, 2014-06-07 15:20.
25 years ago, on June 8, 1989, I announced to the world my new company ClariNet, which offered for sale an electronic newspaper delivered over the internet. This has the distinction, as far as I know, of being the first business created to use the internet as a platform, what we usually call a “dot-com” company.
I know it was the first because up until that time, the internet’s backbone was run by the National Science Foundation and it had a policy disallowing commercial use of the network. In building ClariNet, I found a way to hack around those rules and sell the service. Later, the rules would be relaxed and the flood of dot-coms came on a path of history that changed the world.
A quarter of a century seems like an infinite amount of time in internet-years. Five years ago, for the 20th anniversary, I decided to write up this history of the company, how I came to found it, and the times in which it was founded.
Read The history of ClariNet.com and the dawn of internet based business
There’s not a great deal to add in the 5 years since that prior anniversary.
- Since then, USENET’s death has become more complete. I no longer use it, and porn, spam and binaries dominate it now. Even RSS, which was USENET’s successor — oddly with some inferiorities — has begun to fall from favour.
- The last remnants of ClariNet, if they exist at Yellowbrix, are hard to find, though that company exists and continues to sell similar services.
- Social media themselves are showing signs of shrinking. Publishing and discussing among large groups just doesn’t scale past a certain point and people are shrinking their circles rather than widening them.
- We also just saw the 25th anniversary of the Web itself a few months ago, or at least its draft design document. ClariNet’s announcement in June was just that — work had been underway for many months before that, and product would not ship until later in the summer.
Many readers of this blog will not have seen this history before, and 25 years is enough of an anniversary to make it worth re-issuing. There is more than just the history of ClariNet in there. You will also find the history of other early internet businesses, my own personal industry history that put me in the right place at the right time with these early intentions, and some anecdotes from ClariNet’s life and times.
Submitted by brad on Tue, 2014-06-03 05:41.
I’ve been on the road for the last month, and there’s more to come. Right now I’m in Amsterdam for a few hours, to be followed by a few events in London, then on to New York for Singularity U’s Exponential Finance conference, followed by the opening of our Singularity University Graduate Studies Program for 2014. (You can attend our opening ceremony June 16 by getting tickets here — it’s always a good crowd)
But while on the road, let me lament about what’s missing from so many of the hotel rooms and AirBnB apartments I’ve stayed in, which is an understanding of what digital folks, especially digital couples need.
Yes, rooms are small, especially in Europe, and one thing they often sacrifice is desk space. In particular, desk space for two people with laptops. This is OK if you’ve ditched the laptop for a tablet, but many rooms barely have desk space enough for one, or the apartments have no desk, only the kitchen table. And some only have one chair.
We need desk space, and we need a bit of room to put things, and we need it for two. Of course, there should be plugs at desk level if you can — the best thing is to have a power strip on the desk, so we can plug in laptops, camera chargers, phone chargers and the like.
Strangely, at least half the hotels I stay in have a glass tabletop for their desk. The one surface my mouse won’t work on. Yes, I hate the trackpad so I use a mouse if I am doing any serious computing. I can pull over a piece of paper or book to be a mousepad, but this is silly.
Really sweet, but rarely seen, is an external monitor. Nice 24” computer monitors cost under $150 these days, so there should be one — or two. And there should be cables (HDMI and VGA at least) because while I bring cables sometimes, you never know which cable the monitor in a room will use. Sometimes you can plug into the room’s TV — but sometimes it has been modified so you can’t. It’s nice if you can, though a TV on the wall is not a great monitor for working. It’s OK for watching video if I want to.
For extra credit, perhaps the TV can support some of the new video over wireless protocols, like Miracast, Widi or Apple’s TV protocol, to make it easy to connect devices, even phones and tablets.
Sadly, there is no way yet for you to provide me with a keyboard or mouse in the room that I could trust.
Though when it comes to phone chargers, many use their phone as their alarm clock, and so they want it by the bed. There should be power by the bed, and it should not require you to unplug the bedside lamp or clock radio.
Another nice touch would be plugs or power strips with the universal multi-socket that accepts all the major types of plugs. Sure, I always have adapters but it’s nice to not have to use them. My stuff is all multi-voltage of course.
Most hotel rooms come with a folding luggage stand, which is good. But they should really come with two. Couples and families routinely have 3 bags. A hotel should know that if you’ve booked a double room, you probably want at least two. Sometimes I have called down to the desk to get more and they don’t have any more — just one in each room. If you are not going to put them in the room, the bell desk should be able to bring up any you need.
Free Wifi (and wired) without a goddamned captive portal
I’ve ranted about this before, but captive portals which hijack your browser — thus breaking applications and your first use — are still very common. Worse, some of them reset every time you turn off your computer — or your phone, and you have to re-auth. Some portals are there to charge you, but I find that not an excuse any more. When hotels charge me for internet, I ask them how much the electricity and water are in the room. It’s past time that hotels that charge for internet just have that included in the online shopping sites like Kayak and Tripadvisor when you search for hotels. Or at the least I should be able to check a box for “show me the price with internet, and any taxes and made-up resort fees” so I can compare the real price.
But either way, the captive portals break too many things. (Google Glass can’t even work at all with them.) Cheap hotels give free wifi with no portal — this is a curse of fancier hotels. If you want to sell premium wifi, so be it — but let me log into the basic one with no portal, and then I can go to a URL where I can pay for the upgrade. If you insist, give me really crappy 2G-speed internet with no portal, so that things at least work, though slowly, until I upgrade.
If you need a password, use WPA2. You can set up a server so people enter their room number and name with WPA2-Enterprise. You can meet certain “know your user” laws that force these portals on people that way.
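As a rough sketch of what that setup looks like, here is a hypothetical hostapd configuration fragment for a WPA2-Enterprise network. The interface name, SSID, RADIUS server address and shared secret are all placeholder assumptions; the RADIUS server itself (FreeRADIUS, say) would hold the room-number-and-name credentials.

```ini
# Hypothetical hostapd.conf fragment for portal-free hotel wifi.
# WPA2-Enterprise (802.1X) hands authentication to a RADIUS server,
# so guests log in with room number and name, not a captive portal.
interface=wlan0
ssid=Hotel-Guest
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-EAP
ieee8021x=1
# RADIUS server holding guest credentials (placeholder address and secret)
auth_server_addr=10.0.0.5
auth_server_port=1812
auth_server_shared_secret=replace-with-real-secret
```

Because the credentials are checked at the link layer, every application on the device works from the first packet, with no browser hijack.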
And have wired internet — with a cable — if you can. At a desk, it’s more reliable and has no setup programs and needs no password or portal at all.
Submitted by brad on Sun, 2014-06-01 05:15.
It’s not too surprising that the release of images of Google’s prototype robocar have gotten comments like this:
Revolutionary Tech in a Remarkably Lame Package from Wired
A Joy Ride in Google’s Clown Car says Re/Code
I’ve also seen comparisons to the Segway, and declarations that limited to 25 mph, this vehicle won’t get much adoption or affect the world much.
Google’s own video starts with a senior expressing that it’s “cute.”
I was not involved in the specifics of design of this vehicle, though I pushed hard as I could for something in this direction. Here’s why I think it’s the right decision.
First of all, this is a prototype. Only 100 of this design will be made, and there will be more iterations. Google is all about studying, learning and doing it again, and they can afford to. They want to know what people think of this, but are not scared if they underestimate it at first.
Secondly, this is what is known as a “Disruptive Technology.” Disruptive technologies, as described in the Silicon Valley bible “The Innovator’s Dilemma,” are technologies that seem crazy and inferior at first. They meet a new need, not well understood by the incumbent big companies. Those big companies don’t see it as a threat — until years later, they are closing their doors. Every time a disruptive technology takes over, very few of the established players make it through to the other side. This does not guarantee that Google will dominate or crush those companies, or that everything that looks silly eventually wins. But it is a well established pattern.
This vehicle does not look threatening — not to people on the street, and not to existing car companies and pundits who don’t get it. Oh, there are many people inside those car companies who do get it, but the companies are incapable of getting it in their bones. Even when their CEOs get it, they can’t steer the company 90 degrees — there are too many entrenched forces in any large company. The rare exception are founder-led companies (like Google and Facebook and formerly Apple and Microsoft) where if the founder gets it, he or she can force the company to get it.
Even large companies who read this blog post and understand it still won’t get it, not most of the time. I’ve talked to executives from big car companies. They have a century of being car companies, and knowing what that means. Google, Tesla and the coming upstarts don’t.
One reason I will eventually move away from my chosen name for the technology — robocar — along with the other popular names like “self-driving car” is that this future vehicle is not a car, not as we know it today. It is no more a “driverless car” than a modern automobile is a horseless carriage. 100 years ago, the only way they could think of the car was to notice that there was no horse. Today, all many people notice about robocars is that no human is driving. This is the thing that comes after the car.
Some people expected the car to look more radical. Something like the Zoox or ATMBL by Mike and Maaike (who now work in a different part of Google.) Cars like those will come some day, but are not the way you learn. You start simple, and non threatening, and safe. And you start expensive — the Google prototype still has the very expensive Velodyne LIDAR on it, but trust me, very soon LIDAR is going to get a lot less expensive.
The low speed is an artifact of many things. You want to start safe, so you limit where you go and how fast. In addition, US law has a special exception from most regulations for electric vehicles that can’t go more than 25mph and stick to back roads. Some may think that’s not very useful (turns out they are wrong, it has a lot of useful applications) but it’s also a great way to start. Electric vehicles have another big advantage in this area. Because you can reverse electric motors, they can work as secondary brakes in the event of failure of the main brake system, and can even be secondary steering in case of failure of the steering system at certain speeds. (Google has also said that they have two steering motors in order to handle the risk of failure of one steering motor.) Electric vehicles are not long-range enough to work as taxis in a large area, but they can handle smaller areas just fine.
If you work in the auto industry, and you looked at this car and saw a clown car, that’s a sign you should be afraid.
Submitted by brad on Wed, 2014-05-28 00:40.
In what is the biggest announcement since Google first revealed their car project, it has announced that they are building their own car, a small low-speed urban vehicle for two with no steering wheel, throttle or brakes. It will act as a true robocar, delivering itself and taking people where they want to go with a simple interface. The car is currently limited to 25mph, and has special pedestrian protection features to make it even safer. (I should note that as a consultant to that team, I helped push the project in this direction.)
This is very different from all the offerings being discussed by the various car companies, and is most similar to the Navia which went on sale earlier this year. The Navia is meant as a shuttle, and up to 12 people stand up in it while it moves on private campus roads. It only goes 20 km/h rather than the 40 km/h of Google’s new car. Google plans to operate their car on public roads, and will have non-employees in test prototype vehicles “very soon.”
This is a watershed moment and an expression of the idea that the robocar is not a car but the thing that comes after the car, as the car came after the horse. Google’s car is disruptive, it seems small and silly looking and limited if you look at it from the perspective of existing car makers. That’s because that’s how the future often looks.
I have a lot to say about what this car means, but at the same time, very little because I have been saying it since 2007. One notable feature (which I was among those pushing for inside) is a soft cushion bumper and windshield. Clearly the goal is always to have the car never hit anybody, but it can still happen because systems aren’t perfect and sometimes people appear in front of cars so quickly that it is physically impossible to stop. In this situation, cars should work to protect pedestrians and cyclists. Volvo and Autoliv have an airbag that inflates over the windshield pillars, which are the part that most often kills a cyclist. Of the 1.2 million people killed in car accidents each year, close to 500,000 are pedestrians, mostly in the lower income nations. These are first steps in protecting them as well as the occupants of the car.
The car has 2 seats (side-by-side) and very few controls. It is a prototype, being made at first in small quantities for testing.
More details, and other videos, including one of Chris Urmson giving more details, can be found at the new Google Plus page for the car. Also of interest is this interview with Chris.
I’m in Milan right now about to talk to Google’s customers about the car — somewhat ironic — after 4 weeks on the road all over Europe. 2 more weeks to go! I will be in Copenhagen, Amsterdam, London and NYC in the coming weeks, after having been in NYC, Berlin, Krakow, Toronto, Amsterdam, Copenhagen, Oslo, the fjords and Milan. In New York, come see me at Singularity U’s Exponential Finance conference June 10-11.
Submitted by brad on Mon, 2014-04-28 12:44.
News from Google’s project is rare, but today on the Google blog they described new achievements in urban driving and reported a number of 700,000 miles. The car has been undergoing extensive testing in urban situations, and Google let an Atlantic reporter get a demo of the urban driving which is worth a read.
You will want to check out the new video demo of urban operations:
While Google speakers have been saying for a while that their goal is a full-auto car that does more than the highway, this release shows how much work towards that goal is already underway. It is the correct goal, because this is the path to a vehicle that can operate vacant, and deliver, store and refuel itself.
Much of the early history of development has been on the highway. Most car company projects have a focus on the highway or traffic jam situations. Google’s cars were, in years past, primarily seen on the highways. In spite of the speed, highway driving is actually a much easier task. The traffic is predictable, and the oncoming traffic is physically separated. There are no cyclists, no pedestrians, no traffic lights, no stop signs. The scariest things are on-ramps and construction zones. At low speed the highway could even be considered a largely solved problem by now.
Highway driving accounts for just over half of our miles, but of course not of our hours. A full-auto car on the highway delivers two primary values: fewer accidents (once the safety is truly delivered) and productive time given back to the highway commuter and long distance traveller. This time is of no small value, of course. But the big values to society as a whole come in the city, and so this is the right target. The “super-cruise” products which require supervision do not give back this time, and it is debatable whether they deliver the safety. Their prime value is a more relaxing driving experience.
Google continues to lead its competitors by a large margin. (Disclaimer: They have been a consulting client of mine.) While Mercedes — which is probably the most advanced of the car companies — has done an urban driving test run, it is not even at the level that Google was doing in 2010. It is time for the car makers to get very afraid. Major disruption is coming to their industry. The past history of high-tech disruptions shows that very few of the incumbent leaders make it through to the other side. If I were one of the car makers who doesn’t even have a serious project on this, I would be very afraid right now.
Submitted by brad on Mon, 2014-04-21 13:24.
Many states and jurisdictions are rushing to write laws and regulations governing the testing and deployment of robocars. California is working on its new regulations right now. The first focus is on testing, which makes sense.
Unfortunately the California proposed regulations and many similar regulations contain a serious flaw:
The autonomous vehicle test driver is either in immediate physical control of the vehicle or is monitoring the vehicle’s operations and capable of taking over immediate physical control.
This is quite reasonable for testing vehicles based on modern cars, which all have steering wheels and brakes with physical connections to the steering and braking systems. But it presents a problem for testing delivery robots or deliverbots.
Delivery robots are world-changing. While they won’t and can’t carry people, they will change retailing, logistics, the supply chain, and even going to the airport in huge ways. By offering very quick delivery of every type of physical goods — less than 30 minutes — at a very low price (a few pennies a mile) and on the schedule of the recipient, they will disrupt the supply chain of everything. Others, including Amazon, are working on doing this with flying drones, but for delivery of heavier items and efficient delivery, the ground is the way to go.
While making fully unmanned vehicles is more challenging than ones supervised by their passenger, the delivery robot is a much easier problem than the self-delivering taxi for many reasons:
- It can’t kill its cargo, and thus needs no crumple zones, airbags or other passive internal safety.
- It still must not hurt people on the street, but its cargo is not impatient, and it can go more slowly to stay safer. It can also pull to the side frequently to let people pass if needed.
- It doesn’t have to travel the quickest route, and so it can limit itself to low-speed streets it knows are safer.
- It needs no windshield or wheel, and can be small, light and very inexpensive.
A typical deliverbot might look like little more than a suitcase-sized box on 3 or 4 wheels. It would have sensors, of course, but little more inside than batteries and a small electric motor. It probably will be covered in padding or pre-inflated airbags, to ensure it does the least damage possible if it does hit somebody or something. At a weight of under 100 lbs, with a speed of only 25 km/h and balloon padding all around, it probably couldn’t kill you even if it hit you head on (though that would still hurt quite a bit).
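To put rough numbers on that claim, here is a back-of-the-envelope kinetic energy comparison, using the assumed figures above (100 lbs, about 45 kg, at 25 km/h) against a typical sedan at city speed — the exact figures are illustrative, not from any real design:

```python
def kinetic_energy_joules(mass_kg, speed_kmh):
    """KE = 1/2 * m * v^2, with speed converted from km/h to m/s."""
    v = speed_kmh / 3.6
    return 0.5 * mass_kg * v ** 2

# Deliverbot: ~100 lbs (about 45 kg) at 25 km/h.
bot_ke = kinetic_energy_joules(45, 25)      # roughly 1.1 kJ

# Typical sedan: ~1500 kg at a 50 km/h city speed.
car_ke = kinetic_energy_joules(1500, 50)    # roughly 145 kJ

print(f"deliverbot: {bot_ke:.0f} J, sedan: {car_ke:.0f} J, "
      f"ratio: {car_ke / bot_ke:.0f}x")
```

The sedan carries over a hundred times the kinetic energy, which is why a padded, lightweight deliverbot is in a different danger class entirely.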
The point is that this is an easier problem, and so we might see development of it before we see full-on taxis for people.
But the regulations do not allow it to be tested. The smaller ones could not fit a human, and even if you could get a small human inside, they would not have the passive safety systems in place for that person — something you want even more in a test vehicle. They would need to add physical steering and braking systems which would not be present in the full drive-by-wire deployment vehicle.
Testing on real roads is vital for self-driving systems. Test tracks will only show you a tiny fraction of the problem.
One way to test the deliverbot would be to follow it in a chase car. The chase car would observe all operations, and have a redundant, reliable radio link to allow a person in the chase car to take direct control of any steering or brakes, bypassing the autonomous drive system. This would still be drive-by-wire(less) though, not physical control.
These regulations also affect testing of full drive-by-wire vehicles. Many hybrid and electric cars today are mostly drive-by-wire in ordinary operations, and the new Infiniti Q50 features the first steer-by-wire. However, the Q50 has a clutch which, in the event of system failure, physically reconnects the steering column to the wheels, and in the hybrids, while the first part of brake pedal travel does DBW regenerative braking, pressing all the way down gives you a physical hydraulic connection to the brakes. A full DBW car, one without any steering wheel like the Induct Navia, can’t be tested on regular roads under these regulations. You could put a DBW steering wheel in the Navia for testing, but it would not be physical control.
Many interesting new designs must be DBW. Things like independent control of the wheels (as on the Nissan Pivo) and steering through differential electric motor torque can’t be done through physical control. We don’t want to ban testing of these vehicles.
Yes, teams can test regular cars and then move their systems down to the deliverbots. This bars the deliverbots from coming first, even though they are easier, and allows only the developers of passenger vehicles to get in the game.
So let’s modify these regulations to exempt vehicles which either can’t safely carry a person or are fully drive-by-wire, and just demand a highly reliable DBW system the safety driver can use.
Submitted by brad on Sun, 2014-04-20 11:06.
I wrote earlier on how we might make it easier to find a lost jet and this included the proposal that the pingers in the black boxes follow a schedule of slowing down their pings to make their batteries last much longer.
In most cases, we’ll know where the jet went down and even see debris, and so getting a ping every second is useful. But if it’s been a week, something is clearly wrong, and having the pinger last much longer becomes important. It should slow down, eventually dropping to intervals as long as one minute, or even an hour, to keep it going for a year or more.
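A sketch of such a slowdown schedule, with assumed numbers (a battery budget equal to the standard 30 days of one-per-second pings, and phase lengths I made up for illustration), shows that energy-wise, slowing down buys far more than a year:

```python
# Energy budget expressed in "pings": assume the battery holds enough
# for the standard 30 days of one-ping-per-second operation.
BUDGET_PINGS = 30 * 86400

# Hypothetical schedule: (days_in_phase, seconds_between_pings).
# The final phase (days=None) runs until the battery is exhausted.
SCHEDULE = [(7, 1), (7, 10), (16, 60), (None, 3600)]

def days_of_operation(budget=BUDGET_PINGS, schedule=SCHEDULE):
    used = 0.0   # pings' worth of energy spent so far
    days = 0.0
    for phase_days, interval in schedule:
        if phase_days is None:
            # Spend whatever energy remains at this (slow) rate.
            days += (budget - used) * interval / 86400
            break
        pings = phase_days * 86400 / interval
        if used + pings > budget:
            # Battery dies partway through this phase.
            days += (budget - used) * interval / 86400
            return days
        used += pings
        days += phase_days
    return days

print(f"{days_of_operation():.0f} days of pinging, vs. 30 at a constant rate")
```

Real batteries self-discharge, so the hourly phase wouldn’t literally last for decades, but the point stands: almost all the energy goes in the first frantic week, and slowing down afterward costs little while keeping the pinger alive for a year or more.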
But it would be even more valuable if the pinger was precise about when it pinged. It’s easy to get very accurate clocks these days, either sourced from GPS chips (which cost $5) or just synced on occasion from other sources. Unlike GPS transmitter clocks, which must sync to the nanosecond, here even a second of drift is tolerable.
The key is that the receiver who hears a ping must be able to figure out when it was sent, because if they can do that they can get the range, and even a very rough range is magic when it comes to finding the box. Just 2 received pings from different places with range will probably find the box.
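The geometry behind that claim: each ranged ping puts the box on a circle around the listener, and two circles intersect in at most two points. A minimal flat-earth sketch in local coordinates (metres); the function name and interface are mine, not from any real search tool:

```python
import math

def range_circle_intersections(p0, r0, p1, r1):
    """Intersect two range circles around listener positions p0 and p1.
    Returns 0 or 2 candidate positions for the pinger."""
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []  # circles don't intersect (or centres coincide)
    # Distance from p0 to the chord joining the intersection points.
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(max(r0**2 - a**2, 0.0))
    mx = x0 + a * (x1 - x0) / d   # foot of the chord on the p0-p1 line
    my = y0 + a * (y1 - y0) / d
    ox = h * (y1 - y0) / d        # offset perpendicular to that line
    oy = h * (x0 - x1) / d
    return [(mx + ox, my + oy), (mx - ox, my - oy)]
```

Two candidates remain, so a third ping — or simply knowing which side of the ships the search zone lies on — picks the right one.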
I presume the audio signal is full of noise and you can’t encode data into it very well, but you can vary the interval between pings. For example, while a pinger might bleep every second, every 30 seconds it could ping twice in a second. Any listener who hears 30 seconds of pings would then know the pinger’s clock and when each ping was sent. There could be other variations in the intervals to help pin the time down even better, but it’s probably not needed. In 30 seconds, sound travels 28 miles underwater, and it’s unlikely you would hear the ping from that far away.
When the ping slows down as battery gets lower, you don’t need the variation any more, because you will know that pings are sent at precise seconds. If pings are down to one a minute, you might hear just one, but knowing it was sent at exactly the top of the minute, you will know its range, at least if you are within 50 miles.
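The range computation itself is trivial once send times are known. A sketch, assuming a clock synced with the pinger and the nominal ~1500 m/s speed of sound in seawater (which in reality varies with depth and temperature):

```python
import math

SOUND_SPEED = 1500.0  # m/s in seawater, approximately

def candidate_ranges(receive_time, interval=60.0, max_range_m=80_000):
    """Given a receive timestamp (seconds, on a clock synced with the
    pinger) and the fact that pings leave at exact multiples of
    `interval`, return the possible ranges in metres."""
    max_travel = max_range_m / SOUND_SPEED
    send = math.floor(receive_time / interval) * interval
    ranges = []
    while receive_time - send <= max_travel:
        ranges.append((receive_time - send) * SOUND_SPEED)
        send -= interval
    return ranges
```

With a 60-second interval and an 80 km audible limit, sound travels for at most about 53 seconds before arriving, so there is only ever one candidate: the range is unambiguous, exactly as described above.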
Of course things can interfere here — I don’t know if sound travels with such reliable speed in water, and of course, waves bounce off the sea floor and other things. It is possible the multipath problem for sound is much worse than I imagine, making this impossible. Perhaps that’s why it hasn’t been done. This also adds some complexity to the pinger which they may wish to avoid. But anything that made the pings distinctive would also allow two ships tracking the pings to know they had both heard the same particular ping and thus solve for the location of the pinger. Simple designs are possible.
Two way pinger
If you want to get complex, of course, you could make the pinger smart and have it listen for commands from outside. Listening takes much less power, and a smart pinger could know not to bother pinging if it can’t hear the ship searching for it. Ships can ping with much more volume, and be sure to be heard. There is a risk that a pinger with a broken microphone would not realize it is broken, but otherwise a pinger should sit silent until it hears request pings from ships, and answer those. It could answer with much more power, and thus more range, because it would only ping when commanded to. It could sit under the sea for years until it heard a request from a passing ship or robot. (Like the robots made by my friends at Liquid Robotics, which cruise unmanned at 2 knots using wave power and could spend years searching an area.)
The search for MH370 has cost hundreds of millions of dollars, so this is something worth investigating.
Other more radical ideas might be a pinger able to release small quantities of radioactive material after waiting a few weeks without being found. Or anything else that can be detected in extremely minute concentrations. Spotting those chemicals could be done sampling the sea, and if found soon enough — we would know exactly when they would be released — could help narrow the search area.
Track the waves
I will repeat a new idea I added to the end of the older post. As soon as the search zone is identified, a search aircraft should drop small floating devices with radio transmitters strong enough to find them again at modest range. Drop them as densely as you can, which might mean every 10 miles or every 100 miles, but try to get coverage of the area.
Then, if you find debris from the plane, do a radio hunt for the nearest such beacon. When you find it, or others, you can note their serial number, know where they were dropped, and thus get an idea of where the debris might have come from. Make them fancier, broadcasting their GPS location or remembering it for a dump when re-collected, and you could build a model of motion on the surface of the sea, and thus have a clue of how to track debris back to the crash site. In this case, it would have been a long time before the search zone was located, but in other cases it will be known sooner.
Reporting has not been clear, but it appears that the ships which heard the pings did so in the very first place they looked. With a range of only a few miles, that seems close to impossibly good luck. If it turns out they did hear the last gasp of the black boxes, this suggests an interesting theory.
The theory would be that some advanced intelligence agencies have always known where the plane went down, but could not reveal that because they did not want to reveal their capabilities. A common technique in intelligence, when you learn something important by secret means, is to engineer another way to learn that information, so that it appears it was learned through non-secret means or luck. In World War II, for example, when codebreakers learned about enemy troop movements, a “lucky” recon plane would “just happen” to fly over the area, to explain how you knew where they were. Too much good luck, though, and the enemy might get suspicious, and might learn you have broken their crypto.
In this case the luck is astounding. Yes, it is the central area predicted from the ping data analyzed by Inmarsat, but that analysis was never so precise. In this case, though, all we might discern (if we believe this theory at all) is that maybe, just maybe, some intelligence agency among the countries searching has some hidden ways to track aircraft. That’s not really all that surprising as a bit of news, though.
Let’s hope they do find what’s left — but if they do, it seems likely to me it happened because the spies know things they aren’t telling us.