Submitted by brad on Mon, 2014-07-14 13:59.
It’s a big week for Robocar conferences.
In Berkeley, yesterday I attended and spoke at the “Robotics: Science and Systems” conference which had a workshop on autonomous vehicles. That runs to Wednesday, but overlapping and near SF Airport is the Automated Vehicles Symposium — a merger of the TRB (Transportation Research Board) and AUVSI conferences on the same topic. 500 are expected to attend.
Yesterday’s workshop was pretty good, with even a bit of controversy.
- Ed Olson on more of the lessons from aviation on handoff between automation and manual operation. This keeps coming up as a real barrier to some of the vehicle designs that have humans share the chores with the system.
- Jesse Levinson of Stanford’s team showed some very impressive work in automatic calibration of sensors, and even fusion of LIDAR and camera data, aligning them in real time in spite of movement and latency. This work will make sensors faster, more reliable and make fusion accurate enough to improve perception.
- David Hall, who runs Velodyne, spoke on the history of their sensors, and his plans for more. He repeated his prediction that in large quantities his sensor could cost only $300. (I’m a bit skeptical of that, but it could cost much, much less than it does today.) David made the surprising statement that he thinks we should make dedicated roads for the vehicles. (Surprising not just because I disagree, but because you could even get by without much LIDAR on such roads.)
- Marco Pavone of Stanford showed research they did on taxi models from New York and Singapore. The economics look very good. Dan Fagnant also presented related research assuming an on-demand semi-shared system with pickup stations in every TAZ. It showed minimal vacant miles but also minimal successful rideshare. The former makes sense when it’s TAZ to TAZ (TAZs are around a square mile), but I would have thought there would be more rideshare. The conclusion is that VMT goes up due to empty miles, but that rideshare can partially compensate, though not as much as some might hope.
- Ken Laberteaux of Toyota showed his research on the changing demographics of driving and suburbs. Conclusion: We are not moving back into the city, suburbanization is continuing. Finding good schools continues to drive people out unless they can afford private school or are childless.
The event had a 3-hour lunch break, where most went to watch some sporting event from Brazil. The Germans at the conference came back happier.
Some good technical talks presented worthwhile research:
- Sheng Zhao and a team from UC Riverside showed a method to get cm accuracy in position and even in pose (orientation) from cheap GPS receivers, by using improved math on phase-matching GPS. This could also be combined with cheap IMUs. Most projects today use very expensive IMUs and GPSs, not the cheap ones you find in your cell phone. This work may lead to being able to get reliable data from low cost parts.
- Matthew Cornick and a team from Lincoln Lab at MIT showed very interesting work on using ground-penetrating radar to localize. With GPR, you get a map of what’s below the road — you see rocks and material patterns down several feet. These vary enough, like the cracks and lines on a road, that you can map them, and then find your position in that map — even if the road is covered in snow. While the radar units are bulky today, this offers the potential for operation in snow.
- A team from Toyota showed new algorithms to speed up the creation of the super-detailed maps needed for robocars. Their algorithms are good at figuring out how many lanes there are and when they start and stop. This could make it much cheaper to build the ultramaps needed in an automatic way, with less human supervision.
The legal and policy sessions got more heated.
- Bryant Walker Smith laid out some new proposals for how to regulate and govern torts about the vehicles.
- Eric Feron of Georgia Tech made proposals for how to do full software verification. Today, formally proving and analysing code for correctness takes 0.6 hours per line of code — it’s not practical for the 50 million line (or more) software systems in cars today. Feron argues it can be made cheaper, and should be done. Note that fully half the cost of developing the 787 aircraft was software verification!
The final session, on policy, included:
- Jane Lappin on how DoT is promoting research.
- Steve Shladover on how we’re all way too optimistic on timelines, and that coming up with tests to demonstrate superior safety to humans is very far away, since humans run 65,000 hours between injury accidents.
- Myself on why regulation should keep a light touch, and we should not worry too much about the Trolley Problem — which came up a couple of times.
- Raj Rajkumar of CMU on the success they have had showing the CMU/GM car to members of congress.
Now on to the AVS tomorrow.
Submitted by brad on Sat, 2014-07-12 11:29.
In the last few months, I have found myself asked many times about a concept for solar roadways. Folks from Idaho proposing them have gotten a lot of attention with FHWA funding, a successful crowdfunding and even an appearance at Solve for X. Their plan is hexagonal modules with strong glass, with panels and electronics underneath, LED lights, heating elements for snow country and a buried conduit for power cables, data and water runoff. In addition, they hope for inductive charging plates for electric vehicles.
This idea has come up before, but since these folks built a small prototype, they generated tremendous attention. But they haven’t spoken at all about the cost, and that concerns me, because with all energy projects, the financial math is 99% of the issue. That’s true of infrastructure projects as well.
There are two concepts here. The first is: can you make a cost-effective manufactured road panel? Roads are quite expensive today, but they are just asphalt, gravel and other industrial materials whose cost is measured in the range of $50 to $100 per ton. A chart from Florida suggests that basic rural asphalt roads cost about $9 per square foot, all-in, including labour and grading (it’s flat there), and about $4 per square foot for milling and resurfacing. Roadway modules could be factory made (by robots) but would still require labour to install, and I think it is a very tall order for a manufactured surface not to cost a great deal more, even an order of magnitude more, than plain road. Paved roads need maintenance, and that’s expensive. It is proposed that these panels would be cheaper to maintain as you just swap them out, but I am again skeptical of this math. Indeed, one of the major barriers to proposals for electric roads (which can charge cars) is that putting anything in the road makes it prohibitively more expensive to maintain.
I won’t say this is impossible — but it’s all about the math. We need to see math that would show that the modular manufactured pavement approach can compete. I’m happy for that math to include future technologies, like robot assembly and placement (though realize that we’ll probably see road construction with simpler materials also done by robots even sooner.) Let’s see the numbers, how cheap can it get?
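As a starting point, here is a rough version of that math in Python, using the per-square-foot figures quoted above; the two-lane, 12-foot-lane geometry is my own illustrative assumption:

```python
# Back-of-envelope road cost math. The $9/sq ft all-in figure and the
# 10x multiplier come from the discussion above; the 24-foot two-lane
# road width is an illustrative assumption.

FEET_PER_MILE = 5280
lane_width_ft = 12   # typical US lane width (assumption)
lanes = 2

area_sqft_per_mile = FEET_PER_MILE * lane_width_ft * lanes
asphalt_cost = area_sqft_per_mile * 9    # $9/sq ft, all-in
modular_10x = area_sqft_per_mile * 90    # if modules cost 10x as much

print(f"Area per mile: {area_sqft_per_mile:,} sq ft")
print(f"Asphalt, all-in: ${asphalt_cost / 1e6:.2f}M per mile")
print(f"Modular at 10x:  ${modular_10x / 1e6:.2f}M per mile")
```

Even before adding panels or electronics, a 10x surface cost turns a roughly $1.1M mile of rural road into an $11M one; that is the gap the modular approach would have to close.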
All of this is without the solar panels inside (or the electronics), because the solar panels have their own math. The only synergy is this: if the modular roadway can be made so that it costs only a bit more than other approaches, it offers us “free land” to put the panels on, and it’s connected land in long strips to run power wires.
How valuable is free land? Well, cropland in the USA costs an average of about 10 cents per square foot. 23 cents in California. 3 cents/square foot in the rural west. Much more, of course, in urban places. The land is not that important, so the other value comes from having a nice, manufactured place in which to put solar panels.
Today solar panels are still costly. They are just getting down (primarily thanks to cheap Chinese money) to our grid price. Trends suggest they will get lower and become cost effective as a variable source of power. But until they get really, really cheap, you want to use them most efficiently.
To use solar panels at their best, you don’t want to lay them flat (except in the tropics) but rather tilt them just a bit below the angle of your latitude. Conventional wisdom also points them south, though it’s actually better for the grid and most people’s power demands if you point them south-west, losing a few percent of their output but getting more of it to match peak demand. Putting them flat costs you 20 to 30% of their output. (You can also have them motorized and gain even more, but it’s usually not cost-effective, and will become less cost-effective as panels get cheaper and motors don’t.)
To use solar panels at their best, you also want to put them where it’s very sunny. Finally you want to first put them where the local power comes from coal. When you have gotten rid of most of the coal, you can start putting them elsewhere. You can put panels in less sunny places which have power from hydro, nuclear or natural gas, but you’re really wasting your money. The ideal places are Arizona and New Mexico, with tons of sun and lots of coal. And lots of cheap, fairly low-value land.
To be fair, the biggest cost of the panels will soon be the hardware they are mounted in, along with the wires and electronics to connect them, and so perhaps these road modules could compete by being cheap hardware for that. But it seems not too likely.
In cities, rooftops provide another source of free land, much of it slanted about right and pointed in roughly the right direction. With lower cost than tearing up roads. But to be fair, right now one of the bigger cost elements is getting permits to do the construction and electrical work. Roads are far from bureaucracy-free, but at least it scales — you get permits for a big project all at once, not one house at a time. But we can solve that problem for houses if we really want to as well.
So my challenge to the solar roadway team is to show us the math. No, we don’t need to see what it cost to make your prototypes. I am sure they are very expensive, but that’s beside the point. I want to see a plan for how low the cost can go in theory, even assuming future technologies. And compare that to how low the cost for the alternatives can go in theory. And then factor in how things don’t get to that theoretical point due to bureaucracy, unions and other practicalities. Compare panels in the road to panels by the side of the road, tilted and not being driven over. Look at what paved roads cost in practice to what they could cost in theory to get an idea of how close you can actually get, or come up with a really convincing reason why one approach is immune from the problems of another.
And if that math says yes, go at it. But if it doesn’t, focus on where the math tells you to go.
Submitted by brad on Sat, 2014-06-28 10:47.
Everybody knows about bitcoin, but fewer know what goes on under the hood. Bitcoin provides the world a trustable ledger for transactions without trusting any given party such as a bank or government. Everybody can agree with what’s in the ledger and what order it was put there, and that makes it possible to write transfers of title to property — in particular the virtual property called bitcoins — into the ledger and thus have a money system.
Satoshi’s great invention was a way to build this trust in a decentralized way. Because there are rewards, many people would like to be the next person to write a block of transactions to the ledger. The Bitcoin system assures that the next person to do it is chosen at random. Because the winner is chosen at random from a large pool, it becomes very difficult to corrupt the ledger. You would need 6 people, chosen at random from a large group, to all be part of your conspiracy. That’s next to impossible unless your conspiracy is so large that half the participants are in it.
How do you win this lottery to be the next randomly chosen ledger author? You need to burn computer time working on a math problem. The more computer time you burn, the more likely it is you will hit the answer. The first person to hit the answer is the next winner. This is known as “proof of work.” Technically, it isn’t proof of work, because you can, in theory, hit the answer on your first attempt, and be the winner with no work at all, but in practice, and in aggregate, this won’t happen. In effect, it’s “proof of luck,” but the more computing you throw at the problem, the more chances of winning you have. Luck is, after all, an imaginary construct.
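The lottery can be sketched in a few lines of Python. This is a toy illustration of the idea, not real Bitcoin mining (which double-hashes an 80-byte block header against a vastly harder target):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int):
    """Try nonces until the hash falls below the target. More tries
    per second means more chances to win, but any try can succeed."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce, h.hex()
        nonce += 1

nonce, digest = mine(b"example transactions", difficulty_bits=16)
print(f"winning nonce: {nonce}, hash: {digest}")
```

With 16 bits of difficulty the loop wins after about 65,000 tries on average, but nothing stops the very first nonce from winning, which is why, in aggregate, it is really proof of luck.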
Because those who win are rewarded with freshly minted “mined” bitcoins and transaction fees, people are ready to burn expensive computer time to make it happen. And in turn, they assure the randomness and thus keep the system going and make it trustable.
Very smart, but also very wasteful. All this computer time is burned to no other purpose. It does no useful work — and there is debate about whether it inherently can’t do useful work — and so a lot of money is spent on these lottery tickets. At first, existing computers were used, and the main cost was electricity. Over time, special purpose computers (dedicated processors or ASICs) became the only effective tools for the mining problem, and now the cost of these special processors is the main cost, and electricity the secondary one.
Money doesn’t grow on trees or in ASIC farms. The cost of mining is carried by the system. Miners get coins and will eventually sell them, wanting fiat dollars or goods and affecting the price. Markets, being what they are, over time bring closer and closer the cost of being a bitcoin miner and the reward. If the reward gets too much above the cost, people will invest in mining equipment until it normalizes. The miners get real, but not extravagant profits. (Early miners got extravagant profits not because of mining but because of the appreciation of their coins.)
What this means is that the cost of operating Bitcoin is mostly going to the companies selling ASICs, and to a lesser extent the power companies. Bitcoin has made a funnel of money — about $2M a day — that mostly goes to people making chips that do absolutely nothing and fuel is burned to calculate nothing. Yes, the miners are providing the backbone of Bitcoin, which I am not calling nothing, but they could do this with any fair, non-centralized lottery whether it burned CPU or not. If we can think of one.
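The roughly $2M a day figure follows from Bitcoin’s parameters at the time; the exchange rate below is my own assumption for mid-2014:

```python
# Sanity check on the ~$2M/day figure. The block reward and cadence
# are Bitcoin's 2014 parameters; the price is an assumed mid-2014
# exchange rate, not a quoted figure.
block_reward_btc = 25          # subsidy after the 2012 halving
blocks_per_day = 24 * 60 / 10  # one block every ~10 minutes
btc_price_usd = 560            # assumption

daily_usd = block_reward_btc * blocks_per_day * btc_price_usd
print(f"Mining subsidy: ~${daily_usd / 1e6:.1f}M per day (plus fees)")
```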
(I will note that some point out that the existing fiat money system also comes with a high cost, in printing and minting and management. However, this is not a makework cost, and even if Bitcoin is already more efficient doesn’t mean there should not be effort to make it even better.)
Naturally, many people have been bothered by this for various reasons. A large fraction of the “alt” coins differ from Bitcoin primarily in the mining system. The first round of coins, such as Litecoin and Dogecoin, used a proof-of-work system which was much more difficult to solve with an ASIC. The theory was that this would make mining more democratic — people could do it with their own computers, buying off-the-shelf equipment. This has run into several major problems:
- Even if you did it with your own computer, you tended to end up dedicating that computer to mining if you wanted to compete
- Because people already owned hardware, electricity became a much bigger cost component, and that waste of energy is even more troublesome than ASIC buying
- Over time, mining for these coins moved to high-end GPU cards. This, in turn, caused mining to be the main driver of demand for these GPUs, drying up the supply and jacking up the prices. In effect, the high-end GPU cards became like the ASICs — specialized hardware being bought just for mining.
- In 2014, vendors began advertising ASICs for these “ASIC proof” algorithms.
- When mining can be done on ordinary computers, it creates a strong incentive for thieves to steal computer time from insecure computers (i.e. all computers) in order to mine. Several instances of this have already become famous.
The last point is challenging. It’s almost impossible to fix. If mining can be done on ordinary computers, then they will get botted. In this case a thief will even mine at a rate that can’t pay for the electricity, because the thief is stealing your electricity too.
Submitted by brad on Tue, 2014-06-24 16:25.
Five years ago, I posted a rant about the excess of customer service surveys we’re all being exposed to. You can’t do any transaction these days, it seems, without being asked to do a survey on how you liked it. We get so many surveys that we now just reject these requests unless we have some particular problem we want to complain about — in other words, we’re back to what we had with self-selected complaints. The value of surveys is now largely destroyed, and perversely, as the response rates drop and the utility diminishes, that just pushes some companies to push even harder on getting feedback, creating a death spiral.
A great example of this death spiral came a few weeks ago when I rode in an Uber and the driver had a number of problems. So this time I filled out the form to rate the driver and leave comments. Uber’s service department is diligent, and actually read it, and wrote me back to ask for more details and suggestions, which I gave.
That was followed up with:
Hi Brad Templeton,
We’d love to hear what you think of our customer service. It will only take a second, we promise. This feedback will allow us to make sure you always receive the best possible customer service experience in future.
If you were satisfied in how we handled your query, simply click this link.
If you weren’t satisfied in how we handled your ticket, simply click this link.
A survey on my satisfaction with the survey process! Ok, to give Uber some kudos, I will note:
- They really did try to make this one simple, just click a link. Though one wonders, had I clicked that I was unsatisfied, would there have been more inquiry? Of course I was unsatisfied — because they sent yet another survey. The service was actually fine.
- At least they addressed me as “Hi Brad Templeton.” That’s way better than “Dear Brad” like the computer sending the message pretending it’s on a first-name basis with me. Though the correct salutation should be “Dear Customer” to let me know that it is not a personally written message for me. The ability to fill in people’s names in form letters stopped being impressive or looking personal in the 1970s.
This survey-on-a-survey is nice and short, but many of the surveys I get are astoundingly long. They must be designed, one imagines, to make sure nobody who values their time ever fully responds.
Why does this happen? Because we’ve become so thrilled at the ability to get high-volume feedback from customers that people feel it is a primary job function to get that feedback. If that’s your job, then you focus on measuring everything you can, without thinking about how the measurement (and over-measurement) affects the market, the customers and the very things you are trying to measure. Heisenberg could teach these folks a lesson.
To work, surveys must be done on a small sample of the population, chosen in a manner to eliminate bias. Once chosen, major efforts should be made to assure people who are chosen do complete the surveys, which means you have to be able to truthfully tell them they are part of a small sample. Problem is, nobody is going to believe that when your colleagues are sending a dozen other surveys a day. It’s like over-use of antibiotics. All the other doctors are over-prescribing and so they stop working for you, even if you’re good.
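The mechanics of an unbiased small sample are trivial, which makes the overuse even less excusable. A sketch, with made-up customer IDs:

```python
import random

# Survey a small random slice of customers instead of everyone.
# The customer IDs here are invented for illustration.
customers = [f"cust-{i:05d}" for i in range(10_000)]

rng = random.Random(42)                # seeded for repeatability
sample = rng.sample(customers, k=100)  # 1% sample, chosen without bias

print(f"Surveying {len(sample)} of {len(customers)} customers "
      f"({len(sample) / len(customers):.0%})")
```

Everyone outside the sample is left alone, which is exactly what keeps response rates (and trust) high for the few who are asked.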
The only way to stop this is to bring the hammer down from above. People higher up, with a focus on the whole customer experience, must limit the feedback efforts, and marketing professionals need to be taught hard in school and continuing education just why there are only so many they can do.
Submitted by brad on Tue, 2014-06-24 09:45.
Some recent press and talks:
Earlier in June I sat down with “Big Think” for an interview they have titled “Robocars 101” explaining some of the issues around the cars.
I also did a short interview on NPR’s “All Things Considered” not long after Google’s new car was announced. What you might find interesting is how I did it. I was at a friend’s house in Copenhagen and went into a quiet room where they called me on my cell phone. However, I also started a simple audio recorder app on my phone. When we were done, I shared the mp3 of a better sample from the same microphone with them, which they mixed in.
As a result, the interview sounds almost like it was done in-studio instead of over an international cell phone call.
Videos of my talks at Next Berlin and at Dutch Media Future Week 2014 are also up. And a shortened talk at Ontario Centers for Excellence Discovery 2014 in Toronto May 12. There we had the Governor General of Canada as our opening act. :-) That’s just 3 of the 11 events I was at on that trip.
Completely off the Robocar track is a short interview with CNBC where I advise people to invest in Bitcoin related technology, not in bitcoins.
Submitted by brad on Sun, 2014-06-22 20:51.
So far it’s been big players like Google and car companies with plans in the self-driving space. Today, a small San Francisco start-up named Cruise, founded by Kyle Vogt (a founder of the web video site Justin.tv) announces their plans to make a retrofit kit that will adapt existing cars to do basic highway cruise, which is to say, staying in a lane and keeping pace behind other cars while under a driver’s supervision.
I’ve been following Cruise since its inception. This offering has many similarities to the plans of major car companies, but there are a few key differences:
- This is a startup, which can be more nimble than the large companies, and having no reputation to risk, can be bolder.
- They plan to make this as a retrofit kit for a moderate set of existing cars, rather than custom designing it to one car.
They’re so dedicated to the retrofit idea that the Audi A4 they are initially modifying does not even have drive-by-wire brakes like the commonly used hybrid cars. Their kit puts sensors on the roof, and puts a physical actuator on the brake and another physical actuator on the steering wheel — they don’t make use of the car’s own steering motor. They want a kit that can be applied to almost any car the market tells them to target.
They won’t do every car, though. All vendors have a strong incentive to only support cars they have given some solid testing to, so most plans don’t involve retrofit at all, and of course Google has now announced their plans to design a car from scratch. Early adopters may be keen on retrofit.
I rode in the car last week during a demo at Alameda air station, a runway familiar to viewers of Mythbusters. There they set up a course of small orange cones, which are much easier to see than ordinary lane markings, so it’s hard to judge how well the car does on lane markings. It still has rough edges, to be sure, but they don’t plan to sell until next year. In the trial, due to insurance rules, it kept under 40mph, though it handled that speed fine, drifting a bit in wider parts of the “lane.”
On top is an aerodynamic case around a sensor pack which is based on stereo cameras and radar from Delphi. Inside is just a single button in the center arm console to enable and disable cruise mode. You take the car to the lane and push the button.
All stuff we’ve seen before, and not as far along, but the one key difference — being a nimble startup — may make all the difference. Only early adopters will pay the $10,000 for a product where you must (at least for now) still watch the road, but that may be all that is needed.
Submitted by brad on Sun, 2014-06-22 11:30.
On my recent wanderings in Europe, I became quite enamoured by Google’s
latest revision of transit directions. Google has had transit directions for
some time, but they have recently improved them, and linked them in more cities
to live data about where transit vehicles actually are.
The result is not a mere incremental improvement, it’s a game-changing increase
in the utility of decent transit. In cities like Oslo and London, the tool
gives the user the ability to move with transit better than a native. In the
past, using transit, especially buses, as a visitor has always been so frustrating
that most visitors simply don’t use it, in spite of the much lower cost compared
to taxis. Transit, especially when used by an unfamiliar visitor, is slow and
complex, with long waits, missed connections and confusion about which bus
or line to take during shorter connections, as well as how to pay.
Not so any more. With a superhuman ability, your phone directs you to transit stops
you might not figure out from a map, where the right bus usually appears quite quickly.
Transfers are chosen to be quick as well, and directions are given as to which direction to
go, naming the final destination as transit signs often do, rather than the compass direction. It’s optimized by where the vehicles actually are and predicted to be, and this
will presumably get even better.
By making transit “just work” it becomes much more useful, and gives us a taste of the
robocar taxi world. That world is even easier, of course — door to door with no
connections and no need for you to even follow directions. But while Uber also shows us
that world well in user experience, Uber is expensive, as are cabs, while transit is closer
in cost to the anticipated robocar cost of well below $1/mile.
It also helps to have transit systems with passes or contactless pay cards, to avoid the hassles of payment.
Why does this work so well? In the transit-heavy cities, it turns out there are often 2, 3 or even 4 ways to get to your destination via different transit lines and connections. The software is able to pick among them in a way even a native couldn’t; one is often leaving soon, and it finds it for you.
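Under the hood, that choice reduces to an earliest-arrival comparison over the candidate routes. A toy sketch, with invented lines and live departure estimates:

```python
# Picking among roughly equivalent routes using live data. The route
# names and times are invented; a real router would use predicted
# vehicle positions and transfer times.
routes = [
    {"line": "Bus 5",   "departs_in_min": 12, "ride_min": 18},
    {"line": "Tram 17", "departs_in_min": 3,  "ride_min": 22},
    {"line": "Metro A", "departs_in_min": 7,  "ride_min": 15},
]

def arrival(route):
    # Total minutes until arrival: wait for the vehicle, then ride.
    return route["departs_in_min"] + route["ride_min"]

best = min(routes, key=arrival)
print(f"Take {best['line']}: arrives in {arrival(best)} minutes")
```

Note the winner need not be the one leaving soonest; a slightly later but faster line can still arrive first, which is the comparison a visitor can’t easily make in their head.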
In some cities, there is not live data, so it only routes based on schedules. This cuts
the utility greatly. From a user experience standpoint, it is often better to give people
a wait they expect than to do a better job but not give accurate expectations.
What’s clear now is that transit agencies should have done this a lot sooner. Back in the 1980s
a friend of mine built one of the first systems which tracked transit vehicles and gave
you a way to call to see when the bus would come, or in some cases signs on the bus stops.
Nice as those were, they are nothing compared to this. There is not much in this technology
that could not have been built some time ago. In fact, it could have been built even
before the smartphone, with people calling in by voice and saying, “I am at the corner of X and
Y and I need to get to Z” with a human helper. The cost would have actually been worth it
because by making the transit more useful it gets more riders.
That might be too expensive, but all this needed was the smartphone with GPS and a
data connection, and it is good that it has come.
In spite of this praise, there is still much to do.
- Routing is very time dependent. Ask at 1:00 and you can get a very different answer than you get asking at 1:02. And a different one at 1:04. The product needs a live aspect that updates as you walk and time passes.
- The system never figures out you are already on the bus, and so always wants to route you as though you were standing on the road. Often you want to change plans or re-look up options once you are on the vehicle, and in addition, you may want to do other things on the map.
- Due to how rapidly things change, the system also needs to display when multiple options are equivalent. For example, it might say, “Go to the train platform and take the B train northbound.” Then, due to how things have changed, you see a C train show up — do you get on it? Instead, it should say, “Take a B, C or E train going north towards X, Y or Z, but B should come first.”
- For extra credit, this should get smarter and combine with other modes. For example, many cities have bikeshare programs that let you ride a bike from one depot to another. If the system knew about those it could offer you very interesting routings combining bikes and transit. Or if you have your own bike and transit lines allow it on, you could use that.
- Likewise, you could combine transit with cabs, getting a convenient route with low walking but with much lower cab expense.
- Finally, you could also integrate with one-way car share programs like car2go or DriveNow, allowing a trip to mix transit, car, bike and walking for smooth movement.
- Better integration with traffic is needed. If the buses are stuck in traffic, it’s time to tell you to take another method (even cycling or walking) if time is your main constraint.
- Indoor mapping is needed in stations, particularly underground ones. Transit agencies should have beacons in the stations or on the tracks so phones can figure out where they are when GPS is not around. Buses could also have beacons to tell you if you got on the right one.
- The systems should offer an alert when you are approaching your stop. Beacons could help here too. For a while the GPS map has allowed the unfamiliar transit rider to know when to get off, but this can make it even better.
- This is actually a decent application for wearables and things like Google glass, or just a bluetooth earpiece talking in your ear, watching you move through the city and the stations and telling you which way to go, and even telling you when you need to rush or relax.
- In some cities going onto the subway means loss of signal. There, storing the live model for relevant lines in a cache would let the phone still come up with pretty good estimates when offline for a few minutes.
A later stage product might let you specify a destination and a time, and then it will buzz you when it’s time to start walking, and guide you there, through a path that might include walking, bike rides, transit lines and even carshare or short cab rides for a fast, cheap trip with minimal waiting, even when the transit isn’t all that good.
Submitted by brad on Mon, 2014-06-09 19:48.
I’m in the home stretch of a long international trip — photos to follow — but I speak tomorrow at Lincoln Center on how computers (and robocars) will change the worlds of finance. In the meantime, Google’s announcement last month has driven a lot of news in the Robocar space worthy of reporting.
On the lighter side, this video from the Conan O’Brien show highlights the issues around people’s deep fear of being injured by machines. While the video is having fun, this is a real issue that will dominate the news when the first accidents and injuries happen. I cover that in detail in my article about accidents but the debate will be a major one.
Nissan announced last year that it would sell self-driving cars in 2020. Now that Tesla has said 2016, Google has said civilians will be in their small car within a year, and Volvo has said the same will happen in Sweden by 2017, Nissan CEO Carlos Ghosn has said they might do it 2 years earlier.
As various locations rush to put in robocar laws, in Europe they are finally getting around to modifying the Vienna convention treaty, which required a human driver. However, the new modifications, driven by car companies, still call for a steering wheel that a driver can use to take over (as do some of the US state laws.) These preclude Google’s new design, but perhaps with a bit of advance warning, this can be fixed. Otherwise, changing it again will be harder. Perhaps the car companies — none of whom have talked about anything like Google’s car with no controls — will be happy with that.
The urban test course at the University of Michigan, announced not very long ago, is almost set to open — things are moving fast, as they will need to if Michigan is to stay in the race. Google’s new prototype, by the way, is built in Michigan. Google has not said by whom, but common speculation points not to a major car company but to one of their big suppliers.
The Ernst & Young auto research lab (in Detroit) issued a very Detroit style forecast for autonomous vehicles which said their widespread use was 2 decades away. Not too surprising for such a group. Consultants are notoriously terrible at predictions for exponential technology. Their bad smartphone predictions are legendary (and now erased, of course.) A different study predicts an $87 billion market — but the real number is much larger than that.
This article where top car designers critique Google’s car illustrates my point from last week about how people with car company experience are inclined to just not get it. At the same time, some of the automotive press do get it.
Submitted by brad on Sat, 2014-06-07 15:20.
25 years ago, on June 8, 1989, I announced to the world my new company ClariNet, which offered for sale an electronic newspaper delivered over the internet. This has the distinction, as far as I know, of being the first business created to use the internet as a platform, what we usually call a “dot-com” company.
I know it was the first because up until that time, the internet’s backbone was run by the National Science Foundation and it had a policy disallowing commercial use of the network. In building ClariNet, I found a way to hack around those rules and sell the service. Later, the rules would be relaxed and the flood of dot-coms came on a path of history that changed the world.
A quarter of a century seems like an infinite amount of time in internet-years. Five years ago, for the 20th anniversary, I decided to write up this history of the company, how I came to found it, and the times in which it was founded.
Read The history of ClariNet.com and the dawn of internet based business
There’s not a great deal to add in the 5 years since that prior anniversary.
- Since then, USENET’s death has become more complete. I no longer use it, and porn, spam and binaries dominate it now. Even RSS, which was USENET’s successor — oddly with some inferiorities — has begun to fall from favour.
- The last remnants of ClariNet, if they exist at Yellowbrix, are hard to find, though that company exists and continues to sell similar services.
- Social media themselves are showing signs of shrinking. Publishing and discussing among large groups just doesn’t scale past a certain point and people are shrinking their circles rather than widening them.
- We also just saw the 25th anniversary of the Web itself a few months ago, or at least of its draft design document. ClariNet’s announcement in June was just that — an announcement: work had been underway for many months before, and the product would not ship until later in the summer.
Many readers of this blog will not have seen this history before, and 25 years is enough of an anniversary to make it worth re-issuing. There is more than just the history of ClariNet in there. You will also find the history of other early internet businesses, my own personal industry history that put me in the right place at the right time with these early intentions, and some anecdotes from ClariNet’s life and times.
Submitted by brad on Tue, 2014-06-03 05:41.
I’ve been on the road for the last month, and there’s more to come. Right now I’m in Amsterdam for a few hours, to be followed by a few events in London, then on to New York for Singularity U’s Exponential Finance conference, followed by the opening of our Singularity University Graduate Studies Program for 2014. (You can attend our opening ceremony June 16 by getting tickets here — it’s always a good crowd)
But while on the road, let me lament what’s missing from so many of the hotel rooms and AirBnB apartments I’ve stayed in, which is an understanding of what digital folks, and especially digital couples, need.
Yes, rooms are small, especially in Europe, and one thing they often sacrifice is desk space. In particular, desk space for two people with laptops. This is OK if you’ve ditched the laptop for a tablet, but many rooms barely have desk space enough for one, or the apartments have no desk, only the kitchen table. And some only have one chair.
We need desk space, and we need a bit of room to put things, and we need it for two. Of course, there should be plugs at desk level if you can — the best thing is to have a power strip on the desk, so we can plug in laptops, camera chargers, phone chargers and the like.
Strangely, at least half the hotels I stay in have a glass tabletop for their desk. The one surface my mouse won’t work on. Yes, I hate the trackpad, so I use a mouse if I am doing any serious computing. I can pull over a piece of paper or book to be a mousepad, but this is silly.
Really sweet, but rarely seen, is an external monitor. Nice 24” computer monitors cost under $150 these days, so there should be one — or two. And there should be cables (HDMI and VGA at least) because while I sometimes bring cables, you never know which input the monitor in a room will have. Sometimes you can plug into the room’s TV — but sometimes it has been modified so you can’t. It’s nice when you can, though a TV on the wall is not a great monitor for working. It’s fine for watching video, though.
For extra credit, perhaps the TV can support some of the new video-over-wireless protocols, like Miracast, WiDi or Apple’s AirPlay, to make it easy to connect devices, even phones and tablets.
Sadly, there is no way yet for you to provide me with a keyboard or mouse in the room that I could trust.
When it comes to phone chargers, many people use their phone as their alarm clock, and so they want it by the bed. There should be power by the bed, and it should not require you to unplug the bedside lamp or clock radio.
Another nice touch would be plugs or power strips with the universal multi-socket that accepts all the major types of plugs. Sure, I always have adapters but it’s nice to not have to use them. My stuff is all multi-voltage of course.
Most hotel rooms come with a folding luggage stand, which is good. But they should really come with two. Couples and families routinely have 3 bags. A hotel should know that if you’ve booked a double room, you probably want at least two. Sometimes I have called down to the desk to get more and they don’t have any more — just one in each room. If you are not going to put them in the room, the bell desk should be able to bring up any you need.
Free Wifi (and wired) without a goddamned captive portal
I’ve ranted about this before, but captive portals which hijack your browser — thus breaking applications and your first use — are still very common. Worse, some of them reset every time you turn off your computer or your phone, and you have to re-auth. Some portals are there to charge you, but I no longer find that an excuse. When hotels charge me for internet, I ask them how much the electricity and water are in the room. It’s past time that online shopping sites like Kayak and Tripadvisor included any internet charge in the price they show when you search for hotels. Or at the least I should be able to check a box for “show me the price with internet, and any taxes and made-up resort fees” so I can compare the real price.
But either way, the captive portals break too many things. (Google Glass can’t even work at all with them.) Cheap hotels give free wifi with no portal — this is a curse of fancier hotels. If you want to sell premium wifi, so be it — but let me log into the basic one with no portal, and then I can go to a URL where I can pay for the upgrade. If you insist, give me really crappy 2G-speed internet with no portal, so that things at least work, though slowly, until I upgrade.
If you need a password, use WPA2. You can set up a server so people enter their room number and name with WPA2-Enterprise. You can meet certain “know your user” laws that force these portals on people that way.
And have wired internet — with a cable — if you can. At a desk, it’s more reliable and has no setup programs and needs no password or portal at all.
Submitted by brad on Sun, 2014-06-01 05:15.
It’s not too surprising that the release of images of Google’s prototype robocar have gotten comments like this:
Revolutionary Tech in a Remarkably Lame Package from Wired
A Joy Ride in Google’s Clown Car says Re/Code
I’ve also seen comparisons to the Segway, and declarations that, limited to 25 mph, this vehicle won’t get much adoption or affect the world much.
Google’s own video starts with a senior expressing that it’s “cute.”
I was not involved in the specifics of design of this vehicle, though I pushed hard as I could for something in this direction. Here’s why I think it’s the right decision.
First of all, this is a prototype. Only 100 of this design will be made, and there will be more iterations. Google is all about studying, learning and doing it again, and they can afford to. They want to know what people think of this, but are not scared if they underestimate it at first.
Secondly, this is what is known as a “Disruptive Technology.” Disruptive technologies, as described in the Silicon Valley bible “The Innovator’s Dilemma,” are technologies that seem crazy and inferior at first. They meet a new need, not well understood by the incumbent big companies. Those big companies don’t see it as a threat — until, years later, they are closing their doors. Every time a disruptive technology takes over, very few of the established players make it through to the other side. This does not guarantee that Google will dominate or crush those companies, or that everything that looks silly eventually wins. But it is a well established pattern.
This vehicle does not look threatening — not to people on the street, and not to existing car companies and pundits who don’t get it. Oh, there are many people inside those car companies who do get it, but the companies are incapable of getting it in their bones. Even when their CEOs get it, they can’t steer the company 90 degrees — there are too many entrenched forces in any large company. The rare exceptions are founder-led companies (like Google and Facebook, and formerly Apple and Microsoft) where if the founder gets it, he or she can force the company to get it.
Even large companies who read this blog post and understand it still won’t get it, not most of the time. I’ve talked to executives from big car companies. They have a century of being car companies, and of knowing what that means. Google, Tesla and the coming upstarts don’t.
One reason I will eventually move away from my chosen name for the technology — robocar — along with the other popular names like “self-driving car” is that this future vehicle is not a car, not as we know it today. It is no more a “driverless car” than a modern automobile is a horseless carriage. 100 years ago, the only way they could think of the car was to notice that there was no horse. Today, all many people notice about robocars is that no human is driving. This is the thing that comes after the car.
Some people expected the car to look more radical. Something like the Zoox or the ATNMBL by Mike and Maaike (who now work in a different part of Google.) Cars like those will come some day, but they are not the way you learn. You start simple, non-threatening and safe. And you start expensive — the Google prototype still has the very expensive Velodyne LIDAR on it, but trust me, very soon LIDAR is going to get a lot less expensive.
The low speed is an artifact of many things. You want to start safe, so you limit where you go and how fast. In addition, US law has a special exception from most regulations for electric vehicles that can’t go more than 25mph and stick to back roads. Some may think that’s not very useful (turns out they are wrong, it has a lot of useful applications) but it’s also a great way to start. Electric vehicles have another big advantage in this area. Because you can reverse electric motors, they can work as secondary brakes in the event of failure of the main brake system, and can even be secondary steering in case of failure of the steering system at certain speeds. (Google has also said that they have two steering motors in order to handle the risk of failure of one steering motor.) Electric vehicles are not long-range enough to work as taxis in a large area, but they can handle smaller areas just fine.
If you work in the auto industry, and you looked at this car and saw a clown car, that’s a sign you should be afraid.
Submitted by brad on Wed, 2014-05-28 00:40.
In what is the biggest announcement since Google first revealed their car project, it has announced that they are building their own car, a small low-speed urban vehicle for two with no steering wheel, throttle or brakes. It will act as a true robocar, delivering itself and taking people where they want to go with a simple interface. The car is currently limited to 25mph, and has special pedestrian protection features to make it even safer. (I should note that as a consultant to that team, I helped push the project in this direction.)
This is very different from all the offerings being discussed by the various car companies, and is most similar to the Navia which went on sale earlier this year. The Navia is meant as a shuttle, and up to 12 people stand up in it while it moves on private campus roads. It only goes 20 km/h rather than the 40 km/h of Google’s new car. Google plans to operate their car on public roads, and will have non-employees in test prototype vehicles “very soon.”
This is a watershed moment and an expression of the idea that the robocar is not a car but the thing that comes after the car, as the car came after the horse. Google’s car is disruptive: it seems small, silly-looking and limited if you look at it from the perspective of existing car makers. That’s because that’s how the future often looks.
I have a lot to say about what this car means, but at the same time, very little because I have been saying it since 2007. One notable feature (which I was among those pushing for inside) is a soft cushion bumper and windshield. Clearly the goal is always to have the car never hit anybody, but it can still happen because systems aren’t perfect and sometimes people appear in front of cars quickly making it physically impossible to stop. In this situation, cars should work to protect pedestrians and cyclists. Volvo and Autoliv have an airbag that inflates on the windshield bars, which are the thing that most often kills a cyclist. Of the 1.2 million who are killed in car accidents each year, close to 500,000 are pedestrians, mostly in the lower income nations. These are first steps in protecting them as well as the occupants of the car.
The car has 2 seats (side-by-side) and very few controls. It is a prototype, being made at first in small quantities for testing.
More details and other videos, including one of Chris Urmson giving more details, can be found at the new Google Plus page for the car. Also of interest is this interview with Chris.
I’m in Milan right now about to talk to Google’s customers about the car — somewhat ironic — after 4 weeks on the road all over Europe. 2 more weeks to go! I will be in Copenhagen, Amsterdam, London and NYC in the coming weeks, after having been in NYC, Berlin, Krakow, Toronto, Amsterdam, Copenhagen, Oslo, the fjords and Milan. In New York, come see me at Singularity U’s Exponential Finance conference June 10-11.
Submitted by brad on Mon, 2014-04-28 12:44.
News from Google’s project is rare, but today on the Google blog they described new achievements in urban driving and reported a total of 700,000 miles. The car has been undergoing extensive testing in urban situations, and Google let an Atlantic reporter get a demo of the urban driving, which is worth a read.
You will want to check out the new video demo of urban operations:
While Google speakers have been saying for a while that their goal is a full-auto car that does more than the highway, this release shows the dedication already underway towards that goal. It is the correct goal, because this is the path to a vehicle that can operate vacant, and deliver, store and refuel itself.
Much of the early history of development has been on the highway. Most car company projects focus on the highway or traffic jam situations. Google’s cars were, in years past, primarily seen on the highways. In spite of the speed, highway driving is actually a much easier task. The traffic is predictable, and the oncoming traffic is physically separated. There are no cyclists, no pedestrians, no traffic lights, no stop signs. The scariest things are on-ramps and construction zones. At low speeds, the highway could even be considered a largely solved problem by now.
Highway driving accounts for just over half of our miles, but of course not our hours. A full-auto car on the highway delivers two primary values: Fewer accidents (when delivered) and giving productive time back to the highway commuter and long distance traveller. This time is of no small value, of course. But the big values to society as a whole come in the city, and so this is the right target. The “super-cruise” products which require supervision do not give back this time, and it is debatable if they give the safety. Their prime value is a more relaxing driving experience.
Google continues to lead its competitors by a large margin. (Disclaimer: They have been a consulting client of mine.) While Mercedes — which is probably the most advanced of the car companies — has done an urban driving test run, it is not even at the level that Google was doing in 2010. It is time for the car makers to get very afraid. Major disruption is coming to their industry. The past history of high-tech disruptions shows that very few of the incumbent leaders make it through to the other side. If I were one of the car makers who doesn’t even have a serious project on this, I would be very afraid right now.
Submitted by brad on Mon, 2014-04-21 13:24.
Many states and jurisdictions are rushing to write laws and regulations governing the testing and deployment of robocars. California is working on its new regulations right now. The first focus is on testing, which makes sense.
Unfortunately the California proposed regulations and many similar regulations contain a serious flaw:
The autonomous vehicle test driver is either in immediate physical control of the vehicle or is monitoring the vehicle’s operations and capable of taking over immediate physical control.
This is quite reasonable for testing vehicles based on modern cars, which all have steering wheels and brakes with physical connections to the steering and braking systems. But it presents a problem for testing delivery robots or deliverbots.
Delivery robots are world-changing. While they won’t and can’t carry people, they will change retailing, logistics, the supply chain, and even going to the airport in huge ways. By offering very quick delivery of every type of physical goods — less than 30 minutes — at a very low price (a few pennies a mile) and on the schedule of the recipient, they will disrupt the supply chain of everything. Others, including Amazon, are working on doing this with flying drones, but for heavier items and efficient delivery, the ground is the way to go.
While making fully unmanned vehicles is more challenging than ones supervised by their passenger, the delivery robot is a much easier problem than the self-delivering taxi for many reasons:
- It can’t kill its cargo, and thus needs no crumple zones, airbags or other passive internal safety.
- It still must not hurt people on the street, but its cargo is not impatient, and it can go more slowly to stay safer. It can also pull to the side frequently to let people pass if needed.
- It doesn’t have to travel the quickest route, and so it can limit itself to low-speed streets it knows are safer.
- It needs no windshield or wheel, and can be small, light and very inexpensive.
A typical deliverbot might look like little more than a suitcase sized box on 3 or 4 wheels. It would have sensors, of course, but little more inside than batteries and a small electric motor. It probably will be covered in padding or pre-inflated airbags, to assure it does the least damage possible if it does hit somebody or something. At a weight of under 100lbs, with a speed of only 25 km/h and balloon padding all around, it probably couldn’t kill you even if it hit you head on (though that would still hurt quite a bit.)
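The "probably couldn’t kill you" claim is easy to sanity-check with basic physics. A rough kinetic-energy comparison (the deliverbot figures come from above; the comparison car’s 1,500 kg and 50 km/h are my own illustrative assumptions):

```python
def kinetic_energy_joules(mass_kg, speed_kmh):
    """KE = 1/2 * m * v^2, with speed converted from km/h to m/s."""
    v = speed_kmh / 3.6
    return 0.5 * mass_kg * v * v

# Deliverbot: ~45 kg (under 100 lbs) at 25 km/h, per the post.
deliverbot = kinetic_energy_joules(45, 25)
# Assumed typical car: 1,500 kg at a city speed of 50 km/h.
car = kinetic_energy_joules(1500, 50)

print(f"deliverbot: {deliverbot:.0f} J, car: {car:.0f} J")
print(f"the car carries ~{car / deliverbot:.0f}x the energy")
```

The deliverbot carries on the order of a thousand joules into a collision, versus well over a hundred thousand for the car, before the padding even comes into play.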
The point is that this is an easier problem, and so we might see development of it before we see full-on taxis for people.
But the regulations do not allow it to be tested. The smaller ones could not fit a human, and even if you could get a small human inside, they would not have the passive safety systems in place for that person — something you want even more in a test vehicle. They would need to add physical steering and braking systems which would not be present in the full drive-by-wire deployment vehicle.
Testing on real roads is vital for self-driving systems. Test tracks will only show you a tiny fraction of the problem.
One way to test the deliverbot would be to follow it in a chase car. The chase car would observe all operations, and have a redundant, reliable radio link to allow a person in the chase car to take direct control of any steering or brakes, bypassing the autonomous drive system. This would still be drive-by-wire(less) though, not physical control.
These regulations also affect testing of full drive-by-wire vehicles. Many hybrid and electric cars today are mostly drive-by-wire in ordinary operation, and the new Infiniti Q50 features the first steer-by-wire. However, the Q50 has a clutch which, in the event of system failure, physically reconnects the steering column and the wheels, and the hybrids, even though they do DBW regenerative braking for the first part of the brake pedal travel, give you a physical hydraulic connection to the brakes if you press all the way down. A full DBW car, one without any steering wheel like the Induct Navia, can’t be tested on regular roads under these regulations. You could put a DBW steering wheel in the Navia for testing, but it would not be physical control.
Many interesting new designs must be DBW. Things like independent control of the wheels (as on the Nissan Pivo) and steering through differential electric motor torque can’t be done through physical control. We don’t want to ban testing of these vehicles.
Yes, teams can test regular cars and then move their systems down to the deliverbots. This bars the deliverbots from coming first, even though they are easier, and allows only the developers of passenger vehicles to get in the game.
So let’s modify these regulations to either exempt vehicles which can’t safely carry a person, or which are fully drive-by-wire, and just demand a highly reliable DBW system the safety driver can use.
Submitted by brad on Sun, 2014-04-20 11:06.
I wrote earlier on how we might make it easier to find a lost jet, which included the proposal that the pingers in the black boxes follow a schedule of slowing down their pings to make their batteries last much longer.
In most cases, we’ll know where the jet went down and even see debris, and so getting a ping every second is useful. But if it’s been a week, something is clearly wrong, and having the pinger last much longer becomes important. It should slow down, eventually dropping to intervals as long as one minute, or even an hour, to keep it going for a year or more.
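To see why slowing down stretches life so much, here is a toy energy budget. All the numbers are illustrative assumptions, calibrated only so that one ping per second gives roughly the 30-day life of today’s pingers:

```python
# Assumed figures, for illustration only:
BATTERY_J = 100_000.0  # usable battery energy
STANDBY_W = 0.001      # idle draw of the electronics
PING_J = 0.035         # energy per ping

def days_of_life(interval_s):
    """Days the battery lasts pinging once every interval_s seconds."""
    average_watts = STANDBY_W + PING_J / interval_s
    return BATTERY_J / average_watts / 86_400

print(f"every second: {days_of_life(1):6.0f} days")   # ~a month
print(f"every minute: {days_of_life(60):6.0f} days")  # ~two years
print(f"every hour:   {days_of_life(3600):6.0f} days")
```

In this model the same battery that dies in about a month at one ping per second lasts roughly two years at one per minute; at one per hour the standby draw dominates and life flattens out around three years rather than growing forever.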
But it would be even more valuable if the pinger was precise about when it pinged. It’s easy to get very accurate clocks these days, either sourced from GPS chips (which cost $5) or just synced on occasion from other sources. Unlike GPS transmitter clocks, which must sync to the nanosecond, here even a second of drift is tolerable.
The key is that a receiver who hears a ping must be able to figure out when it was sent, because then they can compute the range, and even a very rough range is magic when it comes to finding the box. Just two pings received at different places, each with a range, will probably locate the box.
I presume the audio signal is full of noise and you can’t encode data into it very well, but you can vary the interval between pings. For example, while a pinger might bleep every second, every 30 seconds it could ping twice in a second. Any listener who hears 30 seconds of pings would then know the pinger’s clock and when each ping was sent. There could be other variations in the intervals to help pin the time down even better, but it’s probably not needed. In 30 seconds, sound travels 28 miles underwater, and it’s unlikely you would hear the ping from that far away.
When the ping slows down as the battery gets lower, you don’t need the variation any more, because you will know that pings are sent at precise seconds. If pings are down to one a minute, you might hear just one, but knowing it was sent at exactly the top of the minute, you will know its range, at least if you are within 50 miles.
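The range arithmetic is straightforward. Sound in seawater travels at roughly 1,500 m/s (the exact speed varies with temperature, depth and salinity), so a sketch might look like:

```python
SOUND_MS = 1500.0  # nominal speed of sound in seawater, m/s

def range_from_ping(send_time_s, arrival_time_s):
    """Range in km, given synchronized clocks on pinger and receiver."""
    return (arrival_time_s - send_time_s) * SOUND_MS / 1000.0

# A ping sent at the top of the minute, heard 20 seconds later:
print(range_from_ping(0.0, 20.0))  # 30.0 km away
```

Two ships each hearing the same timed ping get two range circles, and their intersection localizes the box, which is exactly the payoff of knowing the send time.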
Of course things can interfere here — I don’t know if sound travels with such reliable speed in water, and of course, waves bounce off the sea floor and other things. It is possible the multipath problem for sound is much worse than I imagine, making this impossible. Perhaps that’s why it hasn’t been done. This also adds some complexity to the pinger which they may wish to avoid. But anything that made the pings distinctive would also allow two ships tracking the pings to know they had both heard the same particular ping and thus solve for the location of the pinger. Simple designs are possible.
Two way pinger
If you want to get complex, of course, you could make the pinger smart, listening for commands from outside. Listening takes much less power, and a smart pinger could know not to bother pinging if it can’t hear the ship searching for it. Ships can ping with much more volume and be sure to be heard. There is a risk that a pinger with a broken microphone would not realize its microphone is broken, but otherwise a pinger could sit silent until it hears request pings from ships, and answer those. It could answer them with much more power and thus more range, because it would only ping when commanded to. It could sit under the sea for years until it heard a request from a passing ship or robot. (Like the robots made by my friends at Liquid Robotics, which cruise unmanned at 2 knots using wave power and could spend years searching an area.)
The search for MH370 has cost hundreds of millions of dollars, so this is something worth investigating.
Other more radical ideas might be a pinger able to release small quantities of radioactive material after waiting a few weeks without being found. Or anything else that can be detected in extremely minute concentrations. Spotting those chemicals could be done sampling the sea, and if found soon enough — we would know exactly when they would be released — could help narrow the search area.
Track the waves
I will repeat a new idea I added to the end of the older post. As soon as the search zone is identified, a search aircraft should drop small floating devices with small radio transmitters good enough to find them again at modest range. Drop them as densely as you can, which might mean every 10 miles or every 100 miles, but try to get coverage of the area.
Then, if you find debris from the plane, do a radio hunt for the nearest such beacon. When you find it, or others, you can note their serial number, know where they were dropped, and thus get an idea of where the debris might have come from. Make them fancier, broadcasting their GPS location or remembering it for a dump when re-collected, and you could build a model of motion on the surface of the sea, and thus have a clue of how to track debris back to the crash site. In this case, it would have been a long time before the search zone was located, but in other cases it will be known sooner.
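As a sketch of the back-tracking idea, with entirely made-up coordinates and the crude assumption that nearby debris drifts like the recovered beacon:

```python
# Beacon positions are (lat, lon) pairs; a flat-earth approximation is
# fine over tens of kilometres. All numbers are invented for illustration.
drop_pos = (-30.00, 95.00)   # where the beacon was dropped (day 0)
found_pos = (-30.40, 95.60)  # where it was recovered (day 10)
days_adrift = 10

# Average drift per day while the beacon was in the water.
drift_per_day = ((found_pos[0] - drop_pos[0]) / days_adrift,
                 (found_pos[1] - drop_pos[1]) / days_adrift)

def back_project(debris_pos, days_since_crash):
    """Estimate where debris was at crash time, assuming the same drift."""
    return (debris_pos[0] - drift_per_day[0] * days_since_crash,
            debris_pos[1] - drift_per_day[1] * days_since_crash)

origin = back_project((-30.50, 95.90), 12)
print(origin)
```

A real model would combine many beacons and vary the drift field over space and time, but even this crude version turns "we found a seat cushion" into a rough crash-site estimate.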
Reporting has not been clear, but it appears that the ships which heard the pings did so in the very first place they looked. With a range of only a few miles, that seems close to impossibly good luck. If it turns out they did hear the last gasp of the black boxes, this suggests an interesting theory.
The theory would be that some advanced intelligence agencies have always known where the plane went down, but could not reveal that because they did not want to reveal their capabilities. A common technique in intelligence, when you learn something important by secret means, is to then engineer another way to learn that information, so that it appears it was learned through non-secret means or luck. In World War II, for example, spies who broke enemy codes and learned about troop movements would then have a “lucky” recon plane “just happen” to fly over the area, to explain how you knew where they were. Too much good luck and the enemy might get suspicious, and might learn you have broken their crypto.
In this case the luck is astounding. Yes, it is the central area predicted by Inmarsat’s analysis of the satellite pings, but that was never so precise. In this case, though, all we might discern — if we believe this theory at all — is that maybe, just maybe, some intelligence agency among the countries searching has some hidden ways to track aircraft. Not really all that surprising as a bit of news, though.
Let’s hope they do find what’s left — but if they do, it seems likely to me it happened because the spies know things they aren’t telling us.
Submitted by brad on Tue, 2014-04-08 22:35.
I read a lot of feeds, and there are now scores of stories about robocars every week. Almost every day a new publication gives a summary of things. Here, I want to focus on things that are truly new, rather than being comprehensive.
Mahindra “Rise” Prize
The large Indian company Mahindra has announced a $700,000 Rise Prize for robocar development for India’s rather special driving challenges. Prizes have been a tremendous boost to robocar development, and DARPA’s contests changed the landscape entirely. Yet after the Urban Challenge, DARPA declared their work was done and stopped, and in spite of various efforts to build a different prize at the X-Prize foundation, the right prize has never been clear. China has run annual contests for several years, but they get little coverage outside of China.
An Indian prize has merit because driving in India is very different from, and vastly more chaotic than, driving in most of the west. As such, western and east Asian companies are unlikely to spend a lot of effort trying to solve the special Indian problems first. It makes sense to spur Indian development, and of course there is no shortage of technical skill in India.
Many people imagine that India’s roads are so chaotic that a computer could never drive on them. There is great chaos, but it’s important to note that it’s slow chaos, not fast chaos. Being slow makes it much easier to be safe. Safety is the hard part of the problem. Figuring out just what is happening, playing subtle games of chicken — these are not trivial, but they can be solved, if the law allows it.
I say if the law allows it because Indians often pay little heed to the traffic law. A vehicle programmed to strictly obey the law will probably fail there without major changes. But the law might be rewritten to allow a robot to drive the way humans drive there, and be on an even footing. The main challenge is games of chicken. In the end, a robot will yield in a game of chicken, and humans will know that and exploit it. If this makes it impossible for the robot to advance, it might be programmed to “yield without injury” in a game of chicken. This would mean randomly claiming territory from time to time, and if somebody else refuses to yield, letting them hit you, gently. The robot would use its knowledge of physics to keep the impact speed low enough to cause minor fender damage but not harm people. If at fault, the maker of the robot would have to pay, but this price in property damage may be worthwhile if it makes the technology workable.
The reason it would make things workable is that drivers would come to understand that, at random, the robot will not yield (especially if it has the right-of-way), and that they are going to hit it. Yes, its maker might pay for the damage (if you had the right of way), but frankly that’s a big pain for most people to deal with. People might attempt insurance fraud and deliberately be hit, but they will be recorded in 3D, so they had better be sure they do it right, and not do it more than once.
Of course, the cars will have to yield to pedestrians, cyclists and, in India, cows. But so does everybody else. And if you just jump in front of a car to make it hit the brakes, it will be recording video of you, so smile.
New Vislab Car
I’ve written before about Vislab at the University of Parma. Vislab are champions of using computer vision to solve the driving problem, though their current vehicles also make use of LIDAR, and in fact they generally agree with the trade-offs I describe in my article contrasting LIDAR and cameras.
They have a new vehicle called DEEVA which features 20 cameras and 4 lasers. Like so many “not Google” projects, they have focused on embedding the sensors so they do not stand out from the vehicle. This continues to surprise me, because I have very high confidence that the first customers of robocars will be very keen that they not look like ordinary cars. They will want the car to stand out and tell everybody, “Hey, look, I have a robocar!” The shape of the Prius helped its sales, as well as its drag coefficient.
This is not to say there aren’t people who, when asked, will say they don’t want the car to look too strange, or who say, looking at various sensor-adorned cars, that these are clearly just lab efforts and not something coming soon to roads near you. But the real answer is neither ugly sensors nor hidden sensors, but distinctive sensors with a design flair.
More interesting is what they can do with all those cameras, and what performance levels they can reach.
I will also note that the car uses QNX as its OS. QNX was created by a friend I went to school with in Waterloo, and they’re now a unit of RIM/Blackberry (also created by classmates of mine). Go UW!
Submitted by brad on Sat, 2014-04-05 16:51.
A recent Supreme Court case, which struck down limits on the total amount donors could provide to a large group of candidates, has fired up the debate on what to do about the grand problem, particularly in the USA, of the corrupting influence of money on politics. I have written about this before in my New Democracy Topic, including proposals for anonymous donations, official political spam and many others.
As I strongly believe that it is very difficult to draft campaign finance rules that don’t violate the 1st amendment (the Supreme Court agrees), and also that it would be a horrible, horrible decision to weaken the 1st amendment to solve this problem, nasty as the problem is, I have been working on alternate solutions. (I also don’t believe any of the proposed weakenings of the 1st amendment would actually work and not backfire.)
I am going to do a series here on those solutions over time, but first I want to lay out my perceptions of the various elements of the problem, for it is necessary to understand them to fix them. While political corruption is rife anywhere, the influence of big money seems most widespread in the USA.
Problem 1: Politicians feel they can’t get elected without spending a lot of money
Ask any member of congress what they did on their first day in office. The answer will be “made calls to donors.” They are always fundraising, because they don’t think they can get elected without it. They generally resent this, which is a ray of hope. If they thought they had a choice, that they could get elected without fundraising, they would reduce it a lot.
One thing that’s not easy to fix is the fact that if you fundraise, those who give you money will expect something for it, which is the thing we’re trying to eliminate. Even if the donors don’t ever explicitly state that expectation, it is always there, because every candidate will ask if what they are doing will piss off the donors, even more than they will ask what will piss off the voters. If you depend on the donations, you will do what it takes to keep them coming. Donations get a donor’s phone calls and letters answered, as well as requests for meetings.
I say that politicians feel they need money, and in fact they are often right about this. Money does produce votes. But they are not entirely right, as there are alternatives.
As noted in the comments, the length of campaigns plays a role in how much money people need to raise. Due to fixed election dates, US election campaigns are extremely long compared to other countries. (In Canada, an election might be called at any time, and takes place in as little as 36 days. Fundraising is often done in advance, of course, but there is only a little time in which to spend the money.)
The most common proposed solution here is public campaign finance, but I am developing alternatives to that, or systems which could work in combination with it.
Problem 2: The main reason they need money is to buy TV ads
About 60% of the budget of a big campaign is spent on ads, most of them on TV. Today, online advertising gets just 10% of what TV does.
There is a reason they love TV. It gets to most demographics, and your message can be very dramatic and convincing. Most of all, you reach people who were not looking for your message. Everybody has a web site, but the web site is only seen by people who actively sought it out. TV gets into the homes of an ordinary voter and gives you a shot at influencing them. Other forms of advertising do that too, but few do it as well as TV.
This aspect of the problem is important because we’re in the middle of a big shift in the nature of advertising. The new advertising giant, Google, is a relatively new company with entirely different methods. We’re also in the middle of a big shift in media. Broadcast media, I feel, are on the decline, and new media forms, mostly online forms, are likely to take the lead. When this happens — and I say when, not if — it means that most of the donated political money will flow to the new media. This gives the new media a chance to either be the destination for all corruption money or to change the rules of the game, if they have the courage to do so.
In many cases, the world of advertising hasn’t simply moved from one company to a competitor. In the case of newspaper classified advertising, that industry was just supplanted by free online ads like craigslist. Thanks to internet media, publishing is now cheap or almost free, and advertising is much more efficient and in some areas, cheaper. The potential for disruption is ripe.
Problem 3: The other big effort is “Get out the Vote”
While most of the dollars go to advertising, a lot of them, and most of the volunteer time, goes to what they call GOTV.
GOTV is so important because US voter turnouts are low. 50-60% in Presidential years, less in off-years. Because of that, by far the most productive use of campaign resources is often not trying to convince an undecided or opposing voter to switch to your side, but simply getting a voter who already supports you but doesn’t care a great deal to actually make the trek to the polls on voting day.
While you might imagine elections are fought and won with one candidate’s ads or speeches or debate performance swaying undecided voters one way or another, the reality is that turnouts are so low that GOTV is what decides a lot of races.
Aside from the basic principle that it’s crazy to decide our leaders based on who has the best system of pushing apathetic voters to come to the polls, it’s also true that GOTV uses a lot of money and resources, and as such is another of the big reasons for problem #1. A lot of the advertising purchased is there as much to make existing supporters more likely to turn out as to sway undecideds.
There are many areas for solution here, including increasing the voter turnout to a level where GOTV is not so productive. For example, in many countries, voting is mandatory — you are fined if you don’t vote. Chile gets 95% turnout this way, and Australia at 81% is the worst turnout of the compulsory nations.
It is also possible to increase turnout by making voting super-easy. Options such as online or cell-phone voting, while rife with election security and privacy problems, may be worth the risk if they reduce the power of GOTV — or simply make GOTV much cheaper.
Problem 4: Other campaign costs
While they are in 3rd place, the other campaign costs — travel, events, databases, staff, candidate’s time and many other things — still add up to a lot, and it’s money that must be fundraised. Today, all candidates build impressive computer systems from scratch every 4 years. After the election the system is discarded, because in 4 years, technology will have changed so much it is better to rewrite it from scratch.
Elections, however, are taking place every month around the world, which would justify the constant development of generalized campaign tools. If done open source, they could easily be free to campaigns, saving them lots of resources — and the need to raise money for them.
Problem 5: Buying influence pays off
Candidates raise money because they have to, but donors give it because they get good value in return. Yes, some get the “pure” good value they are supposed to get — the hope that they get a better candidate elected, who will run things closer to the way they want. In a general “for the country” sense, not in a personal benefit sense, but even that’s technically OK if it does not involve doing personal favours.
Sadly, they usually get much more than that. They get personal benefit, even the ability to write drafts of laws and stop laws they dislike. Congress members even have a semi-official “pork” system which spreads federal money around districts, to please voters and also donors.
Worst of all, buying influence can be profitable from a pure financial sense. While Sheldon Adelson might give money to support his views on foreign policy, corporations and many others give money because they feel it will improve their bottom line. As soon as this profit is possible, it’s almost impossible to stop money from flowing in, no matter what rules you make. (It might be noted that Libertarians believe one of the most compelling arguments for keeping the government out of the economy is that a government that has no ability to hurt or benefit economic interests is one that can’t be bribed to hurt or benefit economic interests.)
This is also what makes corporations interested in donations. Corporations, at least in the pure sense, are interested only in the bottom line, and have a fiduciary duty to the stockholders to care only about shareholder value. Some closely held corporations will also take actions based on direct shareholder political interests, and some corporations, like PACs exist to do nothing else but that.
Some solutions can come from changing the system so that it’s just not as productive to buy politicians. This requires new rules on how they vote, which are hard to get. An ideal system might demand that officials recuse themselves from any vote on any bill which would unduly benefit any of their constituents or voters. Vote trading would attempt to get around this, but it seems crazy that today we think it is their job to look out for their constituents (and unofficially their donors) at the expense of the rest of the country.
The most common solution for this problem is to limit donations, with caps for each donor, and also caps on amount raised or amount spent. Success is highly mixed in this area.
Paths to improvement
These nexus points, notably #1, #2 and #5, are the place to look for solutions. While problem #1 can be addressed with limits on donations, fundraising and spending (otherwise known as Campaign Finance Reform) this approach is very challenging. Because of problem #5 in particular, money will “find a way” like water flowing downhill. You may put up a dam but the water will find another channel if it can.
The only defence against issue #5 — that buying politicians is lucrative — is to combine the politician’s core dislike of fundraising with efforts to make it a bit less productive to buy politicians. While money will always try to buy them, if the price goes up, and the need for the money goes down, there can be improvement.
One of the most popular proposals to fix #1 is public funding of campaigns, combined with mandatory or optional limits on fundraising or spending. The latter limits are hard to do under the 1st amendment. This is not because “corporations are people” (a strange meme because that idea never appears in the Citizens United decision that many people imagine it came from) but because freedom of the press, especially for political speech, is not divisible in the 1st amendment. It has always been given to corporations (including ones like the New York Times corporation) and in fact for a century or more, until the rise of the blogging era, almost all press of significance has been corporate.
Attempts to limit what sort of political ads that rich people and corporations may run are extremely difficult under the 1st amendment, as the court has said, and in spite of the terrible problem caused by the influence of money in politics, the 1st amendment deservedly remains untouched. Much of the argument around this case (and Citizens United) has been of the form, “Corruption is horribly bad, so the court should decide the 1st amendment doesn’t protect it.” Many things the 1st amendment protects are bad, but we’ve decided letting the government decide which are good or bad is worse. Here, we can add to that the important sense that giving congress extra control over how their elections are run is another very bad idea.
In coming weeks, I will outline alternate solutions. But I also believe neither I nor anybody else has thought up all the possible solutions. Politics, advertising and media are in a state of flux thanks to new technologies that I and my compatriots have built. Whether you think the future is bright or dark, I can assure you it’s different, and many options for solving this problem are out there, even those we may not see as yet.
Submitted by brad on Thu, 2014-04-03 14:01.
Look at the skyline of any growing city, and what do you see, but a sea of construction cranes. The theory is that each crane will go away and be replaced by an architecturally interesting or pleasing building, but the cycle continues and there are always cranes.
My proposal: An ordinance requiring aesthetic elements on construction cranes. Make them look beautiful. Make them look like the birds they are named after, or anything else. Get artists to design them as grand public art installations. Obviously you can’t increase the weight a lot, or cut the visibility of the operator too much, but leave that as a challenge to the artists. And give us a city topped with giant works of art instead of eyesores.
While we’re building these skyscrapers, it seems to me we also don’t seem to care about the aesthetics of our cities from above. The view from the towers, or incoming aircraft bringing in fresh visitors, is of ugly rooftops, covered with ugly pipes, giant air conditioners and spaces everybody imagines that nobody sees. Yet we all see them.
Compare that with many European hillside towns where everybody knew they would be seen from above. At least in the old days, the roofs were crafted with the same care as the house. Today, that’s been changing, and many roofs are covered with antennas, satellite dishes and in the middle east, black water heaters. We care a lot about how our houses look from the curb, and we imagine people don’t see the roof. But we do.
Submitted by brad on Mon, 2014-03-31 16:14.
Why are there lines at airport security? I mean, we know why the lines form: when passenger load exceeds capacity, with the bottleneck usually being the X-ray machines. The question is why this imbalance is allowed to happen.
The variable wait at airport security levies a high cost, because passengers must assume it will be long, just in case it is. That means every passenger gets there 15 or more minutes earlier than they would need to, even if there is no wait. Web sites listing wait times can help, but those times can change quickly.
For these passengers, especially business passengers, their time is valuable, and almost surely a lot more costly than that of TSA screeners. If there are extra screeners, it costs more money to keep them idle when loads are low, but the passengers would be more than willing to pay that cost to get assuredly short airport lines.
(There are some alternatives, as Orwellian programs like Clear and TSA-PRE allow you to bypass the line if you will be fingerprinted and get a background check. But this should not be the answer.)
In some cases, the limit is the size of the screening area. After 9/11, screening got more intensive, and they needed more room for machines and more table space for people to prepare their bags for all the rules about shoes, laptops, liquids and anything in their pockets.
Here are some solutions:
Appointments at security
The TSA has considered this but it is not widely in use. Rather than a time of departure, what you care about is when you need to get to the airport. You want an appointment at security, so if you show up at that time, you get screened immediately and are on your way to the gate in time. Airlines or passengers could pay for appointments, though in theory they should be free and all should get them, with the premium passengers just paying for appointments that are closer to departure time.
Double-decker X-ray machines
There may not be enough floor space, but X-ray machines could be made double decker, with two conveyor belts. No hand luggage is allowed to be more than a foot high, though you need a little more headroom to arrange your things. Taller people could be asked to use the upper belt, though by lowering the lower belt a little you can get enough room for all and easy access to the upper belt for all but children and very short folks.
A double width deck is also possible, if people are able to reach over, or use the other side to load. (See below.)
This might be overkill, as I doubt the existing X-ray machines run at even half their capacity. It is the screener’s deliberation that takes the time, and thus the next step is key…
Remote X-ray screeners
The X-ray screener’s job is to look at the X-ray image and flag suspect items. There is no need for them to be at the security station. There is no need for them to even be in the airport or the city, come to that. With redundant, reliable bandwidth, screeners could work in central screening stations, and be sent virtually to whatever security station has the highest load.
Each airport would have some local screeners, though they could work in a central facility so they can virtually move from station to station as needed, and even go there physically in the event of some major equipment failure. They would be enough to handle the airport’s base-load, but peak loads would call in screeners from other locations in the city, state or country.
Using truly remote screeners creates a risk that a network outage could greatly slow processing. This would mean delayed flights until text messages can go out to all passengers to expect longer lines and temporary workers can come in — or the outage can be repaired. To avoid this, you want reliable, redundant bandwidth, multiple screener centers and the ability to even use LTE cell phones as a backup. And, perhaps, an ability to quickly move screeners from airport to airport to handle downtimes at a particular airport. (Fortunately, there happens to be a handy technology for moving people from airport to airport!)
Screeners need not be working a specific line. Screeners could be allocated by item. I.e., one bag is looked at by screener 12 and the next bag is looked at by screener 24, just giving each item or set of items to the next available screener, which means an X-ray could actually run constantly at full speed if there are available staff. Each screener would, if they saw an issue, get to look at the other bags of the same passenger, and any bag flagged as suspect could immediately be presented to one or more other screeners for re-evaluation. In addition, as capacity is available, a random subset of bags could be looked at by 2 or more screeners.
It can also make sense to just skip having a human look at some bags at random to reduce wait and cost. It might even make sense to let some bags go unviewed in order to have other bags be viewed by 2 screeners. Software triage of how many screeners should look at a bag (0, 1, 2, etc.) is also possible though random might be better because attackers might figure out how to fool the software. With the screeners being remote and the belts operating at a fixed speed, passengers won’t learn who was randomly selected for inspection or not.
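The per-item dispatch with random double-screening described above can be sketched in a few lines. This is a hypothetical illustration; the queue discipline, pool size and 10% double-review rate are all assumptions, not a description of any real TSA system.

```python
import random
from collections import deque

DOUBLE_REVIEW_RATE = 0.1  # assumed fraction of bags sent to a second screener

def dispatch(bags, num_screeners, double_rate=DOUBLE_REVIEW_RATE, seed=0):
    """Assign each bag to the next free remote screener, regardless of
    which physical lane it came from; some bags get a second, independent
    reviewer at random."""
    rng = random.Random(seed)
    free = deque(range(num_screeners))   # screeners ready for the next item
    assignments = []
    for bag in bags:
        reviewers = [free[0]]
        free.rotate(-1)                  # that screener goes to the back
        if rng.random() < double_rate:   # random second opinion
            reviewers.append(free[0])
            free.rotate(-1)
        assignments.append((bag, reviewers))
    return assignments
```

Because assignment is per item rather than per lane, the belt never has to stop for one slow decision; the next bag simply goes to whichever screener frees up first.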
Some screeners need to be there — the one who swabs your bag, or does an extra search on it, the one who does the overly-intimate patdown and the one with the gun who tries to stop you if you try to run. But the ones who just give advice can be remote, and the one who inspects your boarding pass can be remote for passengers able to hold those things up to the scanners. I suspect remote inspection of ID is also possible though I can see people resisting that. The scanner who looks at your nude photo can certainly be remote — currently they are out of view so you don’t feel as bothered.
This remote approach, instead of costing more, might actually save money, especially on the national level. That’s because the different time zones have different peak times, and remote workers can quickly move to follow the traffic loads.
It’s also easier with remote screeners for passengers to use both sides of the belt to load and get their stuff. Agents would have to go in among them to pull bags for special inspection, though.
Of course it could be even better
Don’t misunderstand — the whole system should be scrapped and replaced with something that is more flyer-friendly as well as more capable of catching actual hijacker techniques. But if it’s going to exist, it should be possible to remove the line for everybody, not just those who go through background checks and fingerprinting just to travel.
After 2001, a company developed bomb-proof luggage containers, enabling a new approach to bags that would reduce the need to X-ray checked luggage and delay it as much as is done today. They were never widely deployed, because they cost more and weigh more.
I have 3 things I carry on planes:
- The things I need on the plane (like my computer, books and other items.)
- The vital and fragile things which I insist not leave my control, such as my camera gear and medicines.
- When I am not checking a bag, everything else for short trips.
I’m open to having all but #1 being put into a bomb-proof container by me and removed by me in a manner similar to gate check, so I can assure it’s always on the plane with me. Of course if I’m to do that then security (for just me and the items of type one) must be close to the plane — which it is for many international flights to the USA. That would speed up that security a lot. The use of remote screeners could make it easier to have security at the gate, too.
Personally, once the problem of taking over the cockpit was solved by new cockpit doors and access policies, I think there was an argument that you need not screen passengers at all. Sure, they could bring on guns, but they would no longer be able to hijack the aircraft, so it’s no different from a bus or a train. Kept to small items, they would not be able to cause as much damage as they could do with a suitcase sized bomb in the security line. The security line is, by definition, unsecured, and anybody can bring a large uninspected roll-aboard up to it, amidst a very large crowd — similar to what happened in Moscow in 2011.
Instead, you would have gates where a portal in the wall would have a bomb-proof luggage container into which you could put your personal bags and coats. Most people would then just get on, but a random sampling would be directed to extra security. Those wishing to bring larger things on-board (medical gear, super-fragiles, mega-laptops) would need to arrive earlier and go through security too. A forklift would quickly move the bombproof container into the hold and the plane would take off.
Submitted by brad on Tue, 2014-03-25 16:26.
We’ve all learned a lot about what can and can’t be done from the tragic story of MH 370, as well as the Air France flight lost over the Atlantic. Of course, nobody expected the real transponders to be disconnected or fail, and so it may be silly to speculate about how to avoid this situation when there already is supposed to be a system that stops aircraft from getting lost. Even so, here are some things to consider:
In the next few years, Iridium plans to launch a new generation of satellites with 1 megabit of bandwidth, replacing the pitiful 2400 bps they have now. In addition, with luck, Google Loon may get launched and do even more. With that much bandwidth, you can augment the “black box” with a live stream of the most important data. In particular, you would want a box to transmit as much as it could in the event of catastrophic shock, loss of signal from the aircraft and any unplanned descent, including of course getting close to the ground away from the target airport set at takeoff. Even the high cost of Iridium is no barrier for rare use, and you actually have a lot of seconds in the case of planes lost while flying at high altitude. Not enough to send much cockpit voice, but the ability to send all major alerts, odd-readings and cockpit inputs.
You could send more to geosync satellites, but I will assume that in a crisis it’s hard to keep an antenna aimed at one.
Another place you could stream live data would be to other aircraft. Turns out that up high as they are, aircraft are often able to transmit to other aircraft line of sight. Yes, the deep south Indian ocean may not be one of those places, but in general the range would be 500 miles, and longer if you used any wavelength that could travel beyond the horizon. Out there over the ocean, there’s nobody to interfere with, and closer to land, you can talk to the land. Near land, the live stream would go to terrestrial receivers, even cell towers. Live data gives you information even if the black box is destroyed or lost. If you are sure that can never happen, the black box is enough.
It also could make sense to have the black box be on the outside of the aircraft, meant to break away on impact with ground or water, and of course, it should float. The Emergency Locator Transmitter should be set up this way as well. You want another box pinging that sinks with the plane, though. The floating ELT/black box could even eject itself from the plane on its own if it detected an imminent crash in any remote area, including the ocean. With a GPS, it will know its altitude and location. It could even have a parachute on it.
Speaking of pinging, one issue right now is the boxes only have power for 2 weeks. Obviously there is a limit on power, and you want a strong signal, but it is possible to slow down your ping rate as your battery gets low, to the point that you are perhaps only pinging a few times a day. The trick is you would ping at very specific and predictable times, so people would know precisely when to listen — even years later if they get a new idea about where to look. Computers can go to sleep on these sorts of batteries and last for years if they only have to use power once a day.
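A sketch of what such a battery-aware but predictable ping schedule might look like. The specific intervals and battery thresholds here are invented for illustration; the key property is that pings always land on multiples of the current interval, so a searcher who knows the battery state knows exactly when to listen, even years later.

```python
# Hypothetical battery-aware pinger: the interval stretches as the battery
# drains, but pings always occur at predictable, epoch-aligned times.

def ping_interval_s(battery_fraction):
    """Assumed schedule: frequent pings on a healthy battery, backing off
    to roughly once a day as the battery runs down."""
    if battery_fraction > 0.5:
        return 1            # healthy battery: ping every second
    if battery_fraction > 0.2:
        return 60           # getting low: once a minute
    if battery_fraction > 0.05:
        return 3600         # very low: once an hour
    return 86400            # near-dead: once a day

def next_ping_time(now_s, battery_fraction):
    """The next ping lands on the next multiple of the interval, so the
    full schedule is predictable from the battery state alone."""
    interval = ping_interval_s(battery_fraction)
    return ((now_s // interval) + 1) * interval
```

Aligning pings to multiples of the interval, rather than just spacing them out, is what lets a listener who arrives late still know the exact seconds at which to point a hydrophone.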
If all you want to know is where an aircraft is, we’ve seen from this that it doesn’t take too much. A slightly more frequent, accurately timed ping of any kind picked up by 2 satellites (LEO or geosync) is enough to get a pretty good idea where a plane is. The cheapest and simplest solution might be a radio that can’t be disabled that does this basic ping either all the time, or any time it doesn’t get a signal confirming that other systems like ACARS are doing their job.
Like many, I was surprised that the cell phones on board the aircraft that were left on — and every flight has many phones left on — didn’t help at all. Aircraft fly too high for most cell phones to actually associate with cell towers on the ground, so you would not see any connections made, but it seems likely that as the plane returned over inhabited areas on its way south, some of those phones probably transmitted something to those ground stations, something the ground stations ignored because they could not complete the handshake. If those stations kept lower level logs, there might be information there, but they probably don’t keep them. Because metal plane skins block signals, they might have been very weak. If the passengers were conscious, they probably would have been trying to hold their phones near the window, even though they could not connect at their altitude.
Another thing I have not understood is why we have only seen the results of one ping detected by the Inmarsat satellite over the Indian Ocean. From that ping, they were able to calculate the distance of the aircraft to the satellite, and thus draw that giant arc we’ve all seen on the maps. It’s not clear to me why there was only one ping. Another ping would have drawn another arc, and so on, and that would have given us much more data to narrow down the course of the aircraft, as it’s a fair presumption it was flying straight. The reason they now know the one ping came from the southern hemisphere is that the satellite itself is not perfectly centered and so moves up and down, giving a different doppler for north vs. south.
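The arc itself is simple geometry: the ping’s round-trip time gives a slant range to the satellite, and the law of cosines turns that range into the angular radius of a circle on the Earth’s surface centered on the sub-satellite point. A rough sketch, ignoring aircraft altitude and satellite drift (the numbers are illustrative, not the actual MH370 data):

```python
import math

EARTH_RADIUS_KM = 6371.0
GEO_ORBIT_RADIUS_KM = 42164.0   # geostationary orbit radius from Earth's center
C_KM_PER_S = 299792.458         # speed of light

def slant_range_km(round_trip_s):
    """One-way distance from aircraft to satellite, from ping round-trip time."""
    return round_trip_s * C_KM_PER_S / 2

def arc_angular_radius_deg(slant_km):
    """Angle at Earth's center between the sub-satellite point and the
    aircraft, via the law of cosines (aircraft altitude neglected)."""
    cos_theta = (GEO_ORBIT_RADIUS_KM**2 + EARTH_RADIUS_KM**2 - slant_km**2) \
                / (2 * GEO_ORBIT_RADIUS_KM * EARTH_RADIUS_KM)
    cos_theta = min(1.0, max(-1.0, cos_theta))  # guard against rounding
    return math.degrees(math.acos(cos_theta))
```

A second ping taken later gives a second circle; intersecting the circles with an assumed speed and straight-line course is exactly the narrowing-down this paragraph describes.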
We may not learn their fate. I must admit, I’m probably an unusual passenger. I am an astronomer, and so will notice if a plane has made such a big course correction, though I have to admit in the southern hemisphere I would get confused. But then I would pull out my phone and ask its GPS where we are. I do this all the time, and I often notice when the aircraft I am in does something odd like divert or circle. But I guess there are not so many people of this stripe on a typical plane. (I have flown in and out of KL on Malaysia Airlines myself, though long ago.)
While hope for the people aboard is gone, I do hope we learn the cause of the tragedy, to see if there is anything we can think of, not too expensive, that would prevent it from happening again. Indeed, the cost would not even need to be that low: the cost of this search and the Air France search both added up to a lot.
Update: A new idea — as soon as the search zone is identified, a search aircraft should drop small floating devices with small radio transmitters good for finding them again at modest range. Drop them as densely as you can, which might mean every 10 miles or every 100 miles, but try to get coverage of the area.
Then, if you find debris from the plane, do a radio hunt for the nearest such beacon. When you find it, or others, you can note their serial number, know where they were dropped, and thus get an idea of where the debris might have come from. Make them fancier, broadcasting their GPS location or remembering it for a dump when re-collected, and you could build a model of motion on the surface of the sea, and thus have a clue of how to track debris back to the crash site. In this case, it would have been a long time before the search zone was located, but in other cases it will be known sooner.
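A toy version of that drift model: each recovered beacon yields a drift vector from its known drop point, and rewinding the average drift from where debris was found gives a crude origin estimate. This is purely illustrative; it uses flat lat/lon arithmetic and ignores currents varying over time and space.

```python
# Hypothetical drift back-tracking from recovered beacons.

def drift_per_day(drop, found, days):
    """(d_lat, d_lon) drifted per day, from a beacon's drop and recovery
    positions, each given as a (lat, lon) pair."""
    return ((found[0] - drop[0]) / days, (found[1] - drop[1]) / days)

def estimate_origin(debris_pos, days_adrift, beacon_drifts):
    """Rewind the average per-day beacon drift from the debris position
    to guess where the debris started."""
    avg_dlat = sum(d[0] for d in beacon_drifts) / len(beacon_drifts)
    avg_dlon = sum(d[1] for d in beacon_drifts) / len(beacon_drifts)
    return (debris_pos[0] - avg_dlat * days_adrift,
            debris_pos[1] - avg_dlon * days_adrift)
```

The fancier beacons described above, which log their GPS track, would replace the single average vector with a full time-varying surface-current field, but the rewinding idea is the same.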