The recently released national noise map makes it strikingly clear just how much air travel contributes to the noise pollution in our lives. In my previous discussion of flying cars I expressed the feeling that noise is one of their greatest challenges. While we would all love a flying car (really a VTOL helicopter) that takes off from our back yards, we will not tolerate a neighbour having one if it means regular buzzing and distraction overhead and in the next yard.
Helicopters are also not energy efficient, so most serious flying-car efforts are fixed wing, using electric multirotors to provide vertical take-off but converting in some way to fixed-wing flight, usually powered by those same motors in a different orientation. If batteries continue their path of getting cheaper and, more importantly, lighter, this is possible.
Fixed wing planes can be decently efficient — particularly when they travel as the crow flies — though they can have trouble competing with lightweight electric ground vehicles. Almost all aircraft today fly much faster than their optimum efficiency speed. There are a lot of reasons for this. One is the fact that maintenance is charged by the hour, not the mile. Another is that planes need powerful engines to take off, and people are in a hurry and want to use that powerful engine to fly fast once they get up there.
Typical powered planes have a glide ratio (which is a good measure of their aerodynamic efficiency) around 10:1 to 14:1. That means for every foot they drop, they go forward 10 to 14 feet. Gliders, more properly known as “sailplanes,” commonly reach a 50:1 glide ratio today, and some go even higher. Sailplane pilots can use that efficiency to enter slowly rising columns of air found over hot spots on the ground and “soar” around in a circle to gain altitude, staying up for hours. Silent flying is great fun, though the tight turns needed to rise in a thermal can cause nausea. Efficient sailplanes are also light and can have fairly bumpy rides. (Note as well that the extra weight of energy storage and motors, and the drag of propellers, mean a lower glide ratio.)
It is the silent flight that is interesting. An autonomous high-efficiency aircraft, equipped with redundant electric motors and power systems, need not run its motors a lot of the time. While you would never want to be constantly starting and stopping piston-powered aircraft engines, electric motors can start, stop and change speed very quickly, and they provide tremendous torque for fast response times. It would be insane to regularly land your piston-powered aircraft without power, figuring you can just turn on the engine “if you need it.” It might not be that crazy in an electric aircraft, where you can get a motor up and operating in a fraction of a second with high reliability, and you have multiple systems, so even the rare failures can be tolerated.
Both passengers and people on the ground would greatly appreciate planes that were silent most of the time, including when landing at short airstrips. It could make the difference for acceptance.
Making efficient aircraft VTOL is a challenge. They tend to have large wingspans and are not so suitable for backyards, even if they can hover. But the option of redundant multirotor systems makes possible something else — aircraft wings that unfold in the air. There are “flying cars” with folding wings, which fold the wings up so the car can get on the road, but unfolding in the air is one of those things that is insane for today’s aircraft designs. A VTOL multirotor could rise up and unfold its wings; if they don’t unfold properly, it can descend (noisily) on the VTOL system, either to where it took off from, or to a nearby large area if the wings unfolded but not perfectly. An in-flight failure of the folding system could again be saved (uncomfortably but safely) by the VTOL system.
We don’t yet know how to make powered vertical takeoff or landing quiet enough. But we might make the rest of the flight fairly silent, and make the noisy part fairly brief; neighbours can tolerate brief noise, so long as it isn’t the equivalent of everyone running a leaf blower several times per day. A combination of robocars that take you on the first and last kilometre, and take-off sites where aircraft can briefly make noise without annoyance, might be a practical alternative.
Planes that fly silently would not fit well with today’s air traffic control regimes, which allocate ranges of altitude to planes. A plane with a 50:1 glide ratio could travel almost 10 miles while losing 1,000 feet of altitude, then climb back up on power for another silent pass. But constant changing of altitude would freak out ATC. A computerized ATC for autonomous planes could enable entirely different regimes of keeping planes apart that would allow this, and it would also allow long slow glides all the way to the runway.
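The arithmetic behind that silent-pass claim is simple:

```python
# Silent glide range for a sailplane-class airframe (figures from the text)
glide_ratio = 50            # 50:1, i.e. 50 feet forward per foot of descent
altitude_loss_ft = 1_000
feet_per_mile = 5_280

glide_miles = glide_ratio * altitude_loss_ft / feet_per_mile
# roughly 9.5 miles of engine-off travel per 1,000 feet of altitude
```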
Recently we’ve seen a series of startups arise hoping to make robocars with just computer vision, along with radar. That includes the recently unstealthed AutoX, the off-again, on-again efforts of comma.ai and, at the non-startup end, the dedication of Tesla to not using LIDAR, because it wants to sell cars today, before LIDARs can be bought at automotive quantities and prices.
Their optimism is based on the huge progress being made in the use of machine learning, most notably convolutional neural networks, at solving the problems of computer vision. Milestones are dropping quickly in AI and particularly pattern matching and computer vision. (The CNNs can also be applied to radar and LIDAR data.)
There are reasons pushing some teams this way. First of all, the big boys, including Google, have already made tons of progress with LIDAR. The right niche for a startup can be the place the big boys are ignoring. It might not work, but if it does, the payoff is huge. I fully understand the VCs investing in companies of this sort; that’s how VCs work. There is also the cost, and for Tesla and some others, the non-availability of LIDAR. The highest capability LIDARs today come from Velodyne, but they are expensive and in short supply — they can’t make them fast enough to keep up with the demand just from research teams!
For the three key technologies, these trends seem assured:
LIDAR will improve price/performance, eventually costing just hundreds of dollars for high resolution units, and less for low-res units.
Computer vision will improve until it reaches the needed levels of reliability, and the high-end processors for it will drop in cost and electrical power requirements.
Radar will drop in cost to tens of dollars, and software to analyse radar returns will improve.
In addition, there are some more speculative technologies whose trends are harder to predict, such as long-range LWIR LIDAR, new types of radar, and even a claimed lidar alternative that treats the photons like radio waves.
These trends are very likely. As a result, the likely winner continues to be a combination of all these technologies, and the question becomes which combination.
LIDAR’s problem is that it’s low resolution, medium in range and expensive today. Computer Vision (CV)’s problem is that it’s insufficiently reliable, depends on external lighting and needs expensive computers today. Radar’s problem is super low resolution.
Option one — high-end LIDAR with computer vision assist
High end LIDARs, like the 32 and 64 laser units favoured by the vast majority of teams, are extremely reliable at detecting potential obstacles on the road. They never fail (within their range) to differentiate something on the road from the background. But they often can’t tell you just what that something is, especially at a distance. They won’t know a car from a pickup truck, or 2 pedestrians from 3. They won’t read facial expressions or body language. They can read signs, but only when they are close. They can’t see colours, such as traffic signals.
The fusion of the depth map of LIDAR with the scene understanding of neural net based vision systems is powerful. The LIDAR can pull the pedestrian image away from the background, making it much easier for the computer vision to reliably figure out what it is. The CV is not 100% reliable, but it doesn’t have to be; ideally it just improves the result. LIDAR alone is good enough if you take the very simple approach of “if there’s something in the way, don’t hit it.” But that’s a pretty primitive result that makes the car brake too much for things it should not brake for.
Consider a bird on the road, or a blowing trash bag. It’s a lot harder for the LIDAR system to reliably identify those things. On the other hand, the vision systems will do a very good job of recognizing the birds. A vision system that makes errors 1 time in every 10,000 is not adequate for driving. That’s too high an error rate when you encounter thousands of obstacles every hour. But missing 1 bird out of 10,000 means that you brake unnecessarily for a bird perhaps once every year or two, which is quite acceptable.
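The numbers behind this argument can be made explicit; the per-hour obstacle count and the bird encounter rate below are illustrative assumptions, not figures from any real system:

```python
# A 1-in-10,000 error rate is fatal for primary perception but fine for
# bird/trash-bag filtering (encounter rates are assumed for illustration)
vision_error_rate = 1 / 10_000

obstacles_per_hour = 3_000    # assumed: things the car must classify each hour
errors_per_hour = obstacles_per_hour * vision_error_rate   # ~0.3 errors/hour

birds_per_day = 20            # assumed bird/trash-bag encounters per day
days_per_false_brake = 1 / (birds_per_day * vision_error_rate)
# ~500 days between unnecessary brakes for a missed bird: once a year or two
```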
Option two — lower end LIDAR with more dependence on vision
Low end lidars, with just 4 or so scanning planes, cost a lot less. Today’s LIDAR designs basically need to have an independent laser, lens and sensor for each plane, and so the more planes, the more cost. But that’s not enough to identify a lot of objects, and will be pretty deficient on things low to the ground or high up, or very small objects.
The interesting question is: can the flaws of current computer vision systems be made up for by a lower-end, lower-cost LIDAR? Those flaws, of course, include not always discerning things in their field of view. They also include needing illumination at night. This is a particular issue when you want a 360 degree view — one can project headlights forward and see as far as they reach, but you can’t project headlights backward or to the side without distracting drivers.
It’s possible one could use infrared headlights in the other directions (or forward for that matter.) After all, the LIDAR sends out infrared laser beams. There are eye safety limits (your iris does not contract and you don’t blink to IR light) but the heat output is also not very high.
Once again, the low end lidar will eliminate most of the highly feared false negatives (when the sensor doesn’t see something that’s there) but may generate more false positives (ghosts that make the vehicle brake for nothing.) False negatives are almost entirely unacceptable. False positives can be tolerated but if there are too many, the system does not satisfy the customer.
This option is cheaper but still demands computer vision even better than we have today. But not much better, which makes it interesting.
Tesla has said they are researching what they can do with radar to supplement cameras. Radar is good for obstacles in front of you, especially moving ones. Better radar is coming that does better with stationary objects and pulls out more resolution. Advanced tricks (including with neural networks) can look at radar signals over time to identify things like walking pedestrians.
Radar sees cars very well (especially licence plates) but is not great on pedestrians. On the other hand, for close objects like pedestrians, stereo vision can help the computer vision systems a lot. You mostly need long range for higher speeds, such as the highways, where vehicles are your only concern.
Cost will eventually be a driver of robocar choices, but not today. Today, safety is the only driver. Get it safe, before your competitors do, at almost any cost. Later make it cheap. That’s why most teams have chosen the use of higher end LIDAR and are supplementing it with vision.
There is an easy mistake to make, though, and sometimes the press and perhaps some teams are making it. It’s “easy” on the grand scale to make a car that can do basic driving and have a nice demo. You can do it with just LIDAR or just vision. The hard part is the last 1%, which takes 99% of the time, if not more. Google had a car drive 1,000 miles of different roads and 100,000 total miles in the first 2 years of their project, back in 2010, and even in 2017, with by far the largest and most skilled team, they do not feel their car is ready. It gets easier every day, as tech advances, to get the demo working, but that should not be mistaken for the real success that is required.
California has published updated draft regulations for robocars whose most notable new feature is rules for testing and operating unmanned cars, including cars which have no steering wheel, such as Google, Navya, Zoox and others have designed.
This is a big step forward from earlier plans which would have banned testing and deploying those vehicles. Not that they are ready to deploy, but once you ban something it’s harder to un-ban it.
One type of vehicle whose coverage is unclear is small unmanned delivery robots, like the ones we’re working on at Starship. Small, light, low speed, inherently unmanned and running mostly on the sidewalks, they are not at all a fit for these regulations and presumably would not be covered by them — that should be made more explicit.
Another large part of the regulations cover revoking permits and the bureaucracy around that. You can bet that this is because of the dust-up between the DMV and Uber/Otto a few months ago, where Uber declared that they didn’t need permits (probably technically true) but the DMV found it not at all in the spirit of the rules and revoked the licence plates on the cars. The DMV wants to be ready to fight those who challenge its authority.
Intel buys MobilEye
Intel has paid over $15B to buy Jerusalem based MobilEye. MobilEye builds ASIC-based camera/computer vision systems to do ADAS and has been steadily enhancing them to work as a self-driving sensor. They’ve done so well the stock market already got very excited and pushed them up to near this rich valuation — the stock traded at close to this for a while, but fell after ME said it would no longer sell their chips to Tesla. (Tesla’s first autopilot depended heavily on the MobilEye, and while ME’s contract with Tesla explicitly stated it did not detect things like cross-traffic, that failure to detect played a role in the famous Tesla autopilot fatal crash.)
In a surprising and wise move, Intel is going to move its other self-driving efforts to Israel and let MobilEye run them, rather than gobble them up and swallow/destroy them. ME is a smart company, fairly nimble, though it has too much focus on making low-cost sensors in a world where safety at high cost is better than less safety at low cost. (Disclaimer: I own some MBLY and made a nice profit on it in this sale.)
MobilEye has been the leader in doing ADAS functions with just cameras and cameras+radar. Several other startups are attempting this, and of course so is Tesla in their independent effort. However, LIDAR continues to get cheaper (with many companies, including Quanergy, whom I advise, working hard on that.) The question may be shifting from “will it be cameras or lasers?” to “will it be fancy vision systems with low-end LIDAR, or will it be high-end LIDAR with more limited vision systems?” In fact, that question deserves another post.
Waymo and Uber Lawsuit
I am not going to comment a great deal on this lawsuit, because I am close with both sides, and have NDAs with both Otto and formerly with Google/Waymo. There are lots of press reports on the lawsuit, filed by Waymo accusing Anthony Levandowski (who co-founded Otto and helped found the car team at Google) of stealing a vast trove of Google’s documents and designs. This fairly detailed Bloomberg report has a lot of information, including reports that at an internal meeting, Anthony told his colleagues that any downloading he did was simply to allow work from home.
The size of the lawsuit is staggering. Since Otto sold for 1% of Uber stock (worth over $750M), the dollar values are huge, particularly if, as Google alleges, they can demonstrate Uber encouraged wrongdoing. At the same time, if Google doesn’t prove their allegations, Otto and Anthony could file what might be the largest libel lawsuit in history, since Google published their accusations not just in court filings, but in their blog.
One reason that might not happen is that Uber is seeking to force arbitration. Like almost all contracts these days, the contracts here included clauses forcing disputes to go to arbitrators, not courts. That will mean that the resolution and other data remain secret.
At the same time, Uber should fear something else. Uber is nothing, a $0 company, without iPhone and Android. (There is a Windows mobile app, but it has very low penetration.) Uber could push all drivers to iPhone, but if they ever found themselves unable to use Android for customers, they would lose more than they can afford.
I am not suggesting Google would go as far as to pull or block the Uber app on Android if it got into a battle. Aside from being unethical that might well violate antitrust regulations. But don’t underestimate the risk of betting half your business on a platform controlled by a company you go to war with. There are tricks I can think of (but am not yet publishing here) which Google could do which would not be seen as unfair or anti-competitive but which could potentially ruin Uber. Uber and Google will both have to be cautious in any serious battle.
In other Uber news, leaked reports say their intervention rate is still quite high. Intervention figures can be hard to interpret. Drivers are told to intervene at the smell of trouble, so the rate of grabbing the wheel can be much higher than the rate of actual problems. These leaks suggest, however, a fairly high rate of actual problems. This should remind people that while it’s pretty easy for a skilled team to get a car on the road and doing basic driving in a short time, there is a reason that Google’s very smart team has been at it 9 years and is still not ready to ship. The last 1% of the work takes 99% of the time.
I have so much paper that I’ve been on a slow quest to scan things. So I have high speed scanners and other tools, but it remains a great deal of work to get it done, especially reliably enough that you would throw away the scanned papers. I have done around 10 posts on digitizing and gathered them under that tag.
Recently, I was asked by a friend who could not figure out what to do with the papers of a deceased parent. Scanning them on your own or in scanning shops is time consuming and expensive, so a new thought came to me.
Set up a scanning table by mounting a camera that shoots 4K video looking down on the table. I have tripods that have an arm that extends out but there are many ways to mount it. Light the table brightly, and bring your papers. Then start the 4K video and start slapping the pages down (or pulling them off) as fast as you can.
There is no software today that can turn that video into a well scanned document. But there will be. Truth is, we could write it today, but nobody has. If you scan this way, you’re making the bet that somebody will. Even if nobody does, you can still go into the video, find any page and pull it out by hand; it will just be a lot of work, so you would only do this for single pages, not for whole documents. You are literally saving the document “for the future” because you are depending on future technology to easily extract it.
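No such software exists today, but the heart of it, noticing that a new page has appeared and waiting for the scene to settle before grabbing a frame, is ordinary frame differencing. Here is a minimal sketch over grayscale frames; the threshold and the settle heuristic are assumptions, and a real tool would add deskewing, glare removal and hand masking:

```python
import numpy as np

def detect_page_frames(frames, change_thresh=20.0, settle_frames=3):
    """Pick one representative frame per page from a video of pages
    being slapped down on a table.

    A big jump in mean absolute pixel difference between consecutive
    frames marks a new page; once settle_frames quiet frames follow
    (hand out of the way, page lying flat), that frame is kept as the scan.
    """
    pages = []
    changed = True   # treat the opening frame as a fresh page
    quiet = 0
    prev = frames[0]
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.int16) - prev.astype(np.int16)).mean()
        if diff > change_thresh:
            changed, quiet = True, 0
        elif changed:
            quiet += 1
            if quiet >= settle_frames:
                pages.append(frame)
                changed = False
        prev = frame
    return pages
```

A 4K frame (3840×2160) over a letter-size page works out to roughly 250 dpi, which is why the bet that future software can recover real scans from such video seems plausible.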
Sooner than most expected, the Trump administration is in trouble. Many are talking about how to end it, or hasten that end.
The Democrats don’t have the power to take down Trump prior to 2020. Not even after 2018.
The revolt against Trump almost surely has to come from within his own party.
While many Republicans dislike Trump, revolt within a party is extremely difficult and goes against all party instincts.
Republicans will strongly resist fighting Trump as the left would like, or in a way which benefits the left.
As such, the more the left approves of a method of fighting Trump, the less likely it is the Republicans would use it.
This suggests a very different anti-Trump strategy than the obvious one followed by most.
Many in the GOP would prefer not to have Trump, and are ready to be disloyal to him as their leader. They are not, however, prepared to be disloyal to their party and their movement. Career party members of both sides often will put loyalty to party ahead of loyalty to country, even though they would never admit that.
This means that if the GOP does this, it must be for their own reasons, not the left’s, and it must clearly not appear to serve the left except in the broadest way.
This creates a conundrum for the left fighting Trump. If they rally around something, such as a Trump error, they push the right to reluctantly defend Trump on that issue. Many GOP can’t stand Trump but support him because the alternative is victory for the left, and injury for their party. As such, the best strategy for the left may be to pull back, or stick only to issues that are clearly their own.
The Democrats might consider strategies that are victories for the GOP, such as conceding important items in Congress in exchange for impeachment. The Republicans know the Democrats will vote for impeachment, so only a minority of Republicans need support it, but for them, a party divided like that is no victory. This may mean offering support for portions of Pence’s or the party’s agenda, something the entire GOP can see as a victory for their party. The Democrats lost in 2016, and they must accept that, and give up the hope that Trump’s fall would be good for the Democratic Party. They must accept only that it will be good for the country and neutral, or even slightly negative, for the party.
It’s a common human foible, but politicians shrink from ever admitting they were wrong. Those who supported Trump, even holding their noses, won’t see themselves as having failed. They won’t go, “Oh, you Democrats were right, sorry about that.” The reason will need to be something new, something few people knew or talked about before now. People are just less likely to do the right thing if they know it’s what their opponents want them to do.
The Democrats, however, are not a cohesive force. Even if “hold back and let the GOP do it” is the right plan, they will not embrace it in large numbers. Thus they will slow down the fall of Trump. This was a frequent mistake made during the election — the unprecedented level of contempt by the left for Trump and in particular for Trump supporters brought the Trump supporters together and made them stronger, rather than weakening them. It was a strong contributor to the Trump victory.
This advice does not mean, “Only complain about Trump in a way that the right-wing will understand.” Normally that is the best approach. Here, the problem is that as soon as a complaint is seen as coming from the left, there will be resistance to acting on it.
Some hold out for a change of Congress in 2018. It is quite normal for the President’s party — especially an unpopular President’s — to lose seats in the mid-terms. Unfortunately, the Senate seats up in 2018 are far from likely to swing the Senate to the Democrats. In fact, only 9 Republican seats are up for re-election, with only Nevada at risk, and many of the Democratic incumbents are in pro-Trump states. It would take an immense voter revolt for the Senate not to become more Republican. In the House, Operation Redmap has assured Republican control short of a very major shift, and it also seems mostly to assure — absent some sort of court ruling against gerrymandering — that they will get to draw the lines again in 2020 and continue it for another decade.
The Deep State
One group that can take down Trump aside from the Republicans is the intelligence agencies. Many speculate that this is already underway. This is extremely troubling to me. A coup d’état by the intelligence agencies is still a coup, even if it meets some test of “being a coup that needed to happen.” This is a bad precedent because the truth is the intelligence agencies have deep dirt on everybody, so it becomes up to them to decide which coups need to happen and which don’t. (Indeed, we saw the Russian agencies use this power already.)
There are already checks and balances for this. If the agencies find evidence of treason or malfeasance by one branch, they should present it to the other branch to act. All evidence should go to the congressional intelligence committees. But that means that again, the Republicans must decide whether to take down their own.
The press can play a role, but mainly the right-wing or right-of-center press. Again, it is their criticism of Trump that would enable the Republicans to break party loyalty, not criticisms found in media even perceived to be left or otherwise inherently anti-Trump. This is one reason Trump has worked to push more media into that classification, because it means their attacks will not be respected by his base and his party.
Trump’s base is not the mainstream GOP
The strongest counter to this approach is that Trump won the GOP nomination (and the election) due to support outside the mainstream GOP, merged with support from the party-loyal factions within it. He has a tool to use against his opponents within the mainstream GOP, the same tool he used to defeat them in the nomination process. So even they must take care, for while they care most about alienating their own base, and least about alienating the progressive left, they are worried about alienating the “outsider right” contingent that Trump stumbled upon. They ideally want to be seen as having done the best thing for the party and the country in any efforts they make to block, or remove, the President.
Caltrain is the commuter rail line of the San Francisco peninsula. It’s not particularly good, and California is the land of the car commuter, but a plan was underway to convert it from diesel to electric. This made news this week as the California Republican house members announced they want to put a stop to both this project, and the much larger California High Speed Rail that hopes to open in 2030. For various reasons they may be right about the high speed rail but stop the electric trains? Electric trains are much better than diesel; they are cleaner and faster and quieter. But one number stands out in the plan.
To electrify the 51 miles of track, and do some other related improvements, is forecast to cost over 1.5 billion dollars. That’s around $30M per mile.
So I started to ask, what other technology could we buy with $1.5 billion plus a private right-of-way through the most populated areas of silicon valley and the peninsula? Caltrain carries about 60,000 passengers/weekday (30,000 each way.) That’s about $50,000 per rider. In particular, what about a robotic transit line, using self-driving cars, vans and buses?
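Those per-mile and per-rider figures check out:

```python
budget = 1_500_000_000     # forecast electrification cost from the plan
miles = 51
one_way_riders = 30_000    # 60,000 weekday boardings means 30,000 each way

cost_per_mile = budget / miles            # about $29.4M, i.e. "around $30M"
cost_per_rider = budget / one_way_riders  # $50,000 per daily rider
```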
Paving over the tracks is relatively inexpensive. In fact, if we didn’t have buses, you could get by with fairly meager pavement since no heavy vehicles would travel the line. You could leave the rails intact in the pavement, though that makes the paving job harder. You want pavement because you want stations to become “offline” — vehicles depart the main route when they stop so that express vehicles can pass them by. That’s possible with rail, but in spite of the virtues of rail, there are other reasons to go to tires.
Fortunately, due to the addition of express trains many years ago, some stations already are 4 tracks wide, making it easy to convert stations to an express route with space by the side for vehicles to stop and let passengers on/off. Many other stations have parking lots or other land next to them allowing reasonably easy conversion. A few stations would present some issues.
Making robocars for a dedicated track is easy; we could have built that decades ago. In fact, with their much shorter stopping distance they could be safer than trains on rails. Perhaps we had to wait until today to convince people that one could get the same safety off of rails. Another thing that only arrived recently was the presence of smartphones in the hands of almost all the passengers, and low cost computing to make kiosks for the rest. That matters because the key to a robotic transit line would be coordination around the desires of passengers. A robotic transit line would know just who was going from station A to station J, and attempt to allocate a vehicle just for them. This vehicle would stop only at those two stations, providing a nonstop trip for most passengers. The lack of stops is also more energy efficient, but the real win is that it’s more pleasant and faster. With a private ROW, it can easily beat a private car on the highways, especially at rush hour.
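The dispatch idea, grouping riders by origin-destination pair and giving each group the smallest vehicles that hold it, can be sketched like this; the vehicle sizes and the greedy fill are illustrative assumptions, not a real transit scheduler:

```python
from collections import Counter

VEHICLE_SIZES = (4, 9, 40)   # assumed fleet: car, minivan, bus (smallest first)

def plan_nonstop_trips(requests):
    """Group riders by (origin, destination) and assign each group the
    smallest vehicles that cover it, so most trips run nonstop.

    requests: one (origin, destination) pair per rider.
    Returns {(origin, destination): [vehicle_capacity, ...]}.
    """
    plan = {}
    for od, riders in Counter(requests).items():
        vehicles = []
        while riders > 0:
            # smallest single vehicle that seats everyone left, else the biggest
            size = next((s for s in VEHICLE_SIZES if s >= riders),
                        VEHICLE_SIZES[-1])
            vehicles.append(size)
            riders -= size
        plan[od] = vehicles
    return plan
```

For the 8 riders going from B to K in the next paragraph, this yields a single 9-seat van.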
Another big energy win is sizing the vehicles to the load. If there are only 8 passengers going from B to K, then a van is the right choice, not a bus. This is particularly true off-peak, where vast amounts of energy are wasted moving big trains with just a few people. Caltrain’s last train to San Francisco never has more than 100 people on it. Smaller vehicles also allow for more frequent service in an efficient manner, and late night service as well — except freight uses these particular rails at night. (Most commuter trains shut down well before midnight.) Knowing you can get back is a big factor in whether you take a transit line at night.
An over-done service with a 40 passenger bus every 2 seconds would move 72,000 people (but really 30,000) in one hour in one direction, versus Caltrain’s 30,000 in a day. So of course we would not build that; there would only be a few buses, mainly for rush hour. Even a fleet of just 4,000 9-passenger minivans (3 rows of 3) could move around 16,000 per hour (but really 8,000) in each direction. Even at $50,000 per van, we’ve spent only $200M of our $1.5B, though vans that cheap might wear out too fast, so we could bump the price and give them a much longer lifetime.
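Those capacity and fleet-cost numbers are easy to verify (the headway and van price are the figures assumed above):

```python
bus_seats, headway_s = 40, 2
buses_per_hour = 3_600 // headway_s         # 1,800 buses/hour
peak_capacity = buses_per_hour * bus_seats  # 72,000 people/hour/direction
# gaps left for the 40 grade crossings cut usable capacity to under half

van_fleet, van_cost = 4_000, 50_000
fleet_cost = van_fleet * van_cost           # $200M of the $1.5B budget
```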
These vans and cars could be electric. This could be done entirely with batteries and a very impressive battery swap system, or you could have short sections of track which are electrified — with overhead wires or even third rails. The electric lines would be used to recharge batteries and supercapacitors, and would only be present on parts of the track. Unlike old 3rd rail technology, which requires full grade separation, there are new techniques to build safe 3rd rails that only provide current in a track segment after getting a positive digital signal from the vehicle. This is much cheaper than overhead wires. Inductive charging is also possible but makes pavement construction and maintenance much more expensive.
Other alternatives would be things like natural gas (which is cheap and much cleaner than liquid fuels, though still emits CO2) because it can be refilled quickly. Or hydrogen fuel cell vehicles could work here — hydrogen can be refilled quickly and can be zero emissions. Regular fossil fuel is also an option for peak times. For example the rush hour buses might make more sense running on CNG or even gasoline. The lack of starts and stops can make this pretty efficient.
In such a system, you can also add new “stations” anywhere the ROW is wide enough for a side-lane and a small platform. You don’t need the 100m long platform able to hold a big train, just some pavement big enough to load a van. You can add a new station for extremely low cost. Of course, with more stations, it’s harder to group people for nonstop trips, and more people would need to take two-hop trips — a small van or car that takes them from a mini-station to a major station, where they join a larger group heading to their true destination.
Of course, if you were designing this from scratch, you would make the ROW with a shoulder everywhere that allowed vehicles to pull off the main track at any point to pick up a passenger and there would barely be “stations” — they would be closer to bus stops.
Getting off the track
Caltrain’s station in San Francisco is quite far from most of the destinations people want to go to. It’s one of the big reasons people don’t ride it. Vans on tires, however, have the option of keeping going once they get to the station. Employers could sponsor vehicles that arrive at the station and keep driving to their office tower. Vans could also continue to BART or more directly to underground Muni, long before the planned subway is ready. Likewise on the peninsula, vans and buses would travel from stations to corporate HQ. Google, Yahoo, Apple and many other companies already run transit fleets to bring employees in — you can bet that given the option they would gladly have those vans drive the old rail line at express speeds. On day one, they could have a driver who only drives the section back and forth between the station and the corporate office. In the not too distant future, the van or bus would of course drive itself. It’s not even out of the question that one of the passengers in a van, after having taken a special driving test, could drive that last mile, though you may need to ensure somebody drives it back.
I noted above that capacity would be slightly less than half of full. That’s because Caltrain has 40 at-grade crossings on the peninsula. The robotic vehicles would coordinate their trips to travel in bunches, leaving gaps where the cross-street’s light can be turned green. If any car were detected trying to run the red, a warning could be broadcast so that all the robotic vans could slow or even brake hard. Unlike trains, they could brake in reasonable amounts of time if somebody stalls on the old track. You would also detect people attempting to drive on the path or walk on it. Today’s cameras and cheap LIDARs can make that affordable. The biggest problem is that the gaps must appear in both directions (more on that in the comments).
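The bunching described above is at heart a scheduling problem: snap each van’s crossing time into a shared “platoon window” so the cross street gets a predictable green phase in the remainder of each cycle. A toy sketch, where the 30-second cycle and 18-second window are arbitrary numbers I chose for illustration:

```python
def platoon_slot(desired_time, cycle=30.0, platoon_window=18.0):
    """Snap a vehicle's desired crossing time into the platoon window.

    Each `cycle` seconds is split into a platoon window (vans may cross)
    followed by a gap where the cross street's light turns green.
    Returns the earliest time >= desired_time inside a platoon window.
    """
    offset = desired_time % cycle
    if offset < platoon_window:
        return desired_time                 # already inside the window
    return desired_time + (cycle - offset)  # wait for the next window

# A van arriving during the cross-traffic gap is delayed slightly.
assert platoon_slot(10.0) == 10.0    # inside the 0-18 s window
assert platoon_slot(25.0) == 30.0    # in the gap: held until next cycle
```

Note this simple version handles one direction; as the paragraph points out, the real constraint is harder because the gaps must line up for traffic in both directions at once.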
Over time, there is also the option in some places to build special crossings. Because the vans and cars would all be fairly low, much less expensive underpasses could be created under some of the roads for use only by the smaller vehicles. Larger vehicles would still need to bunch themselves together to leave gaps for the cross-traffic. One could also create overpasses rated only for lightweight vehicles at much lower cost, though those would still need to be high enough for trucks to go underneath. In addition, while cars can handle much, much steeper grades than trains, it could get disconcerting to handle too much up and down at 100mph. And yes, in time, they would go 100mph or even faster. And in time, some would even draft one another to both increase capacity and save energy — creating virtual trains where there used to be physical ones.
And then, obsolete
This robotic transit line would be much better than the train. But it would also be obsolete in just a couple of decades! As the rest of the world moves to more robocars, the transit line would switch to being just another path for the robocars. It would be superior, because it would allow only robocars and never have traffic congestion. You would have to pay extra to use it at rush hour, but many vehicles would, and large vehicles would get preference. The stations would largely vanish as all vehicles are able to go door to door. Most of the infrastructure would get re-used after the transit line shuts down.
It might seem crazy to build such a system if it will be obsolete in a short time, but it’s even crazier to spend billions on shoring up a 19th-century train.
What about the first law?
I’ve often said the first law of robocars is you don’t change the infrastructure. In particular, I am in general against ideas like this which create special roads just for robocars, because it’s essential that we not imagine robocars are only good on special roads. It’s only when huge amounts of money are already earmarked for infrastructure that this makes sense. Now we are well on the way to making general robocars good for ordinary streets. As such, special cars only for the former rail line run less risk of making people believe that robocars are only safe on dedicated paths. In fact, the funded development would almost surely lead to vehicles that work off the path as well, and allow high volume manufacturing of robotic transit vehicles for the future.
Could this actually happen?
I do fear that our urban and transit planners are unlikely to be so forward looking as to abandon, overnight, a decades-old plan for a centuries-old technology. But the advantages are huge:
It should be cheaper
Many companies could do it, and many would want to, to fund development of other technology
It would almost surely be technology from the Bay Area, not foreign technology, though vehicle manufacturing would come from outside
They could also get money for the existing rolling stock and steel in the rails to fund this
The service level would be vastly better. Wait times of mere minutes. Non-stop service. Higher speeds.
The energy use would be far lower and greener, especially if electric, CNG or hydrogen vehicles are used
The main downside is risk. This doesn’t exist yet. If you pave the roadbed while leaving the rails embedded in it, you would not need to shut down the rail line at first. In fact, you could keep it running as long as there were places where the vans could drive around trains that are slowing or stopping in the stations. Otherwise you do need to switch one day.
There’s been a lot of talk this week on the nature of free speech. I’m a very strong defender of free speech, so I felt it would be worth laying out some of the reasons why “the first amendment is not just the law, it’s a good idea.” While I am not speaking for any particular organization, and am not a lawyer nor giving legal advice, my background includes things like:
Being a plaintiff in ACLU v. Reno, which we won 9-0 in the supreme court, for which I was named a “Champion of Free Speech” by the ACLU.
20 years with the Electronic Frontier Foundation, including 10 as chairman.
Two recent events have caused much debate. A viral video of somebody punching Richard B. Spencer, a man who gathers attention by promoting neo-nazi and whites-first views, has caused people to ask, “Isn’t it OK to punch a Nazi?” You see Spencer declaring “Hail Trump” and people doing Nazi salutes in one famous video.
There have also been two attempts by Breitbart writer Milo Yiannopoulos to speak at UC Davis and UC Berkeley that have been met with protests, calls that he be banned from speaking, and cancellations of his talks due to fear of violence. At UCB, a large group of apparent “black bloc” anarchists invaded a peaceful protest with violent acts, resulting in chaos and the cancellation of the talk.
For a free speech supporter, the situation is fairly clear. No, it’s not OK to punch a Nazi (or in this case a wannabe neo-nazi) simply for what he says or what he is, even if it’s so-called “hate speech.” (In fact, that we don’t punch people for what they say is one of the important things that makes us better than Nazis.) And universities should not distinguish among speakers who are legitimately invited by members of the university community because of the content of their messages, even if it is hugely unpopular, offensive and hateful.
Speech can be evil. But censorship is more evil.
It is a common mistake of those who say, “I am all in favour of free speech, but….” to imagine that we support free speech because speech is pure and can’t cause harm. This is the “sticks and stones” philosophy, and if you accept it, it follows that if you can show some speech is, unlike most speech, actually harmful, it is then OK to ban it.
While some speech is indeed harmless, important speech is powerful. It evokes change in the world, for good or ill. Speech can do great good and great harm. Consider the book “The Communist Manifesto” which advocates that to bring about an ideal communist society, one must begin with armed revolution and a “dictatorship of the proletariat” that uses draconian methods to work towards the pure goal. That idea has been used to create such dictatorships, and they have all been horrors. These dictatorships (particularly Stalin and Mao) perverted the ideas but used the ideals to justify acts which killed many tens of millions — leaving the Nazi holocaust in the dust. You can’t get much more evil or more proven harm. Yet we don’t ban that book.
Lots of speech is evil, but we have found no way to determine that reliably or in advance. As such, giving any entity the power to decide what speech is good and what is evil is a more dangerous proposition than just allowing all speech. For just as the ideas in The Communist Manifesto have led to the deaths of millions, so much of the good in the world is also attributable to other ideas and books, including ones which were banned. We can’t grant an agency the power to decide what is good or bad without having them stamp out too much of the good. Nobody has the crystal ball that can do this, and history shows the terrible record of censorship agencies in the places that allow them.
There is also a practical angle. Censorship is only moderately effective. It’s probably slightly better at crushing good ideas than bad ones, but either way, for all the pain we get from censorship, it rarely actually stops the bad (or offensive or blasphemous) ideas from getting out. In fact, it is often of negative value, causing more publicity and support for the thing to be censored. (This worked in my favour when they tried to shut down my newsgroup, and later against Barbra Streisand, to the extent that the principle was given her name.) Indeed, I strongly suspect that the protests (even the peaceful ones) are doing precisely what Yiannopoulos wants. Do you think he cares that much about giving a talk to UC students? Or more about the chance to be banned on the campus famous for the Free Speech Movement of the 60s?
If we decide it’s going to be OK to punch some people for what they say, but not others, we need an arbiter who decides which speech is evil enough to warrant punching. And having that arbiter is a worse idea than letting the offensive person speak.
We have other ways to deal with bad speech
While there is bad speech, there is some merit to the “sticks and stones” argument in that people must be driven to action by the bad speech in order to get the harm. There, history shows that countering bad speech with good speech is a better, and certainly less dangerous counter-weapon than censorship. The answer to bad speech is more speech and more education.
There is a difference between speech and action
I will often hear people say that clearly some types of speech must be stopped — “what about shouting ‘fire!’ in a crowded theatre?”
That example is wrong for two important reasons. First, it’s fairly clear that shouting fire like this is not merely speech, but an action. It is the setting of a false fire alarm. It is like pulling the lever on an electric fire alarm, which is easily seen as an action, and we can regulate actions. It is illegal to set off a false fire alarm, particularly if it could cause a stampede.
Secondly, it’s a great demonstration of the evils of censorship. That argument became famous in the supreme court case Schenck v. United States. The case revolved around distributing leaflets which opposed the Draft in WWI. The court considered promoting resistance to the draft as akin to shouting fire in a crowded theatre. With our modern sensibility, we now see the debate about the merits of the draft to be an important one in a free society, one where all voices should be heard. Back then, they decided that the “incorrect” anti-draft position was so terrible it was like setting a false fire alarm. The reason why we can’t trust any agency to decide what speech is good and what is bad becomes very clear if you examine this case.
Generally, free speech law has allowed actions to be regulated but not speech. So setting a false fire alarm can be regulated. In addition, restrictions on the time and manner of speech can be regulated. They can make a rule prohibiting megaphones, but they can’t make a rule which ends up prohibiting megaphones based on what is said through them.
They can also make rules against conspiring to commit crimes. “Let’s attack John Smith” is more than speech, it is conspiracy to commit assault. “John Smith deserves assault” is not necessarily conspiracy, and the courts examine the circumstances in the borderline cases to see if the speech was also a threat, incitement or conspiracy. And yes, saying “It is OK to punch a Nazi” is speech when it’s an intellectual exercise, but more than speech when it turns into “let’s go down to the rally and punch Richard Spencer.” To count as incitement, the incited violent acts must be imminent, the path between the words and the violence must be clear and direct.
Hate speech is protected speech, at least in the USA
In many places, there have been efforts to define a special class of speech called “hate speech” and then to ban it. A number of countries, including Canada, have such laws. They are controversial and as predicted above, they have from time to time been used to attack political opponents of those in power rather than just shut down the Nazis and racists the way they are supposed to.
In the USA however, courts have consistently protected hate speech the same as any other speech.
Universities are held to an even higher standard
Many have been upset with universities allowing hate speakers to speak on campus. There are times when a student or professor wants to express an unpopular view, but more uproar comes when an outsider, like Yiannopoulos, is going to give an address.
Outsiders can’t generally just come speak at universities, but they often get invitations from insiders. Yiannopoulos was invited by student Republican clubs, for example.
In the USA, the 1st amendment stops the government from censoring. The University of California is a state school, so the 1st amendment constrains it, though there is debate on just how far that extends. (It does not govern totally private entities, such as a private club which can indeed decide what messages are allowed at club meetings.)
I’m not going to speak to that debate; rather I am going to invoke something much older than the 1st amendment, namely the traditions of academic freedom. For centuries, longer than any government or constitution has existed, universities have taken the principles of academic freedom as sacred. These principles declare an even higher bar. Universities are supposed to be the places that welcome controversial and dangerous views, views even the most enlightened governments of the world are afraid of. This has given us concepts like tenure, which assures faculty they will not be fired for expressing controversial views. History has taught us that so many of the most valuable ideas ever put forward began as controversial and banned thoughts in mainstream society.
As such, over and above any 1st amendment duties, universities, if they wish to honour their traditions, must set rules for who speaks based not at all on the message said by the speaker. They can limit locations and times. They can require external speakers to get an invitation from an accredited member of their community, but they must not treat a speaker of one message differently from another.
Indeed, there is an argument that if a speaker is so controversial, even within their own community, that there is fear of violence, the university should go the extra mile to provide extra protection rather than shy away in fear.
This does mean that a few dickheads will get to speak at universities to spout gibberish. That’s better than the alternative.
So is it OK to punch a Nazi?
Usually those asking this question point out that had the world punched/fought the real Nazis early on, the great horrors of the 20th century might have been averted. It is important to realize that this is clearly only obvious in hindsight. The people of the day did not have that vision at all. The Nazis, of course, got violent quite early on, so there were plenty of reasons to meet them with force if people had the will to do so. It was not a lack of moral clarity about “punching” them.
Indeed, at the end of the war, when the allies had almost all the Nazis captive, they tried them, and those who could be proven involved in the war crimes were executed or jailed. The others, in spite of killing many allied soldiers and civilians in battle, were set free. Including many members of the Nazi party.
Even when we had actual Nazis to deal with, the answer was not to punch them for what they were or what they said. They were punished if they were involved in the atrocities. Not for talking about them. If the actual victims of the real Nazis could do that, it seems odd for people today to claim to be wiser about it.
While the real Nazis are best known for killing people for their ethnicity and religion, they were also ready to do it for ideology, politics or sexual orientation, and many communists or simple political opponents were persecuted, rounded up and executed for it. Punching people for their beliefs is what Nazis do, not us. Instead, counter their ideology with better ideology, and be wary; for if they take up arms in their cause, it is certainly appropriate to respond with force.
On these numbers, Google’s lead is extreme. Of over 600,000 autonomous miles driven by the various teams, Google/Waymo was 97% of them — in other words 30 times as much as everybody else put together. Beyond that, their rate of miles between disengagements (around 5,000 — a 4x improvement over 2015) is one or two orders of magnitude better than the others, and in fact for most of the others, they have so few miles that you can’t even produce a meaningful number. Only Cruise, Nissan and Delphi can claim enough miles to really tell.
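The arithmetic behind that “30 times” claim is easy to check from the two approximate figures in the paragraph (the total reported miles and Waymo’s share of them):

```python
total_miles = 600_000          # approximate total across all CA reports
waymo_share = 0.97             # Waymo's fraction, per the reports

waymo_miles = total_miles * waymo_share      # ~582,000 miles
everyone_else = total_miles - waymo_miles    # ~18,000 miles combined
ratio = waymo_miles / everyone_else

print(round(waymo_miles), round(everyone_else), round(ratio, 1))
# a ratio a bit over 30, consistent with the claim above
```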
Tesla is a notable entry. In 2015 they reported driving zero miles, and in 2016 they did report a very small number of miles with tons of disengagements from software failures (one every 3 miles). That’s because Tesla’s autopilot is not a robocar system, and so miles driven by it are not counted. Tesla’s numbers must come from small scale tests of a more experimental vehicle. This is very much not in line with Tesla’s claim that it will release full autonomy features for their cars fairly soon, and that they already have all the hardware needed for that to happen.
Unfortunately you can’t easily compare these numbers:
Some companies are doing most of their testing on test tracks, and they do not need to report what happens there.
Companies have taken different interpretations of what needs to be reported. Most of Cruise’s disengagements are listed as “planned” but in theory those should not be listed in these reports. But they also don’t list the unplanned ones which should be there.
Delphi lists real causes and Nissan is very detailed as well. Others are less so.
Many teams test outside California, or even do most of their testing elsewhere. Waymo/Google actually tests a great deal outside California, making their numbers even bigger.
Cars drive all sorts of different roads. Urban streets with pedestrians are much harder than highway miles. The reports do list something about conditions but it takes a lot to compare apples to apples. (Apple is not one of the companies filing a report, BTW.)
One complication is that typically safety drivers are told to disengage if they have any doubts. It thus varies from driver to driver and company to company what “doubts” are and how to deal with them.
Google has said their approach is to test any disengagement in simulator, to find out what probably would have happened if the driver had not disengaged. If there would have been a “contact” (accident) then Google considers that a real incident, and those are more rare than is reported here. Many of the disengagements occur when software detects faults with software or sensors. There, we do indeed have a problem, but like human beings who zone out, not all such failures will cause accidents or even safety issues. You want to get rid of all of them, to be sure, but if you are trying to compare the safety of the systems to humans, it’s not easy to do.
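That triage approach is essentially a filter: replay each disengagement in the simulator and count only the ones whose counterfactual ends in contact. A schematic sketch — the event labels and the `simulate` callback are invented for illustration, not Google’s actual tooling:

```python
def count_real_incidents(disengagements, simulate):
    """Count only disengagements that would have led to contact.

    `disengagements` is a list of logged events; `simulate(event)` is
    assumed to replay the event without the driver's intervention and
    return True if the counterfactual ends in contact.  Software-fault
    disengagements that would have recovered on their own don't count.
    """
    return sum(1 for event in disengagements if simulate(event))

# Toy example: 5 logged disengagements, but only the lane-drift event's
# counterfactual actually produces contact.
events = ["drift", "sw_fault", "caution", "caution", "caution"]
would_contact = lambda e: e == "drift"
assert count_real_incidents(events, would_contact) == 1
```

The gap between the raw disengagement count (5) and the simulated-contact count (1) is exactly why the reported numbers overstate how often the systems would actually have crashed.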
It’s hard to figure out a good way to get comparable numbers from all teams. The new federal guidelines, while mostly terrible, contain an interesting rule that teams must provide their sensor logs for any incident. This will allow independent parties to compare incidents in a meaningful way, and possibly even run them all in simulator at some level.
It would be worthwhile for every team to be required to report incidents that would have caused accidents. That requires a good simulator, however, and it’s hard for the law to demand this of everybody.
I generally pay very little attention when companies issue a press release about an “alliance.” It’s usually not a lot more than a press release unless there are details on what will actually be built.
The recent announcement that Uber plans to buy some self-driving cars from Daimler/Mercedes is mostly just such an announcement — a future intent, when Mercedes actually builds a full self-driving car, that Uber will buy some. This, in spite of the fact that Uber has its own active self-driving system in development, and that it paid stock worth $760M to purchase freshly-minted startup Otto to accelerate that.
This shows a special advantage that Uber has over other players here. Their own project is very active, but unlike others, it doesn’t cripple Uber if it fails. Uber’s business is selling rides, and it will continue to be. If Uber can’t do it with its own cars, it can buy somebody else’s. Uber does not have the intention to make cars (neither does Google and that’s probably true of most other non-car companies.) There are many companies who will make cars to order for you. But if Google’s self-drive software (and hardware) project fails, they are left with very little. If Uber’s fails, they are still very much in business, but not as much in control of the underlying vehicles. As long as there are multiple suppliers for Uber to choose from, they are good.
One nightmare for the car companies is the reduction in value of their brands. If you summon “UberSelect” (the luxury Uber) you don’t care if it is a Lexus or Mercedes that shows up. As long as it’s a decent luxury car, you are good, because you are not buying the car, you are using it for 20 minutes. Uber is the brand you are trusting — and car companies fear that. I presume one thing that Daimler wants from this announcement is to remind people that they are a leader and may well be the supplier of cars to companies like Uber. But will they be in charge of the relationship? I doubt it.
Lyft should have the same advantage — but it took a $500M investment from GM which strongly pressures it to use whatever solution GM creates. Of course, if GM’s project fails, Lyft still has the freedom to use another, including Mercedes.
A lawsuit from Tesla against former Tesla autopilot team leader Sterling Anderson and former head of Google Chauffeur (now Waymo) Chris Urmson reveals little, other than the two have a company which will get a lot of attention in the space. But that’s enough. Google’s project is the most advanced one in the world. I was there and worked for Chris in its early days. Tesla’s is not necessarily the most advanced technologically — it has no LIDAR development — but it’s way ahead of others in terms of getting out there and deploying to gain experience, which has given it a headstart, especially in camera/radar based systems. The leaders of the two projects together will cause a stir in the auto business.
Earlier I posted my gallery of CES gadgets, and included a photo of the eHang 184 from China, a “personal drone” able, in theory, to carry a person up to 100kg.
Whether the eHang is real or not, some version of the personal automated flying vehicle is coming, and it’s not that far away. When I talk about robocars, I am often asked “what about flying cars?” and there will indeed be competition between them. There are a variety of factors that will affect that competition, and many other social effects not yet much discussed.
The VTOL Multirotor
There are two visions of the flying car. The most common is VTOL — vertical takeoff and landing — something that may have no wheels at all because it’s more a helicopter than a car or airplane. The recent revolution in automation and stability for multirotor helicopters — better known as drones — is making people wonder when we’ll get one able to carry a person. Multirotors almost exclusively use electric motors because you must adjust speed very quickly to get stability and control. You also want the redundancy of multiple motors and power systems, so you can lose a rotor or a battery and still fly.
This creates a problem because electric batteries are heavy. It takes a lot of power to fly this way. Carrying more batteries means more weight — and thus more power needed to carry the batteries. There are diminishing returns, and you can’t get much speed, power or range before the batteries are dead. OK in a 3 kilo drone, not OK in a 150 kilo one.
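The diminishing returns follow directly from momentum theory: ideal hover power grows with the 3/2 power of total mass, so each added kilogram of battery buys less flight time than the last. A rough sketch with assumed numbers (200 Wh/kg cells, a 1.5 m² total rotor disk, 120 kg of airframe plus pilot — all illustrative, and real craft do considerably worse than this ideal bound):

```python
def hover_endurance_min(battery_kg, empty_kg=120.0, disk_area_m2=1.5,
                        spec_energy_wh_per_kg=200.0, rho=1.225, g=9.81):
    """Ideal hover endurance in minutes (momentum theory, no losses).

    Induced hover power: P = sqrt((m*g)**3 / (2 * rho * A)).
    This is an optimistic upper bound on a multirotor's endurance.
    """
    mass = empty_kg + battery_kg
    power_w = ((mass * g) ** 3 / (2 * rho * disk_area_m2)) ** 0.5
    energy_wh = spec_energy_wh_per_kg * battery_kg
    return 60.0 * energy_wh / power_w

# Doubling the battery far less than doubles the endurance.
t30 = hover_endurance_min(30.0)
t60 = hover_endurance_min(60.0)
assert t60 < 2 * t30
```

With these assumptions, 30 kg of battery hovers the craft for roughly 12 ideal minutes and 60 kg for under 19, which is the “diminishing returns” the paragraph describes; at 3 kg drone scale the same physics is far more forgiving.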
Lots of people are experimenting with combining multirotor for takeoff and landing, and traditional “fixed wing” (standard airplane) designs to travel any distance. This is a great deal more efficient, but even so, still a challenge to do with batteries for long distance flight. Other ideas include using liquid fuels in some way. Those include just using a regular liquid fuel motor to run a generator (not very efficient) or combining direct drive of a master propeller with fine-control electric drive of smaller propellers for the dynamic control needed.
Another interesting option is the autogyro, which looks like a helicopter but needs a small runway for takeoff.
The traditional aircraft
Some “flying car” efforts have made airplanes whose wings fold up so they can drive on the road. These have never “taken off” — they usually end up a compromise that is not a very good car or a very good plane. They need airports but you can keep driving from the airport. They are not, for now, autonomous.
Some want to fly most of their miles, and drive just short distances. Some other designs are mostly for driving, but have an ability to “short hop” via parasailing or autogyro flying when desired.
It is insufficient to assert, as you do, that “the product does not remove any of the driver’s responsibilities” and that “there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose.”
The ODI report rules that Tesla properly considered driver distraction risks in its design of the product. It goes even further, noting that after the introduction of Tesla autopilot, Teslas (including those whose drivers monitored it properly, those whose drivers were distracted, and those driven with it off) still had a decently lower accident rate per mile than Teslas before the autopilot. In other words, while the autopilot without supervision is not good enough to drive on its own, the autopilot even with the occasionally lapsed supervision that is known to happen, combined with improved AEB and other ADAS functions, is still overall a safer system than not having the autopilot at all.
This will provide powerful support for companies developing autopilot style systems, and companies designing robocars who wish to use customer supervised driving as a means to build up test miles and verification data. They are not putting their customers at risk as long as they do it as well as Tesla. This is interesting (and the report notes that evaluation of autopilot distraction is not a settled question) because it seems probable that people using the autopilot and ignoring the road to do e-Mail or watch movies are not safer than regular drivers. But the overall collection of distracted and watchful drivers is still a win.
This might change as companies introduce technologies which watch drivers and keep them out of the more dangerous inattentive style of use. As the autopilots get better, it will become more and more tempting, after all.
Tesla stock did not seem to be moved by this report. But it was also not moved by the accident or other investigations — it actually went on a broadly upward course for 2 months following announcement of the fatality.
The ODI’s job is to judge if a vehicle is defective. That is different from saying it’s not perfect. Perfection is not expected, especially from ADAS and similar systems. The discussion about the finer points of whether drivers might over-trust the system are not firmly settled here. That can still be true without the car being defective and failing to perform as designed, or being designed negligently.
I go to CES first to see the cars but it’s also good to see all the latest gadgets. My gallery, with captions you will see at the bottom as you page through them, provides photos and comments on interesting and stupid products and gadgets for this year.
CES always contains an amazing array of “What are they thinking?” products. This year, more than ever, we had more things that were made “smart” and “connected” for little reason one can discern. I was quite disappointed to read various media lists of top gadgets of CES 2017 and not find a single one that was actually exciting. There are a few that will be exciting one day — the clothes folding robot, the human carrying drone — but they are not here yet.
Recently we’ve seen two essays by people I highly respect in the field of AI and robotics. Their points are worth reading, and in spite of my respect, I of course have some differences.
The first essay comes from Andrew Ng, head of AI (and thus the self-driving car project) at Baidu. You will find few who can compete with Andrew when it comes to expertise on AI. (Update: This essay is not recent, but I only came upon it recently.)
In Wired he writes that Self-Driving Cars Won’t Work Until We Change Our Roads—And Attitudes. And the media have read this essay as pushing much harder for changing the roads than what he actually writes. I have declared it to be the “first law of robocars” that you don’t change the infrastructure. You improve your car to match the world you are given, you don’t ask the world to change to help your cars. There are several reasons I promote this rule:
As soon as you depend on a change in the world in order to drive safely, you have vastly limited where you can deploy. You declare that your technology will be, for a very long time, a limited area technology.
You have to depend on, and wait for others to change the world or their attitudes. It’s beyond your control.
When it comes to cities and infrastructure, the pace of change is glacial. When it comes to human behaviour, it can be even worse.
While it may seem that the change to infrastructure is clearer and easier to plan, the reality is almost assuredly the opposite. That’s because the clever teams of developers, armed with the constantly improving technologies driven by Moore’s law, have the ability to solve problems in a way that is much faster than our linear intuitions suggest. Consider measuring traffic by installing tons of sensors, vs. just getting everybody to download Waze. Before Waze, the sensor approach seemed clear, if expensive. But it was wrong.
As noted, Andrew Ng does not actually suggest that much change to the infrastructure. He talks about:
Having road construction crews log changes to the road before they do them
Giving police and others who direct traffic a more reliable way to communicate their commands to cars
Better painting of lane markers
More reliable ways to learn the state of traffic lights
Tools to help humans understand the actions and plans of robocars
The first proposal is one I have also made, because it’s very doable, thanks to computer technology. All it requires at first blush is a smartphone app in the hands of construction crews. Before starting a project, they would know that just as important as laying out cones and signs is opening the app and declaring the start of a project. The phone has a GPS and can offer a selection of precise road locations and log it. Of course, the projects should be logged even before they begin, but because that’s imperfect, smartphone logging is good enough. You could improve this by sticking old smartphones in all the road construction machines (old phones are cheap and there are only so many machines) so that any time a machine stops on a road for very long, it sends a message to a control center. Even emergency construction gets detected this way.
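The record such an app would send could be very simple. Here is a minimal sketch of what the crew’s phone might transmit when a project is logged; the field names and values are illustrative assumptions, not any real agency’s format:

```python
import json
import time

def log_construction_event(lat, lon, crew_id, description):
    """Build the minimal record a crew's phone app might send when a
    project starts. Field names here are illustrative, not a real spec."""
    return json.dumps({
        "lat": round(lat, 6),            # phone GPS fix of the work zone
        "lon": round(lon, 6),
        "crew_id": crew_id,
        "description": description,
        "start_time": int(time.time()),  # when the zone became active
        "status": "active",
    })

# A crew opens the app and logs the zone before laying out cones.
record = log_construction_event(37.4012, -122.0776, "crew-17",
                                "lane closure for repaving")
print(record)
```

The same payload, sent automatically when a machine sits still on a road too long, covers the emergency-construction case.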
Even with all that, cars still need to detect changes to the road (that’s easy with good maps) and cones and machines. Which they can do.
I think the redirection problem is more difficult. Many people redirect traffic, even civilians. However, I would be interested to see Ng’s prediction on how hard it is to get neural-network-based recognizers to understand all the common gestures. Considering that computers are now getting better at reading sign languages, which are much more complex, I am optimistic here. But in any event, there is another solution for the cases where the system can’t understand the advice, namely calling in an operator in a remote control center, which is what Nissan plans to do, and what we do at Starship. Unmanned cars, with no human to help, will just avoid data dead zones. If somehow they get to them, there can be other solutions, which are imperfect but fine when the problem is very rare, such as a way for the traffic manager to speak to the car (after all, spoken language understanding is now close to solved for limited vocabularies).
Here I disagree with Andrew. His statement may be a result of efforts to drive on roads without maps, even though Baidu has good map expertise. Google’s car has a map of the texture of the road. It knows where the cracks and jagged lane markers are. The car actually likes degrading lane markers. It’s perfectly painted straight and smooth roads which confuse it (though only slightly, and not enough to cause a problem.) So no, I think that better line painting is not on the must-do list.
He’s right that seeing lights can be challenging, though the better cars are getting good at it. The simple algorithm is “you don’t go if you don’t confirm green.” That means you won’t run a red, but you could block traffic. If that’s very rare, it’s OK. We can consider infrastructure to solve that, though I’m wary. Fortunately, if the city is controlling its lights with a central computer, you don’t have to alter the traffic light itself (which is hard); you can just query the city, in those rare cases, for when the light will be changing. I think that problem will be solved, but I also think it may well be solved just by better cameras. Good robocars know exactly where all the lights are and where they themselves are, and thus they know exactly what pixels in a video image are from the light, even if the sun is behind it. (Good robocars also know where the sun is and will avoid stopping in a place where there is no light they can see without the sun right behind it.)
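The confirm-green rule is simple enough to state in a few lines. This is a toy sketch of the decision, not any vendor’s actual logic; the confidence threshold is an invented illustration:

```python
def may_proceed(light_state, confidence, threshold=0.99):
    """Conservative rule: go only on a positively confirmed green.
    Any doubt -- glare, occlusion, an unreadable light -- means wait.
    The 0.99 threshold is illustrative, not a real specification."""
    return light_state == "green" and confidence >= threshold

print(may_proceed("green", 0.995))    # True: clearly confirmed green
print(may_proceed("green", 0.80))     # False: glare, not confirmed
print(may_proceed("unknown", 0.99))   # False: can't see the light
```

The failure mode of this rule is exactly the one described above: the car never runs a red, but a light it cannot confirm leaves it stopped, blocking traffic.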
Working with people
How cars interact with people is one of Andrew Ng’s points and the central point of Rodney Brooks’ essay Unexpected Consequences of Self Driving Cars. Already many of the car companies have had fun experimenting with that, putting displays on the outside of cars of various sorts. While cars don’t have the body language and eye contact of human drivers, I don’t predict a problem we can’t solve with good effort.
Brooks’ credentials are also superb, as founder of iRobot (Roomba) and Rethink Robotics (Baxter) as well as many accomplishments as an MIT professor. His essay delves into one of the key questions I have wondered about for some time — how to deal with a world where things do not follow the rules, and where there are lots of implicit and changing rules and interactions. Google discovered the first instance of this when their car got stuck at a 4 way stop by being polite. They had to program the car to assert its right to go in order to handle the stop. Likewise, you need to speed to be a good citizen on many of our roads today.
His key points are as follows:
There is a well worked out dance between pedestrians and cars, that varies greatly among different road types, with give and take, and it’s not suitable for machines yet.
People want to know a driver has seen them before stepping near or certainly in front of a vehicle.
People jaywalk, and even expect cars to stop for them when they do on some streets.
In snowy places, people walk on the street when the sidewalk is not shoveled.
Foot traffic can be so much that timid cars can’t ever get out of sidestreets or driveways. Nice pedestrians often let them out. They will hand signal their willingness to yield or use body language.
Sometimes people just stand at the corner or edge of the road, and you can’t tell if they are standing there or getting ready to cross.
People may set cars to circle rather than park.
People might jump out of their car to do something, leaving it in the middle of the street blocking traffic, where today they would be unwilling to double park.
People might abuse parking spots by having a car “hold” them for quick service when they want to leave an event.
Cars will grab early spots to pick up children at schools.
Brooks starts with one common mistake — he has bought into the “levels” defined by SAE, even claiming them to be well accepted. In fact, many people don’t accept them, especially the most advanced developers, and I outlined recently why there is only one level, namely unmanned operation, and so the levels are useless as a taxonomy. Instead the real taxonomy in the early days will be the difference between mobility on demand services (robotaxi) and self-drive enabled high end luxury cars. Many of his problems involve privately owned cars and selfish behaviour by their owners. Many of those behaviours don’t make sense in a world with robotaxis. I think it’s very likely that the robotaxis come first, and come in large numbers first, while some imagine it’s the other way around.
Brooks is right that there will be unintended consequences, and the technology will be put to uses nobody thought of. People will be greedy and antisocial; that can be assured. Fortunately, however, people will work out solutions, in advance, to anything you can think of or notice just by walking down the street or thinking about issues for a few days. The experienced developers have been thinking about these problems for decades now, and cars like Google’s have driven for 300 human lifetimes of driving, and that number keeps increasing. They note every unusual situation they encounter on every road they can try to drive, and they put it into the simulator if it’s important. They’ve already seen more situations than any one human will encounter on those roads, though they certainly haven’t driven all the types of road in the world. But they will, before they certify as safe for deployment on such roads.
As I noted, only the “level 4” situation is real. Level 5 is an aspirational science-fiction goal, and the others are unsafe. Key to the improved thinking on “levels” is that it is no longer the amount of human supervision needed that makes the difference; it is the types of roads and situations you can handle. All these vehicles will only handle a subset of roads, and that is what everybody plans. If there is a road that is too hard, they just won’t drive it. Fortunately, there are lots of road subsets out there that are very, very useful and make economic sense. For a while, many companies planned only to do highways, which are the simplest road subset of all, except for the speed. A small subset, but everybody agrees it’s valuable.
So the short answer is, solutions will be found to these problems if the roads they occur on are commercially necessary. If they are not necessary, the solutions will be delayed until they can be found, though that’s probably not too long.
As noted above, many people do expect systems to be developed to allow dialogue between robocars and pedestrians or other humans. One useful tool is gaze detection — just as a cheap flash camera causes “red eye” in photos, machines shining infrared light can easily tell if you are looking at them. Eye contact in that direction is detectable. There have been various experiments in sending information in the reverse direction. Some cars have lasers that can paint lines on the road. Others can display text. Some have an LED ribbon surrounding them that shows all the objects and people tracked by the car, so people can understand that they are being perceived. You can also flash a light back directly at people to return their eye contact — I see you and I see that you saw me.
Over time, we’ll develop styles of communication, and they will get standardized. It’s not essential to do that on day one; you just stay on the simpler roads until you know you can handle the others. Private cars will pause and pop out a steering wheel. Services like Uber will send you a human driver in the early days if the car is going somewhere the systems can’t drive, or they might even let you drive part of it. Such incrementalism is the only way it can ever work.
People taking advantage of timidity of robocars
I believe there are solutions to some of the problems laid out. One I have considered is pedestrians and others who take advantage of the naturally conservative and timid nature of a robocar. If people feel they can safely cut off or jaywalk in front of robocars, they will. And the unmanned cars will mostly just accept that, though only about 10% of all cars should be unmanned at any given time. The cars with passengers are another story. Those passengers will be bothered if they are cut off, or forced to brake quickly. They will spill their coffee. And they will fight back.
Citizen-based strict traffic code enforcement
Every time you jump in front of such a car, it will of course have saved the video and other sensor data. It’s always doing that. But the passenger might tell the car, “Please save that recent encounter. E-mail it to the police.” The police will do little with it at first, but in time, especially since there are rich people in these cars, they will throw a face recognizer and licence plate recognizer on the system that gets the videos. They will notice that one person keeps jaywalking right in front of the cars and annoying the passengers. Or the guy who keeps cutting off the cars as though they are not there, because they always brake. They will have video of him doing it 40 times, or 100. And at that point, they will do something. The worst offender will get identified and get an E-mail from the police: “We have 50 videos of you doing this. Here are 50 tickets.” Then the next, and the next, until nobody wants to get to the top of the list.
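The enforcement side of this is little more than a tally over recognizer output. A hypothetical sketch, where each citizen report carries whatever identifier the recognizer extracted (the plate numbers below are made up):

```python
from collections import Counter

# Each entry is the identifier a plate (or face) recognizer attached
# to one citizen-submitted video. All identifiers here are invented.
reports = ["PLT-4821", "PLT-1002", "PLT-4821", "PLT-4821",
           "PLT-7755", "PLT-1002", "PLT-4821"]

tally = Counter(reports)
worst, count = tally.most_common(1)[0]
print(f"{worst}: {count} incidents on file")  # PLT-4821: 4 incidents on file
```

Whoever tops the list gets the bundle of tickets; the deterrent is that nobody wants to be first in that queue.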
This might actually create pressure the other way — a street that belongs only to the cars and excludes the non-car user. A traffic code that is enforced to the letter because every person inconvenienced has an ability to file a complaint trivially. We don’t want that either, but we can control that balance.
I actually look forward to fixing one of the dynamics of jaywalking that doesn’t work. Often, a pedestrian wants to jaywalk while a car is approaching. They want the car to pass at full speed and then walk behind it — everybody is more comfortable behind a car than in front of one. But the driver gets paranoid and stops, and eventually you cross uncomfortably in front, annoyed both at that and at having stopped somebody you didn’t intend to stop. I suspect robocars will handle this dynamic better, predicting when people might actually be on a path to enter their lane, but not slowing down for stopped pedestrians (adults at least), trusting them to manage their crossing. Children are a different matter.
People being selfish with robocars
Brooks wonders about people doing selfish things with their robocars. Here, he mostly talks about privately owned robocars, since most of what he describes would not or could not happen with a robotaxi. There will be some private cars so we want to think about this.
A very common supposition I see here and elsewhere is the idea of a car that circles rather than parking. Today, operating a car is about $20/hour so that’s already completely irrational, and even when robocar operation drops to $8/hour or less, parking is going to be ridiculously cheap and plentiful so that’s not too likely. There could be competition for spots in very busy areas (schools, arenas etc.) which don’t have much space for pick-up and drop-off, and that’s another area where a bit of traffic code could go a long way. Allow facilities to make a rule: “No car may enter unless its passenger is waiting at the pick-up spot” with authority to ticket and evict any car that does otherwise. Over time, such locations will adjust their pick-up spots to the robocar world and become more like Singapore’s airport, which provides amazing taxi throughput with no cab lines by making it all happen in parallel. Of course, cars would wait outside the zone but robocars can easily double and triple park without blocking the cars they sit in the path of. Robocars waiting for passengers at busy locations will be able to purchase waiting spaces for less than the cost of circling, and then serve their customers or owners. If necessary, market prices can be put on the prized close waiting spaces to solve any problems of scarcity.
So when can it happen?
Robocars will come to different places at different times. They will handle different classes of streets at different times. They will handle different types of interactions with pedestrians and other road users at different times. Where you live will dictate when you can use it and how you can use it. Vendors will push at the most lucrative routes to start, then work down. There will be many problems that are difficult at first, and the result will be the early cars just don’t go on those sorts of streets or into those sorts of situations. Human driving, either by the customer or something like an Uber driver, will fill in the gaps.
Long before then, teams will have encountered or thought of just about any situation you’ve seen, and any situation you’ve likely thought of in a short amount of time. They will have programmed every variation of that situation they can imagine into their simulators to see what their car does. They will use this to grow the network of roads the cars handle every day. Even if at the start, it is not a network of use to you, it won’t be too long before it becomes that, at first for some of your rides, and eventually for most or all.
CES has become the big event for major car makers to show off robocar technology. Most of the north hall, and a giant and valuable parking lot next to it, were devoted to car technology and self-driving demos.
Gallery of CES comments
Earlier I posted about many of the pre-CES announcements and it turns out there were not too many extra events during the show. I went to visit many of the booths and demos and prepared some photo galleries. The first is my gallery on cars. In this gallery, each picture has a caption so you need to page through them to see the actual commentary at the bottom under the photo. Just 3 of many of the photos are in this post.
To the left you see BMW’s concept car, which starts to express the idea of an ultimate non-driving machine. Inside you see that the back seat has a bookshelf in it. Chances are you will just use your eReader, but this expresses an important message — that the car of the future will be more like a living, playing or working space than a transportation space.
The main announcement during the show was from Nissan, which outlined their plans and revealed some concept cars you will see in the gallery. The primary demo they showed involved integration of some technology worked on by Nissan’s Silicon Valley lab leader, Maarten Sierhuis, in his prior role at NASA. Nissan is located close to NASA Ames (I myself work at Singularity University on the NASA grounds) and did testing there.
Their demo showed an ability to ask a remote control center to assist a car with a situation it doesn’t understand. When the car sees something it can’t handle, it stops or pulls over, and people in the remote call center can draw a path on their console to tell the car where to go instead. For example, the operator can draw how to get around an obstacle, take a detour, or obey somebody directing traffic. If the same problem happens again, and the path is approved, the next car can use the same path if it remains clear.
I have seen this technology a number of places before, including of course the Mars rovers, and we use something like it at Starship Technologies for our delivery robots. This is the first deployment by a major automaker.
Nissan also committed to deployment in early 2020 as they have before — but now it’s closer.
You can also see Nissan’s more unusual concepts, with tiny sensor pods instead of side-view mirrors, and steering wheels that fold up.
Several startups were present. One is AIMotive, from Hungary. They gave me a demo ride in their test car. They are building a complete software suite, primarily using cameras and radar but also able to use LIDAR. They are working to sell it to automotive OEMs and already work with Volvo on DriveMe. The system uses neural networks for perception, but more traditional coding for path planning and other functions. It wasn’t too fond of Las Vegas roads, because lane markers are not painted there — lanes are divided only with Botts’ dots. But it was still able to drive by finding the edge of the road. They claim they now have 120 engineers working on self-driving systems in Hungary.
You may have seen a lot of press around a dashcam video of a car accident in the Netherlands. It shows a Tesla in AutoPilot hitting the brakes around 1.4 seconds before a red car crashes hard into a black SUV that isn’t visible from the viewpoint of the dashcam. Many press have reported that the Tesla predicted that the two cars would hit, and because of the imminent accident, it hit the brakes to protect its occupants. (The articles most assuredly were not saying the Tesla predicted the accident that never happened had the Tesla failed to brake; they are talking about predicting the dramatic crash shown in the video.)
The accident is brutal but apparently nobody was hurt.
The press speculation is incorrect. It got some fuel because Elon Musk himself retweeted the report linked to, but Tesla has in fact confirmed the alternate and more probable story, which does not involve any prediction of the future accident. In fact, the red car plays little to no role in what took place.
Tesla’s autopilot uses radar as a key sensor. One great thing about radar is that it tells you how fast every radar target is going, as well as how far away it is. Radar for cars doesn’t tell you very accurately where the target is (roughly, it can tell you what lane a target is in). Radar beams bounce off many things, including the road. That means a radar beam can bounce off the road under a car that is in front of you, and then hit a car in front of it, even if you can’t see that car. Because the radar tells you “I see something in your lane 40m ahead going 20mph and something else 30m ahead going 60mph,” you know it’s two different things.
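The separation logic can be sketched in a few lines. This is a toy illustration, not Tesla’s actual algorithm, and the gap thresholds are invented for the example:

```python
def count_targets(returns, range_gap=5.0, speed_gap=2.0):
    """Count distinct objects among same-lane radar returns, given as
    (range_m, speed_mph) pairs. Returns that differ beyond the gaps in
    range or Doppler speed are treated as separate objects. The gap
    values are invented for illustration."""
    targets = []
    for rng, spd in returns:
        for t_rng, t_spd in targets:
            if abs(t_rng - rng) < range_gap and abs(t_spd - spd) < speed_gap:
                break  # close to an existing target: same object
        else:
            targets.append((rng, spd))
    return len(targets)

# The example from the text: 40 m ahead at 20 mph, and 30 m ahead at
# 60 mph. Different range and speed means two distinct vehicles, even
# though only one is visible to the camera.
print(count_targets([(40, 20), (30, 60)]))  # 2
```

Because the Doppler speeds differ so much, the radar can tell the hidden, slower car apart from the visible one in front of it.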
Thursday night I am heading off to CES, and it’s become the main show it seems for announcing robocar news. There’s already a bunch.
BMW says it will deploy a fleet of 40 cars in late 2017
Bumping up the timetables, BMW has declared it will have a fleet of 40 self-driving 7 Series cars, using BMW’s technology combined with MobilEye and Intel. Intel has recently been making a push to catch up to Nvidia as a chip supplier to automakers for self-driving. It has not quite been said what the cars will do, but they will be trying lots of different roads. So far BMW has mostly been developing its own tech. More interesting has been their announcement of plans to sell rides via their DriveNow service. This was spoken of a year ago but not much more has been said.
Intel also bought 15% of “HERE,” the mapping company formerly known as Navteq and later owned by Nokia. Last year, the German automakers banded together to buy HERE from Nokia, and the focus has been on “HD” self-driving maps.
Hyundai, Delphi show off cars
There are demo cars out there from Delphi and a Hyundai Ioniq. Delphi’s car has been working for a while (it’s an Audi SUV) but recently they have also added a bunch of MobilEye sensors to it. Reports about the car are good, and they hope to have it ready by 2019, showing up in 2020 or 2021 cars on dealer lots.
Toyota sticks to concepts
Toyota’s main announcement is the Concept-i meant to show off some UI design ideas. It’s cute but still very much a car, though with all the silly hallmarks of a concept — hidden wheels, strangely opening doors and more.
Quanergy announces manufacturing plans for $250 solid state LIDAR
Quanergy (Note: I am on their advisory board) announced it will begin manufacturing automotive grade $250 solid state LIDARs this year. Perhaps this will stop all the constant articles claiming that LIDAR is super-expensive and that robocars must therefore be super-expensive too. The first model is only a taste of what’s to come in the next couple of years as well.
New Ford Model has sleeker design
Ford has become the US carmaker to watch (in addition to Tesla) with their announcement last year that they don’t plan to sell their robocars, only use them to offer ride service in fleets. They are the first and only carmaker to say this is their exclusive plan. Just prior to CES, Ford showed off a new test model featuring smaller Velodyne pucks and a more deliberate design.
I have personally never understood the desire to design robocars to “look like regular cars.” I strongly believe that, just like the Prius, riders in the early robocars will want them to look distinctive, so they can show off that they are in a car of the future. Ford’s car, based on the Fusion hybrid, is a nice compromise — clearly a robocar with its sensors, but also one of sleek and deliberate design.
Nvidia keeps its push
Nvidia has a new test car they have called BB8. (Do they have to licence that name?) It looks fairly basic, and they show a demo of it taking somebody for a ride with voice control, handling a lot of environments. It’s notable that at the end, the driver has to take over to get to the destination, so it doesn’t have everything, nor would we expect it to. Nvidia is pushing their multi-GPU board as the answer to how to get a lot of computing power to run neural networks in the car.
Announcements are due tomorrow from Nissan and probably others. I’ll report Friday from the show floor. See you there.
These matters are studied both by statisticians, who focus on the science of measurement, particularly of things about groups, and by election theorists, who share that interest but add the study of votes and polls which do not deliberately sample a subset of a population, but instead attempt to capture the will of the entire group. Both are highly concerned with how to deal with the fact that a substantial fraction of the population may not participate.
One way to look at the difference is to consider this: An election is not supposed to be just a measurement. It is that, but more than that it is an action. It is the actual enactment of the will of the voters. While there are government officials who count the votes and report on them, a person is not put into office by those officials. Rather, it is the voters who put the candidate into office through their votes. (In Canada, it’s different. The Queen and her Governor-General technically have the legal power, and they observe how the people voted and invite the winner to form a government in the Queen’s name.)
Because voting is an act, rather than just an expression of opinion, we have come to deal with the non-participators as still acting. By not registering to vote or not showing up, they have still taken an action; they have deferred to the others to select the winner.
We tolerate this, though we don’t like it. Low turnouts reduce confidence in the results, and they also mean that election results can be more easily manipulated through “get out the vote” efforts. On the other hand, we get quite upset when people don’t vote for other reasons outside their own will, particularly if somebody else impeded their ability to vote, or manipulated them into not voting. Both voting and not voting must be acts of the free person.
Election theorists join with statisticians in some ways. All are interested in making sure that the aggregate will that comes from counting the votes most accurately reflects the aggregate will of the voters. We debate the merits of different counting systems. Many feel that multi-candidate, preferential ballots do a much better job than first-past-the-post plurality systems. But in all cases the counting system is simply the means of calculating the voters’ will so it can be enacted.
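The difference between those counting systems is easy to show on a toy election. Here is a minimal sketch (the ballots are made up, and the instant-runoff implementation is deliberately simplified, with no tie-breaking rules):

```python
from collections import Counter

def plurality(ballots):
    """First-past-the-post: count only each voter's first choice."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def instant_runoff(ballots):
    """Preferential ballot: eliminate the weakest candidate each round,
    transferring those ballots to their next choice, until someone
    holds a majority. Simplified: no tie-breaking."""
    ballots = [list(b) for b in ballots]
    while True:
        counts = Counter(b[0] for b in ballots)
        top, votes = counts.most_common(1)[0]
        if votes * 2 > len(ballots):
            return top
        loser = min(counts, key=counts.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# 9 made-up ballots, listed first choice then second choice:
ballots = [("A", "B")] * 4 + [("B", "C")] * 3 + [("C", "B")] * 2
print(plurality(ballots))       # A wins with only 4 of 9 first choices
print(instant_runoff(ballots))  # B wins once C's voters transfer to B
```

Same ballots, two different winners: that is why the choice of counting system is part of election theory rather than a mere implementation detail.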
In the US Presidential elections, in spite of what is written on the ballot, the voters are appointing a slate of members of the electoral college. This is done independently in each state. In the swing states, all is as you would expect. Candidates campaign. Major efforts are made to woo voters and to get voters to come out. Voters go to the polls knowing and expecting that their will shall be done. They expect they might be part of the group which gets to designate the slate of electors.
In the safe states, it’s very different. In these states, who the electors will be is already well established from polls and the historical patterns of the state. The voters will pick the electors, but it’s a foregone conclusion. Nobody campaigns. There are no major efforts to get out the vote. There will be other races on the ballots which will bring out voters, who will vote within the known constraints. A decent chunk of voters will also show up because “this is how we do things,” and together the knowledge that this will happen seals the fate of the state. On top of that, in the safe states, one knows that if things got so far outside the predicted norms as to make the vote actually close, then the election would long ago have gone to the unexpected party, which in that situation would win all the swing states and with them the victory. This is particularly true on the west coast, where the result is almost always decided before the polls close, and would certainly be decided long before that in such a strange situation. If today’s California came close to going Republican, the rest of the USA would also be going so Republican that California’s shift can’t matter.
People know this, and it makes a big difference. A vote in California is technically an action, but only technically; in reality, it can never change the result. It’s only for show. The candidates know it too. Because of that, a lot of people don’t even register, and a lot stay home. The vote in California is not an election, but only a measurement. A survey. All it ever does is change the number printed in the paper.
Statisticians know all about surveys. They can be pretty good at measuring aggregate opinion if done well, but it is hard to do them well. The problem is what we call sampling bias. In an election, not voting is an implicit action. In a survey, not participating is just not participating. When there is nothing to gain or lose from participating or not participating, the motivations are different.
In 2016, the average swing state Presidential turnout was 64.6% of eligible voters. California’s turnout was 56.1%, just under the 56.6% average of the safe states. In Hawai`i, which knows the election is always decided before it votes (pretty much always for Democrats), the turnout was 41.7%. A lot of people don’t show up.
This turns the safe-state votes into something closer to a self-selected survey. Millions are not voting, and those who are voting do so for other reasons than to enact their will. The self-selected survey is the most common class of what is also called the “non-scientific survey.” The name is intended to be derisive. It is easy to jump to false conclusions from a self-selected survey.
It isn’t that simple of course. The vote in safe states is a mix of actual polling and self-selection. As noted, there are people coming to vote on other races. We know how many of those there are. Turnout in off-year elections is around 40%, sometimes worse. And, as we can see, a lot of people show up because there is a Presidential race, in spite of the lack of power in their votes. Some do it from duty. Some from the excitement of a Presidential race. Many do not understand the impotence of their vote, and certainly many do not look at it the way it is described in this article, with a statistician’s eye. So many are voting as though their vote counted. Many have studied the race in detail, as though their vote counted. I can’t even vote and I study it as deeply as any.
But some vote very differently because they know their vote lacks power. Around 9 million don’t vote at all, who would have voted if they were in swing states. Almost surely many millions of those who do vote will do it differently than they might if their vote counted. But there is also no denying that a considerable majority of the voters are treating their vote as just as real, voting just as they would if it could change things. But a considerable majority is not enough. As long as a large group — even if it’s a small minority, even just 5% — are altering or withdrawing their votes, the total loses scientific validity, and has much larger error bars on it.
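The 9 million figure is back-of-envelope arithmetic from the turnout gap. Here is the calculation; the eligible-voter count is an assumed round number chosen only to show the arithmetic, not an official tally — only the turnout rates come from the text:

```python
# Back-of-envelope for the "missing voters" figure.
swing_turnout = 0.646          # average swing-state turnout, 2016
safe_turnout = 0.566           # average safe-state turnout, 2016
safe_eligible = 112_000_000    # ASSUMED eligible voters in safe states

# If safe-state voters had turned out at swing-state rates:
missing = (swing_turnout - safe_turnout) * safe_eligible
print(f"about {missing / 1e6:.1f} million stayed home")  # about 9.0 million
```

The point is not the exact number but the mechanism: an 8-point turnout gap across a hundred million eligible voters is millions of people whose absence is itself a product of the electoral college.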
It is worth noting that by the normal definitions of a popular vote election, it is invalid to add the results of two distinct elections. There is no question that the Presidential elector selections of each state are distinct elections, run by the states. Even on those grounds you can’t add them and treat the sum as a popular vote. Because ballots replace what is actually being elected (slates of electors pledged to the candidates) with the names of the candidates themselves, people forget that these are distinct elections. Thus it becomes necessary to understand how they are not just distinct because they are in different states, but because they operate on different principles as well.
This is why I wrote that, in spite of the fact that it is possible to sum up the votes cast in the 51 different electoral college contests and call it the popular vote, it is nonsensical to do so. You can’t add the totals from people who were voting with the full power of voters in a popular vote election to the totals from people who were participating in a voluntary survey. Aside from the real accuracy problems of the latter class, they are just different things. They can be added on a calculator, but to do so is to announce a misleading number, a meaningless one. You can call it “the popular vote” but it is not like a real popular vote, the kind used in all the other elections of the USA and most of the rest around the world. Calling it the popular vote makes many people — we’ve seen this — think it has a winner and a loser. They think it has meaning. They think it supports or questions the legitimacy of the winner of the electoral college. Since real popular votes are, in our modern democratic world, seen as superior to systems like the electoral college, calling it “the popular vote” implies to many people that it is superior, when in fact it’s meaningless. It would only be superior if it were an actual popular vote election like the others.
The common statistic reported after the US election was that Clinton “won the popular vote” by around 3 million votes over Trump. This has caused great rancour over the role of the electoral college and has provided a sort of safety valve against the shock Democrats (and others) faced over the Trump victory.
I’m here with some concerning analysis, which I offer because it is a mistake on the part of the US left to underestimate the magnitude of Trump’s victory, or to imagine it was only because of a flaw in the system which he gamed better than Clinton.
The problem is that the US does not officially have a thing called “the popular vote.” That exists nowhere in its rules. There is no popular election of the President. Rather, there are 54 elections with popular votes in 51 jurisdictions, which newspaper reporters then sum up into a number they incorrectly describe as “the national popular vote.” Of course, Clinton did win that invalid sum by around 3M votes. But bad statistical practice by the press, though it has created a common convention — for many decades — of calling that number “the popular vote,” does not make it valid. True popular votes involve all voters being free and equal, and we criticise any foreign election that pretends to call itself a popular vote when the voters are not free and equal. A popular vote, by its proper definition, is the vote total in a single election. Not 54 of them. As such, the sum is no more a popular vote total than adding the results of the 2008 and 2012 votes would get you a popular vote for or against Obama.
It’s especially invalid because it’s really summing two fairly different types of results.
True Popular vote totals from “swing” states where both candidates actively campaigned, turnout was higher, and voters expected their votes to count
Low-accuracy popular vote totals from “safe states” which candidates did not contest, and where voters knew their vote would not change the result
Statisticians will tell you these are two very different animals. We probably wish we knew who would have won the popular vote, if there had been a real national popular vote. Because there was no such vote, the hard answer is we don’t know what its result would be. In particular, with a statistically invalid sum like the published national popular vote, it is incorrect to say one party “won” or “lost.” There is no actual contest to win or lose, and while you can pretend that a higher total is winning, it is not a mathematically valid conclusion.
We do know that in the 16 contested states, Trump surpassed Clinton in a simple sum by about 500,000 votes. (As you would expect, since he needed to win the swing states to win the college.) In the uncontested states, where the Presidential choice was closer to a self-selected survey than a vote, a sum of those popular votes puts her about 3.4M votes ahead of Trump. While you can’t add popular votes, each popular vote is a statistic, and you can combine statistics if you follow correct statistical procedures.
There are many factors which will introduce error into the results from non-contested states, making it harder to figure out what the actual popular vote might have been.
Voters knew their votes didn’t matter. Many stayed home; these states had generally lower voter turnout. The states with the lowest turnout (HI, WV, TN, TX, OK, AR, AZ, NM, MS, NY, CA, IN, UT) were generally safe states with large margins. Average turnout in 16 contested states was 65%, in non-contested states 57%.
To get specific, a rough calculation suggests 8 to 9 million more votes would be cast in the non-contested states if they had a 65% turnout. This is a giant disenfranchisement.
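The arithmetic behind that rough calculation can be sketched as follows. The eligible-voter figure here is an illustrative assumption (not official data), chosen only to show how a 57% vs. 65% turnout gap produces millions of missing votes:

```python
# Rough sketch of the turnout-gap estimate. The eligible-voter figure
# is an illustrative assumption, not an official statistic.
SAFE_ELIGIBLE = 110_000_000   # assumed voting-eligible population of non-contested states
SWING_TURNOUT = 0.65          # average turnout in the 16 contested states (from the text)
SAFE_TURNOUT = 0.57           # average turnout in non-contested states (from the text)

missing_votes = SAFE_ELIGIBLE * (SWING_TURNOUT - SAFE_TURNOUT)
print(f"Extra votes at swing-state turnout: {missing_votes / 1e6:.1f} million")
```

Any real estimate would use each state’s actual voting-eligible population, but even this crude version shows the scale of the effect.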
The two candidates had the lowest approval ratings ever. Many Clinton voters were not supporting her, but were out to stop Trump. Trump’s ratings were even lower, so many of his voters were only out to stop Clinton. I suggest that in states where you know your vote will not elect or stop anybody, there is less motivation for nose-holding votes.
As noted, campaigns were not active in these states. In some states, like California, Clinton did campaign, though presumably to raise money rather than votes. Having only one candidate campaign skews things more.
More safe state voters felt comfortable voting for 3rd party choices, which they would have been less likely to do in a swing state. Many of the 4.6M votes for 3rd party candidates in safe states may have gone to major party candidates, though in what direction is unknown.
In some safe states, even the downballot races are predetermined, discouraging voters. In California, the election of Democrats in most down-ballot races was assured; the primary was the real contest. (However, contentious ballot propositions can counter this in some states.)
In the end, though, results from a race that everybody agreed didn’t matter are just a different animal from results in a contested race. You can’t add apples and oranges, or perhaps more aptly, oranges and lemons: different, though not entirely so. You can add them and get a total number of citrus (votes of any kind), but you can’t call it the count of oranges (real votes).
The frequent description of the US vote-total as a popular vote is also at odds with common usage. The thousands of other elections in the USA are actual popular votes, as are the vast majority of elections in free countries. The US national vote sum, and similar sums published in some parliamentary elections, are the rare exceptions where an unofficial and incorrect tally gets called a popular vote.
A century ago in 1916, women could not vote for President in most of the USA — except for Illinois, which recognized women’s right to vote in Presidential elections in 1913. President Wilson did not support suffrage in 1916 but his opponent, Hughes, did, and suffragettes campaigned for Hughes as a result.
Wilson won, but Hughes won Illinois handily, in fact his margin there of 202,000 votes was his highest in any state (and 2nd highest in the land) — in part because the addition of women to the rolls meant Illinois had more voters than any other state. I have to speculate that this margin had to do with women voting for the candidate ready to defend their basic human rights.
Wilson won the college 277 to 254. And he won the so-called popular vote by 600,000 votes. But that “popular vote” in this case consisted of adding the popular vote from states like Illinois where women were human, and other states where they were less than human. Who can defend adding those totals together, cast under such different rules, calling it “the popular vote” and declaring that Wilson “won” the popular vote in 1916?
Today, the difference between California and other states is not so dramatic as disenfranchising an entire sex. But because Californians are told their vote for President doesn’t matter, turnout there was 56%, against an average of 65% in the swing states. If California had that average, that’s 2.3 million more voters. Millions disenfranchised not because of their sex, but because the system says their vote doesn’t matter. California’s “popular vote” is a sham, and not too different a sham from that of men-only New York in 1916 or “Dear Leader of course” North Korea today. Oh sure, they have something they call the popular vote in North Korea, but the result is known in advance and nobody thinks their vote counts. (And yes, they know they could be punished if they put their ballot in the wrong box.)
You could not add the votes of Illinois and New York in 1916 and call it a true popular vote. You can’t add the results of California’s sham popular vote to Florida’s real popular vote and call it a true popular vote. I mean, people do that, but they should not.
Can we figure it out?
All this said, you could attempt to measure what the vote would have been. We may not have enough data, but we could make some estimates. We know that Clinton led Trump by 3.5% in national polls before the election, but we also know that Trump outperformed those polls by 1.5-6% in many contested states. To really do this would require much more careful analysis than you see in this paragraph, which is written only to show one extreme of what’s possible; the true difference is almost surely smaller. A full analysis would require looking at detailed voting and polling patterns, an understanding of what motivates people to stay home or vote differently in safe states vs. swing states, and an understanding of how Trump outperformed his polls so broadly in the contested states. In the other direction, since the 8-9 million missing voters in the safe states are in states that swing Democratic, there are arguments that Clinton’s total could have been even higher. However, even with that analysis we still would not really know.
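A crude way to see why the error bars are so wide: using only the two figures quoted above (Clinton’s 3.5% national poll lead, and the 1.5-6% by which Trump beat his polls in contested states), and assuming, purely for illustration, that the polling error could have applied nationally, her true margin could fall anywhere in this band:

```python
# Back-of-envelope band, using only figures quoted in the text.
# Assumes (for illustration only) the contested-state polling error
# could have applied nationwide.
poll_lead = 3.5               # Clinton's national polling lead, in points
outperformance = (1.5, 6.0)   # range by which Trump beat his polls in contested states

low  = poll_lead - outperformance[1]   # polls off by the high end of the range
high = poll_lead - outperformance[0]   # polls off by the low end of the range
print(f"plausible Clinton margin: {low:+.1f}% to {high:+.1f}%")
```

The band spans both a clear Clinton win and a Trump win, which is exactly the point: the data does not settle the question.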
My intuition is that such a result would show Clinton scoring higher than Trump, but not by 3M votes. And the margin of error would include results where Trump wins that popular vote, though only at the outer edge of the range. Certainly the only hard data, from states that were actually contested, has him winning if extrapolated, but the Democratic party’s dominance in the big uncontested states is very strong. Also not factored in is the effect of voter suppression techniques.
I should note to non-regular readers that I am anti-Trump. At the same time, having been shocked several times by underestimating his support, I write this because this underestimation must stop, and both sides need to come to much better understanding of how people voted for or against them, and why.
A slightly better approach would be to publish vote totals divided between swing and safe states. Because situations differ so much in the safe states, this is still not super accurate, but it’s a lot better. (I built this from an earlier download so numbers may not match final totals exactly.)
|             | Clinton    | Trump      | Johnson   | Stein     | McMullin | Others  |
|-------------|------------|------------|-----------|-----------|----------|---------|
| Swing total | 25,946,624 | 26,423,193 | 1,783,571 | 434,433   | 203,500  | 351,415 |
| Safe total  | 40,582,344 | 37,227,033 | 2,770,706 | 1,031,304 | 435,055  | 468,484 |
It is interesting to note how much better Stein did in the safe states: about 135% more votes. Johnson got about 55% more, Clinton about 56% more, and Trump about 41% more.
So what should the popular vote be?
One might argue that in an ideal democracy, the popular vote would represent the aggregate view of all voters. Some nations make voting mandatory in order to get this. Australia gets 95% turnout using this technique, but Malta, New Zealand and several other countries get turnout around 90% without legal compulsion.
It might even be argued that a truly ideal democracy would not only have everybody vote, but have everybody study the choices to make an informed vote. We don’t get any of these ideals, and so in the USA it has come to be accepted that the popular vote is the vote totals from those who took the time to show up. The low turnout enables both voter suppression efforts and gives extreme value to successful “get out the vote” efforts, since it is far cheaper to convince a weak supporter to show up than to convince an undecided voter to swing your way.
Some election theorists have actually proposed that the best way to do elections would be to use a random sample, sometimes combined with strong incentives for members of this sample to vote, and possibly to also learn before voting. This seems strange to non-mathematicians but actually has strong validity. (In one variant, the selected electors are known weeks in advance and the campaigns and public interest groups focus their attention on “educating” them, in which case the number must be large so that truly personal targeting is not effective.) In a nation with 90% turnout these techniques make elections much cheaper but don’t affect results much. In a country with 60% turnout which switches to 99% turnout from the randomly selected electors, the result becomes a much more accurate measure of voter will than the current system.
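The statistical case for sampling is easy to see: even a modest random sample pins down the result far more tightly than the noise introduced by differential turnout and safe-state behaviour. A sketch of the standard 95% margin of error for a random sample, assuming a roughly 50/50 race (the worst case for sampling error):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Even modest samples nail the result to well under a point.
for n in (10_000, 100_000, 1_000_000):
    print(f"sample of {n:>9,}: ±{margin_of_error(n) * 100:.2f}%")
```

A random sample of a million voters with near-100% turnout would measure the national will to within about a tenth of a point, far better than a 60%-turnout election whose non-voters are not randomly distributed.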
It is also worth noting that the entire popular vote system for President is not in the US constitution, and so alternate systems, including sampling, actually are legally possible if states willed it, though politically unlikely. There are many advantages to sampling: close to 100% turnout, more informed voters, the possible reduction of massive campaign spending and fundraising, and the elimination of voter suppression. Its main disadvantage is that it doesn’t match non-mathematicians’ instincts about how an election should work, plus the added risk of corruption of the random selection.
In order to get a real popular vote, even one where we total the will of the 60% who show up, it is necessary to get rid of the college. The college could be nullified by a pact between California, Texas and two other large Republican safe states. If just those four states agreed to cast all their electors according to a popular vote result, it would be sufficient to make the college match that popular vote. Once it was known that this was the case, all voters would know their vote counted, all candidates would campaign in all states instead of just swing states, and we would have a true popular vote result.
The California DMV got serious in their battle with Uber and revoked the car registrations for Uber’s test vehicles. Uber had declined to register the cars for autonomous testing, using an exemption in that law which I described earlier. The DMV decided to go the next step and pull the more basic licence plate every car has to have if based in California. Uber announced it would take the cars to another state.
While I’m friends with the Uber team, I have not discussed this matter with them, so I can only speculate why it came to this. As noted, Uber was complying with the letter of the law but not the spirit, which the DMV didn’t like. At the same time, the DMV kept pointing out that registering was really not that hard or expensive, so they couldn’t figure out why Uber stuck to its guns. (Of course, Uber has a long history of doing that when it comes to cities trying to impose old-world taxi regulations on them.)
The DMV is right, it’s not hard to register. But with that registration come other burdens, in particular filing regular public reports on distance traveled, interventions and any accidents. Companies doing breakthrough R&D don’t usually work under such regimes, and I am speculating this might have been one of Uber’s big issues. We’ve all seen the tremendous amount of press that Google has gotten over accidents which were clearly not the fault of their system. The question is whether the public’s right to know (or the government’s) about risks to public safety supersedes the developer’s desire to keep their research projects proprietary and secret.
It’s clear that we would not want a developer going out on the roads and having above-average numbers of accidents and keeping it hidden. And it may also be true that we can’t trust the developers to judge the cause of fault, because they could have a bias. (Though on most of the teams I have seen, the bias has been a safety paranoid one, not the other way around.)
Certainly when we let teens start to drive, we don’t have them make a public report of any accidents they have. The police and DMV know, and people who get too many tickets or accidents get demerits and lose licences when it is clear they are a danger to the public. Perhaps a reasonable compromise would have been that all developers report all problems to the DMV, but that those results are not made public immediately. They would be revealed eventually, and immediately if it was determined the system was at fault.
Uber must be somewhat jealous of Tesla. Tesla registered several cars under the DMV system, and last I saw, they sent in their reports saying their cars had driven zero miles. That’s because they are making use of the same exemption that Uber wanted to make use of, and saying that the cars are not currently qualifying as autonomous under the law.
As you can see, the van still has Waymo’s custom 360 degree LIDAR dome on top, and two sensors at the back top corners, plus other forward sensors. The back sensors I would guess to be rear radar — which lets you make lane changes safely. We also see three apparent small LIDARs, one on the front bumper, and the other two on the sides near the windshield pillars with what may be side-view radars.
A bumper LIDAR makes sure you can see what’s right in front of the bumper, an area that the rooftop LIDAR might not see. That’s important for low speed operations and parking, or situations where there might be something surprising right up close. I am reminded of reports from the Navya team that when they deployed their shuttles, teens would try to lie down in front of the shuttle to find out if it would stop for them. Teens will be teens, so you may need a sensor for that.
Side radar is important for cross traffic when trying to do things like making turns at stop signs onto streets with high speed. Google also has longer range LIDAR to help with that.
The minivan is of course the opposite end of the spectrum from the 2-passenger no-steering-wheel 3rd generation prototype. That car tested many ideas for low speed urban taxi operations, and the new vehicle seems aimed at highway travel and group travel (with six or more seats.) One thing people particularly like is that like most minivans these days, it has an automatic sliding door. Somehow that conveys the idea of a robotic taxi even more when it opens the door for you! The step-in-step-out convenience of the minivan does indeed give people a better understanding of the world of frictionless transportation that is coming.
Update: Also announced yesterday was a partnership between Honda and Waymo. It says they will be putting the Waymo self-driving system into Honda cars. While the details in the release are scant, this actually could be a much bigger announcement than the minivans, in which Chrysler’s participation is quite minimal. Waymo has put out the spec for the modified minivan, and Chrysler builds it to their spec, then Waymo installs the tech. A Waymo vehicle sourced from Chrysler. The Honda release suggests something much bigger — a Honda vehicle sourced from, or partnering with Waymo.
There has not been as much press about this Honda announcement but it may be the biggest one.
NPRM for DSRC and V2V
The DoT has finally released their proposed rules requiring all new cars (starting between 2020 and 2022) to come equipped with vehicle-to-vehicle radio units, speaking the DSRC protocol and blabbing their location everywhere they go. Regular readers will know that I think this is a pretty silly idea, even a dangerous one from the standpoint of privacy and security, and that most developers of self-driving cars, rather than saying this is a vital step, describe it as “something we would use if it gets out there, but certainly not essential for our vehicles.”
Everybody should have off-site backup of their files. For most people, the biggest threat is fire, but here in California, the most likely disaster you will encounter is an earthquake. Only a small fraction of houses will burn down, but everybody will experience the big earthquake that is sure to come in the next few decades. Of course, fortunately only a modest number of houses will collapse, but many computers will be knocked off desks or have things fall on them.
To deal with this, I’ve been keeping a copy of my data in my car — encrypted of course. I park in my driveway, so nothing will fall on the car in a quake, and only a very large fire would have risk of spreading to the car, though it’s certainly possible.
The two other options are network backup and truly remote backup. Network backup is great, but doesn’t work for people who have many terabytes of storage. I came back from my latest trip with 300GB of new photos, and that would take a very long time to upload if I wanted network storage. In addition, many terabytes of network storage are somewhat expensive. Truly remote storage is great, but the logistics of visiting it regularly, bringing back disks for update and then taking them back again is too much for household and small business backup. In fact, even being diligent about going down to the car to get the disk out and update it is difficult.
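To put "a very long time" in perspective, here is the rough arithmetic for uploading 300GB at a few home uplink speeds (the speeds are illustrative assumptions, not measurements):

```python
# Hours to upload 300 GB at assumed upstream rates.
SIZE_GB = 300
for mbps in (5, 20, 100):                  # illustrative uplink speeds, Mbit/s
    seconds = SIZE_GB * 8 * 1000 / mbps    # GB -> megabits -> seconds
    print(f"{mbps:>3} Mbit/s: {seconds / 3600:.1f} hours")
```

At a typical 5 Mbit/s residential uplink, that is over five days of continuous uploading for one trip’s photos.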
A possible answer: a wireless backup box stored in the car. Today, there are many low-cost Linux-based NAS boxes, and they mostly run on 12 volts. So you could easily make a box that goes into the car, plugs into power (many cars now have 12v jacks in the trunk or other access to that power), and wakes up every so often to see if it is on the home wifi, triggering a backup sync, ideally at night.
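A minimal sketch of that trigger logic, in Python rather than real NAS firmware. The SSID, rsync source, and paths are all assumptions for illustration; `iwgetid -r` is the standard Linux wireless-tools command for reading the current SSID:

```python
import subprocess
import time
from pathlib import Path

HOME_SSID = "my-home-wifi"               # assumption: your home network name
SYNC_SOURCE = "desktop.local::backup/"   # assumption: rsync daemon on the house machine
SYNC_DEST = "/mnt/backup"                # disk inside the car NAS box
MIN_INTERVAL = 24 * 3600                 # sync at most once per day
STAMP = Path("/var/lib/carbackup/last-sync")

def should_sync(ssid, last_sync, now):
    """Sync only when parked on the home network and a day has passed."""
    return ssid == HOME_SSID and now - last_sync >= MIN_INTERVAL

def current_ssid():
    # `iwgetid -r` prints the current SSID on most Linux wireless stacks.
    out = subprocess.run(["iwgetid", "-r"], capture_output=True, text=True)
    return out.stdout.strip() or None

def main():
    last = STAMP.stat().st_mtime if STAMP.exists() else 0
    if should_sync(current_ssid(), last, time.time()):
        subprocess.run(["rsync", "-a", "--delete", SYNC_SOURCE, SYNC_DEST],
                       check=True)
        STAMP.parent.mkdir(parents=True, exist_ok=True)
        STAMP.touch()

# Run main() from a cron job or systemd timer so the box checks each night.
```

Encrypting the disk in the car (e.g. with LUKS) is essential and left out of the sketch.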