I have often written on the challenge facing existing automakers in the world of robocars. They need to learn to completely switch their way of thinking in a world of mobility on demand, and not all of them will do so. But they face serious challenges even if they are among the lucky ones who fully “get” the robocar revolution, change their DNA and make products to compete with Google and the rest of the non-car companies.
Unfortunately for the car companies, their biggest assets — their brands, their experience, their quality and their car manufacturing capacity — are no longer as valuable as they were.
Their brands are not valuable
Today if you summon a car with a company like Uber, you don’t care about what brand of car it is, as long as it’s decent. Even with the “luxury” variants of Uber, you don’t care which type of luxury car shows up, as long as it meets certain standards. For companies who have most of their value in their nameplate, this is nightmare #1. The taxi service (Uber or otherwise) becomes the brand that is seen and valued by the customer.
When you are buying a car for 5 years at the dealership, you care a lot about the brand, both for what it means, and for what it says about you when you show up driving it. When you buy a car by the ride, you don’t care a lot about the brand, because you are only going to use it for a short time.
Their brands might be tarnished
There will be accidents involving robocars, unfortunately. Those accidents will cost money, but they will also cause problems in public image. The problem is, “Mercedes runs over grandmother” is a headline that will make people less likely to buy any type of Mercedes. As such, Mercedes has plans to market self-driving car service under their Car2Go brand. You may not even know that Car2Go is Daimler, and they might like it that way. “Google car runs over grandmother” is bad news for the Google car project, but is not going to make anybody stop doing web searches with Google. (Except the grandmother…)
The non-car companies don’t have a car brand to tarnish, but they do have famous brands. They can use those brands to attract customers without the same risk. Big car companies have famous brands but may be afraid to use them.
They might just be the contract manufacturer
Companies like Uber, Google, Apple and others don’t plan to manufacture cars. Why would they? There is tons of car manufacturing capacity out there. They can just go to carmakers and say, “here’s a purchase order for 100,000 cars — built to our spec with our logo on them.” It will be very hard to turn down such an order. Still, some companies will be too proud to do this, or too unwilling to sign their own suicide note.
If they don’t accept the order, somebody else will. If nobody in the west does, somebody in China will. China is the world’s #1 car manufacturing country, but the cars are rarely exported to the west. They would love to change that.
A likely model for this is the relationship of Apple and Foxconn. Foxconn makes your iPhone, but many don’t know that. Foxconn makes good money, but Apple makes much more, designing the product and owning the customer. The car companies don’t want to be Foxconn in the world of the future, but the alternative may be to be much smaller.
(BTW, Foxconn has said it is interested in making cars.)
First-rate quality might not be that important
Chinese manufacturers don’t have the quality of the current leaders. But they may not need to. Just as Apple taught Foxconn how to make good iPhones, they might follow the same pattern here. But they don’t need to. That’s because a less reliable robocar is not the same sort of problem an unreliable personal car is. Sure, it should not break down while you are riding in it — but even then the company can quickly send you a replacement to pick you up in just a few minutes. If it breaks down otherwise, it just goes out of service. This costs the fleet manager money, but they saved a lot of money with the lower quality manufacturer. When cars can move on demand to service customers, breakdowns are not the same sort of problem. When your own car breaks down it’s a nightmare, and you will pay a lot to avoid it. For a fleet, it’s just a cost. All cars are down for maintenance some of the time. Cheaper cars will be down more, but if they are cheap enough, it still saves money.
Customer perception of quality is still important. The vehicle must maintain the level of comfort and interior quality the customer has paid for. Safety-related failures are, of course, much less tolerable.
New car designs will be radically different
The robocar of the future will look quite different from the cars of the past. Existing car companies can handle this, but they lose some of the advantage that comes from decades of experience. The future robocars are probably electric and much simpler, with hundreds of parts rather than tens of thousands. It’s a new world, and experience with the old may actually be a disadvantage. Only Nissan and Tesla have lots of electric car experience today, though GM is building its own. Electric platforms are much simpler and ripe for creativity from new players.
While I’m very excited about the coming robocar world, there are still many unsolved problems. One I’ve been thinking about, particularly with my recent continued thinking on transit, is how to provide robotaxi service to the poor, which is to say people without much money and without credit and reputations.
In particular, we want to avoid situations where taxi fleet operators create major barriers to riding by the poor in the form of higher fees, special burdens, or simply not accepting the poor as customers. If you look at services like Uber today, they don’t let you ride unless you have a credit card, though in some cases prepaid debit cards will work.
Today a taxi (or a bus or Uber style vehicle) has a person in it, primarily to drive, but they perform another role — they constrain the behaviour of the rider or riders. They reduce the probability that somebody might trash the vehicle or harass or be violent to another passenger.
Of course, such things happen quite rarely, but that won’t stop operators from asking, “What do we do when it does happen? How can we stop it or get the person who does it to pay for any damage?” And further they will say, “I need a way to know that in the rare event something goes wrong, you can and will pay for it.” They do this in many similar situations. The problem is not that the poor will be judged dangerous or risky. The problem is that they will be judged less accountable for things that might go wrong. Rich people will throw up in the back of cars or damage them as much as the poor, perhaps more; the difference is there is a way to make them pay for it. So while I use the word poor here, I really mean “those it is hard to hold accountable” because there is a strong connection.
As I have outlined in one of my examinations of privacy, a taxi can contain a camera with a physical shutter that is open only between riders. It can do a “before and after” photograph, mostly to spot if you left items behind, but also to spot if you’ve damaged or soiled the vehicle. Then the owner can have the vehicle go for cleaning, and send you the bill.
But they can only send you the bill if they know who you are and have a way to bill you. For the middle class and above, that’s no problem. This is the way things like Uber work — everybody is registered and has a credit card on file. This is not so easy for the poor. Many don’t have credit cards, and more to the point, they can’t show the resources to fix the damage they might do to a car, nor may they have whatever type of reputation is needed so fleet operators will trust them. The actions of a few damn the many.
The middle class don’t even need credit cards. Those of us wishing to retain our privacy could post a bond through a privacy protecting intermediary. The robotaxi company would know me only as “PrivacyProxy 12323423” and I would have an independent relationship with PrivacyProxy Inc. which would accept responsibility for any damage I do to the car, and bill me for it or take money from my bond if I’m truly anonymous.
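A minimal sketch of how such an intermediary might work; all the names here are hypothetical (there is no real PrivacyProxy Inc. or API), and a real system would need far stronger anonymity guarantees:

```python
import uuid

class PrivacyProxy:
    """Vouches for anonymous riders; the fleet sees only a pseudonym."""
    def __init__(self):
        self._bond = {}       # pseudonym -> posted bond, in dollars
        self._identity = {}   # pseudonym -> real identity, never shared

    def enroll(self, identity, bond):
        pseudonym = f"PrivacyProxy {uuid.uuid4().int % 10**8}"
        self._bond[pseudonym] = bond
        self._identity[pseudonym] = identity
        return pseudonym      # the only thing given to the robotaxi company

    def claim(self, pseudonym, damages):
        """The fleet bills the proxy; the proxy pays out of the rider's bond."""
        paid = min(damages, self._bond[pseudonym])
        self._bond[pseudonym] -= paid
        return paid           # fleet is made whole without learning who rode
```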
Options for the poor
Without the proxy, robotaxi operators will want some sort of direct accountability from passengers for any problems they might cause. Even for the middle class, it mostly means being identified, so if damage is found, you can be tracked down and made to pay. The middle class have the ability to pay, and credit. The poor don’t, at least many of them don’t.
People with some level of identity (an address, a job) have ways to be accountable. If the damage rises to the level where refusing to fix it is a crime at some level, fear of the justice system might work, but it’s unlikely the police are going to knock on somebody’s door for throwing up in a car.
In the future, I expect just about everybody of all income levels will have smartphones and plans (though prepaid plans are more common at lower income levels.) Riders could volunteer to be accountable via the phone plan, losing the phone number if they aren’t. Indeed, it’s going to be hard to summon a car without a phone, though it will also be possible using internet terminals, kiosks and borrowing the phones of others.
More expensive rides
A likely solution, seen already in the car rental industry, is to charge extra for insurance for those who can’t prove accountability another way. Car rental company insurance is grossly overpriced, and I never buy it because I have personal insurance and credit cards to cover such issues. Those who don’t often have to pay this higher price.
It’s still a sad reality to imagine the poor having to pay more for rides than the rich do.
An option to mitigate this might be cars aimed at carrying those who are higher risk. These cars might be a bit more able to withstand wear and tear. Their interiors might be more like bus interiors, easily cleaned and harder to damage, rather than luxury leather which will probably be only for the wealthier. To get one, you might have to wait longer. While a middle-class customer ordering a cheap car might be sent a luxury car because that’s what’s spare at the time, it is less likely an untrusted and poor customer would get that.
Before we go too far, I predict the cost of robotaxi rides will get well below $1/mile, heading down to 30 cents/mile. Even with a 30% surcharge, that’s still cheaper than what we have today; in fact it’s cheaper than a bus ticket in many towns, certainly cheaper than an unsubsidized bus ticket, which tends to run $5-$6. Still, my hope for robotaxi service is that it makes good transportation more available to everybody, and having it cost more for the poor is a defect.
In addition, as long as damage levels remain low, as a comment points out, perhaps the added cost on every ride would be small enough that you don’t need to worry about this for poor or rich. (Though having no cost to doing so does mean more spilled food, drink and, sadly, vomit.)
Over time, fortunately, poor riders could develop reputations for treating vehicles well. Build enough reputation and you might have access to the same fleet and prices that the middle class do, or at least much cheaper insurance. Cause a problem and you might lose the reputation. It would be possible to build such a reputation anonymously, though I suspect most people and companies would prefer to tie it to identity, erasing privacy. Anonymous reputations in particular can be sold or stolen which presents an issue. One option is to tie the reputation to a photo, but not a name. When you get in the car, it would confirm you match the photo, but would not immediately know your name. (In the future, though, police and database companies will be able to turn the photo into a name easily enough.)
Poor riders would still have to pay more to start, probably, or suffer the other indignities of the lower class ride. However, a poor rider who develops a sterling reputation might be able to get some of that early surcharge back later. (Not if it’s insurance. You can’t get insurance back if you don’t use it; it doesn’t work that way!)
It could also be possible for the poor to get friends to vouch for them and give them some starter reputation.
Unfortunately, the poor who squander their reputation (or worse, just ride with friends who trash a car) could find themselves unable to travel except at a high cost they can’t afford. It could be like losing your car.
The government will have an interest in making sure the poor are not left out of this mobility revolution. As such, there might be some subsidy program to help people get going, and a safety net for loss of reputation. This of course comes with a cost. Taxes would pay for the insurance to fix cars that are damaged by riders unable to be held accountable.
The alternative, after all, is needing to continue otherwise unprofitable transit services with human drivers just for the sake of these people who can’t get private robocar rides. Transit may continue (though without human drivers) at peak times, but it almost surely vanishes off-peak if not for this.
Recently a reddit user posted this short video of an amazingly lucky driver in Japan who was able to turn his car around just in time to escape the torrent of the tsunami.
The question asked was, how would a robocar deal with this? It turns out there are many answers to this question. For this particular question, as you’ll see by the end, the answer is probably “very well.”
Let’s start with the bad news. On its own, built in a world where few thought about tsunamis, there is a good chance the vehicle would not handle it well. The instinct for most developers is to be conservative and cautious when facing an unknown situation. The most cautious thing is to do nothing, to just stop and perhaps ask for help from a person in the car or a remote center. Usually if you don’t understand the situation, doing something is much riskier than doing nothing. Usually — but clearly not here.
This situation might be viewed as similar to something you might expect a car to have programming for — something is approaching fast towards you. Cars will probably have logic to deal with a car coming the wrong way down their lane, and this looks a bit like that. It’s actually stuff coming in both lanes. We can imagine the car might have logic to attempt to retreat in that situation, though this isn’t going to look too much like anything the sensors have seen before. With 3D sensors, though, it will be clear that something huge is coming fast. And with a map of what the road should look like, you will easily tell the wall of water and debris from what you should be seeing.
The best reason the car might handle this, however, is the very existence of this video, and the posts about it — including this blog post here. The reason is that the developers of robocars, in order to test them, are busy building simulators. In these simulators they are programming every crazy situation they can think of, even impossible situations, just to see what each revision of the car software will do. They are programming every situation that their cars have encountered on the road — every situation that caused their software, or anybody else, to make an error.
In other words, if you can think of it after a little bit of thinking, they probably thought of it too. And if it’s in blog posts and famous news stories, they probably heard about it. Flooding and every kind of strange weather ever reported. The details of every accident from every police report that can be turned into a simulation. Earthquakes. Tornadoes. Hurricanes. Alien invasions. Oncoming tanks. If you can think of it without a major effort, and it seems like it could happen, they will put it in. And so every car will indeed be tested. In fact, the developers will probably have fun with the really strange situations which are so rare that they may not have commercial or safety justification, but still are interesting. Scenes from movies. James Bond car chases. You name it.
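A toy illustration of that regression idea; the simulator, scenario names and policy here are stand-ins I invented, not any real testing framework:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    scenario: str
    safe: bool

def simulate(scenario, policy):
    # Stand-in for a real physics/traffic simulator.
    return Outcome(scenario, safe=policy(scenario) != "freeze")

SCENARIOS = ["wrong_way_driver", "tsunami_wall_of_water", "tornado", "oncoming_tank"]

def regression(policy):
    """Replay every recorded or imagined scenario against a new software build."""
    failures = []
    for scenario in SCENARIOS:
        if not simulate(scenario, policy).safe:
            failures.append(scenario)
    return failures

# A naively cautious policy that just stops in every unknown situation
# fails all of these scenarios, which is exactly the point of the library.
print(regression(lambda scenario: "freeze"))
```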
In this particular case, there is another thing to help with this situation. Tsunamis don’t happen by surprise, not any more. The world, having seen them like this, now has earthquake detection and tsunami warning everywhere robocars are likely to go in the near future. The warnings will be transmitted along the same data stream warning cars about traffic, weather and road conditions. We have maps of the terrain and can even predict what areas are low and which areas cars should head to in the event of a tsunami warning, and they will take routes designed to avoid risk. With superhuman knowledge, they will not panic, and they will do much better than people at taking the route to high ground, so the odds of them confronting the wall of water would be very slim, unless there was no choice. The robocar simply would not have been going down that road the way the Japanese driver was.
Now we get to a final special ability of robocars — they will be just as capable in reverse gear as they are going forward, other than the speed limitations of reverse gear. So while a human reverses timidly, a robocar need not do so. It will be able to pull off the fastest three-point turn you can imagine if it wants to, or even just escape in reverse. Of course, if it needs more speed than reverse offers, it would turn around in the best spot to do so. Stanford has even done a lot of research on drifting, and this will go into simulators too, so cars will probably know how to turn around as fast as a stunt driver if they have to. To top it all off, electric cars may be able to go as fast in reverse as they can going forward. (I should note that not all car designs feature sensors that see the same forward and back, so this may not be true for all vehicles, but all vehicles that can reverse at all need not be timid about it the way people are.)
So for this situation, and anything else we know about, robocars should do a superhuman job. That doesn’t mean there aren’t things nobody ever thought of. But the more videos and stories like this that get recorded, the less and less probable unknown events will be, and thus an unknown event where the software does the wrong thing becomes not impossible, but very low probability.
My recent article on a future vision for public transit drew some ire by those who viewed it as anti-transit. Instead, the article broke with transit orthodoxy by suggesting that smaller vehicles (including cars and single person pods) might produce more efficient transit than big vehicles. Transitophiles love big vehicles for reasons beyond their potential efficiency, so it’s a hard sell.
Let’s look at the factors which determine what vehicle size makes the best transit.
Before the robocar future arrives, vehicle size is partly dominated by the need for drivers. Consider a bus route which could have one 40 person bus every 30 minutes or a 20 person bus every 15 minutes. The smaller vehicles have the same capacity, but they will use a little more energy, a little more road space and cost somewhat more to buy. This leads to the intuition that bigger must be better.
At the same time the smaller vehicles need twice as many drivers. Labour is more than half the operating budget of many transit agencies. Look at the Chicago Transit Authority and you see labour listed as 69% (and much labour is actually in other subcontractor categories) while fuel and electricity are only 7%; capital costs like vehicles are not even included here. Needing twice the drivers dominates the equation.
Riders, of course, would have an easy time deciding: they would love having vehicles every 15 minutes! Indeed they would be very pleased to get a 7 person van every 5 minutes if they could; the difference would be qualitative, not just quantitative, because when you get to that frequency you start thinking about it more like a car. In addition, the 2 small vehicles do about 1/8th the damage to the road as the one large vehicle, as the quick calculation below shows.
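That 1/8th figure falls out of the fourth-power rule of thumb for road wear (noted again in the list below), assuming the 20 person bus puts roughly half the weight on each axle:

```python
# Road damage scales roughly as (axle weight)**4.
def relative_damage(weight_fraction, n_vehicles=1):
    return n_vehicles * weight_fraction ** 4

big_bus     = relative_damage(1.0)                # one 40 person bus
two_smaller = relative_damage(0.5, n_vehicles=2)  # two half-weight 20 person buses
print(two_smaller / big_bus)                      # 0.125, i.e. about 1/8th
```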
Taking the cost of drivers out, what is the optimum size? More to the point, what provides the optimum balance between rider demand (which would love more frequent service in smaller vehicles) and efficiency (which pushes for larger vehicles, up to a point)? In particular, more smaller vehicles does not just have to mean more frequent service on one route; it can also mean more routes. More routes can mean both getting places you could not get to before, and getting there faster because you don’t need as many transfers.
Here’s where big vehicles are better:
When near full, or overfull, they use:
Less energy per passenger-mile
Less road space per passenger
Less vehicle cost (depreciation, maintenance etc.) per passenger
Less frequent service forces people to bunch their travel together with others, allowing the advantages above.
Fewer stops also forces people to bunch together, to live near transit and to walk more.
Here are some of the advantages of more, smaller vehicles:
As noted, road damage scales roughly as the 4th power of vehicle weight per axle.
More frequent and/or ubiquitous service as described above
Less likely to be lightly loaded (smaller vehicle is sent when demand is light.)
When lightly loaded, much more efficient in all factors than large vehicle
While the whole fleet takes more total road space than the large vehicles, each vehicle causes much less obstruction of traffic.
Able to use smaller bus-stops and navigate tighter turns and narrower roads.
Able to park in smaller spaces including many lots for cars (though still taking as much or slightly more total space.)
Stops are sometimes fewer, and take less time (fewer people getting on/off any given vehicle.)
Each vehicle is considerably less expensive.
The big trade-off comes because the load varies. The full 40 person bus is an efficiency and cost win over two full 20 person buses (or 10 full 4 person cars) but not as much of a win as you might imagine. But the real question involves the frequent issue of a half-full 40 person bus vs. a full 20 person bus. In this case, the smaller vehicle is quite a bit more efficient. Even worse is the 1/4 full 40 person bus vs. the half full 20 person bus or 3 4-person cars. Here the winner is probably the cars, and this is important, because the average bus in the USA actually has just under 10 people on it.
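Here is that trade-off in numbers. The per-mile energy figures are illustrative assumptions for the sake of the arithmetic, not measurements; the key fact they encode is that a vehicle’s consumption does not shrink in proportion to its size:

```python
# Relative energy per vehicle-mile (assumed figures); load factor dominates.
ENERGY_PER_MILE = {"bus40": 1.00, "bus20": 0.60, "car4": 0.15}

def per_passenger_mile(vehicle, passengers):
    return ENERGY_PER_MILE[vehicle] / passengers

print(per_passenger_mile("bus40", 40))  # full big bus:     0.025
print(per_passenger_mile("bus20", 20))  # full small bus:   0.030
print(per_passenger_mile("bus40", 10))  # typical US load:  0.100
print(per_passenger_mile("car4", 3))    # 3 people per car: 0.050
```

On these numbers the full big bus wins, but at the loads buses actually run, the small vehicles do.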
The ideal situation would be to send out a fleet of 40 or even 60 person buses at the peak of rush hour, and then put those in garages, and send out small buses during the off-peak times and just cars in the off-off-peak times like the night. Have every vehicle run as close to full as possible and you get your greatest efficiency. This is not an option for a few reasons:
To do that with buses, you must lower frequency to keep them full, and riders will reject that.
Agencies usually can’t afford huge fleets of large vehicles as well as huge fleets of medium vehicles, keeping the large vehicles idle for most of the day. They are better off choosing one fleet and accepting a loss of efficiency.
In the robocar world, they will be able to call upon a large fleet of small vehicles (cars for 1-4 people) at all times, and they won’t need to own them. But the transit companies and agencies still must own the larger (8 to 60 person) vehicles.
In some cities, it may be practical to keep a fleet of large vehicles for use only at rush hour. In fact, that’s what some commuter train lines use, and they are the most efficient transportation lines in the USA. The rush-hour-only commuter trains run full out to the suburbs, spend the night in the suburbs and run full back into town. That’s really efficient. The commuter trains with daytime service are not nearly as good. Train lines that can drop cars off-peak get a win here as well.
How practical it is depends on how long you need the big bus to last. Transit vehicles tend to be robust, heavy and expensive, and they are well maintained to maximize their lifetime. A bus that only works rush hour will last more years than one that works all day. The problem is it may last too many years, to the point that it becomes obsolete or wears out from time rather than just miles. Leaving vehicles idle also means tying up capital for longer, so even if you find a good schedule for depreciation of the vehicles, the cost of money makes it difficult to have two or three different fleets.
So in the end, cities have to choose. Because of the labour cost of drivers, they almost always choose the bigger vehicles. Without that cost, the advantages of the smaller vehicles win out because of the variability of load. If the line regularly runs low-load vehicles, it has chosen a size that is larger than optimal.
This is all general analysis. The next step I would like to see from the transportation research community is to build these models with the actual numbers from real transit systems. For each city, for each route, the optimal size will be different. And of course, the existence of the robocars will change demand, which also changes load. They can change demand down (by being a superior solution) or up (by making it easier to get to the shared vehicle.) They can also replace the big vehicles entirely at off-peak times. That sounds like competition, but it actually can be enabling. One reason transit agencies run their big vehicles all day long (erasing their efficiency) is that riders want assurance they can come in at rush hour and then decide to leave early or late. Thus there has to be off-peak service. If riders can be assured that something else (like a robotic taxi or even an Uber) can get them home inexpensively off-peak, they are more willing to take the transit in.
Indeed, it could make sense for transit agencies to say, “we will have low service after 8pm, but if you can show you rode with us in the morning, we will subsidize a private car for you after hours 10 times a month.” They might actually save money by offering this rather than running a mostly empty bus.
Perhaps the world’s most exciting new technology today is the deep neural network, in particular the convolutional neural networks behind “Deep Learning.” These networks are conquering some of the most well known problems in artificial intelligence and pattern matching, and since their development just a few years ago, milestones in AI have been falling as computer systems that match or surpass human capability have been demonstrated. Playing Go is just the most recent famous example.
This is particularly true in image recognition. Over the past several years, neural network systems have gotten better than humans at problems like recognizing street signs in camera images, and have even beaten radiologists at identifying cancers in medical images.
These networks are having their effect on robocar development. They are allowing significant progress in the use of vision systems for robotics and driving, making that progress much faster than expected. Two years ago, I declared that the time when vision systems would be good enough to build a safe robocar without LIDAR was still fairly far away. That day has not yet arrived, but it is definitely closer, and it’s much harder to say it won’t be soon. At the same time, LIDAR and other sensors are improving and dropping in price. Quanergy (to whom I am an advisor) plans to ship $250 8-line LIDARs this year, and $100 high resolution LIDARs in the next couple of years.
The deep neural networks are a primary tool of MobilEye, the Jerusalem company which makes camera systems and machine-vision ASICs for the ADAS (Advanced Driver Assistance Systems) market. This is the chip used in Tesla’s autopilot; Tesla claims it has done a great deal of its own custom development, while MobilEye claims the important magic sauce is still mostly theirs. NVIDIA has made a big push into the robocar market by promoting their high end GPUs as the supercomputing tool cars will need to run these networks well. The two companies disagree, of course, on whether GPUs or ASICs are the best tool for this — more on that later.
In comes comma.ai
In February, I rode in an experimental car that took this idea to the extreme. The small startup comma.ai, led by iPhone hacker George Hotz, got some press by building, in a short amount of time, an autopilot similar in capability to many others from car companies. In January, I wrote an introduction to their approach, including how they used quick hacking of the car’s network bus to simplify having the computer control the car. They did it with CNNs, and almost entirely with CNNs. Their car feeds the images from a camera into the network, and out from the network come commands to adjust the steering and speed to keep the car in its lane. As such, there is very little traditional code in the system, just the neural network and a bit of control logic.
Here’s a video of the car taking us for a drive:
The network is built instead by training it. They drive the car around, and the car learns from the humans driving it what to do when it sees things in the field of view. To help in this training, they also give the car a LIDAR, which provides an accurate 3D scan of the environment to more absolutely detect the presence of cars and other users of the road. By letting the network know during training that “there is really something there at these coordinates,” the network can learn how to tell the same thing from just the camera images. When it is time to drive, the network does not get the LIDAR data; however, it does produce outputs of where it thinks the other cars are, allowing developers to test how well it is seeing things.
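To make the architecture concrete, here is a highly simplified PyTorch sketch of the general idea (not comma.ai’s actual code; the layer sizes, image dimensions and output heads are my own assumptions):

```python
import torch
import torch.nn as nn

class DrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.control = nn.Linear(48, 2)  # steering angle, speed adjustment
        self.cars = nn.Linear(48, 8)     # coarse positions of nearby cars

    def forward(self, frame):
        h = self.features(frame)
        return self.control(h), self.cars(h)

net = DrivingNet()
control, car_guess = net(torch.randn(1, 3, 160, 320))  # one camera frame
# Training: loss = MSE(control, human_commands) + MSE(car_guess, lidar_targets)
# Driving: only `control` is used; `car_guess` lets developers audit perception.
```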
This approach is both interesting and frightening. It allows the development of a credible autopilot, but at the same time, the developers have minimal information about how it works, and can never truly understand why it is making the decisions it does. If it makes an error, they will generally not know why it made the error, though they can give it more training data until it no longer makes the error. (They can also replay all other scenarios for which they have recorded data to make sure no new errors are made with the new training data.)
For many years, I have been using RAID for my home storage. With RAID (and its cousins) everything is stored redundantly so that if any disk drive fails, you don’t lose your data, and in fact your system doesn’t even go down. This can come at a cost of anywhere from about 25% to 50% of your disk space (but disk is cheap) and it also often increases disk performance. Some years ago I wrote about how disk drives should be sold in form factors designed for easy RAID in every PC, and I still believe that.
RAID comes with a few costs. One of them is that you need to do too much sysadmin to get it working right. The nastiest cost is there are some edge cases where RAID can cause you to lose all your data where you would not have lost it (or all of it) if you had not used RAID. That’s bad — it should never make things worse.
A few years ago I switched to one of the new filesystems which put the RAID-like functionality right into the filesystem, instead of putting that into a layer underneath. I think that’s the right thing, and in fact, fear of layer violations is generally a mistake here. I am using BTRFS. Others use ZFS and a few other players. BTRFS is new, and so its support for RAID-5 (which only costs 25-33% of your space and is fast) is too young, so I use its RAID-1, where everything is just written twice onto two different disks. Unlike traditional RAID, BTRFS will do RAID-1 on more than 2 drives, and they don’t have to be all of equal size. That’s good, though I ran into some problems with the fairly common operation of increasing the size of my storage by replacing my smallest drive with a much larger one.
The long term goal of such systems should be near-trivial sysadmin. The system should handle all drives and partitions thrown at it in a “just works” way. You give it any amount of drives and it figures out the best thing to do, and adapts as you change. You should only need to tell it a few policies, such as how much need you have for reliability and speed and how much space you are willing to pay for it. The systems should never put you at more risk than you ask for, or more risk than you would have had with having just one drive or a set of non-redundant drives. That’s hard, but it is a worthwhile goal.
But I think we could do more, and we could do it in a way that we get better and better storage with less sysadmin.
Multiple drives, but not too many
I think most users will probably stick to 2 drives, and rarely go above 3. The reality is that 4 or more is for servers and heavy users, because each drive takes power and generates heat. However, adding an SSD to the mix is always a good idea, though that’s for speed, not redundancy.
The OS should understand what’s happening and reflect it in the filesystem
The truth is not all files need as much redundancy and speed. The OS can know a lot about that and identify:
Files that are accessed frequently vs. ones not accessed much, or for a long time
Files that are accessed by interactive applications which cause those applications to be IO bound (i.e. slowed by waiting for the disk.)
Files that have been backed up in particular ways, and when.
Your OS should start by storing everything redundantly (RAID 1 or 5) until such time as the disk starts getting close to full. When that happens, it should of course alert you that it is time to upgrade your drives or add another. But it can also offer another option which you can explicitly ask for, namely to reduce the redundancy on files which are rarely accessed, have not been used for a while, and have been backed up.
It turns out, that’s often a lot of the files on a disk. In particular, the thing that uses up most of the disk space for the ordinary user is their collection of photos and videos. Other than the few that get regular access, there is no actual need for RAID level redundancy on these images. If their own drive is lost, there is a backup where you can get them. They aren’t needed for regular system operation.
The systems already know what files belong to the OS, and can keep them redundant, though most home users are not looking for 100% uptime; they really only want 100% data safety.
To do this right, programs need to tell the OS why they are accessing files. Your photo organizer possibly scans your photo collection regularly, but this scan doesn’t make the files crucial to the system. My goal is not to have users designate these things, though that is one option. Ideally the system should figure it out.
The system can also take the most important files, the ones that cause the system to block, and make sure they are both redundantly stored and found on SSD.
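Spelled out as pseudocode, the policy described above might look something like this sketch (the field names and thresholds are invented for illustration; no filesystem exposes exactly this today):

```python
import time
from dataclasses import dataclass

DAY = 86400

@dataclass
class FileInfo:
    path: str
    last_access: float        # epoch seconds
    backed_up: bool
    os_file: bool
    blocks_interactive_io: bool

def desired_storage(f: FileInfo, disk_nearly_full: bool) -> str:
    if f.blocks_interactive_io:
        return "redundant + SSD"   # hot files: safe and fast
    if f.os_file:
        return "redundant"         # keep the system bootable
    idle = time.time() - f.last_access
    if disk_nearly_full and f.backed_up and idle > 90 * DAY:
        return "single copy"       # cold, and recoverable from backup
    return "redundant"
```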
Backup needs to be easy and automatic. When systems boot up, they should offer to do backup for others who are nearby and semi-nearby, and then they should trade backup space. My system should offer space to others, and make use of their space for either general backup (if in the same house/company/LAN) and offsite backup (remote but with good bandwidth.) Of course, ISPs and other providers can also provide this space for money.
The key thing is this should happen with almost no setup by the user. One problem for me is that I can come back from a trip with 50GB of new photos, and they would clog my upstream for remote backup. The system should understand what files have priority, and if the backlog gets too much, request I plug in an external USB drive to offer a backup until the backlog can be cleared. Otherwise I should not have to deal with it. Of course, the backup I offer others does not need RAID redundancy. Instead, I should be queried regularly to prove I still have the backups, and if not, the person I am backing up should seek another place.
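That “prove you still have it” query can be a simple challenge-response. A sketch of one standard approach (not any particular protocol):

```python
import hashlib
import os

def challenge():
    return os.urandom(32)                # fresh random nonce per query

def prove(nonce, backup_bytes):
    # The holder must possess the real bytes to compute this.
    return hashlib.sha256(nonce + backup_bytes).hexdigest()

archive = b"...encrypted backup archive..."
nonce = challenge()
expected = prove(nonce, archive)   # owner computes this locally
answer = prove(nonce, archive)     # in reality, computed by the remote holder
assert answer == expected          # a stale, cached answer cannot pass
```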
Of course all remote backup must be encrypted by me. In fact, all disks should be encrypted, but too much desire for security can cause risk of losing all your data. Systems must understand the reduced threat model of the ordinary user and make sure keys are backed up in enough places that the chances of losing them are nil, even if it increases the chance that the NSA might get the keys. This is actually pretty hard. The typical “What was your pet’s name” pseudo security questions are not strong enough, but going stronger makes it more likely there can be key loss. Proposals such as my friendscrow can work if the system knows your social network. They have the advantage that there is zero UI to escrowing the key, and a lot of work to recover it. This is the ideal model because if there is ZUI on storing it, you are sure it will be stored. Nobody minds extra work if they have lost all the normal paths to getting their key.
Most of our focus these days is on self-driving personal cars. In spite of that focus, the effects on mass transit will also be quite dramatic, in ways far beyond taking the driver out of the bus. Indeed, for various reasons, I believe traditional approaches to mass transit (large vehicles on fixed routes and schedules, sometimes with private right-of-way) will be obsoleted by robocar technology, and that the result will be almost 100% good — transportation that is better, faster, more convenient and even more sustainable. (The latter shocks people, who think that anything with small vehicles is inherently less energy efficient.)
I have a new special article on Robocars.com outlining potential visions for the future of transit, and what they might mean. The vision is a work in progress, but I invite debate.
I frequently see people claim that one effect of robocars is that because we’ll share the cars (when they work as taxis) and most cars stay idle 95% of the time, a lot fewer cars will be made — which is good news for everybody but the car industry. I did some analysis of why that’s not necessarily true, and recent analysis shows the problem to be even more complex than I first laid out.
To summarize, in a world of robotic taxis, just like today’s taxis, they don’t wear out by the year any more, they wear out by the mile (or km.) Taxis in New York last about 5 years and about 250,000 miles, for example. Once cars wear out by the mile, the number of cars you need to build per year is equal to:
Cars built per year = (Total vehicle miles per year) / (Average car lifetime in miles)
As you can see, the simple equation does not involve how many people share the vehicle at all! As long as the car is used enough that the car isn’t junked before it wears out from miles, nothing changes. It’s never that simple, however, and some new factors come into play. The actual model is very complex with a lot of parameters — we don’t know enough to make a good prediction.
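Plugging illustrative numbers into that formula (roughly US-scale travel and a taxi-like lifetime; both figures are rough assumptions):

```python
total_vehicle_miles_per_year = 3.0e12  # roughly US-scale annual travel
avg_car_lifetime_miles = 250_000       # a NYC-taxi-like duty cycle

cars_built_per_year = total_vehicle_miles_per_year / avg_car_lifetime_miles
print(f"{cars_built_per_year / 1e6:.0f} million cars/year")  # 12 million
# Note that passengers per car never appears in the calculation.
```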
People travel more in cars.
It’s likely that the number of miles people want to travel goes up for a variety of reasons. Robocars make car travel much more pleasant and convenient. Some people might decide to live further from work now that they can work, read, socialize or even sleep on the commute. They might make all sorts of trips more often. Outside of rush hour, they might also be more likely to switch from other modes, such as public transit, and even flying. Consider two places about a 5 hour drive apart — today flying is going to take just under 3 hours due to all the hassles we’ve added to flying, even with the improvements robocars make to those hassles. Many might prefer an uninterrupted car ride where they can work, watch videos or sleep.
Vehicles run empty to reposition
Regular taxis have wasted miles between rides. Indeed, a New York taxi has no passenger 38% of the time. Fortunately, robocars will be a lot more efficient than that, since they don’t need to cruise around looking for rides. Research suggests a more modest 10% “empty mile” cost, but this will vary from situation to situation. If you need the robotaxi fleet to constantly run empty in the reverse commute direction, it could get worse. Among those who believe robocars will be more personally owned than used as taxis, we often see a story painted of how a household has a car that takes one person to work, and returns home empty to take the 2nd person, and then returns again to take others on daytime errands. This is possible, but pretty inefficient. I think it’s far more likely that in the long term, such families will just use other taxi services rather than have their car return home to serve another family member.
Cars last longer
The bottom part of the equation is likely to increase, which reduces the number of cars made. Today, cars are engineered for their expected life-cycle — 19 years and 190,000 miles in California, for example. Once you know your car is going to have a high duty cycle, you change how you engineer it. In particular, you combine engineering of parts for your new desired life cycle with specific replacement schedules for things that will wear out sooner. You want to avoid junking a car with lots of life in the engine just because the seats are worn out, so you make it easy to replace the seats, and you have the car bring itself to a service center where that’s fast and easy.
General Motors has purchased “Cruise,” a small self-driving startup in San Francisco. Rumours suggest the price was over one billion dollars. In addition, other rumours have come to me suggesting that at least one other startup has been seeking a new round of funding at that valuation, but did not succeed due to the market downturn.
I gave Cruise some small assistance when they were getting started, and wrote about them when they showed off their first prototype. Since then, Cruise, as expected, moved away from highway autopilot retrofit into making a proper robocar, and their test Leaf has been running around SF with 4 Velodyne LIDARs and other sensors for a while.
Even in my wildest dreams, I did not imagine startup valuations this high, this soon. (Time to get my own startup going.) Let’s consider why:
First, GM, as the world’s 2nd largest car company, is way behind on robocars. They were one of the first companies to announce a highway autopilot (called, ironically, “Super Cruise”) for the 2014 Cadillac. However, they quickly pulled back on that announcement, and for the last few years have continued to delay it, recently announcing it would not even appear in the 2017 car, even though Mercedes, Tesla and several other companies had products like that.
GM’s main academic partner was CMU. They sponsored Boss, the CMU team that won the Darpa Urban Challenge, headed by Chris Urmson (who now leads the Google car project.) Recently, Uber moved into Pittsburgh in a big way and poached many of the top people from CMU for their project. This left GM with very little, a poor position for the world’s 2nd largest car company.
Next, we have Kyle Vogt, founder of Cruise. Kyle was on the founding team for justin.tv, and also for Twitch, which had a billion dollar acquisition — in other words, Kyle is not precisely hurting for money. He has not confirmed this to me, but I suspect when GM showed up at his door, he was not interested in joining a big car company, and his resources meant he was not in any hurry. I then presume GM took that as negotiation and bumped the price to where you would have to be crazy to say no.
GM will let Cruise be independent, at least for now. That’s the only sane path. We’ll see where this goes.
Michael Bloomberg, a contender for an independent run for US President, has announced he will not run — though for a reason that just might be completely wrong. As a famous moderate (having been in both the Republican and Democratic parties) he might just have had a very rare shot at being the first independent to win since forever.
Here’s why, and what would have to happen:
Donald Trump would have to win the Republican nomination. (I suspect he won’t, but it’s certainly possible.)
The independent would have to win enough electoral votes to prevent either the Republican or the Democrat from getting 270.
If nobody has a majority of the electoral college, the House picks the President from the top 3 college winners. The House is Republican, so it seems pretty unlikely it would pick any likely Democratic Party nominee, and the Democrats would know this. Once they did know this, the Democrats would have little choice but to vote for the moderate, since they certainly would not vote for Trump.
Now all it takes is a fairly small number of Republicans to bolt from Trump. Normally they would not betray their own party’s official nominee, but in this case, the party establishment hates Trump, and I think that some of them would take the opportunity to knock him out, and vote for the moderate. If 30 or more join the Democrats and vote for the moderate, he or she becomes President.
It would be different for the Vice President, chosen by the Senate. Trump probably picks a mainstream Republican to mollify the party establishment, and that person wins the Senate vote easily.
To be clear, here the independent can win even if all they do is make a small showing, just strong enough to split off some electors from both other candidates. Winning one big state could be enough, for example, if it was won from the candidate who would otherwise have won.
Reports released reveal that one of Google’s Gen-2 vehicles (the Lexus) had a fender-bender (with a bus) with some responsibility assigned to the system. This is the first crash of this type — all other impacts have been reported as fairly clearly the fault of the other driver.
This crash ties into an upcoming article I will be writing about driving in places where everybody violates the rules. I just landed from a trip to India, which is one of the strongest examples of this sort of road system, far more chaotic than California, but it got me thinking a bit more about the problems.
Google is thinking about them too. Google reports it just recently started experimenting with new behaviours, in this case when making a right turn on a red light off a major street where the right lane is extra wide. In that situation it has become common behaviour for cars to effectively create two lanes out of one, with a straight through group on the left, and right turners hugging the curb. The vehicle code would have there be only one lane, and the first person not turning would block everybody turning right, who would find it quite annoying. (In India, the lane markers are barely suggestions, and drivers — piloting every width of vehicle you can imagine — dynamically form their own patterns as needed.)
As such, Google wanted their car to be a good citizen and hug the right curb when doing a right turn. So they did, but found the way blocked by sandbags on a storm drain. So they had to “merge” back with the traffic in the left side of the lane. They did this when a bus was coming up on the left, and they made the assumption, as many would make, that the bus would yield and slow a bit to let them in. The bus did not, and the Google car hit it, but at very low speed. The Google car could have probably solved this with faster reflexes and a better read of the bus’ intent, and probably will in time, but more interesting is the question of what you expect of other drivers. The law doesn’t imagine this split lane or this “merge,” and of course the law doesn’t require people to slow down to let you in.
But driving in so many cities requires constantly expecting the other guy to slow down and let you in. (In places like Indonesia, the rules actually give the right-of-way to the guy who cuts you off, because you can see him and he can’t easily see you, so it’s your job to slow. Of course, robocars see in 360 degrees, so no car has a better view of the situation.)
While some people like to imagine that important ethical questions for robocars revolve around choosing whom to kill in an accident, that’s actually an extremely rare event. The real ethical issues revolve around this issue of how to drive when driving involves routinely breaking the law — not once in 100 lifetimes, but once every minute. Or once every second, as is the case in India. To solve this problem, we must come up with a resolution, and we must eventually get the law to accept it the same way it accepts it for all the humans out there, who are almost never ticketed for these infractions.
So why is this a good thing? Because Google is starting to work on problems like these, and you need to solve these problems to drive even in orderly places like California. And yes, you are going to have some mistakes, and some dings, on the way there, and that’s a good thing, not a bad thing. Mistakes in negotiating who yields to whom are very unlikely to involve injury, as long as you don’t involve things smaller than cars (such as pedestrians.) Robocars will need to not always yield in a game of chicken or they can’t survive on the roads.
In this case, Google says it learned that big vehicles are much less likely to yield. In addition, it sounds like the vehicle’s confusion over the sandbags probably made the bus driver decide the vehicle was stuck. It’s still unclear to me why the car wasn’t able to abort its merge when it saw the bus was not going to yield, since the description has the car sideswiping the bus, not the other way around.
Nobody wants accidents — and some will play this accident as more than it is — but neither do we want so much caution that we never learn these lessons.
It’s also a good reminder that even Google, though it is the clear leader in the space, still has lots of work to do. A lot of people I talk to imagine that the tech problems have all been solved and all that’s left is getting legal and public acceptance. There is great progress being made, but nobody should expect these cars to be perfect today. That’s why they run with safety drivers, and did even before the law demanded it. This time the safety driver also decided the bus would yield and so let the car try its merge. But expect more of this as time goes forward. Their current record is not as good as a human, though I would be curious what the accident rate is for student drivers overseen by a driving instructor, which is roughly parallel to the safety driver approach. This is Google’s first caused accident in around 1.5M miles.
It’s worth noting that sometimes humans solve this problem by making eye contact, to know if the other car has seen you. Turns out that robots can do that as well, because the human eye flashes brightly in the red and infrared when looking directly at you — the “red eye” effect of small flash cameras. And there are ways that cars could signal to other drivers, “I see you too” but in reality any robocar should always be seeing all other parties on the road, and this would just be a comfort signal. A little harder to read would be gestures which show intent, like nodding, or waving. These can be seen, though not as easily with LIDAR. It’s better not to need them.
I have a big article forthcoming on the future of public transit. I believe that with the robocar (and van) it moves from being scheduled, route-based mass transit to on-demand, ad-hoc route medium and small vehicle transit. That’s in part because of the disturbingly poor economics of current mass transit, especially in the USA. We can do much better.
However, long before that day, there is something else that could be done. Many mass transit systems shut down at night. Demand is low, and that creates a big burden for the “night people” of the world, who are left with taxis and occasional carpooling, or more limited night bus service.
I think transit agencies should make a deal with companies like Uber to operate their carpool services (UberPool and LyftLines) during transit closure hours, and subsidize the rides to bring them down equal to, or closer to a transit ticket. This could also be the case for other seriously off-peak times, like weekends and holidays.
Already the typical transit ticket in the USA is heavily subsidized. The real cost of providing a transit ride is much higher. In the transit-heavy cities, fares pay about 50-60% of operating cost, but in some cities it’s only 15-20%. The US national average is around 33%. And that’s just operating cost; it does not include the capital costs in many cases. One thing that pushes the number the wrong way is operation during off-peak hours on lightly loaded vehicles. So while the average ride may cost $6 to provide, it can be more at night. Already the mobile-summoned carpools are close to that price. (With promotions, they have actually gone below it, though they also subsidize rides to get going.)
There are some big issues. First, not everybody has a smartphone, a data plan or even a phone. You need a method for those without them to summon a ride. You could start with an 800 number so any phone (or the few remaining payphones) could summon a ride. You could also make mini-kiosks by building a protective case and putting a surplus tablet at every subway stop and many bus stops.
Another issue is that these services, particularly the carpool versions, depend on not having anonymous riders. People feel much safer about carpooling with strangers if those strangers can be identified if there is a problem. Transit riding is anonymous, and should be. The solutions to this are challenging. On top of all this, riding in a mobile-hail car is never paid for with cash, and the drivers are not going to accept cash. At the least, this means you would need to provide tickets that people buy (from machines at stations or in advance) which the driver can scan with their phone. So no just deciding to take a ride with cash. Transit cards are another issue, though there is no requirement that they work, because at least at first, this service is meant for hours when the transit was not even running, so it’s OK if it’s an extra cost.
Finally, there is the issue that this is too good. A ride in a private car vs. a late night transit bus, for the price of a bus? People will over-use it, and that would of course make the taxis angry, though there is no reason they could not participate, as they are all going to support mobile-app hail. But the subsidy may be too expensive if people overuse it.
One solution to that is to only allow it to take you between transit stops. Even that’s “too good” in that it may be faster than the transit, and much faster if the trip involved changes, especially changes during limited service times. You could get extreme and only allow it between limited sets of stops, or require 2 rides (for the same price) to simulate having to change lines. This also makes carpooling much easier, as the drivers would mostly end up cruising close to the transit lines. If they do it in vans, it could be quite efficient, in fact.
We probably don’t need to go that far in limiting it, but we could. You could tune the ease and quality of the service so the demand is what you expect, and the subsidy affordable. And the ride companies could actually use this as a way to gain extra revenue. They could offer you a door to door ride with a subsidy for the portion that would have been along the transit line. For example, today you can take Uber to the subway station, ride the subway for $2 and then take Uber from the end station to your destination, and that can be cheaper than just taking the Uber directly. This ride could be offered at some subsidized price and keep up the volume. The taxi companies can either get into the 21st century and play, or not compete.
Last year, I wrote a few posts on the attack on Science Fiction’s Hugo awards, concluding in the end that only human defence can counter human attack. A large fraction of the SF community felt that one could design an algorithm to reduce the effect of collusion, which in 2015 dominated the nomination system. (It probably will dominate it again in 2016.) The system proposed, known as “e Pluribus Hugo” attempted to defeat collusion (or “slates”) by giving each nomination entry less weight when a nomination ballot was doing very well and getting several of its choices onto the final ballot. More details can be found on the blog where the proposal was worked out.
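As I understand the proposal, it works roughly like the sketch below (the authoritative rules are on the linked blog; ties and several edge cases are glossed over here):

```python
from collections import Counter

def eph(ballots, finalists=5):
    """Each ballot's single point is split among its surviving nominees;
    repeatedly eliminate the weaker of the two lowest-point candidates."""
    candidates = {c for ballot in ballots for c in ballot}
    while len(candidates) > finalists:
        points, appearances = Counter(), Counter()
        for ballot in ballots:
            live = [c for c in ballot if c in candidates]
            for c in live:
                points[c] += 1.0 / len(live)
                appearances[c] += 1
        lowest_two = sorted(candidates, key=lambda c: points[c])[:2]
        loser = min(lowest_two, key=lambda c: appearances[c])
        candidates.remove(loser)
    return candidates

# A slate ballot with 5 surviving picks contributes only 1/5 point to each,
# which is why slates no longer sweep, though as described below they can
# still take 3 or 4 of the 5 slots.
```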
The process passed the first round of approval, but does not come into effect unless it is ratified at the 2016 meeting, after which it applies to the 2017 nominations. As such, the 2016 awards will be as vulnerable to the slates as before. However, there are vastly more slate nominators this year — presuming all those who joined last year to support the slates continue to do so.
Recently, my colleague Bruce Schneier was given the opportunity to run the new system on the nomination data from 2015. The final results of that test are not yet published, but a summary was reported today in File 770 and the results are very poor. This is, sadly, what I predicted when I did my own modelling. In my models, I considered some simple strategies a clever slate might apply, but it turns out that these strategies may have been naturally present in the 2015 nominations, and as predicted, the “EPH” system only marginally improved the results. The slates still massively dominated the final ballots, though they no longer swept all 5 slots. I consider the slates taking 3 or 4 slots, with only 1 or 2 non-slate nominees making the cut, to be a failure almost as bad as the sweeps that did happen. In fact, I consider even nomination through collusion to be a failure, though there are obviously degrees of failure. As I predicted, a slate of the size seen in the final Hugo results of 2015 should be able to obtain between 3 and 4 of the 5 slots in most cases. The new test suggests they could do this even with the much smaller slate group they had in the 2015 nominations.
Another proposal — that there be only 4 nominations on each nominating ballot but 6 nominees on the final ballot — improves this. If the slates can take only 3, then this means 3 non-slate nominees probably make the ballot.
An alternative - Make Room, Make Room!
First, let me say I am not a fan of algorithmic fixes to this problem. Changing the rules — which takes 2 years — can only “fight the last war.” You can create a defence against slates, but it may not work against modifications of the slate approach, or other attacks not yet invented.
Nonetheless, it is possible to improve the algorithmic approach to attain the real goal, which is to restore the award as closely as possible to what it was when people nominated independently: to let the voters see the top 5 “natural” nominees, and award the Hugo to the best of them, if one is worthy.
The approach is as follows: When slate voting is present, automatically increase the number of nominees so that 5 non-slate candidates are also on the ballot along with the slates.
To do this, you need a formula which estimates if a winning candidate is probably present due to slate voting. The formula does not have to be simple, and it is OK if it occasionally identifies a non-slate candidate as being from a slate.
Calculate the top 5 nominees by the traditional “approval” style ballot.
If 2 or more pass the “slate test,” which tries to measure whether they appear disproportionately together on too many ballots, then increase the number of nominees until 5 entries do not meet the slate condition.
As a result, if there is a slate of 5, you may see the total pool of nominees increased to 10. If there are no slates, there would be only 5 nominees. (Ties for last place, as always, could increase the number slightly.)
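For concreteness, here is a minimal sketch of that procedure in Python. The slate_test predicate is a hypothetical stand-in for whatever formula is adopted (one candidate formula is discussed below), and tie handling is omitted.

```python
from collections import Counter

def make_room(ballots, slate_test, slots=5):
    """ballots: list of sets of works (pure approval nominating).
    slate_test(work, ballots) -> True if the work looks slate-driven.
    Returns the nominee list, expanded if slating is detected."""
    counts = Counter(w for b in ballots for w in b)
    ranked = [w for w, _ in counts.most_common()]
    top = ranked[:slots]
    # Only expand when 2 or more of the top entries look like a slate.
    if sum(1 for w in top if slate_test(w, ballots)) < 2:
        return top
    nominees, natural = [], 0
    for w in ranked:  # walk down the rankings until 5 non-slate works appear
        nominees.append(w)
        if not slate_test(w, ballots):
            natural += 1
        if natural >= slots:
            break
    return nominees
```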
Let’s consider the advantages of this approach:
While ideally it’s simple, the slate test formula does not need to be understood by the typical voter or nominator. All they need to know is that the nominees listed are the top nominees.
Likewise, there is no strategy in nominating. Your ballot is not reduced in strength if it has multiple winners. It’s pure approval.
If a candidate is falsely identified as passing the slate test — for example a lot of Doctor Who fans all nominate the same episodes — the worst thing that happens is we get a few extra nominees we should not have gotten. Not ideal, but pretty tame as a failure mode.
Likewise, those promoting slates can’t claim their nominations were denied them by a cabal or conspiracy.
All the nominees who would have been nominated in the absence of slate efforts get nominated; nobody’s work is displaced.
Fans can decide for themselves how they want to consider the larger pool of nominees. Based on 2015’s final results (with many “No Awards”), it appears fans wish to judge some works as being on the ballot unfairly, and to discount them. Fans who wish it would have the option of deciding for themselves which nominees matter, and acting as though those were all that was on the ballot.
If it is effective, it gives the slates so little that many of them are likely to just give up. It will be much harder to convince large numbers of supporters to spend money to become members of conventions just so a few writers can get ignored Hugo nominations with asterisks beside them.
It has a few downsides, and a vulnerability.
The increase in the number of nominees (only while under slate attack) will frustrate some, particularly those who feel a duty to read all works before voting.
All the slate candidates get on the ballot, along with all the natural ones. The first is annoying, but it’s hardly a downside compared to having some of the natural ones not make it. A variant could block any work that fits the slate test but scored below 5th, but that introduces a slight (and probably unneeded) bit of bias.
You need a bigger area for nominees at the ceremony, and a bigger party, if they want to show up and be sneered at. The meaning of “Hugo Nominee” is diminished (but not as much as it’s been diminished by recent events.)
As an algorithmic approach it is still vulnerable to some attacks (one detailed below) as well as new attacks not yet thought of.
In particular, if slates are fully coordinated and can distribute their strength, it is necessary to combine this with an EPH style algorithm or they can put 10 or more slate candidates on the ballot.
All algorithmic approaches are vulnerable to a difficult but possible attack by slates. If a slate knows its strength and knows the likely range of the top “natural” nominees, it can in theory choose a number of slots it can safely win, name only that many choices, and divide them up among supporters. Instead of having 240 people cast ballots with the same 3 choices, they can have 3 groups of 80 cast ballots for one choice each. No simple algorithm can detect that or respond to it, including this one. This is a more difficult attack than the current slates can carry off, as they are not that unified. However, if you raise the bar, they may rise to it as well.
All algorithmic approaches are also vulnerable to a less ambitious colluding group, that simply wants to get one work on the ballot by acting together. That can be done with a small group, and no algorithm can stop it. This displaces a natural candidate and wins a nomination, but probably not the award. Scientologists were accused of doing this for L. Ron Hubbard’s work in the past.
The best way to work out the formula would be through study of real data, with and without slates. One candidate would be to take all nominees present on more than 5% of ballots, and pairwise compare them to find out what fraction of the time each pair is found together on ballots, then flag pairs which appear together a great deal more often than independent nomination would predict. How much more often would be learned from analysis of real data. Of course, the slates will know the formula, so it must be hard to defeat even by those who know it. As noted, false positives are not a serious problem if they are uncommon. False negatives are worse, but still better than the alternatives.
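As a sketch of what that candidate formula might look like (not a final proposal), the following compares observed co-occurrence to what independent nominating would predict. The 5% floor and the 3x ratio are placeholder thresholds; as argued above, the real values would have to be tuned on actual nomination data.

```python
from collections import Counter
from itertools import combinations

def suspicious_pairs(ballots, min_share=0.05, ratio=3.0):
    """Flag pairs of popular works that appear together on ballots far
    more often than independent nomination would predict."""
    n = len(ballots)
    counts = Counter(w for b in ballots for w in b)
    # Only consider works on more than min_share of all ballots.
    popular = [w for w, c in counts.items() if c / n > min_share]
    flagged = set()
    for a, b in combinations(popular, 2):
        together = sum(1 for bal in ballots if a in bal and b in bal)
        expected = counts[a] * counts[b] / n  # independence assumption
        if together >= ratio * expected:
            flagged.add((a, b))
    return flagged
```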
So what else?
At the core is the idea of providing voters with information on who the natural nominees would have been, and allowing them to use the STV voting system of the final ballot to enact their will. This was done in 2015, but simply to give No Award in many of the categories — it was necessary to destroy the award in order to save it.
This, I believe, is why every other system (including the WSFS site selection) uses a democratic process, such as write-in, to deal with problems in nominations. Democratic approaches use human judgment, and as such they are a response not just to slates, but to any attack.
I therefore believe a better system is to publish a longer list of nominees — 10 or more — but to publish them sorted according to how many nominations they got. This allows voters to decide what they think the “real top 5” was, and to vote on that if they desire. Because a slate can’t act in secret, this is robust against slates, and even against the “slate of one” described above. Revealing the sort order is a slight compromise, but a far lesser one than accepting that most natural nominees are pushed off the ballot.
The advantages of this approach:
It is not simply a defence against slates; it is a defence against any effort to corrupt the nominations, as long as the effort is detected and fans believe it happened.
It requires no algorithms or judgment by officials. It is entirely democratic.
It is completely fair to all comers, even the slate members.
The downsides are:
As above, there are a lot more nominees, so the meaning of being a nominee changes.
Some fans will feel bound to read/examine more than 5 nominees, which produces extra work on their part.
The extra information (the sort order) was never revealed before, and may have subtle effects on voting strategy. So far, this appears to be pretty minor, but it’s untested. With STV voting, there is about as little strategy as there can be. Some voters might be very slightly more likely to rank a work that sorted low in first place, to bump its chances, but really, they should not do that unless they truly want it to win — in which case it is always right to rank it first.
EPH-style counting may need to be added if slates attain a high level of coordination.
Another surprisingly strong approach would be simply to add a rule saying, “The Hugo Administrators should increase the number of nominees in any category if their considered analysis leaves them convinced that some nominees made the final ballot through means other than the nominations of fans acting independently, adding one slot for each work judged to fail that test, but adding no more than 6 slots.” This has tended to be less popular, in spite of its simplicity and flexibility (it even deals with single-candidate campaigns), because some fans have an intense aversion to any use of human judgment by the Hugo administrators.
Its advantages:
Very simple (for voters at least)
Very robust against any attempt to corrupt the nominations that the admins can detect. So robust that it makes it not worth trying to corrupt the nominations, since that often costs money.
Does not require constant changes to the WSFS constitution to adapt to new strategies, nor give new strategies a 2 year “free shot” before the rules change.
If administrators act incorrectly, the worst they do is just briefly increase the number of nominees in some categories.
If there are no people trying to corrupt the system in a way admins can see, we get the original system we had before, in all its glory and flaws.
The admins get access to data which can’t be released to the public to make their evaluations, so they can be smarter about it.
The downsides:
Clearly a burden for the administrators to do a good job and act fairly
People will criticise and second guess. It may be a good idea to have a post-event release of any methodology so people learn what to do and not do.
There is the risk of admins acting improperly. This is already present of course, but traditionally they have wanted to exercise very little judgment.
I’ve become interested in the merger of virtual reality and telepresence. The goal would be to have VR headsets and telepresence robots able to transmit video to fill them. That’s a tall order. On the robot you would have an array of cameras able to produce a wide field of view — perhaps an entire hemisphere, or of course the full sphere. You want it in high resolution, so this is actually a lot of camera.
The lowest bandwidth approach would be to send just the field of view of the VR glasses in high resolution, or just a small amount more. You would send the rest of the hemisphere in very low resolution. If the user turned their head, you would need to send a signal to the remote to change the viewing box that gets high resolution. As a result, if you turned your head, you would see the new field, but very blurry, and after some amount of time — the round trip time plus the latency of the video codec — you would start seeing your view sharper. Reports on doing this say it’s pretty disconcerting, but more research is needed.
At the next level, you could send a larger region in high-def, at the cost of bandwidth. Then short movements of the head would still be good quality, particularly the most likely movements, which would be side to side movements of the head. It might be more acceptable if looking up or down is blurry, but looking left and right is not.
And of course, you could send the whole hemisphere, allowing most head motions but requiring a great deal of bandwidth. At least by today’s standards — in the future such bandwidth will be readily available.
If you want to look behind you, cameras capturing the full sphere would be best, but it’s probably acceptable to have servos move the camera, and to not send the rear information at all. It takes time to turn your head, and that’s time to send signals to adjust the remote parameters or the camera.
Still, all of this is more bandwidth than most people can get today, especially if we want lifelike resolution — 4K per eye or probably even greater. That’s hundreds of megabits. There are fiber operators selling such bandwidth, and Google Fiber sells it cheap. It does not need to be symmetrical for most applications — more on that later.
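A back-of-envelope calculation shows where “hundreds of megabits” comes from. The frame rate and the compression ratio below are my assumptions, not measured figures.

```python
# Rough bandwidth estimate for lifelike stereo VR video.
width, height = 3840, 2160   # 4K per eye, as discussed above
fps = 90                     # typical VR refresh rate (assumption)
bits_per_pixel = 24          # raw RGB
compression = 100            # assumed low-latency codec ratio

raw = width * height * bits_per_pixel * fps * 2   # both eyes
print(f"raw: {raw / 1e9:.1f} Gbit/s")
print(f"compressed: ~{raw / compression / 1e6:.0f} Mbit/s")
# -> raw: 35.8 Gbit/s, compressed: ~358 Mbit/s
```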
At this point, you might be thinking of the not-very-exciting Bruce Willis movie “Surrogates,” where everybody just lay in bed all day controlling surrogate robots that were better-looking versions of themselves. Those robot bodies passed on not just sight and sound but touch and smell and taste — the works — via a neural interface. That’s science fiction, but a subset could be possible today.
One place you can easily get that bandwidth is within a single building, or perhaps even a town. Within a short distance, it is possible to get very low latency, and in a neighbourhood you can get millisecond latency from the network. Demanding low latency from the video codec means less compression, but that can be tolerated if you have lots of spare megabits to burst when the view moves, which you do.
So who would want to operate a VR robot that’s not that far from them? The disabled, and in particular the bedridden, which includes many seniors at the end of their lives. Such seniors might be trapped in bed, but if they can sit up and turn their heads, they could get a quality VR experience of the home they live in with their family, or the nursing home they move to. With the right data pipes, they could also be in a nursing home but get a quality VR experience of being in the homes of nearby family. They could have multiple robots in houses with stairs, to easily “move” from floor to floor.
What’s interesting is we could build this today, and soon we can build it pretty well.
What do others see?
One problem with using VR headsets for telepresence is that a camera pointed at you sees you wearing a giant headset. That’s of limited use. Highly desired would be software that, using cameras inside the headset looking at the eyes, plus a good captured model of the face, digitally removes the headset in a way that doesn’t look creepy. I believe such software is possible today with the right effort. It’s needed if people want VR-based conferencing with real faces.
One alternative is to instead present an avatar, that doesn’t look fully real, but which offers all the expression of the operator. This is also doable, and Philip Rosedale’s “High Fidelity” business is aimed at just that. In particular, many seniors might be quite pleased at having an avatar that looks like a younger version of themselves, or even just a cleaned up version of their present age.
Another alternative is to use fairly small and light AR glasses. These could be small enough that you don’t mind seeing the other person wearing them, and you can see the direction of their eyes, at most behind a tinted screen. That would provide less of a sense of being there, but might also be a more comfortable experience.
For those who can’t sit up, experiments are needed to see if a system can be made that isn’t nausea-inducing, as I suspect wearing VR that shifts your head angle will be. Anybody tried that?
Of course, the bedridden will be able to use VR for virtual space meetings with family and friends, just as the rest of the world will, still with these problems. You don’t need a robot in that case. But the robot gives you control of what happens on the other end. You can move around the real world, and that makes a big difference.
Such systems might include some basic haptic feedback, allowing things like handshakes or basic feelings of touch, or even a hug. Corny as it sounds, people do interpret being squeezed by an actuator with emotion if it’s triggered by somebody on the other side. You could build the robot to accept a hug (arms around the screen) and activate compressed air pumps to squeeze the operator — this is also readily doable today.
Barring medical advances, many of us may sadly expect to spend some of our last months or years bedridden, or housebound in a wheelchair. Perhaps we will adopt something like this, or something even grander. And of course, even the able-bodied will be keen to see what can be done with VR telepresence.
The highlight and founding program of Singularity University, where I am chair of computing, is our summer program, now known as the Global Solutions Program. 80 students come from all over the world (only a tiny minority will be from the USA) to learn about the hottest rapidly changing technologies, and then join together with others to kickstart projects that have the potential to use those technologies to solve the world’s biggest problems.
This year is the 2nd year of a Google scholarship program, which means the program is free for those who are accepted. About 50 slots go to those scholarships; the other 30 go to winners of national competitions to attend. You can apply both ways. That means you can expect a class of great rising and already-risen stars. I don’t like to exaggerate, but almost everybody who goes through it finds it life-changing.
If you are at a point where you are ready to do something new and big, and you want to understand how technology that keeps changing faster and faster works and how it can change the world and your world, look into it.
Also closing on Feb 19 is our accelerator program for existing or nascent startups. Accepted startups get $100K in seed funding, office space at NASA Research Park and more through our network. You can read about it or apply.
In a recent article, Car and Driver magazine compares 4 highway autopilot systems: those from Tesla, Mercedes, BMW and Infiniti. They test on a variety of roads, and spoiler: the Tesla wins by a good margin in several categories.
It’s a pretty interesting comparison, and a nicely detailed article. They drove a variety of roads, though the reality is that none of these autopilots are much use off the highway, and they are not intended to be as yet. Each system will perform differently on different roads. People report a much better score for the Tesla on Highway 280, which is the highway closest to Tesla HQ.
Still, it should wake up people who want to compare these systems with Google’s. Google reports needing an intervention to prevent an accident every 70,000 miles (or a software anomaly every 5,300 miles), versus an intervention every 2 miles on the Tesla, and twice a mile on the Infiniti, on average.
Other news notes:
Google is expanding testing to Kirkland, Washington — hoping for some heavy rain, among other things.
The California DMV hearings were contentious. You can hear a brief radio call-in debate with myself and one of the few people in favour of the regulations at KPCC’s “AirTalk”. Google threatened that if the regs are passed as written, they will plan to first deploy outside of California, and they probably mean it.
A small autonomous shuttle bus is doing test runs in the Netherlands, joining several other projects of this sort.
Porsche has come out against self-driving. Who would have thought it?
Baidu and Jaguar Land Rover are both upping their game. While you probably won’t automate off-road vehicles any time soon, having one that takes you to the countryside, where you then take the wheel, can be a nice idea.
In Greenwich, the self-driving shuttle pilot will use vehicles based on the Ultra PRT pods from Heathrow. Ultra’s pods have always been wheeled cars, but they needed a dedicated track. Today, they can be modified not to need one.
Steve Zadesky, supposedly the lead of Apple’s unconfirmed project Titan, has left Apple. Rumours suggest a culture issue. Hmm.
The Isle of Man is tiny but is its own country, and it is giving serious consideration to being a robocar pilot location. Last year I had some talks with one of the Channel Islands on the same topic. There are advantages to having your own country.
I recently read a report of a plan for a new type of intersection being developed in Malaysia, and I felt it had some interesting applications for robocars.
The idea behind the intersection is that you have a traditional intersection, but dig, in one or both directions, a special underpass which is both shallow and narrow. One would typically imagine this underpass as being 2 vehicles wide, in the center of the road, though other options are possible. The underpass might be very shallow, perhaps just 4 to 5 feet high.
The underpass is available only to vehicles which fit, which is to say ordinary-height passenger cars, or even just ordinary-height half-width vehicles. Big vehicles such as SUVs, vans and trucks would not use the underpass, and would instead use the at-grade intersection, where you would have traffic signals or stop signs.
Why is this such a good idea? It’s vastly cheaper to make such an underpass. Because it’s so shallow, it is cheap to dig and shore up the walls. You can start the downramp much closer to the intersection because you don’t need to go so far down. It’s a tiny fraction of the cost of a regular overpass or underpass which requires lots of space to go up and down, and must be high enough for big trucks to pass underneath. Not so here, as trucks never go under it.
The downramp could begin a very short distance from the intersection, or it could begin further out to allow for a longer tunnel, in space now dedicated to the left-turn lanes. (Or the right-turn lanes, if the tunnels are on the outside rather than the center of the road.)
The center has the advantage of needing only one trench for both directions, using the left-turn lane’s space for the ramp. The downside is that you have a physical tunnel entrance, with protective bollards, in the middle of a road, which may present some risk — though there are many places with tunnel entrances in the middle of roads, just full-sized ones. Indeed we have intersections like this in full-sized form, including on Geary St. in San Francisco. The alternative on the edges requires two trenches and puts the obstacles to the side, mixing straight-through underpass traffic with right-turning traffic.
Cars small enough to use the tunnels would get a transponder to signal their ability, possibly to raise a gate. In addition, a camera system would detect any too-large vehicle trying to enter the tunnel and do whatever it can to stop it. In the end, a too-large vehicle would end up hitting soft barriers if it failed to stop or divert. (Most parking lots today have hanging barriers to let vehicles know they won’t fit.)
Now the small, light vehicles, such as the one-person robocars, could bypass the traffic lights if they are red. They might get an “express” lane that is just for them which goes through these underpasses so it’s a smooth ride all along the road, other than the ups and downs.
Robocars would have a better time knowing where they fit and letting the intersection know they fit. More to the point, their ability to drive “on rails” would allow a wider robocar to go down a narrower tunnel, keeping a tiny margin that a human driver could never handle. Human driven vehicles would need to be narrower if they used these tunnels.
This would strongly encourage the use of small, lower-height vehicles, which are also very energy efficient. Really strongly — who would want to drive a big SUV that has to stop at traffic lights when you can go nonstop in a small pod? (Of course, you would probably still use the light when making a turn.) This in turn would cause a drop in vehicle size and congestion, and increase overall road capacity, beyond the gain from eliminating stops for a large fraction of vehicles.
If you want to get extreme, you could even have just a one-lane tunnel, if it’s all robocars. The simplest approach would be to have the express lane (with tunnels) go only in the commute direction during rush hour. Off peak, the robocars could pace their trips in pulses, alternating the direction they move through the underpass. On a north-south road, you could imagine, during the red light, 15 cars going northbound, then 15 southbound, back and forth until the light turns green and you allocate the tunnel to the most popular direction. Humans could not obey this easily, but robots could.
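A toy scheduler shows how simple the pulse logic could be. The pulse size of 15 comes from the example above; the queues and the red-light window are abstractions.

```python
from collections import deque

def red_phase_pulses(northbound: deque, southbound: deque, pulse=15):
    """While the light is red, alternate pulses of up to `pulse` cars
    per direction through the single-lane underpass."""
    schedule, direction = [], "N"
    while northbound or southbound:
        queue = northbound if direction == "N" else southbound
        batch = [queue.popleft() for _ in range(min(pulse, len(queue)))]
        if batch:
            schedule.append((direction, batch))
        direction = "S" if direction == "N" else "N"
    return schedule

# When the light turns green, the tunnel would simply be allocated
# to whichever direction has the longer queue.
```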
This works best when one of the intersecting roads is bigger than the other, since it’s harder to give both routes an underpass. You could have one take a deeper underpass — at 10’ deep under a 5’ deep one, it’s still not nearly as deep as a full road underpass. Or, with all robocars, you could have the robots alternate through the underground intersection at full speed under computer control. People have built computer models of this “reservation” style intersection for many years, but they never could solve the problem that not every car in an intersection is a trustable robocar, and as such, you can never build such an intersection on the surface. If all cars in the tunnels are robocars, an at-grade crossing inside the underpass could easily allow traffic to flow on both routes, in both directions, with proper timing. Since you would not see the other vehicles coming, it might not even be as scary.
I think these underpasses would pay for themselves in the increase in road efficiency they would generate, but if not, you could also require a toll to use them. I think a lot of people would pay a modest toll to have no red lights on their trip. Since all you need do is dig a shallow trench, shore up the walls, and cover it with metal plates or similar, it’s a completely different scale of problem from a real underpass. Without too much money, every major road could become a non-stop robocar road.
You can, of course, create more capacity by building full elevated guideways only for use by small, light vehicles. These are again, much cheaper to build than full roads that can handle heavy trucks, and they take up only pillar space so they can be run down the center of many roads. They still need to be up high enough for big vehicles to go under them. Aside from the cost, the big issue is how they change the built environment, blocking out the sun and putting vehicles running in front of the 2nd or 3rd floor of buildings and houses. This is like a PRT plan but you only need to build these in the most congested zones.
I’m doing a lot of flying these days for international speaking and consulting, and I try whenever possible to have 2 or more clients when I fly overseas, since the trips and time-changes can be draining.
By far my favourite flight search tool is Google flight search. That’s because it’s an order of magnitude faster than most of the other tools, and while it lacks some features I would like, once you have speed, there is no substitute for it. I also like routehappy when I am being particular about seats, though it doesn’t cover all airlines, which makes it useless for primary search.
To save money, however, what I really need is a tool that can get smart about the various arcane prices airlines put on flights, which can vary tremendously. In particular, there are the situations where airlines have decided not to simply sell one-way fares at around half the price of return trips. This is almost universally true between the USA and Europe, and on some domestic routes, and less true on travel involving Asia. It is quite common for one-way trips to cost the same as round trips, and sometimes, bizarrely, even more. In the case of some KLM flights, I have found a one-way costing double the price of a round trip. The Dutch know this and commonly book returns on KLM and don’t fly the return leg. There are stories of airlines punishing people who do that, but they are rare. (The airlines are much more upset about “hidden city” booking, where people notice a flight to X connecting through Y is much cheaper than the direct flight to Y, so they book to X and just walk off the plane at Y.)
Throwing away the return leg doesn’t stop the trip from costing as much as a return. Your goal is to pay a fairer price, and that usually means making sure that all your flights (or certainly your transatlantic flights) are ticketed by the same airline. That works some of the time, but not always. The best airline to fly out may be a terrible airline to fly back on. You may have to take a flight with a painful time and routing one way to get the schedule you need the other way. Of course, this is the supposed purpose of the pricing — to make you buy both directions from the same airline. But it’s often a false victory; I suspect it loses the airline almost as much as it wins, and it pisses off customers.
Trying all the permutations
Airlines have tons of hidden fare rules that jack up or seriously reduce fares involving certain cities. If you are going to these cities, you want to use them.
If we consider a complex trip that goes A -> B -> C -> D -> E -> A (4 stops) you can put that into most of the flight search engines as a “multi city” trip. You’ll sometimes get back a great answer, but usually you get back a ridiculous one. That’s because the engine just shops that out to all the airlines, which means you only get airlines that sell all 5 routes. And if the itinerary is far flung, there may be no airlines that sell them all at a good price, or with a good routing. (Of course, rarely does any one airline fly all the routes, but they all have tons of partners they can build tickets from.)
So it turns out the best way to fly this trip means combining one-ways (where they are fairly priced) and open jaws. I have found, for example, that you can often save a huge amount of money by buying something like “A->B, D->E” from one airline and “B->C, E->A” from another and “C->D” one-way from a third. Bizarrely, adding the right extra legs to certain itineraries triggers serious price drops. This is particularly true when you involve cities with lots of competition (like New York) or inherently low prices (like India).
So what I want is a flight search engine that will try all the combinations, as in the sketch below. There are engines that will check if sets of one-ways will do the trick (Kayak calls it a hacker fare) but that’s not enough. Price all 5 legs together, then the sets of 4 with a single one-way, then the sets of 3 with the different sets of 2, and so on. You want to combine the price search with a flight quality search too, so that you fly on shorter, better flights.
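Here is a minimal sketch of that exhaustive search in Python. price_group() is a hypothetical hook into a fare engine that returns the cost of ticketing a group of legs together; real fare construction is of course far messier, and the number of groupings grows quickly with trip length (52 ways for 5 legs).

```python
from typing import Callable, List

def partitions(legs: List[str]):
    """Yield every way to split the legs into separately ticketed groups."""
    if not legs:
        yield []
        return
    first, rest = legs[0], legs[1:]
    for part in partitions(rest):
        yield [[first]] + part                      # first on its own ticket
        for i in range(len(part)):                  # or merged into a group
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def cheapest(legs: List[str], price_group: Callable[[List[str]], float]):
    """Try every grouping and return (total_price, grouping)."""
    return min((sum(map(price_group, p)), p) for p in partitions(legs))

# Example with an invented pricing function:
# cheapest(["A->B", "B->C", "C->D", "D->E", "E->A"], my_price_fn)
```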
When I do this as a human, I do it with some knowledge of the geography. For example, if you have a short leg which is only flown nonstop by one airline, it’s pretty obvious you want to price that out independently from the other flights, because if your ticket comes from an airline that doesn’t partner with the nonstop airline, they will put you on a ridiculous connection instead of a cheap one-hour flight.
In addition, there is another advantage to breaking up a flight into smaller groupings. It gives you more ability to change the flights or even to skip them. In many cases, to avoid people playing tricks, airlines will cancel the rest of an itinerary if you don’t show up for an early leg, often with no refund. Once, when a change in plans put me in Copenhagen instead of Bergen, Norway the night before my planned flight from Bergen back to San Francisco (via Copenhagen), SAS insisted I fly to Bergen just so I could turn around and get on the flight back to Copenhagen for my connection.
Round the world
This gets worse when you do a multi-leg trip, and worse still a “round the world” trip involving Asia, Europe and the Americas. In the latter case, sometimes your best course is the special around-the-world tickets offered by the 3 big alliances. These tickets cost around $10,000 in business class, around $4K in coach. For certain types of trips they are the clear winning choice. They are flexible — you can book them as little as 3 days in advance, and you can change your flights, even the cities, for free or low cost. They are refundable with a small penalty! You can add side trips for personal travel at little to no extra cost, and you can go to obscure airports that are expensive to fly to for the same price. They have a small number of downsides:
They can cost more than many directly booked trips. If your client is paying, it may not be fair to charge them $10K for something you could book for $7K. Though you can always eat the extra cost if you are doing side-trips as it can easily be worth it.
You are limited to one alliance only, though most of them have several airlines to fly you on the route.
They draw from a more limited inventory when flying in business class, so quite often, particularly if booking late or changing your plans, you may find the flight you want is not available in the class you paid for.
Of course, they have their RTW restrictions — you must cross each ocean exactly once, along with a few other rules. Usually not a problem, but sometimes.
So if you ever see that your complex trip is adding up to a high cost, look into these. OneWorld also has some subset trips that don’t require a Pacific crossing.
Smart travel agents
While a computer should be able to do all this, perhaps there are still members of the dying profession of travel agents who can do a decent job of it. Let me know if you know of some. In the past there were ticket consolidators, who bought up buckets of tickets and then had the power to sell them at reasonable one-way prices. This can be good, though sometimes it means being a 2nd-class passenger: not getting loyalty miles and not being able to deal directly with the airline for service.
In 2010, I proposed the idea of planes with no landing gear which land on robotic platforms. The spring-loaded platforms are pulled by cables and so can accelerate and turn at multiple gees, so that almost no matter what the plane does, it can’t miss the platform, and it can even hit hard with safety.
Today I learned there is a European research project called Gabriel with very similar ideas. In their plan, the plane has landing pillars which insert into the platform, rather than wheels. This requires retractable pillars but not the weight of the wheels. The platform runs on a maglev track but can tilt and rotate slightly to match the plane as it lands or takes off.
Overall I still prefer my plan — and I have added some refinements in the intervening years.
I am not quite sure of the value of maglev, which is quite expensive. Cables can provide high acceleration quite well.
The pillars still need a complex mechanism (which can fail) though they make a very solid connection — if you can place them just right.
Their platform tilts up — this may mean it can provide power longer which could be useful. It also allows easier release of pillars.
My approach allowed, in theory, the ability to land in any direction, eliminating crosswinds. Gabriel uses a linear track.
I don’t think there is much need for communications between the aircraft and the platform. I can’t see much the platform can’t figure out — it can easily track the aircraft with its cameras and position itself. There are a few things that could be communicated, but why not have it work even if the communications are out — which could happen.
My goal was to have a super short runway, taking off and landing with high acceleration.
My aim was to handle small aircraft; Gabriel seems aimed at larger ones. Admittedly, larger ones may be more tolerant of landing only at prepared airports.
One refinement I have added involves the hard question of what to do if you lose power on takeoff. This is the scariest thing in flying, and you must be able to recover. You could have a longer takeoff runway, so that there is enough space to slow down again if the aircraft loses power just before being released.
An alternative, as suggested by Gregg Maryniak, is to have a “catch” airfield downrange from the main airfield. In this case, if you lost power, the system could keep accelerating you and even release you, with enough energy that you could climb over the intervening space and then glide to a landing on an emergency catch platform — which would grab you no matter what, and let you land hard. The intervening land could be farmland or any sort of land use willing to be at the end of an airport, but it need not be airstrip. The downside is that you must take off along a vector which lets you reach the catch robot with no power, so you may have to deal with crosswinds. You could have more than one catch robot, allowing different takeoff vectors; it’s still vastly less land than a typical airport requires, with most of the land finding other uses. Indeed it might be possible to have a small set of catch robots arrayed around the takeoff airstrip, allowing takeoff in almost any direction.
The emergency catch robots, being only for emergencies, might stop you faster than an ordinary landing, and thus require less land. For example, if you can take 20 m/s² of deceleration (about 2g), you can stop from 40 m/s in just 40 meters, meaning the emergency catch strip could be very small, an insignificant amount of land. At such a small size, it’s easy to imagine an array of pads around the main takeoff zone. Admittedly it’s a hard landing, but it would be a rare exception. Better be belted in on takeoff, with everything stowed in the back.
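The 40-meter figure is just the standard constant-deceleration stopping distance, using the numbers above (40 m/s is roughly 78 knots, a plausible approach speed for a light aircraft):

```latex
d = \frac{v^2}{2a} = \frac{(40\,\mathrm{m/s})^2}{2 \times 20\,\mathrm{m/s^2}} = 40\,\mathrm{m}
```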
The Gabriel project seems concluded for now, but it will be interesting to see if anything develops further.
I’ve been electric car shopping, but one thing has stood out as a big concern. Many electric cars are depreciating fast, and it may get even faster. I think part of this is because electric cars are a bit more like electronic devices than they are like cars. Electric cars will see major innovation in the next few years, as well as rapid improvement in the price/performance of their batteries. This spells doom for their resale value. It’s akin to cell phones — your 2-year-old cell phone still functions perfectly, but you dispose of it for a new one because of the pace of innovation. Electric cars are not at that pace, but they are skirting the phenomenon.
When it comes to robocars, I remind people that the computer will be the most important part of the car, not the engine or other features. And the computer and software are on the Moore’s Law curve, like your phone. The battery system is not, but digital features are becoming more and more important parts of every car.
The most obvious cause of the big depreciation is not related to the cars themselves. There is a $7,500 federal tax rebate on a new electric car, so the moment you drive it off the lot, its blue book value drops an additional $7,500. In addition, various states offer credits of up to $5,000, and unless you take the car out of state, that amount will also drop off the value. This is the primary culprit for the huge depreciation numbers, but there is more.
Perversely, people with higher incomes don’t get California’s $2,500 credit, so for them, buying used is a very wise idea, because somebody else got the credit, and it’s reflected in the price of the car. Of course, if you are rich enough, you may tolerate paying $2,500 more than everybody else for the new car. In fact, if not for the sales tax, it would be a good strategy to get somebody else to buy a car for you and get the credit, then buy it from them. Or take over a lease (getting to that…)
There are rumours that vendors might even be trying to subsidize against this depreciation to avoid a collapse in the price of their cars. After all, such low used car value discourages confidence in the car (and steals away buyers of new cars.) Rumours suggest Nissan has been known to offer incentives to get people to keep their lease-returns rather than take them back, and there are stories of even Teslas getting low prices at auction, though in the retail market they have actually done pretty well.
The Leaf is the most popular electric car, and only it and the Tesla are real market cars from big players. The other cars are all “compliance” cars, made by companies who must meet quotas of green vehicles. The 2015 Leaf has a cited range around 80 miles, and users report a real range on the highway closer to 60 miles. For me, that means a car that can’t take me to San Francisco and back. The Leaf would handle a large fraction of my trips around Silicon Valley, but not being able to go to SF is a major detriment in this town. So I decided not to get a 2015 Leaf.
Better cars keep getting pre-announced
That decision was reinforced when Nissan announced the 2016 Leaf would be able to do 107 miles. Technically, that’s enough for the San Francisco trip, though in reality it’s just on the edge. Any charging would allow the trip, including a 5-minute (“gas pump” level) stop at a DC fast charger (if nobody else is using it). So I was waiting for that car to come out when…
They announced the Chevy Bolt, a $30K car (after rebate) with a 200-mile range. Finally, a reasonably priced car with enough range. And then rumours circulated of a similar range in the 2017 Leaf — it needs that if it is to compete, and so every other car needs it as well. Who will buy a 100-mile 2016 car when a 200-mile 2017 car for not much more is being promoted?
Of course, in a year, something even more appealing than the Bolt will be announced. While the Bolt’s range is enough for 99% of my drives (leaving out only Lake Tahoe and road tripping) there is still much that can improve — other parts of the car, the electronics, and of course the battery pack getting even cheaper at that range.
Every year, cars get a little bit better, but we’re in for a period of about 5 years in electric cars where each new year is a lot better, and that’s trouble for people trying to sell them if the customers figure that out. A cell phone is cheap enough to throw out after 2 years. A car is not. To top it off, in a few years the robocar features will start getting more serious (starting with the first no-supervision traffic jam assist) and so other parts of the car will also be on the Moore’s Law curve.
The battery is probably not on that curve, but it’s on a good one. The Bolt’s 200-mile range is the result of an expected reduction in battery cost from $500/kWh a couple of years ago to $200/kWh by 2020, and that’s without any breakthroughs or new chemistry. (It is speculated the Bolt’s battery cost will already beat that $200 number.) Breakthroughs — which sometimes come when enough money is pushing the process — could easily do much more.
Robocars have an answer to this rapid depreciation. If they are used as taxis, they can survive it. The typical New York taxi drives 62,000 miles each year and wears out in 5 years. Personal cars take 19 years to wear out, going around 200,000 miles. Robotaxis will wear out and be scrapped after just 5 years, which means it is less of a burden that they are 4 years old and obsolete from a technology standpoint. (We may also design these vehicles to make it easy to give them hardware upgrades, so their electronics can keep pace.)
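The arithmetic behind that, using the figures above:

```python
taxi_miles = 62_000 * 5        # NYC taxi: 62k miles/year for 5 years
personal_rate = 200_000 / 19   # personal car: 200k miles over 19 years
print(taxi_miles)              # 310000 miles before it wears out
print(round(personal_rate))    # ~10526 miles/year
# A robotaxi burns through its whole service life in 5 calendar years,
# so obsolete 4-year-old hardware is close to scrap anyway; a personal
# car still has ~15 years of mechanical life left when its tech ages out.
```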
Personal robocars have it harder. Your 4 year old personal vehicle is going to look like crap compared to the new ones. It will get software updates to match them (which is vital) but without hardware updates it will, like an old iPhone, no longer even be able to handle the software updates. If you buy a personal robocar, get one where it’s easy to swap out the hardware, and expect to pay the cost of this.
Wear and tear of electric cars
The battery is the lifeblood of the electric car. No matter how new the rest is, a reduced range is a deal-killer for most buyers. Indeed, some predictions say the rest of the power train should wear out more slowly than traditional cars, so the depreciation is unfair in some ways.
Battery swap is an option on some electric cars, but a replacement pack is a big cost on top of what you planned to pay. Older battery packs will still work, but deliver less range. Owners will salivate for new packs that are cheaper, lighter, fresher and possibly even higher capacity than what they have. That’s all good, but if you buy an electric car with a pack only good for 4 years at today’s prices, you’ve lost all the economies the electric car hopes to give you. Of course, robocars and especially robotaxis can manage their batteries for much longer life.
It might make sense to buy a 2012 Leaf for $8,000 and pay $5K to put a brand-new battery pack in it, giving you a car close to matching a new one in certain ways.
With all this, why look at electric cars today? For me, my electricity bill would actually go down due to metering differences, and of course my gasoline bill would drop too. Electric cars are zippy and fun to drive, and quite green on California’s (relatively) green energy grid. And because of this depreciation, used ones are a major bargain. The buyers of new cars (and the federal government) took the hit on a new electric, but you can pick up a 2012 Leaf for $8,000. That’s because all those 2012 units are coming off their leases, and people want them a lot less with those fancier models out there. (In addition, it is known the 2012 had some battery life issues, fixed in 2013.)
A lot of people are leasing electric cars. Leasing has one financial advantage (you pay sales tax only on the depreciation you take, rather than on the whole car); otherwise it’s a bad idea unless you’re sure the vendor has guessed badly on the residual value of the car after the lease. With electric cars, you take so much of the depreciation that the tax advantage is not so great. But many electric owners are leasing. The $2,500 tax credit in California can often cover the down payment, making it easy to come up with the money, and owners are, with good reason, willing to let the vendor take the risk on battery decay and mega-depreciation. Vendors are not idiots, though, so their residual values are low, but perhaps not low enough. Of course, if you know better cars are coming and are sure you only want the car for 2 years, leasing can save you legwork.
On the other hand, you can sometimes take over the lease of another electric car owner, letting them suffer the “due at signing” down payment (which often exceeds all the monthly payments on a short lease) and giving you a car for a very short time, which might be a wise choice with all the new vehicles coming down the pipe.