brad's blog

We Robot Robot Law Conference and Robot Block Party

It’s National Robotics Week, and various events are going on — probably some in your area.

Today and tomorrow I am at the We Robot conference at Stanford, where people are presenting papers puzzling over how robots and the law will interact. There are not enough technology folks at this iteration of the conference — we sometimes have a natural aversion to this — but because we’re building big moving things that could run into people, the law has to be understood.

On Wednesday is the Robot Block Party, also at Stanford, and always fun, with stuff for kids.

Thursday has the Xconomy robot conference, which looks good, though I probably won’t be there.

After the Phoenix APM event on the 21st I will be at Asilomar attending two conferences simultaneously. One is MLove, where I will join a session on connected cars. In a strange coincidence, MLove is located at the same conference center as another invite-only conference I attend annually for old-time (and new-time) microprocessor hackers. The odd thing was that normally when I get an invite that conflicts with a conference I am at, I have to say no — but if they are nice enough to do it at the same conference center on the same days, things can change. Both conferences are lots of fun, and it’s actually annoying to have them overlap since I would like to go to most of both of them.

A few Singularity U events are coming up, but most in the coming month are sold out or invite-only.

The Personal Cloud and Data Deposit Box

Last night I gave a short talk at the 3rd “Personal Clouds” meeting in San Francisco. The term “personal clouds” is a bit vague at present, but in part it describes what I had proposed in 2008 as the “data deposit box” — a means to achieve the various benefits of corporate-hosted cloud applications in computing space owned and controlled by the user. Other people interpret the phrase “personal clouds” to mean mechanisms for the user to host, control or monetize their own data, or to control their relationships with vendors and others who will use that data; in the simplest form, some use it to refer to personal resources hosted in the cloud, such as cloud disk drive services like Dropbox.

I continue to focus on the vision of providing the advantages of cloud applications closer to the user, bringing the code to the data (as was the case in the PC era) rather than bringing the data to the code (as is now the norm in cloud applications.)
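
To make “bringing the code to the data” concrete, here is a purely illustrative sketch of what a data deposit box host might look like. Every name and call here is hypothetical, invented for this example; it is not a real protocol or API:

```python
# Hypothetical sketch of a "data deposit box" host: the user's own
# computing space stores the data, and vendor application code is
# fetched and run against it locally, the reverse of today's cloud
# apps. All names here are invented for illustration.
import json
from pathlib import Path

class DepositBox:
    def __init__(self, root: Path):
        self.root = root  # user-owned storage, e.g. a home server or rented VM

    def read(self, key: str) -> dict:
        return json.loads((self.root / f"{key}.json").read_text())

    def write(self, key: str, value: dict) -> None:
        (self.root / f"{key}.json").write_text(json.dumps(value))

def run_vendor_app(box: DepositBox, app_code: str, granted_keys: list[str]) -> None:
    """Run downloaded vendor code against only the data the user granted.
    A real host would sandbox the code; plain exec() just marks the spot."""
    view = {key: box.read(key) for key in granted_keys}
    exec(app_code, {"data": view, "save": box.write})
```

The point of the sketch is only the direction of movement: the vendor’s code crosses the network, while the user’s data stays home.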

Consider the many advantages of cloud applications for the developer:

  • You write and maintain your code on machines you build, configure and maintain.
    • That means none of the immense support headaches of trying to write software to run on multiple OSes, with many versions and thousands of variations. (Instead you do have to deal with all the browsers, but that’s easier.)
    • It also means you control the uptime and speed
    • Users are never running old versions of your code and facing upgrade problems
    • You can debug, monitor, log and fix all problems with access to the real data
  • You can sell the product as a service, either getting continuing revenue or advertising revenue
  • You can remove features, shut down products
  • You can control how people use the product and even what steps they may take to modify it or add plug-ins or 3rd party mods
  • You can combine data from many users to make compelling applications, particularly in the social space
  • You can track many aspects of single and multiple user behaviour to customize services and optimize advertising, learning as you go

Some of those are disadvantages for the user of course, who has given up control. And there is one big disadvantage for the provider, namely they have to pay for all the computing resources, and that doesn’t scale — 10x users can mean paying 10x as much for computing, especially if the cloud apps run on top of a lower level cloud cluster which is sold by the minute.

But users see advantages too.

Speaking on Personal Clouds in SF, and Robocars in Phoenix

Two upcoming talks:

Tomorrow (April 4) I will give a very short talk at the meeting of the personal clouds interest group. As far as I know, I was among the first to propose the concept of the personal cloud in my essays on the Data Deposit Box back in 2007, and while my essays are not the reason for it, the idea is gaining some traction now as more and more people think about the consequences of moving everything into the corporate clouds.

My lightning talk will cover what I see as the challenges of getting the public to accept a system where the computing resources are responsible to them rather than to various web sites.

On April 22, I will be at the 14th International Conference on Automated People Movers and Automated Transit speaking in the opening plenary. The APM industry is a large, multi-billion dollar one, and it’s in for a shakeup thanks to robocars, which will allow automated people moving on plain concrete, with no need for dedicated right-of-way or guideways. APMs have traditionally been very high-end projects, costing hundreds of millions of dollars per mile.

The best place to find me otherwise is at Singularity University events. While schedules are still being worked out, with luck you’ll see me this year in Denmark, Hungary and a few other places overseas, in addition to here in Silicon Valley of course.

The rise of the small and narrow vehicle

Many of the more interesting consequences of a robotic taxi “mobility on demand” service stem from the ability to open up all sorts of new areas of car design. When you are just summoning a vehicle for one trip, you can be sent a vehicle that is well matched to that trip. Today we almost all drive 5 passenger sedans or larger, whether we are alone, with a single passenger or in a group. Many always travel in an SUV or minivan on trips that have no need of that.

The ability to use small, light vehicles means the ability to make transportation much more efficient. While electric cars are a good start (at least in places without coal-based electricity) the reality is today’s electric cars are still sedans and in fact are heavy due to their batteries. As such they use 250 to 350 watt-hours/mile. That’s good, but not great. At the national grid average, 300 wh/mile is around 3000 BTUs/mile or the equivalent of 37mpg. Good, and cleaner if from natural gas, but we can do a lot more.
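
The conversion behind that 37mpg figure is easy to reproduce. A minimal sketch, assuming round numbers of about 10,500 BTU of primary energy per delivered kWh for the average grid and about 114,000 BTU per gallon of gasoline (my assumptions, not figures from this post):

```python
# Rough energy-equivalence math for an electric car on the average grid.
GRID_BTU_PER_KWH = 10_500          # assumption: average heat rate incl. losses
GASOLINE_BTU_PER_GALLON = 114_000  # assumption: energy content of a gallon

def mpg_equivalent(wh_per_mile: float) -> float:
    """Convert wall-to-wheels Wh/mile into a gasoline mpg equivalent."""
    btu_per_mile = (wh_per_mile / 1000) * GRID_BTU_PER_KWH
    return GASOLINE_BTU_PER_GALLON / btu_per_mile

print(f"{mpg_equivalent(300):.0f} mpg-equivalent")  # ~36, close to the 37 above
```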

Half-width vehicles have another benefit — they don’t take up much room on the road, or in parking/waiting. Two half-width vehicles that find one another on the road can pair up to take only one lane space. A road that’s heavy with half-width vehicles (as many are in the developing world) can handle a lot more traffic. Rich folks don’t tend to buy these vehicles, but they would accept one as a taxi if they are alone. Indeed, a half-width face-to-face vehicle should be very nice for 2 people.

The problem with half-width vehicles (about 1.5m or 4.5 feet if you’re going to fit two in a 12’ lane using robotic precision) is that a narrow stance just isn’t that stable, not at decent speeds. You like a wide stance to corner. One answer to that is the ability to bank, which two-wheeled vehicles do well, but which requires special independent suspension to do with 3 or 4 wheels. Two wheels are great for some purposes, but 3 and 4 have a better grip on the road, particularly if a wet or slippery patch is encountered.

There are quite a number of 3 and 4 wheelers with independently adjustable wheels made. Consider the recent concept I-road by Toyota which exemplifies this well. There are however a number of vehicles that are not concepts, and this (rather long) Gizmag video provides a summary of a variety of real and concept vehicles in this space, as well as enclosed motorcycles and scooters, including the Nissan Landglider, the VW 1L, the Twizy, the Tango, the Lumeneo Smera and many others. Skip to about 13 minutes to see many of the 3-wheelers. Another vehicle I like is the Quadro — watch this video of the 4 wheel version. These vehicles are aimed more at the motorcycle market and are open, while I suspect the single person robocar will be an enclosed vehicle.

I also wrote earlier about efforts on two wheels, like the concept vehicle the Twill. Other recent efforts have included the gyro-stabilized Lit Motors C-1 which can be fully enclosed on two wheels because you don’t have to stick your legs out.

I suspect the 4 wheeled bankable vehicles are the ideal solution, and the technology is surprisingly far along. Many companies prefer to make 3 wheeled vehicles because those currently get classed as motorcycles and require far less work to meet regulations. These exemptions are reportedly ending soon, and so the effort can shift to 4 wheels which should have the most stability.

The ability to bank is important not just to stay stable with a narrow stance. Banking also means you can tilt the passenger to make turns more comfortable, in that the force vector will be mostly up and down rather than side to side. In a turn it feels more like getting heavy and light rather than being shifted. Some people, however, will have trouble with motion sickness if they are not themselves looking out the window and feeling part of the banking move. Being able to tilt forward and back can have value too, so that starts and stops also produce up-and-down force vectors rather than forward-and-back ones. While this is not yet demonstrated, it may be possible to make vehicles which cause minimal discomfort to many passengers when doing things like turns, stops and the roundabout.

Roundabouts seem like a great idea for robocars in many ways, since you don’t need stop signs or lights, and robocars should be able to insert themselves into gaps in traffic with precision and confidence. Frequent roundabouts, however, would be disconcerting with all the turning and speed changes, to the point that many would prefer just a straight road with timed traffic lights, so that a clever car that knows the timing never hits a red.
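
To put a number on banking: the comfortable tilt is the one where the combined gravity and cornering forces point straight through the floor, the standard coordinated-turn angle. A small sketch, with example numbers of my own choosing:

```python
import math

G = 9.81  # m/s^2

def coordinated_bank_angle(speed_ms: float, turn_radius_m: float) -> float:
    """Bank angle (degrees) at which the net force on the passenger points
    straight 'down' in the vehicle frame, so a lateral shove becomes the
    feeling of simply getting a little heavier."""
    return math.degrees(math.atan2(speed_ms ** 2, G * turn_radius_m))

# A 50 km/h (13.9 m/s) turn of 30 m radius calls for about 33 degrees of bank.
print(f"{coordinated_bank_angle(13.9, 30.0):.0f} degrees")
```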

Another entry in the narrow vehicle field that got a lot of attention is the autonomous driving Hitachi Ropits. The Ropits — here is a video — is a narrow vehicle with small wheels, and is able to be autonomous because it is super-slow — it only goes 3.7mph, so you can keep up with it at a brisk walk — and is meant to go on sidewalks and pedestrian lanes, more a mobility device for the aged than a robocar. However, it is a new entry in the autonomous vehicle pantheon from a new player.

The big question that remains about these vehicles is crash safety. As motorcycles they are not receiving the same sort of testing. In a world that is mostly robocars, one could argue that you don’t need the same levels of crash safety, but we aren’t there yet. All is not lost, however. Recently I sat in a prototype of the Edison2 Very Light Car. The VLC is a 4-seater with a narrow body but a wide stance, for handling. This vehicle has been crash tested with good results, and it could be made with independent suspension and banking and a narrower stance if the market wanted that.

Small vehicles, just 4.5 feet wide and 10-12 feet long can make a huge difference. First of all, they are inherently (except the Tango) going to be light, and light is the most important thing in making them efficient. But they will also take up less space on the road, able to go 2 to a lane (or even lane split in some places.) They will also take up much less space parking. The combination of their small size (about 1/3 of a typical car) and their ability to pack close together “valet style” as robocars means you will be able to fit 4 or 5 of them in the same amount of parking lot area that today fits a single car in a non-valet lot. As noted, while many robocars will not be parking at all because they will be taxis that head off to get their next fare, those that do wish to park will be able to do it at vastly greater densities than we have today, and the consequences of that are big.
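
A back-of-envelope check on that 4 to 5 times parking figure, with every input a rough assumption of mine:

```python
# How many small robocars fit in the area a conventional lot budgets per car?
LOT_AREA_PER_CAR_M2 = 30.0  # assumption: stall plus share of aisles, non-valet
small_car_m2 = 1.4 * 3.4    # ~4.5 ft x 11 ft vehicle footprint
valet_overhead = 1.4        # assumption: packing margin with no open aisles

per_small_car_m2 = small_car_m2 * valet_overhead
print(f"{LOT_AREA_PER_CAR_M2 / per_small_car_m2:.1f} small robocars per stall")
# -> ~4.5, consistent with the 4-5 claim above
```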

There are a few other options for increased stability with a normally narrow stance. These might include:

  • Low center of gravity — this is what the Tango does, filling the very bottom with lead-acid batteries. Passengers might sit lower — some vehicle designs involve lowering after the passenger gets in.
  • Variable stance: a possible ability to widen the stance with an extendable axle so the vehicle takes a whole lane when in places that need that cornering ability and stability.
  • Extra wheel: The ability to temporarily deploy an extra wheel (probably not a drive wheel) to one side or both to temporarily increase stability. This wheel might take all the weight on that side, or balance with the others. Vehicles side-by-side could even coordinate to still fit in a lane but that sounds risky.
  • Just go slow: Narrow stance vehicles might just be used in lower speed urban routes, and take corners fairly slow.
  • Gyroscopes, under robotic control.

It’s important to consider that the risk of instability in a narrow vehicle is mostly one for human drivers, who are used to wide stances and may make errors on the physics. A robocar, with full knowledge of the vehicle’s characteristics and the shape of the road, simply won’t try any turn that would tip it, and it won’t pick routes with turns that would require the vehicle to go so slowly as to impede traffic. Knowledge of road traction can complete this sort of analysis.
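
That tip-over calculation is simple enough to sketch. In the rigid-body approximation (ignoring tire slip, suspension and any banking), the inside wheels lift when lateral acceleration exceeds g times half the track width divided by the center-of-gravity height; the example numbers are mine:

```python
import math

G = 9.81  # m/s^2

def max_tipping_speed(turn_radius_m: float, track_m: float,
                      cg_height_m: float) -> float:
    """Speed (m/s) at which a non-banking rigid vehicle starts to lift its
    inside wheels: v = sqrt(g * r * track / (2 * h))."""
    return math.sqrt(G * turn_radius_m * track_m / (2 * cg_height_m))

# A ~1.4 m track car vs. a 1.0 m track narrow vehicle, both with a 0.5 m
# CG height, on a 20 m radius corner:
for track in (1.4, 1.0):
    v = max_tipping_speed(20.0, track, 0.5)
    print(f"track {track} m: tips above {v * 3.6:.0f} km/h")
```

A robocar would run exactly this sort of check, fed with real traction data, before committing to a turn.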

V2V and connected car part 3: Broadcast data

Earlier in part one I examined why it’s hard to make a networked technology based on random encounters. In part two I explored how V2V might be better achieved by doing things phone-to-phone.

For this third part of the series on connected cars and V2V I want to look at the potential for broadcast data and other wide area networking.

Today, the main thing that “connected car” means in reality is cell phone connectivity. That began with “telematics” (systems such as OnStar) but has grown to using data networks to provide apps in cars. The ITS community hoped that DSRC would provide data service to cars, and that this would be one reason for people to deploy it, but the cellular networks took that over very quickly. Unlike DSRC, which is, as the name says, short range, the longer range of cellular data means you are connected most of the time, and all of the time in some places, and people will accept nothing less.

I believe there is a potential niche for broadcast data to mobile devices and cars. This would be a high-power shared channel. One obvious way to implement it would be to use a spare TV channel, and use the new ATSC-M/H mobile standard. ATSC provides about 19 megabits. Because TV channels can be broadcast with very high power transmitters, they reach almost everywhere in a large region around the transmitter. For broadcast data, that’s good.

Today we use the broadcast spectrum for radio and TV. Turns out that this makes sense for very popular items, but it’s a waste for homes, and largely a waste for music — people are quite satisfied instead with getting music and podcasts that are pre-downloaded when their device is connected to wifi or cellular. The amount of data we need live is pretty small — generally news, traffic and sports. (Call in talk shows need to be live but their audiences are not super large.)

A nice broadcast channel could transmit a lot of data of interest to cars; a rough bandwidth sketch follows the list below.

  • Timing and phase information on all traffic signals in the broadcast zone.
  • Traffic data, highly detailed
  • Alerts about problems, stalled vehicles and other anomalies.
  • News and other special alerts — you could fit quite a few voice-quality station streams into one 19 megabit channel.
  • Differential GPS correction data, and even supplemental GPS signals.
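
As promised, here is a rough budget for the signal-timing portion of such a channel. Every input below is my own assumption, purely for illustration:

```python
# Naive worst case: rebroadcast every signal's state every second.
SIGNALS_IN_REGION = 20_000  # assumption: signalized intersections in range
BYTES_PER_UPDATE = 50       # assumption: ID, phase, time-to-change, checksum
UPDATES_PER_SECOND = 1
CHANNEL_BITS_PER_SECOND = 19_000_000  # ATSC payload cited above

used = SIGNALS_IN_REGION * BYTES_PER_UPDATE * 8 * UPDATES_PER_SECOND
print(f"{used / 1e6:.1f} Mbit/s = {100 * used / CHANNEL_BITS_PER_SECOND:.0f}% of channel")
# ~8 Mbit/s even done this naively; far less with delta coding, since most
# signals run fixed cycles that only need re-announcing when they change.
```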

The latency of the broadcast would be very low of course, but what about the latency of uploaded signals? This turns out to not be a problem for traffic lights because they don’t change suddenly on a few milliseconds notice, even if an emergency vehicle is sending them a command to change. If you know the signal is going to change 2 seconds in advance, you can transmit the time of the change over a long latency channel. If need be, a surprise change can even be delayed until the ACK is seen on the broadcast channel, to within certain limits. Most emergency changes have many seconds before the light needs to change.
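
A minimal sketch of that design point: broadcast absolute change times rather than “change now” events, so a few seconds of upload latency cost nothing. The message fields are hypothetical:

```python
import time

def make_signal_message(signal_id: str, phase: str, change_at_epoch: float) -> dict:
    """Hypothetical broadcast record: the receiver needs only a synced clock."""
    return {"id": signal_id, "phase": phase, "change_at": change_at_epoch}

# The controller knows at least 2 s ahead, so the message stays valid
# through seconds of cellular upload plus broadcast queueing.
msg = make_signal_message("main-and-5th", "green", time.time() + 4.0)
print(f"{msg['phase']} for another {msg['change_at'] - time.time():.1f} s")
```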

Stalled car warnings also don’t need low latency. If a car finds itself getting stalled on the road, it can send a report of this over the cellular modem that’s already inside so many cars (or over the driver’s phone.) This may take a few seconds to get into the broadcast stream, but then it will be instantly received. A stalled car is a problem that lasts minutes, you don’t need to learn about it in the first few milliseconds.

Indeed, this approach can even be more effective. Because of the higher power of the radios involved, information can travel between vehicles in places where line of sight communications would not work, or would actually only work later than the server-relayed signal. This is even possible in the “classic” DSRC example of a car running a red light. While a line of sight communication of this is the fastest way to send it, the main time we want this is on blind corners, where LoS may have problems. This is a perfect time for those longer range, higher power communications on the longer waves.

Most phones don’t have ATSC-M/H and neither do cars. But receiver chips for this are cheap and getting cheaper, and it’s a consumer technology that would not be hard to deploy. However, this sort of broadcast standard could also be done in the cellular bands, at some cost in bandwidth for them.

19 megabits is actually a lot, and since traffic incidents and light changes are few, a fair bit of bandwidth would be left over. It could be sold to companies who want a cheaper way to update phones and cars with more proprietary data, including map changes, their own private traffic and so on. Anybody with a lot of customers might find this more efficient. Very popular videos and audio streams for mobile devices could also use the extra bandwidth. If only a few people want something, point to point is the answer, but once something is wanted by many, broadcast can be the way to go.

What else might make sense to broadcast to cars and mobile phones in a city? While I’m not keen to take away some of the nice whitespaces, there are many places with lots of spare channels if designed correctly.

Solving V2V Part 2: Make it Phone to Phone

Last week, I began in part 1 by examining the difficulty of creating a new network system in cars when you can only network with people you randomly encounter on the road. I contend that nobody has had success in making a new networked technology when faced with this hurdle.

This has been compounded by the fact that the radio spectrum at 5.9GHz which was intended for use in short range communications (DSRC) from cars is going to be instead released as unlicensed spectrum, like the WiFi bands. I think this is a very good thing for the world, since unlicensed spectrum has generated an unprecedented radio revolution and been hugely beneficial for everybody.

But surprisingly it might be something good for car communications too. The people in the ITS community certainly don’t think so. They’re shocked, and see this as a massive setback. They’ve invested huge amounts of efforts and careers into the DSRC and V2V concepts, and see it all as being taken away or seriously impeded. But here’s why it might be the best thing to ever happen to V2V.

The innovation in mobile devices and wireless protocols of the last 1-2 decades is a shining example for all of technology. Compare today’s mobile handsets with those of 10 years ago, when the Treo was just starting to make people think about smartphones. (Go back a couple more years and there weren’t any smartphones at all.) Every year there are huge strides in hardware and software, and as a result, people are happily throwing away perfectly working phones every 2 years (or less) to get the latest, even without subsidies. Compare that to the electronics in cars. There is little in your car that wasn’t planned many years ago, and usually nothing changes over the 15-20 year life of the car. Car vendors are just now toying with the idea of field upgrades and over-the-air upgrades.

Car vendors love to sell you fancy electronics for your central column. They can get thousands of dollars for the packages — packages that often don’t do as much as a $300 phone and get obsolete quickly. But customers have had enough, and are now forcing the vendors to give up on owning that online experience in the car and ceding it to the phone. They’re even getting ready to cede their “telematics” (things like OnStar) to customer phones.

I propose this: Move all the connected car (V2V, V2I etc.) goals into the personal mobile device. Forget about the mandate in cars.

The car mandate would have started getting deployed late in this decade. And it would have been another decade before deployment got seriously useful, and another decade until deployment was over 90%. In that period, new developments would have made all the decisions of the 2010s wrong and obsolete. In that same period, personal mobile devices would have gone through a dozen complete generations of new technology. Can there be any debate about which approach would win?

The importance of serial media vs. sampled and Google Reader

The blogging world was stunned by the recent announcement by Google that it will be shutting down Google Reader later this year. Due to my consulting relationship with Google I won’t comment too much on their reasoning, though I will note that I believe it’s possible the majority of regular readers of this blog, and many others, come via Google Reader, so this shutdown has a potentially large effect here. Of particular note is Google’s statement that usage of Reader has been in decline, and that social media platforms have become the way to reach readers.

The effectiveness of those platforms is strong. I have certainly noticed that when I make blog posts and put up updates about them on Google Plus and Facebook, it is common that more people will comment on the social network than comment here on the blog. It’s easy, and indeed more social. People tend to comment in the community in which they encounter an article, even though in theory the most visibility should be at the root article, where people go from all origins.

However, I want to talk a bit about online publishing history, including USENET and RSS, and the importance of concepts within them. In 2004 I first commented on the idea of serial vs. browsed media, and later expanded this taxonomy to include sampled media such as Twitter and social media in the mix. I now identify the following important elements of an online medium:

  • Is it browsed, serial or to be sampled?
  • Is there a core concept of new messages vs. already-read messages?
  • If serial or sampled, is it presented in chronological order or sorted by some metric of importance?
  • Is it designed to make it easy to write and post or easy to read and consume?

Online media began with E-mail and the mailing list in the 60s and 70s, with the 70s seeing the expansion to online message boards including Plato, BBSs, Compuserve and USENET. E-mail is a serial medium. In a serial medium, messages have a chronological order, and there is a concept of messages that are “read” and “unread.” A good serial reader, at a minimum, has a way to present only the unread messages, typically in chronological order. You can thus process messages as they came, and when you are done with them, they move out of your view.

E-mail largely is used to read messages one-at-a-time, but the online message boards, notably USENET, advanced this with the idea of moving messages from unread to read in bulk. A typical USENET reader presents the subject lines of all threads with new or unread messages. The user selects which ones to read — almost never all of them — and after this is done, all the messages, even those that were not actually read, are marked as read and not normally shown again. While it is generally expected that you will read all the messages in your personal inbox one by one, with message streams it is expected you will only read those of particular interest, though this depends on the volume.

Echos of this can be found in older media. With the newspaper, almost nobody would read every story, though you would skim all the headlines. Once done, the newspaper was discarded, even the stories that were skipped over. Magazines were similar but being less frequent, more stories would be actually read.

USENET newsreaders were the best at handling this mode of reading. The earliest ones had keyboard interfaces that allowed touch typists to process many thousands of new items in just a few minutes, glancing over headlines, picking stories and then reading them. My favourite was TRN, based on RN by Perl creator Larry Wall and enhanced by Wayne Davison (whom I hired at ClariNet in part because of his work on that.) To my great surprise, even as the USENET readers faded, no new tool emerged capable of handling a large volume of messages as quickly.

In fact, the 1990s saw a switch for most to browsed media. Most web message boards were quite poor and slow to use, many did not even do the most fundamental thing of remembering what you had read and offering a “what’s new for me?” view. In reaction to the rise of browsed media, people wishing to publish serially developed RSS. RSS was a bit of a kludge, in that your reader had to regularly poll every site to see if something was new, but outside of mailing lists, it became the most usable way to track serial feeds. In time, people also learned to like doing this online, using tools like Bloglines (which became the leader and then foolishly shut down for a few months) and Google Reader (which also became the leader and now is shutting down.) Online feed readers allow you to roam from device to device and read your feeds, and people like that.
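
For those who never looked under the hood, the kludge amounts to this: every reader re-fetches every feed on a timer and de-duplicates by entry ID. A minimal sketch using the third-party feedparser library (the feed URL is just a placeholder):

```python
import time
import feedparser  # third-party: pip install feedparser

# url -> set of entry IDs we have already seen
feeds = {"http://example.com/index.rss": set()}

def poll_once() -> None:
    for url, seen in feeds.items():
        parsed = feedparser.parse(url)  # full fetch, whether anything is new or not
        for entry in parsed.entries:
            entry_id = entry.get("id") or entry.get("link")
            if entry_id not in seen:
                seen.add(entry_id)
                print("new:", entry.get("title", "(untitled)"))

while True:
    poll_once()
    time.sleep(30 * 60)  # every feed, every half hour, changed or not
```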

V2V vs. the paths to a successful networked technology (Part 1)

A few weeks ago, in my article on myths, I wrote about why the development of “vehicle to vehicle” (V2V) communications is mostly orthogonal to that of robocars. That’s very far from the view of many authors, and most of those in the ITS community. I remain puzzled by the V2V plan and how it might actually come to fruition. Because there is some actual value in V2V, and we would like to see that value realized in the future, I am afraid that the current strategy will not work out and thus misdirect a lot of resources.

This is particularly apropos because recently the FCC issued an NPRM saying it wants to open up the DSRC band at 5.9GHz, which was meant for V2V, for unlicensed wifi-style use. This has been anticipated for some time, but the ITS community is concerned about losing the band it received in the late 90s but has yet to use in anything but experiments. The demand for new unlicensed spectrum is quite appropriately very large — the opening up of 2.4GHz decades ago generated the greatest period of innovation in the history of radio — and the V2V community has a daunting task resisting it.

In this series I will examine where V2V approaches went wrong and what they might do to still attain their goals.


I want to begin by examining what it takes to make a successful cooperative technology. History has many stories of cooperative technologies (either peer-to-peer or using central relays) that grew, some of which managed to do so in spite of appearing to need a critical mass of users before they were useful.

Consider the rise and fall of fax (or for that matter, the telephone itself.) For a lot of us, we did not get a fax machine until it was clear that lots of people had fax machines, and we were routinely having people ask us to send or receive faxes. But somebody had to buy the first fax machine, in fact others had to buy the first million fax machines before this could start happening.

This was not a problem because while one fax machine is useless, two are quite useful to a company with a branch office. Fax started with pairs or small networks of machines, and one day two companies noticed they both had fax and started communicating inter-company instead of intra-company.

So we see rule one: The technology has to have strong value to the first purchaser. Use by a small number of people (though not necessarily just one) needs to be able to financially justify itself. This can be a high-cost, high-value “early adopter” proposition, but it must be real.

This was true for fax, e-mail, phone and many other systems, but a second principle has applied in many of the historical cases. Most, but not all systems were able to build themselves on top of an underlying layer that already existed for other reasons. Fax came on top of the telephone. E-mail on top of the phone and later the internet. Skype was on top of the internet and PCs. The underlying system made it possible for two people to adopt a technology which was useful to just those two, and the two people could be anywhere. Any two offices could get a fax or an e-mail system and communicate; only the ordinary phone was needed.

The ordinary phone had it much harder. To join the phone network in the early days you had to go out and string physical wires. But anybody could still do it, and once they did it, they got the full value they were paying for. They didn’t pay for phone wires in the hope that others would some day also pay for wires and they could talk to them — they found enough value calling the people already on that network.

Social networks are also interesting. There is a strong critical mass factor there. But with social networks, they are useful to a small group of friends who join. It is not necessary that other people’s social groups join, not at first. And they have the advantage of viral spreading — the existing infrastructure of e-mail allows one person to invite all their friends to join in.

Enter Car V2V

Car V2V doesn’t satisfy these rules. There is no value for the first person to install a V2V radio, and very tiny value for the first thousands of people. An experiment is going on in Ann Arbor with 3,000 vehicles, all belonging to people who work in the same area, and another experiment in Europe will equip several hundred vehicles.

Perils of the long range electric car

You’ve probably seen the battle going on between Elon Musk of Tesla and the New York Times over the strongly negative review the NYT made of a long road trip in a Model S. The reviewer ran out of charge and had a very rough trip with lots of range anxiety. The data logs published by Tesla show he made a number of mistakes, didn’t follow some instructions on speed and heat and could have pulled off the road trip if he had done it right.

Both sides are right, though. Tesla has made it possible to do the road trip in the Model S, but they haven’t made it easy. It’s possible to screw it up, and instructions to go slow and keep the heater low are not ones people want to take. 40 minute supercharges are still pretty long, they are not good for the battery and it’s hard to believe that they scale since they take so long. While Better Place’s battery swap provides a tolerable 5 minute swap, it also presents scaling issues — you don’t want to show up at a station that does 5 minute swaps and be 6th in line.

The Tesla Model S is an amazing car, hugely fun to drive and zippy, cool on the inside and high tech. Driving around a large metro area can be done without range anxiety, which is great. I would love to have one — I just love $85K more. But a long road trip, particularly on a cold day? There are better choices. (And in the Robocar world when you can get cars delivered, you will get the right car for your trip delivered.)

Electric cars have a number of worthwhile advantages, and as battery technologies improve they will come into their own. But let’s consider the economics of a long range electric. The Tesla Model S comes in 3 levels, and there is a $20,000 difference between the 40kwh 160 mile version and the 85kwh 300 mile version. It’s a $35K difference if you want the performance package.

The unspoken secret of electric cars is that while you can get the electricity for the Model S for just 3 cents/mile at national grid average prices (compared to 12 cents/mile for gasoline in a 30mpg car and 7 cents/mile in a 50mpg hybrid) this is not the full story. You also pay, as you can see, a lot for the battery. There are conflicting reports on how long a battery pack will last you (and that in turn varies on how you use and abuse it.) If we take the battery lifetime at 150,000 miles — which is more than most give it — you can see that the extra 45kwh add-on in the Tesla for $20K is costing about 13 cents/mile. The whole battery pack in the 85kwh Tesla, at $42K estimated, is costing a whopping 28 cents/mile for depreciation.

Here’s a yikes. At a 5% interest rate, you’re paying $2,100 a year in interest on the $42,000 Tesla S 85kwh battery pack. If you go the national average 12,000 miles/year that’s 17.5 cents/mile just for interest on the battery. Not counting vehicle or battery life. Add interest, depreciation and electricity and it’s just under 40 cents/mile — similar to a 10mpg Hummer H2. (I bet most Tesla Model S owners do more than that average 12K miles/year, which improves this.)
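
Making the arithmetic explicit, using the post’s own inputs (a $42K pack, a 150K mile life, 5% interest and 12,000 miles/year):

```python
# Battery cost per mile, split into depreciation and interest.
def battery_cents_per_mile(pack_cost: float, life_miles: float,
                           rate: float = 0.05, miles_per_year: float = 12_000):
    depreciation = pack_cost / life_miles * 100         # cents/mile
    interest = pack_cost * rate / miles_per_year * 100  # cents/mile
    return depreciation, interest

dep, interest = battery_cents_per_mile(42_000, 150_000)
print(f"85 kWh pack: {dep:.0f} c/mi depreciation + {interest:.1f} c/mi interest")
# -> 28 c/mi + 17.5 c/mi, before adding the ~3 c/mi of electricity
```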

In other words, the cost of the battery dwarfs the cost of the electricity, and sadly it also dwarfs the cost of gasoline in most cars. With an electric car, you are effectively paying most of your fuel costs up front. You may also be adding home charging station costs. This helps us learn how much cheaper we must make the battery.

It’s a bit easier in the Nissan LEAF, whose 24kwh battery pack is estimated to cost about $15,000. Here if it lasts 150K miles we have 10 cents/mile plus the electricity, for a total cost of 13 cents/mile which competes with gasoline cars, though adding interest it’s 19 cents/mile — which does not compete. As a plus, the electric car is simpler and should need less maintenance. (Of course with as much as $10,000 in tax credits, that battery pack can be a reasonable purchase, at taxpayer expense.) A typical gasoline car spends about 5 cents/mile on non-tire maintenance.

This math changes a lot with the actual battery life, and many people are estimating that battery lives will be worse than 150K miles and others are estimating more. The larger your battery pack and the less often you fully use it, the longer it lasts. The average car doesn’t last a lot more than 150k miles, at least outside of California.

The problem with range anxiety becomes more clear. The 85kwh Tesla lets you do your daily driving around your city with no range anxiety. That’s great. But to get that you buy a huge battery pack, and you only use that extra range rarely, though you spend a lot to get it. Most trips can actually be handled by the 70 mile range Leaf, though with some anxiety. You only need all that extra battery for those occasional longer trips, and you spend a lot of extra money just to use the range from time to time.

Your session has expired. Forgot your password? Click Here!

We see it all the time. We log in to a web site but after not doing anything on the site for a while — sometimes as little as 10 minutes — the site reports “your session has timed out, please log in again.”

And you get the login screen, which offers, along with the ability to log in, a link marked “Forgot your password?” that gives the ability to reset (OK) or recover (very bad) your password via your E-mail account.

The same E-mail account you are almost surely logged into in another tab or another window on your desktop. The same e-mail account that lets you go a very long time idle before needing authentication again — perhaps even forever.

So if you’ve left your desktop and some villain has come to your computer and wants to get into that site that oh-so-wisely logged you out, all they need to do is click to recover the password, go into the E-mail to learn it, delete that E-mail and log in again.

Well, that’s if you don’t, as many people do, have your browser remember passwords, and thus they can log-in again without any trouble.

It’s a little better if the site does only password reset rather than password recovery. In that case, they have to change your password, and you will at least detect they did that, because you can’t log in any more and have to do a password reset. That is if you don’t just think, “Damn, I must have forgotten that password. Oh well, I will reset it now.”

In other words, a lot of user inconvenience for no security, except among the most paranoid who also have their E-mail auth time out just as quickly, which is nobody. Those who have their whole computer lock with the screen saver are a bit better off, as everything is locked out, as long as they also use whole disk encryption to stop an attacker from reading stuff off the disk.

Mesh networking when the cell network fails

Interesting article about a new plan for mesh networking Android phones if the cell network fails. I point this out because of another blog post of mine from 2005 on a related proposal from Klein Gilhousen that he was pushing after Katrina.

The wifi mesh has the problem that wifi range is not going to get much better than 30-40m, so you need a very serious density of phones to get a real mesh going, especially to route IP as this plan wishes to. Klein’s plan was to have the phones mesh over the wireless bands that were going unused when the cell networks were dead (or absent in the wilderness). The problem with his plan was that phone transceivers tend to not be able to transmit and receive on the same bands; they need a cell tower. He proposed that new generations of phones be modified to allow that.

But it hasn’t happened, in spite of being an obviously valuable thing in disasters. Sure, there are some interference issues at the edges of legitimate cell nets, but they could be worked out. Cell phones are almost exclusively sold via carriers in many countries, including the USA, and the carriers haven’t felt it a priority to push for phones that can work without them.

I suspect trying to route voice or full IP is also a mistake, especially for a Katrina like situation. There the older network technologies of the world, designed for very intermittent connectivity, make some sense. A network designed to send short text messages, a “short message service” if you will, using mesh principles combined with store and forward could make sure texts got to and from a lot of places. You might throw in small photos so trapped people could do things like send photos of wounds to doctors.

Today’s phones have huge amounts of memory. Phones with gigabytes of flash could store tens to hundreds of millions of passing (compressed and encrypted) texts until word got out that a text had been delivered. Texts could hop during brief connections, and airplanes, blimps and drones could fly overhead doing brief data syncs with people on the ground. (You would not send every text to every phone, but every phone would know how many hops it has recently been from the outside, and you could always send upstream.) A combination of cell protocols when far and wifi when close (or to those airplanes) could get decent volumes of data moving.
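
A toy sketch of that store-and-forward idea follows. The structure, fields and hop heuristic are all invented for illustration; this is not any real mesh protocol:

```python
import hashlib
import time

class MeshNode:
    """One phone in a disaster mesh, hoarding texts until they can hop out."""

    def __init__(self) -> None:
        self.store: dict[str, dict] = {}  # msg_id -> message
        self.hops_to_uplink = 99          # recent distance from the working network

    def create(self, dest: str, body: str) -> dict:
        msg = {"dest": dest, "body": body, "created": time.time()}
        msg["id"] = hashlib.sha256(repr(msg).encode()).hexdigest()[:16]
        self.store[msg["id"]] = msg
        return msg

    def sync(self, peer: "MeshNode") -> None:
        """On a brief radio contact, copy anything the other side lacks,
        pushing toward whoever has recently been closer to the real network."""
        closer, farther = sorted((self, peer), key=lambda n: n.hops_to_uplink)
        for mid, msg in farther.store.items():
            closer.store.setdefault(mid, msg)
```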

Phones would know if they were on their own batteries, or plugged into a car or other power source, and the ones with power would advertise they can route long term. It would not be perfect but it would be much better than what we have now.

But the real lament is that, as fast as the pace of change is in some fields of mobile, here we are 7.5 years after Katrina, having seen several other disasters that wiped out cell nets, and nothing much has changed.

Top Myths of Robocars (and why V2V is not the answer)

There’s been a lot of press on robocars in the last few months, and a lot of new writers expressing views. Reading this, I have encountered a recurring set of issues and concerns, so I’ve prepared an article outlining these top myths and explaining why they are not true.

Perhaps of strongest interest will be one of the most frequent statements — that Vehicle to Vehicle (V2V) communication is important, or even essential, to the deployment of robocars. The current V2V (and Vehicle to Infrastructure) efforts, using the DSRC radio spec are quite extensive, and face many challenges, but to the surprise of many, this is largely orthogonal to the issues around robocars.

So please read The top 10 (or so) myths of robocars.

They are:

  • They won’t be safe
  • The big issue is who will be liable in a crash
  • The cars will need special dedicated roads and lanes
  • This only works when all cars are robocars and human driving is banned
  • We need radio links between cars to make this work
  • We won’t see self-driving cars for many decades
  • It is a long time before this will be legal
  • How will the police give a robocar a ticket?
  • People will never trust software to drive their car
  • They can’t make an OS that doesn’t crash, how can they make a safe car?
  • We need the car to be able to decide between hitting a schoolbus and going over a cliff
  • The cars will always go at the speed limit

You may note that this is not my first myths FAQ, as I also have Common objections to Robocars, written when this site was built. Only one myth is clearly in both lists, a sign of how public opinion has been changing.

CES Report, Road tolling and more

I’m back from CES, and there was certainly a lot of press over two pre-robocar announcements there:

Toyota

The first was the Toyota/Lexus booth, which was dominated by a research car reminiscent of the sensor-stacked vehicles of the DARPA grand challenges. It featured a Velodyne on top (like almost all the high capability vehicles today) and a very large array of radars, including six looking to the sides. Toyota was quite understated about the vehicle, saying they had low interest in full self-driving, but were doing this in order to research better driver assist and safety systems.

The Lexus booth also featured a car that used ultrasonic sensors to help you when backing out of a blind parking space. These sensors let you know if there is somebody coming down the lane of the parking lot.

Audi

Audi did two demos for the press which I went to see. Audi also emphasized that this is long-term concept stuff, meant as research work to enhance their “driver in the loop” systems. They are branding these projects “Piloted Parking” and “Piloted Driving” to suggest the idea of an autopilot with a human overseer. The parking system, however, is unmanned, and was demonstrated in the lot of the Mandarin Oriental, with the demo area closed off to pedestrians.

The parking demo was quite similar to the Junior 3 demo I saw 3 years ago, and no surprise, because Junior 3 was built at the lab which is a collaboration between Stanford and VW/Audi. Junior 3 had a small laser sensor built into it. Instead, the Piloted Parking car had only ultrasonic sensors and cameras, and relied on a laser mounted in the parking lot. In this approach, the car has a wifi link which it uses to download a parking lot map, as well as commands from its owner, and it also gets data from the laser. Audi produced a mobile app which could command the car to move, on its own, into the lot to find a space, and then back to pick up the owner. The car also had a slick internal display with pop-up screen.

The question of where to put the laser is an interesting one. In this approach, you only park in lots that are pre-approved and prepared for self-parking. Scanning lasers are currently expensive, and if parking is your only application, then there are a lot more cars than there are parking lots and it might make sense to put the expensive sensor in the lots. However, if the cars want to have the laser anyway for driving, it’s better to have the sensor in the car. In addition, it’s more likely that car buyers will early adopt than parking lot owners.

In the photo you see the Audi highway demo car sporting the Nevada Autonomous Vehicle testing licence #007. Audi announced they just got this licence, the first car maker to do so. This car offers “Piloted Driving” — the driver must stay alert, while a lane-keeping system steers the car between the lane markers and an automatic cruise control maintains distance from other cars. This is similar to systems announced by Mercedes, Cadillac, VW, Volvo and others. Audi already has announced such a system for traffic jams — the demo car also handled faster traffic.

Audi also announced their use of a new smaller LIDAR sensor. The Velodyne found on the Toyota car and Google cars is a large, roof-mounted device. However, they did not show a car using this sensor.

Audi also had a simulator in their booth showing a future car that can drive in traffic jams, and lets you take a video phone call while it is driving. If you take control of the car, it cuts off the video, but keeps the audio.

Some articles from 2012

Happy 2013: Here are some articles I bookmarked last year that you may find of interest this year.

An NBC video about the AutoNOMOS team in Berlin, which is one of the more advanced academic teams, featuring on-street driving, lane-changes and more.

An article about “dual mode transport” which in this case means all sorts of folding bikes and scooters that fit into cars. This is of interest both as competition to robocars (you can park remotely and scoot in, competing with one of the robocar benefits) and interesting if you consider the potential of giving limited self-driving to some of these scooters, so they can deliver themselves to you and you can take one way trips. The robocar world greatly enables the ability to switch modes on different legs of a trip, taking a car on one leg, a bike on another, a subway on a 3rd and a car back home. Now add a scooter for medium length trips.

Here is an analysis of how the U.S. overpays greatly for public transit buildout as well as other infrastructure projects. This probably plays a role in the poor performance of public transit in the US.

While I’ve pointed to many videos and sources on the Google car, rather than talk about it myself, if you want a fairly long lecture, check out this talk by Sebastian Thrun at the University of Alberta.

The Freakonomics folks have caught the fever, and ask the same question I have been asking about why urban and transportation planners are blind to this revolution in their analysis.

You may have read my short report last year on the Santa Clara Law conference on autonomous vehicles. The Law Review Issue is now out with many of those papers. I found the insurance and liability papers to be of the most use — so many other articles on those topics miss the boat.

The future of the city and Robocar Oriented Development

It’s been a while since I’ve done a major new article on long-term consequences of Robocars. For some time I’ve been puzzling over just how our urban spaces will change because of robocars. There are a lot of unanswered questions, and many things could go both ways. I have been calling for urban planners to start researching the consequences of robocars and modifying their own plans based on this.

While we don’t know enough to be sure, there are some possible speculations about potential outcomes. In particular, I am interested in the future of the city and suburb as robocars make having ample parking less and less important. Today, city planners are very interested in high-density development around transit stops, known as “transit oriented development” or TOD. I now forecast a different trend I will call ROD, or robocar oriented development.

Read the essay Robocar Oriented Development for a view of how the future of the city might be quite interesting, in contrast to the WALL-E car-dominant vision we often see.

Earlier I wrote an essay on robocar changes affecting urban planning which outlined various changes and posed questions about what they meant. In this new essay, I propose answers for some of those questions. This is a somewhat optimistic essay, but I’m not saying this is a certain outcome by any means.

As always, while I do consult for Google’s project, they don’t pay me enough to be their spokesman. This long-term vision is a result of the external work found on this web site, and should not be taken to imply any plans for that project.

Is the California High Speed Rail Plan ignoring accelerating technological change?

(Of late I have been writing a few articles for some other online sites. The following is an article that appeared on Forbes.com. It drew comments both positive and negative, including some angry threads.)

There’s been much debate in the USA about High Speed Rail (HSR) and most notably the giant project aimed at moving 20 to 24 million passengers a year through the California central valley, and in particular from downtown LA to downtown San Francisco in 2 hours 40 minutes.

There’s been big debate about the projected cost ($68B to $99B) and the inability of projected revenues to cover interest on the capital let alone operating costs. The project is beginning with a 130 mile segment in the central valley to make use of federal funds. This could be a “rail to nowhere” connecting no big towns and with no trains on it. By 2028 they plan to finally connect SF and LA.

The debate about the merits of this train is extensive and interesting, but its biggest flaw is that it is rooted in the technology of the past and present day. Indeed, HSR itself is around 50 years old, and the 350 kph top speed of the planned line was attained by the French TGV over 30 years ago.

The reality of the world, however, is that technology is changing very fast, and in some fields like computing at an exponential rate. Transportation has not been subject to such rapid rates of change, but that protection is about to end. HSR planners are comparing their systems to other 20th century systems and not planning for what 2030 will actually hold.

At Singularity University, our mission is to study and teach about the effects of these rapidly changing technologies. Here are a few areas where new technology will disrupt the plans of long-term HSR planners:

Self-Driving Cars

Cars that can drive and deliver themselves left the pages of science fiction and entered reality in the 2000s thanks to many efforts, including the one at Google. (Disclaimer: I am a consultant to, but not a spokesman for that team.) Readers of my own blog will know it is one of my key areas of interest. By 2030 such vehicles are likely to be common, and in fact it’s quite probable they will be able to travel safely on highways at faster speeds than we trust humans to drive. They could also platoon to become more efficient.

Their ability to deliver themselves is both boon and bane to rail transit. They can offer an excellent “last/first mile” solution to take people from their driveways to the train stations — for it is door to door travel time that people care about, not airport-to-airport or downtown-to-downtown. The HSR focus on a competitive downtown-to-downtown time ignores the fact that only a tiny fraction of passengers will want that precise trip.

Self-delivering cars could offer the option of mobility on demand in a hired vehicle that is the right vehicle for the trip — often a light, efficient single passenger vehicle that nobody would buy as their only car today. These cars will offer a more convenient and faster door-to-door travel time on all the modest length trips (100 miles or less) in the central valley. Because the passenger count estimates for the train exceed current air-travel counts in the state, they are counting heavily on winning over those who currently drive cars in the central valley, but they might not win many of them at all.

The cars won’t beat the train on the long haul downtown SF to downtown LA. But they might well be superior or competitive (if they can go 100mph on I-5 or CA-99) on the far more common suburb-to-suburb door to door trips. And this will be a private vehicle with no schedule to worry about, a nice desk and screen, and all the usual advantages of a private vehicle.

Improved Air Travel

The air travel industry is not going to sit still. The airlines aren’t going to just let their huge business on the California air corridor disappear to the trains the way the HSR authority hopes. These are private companies, and they will cut prices, and innovate, to compete. They will find better solutions to the security nightmare that has taken away their edge, and they’ll produce innovative products we have yet to see. The reality is that good security is possible without requiring people arrive at airports an hour before departure, if we are driven to make it happen. And the trains may not remain immune from the same security needs forever.

On the green front, we already see Boeing’s new generation of carbon fiber planes operating with less fuel. New turboprops are quiet and much more efficient, and there is more to come.

The fast trains and self-driving cars will help the airports. Instead of HSR from downtown SF to downtown LA, why not take that same HSR just to the airport, and clear security while on the train to be dropped off close to the gate. Or imagine a self-driving car that picks you up on the tarmac as you walk off the plane and whisks you directly to your destination. Driven by competition, the airlines will find a way to take advantage of their huge speed advantage in the core part of the journey.

Self-driving cars that whisk people to small airstrips and pick them up at other small airstrips also offer the potential for good door-to-door times on all sorts of routes away from major airports. The flying car may never come, but the seamless transition from car to plane is on the way.

We may also see more radical improvements here. Biofuels may make air travel greener, and lighter weight battery technologies, if they arrive thanks to research for cars, will make the electric airplane possible. Electric aircraft are not just greener — it becomes more practical to have smaller aircraft and do vertical take-off and landing, allowing air travel between any two points, not just airports.

These are just things we can see today. What will the R&D labs of aviation firms come up with when necessity forces them towards invention?

Improved Rail

Rail technology will improve, and in fact already is improving. Even with right-of-way purchased, adaptation of traditional HSR to other rail forms may be difficult. Expensive maglev trains have seen only limited deployment, and many, including the famous Elon Musk, have proposed enclosed tube trains (evacuated or pneumatic), also expensive and still theoretical, which could do the trip faster than planes. How modern will the 1980s-era CHSR technology look to 2030s engineers?

Telepresence

Decades after its early false start, video conferencing is going HD and starting to take off. High end video meeting systems are already causing people to skip business trips, and this trend will increase. At high-tech companies like Google and Cisco, people routinely use video conferencing to avoid walking to buildings 10 minutes away.

Telepresence robots, which let a remote person wander around a building, go up to people and act more like they are really there, are taking off, and will make more and more people decide that even a 3 hour one-way train trip or plane trip is too much. This isn’t a certainty, but it would be wrong to bet that every trip taken today will still be taken in the future.

Sprawl

Like it or not, in many areas, sprawl is increasing. You can’t legislate it away. While there are arguments on both sides as to how urban densities will change, it is again foolish to bet that sprawl won’t increase in many areas. More sprawl means even less value in downtown-to-downtown rail service, or even in big airports. Urban planners are now realizing that the “polycentric” city which has many “downtowns” is the probable future in California and many other areas.

That Technology Nobody Saw Coming

While it may seem facile to say it, it’s almost assured that some new technology we aren’t even considering today will arise by 2030 which has some big impact on medium distance transportation. How do you plan for the unexpected? The best way is to keep your platform as simple as possible, and delay decisions and implementations where you can. Do as much work with the knowledge of 2030 as you can, and do as little of your planning with the knowledge of 2012 as you can.

That’s the lesson of the internet and the principle known as the “stupid network.” The internet itself is extremely simple and has survived mostly unchanged from the 1980s while it has supported one of history’s greatest whirlwinds of innovation. That’s because of the simple design, which allowed innovation to take place at the edges, by small innovators. Simpler base technologies may seem inferior but are actually superior because they allow decisions and implementations to be delayed to a time when everything can be done faster and smarter. Big projects that don’t plan this way are doomed to failure.

None of the future technologies outlined here is certain to pan out as predicted, but it's a very bad bet to assume none of them will. California planners and the CHSR authority need to analyze how HSR would operate in a world of 2030s technology and sprawl, not today's.

Mercedes cruising S-Class, NHTSA and Google

While there had been many rumours that Mercedes would introduce limited self-driving in the 2013 S-Class, that was not to be. Plans for the 2014 S-Class, however, seem much more firm. This car will feature “steering assist,” which uses stereo cameras and radar to follow lanes and follow cars, along with standard ACC functions. Reportedly it will operate at very high speeds.

There’s also a nice article on the Mercedes test facility. They are well known for their interesting test facilities, and this one uses an inflatable car being towed on a test track, making it safe to hit the car if there is a problem.

Media sources are also reporting that Google (disclaimer: they are a client) has hired Ron Medford, deputy administrator of the National Highway Traffic Safety Administration (NHTSA), which sets vehicle safety standards and is currently researching how to certify self-driving cars.

Foresight Institute technical conference is Jan 11 in Palo Alto

I’m on the board of the Foresight Institute, which at over 25 years old has been promoting nanotech since long before people knew the word. This January, we will be holding our technical conference on nanotechnology and related fields. Foresight’s focus is on the potential for molecular manufacturing — doing things at the atomic level — and not simply on fine structure materials.

It may surprise you just how much research is going on in the field of atomically precise manufacturing, and the positive results that are coming from it. Today people (including me) are excited by 3-D printers that can reproduce macroscopic shapes with good precision, but the holy grail is to build structures at the atomic level, which has the potential to produce anything that can be formed, cheaply and in small volumes.

Foresight hosts two conferences — the other is a more general futurist conference on the implications of these technologies, while this one offers the results of in-depth research. Check out the program page for a list of speakers including Fraser Stoddart, George Church, John Randall, William Goddard and many others.

Update: Blog readers can get a $100 discount on registration with this code: 2013QDFP

Nate Silver is Not God and other political musings

In the wake of the election, the big nerd story is the perfect stats-based prediction that Nate Silver of the 538 blog made on the results in every single state. I was following the blog and, like everyone, am impressed with his work. The perfection gives the wrong impression, however. Silver would be the first to point out that he predicted Florida as very close with a slight lean for Obama, and while that is what happened, that's really just luck. His actual prediction was that it was too close to call. But people won't see that; they see the perfection. I hope he realizes he should try to downplay this. For his own sake, if he doesn't, he has nowhere to go but down in 2014 and 2016.
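To see why a perfect scorecard involves luck: even if every individual state call is probably right, the chance of getting all of them right is the product of the per-call probabilities. A minimal sketch, with invented probabilities rather than Silver's actual numbers:

```python
# Sketch (probabilities invented, not Silver's actual numbers): even
# when each call is probably right, a clean sweep of all 51 contests
# (50 states plus DC) is far from assured.

from math import prod

safe = [0.99] * 44                                  # essentially certain calls
swing = [0.9, 0.85, 0.8, 0.75, 0.7, 0.6, 0.55]      # e.g. Florida near a coin flip

p_perfect = prod(safe + swing)
print(f"Chance of a perfect scorecard: {p_perfect:.1%}")  # about 7%
```

With these made-up numbers a perfect map happens less than one time in ten, so it says as much about the luck of the close calls as about the model.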

But the second reason is stronger: people will put even more faith in polls. Perhaps not even faith, but reasoned belief, because polls are indeed getting more accurate. Good polls taken far in advance are probably accurate about what the electorate thinks at that moment, but what the electorate thinks far in advance is a poor predictor of how it will eventually vote. So the public and politicians should always be wary about what the polls say before the election.

Silver’s triumph means they may not be. And as the metaphorical Heisenberg predicts, the observations will change the results of the election.

There are a few ways this can happen. First, people change their votes based on polls. They are less likely to vote if they think the election is decided, and they sometimes cast protest votes when they feel their vote won't change things. Conversely, a close poll is one way to increase turnout, as both sides push their voters to make the difference. People are going to think the election is settled because 538 has said so.

The second big change has already been happening: politicians change their platforms due to the polls. Danny Hillis observed some years ago that the popular vote is almost always a near tie for a reason. In a two-party system, each side regularly runs polls. If the polls show them losing, they move their position in order to get to 51%. They don't want to move to 52%, as that's more change than they really want, but they can't stay below 50% or they lose the whole game. Both sides do this, and to some extent the one with better polling and strategy wins the election. We get two candidates, each with a carefully chosen position designed (according to their own team) to just beat the opposition, and the actual result is closer to a random draw driven by chaotic factors.
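A toy simulation (my illustration, not Hillis's actual model) shows the dynamic: whichever party polls under 50% nudges its platform toward the median voter just enough to edge ahead, and both platforms converge until the vote splits almost 50/50.

```python
# Toy model of poll-driven convergence: voters sit uniformly on a 0..1
# ideology axis and vote for the nearer platform; the trailing party
# moves toward the center each polling cycle.

import random

def vote_share(a, b, n=100_000, seed=1):
    """Share of n uniform voters on [0,1] closer to position a than b."""
    rng = random.Random(seed)
    return sum(abs(v - a) < abs(v - b) for v in (rng.random() for _ in range(n))) / n

left, right = 0.20, 0.90          # starting platforms
for poll in range(12):            # each iteration is one polling cycle
    share = vote_share(left, right)
    if share < 0.5:
        left += 0.05              # trailing party moves toward the center...
    else:
        right -= 0.05             # ...but only as far as it has to
    print(f"poll {poll}: left={left:.2f} right={right:.2f} left share={share:.3f}")
```

After a few cycles the shares hover right around 50%, even though the platforms started far apart.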

Well, not quite. As Silver shows, the electoral college stops that from happening. The electoral college means different voters have different value to the candidates, and it makes the system pretty complex. Instead of aiming for a total share of voters, you have to worry that position A might help you in Ohio but hurt you in Florida, and the electoral votes come in big chunks, which makes the effect of swing states more chaotic. Thus poll analysis can tell you who will win, but not so readily how to tweak things to make the winner be you. The college turns small differences in overall support into huge differences in electoral votes.
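A second toy sketch (ten invented states, not real data) shows the chunkiness: a uniform two-point swing in the popular vote flips whole winner-take-all blocks at once.

```python
# Toy sketch with invented states: (electoral_votes, baseline share for
# candidate A). Winner-take-all means small national swings move big
# chunks of electoral votes.

states = [(55, 0.60), (38, 0.42), (29, 0.51), (29, 0.49), (20, 0.55),
          (20, 0.45), (18, 0.50), (16, 0.52), (15, 0.48), (10, 0.58)]
total_ev = sum(ev for ev, _ in states)

for swing in (-0.02, -0.01, 0.0, 0.01, 0.02):   # uniform national swing
    ec = sum(ev for ev, share in states if share + swing > 0.5)
    # EV-weighted average share, a rough proxy for the popular vote
    pop = sum(ev * (share + swing) for ev, share in states) / total_ev
    print(f"swing {swing:+.0%}: popular ~{pop:.1%}, electoral votes {ec}/{total_ev}")
```

In this made-up country, the popular vote moves four points end to end while the electoral count swings from 85 to 177 out of 250.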

In Danny’s theory, the two candidates do not have to be the same, they just have to be the same distance from a hypothetical center. (Of course to 3rd parties the two candidates do tend to look nearly identical but to the members of the two main parties they look very different.)

Show me the money?

Many have noted that this election may have cost $6B but produced a very status quo result. Huge money was spent, but opposed forces also spent their money, and the arms race just led to a similar balance of power. Except that a lot of rich donors spent a lot of their money and got valuable access to politicians for it, and some TV stations in Ohio and a few other states made a killing. The fear that corporate money would massively swing the process does not appear to have been borne out, but it's clear that influence was bought.

I’m working on a solution to this, however. More to come later on that.

Ballot Propositions

While there have been some fairly good ballot propositions (such as last night's wins for marijuana and marriage equality), I am starting to doubt the value of the system itself. However much you may like particular propositions, if half of all propositions are negative in value, the system should be scrapped. Indeed, if only about 40% are negative, it should still be scrapped, because of the huge cost of the system itself.
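To make that arithmetic concrete, here is a back-of-envelope expected-value sketch; all the magnitudes are invented for illustration.

```python
# Back-of-envelope sketch (all magnitudes invented): if bad propositions
# do roughly as much harm as good ones do good, the system's expected
# value goes negative before half the propositions are bad, once the
# cost of running it is counted.

avg_benefit = 1.0      # value of a good proposition (arbitrary units)
avg_harm = 1.0         # harm of a bad one, assumed comparable
system_cost = 0.2      # campaign/administrative cost per proposition

for bad_fraction in (0.3, 0.4, 0.5):
    ev = (1 - bad_fraction) * avg_benefit - bad_fraction * avg_harm - system_cost
    print(f"{bad_fraction:.0%} bad: expected value per proposition = {ev:+.2f}")
```

With harm comparable to benefit and a modest cost per campaign, the break-even point lands right around 40% bad propositions; past that, the system destroys value even though individual wins feel good.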

Larry Niven and Greg Benford on "Bowl of Heaven" and Big, Dumb Objects

Last month, I invited Gregory Benford and Larry Niven, two of the most respected writers of hard SF, to come and give a talk at Google about their new book “Bowl of Heaven.” Here's a YouTube video of my session. They reviewed the history of SF about “big dumb objects”: stories like Niven's Ringworld, where a huge construct is a central part of the story.
