The rise of the small and narrow vehicle

One of the more interesting consequences of a robotic taxi “mobility on demand” service is that it opens up all sorts of new areas of car design. When you are just summoning a vehicle for one trip, you can be sent a vehicle that is well matched to that trip. Today almost all of us drive 5-passenger sedans or larger, whether we are alone, with a single passenger or in a group. Many people always travel in an SUV or minivan, even on trips that have no need of one.

The ability to use small, light vehicles means the ability to make transportation much more efficient. While electric cars are a good start (at least in places without coal-based electricity), the reality is that today’s electric cars are still sedans, and in fact are heavy due to their batteries. As such they use 250 to 350 watt-hours/mile. That’s good, but not great. At the national grid average, 300 Wh/mile works out to around 3,000 BTUs/mile of primary energy, the equivalent of about 37mpg. Good, and cleaner if generated from natural gas, but we can do a lot more.
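The arithmetic behind that equivalence is worth spelling out. Here is a minimal sketch; the grid efficiency and gasoline energy figures are standard reference values I am assuming, not numbers from any particular study:

```python
# Back-of-envelope check of the 300 Wh/mile -> ~37 mpg-equivalent claim.
# All constants below are assumed reference values.

WH_PER_MILE = 300
BTU_PER_KWH = 3412            # energy conversion
GRID_EFFICIENCY = 0.34        # rough US average, fuel heat -> delivered electricity
BTU_PER_GALLON_GAS = 115_000  # typical energy content of gasoline

btu_at_wall = WH_PER_MILE / 1000 * BTU_PER_KWH  # ~1,020 BTU/mile at the plug
btu_primary = btu_at_wall / GRID_EFFICIENCY     # ~3,000 BTU/mile of fuel burned

mpg_equivalent = BTU_PER_GALLON_GAS / btu_primary
print(f"{btu_primary:.0f} BTU/mile primary = {mpg_equivalent:.0f} mpg equivalent")
```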

Half-width vehicles have another benefit — they don’t take up much room on the road, or in parking/waiting. Two half-width vehicles that find one another on the road can pair up to take only one lane space. A road that’s heavy with half-width vehicles (as many are in the developing world) can handle a lot more traffic. Rich folks don’t tend to buy these vehicles, but they would accept one as a taxi if they are alone. Indeed, a half-width face-to-face vehicle should be very nice for 2 people.

The problem with half-width vehicles (about 1.4m or 4.5 feet, if you’re going to fit two in a 12’ lane using robotic precision) is that a narrow stance just isn’t that stable, not at decent speeds. You want a wide stance to corner. One answer to that is the ability to bank, which two-wheeled vehicles do well, but which requires special independent suspension to do with 3 or 4 wheels. Two wheels are great for some purposes, but 3 and 4 grip the road better, particularly if a wet or slippery patch is encountered.
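The lean needed for a balanced turn follows from simple physics. A minimal sketch, with illustrative numbers of my own:

```python
import math

def bank_angle_deg(speed_mps: float, turn_radius_m: float) -> float:
    """Bank angle that aligns the net force with the vehicle's vertical
    axis in a steady turn: tan(theta) = v^2 / (g * r)."""
    g = 9.81
    return math.degrees(math.atan(speed_mps ** 2 / (g * turn_radius_m)))

# A 50 km/h (13.9 m/s) turn of 60 m radius calls for about an 18-degree lean;
# the occupant then feels ~1.05 g straight down and nothing sideways.
print(f"{bank_angle_deg(13.9, 60.0):.1f} degrees")
```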

Quite a number of 3- and 4-wheelers with independently adjustable wheels have been made. Toyota’s recent i-Road concept exemplifies this well. There are, however, a number of vehicles that are not concepts, and this (rather long) Gizmag video provides a summary of a variety of real and concept vehicles in this space, as well as enclosed motorcycles and scooters, including the Nissan Land Glider, the VW 1L, the Twizy, the Tango, the Lumeneo Smera and many others. Skip to about 13 minutes to see many of the 3-wheelers. Another vehicle I like is the Quadro — watch this video of the 4-wheel version. These vehicles are aimed more at the motorcycle market and are open, while I suspect the single-person robocar will be an enclosed vehicle.

I also wrote earlier about efforts on two wheels, like the Twill concept vehicle. Other recent efforts have included the gyro-stabilized Lit Motors C-1, which can be fully enclosed on two wheels because you don’t have to stick your legs out.

I suspect 4-wheeled banking vehicles are the ideal solution, and the technology is surprisingly far along. Many companies prefer to make 3-wheeled vehicles because those currently get classed as motorcycles and require far less work to meet regulations. These exemptions are reportedly ending soon, and so the effort can shift to 4 wheels, which should offer the most stability.

The ability to bank is important not just to stay stable with a narrow stance. Banking also means you can tilt the passenger to make turns more comfortable, in that the force vector will be mostly up and down rather than side to side. In a turn it feels more like getting heavy and light than being shifted sideways. Some people, however, will have trouble with motion sickness if they are not themselves looking out the window and feeling part of the banking move. Being able to tilt forward and back can also have value, so that starts and stops likewise produce up-and-down force vectors rather than forward-and-back ones. While this is not yet demonstrated, it may be possible to make vehicles which cause minimal discomfort to many passengers during turns, stops and roundabouts.

Roundabouts seem like a great idea for robocars in many ways, since you don’t need stop signs or lights, and robocars should be able to insert themselves into gaps in traffic with precision and confidence. Frequent roundabouts, however, would be disconcerting with all the turning and speed changes, to the point that many would prefer just a straight road with timed traffic lights, so that a clever car that knows the timing never hits a red.

Another entry in the narrow vehicle field that got a lot of attention is the autonomous Hitachi Ropits. The Ropits — here is a video — is a narrow vehicle with small wheels, and is able to be autonomous because it is super-slow — it goes only 3.7mph, so you can keep up with it at a brisk walk — and is meant to travel on sidewalks and pedestrian lanes, more of a mobility device for the aged than a robocar. However, it is a new entry in the autonomous vehicle pantheon from a new player.

The big question that remains about these vehicles is crash safety. Classed as motorcycles, they do not receive the same sort of crash testing. In a world that is mostly robocars, one could argue that you don’t need the same levels of crash safety, but we aren’t there yet. All is not lost, however. Recently I sat in a prototype of the Edison2 Very Light Car. The VLC is a 4-seater with a narrow body but a wide stance, for handling. This vehicle has been crash tested with good results, and it could be made with independent suspension, banking and a narrower stance if the market wanted that.

Small vehicles, just 4.5 feet wide and 10-12 feet long, can make a huge difference. First of all, they are inherently (except the Tango) going to be light, and light weight is the most important factor in making them efficient. But they will also take up less space on the road, able to go 2 to a lane (or even lane-split in some places). They will also take up much less space parking. The combination of their small size (about 1/3 the footprint of a typical car) and their ability as robocars to pack close together “valet style” means you will be able to fit 4 or 5 of them in the parking lot area that today fits a single car in a non-valet lot. As noted, while many robocars will not park at all because they will be taxis that head off to get their next fare, those that do wish to park will be able to do it at vastly greater densities than we have today, and the consequences of that are big.
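As a rough check on the parking claim (the stall and aisle dimensions here are typical US figures I am assuming, not from any survey):

```python
# Conventional lot: a 9' x 18' stall plus that stall's share of a 24' aisle.
conventional_sqft = 9 * 18 + 9 * 12   # 270 sq ft per car

# Narrow robocars packed valet-style, nose to tail with no aisles, since
# any blocked-in car can ask the others to shuffle out of the way.
narrow_sqft = 4.5 * 11                # ~50 sq ft footprint

print(f"{conventional_sqft / narrow_sqft:.1f} narrow robocars per conventional space")
# -> about 5, consistent with the 4-or-5 figure above
```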

There are a few other options for increased stability with a normally narrow stance. These might include:

  • Low center of gravity — this is what the Tango does, filling the very bottom with lead-acid batteries. Passengers might sit lower — some vehicle designs involve lowering after the passenger gets in.
  • Variable stance: the ability to widen the stance with an extendable axle, so the vehicle takes a whole lane in places that demand that cornering ability and stability.
  • Extra wheel: The ability to temporarily deploy an extra wheel (probably not a drive wheel) to one side or both to temporarily increase stability. This wheel might take all the weight on that side, or balance with the others. Vehicles side-by-side could even coordinate to still fit in a lane but that sounds risky.
  • Just go slow: narrow-stance vehicles might be used only on lower-speed urban routes, and take corners fairly slowly.
  • Gyroscopes, under robotic control.

It’s important to consider that the risk of instability in a narrow vehicle is mostly a risk for human drivers, who are used to wide stances and may misjudge the physics. A robocar, with full knowledge of the vehicle’s characteristics and the shape of the road, simply won’t try any turn that would tip it, and it won’t pick routes whose turns would require the vehicle to go so slowly as to impede traffic. Knowledge of road traction can complete this sort of analysis.
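A sketch of the kind of pre-turn check this implies, with purely illustrative vehicle dimensions:

```python
import math

def max_no_tip_speed_mps(track_m: float, cg_height_m: float,
                         turn_radius_m: float, margin: float = 0.7) -> float:
    """Quasi-static rollover limit: a rigid vehicle starts to tip when lateral
    acceleration exceeds g * (track/2) / cg_height. The margin keeps the plan
    well below that limit (and below tire slip on poor traction)."""
    g = 9.81
    a_limit = g * (track_m / 2) / cg_height_m
    return math.sqrt(margin * a_limit * turn_radius_m)

# A narrow vehicle: 1.1 m track, 0.6 m center-of-gravity height, 30 m corner.
v = max_no_tip_speed_mps(1.1, 0.6, 30.0)
print(f"take this corner at no more than {v * 3.6:.0f} km/h")
```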

V2V and connected car part 3: Broadcast data

Earlier in part one I examined why it’s hard to make a networked technology based on random encounters. In part two I explored how V2V might be better achieved by doing things phone-to-phone.

For this third part of the series on connected cars and V2V I want to look at the potential for broadcast data and other wide area networking.

Today, the main thing that “connected car” means in reality is cell phone connectivity. That began with “telematics” — systems such as OnStar — but has grown to using data networks to provide apps in cars. The ITS community hoped that DSRC would provide data service to cars, and that this would be one reason for people to deploy it, but the cellular networks took that over very quickly. Unlike DSRC, which is, as the name says, short range, the longer range of cellular data means you are connected most of the time — and all of the time in some places — and people will accept nothing less.

I believe there is a potential niche for broadcast data to mobile devices and cars. This would be a high-power shared channel. One obvious way to implement it would be to use a spare TV channel and the new ATSC-M/H mobile standard. An ATSC channel provides about 19 megabits per second. Because TV channels can be broadcast with very high power transmitters, they reach almost everywhere in a large region around the transmitter. For broadcast data, that’s good.

Today we use the broadcast spectrum for radio and TV. It turns out that this makes sense for very popular items, but it’s a waste for most shows, and largely a waste for music — people are quite satisfied instead with getting music and podcasts that are pre-downloaded when their device is connected to wifi or cellular. The amount of data we need live is pretty small — generally news, traffic and sports. (Call-in talk shows need to be live, but their audiences are not super large.)

A nice broadcast channel could transmit a lot of data of interest to cars:

  • Timing and phase information on all traffic signals in the broadcast zone.
  • Traffic data, highly detailed
  • Alerts about problems, stalled vehicles and other anomalies.
  • News and other special alerts — you could fit quite a few voice-quality station streams into one 19 megabit channel.
  • Differential GPS correction data, and even supplemental GPS signals.

The latency of the broadcast would be very low, of course, but what about the latency of uploaded signals? This turns out not to be a problem for traffic lights, because they don’t change suddenly on a few milliseconds’ notice, even if an emergency vehicle is sending them a command to change. If you know the signal is going to change 2 seconds in advance, you can transmit the time of the change over a long-latency channel. If need be, a surprise change can even be delayed until the ACK is seen on the broadcast channel, within certain limits. Most emergency changes have many seconds before the light needs to change.
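The trick is that the broadcast carries the scheduled time of the change, not a “change now” command, so uplink delay costs nothing. A minimal sketch (the message format here is my own invention, not any ITS standard):

```python
import time
from dataclasses import dataclass

@dataclass
class SignalPhaseMessage:
    intersection_id: int
    current_phase: str   # e.g. "green-NS"
    next_phase: str      # e.g. "red-NS"
    change_at: float     # absolute UTC timestamp of the scheduled change

def seconds_until_change(msg: SignalPhaseMessage) -> float:
    # Because change_at is an absolute time rather than a relative offset,
    # a message that spent 500 ms in the uplink and broadcast queue loses
    # no accuracy by the time a car decodes it.
    return msg.change_at - time.time()

msg = SignalPhaseMessage(42, "green-NS", "red-NS", change_at=time.time() + 2.0)
print(f"light changes in {seconds_until_change(msg):.2f} s")
```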

Stalled car warnings also don’t need low latency. If a car finds itself stalled on the road, it can send a report over the cellular modem that’s already inside so many cars (or over the driver’s phone). This may take a few seconds to get into the broadcast stream, but then it will be instantly received. A stalled car is a problem that lasts minutes; you don’t need to learn about it in the first few milliseconds.

Indeed, this approach can even be more effective. Because of the higher power of the radios involved, information can travel between vehicles in places where line-of-sight communications would not work, or would actually arrive later than the server-relayed signal. This applies even in the “classic” DSRC example of a car running a red light. While a line-of-sight communication is the fastest way to send this warning, the main time we want it is on blind corners, where line of sight may have problems. This is a perfect case for longer-range, higher-power communications on longer wavelengths.
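A quick sanity check on what relay latency costs (the latency figure is a rough assumption of mine):

```python
# Even a server-relayed warning is fast relative to car motion.
speed_kmh = 50
relay_latency_s = 0.15  # assumed: cell uplink + insertion into the broadcast
distance_m = speed_kmh / 3.6 * relay_latency_s
print(f"a red-light runner travels only {distance_m:.1f} m during the relay")
# ~2 m -- while a blocked line-of-sight link might never deliver at all
```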

Most phones don’t have ATSC-M/H and neither do cars. But receiver chips for this are cheap and getting cheaper, and it’s a consumer technology that would not be hard to deploy. However, this sort of broadcast standard could also be done in the cellular bands, at some cost in bandwidth for them.

19 megabits is actually a lot, and since traffic incidents and light changes are few, a fair bit of bandwidth would be left over. It could be sold to companies who want a cheaper way to update phones and cars with more proprietary data, including map changes, their own private traffic reports and so on. Anybody with a lot of customers might find this more efficient. Very popular videos and audio streams for mobile devices could also use the extra bandwidth. If only a few people want something, point-to-point is the answer, but once something is wanted by many, broadcast can be the way to go.
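A rough budget sketch of how far those 19 megabits stretch, with illustrative numbers of my own for the city and the streams:

```python
CHANNEL_BPS = 19_000_000  # ~19 Mbit/s ATSC payload

# Say a metro area has 5,000 signalized intersections, each broadcasting a
# 100-byte timing/phase message once per second:
signal_bps = 5_000 * 100 * 8          # 4.0 Mbit/s

# Plus 200 voice-quality news/talk streams at 32 kbit/s each:
audio_bps = 200 * 32_000              # 6.4 Mbit/s

leftover = CHANNEL_BPS - signal_bps - audio_bps
print(f"signals {signal_bps/1e6:.1f} + audio {audio_bps/1e6:.1f} Mbit/s, "
      f"leaving {leftover/1e6:.1f} Mbit/s for maps and bulk data")
```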

What else might make sense to broadcast to cars and mobile phones in a city? While I’m not keen to take away some of the nice TV white spaces, there are many places with lots of spare channels if the allocation is designed correctly.

Solving V2V Part 2: Make it Phone to Phone

Last week, I began in part 1 by examining the difficulty of creating a new network system in cars when you can only network with people you randomly encounter on the road. I contend that nobody has had success in making a new networked technology when faced with this hurdle.

This has been compounded by the fact that the radio spectrum at 5.9GHz which was intended for use in short-range communications (DSRC) from cars is instead going to be released as unlicenced spectrum, like the WiFi bands. I think this is a very good thing for the world, since unlicenced spectrum has generated an unprecedented radio revolution and been hugely beneficial for everybody.

But surprisingly, it might be something good for car communications too. The people in the ITS community certainly don’t think so. They’re shocked, and see this as a massive setback. They’ve invested huge amounts of effort and many careers into the DSRC and V2V concepts, and see it all as being taken away or seriously impeded. But here’s why it might be the best thing ever to happen to V2V.

The innovation in mobile devices and wireless protocols of the last 1-2 decades is a shining example to all technology. Compare today’s mobile handsets with 10 years ago, when the Treo was just starting to make people think about smartphones. (Go back a couple more years and there weren’t any smartphones at all.) Every year there are huge strides in hardware and software, and as a result, people are happily throwing away perfectly working phones every 2 years (or less) to get the latest, even without subsidies. Compare that to the electronics in cars. There is little in your car that wasn’t planned many years ago, and usually nothing changes over the 15-20 year life of the car. Car vendors are just now toying with the idea of field upgrades and over-the-air upgrades.

Car vendors love to sell you fancy electronics for your center console. They can get thousands of dollars for the packages — packages that often don’t do as much as a $300 phone and quickly become obsolete. But customers have had enough, and are now forcing the vendors to give up owning that online experience in the car and cede it to the phone. Vendors are even getting ready to cede their “telematics” (things like OnStar) to customer phones.

I propose this: Move all the connected car (V2V, V2I etc.) goals into the personal mobile device. Forget about the mandate in cars.

The car mandate would have started getting deployed late in this decade. And it would have been another decade before deployment got seriously useful, and another decade until deployment was over 90%. In that period, new developments would have made all the decisions of the 2010s wrong and obsolete. In that same period, personal mobile devices would have gone through a dozen complete generations of new technology. Can there be any debate about which approach would win?

The importance of serial media vs. sampled and Google Reader

The blogging world was stunned by the recent announcement by Google that it will be shutting down Google Reader later this year. Due to my consulting relationship with Google I won’t comment too much on their reasoning, though I will note that I believe it’s possible the majority of regular readers of this blog, and many others, come via Google Reader, so this shutdown has a potentially large effect here. Of particular note is Google’s statement that usage of Reader has been in decline, and that social media platforms have become the way to reach readers.

Those platforms are indeed effective. I have certainly noticed that when I make blog posts and put up updates about them on Google Plus and Facebook, it is common for more people to comment on the social network than here on the blog. It’s easy, and indeed more social. People tend to comment in the community in which they encounter an article, even though in theory the most visibility should be at the root article, where people arrive from all origins.

However, I want to talk a bit about online publishing history, including USENET and RSS, and the importance of concepts within them. In 2004 I first commented on the idea of serial vs. browsed media, and later expanded this taxonomy to include sampled media such as Twitter and social media in the mix. I now identify the following important elements of an online medium:

  • Is it browsed, serial or to be sampled?
  • Is there a core concept of new messages vs. already-read messages?
  • If serial or sampled, is it presented in chronological order or sorted by some metric of importance?
  • Is it designed to make it easy to write and post or easy to read and consume?

Online media began with E-mail and the mailing list in the 60s and 70s, with the 70s seeing the expansion to online message boards including PLATO, BBSs, CompuServe and USENET. E-mail is a serial medium. In a serial medium, messages have a chronological order, and there is a concept of messages that are “read” and “unread.” A good serial reader, at a minimum, has a way to present only the unread messages, typically in chronological order. You can thus process messages as they come, and when you are done with them, they move out of your view.

E-mail is largely used to read messages one at a time, but the online message boards, notably USENET, advanced this with the idea of moving messages from unread to read in bulk. A typical USENET reader presents the subject lines of all threads with new or unread messages. The user selects which ones to read — almost never all of them — and after this is done, all the messages, even those that were not actually read, are marked as read and not normally shown again. While it is generally expected that you will read all the messages in your personal inbox one by one, with message streams it is expected you will read only those of particular interest, though this depends on the volume.
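The core mechanics are simple enough to sketch. This is my own minimal illustration of the serial-reading model, not the code of any particular newsreader:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    msg_id: str
    subject: str
    timestamp: float

@dataclass
class SerialFeed:
    messages: list[Message] = field(default_factory=list)
    read_ids: set[str] = field(default_factory=set)

    def unread(self) -> list[Message]:
        # The defining serial-reader view: only unread items, oldest first.
        return sorted((m for m in self.messages if m.msg_id not in self.read_ids),
                      key=lambda m: m.timestamp)

    def catch_up(self) -> None:
        # USENET-style bulk operation: everything becomes "read",
        # including messages never actually opened.
        self.read_ids.update(m.msg_id for m in self.messages)
```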

Echoes of this can be found in older media. With the newspaper, almost nobody would read every story, though you would skim all the headlines. Once done, the newspaper was discarded, even the stories that were skipped over. Magazines were similar, but being less frequent, more of their stories would actually be read.

USENET newsreaders were the best at handling this mode of reading. The earliest ones had keyboard interfaces that allowed touch typists to process many thousands of new items in just a few minutes, glancing over headlines, picking stories and then reading them. My favourite was TRN, based on RN by Perl creator Larry Wall and enhanced by Wayne Davison (whom I hired at ClariNet in part because of his work on that.) To my great surprise, even as the USENET readers faded, no new tool emerged capable of handling a large volume of messages as quickly.

In fact, the 1990s saw most people switch to browsed media. Most web message boards were quite poor and slow to use; many did not even do the most fundamental thing of remembering what you had read and offering a “what’s new for me?” view. In reaction to the rise of browsed media, people wishing to publish serially developed RSS. RSS was a bit of a kludge, in that your reader had to regularly poll every site to see if something was new, but outside of mailing lists, it became the most usable way to track serial feeds. In time, people also learned to like doing this online, using tools like Bloglines (which became the leader and then foolishly shut down for a few months) and Google Reader (which also became the leader and now is shutting down). Online feed readers allow you to roam from device to device and read your feeds, and people like that.
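The polling kludge is easy to see in code. Here is a sketch using the feedparser library, with a placeholder feed URL; conditional GETs (ETag/Last-Modified) make each poll cheap, but every reader must still poll every feed on a timer:

```python
import time
import feedparser  # pip install feedparser

feeds = {"http://example.com/rss": {"etag": None, "modified": None, "seen": set()}}

def poll_once():
    for url, state in feeds.items():
        d = feedparser.parse(url, etag=state["etag"], modified=state["modified"])
        if getattr(d, "status", None) == 304:
            continue  # server says nothing changed since last poll
        state["etag"] = getattr(d, "etag", None)
        state["modified"] = getattr(d, "modified", None)
        for entry in d.entries:
            key = entry.get("id") or entry.get("link")
            if key and key not in state["seen"]:
                state["seen"].add(key)
                print("new item:", entry.get("title", "(untitled)"))

while True:
    poll_once()
    time.sleep(900)  # every reader, every feed, every 15 minutes
```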

V2V vs. the paths to a successful networked technology (Part 1)

A few weeks ago, in my article on myths, I wrote about why the development of “vehicle to vehicle” (V2V) communications is mostly orthogonal to that of robocars. That’s very far from the view of many authors, and of most of those in the ITS community. I remain puzzled by the V2V plan and how it might actually come to fruition. Because there is some actual value in V2V, and we would like to see that value realized in the future, I am afraid that the current strategy will not work out and will thus misdirect a lot of resources.

This is particularly apropos because the FCC recently issued an NPRM saying it wants to open up the DSRC band at 5.9GHz, which was meant for V2V, for unlicenced wifi-style use. This has been anticipated for some time, but the ITS community is concerned about losing the band it received in the late 90s but has yet to use in anything but experiments. The demand for new unlicenced spectrum is, quite appropriately, very large — the opening up of 2.4GHz decades ago generated the greatest period of innovation in the history of radio — and the V2V community has a daunting task resisting it.

In this series I will examine where V2V approaches went wrong and what they might do to still attain their goals.


I want to begin by examining what it takes to make a successful cooperative technology. History has many stories of cooperative technologies (either peer-to-peer or using central relays) that grew, some of which managed to do so in spite of appearing to need a critical mass of users before they were useful.

Consider the rise and fall of fax (or for that matter, the telephone itself.) For a lot of us, we did not get a fax machine until it was clear that lots of people had fax machines, and we were routinely having people ask us to send or receive faxes. But somebody had to buy the first fax machine, in fact others had to buy the first million fax machines before this could start happening.

This was not a problem, because while one fax machine is useless, two are quite useful to a company with a branch office. Fax started with pairs or small networks of machines, and one day two companies noticed they both had fax and started communicating inter-company instead of intra-company.

So we see rule one: the technology has to have strong value to the first purchaser. Use by a small number of people (though not necessarily just one) needs to be able to financially justify itself. This can be high-cost, high-value “early adopter” value, but it must be real.

This was true for fax, e-mail, the phone and many other systems, but a second principle has applied in many of the historical cases. Most, but not all, of these systems were able to build themselves on top of an underlying layer that already existed for other reasons. Fax came on top of the telephone. E-mail on top of the phone and later the internet. Skype on top of the internet and PCs. The underlying system made it possible for two people to adopt a technology which was useful to just those two, and the two people could be anywhere. Any two offices could get a fax or an e-mail system and communicate; only the ordinary phone was needed.

The ordinary phone had it much harder. To join the phone network in the early days you had to go out and string physical wires. But anybody could still do it, and once they did it, they got the full value they were paying for. They didn’t pay for phone wires in the hope that others would some day also pay for wires and they could talk to them — they found enough value calling the people already on that network.

Social networks are also interesting. There is a strong critical mass factor there. But with social networks, they are useful to a small group of friends who join. It is not necessary that other people’s social groups join, not at first. And they have the advantage of viral spreading — the existing infrastructure of e-mail allows one person to invite all their friends to join in.

Enter Car V2V

Car V2V doesn’t satisfy these rules. There is no value for the first person to install a V2V radio, and very tiny value for the first thousands of people. An experiment is going on in Ann Arbor with 3,000 vehicles, all belonging to people who work in the same area, and another experiment in Europe will equip several hundred vehicles.