Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.

This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.

Computerized road trains in Europe

A proposal is being floated in Europe for computerized convoys or road trains within the next decade. This is a proposal for a system where cars can hand over control to a lead car and follow in a train or convoy, without physical connection.

This idea comes up a lot as an early robocar technology. It is particularly common because it’s much easier to do — a human driver is still in charge, and the robotic control operates only in a very constrained and simple environment. It’s safe to say that we could make this work very quickly if we wanted to. There is no navigation or vision required, no recognition of obstacles, no choice of speeds or turns. Cars that come together in a convoy can draft to get a serious boost in fuel efficiency, and of course the un-drivers can now relax and read or work on the trip.

Because I am a robocar booster, people are surprised when I say I am not too thrilled about this idea, at least as an early technology. Rather, I think it’s a great idea for later. In spite of the enthusiasm with which I write, the robocar problem is not a simple one. This much simpler problem is tempting, but it has some snags.

First of all, if you have a bug in a standalone robocar system, it may cause an accident, and that may injure or kill the occupants of the robocar, and perhaps one or two other cars. Death is less likely at urban speeds of course. A problem with a computerized convoy could have terrible results, involving scores of cars. Since most people want this for the highway, the problem would also occur at lethal speed. Convoys are just not the first place we want to test our systems and have our first accidents.

Secondly, forming convoys requires a critical mass of suitably equipped cars. Of course, you don’t need a dozen full robocars to make a train; all you would need is cars with drive-by-wire and some much simpler control circuitry. But even so, the incentive to get a car with this feature has to get over a critical mass hump if it’s going to be worthwhile. It’s not quite as bad as fully ad-hoc trains, since you can have scheduled trains, led by a bus or truck driver, and cars can see such a lead vehicle and get in behind it. But at first, the odds of many cars all finding one another at the same time are low. If the train is going faster than regular traffic in a carpool lane, as we hope it would, it will not be easy to join a train that moves past you on the highway. If it moves slower than traffic, it is easy to slow down and join it, but then it has to move slower, with all the attendant problems.

Computerized convoys have advantages and disadvantages over physical ones. Physical ones probably can only be formed while stopped, and probably can only be broken up that way too. One could see the last car in a physical convoy undocking while moving, so with correct ordering it might work out, but it’s a far cry from a virtual convoy which allows anybody to join and leave at any time.

Physical convoys however can transmit power. This is useful if you expect people to be driving short-range electric cars. They would take their short range car and join a convoy, and be powered by the lead locomotive while operating, and even be recharging a bit. After the convoy disperses, the vehicles would only need to travel a short distance to their destination and back to the evening train.

Physical coupling makes it harder for one car to leave the train due to a failure. On the other hand it means that if the lead car wants to change lanes, all cars must do so. If the lead car leaves the road, they all do. Jack-knifing is a real worry, which is one reason that today even cargo road trains are limited to 2 trailers in urban areas, and 3 trailers in rural areas, if they are allowed at all.

Physical coupling requires specially modified vehicles. This is even more the case if the locomotive will actually be towing the vehicles physically rather than providing them with electricity for their motors and batteries. Either of these is a major modification, while virtual coupling only requires a drive-by-wire car and a small matter of programming.

Even full robocars probably should not form convoys right away. We should wait until our confidence is even higher, in spite of the fuel savings. If one car goes bad, or its occupants try to take over and move to manual driving, the consequences could be nasty in any convoy. And of course, the first robocars on the road will never get to join convoys as they will not meet the others. That’s why you need to solve the solo navigation problem first, and then you get enough on the road to work on the cooperation problems.

Towards frameless (clockless) video

Recently I wrote about the desire to provide power in every sort of cable, in particular the video cable. And while we’ll be using the existing video cables (VGA and DVI/HDMI) for some time to come, I think it’s time to investigate new thinking in sending video to monitors. The video cable has generally been the highest bandwidth cable going out of a computer, though the fairly rare 10 gigabit ethernet is around the speed of HDMI 1.3 and DisplayPort, and 100gb ethernet will be yet faster.

Even though digital video methods are so fast, the standard DVI cable is not able to drive my 4 megapixel monitor — this requires dual-link DVI, which as the name suggests, runs 2 sets of DVI wires (in the same cable and plug) to double the bandwidth. The expensive 8 megapixel monitors need two dual-link DVI cables.

Now we want enough bandwidth to completely redraw a screen at a suitable refresh (100hz) if we can get it. But we may want to consider how we can get that, and what to do if we can’t get it, either because our equipment is older than our display, or because it is too small to have the right connector, or must send the data over a medium that can’t deliver the bandwidth (like wireless, or long wires.)

Today all video is delivered in “frames” which are an update of the complete display. This was the only way to do things with analog rasterized (scan line) displays. Earlier displays actually were vector based, and the computer sent the display a series of lines (start at x,y then draw to w,z) to draw to make the images. There was still a non-fixed refresh interval as the phosphors would persist for a limited time and any line had to be redrawn again quickly. However, the background of the display was never drawn — you only sent what was white.

Today, the world has changed. Displays are made of pixels but they all have, or can cheaply add, a “frame buffer” — memory containing the current image. Refresh of pixels that are not changing need not be done on any particular schedule. We usually want to be able to change only some pixels very quickly. Even in video we only rarely change all the pixels at once.

This approach to sending video was common in the early remote terminals that worked over ethernet, such as X windows. In X, the program sends more complex commands to draw things on the screen, rather than sending a complete frame 60 times every second. X can be very efficient when sending things like text, as the letters themselves are sent, not the bitmaps. There are also a number of protocols used to map screens over much slower networks, like the internet. The VNC protocol is popular — it works with frames but calculates the difference and only transmits what changes on a fairly basic level.

We’re also changing how we generate video. Only video captured by cameras has an inherent frame rate any more. Computer screens, and even computer generated animation are expressed as a series of changes and movements of screen objects though sometimes they are rendered to frames for historical reasons. Finally, many applications, notably games, do not even work in terms of pixels any more, but express what they want to display as polygons. Even videos are now all delivered compressed by compressors that break up the scene into rarely updated backgrounds, new draws of changing objects and moves and transformations of existing ones.

So I propose two distinct things:

  1. A unification of our high speed data protocols so that all of them (external disks, SAN, high speed networking, peripheral connectors such as USB and video) benefit together from improvements, and one family of cables can support all of them.
  2. A new protocol for displays which, in addition to being able to send frames, sends video as changes to segments of the screen, with timestamps as to when they should happen.

The case for approach #2 is obvious. You can have an old-school timed-frame protocol within a more complex protocol able to work with subsets of the screen. The main issue is how much complexity you want to demand in the protocol. You can’t demand too much or you make the equipment too expensive to build and too hard to modify. Indeed, you want to be able to support many different levels, but not insist on support for all of them. Levels can include:

  • Full frames (ie. what we do today)
  • Rastered updates to specific rectangles, with ability to scale them.
  • More arbitrary shapes (alpha) and ability to move the shapes with any timebase
  • VNC level abilities
  • X windows level abilities
  • Graphics card (polygon) level abilities
  • In the unlikely extreme, the abilities of high level languages like display postscript.

I’m not sure the last layers are good to standardize in hardware, but let’s consider the first few levels. When I bought my 4 megapixel (2560x1600) monitor, it was annoying to learn that none of my computers could actually display on it, even at a low frame rate. Technically single DVI has the bandwidth to do it at 30hz but this is not a desirable option if it’s all you ever get to do. While I did indeed want to get a card able to make full use, the reality is that 99.9% of what I do on it could be done over the DVI bandwidth with just the ability to update and move rectangles, or to do so at a slower speed. The whole screen is never completely replaced in a situation where waiting 1/30th of a second would not be an issue. But the ability to paint a small window at 120hz on displays that can do this might well be very handy.

Adoption of a system like this would allow even a device with a very slow output (such as USB 2 at 480 megabits) to still use all the resolution for typical activities of a computer desktop. While you might think that video would be impossible over such a slow bus, if the rectangles could scale, the 480 megabit bus could still do things like playing DVDs. While I do not suggest every monitor be able to decode our latest video compression schemes in hardware, the ability to use the post-compression primitives (drawing subsections and doing basic transforms on them) might well be enough to feed quite a bit of video through a small cable.
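
To make those numbers concrete, here is a back-of-the-envelope sketch in Python. It assumes 24-bit colour and ignores blanking and protocol overhead, so the figures are rough illustrations rather than exact link budgets:

```python
# Rough bandwidth comparison: full-frame refresh vs. updating only a
# rectangle, assuming 24-bit colour and no blanking/protocol overhead.

def mbit_per_s(width, height, fps, bits_per_pixel=24):
    """Raw megabits per second needed to repaint a region fps times a second."""
    return width * height * bits_per_pixel * fps / 1e6

# Full 2560x1600 screen at 60 Hz -- dual-link DVI territory.
print(mbit_per_s(2560, 1600, 60))   # ~5898 Mbit/s

# The same screen at 30 Hz fits (barely) within single-link DVI.
print(mbit_per_s(2560, 1600, 30))   # ~2949 Mbit/s

# A DVD-sized 720x480 rectangle updated 30 times a second, scaled up by
# the display itself, fits comfortably inside a 480 Mbit/s bus.
print(mbit_per_s(720, 480, 30))     # ~249 Mbit/s
```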

One could imagine even use of wireless video protocols for devices like cell phones. One could connect a cell phone with an HDTV screen (as found in a hotel) and have it reasonably use the entire screen, even though it would not have the gigabit bandwidths needed to display 1080p framed video.

Sending in changes to a screen with timestamps of when they should change also allows the potential for super-smooth movement on screens that have very low latency display elements. For example, commands to the display might involve describing a foreground object, and updating just that object hundreds of times a second. Very fast displays would show those updates and present completely smooth motion. Slower displays would discard the intermediate steps (or just ask that they not be sent.) Animations could also be sent as instructions to move (and perhaps rotate) a rectangle and do it as smoothly as possible from A to B. This would allow the display to decide what rate this should be done at. (Though I think the display and video generator should work together on this in most cases.)

Note that this approach also delivers something I asked for in 2004 — that it be easy to make any LCD act as a wireless digital picture frame.

It should be noted that HDMI supports a small amount of power (5 volts at 50ma) and in newer forms both it and DisplayPort have stopped acting like digitized versions of analog signals and more like highly specialized digital buses. Too bad they didn’t go all the way.

Protocol

As noted, it is key that the basic levels be simple, to promote universal adoption. As such, the elements in such a protocol would start simple. All commands could specify a time at which they are to be executed, if not immediately.

  • Paint line or rectangle with specified values, or gradient fill.
  • Move object, and move entire screen
  • Adjust brightness of rectangle (fade)
  • Load pre-buffered rectangle. (Fonts, standard shapes, quick transitions)
  • Display pre-buffered rectangle

However, lessons learned from other protocols might expand this list slightly.
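
As a very rough illustration of how simple the base level could remain, here is a hypothetical encoding of two of these commands in Python. The opcodes, field widths and timestamp convention are my own invention for the sketch, not part of any existing standard:

```python
import struct
import time

# Hypothetical wire format: every command carries an opcode and a
# 64-bit timestamp in microseconds (0 = execute immediately).
OP_FILL_RECT = 0x01
OP_MOVE_RECT = 0x02

def fill_rect(x, y, w, h, rgb, when_us=0):
    """Paint a rectangle with a solid colour at an optional future time."""
    r, g, b = rgb
    return struct.pack(">BQHHHHBBB", OP_FILL_RECT, when_us,
                       x, y, w, h, r, g, b)

def move_rect(x, y, w, h, new_x, new_y, when_us=0):
    """Move an on-screen rectangle to a new position (the display does the copy)."""
    return struct.pack(">BQHHHHHH", OP_MOVE_RECT, when_us,
                       x, y, w, h, new_x, new_y)

# Schedule a fill of a window 10 ms from now.
now_us = int(time.time() * 1e6)
packet = fill_rect(100, 100, 640, 480, (0, 0, 0), when_us=now_us + 10_000)
```

The point is that a display need only parse a handful of fixed-size records to support the lower levels; the fancier levels can be optional.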

One connector?

This, in theory, allows the creation of a single connector (or compatible family of connectors) for lots of data and lots of power. It can’t be just one connector though, because some devices need very small connectors which can’t handle the number of wires others need, or deliver the amount of power some devices need. Most devices would probably get by with a single data wire, and ideally technology would keep pushing just how much data can go down that wire, but any design should allow for simply increasing the number of wires when more bandwidth than a single wire can provide is needed. (Presumably a year later, the same device would start being able to use a single wire as the bandwidth increases.) We may, of course, not be able to figure out how to do connectors for tomorrow’s high bandwidth single wires, so you also want a way to design an upwards compatible connector with blank spaces — or expansion ability — for the pins of the future, which might well be optical.

There is also a security implication to all of this. While a single wire that brings you power, a link to a video monitor, LAN and local peripherals would be fabulous, caution is required. You don’t want to be able to plug into a video projector in a new conference room and have it pretend to be a keyboard that takes over your computer. As this is a problem with USB in general, it is worth solving regardless of this. One approach would be to have every device use a unique ID (possibly a certified ID) so that you can declare trust for all your devices at home, and perhaps everything plugged into your home hubs, but be notified when a never seen device that needs trust (like a keyboard or drive) is connected.
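
One way to picture the trust step is a simple trust-on-first-use check, sketched below in Python. The device classes, the notion of a certified device ID, and the prompt are assumptions about how such a scheme might look, not an existing protocol:

```python
# Minimal sketch of trust-on-first-use for devices arriving over a
# universal cable.  Device IDs and the "dangerous" classes are assumed.
DANGEROUS_CLASSES = {"keyboard", "mouse", "storage", "network"}

trusted_ids = set()          # ordinarily persisted to disk

def on_device_connected(device_id, device_class, ask_user):
    """Allow benign devices; require explicit approval for input/storage."""
    if device_class not in DANGEROUS_CLASSES:
        return True                      # e.g. a plain display
    if device_id in trusted_ids:
        return True                      # seen and approved before
    if ask_user(f"Trust new {device_class} {device_id!r}?"):
        trusted_ids.add(device_id)       # remember for next time
        return True
    return False                         # projector pretending to be a keyboard
```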

To some extent having different connectors helps this problem a little bit, in that if you plug an ethernet cable into the dedicated ethernet jack, it is clear what you are doing, and that you probably want to trust the LAN you are connecting to. The implicit LAN coming down a universal cable needs a more explicit approval.

Final rough description

Here’s a more refined rough set of features of the universal connector:

  • Shielded twisted pair with ability to have varying lengths of shield to add more pins or different types of pins.
  • Asserts briefly a low voltage on pin 1, highly current limited, to power negotiator circuit in unpowered devices
  • Negotiator circuits work out actual power transfer, at what voltages and currents and on what pins, and initial data negotiation about what pins, what protocols and what data rates.
  • If no response is given to negotiation (ie. no negotiator circuit) then measure resistance on various pins and provide specified power based on that, but abort if current goes too high initially.
  • Full power is applied, unpowered devices boot up and perform more advanced negotiation of what data goes on what pins.
  • When full data handshake is obtained, negotiate further functions (hubs, video, network, storage, peripherals etc.)
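
Pulling the steps above together, here is a sketch in Python of how the negotiation might flow. The `link` object, the message names, and the voltage and data-rate figures are all invented placeholders for illustration:

```python
# Illustrative negotiation flow for the universal connector described
# above.  Message names, voltages and data rates are invented examples.

def default_power_for(ohms):
    """Placeholder lookup: map a sense resistance to a safe default supply."""
    return {"volts": 5.0, "max_milliamps": 500}

def negotiate(link):
    # 1. Briefly assert a current-limited low voltage to wake the
    #    negotiator circuit in an unpowered device.
    link.apply_power(volts=5.0, max_milliamps=10)

    offer = link.request("POWER_OFFER")
    if offer is None:
        # 2. No negotiator present: fall back to measuring resistance
        #    on the pins and applying a conservative default.
        ohms = link.measure_resistance()
        link.apply_power(**default_power_for(ohms))
        return

    # 3. Agree on real power: voltage, current, and which pins carry it.
    link.send("POWER_GRANT", volts=offer["volts"],
              max_amps=offer["max_amps"], pins=offer["pins"])

    # 4. With the device booted on full power, negotiate data lanes
    #    and the higher-level functions (video, network, storage...).
    caps = link.request("DATA_CAPABILITIES")
    link.send("DATA_CONFIG", pins=caps["pins"], gbit=min(caps["gbit"], 10))
    for function in caps["functions"]:
        link.send("ENABLE", function=function)
```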

Robocar Talk, Volvo automatic pedestrian avoidance

First: I will be speaking on robocars tomorrow, Tuesday Nov 9, at 6:30 pm for the meeting of the Jewish High Tech Community in Silicon Valley. The talk is at the conference center of Fenwick and West at Castro & California in Mountain View. The public is welcome to attend; there is a $10 fee for non-members. This will be similar to my talk at Stanford 2 weeks ago, and a bit more extensive than the one in New York early in October, which Forbes said was the audience favorite at the event.

Volvo has built a brand around safe cars, and last year committed that, by 2020, nobody will be killed in a new Volvo. They plan to do much of this with better passenger safety systems, akin to the work they have done on airbags and crumple zones. However, they also intend to use a lot of computerized technologies to make it happen. Other teams inside Volvo are pushing to expand the goal to also stop people from being killed by Volvos. To that end, next year’s Volvo S60 will come with a “Pedestrian Avoidance System” which uses a camera and machine vision to identify pedestrians and calculate whether the vehicle is about to hit one. If it sees a potential pedestrian collision it will beep and alert the driver. If the driver does nothing, the car will brake.

Here is a video of the S60 in action:

It’s impressive, though pure machine vision suffers problems as lighting changes, which is one reason most recent work has been on LIDAR. It will also be interesting to see whether they can avoid making it too conservative. If the warning goes off all the time, even for a pedestrian who will (to the human eye) clearly slide by the side of the car at places like a crosswalk, drivers may learn to ignore the alarm, or get very annoyed and shut the whole system off if it brakes for them when they know an impact is not imminent. I’m hoping to learn more about Volvo’s efforts in the future. No other company has put as much effort into building a brand around safety, so we can expect Volvo, which has slipped in this status of late, to work very hard to maintain it and adapt robocar technologies to safer human driving and fully autonomous driving as well.

Dense triple parking

I have written of a simple algorithm to allow dense Valet style parking of robocars, such as triple parking on the roadsides. In this algorithm, one gap is left in the outer lanes, and the Robocars are able to move together, as an entire row segment, to “move the gap” as quickly as a single car can move. That way, if a car needs to get out from an inner lane, it can signal, and if the gap is currently ahead of it, for example, all the cars from the one next to it to the gap can move forward one space (at the same time) to put the gap next to the vehicle that needs to leave. This can happen in all the other rows and is easy, quiet and efficient for electric cars. It does not even need radio communication, as robocars will sense a car moving behind them or ahead of them, and immediately move in reaction. This request will move up the chain of cars to the gap. Of course, if one car does not move, the car behind it will only move a very short distance before refusing to go further, which would stop the whole effort (or in the case of an error, cause a very slow impact if the car behind keeps coming) and signal a need for human attention.
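
Here is a minimal sketch of that gap-moving step in Python. The row is modelled as a plain list with None for the gap; in the real system the affected cars would all creep over simultaneously rather than one at a time:

```python
# One row of densely parked robocars; None marks the single gap.
# To let a car in the adjacent inner row out at position `target`,
# the cars between the gap and that position each shift one space,
# which walks the gap over until it sits alongside the leaving car.

def move_gap_to(row, target):
    """Shift cars one space each so the single gap lands at index `target`."""
    gap = row.index(None)
    step = 1 if target > gap else -1
    while gap != target:
        row[gap] = row[gap + step]    # the neighbouring car rolls into the gap
        row[gap + step] = None
        gap += step
    return row

# Outer lane with one gap; the inner-lane car beside index 1 wants out.
outer = ["A", "B", "C", None, "D"]
print(move_gap_to(outer, 1))          # ['A', None, 'B', 'C', 'D']
```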

It seems like this should be possible even without many gaps, as long as there is enough spare space to allow a vehicle to wiggle out of its space. If there is just one gap, and a bit of wiggle room in the other rows, any car can still get out, just a bit more slowly. This is probably better done with a protocol for communication to assure it works quickly.

In this case, a gap on the outside lane (where there must be at least one) can be temporarily moved to the inside, and then back out. Consider 3 lanes of cars, with a gap in the outer lane (lane #3) and a car in lane #1 (the curb lane) wanting out. First the lane #3 cars would adjust to move the gap to the right place, a bit forward of our target car. Next, a car from lane #2 would move into this gap, leaving a gap in lane #2 into which our target car can move. This leaves a gap in lane #1 which can be filled by a car from lane #2 which is willing to move in, ideally one right next to our target car. Likewise a car from lane #3 can now move into that gap, and the resulting gap in the outer lane #3 can be moved to allow exit by our target car.

This requires a great deal more car moving, though again with electric cars this may not be too expensive. If the cars can turn all their wheels, they can move sideways, as some concept cars can already do. Even without that, a robotic car can wiggle out without much room, and of course the gap would not be placed exactly alongside the target car, but probably slightly forward to allow transfer with fewer wiggles. The result is a whole valet lot with just one blank space needed to get any car out reasonably quickly. Of course, this would only be done when the lot needed to be totally full. For any partially full lot, gaps would be left to minimize the car moves needed to get any car out. However, if space is at a premium — so much so as to justify the extra moving — it can be done.

Bluetooth in all video cameras, and smart microphones

I suggested this as a feature for my Canon 5D SLR which shoots video, but let me expand it for all video cameras, indeed all cameras. They should all include bluetooth, notably the 480 megabit bluetooth 3.0. It’s cheap and the chips are readily available.

The first application is the use of the high-fidelity audio profile for microphones. Everybody knows the worst thing about today’s consumer video cameras is the sound. Good mics are often large, heavy and expensive, and people don’t want to carry them on the camera. Mics on the subjects of the video are always better. While remote bluetooth microphones are not readily available today, if consumer video cameras supported them, a huge market for them would appear for use in filming.

For quality, you would want to support an error correcting protocol, which means mixing the sound onto the video a few seconds after the video is laid down. That’s not a big deal with digital recorded to flash.

Such a system easily supports multiple microphones too, mixing them or ideally just recording them as independent tracks to be mixed later. And that includes an off-camera microphone for ambient sounds. You could even put down multiples of those, and then do clever noise reduction tricks after the fact with the tracks.

The cameraman or director could also have a bluetooth headset on (those are cheap but low fidelity) to record a track of notes and commentary, something you can’t do if there is an on-camera mic being used.

I also noted a number of features for still cameras as well as video ones:

  • Notes by the photographer, as above
  • Universal protocol for control of remote flashes
  • Remote control firing of the camera, with all the control options that USB offers
  • At 480mbits, downloading of photos and even live video streams to a master recorder somewhere

It might also be interesting to experiment with smart microphones. A smart microphone would be placed away from the camera, nearer the action being filmed (sporting events, for example.) The camera user would then zoom in on the microphone, and with the camera’s autofocus determine how far away it is, and with a compass, the direction. Then the microphone, which could either be motorized or an array, could aim in the direction of the action. (It would be told the distance and direction of the action from the camera in the same fashion as the mic itself was located.) When you pointed the camera at something, the off-camera mic would also point at it, except during focus hunts.
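
The geometry involved is simple. Here is a small Python sketch, with example numbers, of how the camera could tell the microphone which way to aim, given the bearing and distance it has measured to both the mic and the action:

```python
import math

# Rough geometry for aiming an off-camera microphone: the camera measures
# compass bearing (degrees) and range (metres, from autofocus) to both the
# microphone and the action, then tells the mic which way to point.

def to_xy(bearing_deg, distance_m):
    """Convert a compass bearing and range from the camera into x/y metres."""
    rad = math.radians(bearing_deg)
    return distance_m * math.sin(rad), distance_m * math.cos(rad)

def mic_aim(mic_bearing, mic_dist, action_bearing, action_dist):
    """Bearing and range from the microphone to the action."""
    mx, my = to_xy(mic_bearing, mic_dist)
    ax, ay = to_xy(action_bearing, action_dist)
    dx, dy = ax - mx, ay - my
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    return bearing, math.hypot(dx, dy)

# Example: mic 20 m away at 80 degrees; action 35 m away at 95 degrees.
print(mic_aim(80, 20, 95, 35))
```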

There could, as before, be more than one of these, and this could be combined with on-person microphones as above. And none of this has to be particularly expensive. The servo-controlled mic would be a high end item but within consumer range, and fancy versions would be of interest to pros. Remote mics would also be good for getting better stereo on scenes. Key to all this is that adding the bluetooth to the camera is a minor cost (possibly compensated for by dropping the microphone jack) but it opens up a world of options, even for cheap cameras.

And of course, the most common cameras out there now — cell phones — already have bluetooth and compasses and these other features. In fact, cell phones could readily be your off-camera microphones. If there were a nice app with a quick pairing protocol, you could ask all the people in the scene to just run it on their cell phone and put the phone in their front pocket. Suddenly you have a mic on each participant (up to the limit of bluetooth, which is about 8 devices at once.)

Software recalls and quick fixes to safety-critical computers in robocars

While giving a talk on robocars to a Stanford class on automotive innovation on Wednesday, I outlined the growing problem of software recalls and how they might affect cars. If a company discovers a safety problem in a car’s software, it may be advised by its lawyers to shut down or cripple the cars by remote command until a fix is available. Sebastian Thrun, who had invited me to address this class, felt this could be dealt with through the ability to remotely patch the software.

This brings up an issue I have written about before — the giant dangers of automatic software updates. Automatic software updates are a huge security hole in today’s computer systems. On typical home computers, there are now many packages that do automatic updates. Due to the lack of security in these OSs, a variety of companies have been “given the keys” to full administrative access on the millions of computers which run their auto-updater. Companies which go to all sorts of lengths to secure their computers and networks are routinely granting all these software companies top level access (ie. the ability to run arbitrary code on demand) without thinking about it. Most of these software companies are good and would never abuse this, but this doesn’t mean that they don’t have employees who could be bribed or suborned, or security holes in their own networks which would let an attacker in to make a malicious update which is automatically sent out.

I once asked the man who ran the server room where the servers for Pointcast (the first big auto-updating application) were housed, how many fingers somebody would need to break to get into his server room. “They would not have to break any. Any physical threat and they would probably get in,” I heard. This is not unusual, and often there are ways in needing far less than this.

So now let’s consider software systems which control our safety. We are trusting our safety to computers more and more these days. Every elevator or airplane has a computer which could kill us if maliciously programmed. More and more cars have them, and more will over time, long before we ride in robocars. All around the world are electric devices with computer controls which could, if programmed maliciously, probably overload and start many fires, too. Of course, voting machines with malicious programs could even elect the wrong candidates and start baseless wars. (Not that I’m saying this has happened, just that it could.)

However these systems do not have automatic update. The temptation for automatic update will become strong over time, both because it is cheap and it allows the ability to fix safety problems, and we like that for critical systems. While the internal software systems of a robocar would not be connected to the internet in a traditional way, they might be programmed to, every so often, request and accept certified updates to their firmware from the components of the car’s computer systems which are connected to the net.

Imagine a big car company with 20 million robocars on the road, and an automatic software update facility. This would allow a malicious person, if they could suborn that automatic update ability, to load in nasty software which could kill tens of millions. Not just the people riding in the robocars would be affected, because the malicious software could command idle cars to start moving and hit other cars or run down pedestrians. It would be a catastrophe of grand proportions, greater than a major epidemic or multiple nuclear bombs. That’s no small statement.

There are steps that can be taken to limit this. Software updates should be digitally signed, and they should be signed by multiple independent parties. This stops any one of the official parties from being suborned (either by being a mole, or being tortured, or having a child kidnapped, etc.) to send out an update. But it doesn’t stop the fact that the 5 executives who have to sign an update will still be trusting the programming team to have delivered them a safe update. Assuring that requires a major code review of every new update, by a team that carefully examines all source changes and compiles the source themselves. Right now this just isn’t common practice.
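
A sketch of what the multi-party rule might look like in code appears below. The threshold, key handling and `verify_signature` stub are placeholders for a real public-key verification (such as an Ed25519 check), shown only to illustrate the “count distinct signers” idea:

```python
import hashlib

REQUIRED_SIGNERS = 3                     # e.g. 3 of 5 independent executives

def verify_signature(public_key, digest, signature):
    """Placeholder: substitute a real asymmetric verify (e.g. Ed25519)."""
    return False                         # fail closed until a real check is wired in

def update_is_authorized(update_blob, signatures, trusted_keys):
    """Accept the update only if enough distinct trusted parties signed it."""
    digest = hashlib.sha256(update_blob).digest()
    valid_signers = set()
    for key_id, signature in signatures:
        key = trusted_keys.get(key_id)
        if key and verify_signature(key, digest, signature):
            valid_signers.add(key_id)    # count each signer only once
    return len(valid_signers) >= REQUIRED_SIGNERS
```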

However, it gets worse than this. An attacker can also suborn the development tools, such as the C compilers and linkers which build the final binaries. The source might be clean, but few companies keep perfect security on all their tools. Doing so requires that all the tool vendors have a similar attention to security in all their releases. And on all the tools they use.

One has to ask if this is even possible. Can such a level of security be maintained on all the components, enough to stop a terrorist programmer or a foreign government from inserting a trojan into a tool used by a compiler vendor who then sends certified compilers to the developers of safety-critical software such as robocars? Can every machine on every network at every tool vendor be kept safe from this?

We will try but the answer is probably not. As such, one result may be that automatic updates are a bad idea. If updates spread more slowly, with the individual participation of each machine owner, it gives more time to spot malicious code. It doesn’t mean that malicious code can’t be spread, as individual owners who install updates certainly won’t be checking everything they approve. But it can stop the instantaneous spread, and give a chance to find logic bombs set to go off later.

Normally we don’t want to go overboard worrying about “movie plot” threats like these. But when a single person can kill tens of millions because of a software administration practice, it starts to be worthy of notice.

Do you get Twitter? Is a "sampled" medium good or bad?

I just returned from Jeff Pulver’s “140 Characters” conference in L.A. which was about Twitter. I asked many people if they get Twitter — not if they understand how it’s useful, but why it is such a hot item, and whether it deserves to be, with billion dollar valuations and many talking about it as the most important platform.

Some suggested Twitter is not as big as it appears, with a larger churn than expected and some plateau appearing in new users. Others think it is still shooting for the moon.

The first value I found in Twitter was as a broadcast SMS. While I would not text all my friends when I go to a restaurant or a club, having a way for them to easily know that (and perhaps join me) is valuable. Other services have tried to do things like this, but Twitter is the one that succeeded, in spite of not being aimed at any specific application like this.

This explains the secret of Twitter. By being simple (and forcing brevity) it was able to be universal. By being more universal it could more easily attain critical mass within groups of friends. While an app dedicated to some social or location based application might do it better, it needs to get a critical mass of friends using it to work. Once Twitter got that mass, it had a leg up at being that platform.

At first, people wondered if Twitter’s simplicity (and requirement for brevity) was a bug or a feature. It definitely seems to have worked as a feature. By keeping things short, Twitter makes it less scary to follow people. It’s hard for me to get new subscribers to this blog, because subscribing to the blog means you will see my moderately long posts every day or two, and that’s an investment in reading. To subscribe to somebody’s Twitter feed is no big commitment. Thus people can get a million followers there, when no blog has that. In addition, the brevity makes it a good match for the mobile phone, which is the primary way people use Twitter. (Though usually the smart phone, not the old SMS way.)

And yet it is hard not to be frustrated at Twitter for being so simple. There are so many things people do with Twitter that could be done better by some more specialized or complex tool. Yet it does not happen.

Twitter has made me revise slightly my two axes of social media — serial vs. browsed and reader-friendly vs. writer-friendly. Twitter is generally serial, and I would say it is writer-friendly (it is easy to tweet) but not so reader-friendly (the volume gets too high.)

However, Twitter, in its latest mode, is something different. It is “sampled.” In normal serial media, you usually consume all of it. You come in to read and the tool shows you all the new items in the stream. Your goal is to read them all, and the publishers tend to expect it. Most Twitter users now follow far too many people to read it all, so the best they can do is sample — they come in at various times of day and find out what their stalkees are up to right then. Of course, other media have also been sampled, including newspapers and message boards, just because people don’t have time, or because they go away for too long to catch up. On Twitter, however, going away for even a couple of hours will give you too many tweets to catch up on.

This makes Twitter an odd choice as a publishing tool. If I publish on this blog, I expect most of my RSS subscribers will see it, even if they check a week later. If I tweet something, only a small fraction of the followers will see it — only if they happen to read shortly after I write it, and sometimes not even then. Perhaps some who follow only a few will see it later, or those who specifically check on my postings. (You can’t. Mine are protected, which turns out to be a mistake on Twitter but there are nasty privacy results from not being protected.)

TV has an unusual history in this regard. In the early days, there were so few stations that many people watched, at one time or another, all the major shows. As TV grew to many channels, it became a sampled medium. You would channel surf, and stop at things that were interesting, and know that most of the stream was going by. When the Tivo arose, TV became a subscription medium, where you identify the programs you like, and you see only those, with perhaps some suggestions thrown in to sample from.

Online media, however, and social media in particular were not intended to be sampled. Sure, everybody would just skip over the high volume of their mailing lists and news feeds when coming back from a vacation, but this was the exception and not the rule.

The question is, will Twitter’s nature as a sampled medium be a bug or a feature? It seems like a bug but so did the simplicity. It makes it easy to get followers, which the narcissists and the PR flacks love, but many of the tweets get missed (unless they get picked up as a meme and re-tweeted) and nobody loves that.

On Protection: It is typical to tweet not just blog-like items but the personal story of your day. Where you went and when. This is fine as a thing to tell friends in the moment, but with a public twitter feed, it’s being recorded forever by many different players. The ephemeral aspects of your life become permanent. But if you do protect your feed, you can’t do a lot of things on twitter. What you write won’t be seen by others who search for hashtags. You can’t reply to people who don’t follow you. You’re an outsider. The only way to solve this would be to make Twitter really proprietary, blocking all the services that are republishing it, analysing it and indexing it. In this case, dedicated applications make more sense. For example, while location based apps need my location, they don’t need to record it for more than a short period. They can safely erase it, and still provide me a good app. They can only do this if they are proprietary, because if they give my location to other tools it is hard to stop them from recording it, and making it all public. There’s no good answer here.

New Robocar center at Stanford, Audi TT to race up Pikes Peak

Saturday saw the dedication of a new autonomous vehicle research center at Stanford, sponsored by Volkswagen. VW provided the hardware for Stanley and Junior, which came 1st and 2nd respectively in the 2nd and 3rd DARPA Grand Challenges, and Junior was on display at the event, driving through the parking lot and along the Stanford streets, then parking itself to a cheering crowd.

Junior continues to be a testing platform with its nice array of sensors and computers, though the driving it did on Saturday was largely done with the Velodyne LIDAR that spins on top of it, and an internal map of the geometry of the streets at Stanford.

New and interesting was a demonstration of the “Valet Parking” mode of a new test vehicle, for now just called Junior 3. What’s interesting about J3 is that it is almost entirely stock. All that is added are two lower-cost LIDAR sensors on the rear fenders. It also has a camera at the rear-view mirror (which is stock in cars with night-assist mode) and a few radar sensors used in the fixed-distance cruise control system. J3 is otherwise a Passat. Well, the trunk is filled with computers, but there is no reason what it does could not be done with a hidden embedded computer.

What it does is valet park itself. This is an earlier than expected implementation of one of the steps I outlined in the roadmap to Robocars as robo-valet parking. J3 relies on the fact that the “valet” lot is empty of everything but cars and pillars. Its sensors are not good enough to deal well with random civilians, so this technology would only work in an enclosed lot that only employees enter if needed. To use it, the driver brings the car to an entrance marked by 4 spots on the ground the car can see. Then the driver leaves and the car takes over. In this case, it has a map of the garage in its computer, but it could also download that on arrival at a parking lot. Using the map, and just the odometer, it is able to cruise the lanes of the parking lot, looking for an empty spot, which it sees using the radar. (Big metal cars of course show clearly on the radar.) It then drives into the spot.


Every connector, including video, should send power both ways

I’ve written a lot about how to do better power connectors for all our devices, and the quest for universal DC and AC power plugs that negotiate the power delivered with a digital protocol.

While I’ve mostly been interested in some way of standardizing power plugs (at least within a given current range, and possibly even beyond) today I was thinking we might want to go further, and make it possible for almost every connector we use to also deliver or receive power.

I came to this realization plugging my laptop into a projector which we generally do with a VGA or DVI cable these days. While there are some rare battery powered ones, almost all projectors are high power devices with plenty of power available. Yet I need to plug my laptop into its own power supply while I am doing the video. Why not allow the projector to send power to me down the video cable? Indeed, why not allow any desktop display to power a laptop plugged into it?

As you may know, a Power-over-ethernet (PoE) standard exists to provide up to 13 watts over an ordinary ethernet connector, and is commonly used to power switches, wireless access points and VoIP phones.

In all the systems I have described, all but the simplest devices would, upon connection, have one or both ends provide an initial very low current +5vdc offering that is enough to power only the power negotiation chip. The two ends would then negotiate the real power offering — what voltage, how many amps, how many watt-hours are needed or available, etc. And what wires to send the power on for special connectors.

An important part of the negotiation would be to understand the needs of devices and their batteries. In many cases, a power source may only offer enough power to run a device but not charge its battery. Many laptops will run on only 10 watts, normally, and less with the screen off, but their power supplies will be much larger in order to deal with the laptop under full load and the charging of a fully discharged battery. A device’s charging system will have to know to not charge the battery at all in low power situations, or to just offer it minimal power for very slow charging. An ethernet cable offering 13 watts might well tell the laptop that it will need to go to its own battery if the CPU goes into high usage mode. A laptop drawing an average of 13 watts (not including battery charging) could run forever with the battery providing for peaks and absorbing valleys.
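
As a toy illustration of that budgeting decision, here is a small Python sketch; the wattage figures are just examples in the spirit of the ones above:

```python
# Toy power-budget check for a negotiated link (all numbers are examples).
# The source offers some number of watts; the device decides whether it can
# run, run and trickle-charge, or must also draw on its own battery.

def power_plan(offered_w, average_load_w, peak_load_w):
    """Return a rough description of how the device should use the offer."""
    if offered_w < average_load_w:
        return "run on battery, treat the link as a top-up"
    spare_w = offered_w - average_load_w
    plan = f"run from the link, charge battery with ~{spare_w:.0f} W spare"
    if peak_load_w > offered_w:
        plan += "; battery covers peaks above the offer"
    return plan

# A PoE-style 13 W offer against a laptop averaging 10 W, peaking at 35 W.
print(power_plan(13, 10, 35))
```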

Now a VGA or DVI cable, though it has thin wires, has many of them, and at 48 volts could actually deliver plenty of power to a laptop. And thus there would be no need to separately power the laptop when it is on a projector or monitor. Indeed, one could imagine a laptop that uses this as its primary power jack, with the power plug having a VGA male and female on it to power the laptop.

I think it is important that these protocols go both directions. There will be times when the situation is reversed, when it would be very nice to be able to power low power displays over the video cable and avoid having to plug them in. With the negotiation system, the components could report when this will work and when it won’t. (If the display can do a low power mode it can display a message about needing more juice.) Tiny portable projectors could also get their power this way if a laptop will offer it.

Of course, this approach can apply everywhere, not just video cables and ethernet cables, though they are prime candidates. USB of course is already power+data, though it has an official master/slave hierarchy and thus does not go both directions. It’s not out of the question to even see a power protocol on headphone cables, RF cables, speaker cables and more. (Though there is an argument that for headphones and microphones there should just be a switch to USB and its cousins.)

Laptops have tried to amalgamate their cables before, through the use of docking stations. The problem was these stations were all custom to the laptop, and often priced quite expensively. As a result, many prefer the simple USB docking station, which can provide USB, wired ethernet, keyboard, mouse, and even slowish video through one wire — all standardized and usable with any laptop. However, it doesn’t provide power because of the way USB works. Today our video cables are our highest bandwidth connector on most devices, and as such they can’t be easily replaced by lower bandwidth ones, so throwing power through them makes sense, and even throwing a USB data bus for everything else might well make a lot of sense too. This would bring us back to having just a single connector to plug in. (It creates a security problem, however, as you should not simply trust a randomly plugged-in device to act as an input such as a keyboard or drive, since such a device could take over your computer if somebody has hacked it to do so.)

Nissan emulates school of fish, and Singularity Summit Robocar notes

Some time ago I proposed the “School of Fish Test” as a sort of Turing test for robocars. In addition to being a test for the cars, it is also intended to be a way to demonstrate to the public when the vehicles have reached a certain level of safety. (In the test, a swarm of robocars moves on a track, and a skeptic in a sportscar is unable to hit one no matter what they do, like a diver trying to touch fish when swimming through a school.)

I was interested to read this month that Nissan has built test cars based on fish-derived algorithms as part of a series of experiments based on observing how animals swarm. (I presume this is coincidental, and the Nissan team did not know of my proposed test.)

The Nissan work (building on earlier work on bees) is based upon a swarm of robots which cooperate. The biggest test involves combining cooperating robots, non-cooperating robots and (mostly non-cooperating) human drivers, cyclists and pedestrians. Since the first robocars on the road will be alone, it is necessary to develop fully safe systems that do not depend on any cooperation with other cars. It can of course be useful to communicate with other cars, determine how much you trust them, and then cooperate with them, but this is something that can only be exploited later rather than sooner. In particular, while many people propose to me that building convoys of cars which draft one another is a good initial application of robotics (and indeed you can already get cars with cruise control that follows at a fixed distance) the problem is not just one of critical mass. A safety failure among cooperating cars runs the risk of causing a multi-car collision, with possible multiple injured parties, and this is a risk that should not be taken in early deployments of the technology.

My talk at the Singularity Summit on robocars was quite well received. Many were glad to see a talk on more near-future modest AI after a number of talks on full human level AI, while others wanted only the latter. A few questions raised some interesting issues:

  • One person asked about the insurance and car repair industries. I got a big laugh by saying, “fuck ‘em.” While I am not actually that mean spirited about it, and I understand why some would react negatively to trends which will obsolete their industries, we can’t really be that backwards-looking.
  • Another wondered if, after children discover that the nice cars will never hit them, they will then travel to less safe roads without having learned proper safety instincts. This is a valid point, though I have already worried about what to do about the disruption to passengers who have to swerve around kids who play in the streets when it is not so clearly dangerous. Certain types of jaywalking that interfere with traffic will need to be discouraged or punished, though safe jaywalking, when no car is near, should be allowed and even encouraged.
  • One woman asked if we might become disassociated with our environments if we spend our time in cars reading or chatting, never looking out. This is already true in a taxicab city like New York, though only limos offer face-to-face chat. I think the ability to read or work instead of focus on the road is mostly a feature and not a bug, but she does have a point. Still, we get even more divorced from the environment on things like subways.

As expected, the New York audience, unlike other U.S. audiences, saw no problem with giving up driving. Everywhere else I go, people swear that Americans love their cars and love driving and will never give it up. While some do feel that way, it’s obviously not a permanent condition.

Some other (non-transportation) observations from Singularity Summit are forthcoming.

BTW, I will be giving a Robocar talk next Wednesday, Oct 28 at Stanford University for the ME302 - Future of the Automobile class. (This is open to the general Stanford community, affiliates of their CARS institute, and a small number of the public. You can email btm@templetons.com if you would like to go.)

Great power graphic tells us -- put solar power in New Mexico, but how?

I’m impressed with a great interactive map of the U.S. power grid produced by NPR. It lets you see the location of existing and proposed grid lines, and all power plants, plus the distribution of power generation in each state.

On this chart you can see which states use coal most heavily — West Virginia at 98%, Utah, Wyoming, North Dakota, Indiana at 95% and New Mexico at 85%. You can see that California uses very little coal but 47% natural gas, that the NW uses mostly Hydro from places like Grand Coulee and much more. I recommend clicking on the link.

They also have charts of where solar and other renewable plants are (almost nowhere) and the solar radiation values.

Seeing it all together makes something clear that I wrote about earlier. If you want to put up solar panels, the best thing to do is to put them somewhere with good sun and lots of coal burning power plants. That’s places like New Mexico and Utah. Putting up a solar panel in California will give it pretty good sunlight — but will only offset natural gas. A solar panel in the midwest will offset coal but won’t get as much sun. In the Northeast it gets even less sun and offsets less coal.
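
A rough illustration of the difference, in Python, using assumed sun-hours and grid emission factors (the numbers are plausible but invented for the sake of the arithmetic, not measurements):

```python
# Very rough annual CO2 offset per installed kW of panels, under assumed
# capacity factors and grid emission factors (illustrative numbers only).

SITES = {
    # site: (equivalent full-sun hours per day, kg CO2 per kWh displaced)
    "New Mexico (offsets coal)":        (6.5, 1.0),
    "California (offsets natural gas)": (5.5, 0.45),
    "Midwest (offsets coal)":           (4.5, 1.0),
}

for site, (sun_hours, kg_per_kwh) in SITES.items():
    kwh_per_year = 1.0 * sun_hours * 365          # per kW of panel
    tonnes = kwh_per_year * kg_per_kwh / 1000
    print(f"{site}: ~{tonnes:.1f} tonnes CO2/year per kW of panels")
```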

Much better than putting up solar panels anywhere, however, is actually using the money to encourage real conservation in the coal-heavy areas like West Virginia, Wyoming, North Dakota or Indiana.

While, as I have written, solar panels are a terrible means of greening the power grid from a cost standpoint, people still want to put them up. If that’s going to happen, what would be great would be a way for those with money and a desire to green the grid to make that money work in the places it will do the best. This is a difficult challenge. People sadly are more interested in feeling they are doing the right thing rather than actually doing it, and they feel good when they see solar panels on their roof, and see their meter going backwards. It makes up for the pain of the giant cheque they wrote, without actually ever recovering the money. Writing that cheque so somebody else’s meter can go backwards (even if you get the savings) just isn’t satisfying to people.

It would make even more sense to put solar-thermal plants (at least at today’s prices,) wind or geothermal in these coal-heavy areas.

It might be interesting to propose a system where rich greens can pay to put solar panels on the roofs of houses where it will do the most good. The homeowner would still pay for power, but at a lower price than they paid before. This money would mostly go to the person who financed the solar panels. The system would include an internet-connected control computer, so the person doing the financing could still watch the meter go backwards, at least virtually, and track power generated and income earned. The only problem is, the return would be sucky, so it’s hard to make this satisfying. To help, the display would also show tons of coal that were not burned, and compare it to what would have happened if they had put the panels on their own roof.

Of course, another counter to this is that California and a few other places have very high tiered electrical rates which may not exist in the coal states. Because of that — essentially a financial incentive set up by the regulators to encourage power conservation — it may be much more cost-effective to have the panels in low-coal California than in high-coal areas, even if it’s not the greenest thing.

An even better plan would be to find a way for “rich greens” (people willing to spend some money to encourage clean power) to finance conservation in coal-heavy areas. To do this, the cooperation of the power companies would be required. For example, one of the best ways to do this would be to replace old fridges with new ones. (Replacing fridges costs $100 per MWH removed from the grid compared to $250 for solar panels.)

  • The rich green would provide money to help buy the new fridge.
  • An inspector comes to see the old fridge and confirms it is really in use as the main fridge. Old receipts may be demanded though these may be rare. A device is connected to assure it is not unplugged, other than in a local power failure.
  • A few months later — to also assure the old fridge was really the one in use — the new fridge would be delivered by a truck that hauls the old one away. Inspectors confirm things and the buyer gets a rebate on their new fridge thanks to the rich green.
  • The new, energy-efficient fridge has built in power monitoring and wireless internet. It reports power usage to the power company.
  • The new fridge owner pays the power company 80% of what they used to pay for power for the old fridge. Ie. they pay more than their actual new power usage.
  • The excess money goes to the rich green who funded the rebate on the fridge, until the rebate plus a decent rate of return is paid back.

To the person with the old fridge, they get a nice new fridge at a discount price — possibly even close to free. Their power bill on the fridge goes down 20%. The rest of the savings (about 30% of the power, typically) goes to the power company and then to the person who financed the rebate.
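
Here is the arithmetic as a small sketch, with invented dollar figures, just to show how the monthly flows could work out:

```python
# Illustrative cash flow for the fridge scheme (all figures invented).
old_bill = 20.00                  # what the household used to pay per month for the fridge
new_bill = 0.80 * old_bill        # household now pays 80% of the old bill
actual_cost = 0.50 * old_bill     # new fridge really uses about half the power

household_saving = old_bill - new_bill    # the 20% off their bill
surplus = new_bill - actual_cost          # the ~30% that flows back to the financier

rebate = 600.00                           # rich green's up-front rebate
months_to_repay = rebate / surplus
print(f"Household saves ${household_saving:.2f}/month")
print(f"Financier receives ${surplus:.2f}/month, repaid in ~{months_to_repay:.0f} months")
```

Even with made-up numbers like these, the payback stretches over several years, which is why the scheme needs the “rich green” to value the carbon as much as the return.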

A number of the steps above are there to minimize fraud. For example, you don’t want people deliberately digging out an ancient fridge and putting it in place to get a false rebate. You also don’t want them taking the old fridge and moving it into the garage as a spare, which would actually make things worse. The latter is easy to assure by having the delivery company haul away the old one. The former is a bit tricky. The above plan at least demands that the old fridge be in place in their kitchen for a couple of months, and there be no obvious signs that it was just put in place. The metering plan demands wireless internet in the home, and the ability to configure the appliance to use it. That’s getting easier to demand, even of poor people with old fridges. Unless the program is wildly popular, this requirement would not be hard to meet.

Instead of wireless internet, the fridge could also just communicate the usage figures to a device the meter-reader carries when she visits the home to read the regular meter. Usage figures for the old fridge would be based on numbers for the model, not the individual unit.

It’s a bit harder to apply this to light bulbs, which are the biggest conservation win. Yes, you could send out crews to replace incandescent bulbs with CFLs, but it is not cost effective to meter them and know how much power they actually saved. For CFLs, the program would have to be simpler with no money going back to the person funding the rebates.

All of this depends on a program which is popular enough to make the power monitoring chips and systems in enough quantity that they don’t add much to the cost of the fridge at all.

European intelligent vehicle test

Robocar news:

This press release describes a European research project on various intelligent vehicle technologies which will take place next year. As I outline in the roadmap, a number of pre-robocar technologies are making their way into regular cars, so they can be sold as safer and more convenient. This project will actively collect data to learn about and improve the systems.

Today’s systems are fairly simple, of course, and researchers will learn a lot from this project. This matches my prediction for how a robocar test suite will be developed: by gathering millions and later billions of miles of sample data, including all accidents and anomalous events, over time with better and better sensors. Today’s sensors are quite basic, but this will change over time.

Initial reaction to these systems (which will have early flaws) may colour user opinion of them. For example, some adaptive cruise controls reportedly are too eager to decide there is a stopped car and will suddenly stop a vehicle. One of the challenges of automatic vehicle design will be finding ways to keep it safe without it being too conservative because real drivers are not very conservative. (They are also not very safe, but this defines the standards people expect.)

Text me if you lose my luggage

Just back from a weeklong tour including speaking at Singularity Summit, teaching classes at Cushing Academy, a big Thanksgiving dinner (well, Thanksgiving is actually today but we had it earlier) and a drive through fabulous fall colour in Muskoka.

This time United Airlines managed to misplace my luggage in both directions. (A reminder of why I don’t like to check luggage.) The first time they had an “excuse” in that we checked it only about 10 minutes before the baggage check deadline and the TSA took extra time on it. On the way back it missed a 90-minute connection — no excuse for that.

However, again, my rule for judging companies is how they handle their mistakes as well as how often they make them. And, in JFK, when we went to baggage claim, they actually had somebody call our name and tell us the bag was not on the flight, so we went directly to file the missing luggage report. However, on the return flight, connecting in Denver to San Jose, we got the more “normal” experience — wait a long time at the baggage claim until you realize no more bags are coming and you’re among the last people waiting, and then go file a lost luggage report.

This made me realize — with modern bag tracking systems, the airline knows your bag is not on the plane at the time they close the cargo hold door, well before takeoff. They need to know that as this is part of the passenger-to-bag matching system they tried to build after the Pan Am 103 Lockerbie bombing. So the following things should be done:

  • If they know my mobile number (and they do, because they text me delays and gate changes) they should text me that my luggage did not make the plane.
  • The text should contain a URL where I can fill out my lost luggage report or track where my luggage actually is.
  • Failing this, they should have a screen at the gate when you arrive with messages for passengers, including lost luggage reports. Or just have the gate agent print it and put it on the board if a screen costs too much.
  • Failing this, they should have a screen at the baggage claim with notes for passengers about lost luggage so you don’t sit and wait.
  • Failing this, an employee can go to the baggage claim and page the names of passengers, which is what they did in JFK.
  • Like some airlines do, they should put a box with “Last Bag, Flight nnn” written on it on the luggage conveyor belt when the last bag has gone through, so people know not to wait in vain.

I might very well learn my luggage is not on board before the plane closes the door. In that case I might even elect not to take the flight, though I can see the airline might not want people to do this, as they are usually about to close the door if they have not already closed it.

Letting me fill out the form on the web saves the airline time and saves me time. I can probably do it right on the plane after it lands and cell phone use is allowed. I don’t even have to go to baggage claim. Make it mobile browser friendly of course.
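
To show what I mean by the text-plus-URL idea, here is a hypothetical sketch of the check an airline could run as the cargo door closes. Everything in it — the data structures, send_sms() and the report URL — is invented for illustration; it is not any airline's real system.

```python
# Hypothetical sketch: compare loaded bag tags against the manifest when the
# cargo door closes, and text anyone whose bag is missing. No real airline
# or SMS-carrier API is implied.

from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    mobile: str
    bag_tags: list

def send_sms(number: str, message: str) -> None:
    # Stand-in for whatever gateway the airline already uses for
    # delay and gate-change texts.
    print(f"SMS to {number}: {message}")

def notify_missing_bags(passengers, loaded_tags, flight, report_url):
    loaded = set(loaded_tags)
    for p in passengers:
        missing = [t for t in p.bag_tags if t not in loaded]
        if missing:
            send_sms(
                p.mobile,
                f"{p.name}: bag(s) {', '.join(missing)} did not make flight {flight}. "
                f"File or track your report: {report_url}?tags={','.join(missing)}"
            )

# Example: one bag scanned into the hold, one not.
notify_missing_bags(
    passengers=[Passenger("B. Templeton", "+1-555-0100", ["UA123456", "UA123457"])],
    loaded_tags=["UA123456"],
    flight="UA 291",
    report_url="https://example.com/lost-bag",
)
```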

Scanning table for old digital cameras

I have several sheetfed scanners. They are great in many ways — though not nearly as automatic as they could be — but they are expensive and have their limitations when it comes to real-world documents, which are often not in pristine shape.

I still believe in sheetfed scanners for the home. In fact, one of my first blog posts here was about the paperless home, and some products similar to that design are now on the market, though none have the concept I really wanted — a battery-powered scanner which simply scans to flash cards; you take the flash card to a computer later for processing.

My multi-page document scanners will do a whole document, but they sometimes mis-feed. My single-page sheetfed scanner isn’t as fast or fancy but it’s still faster than using a flatbed because the act of putting the paper in the scanner is the act of scanning. There is no “open the top, remove old document, put in new one, lower top, push scan button” process.

Here’s a design that might be cheap and just what a house needs to get rid of its documents. It begins with a table with an arm coming out from one side; the arm has a tripod screw to hold a digital camera and a USB cable running up to it. Also on the arm, at enough of an angle to avoid glare and reflections, are lights — either white LEDs or CCFL tubes.

In the bed of the table is a capacitive sensor able to tell if your hand is near the table, as well as a simple photosensor to tell if there is a document on the table. All of this plugs into a laptop for control.

You slap a document on the table. As soon as you draw your hand away, the light flashes and the camera takes a picture. Then you replace or flip the document and it happens again. No need to push a button; the removal of your hand with a document in place triggers the photo. A button will be present to say “take it again” or “erase that” but you should not need to push it much. The light should be bright enough that the camera can shoot fairly stopped down, allowing a sharp image with good depth of field. The light might be on all the time in the single-sided version.
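
Here is a rough sketch of that control loop, just to show how simple it could be. The sensor-reading functions are stand-ins for whatever hardware is used, and the capture step assumes a camera that the gphoto2 command-line tool can trigger over USB — many can, but not all.

```python
# Sketch of the trigger loop. hand_near() and document_present() stand in for
# the capacitive and photo sensors; capture() shells out to gphoto2, one common
# way to fire a tethered camera over USB.

import subprocess
import time

def hand_near() -> bool:
    ...  # read the capacitive sensor (hardware-specific, not shown)

def document_present() -> bool:
    ...  # read the photosensor (hardware-specific, not shown)

def capture(filename: str) -> None:
    subprocess.run(
        ["gphoto2", "--capture-image-and-download", "--filename", filename],
        check=True,
    )

def scan_loop():
    page = 0
    armed = False
    while True:
        if document_present() and hand_near():
            armed = True                        # a fresh page is being placed
        elif armed and document_present() and not hand_near():
            page += 1
            capture(f"page_{page:04d}.jpg")     # hand withdrawn: shoot
            armed = False
        time.sleep(0.05)
```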

The camera can’t be just any camera, alas, but many older cameras in the 6MP range would get about 300dpi colour from a typical letter-sized page, which is quite fine. Key is that the camera has macro mode (or can otherwise focus close) and can be made to shoot over USB. An infrared LED could also be used to trigger many consumer cameras. Another plus is manual focus. It would be nice if the camera could just be locked in focus at the right distance, as that means much faster shooting for typical consumer digital cameras. Ideally all of this (macro mode, manual focus) could be set over USB and thus be under the control of the computer.

Of course, 3-D objects can also be shot in this way, though they might get glare from the lights if they have surfaces at the wrong angles. A fancier box would put the lights behind cloth diffusers, making things bulkier, though it can all pack down pretty small. In fact, since the arm can be designed to be easily removed, the whole thing can pack down into a very small box. A sheet of plexi would be available to flatten crumpled papers, though with good depth of field, this might not strictly be necessary.

One nice option might be a table filled with holes and a small suction pump. This would hold paper flat to the table. It would also make it easy to determine when paper is on the table. It would not help stacks of paper much but could be turned off, of course.

A fancier and bulkier version would have legs and support a second camera below the table, which would now be a transparent piece of plexiglass. Double-sided shots could then be taken, though in this case the lights on the other side would have to be turned off when shooting, and a darkened room or a shade around the bottom and part of the top would be a good idea to avoid bleed through the page. Suction might not be such a good idea here. The software should figure out if the other side is blank and discard or highly compress that image. Of course the software must also crop images to size and straighten rectangular items.
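
The blank-side test is easy to sketch. Here is a minimal version assuming Pillow is available; the standard-deviation threshold is a guess that would need tuning for real lighting and paper, and the filename is just an example.

```python
# Minimal "is the back side blank?" check using Pillow.

from PIL import Image, ImageStat

def side_is_blank(path: str, stddev_threshold: float = 8.0) -> bool:
    gray = Image.open(path).convert("L")
    # A blank page under even lighting has very little brightness variation;
    # text or images push the standard deviation up sharply.
    return ImageStat.Stat(gray).stddev[0] < stddev_threshold

if side_is_blank("page_0001_back.jpg"):
    print("Back side looks blank; discard it or compress it heavily.")
```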

There are other options besides the capacitive hand sensor. These include a button, of course, a simple voice command detector, and clever use of the preview video mode that many digital cameras now offer over USB (i.e. the computer can look through the camera and see when the document is in place and the hand is removed). This approach would also allow gesture commands: little hand signals to indicate that the document is single-sided, or B&W, or needs other special treatment.
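
Here is what that preview-watching trick might look like, using OpenCV frame differencing as a stand-in for whatever preview stream the camera actually exposes over USB. The thresholds and the use of a generic VideoCapture device are assumptions for illustration.

```python
# Watch the live preview and fire once the scene stops changing, i.e. the hand
# has left and the page is sitting still.

import cv2

def wait_until_still(cap, still_frames_needed=10, diff_threshold=4.0):
    prev = None
    still = 0
    while still < still_frames_needed:
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute difference between consecutive frames: high while a
            # hand is moving over the table, near zero once the page is alone.
            diff = cv2.absdiff(gray, prev).mean()
            still = still + 1 if diff < diff_threshold else 0
        prev = gray
    return prev   # the settled frame; now trigger a full-resolution shot

cap = cv2.VideoCapture(0)   # stand-in for the camera's USB preview feed
settled = wait_until_still(cap)
```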

The goal, however, is a table where you can just slap pages down, move your hand away slightly, and then slap down another. For stacks of documents one could even put down the whole stack and take pages off one at a time, though this would surely bump the stack a bit, requiring some cleverness in straightening and cropping. Many people would find they could do this as fast as some of the faster professional document scanners, and with no errors on imperfect pages. The scans would not be as good as true scanner output, but good enough for many purposes.

In fact, digital camera photography’s speed (and ability to handle 3-D objects) led both Google Books and the Internet Archive to use it for their book scanning projects. This was of course primarily because they were unwilling to destroy books. Google came up with the idea of using a laser rangefinder to map the shape of the curved book page to correct any distortions in it. While this could be done here it is probably overkill.

One nice bonus here is that it’s very easy to design this to handle large documents, and even to be adjustable to handle both small and large documents. Normally scanners wide enough for large items are very expensive.

Serve the multi-monitor market better with thin or removable bezels

A serious proportion of the computer users I know these days have gone multi-monitor. While I strongly recommend to everybody the 30” monitor I have (Dell 3007WFP and cousins, or Apple’s), at $1000 it’s not the most cost-effective way to get a lot of screen real estate. Today 24” 1080p monitors are down to $200, and flat panels don’t take up much space, so it makes a lot of sense to have two monitors or more.

Except there’s a big gap between them. And while there are a few monitors that advertise a thin bezel, even these have at least half an inch, so two monitors together will still have an inch of (usually black) plastic between them.

I’m quite interested in building a panoramic photo wall with this new generation of cheap panels, but the 1” bars will be annoying, though tolerable from a distance. But do they have to be there?
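
To put the gap in perspective, here is the back-of-envelope arithmetic for a typical 24” 16:9 panel at 1920×1080; the numbers are approximate.

```python
# How many pixel columns does a 1" seam correspond to on a 24" 16:9 panel?

import math

diag_in, aspect_w, aspect_h, px_w = 24.0, 16, 9, 1920
width_in = diag_in * aspect_w / math.hypot(aspect_w, aspect_h)   # ~20.9 inches
ppi = px_w / width_in                                            # ~92 pixels per inch
print(f'A 1" gap hides roughly {ppi:.0f} pixel columns')
```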

There are 1/4” bezel monitors made for the video wall industry, but they are all very high end, and in fact it’s hard to find them for sale on the regular market from what I have seen. If they are sold, they no doubt cost 2-3x as much, as “specialty” market monitors tend to. I really think it’s time to push multi-monitor as more than a specialty market.

I accept that you need to have something strong supporting and protecting the edge of your delicate LCD panel. But we all know from laptops it doesn’t have to be that wide. So what might we see?

  • Design the edges of the monitor to interlock, and have the supporting substrate further back on the left and further forward on the right. Thus let the two panels get closer together. Alternately let one monitor go behind the other and try to keep the distance to a minimum.
  • Design monitors that can be connected together by removing the bezel and protection/mounting hardware and carefully inserting a joiner unit which protects the edges of both panels but gets them as close together as it can, and firmly joins the two backs for strength. May not work as well for 2x2 grids without special joiners.
  • Just sell a monitor that has 2, 3 or 4 panels in it, mounted as close as possible. I think people would buy these, allowing them to be priced even better than two monitors. Offer rows of 1, 2 or 3 and a 2x2 grid. I will admit that a row of 4, which is what I want, is not likely to be as big a market.
  • Sell components to let VARs easily build such multi-panel monitors.

When it comes to multi-panel, I don’t know how close you could get the panels, but I suspect it could be quite close. So what do you put in the gap? Well, it could be a black strip or a neutral strip. It could also be a translucent one that deliberately covers one or two pixels on each side, and thus shines and blends their colours. It might be interesting to see how much you could reduce the visual effect of the gap. The eye has no problem looking through grid windows at a scene and not seeing the bars, so it may be that bars remain the right answer.

It might even be possible to cover the gap with a small thin LCD display strip. Such a strip, designed to have a very sharp edge, would probably go slightly in front of the panels, and appear as a bump in the screen — but a bump with pixels. From a distance this might look like a video wall with very obscured seams.

For big video walls, projection is still a popular choice, other than the fact that such walls must be very deep. With projection, you barely need the bezel at all, and in fact you can overlap projectors and use special software to blend them for a completely seamless display. However, projectors need expensive bulbs that burn out fairly quickly in constant use, so they have a number of downsides. LCD panel walls have enough upsides that people would tolerate the gaps if they can be made small using techniques above.

Anybody know how the Barco wall at the Comcast center is done? Even in the video from people’s camcorders, it looks very impressive.

If you see LCD panels larger than 24” with thin bezels (3/8 inch or less) at a good price (under $250) and with a good quality panel (doesn’t change colour as you move your head up and down) let me know. The Samsung 2443 looked good until I learned that it, and many others in this size, have serious view angle problems.

FlashForward, Deja Vu and Hollywood's problem with time travel

Tonight I watched the debut of FlashForward, which is based on the novel of the same name by Rob Sawyer, an SF writer from my hometown whom I have known for many years. However, “based on” is the correct phrase because the TV show features Hollywood’s standard inability to write a decent time travel story. Oddly, just last week I watched the fairly old movie “Deja Vu” with Denzel Washington, which is also a time travel story.

Hollywood absolutely loves time travel. It’s hard to find a Hollywood F/SF TV show that hasn’t fallen to the temptation to have a time travel episode. Battlestar Galactica’s producer avowed he would never have time travel, and he didn’t, but he did have a god who delivered prophecies of the future, which is a very close cousin. Time travel stories seem easy, and they are fun. They are often used to explore alternate possibilities for characters, which writers and viewers love to see.

But it’s very hard to do it consistently. In fact, it’s almost never done consistently, except perhaps in shows devoted to time travel (where it gets more thought) and not often even then. Time travel stories must deal with the question of whether a trip to the past (by people or information) changes the future, how it changes it, who it changes it for, and how “fast” it changes it. I have an article in the works on a taxonomy of time travel fiction, but some rough categories from it are:

  • Calvinist: Everything is cast, nothing changes. When you go back into the past it turns out you always did, and it results in the same present you came from.
  • Alternate world: Going into the past creates a new reality, and the old reality vanishes (at varying speeds) or becomes a different, co-existing fork. Sometimes only the TT (time traveler) is aware of this, sometimes not even she is.
  • Be careful not to change the past: If you change it, you might erase yourself. If you break it, you may get a chance to fix it in some limited amount of time.
  • Go ahead and change the past: You won’t get erased, but your world might be erased when you return to it.
  • Try to change the past and you can’t: Some magic force keeps pushing things back the way they are meant to be. You kill Hitler and somebody else rises to do the same thing.

Inherent in several of these is the idea of a second time dimension, in which there is a “before” the past was changed and an “after” the past was changed. In this second time dimension, it takes time (or rather time-2) for the changes to propagate. This is mainly there to give protagonists a chance to undo changes. We see Marty McFly slowly fade away until he gets his parents back together, and then instantly he’s OK again.

In a time travel story, it is likely we will see cause follow effect, reversing normal causality. However, many writers take this as an excuse to throw all logic out the window. And almost all Hollywood SF inconsistently mixes up the various modes I describe above in one way or another.

Spoilers below for the first episode of FlashForward, and later for Deja Vu.

Update note: The fine folks at io9 asked FlashForward’s producers about the flaw I raise but they are not as bothered by it.

No, I don't want to participate in a customer satisfaction survey every time

It seems that with more and more of the online transactions I engage in — and sometimes even when I don’t buy anything — I will get a request to participate in a customer satisfaction survey. Not just some of the time in some cases, but with every purchase. I’m also seeing it on web sites — sometimes just for visiting a web site I will get a request to do a survey, either while reading, or upon clicking on a link away from the site.

On the surface this may seem like the company is showing they care. But in reality it is just the marketing group’s thirst for numbers both to actually improve things and to give them something to do. But there’s a problem with doing it all the time, or most of the time.

First, it doesn’t scale. I do a lot of transactions, and in the future I will do even more. I can’t possibly fill out a survey on each, and I certainly don’t want to. As such I find the requests an annoyance, almost spam. And I bet a lot of other people do.

And that actually means that if you ask too much, you will now get a self-selected subset of people who either have lots of free time, or who have something pointed to say (i.e. they had a bad experience or, more rarely, a very good one). So your survey becomes valueless as data collection the more people you ask to do it, or rather the more refusals you get. Oddly, you will get more useful results asking fewer people.

Sort of. Because if every company keeps asking everybody, it creates the same burn-out, and even a survey that is requested from only 1 user in 1000 will still see high rejection and self-selection. There is no answer but for everybody to truly survey only a tiny random subset of their transactions, and to offer a real reward (not some bogus coupon) to get participation.
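
The mechanics of that are trivial, which is part of what makes the current practice frustrating. A sketch, with the sampling rate and the invitation hook as placeholders:

```python
# Survey a tiny random subset of transactions instead of all of them.

import random

SURVEY_RATE = 1 / 1000   # e.g. 1 in 1000 transactions; tune to the volume needed

def ask_for_survey(transaction_id: str) -> None:
    # Placeholder for the actual invitation (email, text, in-page prompt...).
    print(f"Inviting customer on transaction {transaction_id} to a short survey")

def maybe_request_survey(transaction_id: str) -> bool:
    if random.random() < SURVEY_RATE:
        ask_for_survey(transaction_id)
        return True
    return False
```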

I also get phone surveys today from companies I have actually done business with. I ask them, “Do you have this survey on the web?” So far, they always say no, so I say, “I won’t do it on the phone, sorry. If you had it on the web I might have.” I’m lying a bit, in that the probability is still low I would do it, but it’s a lot higher. I can do a web survey in 1/10th the time it takes to get quizzed on the phone, and my time is valuable. Telling me I need to do it on the phone instead of the web says the company doesn’t care about my time, and so I won’t do it and the company loses points.

Sadly, I don’t see companies learning these lessons, unless they hire better stats people to manage their surveys.

Also, I don’t want a reminder from everybody I buy from on eBay to leave feedback. In fact, remind me twice and I’ll leave negative feedback if I’m in a bad mood. I prefer to leave feedback in bulk, so that every transaction isn’t really multiple transactions. Much better if eBay sends me a reminder once a month to leave feedback for those I didn’t report on, and takes me right to the bulk feedback page.

Panoramas of Burning Man 2009

I have put up a gallery of panoramas for Burning Man 2009. This year I went with the new Canon 5D Mark II, which has remarkable low-light shooting capabilities. As such, I generated a number of interesting new night panoramas in addition to the giant ones of the day.

In particular, you will want to check out the panorama of the crowd around the burn, as seen from the Esplanade, and the night scene around the Temple, and a twilight shot.

Below you see a shot of the Gothic Raygun Rocket, not because it is the best of the panoramas, but because it is one of the shortest and thus fits in the blog!

Some of these are still in progress. Check back for more results, particularly in the HDR department. The regular sized photos will also be processed and available in the future.

Finally, I have gone back and rebuilt the web pages for the last 5 years of panoramas at a higher resolution and with better scaling. So you may want to look at them again to see more detail. A few are also up as gigapans including one super high-res 2009 shot in a zoomable viewer.

Events: Randall Munroe (xkcd) tonight in SF, Singularity Summit Oct 3-4 in NYC

Two events I will be at…

Tonight, at 111 Minna Gallery in San Francisco, we at EFF will be hosting a reading by Randall Munroe, creator of the popular nerd comic “xkcd.” There is a regular ticket ($30) and a VIP reception ticket ($100), and just a few are still available. Payments are contributions to the EFF.

In two weeks, on Oct 3-4, I will be speaking on the future of robot cars at the Singularity Summit in New York City. Lots of other good speakers on the podium too.

See you there…

Burning Man Exodus, Part II

Two years ago, I discussed solutions for Burning Man Exodus. The problem: get 45,000 people off the playa in 2 days, 95% of them taking a single highway south that passes through a small town with a chokepoint capacity of about 450 cars/hour. Quite often wait times to get onto the road are 4 hours or more, though this year things were smoother (perhaps due to lower attendance) and fewer people had 4-hour waits. In a bad year, we might imagine that 25,000 people wait an average of 3 hours, or 75,000 person-hours, almost 40 man-years of human labour.
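
Those numbers check out roughly; here is the arithmetic, with the people-per-vehicle figure and the 2,000-hour work year being my own assumptions:

```python
# Rough arithmetic on the exodus problem. The 2.5 people per vehicle and the
# 2,000-hour work year are assumptions; the rest comes from the post.

people, avg_wait_hours = 25_000, 3
person_hours = people * avg_wait_hours          # 75,000 person-hours
work_years = person_hours / 2_000               # ~37.5, i.e. almost 40 "man-years"

vehicles = 45_000 / 2.5                         # ~18,000 vehicles leaving
hours_of_chokepoint = vehicles / 450            # ~40 hours of capacity at 450 cars/hour
print(person_hours, round(work_years, 1), round(hours_of_chokepoint))
```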

Some judge my prior solution, with appointments, as too complex. Let’s try something which is perhaps simpler, at least at its base, though I have also thought up some complexities that may have value.

When you are ready to leave, drive to the gate. There, as was tried in 2000, you would be directed into a waiting lot, shaped with cones at the front. The lot would have perhaps 20 rows of 10 cars (around 150 vehicles, since there are so many trailers and RVs). The lot would have a big number displayed. There you would park. You would then have three options:

  1. Stay in the lot and party with the other people in the lot, or sit in your car. This is in fact what you would do in the current situation, except there you start your engine and go forward 30 feet every minute. Share leftover food. Give donations to DPW crew. Have a good time.
  2. Go to the exodus station near the parking lots. Get a padded envelope and write your address on it, and your plate number, DL number and car description. Put your spare set of keys in the envelope. Get on your bike, or walk back to the city, and have a good time there. Listen to Exodus Radio. They will give reports on when your lot is going to move, in particular a 30 minute warning. When you get it, go back, pick up your keys and get ready.
  3. Volunteer to help with city cleanup. Do that by driving to the premium section of the waiting lot. Park there, wait a bit, and then get on a bus which takes you somewhere to do an hour shift of clean-up. You MOOP-check the playa, clean, take down infrastructure, or spend an hour doing Exodus work which you trained for earlier. At the end of your shift, you are free to take a bus back to the lot, or wait in the city with friends. Listen to Exodus Radio. They will call your premium lot. It will be called well before the regular lot. Ideally give an hour, gain an hour! Get there and be ready.

When your lot is called, the Exodus worker pulls back the cones and the lanes stream out, non-stop (but still 10mph) off the playa. The road does not have to be lengthened to hold more cars. At the blacktop entrance, an Exodus worker with a temporary traffic light has it set to a green left arrow except when other traffic is coming, when it’s red. You turn without hesitation (people do that on green arrows, but slow down for flag workers.) Off you go.

Keys

As noted, it seems a good idea that people who want to leave the lot leave a set of keys. Not their only set — it is foolish to bring only one set to the playa anyway, and if you read the instructions you knew this. This allows Exodus workers to easily move vehicles for people who don’t return, and there will be some. Even so, the lots should be designed so it’s not hard to get around them; if the first lane has a spare lane going the other direction, that works. It does require somebody to hand back the keys. If you read the instructions, they will say to bring photos of yourself (and alternate drivers). Tape that photo to the key envelope and it makes it very quick and easy for the key wrangler to hand you the right keys. Don’t bring a photo and they must confirm a DL number, which they won’t have time to do if time is short — so get there in plenty of time.

If you don’t get there in time to get your keys, you can wait, or you can pay $10 (or whatever it costs) and the BM Org will mail you the keys after the event. Of course with rental vehicles this is not an option, so be there early.

However, it may also be simpler to not do the key system, and tow people who don’t show up, and charge them a fat fee for that. Or tow those who don’t leave a key. People might leave a fake key, which would result in a tow and an even larger fine, perhaps. As such I am not wedded to the key desk idea and it may be simpler to first see if no-shows are a big problem. No-shows can be punished in lots of ways if they signed a contract before leaving.

There is an issue for people who do volunteer work and then head for the city. They need to have left keys before the volunteer shift, or return to the lot to leave them, or not leave them and risk a tow.

Volunteers

Volunteers would get a leader who directs them what they will be doing. A common task will be doing a playa walk/MOOP sweep. The leader will listen to Exodus Radio and know if things move quickly and the volunteers must return. Normally, however, volunteer shifts would be taken only when the line is very long, much longer than a volunteer shift. People can of course offer to do more than one shift when the line is long but in that case they should bring their own portable radio, just as people who leave the lot should bring one or be near one. The shift leaders could also have a radio on loud enough for all to hear, hopefully the DJ will be doing something fun between exodus lot announcements.

As noted, one of the things people can volunteer for is exodus work itself. The offer of early exodus in exchange for an hour of exodus work assures there can never be a shortage of workers as long as there is a base of workers that does it without that reward. You’re helping the people ahead of you in line get out earlier. However, regular (non-leaving) volunteers are needed for when the line is short and first bunches up, and for when it shortens again.

To do exodus work you would have to attend training in advance, and be certified as able to do it. Probably done in SF, but possibly on-playa. Some other volunteer jobs (such as cleanup crew leader) would require some training and approval.

The staff needed are:

  • An exodus DJ (in a tower overlooking the line and the lots) with assistant or two who are controlling the whole operation.
  • Flag worker controlling the traffic light at the blacktop. Possibly others in Gerlach.
  • 2 crews of 1-2 workers directing cars into the lot currently filling up. They also prepare the lot, replacing the exit cones and possibly moving no-show cars to the side. May have a golf cart.
  • 1-2 workers diverting cars from the main exit lane merge (the “fallopian tubes”) to the staging lots when needed. A cop would be very handy here.
  • 1 worker to remove the cones at the lot being emptied and wave cars out of it. (The Exodus DJ is also telling those people to get going.) When only one lane is left, this person moves to the next lot. Worker probably has a scooter or golf cart.
  • 1-2 workers to man the key desk.

Police

The police come in huge numbers and spend a lot of time on victimless crimes. Managing traffic is a great way to make really effective use of their police powers. Police can be there to deal with people who ignore signs, bypass or cut out of lots, or who leave their car without leaving a key or signing the contract.

How to start the lots

It is an interesting problem how to start the lots close to the city. Initially the volume is low and people exit directly, and they will tend to spread into multiple lanes without a lot of work, eager as faster vehicles will be to pass slow ones. Eventually they will bunch up at the forced merge, and then the bunch-up will spread backwards, traffic-jam style. However, this is taking place three miles from the city, at least a 15-minute drive at 10mph. There is a magic amount of back-up at which you should start holding cars, and then a point at which you should release a lot full of them. Fortunately any short gaps you put in the stream are not wasted, as they are re-smoothed on the blacktop before Gerlach-Empire, which is believed to be the primary choke point. However, it will take experience to learn the exact right times, so the first year will not do as well as later years. Data has been kept on car counts from past years, presumably broken down by hour, which could help.
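
For what it's worth, the decision of when to start holding cars can be modelled very crudely: track the backlog behind the chokepoint hour by hour and start staging once it exceeds what the exit road itself can hold. The arrival profile and road capacity in this sketch are made up; the real car-count data would replace them.

```python
# Toy model of when to start staging lots.

CHOKEPOINT_PER_HOUR = 450
ROAD_HOLDS = 300           # assumed number of cars that can queue on the road

def when_to_start_holding(arrivals_per_hour):
    backlog = 0
    for hour, arrivals in enumerate(arrivals_per_hour):
        backlog = max(0, backlog + arrivals - CHOKEPOINT_PER_HOUR)
        if backlog > ROAD_HOLDS:
            return hour, backlog
    return None, backlog

# e.g. a Monday-morning surge: light departures, then heavy
hour, backlog = when_to_start_holding([200, 350, 600, 900, 900, 700])
print(f"Start staging lots in hour {hour}, with ~{backlog} cars backed up")
```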

So what would it cost to allow pre-existing conditions?

There is a number that should not be horribly hard to calculate by the actuaries of the health insurance companies. In fact, it’s a number that they have surely already calculated. What would alternate health insurance systems cost?

A lot of confusion in the health debate concerns two views the public has of health insurance. On the one hand, it’s insurance. Which means that of course insurers would not cover things like most pre-existing conditions. Insurance is normally only sold to cover unknown risk in every other field. If your neighbours regularly shoot flaming arrows onto your house, you will not get fire insurance to cover that, except at an extreme price. Viewed purely as insurance, it is silly to ask insurance companies to cover these things. Or to cover known and voluntary expenses, like preventative care, or birth control pills and the like. (Rather, an insurance company should decide to raise your prices if you don’t take preventative care, or allocate funds for the ordinary costs of planned events, because they don’t want to cover choices, just risks.)

However, we also seek social goals for the health insurance system. So we put rules on health insurance companies of all sorts. And now the USA is considering a very broad change — “cover everybody, and don’t ding them for pre-existing conditions.”

From a purely business standpoint, if you don’t have pre-existing conditions, you don’t want your insurance company to cover them. While your company may not be a mutual one, in a free market every company’s prices should not be too far from what such a plan would charge. Everything your company covers that is expensive and not going to happen to you raises your premiums. If you are a healthy-living, healthy person, you want to insure with a company that covers only such people’s unexpected illnesses, as this will give you the lowest premiums by a wide margin.

However, several things are changing the game. First of all, taxes are paying for highly inefficient emergency room care for the uninsured, and society is paying other costs for a sick populace, including the spread of disease. Next, insurance companies have discovered that if the application process is complex enough, then it becomes possible to find a flaw in the application of many patients who make expensive claims, and thus deny them coverage. Generally you don’t want to insure with a company that would do this: while your premiums will be lower, it is too hard to predict if this might happen to you. The more complex the policy rules, the more impossible it is to predict. However, it is hard to discover this in advance when buying a policy, and hard to shop on.

But when an insurance company decides on a set of rules, it does so under the guidance of its actuaries. They tell the officers, “If you avoid covering X, it will save us $Y” and they tell it with high accuracy. It is their job.

As such, these actuaries should already know the cost of a system where a company must take any client at a premium decided by some fairly simple factors (age being the prime one) compared to a system where they can exclude or surcharge people who have higher risks of claims. Indeed, one might argue that while clearly older people have a higher risk of claims, that is not their fault, and even this should not be used. Every factor a company uses to deny or surcharge coverage is something that reduces its costs (and thus its premiums) or they would not bother doing it.
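
The shape of that calculation is simple even if the real inputs are proprietary. Here is a toy version with invented risk classes, costs and overhead, purely to show what the actuaries would be comparing:

```python
# Toy comparison: a flat "take all comers" premium versus risk-rated premiums.
# All shares and costs below are invented for illustration.

# (share of applicants, expected annual claims in $)
risk_classes = [
    (0.70,  2_000),   # healthy
    (0.20,  6_000),   # moderate risk
    (0.10, 25_000),   # pre-existing / high risk (often excluded or surcharged today)
]

expected_claims = sum(share * cost for share, cost in risk_classes)
overhead = 1.15   # assumed loading for admin and profit

flat_premium = expected_claims * overhead
print(f"Community-rated premium: ${flat_premium:,.0f}/yr for everyone")
for share, cost in risk_classes:
    print(f"  {share:.0%} of applicants would otherwise pay ${cost * overhead:,.0f}/yr")
```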

On the other hand, elimination of such factors of discrimination would reduce costs in selling policies and enforcing policies, though not enough to make up for it, or they would already do it, at least in a competitive market. (It’s not, since any company that took all comers at the same price would quickly be out of business as it would get only the rejects of other companies.)

Single payer systems give us some suggestion on what this costs, but since they all cost less than the current U.S. system it is hard to get guidance. They get these savings for various reasons that people argue about, but not all of them translate into the U.S. system.

There is still a conundrum in a “sell to everybody” system. Insurance plans will still compete on how good the care they will buy is. What doctors can you go to? HMO or PPO? What procedures will they pay for, what limits will they have? The problem is this: If I’m really sick, it is very cost effective for me to go out and buy a very premium plan, with the best doctors and the highest limits. Unlike a random person, I know I am going to use them. It’s like letting people increase the coverage on their fire insurance after their house is on fire. If people can change their insurance company after they get sick then high-end policies will not work. This leaves us back at trying to define pre-existing conditions, and for example allowing people only to switch to an equivalent-payout plan for those conditions, while changing the quality of the plan on unknown risks. This means you need to buy high-end insurance when you are young, which most people don’t. And it means companies still have an incentive to declare things as pre-existing conditions to cap their costs. (Though at least it would not be possible for them to deny all coverage to such customers, just limit it.)

Some would argue that this problem is really just a progressive tax — the health plans favoured by the wealthy end up costing 3 times what they normally would while poorer health plans are actually cheaper than they should be. But it should put pressure on all the plans up the chain, as many poor people can’t afford a $5,000/month premium plan no matter that it gives them $50,000/month in benefits, but the very wealthy still can. So they will then switch to the $2,000/month plan the upper-middle class prefer, and go broke paying for it, but stay alive.

Or let’s consider a new insurance plan, the “well person’s insurance” which covers your ordinary medical costs, and emergencies, but has a lifetime cap of $5,000 on chronic or slow-to-treat conditions like cancer, diabetes and heart disease. You can do very well on this coverage, until you get cancer. Then you leave the old policy and sign up for premium coverage that includes it, which can’t be denied in spite of your diagnosis.

This may suggest that single-payer may be the only plan which works if you want to cover everybody. But single-payer (under which I lived for 30 years in Canada) is not without its issues. Almost all insurance companies ration care, including single payer ones, but in single payer you don’t get a choice on how much there will be.

However, it would be good if the actuaries would tell us the numbers here. Just what will the various options truly cost and what premiums will they generate? Of course, the actuaries have a self-interest or at least an employer’s interest in reporting these numbers, so it may be hard to get the truth, but the truth is at least out there.
