Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.

This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. The blog also has various "topic" and "tag" sections (see menu on right), and some are sub-blogs, like Robocars, photography and Going Green. Try my home page for more info and contact data.

Happy Seasons to all

I’ve been feeling that we in the secular, atheist world should still have an official event at the end of the year; with the Christians and the Jews making merry, it’s a good time for one. We have New Year’s Eve of course, but so does everybody.

This new holiday, to mark the changing of the seasons, might be called “Seasons.” That is in part so that all the people saying “Seasons Greetings” (without the apostrophe, oddly enough) will now be making our greeting. Another name for it could be “Holidays,” but that does have religious roots.

Now the real changing of the seasons is around Dec 21, but it makes far more sense to celebrate a gift holiday on Dec 28-29. That way you can buy gifts for everybody at half price. And we seculars are smart and thrifty.

So Happy Seasons to all.

Video windows that simulate 3-D

I’m waiting for the price of a good >24” monitor with a narrow bezel to drop low enough that I can buy 4 or 5 of them to make a panoramic display wall without the gaps being too large.

However, another idea that I think would be very cool would be to exploit the gaps between the monitors to create a simulated set of windows in a wall looking out onto a scene. It’s been done before in lab experiments with single monitors, but from what I understand, not as a large panoramic or long-term installation. The value of the multi-display approach is that the gap between displays becomes a feature rather than a problem, and viewers can see the whole picture by moving. (Video walls must edit out the seams from the picture, removing the wonderful seamlessness of a good panorama.) We restore the seamlessness in the temporal dimension.

To do this, it would be necessary to track the exact location of the eyes of a single viewer; the illusion only works for one person. From the position of the eyes (in all 3 dimensions) and the positions of the monitors, the graphics card would then project the panoramic image on the monitors as though they were windows in a wall. As the viewer’s head moved, the image would move the other way. As the viewer approached the wall (to a point) the images would expand and move, and likewise shrink when moving away. Fortunately this sort of real time 3-D projection is just what modern GPUs are good at.
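For the projection itself, the standard trick is an asymmetric (“off-axis”) view frustum computed per monitor from the eye position. Here is a minimal sketch in Python, assuming each monitor’s physical position and orientation in room coordinates have been measured; the function name and units are my own.

    import numpy as np

    def frustum_for_monitor(eye, corner, right, up, width, height,
                            near=0.01, far=100.0):
        # eye    -- viewer eye position in room coordinates (metres)
        # corner -- lower-left corner of the monitor's visible area
        # right, up -- unit vectors along the monitor's edges
        # width, height -- physical size of the visible area (metres)
        # Returns (l, r, b, t, near, far) for a glFrustum-style projection.
        normal = np.cross(right, up)      # points out of the screen at the viewer
        rel = corner - eye                # screen corner relative to the eye
        dist = -np.dot(rel, normal)       # perpendicular eye-to-screen distance
        l = np.dot(rel, right) * near / dist
        b = np.dot(rel, up) * near / dist
        r = l + width * near / dist
        t = b + height * near / dist
        return l, r, b, t, near, far

Each frame, the scene (the panorama mapped onto distant geometry) would be rendered once per monitor with that monitor’s frustum; as the eye moves, the frustums change and the image shifts the other way, exactly as described above.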

The monitors could be close together, like window panes with bars between them, or further apart like independent windows. Now the size of the bezels is not important.

For extra credit, the panoramic scene could be shot in layers, so it has a foreground and background, and these could be moved independently. To do this it would be necessary to shoot the panorama from spots along a line and both isolate foreground and background (using parallax, focus and hand editing) and also merge the backgrounds from the shots so that the background pixels behind the foreground ones are combined from the left and right shots. This is known as “background subtraction” and there has been quite a lot of work in this area. I’m less certain over what range this would look good. You might want to shoot above and below as well, to get as much of the hidden background as possible in that layer. Of course having several layers is even better.

The next challenge is to very quickly spot the viewer’s head. One easy approach that has been done, at least with single screens, is to give the viewer a special hat or glasses with easily identified coloured dots or LEDs. It would be much nicer if we could do face detection quickly enough to identify an unadorned person. Chips that do this for video cameras are becoming common; the key issue is whether the detection can be done with very low latency — I think 10 milliseconds (100hz) would be a likely goal. The use of cameras lets the system work for anybody who walks in the room, and quickly switch among people to give them turns. A camera on the wall plus one above would work easily; two cameras on the left and right sides of the wall should also be able to get position fairly quickly.

Even better would be doing it with one camera. With one camera, one can still get a distance to the subject (with less resolution) by examining changes in the size of features on the head or body. However, that only provides relative distance; for example you can tell if the viewer got 20% closer, but not where they started from. You would have to guess that distance, learn it from other cues (such as a known-sized object like the hat) or even have the viewer begin the process by standing on a specific spot. This could also be a good way to initiate the process, especially for a group of people coming to view the illusion. Stand still in the spot for 5 seconds until it beeps or flashes, and then start moving around.
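As a rough illustration (not a claim about the latency goal above), here is how the detection and the “stand on a spot” calibration might look with OpenCV’s stock Haar-cascade face detector; the calibration distance is an arbitrary assumption, and a Haar cascade on a CPU may well miss the 10 ms target.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    CALIB_DISTANCE_M = 2.0   # the marked spot is assumed 2 m from the wall
    calib_width_px = None    # face width in pixels, captured on the spot

    def track_viewer(frame, calibrating=False):
        # Returns (x, y, distance_m) for the viewer's face, or None.
        global calib_width_px
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2])  # largest face = nearest viewer
        if calibrating or calib_width_px is None:
            calib_width_px = w                       # the "stand still" step
        # Apparent size scales inversely with distance from the camera.
        distance = CALIB_DISTANCE_M * calib_width_px / w
        return x + w / 2, y + h / 2, distance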

If the face can be detected with high accuracy and quickly, a decent illusion should be possible. I was inspired by this clever simulated 3-D videoconferencing system which simulates 3-D in this way and watches the face of the viewer.

You need high resolution photos for this, as only a subset of the image appears in the “windows” at any given time, particularly when standing away from the windows. It could be possible to let the viewer get reasonably close to the “window” if you have a gigapan style panorama, though a physical barrier (even symbolic) to stop people from getting so close that the illusion breaks would be a good idea.

Twitter clients, only shorten URLs as much as you truly need to and make them readable

I think URL shorteners are a curse, but thanks to Twitter they are growing vastly in use. If you don’t know, URL shorteners are sites that will generate a compact encoded URL for you, to turn a very long link into a short one that’s easier to cut and paste, and in particular these days, one that fits in the 140 character constraint on Twitter.

I understand the attraction, and not just on Twitter. Some sites generate hugely long URLs which fold over many lines if put in text files or entered for display in comments and other locations. The result, though, is that you can no longer determine where a link will take you from the URL. This hurts the UI of the web, and makes it possible to fool people into going to attack sites or Rick Astley videos. Because of this, some better Twitter clients re-expand the shortened URLs when displaying on a larger screen.

Anyway, here’s an idea for the Twitter clients and URL shorteners, if they must be used. In a tweet, figure out how much room there is for the compacted URL, and work with a shortener that will let you generate a URL of exactly that length. If that length leaves some room, try to put in some elements from the original URL so I can see them. For example, you can probably fit the domain name, especially if you strip off the “www.” from it (in the visible part, not in the real URL.) Try to keep as many things that look like real words, and strip things that look like character-encoded binary codes and numbers. In the end you’ll need something to make the short URL unique, but not that much. And of course, if a short URL already exists for the target, re-use it.
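A minimal sketch of the “readable stub” idea in Python; the shortener API that would append the unique suffix is left out, and the word-vs-blob heuristic is just one plausible choice.

    import re
    from urllib.parse import urlparse

    def readable_stub(long_url, budget):
        # Keep the domain (minus "www.") and path pieces that look like words,
        # dropping things that look like encoded binary, IDs and numbers.
        parts = urlparse(long_url)
        pieces = [parts.netloc.removeprefix("www.")]
        for seg in parts.path.split("/"):
            if re.fullmatch(r"[a-zA-Z-]{3,}", seg):   # words, not hex blobs
                pieces.append(seg)
        return "/".join(pieces)[:budget]

    # readable_stub("http://www.example.com/2009/12/some-article?id=83aF9x", 40)
    # -> "example.com/some-article"; the shortener then appends its unique bit.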

Google just launched its own URL shortener. I’m not quite sure what the motives of URL shortener sites are. While I sometimes see redirects that pause at the intermediate site, nobody wants that, and so few ever use such sites. The search engines must have started ignoring URL redirect sites when it comes to pagerank long ago. They take donations and run ads on the pages where people create the tiny URLs, but the ones used on Twitter are almost all automatically generated, so the user never sees the site.

Wanted: An IRC Bot to gateway to a twitter backchannel

It’s now becoming common to kludge a conference “backchannel” onto Twitter. I am quite ambivalent about this. I don’t think Twitter works nearly as well as an internal backchannel, even though there are some very nice and fancy twitter clients to help make this look nicer.

But the real problem comes from the public/private confusion. Tweets are (generally) public, and even if tagged by a hashtag to be seen by those tracking an event, they are also seen by your regular followers. This has the following consequences, good and bad.

  • Some people tweet a lot while in a conference. They use it as a backchannel. That’s overwhelming to their followers who are not at the conference, and it fills up the feed.
  • When multiple people do it, it’s almost like spam. I believe that conferences like using Twitter as a backchannel because it causes constant mentions of their conference to be broadcast out into the world.
  • While you can filter out a hashtag in many twitter clients, it’s work to do so, and the general flooding of the feed is annoying to many.
  • People tweeting at a conference are never sure about who they are talking to. Some tweets will clearly be aimed at fellow conference attendees. But many are just repeats of salient lines said on stage, aimed only at the outsiders.
  • While you can use multiple tags and filters to divide up different concurrent sessions of a conference, this doesn’t work well.
  • The interface on Twitter is kludged on, and poor.
  • Twitter’s 140 character limit is a burden on a backchannel. Backchannel comments are inherently short, and no fixed limit is needed on them. Sure, sometimes you go longer, but never much longer.
  • The Twitter limit forces URLs to be put into URL shorteners, which obscure where they go and are generally a bane of the world.

Dedicated backchannels are better, I think. They don’t reach the outside world unless the outsiders decide to subscribe to them, but I think that’s a plus. I think the right answer is a dedicated, internal-only backchannel, combined with a minimal amount of tweeting to the public (not the meeting audience) for those who want to give their followers some snippets of the conferences their friends are going to. The public tweets may not use a hashtag at all, or a different one from the “official” backchannel as they are not meant for people at the conference.

The most common dedicated backchannel tool is IRC. While IRC has its flaws, it is much better at many things than any of the web applications I have seen for backchannel. It’s faster and has a wide variety of clients available to use with it. While this is rarely done, it is also possible for conferences to put an IRC server on their own LAN so the backchannel is entirely local, and even keeps working when the connection to the outside world gets congested, as is common on conference LANs. I’m not saying IRC is ideal, but until something better comes along, it works. Due to the speed, IRC backchannels tend to be much more rapid fire, with dialog, jokes, questions and answers. Some might view this as a bug, and there are arguments that slowing things down is good, but Twitter is not the way to attain that.

However, we won’t stop those who like to do it via Twitter. As noted, conferences like it because it spams the tweetsphere with mentions of their event.

I would love to see an IRC Bot designed to gateway with the Twitter world. Here are some of the features it might have.
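Whatever the exact feature list, the skeleton of such a bot is simple. Here is a bare-bones sketch in Python; the IRC side uses the raw protocol, while the Twitter side is a placeholder, since the polling and filtering rules are exactly where the interesting features would go. The server, channel and nick are of course made up.

    import socket, time

    SERVER, PORT = "irc.example.org", 6667
    CHANNEL, NICK = "#conf-backchannel", "tweetbot"

    def fetch_new_tweets(hashtag):
        # Placeholder: return a list of (user, text) pairs seen since the
        # last call. A real bot would poll the Twitter search API here.
        return []

    irc = socket.create_connection((SERVER, PORT))
    irc.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :backchannel gateway\r\n".encode())
    irc.sendall(f"JOIN {CHANNEL}\r\n".encode())
    irc.settimeout(0.5)

    while True:
        for user, text in fetch_new_tweets("#myconf"):
            irc.sendall(f"PRIVMSG {CHANNEL} :<{user}> {text}\r\n".encode())
        try:
            data = irc.recv(4096).decode(errors="replace")
            if data.startswith("PING"):                 # stay connected
                irc.sendall(data.replace("PING", "PONG", 1).encode())
        except socket.timeout:
            pass
        time.sleep(15)    # stay well inside Twitter's rate limits

Going the other way, relaying selected IRC lines out as public tweets, would be the same skeleton with the roles reversed.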

Why facebook wants you to open up your profile

There is some controversy, including a critique from our team at the EFF of Facebook’s new privacy structure, and their new default and suggested policies that push people to expose more of their profile and data to “everyone.”

I understand why Facebook finds this attractive. “Everyone” means search engines like Google, and also external 3rd party apps like those that sprang up around Twitter.

On Twitter, I tried to have a “protected” profile, open only to friends, but that’s far from the norm there, and it turns out it doesn’t work particularly well. Because Twitter is mostly exposed to public view, all sorts of things appeared that treat Twitter as more of a microblogging platform than a way to share short missives with friends. None of these new functions worked on a protected account. With a protected account, you could not even publicly reply to people who did not follow you. Even the Facebook app that imports your tweets to Facebook doesn’t work on protected accounts, though it certainly could.

Worse, many people try to use twitter as a “backchannel” for comments about events like conferences. I think it’s dreadful as a backchannel, and conferences encourage it mostly as a form of spam: when people tweet to one another about the conference, they are also flooding the outside world with constant reminders about the conference. To use the backchannel though, you put in tags and generally this is for the whole world to see, not just your followers. People on twitter want to be seen.

Not so on Facebook, and it must be starting to scare them. On Facebook, for all its privacy issues, mainly you are seen by your friends. Well, and all those annoying apps that need to know everything about you just so you can use them. You disclose a lot more to Facebook than you do to Twitter, and so it’s scary to see a push to make it more public.

Being public means that search engines will find material, and that’s hugely important commercially, even to a site as successful as Facebook. Most sites in the world would be disturbed to learn what a huge fraction of their traffic comes from search engines. Facebook is an exception, but doesn’t want to be. It wants to get all the traffic it gets now, plus more.

And then there’s the cool 3rd party stuff. Facebook of course has its platform, and that platform has serious privacy issues, but at least Facebook has some control over it, and makes the “apps” (really embedded 3rd party web sites) agree to terms. But you can’t beat the innovation that comes from having less controlled entrepreneurs doing things, and that’s what happens on twitter. Facebook doesn’t want to be left behind.

What’s disturbing about this is the idea that we will see sites starting to feel that abandoning or abusing privacy gives them a competitive edge. We used to always hope that sites would see protecting their users’ privacy as a competitive edge, but the reverse could take place, which would be a disaster.

Is there an answer? It may be to try to build applications in more complex ways that still protect privacy. Though in the end, you can’t do that if search engines are going to spider your secrets in order to do useful things with them; at least not the way search engines work today.

Robocars of the future from 1958 and Google Tech Talk

You may have seen it already, but it’s amusing to watch this encoding of a 1958 Disney show on the highway of the future.

This highway features a mixture of human driven cars, robocars and PRT style robocars on private guideways. Much of it is typical of ancient predictions of the future, with an expectation of remarkably cheap and strong materials and portable atomic power, but some of it is on the mark (like urban sprawl.) The roles of mom and dad don’t change in 50 years. It is always humbling to go back into the past and see how futurists have got it wrong, and wonder where you are going wrong in your own predictions. (I’ve learned you should never predict dates for things because while you might have a sense about how long it would take to develop the technology, you can’t as easily predict how markets and governments will react.)

However, I will be prognosticating again next week, giving my Robocar talk in the Google “Tech Talk” series at Google HQ in Mountain View on Dec 14 at 11am. While the talk is not generally open to the public, I can bring in guests if they all come in at around 10:30am. Contact me if this is of interest. They will put the talk up on Google Video when done. Of course, if you are a Googler, I hope to see you there.

It’s amusing to note that much of the vision seen in the movie “Minority Report” is found in this much earlier Disney video.

Make e-Ink tablets an add-on for our phone/PDAs, not stand-alone

It’s over 17 years since I first took a stab at e-Books, and while I was far too early, I must admit I had not predicted I would be that early. The market is now seeing a range of e-Ink based electronic book readers, such as the Kindle, with some reasonable adoption. But I don’t have one yet; instead I read e-books on my tiny phone screen. Why?

The phone has the huge advantage that it is always with me. It gives me a book any time I am caught waiting. On a train, in a doctor’s office, there is always a way to catch up on reading. It’s not ideal, and I don’t use it to read at home in bed, but it’s there. The tablets are all large, and for a good reading experience, people like them even larger. This means they are only there when you make deliberate plans to read, and pack them in your bag.

I’m not that thrilled with e-Ink yet, both for its low contrast and the annoying way it has to flash black in order to reset, causing a distracting delay when turning the page. There are ways to help that, but as yet it suffers. e-Ink also can’t readily be used for annotation or interactive operation, so many devices will keep a strip of LCD for things like selecting from menus and the like. Many of the devices also waste a lot of space with a keyboard, and the Kindle includes a cellular radio in order to download books. e-Ink does have a huge advantage in battery life.

What makes sense to me instead would be a sheet (or two sheets, folded) of e-Ink with very little in the way of smarts inside the device. Instead, it would be designed so that a variety of cell phones could dock to the e-Ink sheet and provide the brains. Phones have different form factors, of course, and different connectors, though almost all can do USB (annoyingly only as a slave, but this can be kludged around). It would be necessary to make small plastic holders for the different phone models which can mate to a mount on the book display, ideally connecting the data port at the same time. The tablet of course should be able to connect to a laptop via USB (this time as a slave) and support the same reading actions. The docking can also, I am reminded by the commenters, be done over bluetooth, with interesting consequences.

This has many large advantages:

  • Done right, this tablet is a fair bit cheaper. It has minimal brains inside, and no cell phone. In fact, for most people, it also does not include the cost of a cell phone data service. (I presume with the Kindle the cost of that is split between the unit and the book sales, but either way, you pay for it.)
  • The cell phone provides an interactive LCD screen to use with all the reader’s interactive functions — book buying, annotating etc.
  • The cell phone provides a data connection for downloading books, newspapers and web pages.
  • The cell phone provides a keyboard for the few times you use a keyboard on an e-Book reader.
  • When you don’t have your e-Ink tablet, you still have all your books, and can still order books.

The main thing the cell phone doesn’t have is huge battery life. The truth is, however, that cell phones have excellent battery life if they are not turning on their screen or doing complex network apps. We do such activities of course, and they drain our batteries, but we expect that and thus charge regularly and carry more. I’m not too scared by the idea of not being able to read my books if the phone is dead.

The tablet could also be used with a laptop, especially a netbook. Laptops can actually run for a very long time if you put them in a power conserving mode, turning off the screen and disks, possibly even suspending the CPU between complex operations.

However, there is no need to run it at all. While I described the tablet as being dumb, it takes very little smarts for it to let you page through a pre-rendered book that was fed to it by the phone or laptop. That can be done with a low power microcontroller. It just would not do any fancy interactive operations without turning on the phone or laptop. And indeed, for the plain reading of a single book, akin to what you can do with the paper version, it would be able to operate on its own.
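For a sense of how little the standalone mode demands, here is a MicroPython-flavoured sketch of the whole “dumb” reading loop. The epd driver, pin numbers and file layout are all hypothetical stand-ins; the point is only that paging through pre-rendered bitmaps is microcontroller-scale work.

    from machine import Pin
    import time
    import epd    # hypothetical driver for the e-Ink panel controller

    NEXT = Pin(2, Pin.IN, Pin.PULL_UP)    # page-turn buttons, active-low
    PREV = Pin(3, Pin.IN, Pin.PULL_UP)
    page = 0

    def show(n):
        # Pages were pre-rendered and stored in flash by the docked phone.
        with open("/pages/%05d.pbm" % n, "rb") as f:
            epd.display(f.read())         # one full flash-refresh draw

    show(page)
    while True:
        if not NEXT.value():
            page += 1
            show(page)
        elif not PREV.value() and page > 0:
            page -= 1
            show(page)
        time.sleep_ms(50)                 # sleep between button polls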

Of course, the vendors would not want to support every phone. But they could cut a deal to let people use old supported phones (which are in plentiful supply as people recycle phones constantly) with a minimal books-only data plan similar to the plans they have cut for the dedicated devices. In the GSM world, they could offer a special SIM good only for book operations for use in an older phone of the class they do support. And they could also build a custom module that slots perfectly into the tablet with the cell modem, small LCD screen and keyboard for those who still want a stand-alone device.

This approach also allows you to upgrade your tablet and your phone independently.

As noted, I think a folding tablet makes a lot of sense. This is true for two reasons. First, you get more screen real estate in half the width of tablet. Secondly, with two e-Ink panels, you can play some tricks and flash-refresh the panel you aren’t reading. While slightly distracting (depending how it’s done) it means that when you want to switch to the next page, you do it with your eyes, with no delay. You have to push a button when you switch (even going from left to right, where it’s not apparently needed) so that the page you have fully finished refreshes while you are reading the next one. This could also be done with timings, or even with a small camera watching your eyes, though I was trying to make the tablet dumber and this takes CPU and power right now. I can imagine other tricks that would work, such as sensing how you hold the tablet (capacitive detection of your grip, or accelerometer detection of the angle.)

The tablet could also be built so the two pages of e-Ink are on the front and back. In this case it would not fold, though a slipcover would be a good idea. A “flip tablet” would display page 1 to you with page 2 on the back. To read page 2 you would physically flip it over. It would detect that of course, and change page 1 to page 3 when it was on the other side. This would mean the distraction of the flash-refresh would not be visible to you, which is a nice plus.

Cutely, the flip tablet could detect which direction you flip it. So if you flip it counter clockwise, you get the next page. If you flip it clockwise you get the previous page. Changing direction means you might briefly see the flash while you are flipping the unit but the UI seems pretty good to me. For those who don’t like this interface, the unit could still hinge out in the middle to show both pages at once.

Bluetooth connection

Using bluetooth for the connection has a number of interesting consequences. It does use power, and does not allow exchange of power between the tablet and device, but it means you don’t have to physically put the phone on the tablet at all. This may be a pain in some circumstances (needing two hands to do interactive things) but in other circumstances having a remote control to use to flip pages can be a real win.

I have found a very nice way to do e-reading is to have the pages displayed in front of you, at eye height, rather than down low in your hands. In particular, if you can mount your tablet on the top back of an airplane seat, it is much more comfortable than holding a book or tablet in your hands. The main downside is that the overhead light does not shine on the page there, so you need a backlight or LED book light. The ability to do remote control from your phone, in your rested hand would be great. Unfortunately they have the strange idea that they want to ban bluetooth on planes, though it poses no risk. They don’t even like wires.

In the 90s I built a device for reading books on planes where I got a book holder (they do make those) and I rigged it to attach with velcro and hang from the back of the seat in front. In those days it was quite common to have velcro on the top of the seat. Combined with a book light, I found this to be way more comfortable than holding a book in my hands, and I read much more pleasantly. Today you might have to build it so that a plate wedges between the raised table and seatback and a rod sticks out to hold the tablet.

Or on some planes they could support e-books on the screen in the seatback, with the remote control that is in your armrest. Alas, they would indeed need to use bluetooth so your PDA could display the book on that screen. (In general, letting your PDA use the screen in front of you would be very nice. It’s too sucky a resolution for laptops, since it must have been designed years ago in an era of sucky resolution. Today 1650 x 950 displays cost $100.)

How to do a distributed Twitter (MSM)

Dave Winer recently made a call for an open source twitter shell, which he suggests be perhaps done with a javascript framework to let any site act like twitter.com. Many people are interested in this sort of suggestion, because while the folks at twitter.com are generally well loved and felt to be good actors, many people fear that no publishing system that becomes important should be controlled by just one company.

For success, such a system would need to be as easy to use and set up as Twitter for users, and pretty easy to set up for server operators. One thing it can’t do so easily, alas, is use a simple single namespace the way Twitter does. A distributed system probably has to make names be domains, like E-mail addresses. That almost surely means something longer than Twitter names, and no use of the @name syntax popular on Twitter for referring to users. On the other hand, almost everybody already has a domain-based ID, ie. their E-mail address. Then again, most people are afraid to use this ID in public where it might get spam. It’s a shame, but many might well prefer an ID different from their E-mail, or of course one at Twitter, which would now look like user@twitter.com to the outside world instead of @user within Twitter.

Naming problems aside, the denizens of the internet are certainly up to building a publish/subscribe based short message multicasting service, which is what Twitter is, to use terms much older than the company. I might propose the name MSM for the technology (Multicast Short Message).
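To make the idea concrete, here is a sketch of what a single MSM message might carry, written as a Python dict; every field name is invented for illustration, and the domain-qualified author ID is the naming scheme discussed above.

    # Sketch of one multicast short message; all field names are invented.
    msm_message = {
        "author": "user@twitter.com",      # domain-based ID, not a bare @name
        "body": "Short messages, with no fixed 140-character straitjacket.",
        "timestamp": "2009-12-01T18:22:04Z",
        "in_reply_to": None,               # or the URI of another message
        "feed": "https://twitter.com/feeds/user",  # where subscribers sign up
    }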

911 should be able to stop a train

It was over 5 years ago that I blogged about a robot that would travel in front of a train to spot cars stuck on the tracks in time to stop.

I recently read a local story about an RV that was demolished while stuck on the tracks here. The couple had time to talk to 911, who told them to get out. It’s not entirely clear from the story, but it seems a moderate amount of time (a couple of minutes) may have passed before their RV was smashed.

Here’s what should happen, and perhaps it does happen in some places:

  1. The 911 service should receive GPS and cell tower location for the caller. The moment the caller indicates they are stuck on the tracks, the 911 operator should push a button which figures out which tracks it might be and which trains might be approaching that crossing (a sketch of this lookup follows the list).
  2. Ideally trains are reporting their location with GPS as some do, but schedules can be used, or all trains anywhere near the area can be alerted.
  3. Signal lights close to the crossing should immediately go red, and cell phones of operators on the relevant trains should be called, and the computer or 911 operator can indicate which crossing is blocked. If the engineer is approaching that crossing they can emergency brake.
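A minimal sketch of the lookup in step 1, in Python. The crossing database and live train feed are assumptions (records with lat/lon fields), and the 15 km alert horizon is an illustrative number; a real system would use braking distance and track topology.

    from math import radians, sin, cos, asin, sqrt

    def km(lat1, lon1, lat2, lon2):
        # Great-circle distance in kilometres (haversine formula).
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + \
            cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 6371 * 2 * asin(sqrt(a))

    def crossing_for_caller(lat, lon, crossings):
        # Pick the level crossing nearest the 911 caller's reported position.
        return min(crossings, key=lambda c: km(lat, lon, c["lat"], c["lon"]))

    def trains_to_alert(crossing, trains, horizon_km=15):
        # Alert every train within the horizon; far better to stop a train
        # needlessly than to miss the one that cannot stop in time.
        return [t for t in trains
                if km(t["lat"], t["lon"],
                      crossing["lat"], crossing["lon"]) <= horizon_km]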

This can be enhanced a few ways:

  • Each crossing can have a big sign, “If stuck, get out of vehicle immediately, clear track (show direction) and call 911, and give this crossing number NNN.” The crossing number would work even if GPS and cell towers don’t locate the crossing.
  • Alternately, there could be a 10 digit phone number, different for each crossing. There is, however, some risk of abuse and false reports. You don’t want a war dialing telemarketer to stop trains. An operator may still need to confirm.
  • As noted, the sign should try to tell people to clear to the area slightly “upstream” (ie. towards the oncoming train, but not on the tracks, obviously.) That’s because when the train hits the car it throws it sideways and forward, never backwards along the path the train came from.
  • If you don’t see or hear a train, it makes slight sense to get out and call while walking so the call comes sooner. If you can see the train they can see you and it’s probably too late anyway. But human safety is more important.
  • The trains may have another way to reach the engineer, such as a private radio system, but just having a cell phone on each train (plus knowing train staff’s personal cell phones and calling all of them) seems like a quick and easy solution. The cell in the train can have a very loud and flashing ringer, especially for an emergency call.

It takes a long time to stop a train, but I bet most vehicles that get stuck on the tracks are stuck minutes before the train comes.

Swap should be encrypted by default

There are a variety of tools that offer encrypted filesystems for the various OSs. None of them are as easy to use as we would like, and none have reached the goal of “Zero User Interface” (ZUI) that is the only thing which causes successful deployment of encryption (ie. Skype, SSH and SSL.)

Many of these tools have a risk of failure if you don’t also encrypt your swap/paging space, because your swap file will contain fragments of memory, including encrypted files and even in some cases decryption keys. There is a lot of other confidential data which can end up in swap — web banking passwords and just about anything else.

It’s not too hard to encrypt your swap on linux, and the ecryptfs tools package includes a tool to set it up (the swap itself is encrypted not with ecryptfs but with dm-crypt, the block-device encryptor; the tool just sets that up for you.)
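For the curious, on many Linux systems the whole arrangement comes down to two configuration lines like the following; the partition name is of course an assumption for illustration. A fresh random key is read from /dev/urandom at every boot, which is exactly the per-session key scheme proposed below.

    # /etc/crypttab -- map the swap partition with a fresh random key each boot
    cryptswap  /dev/sda2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

    # /etc/fstab -- use the mapped device as swap
    /dev/mapper/cryptswap  none  swap  sw  0  0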

However, I would propose that swap be encrypted by default, even if the user does nothing. When you boot, the system would generate a random key for that session, and use it to encrypt all writes and reads to the swap space. That key of course would never be swapped out, and furthermore, the kernel could even try to move it around in memory to avoid the attacks the EFF recently demonstrated where the RAM of a computer that’s been turned off for a short time is still frequently readable. (In the future, computers will probably come with special small blocks of RAM in which to store keys which are guaranteed — as much as that’s possible — to be wiped in a power failure, and also hard to access.)

The automatic encryption of swap does bring up a couple of issues. First of all, it’s not secure with hibernation, where your computer is suspended to disk. Indeed, to make hibernation work, you would have to save the key at the start of the hibernation file. Hibernation would thus eliminate all security on the data — but this is no worse than the situation today, where all swap is insecure. And many people never hibernate.

What's this odd twitter spam about?

Some recent searches have revealed unusual activity on twitter, and I wonder where it’s going. Narcissus searches on twitter reveal a variety of accounts tweeting links into my blog and sites, for reasons not clearly apparent.

For example, a week ago, a half dozen identical twitter accounts all tweeted my post about electric cars playing music. All the accounts had pictures of models as their icon, and the exact same set of twitter posts, which seem to be a random collection of blog and news URLs with a bit.ly pointer to the item, all posted via twitterfeed. These accounts seem to follow and be followed by about 500, presumably the same list.

Here are some of the accounts:

Then more recently I see another set of accounts which all follow about 20 people but are followed by about 200 to 500. They are all posting “from API” and again are just posting links, this time with tinyurl.com. The account names are odd, too.

  • sheen0uz
  • http://twitter.com/moshelir3u
  • http://twitter.com/felecin9v

These also seem to have cute girls as icons. However, strangely, the many followers appear to be real, or at least some of them do. Why are people following a spam robot? Are the followers people who were paid to do it, or are they in some twitter-optimization scheme?

What I am curious about is the motive. Are they linking to real sites in the hope of gaining some sort of legitimacy in twitter indexing engines, so that later they can start linking to people who pay for it? (Twitter SEO?) Are they trying to form twitter equivalents of link farms? Are they just hoping that site authors will see the backlinks and look at them for some later purpose? (You would be amazed how many hits on a web server are there just to put a spammer in the “Referer” field, either to get you to look, or to show up in referer logs that some sites post to the web.)

Thoughts on what’s up?

Negative copier for digital camera

As digital cameras have developed enough resolution to work as scanners, such as in the scanning table proposal I wrote about earlier, some people are also using them to digitize slides. You can purchase what is called a “slide copier” which is just a simple lens and holder which goes in front of the camera to take pictures of slides. These have existed for a long time as they were used to duplicate slides in film days. However, they were not adapted for negatives since you can’t readily duplicate a colour negative this way, because it is a negative and because it has an orange cast from the substrate.

There is at least one slide copier (the Opteka) which offers a negative strip holder; however, that requires a bit of manual manipulation, and the orange cast reduces the colour gamut you will get after processing the image. Digital photography allows imaging of negatives because we can invert and colour-adjust the result.
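The invert-and-adjust step is simple enough to sketch. Assuming the frame has been photographed as a linear RGB array, the usual trick is to sample a patch of unexposed film (the rebate between frames), which shows the pure orange mask, divide it out, and invert; the gamma value here is purely illustrative.

    import numpy as np

    def negative_to_positive(img, mask_patch):
        # img        -- float RGB array in [0, 1] from the raw converter
        # mask_patch -- crop of unexposed film showing the pure orange mask
        mask = mask_patch.reshape(-1, 3).mean(axis=0)  # average mask colour
        balanced = np.clip(img / mask, 0.0, 1.0)       # divide out the cast
        positive = 1.0 - balanced                      # invert the negative
        return positive ** (1 / 1.8)                   # illustrative gamma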

To get the product I want, we don’t have too far to go. First of all, you want a negative strip holder which has wheels in the sprocket holes. Once you have placed your negative strip correctly with one wheel, a second wheel should be able to advance exactly one frame, just like the reel in the camera did when it was shooting. You may need to do some fine adjustments, but it is also satisfactory to have the image cover more than 36mm so that you don’t have to be perfectly accurate, and have the software do some cropping.

Secondly, you would like it so that ideally, after you wind one frame, it triggers the shutter using a remote release. (Remote release is sadly a complex thing, with many different ways for different cameras, including wired cable releases where you just close a contact but need a proprietary connector, infrared remote controls and USB shooting. Sadly, this complexity might end up adding more to the cost than everything else, so you may have to suffer and squeeze it yourself.) As a plus, a little air bulb should be available to blow air over negatives before shooting them.

Next, you want an illuminator behind the negative or slide. For slides you want white of course. For negatives however, you would like a colour chosen to undo the effects of the orange cast, so that the gamut of light received matches the range of the camera sensors. This might be done most easily with 3 LEDs matched to camera sensors in the appropriate range of brightness.

You could also simply make a product out of this light, to be used with existing slide duplicators; that’s the simplest way to do this in the small scale.

Why do all this, when a real negative scanner is not that expensive, and higher quality? Digitizing your negatives this way would be fast. Negative scanners all tend to be very slow. This approach would let you slot in a negative strip, and go wind-click-wind-click-wind-click-wind-click in just a couple of seconds, not unlike shooting fast on an old film camera. You would get quite decent scans with today’s high quality DSLRs. My 5D Mark II with 21 megapixels would effectively be getting around 4000 dpi, though with Bayer interpolation. If you wanted a scan for professional work or printing, you could then go back to that negative and do it on a more expensive negative scanner, cleaning it first etc.

Another solution is just to send all the negatives off to one of the services which send them to India for cheap scanning, though these tend to be at a more modest resolution. This approach would let you quickly get a catalog of your negatives.

Light Table

Of course, to get a really quick catalog, another approach would be to create a grid of 3 rows of negative strip holders which could then be placed on a light table — ideally a light table with a blueish light to compensate for the orange cast. Take a photo of the entire grid to get 12 individual photos in one shot. This will result (on the 5D) in about 1.5 megapixel versions of each negative. Not sufficient to work with, but fine for screen and web use, and not too far off the basic service you get from the consumer scanning companies.

I have some of my old negatives in plastic sheets that go in binders, so I could do it directly with them, but it’s work to put negatives into these and would be much easier to slide strips into a plastic holder which keeps them flat. Of course, another approach would be to simply lay the strips on the light table and put a sheet of clear plexiglass on top of them, and shoot in a dim room to avoid reflections.

Negative viewer

It would also be useful if digital cameras or video cameras tossed in a “view colour negative” mode which did its best to show an inverted live preview image with the orange cast removed. Then you could browse your negatives by holding them up to your camera (in macro mode) and see them in their true form, if at lower resolution. You can usually figure out what’s in a negative, but sometimes it’s not so easy without a loupe; this mode would make it easy.

Should be many choices when the phone rings

Long ago I described how I want my cell phone to let me command it to play a recording to a caller noting that I have answered but need some time before I can talk, and also how I want the phone to stop ringing once I start fumbling for it. I learned that a few phones do have the former feature in a simple form, and it is something that is within the range of an app in some OSs but not others.

Let me expand those ideas to a more complete list of what a phone and voicemail system could and should do when a call is coming in. My friends Rohit Khare and Salim Ismail recently released a cool product they call Caller ID 2.0, which shows you more advanced screen pops about the incoming caller, such as their recent tweets and Facebook status, which is quite cute if a bit spooky. But I refer instead to the choices I might make after seeing their number and other such information.

First of all, as before, I should be offered the ability to answer the call and play one of a couple of different recordings until I start speaking into the phone. As described, these recordings would be along the lines of, “I’m going to take your call but I am briefly busy, driving or in an audience. Please hold on while I get somewhere that we can talk.” Since the phone should know (based on the rate of change of cell towers, or GPS) that I am driving, it should be able to figure out which of the two conditions to report. While there are some minor privacy issues, it is worthwhile to let the other person know you are driving, as you really should have a different sort of conversation. It might even be worth letting people know in the ringback that you are driving, but there are privacy issues in doing that, with strangers in particular, but even with spouses.
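The “am I driving?” decision is easy to sketch from two recent GPS fixes; the 20 km/h threshold and the recording names are illustrative assumptions.

    from math import radians, sin, cos, asin, sqrt

    def speed_kmh(fix1, fix2):
        # Rough ground speed from two (lat, lon, unix_seconds) GPS fixes.
        (la1, lo1, t1), (la2, lo2, t2) = fix1, fix2
        dlat, dlon = radians(la2 - la1), radians(lo2 - lo1)
        a = sin(dlat / 2) ** 2 + \
            cos(radians(la1)) * cos(radians(la2)) * sin(dlon / 2) ** 2
        km = 6371 * 2 * asin(sqrt(a))
        return km / ((t2 - t1) / 3600.0)

    def pick_recording(prev_fix, cur_fix):
        # Thresholds and file names are illustrative.
        if speed_kmh(prev_fix, cur_fix) > 20:
            return "driving.ogg"   # "I'm driving; hold on while I pull over"
        return "busy.ogg"          # "I'm briefly busy or in an audience"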

If the network will cooperate, it also makes sense to have choices that will, like the current “ignore” button, send the call to voice mail. These buttons however would control what sort of greeting is played, and perhaps other actions.

For example, you might send the call to a voice mail saying “Hi, I was too busy to take the call but I am with my phone, and I plan to get back to you within a few minutes. No need to leave a message.” (Though if there was no caller ID, you might indicate that they should enter their phone number for the callback.) You could also have 2 buttons, to describe a longer wait time or different procedures, such as “I will call you back when I get to the office.” As before, one button might make the greeting reveal things to the caller that you want to reveal, such as “Tell my location and speed.” After all, quite often with a trusted caller, the main purpose of the call will be to ask where you are and when you are going to get where you’re going.

Of course, as I proposed in 2005, if they do start leaving voice mail for you, and are on the same carrier, then an attempt to call them back should break into the voicemail session rather than giving you their voice mail.

Can we stop electric cars from playing music for safety?

I struck a nerve several years ago when I blogged about the horrible beep-beep noise made by heavy equipment when it backs up. Eventually a British company came up with a solution: a pulsed burst of white noise which is very evident when you are near the backing up vehicle but which disperses quickly so it doesn’t travel and annoy people a mile away as the beeps do.

Now I am seeing more and more suggestions that electric cars, which run quite silently when slow, make some noise for safety. This is fine, but there are also suggestions that there will be music and vanity noises, like ringtones or “cartones.” I can certainly see why this would appeal to people. (Already many think that their car is the place to play mind-numbing bass to announce musical taste to all others on the street.) There are even proposed laws.

While the cartones would be quieter than the backup beep or the heavy bass, I really fear that people will overdo what they think is the purpose — being attention grabbing. They will want to distract, and that will create a cacophony on the roads. It’s hard to make sounds that are meant to be attention grabbing (or vanity oriented) not travel beyond the range that you need them for safety.

I don’t want to imagine what it might be like living as I do with a 3-way stop outside my window, with each car singing a different tune or strange noise every time it slows down and starts up again. Who will want to live near intersections or parking lots?

I have a few proposals:

  1. Like the beep-beep solution, use white noise that just doesn’t travel very far, but is easily noticed when close.
  2. Use natural sounds, like waves crashing, birds chirping, wind blowing. We are tuned to hear those sounds in an otherwise silent environment, but our brains also can easily ignore them in background form.
  3. Do indeed tune the volume based on ambient noise. This is suggested in the O’Reilly article linked above, but they propose only that it be loud enough; it should also be quiet enough. (A sketch of this follows the list.)
  4. Don’t do it at a speed where the tires and wind and electric motors are making enough noise already.
  5. As robocar sensors become more common, such as LIDAR and radar, only make the noise when there are people who might come in contact with the vehicle. Otherwise, be silent.
  6. Since robocars will not hit people in any normal operation, even people who don’t know they are there, such vehicles need not make any noise. However, if they see a human or anything else on a collision course, let them make a louder and more useful noise that really gets attention, like a burst of white or pink noise, or even a horn if that is ignored. Start quiet, and get louder if there is no reaction within a human reaction time.
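For item 3, the rule is one line of arithmetic; the margin, floor and ceiling values below are illustrative only.

    def warning_level_db(ambient_db, margin_db=8, floor_db=45, ceiling_db=70):
        # Sit just above ambient noise: loud enough to notice nearby,
        # but capped so it never blares, and floored so it never vanishes.
        return min(max(ambient_db + margin_db, floor_db), ceiling_db)

    # A quiet street at night (~40 dB ambient) gets a 48 dB sound; a busy
    # arterial at 75 dB is capped at 70 dB, where tires and wind already warn.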

Let’s not give up on this opportunity to return peace to our public spaces as electric cars and robocars become popular.

A super-fast web transaction (and Google SPDY)

(Update: I had a formatting error in the original posting, this has been fixed.)

A few weeks ago when I wrote about the non deployment of SSL I touched on an old idea I had to make web transactions vastly more efficient. I recently read about Google’s proposed SPDY protocol which goes in a completely opposite direction, attempting to solve the problem of large numbers of parallel requests to a web server by multiplexing them all in a single streaming protocol that works inside a TCP session.

While calling attention to that, let me outline what I think would be the fastest way to do very simple web transactions. It may be that such simple transactions are no longer common, but it’s worth considering.

Consider a protocol where you want to fetch the contents of a URL like “www.example.com/page.html” and you have not been to that server recently (or ever.) You want only the plain page, you are not yet planning to fetch lots of images and stylesheets and javascript.

Today the way this works is pretty complex:

  1. You do a DNS request for www.example.com via a UDP request to your DNS server. In the pure case this also means first asking where “.com” is, but your DNS server almost surely knows that, so a UDP request is sent to the “.com” master server.
  2. The “.com” master server returns with the address of the server for example.com.
  3. You send a DNS request to the example.com server, asking where www.example.com is.
  4. The example.com DNS server sends a UDP response back with the IP address of www.example.com
  5. You open a TCP session to that address. First, you send a “SYN” packet.
  6. The site responds with a SYN/ACK packet.
  7. You respond to the SYN/ACK with an ACK packet. You also send the packet with your HTTP “GET” request for “/page.html.” This is a distinct packet, but there is no roundtrip, so this can be viewed as one step. You may also close off your sending with a FIN packet.
  8. The site sends back data with the contents of the page. If the page is short it may come in one packet. If it is long, there may be several packets.
  9. There will also be acknowledgement packets as the multiple data packets arrive in each direction. You will send at least one ACK. The other server will ACK your FIN.
  10. The remote server will close the session with a FIN packet.
  11. You will ACK the FIN packet.

You may not be familiar with all this, but the main thing to understand is that there are a lot of roundtrips going on. If the servers are far away and the time to transmit is long, it can take a long time for all these round trips.

It gets worse when you want to set up a secure, encrypted connection using TLS/SSL. On top of all the TCP, there are additional handshakes for the encryption. For full security, you must encrypt before you send the GET because the contents of the URL name should be kept encrypted.

A simple alternative

Consider a protocol for simple transactions where the DNS server plays a role, and short transactions use UDP. I am going to call this the “Web Transaction Protocol” or WTP. (There is a WAP variant called that but WAP is fading.)

  1. You send, via a UDP packet, not just a DNS request but your full GET request to the DNS server you know about, either for .com or for example.com. You also include an IP and port to which responses to the request can be sent.
  2. The DNS server, which knows where the target machine is (or next level DNS server) forwards the full GET request for you to that server. It also sends back the normal DNS answer to you via UDP, including a flag to say it forwarded the request for you (or that it refused to, which is the default for servers that don’t even know about this.) It is important to note that quite commonly, the DNS server for example.com and the www.example.com web server will be on the same LAN, or even be the same machine, so there is no hop time involved.
  3. The web server, receiving your request, considers the size and complexity of the response. If the response is short and simple, it sends it via UDP, in one packet or a few, to your specified address. If no ACK is received in a reasonable time, it sends again a few times until it gets one.
  4. When you receive the response, you send an ACK back via UDP. You’re done.

The above transaction would take place incredibly fast compared to the standard approach. If you know the DNS server for example.com, it will usually mean a single packet to that server, and a single packet coming back — one round trip — to get your answer. If you only know the server for .com, it would mean a single packet to the .com server which is forwarded to the example.com server for you. Since the master servers tend to be in the “center” of the network and are multiplied out so there is one near you, this is not much more than a single round trip.
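Nothing implements WTP, of course, so the following Python sketch of the client side is purely hypothetical: the port number and the payload layout are invented, and the server is assumed to reply to the source address of the request packet.

    import socket

    WTP_PORT = 8053    # invented port for this hypothetical protocol

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 0))              # the reply comes straight back here
    reply_port = sock.getsockname()[1]

    # One UDP packet carries the DNS question *and* the GET.
    request = ("WTP/0.1\r\n"
               "Host: www.example.com\r\n"
               "GET /page.html\r\n"
               "Reply-Port: %d\r\n" % reply_port).encode()

    sock.sendto(request, ("dns.example.com", WTP_PORT))  # forwards the GET onward
    sock.settimeout(2.0)
    try:
        data, addr = sock.recvfrom(65535)  # a short page fits in one packet
        sock.sendto(b"ACK\r\n", addr)      # stop the server's retransmissions
    except socket.timeout:
        pass                               # fall back to ordinary DNS + TCP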

Computerized road trains in Europe

A proposal is being floated in Europe for computerized convoys or road trains within the next decade. This is a proposal for a system where cars can hand over control to a lead car and follow in a train or convoy, without physical connection.

This idea comes up a lot as an early robocar technology. It is particularly common because it’s much easier to do — a human driver still is in charge, and the robotic control is limited to a very limited and simple environment. It’s safe to say that we could make this work very quickly if we wanted to. There is no navigation or vision required, no recognition of obstacles, no choice of speeds or turns. Cars that come together in a convoy can draft to get a serious boost in fuel efficiency, and of course the un-drivers can now relax and read or work on the trip.

Since I am a robocar booster, people are surprised when I say I am not too thrilled about this idea, at least as an early technology. Rather, I think it’s a great idea for later. In spite of the enthusiasm with which I write, the robocar problem is not a simple one. This much simpler problem is tempting, but has some snags.

First of all, if you have a bug in a standalone robocar system, it may cause an accident, and that may injure or kill the occupants of the robocar, and perhaps one or two other cars. Death is less likely at urban speeds of course. A problem with a computerized convoy could have terrible results, involving scores of cars. Since most people want this for the highway, the problem would also occur at lethal speed. Convoys are just not the first place we want to test our systems and have our first accidents.

Secondly, forming convoys requires a critical mass of suitably equipped cars. Of course, you don’t need a dozen full robocars to make a train, all you would need is cars with drive-by-wire and some much simpler control circuitry. But even so, the incentive to get a car with this feature has to get over a critical mass hump if it’s going to be worthwhile. It’s not quite as bad as fully ad-hoc trains, since you can have scheduled trains, lead by a bus or truck driver, and cars can see such a lead vehicle and get in behind it. But at first, the odds of many cars all finding one another at the same time is low. If the train is going faster than regular traffic in a carpool lane, as we hope it would, it will not be easy to join a train that moves past you on the highway. If it moves slower than traffic, it is easy to slow down and join it, but then it has to move slower, with all the attendant problems.

Computerized convoys have advantages and disadvantages over physical ones. Physical ones probably can only be formed while stopped, and probably only unformed that way too. One could see the last car in a physical convoy undocking while moving, so with correct ordering it might work out, but it’s a far cry from a virtual convoy which allows anybody to join and leave at any time.

Physical convoys however can transmit power. This is useful if you expect people to be driving short-range electric cars. They would take their short range car and join a convoy, and be powered by the lead locomotive while operating, and even be recharging a bit. After dispersion, the vehicles would only need to go a short distance to their target and back to the evening train.

Physical coupling makes it harder for one car to leave the train due to a failure. On the other hand it means that if the lead car wants to change lanes, all cars must do so. If the lead car leaves the road, they all do. Jack-knifing is a real worry, which is one reason that today even cargo road trains are limited to 2 trailers in urban areas, and 3 trailers in rural areas, if they are allowed at all.

Physical coupling requires specially modified vehicles. This is even more the case if the locomotive will actually be towing the vehicles physically rather than providing them with electricity for their motors and batteries. Either of these is a major modification, while virtual coupling only requires a drive-by-wire car and a small matter of programming.

Even full robocars probably should not form convoys right away. We should wait until our confidence is even higher, in spite of the fuel savings. If one car goes bad, or its occupants try to take over and move to manual driving, the consequences could be nasty in any convoy. And of course, the first robocars on the road will never get to join convoys as they will not meet the others. That’s why you need to solve the solo navigation problem first, and then you get enough on the road to work on the cooperation problems.

Towards frameless (clockless) video

Recently I wrote about the desire to provide power in every sort of cable, in particular the video cable. And while we’ll be using the existing video cables (VGA and DVI/HDMI) for some time to come, I think it’s time to investigate new thinking in sending video to monitors. The video cable has generally been the highest bandwidth cable going out of a computer, though the fairly rare 10 gigabit ethernet is around the speed of HDMI 1.3 and DisplayPort, and 100gb ethernet will be yet faster.

Even though digital video methods are so fast, the standard DVI cable is not able to drive my 4 megapixel monitor — this requires dual-link DVI, which as the name suggests, runs 2 sets of DVI wires (in the same cable and plug) to double the bandwidth. The expensive 8 megapixel monitors need two dual-link DVI cables.

Now we want enough bandwidth to completely redraw a screen at a suitable refresh (100hz) if we can get it. But we may want to consider how we can get that, and what to do if we can’t get it, either because our equipment is older than our display, or because it is too small to have the right connector, or must send the data over a medium that can’t deliver the bandwidth (like wireless, or long wires.)

Today all video is delivered in “frames” which are an update of the complete display. This was the only way to do things with analog rasterized (scan line) displays. Earlier displays actually were vector based, and the computer sent the display a series of lines (start at x,y then draw to w,z) to draw to make the images. There was still a non-fixed refresh interval as the phosphors would persist for a limited time and any line had to be redrawn again quickly. However, the background of the display was never drawn — you only sent what was white.

Today, the world has changed. Displays are made of pixels but they all have, or can cheaply add, a “frame buffer” — memory containing the current image. Refresh of pixels that are not changing need not be done on any particular schedule. We usually want to be able to change only some pixels very quickly. Even in video we only rarely change all the pixels at once.

This approach to sending video was common in the early remote terminals that worked over ethernet, such as X windows. In X, the program sends more complex commands to draw things on the screen, rather than sending a complete frame 60 times every second. X can be very efficient when sending things like text, as the letters themselves are sent, not the bitmaps. There are also a number of protocols used to map screens over much slower networks, like the internet. The VNC protocol is popular — it works with frames but calculates the difference and only transmits what changes on a fairly basic level.

We’re also changing how we generate video. Only video captured by cameras has an inherent frame rate any more. Computer screens, and even computer-generated animation, are expressed as a series of changes and movements of screen objects, though they are often rendered to frames for historical reasons. Finally, many applications, notably games, do not even work in terms of pixels any more, but express what they want to display as polygons. Even camera video is now delivered compressed, by compressors that break the scene into rarely-updated backgrounds, fresh draws of changing objects, and moves and transformations of existing ones.

So I propose two distinct things:

  1. A unification of our high speed data protocols so that all of them (external disks, SAN, high speed networking, peripheral connectors such as USB and video) benefit together from improvements, and one family of cables can support all of them.
  2. A new protocol for displays which, in addition to being able to send frames, sends video as changes to segments of the screen, with timestamps as to when they should happen.

The case for approach #2 is obvious. You can have an old-school timed-frame protocol within a more complex protocol able to work with subsets of the screen. The main issue is how much complexity you want to demand in the protocol. You can’t demand too much or you make the equipment too expensive to make and too hard to modify. Indeed, you want to be able to support many different levels, but not insist on support for all of them. Levels can include:

  • Full frames (ie. what we do today)
  • Rastered updates to specific rectangles, with ability to scale them.
  • More arbitrary shapes (alpha) and ability to move the shapes with any timebase
  • VNC level abilities
  • X windows level abilities
  • Graphics card (polygon) level abilities
  • In the unlikely extreme, the abilities of high level languages like Display PostScript.

I’m not sure the last levels are good to standardize in hardware, but let’s consider the first few. When I bought my 4 megapixel (2560x1600) monitor, it was annoying to learn that none of my computers could actually display on it, even at a low frame rate. Technically single DVI has the bandwidth to do it at 30Hz, but that is not a desirable option if it’s all you ever get. While I did indeed want a card able to make full use of the monitor, the reality is that 99.9% of what I do on it could be done over single-DVI bandwidth with just the ability to update and move rectangles, or to do so at a slower speed. The whole screen is never completely replaced in a situation where waiting 1/30th of a second would be a problem. But the ability to paint a small window at 120Hz, on displays that can do it, might well be very handy.

Adoption of a system like this would allow even a device with a very slow output (such as USB 2 at 480 megabits) to use all that resolution for the typical activities of a computer desktop. While you might think video would be impossible over such a slow bus, if the rectangles can scale, a 480 megabit bus could still do things like playing DVDs. I do not suggest every monitor decode our latest video compression schemes in hardware, but the ability to use the post-compression primitives (drawing subsections and doing basic transforms on them) might well be enough to feed quite a bit of video through a small cable.

One could even imagine wireless video protocols for devices like cell phones. You could connect a cell phone to an HDTV screen (as found in a hotel) and have it make reasonable use of the entire screen, even though it lacks the gigabit bandwidth needed for 1080p framed video.
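The back-of-envelope numbers make the point (nominal rates only; this arithmetic is mine, not from any spec sheet):

```python
# Uncompressed, framed 1080p at 60Hz with 24-bit color:
framed_1080p = 1920 * 1080 * 24 * 60
print(framed_1080p / 1e9)    # ~2.99 Gbit/s -- far beyond USB 2 or wireless

# A DVD video stream peaks near 10 Mbit/s, so sending post-compression
# rectangle updates instead of raw frames leaves enormous headroom:
usb2 = 480e6                 # USB 2.0 nominal signaling rate
print(usb2 / 10e6)           # ~48x a DVD stream fits in the slow bus
```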

Sending changes to the screen with timestamps for when they should occur also allows the potential for super-smooth movement on screens that have very low latency display elements. For example, commands to the display might describe a foreground object, and update just that object hundreds of times a second. Very fast displays would show those updates and present completely smooth motion. Slower displays would discard the intermediate steps (or just ask that they not be sent). Animations could also be sent as instructions to move (and perhaps rotate) a rectangle as smoothly as possible from A to B, letting the display decide what rate to do it at. (Though I think the display and video generator should work together on this in most cases.)

Note that this approach also delivers something I asked for in 2004 — that it be easy to make any LCD act as a wireless digital picture frame.

It should be noted that HDMI supports a small amount of power (5 volts at 50ma), and in newer forms both it and DisplayPort act less like digitized versions of analog signals and more like highly specialized digital buses. Too bad they didn’t go all the way.

Protocol

As noted, it is key that the basic levels be simple, to promote universal adoption. As such, the elements of such a protocol would start simple. Every command could specify a time at which it is to be executed, if not immediately.

  • Paint line or rectangle with specified values, or gradient fill.
  • Move object, and move entire screen
  • Adjust brightness of rectangle (fade)
  • Load pre-buffered rectangle. (Fonts, standard shapes, quick transitions)
  • Display pre-buffered rectangle

However, lessons learned from other protocols might expand this list slightly.
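As a sketch of how small such a protocol could stay, here is one hypothetical wire encoding for the commands above: a one-byte opcode, a 64-bit execute-at timestamp (zero meaning immediately), and a rectangle. Every name and field size here is my own invention, not anything standardized:

```python
import struct
from enum import IntEnum

class Op(IntEnum):
    FILL_RECT   = 1   # paint line/rectangle with a value or gradient
    MOVE_RECT   = 2   # move an object, or scroll the entire screen
    FADE_RECT   = 3   # adjust brightness of a rectangle
    LOAD_BUFFER = 4   # pre-buffer pixels (fonts, shapes, transitions)
    SHOW_BUFFER = 5   # display a pre-buffered rectangle

# opcode, timestamp in microseconds (0 = now), then x, y, width, height
HEADER = struct.Struct("<BQHHHH")

def fill_rect(x, y, w, h, when_us=0):
    """Encode a FILL_RECT command; pixel payload would follow the header."""
    return HEADER.pack(Op.FILL_RECT, when_us, x, y, w, h)

print(len(fill_rect(100, 200, 640, 480)))   # 17-byte header per command
```

A display that only understands the first level would simply receive a stream of FILL_RECT commands covering the whole screen; richer displays would honor the timestamps and the fancier opcodes.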

One connector?

This, in theory, allows the creation of a single connector (or compatible family of connectors) for lots of data and lots of power. It can’t be just one connector, though, because some devices need very small connectors which can’t carry the number of wires, or deliver the amount of power, that other devices require. Most devices would probably get by with a single data wire, and ideally technology would keep pushing just how much data can go down that wire, but any design should allow for simply increasing the number of wires when more bandwidth than a single wire can carry is needed. (Presumably a year later, the same device could go back to a single wire as the bandwidth increases.) We may, of course, not be able to predict the connectors for tomorrow’s high bandwidth single wires, so you also want a way to design an upwards compatible connector with blank spaces — or expansion ability — for the pins of the future, which might well be optical.

There is also a security implication to all of this. While a single wire that brings you power, a link to a video monitor, LAN and local peripherals would be fabulous, caution is required. You don’t want to plug into a video projector in a new conference room and have it pretend to be a keyboard that takes over your computer. This is a problem with USB in general, and worth solving regardless. One approach would be to have every device present a unique ID (possibly a certified ID), so that you can declare trust for all your devices at home, and perhaps for everything plugged into your home hubs, but be notified when a never-before-seen device that needs trust (like a keyboard or drive) is connected.

To some extent having different connectors helps this problem a little bit, in that if you plug an ethernet cable into the dedicated ethernet jack, it is clear what you are doing, and that you probably want to trust the LAN you are connecting to. The implicit LAN coming down a universal cable needs a more explicit approval.
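A trust-on-first-use policy along those lines is simple to state in code. This sketch assumes each device presents a stable ID and the roles it wants; the role names and the split between harmless and privileged roles are my own illustration:

```python
AUTO_OK_ROLES = {"display", "power"}              # safe to accept silently
PRIVILEGED_ROLES = {"keyboard", "storage", "network"}

known_devices = set()   # IDs the user approved before; persisted to disk

def on_device_connect(device_id, requested_roles):
    """Decide whether a newly plugged-in device may have its roles."""
    if device_id in known_devices:
        return "accept"                  # seen and approved before
    if set(requested_roles) <= AUTO_OK_ROLES:
        return "accept"                  # e.g. a projector that is only a display
    # A never-before-seen device asking to be a keyboard or drive is
    # exactly the dangerous case: make the human decide, then remember.
    return "ask_user"
```

The implicit LAN over a universal cable would fall under the "network" role here: privileged, so a new cable gets an explicit prompt.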

Final rough description

Here’s a more refined rough set of features of the universal connector, with a toy sketch of the negotiation flow after the list:

  • Shielded twisted pair with ability to have varying lengths of shield to add more pins or different types of pins.
  • Asserts briefly a low voltage on pin 1, highly current limited, to power negotiator circuit in unpowered devices
  • Negotiator circuits work out actual power transfer, at what voltages and currents and on what pins, and initial data negotiation about what pins, what protocols and what data rates.
  • If no response is given to negotiation (ie. no negotiator circuit) then measure resistance on various pins and provide specified power based on that, but abort if current goes too high initially.
  • Full power is applied, unpowered devices boot up and perform more advanced negotiation of what data goes on what pins.
  • When full data handshake is obtained, negotiate further functions (hubs, video, network, storage, peripherals etc.)
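Modeled as a little state machine, the sequence looks like this; the states, events and fallback behavior are illustrative only, not a real spec:

```python
from enum import Enum, auto

class State(Enum):
    PROBE = auto()       # brief current-limited low voltage on pin 1
    CAPS = auto()        # negotiator answered: agree power + data per pin
    FALLBACK = auto()    # silence: measure resistance, use a power table
    POWERED = auto()     # full power applied; unpowered device boots
    FUNCTIONS = auto()   # negotiate hubs, video, network, storage...
    FAILED = auto()      # e.g. overcurrent during fallback: abort

TRANSITIONS = {
    (State.PROBE, "negotiator_ack"):   State.CAPS,
    (State.PROBE, "timeout"):          State.FALLBACK,
    (State.CAPS, "caps_agreed"):       State.POWERED,
    (State.FALLBACK, "resistance_ok"): State.POWERED,
    (State.FALLBACK, "overcurrent"):   State.FAILED,
    (State.POWERED, "device_booted"):  State.FUNCTIONS,
}

def step(state, event):
    return TRANSITIONS.get((state, event), State.FAILED)

# A dumb, unpowered device with no negotiator circuit:
s = step(State.PROBE, "timeout")     # -> FALLBACK
s = step(s, "resistance_ok")         # -> POWERED (power only, no data)
```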

Robocar Talk, Volvo automatic pedestrian avoidance

First: I will be speaking on robocars tomorrow, Tuesday Nov 9, at the meeting of the Jewish High Tech Community in Silicon Valley. The talk is at 6:30pm at the conference center of Fenwick & West at Castro & California in Mountain View. The public is welcome to attend; there is a $10 fee for non-members. This will be similar to my talk at Stanford two weeks ago, and a bit more extensive than the one in New York in early October, which Forbes said was the audience favorite at the event.

Volvo has built a brand around safe cars, and committed last year that by 2020 nobody would die in a new Volvo. They plan to do much of this with better passenger safety systems, akin to their work on airbags and crumple zones. However, they also intend to use a lot of computerized technology to make it happen. Other teams inside Volvo are pushing to expand the goal to also stop people from being killed by Volvos. To that end, next year’s Volvo S60 will come with a “Pedestrian Avoidance System” which uses a camera and machine vision to identify pedestrians and calculate whether the vehicle is about to hit one. If it sees a potential pedestrian collision it will beep and alert the driver. If the driver does nothing, the car will brake.

Here is a video of the S60 in action:

It’s impressive, though pure machine vision suffers as lighting changes, which is one reason most recent work has used LIDAR. It will also be interesting to see whether they can avoid making the system too conservative. If the warning goes off all the time, even for a pedestrian who will (to the human eye) clearly slide by the side of the car at places like a crosswalk, drivers may learn to ignore the alarm, or get very annoyed and shut the whole system off if it brakes for them when they know an impact is not imminent. I’m hoping to learn more about Volvo’s efforts in the future. No other company has put as much effort into building a brand around safety, so we can expect Volvo, which has slipped in this status of late, to work very hard to maintain it, and to adapt robocar technologies both to safer human driving and to fully autonomous driving.

Dense triple parking

I have written of a simple algorithm to allow dense valet-style parking of robocars, such as triple parking on the roadsides. In this algorithm, one gap is left in the outer lanes, and the robocars are able to move together, as an entire row segment, to “move the gap” as quickly as a single car can move. That way, if a car needs to get out from an inner lane, it can signal, and if the gap is currently ahead of it, for example, all the cars from the one next to it up to the gap can move forward one space (at the same time) to put the gap next to the vehicle that needs to leave. This can happen in all the other rows, and it is easy, quiet and efficient for electric cars. It does not even need radio communication, as robocars will sense a car moving behind or ahead of them and immediately move in reaction, so the request propagates up the chain of cars to the gap. Of course, if one car does not move, the car behind it will only move a very short distance before refusing to go further, which would stop the whole effort (or, in the case of an error, cause a very slow impact if the car behind keeps coming) and signal a need for human attention.
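The row-segment move is easy to model. A toy version, with the row as a list and None for the gap (the cars here move one at a time, but the net effect matches the simultaneous shuffle described above):

```python
def move_gap_to(row, target):
    """Shift cars so the single gap (None) ends up at index target.

    Each car between the gap and the target moves exactly one space
    toward the old gap position; nobody else moves.
    """
    gap = row.index(None)
    step = 1 if target > gap else -1
    while gap != target:
        row[gap] = row[gap + step]   # neighbor rolls into the gap...
        row[gap + step] = None       # ...and the gap hops one space over
        gap += step
    return row

print(move_gap_to(["A", "B", None, "C", "D"], 4))
# ['A', 'B', 'C', 'D', None]: the gap is now beside the car that asked
```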

It seems like this should be possible even without many gaps, as long as there is enough spare space to allow a vehicle to wiggle out of its space. If there is just one gap, and a bit of wiggle room in the other rows, any car can still get out, just a bit more slowly. This is probably better done with a protocol for communication to assure it works quickly.

In this case, a gap on the outside lane (where there must be at least one) can be temporarily moved to the inside, and then back out. Consider 3 lanes of cars, with a gap in the outer lane (lane #3) and a car in lane #1 (the curb lane) wanting out. First the lane #3 cars adjust to move the gap to the right place, a bit forward of our target car. Next, a car from lane #2 moves into this gap, leaving a gap in lane #2 into which our target car can move. This now leaves a gap in lane #1, which can be filled by a car from lane #2 willing to move in, ideally right next to our target car. Likewise a car from lane #3 can then move into the resulting lane #2 gap, and the gap this creates in the outer lane #3 can be moved to allow exit by our target car.

This requires a great deal more car moving, though again with electric cars this may not be too expensive. If the cars can turn all their wheels, they can move sideways, as some concept cars can already do. Even without that, a robotic car can wiggle out without much room, and of course the gap would not be placed exactly in line with the target car, but slightly forward, to allow the transfer with fewer wiggles. The result is a whole valet lot needing just one blank space to get any car out reasonably quickly. Of course, this would only be done when the lot needed to be totally full. For any partially full lot, gaps would be left so as to minimize the moves needed to get any car out. But if space is at a premium — so much so as to justify the extra moving — it can be done.
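Here is the three-lane shuffle replayed in code, using just two primitives: rolling a lane's gap along the lane (as in the sketch above) and sliding a car sideways into a gap at the same column in the adjacent lane. The layout and column numbers are illustrative:

```python
lanes = [list("ABTCD"),   # lane 1 (curb); target car "T" at column 2
         list("EFGHI"),   # lane 2
         list("JKL.M")]   # lane 3 (outer); the lot's one gap, "."

def shift_gap(lane, col):            # cars roll along their own lane
    g = lanes[lane].index(".")
    step = 1 if col > g else -1
    while g != col:
        lanes[lane][g], lanes[lane][g + step] = lanes[lane][g + step], "."
        g += step

def slide(src, dst, col):            # sideways move into an adjacent gap
    assert lanes[dst][col] == "."
    lanes[dst][col], lanes[src][col] = lanes[src][col], "."

slide(1, 2, 3)     # lane-2 car H fills the outer gap
shift_gap(1, 2)    # lane-2 gap rolls to sit beside T
slide(0, 1, 2)     # T moves out to lane 2; gap now in the curb lane
shift_gap(0, 3)    # curb-lane gap rolls one space over
slide(1, 0, 3)     # lane-2 car G backfills the curb lane
slide(2, 1, 3)     # lane-3 car H backfills lane 2 (a genuine round trip)
shift_gap(2, 2)    # the outer gap rolls to sit beside T
slide(1, 2, 2)     # T reaches the outer lane and can drive away
print(lanes)       # once T departs, the lot has one car fewer, so the
                   # space it vacates becomes the gap for the next request
```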

Bluetooth in all video cameras, and smart microphones

I suggested this as a feature for my Canon 5D SLR, which shoots video, but let me expand it to all video cameras, indeed all cameras. They should all include bluetooth, notably the new high-speed bluetooth 3.0 (up to 24 megabits by piggybacking on 802.11). It’s cheap and the chips are readily available.

The first application is the use of the high-fidelity audio profile for microphones. Everybody knows the worst thing about today’s consumer video cameras is the sound. Good mics are often large, heavy and expensive, and people don’t want to carry them on the camera. Mics on the subjects of the video are always better. Remote bluetooth microphones are not readily available today, but if consumer video cameras supported them, there would be a huge market for them in filming.

For quality, you would want an error correcting protocol, which means mixing the sound onto the video a few seconds after the video is laid down. That’s not a big deal when recording digitally to flash.
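That delayed-mix idea is simple to sketch: buffer sequence-numbered audio packets per microphone, re-request anything missing, and commit a packet to the track only once its delay window has passed. All the names and the window size here are illustrative:

```python
DELAY = 150   # packets held back, e.g. ~3 seconds of 20ms audio frames

class MicBuffer:
    def __init__(self):
        self.packets = {}      # sequence number -> audio bytes
        self.next_commit = 0   # first packet not yet written to the track

    def receive(self, seq, data):
        self.packets[seq] = data

    def missing(self, newest_seq):
        """Sequence numbers past due that never arrived: re-request these."""
        horizon = max(newest_seq - DELAY, 0)
        return [s for s in range(self.next_commit, horizon)
                if s not in self.packets]

    def commit_ready(self, newest_seq):
        """Packets old enough (and present) to mix onto the recording."""
        out = []
        while (self.next_commit <= newest_seq - DELAY
               and self.next_commit in self.packets):
            out.append(self.packets.pop(self.next_commit))
            self.next_commit += 1
        return out
```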

Such a system easily supports multiple microphones too, mixing them or ideally just recording them as independent tracks to be mixed later. And that includes an off-camera microphone for ambient sounds. You could even put down multiples of those, and then do clever noise reduction tricks after the fact with the tracks.

The cameraman or director could also have a bluetooth headset on (those are cheap but low fidelity) to record a track of notes and commentary, something you can’t do if there is an on-camera mic being used.

I also noted a number of features for still cameras as well as video ones:

  • Notes by the photographer, as above
  • Universal protocol for control of remote flashes
  • Remote control firing of the camera, with all the control USB tethering offers
  • At bluetooth 3.0 speeds, downloading of photos and even live compressed video streams to a master recorder somewhere

It might also be interesting to experiment with smart microphones. A smart microphone would be placed away from the camera, nearer the action being filmed (at sporting events, for example). The camera user would zoom in on the microphone, using the camera’s autofocus to determine how far away it is, and a compass to get the direction. Then the microphone, which could either be motorized or an array, could be aimed wherever the action is. (It would be told the distance and direction of the action from the camera in the same fashion as the mic itself was located.) When you pointed the camera at something, the off-camera mic would also point at it, except during focus hunts.
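The geometry is straightforward: two distance-and-bearing fixes from the camera (one to the mic, one to the action) give the bearing the mic should turn to. A sketch, assuming flat ground and compass bearings in degrees clockwise from north:

```python
import math

def to_xy(dist, bearing_deg):
    """Convert a distance + compass bearing from the camera into x,y."""
    rad = math.radians(bearing_deg)
    return (dist * math.sin(rad), dist * math.cos(rad))

def mic_bearing(mic_dist, mic_brg, act_dist, act_brg):
    """Bearing the microphone must face to point at the action."""
    mx, my = to_xy(mic_dist, mic_brg)
    ax, ay = to_xy(act_dist, act_brg)
    return math.degrees(math.atan2(ax - mx, ay - my)) % 360

# Mic 30m away at bearing 10; action 80m away at bearing 40:
print(round(mic_bearing(30, 10, 80, 40)))   # ~56 degrees
```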

There could, as before, be more than one of these, and this could be combined with on-person microphones as above. None of this has to be particularly expensive. The servo-controlled mic would be a high end item, but within consumer range, and fancy versions would be of interest to pros. Remote mics would also be good for getting better stereo on scenes. Key to all this is that adding bluetooth to the camera is a minor cost (possibly offset by dropping the microphone jack), but it opens up a world of options, even for cheap cameras.

And of course, the most common cameras out there now — cell phones — already have bluetooth, compasses and these other features. In fact, cell phones could readily be your off-camera microphones. If there were a nice app with a quick pairing protocol, you could ask all the people in the scene to run it on their cell phones and put the phones in their front pockets. Suddenly you have a mic on each participant (up to the limit of bluetooth, which is about 8 devices at once).

Software recalls and quick fixes to safety-critical computers in robocars

While giving a talk on robocars to a Stanford class on automotive innovation on Wednesday, I outlined the growing problem of software recalls and how they might affect cars. If a company discovers a safety problem in a car’s software, it may be advised by its lawyers to shut down or cripple the cars by remote command until a fix is available. Sebastian Thrun, who had invited me to address this class, felt this could be dealt with through the ability to remotely patch the software.

This brings up an issue I have written about before — the giant dangers of automatic software updates. Automatic updates are a huge security hole in today’s computer systems. On typical home computers, there are now many packages that update themselves automatically. Due to the lack of security in these OSs, a variety of companies have been “given the keys” to full administrative access on the millions of computers which run their auto-updaters. Companies which go to great lengths to secure their computers and networks routinely grant all these software companies top level access (ie. the ability to run arbitrary code on demand) without thinking about it. Most of these software companies are good actors and would never abuse this, but that doesn’t mean they have no employees who could be bribed or suborned, or no security holes in their own networks which would let an attacker in to push out a malicious automatic update.

I once asked the man who ran the server room where the servers for Pointcast (the first big auto-updating application) were housed, how many fingers somebody would need to break to get into his server room. “They would not have to break any. Any physical threat and they would probably get in,” I heard. This is not unusual, and often there are ways in needing far less than this.

So now let’s consider software systems which control our safety. We are trusting our safety to computers more and more these days. Every elevator or airplane has a computer which could kill us if maliciously programmed. More and more cars have them, and more will over time, long before we ride in robocars. All around the world are electric devices with computer controls which could, if programmed maliciously, probably overload and start many fires, too. Of course, voting machines with malicious programs could even elect the wrong candidates and start baseless wars. (Not that I’m saying this has happened, just that it could.)

However, these systems do not have automatic update. The temptation to add it will grow over time, both because it is cheap and because it allows safety problems to be fixed quickly, which we like for critical systems. While the internal software systems of a robocar would not be connected to the internet in a traditional way, they might be programmed to, every so often, request and accept certified firmware updates from the components of the car’s computer systems which are connected to the net.

Imagine a big car company with 20 million robocars on the road, and an automatic software update facility. This would allow a malicious person, if they could suborn that automatic update ability, to load in nasty software which could kill tens of millions. Not just the people riding in the robocars would be affected, because the malicious software could command idle cars to start moving and hit other cars or run down pedestrians. It would be a catastrophe of grand proportions, greater than a major epidemic or multiple nuclear bombs. That’s no small statement.

There are steps that can be taken to limit this. Software updates should be digitally signed, and they should be signed by multiple independent parties. This stops any one of the official parties from being suborned (either by being a mole, or being tortured, or having a child kidnapped, etc.) to send out an update. But it doesn’t stop the fact that the 5 executives who have to sign an update will still be trusting the programming team to have delivered them a safe update. Assuring that requires a major code review of every new update, by a team that carefully examines all source changes and compiles the source themselves. Right now this just isn’t common practice.
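A quorum-of-signers check is easy to express. This sketch uses HMACs from the standard library as a stand-in for real public-key signatures (a production system would use something like Ed25519; the key names and quorum size are invented):

```python
import hashlib
import hmac

SIGNER_KEYS = {   # each key held by an independent party
    "release_eng":  b"example-key-1",
    "safety_board": b"example-key-2",
    "executive":    b"example-key-3",
}
QUORUM = 2        # independent valid signatures required to install

def expected_sig(name, blob):
    return hmac.new(SIGNER_KEYS[name], blob, hashlib.sha256).digest()

def may_install(blob, signatures):
    """signatures: dict of signer name -> bytes. True only at quorum."""
    valid = {name for name, sig in signatures.items()
             if name in SIGNER_KEYS
             and hmac.compare_digest(sig, expected_sig(name, blob))}
    return len(valid) >= QUORUM   # below quorum: refuse the update

update = b"new firmware image"
sigs = {n: expected_sig(n, update) for n in ("release_eng", "safety_board")}
print(may_install(update, sigs))   # True: two independent parties signed
```

Note this only verifies who signed, not whether what they signed is safe; that still needs the independent code review described above.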

However, it gets worse than this. An attacker can also suborn the development tools, such as the C compilers and linkers which build the final binaries. The source might be clean, but few companies keep perfect security on all their tools. Doing so requires that all the tool vendors have a similar attention to security in all their releases. And on all the tools they use.

One has to ask if this is even possible. Can such a level of security be maintained on all the components, enough to stop a terrorist programmer or a foreign government from inserting a trojan into a tool used by a compiler vendor who then sends certified compilers to the developers of safety-critical software such as robocars? Can every machine on every network at every tool vendor be kept safe from this?

We will try, but the answer is probably not. As such, one result may be that automatic updates are a bad idea. If updates spread more slowly, with the individual participation of each machine owner, there is more time to spot malicious code. That doesn’t mean malicious code can’t spread, as individual owners who install updates certainly won’t be checking everything they approve. But it can stop the instantaneous spread, and give a chance to find logic bombs set to go off later.

Normally we don’t want to go overboard worrying about “movie plot” threats like these. But when a single person can kill tens of millions because of a software administration practice, it starts to be worthy of notice.
