
My phone should know when I start a trip

Every day I get into my car and drive somewhere. My phone has a lot of useful apps for travel, including maps with live traffic and a lot more, and I find myself calling them up constantly.

I believe that my phone should notice when I am driving off from somewhere, or about to, and automatically do some things for me. Of course, it could notice this if it ran the GPS all the time, but that’s expensive from a power standpoint, so there are other ways to identify it (a sketch of combining these signals follows the list):

  • If the car has Bluetooth, the phone usually associates with the car. That’s a dead giveaway, and can at least be a cue to start looking at the GPS.
  • Most of my haunts have wireless, and the phone associates with the wireless at my house and all the places I work. So it can notice when it disassociates and again start checking the GPS. To get smart, it might even notice the MAC addresses of wireless networks it can’t see inside the house, but which it does see outside or along my usual routes.
  • Of course, moving out to the car involves jostling and walking in certain directions, which the phone can sense with its accelerometer and compass.
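
To make this concrete, here is a minimal sketch (Python, with invented event names, MAC addresses and thresholds) of how a phone daemon might weigh these cheap signals before committing to the power-hungry GPS:

    # Sketch only: score cheap signals and wake the GPS when confident.
    # All names, addresses and thresholds here are invented.

    CAR_BT_MACS = {"00:1a:7d:da:71:13"}         # paired car stereo(s)
    KNOWN_SSIDS = {"home-wifi", "office-wifi"}  # networks at my usual haunts

    score = 0

    def on_bluetooth_connect(mac):
        bump(3 if mac in CAR_BT_MACS else 0)    # near-certain giveaway

    def on_wifi_disassociate(ssid):
        bump(2 if ssid in KNOWN_SSIDS else 0)   # just left a known place

    def on_jostle(seconds):
        bump(1 if seconds > 20 else 0)          # walking out to the car

    def bump(points):
        global score
        score += points
        if score >= 3:                          # enough evidence: spend power
            start_gps_and_car_mode()

    def start_gps_and_car_mode():
        print("waking GPS, entering car mode")  # stand-in for the real actions

A real daemon would also decay the score over time, so a Bluetooth blip from hours ago doesn’t count, but the shape of the logic is the same.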

Once it thinks it might be in the car, it should go to a mode where my “in the car” apps are easy to get to, in particular the live map of the location with the traffic displayed, or the screen for the nav system. Android has a “car mode” that tries to make it easy to access these apps, and it should enter that mode.

It should also now track me for a while to figure out which way I am going. Depending on which way I head and the time of day, it can probably guess which of my common routes I am going to take. For regular commuters, this should be a no-brainer. This is where I want it to be really smart: instead of me having to call up the traffic, it should see that I am heading towards a given highway, and then check to see if there are traffic jams along my regular routes. If it sees one, it should beep to signal that, and if I turn the screen on, I should see that traffic jam. This way if I don’t hear it beep, I can feel comfortable that there is light traffic along the route I am taking. (Or that if there is traffic, it’s not traffic I can avoid with alternate routes.)

This is the way I want location based apps to work. I don’t want to have to transmit my location constantly to the cloud, and have the cloud figure out what to do at any given location. That’s privacy invading and uses up power and bandwidth. Instead the phone should have a daemon that detects location “events” that have been programmed into it, and then triggers programs when those events occur. Events include entering and leaving my house or places I work, driving certain roads and so on.
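
As a sketch of what registering with such a daemon might look like (the API and event names are invented for illustration, not any real phone OS interface):

    # Hypothetical local event daemon: apps register interest in location
    # events; the daemon watches the sensors and fires callbacks on the
    # phone itself, never reporting position to the cloud.

    class LocationDaemon:
        def __init__(self):
            self.rules = []

        def register(self, event, action, **params):
            self.rules.append((event, params, action))

        def dispatch(self, event, **observed):
            for ev, params, action in self.rules:
                if ev == event and all(observed.get(k) == v
                                       for k, v in params.items()):
                    action()

    d = LocationDaemon()
    d.register("wifi_lost", lambda: print("entering car mode"), ssid="home-wifi")
    d.register("enter_place", lambda: print("near a flagged store"), place="store-123")
    d.dispatch("wifi_lost", ssid="home-wifi")   # -> entering car mode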

And yes, for tools like shopkick, the events can even be entering stores I have registered. And as I blogged at the very beginning of this blog many years ago, we can even have an event for when we enter a store with a bad reputation. The phone can download a database of places and of wireless and Bluetooth MACs that should trigger events, so the network doesn’t need to know my exact location to make things happen. But most importantly, I don’t want to have to know to ask if there is something important near me; I want the right important things to tell me when I get near them.

Where will 3-D cameras like Kinect lead?

This year, I bought Microsoft Kinect cameras for the nephews and niece. At first they will mostly play energetic Xbox games with them, but my hope is they will start to play with the things coming from the Kinect hacking community — the videos of the top hacks are quite interesting. At first, Microsoft wanted to lock down the Kinect and threatened the open source developers who reverse engineered the protocol and released drivers. Now Microsoft has official open drivers.

This camera produces a VGA colour video image combined with a Z (depth) value for each pixel. This makes it trivial to isolate objects in the view (like people and their hands and faces), and splitting foreground from background is easy. The camera is $150 today (when even a simple one-line LIDAR cost a fortune not long ago) and no doubt cameras like it will be cheap $30 consumer items in a few years’ time. As I understand it, the Kinect works using a mixture of triangulation — the sensor being in a different place from the emitter — combined with structured light (sending out arrays of dots and seeing how they are bent by the objects they hit). An earlier report that it used time-of-flight is disputed; the simpler approach implies it will get cheaper fast. Right now it doesn’t do close up or very distant, however. While the projection takes power, meaning it won’t be available full time in mobile devices, it could still show up eventually in phones for short-duration 3-D measurement.

I agree with those who think that something big is coming from this. Obviously in games, but also perhaps in these other areas.

Gestural interfaces and the car

While people have already made “Minority Report” interfaces with the Kinect, studies show these are not very good for desktop computer use — your arms get tired and are not super accurate. They are good for places where your interaction with the computer will be short, or where using a keyboard is not practical.

One place that might make sense is in the car, at least before the robocar. Fiddling with the secondary controls in a car (such as the radio, phone, climate system or navigation) is always a pain and you’re really not supposed to look at your hands as you hunt for the buttons. But taking one hand off the wheel is OK. This can work as long as you don’t have to look at a screen for visual feedback, which is often the case with navigation systems. Feedback could come by audio or a heads up display. Speech is also popular here but it could be combined with gestures.

A gestural interface for the TV could also be nice — a remote control you can’t ever misplace. It would be easy to remember gestures for basic functions like volume and channel change, and arrow keys (or a mouse) in menus. More complex functions (like naming shows etc.) are best left to speech. Again, speech and gestures should be combined in many cases, particularly when there is a risk that an accidental gesture or sound could issue a command you don’t like.

I also expect gestures to possibly control what I am calling the “4th screen” — namely an always-on wall display computer. (The first 3 screens are Computer, TV and mobile.) I expect most homes to eventually have a display that constantly shows useful information (as well as digital photos and TV) and you need a quick and unambiguous way to control it. Swiping is easy with gesture control so being able to just swipe between various screens (Time/weather, transit arrivals, traffic, pending emails, headlines) might be nice. Again in all cases the trick is not being fooled by accidental gestures while still making the gestures simple and easy.

In other areas of the car, things like assisted or automated parking, though not that hard to do today, become easier and cheaper.

Small scale robotics

I expect an explosion in hobby and home robotics based on these cameras. Forget about Roombas that bump into walls; finally cheap robots will be able to see. They may not identify what they see precisely, though the 3-D will help, but they won’t miss objects and will have a much easier time doing things like picking them up or avoiding them. LIDARs have been common in expensive robots for some time, but having them cheap will generate new consumer applications.

Mobile

There will be some gestural controls for phones, particularly when they are used in cars. I expect things to be more limited here, with the big apps to come in games. However, history shows that most new sensors added to mobile devices cause an explosion of innovation, so there will be plenty not yet thought of. 3-D maps of areas (particularly at longer range, which requires more power) can also be used as a means of very accurate position detection. The static objects of a space are often unique and let you figure out where you are to high precision — this is how the Google robocars drive.

Security & facial recognition

3-D will probably become the norm in the security camera business. It also helps with facial recognition in many ways (both by isolating the face and allowing its shape to play a role) and recognition of other things like gait, body shape and animals. Face recognition might become common at ATMs or security doors, and be used when logging onto a computer. It also makes “presence” detection reliable, allowing computers to see how and where people are in a room and even a bit of what they are doing, without having to do object recognition. (Though as the Kinect hacks demonstrate, these cameras help object recognition as well.)

Face recognition is still error-prone of course, so its security uses will initially be limited, but it will get better at telling people apart.

Virtual worlds & video calls

While some might view this as gaming, we should also see these cameras heavily used in augmented reality and virtual world applications. It makes it easy to insert virtual objects into a view of the physical world and have a good sense of what’s in front and what’s behind. In video calling, the ability to tell the person from the background allows better compression, as well as blanking of the background for privacy. Effectively you get a “green screen” without the need for a green screen.

You can also do cool 3-D effects by getting an easy and cheap measurement of where the viewer’s head is. Moving a 3-D viewpoint in a generated or semi-generated world as the viewer moves her head creates a fun 3-D effect without glasses and now it will be cheap. (It only works for one viewer, though.) Likewise in video calls you can drop the other party into a different background and have them move within it in 3-D.

With multiple cameras it is also possible to build a more complete 3-D model of an entire scene, with textures to paint on it. Any natural scene can suddenly become something you can fly around.

Amateur video production

Some of the above effects are already showing up on YouTube. Soon everybody will be able to do them. The Kinect’s firmware already does “skeleton” detection, mapping out the positions of the limbs of a person in the view of the camera. That’s good for games, but it also allows motion capture for animation on the cheap. It also allows interesting live effects, distorting the body or making light sabres glow. Expect people in their own homes to be making their own Avatar-like movies, at least on a smaller scale.

These cameras will become so popular that we may need to start worrying about interference between their structured light patterns. These are apps I thought of in just a few minutes; I am sure there will be tons more. If you have something cool to imagine, put it in the comments.

Happy Seasons to all! and a Merry New Year.

I'm loving the Shweeb concept

There was a bit of a stir when Google last week announced that one of the winners of their 10^100 contest would be Shweeb, a pedal-powered monorail from New Zealand that has elements of PRT. Google will invest $1M in Shweeb to help them build a small system, and if it makes any money on the investment, that will go to transportation-related charities.

While I had preferred that Google fund a virtual world for developing and racing robocars, I have come to love a number of elements of Shweeb, though it’s not robocars, and the PRT community seems not to think it’s PRT. I think it is PRT, in that it’s personal, public and, according to the company, relatively rapid through the use of offline stations and non-stop point-to-point trips. PRT is an idea from the sixties that makes sense, but its proponents have tried for almost 50 years to get transit planners to believe in it and build it. A micro-PRT has opened as a Heathrow parking shuttle, but in general transit administrators simply aren’t early adopters. They don’t innovate.

What impresses me about Shweeb is its tremendous simplicity. While it’s unlikely to replace our cars or transit systems, it is simple enough that it can actually be built. Once built, it can serve as a testbed for many of PRT’s concepts, and go through incremental improvements.

Using the phone as its own mouse, and trusting the keyboard

I’ve written a bunch about my desire to be able to connect an untrusted input device to my computer or phone so that we could get hotels and other locations to offer both connections to the HDTVs in the rooms for monitors and a usable keyboard. This would let one travel with small devices like netbooks, tablet computers and smart phones yet still use them for serious typing and UI work while in the hotel or guest area.

I’ve proposed that the connection from device to the monitor be wireless. This would make it not very good for full screen video but it would be fine for web surfing, email and the like. This would allow us to use the phone as its own mouse, either by having a deliberate mouse style sensor on the back, or using the camera on the back of the phone as a reader of the surface. (A number of interesting experiments have shown this is quite doable if the camera can focus close and can get an LED to light up the surface.) This provides a mouse which is more inherently trustable, and buttons on the phone (or on its touchscreen) can be the mouse buttons. This doesn’t work for tablets and netbooks — for them you must bring your own mini-mouse or use the device as a touchpad. I am still a fan of the “trackpoint” nubbins and they can also make very small but usable mice.

The keyboard issue is still tough. While it would seem a wired connection is more secure, not all devices will be capable of such a connection, while almost all will do Bluetooth. Wired USB connections can pretend to be all sorts of devices, including CD-ROMs with autorun CDs in them. However, I propose the creation of a new Bluetooth HID profile for untrusted keyboards.

When connecting to an untrusted keyboard, the system would need to identify any privileged or dangerous operations. If such operations (like software downloads, destructive commands etc.) come from the keyboard, the system would insist on confirmation from the main device’s touchscreen or keyboard. So while you would be able to type on the keyboard to fill text boxes or write documents and emails, other things would be better done with the mouse or they would require a confirmation on the screen. Turns out this is how many people use computers these days anyway. We command line people would feel a bit burdened but could create shells that are good at spotting commands that might need confirmation.
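
As a sketch of the policy (Python, with an invented command list and stand-in dialogs — nothing here is a real HID API):

    # Sketch: text from an untrusted keyboard passes through, but anything
    # that looks privileged must be confirmed on the trusted touchscreen.

    DANGEROUS = ("rm ", "sudo ", "dd ", "curl ")   # prefixes worth a prompt

    def on_untrusted_line(line):
        if line.startswith(DANGEROUS):
            if not confirm_on_device(f"Untrusted keyboard sent {line!r}. Run it?"):
                return                     # drop the command silently
        execute(line)

    def confirm_on_device(prompt):
        return input(prompt + " [y/N] ") == "y"   # stand-in for a touch dialog

    def execute(line):
        print("running:", line)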

An open source licence for FOSS platforms only

Here’s a suggestion that will surely rankle some in the free software/GPL community, but which might be of good benefit to the overall success of such systems.

What I propose is a GPL-like licence under which source code could be published, but which effectively forbids one thing: work to make it run on proprietary operating systems, in particular Windows and MacOS.

The goal would be to allow the developers of popular programs for Windows, in particular, to release their code and allow the FOSS community to generate free versions which can run on Linux, *BSD and the like. Such companies would do this after deciding that there isn’t enough market on those platforms to justify a commercial venture in the area. Rather than, as Richard Stallman would say, “hoarding” their code, they could release it in this fashion. However, they would not fear they were doing much damage to their market on Windows. They would have to accept that they were disclosing their source code to their competitors and customers, and some companies fear that and will never do this. But some would, and in fact some already have, even without extra licence protection.

An alternate step would be to release it specifically so the community can make sure the program runs under WINE, the Windows API platform for Linux and others. Many Windows programs already run under WINE, but almost all of them have little quirks and problems. If a program is really popular, the WINE team patches WINE to deal with it, but it would be much nicer if the program itself just got better behaved. In this case, the licence would have some rather unusual terms, in that people would have to produce versions and EXEs that run only under WINE — they would not run on native Windows. They could do this by inserting calls to check if they are running on WINE and aborting, or they could do something more complex like making use of specific APIs added to WINE that are not found in Windows. Of course, coders could readily remove these changes and make binaries that run on Windows natively, but coders can also just pirate the raw Windows binaries — both would be violations of copyright, and the latter is probably easier to do.
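
The conventional marker here is the wine_get_version function that Wine’s ntdll.dll exports and real Windows does not. A minimal sketch of the check, shown in Python via ctypes (illustrating the enforcement idea, not production code):

    # Sketch: refuse to run on native Windows by looking for Wine's
    # wine_get_version export in ntdll.dll. Windows-only code.
    import ctypes

    def running_under_wine():
        try:
            ntdll = ctypes.WinDLL("ntdll.dll")
            ntdll.wine_get_version        # AttributeError on real Windows
            return True
        except (AttributeError, OSError):
            return False

    if __name__ == "__main__":
        if not running_under_wine():
            raise SystemExit("This build runs only under WINE.")

As the post notes, such a check is trivially removed from the source; it is a licence marker, not copy protection.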

A BIOS and OS designed for very fast booting (and aborting)

We all know how annoying it is that today’s much faster computers take such a long time to boot, and OS developers are working on speeding it up. Some time ago I proposed a defragmenter that would notice what blocks were read, and in what order, at boot and put them contiguously on the disk. I was told that experiments with this had not had much success, but more recently I read reports that the latest Linux distributions will boot as much as 3 times faster on solid state disks as on rotating ones. There are some SSDs with performance that high (and higher), but typical ones run more in the 120 MB/second range, better than 80 MB/second hard drives but getting more wins from the complete lack of seek latency.

However, today I want to consider something which is a large portion of the boot time, namely the power-on-self-test or “POST.” This is what the BIOS does before it gets ready to load the real OS. It’s important, but on many systems is quite slow.

I propose an effort to make the POST multitask with the loading of the real OS. Particularly on dual-core systems, this would be done by having one core do the POST and other BIOS work (after testing all the cores, of course) while the other cores are used by the OS for loading. There are ways to do all this with one core, which I will discuss below, but this one is simple and almost all new computers have multiple cores.

Of course, the OS has to know it’s not supposed to touch certain hardware until after the BIOS is done initializing it and testing it. And so, there should be BIOS APIs to allow the OS to ask about this and get events as BIOS operations conclude. The OS, until given ownership of the screen, would output its status updates to the screen via a BIOS call. In fact, it would do that with all hardware, though the screen, keyboard and primary hard disk are the main items. When I say the OS, I actually mean both the bootloader that loads the OS and the OS itself once it is handed off to.

Next, the BIOS should, as soon as it has identified that the primary boot hard disks are ready, begin transferring data from the boot area into RAM. Almost all machines have far more RAM than they need to boot, so pre-loading all blocks needed for boot into a cache, done in optimal order, will mean that by the time the OS kernel takes over, many of the disk blocks it will want to read will already be sitting in RAM. Ideally, as I noted, the blocks should have been pre-stored in contiguous zones on the disk by an algorithm that watched the prior boots to see what was accessed and when.

Indeed, if there are multiple drives, the pre-loader could be configured to read from all of them, in a striping approach. Done properly, a freshly booted computer, once the drives had spun up, would start reading the few hundred megabytes of files it needs to boot from multiple drives into RAM. All of this should be doable in just a few seconds on RAID-style machines, where 3 striped disks can deliver 200 MB/second or more of disk read performance. But even on a single drive, it should be quick. It would begin with the OS kernel and boot files, but then pre-cache all the pages from files used in typical boots. For any special new files, only a few seeks will be required after this is done.
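
A minimal sketch of the prefetch side (Python; the device path and log format are invented, and a real implementation would live in the bootloader or kernel, not a script):

    # Sketch: replay the block-read order recorded on earlier boots so the
    # page cache is warm before the kernel asks for anything.
    import os

    def preload(device="/dev/sda", log="/var/boot/readlog"):
        fd = os.open(device, os.O_RDONLY)
        with open(log) as f:
            for line in f:                     # one "offset length" per line
                offset, length = map(int, line.split())
                os.pread(fd, length, offset)   # pulls the span into the cache
        os.close(fd)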

Towards frameless (clockless) video

Recently I wrote about the desire to provide power in every sort of cable, in particular the video cable. And while we’ll be using the existing video cables (VGA and DVI/HDMI) for some time to come, I think it’s time to investigate new thinking in sending video to monitors. The video cable has generally been the highest bandwidth cable going out of a computer, though the fairly rare 10 gigabit ethernet is around the speed of HDMI 1.3 and DisplayPort, and 100gb ethernet will be yet faster.

Even though digital video methods are so fast, the standard DVI cable is not able to drive my 4 megapixel monitor — this requires dual-link DVI, which as the name suggests, runs 2 sets of DVI wires (in the same cable and plug) to double the bandwidth. The expensive 8 megapixel monitors need two dual-link DVI cables.

Now we want enough bandwidth to completely redraw a screen at a suitable refresh (100hz) if we can get it. But we may want to consider how we can get that, and what to do if we can’t get it, either because our equipment is older than our display, or because it is too small to have the right connector, or must send the data over a medium that can’t deliver the bandwidth (like wireless, or long wires.)

Today all video is delivered in “frames” which are an update of the complete display. This was the only way to do things with analog rasterized (scan line) displays. Earlier displays actually were vector based, and the computer sent the display a series of lines (start at x,y then draw to w,z) to draw to make the images. There was still a non-fixed refresh interval as the phosphors would persist for a limited time and any line had to be redrawn again quickly. However, the background of the display was never drawn — you only sent what was white.

Today, the world has changed. Displays are made of pixels but they all have, or can cheaply add, a “frame buffer” — memory containing the current image. Refresh of pixels that are not changing need not be done on any particular schedule. We usually want to be able to change only some pixels very quickly. Even in video we only rarely change all the pixels at once.

This approach to sending video was common in the early remote terminals that worked over ethernet, such as X windows. In X, the program sends more complex commands to draw things on the screen, rather than sending a complete frame 60 times every second. X can be very efficient when sending things like text, as the letters themselves are sent, not the bitmaps. There are also a number of protocols used to map screens over much slower networks, like the internet. The VNC protocol is popular — it works with frames but calculates the difference and only transmits what changes on a fairly basic level.

We’re also changing how we generate video. Only video captured by cameras has an inherent frame rate any more. Computer screens, and even computer generated animation are expressed as a series of changes and movements of screen objects though sometimes they are rendered to frames for historical reasons. Finally, many applications, notably games, do not even work in terms of pixels any more, but express what they want to display as polygons. Even videos are now all delivered compressed by compressors that break up the scene into rarely updated backgrounds, new draws of changing objects and moves and transformations of existing ones.

So I propose two distinct things:

  1. A unification of our high speed data protocols so that all of them (external disks, SAN, high speed networking, peripheral connectors such as USB and video) benefit together from improvements, and one family of cables can support all of them.
  2. A new protocol for displays which, in addition to being able to send frames, sends video as changes to segments of the screen, with timestamps as to when they should happen.

The case for approach #2 is obvious. You can have an old-school timed-frame protocol within a more complex protocol able to work with subsets of the screen. The main issue is how much complexity you want to demand in the protocol. You can’t demand too much or you make the equipment too expensive to make and too hard to modify. Indeed, you want to be able to support many different levels, but not insist on support for all of them. Levels can include:

  • Full frames (ie. what we do today)
  • Rastered updates to specific rectangles, with ability to scale them.
  • More arbitrary shapes (alpha) and ability to move the shapes with any timebase
  • VNC level abilities
  • X windows level abilities
  • Graphics card (polygon) level abilities
  • In the unlikely extreme, the abilities of high level languages like Display PostScript.

I’m not sure the last layers are good to standardize in hardware, but let’s consider the first few levels. When I bought my 4 megapixel (2560x1600) monitor, it was annoying to learn that none of my computers could actually display on it, even at a low frame rate. Technically single DVI has the bandwidth to do it at 30hz, but this is not a desirable option if it’s all you ever get to do. While I did indeed want to get a card able to make full use of it, the reality is that 99.9% of what I do on it could be done over the DVI bandwidth with just the ability to update and move rectangles, or to do so at a slower speed. The whole screen is never completely replaced in a situation where waiting 1/30th of a second would be an issue. But the ability to paint a small window at 120hz on displays that can do this might well be very handy. Adoption of a system like this would allow even a device with a very slow output (such as USB 2 at 480 megabits) to still use all the resolution for typical activities of a computer desktop. While you might think that video would be impossible over such a slow bus, if the rectangles could scale, the bus could still do things like playing DVDs. While I do not suggest every monitor be able to decode our latest video compression schemes in hardware, the ability to use the post-compression primitives (drawing subsections and doing basic transforms on them) might well be enough to feed quite a bit of video through a small cable.

One could imagine even use of wireless video protocols for devices like cell phones. One could connect a cell phone with an HDTV screen (as found in a hotel) and have it reasonably use the entire screen, even though it would not have the gigabit bandwidths needed to display 1080p framed video.

Sending in changes to a screen with timestamps of when they should change also allows the potential for super-smooth movement on screens that have very low latency display elements. For example, commands to the display might involve describing a foreground object, and updating just that object hundreds of times a second. Very fast displays would show those updates and present completely smooth motion. Slower displays would discard the intermediate steps (or just ask that they not be sent.) Animations could also be sent as instructions to move (and perhaps rotate) a rectangle and do it as smoothly as possible from A to B. This would allow the display to decide what rate this should be done at. (Though I think the display and video generator should work together on this in most cases.)

Note that this approach also delivers something I asked for in 2004 — that it be easy to make any LCD act as a wireless digital picture frame.

It should be noted that HDMI supports a small amount of power (5 volts at 50ma) and in newer forms both it and DisplayPort have stopped acting like digitized versions of analog signals and more like highly specialized digital buses. Too bad they didn’t go all the way.

Protocol

As noted, it is key the basic levels be simple to promote universal adoption. As such, the elements in such a protocol would start simple. All commands could specify a time they are to be executed if it is not immediate.

  • Paint line or rectangle with specified values, or gradient fill.
  • Move object, and move entire screen
  • Adjust brightness of rectangle (fade)
  • Load pre-buffered rectangle. (Fonts, standard shapes, quick transitions)
  • Display pre-buffered rectangle

However, lessons learned from other protocols might expand this list slightly.
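
As a sketch of how such commands might be framed on the wire (Python; the opcode value and field layout are invented — the point is only that every command carries its execution time):

    # Sketch: packing a timestamped "move rectangle" display command.
    import struct

    OP_MOVE_RECT = 0x03

    def move_rect(ts_us, x, y, w, h, new_x, new_y):
        # u8 opcode, u64 timestamp (microseconds), six u16 coordinates
        return struct.pack("<BQ6H", OP_MOVE_RECT, ts_us,
                           x, y, w, h, new_x, new_y)

    # move a 320x240 rectangle 10 pixels right, one frame (16.7 ms) out
    pkt = move_rect(16_667, 100, 100, 320, 240, 110, 100)

A display too slow to honour the timestamp would simply apply the command late, or skip intermediate steps, as described above.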

One connector?

This, in theory allows the creation of a single connector (or compatible family of connectors) for lots of data and lots of power. It can’t be just one connector though, because some devices need very small connectors which can’t handle the number of wires others need, or deliver the amount of power some devices need. Most devices would probably get by with a single data wire, and ideally technology would keep pushing just how much data can go down that wire, but any design should allow for simply increasing the number of wires when more bandwidth than a single wire can do is needed. (Presumably a year later, the same device would start being able to use a single wire as the bandwidth increases.) We may, of course, not be able to figure out how to do connectors for tomorrow’s high bandwidth single wires, so you also want a way to design an upwards compatible connector with blank spaces — or expansion ability — for the pins of the future, which might well be optical.

There is also a security implication to all of this. While a single wire that brings you power, a link to a video monitor, LAN and local peripherals would be fabulous, caution is required. You don’t want to be able to plug into a video projector in a new conference room and have it pretend to be a keyboard that takes over your computer. As this is a problem with USB in general, it is worth solving regardless of this. One approach would be to have every device use a unique ID (possibly a certified ID) so that you can declare trust for all your devices at home, and perhaps everything plugged into your home hubs, but be notified when a never seen device that needs trust (like a keyboard or drive) is connected.

To some extent having different connectors helps this problem a little bit, in that if you plug an ethernet cable into the dedicated ethernet jack, it is clear what you are doing, and that you probably want to trust the LAN you are connecting to. The implicit LAN coming down a universal cable needs a more explicit approval.

Final rough description

Here’s a more refined rough set of features of the universal connector (with a sketch of the negotiation sequence after the list):

  • Shielded twisted pair with ability to have varying lengths of shield to add more pins or different types of pins.
  • Asserts briefly a low voltage on pin 1, highly current limited, to power negotiator circuit in unpowered devices
  • Negotiator circuits work out actual power transfer, at what voltages and currents and on what pins, and initial data negotiation about what pins, what protocols and what data rates.
  • If no response is given to negotiation (ie. no negotiator circuit) then measure resistance on various pins and provide specified power based on that, but abort if current goes too high initially.
  • Full power is applied, unpowered devices boot up and perform more advanced negotiation of what data goes on what pins.
  • When full data handshake is obtained, negotiate further functions (hubs, video, network, storage, peripherals etc.)
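
In code form, that sequence might look like this sketch (Python; the message names, voltages and resistor table are all invented):

    # Sketch of the handshake: probe power, negotiate if a chip answers,
    # fall back to resistor coding for dumb plugs, then apply full power.

    DUMB_TABLE = {10_000: (5, 1.0), 22_000: (12, 2.0)}  # ohms -> (V, A)

    def negotiate(source, sink):
        source.apply(volts=5, max_amps=0.01)      # current-limited probe power
        offer = sink.respond()                    # None if no negotiator chip
        if offer is None:
            r = source.measure_resistance()       # dumb plug: resistor coding
            return DUMB_TABLE.get(r)              # unknown resistor: stay safe
        volts, amps = sink.pick(source.offers())  # sink chooses from the menu
        source.apply(volts=volts, max_amps=amps)  # full power, then data setup
        return volts, amps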

Bluetooth in all video cameras, and smart microphones

I suggested this as a feature for my Canon 5D SLR, which shoots video, but let me expand it to all video cameras, indeed all cameras. They should all include Bluetooth, notably the 480 megabit Bluetooth 3.0. It’s cheap and the chips are readily available.

The first application is the use of the high-fidelity audio profile for microphones. Everybody knows the worst thing about today’s consumer video cameras is the sound. Good mics are often large, heavy and expensive, and people don’t want to carry them on the camera. Mics on the subjects of the video are always better. While they are not readily available today, if consumer video cameras supported them, there would be a huge market in remote Bluetooth microphones for use in filming.

For quality, you would want to support an error-correcting protocol, which means mixing the sound onto the video a few seconds after the video is laid down. That’s not a big deal with digital recording to flash.

Such a system easily supports multiple microphones too, mixing them or ideally just recording them as independent tracks to be mixed later. And that includes an off-camera microphone for ambient sounds. You could even put down multiples of those, and then do clever noise reduction tricks after the fact with the tracks.

The cameraman or director could also have a bluetooth headset on (those are cheap but low fidelity) to record a track of notes and commentary, something you can’t do if there is an on-camera mic being used.

I also noted a number of features for still cameras as well as video ones:

  • Notes by the photographer, as above
  • Universal protocol for control of remote flashes
  • Remote control firing of the camera, with all the functions USB remote control has
  • At 480mbits, downloading of photos and even live video streams to a master recorder somewhere

It might also be interesting to experiment with smart microphones. A smart microphone would be placed away from the camera, nearer the action being filmed (sporting events, for example). The camera user would then zoom in on the microphone, and with the camera’s autofocus determine how far away it is, and with a compass, the direction. Then the microphone, which could either be motorized or an array, could be aimed in the direction of the action. (It would be told the distance and direction of the action from the camera in the same fashion as the mic was located.) When you pointed the camera at something, the off-camera mic would also point at it, except during focus hunts.
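
The aiming math is simple 2-D geometry once the camera has range and compass bearing to both the mic and the subject. A sketch (Python; flat-ground assumption, invented function names):

    # Sketch: from the camera's measurements, compute the compass bearing
    # the remote mic should turn to in order to point at the subject.
    import math

    def to_xy(dist_m, bearing_deg):
        rad = math.radians(bearing_deg)
        return dist_m * math.sin(rad), dist_m * math.cos(rad)  # east, north

    def mic_bearing_to_subject(mic_dist, mic_brg, subj_dist, subj_brg):
        mx, my = to_xy(mic_dist, mic_brg)
        sx, sy = to_xy(subj_dist, subj_brg)
        return math.degrees(math.atan2(sx - mx, sy - my)) % 360

    # Mic 30 m away at bearing 45°; subject 60 m away at bearing 90°:
    print(round(mic_bearing_to_subject(30, 45, 60, 90)))  # ~119°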

There could, as before, be more than one of these, and this could be combined with on-person microphones as above. And none of this has to be particularly expensive. The servo-controlled mic would be a high end item but within consumer range, and fancy versions would be of interest to pros. Remote mics would also be good for getting better stereo on scenes. Key to all this is that adding the Bluetooth to the camera is a minor cost (possibly compensated for by dropping the microphone jack) but it opens up a world of options, even for cheap cameras.

And of course, the most common cameras out there now — cell phones — already have Bluetooth and compasses and these other features. In fact, cell phones could readily be your off-camera microphones. If there were a nice app with a quick pairing protocol, you could ask all the people in the scene to just run it on their cell phone and put the phone in their front pocket. Suddenly you have a mic on each participant (up to the limit of Bluetooth, which is about 8 devices at once).

Software recalls and quick fixes to safety-critical computers in robocars

While giving a talk on robocars to a Stanford class on automotive innovation on Wednesday, I outlined the growing problem of software recalls and how they might affect cars. If a company discovers a safety problem in a car’s software, it may be advised by its lawyers to shut down or cripple the cars by remote command until a fix is available. Sebastian Thrun, who had invited me to address this class, felt this could be dealt with through the ability to remotely patch the software.

This brings up an issue I have written about before — the giant dangers of automatic software updates. Automatic software updates are a huge security hole in today’s computer systems. On typical home computers, there are now many packages that do automatic updates. Due to the lack of security in these OSs, a variety of companies have been “given the keys” to full administrative access on the millions of computers which run their auto-updater. Companies which go to all sorts of lengths to secure their computers and networks are routinely granting all these software companies top level access (ie. the ability to run arbitrary code on demand) without thinking about it. Most of these software companies are good and would never abuse this, but this doesn’t mean that they don’t have employees who could be bribed or suborned, or security holes in their own networks which would let an attacker in to make a malicious update which is automatically sent out.

I once asked the man who ran the server room where the servers for Pointcast (the first big auto-updating application) were housed, how many fingers somebody would need to break to get into his server room. “They would not have to break any. Any physical threat and they would probably get in,” I heard. This is not unusual, and often there are ways in needing far less than this.

So now let’s consider software systems which control our safety. We are trusting our safety to computers more and more these days. Every elevator or airplane has a computer which could kill us if maliciously programmed. More and more cars have them, and more will over time, long before we ride in robocars. All around the world are electric devices with computer controls which could, if programmed maliciously, probably overload and start many fires, too. Of course, voting machines with malicious programs could even elect the wrong candidates and start baseless wars. (Not that I’m saying this has happened, just that it could.)

However these systems do not have automatic update. The temptation for automatic update will become strong over time, both because it is cheap and it allows the ability to fix safety problems, and we like that for critical systems. While the internal software systems of a robocar would not be connected to the internet in a traditional way, they might be programmed to, every so often, request and accept certified updates to their firmware from the components of the car’s computer systems which are connected to the net.

Imagine a big car company with 20 million robocars on the road, and an automatic software update facility. This would allow a malicious person, if they could suborn that automatic update ability, to load in nasty software which could kill tens of millions. Not just the people riding in the robocars would be affected, because the malicious software could command idle cars to start moving and hit other cars or run down pedestrians. It would be a catastrophe of grand proportions, greater than a major epidemic or multiple nuclear bombs. That’s no small statement.

There are steps that can be taken to limit this. Software updates should be digitally signed, and they should be signed by multiple independent parties. This stops any one of the official parties from being suborned (either by being a mole, or being tortured, or having a child kidnapped, etc.) to send out an update. But it doesn’t stop the fact that the 5 executives who have to sign an update will still be trusting the programming team to have delivered them a safe update. Assuring that requires a major code review of every new update, by a team that carefully examines all source changes and compiles the source themselves. Right now this just isn’t common practice.
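
A sketch of the multiple-signer rule (Python; verify() is a stand-in for a real signature scheme such as Ed25519, and the threshold is invented):

    # Sketch: an update is accepted only if at least K of the N authorized
    # keys have signed it, so no single suborned signer can push code.

    REQUIRED = 3   # K: how many independent signers must approve

    def verify(update_bytes, signature, public_key):
        ...   # stand-in for real cryptographic verification

    def update_is_authorized(update_bytes, signatures, authorized_keys):
        good = {key for key in authorized_keys
                    for sig in signatures
                    if verify(update_bytes, sig, key)}
        return len(good) >= REQUIRED

As the text notes, this only moves the problem: the signers must still trust that what they sign is what the reviewers actually examined.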

However, it gets worse than this. An attacker can also suborn the development tools, such as the C compilers and linkers which build the final binaries. The source might be clean, but few companies keep perfect security on all their tools. Doing so requires that all the tool vendors have a similar attention to security in all their releases. And on all the tools they use.

One has to ask if this is even possible. Can such a level of security be maintained on all the components, enough to stop a terrorist programmer or a foreign government from inserting a trojan into a tool used by a compiler vendor who then sends certified compilers to the developers of safety-critical software such as robocars? Can every machine on every network at every tool vendor be kept safe from this?

We will try but the answer is probably not. As such, one result may be that automatic updates are a bad idea. If updates spread more slowly, with the individual participation of each machine owner, it gives more time to spot malicious code. It doesn’t mean that malicious code can’t be spread, as individual owners who install updates certainly won’t be checking everything they approve. But it can stop the instantaneous spread, and give a chance to find logic bombs set to go off later.

Normally we don’t want to go overboard worrying about “movie plot” threats like these. But when a single person can kill tens of millions because of a software administration practice, it starts to be worthy of notice.

Every connector, including video, should send power both ways

I’ve written a lot about how to do better power connectors for all our devices, and the quest for universal DC and AC power plugs that negotiate the power delivered with a digital protocol.

While I’ve mostly been interested in some way of standardizing power plugs (at least within a given current range, and possibly even beyond) today I was thinking we might want to go further, and make it possible for almost every connector we use to also deliver or receive power.

I came to this realization plugging my laptop into a projector which we generally do with a VGA or DVI cable these days. While there are some rare battery powered ones, almost all projectors are high power devices with plenty of power available. Yet I need to plug my laptop into its own power supply while I am doing the video. Why not allow the projector to send power to me down the video cable? Indeed, why not allow any desktop display to power a laptop plugged into it?

As you may know, a Power-over-ethernet (PoE) standard exists to provide up to 13 watts over an ordinary ethernet connector, and is commonly used to power switches, wireless access points and VoIP phones.

In all the systems I have described, all but the simplest devices would connect and one or both would provide an initial very low current +5vdc offering that is enough to power only the power negotiation chip. The two ends would then negotiate the real power offering — what voltage, how many amps, how many watt-hours are needed or available etc. And what wires to send the power on for special connectors.

An important part of the negotiation would be to understand the needs of devices and their batteries. In many cases, a power source may only offer enough power to run a device but not charge its battery. Many laptops will run on only 10 watts, normally, and less with the screen off, but their power supplies will be much larger in order to deal with the laptop under full load and the charging of a fully discharged battery. A device’s charging system will have to know to not charge the battery at all in low power situations, or to just offer it minimal power for very slow charging. An ethernet cable offering 13 watts might well tell the laptop that it will need to go to its own battery if the CPU goes into high usage mode. A laptop drawing an average of 13 watts (not including battery charging) could run forever with the battery providing for peaks and absorbing valleys.
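
For instance, the budgeting might look like this sketch (Python; the thresholds and the 60 W charger cap are invented):

    # Sketch: the battery only gets whatever the negotiated supply has
    # left over after the machine's running load is covered.

    def charge_plan(offered_w, running_w, battery_full):
        spare = offered_w - running_w
        if spare <= 0:
            return "run from supply plus battery; no charging"
        if battery_full:
            return "run from supply"
        if spare < 10:
            return f"trickle-charge at {spare:.0f} W"
        return f"charge at {min(spare, 60):.0f} W"   # charger design limit

    print(charge_plan(offered_w=13, running_w=10, battery_full=False))
    # -> trickle-charge at 3 W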

Now a VGA or DVI cable, though it has thin wires, has many of them, and at 48 volts could actually deliver plenty of power to a laptop. And thus no need to power the laptop when on a projector or monitor. Indeed, one could imagine a laptop that uses this as its primary power jack, with the power plug having a VGA male and female on it to power the laptop.

I think it is important that these protocols go both directions. There will be times when the situation is reversed, when it would be very nice to be able to power low power displays over the video cable and avoid having to plug them in. With the negotiation system, the components could report when this will work and when it won’t. (If the display can do a low power mode it can display a message about needing more juice.) Tiny portable projectors could also get their power this way if a laptop will offer it.

Of course, this approach can apply everywhere, not just video cables and ethernet cables, though they are prime candidates. USB of course is already power+data, though it has an official master/slave hierarchy and thus does not go both directions. It’s not out of the question to even see a power protocol on headphone cables, RF cables, speaker cables and more. (Though there is an argument that for headphones and microphones there should just be a switch to USB and its cousins.)

Laptops have tried to amalgamate their cables before, through the use of docking stations. The problem was these stations were all custom to the laptop, and often priced quite expensively. As a result, many prefer the simple USB docking station, which can provide USB, wired ethernet, keyboard, mouse, and even slowish video through one wire — all standardized and usable with any laptop. However, it doesn’t provide power because of the way USB works. Today our video cables are the highest bandwidth connector on most devices, and as such they can’t be easily replaced by lower bandwidth ones, so throwing power through them makes sense, and even throwing a USB data bus for everything else might well make a lot of sense too. This would bring us back to having just a single connector to plug in. (It creates a security problem, however, as you should not let a randomly plugged-in device act as an input such as a keyboard or drive, since such a device could take over your computer if somebody has hacked it to do so.)

Serve the multi-monitor market better with thin or removable bezels

A serious proportion of the computer users I know these days have gone multi-monitor. While I strongly recommend to everybody the 30” monitor which I have (Dell 3007WFP and cousins, or Apple’s), at $1000 it’s not the most cost effective way to get a lot of screen real estate. Today 24” 1080p monitors are down to $200, and flat panels don’t take up much space, so it makes a lot of sense to have two monitors or more.

Except there’s a big gap between them. And while there are a few monitors that advertise being thin bezel, even these have at least half an inch, so two monitors together will still have an inch of (usually black) bezel between them.

I’m quite interested in building a panoramic photo wall with this new generation of cheap panels, but the 1” bars will be annoying, though tolerable from a distance. But does it have to be that way?

There are 1/4” bezel monitors made for the video wall industry, but it’s all very high end, and in fact from what I have seen it’s hard to find these monitors for sale on the regular market. Where they are sold, as “specialty” market monitors they no doubt cost 2-3x as much. I really think it’s time to push multi-monitor as more than a specialty market.

I accept that you need to have something strong supporting and protecting the edge of your delicate LCD panel. But we all know from laptops it doesn’t have to be that wide. So what might we see?

  • Design the edges of the monitor to interlock, and have the supporting substrate further back on the left and further forward on the right. Thus let the two panels get closer together. Alternately let one monitor go behind the other and try to keep the distance to a minimum.
  • Design monitors that can be connected together by removing the bezel and protection/mounting hardware and carefully inserting a joiner unit which protects the edges of both panels but gets them as close together as it can, and firmly joins the two backs for strength. May not work as well for 2x2 grids without special joiners.
  • Just sell a monitor that has 2, 3 or 4 panels in it, mounted as close as possible. I think people would buy these, allowing them to be priced even better than two monitors. Offer rows of 1, 2 or 3 and a 2x2 grid. I will admit that a row of 4, which is what I want, is not likely to be as big a market.
  • Sell components to let VARs easily build such multi-panel monitors.

When it comes to multi-panel, I don’t know how close you could get the panels but I suspect it could be quite close. So what do you put in the gap? Well, it could be a black strip or a neutral strip. It could also be a translucent one that deliberately covers one or two pixels on each side, and thus shines and blends their colours. It might be interesting to see how much you could reduce visual effect of the gap. The eye has no problem looking through grid windows at a scene and not seeing the bars, so it may be that bars remain the right answer.

It might even be possible to cover the gap with a small thin LCD display strip. Such a strip, designed to have a very sharp edge, would probably go slightly in front of the panels, and appear as a bump in the screen — but a bump with pixels. From a distance this might look like a video wall with very obscured seams.

For big video walls, projection is still a popular choice, other than the fact that such walls must be very deep. With projection, you barely need the bezel at all, and in fact you can overlap projectors and use special software to blend them for a completely seamless display. However, projectors need expensive bulbs that burn out fairly quickly in constant use, so they have a number of downsides. LCD panel walls have enough upsides that people would tolerate the gaps if they can be made small using techniques above.

Anybody know how the Barco wall at the Comcast center is done? Even in the video from people’s camcorders, it looks very impressive.

If you see LCD panels larger than 24” with thin bezels (3/8 inch or less) at a good price (under $250) and with a good quality panel (doesn’t change colour as you move your head up and down) let me know. The Samsung 2443 looked good until I learned that it, and many others in this size, have serious view angle problems.

A standard OS mini-daemon, saving power and memory

On every system we use today (except the iPhone) a lot of programs want to be daemons — background tasks that sit around to wait for events or perform certain regular operations. On Windows it seems things are the worst, which is why I wrote before about how Windows needs a master daemon. A master daemon is a single background process that uses a scripting language to perform most of the daemon functions that other programs are asking for. A master daemon will wait for events and fire off more full-fledged processes when they happen. Scripts would allow detection of connection on ports, updated software versions becoming available, input from the user and anything else that becomes popular.

(Unix always had a simple master daemon for internet port connections, called inetd, but today Linux systems tend to be full of always-running daemons.)

Background tasks make a system slow to start up, and take memory. This is most noticed on our new, lower powered devices like smartphones. So much so that Apple made the dramatic decision to not allow applications to run in the background — no multitasking is allowed. This seriously restricts what the iPhone can do, but Apple feels the increase in performance is worth it. It is certainly true that Windows Mobile (which actually makes it hard to terminate a program once you have started it running) very quickly bloats and becomes unusable.

Background tasks are also sucking battery life on phones. On my phone it’s easy to leave Google maps running in the background by mistake, and then it will sit there constantly sucking down maps, using the network and quickly draining the battery. I have not tried all phones, but Windows Mobile on my HTC is a complete idiot about battery management. Once you start up the network connection you seem to have to manually take it down, and if you don’t you can forget about your battery life. Often is the time you’ll pull the phone out to find it warm and draining. I don’t know if the other multitasking phones, like the Android, Pre and others have this trouble.

The iPhone’s answer is too draconian. I think the answer lies in a good master daemon, where programs can provide scripts in a special language to get the program invoked on various events. Whatever is popular should be quickly added to the daemon if it’s not too large. (The daemon itself can be modular so it only keeps in ram what it really needs.)

In particular, the scripts should say how important quick response time is, and whether the woken code will want to use the network. Consider an e-mail program that wants to check for new e-mail every 10 minutes. (Ideally it should have IMAP push but that’s another story.)

The master daemon scheduler should realize the mail program doesn’t have to connect exactly every 10 minutes, though that is what a background task would do. It doesn’t mind if it’s off by even a few minutes. So if there are multiple programs that want to wake up and do something every so often, they can be scheduled to only be loaded one or two at a time, to conserve memory and CPU. So the e-mail program might wait a few minutes for something else to complete. In addition, since the e-Mail program wants to use the network, groups of programs that want to use the network could be executed in order (or even, if appropriate, at the same time) so that the phone ends up setting up a network connection (on session based networks) and doing all the network daemons, and then closing it down.
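
A sketch of that scheduler (Python; the slack values and API are invented):

    # Sketch: tasks declare a period plus how much slack they tolerate,
    # and the daemon batches everything that can run together so the
    # radio comes up once per batch instead of once per task.
    import time

    tasks = []   # each task: [next_due, period_s, slack_s, wants_net, fn]

    def register(period_s, slack_s, wants_net, fn):
        tasks.append([time.time() + period_s, period_s, slack_s,
                      wants_net, fn])

    def run_due_batch():
        now = time.time()
        if not tasks or min(t[0] for t in tasks) > now:
            return                   # nothing actually due yet
        # run everything due, plus anything whose slack lets it run early
        batch = [t for t in tasks if t[0] - t[2] <= now]
        if any(t[3] for t in batch):
            print("bringing up the network once for the whole batch")
        for t in batch:
            t[4]()                   # run the task's script
            t[0] = now + t[1]        # schedule its next wakeup

    register(600, 180, True, lambda: print("checking mail"))  # every ~10 min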

The master daemon could also centralize event notifications coming from the outside. Programs that want to be woken up for such events (such as incoming emails or IMs) could register to be woken on various events on ports. If the wireless network doesn’t support that, it might allow notifications to come in via SMS that a new task awaits. When this special SMS comes in, the network connection would be brought up, and the signalled task would run, along with other tasks that want to do a quick check of the network. As much of this logic as possible should be in the daemon script, so that the full program is only woken up if that is truly needed.

The daemon would of course handle all local events (key presses, screen touches) and also events from other sensors, like the GPS (wake me up if we get near here, or more than 100 meters from there, etc.). It would also detect gestures with the accelerometer. If the user shakes the phone or flips it in a certain way, a program might want to be woken up.

And of course, it should be tied to the existing daemon that handles incoming calls and SMSs. Apps should be able to (if given permission) take control of incoming communications, to improve what the regular phone does.

This system could give the illusion of a full multitasking phone without the weight of it. Yes, loading in an app upon an event might be slightly slower than having it sitting there in ram. But if there is spare ram, it would of course be cached there anyway. An ideal app would let itself be woken up in stages, with a small piece of code loading quickly to give instant UI response, and the real meat loading more slowly if need be.

While our devices are going to get faster, this is not a problem which will entirely go away. The limiting factors in a portable device are mostly based on power, including the power to keep the network radios on. And applications will keep getting bigger the faster our CPUs get and the bigger our memories get. So this approach may have more lifetime than you think.

Design for a universal plug

I’ve written before about both the desire for universal DC power and, more simply, universal laptop power at meeting room desks. This week saw the announcement that all the companies selling cell phones in Europe will standardize on a single charging connector, based on micro-USB. (A large number of devices today use the now-deprecated mini-USB plug, and it was close to becoming a standard by default.) As most devices are including a USB plug for data, this is not a big leap, though it turned out a number of devices would not charge from other people’s chargers, either from stupidity or malice. (My Motorola RAZR will not charge from a generic USB charger or even an ordinary PC. It needs a special charger with the data pins shorted, or if it plugs into a PC, it insists on a dialog with the Motorola Phone Tools driver before it will accept a charge. Many suspect this was just to sell chargers and the software.) The new agreement is essentially just a vow to make sure everybody’s chargers work with everybody’s devices. It’s actually a win for the vendors, who can now not bother to ship a charger with the phone, presuming you have one or will buy one. They are not required to have the plug — supplying an adapter is sufficient, as Apple is likely to do. MP3 player vendors have not yet signed on.

USB isn’t a great choice, since it officially delivers only 500 mA at 5 volts (2.5 watts), though many devices pull a full amp through it. That’s not enough to quickly charge, or even power, some devices. USB 3.0 officially raised the limit to 900 mA, or 4.5 watts.

USB is a data connector with some power provided, which has been suborned for charging and power. What about a design for a universal plug aimed at doing power first, with data as the secondary goal? Not that it would suck at data, since it’s now pretty easy to feed a gigabit over 2 twisted pairs with cheap circuits. Let’s look at the constraints.

Smart Power

The world’s new power connector should be smart. It should offer 5 volts at low current to start, to power the electronics that will negotiate how much voltage and current will actually go through the connector. It should also support dumb plugs, which offer only a resistance value on the data pins, with each resistance value specifying a commonly used voltage and current level.

Real current would never flow until a proper connection (and ground, if needed) has been assured. As such, there is minimal risk of arcing or electric shock through the plug. The source can offer the sorts of power it can deliver (AC, DC, what voltages, what currents) and the sink (the power-using device) can pick what it wants from that menu. Sinks should be liberal in what they accept, though (as they all have become of late), so they can be plugged into existing dumb outlets through simple adapters.
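
Here is a toy model of that negotiation. The voltage/current menu and the resistor codes are invented for illustration, not taken from any real standard; the point is only that a smart sink picks from the source’s menu, while a dumb plug’s resistor selects a common profile:

```python
# Source advertises what it can deliver; nothing beyond 5 V trickle
# flows until a profile has been agreed on.
SOURCE_MENU = [
    {"type": "DC", "volts": 5,  "max_amps": 2},
    {"type": "DC", "volts": 12, "max_amps": 5},
    {"type": "DC", "volts": 19, "max_amps": 4.5},
]

# Dumb plugs: a resistor across the data pins encodes a common profile.
RESISTOR_CODES = {10_000: (5, 1), 22_000: (12, 3), 47_000: (19, 4.5)}

def negotiate(sink_wants_volts, sink_wants_amps, resistor_ohms=None):
    if resistor_ohms is not None:          # dumb-plug path
        return RESISTOR_CODES.get(resistor_ohms)
    for offer in SOURCE_MENU:              # smart sink picks from the menu
        if offer["volts"] == sink_wants_volts and offer["max_amps"] >= sink_wants_amps:
            return (offer["volts"], sink_wants_amps)
    return None                            # no match: stay at the low-current default

print(negotiate(12, 3))                              # -> (12, 3)
print(negotiate(None, None, resistor_ohms=22_000))   # -> (12, 3)
```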

Style of pins

We want low current plugs to be small, and heavy current plugs to be big. I suggest a triangular pin shape, something like what is shown here. In this design, two main pins can only go in one way. The lower triangle is an optional ground — but see notes on grounding below.

Anti-atrocity system with airdropped video cameras

Our world has not rid itself of atrocity and genocide. What can modern high-tech do to help? In Bosnia, we used bombs. In Rwanda, we did next to nothing. In Darfur, very little. Here’s a proposal that seems expensive at first, but is in fact vastly cheaper than the military solutions people have either tried or been afraid to try. It’s the sunlight principle.

First, we would mass-produce a special video recording “phone” using the standard parts and tools of the cell phone industry. It would be small, light, and rechargeable from a car lighter plug, or possibly more slowly through a small solar cell on the back. It would cost a few hundred dollars to make, so that relief forces could airdrop tens or even hundreds of thousands of them over an area where atrocity is taking place. (If they are $400/pop, even 100,000 of them is 40 million dollars, a drop in the bucket compared to the cost of military operations.) They could also be smuggled in by relief workers on a smaller scale, or launched over borders in a pinch. Enough of them so that there are so many that anybody performing an atrocity will have to worry that there is a good chance that somebody hiding in bushes or in a house is recording it, and recording their face. This fear alone would reduce what took place.

Once the devices had recorded a video, they would need to upload it. It seems likely that in these situations the domestic cell system would not be available, or would be shut down to stop video uploads. However, that might not be true, and a version that uses existing cell systems might make sense, and be cheaper because the hardware is off the shelf. It is more likely that some other independent system would be used, based on the same technology but with slightly different protocols.

The anti-atrocity team would send aircraft over the area. These might be manned aircraft (presuming air superiority) or they might be very light, autonomous UAVs of the sort that already are getting cheap in price. These UAVs can be small, and not that high-powered, because they don’t need to do that much transmitting — just a beacon and a few commands and ACKs. The cameras on the ground will do the transmitting. In fact, the UAVs could quite possibly be balloons, again within the budget of aid organizations, not just nations.

Connecting untrusted devices to your computer

My prior post about USB charging hubs in hotel rooms brought up the issue of security, as was the case for my hope for a world with bluetooth keyboards scattered around.

Is it possible to design our computers to let them connect to untrusted devices? Clearly to a degree, in that an ethernet connection is generally always untrusted. But USB was designed to be fully trusted, and that limits it.

Perhaps in the future, an OS can be designed to understand the difference between trusted and untrusted devices connected (wired or wirelessly) to a computer or phone. This might involve a different physical interface, or the same physical interface with a secure protocol by which devices can be identified (and then recognized when plugged in again) and tagged as trusted the first time they are plugged in.
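
A minimal sketch of such a trust registry, assuming each device can present a stable public-key fingerprint when it connects; the file name and storage format are invented:

```python
import json
import os

TRUST_DB = os.path.expanduser("~/.device_trust.json")   # hypothetical store

def _load():
    if os.path.exists(TRUST_DB):
        with open(TRUST_DB) as f:
            return json.load(f)
    return {}

def is_trusted(fingerprint):
    """fingerprint: hash of the public key the device presents on plug-in."""
    return _load().get(fingerprint) == "trusted"

def tag_first_seen(fingerprint, user_says_trusted):
    """Ask the user only once; afterwards the device is recognized."""
    db = _load()
    if fingerprint not in db:
        db[fingerprint] = "trusted" if user_says_trusted else "untrusted"
        with open(TRUST_DB, "w") as f:
            json.dump(db, f)
    return db[fingerprint]
```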

For example, an unknown keyboard is a risky thing to plug in. It could watch you type and remember passwords, or it could simply send fake keys to your computer to get it to install trojan software, completely taking it over. But we might allow an untrusted keyboard to type plain text into our word processors or e-mail applications. However, we would have to switch to the trusted keyboard (which might just be a touch-screen keyboard on a phone or tablet) for anything dangerous, including of course entry of passwords, URLs and commands that go beyond text entry. Would this be tolerable, constantly switching like this, or would we just get used to it? We would want to mount the inferior trusted keyboard very close to the comfy but untrusted one.

A mouse has the same issues. We might allow an untrusted mouse to move the pointer within a text entry window and to go to a set of menus that can’t do anything harmful on the machine, but would it drive us crazy to have to move to a different pointer to move out of the application? Alas, an untrusted mouse can (particularly if it waits until you are not looking) run applications, even bring up the on-screen keyboard most OSs have for the disabled, and then do anything with your computer.

It’s easier to trust output devices, like a printer. In fact, the main danger with plugging in an unknown USB printer is that a really nasty one might pretend to be a keyboard or CD-ROM to infect you. A peripheral bus that allows a device to be only an output device would be safer. Of course, an untrusted printer could still record what you print.

An untrusted screen is a challenge. While mostly safe, one can imagine attacks. An untrusted screen might somehow get you to go to a special web-site. There, it might display something else, perhaps logins for a bank or other site so that it might capture the keys. Attacks here are difficult but not impossible, if I can control what you see. It might be important to have the trusted screen nearby somehow helping you to be sure the untrusted screen is being good. This is a much more involved attack than the simple attacks one can do by pretending to be a keyboard.

An untrusted disk (including a USB thumb drive) is actually today’s biggest risk. People pass around thumb drives all the time, and they can pretend to be auto-run CD-ROMs. In addition, we often copy files from them, and double click on files on them, which is risky. The OS should never allow code to auto-run from an untrusted disk, and should warn if files are double-clicked from them. Of course, even then you are not safe from traps inside the files themselves, even if the disk is just being a disk. Many companies try to establish very tight firewalls, but it’s all for naught if they let people plug external drives and thumbsticks into the computers. Certain types of files (such as photos) are going to be safer than others (like executables and word processor files with macros or scripts.) Support for digital cameras, which often look like drives, is a must, and they can probably be trusted to hand over JPEGs and other image and video files.
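
As a sketch, the OS-side policy for an untrusted disk might look like this; the suffix lists are only my guesses at sensible defaults, not a vetted classification:

```python
import os

SAFER_SUFFIXES = {".jpg", ".jpeg", ".png", ".mp4", ".txt"}  # my guesses
RISKY_SUFFIXES = {".exe", ".bat", ".com", ".scr", ".inf"}

def open_from_untrusted_disk(path, user_confirmed=False):
    """Policy sketch: never auto-run, warn before risky file types."""
    suffix = os.path.splitext(path)[1].lower()
    if suffix in RISKY_SUFFIXES:
        raise PermissionError("executables never run from an untrusted disk")
    if suffix not in SAFER_SUFFIXES and not user_confirmed:
        raise PermissionError("warn: this file type may carry macros or scripts")
    return open(path, "rb")        # a plain read is the only default right
```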

A network connection is one of the things you can safely plug in. After all, a network connection should always be viewed as hostile, even one behind a firewall, so our software is already built to distrust it.

There is a risk in declaring a device trusted, for example, such as your home keyboard. It might be compromised later, and there is not much you can do about that. A common trick today is to install a key-logger in somebody’s keyboard to snoop on them. This is done not just by police but by suspicious spouses and corporate spies. Short of tamper-proof hardware and encryption, this is a difficult problem. For now, that’s too much cost to add to consumer devices.

Still, it sure would be nice to be able to go to a hotel and use their keyboard, mouse and monitor. It might be worth putting up with having to constantly switch back, to get full sized input devices on computers that are trying to get smaller and smaller. But it would also require rewriting of a lot of software, since no program could be allowed to take input from an untrusted device unless it has been modified to understand such a protocol. For example, your e-mail program would need to be modified to declare that a text input box allows untrusted input. This gets harder in web browsing — each web page would need to declare, in its input boxes, whether untrusted input was allowed.

As a starter, however, the computer could come with a simple “clipboard editor” which brings up a box in which one can type and edit with untrusted input devices. Then one could copy the edited text to the OS clipboard and, using the trusted mouse or keyboard, paste it into any application of choice. You could always get back to the special editing window using the untrusted keyboard and mouse, but you would have to use the trusted ones to leave that window. Cumbersome, but not as cumbersome as typing a long e-mail on an iPhone screen.
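
A sketch of that editor’s core rule, with all the I/O and clipboard plumbing omitted: untrusted keys may only append printable text, nothing else.

```python
class ClipboardEditor:
    """The only sink for untrusted keystrokes; text leaves it only via
    the OS clipboard, pasted elsewhere by a trusted device."""

    def __init__(self):
        self.buffer = []

    def untrusted_key(self, ch):
        if ch.isprintable() or ch in "\n\t":  # plain text only: no control
            self.buffer.append(ch)            # keys, shortcuts, or commands

    def text_for_clipboard(self):
        return "".join(self.buffer)           # trusted mouse pastes it

ed = ClipboardEditor()
for ch in "Dear Bob,\nRunning late.":
    ed.untrusted_key(ch)
print(ed.text_for_clipboard())
```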

Linux distributions, focus on a 1gb flashdrive, not on a CD ISO

I’m looking at you Ubuntu.

For some time now, the standard form for distributing a free OS (i.e., Linux, *BSD) has been as a CD-ROM or DVD ISO file. You burn it to a CD, and you can boot and install from that, and also use the disk as a live CD.

There are a variety of pages with instructions on how to convert such an ISO into a bootable flash drive, and scripts and programs for linux and even for windows — for those installing linux on a windows box.

And these are great; I used one to make a bootable Ubuntu stick for my last install. And wow! With zero seek time, it’s such a nicer, faster experience than a CD that it’s silly to use a CD on any system that can boot from a USB drive, and that’s most modern systems.

So I now advocate going the other way. Give me a flash image I can dd to my flash drive, and a tool to turn that into an ISO if I need an ISO.
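
For illustration, the whole “install tool” reduces to a raw block copy. This little Python stand-in for dd assumes you pass it an image file and a device node you are certain is the right one:

```python
import shutil
import sys

def write_image(image_path, device_path, chunk=4 * 1024 * 1024):
    """Raw-copy a distro image onto a flash drive, the equivalent of
    `dd if=image of=/dev/sdX bs=4M`. This destroys what's on the drive!"""
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=chunk)

if __name__ == "__main__":
    write_image(sys.argv[1], sys.argv[2])   # e.g. ubuntu.img /dev/sdb
```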

This has a number of useful advantages:

  • I always want to try the live CD before installing, to make sure the hardware works in the new release. In fact, I even do that before upgrading most of the time.
  • Of course, you don’t have old obsolete CDs lying around.
  • Jumping to 1 gigabyte allows putting more on the distribution, including some important things that are missing these days, such as drivers and mdadm (the RAID control program.)
  • Because flash is a dynamic medium, the install can be set up so that the user can, after copying the base distro, add files to the flash drive, such as important drivers — whatever they choose. An automatic script could even examine a machine and pull down new stuff that’s needed.
  • You get a much faster and easier to use “rescue stick.”
  • It’s easier to carry around.
  • No need for an “alternate install” image, and it is perhaps easier as well to have the upgrader use the USB stick as a cache of packages during upgrades.
  • At this point these things are really cheap. People give them away. You could sell them. This technique would also work for general external USB drives, or even plain old internal hard drives temporarily connected to a new machine being built, if boot from USB is not practical. Great and really fast for eSATA.
  • Using filesystems designed not to wear out flash, the live stick can have a writable partition for /tmp, installed packages and modifications (with some security risk if you run untrusted code.)

The perils of recalls of electronic products

Product recalls have been around for a while. You get a notice in the mail. You either go into a dealer at some point, any point, for service, or you swap the product via the mail. Nicer recalls mail you a new product first and then you send in the old one, or sign a form saying you destroyed it. All well and good. Some recalls are done as “hidden warranties.” They are never announced, but if you go into the dealer with a problem they just fix it for free, long after the regular warranty, or fix it while working on something else. These usually are for items that don’t involve safety or high liability.

Today I had my first run-in with a recall of a connected electronic product. I purchased an “EyeFi” card for my sweetie for Valentine’s Day. This is an SD memory card with a wifi transmitter in it. You take pictures, and it stores them until it encounters a wifi network it knows. It then uploads the photos to your computer or to photo sharing sites. All sounds very nice.

When she put in the card and tried to initialize it, up popped a screen: “This card has a defect. Please give us your address and we’ll mail you a new one, and you can mail back the old one, and we’ll give you a credit in our store for your trouble.” All fine, but the product refused to let her register and use it. We can’t even use the card for a few days to try it out (knowing it may lose photos.) What if I wanted to try it out to decide whether to return it to the store? No luck. I could return it to the store as-is, but that’s work, and I may just get another one on the recall list.

This shows us the new dimension of the electronic recall. The product was remotely disabled to avoid liability for the company. We had no option to say, “Let us use the card until the new one arrives, we agree that it might fail or lose pictures.” For people who already had the card, I don’t know if it shut them down (possibly leaving them with no card) or let them continue with it. You have to agree on the form that you will not use the card any more.

This can really put a damper on a gift, when it refuses to even let you do a test the day you get it.

With electronic recall, all instances of a product can be shut down. This is similar to problems people have had with automatic “upgrades” that actually remove features (like adding more DRM) or which undo the jailbreaking of an iPhone. You don’t own the product any more. Companies are very worried about liability. They will “do the safe thing,” which is shut their product down, rather than let you take a risk. With other recalls, things happened on your schedule. You were even able to just decide not to do the recall. The company showed it had tried its best to convince you to do it, and could feel satisfied for having tried.

This is one of the risks I list in my essays on robocars. If a software flaw is found in a robocar (or any other product with physical risk) there will be pressure to “recall” the software and shut down people’s cars. Perhaps in extreme cases while they are driving on the street! The liability of being able to shut down the cars and not doing so once you are aware of a risk could result in huge punitive damages under the current legal system. So you play it safe.

But if people find their car shutting down because of some very slight risk, they will start wondering if they even want a car that can do that. Or even a memory card. Only with public pressure will we get the right to say, “I will take my own responsibility. You’ve informed me, I will decide when to take the product offline to get it fixed.”

Better forms of Will-Call (phone and photo)

Most of us have had to stand in a long will-call line to pick up tickets. We probably even paid a ticket “service fee” for the privilege. Some places are helping by offering online printable tickets with a bar code. However, that requires networked bar code readers at the gate which can detect things like duplicate bar codes, and venues would seemingly rather have giant lines and many staff than buy such machines.

Can we do it better?

Well, for starters, it would be nice if tickets could be sent not as a printable bar code, but as a message to my cell phone. Perhaps a text message with a coded string, which I could then display to a camera that does OCR on it. Same as a bar code, but I can actually get it while I am on the road and don’t have a printer. And I’m less likely to forget it.

Or let’s go a bit further and have a downloadable ticket application on the phone. The ticket application would use bluetooth and a deliberately short range reader. I would go up to the reader, and push a button on the cell phone, and it would talk over bluetooth with the ticket scanner and authenticate the use of my ticket. The scanner would then show a symbol or colour and my phone would show that symbol/colour to confirm to the gate staff that it was my phone that synced. (Otherwise it might have been the guy in line behind me.) The scanner would be just an ordinary laptop with bluetooth. You might be able to get away with just one (saving the need for networking) because it would be very fast. People would just walk by holding up their phones, and the gatekeeper would look at the screen of the laptop (hidden) and the screen of the phone, and as long as they matched wave through the number of people it shows on the laptop screen.
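
A sketch of the handshake under one possible design: the phone proves it holds a valid ticket by answering a random challenge from the scanner, and both screens derive the same colour from the proof so the gatekeeper can match them. The shared key, ticket format and colour scheme are all invented for illustration:

```python
import hashlib
import hmac
import secrets

VENUE_KEY = b"shared-secret-from-ticket-seller"   # hypothetical
COLOURS = ["red", "green", "blue", "yellow", "purple"]

def phone_redeem(ticket_id, challenge):
    # Phone proves it holds a valid ticket by MACing the scanner's challenge.
    return hmac.new(VENUE_KEY, ticket_id + challenge, hashlib.sha256).digest()

def scanner_check(ticket_id, redeemed):
    challenge = secrets.token_bytes(16)
    proof = phone_redeem(ticket_id, challenge)    # over bluetooth in reality
    expected = hmac.new(VENUE_KEY, ticket_id + challenge, hashlib.sha256).digest()
    if hmac.compare_digest(proof, expected) and ticket_id not in redeemed:
        redeemed.add(ticket_id)                   # no double use of one ticket
        return COLOURS[proof[0] % len(COLOURS)]   # shown on BOTH screens
    return None

print(scanner_check(b"TICKET-0042", set()))       # e.g. 'blue'
```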

Alternately you could put the bluetooth antenna in a little faraday box to be sure it doesn’t talk to any other phone but the one in the box. Put phone in box, light goes on, take phone out and proceed.

Photo will-call

One reason many will-calls are slow is they ask you to show ID, often your photo-ID or the credit card used to purchase the item. But here’s an interesting idea. When I purchase the ticket online, let me offer an image file with a photo. It could be my photo, or it could be the photo of the person I am buying the tickets for. It could be 3 photos if any one of those 3 people can pick up the ticket. You do not need to provide your real name, just the photo. The will call system would then inkjet print the photos on the outside of the envelope containing your tickets.

You do need some form of name or code, so the agent can find the envelope, or type the name in the computer to see the records. When the agent gets the envelope, identification will be easy. Look at the photo on the envelope, and see if it’s the person at the ticket window. If so, hand it over, and you’re done! No need to get out cards or hand them back and forth.

A great company to implement this would be PayPal. I could pay with PayPal, not revealing my name (just an e-mail address), and PayPal could have a photo stored and forward it to the ticket seller if I check a box to do this. The ticket seller never knows my name, just my picture. You may think it’s scary for people to get your picture, but in fact it’s scarier to give them your name. They can collect and share data about you under your name. Your picture is not very useful for this, at least not yet, and if you like you can use one of many different pictures each time — you can’t keep using different names if you need to show ID.

This could still be done with credit cards. Many credit cards offer a “virtual credit card number” system which will generate one-time card numbers for online transactions. They could set these up so you don’t have to offer a real name or address, just the photo. When picking up the item, all you need is your face.

This doesn’t work if it’s an over-21 venue, alas. They still want photo ID, but they only need to look at it, they don’t have to record the name.

It would be more interesting if one could design a system so that people can find their own ticket envelopes. The guard would let you into the room with the ticket envelopes, and let you find yours, and then you can leave by showing your face is on the envelope. The problem is, what if you also palmed somebody else’s envelope and then claimed yours, or said you couldn’t find yours? That needs a pretty watchful guard which doesn’t really save on staff as we’re hoping. It might be possible to have the tickets in a series of closed boxes. You know your box number (it was given to you, or you selected it in advance) so you get your box and bring it to the gate person, who opens it and pulls out your ticket for you, confirming your face. Then the box is closed and returned. Make opening the boxes very noisy.

I also thought that for Burning Man, which apparently had a will-call problem this year, you could just require all people fetching their ticket be naked. For those not willing, they could do regular will-call where the ticket agent finds the envelope. :-)

I’ve noted before that, absent the need of the TSA to know all our names, this is how boarding passes should work. You buy a ticket, provide a photo of the person who is to fly, and the gate agent just looks to see if the face on the screen is the person flying, no need to get out ID, or tell the airline your name.

Making RAID easier

Hard disks fail. If you prepared properly, you have a backup, or you swap out disks when they first start reporting problems. If you prepare really well you have offsite backup (which is getting easier and easier to do over the internet.)

One way to protect yourself from disk failures is RAID, especially RAID-5. With RAID, several disks act together as one. The simplest protecting RAID, RAID-1, just has 2 disks which work in parallel, known as mirroring. Everything you write is copied to both. If one fails, you still have the other, with all your data. It’s good, but twice as expensive.

RAID-5 is cleverer. It uses 3 or more disks, and uses error correction techniques so that you can store, for example, 2 disks worth of data on 3 disks. So it’s only 50% more expensive. RAID-5 can be done with many more disks — for example, with 5 disks you get 4 disks worth of data, and it’s only 25% more expensive. However, having 5 disks is beyond most systems and has its own secret risk: if 2 of the 5 disks fail at once — and this does happen — you lose all 4 disks worth of data, not just 2 disks worth. (RAID-6, for really large arrays of disks, survives 2 failures but not 3.)
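
The trick is plain XOR parity, and a toy example makes the arithmetic clear: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt as the XOR of everything that survives.

```python
def xor_blocks(*blocks):
    """XOR equal-sized blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three-disk RAID-5: two data blocks plus their XOR parity.
d1, d2 = b"hello world!", b"more user da"
parity = xor_blocks(d1, d2)

# If the disk holding d1 dies, its data is the XOR of the survivors.
recovered = xor_blocks(d2, parity)
assert recovered == d1
```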

Now most people who put in RAID do it for more than data protection. After all, good sysadmins are doing regular backups. They do it because with RAID, the computer doesn’t even stop when a disk fails. You connect up a new disk live to the computer (which you can do with some systems) and it is recreated from the working disks, and you never miss a beat. This is pretty important with a major server.

But RAID has value to those who are not in the 99.99% uptime community: those who are not good at doing manual backups, but who want to be protected from the inevitable disk failures. Today it is hard to set up, or expensive, or both. There are some external boxes like the ReadyNAS that make it reasonably easy for external disks, but they don’t have the bandwidth to be your full-time disks.

RAID-5 on old IDE systems was hard; they usually could truly talk to only 2 disks at a time. The new SATA bus is much better, as many motherboards have 4 connectors, though soon one of those will be claimed by a Blu-ray drive.

A near-ZUI encrypted disk, for protection from Customs

Recently we at the EFF have been trying to fight new rulings about the power of U.S. customs. Right now, it’s been ruled they can search your laptop, taking a complete copy of your drive, even if they don’t have the normally required reasons to suspect you of a crime. The simple fact that you’re crossing the border gives them extraordinary power.

We would like to see that changed, but until then, what can be done? You can use various software to encrypt your hard drive — there are free packages like TrueCrypt, and many laptops come with this as an option — but most people find having to enter a password every time they boot to be a pain. And customs can threaten to detain you until you give them the password.

There are some tricks you can pull, like having a special inner “hidden volume” with a second password that they don’t even know to ask about. You can put your most private data there. But again, people don’t use systems with complex UIs unless they feel really motivated.

What we need is a system that is effectively transparent most of the time. However, you could take special actions when going through customs or otherwise having your laptop be out of your control.
