Technology

I'm loving the Shweeb concept

There was a bit of a stir when Google announced last week that one of the winners of their 10^100 contest would be Shweeb, a pedal-powered monorail from New Zealand that has elements of PRT. Google will invest $1M in Shweeb to help them build a small system, and if it makes any money on the investment, the proceeds will go into transportation-related charities.

While I had a preference that Google fund a virtual world for developing and racing robocars, I have come to love a number of elements of Shweeb, though it's not robocars and the PRT community doesn't seem to think it's PRT. I think it is PRT, in that it's personal, public and, according to the company, relatively rapid through the use of offline stations and non-stop point-to-point trips. PRT is an idea from the sixties that makes sense, but it has spent almost 50 years trying to get transit planners to believe in it and build it. A micro-PRT has opened as a Heathrow parking shuttle, but in general transit administrators simply aren't early adopters. They don't innovate.

What impresses me about Shweeb is its tremendous simplicity. While it's unlikely to replace our cars or transit systems, it is simple enough that it can actually be built. Once built, it can serve as a testbed for many of PRT's concepts, and go through incremental improvements.

Using the phone as its own mouse, and trusting the keyboard

I've written a bunch about my desire to be able to connect an untrusted input device to my computer or phone, so that hotels and other locations could offer both a connection to the in-room HDTV as a monitor and a usable keyboard. This would let one travel with small devices like netbooks, tablet computers and smart phones yet still use them for serious typing and UI work while in the hotel or guest area.

I've proposed that the connection from device to monitor be wireless. This would make it not very good for full-screen video, but it would be fine for web surfing, email and the like. It would also allow us to use the phone as its own mouse, either by putting a deliberate mouse-style sensor on the back, or by using the camera on the back of the phone as a reader of the surface. (A number of interesting experiments have shown this is quite doable if the camera can focus close and can get an LED to light up the surface.) This provides a mouse which is more inherently trustable, and buttons on the phone (or on its touchscreen) can be the mouse buttons. This doesn't work for tablets and netbooks -- for them you must bring your own mini-mouse or use the device as a touchpad. I am still a fan of the "trackpoint" nubbins, and they can also make very small but usable mice.
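
This is just the sort of thing one can sketch with off-the-shelf vision code. Here's a rough illustration, assuming an OpenCV-style pipeline (the camera index and gain are made up): each new frame from the downward-facing camera is compared with the previous one, and the measured shift becomes cursor motion.

```python
import cv2
import numpy as np

def camera_mouse_deltas(camera_index=0, gain=2.0):
    """Yield (dx, dy) cursor deltas estimated from frame-to-frame shift of a
    downward-facing phone camera. Illustrative only; a real implementation
    would need close focus, surface illumination and better filtering."""
    cap = cv2.VideoCapture(camera_index)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # phaseCorrelate() wants single-channel floating point images
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            (dx, dy), _response = cv2.phaseCorrelate(prev, gray)
            yield gain * dx, gain * dy   # scale image shift into cursor motion
        prev = gray
    cap.release()
```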

The keyboard issue is still tough. While it would seem a wired connection is more secure, not all devices will be capable of such a connection, while almost all will do Bluetooth. Wired USB connections can pretend to be all sorts of devices, including CD-ROMs with autorun CDs in them. So I propose the creation of a new Bluetooth HID profile for untrusted keyboards.

When connecting to an untrusted keyboard, the system would need to identify any privileged or dangerous operations. If such operations (like software downloads, destructive commands, etc.) come from the keyboard, the system would insist on confirmation from the main device's touchscreen or keyboard. So while you would be able to type on the keyboard to fill text boxes or write documents and emails, other things would be better done with the mouse, or they would require a confirmation on the screen. It turns out this is how many people use computers these days anyway. We command-line people would feel a bit burdened, but could create shells that are good at spotting commands that might need confirmation.
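
To make the idea concrete, here is a small sketch of the policy layer (nothing here is a real Bluetooth API; the prefix list and the confirmation call are invented for illustration): text from the untrusted keyboard flows straight through, while anything that looks privileged is held for confirmation on the device's own screen.

```python
# Hypothetical policy layer for an "untrusted keyboard" HID profile.
# confirm_on_touchscreen() stands in for whatever trusted-UI prompt
# the device actually provides.

DANGEROUS_PREFIXES = ("rm ", "sudo ", "mkfs", "dd ", "curl ", "wget ")

def confirm_on_touchscreen(action: str) -> bool:
    """Placeholder: ask the user on the device's own (trusted) screen."""
    raise NotImplementedError

def handle_command(line: str, from_untrusted_keyboard: bool) -> bool:
    """Return True if the command may run."""
    if not from_untrusted_keyboard:
        return True
    if line.strip().startswith(DANGEROUS_PREFIXES):
        # Privileged or destructive: require confirmation on the trusted screen.
        return confirm_on_touchscreen(f"Run '{line.strip()}'?")
    return True   # plain text entry (email, documents) passes straight through
```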

An open source licence for FOSS platforms only

Here's a suggestion that will surely rankle some in the free software/GPL community, but which might be of real benefit to the overall success of such systems.

What I propose is a GPL-like licence under which source code could be published, but which effectively forbids one thing: work to make it run on proprietary operating systems, in particular Windows and MacOS.

The goal would be to allow the developers of popular programs for Windows, in particular, to release their code and allow the FOSS community to generate free versions which can run on Linux, *BSD and the like. Such companies would do this after deciding that there isn't enough market on those platforms to justify a commercial venture in the area. Rather than, as Richard Stallman would say, "hoarding" their code, they could release it in this fashion. However, they would not fear they were doing much damage to their market on Windows. They would have to accept that they were disclosing their source code to their competitors and customers, and some companies fear that and will never do this. But some would, and in fact some already have, even without extra licence protection.

An alternate step would be to release it specifically so the community can make sure the program runs under WINE, the Windows API platform for Linux and others. Many Windows programs already run under WINE, but almost all of them have little quirks and problems. If the programs are really popular, the WINE team patches WINE to deal with them, but it would be much nicer if the real program just got better behaved. In this case, the licence would have some rather unusual terms, in that people would have to produce versions and EXEs that run only under WINE -- they would not run on native Windows. They could do this by inserting calls that check whether they are running on WINE and abort if not, or they could do something more complex like make use of some specific APIs added to WINE that are not found in Windows. Of course, coders could readily remove these changes and make binaries that run on Windows natively, but coders can also just pirate the raw Windows binaries -- both would be violations of copyright, and the latter is probably easier to do.
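
As a concrete illustration of the "check for WINE and abort" idea, here is a sketch using Python's ctypes (assuming the program ships as a Windows build and so runs under either Windows or Wine); it relies on the commonly cited trick of looking for the wine_get_version export in ntdll.

```python
import ctypes

def running_under_wine() -> bool:
    """Detect Wine by looking for the wine_get_version export in ntdll.
    This is the well-known detection trick; a shipping program would
    probably use something harder to patch out."""
    try:
        ntdll = ctypes.WinDLL("ntdll")
        ntdll.wine_get_version  # raises AttributeError on real Windows
        return True
    except (AttributeError, OSError):
        return False

if __name__ == "__main__":
    if not running_under_wine():
        raise SystemExit("This build is licensed to run only under WINE.")
    print("Running under WINE, continuing...")
```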

Towards frameless (clockless) video

Recently I wrote about the desire to provide power in every sort of cable, in particular the video cable. And while we'll be using the existing video cables (VGA and DVI/HDMI) for some time to come, I think it's time to investigate new thinking in sending video to monitors. The video cable has generally been the highest-bandwidth cable going out of a computer, though the fairly rare 10 gigabit ethernet is around the speed of HDMI 1.3 and DisplayPort, and 100gb ethernet will be yet faster.
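
A quick back-of-the-envelope calculation shows why (assuming uncompressed 24-bit colour and ignoring blanking overhead):

```python
def video_bandwidth_gbps(width, height, fps, bits_per_pixel=24):
    """Raw pixel data rate, ignoring blanking intervals and encoding overhead."""
    return width * height * fps * bits_per_pixel / 1e9

print(video_bandwidth_gbps(1920, 1080, 60))   # ~3.0 Gbps for 1080p60
print(video_bandwidth_gbps(2560, 1600, 60))   # ~5.9 Gbps for a 30" panel
```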

Software recalls and quick fixes to safety-critical computers in robocars

While giving a talk on robocars to a Stanford class on automotive innovation on Wednesday, I outlined the growing problem of software recalls and how they might affect cars. If a company discovers a safety problem in a car's software, it may be advised by its lawyers to shut down or cripple the cars by remote command until a fix is available. Sebastian Thrun, who had invited me to address this class, felt this could be dealt with through the ability to remotely patch the software.

Every connector, including video, should send power both ways

I've written a lot about how to do better power connectors for all our devices, and the quest for universal DC and AC power plugs that negotiate the power delivered with a digital protocol.

While I've mostly been interested in some way of standardizing power plugs (at least within a given current range, and possibly even beyond), today I was thinking we might want to go further and make it possible for almost every connector we use to also deliver or receive power.

Serve the multi-monitor market better with thin or removable bezels

A serious proportion of the computer users I know these days have gone multi-monitor. While I strongly recommend to everybody the 30" monitor which I have (the Dell 3007WFP and cousins, or Apple's), at $1000 it's not the most cost-effective way to get a lot of screen real estate. Today 24" 1080p monitors are down to $200, and flat panels don't take up much space, so it makes a lot of sense to have two monitors or more.

A standard OS mini-daemon, saving power and memory

On every system we use today (except the iPhone) a lot of programs want to be daemons -- background tasks that sit around to wait for events or perform certain regular operations. On Windows it seems things are the worst, which is why I wrote before about how Windows needs a master daemon. A master daemon is a single background process that uses a scripting language to perform most of the daemon functions that other programs are asking for. A master daemon will wait for events and fire off more full-fledged processes when they happen.
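
A minimal sketch of what the core loop of such a master daemon could look like (the task names, paths and commands are invented for illustration): cheap scripted checks run in one resident process, and the heavyweight programs are only spawned when a check fires.

```python
import os
import subprocess
import time

# Each entry pairs a cheap condition with the command to launch when it
# fires; the full program only runs while it is actually needed.
TASKS = [
    {"name": "apply-updates",
     "check": lambda: os.path.exists("/var/spool/updates/ready"),
     "command": ["update-applier"]},
    {"name": "rotate-logs",
     "check": lambda: os.path.exists("/var/log/app.log")
               and os.path.getsize("/var/log/app.log") > 10_000_000,
     "command": ["log-rotator", "/var/log/app.log"]},
]

def master_daemon_loop(poll_seconds=60):
    """One resident process polls conditions for everybody and spawns
    the real programs on demand."""
    running = {}
    while True:
        for task in TASKS:
            proc = running.get(task["name"])
            already_running = proc is not None and proc.poll() is None
            if not already_running and task["check"]():
                running[task["name"]] = subprocess.Popen(task["command"])
        time.sleep(poll_seconds)
```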

Design for a universal plug

I've written before about both the desire for universal DC power and, more simply, universal laptop power at meeting room desks. This week saw the announcement that all the companies selling cell phones in Europe will standardize on a single charging connector, based on micro-USB. (A large number of devices today use the now-deprecated mini-USB plug, and it was close to becoming a standard by default.) As most devices already include a USB plug for data, this is not a big leap, though it turned out a number of devices would not charge from other people's chargers, either from stupidity or malice. (My Motorola RAZR will not charge from a generic USB charger or even an ordinary PC. It needs a special charger with the data pins shorted, or, if it plugs into a PC, it insists on a dialog with the Motorola phone tools driver before it will accept a charge. Many suspect this was just to sell chargers and the software.) The new agreement is essentially just a vow to make sure everybody's chargers work with everybody's devices. It's actually a win for the vendors, who can now not bother to ship a charger with the phone, presuming you have one or will buy one. It is not required that the phone have the plug -- supplying an adapter is sufficient, as Apple is likely to do. MP3 player vendors have not yet signed on.

USB isn't a great choice since it officially delivers only 500mA at 5 volts (2.5 watts), though many devices are putting 1 amp through it. That's not enough to quickly charge or even power some devices. USB 3.0 officially raised the limit to 900mA, or 4.5 watts.
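
For a sense of scale, here's a rough charge-time comparison, assuming a typical phone battery of about 5 watt-hours and ignoring conversion losses:

```python
def hours_to_charge(battery_wh, volts, amps):
    """Idealized charge time, ignoring conversion losses and charge taper."""
    return battery_wh / (volts * amps)

print(hours_to_charge(5.0, 5.0, 0.5))  # USB 2.0 at 500mA: ~2 hours
print(hours_to_charge(5.0, 5.0, 0.9))  # USB 3.0 at 900mA: ~1.1 hours
```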

USB is a data connector with some power provided, which has been repurposed for charging and power. What about a design for a universal plug aimed at doing power, with data as the secondary goal? Not that it would suck at data, since it's now pretty easy to feed a gigabit over 2 twisted pairs with cheap circuits. Let's look at the constraints.

Smart Power

The world's new power connector should be smart. It should offer 5 volts at low current to start, to power the electronics that will negotiate how much voltage and current will actually go through the connector. It should also support dumb plugs, which offer only a resistance value on the data pins, with each resistance value specifying a commonly used voltage and current level.

Real current would never flow until connection (and ground if needed) has been assured. As such, there is minimal risk of arcing or electric shock through the plug. The source can offer the sorts of power it can deliver (AC, DC, what voltages, what currents) and the sink (power using device) can pick what it wants from that menu. Sinks should be liberal in what they take though (as they all have become of late) so they can be plugged into existing dumb outlets through simple adapters.
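
Here's a sketch of what that negotiation might look like in code (the resistance table and the voltage/current menu are invented for illustration): the source advertises what it can deliver, the sink picks from the menu, and only then does real current flow.

```python
from dataclasses import dataclass

@dataclass
class PowerOffer:
    kind: str       # "DC" or "AC"
    volts: float
    max_amps: float

# Dumb plugs: a resistor on the data pins maps to a fixed offer.
# The resistance values (ohms) here are invented for illustration.
DUMB_PLUG_TABLE = {
    1_000: PowerOffer("DC", 5.0, 1.0),
    4_700: PowerOffer("DC", 12.0, 3.0),
    10_000: PowerOffer("DC", 19.0, 4.7),
}

def negotiate(source_menu, sink_request):
    """Sink picks the best match from the source's menu; real current
    only flows after an offer has been accepted."""
    candidates = [o for o in source_menu
                  if o.kind == sink_request.kind
                  and o.volts == sink_request.volts
                  and o.max_amps >= sink_request.max_amps]
    return max(candidates, key=lambda o: o.max_amps, default=None)

# Example: a laptop asking for 19V/3A from a source offering 5V and 19V.
menu = [PowerOffer("DC", 5.0, 2.0), PowerOffer("DC", 19.0, 4.7)]
accepted = negotiate(menu, PowerOffer("DC", 19.0, 3.0))
```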

Style of pins

We want low current plugs to be small, and heavy current plugs to be big. I suggest a triangular pin shape, something like what is shown here. In this design, two main pins can only go in one way. The lower triangle is an optional ground -- but see notes on grounding below.

Anti-atrocity system with airdropped video cameras

Our world has not rid itself of atrocity and genocide. What can modern high-tech do to help? In Bosnia, we used bombs. In Rwanda, we did next to nothing. In Darfur, very little. Here's a proposal that seems expensive at first, but is in fact vastly cheaper than the military solutions people have either tried or been afraid to try. It's the sunlight principle.

First, we would mass-produce a special video recording "phone" using the standard parts and tools of the cell phone industry. It would be small, light, and rechargeable from a car lighter plug, or possibly more slowly through a small solar cell on the back. It would cost a few hundred dollars to make, so that relief forces could airdrop tens or even hundreds of thousands of them over an area where atrocity is taking place. (If they are $400 a pop, even 100,000 of them is 40 million dollars, a drop in the bucket compared to the cost of military operations.) They could also be smuggled in by relief workers on a smaller scale, or launched over borders in a pinch. There would be so many of them that anybody performing an atrocity would have to worry there is a good chance somebody hiding in the bushes or in a house is recording it, and recording their face. This fear alone would reduce what took place.

Once the devices had recorded a video, they would need to upload it. It seems likely that in these situations the domestic cell system would not be available, or would be shut down to stop video uploads. However, that might not be true, and a version that uses existing cell systems might make sense, and be cheaper because the hardware is off the shelf. It is more likely that some other independent system would be used, based on the same technology but with slightly different protocols.

The anti-atrocity team would send aircraft over the area. These might be manned aircraft (presuming air superiority) or they might be very light, autonomous UAVs of the sort that are already getting cheap. These UAVs can be small and not that high-powered, because they don't need to do much transmitting -- just a beacon and a few commands and ACKs. The cameras on the ground will do the transmitting. In fact, the UAVs could quite possibly be balloons, again within the budget of aid organizations, not just nations.
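
In code terms, the ground unit's logic could be as simple as the sketch below (the radio functions are stubs, invented for illustration): stay quiet to save battery and avoid detection, and only burst the stored clips upward when a relay's beacon is heard.

```python
import time

def listen_for_beacon(timeout_s):
    """Return True if an overhead relay's beacon was heard (radio stub)."""
    raise NotImplementedError

def upload_clips(pending_clips):
    """Transmit stored clips; return the ones that were acknowledged (radio stub)."""
    raise NotImplementedError

def ground_unit_loop(pending_clips):
    """Stay mostly idle; transmit only when something flies overhead."""
    while True:
        if pending_clips and listen_for_beacon(timeout_s=60):
            acked = upload_clips(pending_clips)
            pending_clips = [c for c in pending_clips if c not in acked]
        time.sleep(10)   # conserve battery between listening windows
```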

Connecting untrusted devices to your computer

My prior post about USB charging hubs in hotel rooms brought up the issue of security, as was the case for my hope for a world with bluetooth keyboards scattered around.

Is it possible to design our computers to let them connect to untrusted devices? Clearly to a degree, in that an ethernet connection is generally always untrusted. But USB was designed to be fully trusted, and that limits it.

Linux distributions, focus on a 1gb flashdrive, not on a CD ISO

I'm looking at you, Ubuntu.

For some time now, the standard form for distributing a free OS (ie. Linux, *BSD) has been as a CD-ROM or DVD ISO file. You burn it to a CD, and you can boot and install from that, and also use the disk as a live CD.

There are a variety of pages with instructions on how to convert such an ISO into a bootable flash drive, and scripts and programs for Linux and even for Windows -- for those installing Linux on a Windows box.
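
For modern hybrid images the conversion is conceptually just a raw block copy of the ISO onto the drive; older images genuinely needed those conversion scripts. A sketch of the copy (the device path is a placeholder, and writing to the wrong device destroys its contents):

```python
import shutil
import sys

def write_iso_to_drive(iso_path, device_path, chunk_mb=4):
    """Raw block copy of a hybrid ISO onto a flash drive -- roughly what
    'dd' or the various converter scripts end up doing."""
    with open(iso_path, "rb") as src, open(device_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=chunk_mb * 1024 * 1024)
        dst.flush()

if __name__ == "__main__":
    # e.g. python write_iso.py ubuntu.iso /dev/sdX  (run with care, as root)
    write_iso_to_drive(sys.argv[1], sys.argv[2])
```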

The perils of recalls of electronic products

Product recalls have been around for a while. You get a notice in the mail. You either go into a dealer at some point, any point, for service, or you swap the product via the mail. Nicer recalls mail you a new product first and then you send in the old one, or sign a form saying you destroyed it. All well and good. Some recalls are done as "hidden warranties." They are never announced, but if you go into the dealer with a problem they just fix it for free, long after the regular warranty, or fix it while working on something else.

Better forms of Will-Call (phone and photo)

Most of us have had to stand in a long will-call line to pick up tickets. We probably even paid a ticket "service fee" for the privilege. Some places are helping by offering online printable tickets with a bar code. However, that requires networked bar code readers at the gate which can detect things like duplicate bar codes, and venues seem to prefer giant lines and lots of staff to getting such machines.

Can we do it better?

Making RAID easier

Hard disks fail. If you prepared properly, you have a backup, or you swap out disks when they first start reporting problems. If you prepare really well you have offsite backup (which is getting easier and easier to do over the internet.)

One way to protect yourself from disk failures is RAID, especially RAID-5. With RAID, several disks act together as one. The simplest protecting RAID, RAID-1, just has 2 disks which work in parallel, known as mirroring. Everything you write is copied to both. If one fails, you still have the other, with all your data. It's good, but twice as expensive.

RAID-5 is cleverer. It uses 3 or more disks, and uses error correction techniques so that you can store, for example, 2 disks worth of data on 3 disks. So it's only 50% more expensive. RAID-5 can be done with many more disks -- for example, with 5 disks you get 4 disks worth of data, and it's only 25% more expensive. However, having 5 disks is beyond most systems and has its own secret risk: if 2 of the 5 disks fail at once -- and this does happen -- you lose all 4 disks worth of data, not just 2 disks worth. (RAID-6, used for really large arrays of disks, survives 2 failures but not 3.)
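
The error-correction trick is easy to see with a toy XOR-parity example, which is the idea behind RAID-5 reduced to three "disks" of a few bytes each:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

disk1 = b"\x10\x20\x30\x40"
disk2 = b"\x0a\x0b\x0c\x0d"
parity = xor_blocks(disk1, disk2)      # stored on the third disk

# If disk2 dies, its contents can be rebuilt from the survivors:
rebuilt = xor_blocks(disk1, parity)
assert rebuilt == disk2
```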

Now most people who put in RAID do it for more than data protection. After all, good sysadmins are doing regular backups. They do it because with RAID, the computer doesn't even stop when a disk fails. You connect up a new disk live to the computer (which you can do with some systems) and it is recreated from the working disks, and you never miss a beat. This is pretty important with a major server.

But RAID has value to those who are not in the 99.99% uptime community. Those who are not good at doing manual backups, but who want to be protected from the inevitable disk failures. Today it is hard to set up, or expensive, or both. There are some external boxes like the "readynas" that make it reasonably easy for external disks, but they don't have the bandwidth to be your full time disks.

RAID-5 on old IDE systems was hard; the controllers usually could truly talk to only 2 disks at a time. The new SATA bus is much better, as many motherboards have 4 connectors, though soon one will be required by Blu-ray drives.

A near-ZUI encrypted disk, for protection from Customs

Recently we at the EFF have been trying to fight new rulings about the power of U.S. customs. Right now, it's been ruled they can search your laptop, taking a complete copy of your drive, even if they don't have the normally required reasons to suspect you of a crime. The simple fact that you're crossing the border gives them extraordinary power.

We would like to see that changed, but until then, what can be done? You can use various software to encrypt your hard drive -- there are free packages like TrueCrypt, and many laptops come with this as an option -- but most people find having to enter a password every time they boot to be a pain. And customs can threaten to detain you until you give them the password.

There are some tricks you can pull, like having a special inner-drive with a second password that they don't even know to ask about. You can put your most private data there. But again, people don't use systems with complex UIs unless they feel really motivated.

What we need is a system that is effectively transparent most of the time. However, you could take special actions when going through customs or otherwise having your laptop be out of your control.

Windows needs a master daemon

It seems that half the programs I try to install under Windows want to have a "daemon" process with them, which is to say a portion of the program that is always running and which gets a little task-tray icon from which it can be controlled. Usually they also want to be run at boot time. In Windows parlance this is called a service.

OCR Page numbers and detect double feed

I'm scanning my documents on an ADF document scanner now, and it's largely pretty impressive, but I'm surprised at some things the system won't do.

Double page feeding is the bane of document scanning. To prevent it, many scanners offer methods of double feed detection, including ultrasonic detection of double thickness and detection when one page is suddenly longer than all the others (because it's really two.)
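
What I'd like is for the software to also use the page numbers it OCRs as a cross-check: if the recognized numbers skip, a sheet probably went through with another stuck to it. A rough sketch of that check, assuming the OCR step already yields a page number (or nothing) per scanned sheet:

```python
def find_suspect_double_feeds(page_numbers):
    """Given OCR'd page numbers per scanned sheet (None where no number was
    read), report positions where the sequence jumps by more than 1, which
    usually means two pages went through together."""
    suspects = []
    last = None
    for i, num in enumerate(page_numbers):
        if num is None:
            continue
        if last is not None and num - last > 1:
            suspects.append((i, last, num))   # sheet index and the gap's endpoints
        last = num
    return suspects

# Example: page 7 never appears -> the sheet carrying page 8 likely double-fed.
print(find_suspect_double_feeds([4, 5, 6, 8, 9]))   # [(3, 6, 8)]
```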
