A universal Web-USB plugin for all browsers

As our devices get more and more complex, configuring them gets harder and harder. And for members of the non-tech-savvy public, close to impossible.

Here’s an answer: develop a simple browser plug-in for all platforms that can connect a USB peripheral to a TCP socket back to the server where the plug-in page came from. (This is how Flash and Java applets work; in fact, this could be added to Flash or Java.)

Once activated, the remote server would be able to talk to the device as if it were its USB master, sending and receiving data and issuing other USB protocol commands. And that means it could do any configuration or setup you might like to do, under the control of a web application that has access to the full UI toolset that web applications have. You could upload new firmware into devices that can accept it, re-flash configuration, read configuration — do anything the host computer can do.
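
To make the bridging idea concrete, here is a minimal host-side sketch in Python using the pyusb library and a plain TCP socket. The vendor and product IDs, the endpoint addresses, and the 64-byte packet size are placeholder assumptions, and a real plugin would of course insert the user-confirmation step described below before opening the device.

```python
# Sketch only: bridge a USB device to the TCP server the page came from.
# Vendor/product IDs, endpoints, and packet sizes are placeholders.
import socket
import usb.core  # pyusb

VENDOR_ID, PRODUCT_ID = 0x1234, 0x5678      # hypothetical device
EP_OUT, EP_IN = 0x01, 0x81                  # hypothetical bulk endpoints

def bridge(server_host, server_port):
    dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
    if dev is None:
        raise RuntimeError("device not plugged in")
    dev.set_configuration()

    with socket.create_connection((server_host, server_port)) as sock:
        while True:
            # Forward commands from the configuration server to the device...
            cmd = sock.recv(64)
            if not cmd:
                break
            dev.write(EP_OUT, cmd)
            # ...and relay the device's reply back to the server.
            reply = dev.read(EP_IN, 64, timeout=1000)
            sock.sendall(bytes(reply))
```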

As a result, for any new electronics device you buy — camera, TV remote control, clock, TV, DVD player, digital picture frame, phone, toy, car, appliance etc. — you could now set it up with a nice rich web interface, or somebody else could help you set it up. It would work on any computer — Mac, Linux, Windows and more, and the web UIs would improve and be reprogrammed with time. No software install needed, other than the plug-in. Technicians could remotely diagnose problems and fix them in just about anything.

So there is of course one big question — security. Of course, the plug-in would never give a remote server access to a USB device without providing a special, not-in-browser prompt for the user to confirm the grant of access, with appropriate warnings. Certain devices might be very hard to give access to, such as USB hard drives, the mouse, the keyboard etc. In fact, any device which has a driver in the OS and is mounted by it would need extra confirmation (though that would make it harder to have devices that effectively look like standard USB flash drives into which basic config is simply read and written.)

One simple security technique would be to insist the device be hot-plugged during the session. That is, the plugin would only talk to USB devices that were not plugged in when the page was loaded, and were then plugged in while the app was running. The plugin would not allow constant reloading of the page to get around this.

For added security, smarter devices could insist on an authentication protocol with the server. Thus the USB device would send a challenge, which the server would sign/hash with its secret key, and the USB device could then check that using a public key to confirm it’s talking to its manufacturer. (This however stops third parties from making better configuration tools, so it has its downsides.) It could also be arranged that only devices that exhibit a standard tag in their identification would allow remote control, so standard computer peripherals would not allow this. And the plugin could even maintain and update a list of vendors and items which do or don’t want to allow this.
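
Here is a rough sketch of that challenge-response exchange, assuming an ECDSA keypair whose public half is baked into the device firmware. The key type, challenge size, and variable names are illustrative assumptions, not any real device's protocol.

```python
# Sketch of the device/manufacturer authentication idea.
# The keypair and the 16-byte challenge are illustrative assumptions.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# In practice the manufacturer holds the private key on its server;
# the matching public key is baked into the device firmware.
manufacturer_key = ec.generate_private_key(ec.SECP256R1())
device_public_key = manufacturer_key.public_key()

# Device side: issue a random challenge.
challenge = os.urandom(16)

# Server side: sign the challenge with the secret key.
signature = manufacturer_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Device side: verify before accepting any configuration commands.
try:
    device_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("server authenticated; accept configuration")
except InvalidSignature:
    print("reject: not signed by the manufacturer")
```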

There are probably some other security issues to resolve. However, should we resolve them, it could result in a revolution in configuring consumer electronics, as finally everything would get a big-screen, full mouse-and-keyboard web UI. (Non-portable devices like cars and TVs would require a wireless laptop to make this work, but many people have that. Alternatively, they could use Bluetooth, and the plugin could have a similar mode for working with paired Bluetooth devices. Again, doing nothing without a strong user confirmation.)

This works because basic USB chips are very cheap now. Adding a small bit of flash to your electronics device and a mini-USB socket that can read and write the flash would add only a small amount to the cost of most items — nothing to many of them, as they already have it. Whatever new toy you buy, you could set it up on the web, and if the company provides a high level of service, you could speak to a tech support agent who could help you set it up right there.

Predictive traction control

Yesterday I wrote about predictive suspension, to look ahead for bumps on the road and ready the suspension to compensate. There should be more we can learn by looking at the surface of the road ahead, or perhaps touching it, or perhaps getting telemetry from other cars.

It would be worthwhile to be able to estimate just how much traction there is on the road surfaces the tires will shortly be moving over. Traction can be estimated from the roughness of dry surfaces, but is most interesting for wet and frozen surfaces. It seems likely that remote sensing can tell the temperature of a surface, and whether it is wet or not. Wet ice is more slippery than colder ice. It would be interesting to research techniques for estimating traction well in front of the car. This could of course be used to slow the car down to the point that it can stop more easily, and to increase gaps between cars. However, it might do much more.

A truly accurate traction measurement could come by actually moving wheels at slightly different speeds. Perhaps just speeding up wheels at two opposite corners (very slightly) or slowing them down could measure traction. Or perhaps it would make more sense to have a small probe wheel at the front of the car that is always measuring traction in icy conditions. Of course, anything learned by the front wheels about traction could be used by the rear wheels.

For example, even today an anti-lock brake system could, knowing the speed of the vehicle, notice when the front wheels lock up and predict when the rear wheels will be over that same stretch of road. Likewise if they grip, it could be known as a good place to apply more braking force when the rear wheels go over.
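
As a back-of-the-envelope sketch (the wheelbase and speed below are made-up numbers), the warning time is just the wheelbase divided by the vehicle speed:

```python
# Rough sketch: how long until the rear wheels reach the patch the
# front wheels just slipped on?  Numbers are illustrative only.
def rear_wheel_delay(wheelbase_m: float, speed_kmh: float) -> float:
    """Seconds between the front and rear wheels crossing the same spot."""
    speed_ms = speed_kmh / 3.6
    return wheelbase_m / speed_ms

# Example: 2.8 m wheelbase at 100 km/h gives about 0.1 s of warning.
print(round(rear_wheel_delay(2.8, 100.0), 3))  # ~0.101
```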

In addition, this is something cars could share information about. Each vehicle that goes over a stretch of road could learn about the surface, and transmit that for cars yet to come, with timestamps of course. One car might make a very accurate record of the road surface that other cars passing by soon could use. If for nothing else, this would allow cars to know what a workable speed and inter-car gap is. This needs positioning more accurate than GPS, but that could easily be attained with mile marker signs on the side of the road that an optical scanner can read, combined with accurate detection of the dotted lines marking the lanes. GPS can tell you what lane you're in if you can't figure it out. Lane markers could themselves contain barcodes if desired -- highly redundant barcodes that would tolerate lots of missing pieces of course.

This technology could be applied long before the cars drive themselves. It's a useful technology for a human driven car where the human driver gets advice and corrections from an in-car system. "Slow down, there's a patch of ice ahead" could save lives. I've predicted that the roadmap to the self-driving car involves many incremental improvements which can be sold in luxury human-driven cars to make them safer and eventually accident proof. This could be a step.

Predictive suspension

I’m not the first to think of this idea, but in my series of essays on self driving cars I thought it would be worth discussing some ideas on suspension.

Human-driven cars need to have a modestly tight suspension. The driver needs to feel the road. An AI-driven car doesn’t need that, so the suspension can be tuned for the maximum comfort of the passengers. You can start by just making it much softer than a driver would like, but you can go further.

There are active suspension systems that use motors, electromagnets or other systems to control the ride. Now there are even products that use magnetorheological fluids, whose viscosity can be controlled by magnetic fields, in shock absorbers.

I propose combining that with a scanner which detects changes in the road surface and predicts exactly the right amount of active suspension or shock absorption needed for a smooth ride. This could be done with a laser off the front bumper, or even mechanically with a small probe off the front with its own small wheel in front of the main wheel.
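
Here is a tiny sketch of the timing and the decision involved, with invented numbers: the scanner's lookahead distance divided by the speed gives the time available to change the damper setting before the wheel reaches the bump.

```python
# Sketch: turn a scanned bump into a damper command ahead of time.
# The lookahead distance, speeds, and damping levels are illustrative.
def preview_time(lookahead_m: float, speed_kmh: float) -> float:
    """Seconds between seeing a bump and the front wheel reaching it."""
    return lookahead_m / (speed_kmh / 3.6)

def damper_setting(bump_height_mm: float) -> str:
    """Pick a (made-up) damping level based on bump size."""
    if bump_height_mm < 5:
        return "normal"
    if bump_height_mm < 30:
        return "soft"        # soak up the bump
    return "decouple"        # near-unload this wheel (see below)

# A 25 mm bump seen 10 m ahead at 60 km/h: ~0.6 s to switch to "soft".
print(round(preview_time(10.0, 60.0), 2), damper_setting(25.0))
```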

As such systems improve, you could even imagine it making sense to give a car more than 4 wheels. With the proper distribution of wheels, it could become possible, if a bump is coming up for just one or two of the wheels, to largely decouple the vehicle from those wheels and put the weight on the others. With this, most bumps might barely affect the ride. This could mean a very smooth ride even on a bumpy dirt or gravel road, or a poorly maintained road with potholes. (The decoupling would also stop the pothole from doing much damage to the tire.)

As a result, our self-driving cars could give us another saving, by reducing the need for spending on road maintenance. You would still need it, but not as much. Of course you still can’t get rid of hills and dips.

I predict that some riders at least will be more concerned with ride comfort than speed. If their self-driving car is a comfortable work-pod, with computer/TV and phone, time in the car will not be “downtime” if the ride is comfortable enough. Riders will accept a longer trip if there are no bumps, turns and rapid accelerations to distract them from reading or working.

Now perfect synchronization with traffic lights and other vehicles will avoid starts and stops. But many riders will prefer very gradual accelerations when starts and stops are needed. They will like slower, wider turns with a vehicle which gimbals perfectly into the turn. And fewer turns to boot. They’ll be annoyed at the human driven cars on the road which are more erratic, and force distracting changes of speed or vector. Their vehicles may try to group together, and avoid lanes with human drivers, or choose slightly slower routes with fewer human drivers.

The cars will warn their passengers about impending turns and accelerations so they can look up — the main cause of motion sickness is a disconnect between what your eyes see and what your inner ear feels, so many people have a problem reading or working in an accelerating vehicle.

People like a smooth, distraction free trip. In Japan, the Shinkansen features the express Nozomi trains which include cars where they do not make announcements. You are responsible for noticing your stop and getting off. It is a much nicer place to work, sleep or read.

Virtual machines need to share memory

A big trend in systems operation these days is the use of virtual machines — software systems which emulate a standalone machine so you can run a guest operating system as a program on top of another (host) OS. This has become particularly popular for companies selling web hosting. They take one fast machine and run many VMs on it, so that each customer has the illusion of a standalone machine, on which they can do anything. It’s also used for security testing and honeypots.

The virtual hosting is great. Typical web activity is “bursty.” You would like to run at a low level most of the time, but occasionally burst to higher capacity. A good VM environment will do that well. A dedicated machine has you pay for full capacity all the time when you only need it rarely. Cloud computing goes beyond this.

However, the main limit to a virtual machine’s capacity is memory. Virtual host vendors price their machines mostly on how much RAM they get. And a virtual host with twice the RAM often costs twice as much. This is all based on the machine’s physical RAM. A typical vendor might take a machine with 4GB, keep 256MB for the host and then sell 15 virtual machines with 256MB of RAM each. They will also let you “burst” your RAM, either into spare capacity or into what the other customers are not using at the time, but if you do this for too long they will just randomly kill processes on your machine, so you don’t want to depend on this.

The problem is that when they give you 256MB of RAM, that’s all you get. A dedicated Linux server with 256MB of RAM will actually run fairly well, because it uses paging to disk. The server loads many programs, but a lot of the memory used for these programs (particularly the code) is used rarely, if ever, and swaps out to disk. So your 256MB holds the most important pages. If you have more than 256MB of important, regularly used memory, you’ll thrash (but not die) and know you need to buy more.

The virtual machines, however, don’t give you swap space. Everything stays in RAM. And the host doesn’t swap it either, because that would not be fair. If one VM were regularly swapping to disk, this would slow the whole system down for everybody. One could build a fair allocation scheme for that, but I have not heard of one.

In addition, another big memory saving is lost — shared memory. In a typical system, when two processes use the same shared library or same program, it is loaded into memory only once. It’s read-only so you don’t need to have two copies. But on a big virtual machine host, we have 15 copies of all the standard stuff — 15 kernels, 15 MySQL servers, 15 web servers, 15 of just about everything. It’s very wasteful.

So I wonder if it might be possible to do one of the following:

  • Design the VM so that all binaries and shared libraries can be mounted from a special read-only filesystem which is actually on the host. This would be an overlay filesystem so that individual virtual machines could change it if need be. The guest kernel, however, would be able to load pages from these files, and they would be shared with any other virtual machine loading the same file.
  • Write a daemon that regularly uses spare CPU to scan the pages of each virtual machine, hashing them. When two pages turn out to be identical, release one and have both VMs use the common copy. Mark it so that if one writes to it, a duplicate is created again. When new programs start it would take extra RAM, but within a few minutes the memory would be shared.
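
As a toy sketch of the second idea, here is the page-hashing step in Python. Real implementations would walk live page tables and handle the copy-on-write remapping in the hypervisor; here each guest's memory is just a byte string and the page size is an assumption.

```python
# Toy sketch of the page-hashing daemon: find identical pages across
# guests so only one physical copy needs to be kept.
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096

def find_shareable_pages(guests: dict) -> dict:
    """Map page-content hash -> list of (guest, page index) holding it."""
    seen = defaultdict(list)
    for name, mem in guests.items():
        for i in range(0, len(mem), PAGE_SIZE):
            page = mem[i:i + PAGE_SIZE]
            seen[hashlib.sha256(page).hexdigest()].append((name, i // PAGE_SIZE))
    # Only hashes held by more than one (guest, page) can be merged; the
    # hypervisor would remap them to one read-only frame, marked
    # copy-on-write so a guest that writes gets its own copy again.
    return {h: locs for h, locs in seen.items() if len(locs) > 1}

# Two guests that loaded the same "library" end up sharing those pages.
guests = {"vm1": b"\x00" * PAGE_SIZE + b"libc" * 1024,
          "vm2": b"\xff" * PAGE_SIZE + b"libc" * 1024}
print(find_shareable_pages(guests))
```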

These techniques require either a very clever virtualizer or modified guests, but their savings are so worthwhile that everybody would want to do it this way on any highly loaded virtual machine. Of course, that goes against the concept of “run anything you like” and makes it “run what you like, but certain standard systems are much cheaper.”

This, and allowing some form of fair swapping, could seriously improve the performance and lower the cost of VMs.

Laptops could get smart while power supplies stay stupid

If you have read my articles on power you know I yearn for the days when we get smart power, so we can have universal supplies that power everything. This hit home when we got a new ThinkPad Z61, which uses a new power adapter that provides 20 volts at 4.5 amps and a new, quite rare power tip which is 8mm in diameter. For almost a decade, ThinkPads used 16.5 volts and a fairly standard 5.5mm plug. It got so that some companies standardized on ThinkPads and put cheap 16 volt ThinkPad power supplies in all the conference rooms, allowing employees to just bring their laptops in with no hassle.

Lenovo pissed off their customers with this move. I have perhaps 5 older power supplies, including one each at two desks, one that stays in the laptop bag for travel, one downstairs and one running an older ThinkPad. They are no good to me on the new computer.

Lenovo says they knew this would annoy people, and did it because they needed more power in their laptops, but could not increase the current in the older plug. I’m not quite sure why they need more power — the newer processors are actually lower wattage — but they did.

Here’s something they could have done to make it better.

The impact of Peer to Peer on ISPs

I’m a director of BitTorrent Inc. (though not speaking for it) and so the recent debate about P2P applications and ISPs has been interesting to me. Comcast has tried to block off BitTorrent traffic by detecting it and severing certain P2P connections by forging TCP reset packets. Some want net neutrality legislation to stop such nasty activity, others want to embrace it. Brett Glass, who runs a wireless ISP, has become a vocal public opponent of P2P.

Some base their opposition on the fact that since BitTorrent is the best software for publishing large files, it does get used by copyright infringers a fair bit. But some just don’t like the concept at all. Let’s examine the issues.

A broadband connection consists of an upstream and a downstream section. In the beginning, this was always symmetric: you had the same capacity up as down. Even today, big customers like universities and companies buy things like T-1 lines that give 1.5 megabits in each direction. ISPs almost always buy equal sized pipes to and from their peers.

With ADSL, the single phone wire is multiplexed so that you get much less upstream than downstream. A common circuit will give 1.5 Mbps down and, say, 256 Kbps up — a 6-to-1 ratio. Because cable systems weren’t designed for two-way data, they have it worse. They can give a lot down, but they share the upstream over a large block of customers under the existing DOCSIS system. They also offer upstream at close to the 6-to-1 ratio, but unlike the DSL companies, there isn’t a fixed line per customer.

Whose call is it to say what's legal?

As many of you will know, it’s been a tumultuous week in President Bush’s battle to get congress to retroactively nullify our lawsuit against AT&T over the illegal wiretaps our witnesses have testified to. The President convinced the Senate to pass a bill with retroactive immunity for the phone companies — an immunity against not just this but all sorts of other illegal activities that have been confirmed but not explained by administration officials. But the House stood firm, and for now has refused. A battle is looming as the two bills must be reconciled. I encourage you to contact your members of congress soon to tell them you don’t want immunity.

And here, I’m going to outline in a slightly different way, why.

I’ve talked about the rule of law, and the problems with retroactive get out of jail free cards that “make it legal.” But let’s go back to when these programs started, and ask some important questions about the nature of democracy and its checks and balances.

The White House decided it wanted a new type of wiretap, and that it wouldn’t, or most probably couldn’t get a warrant from the special court convened just to deal with foreign intelligence wiretaps. They have their reasoning as to why this is legal, which we don’t agree with, but even assuming they believe it themselves, there is no denying by anybody — phone company employees, administration officials, members of congress or FISA judges — that these wiretaps were treading on new, untested ground. Wiretaps of course are an automatic red flag, because they involve the 4th amendment, and in just about every circumstance, everybody agrees they need a warrant as governed by the 4th amendment. Any wiretap without a warrant is enough to start some fine legal argument.

In the USA, the government is designed with a system of checks and balances. This is most important when the bill of rights is being affected, as it is here. The system is designed so that no one branch is allowed to interfere with rights on its own. The other branches get some oversight, they have a say.

So when the NSA came to the phone companies, asking for a new type of wiretap with no warrant, the phone companies had to decide what to do about it. The law tells them to say no, and exacts financial penalties if they don’t say no to an illegal request. The law is supposed to be simple and to not ask for too much judgment on the part of the private sector. In this situation, with a new type of wiretap being requested, the important question is who makes the call? Who should decide if the debatable orders are really legal or not?

There are two main choices. Phone company executives or federal judges. If, as the law requires, the phone company says “come back with a warrant” this puts the question of whether the program is legal in the hands of a judge. The phone company is saying, “this is not our call to make — let’s ask the right judge.”

If the administration says, “No, we say it’s legal, we will not be asking a judge, are you going to do this anyway?” then we’re putting the call in the hands of phone company executives.

That’s what happened. The phone companies made the decision. The law told them to kick it back to the judge, but the White House, it says, assured them the program was legal. And now that lawsuits like ours are trying to ask a different federal judge if the program was legal, the Senate has passed this retroactive immunity. This immunity does a lot of bad things, but among them it says that “it was right for the phone companies to be making the call.” That the pledges of the administration that the program was legal were enough. We’ve even been told we should thank the phone companies for being patriots.

But it must be understood. Even if you feel this program was necessary for the security of the nation, and was undertaken by patriots, this was not the only decision the phone company made. We’re not suing them because they felt they had a patriotic duty to help wiretap al Qaeda. We’re suing them because they took the decidedly non-patriotic step of abandoning the checks and balances that keep us free by not insisting on going to either a judge or congress or both.

Officials in the three branches take a solemn oath to defend the constitution. Phone company executives, as high minded or patriotic as they might be, don’t. So the law was written to tell them it is not their call whether a wiretap is legal, and to tell them there are heavy penalties if they try to make that decision. Those who desire immunity may think they are trying to rescue patriots, but instead they will be rewarding the destruction of proper checks and balances. And that’s not patriotic at all.

Some have argued that there was a tremendous urgency to this program, and this required the phone companies to act quickly and arrange the warrantless wiretaps. While I disagree, I can imagine how people might think that for the first week or two after the requests come in. But this wasn’t a week or two. This has gone on since 2001. There was over half a decade of time in which to consult with judges, congress or both about the legitimacy of the wiretaps. It’s not that they didn’t know — one company, Qwest, refused them at their own peril. If you argued for immunity for the actions of that first week or two, I could understand the nature of your argument. But beyond that, it’s very hard to see. For this is immunity not just for illegal wiretapping. This is immunity for not standing by the law and saying “let’s ask a judge.” For years, and years. Why we would want to grant immunity for that I just can’t understand, no matter how patriotic the goals. This system of freedom, with checks and balances, is the very core of what patriots are supposed to be defending.

Where are the savoury chocolate/cocoa dishes?

I’ve read studies that say that “chocolate” is the world’s favourite flavour. That’s not too surprising. Coming from Central America after the Spanish conquest, the candy, at least, was quickly adopted all over Europe and to a lesser degree elsewhere. So were many other New World ingredients, such as corn, beans, squash, chiles, potatoes, vanilla, tomatoes, peanuts and many others. And we’ve seen many of these become common, and even essential, ingredients in many overseas cuisines. (I often wonder what Italian meals were like before pasta came from China and tomatoes from the Americas!)

But oddly, the tastiest and most complex of the ingredients never got exported in any significant way for savoury cooking. You can find excellent cacao-based mole sauces in Mexican and Southwestern cuisine, but this is to be expected, as the ingredients come from there. Those dishes are centuries old. And if they didn’t exist one might conclude that chocolate only works as a sweet. But it doesn’t. So why did the talented chefs of Europe, India, China, Japan and other places never develop a popular dish with this ingredient, when they did so much with the other new ingredients? I say popular because there certainly are dishes, but they are by and large obscure. Just about every culture has a range of well known potato and tomato dishes, for example.

I’ll presume it’s difficult. But modern fusion chefs, with fancy tools, knowledge of chemistry and the world’s ingredients, should be able to do it. Not just come up with dishes, but come up with something both tasty and simple enough to spread as a popular choice. Though for now we won’t feel too bad having to limit ourselves to French hot chocolate and Belgian truffles.

Rental car that personalizes to you

Rental car companies are often owned by car manufacturers and are their biggest customers. As cars get more and more computerized, how about making rental cars that know how to personalize to the customer?

When Hertz assigns me a car, they could load into its computer things like the dimensions of my body, so that the seat and mirrors are already set for me (simply remembered from the last time I rented such a car, for example.) If I have a co-driver, a switch would set them for her. The handsfree unit would be paired in advance with my bluetooth phone.

The prep crew would have made sure there was a charger for our cell phones and other mobile devices in the car, at least for the major charger types such as USB and mini-USB, which should become standard on car dashes soon anyway. Perhaps there could even be a docking cradle.

The radio stations should be set to how I set them the last time I was in the rental town. If this is unknown, stations of the formats I like should be on the buttons I use. (Button 1 for NPR/CBC, Button 2 for Jazz, Button 3 for Rock, Button 4 for Classical, Button 5 for Traffic etc.) Or if satellite radio is used, settings for that could be preserved all over the world.

Any other car settings should be remembered and re-loaded for me.

All cars will have a GPS soon of course, but it should also be a bluetooth one that will transmit to my laptop or PDA if I want that. While I don’t want the company keeping a log of where I drive, it would be nice if I could specify destinations I plan to visit on the rental car web site when I reserve the car, and these would be pre-loaded into the GPS. And perhaps it could also be trained to my voice. For cars with a keycode entry, the code could be “my” keycode.
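
Taken together, the profile the rental company would push into the car amounts to a small record. A hypothetical sketch (every field name here is invented for illustration):

```python
# Hypothetical sketch of the profile a rental company could push into
# the car before pickup.  All field names are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class RenterProfile:
    seat_position: dict        # seat height, distance, recline, mirror angles
    bluetooth_phone_id: str    # pre-pair the handsfree unit
    radio_presets: list        # last stations used in this city, or by format
    keycode: str               # "my" entry code for keycode cars
    gps_destinations: list = field(default_factory=list)  # trips entered on the web site

profile = RenterProfile(
    seat_position={"height_cm": 24, "distance_cm": 58, "mirror_left_deg": 12},
    bluetooth_phone_id="AA:BB:CC:DD:EE:FF",
    radio_presets=["NPR", "Jazz", "Rock", "Classical", "Traffic"],
    keycode="4821",
    gps_destinations=["hotel", "client office"],
)
```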

In other words, every possible thing you can easily customize about your own car should be available for loading into a rental car, to make it seem more like your car. And, of course, if you already drive such a car, it could very well be your car. (Though in the USA, because the rental car companies have these close relationships with Ford, GM and the like, don’t expect that if you drive an imported car.)

Is it that much time to set up a car when you rent it? Not really. But this is just something nice for the future. Regular readers will know I predict that as cars drive themselves, we will far more routinely use hired vehicles, and this sort of “make it mine” technology will become more important then.

I'll pay a lot for the ultimate tourist's mobile device

Fast internet access at home has spoiled me. Like Manfred Macx in Tourist, I feel like I’ve lost my glasses when I’m a tourist. I get annoyed that I can’t quickly and easily get at all the information that’s out there.

I would gladly rent the ultimate tourist mobile device. A large GPS-equipped PDA (and also a cell phone, for tourists roaming from other countries or from CDMA vs. GSM networks) that has everything. Every database that can be had on geo-data for the region I’m walking. It has mobile data service of course, but also just pre-caches the region I’m in.

Not just the maps and the lists of tourist-related items like restaurants. I want reviews of those restaurants and ratings and even the menus, so I can easily ask “Where’s the best place in the $15/plate range near here?” and similar questions. I want every hotel in a town (not just the ones in the popular databases), and I want their recently updated price offers. And with the data connection, I want something like Wotif for the hotels tied into the computer reservation networks.

I don’t just want to know where the museum is, I want all of its literature. I want its internal map, with all of the placards translated into my language. Indeed, I want just about everything I need to read in a geolocation translated into my language.

And I want opinions on everything, from travel writers, tourists and locals. I want every single major travel book on the area loaded and ready and searchable. (Because I will be searching I want this to be bigger than a typical PDA/phone and have a moderately usable keyboard, or a really big touchscreen keyboard.)

I want it to have a decent camera, partly in case I forget to bring mine with me, but also for something grander. I want to be able to photograph any sign, any menu, and have it upload the photo to a system that OCRs the text and translates it for me. This is no longer science fiction — decent camera-based OCR is available, and while translation software still has its hiccups it’s starting to get decent. In fact, as this gets better, the need for a database of signs at locations becomes less. Of course it should also be able to let locals type messages for me on it which it translates.
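
A rough sketch of that photograph-and-translate loop, assuming the pytesseract OCR wrapper; the translate() function is a placeholder for whatever translation backend the device would actually use, not a real API.

```python
# Sketch of the sign/menu translation loop.  pytesseract does the OCR;
# translate() is a placeholder for whatever translation backend (local
# or in the cloud) the device actually uses.
from PIL import Image
import pytesseract

def translate(text: str, target_lang: str) -> str:
    """Placeholder: send text to a translation service, return the result."""
    raise NotImplementedError

def read_sign(photo_path: str, target_lang: str = "en") -> str:
    image = Image.open(photo_path)
    original = pytesseract.image_to_string(image)   # OCR the sign or menu
    return translate(original, target_lang)
```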

It should be trainable to my voice as well, so I can enter text with speech recognition instead of typing. Both for using the device, and saying things that are translated for locals, either to the screen or output from today’s quality text to speech systems. This will get better as the translation software gets better. In some cases, the processing may be done in the cloud to save battery on my device. But as I’ve noted the normal portability requirements on this device are not the same as for my everyday PDA. I don’t mind if this is big and a bit heavy, sized more like a Kindle than an iPhone.

It should be able to take me on walking and driving tours, of course.

And finally, at additional cost, it should connect me to a person, via voice or IM, who can help me. That can be a travel agent to book me a room of course, but it can also be a local expert — somebody who perhaps even works sometimes as a tourist guide. Earlier I wrote of the ability to call a local expert: people with local expertise would register, and when they were online, they could receive calls, billed by the minute. Your device would know where you were, and might well connect you with somebody living one street over who speaks your language and can tell you things you want to know about the area.

Now some of the things I have described are expensive, though as such a device became popular the economies of scale kick in for popular tourist areas. But I’m imagining tourists paying $20 to $30 a day for such a device. Rented 2/3 of the year, that’s $5,000 to $7,000 of revenue in a single year — enough to pay for the things I describe — every travel guide, every database, high volume data service and more. And I want the real thing, not the advertising-biased false information found in typical tourist guides or the “I’m afraid to be critical of anything” information generated by local tourist bureaus.

Why would I pay so much? Travel costs for a party of tourists are an order of magnitude higher than this. I think it would be a rare day that such a device didn’t save you more than this by finding you better food at a better price, savings on hotels and more. And it would save you time. If you are paying $200 to $400/day to travel, including your airfare, your hours are precious. You want to spend them seeing the best things for your taste — not wondering where things are. Saving you an hour of futzing pays for the device.

With scale, it could come down under $10/day, making it crazy not to get it. In fact, locals would start to want some of these databases.

Of course, UI is paramount. You must not have to spend the time you save trying to figure out the UI of the device. That is non-trivial, but doable for a budget like this.

Put my PIN into the phone number

When you call to get your voice mail, even from your cell phone, it typically asks for a PIN. There's a reason for that -- there is no authentication on Caller ID, and anybody can forge it. So if you don't require a PIN, and the voice mail let you in directly, anybody could listen to your voice mail or hack it in other ways. (The phone companies could of course authenticate Caller ID within their own networks, but this must be harder than it sounds because they don't.) Some services don't bother with a PIN if they identify the caller ID because the odds of somebody trying to hack it are low. In some cases that's because the hacking party would need to know what services a person uses.

Setting caller ID is actually pretty useful. I have set up my PBX to call my cell phone voice mail using the caller ID of my cell phone, so I have a speed dial on my desk phone that calls my Sprint voice mail. I do still have to enter the PIN.

So here's the idea. Get a bank of phone numbers, 10,000 of them, for voice mail dial-in. This can be a bank in some rural area code that still has entire exchanges free. Getting an entire exchange is not trivial but turns out to be not that expensive if you can justify it. Then let a user with a PIN put that PIN into the last 4 digits of the phone number. They would call that special number, and only that number, to pick up their voice mail (or use whatever service). If somebody called other numbers in the block using their caller ID, this would be a sign of an attack, and too many attempts would turn on a switch so that any call to any number in the block now requires some identification. (This is a minor DoS attack, but not too bad of one if you can still remember a different ID code.)
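
A small sketch of the logic: the last four digits of the dialed number are the PIN, any other number in the block dialed with your caller ID counts as a probe, and too many probes lock the whole block back to requiring identification. The block prefix, the user table, and the threshold are all invented for illustration.

```python
# Sketch of PIN-in-the-phone-number.  The block prefix, the user table,
# and the lockout threshold are all invented for illustration.
BLOCK_PREFIX = "530555"           # hypothetical rural exchange, 10,000 numbers
ATTACK_THRESHOLD = 5

users = {"4155551234": "8271"}    # caller ID -> that caller's 4-digit PIN
probe_counts = {}                 # caller ID -> wrong-number probes seen
block_locked = False

def handle_call(caller_id: str, dialed: str) -> str:
    global block_locked
    if not dialed.startswith(BLOCK_PREFIX):
        return "not ours"
    pin = dialed[-4:]
    if users.get(caller_id) != pin:
        # Wrong number in the block for this caller ID: likely a probe.
        probe_counts[caller_id] = probe_counts.get(caller_id, 0) + 1
        if probe_counts[caller_id] >= ATTACK_THRESHOLD:
            block_locked = True   # from now on, every call needs extra ID
        return "ask for identification"
    if block_locked:
        return "ask for identification"   # block under attack; be careful
    return "play voice mail"

print(handle_call("4155551234", BLOCK_PREFIX + "8271"))  # play voice mail
print(handle_call("4155551234", BLOCK_PREFIX + "0000"))  # ask for identification
```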

This done, you can put your magic, PIN-embedded number into your speed dial and just use that for instant access to voice mail or other services.

Of course the rural number will look like long distance, but that's no issue to your own phone company. Indeed, if you only want this for use by phone companies for internal calls, we could devote an entire virtual area code -- but you could not call these numbers from another phone. All companies could share the area code because it would not actually exist. (Of course, authenticating their own caller-ID is easier, this is just a kludge to do it with existing tools.)

A block in the 866/877/888 band of toll-free numbers would be nice too but these are harder to come by.

Detecting bad photos in camera and after

As I’ve noted, with digital cameras we all take tons of photos, and the next task is to isolate out the winners. I’ve outlined better workflow for this and there are still more improvements we need in photo management software, but one task both cameras and photo management software could make easier is eliminating the plain bad shots.

I’ve always wanted the camera to have a display mode that immediately shows, at 1:1, the most contrasty (sharpest) section of a photo I have taken. If I look at that, and see it’s blurry then I know the whole photo is blurry, whether it be from camera shake or bad focus. If it’s sharp but not the thing I wanted to emphasize, I may realize the autofocus found the wrong thing. (My newest camera shows in the review pane what autofocus points it used, which is handy.)
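
Here is a rough sketch of how that "show the sharpest crop" check might work, using the common variance-of-the-Laplacian sharpness measure over tiles of the image. The tile size and the "blurry" threshold are arbitrary assumptions.

```python
# Sketch: find the sharpest tile of a photo and decide if even that
# tile is blurry.  Tile size and threshold are arbitrary assumptions.
import cv2

TILE = 256
BLUR_THRESHOLD = 100.0   # variance of the Laplacian; tune per camera

def sharpest_tile(path: str):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    best_score, best_xy = -1.0, (0, 0)
    for y in range(0, gray.shape[0] - TILE + 1, TILE):
        for x in range(0, gray.shape[1] - TILE + 1, TILE):
            tile = gray[y:y + TILE, x:x + TILE]
            score = cv2.Laplacian(tile, cv2.CV_64F).var()
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score, best_score < BLUR_THRESHOLD

# (x, y) of the crop to show at 1:1, its sharpness score, and a "whole
# shot is probably blurry" flag if even the best tile is below threshold.
print(sharpest_tile("photo.jpg"))
```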

Indeed, if a camera finds that there is no section of the photo which is sharp, it might even display or sound a warning. Yes, sometimes I will take shots of fuzzy clouds where this will be normal. I can handle the false warning then. It might be so dark I can’t get a good shot and will also ignore the warning, but other times it might tell me to shoot that one again.

(Nikon cameras have a feature where they take 3 shots and keep the sharpest of them. That’s handy, but I still want to know if the sharpest of them is still no good.)

The camera could go further. With more sensitive accelerometers, it could actually calculate how much the camera rotated while the shutter was open, and since it also knows the focal length, it could calculate the amount of motion blur there will be in the shot. Again, it could warn you when it’s too much, and tag this acceleration data in the EXIF fields of the file. Yes, sometimes one takes a tracking shot where you pan on a moving object and deliberately blur the background. In theory the detection of sharp objects in the field would reveal this, but in any event you can also just ignore the warning here.
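
The blur estimate itself is simple geometry: the angle the camera swept during the exposure, multiplied by the focal length expressed in pixels, gives the smear in pixels. A sketch with illustrative sensor numbers:

```python
# Sketch: estimate motion blur in pixels from gyro data.  The sensor
# width, resolution, and example focal lengths are illustrative.
import math

def blur_pixels(rotation_deg: float, focal_length_mm: float,
                sensor_width_mm: float = 23.6, image_width_px: int = 4000) -> float:
    """Pixels of smear caused by rotating the camera during the exposure."""
    focal_length_px = focal_length_mm * image_width_px / sensor_width_mm
    return math.tan(math.radians(rotation_deg)) * focal_length_px

# 0.05 degrees of shake at 200 mm on this sensor: ~30 px of blur,
# worth a warning.  The same shake at 18 mm is ~2.7 px, probably fine.
print(round(blur_pixels(0.05, 200.0), 1))
print(round(blur_pixels(0.05, 18.0), 1))
```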

For those with full flash cards, such detection could help in removing turkeys when you have to delete.

Until our cameras can do this, our photo management software could help. As noted, the first task in photo management is to divide the photos into groups. I divide into 5 groups myself — bad shots, boring shots, average shots, winners and super-winners. Winners go into the slideshow for the particular shooting trip, super winners will go into a “best of the year” category.

The photo management software could scan over the photos, and find ones that are blurry. It could then let me do a quick scan over them, either as large thumbnails, or perhaps again showing me at 1:1 zoom the highest contrast crop. I could quickly pull out any pictures I still want and relegate the others to the bad photo pile, or even delete them. The same could apply for images that are obviously overexposed or underexposed. Again, I will still scan to see if there is anything to save, and in the case of the underexposed, I can do the scan in a mode where a compensation is done to brighten them to see what can be recovered. But after that, I don’t want them in the way of my real workflow, to find the winners.

Automatic retracting pen

I put pens in my pockets. However, sometimes I put them in without caps, or I put in retractable pens without retracting them to keep the tip inside.

The result, as all who do this know, is that from time to time a pen leaks and ruins a pair of pants, sometimes more than that. It’s expensive, and hard to solve. Since the earliest days the badge of the nerd has been the shirt pocket protector, but I put my pens in my pants pockets. You could try Tyvek pocket liners, I suppose, but it’s hard to see how to easily add them.

I wonder if we couldn’t come up with designs for retractable pens where there is some timed decay to the extension of the tip, so that it automatically returns to being inside after a modest time, perhaps half an hour to an hour. It could either just return at a very slow pace, with the spring pushing back against something firm enough to keep the tip in place, or something could slowly bend and release the ratchet. The latter is better because, of course, the tip must be firmly held for writing; we don’t want to be able to push it back in with the pressure of writing.

The time to return might well be fairly short. Today I find that I only use pens for short bursts of writing. I do all serious writing on a keyboard. I will pull a pen out to make quick notes and then I am done. While it might be annoying from time to time, I could even imagine it clicking back after just a couple of minutes. Of course many pens would not do this — which is a problem, because one will still be regularly picking up other pens, as one often does. But you could still reduce the number of times pen accidents happen if you bought mostly pens like this for yourself.

With electronics as cheap as they are these days, this could also be done with a sensor. Clicking the pen to extend the tip stores energy in the spring, and might store a little elsewhere, so that after a couple of minutes the pen beeps if it hasn’t been retracted.