The social networks could hold great political power due to GOTV. Should they?

The social networks have access (or, more to the point, can give their users access) to an unprecedented trove of information on political views and activities. Could this make a radical difference in who actually shows up to vote, and thus decide the outcome of elections?

I’ve written before about how the biggest factor in US elections is the power of GOTV (Get Out the Vote). US electoral turnout is so low — about 60% in Presidential elections and 40% in off-year elections — that the winner is determined by which side is able to convince more of its weak supporters to actually show up and vote. All those political ads you see are not going to make a Democrat vote Republican or vice versa; they are going to scare a weak supporter into actually showing up. It’s much cheaper, in terms of votes per dollar (or volunteer hour), to bring in these weak supporters than it is to swing a swing voter.

The US voter turnout numbers are among the worst in the wealthy world. Much of this is blamed on the fact that the US, unlike most other countries, has voter registration, effectively two-step voting. Voter registration was originally implemented in the USA as a form of vote suppression, and it has stuck with the country ever since. In almost all other countries, some agency is responsible for preparing a list of citizens and giving it to each polling place. There are people working to change that in the US, but for now it’s the reality. Registration is about 75%, and Presidential voting about 60%. (Turnout of registered voters is around 80%.)

Scary negative ads are one thing, but one of the most powerful GOTV forces is social pressure. Republicans used this well under Karl Rove, working to make social groups like churches create peer pressure to vote. But let’s look at the sort of data sites like Facebook have or could have access to:

  • They can calculate a reasonably accurate estimate of your political leaning with modern AI tools and access to your status updates (where people talk politics) and your friend network, along with the usual geographic and demographic data
  • They can measure the strength of your political convictions through your updates
  • They can bring in the voter registration databases (which are public in most states, with political use allowed on the data. Commercial use is forbidden in a portion of states but this would not be commercial.)
  • In many cases, the voter registration data also reveals if you voted in prior elections
  • Your status updates and geographical check-ins and postings will reveal voting activity. Some sites (like Google) that have mobile apps with location sensing can detect visits to polling places. Of course, for the social site to aggregate and use this data for its own purposes would be a gross violation of many important privacy principles. But social networks don’t actually do (too many) things; instead they provide tools for their users to do things. As such, while Facebook should not attempt to detect and use political data about its users, it could give tools to its users that let them select subsets of their friends, based only on information that those friends overtly shared. On Facebook, you can enter the query, “My friends who like Donald Trump” and it will show you that list. They could also let you ask “My Friends who match me politically” if they wanted to provide that capability.

Now imagine more complex queries aimed specifically at GOTV, such as: “My friends who match me politically but are not scored as likely to vote” or “My friends who match me politically and are not registered to vote.” Possibly adding “Sorted by the closeness of our connection” which is something they already score.
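To make that concrete, here is a toy sketch of the kind of filter such a tool might run. Everything in it (the field names, the scoring, the thresholds) is invented for illustration; it is not any real Facebook API, just the shape of the query.

```python
# Hypothetical sketch of a GOTV friend filter. The data model (political_score,
# registered, likely_to_vote, closeness) is invented for illustration and is
# not a real Facebook API; scores would come only from overtly shared data.
from dataclasses import dataclass

@dataclass
class Friend:
    name: str
    political_score: float  # -1.0 (opposes me) .. +1.0 (matches me)
    registered: bool         # from the public voter-registration rolls
    likely_to_vote: float    # 0..1, e.g. from turnout history in the public record
    closeness: float         # 0..1, strength of our connection

def gotv_targets(friends, match=0.5, turnout=0.5):
    """Friends who match me politically but are unregistered or unlikely to vote,
    sorted by the closeness of our connection."""
    picks = [f for f in friends
             if f.political_score >= match
             and (not f.registered or f.likely_to_vote < turnout)]
    return sorted(picks, key=lambda f: f.closeness, reverse=True)
```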

Everybody should have RAID and a filesystem to manage it

For many years, I have been using RAID for my home storage. With RAID (and its cousins) everything is stored redundantly so that if any disk drive fails, you don’t lose your data, and in fact your system doesn’t even go down. This can come at a cost of anywhere from about 25% to 50% of your disk space (but disk is cheap) and it also often increases disk performance. Some years ago I wrote about how disk drives should be sold in form factors designed for easy RAID in every PC, and I still believe that.

RAID comes with a few costs. One of them is that you need to do too much sysadmin to get it working right. The nastiest cost is that there are some edge cases where RAID can cause you to lose all your data where you would not have lost it (or all of it) if you had not used RAID. That’s bad — it should never make things worse.

A few years ago I switched to one of the new filesystems which put the RAID-like functionality right into the filesystem, instead of putting it into a layer underneath. I think that’s the right thing, and in fact, fear of layer violations is generally a mistake here. I am using BTRFS; others use ZFS, and there are a few other players. BTRFS is new, and its support for RAID-5 (which only costs 25-33% of your space and is fast) is still too young, so I use its RAID-1, where everything is just written twice onto two different disks. Unlike traditional RAID, BTRFS will do RAID-1 on more than 2 drives, and they don’t have to be all of equal size. That’s good, though I ran into some problems with the fairly common operation of increasing the size of my storage by replacing my smallest drive with a much larger one.

The long term goal of such systems should be near-trivial sysadmin. The system should handle all drives and partitions thrown at it in a “just works” way. You give it any number of drives and it figures out the best thing to do, and adapts as you change them. You should only need to tell it a few policies, such as how much need you have for reliability and speed and how much space you are willing to pay for it. The system should never put you at more risk than you ask for, or more risk than you would have had with just one drive or a set of non-redundant drives. That’s hard, but it is a worthwhile goal.

But I think we could do more, and we could do it in a way that we get better and better storage with less sysadmin.

Multiple drives, but not too many

I think most users will probably stick to 2 drives, and rarely go above 3. The reality is that 4 or more is for servers and heavy users, because each drive takes power and generates heat. However, adding an SSD to the mix is always a good idea, though it’s not for redundancy.

The OS should understand what’s happening and reflect it in the filesystem

The truth is that not all files need the same redundancy and speed. The OS can know a lot about that, and can identify:

  • Files that are accessed frequently vs. ones accessed rarely, or not at all for a long time
  • Files that are accessed by interactive applications and cause those applications to be IO bound (i.e. slowed by waiting for the disk)
  • Files that have been backed up in particular ways, and when.

Your OS should start by storing everything redundantly (RAID 1 or 5) until such time as the disk starts getting close to full. When that happens, it should of course alert you that it is time to upgrade your drives or add another. But it can also offer another option which you can explicitly ask for, namely reducing the redundancy on files which are rarely accessed, have not been used for a while, and have been backed up.

It turns out that’s often a lot of the files on a disk. In particular, the thing that uses up most of the disk space for the ordinary user is their collection of photos and videos. Other than the few that get regular access, there is no actual need for RAID-level redundancy on these images. If the drive holding them is lost, there is a backup you can get them from. They aren’t needed for regular system operation.

The systems already know what files belong to the OS, and can keep them redundant, though most home users are not looking for 100% uptime; they really only want 100% data safety.

To do this right, programs need to tell the OS why they are accessing files. Your photo organizer possibly scans your photo collection regularly, but that scan doesn’t make those files crucial to the system. My goal is not to have the users designate these things, though that is one option. Ideally the system should figure it out.

The system can also take the most important files, the ones that cause the system to block, and make sure they are both redundantly stored and kept on SSD.
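A sketch of the per-file policy this implies is below. The tier names, the 180-day idle threshold, and the idea of an io_bound_hot hint are all assumptions for illustration, not how any existing filesystem behaves.

```python
# Illustrative policy: pick a storage tier for a file from hints the OS already
# has. Thresholds and tier names are assumptions, not real BTRFS/ZFS behaviour.
import os
import time

def storage_tier(path, backed_up, io_bound_hot=False, idle_days_limit=180):
    """Return 'ssd+mirror', 'mirror', or 'single-copy' for one file."""
    days_idle = (time.time() - os.stat(path).st_atime) / 86400
    if io_bound_hot:
        return "ssd+mirror"     # interactive apps block on it: fast and redundant
    if backed_up and days_idle > idle_days_limit:
        return "single-copy"    # rarely used and safely backed up: drop redundancy
    return "mirror"             # default: everything stored twice (RAID-1 style)
```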

Easier backup

Backup needs to be easy and automatic. When systems boot up, they should offer to do backup for others who are nearby and semi-nearby, and then they should trade backup space. My system should offer space to others, and make use of their space for either general backup (if in the same house/company/LAN) or offsite backup (remote but with good bandwidth). Of course, ISPs and other providers can also provide this space for money.

The key thing is this should happen with almost no setup by the user. One problem for me is that I can come back from a trip with 50GB of new photos, and they would clog my upstream for remote backup. The system should understand which files have priority, and if the backlog gets too large, request that I plug in an external USB drive to hold a backup until the backlog can be cleared. Otherwise I should not have to deal with it. Of course, the backup I offer others does not need RAID redundancy. Instead, I should be queried regularly to prove I still have the backups, and if not, the person I am backing up should seek another place.
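Here is a rough sketch of that backlog logic, with the ordering rules and the 24-hour threshold as assumptions rather than anything a real backup tool does today.

```python
# Sketch of backup backlog handling; priority rules and thresholds are assumed.
def plan_backup(pending, upstream_bytes_per_sec):
    """pending: list of dicts with 'name', 'size' (bytes) and 'mtime'.
    Returns (upload order, whether to ask for a local USB drive)."""
    ordered = sorted(pending, key=lambda f: (f["size"], -f["mtime"]))  # small/new first
    backlog = sum(f["size"] for f in pending)
    hours_to_clear = backlog / upstream_bytes_per_sec / 3600
    return ordered, hours_to_clear > 24   # over a day of backlog: offer to stage to USB
```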


Of course, all remote backup must be encrypted by me. In fact, all disks should be encrypted, but too much desire for security can create a risk of losing all your data. Systems must understand the reduced threat model of the ordinary user and make sure keys are backed up in enough places that the chances of losing them are nil, even if it increases the chance that the NSA might get the keys. This is actually pretty hard. The typical “What was your pet’s name” pseudo-security questions are not strong enough, but going stronger makes key loss more likely. Proposals such as my friendscrow can work if the system knows your social network. They have the advantage that there is zero UI (ZUI) to escrowing the key, and a lot of work to recover it. This is the ideal model, because if there is ZUI on storing it, you are sure it will be stored. Nobody minds extra work if they have lost all the normal paths to getting their key.
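For the curious, the standard building block a friendscrow-style escrow could use is Shamir secret sharing: split the key into shares held by friends so that any k of them can reconstruct it, while fewer learn nothing. A toy, unaudited sketch:

```python
# Toy Shamir secret-sharing sketch (not audited crypto): split a disk key so
# any k of n friends can reconstruct it, but fewer than k learn nothing.
import secrets

PRIME = 2**521 - 1  # a prime comfortably larger than a 256-bit key

def split(secret, n, k):
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]   # one (x, y) share per friend

def recover(shares):
    # Lagrange interpolation at x = 0 gives back the constant term: the secret.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

key = secrets.randbits(256)                 # the disk encryption key
shares = split(key, n=5, k=3)               # give one share to each of 5 friends
assert recover(shares[:3]) == key           # any 3 shares recover it
```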

Will bed-bound seniors experience the world through VR telepresence robots?

I’ve written before about my experiences inhabiting a telepresence robot. I did it again this weekend to attend a reunion, with a different robot that’s still in prototype form.

I’ve become interested in the merger of virtual reality and telepresence. The goal would be to have VR headsets and telepresence robots able to transmit video to fill them. That’s a tall order. On the robot you would have an array of cameras able to produce a wide field of view — perhaps an entire hemisphere, or of course the full sphere. You want it in high resolution, so this is actually a lot of camera.

The lowest bandwidth approach would be to send just the field of view of the VR glasses in high resolution, or just a small amount more. You would send the rest of the hemisphere in very low resolution. If the user turned their head, you would need to send a signal to the remote to change the viewing box that gets high resolution. As a result, if you turned your head, you would see the new field, but very blurry, and after some amount of time — the round trip time plus the latency of the video codec — you would start seeing your view sharper. Reports on doing this say it’s pretty disconcerting, but more research is needed.
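A back-of-envelope comparison makes the bandwidth trade-off clearer. All the numbers below (resolutions, frame rate, compressed bits per pixel) are assumptions, chosen only to show the rough ratio between sending just the viewport and sending the whole hemisphere in high quality.

```python
# Rough bitrate comparison for view-dependent streaming; every number here is
# an assumption chosen only to illustrate the ratio, not a measured figure.
def mbps(width, height, fps=90, bits_per_pixel=0.1):
    """Approximate compressed video bitrate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

viewport_hq   = mbps(2560, 1440)                        # just the headset's current view
rest_lowres   = mbps(5400, 2700, bits_per_pixel=0.01)   # rest of the hemisphere, blurry
hemisphere_hq = mbps(5400, 2700)                        # whole hemisphere in high quality

print(f"viewport + low-res rest: {viewport_hq + rest_lowres:5.0f} Mbps")
print(f"full hemisphere:         {hemisphere_hq:5.0f} Mbps")
```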

At the next level, you could send a larger region in high-def, at the cost of bandwidth. Then short movements of the head would still be good quality, particularly the most likely movements, which would be side to side movements of the head. It might be more acceptable if looking up or down is blurry, but looking left and right is not.

And of course, you could send the whole hemisphere, allowing most head motions but requiring a great deal of bandwidth. At least by today’s standards — in the future such bandwidth will be readily available.

If you want to look behind you, you could just have cameras capturing the full sphere, and that would be best, but it’s probably acceptable to have servos move the camera, and also to not send the rear information. It takes time to turn your head, and that’s time to send signals to adjust the remote parameters or camera.

Still, all of this is more bandwidth than most people can get today, especially if we want lifelike resolution — 4K per eye or probably even greater. Hundreds of megabits. There are fiber operators selling such bandwidth, and Google Fiber sells it cheap. It does not need to be symmetrical for most applications — more on that later.

Surrogates, etc.

At this point, you might be thinking of the not-very-exciting Bruce Willis movie “Surrogates,” where everybody just lay in bed all day controlling surrogate robots that were better-looking versions of themselves. Those robot bodies passed on not just sight and sound but touch and smell and taste — the works — via a neural interface. That’s science fiction, but a subset could be possible today.

Local robots

One place you can easily get that bandwidth is within a single building, or perhaps even a town. Within a short distance, it is possible to get very low latency, and in a neighbourhood you can get millisecond latency from the network. Low latency from the video codec means less compression in the codec, but that can be attained if you have lots of spare megabits to burst when the view moves, which you do.

So who would want to operate a VR robot that’s not that far from them? The disabled, and in particular the bedridden, which includes many seniors at the end of their lives. Such seniors might be trapped in bed, but if they can sit up and turn their heads, they could get a quality VR experience of the home they live in with their family, or the nursing home they move to. With the right data pipes, they could also be in a nursing home but get a quality VR experience of being in the homes of nearby family. They could have multiple robots in houses with stairs, to easily “move” from floor to floor.

What’s interesting is we could build this today, and soon we can build it pretty well.

What do others see?

One problem with using VR headsets for telepresence is that a camera pointed at you sees you wearing a giant headset. That’s of limited use. Highly desired would be software that, using cameras inside the headset looking at the eyes, and a good captured model of the face, digitally removes the headset in a way that doesn’t look creepy. I believe such software is possible today with the right effort. It’s needed if people want VR-based conferencing with real faces.

One alternative is to instead present an avatar that doesn’t look fully real, but which conveys all the expression of the operator. This is also doable, and Philip Rosedale’s “High Fidelity” business is aimed at just that. In particular, many seniors might be quite pleased at having an avatar that looks like a younger version of themselves, or even just a cleaned-up version of their present age.

Another alternative is to use fairly small and light AR glasses. These could be small enough that you don’t mind seeing the other person wearing them, and you are able to see the direction of their eyes, at most behind a tinted screen. That would provide less of a sense of being there, but it also might provide a more comfortable experience.

For those who can’t sit up, experiments are needed to see if a system can be made to do this that isn’t nausea-inducing, as I suspect wearing VR that shifts your head angle will be. Anybody tried that?

Of course, the bedridden will be able to use VR for virtual space meetings with family and friends, just as the rest of the world will use them — still having these problems. You don’t need a robot in that case. But the robot gives you control of what happens on the other end. You can move around the real world and it makes a big difference.

Such systems might include some basic haptic feedback, allowing things like handshakes or basic feelings of touch, or even a hug. Corny as it sounds, people do interpret being squeezed by an actuator with emotion if it’s triggered by somebody on the other side. You could build the robot to accept a hug (arms around the screen) and activate compressed air pumps to squeeze the operator — this is also readily doable today.

Barring medical advances, many of us may sadly expect to spend some of our last months or years bedridden or housebound in a wheelchair. Perhaps we will adopt something like this, or even grander. And of course, even the able-bodied will be keen to see what can be done with VR telepresence.

Don't be fooled by robots falling down at the DARPA Robotics Challenge

This weekend I went to Pomona, CA for the 2015 DARPA Robotics Challenge which had robots (mostly humanoid) compete at a variety of disaster response and assistance tasks. This contest, a successor of sorts to the original DARPA Grand Challenge which changed the world by giving us robocars, got a fair bit of press, but a lot of it was around this video showing various robots falling down when doing the course:

What you don’t hear in this video are the cries of sympathy from the crowd of thousands watching — akin to when a figure skater might fall down — or the cheers as each robot would complete a simple task to get a point. These cheers and sympathies were not just for the human team members, but in an anthropomorphic way for the robots themselves. Most of the public reaction to this video included declarations that one need not be too afraid of our future robot overlords just yet. It’s probably better to watch the DARPA official video which has a little audience reaction.

Don’t be fooled as well by the lesser-known fact that there was a lot of remote human tele-operation involved in the running of the course.

Check out my Gallery of Photos from the DARPA Robotics Challenge Finals.

What you also don’t see in this video is just how very far the robots have come since the first round of trials in December 2013. During those trials the amount of remote human operation was very high, and there weren’t a lot of great fall videos because the robots had tethers that would catch them if they fell. (These robots are heavy and many took serious damage when falling, so almost all testing is done with a crane, hoist or tether able to catch the robot during the many falls which do occur.)

We aren’t yet anywhere close to having robots that could do tasks like these autonomously, so for now the research is in making robots that can do tasks with more and more autonomy with higher level decisions made by remote humans. The tasks in the contest were:

  • Starting in a car, drive it down a simple course with a few turns and park it by a door.
  • Get out of the car — one of the harder tasks as it turns out, and one that demanded a more humanoid form
  • Go to a door and open it
  • Walk through the door into a room
  • In the room, go up to a valve with circular handle and turn it 360 degrees
  • Pick up a power drill, and use it to cut a large enough hole in a sheet of drywall
  • Perform a surprise task — in this case throwing a lever on day one, and on day 2 unplugging a power cord and plugging it into another socket
  • Either walk over a field of cinder blocks, or roll through a field of light debris
  • Climb a set of stairs

The robots had an hour to do this, so they were often extremely slow, and yet, to the surprise of most, the audience — a crowd of thousands, with thousands more online — watched with fascination and cheering. Even when robots would take a step once a minute, or pause at a task for several minutes, or would get into a problem and spend 10 minutes getting fixed by humans as a penalty.

Matternet launches drone delivery platform

I often speak about deliverbots — the potential for ground-based delivery robots. There is also excitement about drone (UAV/quadcopter) based delivery. We’ve seen many proposed projects, including Amazon Prime Air, and much debate. Many years ago I was perhaps also the first to propose that drones deliver a defibrillator anywhere, and there are a few projects underway to do this.

Some of my students in the Singularity University Graduate Studies Program in 2011 really caught the bug, and their team project turned into Matternet — a company with a focus on drone delivery in the parts of the world without reliable road infrastructure. Example applications include moving lightweight items like medicines and test samples between remote clinics, and eventually much more.

I’m pleased to say they just announced moving to a production phase called Matternet One. Feel free to check it out.

When it comes to ground robots and autonomous flying vehicles, there are a number of different trade-offs:

  • Drones will be much faster, and have an easier time getting roughly to a location. It’s a much easier problem to solve. No traffic, and travel mostly as the crow flies.
  • Deliverbots will be able to handle much heavier and larger cargo, consuming a lot less energy in most cases. Though drones able to move 40kg are already out there.
  • Regulations stand in the way of both vehicles, but current proposed FAA regulations would completely prohibit the drones, at least for now.
  • Landing a drone in a random place is very hard. Some drone plans avoid that by lowering the cargo on a tether and releasing the tether.
  • Driving to a doorway or even gate is not super easy either, though.
  • Heavy drones falling on people or property are an issue that scares people, but they are also scared of robots on roads and sidewalks.
  • Drones probably cost more but can do more deliveries per hour.
  • Drones don’t have good systems in place to avoid collisions with other drones. Deliverbots won’t go that fast and so can stop quickly for obstacles seen with short range sensors.
  • Deliverbots have to not hit cars or pedestrians. Really not hit them.
  • Deliverbots might be subject to piracy (people stealing them) and drones may have people shoot at them.
  • Drones may be noisy (this is yet to be seen) particularly if they have heavier cargo.
  • Drones can go where there are no roads or paths. For ground robots, you need legs like the BigDog.
  • Winds and rain will cause problems for drones. Deliverbots will be more robust against these, but may have trouble on snow and ice.

In the long run, I think we’ll see drones for urgent, light cargo and deliverbots for the rest, along with real trucks for the few large and heavy things we need.

Rise of the selfie drones. Is tethered a good idea?

At CES, there were a couple of “selfie drones.” The Nixie is designed to be worn on your wrist, taken off, thrown, and then it returns to you after taking a photo or video. There was also the Zano which is fancier and claims it will follow you around, tracking you as you mountain bike or ski to make a video of you just as you do your cool trick.

The selfie is everywhere. In Rome, literally hundreds of vendors tried to sell me selfie sticks in all the major tourist areas, even with a fat Canon DSLR hanging from my neck. It’s become the most common street vendor gadget. (The blue LED wind up helicopters were driving me nuts anyway.)

I also had been thinking about this, and came up with a design that’s not as capable as these designs, but might be better. My selfie drone would be tethered. You would put down the base which would have the batteries and a retractable cord. Up would fly the camera drone, which would track your phone to get a great shot of you. (If it were for me, it would also offer panorama mode where it spun around at the top shooting a pano, with you or without you.)

This drone could not follow you as you do a sport, of course, or get above a certain height. But unlike the free designs, it would not get lost over the cliff in the winds, as I think might happen to a number of these free-flying selfie drones. It turns out that cliffs and outlook points are a common place to want to take these photos; they are the places where you really need a high view to capture you and what’s below you.

Secondly, with the battery on the ground, and only a short tether wire needed, you can have a much better camera as payload. Only needing a short flight time and not needing to carry the batteries means more capabilities for the drone.

It’s also less dangerous, and is unlikely to come under regulation because it physically can’t fly beyond a certain altitude or distance from the base. It could not shoot you from water or from over the edge of the cliff as the other drones could if you were willing to risk them.

My variation would probably be a niche. Most selfies are there to show off where you were, not to be top quality photos. Only more serious photographers would want one capable of hauling up a quality lens. Because mine probably wants a motor in the base to reel it back in (so you don’t have to wind the cables) it might even cost more, not less.

The pano mode would be very useful. In so many pano spots, the view is fantastic but is blocked by bushes and trees, and the spectacular pano shot is only available if you go up enough. For daytime use, a tethered drone would probably do fine. I’m still waiting on the Panono — a ball studded with cameras, from Berlin, that was funded on Kickstarter. You throw the ball up, and it figures out when it is at the top of its flight and shoots the panorama all at once. Something like that could also be carried by a tethered drone, and it has the advantage of not moving between shots, which a spinning drone would be at risk of. This is another thing I’ve wanted for a while. After my first experiments in airplane- and helicopter-based panoramas showed you really want to shoot everything all at once, I imagined decent digital cameras getting cheap enough to buy 16 of them and put them in a circle. Sadly, once cameras started getting that cheap, there were always better cameras that I now decided I needed that were too expensive to buy for that purpose.

The world needs standardized LEDs which adjust brightness

I’m sure, like me, you have lots of electronic gadgets that have status LEDs on them. Some of these just show the thing is on, some blink when it’s doing things. Of late, as blue LEDs have gotten cheap, it has been very common to put disturbingly bright blue LEDs on items.

These become much too bright at night, and can be a serious problem if the device needs to be in a bedroom or hotel room. Which things like laptops, phone and camera chargers and many other devices need to do. I end up putting small pieces of electrical tape over these blue LEDs.

I call upon the factories of Shenzhen and elsewhere to produce low-cost, standardized status LEDs. These LEDs will come with an included photosensor that measures the light in the room, and adjusts the LED so that it is just visible at that lighting level. Or possibly turns it off in the dark, because do we really need to know that our charger is on after we’ve turned off the lights?

Of course, one challenge is that the light from the LED gets into the photosensor. For most LEDs, the answer is pretty easy — put a filter that blocks out the colour of the LED over the photosensor. If you truly need a white LED, you could make a fancy circuit that turns it off for a few milliseconds every so often (the eye won’t notice that) and measures the ambient light while it’s off. All of this is very simple, and adds minimally to the cost. (In fact, the way you adjust the brightness of an LED is typically to turn it on and off very fast.)
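A sketch of the control loop such a part could run is below. The hardware reads and writes are placeholders (a real part would talk to an ADC and a PWM peripheral), and the brightness curve, blanking interval and dark cutoff are invented numbers.

```python
# Control-loop sketch for a self-dimming status LED. read_ambient() and
# set_led_duty() stand in for the part's ADC and PWM; all thresholds invented.
import time

def read_ambient():
    """Placeholder photosensor reading: 0.0 (pitch dark) .. 1.0 (bright room)."""
    return 0.5  # stub; a real part reads an ADC here

def set_led_duty(duty):
    """Placeholder PWM output for the LED: 0.0 (off) .. 1.0 (full brightness)."""
    print(f"LED duty: {duty:.2f}")  # stub; a real part writes a PWM register here

def status_led_loop():
    while True:                       # runs forever on the device
        set_led_duty(0.0)             # blank the LED for a few milliseconds...
        time.sleep(0.005)
        ambient = read_ambient()      # ...so its own light doesn't skew the reading
        if ambient < 0.02:
            set_led_duty(0.0)         # room is dark: status light off entirely
        else:
            set_led_duty(min(1.0, 0.05 + 0.5 * ambient))  # just visible at this level
        time.sleep(1.0)
```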

Get these made and make it standard that all our gear uses them for status LEDs. Frankly, I think it would be a good idea even for consumer goods that don’t get into our bedrooms. My TV rooms and computer rooms don’t need to look like Christmas scenes.

Day 3 of CES -- BMW and robots

Day 3 at CES started with a visit to BMW’s demo. They were mostly test driving new cars like the i3 and M series cars, but for a demo, they made the i3 deliver itself along a planned corridor. It was a mostly stock i3 electric car with ultrasonic sensors — and the traffic jam assist disabled. When one test driver dropped off the car, they scanned it, and then a BMW staffer at the other end of a walled course used a watch interface to summon that car. It drove empty along the line waiting for test drives, and then a staffer got in to finish the drive to the parking spot where the test driver would actually get in, unfortunately.

Also on display were BMW’s collision avoidance systems in a much more equipped research car with LIDARs, radar, etc. This car has some nice collision avoidance, including obstacle detection — the demo was to deliberately drive at an obstacle, and the vehicle hits the brakes for you. More gently than the Volvo I did this in a couple of years ago.

More novel is detection of objects you might hit from the side or back in low speed operations. If it looks like you might sideswipe or back into a parking column or another car, the vehicle hits the brakes on you (harder) to stop it from happening.

Insurers will like this — low speed collisions in parking lots are getting to be a much larger fraction of insurance claims. The high speed crashes get all the attention, but a lot of the payout is in low speed.

I concluded with a visit to my favourite section of CES — Eureka Park, where companies get small lower cost booths, with a focus on new technology. Also in the Sands were robotics, 3D printing, health, wearables and more — never enough time to see it all.

I have added 12 more photos to my gallery, with captions — check the last part out for notes on cool products I saw, from self-tightening belts and regenerating roller skates to phone-charging camping pots.

CES Day 2 Gallery and notes

After a short Day 1 at CES, a fuller Day 2 was filled with the usual equipment (cameras, TVs, audio and the like) and visits to several car booths.

I’ve expanded my gallery of notable things with captions with cars and other technology.

Lots of people were making demonstrations of traffic jam assist — simple self-driving at low speeds among other cars. All the demos were of a supervised traffic jam assist. This style of product (as well as supervised highway cruising) is the first thing that car companies are delivering (though they are also delivering various parking assist and valet parking systems.)

This makes sense as it’s an easy problem to solve. So easy, in fact, that many of them now admit they are working on making a real traffic jam assist, which will drive the jam for you while you do e-mail or read a book. This is a readily solvable problem today — you really just have to follow the other cars, and you are going slowly enough that, short of a catastrophic error like going full throttle, you aren’t going to hurt people no matter what you do, at least on a highway where there are no pedestrians or cyclists. As such, a full auto traffic jam assist should be the first product we see from car companies.

None of them will say when they might do this. The barrier is not so much technological as corporate — concern about liability and image. It’s a shame, because frankly the supervised cruise and traffic jam assist products are just in the “pleasant extra feature” category. They may help you relax a bit (if you trust them) as cruise control does, but they give you little else. A “read a book” level system would give people back time, and signal the true dawn of robocars. It would probably sell for lots more money, too.

The most impressive car is Delphi’s, a collaboration with folks out of CMU. The Delphi car, a modified Audi SUV, has no fewer than six 4-plane LIDARs and an even larger number of radars. It helps if you make the radars, as otherwise this is an expensive bill of materials. With all the radars, the vehicle can look left and right, and back-left and back-right, as well as forward, which is what you need for dealing with intersections where cross traffic doesn’t stop, and for changing lanes at high speed.

As a refresher: Radar gives you great information, including speed on moving objects, and sucks on stationary ones. It goes very far and sees through all weather. It has terrible resolution. LIDAR has more resolution but does not see as far, and does not directly give you speed. Together they do great stuff.

For notes and photos, browse the gallery

Near-perfect virtual reality of recent times and tourism

Recently I tried the Facebook/Oculus Rift Crescent Bay prototype. It has more resolution (I will guess 1280 x 1600 per eye or similar) and runs at 90 frames/second. It also has better head tracking, so you can walk around a small space with some realism — but only a very small space. Still, it was much more impressive than the DK2 and a sign of where things are going. I could still see a faint screen-door effect; they were annoyed that I could see it.

We still have a lot of resolution gain left to go. The human eye resolves about a minute of arc, which means about 5,000 pixels for a 90 degree field of view. Since we have some ability for sub-pixel resolution, it might be suggested that 10,000 pixels of width is needed to reproduce the world. But that’s not that many Moore’s law generations from where we are today. The graphics rendering problem is harder, though with high frame rates, if you can track the eyes, you need only render full resolution where the fovea of the eye is. This actually gives a boost to onto-the-eye systems like a contact lens projector or the rumoured Magic Leap technology, which may project with lasers onto the retina, as they actually need to render far fewer pixels. (Get really clever, and realize the optic nerve only has about 600,000 neurons, and in theory you can get full real-world resolution with half a megapixel if you do it right.)
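The arithmetic behind those pixel counts is simple enough to show; the factor of two is the same rough allowance for sub-pixel acuity made above:

```python
# The arithmetic behind the 5,000 / 10,000 pixel figures above.
arcmin_per_degree = 60
fov_degrees = 90
pixels_across = fov_degrees * arcmin_per_degree    # ~1 arcminute per pixel -> 5,400
with_subpixel = 2 * pixels_across                  # rough allowance for sub-pixel acuity
print(pixels_across, with_subpixel)                # 5400 10800
```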

Walking around Rome, I realized something else — we are now digitizing our world, at least the popular outdoor spaces, at a very high resolution. That’s because millions of tourists are taking billions of pictures every day of everything from every angle, in every lighting. Software of the future will be able to produce very accurate 3D representations of all these spaces, both with real data and reasonably interpolated data. They will use our photographs today and the better photographs tomorrow to produce a highly accurate version of our world today.

This means that anybody in the future will be able to take a highly realistic walk around the early 21st century version of almost everything. Even many interiors will be captured in smaller numbers of photos. Only things that are normally covered or hidden will not be recorded, but in most cases it should be possible to figure out what was there. This will be trivial for fairly permanent things, like the ruins in Rome, but even possible for things that changed from day to day in our highly photographed world. A bit of AI will be able to turn the people in photos into 3-D animated models that can move within these VRs.

It will also be possible to extend this VR back into the past. The 20th century, before the advent of the digital camera, was not nearly so photographed, but it was still photographed quite a lot. For persistent things, the combination of modern (and future) recordings with older, less frequent and lower resolution recordings should still allow the creation of a fairly accurate model. The further back in time we go, the more interpolation and eventually artistic interpretation you will need, but very realistic seeming experiences will be possible. Even some of the 19th century should be doable, at least in some areas.

This is a good thing, because as I have written, the world’s tourist destinations are unable to bear the brunt of the rising middle class. As the Chinese, Indians and other nations get richer and begin to tour the world, their greater numbers will overcrowd those destinations even more than the waves of Americans, Germans and Japanese that already mobbed them in the 20th century. Indeed, with walking chairs (successors of the BigDog Robot) every spot will be accessible to everybody of any level of physical ability.

VR offers one answer to this. In VR, people will visit such places and get the views and the sounds — and perhaps even the smells. They will get a view captured at the perfect time in the perfect light, perhaps while the location is closed for digitization and thus empty of crowds. It might be, in many ways, a superior experience. That experience might satisfy people, though some might find themselves more driven to visit the real thing.

In the future, everybody will have had a chance to visit all the world’s great sites in VR while they are young. In fact, doing so might take no more than a few weekends, changing the nature of tourism greatly. This doesn’t alter the demand for the other half of tourism — true experience of the culture, eating the food, interacting with the locals and making friends. But so much commercial tourism — people being herded in tour groups to major sites and museums, then eating at tour-group restaurants — can be replaced.

I expect VR to reproduce the sights and sounds and a few other things. Special rooms could also reproduce winds and even some movement (for example, the feeling of being on a ship.) Right now, walking is harder to reproduce. With the OR Crescent Bay you could only walk 2-3 feet, but one could imagine warehouse size spaces or even outdoor stadia where large amounts of real walking might be possible if the simulated surface is also flat. Simulating walking over rough surfaces and stairs offers real challenges. I have tried systems where you walk inside a sphere but they don’t yet quite do it for me. I’ve also seen a system where you are held in place and move your feet in slippery socks on a smooth surface. Fun, but not quite there. Your body knows when it is staying in one place, at least for now. Touching other things in a realistic way would require a very involved robotic system — not impossible, but quite difficult.

Also interesting will be immersive augmented reality. There are a few ways I know of that people are developing:

  • With a VR headset, bring in the real world with cameras, modify it and present that view to the screens, so they are seeing the world through the headset. This provides a complete image, but the real world is reduced significantly in quality, at least for now, and latency must be extremely low.
  • With a semi-transparent screen, show the augmentation with the real world behind it. This is very difficult outdoors, and you can’t really stop bright items from the background mixing with your augmentation. Focus depth is an issue here (and is with most other systems.) In some plans, the screens have LCDs that can go opaque to block the background where an augmentation is being placed.
  • CastAR has you place retroreflective cloth in your environment, and it can present objects on that cloth. They do not blend with the existing reality, but replace it where the cloth is.
  • Projecting into the eye with lasers from glasses, or on a contact lens can be brighter than the outside world, but again you can’t really paint over the bright objects in your environment.

Getting back to Rome, my goal would be to create an augmented reality that let you walk around ancient Rome, seeing the buildings as they were. The people around you would be converted to Romans, and the modern roads and buildings would be turned into areas you can’t enter (since we don’t want to see the cars, and turning them into fast chariots would look silly.) There have been attempts to create a virtual walk through ancient Rome, but being able to do it in the real location would be very cool.

The paradox of Bitcoin proof-of-work mining

Everybody knows about bitcoin, but fewer know what goes on under the hood. Bitcoin provides the world a trustable ledger for transactions without trusting any given party such as a bank or government. Everybody can agree with what’s in the ledger and what order it was put there, and that makes it possible to write transfers of title to property — in particular the virtual property called bitcoins — into the ledger and thus have a money system.

Satoshi’s great invention was a way to build this trust in a decentralized way. Because there are rewards, many people would like to be the next person to write a block of transactions to the ledger. The Bitcoin system assures that the next person to do it is chosen at random. Because the winner is chosen at random from a large pool, it becomes very difficult to corrupt the ledger. You would need six winners in a row, each chosen at random from a large group, to all be part of your conspiracy. That’s next to impossible unless your conspiracy is so large that half the participants are in it.

How do you win this lottery to be the next randomly chosen ledger author? You need to burn computer time working on a math problem. The more computer time you burn, the more likely it is you will hit the answer. The first person to hit the answer is the next winner. This is known as “proof of work.” Technically, it isn’t proof of work, because you can, in theory, hit the answer on your first attempt, and be the winner with no work at all, but in practice, and in aggregate, this won’t happen. In effect, it’s “proof of luck,” but the more computing you throw at the problem, the more chances of winning you have. Luck is, after all, an imaginary construct.
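A toy version of that lottery is easy to write down. Real Bitcoin mining double-SHA-256 hashes an 80-byte block header against a far harder target, and nobody mines in Python, but the sketch below shows the idea of trying nonces until the hash is small enough:

```python
# Toy proof-of-work sketch: vary a nonce until the double SHA-256 of the data
# falls below a target. The easy target here means ~1 in 65,536 hashes wins.
import hashlib

def mine(block_data: bytes, target: int):
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "little")
        h = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce, h.hex()   # this nonce "wins the lottery" for the block
        nonce += 1

nonce, digest = mine(b"example transactions", target=2**240)
print(nonce, digest)
```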

Because those who win are rewarded with freshly minted “mined” bitcoins and transaction fees, people are ready to burn expensive computer time to make it happen. And in turn, they assure the randomness and thus keep the system going and make it trustable.

Very smart, but also very wasteful. All this computer time is burned to no other purpose. It does no useful work — and there is debate about whether it inherently can’t do useful work — and so a lot of money is spent on these lottery tickets. At first, existing computers were used, and the main cost was electricity. Over time, special purpose computers (dedicated processors or ASICs) became the only effective tools for the mining problem, and now the cost of these special processors is the main cost, and electricity the secondary one.

Money doesn’t grow on trees or in ASIC farms. The cost of mining is carried by the system. Miners get coins and will eventually sell them, wanting fiat dollars or goods, and affecting the price. Markets, being what they are, over time bring the cost of being a bitcoin miner and the reward closer and closer together. If the reward gets too much above the cost, people will invest in mining equipment until it normalizes. The miners get real, but not extravagant, profits. (Early miners got extravagant profits not because of mining but because of the appreciation of their coins.)

What this means is that the cost of operating Bitcoin mostly goes to the companies selling ASICs, and to a lesser extent the power companies. Bitcoin has made a funnel of money — about $2M a day — that mostly goes to people making chips that do absolutely nothing, and to fuel burned to calculate nothing. Yes, the miners are providing the backbone of Bitcoin, which I am not calling nothing, but they could do this with any fair, non-centralized lottery, whether it burned CPU or not. If we can think of one.

(I will note that some point out that the existing fiat money system also comes with a high cost, in printing and minting and management. However, this is not a makework cost, and even if Bitcoin is already more efficient, that doesn’t mean there should not be effort to make it even better.)

CPU/GPU mining

Naturally, many people have been bothered by this for various reasons. A large fraction of the “alt” coins differ from Bitcoin primarily in the mining system. The first round of coins, such as Litecoin and Dogecoin, used a proof-of-work system that was much more difficult to solve with an ASIC. The theory was that this would make mining more democratic — people could do it with their own computers, buying off-the-shelf equipment. This has run into several major problems:

  • Even if you did it with your own computer, you tended to need to dedicate that computer to mining in the end if you wanted to compete
  • Because people already owned hardware, electricity became a much bigger cost component, and that waste of energy is even more troublesome than ASIC buying
  • Over time, mining for these coins moved to high-end GPU cards. This, in turn caused mining to be the main driver of demand for these GPUs, drying up the supply and jacking up the prices. In effect, the high end GPU cards became like the ASICs — specialized hardware being bought just for mining.
  • In 2014, vendors began advertising ASICs for these “ASIC proof” algorithms.
  • When mining can be done on ordinary computers, it creates a strong incentive for thieves to steal computer time from insecure computers (ie. all computers) in order to mine. Several instances of this have already become famous.

The last point is challenging. It’s almost impossible to fix. If mining can be done on ordinary computers, then they will get botted. In this case a thief will even mine at a rate that can’t pay for the electricity, because the thief is stealing your electricity too.

The failure of the pan-tilt camera in video calls

This year, we stayed with Kathryn’s family for the holidays, so I attended dinner in my own mother’s home via Skype. Once again, the technology was frustrating. And it need not be.

There were many things that could be better. Those of us who Skype regularly don’t appreciate that there is still hassle for those not used to it. Setting up a good videoconferencing setup is still work. As I have found is always the case in a group-to-solos videoconference, the group folks do not care nearly as much about the conference as the remote solos, so a fundamental rule of design here is that if the remotes can do something, they should be the ones doing it, since they care the most. If there is to be UI, leave the UI to the remotes (who are sitting at computers and care) and not to the meeting room locals. Many systems get this exactly backwards — they imagine the meeting room is the “master” and thus should have the complex UI.

In this family setting, however, the clearest problem for me is that no camera can show the whole room. It’s like sitting at the table unable to move your head, with blinders on. You can’t really be part of the group. You also have to be away from the table so everybody there can see you, since screens are only visible over a limited viewing angle.

One clear answer to this is the pan/tilt camera, which is to say a webcam with servo motors that allow it to look around. This technology is very cheap — you’ll find pan/tilt IP security cameras online for $30 or less, and there are even some low priced Chinese made pan/tilt webcams out there — I just picked another up for $20. I also have the Logitech Orbit AF. This was once a top of the line HD webcam, and still is very good, but Logitech no longer makes it. Logitech also makes the BCC950 — a $200 conference room pan/tilt webcam which has extremely good HD quality and a built-in hardware compressor for 1080p video that is superb with Skype. We have one of these, and it advertises “remote control” but in fact all that means is there is an infrared remote the people in the room can use to steer the camera. In our meetings, nobody ever uses this remote for the reason I specify above — the people in the room aren’t the motivated ones.

This is compounded by the fact that the old method — audio conference speakerphones — has a reasonably well understood UI. Dial the conference bridge, enter a code, and let the remotes handle their own calling in. Anything more complex than that gets pushback — no matter how much better it is.

The RV of the future

Over the years, particularly after Burning Man, I’ve written posts about how RVs can be improved. This year I did not use a regular RV but rather a pop-up camping trailer. However, I thought it was a good time to summarize a variety of the features I think should be in every RV of the future.

Smart Power

We keep talking about smart power and smart grids but power is expensive and complex when camping, and RVs are a great place for new technologies to develop.

To begin with, an RV power system should integrate the deep cycle house batteries, a special generator/inverter system, smart appliances and even the main truck engine where possible.

Today the best small generators are inverter based. Rather than generating AC directly from an 1800 rpm motor and alternator, they have a variable-speed engine and produce the AC via an inverter. These are smaller, more efficient, lighter and quieter than older generators, and produce cleaner power. Today they are more expensive, but not more expensive than most RV generators. RV generators are usually sized at 3,600 to 4,000 watts in ordinary RVs — that size dictated by the spike of starting up the air conditioner compressor when something else, like the microwave, is running.

An inverter based generator combined with the RV’s battery bank doesn’t have to be that large. It can draw power for the surge of starting a motor from the battery. The ability to sustain 2,000 watts is probably enough, with a few other tricks. Indeed, it can provide a lot of power even with the generator off, though the generator should auto-start if the AC is to be used, or the microwave will be used for a long time.

By adding a data network, one can be much more efficient with power. For example, the microwave could just turn off briefly when the thermostat wants to start the AC’s compressor, or even the fans. The microwave could also know if it’s been told to cook for 30 seconds (no need to run generator) or 10 minutes (might want to start it.) It could also start the generator in advance of cooling need.
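A sketch of that kind of coordination logic is below. The appliance wattages, the 2,000-watt generator figure from above, and the battery surge allowance are all illustrative assumptions.

```python
# Sketch of load coordination on a smart RV power bus; wattages, the 2,000 W
# generator limit and the battery surge allowance are illustrative assumptions.
GENERATOR_SUSTAINED_W = 2000

def plan_loads(requests, battery_surge_w=1500):
    """requests: list of (appliance, watts, seconds). Returns what runs now,
    what gets briefly deferred, and whether to auto-start the generator."""
    running, deferred, total = [], [], 0
    for appliance, watts, seconds in sorted(requests, key=lambda r: -r[1]):
        if total + watts <= GENERATOR_SUSTAINED_W + battery_surge_w:
            running.append(appliance)
            total += watts
        else:
            deferred.append(appliance)   # e.g. pause the microwave while the AC starts
    start_generator = total > battery_surge_w or any(s > 60 for _, _, s in requests)
    return running, deferred, start_generator

# The AC compressor surge runs off battery plus generator; the microwave waits a moment.
print(plan_loads([("AC compressor start", 2800, 3), ("microwave", 1200, 600)]))
```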

If the master computer has access to weather data, it could even decide what future power needs for heating fans and air conditioning will be, and run the generator appropriately. With a GPS database, it could even know the quiet times of the campsite it’s in and respect them.

A modern RV should have all-LED lighting. Power use is so low on those that the lights become a blip in power planning. Only the microwave, AC and furnace fan would make a difference. Likewise today’s TVs, laptops and media players which all draw very few watts.

A smart power system could even help plugging into shore power, particularly a standard 15a circuit. Such circuits are not enough to start many ACs, or to run the AC with anything else. With surge backup from the battery, an RV could plug into an ordinary plug and act almost like it had a high power connection.

To go further, for group camping, RVs should have the ability to form an ad-hoc power grid. This same ability is already desired in the off-grid world, so it need not be developed just for RVs. RVs able to take all sorts of input power could also eventually get smart power from RV campsites. After negotiation, a campsite might offer 500v DC at 12 amps instead of 115v AC, allowing the largest dual-AC RVs to plug into small wires.

Augmented Reality as documentation and the "context" button

I’ve been a little skeptical of many augmented reality apps I’ve seen, feeling they were mostly gimmick and not actually useful.

I’m impressed by this new one from Audi where you point your phone (iPhone only, unfortunately) at a feature on your car, and you get documentation on it. An interesting answer to car user manuals that are as thick as the glove compartment and the complex UIs they describe.

Like so many apps, however, this one will suffer the general problem of the amount of time it takes to fumble for your phone, unlock it, invoke an app, and then let the app do its magic. Of course fumbling for the manual and looking up a button in the index takes time too.

I’ve advocated for a while that phones become more aware of their location, not just in the GPS sense, but in the sense of “I’m in my car” and know what apps to make very easy to access, and even streamline their use. This can include allowing these apps to be right on the lock screen — there’s no reason to need to unlock the phone to use an app like this one. In fact, all the apps you use frequently in your car that don’t reveal personal info should be on the lock screen when you get near the car, and some others just behind it. The device can know it is in the car via the bluetooth in the car. (That bluetooth can even tell you if you’re in another car of a different make, if you have a database mapping MAC addresses to car models.)

Bluetooth transmitters are so cheap and with BT Low Energy they can last a year on a watch battery, so one of the more compelling “Internet of Things” applications — that’s also often a gimmick term — is to scatter these devices around the world to give our phones this accurate sense of place.

Some of this philosophy is expressed in Google Now, a product that goes the right way on many of these issues. Indeed, the Google Now cards are one of the more useful aspects of Glass, which otherwise is inherently limited in its user interface making it harder for you to ask Glass things than it is to ask a phone or desktop.

The car app has some wrinkles of course. Since you don’t always have an iPhone (or may not have your phone even if you own an iPhone) you still need the thick manual, though perhaps it can be in the trunk. And I will wager that some situations, like odd lighting, may make it not as fast as in the video.

By and large, pointing your phone at QR codes to learn more has not caught on super well, in part again because it takes time to get most phones to the point where they are scanning the code. Gesture interfaces can help there, but you can only remember and parse a limited number of gestures, so many applications call out for being the special one. Still, a special shake could mean “Look around you in all the ways you can to figure out if there is something in this location, time or camera view that I might want you to process.” Constant looking eats batteries, which is why you need such a shake.

Even though phones have slowly been losing all their physical buttons, I’ve proposed putting one back: a physical button I call the “context” button. “Figure out the local context, and offer me the things that might be particularly important in this context.” This would offer many things:

  • Standing in front of a restaurant or shop, the reviews, web site or app of the shop
  • In the car, all the things you like in the car, such as maps/nav, the manual etc.
  • In front of a meeting room, the schedule for that room and ability to book it
  • At a tourist attraction, info on it.
  • In a hotel, either the ability to book a room, or if you have a room, hotel services

There are many contexts, but you can usually sort them so that the most local and the most rare come first. So if you are in a big place you frequent, such as the office complex you work at, the general functions for your company would not be high on the list unless you manually bumped them.
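A toy ranking along those lines might look like the sketch below; the scoring weights and the candidate data are made up, purely to show most-local and most-rare first, with a manual bump on top.

```python
# Toy "context button" ranking: most local and most rarely-visited first,
# with manually pinned contexts on top. Weights and data are invented.
def rank_contexts(candidates):
    """candidates: dicts with 'name', 'distance_m', 'visits_per_month', 'pinned'."""
    def score(c):
        locality = 1.0 / (1.0 + c["distance_m"])      # closer is better
        rarity = 1.0 / (1.0 + c["visits_per_month"])  # rare places beat the daily office
        return (c.get("pinned", False), locality * rarity)
    return sorted(candidates, key=score, reverse=True)

print(rank_contexts([
    {"name": "my office building", "distance_m": 5, "visits_per_month": 22},
    {"name": "meeting room 4B door", "distance_m": 1, "visits_per_month": 2},
    {"name": "restaurant across the street", "distance_m": 40, "visits_per_month": 0},
]))
# -> meeting room door, then the restaurant, then the office's general functions
```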

Of course, one goal is that car UIs will become simpler and self-documenting, as cars get screens. Buttons will still do the main functions you do all the time — and which people already understand — but screens will do the more obscure things you might need to look up in the manual, and document it as they go. You obviously can’t ever do something you need to look up in the manual while driving.

There is probably a trend that the devices in our lives with lots of buttons and complex controls and modes, like home electronics, cars and some appliances, will move to having screens in their UIs and thus not need the augmented reality.

RAID, backyard backup and the future of backup

Had my second RAID failure last week. In the end, things were OK, but the reality is that many RAID implementations are much more fragile than they should be. Write failures on a drive caused the system to hang. A hard reset caused the RAID to be marked dirty, which meant it would not boot until falsely marked clean (and a few other hoops), leaving it with some minor filesystem damage that was repairable. Still, I believe that a proper RAID-like system should have as its maxim that the user is never worse off because they built a RAID than if they had not done so. This is not true today, both due to the fragility of the systems, and the issues I have outlined before with deliberately replacing a disk in a RAID, where it does not make use of the still-good but aging old disk when rebuilding the replacement.

A few years ago I outlined a plan for disks to come as two-packs for easy, automatic RAID because disks are so cheap that everybody should be doing it. The two-pack would have two SATA ports on it, but if you only plugged in one, it would look like a single disk, and be a RAID-1 inside. If you gave it a special command, it could look like other things, including a RAID-0, or two drives, or a JBOD concatenation. If you plugged into the second port it would look like two disks, with the RAID done elsewhere.

I still want this, but RAID is not enough. It doesn’t save you from file deletion, or destruction of the entire system. The obvious future trend is network backup, which is both backup and offsite. The continuing issue with network backup is that some people (most notably photographers and videographers) generate huge amounts of data. I can come back from a weekend with 16GB of new photos, and that’s a long slog for network backup over DSL with limited upstream. To work well, network backup also needs to understand all databases, as a common database file might be gigabytes and change every time there is a minor update to a database record. (Some block-level incrementalism can work here if the database is not directly understood.)
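As a rough illustration of that block-level incrementalism, here is a minimal Python sketch. The 4MB block size and the shape of the stored hash list are assumptions; the point is that a backup client only needs to re-upload the blocks whose hashes changed since the last run.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4MB blocks; a real client would tune this

def block_hashes(path):
    """Return one SHA-256 digest per fixed-size block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(path, previous_hashes):
    """Return (indexes of blocks to re-upload, current hash list)."""
    current = block_hashes(path)
    changed = [i for i, h in enumerate(current)
               if i >= len(previous_hashes) or h != previous_hashes[i]]
    return changed, current
```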

Network backup is also something that should be automatic. There are already peer-to-peer network backup systems that make use of the disks of friends or strangers (encrypted, of course), but it would be nice if this could “just happen” on any freshly installed computer unless you turn it off. The user must keep the key stored somewhere safe, which is not zero-UI, though if all they want is to handle file deletion and rollback they can get away without it.
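To sketch the “encrypted of course” part: the key is generated and held only by the user, and peers never see anything but ciphertext. This example uses the Fernet wrapper from the third-party Python cryptography package purely for illustration; a real backup client would choose its own scheme and key-management story.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Generated once, on the user's own machine. This is the key the user must
# keep somewhere safe; whoever holds it can read the backups.
key = Fernet.generate_key()

def encrypt_for_peer(plaintext: bytes) -> bytes:
    """Encrypt a backup chunk before it ever leaves the machine."""
    return Fernet(key).encrypt(plaintext)

def decrypt_from_peer(ciphertext: bytes) -> bytes:
    """Restore: only someone holding the key can recover the data."""
    return Fernet(key).decrypt(ciphertext)
```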

Another option that might be interesting would be the outdoor NAS. Many people now like to use NAS boxes over gigabit networks. This is not as fast as SATA with a flash drive, or RAID, or even modern spinning disk, but it’s fast enough for many applications.

An interesting approach would be a NAS designed to be placed outdoors, away from the house, such as in the back corner of a yard, so that it would survive a fire or earthquake. The box would be waterproof and modestly fireproof, but ideally it is located somewhere a fire is unlikely to reach. It could either be powered by power-over-ethernet or have its own power and even use wifi (in which case it is only suitable for backup, not as a live NAS).

This semi-offsite backup would be fast and cheap (network storage tends to be much more expensive than local drives.) It would be encrypted, of course, so that nobody can steal your data. Encryption would be done in the clients, not the NAS, so even somebody who taps the outside wire would get nothing.

This semi-offsite backup could be used in combination with network backup. Large files and new files would be immediately sent to the backyard backup. The most important files could then go to network backup, or all of them, just much more slowly.

A backyard backup could also be shared by neighbours, especially on wifi, which might make it quite cost effective. Due to encryption, nobody could access their neighbour’s data.

If neighbours are going to cooperate, this can also be built by just sharing servers or NAS boxes in 2 or more houses. This provides decent protection and avoids having to be outside, but there is the risk that some fires burn down multiple houses depending on the configuration.

A backyard backup would be so fast that many would reverse what I said above, and have no need for RAID. Files would be mirrored to the backyard backup within seconds or minutes. RAID would only be needed for those who need to have systems that won’t even burp in a disk failure (which is a rare need in the home) or which must not lose even a few minutes of data.

The laptop in the tablet world

I have owned a laptop for decades, and I’ve always gone for the “small and light” laptop class because, as a desktop user, my laptop is only for travel, and ease of carrying is thus very important. Of course, once I get there I envy the larger screens, better keyboards and other features of the bigger laptops people carry, but I have generally been happy with the decision.

Others have gone for “desktop replacement” laptops, which are powerful, big and heavy. Those folks don’t have a desktop; at most they plug their laptop into an external monitor and other peripherals at home. The laptop is a bitch to carry, but of course all their files come with it.

Today, the tablet is changing that equation. I now find that when I am going into a situation where I want a minimal device that’s easy to carry, the tablet is the answer, and even better the tablet and bluetooth keyboard. I even carry a keyboard that’s a fair bit larger than the tablet, but still very light compared to a laptop. When I am in a meeting, or sitting attending an event, I am not going to do the things I need the laptop for. Well, not as much, anyway. On the airplane, the tablet is usually quite satisfactory — in fact better when in coach, though technically the keyboard is not allowed on a plane. (My tablet can plug in a USB keyboard if needed.)

Planes are a particular problem. It’s not safe to check LCD screens in your luggage, so any laptop screen has to come aboard with you, and this is a pain if the computer is heavy.

With the tablet dealing with the “I want small and light” situations, what is the right laptop answer?

One obvious solution is the “convertible tablet” computers being offered by various vendors. These are laptops where the screen is a tablet that can be removed. These tend to be Windows devices, and somewhat expensive, but the approximate direction is correct.

Another option would be to break the laptop up into 3 or more components:

  • The tablet, running your favourite tablet OS
  • A keyboard, of your choice, which can be carried easily with the tablet for typing-based applications. Able to hold the tablet and connect to it in a way that is permitted on the plane. Touchpad or connection for a mouse.
  • A “block,” whose form factor can now vary widely, containing the rest of the hardware.

Documentary on the first six programmers funded and going into production

I am pleased to report that a documentary on the first software developers, the 6 women who were hired to program the ENIAC — the first electronic computer — has after many years received a funding grant sufficient to produce it.

Back in the 90s, my close friend Kathy Kleiman was researching computer history and came upon photos of the ENIAC and wondered who the unnamed women in the photos were. At first, she was told they were models hired to decorate the computer, but further investigation revealed they were the ones programming it.

The six women were professional computers, which was a job title early in the century — people with math degrees hired to perform calculations, in particular ballistic firing tables for the war. Because of the war, skilled women got these jobs, and the best of the team were asked to write software to get the machine to do the tables they were doing by hand. They were given no instruction, just the wiring diagrams and other machine designs, and created the first software applications, including inventing things like the first sort routine and many other things fundamental to our profession.

Because nobody knew the history of these founders of our profession, Kathy sought them out, and was able to record video interviews with 4 of them. These interviews have languished in the can for many years, and alas, all six of the women are now deceased. I’ve been trying to help for many years, but at a fortuitous lunch I was able to make the introductions necessary to arrange funding through the efforts and support of my friends Megan Smith, Anne Wojcicki and Lucy Southworth.

Kathy got to make the announcement at Google I/O in a special session about female techmakers featuring an array of accomplished women in technology. She showed a small section of the movie’s trailer. Her section can be seen 9 minutes into the video, and the programmers at 11:30. (Megan accidentally called me Brad Feldman, but I forgive her :-)

Software development is perhaps the most important new profession of the 20th century — and there were many — and the story of the six unsung founders of that profession will finally be presented to a large audience. I’ll announce when the documentary is released.

We need a security standard for USB and other plug-in devices

Studies have shown that if you leave USB sticks on the ground outside an office building, 60% of them will get picked up and plugged into a computer in the building. If you put the company logo on the sticks, closer to 90% of them will get picked up and plugged in.

USB sticks, as you probably know, can pretend to be CD-ROMs, which means that on many Windows systems the computer will execute an “autorun” binary on the stick, giving it control of your machine. (And many people run as administrator.) While other systems may not do this, almost every system allows a USB stick to pretend to be a keyboard, and as a keyboard it can also easily take full control of your machine, waiting for the machine to be idle, if need be, so you won’t see it. Plugging malicious sticks into computers is how Stuxnet took over Iranian centrifuges, and yet we all do this.

I wish we could trust unknown USB and bluetooth devices, but we can’t, not when they can be keyboards, pointing devices and drives we might run code from.

New OS generations have to create a trust framework for plug-in hardware, which includes USB and FireWire, and to a lesser degree even eSATA.

When we plug in any device that might have power over the machine, the system should ask us if we wish to trust it, and how much. By default, we would give minimum trust to drives, and no trust to pointing devices, keyboards and the like. CD-ROMs would not get the ability to autorun, though that could be granted by those willing to take the risk, poor a choice as it is.

Once we grant the trust, the device should be able to store a key we provide. After that, the device can use this key to authenticate itself and regain that trust when plugged in again. Going forward, all new devices should do this.
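Here is a rough sketch, in Python, of what the host side of such a scheme could look like: a random key is provisioned at first trust, and on later plug-ins the host challenges the device with a nonce and checks the HMAC response. The function names and the in-memory trust store are hypothetical.

```python
import hashlib
import hmac
import secrets

trusted_keys = {}  # device_id -> provisioned key (the host's trust store)

def provision(device_id: str) -> bytes:
    """First-time trust: generate a key, remember it, and hand it to the
    device (which stores it in its own non-volatile memory)."""
    key = secrets.token_bytes(32)
    trusted_keys[device_id] = key
    return key

def verify(device_id: str, respond) -> bool:
    """Re-connection: challenge the device with a fresh nonce and check that
    its answer is HMAC(key, nonce). `respond` stands in for the device
    answering the challenge over the bus."""
    key = trusted_keys.get(device_id)
    if key is None:
        return False
    nonce = secrets.token_bytes(16)
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, respond(nonce))

# A well-behaved device would answer with its stored copy of the key:
device_key = provision("keyboard-1234")
assert verify("keyboard-1234",
              lambda n: hmac.new(device_key, n, hashlib.sha256).digest())
```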

The problem is that existing devices don’t, and people won’t accept obsoleting them all. Fortunately, devices that look like writable drives can simply have a token written to the drive. The token would change on every connection, making it hard to clone.
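For those legacy drives, the host could do something like the following sketch: write a random token file at each trusted connection, and accept the drive only if it still carries the last token. The file name and the in-memory record of expected tokens are assumptions.

```python
import os
import secrets

TOKEN_FILE = ".host_trust_token"   # hypothetical hidden file on the drive
expected_tokens = {}               # drive_id -> token the host last wrote

def enroll(drive_id: str, mount_point: str) -> None:
    """Called once, when the user first confirms trust in the drive."""
    token = secrets.token_hex(32)
    with open(os.path.join(mount_point, TOKEN_FILE), "w") as f:
        f.write(token)
    expected_tokens[drive_id] = token

def check_and_rotate(drive_id: str, mount_point: str) -> bool:
    """Accept the drive only if it carries the token written last time,
    then immediately replace it so a cloned image quickly goes stale."""
    path = os.path.join(mount_point, TOKEN_FILE)
    try:
        with open(path) as f:
            ok = f.read().strip() == expected_tokens.get(drive_id)
    except OSError:
        ok = False
    if ok:
        new_token = secrets.token_hex(32)
        with open(path, "w") as f:
            f.write(new_token)
        expected_tokens[drive_id] = new_token
    return ok
```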

Some devices can be given a unique identifier, or a semi-unique one. For devices that have any form of serial number, this can be remembered and the trust level associated with it. Most devices at least have a lot of identifiers related to the make and model of the device. Trusting this would mean that once you trusted a keyboard, any keyboard of the same make and model would also be trusted. This is not super-secure but prevents generic attacks — attacks would have to be directly aimed at you. To avoid a device trying to pretend to be every type of keyboard until one is accepted, the attempted connection of too many devices without a trust confirmation should lock out the port until a confirmation is given.
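A sketch of that bookkeeping, with hypothetical names: trust is remembered against whatever identifiers the device exposes (a serial number if present, otherwise make and model), and too many unconfirmed plug-ins lock the port until the user explicitly confirms something.

```python
MAX_UNCONFIRMED = 3   # unknown devices tolerated before the port locks (an assumption)

trust_levels = {}     # identifier tuple -> trust level, e.g. "keyboard", "drive-only"
unconfirmed = 0
port_locked = False

def device_identifier(dev: dict):
    # Prefer a serial number; otherwise fall back to make/model, which
    # trusts any device of that model (weaker, but blocks generic attacks).
    return (dev["vendor"], dev["model"], dev.get("serial"))

def on_plug_in(dev: dict, ask_user):
    """ask_user(dev) returns a trust level confirmed by the user, or None."""
    global unconfirmed, port_locked
    ident = device_identifier(dev)
    if not port_locked and ident in trust_levels:
        return trust_levels[ident]      # silently re-trusted
    unconfirmed += 1
    if unconfirmed > MAX_UNCONFIRMED:
        port_locked = True              # stop a device cycling through identities
    level = ask_user(dev)               # an explicit confirmation is always allowed
    if level:
        trust_levels[ident] = level
        unconfirmed = 0
        port_locked = False
    return level
```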

The protocol for verification should be simple so it can be placed on an inexpensive chip that can be mass produced. In particular, the industry would mass produce small USB pass-through authentication devices that should cost no more than $1. These devices could be stuck on the plugs of old devices to make it possible for them to authenticate. They could look like hubs, or be truly pass-through.

All of this would make USB attacks harder. In the other direction, I believe, as I have written before, that there is value in creating classes of untrusted or less trusted hardware. For example, an untrusted USB drive might be marked so that executable code can’t be loaded from it, only classes of files and archives that are well understood by the OS. And an untrusted keyboard would only be allowed to type in boxes that say they will accept input from an untrusted keyboard. You could write the text of emails with the untrusted keyboard, but not enter URLs into the URL bar or passwords into password boxes. (Browser forms would have to indicate that an untrusted keyboard could be used.) In all cases, a mini text editor would be available for use with the untrusted keyboard, from which one could cut and paste, using a trusted device, into other boxes.
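A toy sketch of that input-routing rule, with made-up field structures: keystrokes from an untrusted keyboard are delivered only to fields that have explicitly opted in.

```python
def deliver_keystroke(key: str, keyboard_trusted: bool, focused_field: dict) -> bool:
    """Route one keystroke; returns True if it was accepted."""
    if keyboard_trusted or focused_field.get("accepts_untrusted", False):
        focused_field["buffer"] = focused_field.get("buffer", "") + key
        return True
    return False  # silently dropped for URL bars, password boxes, etc.

# Example fields: an email body opts in, a password box does not.
email_body = {"buffer": "", "accepts_untrusted": True}
password_box = {"buffer": "", "accepts_untrusted": False}

deliver_keystroke("h", keyboard_trusted=False, focused_field=email_body)    # accepted
deliver_keystroke("h", keyboard_trusted=False, focused_field=password_box)  # dropped
```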

A computer that as yet has no trusted devices of a given class would have to trust the first one plugged in. That is, if you have a new computer that’s never had a keyboard, it has to trust its first keyboard unless there is another way to confirm trust when that first keyboard is plugged in. Fortunately, mobile devices all have built-in input hardware that can be trusted at manufacture, avoiding this issue. If a computer has lost all its input devices and needs a new one, you could either trust implicitly, or provide a pairing code to type on the new keyboard (which would not work for a mouse) to show you are really there. But this is only a risk on systems that normally have no input device at all.

For an even stronger level of trust, we might want to be able to encrypt the data going through. This stops the insertion of malicious hubs or other MITM intercepts that might try to log keystrokes or other data. Encryption may not be practical in low-power devices such as drives that must send data very fast, but it would be fine for all low-speed devices.

Of course, we should not trust our networks, even our home networks. Laptops and mobile devices constantly roam outside the home network, where they are not protected, and then come back inside, able to attack if trusted. However, some security designers know this and design for it.

Yes, this adds some extra UI the first time you plug something in. But that’s hopefully rare and this is a big gaping hole in the security of most of our devices, because people are always plugging in USB drives, dongles and more.

Time for the post office to let anybody print postage with no minimums

For some time, the US Postal Service has allowed people to generate barcoded postage. You can do that on the expensive forms of mail such as Priority Mail and Express Mail, but if you want to do it on ordinary mail, like first-class mail or parcel post, you need an account with a postage-meter-style provider, and these accounts typically include a charge of $10/month or more. For an office, that’s no big deal, and cheaper than the postage meters that most offices used to buy — and the pricing model is based on them to some extent, even though now there is no hardware needed. But for an ordinary household, $120/year is far more than they are going to spend on postage.

There is one major exception I know of — if you sell something via PayPal, they allow you to print a regular shipping label with electronic postage. This is nice and convenient, but no good for sending ordinary letters and other small items.

I think the USPS is shooting itself in the foot by not letting people just buy postage online with no monthly fee. The old stamp system is OK for regular letters, and indeed they finally changed things so that old first-class stamps still work after price increases, but for anything else you have to keep lots of stamps in supply and you often waste postage, or make a trip to a mailing office. This discourages people from using the post office, and will only hasten its demise. Make it trivial to mail things and people will mail more.

It could start as a web-printed mailing label, like the ones you can already use for Priority Mail, and most software vendors would quickly support such a system. If people wanted, they could even buy “stamps”: collections of electronic postage in various denominations that programs could use, so there is no need to handle a transaction for every item. Address-label printers would all quickly add postage as well.
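To show what program-usable “stamps” could look like, here is a toy Python sketch of a prepaid postage book. The class and its greedy affix rule are entirely my own invention, not any USPS or vendor interface.

```python
class PostageBook:
    """A prepaid book of electronic postage in fixed denominations (in cents),
    so mailing software can affix postage without a live transaction."""

    def __init__(self, denominations):
        # e.g. {68: 20, 100: 10} -> twenty 68-cent stamps, ten $1.00 stamps
        self.denominations = dict(denominations)

    def affix(self, cents_due):
        """Spend stamps, largest first, to cover the amount due, topping up
        with small stamps if needed (possibly overpaying a little, just like
        paper stamps). Returns the stamps used, or None if the book is short."""
        remaining, used = cents_due, []
        for value in sorted(self.denominations, reverse=True):
            while remaining >= value and self.denominations[value] > 0:
                self.denominations[value] -= 1
                used.append(value)
                remaining -= value
        if remaining > 0:  # cover the remainder with the smallest stamps on hand
            for value in sorted(self.denominations):
                while remaining > 0 and self.denominations[value] > 0:
                    self.denominations[value] -= 1
                    used.append(value)
                    remaining -= value
        if remaining > 0:
            for v in used:                 # not enough postage; refund the book
                self.denominations[v] += 1
            return None
        return used

book = PostageBook({68: 20, 100: 10})
print(book.affix(168))   # [100, 68]
```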

Of course, official suppliers like Endicia would fight this completely. They love being official suppliers and charging large fees, and they have more lobbying power than ordinary mailers. So the post office is going to quietly slip away into that good night, instead of taking advantage of the fact that it’s the one delivery company that comes to my door every day (for both pick-up and delivery) and all the efficiencies that provides.

Modify E-book reader designs for digital signs

One of the useful attributes of electronic paper (such as E-Ink) is that it doesn’t take any power to retain an image, it only takes power to change the image. This is good for long-lasting E-readers, and digital signs are one of the other key applications of electronic paper, though today they are sold with a focus on the retail market.

Earlier, I wrote about concepts for a fourth screen which is an always-on wall computer that has effectively no user interface — its purpose is to show you stuff that is probably of interest to you based on time of day and who is looking at the screen. That proposal requires that the display be located where there is power, but there are many locations where wiring in permanent power is not a readily available option.

The typical e-book reader has all the hardware needed to act as a very low-power digital wall display. Such a display would have electronic paper and wifi. It would only wake up very rarely to briefly check, over the wifi (or better still bluetooth) if there is new data to display, in which case it would download it and display it. During these updates, it might also check to see if there is a new updating schedule.

You can do better than wifi, which usually requires a process of associating with an access point, getting an IP address, and then making queries. Bluetooth can connect with lower power. Even better would be a chip which is able to listen constantly at very low power for a special radio pulse (“wake on pulse”) from a powered transmitter, and then power on the rest of the system for data transfer. The panel could be put anywhere, and then a pulse generator would be put somewhere nearby that has power and is close enough to wake up the panel. (It might be something that plugs into a wall outlet and even does networking over the power lines.) This would allow the valuable ability to push information to the panel.
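Here is a rough sketch of what the panel’s update loop could look like if it simply polled over wifi. The URL, the interval and the draw_on_epaper callback are placeholders; a real panel would deep-sleep between checks, and a wake-on-pulse radio would replace the timer entirely.

```python
import time
import urllib.error
import urllib.request

UPDATE_URL = "http://example.local/panel/front-door.png"   # placeholder
CHECK_INTERVAL_S = 15 * 60    # wake every 15 minutes; the schedule itself
                              # could be revised during a check

last_etag = None

def fetch_if_changed():
    """Ask the server for a new image, using ETags so an unchanged image
    costs almost nothing to check; returns image bytes or None."""
    global last_etag
    req = urllib.request.Request(UPDATE_URL)
    if last_etag:
        req.add_header("If-None-Match", last_etag)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            last_etag = resp.headers.get("ETag")
            return resp.read()
    except urllib.error.HTTPError as e:
        if e.code == 304:      # not modified, nothing to redraw
            return None
        raise

def run(draw_on_epaper):
    """draw_on_epaper stands in for the panel driver; e-paper only draws
    power while the image is actually being changed."""
    while True:
        image = fetch_if_changed()
        if image is not None:
            draw_on_epaper(image)
        time.sleep(CHECK_INTERVAL_S)   # real hardware would deep-sleep here
```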

The panel’s battery would of course die in time, so there would need to be a way to swap the battery or, if need be, to charge it with a temporary extension cord or a battery-powered charger, or by taking the panel off the wall.

An immediate market for these would be the doors of meeting rooms, so that they can show the schedule for the meeting room. Many hotels and convention centers have screens to do this now, but due to the need for power and other integration, these tend to be quite expensive, while ebook readers are now in the $100 range.

But they would also be useful around the home for 4th screen applications, displaying useful info. They could also be put near fridges or stoves to display recipes and family information. Obviously a powered LCD display is going to be able to do more, but with no power connection needed, more people might put one up. E-paper panels do need to be lit by external light, of course, but they are also visible in bright sun in a way that LCDs are not. And a product like this might well start eating into the retail digital signage market — anybody know what the price points are these days in that market?
