Technology

A cryptographic solution to securely aggregate allegations could make it easier to come forward

Nobody wants to be the first person to do or say a risky thing. A recent example is the revelation that a number of powerful figures, like Harvey Weinstein, Roger Ailes, Bill O’Reilly and Bill Cosby, had long patterns of sexual harassment and even assault; many people were aware of it, but nobody came forward until much later.

People finally come forward when one brave person goes public, and then another, until at last people see they are not alone, that they might be believed, and that action might be taken.

Eleven years ago, I proposed a system to test support for radical ideas, primarily aimed at voting in bodies like Congress. The idea was to create a voting system where people could cast encrypted votes, with the voter’s identity concealed. Once a majority of yes votes had been cast, however, the fragments of the decoding key would assemble, and the votes and the voter identities could be decoded.

This would allow, for example, a vote on issues where a majority of the members support something but few are willing to admit it. Once the total hit a majority, the bill would pass, with no one having to fear their vote.

I still would like to see that happen, but I wonder if the approach could have broader application. The cryptographic approach is doable when you have a fixed group of members voting, who can even meet physically. It’s much harder when you want to collect “votes” from the whole world.

You can easily build the system, though, if you have a well-trusted agency. It must be extremely trusted, and even protected from court orders telling it to hand over its data. I’ll discuss the logistics below, but first describe how it would work.

Say somebody wants to make an allegation, such as “I was raped by Bill Cosby” or “The Mayor insisted I pay a bribe” or “This bank cheated me.” They would enter that allegation as some form of sworn legal statement, but the additional details and their identity would be encrypted. Along with the allegation would come an instruction: “Reveal my allegation once more than N people make the same allegation (at threshold N or less).”
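
To make the reveal rule concrete: one way to implement it is threshold secret sharing, such as Shamir’s scheme. The accuser encrypts the identifying details under a key, the key is split so that any N shares reconstruct it, and each matching allegation escrows one share. Here is a minimal sketch of the share math in Python (toy field and key; the hard part, getting shares escrowed by accusers who don’t know each other, is exactly the open problem discussed above):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; the field for our arithmetic

def make_shares(secret, threshold, num_shares):
    """Split `secret` so that any `threshold` shares reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    # Share x is the random polynomial evaluated at x (x=0 would be the secret).
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, num_shares + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789  # stand-in for the key that encrypts the allegation details
shares = make_shares(key, threshold=3, num_shares=10)
assert recover(shares[:3]) == key  # any 3 matching allegations unlock it
```

Fewer than the threshold number of shares reveal nothing about the key; cryptography alone does not, however, arrange for the shares to come from genuinely independent strangers.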

In effect, it would make saying “#metoo” have power, and even legal force. It also tries to balance the following important principles, which are very difficult to balance otherwise:

  1. Those wronged by the powerful must be able to get justice
  2. People are presumed innocent
  3. The accused have a right to confront the evidence against them and their accusers

How well this works would depend on how public the information is:

  • A cryptographic system would require little (or no) trust in individual entities or governments, but would make public the number of allegations entered. It would be incorruptible if designed well.
  • An agency system which publishes allegation counts and actual allegations when the threshold is reached.
  • An agency system which keeps allegation counts private until the threshold is reached.
  • An agency system which keeps everything private, and when the threshold is reached discloses the allegation only to authorities (police, boards of directors).

There are trade-offs, as the list above shows. If allegations are public, that can tell other victims they are not alone. However, it can also be a tool for gaming the system.

The allegation must be binding, in that there will be consequences for making a false allegation once the allegations are disclosed, especially if the number of existing allegations is public. We do not want to create a power to make false anonymous allegations. If it were public that “3 people allege rape by person X,” that would still create a lot of public shame and questions for X, which is fine if the allegations are true, but terrible if they are not. If X is not a rapist, for example, and the threshold is high, it will never be reached, and those making the allegations would know that. Our system of justice is based on important principles: the presumption of innocence, and a right to confront your accusers and the evidence against you.

To make video-meetings work, force people to stay engaged

Our videoconferencing tools have been getting better, but meetings with remote video participants still don’t work very well. One problem is poor use of the technology (such as a lack of headsets) which I outlined in my guide to room based video meetings. These can be worked on and the tech keeps improving.

The other big area for improvement is the discipline of the people in the meeting. The big challenge in typical meetings is that some of the participants are second-class. This is obvious when you have a meeting room with multiple local people and some remote users. It can also happen when people have differing levels of technology. In an ideal meeting, everybody is on the same footing as far as their presence and ability to communicate.

We break this rule often. It is quite common for remote attendees to turn off their outgoing video, or mute their audio, making them more like a TV audience than members of the meeting. It makes sense, because it saves bandwidth and people don’t like being watched. We also tolerate having some people present just on the phone, while others are there in person and still others are on video systems of varying quality.

If you hope for a good meeting, you also want to make clear that the main value of the conferencing system is to let people attend without travel. It is not there to let them attend without the same effort and engagement they would put into a meeting they did travel to. The rules I describe may seem minor, and they veto some features of great convenience, but those conveniences are actually bugs that disrupt meetings more than people realize.

Here are some principles to get around this:

No meeting room

In an ideal video meeting, everybody is on their own personal video station. There is no meeting room. This means that even if several of the attendees are in the same building, they don’t go to a room, they stay at their desks and join the meeting just like any other remote.

This is obviously hard to do if the majority of participants are in the building, but it can be worth it. It also means you don’t need room-based videoconferencing systems, which are expensive and don’t work well. But if only 2 or 3 of the participants are in the same place, definitely consider having no meeting room. The big benefit is that when everybody has their own microphone, everybody hears everybody really well.

Today you can’t have people in the same room using their own computers, because they hear the other people both via their headsets and through the air. Perhaps some day a smart videoconferencing system will understand that some people are in the same room (you can tell because some of the same sounds reach multiple microphones) and adjust. It would allow those who still want a physical meeting room to get the great audio and video that comes from everybody using a computer. Those in the room together would still be first-class participants, but remotes would not be that badly off.
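
Detecting “same room” is not exotic signal processing: microphones in one room hear the same sounds, so their streams correlate strongly at some small time lag. A toy version of the test, assuming two mono numpy buffers captured over the same interval:

```python
import numpy as np

def likely_same_room(mic_a, mic_b, sample_rate=48000, threshold=0.3):
    """Guess whether two microphone buffers heard the same room.

    Looks for a strong peak in normalized cross-correlation within
    ~50 ms of lag (sound crossing a room, plus a little clock slop).
    """
    a = (mic_a - mic_a.mean()) / (mic_a.std() + 1e-9)
    b = (mic_b - mic_b.mean()) / (mic_b.std() + 1e-9)
    corr = np.correlate(a, b, mode="full") / len(a)
    mid = len(corr) // 2
    max_lag = int(0.05 * sample_rate)
    return np.abs(corr[mid - max_lag:mid + max_lag + 1]).max() > threshold
```

A real system would run a check like this continuously, per pair, and then treat co-located participants as one audio source instead of echoing them into each other’s headsets.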

Headsets at all times

We have gotten seduced by how well some VoIP systems handle speakerphone mode in one-on-one conversations. Don’t be fooled. They don’t do group meetings well at all. They seem like they do, but quickly you realize that everybody now hears all the random noises from the location of a speakerphone user. Speakerphone users step away from their desks to eat, chat or take a phone call, and everybody hears it. Keyboards and mice clickety-clack. Sirens go by. It’s easy to ignore this in a one-on-one call, but it disrupts a meeting.

CES Gallery -- Smart Connected IoT home and more

I go to CES first to see the cars, but it’s also good to see all the latest gadgets. My gallery provides photos and comments on this year’s interesting and stupid products and gadgets; the captions appear at the bottom as you page through.

Gallery of CES gadgets

CES always contains an amazing array of “What are they thinking?” products. This year, more than ever, things were made “smart” and “connected” for little reason one can discern. I was quite disappointed to read various media lists of top gadgets of CES 2017 and not find a single one that was actually exciting. There are a few that will be exciting one day — the clothes-folding robot, the human-carrying drone — but they are not here yet.

Car NAS for semi-offsite backup

Everybody should have off-site backup of their files. For most people, the biggest threat is fire, but here in California, the most likely disaster you will encounter is an earthquake. Only a small fraction of houses will ever burn down, but everybody here will experience the big earthquake that is sure to come in the next few decades. Fortunately, only a modest number of houses will collapse, but many computers will be knocked off desks or have things fall on them.

To deal with this, I’ve been keeping a copy of my data in my car — encrypted of course. I park in my driveway, so nothing will fall on the car in a quake, and only a very large fire would have risk of spreading to the car, though it’s certainly possible.

The two other options are network backup and truly remote backup. Network backup is great, but doesn’t work for people who have many terabytes of storage. I came back from my latest trip with 300GB of new photos, and that would take a very long time to upload to network storage. In addition, many terabytes of network storage is somewhat expensive. Truly remote storage is great, but the logistics of visiting it regularly, bringing back disks for update and then taking them back again is too much for household and small-business backup. In fact, even being diligent about going down to the car to get out the disk and update it is difficult.

A possible answer — a wireless backup box stored in the car. Today, there are many low-cost Linux-based NAS boxes, and they mostly run on 12 volts. So you could easily make a box that goes into the car, plugs into power (many cars now have 12V jacks in the trunk or other access to that power) and wakes up every so often to see if it is on the home wifi, then triggers a backup sync, ideally in the night.
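
The trigger logic is simple enough to run as a cron job on the NAS box itself. A sketch in Python; the SSID, the rsync source (reachable via ssh keys) and the mount point are placeholders for your own setup, and encryption is assumed to happen below this layer (e.g. LUKS):

```python
import datetime
import subprocess

HOME_SSID = "my-home-wifi"                  # placeholder: your network's name
SOURCE = "backup@houseserver.local:/data/"  # placeholder: the house file server
DEST = "/mnt/encrypted-backup/"             # the car box's encrypted volume

def on_home_wifi() -> bool:
    # `iwgetid -r` prints the SSID the wireless interface is associated with.
    out = subprocess.run(["iwgetid", "-r"], capture_output=True, text=True)
    return out.stdout.strip() == HOME_SSID

def maybe_sync():
    # Only sync in the small hours, and only when parked at home.
    if 1 <= datetime.datetime.now().hour <= 5 and on_home_wifi():
        subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST])

if __name__ == "__main__":
    maybe_sync()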

Our routers need to remove the "internet" from the "internet of things" to stop DDOS

I frequently say that there is no “internet of things.” That’s a marketing phrase for now. You can’t go buy a “thing” and plug it into the “internet of things.” IoT is still interesting because underneath the name is a real revolution from the way that computing, sensing and communications are getting cheaper, smaller and using less power. New communications protocols are also doing interesting things.

We learned a lesson on Friday, though, about why using the word “internet” is its own mistake. The internet — one of the world’s greatest inventions — was created as a network of networks where anything could talk to anything, and it was useful for this to happen. Later, for various reasons, we moved to putting most devices behind NATs and firewalls, diminishing this vision, but the core idea remains.

Attackers on Friday made use of a growing collection of low-cost IoT devices with poor security to mount a DDoS attack on Dyn’s domain name servers, shutting off name lookup for some big sites. While not the only source of the attack, a lot of attention has fallen on certain Chinese brands of IP-based security cameras and baby monitors. To make them easy to use, they are designed with very poor security, and as a result they can be hijacked and put into botnets to do DDoS — recruiting a million vulnerable computers to overload some internet site or service all at once.

Most applications for small embedded systems — the old and less catchy name of the “internet of things” — aren’t at all in line with the internet concept. They have no need or desire to be able to talk to the whole world the way your phone, laptop or web server do. They only need to talk to other local devices, and sometimes to cloud servers from their vendor. We are going to see billions of these devices connected to our networks in the coming years, perhaps hundreds of billions. They are going to be designed by thousands of vendors. They are going to be cheap and not that well made. They are not going to be secure, and little we can do will change that. Even efforts to make punishments for vendors of insecure devices won’t change that.

So here’s an alternative: a long-term plan for our routers and gateways to take the internet out of IoT.

Our routers should understand that two different classes of devices will connect to them. Regular devices, like phones and laptops, should connect to the internet as we expect today. There should also be a way to know that a connecting device does not want regular internet access, and not to give it. One way to do that is for the device itself to convey how much access it needs when it first connects. One proposal for this is my friend Eliot Lear’s MUD proposal. Unfortunately, we can’t count on devices to do this. We must limit stupid devices and old devices too.
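
Even before devices describe themselves (via MUD or anything else), a router could default unknown gadgets into a “thing” class that may reach only a short whitelist. A sketch of generating that kind of policy as nftables rules; the MAC addresses and vendor-cloud IPs are invented, and in a MUD world the device’s MUD file would supply the whitelist:

```python
# Hypothetical policy: each "thing" may reach only its vendor's cloud.
THING_POLICY = {
    "aa:bb:cc:11:22:33": ["203.0.113.10"],  # IP camera -> vendor cloud only
    "aa:bb:cc:44:55:66": [],                # local-only sensor: no internet
}
LAN = "192.168.0.0/16"

def nft_rules(policy):
    """Emit nftables rules: whitelist per device, drop its other WAN traffic."""
    rules = [
        "add table inet iot",
        "add chain inet iot fwd { type filter hook forward priority 0; }",
    ]
    for mac, allowed in policy.items():
        for ip in allowed:
            rules.append(f"add rule inet iot fwd ether saddr {mac} "
                         f"ip daddr {ip} accept")
        # Everything else from this device headed off-LAN gets dropped.
        rules.append(f"add rule inet iot fwd ether saddr {mac} "
                     f"ip daddr != {LAN} drop")
    return rules

print("\n".join(nft_rules(THING_POLICY)))  # e.g. feed to `nft -f -`
```

Regular devices keep full access; unknown ones start life fenced into their local network.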

The social networks could hold great political power due to GOTV. Should they?

The social networks have access (or more to the point can give their users access) to an unprecedented trove of information on political views and activities. Could this make a radical difference in affecting who actually shows up to vote, and thus decide the outcome of elections?

I’ve written before about how the biggest factor in US elections is the power of GOTV - Get Out the Vote. US electoral turnout is so low — about 60% in presidential elections and 40% in off-years — that the winner is determined by which side can convince more of its weak supporters to actually show up and vote. All those political ads you see are not going to make a Democrat vote Republican or vice versa; they are there to scare a weak supporter into actually showing up. It’s much cheaper, in terms of votes per dollar (or volunteer hour), to bring in these weak supporters than to swing a swing voter.

The US voter turnout numbers are among the worst in the wealthy world. Much of this is blamed on the fact that the US, unlike most other countries, has voter registration: effectively two-step voting. Voter registration was originally implemented in the USA as a form of vote suppression, and it has stuck with the country ever since. In almost all other countries, some agency is responsible for preparing a list of citizens and giving it to each polling place. There are people working to change that, but for now it’s the reality. Registration is about 75% and presidential voting about 60%. (Turnout of registered voters is around 80%.)

Scary negative ads are one thing, but one of the most powerful GOTV forces is social pressure. Republicans used this well under Karl Rove, working to make social groups like churches create peer pressure to vote. But let’s look at the sort of data sites like Facebook have or could have access to:

  • They can calculate a reasonably accurate estimate of your political leaning with modern AI tools and access to your status updates (where people talk politics) and your friend network, along with the usual geographic and demographic data
  • They can measure the strength of your political convictions through your updates
  • They can bring in the voter registration databases (which are public in most states, with political use of the data allowed; commercial use is forbidden in some states, but this would not be commercial)
  • In many cases, the voter registration data also reveals if you voted in prior elections
  • Your status updates, geographical check-ins and postings will reveal voting activity. Some sites (like Google) that have mobile apps with location sensing can detect visits to polling places.

Of course, for the social site to aggregate and use this data for its own purposes would be a gross violation of many important privacy principles. But social networks don’t actually do (too many) things; instead they provide tools for their users to do things. As such, while Facebook should not attempt to detect and use political data about its users, it could give tools to its users that let them select subsets of their friends, based only on information that those friends overtly shared. On Facebook, you can enter the query “My friends who like Donald Trump” and it will show you that list. They could also let you ask “My friends who match me politically” if they wanted to provide that capability.

Now imagine more complex queries aimed specifically at GOTV, such as: “My friends who match me politically but are not scored as likely to vote” or “My friends who match me politically and are not registered to vote.” Possibly adding “Sorted by the closeness of our connection” which is something they already score.
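
Mechanically, these queries are just filters over fields the friends themselves shared. A toy model in Python, with invented records and field names (the real social graph is nothing this tidy):

```python
# Invented friend records, built only from overtly shared information.
friends = [
    {"name": "Pat",  "leaning": "blue", "registered": True,
     "likely_voter": False, "closeness": 0.9},
    {"name": "Sam",  "leaning": "blue", "registered": False,
     "likely_voter": False, "closeness": 0.7},
    {"name": "Alex", "leaning": "red",  "registered": True,
     "likely_voter": True,  "closeness": 0.4},
]
my_leaning = "blue"

# "My friends who match me politically but are not scored as likely to vote,
#  sorted by the closeness of our connection"
targets = sorted((f for f in friends
                  if f["leaning"] == my_leaning and f["registered"]
                  and not f["likely_voter"]),
                 key=lambda f: f["closeness"], reverse=True)
print([f["name"] for f in targets])  # ['Pat']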

Will bed-bound seniors experience the world through VR telepresence robots?

I’ve written before about my experiences inhabiting a telepresence robot. I did it again this weekend to attend a reunion, with a different robot that’s still in prototype form.

I’ve become interested in the merger of virtual reality and telepresence. The goal would be to have VR headsets and telepresence robots able to transmit video to fill them. That’s a tall order. On the robot you would have an array of cameras able to produce a wide field of view — perhaps an entire hemisphere, or of course the full sphere. You want it in high resolution, so this is actually a lot of camera.

The lowest bandwidth approach would be to send just the field of view of the VR glasses in high resolution, or just a small amount more. You would send the rest of the hemisphere in very low resolution. If the user turned their head, you would need to send a signal to the remote to change the viewing box that gets high resolution. As a result, if you turned your head, you would see the new field, but very blurry, and after some amount of time — the round trip time plus the latency of the video codec — you would start seeing your view sharper. Reports on doing this say it’s pretty disconcerting, but more research is needed.
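
The bookkeeping behind that scheme looks roughly like this: the headset reports gaze, and the robot end promotes the tiles inside the viewing box to the high-resolution stream while everything else stays low-res. A sketch with made-up tile and box sizes:

```python
TILE = 15  # degrees per tile (made up)
SPHERE = [(yaw, pitch) for yaw in range(0, 360, TILE)
                       for pitch in range(-90, 90, TILE)]

def hi_res_tiles(gaze_yaw, gaze_pitch, box=110):
    """Tiles inside the viewing box (plus margin) get the high-res encoding."""
    def yaw_diff(a, b):  # wrap-around angular distance
        return min(abs(a - b), 360 - abs(a - b))
    half = box / 2
    return [(y, p) for y, p in SPHERE
            if yaw_diff(y, gaze_yaw) <= half and abs(p - gaze_pitch) <= half]

# When the head turns, this set changes; until the round trip completes,
# newly included tiles are only available in the blurry low-res stream.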

At the next level, you could send a larger region in high-def, at the cost of bandwidth. Then short movements of the head would still be good quality, particularly the most likely movements, which would be side to side movements of the head. It might be more acceptable if looking up or down is blurry, but looking left and right is not.

And of course, you could send the whole hemisphere, allowing most head motions but requiring a great deal of bandwidth. At least by today’s standards — in the future such bandwidth will be readily available.

If you want to look behind you, you could just have cameras capturing the full sphere, and that would be best, but it’s probably acceptable to have servos move the camera and not send the rear information at all. It takes time to turn your head, and that’s time enough to send signals that adjust the remote parameters or camera.

Still, all of this is more bandwidth than most people can get today, especially if we want lifelike resolution — 4K per eye or probably even greater. Hundreds of megabits. There are fiber operators selling such bandwidth, and Google Fiber sells it cheap. It does not need to be symmetrical for most applications — more on that later.
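
A back-of-envelope calculation supporting “hundreds of megabits,” with rough assumed numbers rather than measurements:

```python
width, height = 3840, 2160   # "4K" per eye
fps = 90
bits_per_pixel = 12          # raw 4:2:0
eyes = 2
raw = width * height * bits_per_pixel * fps * eyes  # bits per second
compressed = raw / 100       # assume ~100:1 for a low-latency codec
print(f"{raw / 1e9:.1f} Gb/s raw, {compressed / 1e6:.0f} Mb/s compressed")
# -> 17.9 Gb/s raw, 179 Mb/s compressed: hundreds of megabits indeed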

Surrogates, etc.

At this point, you might be thinking of the not-very-exciting Bruce Willis movie “Surrogates,” where everybody just lies in bed all day controlling surrogate robots that are better-looking versions of themselves. Those robot bodies passed back not just sight and sound but touch and smell and taste — the works — via a neural interface. That’s science fiction, but a subset could be possible today.

Local robots

One place you can easily get that bandwidth is within a single building, or perhaps even a town. Within a short distance, it is possible to get very low latency, and in a neighbourhood you can get millisecond latency from the network. Low latency from the video codec means less compression in the codec, but that can be attained if you have lots of spare megabits to burst when the view moves, which you do.

So who would want to operate a VR robot that’s not that far from them? The disabled, and in particular the bedridden, which includes many seniors at the end of their lives. Such seniors might be trapped in bed, but if they can sit up and turn their heads, they could get a quality VR experience of the home they live in with their family, or the nursing home they move to. With the right data pipes, they could also be in a nursing home but get a quality VR experience of being in the homes of nearby family. They could have multiple robots in houses with stairs, to easily “move” from floor to floor.

What’s interesting is we could build this today, and soon we can build it pretty well.

What do others see?

One problem with using VR headsets with telepresence is that a camera pointed at you sees you wearing a giant headset. That’s of limited use. Highly desired would be software that, using cameras inside the headset looking at the eyes and a good captured model of the face, digitally removes the headset in a way that doesn’t look creepy. I believe such software is possible today with the right effort. It’s needed if people want VR-based conferencing with real faces.

One alternative is to instead present an avatar that doesn’t look fully real, but which offers all the expression of the operator. This is also doable, and Philip Rosedale’s “High Fidelity” business is aimed at just that. In particular, many seniors might be quite pleased to have an avatar that looks like a younger version of themselves, or even just a cleaned-up version of their present age.

Another alternative is to use fairly small and light AR glasses. These could be small enough that you don’t mind seeing the other person wearing them and are able to see the direction of their eyes, at most behind a tinted screen. That would provide less of a sense of being there, but might make for a more comfortable experience.

For those who can’t sit up, experiments are needed to see if a system can do this without inducing nausea, as I suspect VR that shifts your head angle will. Anybody tried that?

Of course, the bedridden will be able to use VR for virtual space meetings with family and friends, just as the rest of the world will use them — still having these problems. You don’t need a robot in that case. But the robot gives you control of what happens on the other end. You can move around the real world and it makes a big difference.

Such systems might include some basic haptic feedback, allowing things like handshakes or basic feelings of touch, or even a hug. Corny as it sounds, people do interpret being squeezed by an actuator with emotion if it’s triggered by somebody on the other side. You could build the robot to accept a hug (arms around the screen) and activate compressed air pumps to squeeze the operator — this is also readily doable today.

Barring medical advances, many of us may sadly expect to spend some of our last months or years bedridden, or housebound in a wheelchair. Perhaps we will adopt something like this, or something even grander. And of course, even the able-bodied will be keen to see what can be done with VR telepresence.

Don't be fooled by robots falling down at Darpa Robotics Challenge

This weekend I went to Pomona, CA for the 2015 DARPA Robotics Challenge which had robots (mostly humanoid) compete at a variety of disaster response and assistance tasks. This contest, a successor of sorts to the original DARPA Grand Challenge which changed the world by giving us robocars, got a fair bit of press, but a lot of it was around this video showing various robots falling down when doing the course:

What you don’t hear in this video are the cries of sympathy from the crowd of thousands watching — akin to those when a figure skater falls — or the cheers as each robot completed a simple task to earn a point. These cheers and sympathies were not just for the human team members, but in an anthropomorphic way for the robots themselves. Most of the public reaction to this video included declarations that one need not be too afraid of our future robot overlords just yet. It’s probably better to watch the DARPA official video, which has a little audience reaction.

Don’t be fooled as well by the lesser-known fact that there was a lot of remote human tele-operation involved in the running of the course.

Check out my Gallery of Photos from the DARPA Robotics Challenge Finals.

What you also don’t see in this video is just how very far the robots have come since the first round of trials in December 2013. During those trials the amount of remote human operation was very high, and there weren’t a lot of great fall videos because the robots had tethers that would catch them if they fell. (These robots are heavy and many took serious damage when falling, so almost all testing is done with a crane, hoist or tether able to catch the robot during the many falls which do occur.)

We aren’t yet anywhere close to having robots that could do tasks like these autonomously, so for now the research is in making robots that can do tasks with more and more autonomy, with higher-level decisions made by remote humans. The tasks in the contest were:

  • Starting in a car, drive it down a simple course with a few turns and park it by a door.
  • Get out of the car — one of the harder tasks as it turns out, and one that demanded a more humanoid form
  • Go to a door and open it
  • Walk through the door into a room
  • In the room, go up to a valve with a circular handle and turn it 360 degrees
  • Pick up a power drill, and use it to cut a large enough hole in a sheet of drywall
  • Perform a surprise task — in this case throwing a lever on day one, and on day 2 unplugging a power cord and plugging it into another socket
  • Either walk over a field of cinder blocks, or roll through a field of light debris
  • Climb a set of stairs

The robots have an hour to do all this, so they are often extremely slow, and yet, to the surprise of most, the audience — a crowd of thousands, and thousands more online — watched with fascination and cheered, even when robots would take a step once a minute, pause at a task for several minutes, or get into a problem and spend 10 minutes being fixed by humans, at a penalty.

Matternet launches drone delivery platform

I often speak about deliverbots — the potential of ground-based delivery robots. There is also excitement about drone (UAV/quadcopter) based delivery. We’ve seen many proposed projects, including Amazon Prime Air, and much debate. Many years ago I was perhaps the first to propose that drones deliver defibrillators anywhere, and there are a few projects underway to do this.

Some of my students in the Singularity University Graduate Studies Program in 2011 really caught the bug, and their team project turned into Matternet — a company focused on drone delivery in the parts of the world without reliable road infrastructure. Example applications include moving lightweight items like medicines and test samples between remote clinics, and eventually much more.

I’m pleased to say they just announced moving to a production phase called Matternet One. Feel free to check it out.

When it comes to ground robots and autonomous flying vehicles, there are a number of different trade-offs:

  • Drones will be much faster, and have an easier time getting roughly to a location. It’s a much easier problem to solve. No traffic, and travel mostly as the crow flies.
  • Deliverbots will be able to handle much heavier and larger cargo, consuming a lot less energy in most cases. Though drones able to move 40kg are already out there.
  • Regulations stand in the way of both vehicles, but current proposed FAA regulations would completely prohibit the drones, at least for now.
  • Landing a drone in a random place is very hard. Some drone plans avoid that by lowering the cargo on a tether and releasing the tether.
  • Driving to a doorway or even gate is not super easy either, though.
  • Heavy drones falling on people or property are an issue that scares people, but they are also scared of robots on roads and sidewalks.
  • Drones probably cost more but can do more deliveries per hour.
  • Drones don’t have good systems in place to avoid collisions with other drones. Deliverbots won’t go that fast and so can stop quickly for obstacles seen with short range sensors.
  • Deliverbots have to not hit cars or pedestrians. Really not hit them.
  • Deliverbots might be subject to piracy (people stealing them) and drones may have people shoot at them.
  • Drones may be noisy (this is yet to be seen) particularly if they have heavier cargo.
  • Drones can go where there are no roads or paths. For ground robots, you need legs like the BigDog.
  • Winds and rain will cause problems for drones. Deliverbots will be more robust against these, but may have trouble on snow and ice.

In the long run, I think we’ll see drones for urgent, light cargo and deliverbots for the rest, along with real trucks for the few large and heavy things we need.

Rise of the selfie drones. Is tethered a good idea?

At CES, there were a couple of “selfie drones.” The Nixie is designed to be worn on your wrist, taken off and thrown; it then returns to you after taking a photo or video. There was also the Zano, which is fancier and claims it will follow you around, tracking you as you mountain bike or ski, to make a video of you just as you do your cool trick.

The selfie is everywhere. In Rome, literally hundreds of vendors tried to sell me selfie sticks in all the major tourist areas, even with a fat Canon DSLR hanging from my neck. It’s become the most common street vendor gadget. (The blue LED wind up helicopters were driving me nuts anyway.)

I also had been thinking about this, and came up with a design that’s not as capable as these designs, but might be better. My selfie drone would be tethered. You would put down the base which would have the batteries and a retractable cord. Up would fly the camera drone, which would track your phone to get a great shot of you. (If it were for me, it would also offer panorama mode where it spun around at the top shooting a pano, with you or without you.)

This drone could not follow you as you do a sport, of course, or get above a certain height. But unlike the free-flying designs, it would not get lost over the cliff in the winds, as I suspect will happen to a number of these free selfie drones. It turns out that cliffs and outlook points are common places to want to take these photos; they are exactly where you need a high view to capture you and what’s below you.

Secondly, with the battery on the ground, and only a short tether wire needed, you can have a much better camera as payload. Only needing a short flight time and not needing to carry the batteries means more capabilities for the drone.

It’s also less dangerous, and is unlikely to come under regulation because it physically can’t fly beyond a certain altitude or distance from the base. It could not shoot you from water or from over the edge of the cliff as the other drones could if you were willing to risk them.

My variation would probably be a niche. Most selfies are there to show off where you were, not to be top quality photos. Only more serious photographers would want one capable of hauling up a quality lens. Because mine probably wants a motor in the base to reel it back in (so you don’t have to wind the cables) it might even cost more, not less.

The pano mode would be very useful. In so many pano spots, the view is fantastic but is blocked by bushes and trees, and the spectacular pano shot is only available if you go up enough. For daytime shots a tethered drone would probably do fine. I’m still waiting on the Panono — a ball studded with cameras, from Berlin, that was funded by Kickstarter. You throw the ball up, and it figures out when it is at the top of its flight and shoots the panorama all at once. Something like that could also be carried by a tethered drone, and it has the advantage of not moving between shots, as a spinning drone would be at risk of doing.

This is another thing I’ve wanted for a while. After my first experiments in airplane- and helicopter-based panoramas showed you really want to shoot everything all at once, I imagined decent digital cameras getting cheap enough to buy 16 of them and put them in a circle. Sadly, once cameras started getting that cheap, there were always better cameras that I had decided I needed, which were too expensive to buy for that purpose.

The world needs standardized LEDs which adjust brightness

I’m sure, like me, you have lots of electronic gadgets that have status LEDs on them. Some of these just show the thing is on, some blink when it’s doing things. Of late, as blue LEDs have gotten cheap, it has been very common to put disturbingly bright blue LEDs on items.

These become much too bright at night, and can be a serious problem if the device needs to be in a bedroom or hotel room, which things like laptops, phone and camera chargers and many other devices often do. I end up putting small pieces of electrical tape over these blue LEDs.

I call upon the factories of Shenzhen and elsewhere to produce low-cost, standardized status LEDs. These LEDs would come with an integrated photosensor that measures the light in the room and adjusts the LED so that it is just visible at that lighting level. Or possibly turns it off in the dark, because do we really need to know that our charger is on after we’ve turned off the lights?

Of course, one challenge is that the light from the LED gets into the photosensor. For most LEDs, the answer is pretty easy — put a filter that blocks out the colour of the LED over the photosensor. If you truly need a white LED, you could make a fancy circuit that turns it off for a few milliseconds every so often (the eye won’t notice that) and measures the ambient light while it’s off. All of this is very simple, and adds minimally to the cost. (In fact, the way you adjust the brightness of an LED is typically to turn it on and off very fast.)
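
The whole idea fits in a few lines of firmware on any cheap microcontroller. A sketch in MicroPython-flavored Python, assuming an ESP32 with a photoresistor divider on an ADC pin and the status LED on a PWM pin (pin numbers and the brightness curve are placeholders); it includes the blanking trick just described, going dark for a couple of milliseconds while ambient light is sampled:

```python
from machine import ADC, PWM, Pin
import time

light = ADC(Pin(34))          # photoresistor divider (board-specific pin)
led = PWM(Pin(2), freq=1000)  # status LED; brightness = PWM duty cycle

while True:
    led.duty(0)               # blank the LED so it can't pollute the reading
    time.sleep_ms(2)          # a couple of dark milliseconds, invisible to the eye
    ambient = light.read()    # 0..4095 on an ESP32
    # Just visible at this light level; fully off in a dark room.
    duty = 0 if ambient < 50 else min(1023, 30 + ambient // 8)
    led.duty(duty)
    time.sleep_ms(200)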

Get these made and make it standard that all our gear uses them for status LEDs. Frankly, I think it would be a good idea even for consumer goods that don’t get into our bedrooms. My TV rooms and computer rooms don’t need to look like Christmas scenes.

Day 3 of CES -- BMW and robots

Day 3 at CES started with a visit to BMW’s demo. They were mostly offering test drives of new cars like the i3 and M series, but for a demo, they made the i3 deliver itself along a planned corridor. It was a mostly stock i3 electric car with ultrasonic sensors — and the traffic jam assist disabled. When one test driver dropped off the car, they scanned it, and then a BMW staffer at the other end of a walled course used a watch interface to summon that car. It drove empty along the line of people waiting for test drives, and then, unfortunately, a staffer got in to finish the drive to the parking spot where the test driver would actually get in.

Also on display were BMW’s collision avoidance systems, in a much more equipped research car with LIDARs, radar and more. This car has some nice collision avoidance. The demo was to deliberately drive into an obstacle, and the vehicle hits the brakes for you — more gently than the Volvo I did this in a couple of years ago.

More novel is detection of objects you might hit from the side or back in low speed operations. If it looks like you might sideswipe or back into a parking column or another car, the vehicle hits the brakes on you (harder) to stop it from happening.

Insurers will like this — low speed collisions in parking lots are getting to be a much larger fraction of insurance claims. The high speed crashes get all the attention, but a lot of the payout is in low speed.

I concluded with a visit to my favourite section of CES — Eureka Park, where companies get small lower cost booths, with a focus on new technology. Also in the Sands were robotics, 3D printing, health, wearables and more — never enough time to see it all.

I have added 12 more photos to my gallery, with captions — check the last part out for notes on cool products I saw, from self-tightening belts and regenerating roller skates to phone-charging camping pots.

CES Day 2 Gallery and notes

After a short Day 1 at CES, a fuller Day 2 took in the usual equipment — cameras, TVs, audio and the like — and visits to several car booths.

I’ve expanded my gallery of notable things, with captions, to include cars and other technology.

Lots of people were making demonstrations of traffic jam assist — simple self-driving at low speeds among other cars. All the demos were of a supervised traffic jam assist. This style of product (as well as supervised highway cruising) is the first thing that car companies are delivering (though they are also delivering various parking assist and valet parking systems.)

This makes sense, as it’s an easy problem to solve. So easy, in fact, that many of them now admit they are working on a real traffic jam assist, which will drive the jam for you while you do e-mail or read a book. This is a readily solvable problem today — you really just have to follow the other cars, and you are going slowly enough that, short of a catastrophic error like going full throttle, you aren’t going to hurt people no matter what you do, at least on a highway where there are no pedestrians or cyclists. As such, a full-auto traffic jam assist should be the first product we see from car companies.

None of them will say when they might do this. The barrier is not so much technological as corporate — concern about liability and image. It’s a shame, because frankly the supervised cruise and traffic jam assist products are just in the “pleasant extra feature” category. They may help you relax a bit (if you trust them) as cruise control does, but they give you little else. A “read a book” level system would give people back time, and signal the true dawn of robocars. It would probably sell for lots more money, too.

The most impressive car is Delphi’s, a collaboration with folks out of CMU. The Delphi car, a modified Audi SUV, has no fewer than six 4-plane LIDARs and an even larger number of radars. It helps if you make the radars yourself, as otherwise this is an expensive bill of materials. With all the radars, the vehicle can look left and right, and back-left and back-right, as well as forward, which is what you need for dealing with intersections where cross traffic doesn’t stop, and for changing lanes at high speed.

As a refresher: Radar gives you great information, including speed on moving objects, and sucks on stationary ones. It goes very far and sees through all weather. It has terrible resolution. LIDAR has more resolution but does not see as far, and does not directly give you speed. Together they do great stuff.

For notes and photos, browse the gallery

Near-perfect virtual reality of recent times and tourism

Recently I tried the Facebook/Oculus Rift Crescent Bay prototype. It has more resolution (I will guess 1280 x 1600 per eye or similar) and runs at 90 frames per second. It also has better head tracking, so you can walk around a small space with some realism — but only a very small space. Still, it was much more impressive than the DK2 and a sign of where things are going. I could still see a faint screen door; they were annoyed that I could see it.

We still have a lot of resolution gain left to go. The human eye sees about a minute of arc, which means about 5,000 pixels for a 90-degree field of view. Since we have some ability for sub-pixel resolution, it might be argued that 10,000 pixels of width are needed to reproduce the world. But that’s not that many Moore’s Law generations from where we are today. The graphics rendering problem is harder, though with high frame rates, if you can track the eyes, you need only render full resolution where the fovea of the eye is. This actually gives a boost to onto-the-eye systems like a contact lens projector or the rumoured Magic Leap technology, which may project with lasers onto the retina, as they need to render far fewer pixels. (Get really clever, and realize the optic nerve only has about 600,000 neurons, so in theory you can get full real-world resolution with half a megapixel if you do it right.)
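
The arithmetic behind those pixel counts, using the post’s own assumptions of one arc-minute acuity and a 90-degree field:

```python
arcmin_per_deg = 60
fov_deg = 90
pixels = fov_deg * arcmin_per_deg  # one pixel per arc-minute
print(pixels)                      # 5400, the "about 5,000" above

today = 1280                       # guessed per-eye width of Crescent Bay
print(pixels / today)              # ~4.2x linear, ~18x the pixel count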

Walking around Rome, I realized something else — we are now digitizing our world, at least the popular outdoor spaces, at a very high resolution. That’s because millions of tourists are taking billions of pictures every day of everything from every angle, in every lighting. Software of the future will be able to produce very accurate 3D representations of all these spaces, both with real data and reasonably interpolated data. They will use our photographs today and the better photographs tomorrow to produce a highly accurate version of our world today.

This means that anybody in the future will be able to take a highly realistic walk around the early 21st century version of almost everything. Even many interiors will be captured in smaller numbers of photos. Only things that are normally covered or hidden will not be recorded, but in most cases it should be possible to figure out what was there. This will be trivial for fairly permanent things, like the ruins in Rome, but even possible for things that changed from day to day in our highly photographed world. A bit of AI will be able to turn the people in photos into 3-D animated models that can move within these VRs.

It will also be possible to extend this VR back into the past. The 20th century, before the advent of the digital camera, was not nearly so photographed, but it was still photographed quite a lot. For persistent things, the combination of modern (and future) recordings with older, less frequent and lower resolution recordings should still allow the creation of a fairly accurate model. The further back in time we go, the more interpolation and eventually artistic interpretation you will need, but very realistic seeming experiences will be possible. Even some of the 19th century should be doable, at least in some areas.

This is a good thing, because as I have written, the world’s tourist destinations are unable to bear the brunt of the rising middle class. As the Chinese, Indians and other nations get richer and begin to tour the world, their greater numbers will overcrowd those destinations even more than the waves of Americans, Germans and Japanese that already mobbed them in the 20th century. Indeed, with walking chairs (successors of the BigDog Robot) every spot will be accessible to everybody of any level of physical ability.

VR offers one answer to this. In VR, people will visit such places and get the views and the sounds — and perhaps even the smells. They will get a view captured at the perfect time in the perfect light, perhaps while the location is closed for digitization and thus empty of crowds. It might be, in many ways, a superior experience. That experience might satisfy people, though some might find themselves more driven to visit the real thing.

In the future, everybody will have had a chance to visit all the world’s great sites in VR while they are young. In fact, doing so might take no more than a few weekends, changing the nature of tourism greatly. This doesn’t alter the demand for the other half of tourism — true experience of the culture, eating the food, interacting with the locals and making friends. But so much commercial tourism — people being herded in tour groups to major sites and museums, then eating at tour-group restaurants — can be replaced.

I expect VR to reproduce the sights and sounds and a few other things. Special rooms could also reproduce winds and even some movement (for example, the feeling of being on a ship). Right now, walking is harder to reproduce. With the Oculus Rift Crescent Bay you could only walk 2-3 feet, but one could imagine warehouse-size spaces or even outdoor stadia where large amounts of real walking might be possible if the simulated surface is also flat. Simulating walking over rough surfaces and stairs offers real challenges. I have tried systems where you walk inside a sphere, but they don’t yet quite do it for me. I’ve also seen a system where you are held in place and move your feet in slippery socks on a smooth surface. Fun, but not quite there. Your body knows when it is staying in one place, at least for now. Touching other things in a realistic way would require a very involved robotic system — not impossible, but quite difficult.

Also interesting will be immersive augmented reality. There are a few approaches I know of that people are developing:

  • With a VR headset, bring in the real world with cameras, modify it and present that view to the screens, so they are seeing the world through the headset. This provides a complete image, but the real world is reduced significantly in quality, at least for now, and latency must be extremely low.
  • With a semi-transparent screen, show the augmentation with the real world behind it. This is very difficult outdoors, and you can’t really stop bright items from the background mixing with your augmentation. Focus depth is an issue here (and is with most other systems.) In some plans, the screens have LCDs that can go opaque to block the background where an augmentation is being placed.
  • CastAR has you place retroreflective cloth in your environment, and it can present objects on that cloth. They do not blend with the existing reality, but replace it where the cloth is.
  • Projecting into the eye with lasers from glasses, or on a contact lens can be brighter than the outside world, but again you can’t really paint over the bright objects in your environment.

Getting back to Rome, my goal would be to create an augmented reality that let you walk around ancient Rome, seeing the buildings as they were. The people around you would be converted to Romans, and the modern roads and buildings would be turned into areas you can’t enter (since we don’t want to see the cars, and turning them into fast chariots would look silly.) There have been attempts to create a virtual walk through ancient Rome, but being able to do it in the real location would be very cool.

The paradox of Bitcoin proof-of-work mining

Everybody knows about bitcoin, but fewer know what goes on under the hood. Bitcoin provides the world a trustable ledger for transactions without trusting any given party such as a bank or government. Everybody can agree with what’s in the ledger and what order it was put there, and that makes it possible to write transfers of title to property — in particular the virtual property called bitcoins — into the ledger and thus have a money system.

Satoshi’s great invention was a way to build this trust in a decentralized way. Because there are rewards, many people would like to be the next person to write a block of transactions to the ledger. The Bitcoin system assures that the next person to do it is chosen at random. Because the winner is chosen at random from a large pool, it becomes very difficult to corrupt the ledger. You would need 6 people, chosen at random from a large group, to all be part of your conspiracy. That’s next to impossible unless your conspiracy is so large that half the participants are in it.

How do you win this lottery to be the next randomly chosen ledger author? You need to burn computer time working on a math problem. The more computer time you burn, the more likely it is you will hit the answer. The first person to hit the answer is the next winner. This is known as “proof of work.” Technically, it isn’t proof of work, because you can, in theory, hit the answer on your first attempt, and be the winner with no work at all, but in practice, and in aggregate, this won’t happen. In effect, it’s “proof of luck,” but the more computing you throw at the problem, the more chances of winning you have. Luck is, after all, an imaginary construct.
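
In Bitcoin the “math problem” is hashing: find a nonce such that the double SHA-256 of the block data, read as a number, falls below a target. A stripped-down sketch of the lottery (real mining hashes an 80-byte block header against a consensus-set target; the difficulty here is tiny so it finishes in seconds):

```python
import hashlib
import itertools

def mine(block_data: bytes, difficulty_bits: int = 16):
    """Try nonces until the double SHA-256 has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        payload = block_data + nonce.to_bytes(8, "little")
        h = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce, h.hex()

nonce, digest = mine(b"example block of transactions")
# Expected tries ~ 2**16, but nothing stops you winning on try #1:
# "proof of luck", where more hashing simply buys more lottery tickets.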

Because those who win are rewarded with freshly minted “mined” bitcoins and transaction fees, people are ready to burn expensive computer time to make it happen. And in turn, they assure the randomness and thus keep the system going and make it trustable.

Very smart, but also very wasteful. All this computer time is burned to no other purpose. It does no useful work — and there is debate about whether it inherently can’t do useful work — and so a lot of money is spent on these lottery tickets. At first, existing computers were used, and the main cost was electricity. Over time, special purpose computers (dedicated processors or ASICs) became the only effective tools for the mining problem, and now the cost of these special processors is the main cost, and electricity the secondary one.

Money doesn’t grow on trees or in ASIC farms. The cost of mining is carried by the system. Miners get coins and will eventually sell them, wanting fiat dollars or goods, and affecting the price. Markets being what they are, over time the cost of being a bitcoin miner and the reward converge. If the reward gets too far above the cost, people invest in mining equipment until it normalizes. The miners make real, but not extravagant, profits. (Early miners got extravagant profits, not because of mining but because of the appreciation of their coins.)

What this means is that the cost of operating Bitcoin mostly goes to the companies selling ASICs, and to a lesser extent the power companies. Bitcoin has made a funnel of money — about $2M a day — that mostly goes to people making chips that do absolutely nothing, while fuel is burned to calculate nothing. Yes, the miners are providing the backbone of Bitcoin, which I am not calling nothing, but they could do this with any fair, non-centralized lottery, whether it burned CPU or not. If we can think of one.

(I will note that some point out that the existing fiat money system also comes with a high cost, in printing and minting and management. However, that is not a makework cost, and even if Bitcoin is already more efficient, that doesn’t mean there should not be an effort to make it even better.)

CPU/GPU mining

Naturally, many people have been bothered by this for various reasons. A large fraction of the “alt” coins differ from Bitcoin primarily in the mining system. The first round of coins, such as Litecoin and Dogecoin, use a proof-of-work system which was much more difficult to solve with an ASIC. The theory was that this would make mining more democratic — people could do it with their own computers, buying off-the-shelf equipment. This has run into several major problems:

  • Even if you did it with your own computer, you tended to need to dedicate that computer to mining in the end if you wanted to compete
  • Because people already owned hardware, electricity became a much bigger cost component, and that waste of energy is even more troublesome than ASIC buying
  • Over time, mining for these coins moved to high-end GPU cards. This, in turn, made mining the main driver of demand for these GPUs, drying up the supply and jacking up the prices. In effect, the high-end GPU cards became like the ASICs — specialized hardware bought just for mining.
  • In 2014, vendors began advertising ASICs for these “ASIC proof” algorithms.
  • When mining can be done on ordinary computers, it creates a strong incentive for thieves to steal computer time from insecure computers (ie. all computers) in order to mine. Several instances of this have already become famous.

The last point is challenging. It’s almost impossible to fix. If mining can be done on ordinary computers, then they will get botted. In this case a thief will even mine at a rate that can’t pay for the electricity, because the thief is stealing your electricity too.

The failure of the pan-tilt camera in video calls

This year, we stayed with Kathryn’s family for the holidays, so I attended dinner in my own mother’s home via Skype. Once again, the technology was frustrating. And it need not be.

There were many things that could be better. Those of us who Skype regularly forget that there is still hassle for those not used to it. Setting up a good videoconferencing arrangement is still work. As I have found is always the case in a group-to-solos videoconference, the group folks do not care nearly as much about the conference as the remote solos, so a fundamental rule of design here is that if the remotes can do something, they should be the ones doing it, since they care the most. If there is to be UI, leave the UI to the remotes (who are sitting at computers and care) and not to the meeting-room locals. Many systems get this exactly backwards — they imagine the meeting room is the “master” and thus give it the complex UI.

In this family setting, however, the clearest problem for me is that no camera can show the whole room. It’s like sitting at the table unable to move your head, with blinders on. You can’t really be part of the group. You also have to be away from the table so everybody there can see you, since screens are only visible over a limited viewing angle.

One clear answer to this is the pan/tilt camera, which is to say a webcam with servo motors that allow it to look around. This technology is very cheap — you’ll find pan/tilt IP security cameras online for $30 or less, and there are even some low priced Chinese made pan/tilt webcams out there — I just picked another up for $20. I also have the Logitech Orbit AF. This was once a top of the line HD webcam, and still is very good, but Logitech no longer makes it. Logitech also makes the BCC950 — a $200 conference room pan/tilt webcam which has extremely good HD quality and a built-in hardware compressor for 1080p video that is superb with Skype. We have one of these, and it advertises “remote control” but in fact all that means is there is an infrared remote the people in the room can use to steer the camera. In our meetings, nobody ever uses this remote for the reason I specify above — the people in the room aren’t the motivated ones.
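
The missing piece, letting the remotes drive the camera, is mostly plumbing. On Linux, UVC pan/tilt cameras expose pan and tilt as V4L2 controls, so a small helper on the room PC could accept commands from the far end. A sketch; whether a given camera exposes pan_relative/tilt_relative, and in what units, varies by model:

```python
import socket
import subprocess

def move(pan_deg: int, tilt_deg: int, device: str = "/dev/video0"):
    # V4L2 pan/tilt controls are commonly in arc-seconds (1/3600 of a degree).
    subprocess.run(["v4l2-ctl", "-d", device,
                    "--set-ctrl", f"pan_relative={pan_deg * 3600}",
                    "--set-ctrl", f"tilt_relative={tilt_deg * 3600}"])

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("0.0.0.0", 9999))   # remotes send "pan,tilt" datagrams in degrees
while True:
    data, _ = srv.recvfrom(64)
    pan, tilt = (int(x) for x in data.decode().split(","))
    move(pan, tilt)
```

The point of the design is who holds the controls: the motivated remote participant, not the room.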

This is compounded by the fact that the old method — the audio conference speakerphone — has a reasonably well understood UI. Dial the conference bridge, enter a code, and let the remotes handle their own calling in. Anything more complex than that gets pushback — no matter how much better it is.

The RV of the future

Over the years, particularly after Burning Man, I’ve written posts about how RVs can be improved. This year I did not use a regular RV but rather a pop-up camping trailer. However, I thought it was a good time to summarize a variety of the features I think should be in every RV of the future.

Smart Power

We keep talking about smart power and smart grids but power is expensive and complex when camping, and RVs are a great place for new technologies to develop.

To begin with, an RV power system should integrate the deep cycle house batteries, a special generator/inverter system, smart appliances and even the main truck engine where possible.

Today the best small generators are inverter-based. Rather than generating AC directly from an 1800rpm motor and alternator, they have a variable-speed engine and produce the AC via an inverter. These are smaller, more efficient, lighter and quieter than older generators, and produce cleaner power. Today they are more expensive, but not more expensive than most RV generators. RV generators are usually sized at 3,600 to 4,000 watts in ordinary RVs — a size dictated by the spike of starting up the air conditioner compressor while something else, like the microwave, is running.

An inverter based generator combined with the RV’s battery bank doesn’t have to be that large. It can draw power for the surge of starting a motor from the battery. The ability to sustain 2,000 watts is probably enough, with a few other tricks. Indeed, it can provide a lot of power even with the generator off, though the generator should auto-start if the AC is to be used, or the microwave will be used for a long time.

By adding a data network, one can be much more efficient with power. For example, the microwave could just turn off briefly when the thermostat wants to start the AC’s compressor, or even the fans. The microwave could also know if it’s been told to cook for 30 seconds (no need to run generator) or 10 minutes (might want to start it.) It could also start the generator in advance of cooling need.
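
That coordination is a small priority scheme once the appliances share a bus. A sketch of the arbitration with invented device names and wattages, assuming the battery absorbs any brief surge during switchover:

```python
GENERATOR_MAX_W = 2000

loads = {"ac_compressor": {"watts": 1800, "priority": 1, "on": False},
         "microwave":     {"watts": 1100, "priority": 2, "on": True},
         "water_heater":  {"watts": 800,  "priority": 3, "on": True}}

def request_start(name: str) -> bool:
    """Shed lower-priority loads until `name` fits in the generator budget."""
    me = loads[name]
    budget = GENERATOR_MAX_W - sum(l["watts"] for l in loads.values() if l["on"])
    for l in sorted(loads.values(), key=lambda l: -l["priority"]):
        if budget >= me["watts"]:
            break
        if l["on"] and l["priority"] > me["priority"]:
            l["on"] = False            # e.g. pause the microwave briefly
            budget += l["watts"]
    me["on"] = budget >= me["watts"]
    return me["on"]

request_start("ac_compressor")  # water heater and microwave pause; AC starts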

If the master computer has access to weather data, it could even predict future power needs for heating fans and air conditioning, and run the generator appropriately. With a GPS database, it could also know the quiet hours of the campsite it’s in and respect them.
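
Scheduling could layer on top of the load logic; in this sketch the quiet-hours set and the weather flag stand in for the GPS-database and forecast lookups imagined above:

    # Sketch: run the generator ahead of forecast need, never in quiet hours.
    def should_run_generator(hour, battery_fraction, forecast_hot, quiet_hours):
        if hour in quiet_hours:
            return False                  # respect campsite quiet times
        if battery_fraction < 0.3:
            return True                   # recharge before batteries get low
        if forecast_hot and battery_fraction < 0.8:
            return True                   # bank charge before afternoon AC load
        return False

    QUIET = set(range(22, 24)) | set(range(0, 7))   # 10pm to 7am
    print(should_run_generator(hour=14, battery_fraction=0.5,
                               forecast_hot=True, quiet_hours=QUIET))  # True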

A modern RV should have all-LED lighting. Its power use is so low that the lights become a blip in power planning; only the microwave, AC and furnace fan make a real difference. The same goes for today’s TVs, laptops and media players, which all draw very few watts.

A smart power system could also help when plugging into shore power, particularly a standard 15 amp circuit. Such a circuit delivers only about 1,700 watts at 115 volts, which is not enough to start many ACs, let alone run the AC with anything else. With surge backup from the battery, an RV could plug into an ordinary outlet and act almost as if it had a high power connection.

To go further, for group camping, RVs should have the ability to form an ad-hoc power grid. This same ability is already desired in the off-grid world, so it need not be developed just for RVs. RVs able to accept all sorts of input power could also eventually get smart power from RV campsites. After negotiation, a campsite might offer 500 volts DC at 12 amps instead of 115 volts AC; that is 6,000 watts, which would take more than 50 amps at 115 volts, so even the largest dual-AC RVs could plug into small wires.

Augmented Reality as documentation and the "context" button

I’ve been a little skeptical of many augmented reality apps I’ve seen, feeling they were mostly gimmick and not actually useful.

I’m impressed by this new one from Audi, where you point your phone (iPhone only, unfortunately) at a feature on your car and get documentation on it. It’s an interesting answer to car user manuals as thick as the glove compartment, and to the complex UIs they describe.

Like so many apps, however, this one will suffer the general problem of the amount of time it takes to fumble for your phone, unlock it, invoke an app, and then let the app do its magic. Of course fumbling for the manual and looking up a button in the index takes time too.

I’ve advocated for a while that phones become more aware of their location, not just in the GPS sense but in the sense of “I’m in my car,” and know what apps to make very easy to access, even streamlining their use. This can include putting these apps right on the lock screen; there’s no reason to have to unlock the phone to use an app like this one. In fact, all the apps you use frequently in your car that don’t reveal personal info should appear on the lock screen when you get near the car, with some others just behind it. The device can know it is in the car via the car’s Bluetooth. (That Bluetooth can even tell you if you’re in another car of a different make, if you have a database mapping MAC addresses to car models.)

Bluetooth transmitters are so cheap, and with BT Low Energy they last a year on a watch battery, that one of the more compelling “Internet of Things” applications — a term that’s also often a gimmick — is to scatter these devices around the world to give our phones this accurate sense of place.
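
A sketch of the phone-side mapping, assuming the platform’s scanner hands us the Bluetooth addresses seen nearby (the actual scan API is platform-specific, and the address table and app lists here are invented):

    # Sketch: map nearby Bluetooth addresses to a "place" and pick apps to surface.
    KNOWN_BEACONS = {                  # hypothetical table built up by the phone
        "AA:BB:CC:11:22:33": "my_car",
        "AA:BB:CC:44:55:66": "office",
    }
    LOCKSCREEN_APPS = {
        "my_car": ["navigation", "car_manual", "music"],
        "office": ["calendar", "room_booking"],
    }

    def context_apps(seen_addresses):
        """Return the first recognized place and the apps to surface for it."""
        for addr in seen_addresses:
            place = KNOWN_BEACONS.get(addr)
            if place:
                return place, LOCKSCREEN_APPS.get(place, [])
        return None, []

    print(context_apps(["AA:BB:CC:11:22:33"]))  # ('my_car', ['navigation', ...])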

Some of this philosophy is expressed in Google Now, a product that goes the right way on many of these issues. Indeed, the Google Now cards are one of the more useful aspects of Glass, which is otherwise inherently limited in its user interface, making it harder to ask Glass things than to ask a phone or desktop.

The car app has some wrinkles, of course. Since you don’t always have an iPhone (or may not have your phone with you even if you own one), you still need the thick manual, though perhaps it can live in the trunk. And I will wager that some situations, like odd lighting, will make it not as fast as in the video.

By and large, pointing your phone at QR codes to learn more has not caught on, in part again because it takes time to get most phones to the point where they are scanning the code. Gesture interfaces can help, but you can only remember and parse a limited number of gestures, and many applications call out to be the special one. Still, there could be a special shake that means “look around in all the ways you can, and figure out if there is something in this location, time or camera view that I might want processed.” Constant looking eats batteries, which is why you need such a shake.
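
Shake detection itself is cheap; a minimal sketch, with an invented threshold over accelerometer magnitudes:

    # Sketch: fire a context scan only on a deliberate shake, to save battery.
    import math

    SHAKE_G = 2.5        # invented threshold, in g
    SHAKE_COUNT = 4      # spikes needed within the sample window

    def is_shake(samples):
        """samples: list of (x, y, z) accelerometer readings in g."""
        spikes = sum(1 for x, y, z in samples
                     if math.sqrt(x*x + y*y + z*z) > SHAKE_G)
        return spikes >= SHAKE_COUNT

    # A vigorous shake produces several high-magnitude spikes:
    print(is_shake([(0, 0, 1), (3, 1, 1), (-3, 0, 1), (2.5, 2, 0), (-3, 1, 0)]))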

Even though phones have slowly been losing their physical buttons, I’ve proposed putting one back: a physical button I call the “context” button. It would mean “figure out the local context, and offer me the things that might be particularly important in it.” This could offer many things:

  • Standing in front of a restaurant or shop, the reviews, web site or app of the shop
  • In the car, all the things you like in the car, such as maps/nav, the manual etc.
  • In front of a meeting room, the schedule for that room and ability to book it
  • At a tourist attraction, info on it.
  • In a hotel, either the ability to book a room, or if you have a room, hotel services

There are many contexts, but you can usually sort them so that the most local and the most rare come first. So if you are in a big place you visit frequently, such as the office complex where you work, the general functions for your company would not be high on the list unless you manually bumped them.
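
That sort could be as simple as scoring each candidate context by its size and your familiarity with it; the weights here are invented for illustration:

    # Sketch: rank contexts so the most local and least familiar come first.
    def rank_contexts(contexts, visit_counts):
        """contexts: list of (name, radius_m); smaller radius = more local.
        visit_counts: how often the user has been in each context."""
        def score(ctx):
            name, radius_m = ctx
            familiarity = visit_counts.get(name, 0)
            return radius_m + 50 * familiarity   # penalize big, familiar places
        return sorted(contexts, key=score)

    contexts = [("office_complex", 500), ("meeting_room_4", 10),
                ("coffee_shop", 30)]
    visits = {"office_complex": 400, "meeting_room_4": 2, "coffee_shop": 5}
    print(rank_contexts(contexts, visits))
    # The meeting room and rarely-visited shop outrank the familiar complex.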

Of course, one goal is that car UIs will become simpler and self-documenting as cars get screens. Buttons will still do the main functions you perform all the time — and which people already understand — but screens will handle the more obscure things you might otherwise need to look up in the manual, documenting them as they go. You obviously can’t do something that requires a trip to the manual while driving.

There is probably a trend that the devices in our lives with lots of buttons and complex controls and modes, like home electronics, cars and some appliances, will move to having screens in their UIs and thus not need the augmented reality.

RAID, backyard backup and the future of backup

Had my second RAID failure last week. In the end things were OK, but the reality is that many RAID implementations are much more fragile than they should be. Write failures on a drive caused the system to hang. A hard reset caused the RAID to be marked dirty, which meant it would not boot until falsely marked clean (and a few other hoops), leaving it with some minor filesystem damage that was repairable. Still, I believe that a proper RAID-like system should have as its maxim that the user is never worse off for having built a RAID than if they had not. That is not true today, both due to the fragility of the systems, and the issues I have outlined before with deliberately replacing a disk in a RAID, where the rebuild does not make use of the still-good but aging old disk.

A few years ago I outlined a plan for disks to come as two-packs for easy, automatic RAID because disks are so cheap that everybody should be doing it. The two-pack would have two SATA ports on it, but if you only plugged in one, it would look like a single disk, and be a RAID-1 inside. If you gave it a special command, it could look like other things, including a RAID-0, or two drives, or a JBOD concatenation. If you plugged into the second port it would look like two disks, with the RAID done elsewhere.

I still want this, but RAID is not enough. It doesn’t save you from file deletion, or from destruction of the entire system. The obvious future trend is network backup, which is both backup and offsite. The continuing issue with network backup is that some people (most notably photographers and videographers) generate huge amounts of data. I can come back from a weekend with 16 GB of new photos, and that’s a long slog over DSL with limited upstream. To work well, network backup also needs to understand databases, as a common database file might be gigabytes and change every time there is a minor update to a single record. (Some block-level incrementalism can work here even if the database format is not directly understood.)
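
That block-level incrementalism is easy to sketch: hash fixed-size blocks of the file and send only the blocks whose hashes changed since the last manifest (real tools like rsync use cleverer rolling checksums):

    # Sketch: block-level incremental backup. Only changed blocks are sent.
    import hashlib

    BLOCK = 1 << 20   # 1 MB blocks

    def block_hashes(path):
        hashes = []
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK):
                hashes.append(hashlib.sha256(chunk).hexdigest())
        return hashes

    def changed_blocks(path, old_hashes):
        """Return (indexes of blocks that differ, new manifest)."""
        new = block_hashes(path)
        return [i for i, h in enumerate(new)
                if i >= len(old_hashes) or h != old_hashes[i]], new

    # Usage: keep `manifest` from the last run; upload only the listed blocks.
    # dirty, manifest = changed_blocks("photos.db", manifest)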

Network backup should also be automatic. There are already peer-to-peer network backup systems that make use of the disks of friends or strangers (encrypted, of course), but it would be nice if this could “just happen” on any freshly installed computer unless you turn it off. The user must keep the key stored somewhere safe, which is not zero-UI, though if all they want is protection against file deletion and rollback, they can get away without it.

Another option that might be interesting would be the outdoor NAS. Many people now like to use NAS boxes over gigabit networks. This is not as fast as SATA with a flash drive, or RAID, or even modern spinning disk, but it’s fast enough for many applications.

An interesting approach would be a NAS designed to be placed outdoors, away from the house, such as in the back corner of a yard, so that it would survive a fire or earthquake. The box would be waterproof and modestly fireproof, but ideally located somewhere a fire is unlikely to reach. It could be powered over Ethernet, or have its own power and use WiFi (in which case it is suitable only for backup, not as a live NAS).

This semi-offsite backup would be fast and cheap (network storage tends to be much more expensive than local drives). It would be encrypted, of course, so that nobody can steal your data, and the encryption would be done in the clients, not the NAS, so even somebody who taps the outside wire would get nothing.
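
A minimal sketch of that client-side encryption, using the Fernet recipe from the third-party Python cryptography package; the NAS only ever sees ciphertext:

    # Sketch: encrypt in the client before anything leaves for the backyard NAS.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # keep this somewhere safe, NOT on the NAS
    f = Fernet(key)

    data = b"...photo bytes..."        # stand-in for a real file's contents
    ciphertext = f.encrypt(data)
    # Send `ciphertext` to the NAS; a wiretap or a stolen box yields nothing.

    plaintext = f.decrypt(ciphertext)  # restore path: the client decrypts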

This semi-offsite backup could be used in combination with network backup. Large and new files would be sent immediately to the backyard backup; the most important files (or all of them, just much more slowly) could then go to network backup.

A backyard backup could also be shared by neighbours, especially on wifi, which might make it quite cost effective. Due to encryption, nobody could access their neighbour’s data.

If neighbours are going to cooperate, this can also be built by simply sharing servers or NAS boxes between two or more houses. This provides decent protection and avoids having to be outside, though there is the risk that some fires burn down multiple houses, depending on the configuration.

A backyard backup would be so fast that many would reverse what I said above and see no need for RAID. Files would be mirrored to the backyard backup within seconds or minutes. RAID would only be needed for systems that must not even burp on a disk failure (a rare need in the home), or that must not lose even a few minutes of data.

The laptop in the tablet world

I have owned laptops for decades, and I’ve always gone for the “small and light” class because, as a desktop user, my laptop is only for travel, and ease of carrying is thus very important. Once I arrive I have envied the larger screens, better keyboards and other features of the bigger laptops people carry, but I’ve generally been happy with the decision.

Others have gone for “desktop replacement” laptops, which are powerful, big and heavy. Those folks don’t have a desktop; at most they plug their laptop into an external monitor and other peripherals at home. The laptop is a bitch to carry, but of course all their files come with it.

Today, the tablet is changing that equation. I now find that when I am going into a situation where I want a minimal device that’s easy to carry, the tablet is the answer, better yet the tablet plus a Bluetooth keyboard. I even carry a keyboard that’s a fair bit larger than the tablet, but still very light compared to a laptop. When I am in a meeting, or sitting in an event, I am not going to do the things I need the laptop for. Well, not as much, anyway. On the airplane, the tablet is usually quite satisfactory — in fact better when in coach, though technically the Bluetooth keyboard is not allowed on a plane. (My tablet can plug in a USB keyboard if needed.)

Planes are a particular problem. It’s not safe to check LCD screens in your luggage, so any laptop screen has to come aboard with you, and this is a pain if the computer is heavy.

With the tablet dealing with the “I want small and light” situations, what is the right laptop answer?

One obvious solution is the “convertible tablet” computer being offered by various vendors: a laptop whose screen is a removable tablet. These tend to be Windows devices, and somewhat expensive, but the approximate direction is correct.

Another option would be to break the laptop up into 3 or more components:

  • The tablet, running your favourite tablet OS
  • A keyboard, of your choice, which can be carried easily with the tablet for typing-based applications; able to hold the tablet and connect to it in a way permitted on a plane, with a touchpad or a connection for a mouse.
  • A “block,” whose form factor is now quite variable, containing the rest of the hardware.