Archives

PR2 robots and open source

I don’t often write about robots that don’t go on roads, but last night I stopped by Willow Garage, the robotics startup created by my old friend Scott Hassan. Scott is investing in building open robotics platforms and giving much of the work away free to the world, because he thinks progress in robotics has been far too slow. Last night they unveiled their beta PR2 robots and gave 11 of them to teams from 11 different schools and labs. Those institutions will all be trying to do something creative with the robots, just as a Berkeley team quickly taught one to fold towels a few months ago.

I must admit, as they marched out the 11 robots and had them do a synchronized dance, there was a moment (about 2 minutes 20 seconds into that video) when it reminded me of a scene from some techno-thriller, where the evil overlord unveils his new robots to an applauding crowd, and the robots then turn and kill all the humans. Fortunately this did not happen. The real world is very different, and these robots will do a lot of good. They have a lot of processing power, various nice sensors, and two arms with 7 degrees of freedom. They run ROS, an open source robot operating system which now runs on many other robots.
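ROS structures a robot as a collection of nodes that exchange messages over named topics. As a rough illustration of that style, here is a minimal Python (rospy) node sketch; the topic name, rate and velocity are generic placeholders, not the PR2's actual interface:

    # Minimal ROS (rospy) node sketch -- topic, rate and speed are illustrative,
    # not taken from the PR2's real configuration.
    import rospy
    from geometry_msgs.msg import Twist

    def main():
        rospy.init_node('demo_driver')
        pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rate = rospy.Rate(10)          # publish at 10 Hz
        while not rospy.is_shutdown():
            msg = Twist()
            msg.linear.x = 0.1         # creep forward slowly
            pub.publish(msg)
            rate.sleep()

    if __name__ == '__main__':
        main()

Because every sensor and actuator is just another topic, code like this written for one robot can often be reused on another, which is much of the appeal of an open platform.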

I was interested because I have proposed that an open simulator platform for robocars could also spur development from people without the budget to build their own robocars (and crash them during testing). A robocar test vehicle will run at least $150,000 today and will get damaged in development, and that’s beyond the reach of small developers. The PR2 beta models cost more than that, but Willow Garage’s donations will let these teams experiment in personal robotics.

Of course, it would be nice for robocars if there were an inexpensive robocar that teams could get and test. Right now, though, everybody wants a sensor as nice as the $75,000 Velodyne LIDAR that powered most of the top competitors in the DARPA Urban Challenge, and you can’t get that cheaply yet — except perhaps in a simulator.

When is "opt out" a "cop out?"

As many expected would happen, Mark Zuckerberg published an op-ed column with a mild about-face on Facebook’s privacy changes. Coming soon, you will be able to opt out of having your basic information defined as “public” and exposed to outside web sites. Facebook has a long pattern of introducing a new feature with major privacy issues, being surprised by a storm of protest, and then offering a fix which helps somewhat, but often leaves things more exposed than they were before.

For a long time, the standard “solution” to privacy exposure problems has been to allow users to “opt out” and keep their data more private. Companies like to offer it, because the reality is that most people have never been exposed to a bad privacy invasion, and don’t bother to opt out. Privacy advocates ask for it because compared to the alternative — information exposure with no way around it — it seems like a win. The companies get what they want and keep the privacy crowd from getting too upset.

Sometimes privacy advocates will say that disclosure should be “opt in” — that systems should keep information private by default, and only let it out with the explicit approval of the user. Companies resist that for the same reason they like opt-out. Most people are lazy and stick with the defaults. They fear if they make something opt-in, they might as well not make it, unless they can make it so important that everybody will opt in. As indeed is the case with their service as a whole.

Neither option seems to work. If there were some way to have an actual negotiation between users and a service, something better in the middle could be found. But we have no way to make that negotiation happen. Even if companies were willing to negotiate their “I Agree” click contracts, there is no way they would have the time to do it.  read more »

Review of Everyman HD 720p webcam and Skype HD calling

I’ve been interested in videoconferencing for some time, both what it works well at, and what it doesn’t do well. Of late, many have believed that quality makes a big difference, and HD systems, such as very expensive ones from Cisco, have been selling on that notion.

A couple of years ago Skype added what they call HQ calling — 640 x 480 at up to 30fps. That’s the resolution of standard broadcast TV, though due to heavy compression it never looks quite that good. But it is good and is well worth it, especially at Skype’s price: free, though you are well advised to get a higher end webcam, which they initially insisted on.

So there was some excitement about the new round of 720p HD webcams coming out this year, with support for them in Skype, though only in the Windows version. This new generation of cams has video compression hardware in the webcam. Real-time compression of 1280x720 video requires a lot of CPU, so this is a very good idea. In theory almost any PC can send HD from such a webcam with minimal CPU usage. Even the “HQ” 640x480 live video requires a fair bit of CPU, and initially Skype insisted on a dual-core system if you wanted to send it. Receiving 720p takes far less CPU, but still enough that Skype refuses to do it on slower computers, such as a 1.6 GHz Atom netbook. Such netbooks are able to play stored 720p videos, but Skype judges them unsuitable for playing this. On the other hand, modern video chips (such as all Nvidia 8xxx and above) contain hardware for decoding H.264 video and can play this form of video readily, but Skype does not support that.
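A rough back-of-envelope calculation (my own figures, not Skype's) shows why compressing in the camera matters: the raw sensor stream is hundreds of times larger than what actually goes over the wire, and squeezing it down in real time in software is what eats the CPU.

    # Back-of-envelope numbers -- illustrative only, not Skype's figures.
    width, height, fps, bits_per_pixel = 1280, 720, 30, 24
    raw_bps = width * height * fps * bits_per_pixel   # ~663 Mbit/s uncompressed
    call_bps = 1.5e6                                  # a plausible HD call bitrate
    print("raw video:      %.0f Mbit/s" % (raw_bps / 1e6))
    print("compressed:     %.1f Mbit/s" % (call_bps / 1e6))
    print("compression:   ~%d:1" % round(raw_bps / call_bps))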

The other problem is bandwidth. 720p takes a lot of it, especially when it must be generated in real time. Skype says that you need 1.2 megabits per second for HD, and in fact you are much better off if you have 2 or more. On a LAN, it will use about 2.5 megabits. Unfortunately, most DSL customers don’t have a megabit of upstream and can’t get it. In the 90s, ISPs and telcos decided that most people would download far more than they uploaded, and designed DSL with limited upload capacity in order to get more download. The latest cable systems using DOCSIS 3 are also asymmetric, but offer as much as 10 megabits upstream if you pay for it, and 2 megabits upstream to base customers. HD video calling may push more people into cable as their ISP.  read more »

BigDog, and walking Robocars

Last week, I attended a talk by Marc Raibert, the former MIT professor who founded Boston Dynamics, the makers of the BigDog four-legged walking robot. If you haven’t seen the various videos of BigDog you should watch them immediately, as this is some of the most interesting work in robotics today.

Walking pack robots like BigDog have a number of obvious applications, but at present they are rather inefficient. BigDog is powered by a two-stroke engine that drives hydraulics. That works well because the legs don’t need engines but can exert a lot of force. However, its fuel consumption is in the range of 2 gallons per mile, though this is just a prototype. It is more efficient on flat terrain and pavement, but of course wheels are vastly more efficient there. As efficient as animals are, wheeled vehicles are better if you don’t make them as heavy as tanks and SUVs.

BigDog walks autonomously but today is steered by a human, or, in newer versions, can follow a person down a trail, stepping where she stepped. In the future they want to make an autonomous delivery robot that can be told to take supplies to troops in the field, or carry a wounded soldier home.

I wondered if BigDog isn’t trying too hard to be a mule, carrying all the weight up high. This makes it harder for it to do its job. If it could just tow a sledge (perhaps a container with a rounded teflon bottom and some low-profile or retractable wheels) it might be able to haul more weight, particularly because it could pay out line while negotiating something particularly tricky and then, once stable again, reel the line back in. This would not work if you had to go through boulders that might catch the trailer, but for many forms of terrain it would be fine. Indeed, Boston Dynamics wants to see if this can work. On the other hand, they did not accept my suggestion that they put red dye in the hydraulic fluid so that it spurts red blood if damaged or shot.

The hydraulic design of BigDog made me wonder about applications to robocars. In particular, it seems as though it will be possible to build a light robocar that has legs folded up under the chassis. When such a robocar reached the edge of the road, it could put down its legs and climb stairs, go over curbs, and even go down dirt paths and rough terrain. At least a lightweight single-person robocar or deliverbot might do this.  read more »

Mini roads for robocars

At the positive end of my prediction that robocars will enable people to travel in “the right vehicle for the trip,” and given that most trips are short urban ones, it follows that most robocars, if we are efficient, will be small, light vehicles meant for one or two people, with a smaller number of larger ones for 4-5 people. Two-person cars can even be face to face, allowing them to be under 5’ wide, though larger ones will be as wide as today’s cars, with some as big as vans, RVs and buses.

Small, lightweight vehicles are not just greener than transit; they also require far less expensive roads. While the initial attraction of robocars is that they can provide private, automated, efficient transportation without any new infrastructure, eventually we will begin building new developments with robocars in mind. Various estimates I have seen for multi-use paths suitable for people, bikes and golf carts range from about $100K to $200K per mile, though I have heard of projects which, thanks to the wonders of government contracting, soar to $1M per mile. On the other hand, typical urban streets cost $2M to $3M per mile, an order of magnitude more.

Consider a residential robocar block. It might well be served by a single 10’ lane for lightweight use. That lane might run along the backs of the houses — such back-alley approaches are found in a number of cities, and people love them since the garage (if there is one) does not dominate the front of the home. It might also be in front of the house. New construction could go either way. Existing areas might decide to reclaim their street as a block park or more land for the homeowners, with a robocar street, sidewalk and bike path where the road used to be.

We only need a single lane on most streets, though the desire to admit 8’-wide vehicles means the lane would effectively provide two lanes for the narrower vehicles. The lane would have no fixed direction; rather, it would be controlled by a local computer, which would tell incoming vehicles from which direction to enter the lane and command waiting vehicles to get out of the way. Small widened spots or other temporary holding spots would readily let cars pass through even if another vehicle is doing something.
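To make that concrete, here is a toy sketch of such a lane controller — entirely my own invention, not any real protocol — granting entry in the currently flowing direction and queuing vehicles that want to go the other way until the lane clears:

    # Toy single-lane controller sketch -- hypothetical, for illustration only.
    from collections import deque

    class LaneController:
        def __init__(self):
            self.direction = None     # direction currently flowing, or None if empty
            self.active = set()       # vehicles currently on the lane
            self.waiting = deque()    # (vehicle_id, direction) queued to enter

        def request_entry(self, vehicle_id, direction):
            """A vehicle asks to enter; it is admitted or told to wait."""
            if self.direction in (None, direction):
                self.direction = direction
                self.active.add(vehicle_id)
                return "enter"
            self.waiting.append((vehicle_id, direction))
            return "wait"             # e.g. hold at a widened passing spot

        def report_exit(self, vehicle_id):
            """A vehicle reports leaving; reverse the flow if others are waiting."""
            self.active.discard(vehicle_id)
            if not self.active and self.waiting:
                vid, direction = self.waiting.popleft()
                self.direction = direction
                self.active.add(vid)
                return {vid: "enter"}
            if not self.active:
                self.direction = None
            return {}

A real controller would also handle timeouts, emergencies and pedestrians, but the point is that the coordination is simple enough to run on a cheap local box rather than requiring a second lane of pavement.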

You would not need a garage for your robocar, since you can store it anywhere nearby that you find space, or hire it out when you don’t need it. You might not even own a robocar, in which case you certainly don’t need a garage to store one. However, you probably will want a “delivery room,” which is something like a garage with a driveway up to it. Deliverbots could use this room — they would be given the code to open the door — to drop off deliveries for you in a protected place. You could also have the “room of requirement” I describe on the deliverbots page.

This plan leaves out one important thing — heavy vehicles. We still need occasional heavy vehicles. They will deliver large and heavy items to our houses, ranging from hot tubs to grand pianos. But even heavier are the construction machines used in home construction and renovation, ranging from cranes to earth movers. How can they come in, when their weight would tear up a light-duty road?

The answer is, not surprisingly, in robotics. The heavy trucks, driven by robots, will be able to place their tires quite precisely. We can engineer our robocar paths to include two heavy-duty strips with deeper foundations and stronger asphalt, able to take the load.

Alternatively, since the tires of the trucks will be farther apart than those of our robocars, they might just run their tires on either side of a narrower path, essentially on its shoulders. These shoulders could be made not from heavy-duty materials but from cheap ones, like gravel or dirt. The trucks would move only very slowly on these residential blocks. If they did disturb things there, repair would be easy, and in fact it’s not too much of a stretch to predict either a road repair robot or a small road repair truck with a construction worker that moves in when problems are detected.

The volume and frequency of heavy trucks can be controlled. Their use can be avoided in most cases at times when the pavement is more fragile, such as when the ground is soaked or freezing. If they do damage the road, repair can be done swiftly — but in fact robocars can also be programmed both to go slowly in such alleys (as they already would) and to avoid any potholes until the gravel robot fills them. Robocars will be laser-scanning the road surface ahead of them at all times to avoid such things in other areas.

I keep coming up with dramatic savings that robocars offer, and the numbers, already in the trillions of dollars and gigatons of CO2, seem amazing, but this is another one. Urban “local roads” are 15% of all U.S. road mileage, and rural local roads are 54%. (There are just over 2.6 million paved road-miles in the USA.) To add to the value, road construction and asphalt are major greenhouse gas sources.
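As a rough sense of scale, here is a back-of-envelope calculation using only the figures quoted above; the per-mile costs are midpoints of the ranges I cited, and only urban local roads are costed since rural local roads are cheaper to build than urban streets:

    # Back-of-envelope using figures quoted in this post; purely illustrative.
    total_paved_miles = 2.6e6
    urban_local_miles = 0.15 * total_paved_miles   # ~390,000 miles
    rural_local_miles = 0.54 * total_paved_miles   # ~1,400,000 miles

    urban_street_cost = 2.5e6    # midpoint of the $2M-$3M per mile figure
    light_path_cost   = 0.15e6   # midpoint of the $100K-$200K per mile figure
    saving_per_urban_mile = urban_street_cost - light_path_cost

    print("urban local road miles: %d" % urban_local_miles)
    print("rural local road miles: %d" % rural_local_miles)
    print("urban rebuild saving: $%.2f trillion" %
          (urban_local_miles * saving_per_urban_mile / 1e12))

Even counting the urban share alone, the difference works out to nearly a trillion dollars of road construction, before touching the much larger rural mileage.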

To extend this further, I speculate on what might happen if small robocars had legs, like BigDog.

Volvo collision avoidance fails and other things that will happen again

Last week, Volvo was demoing some new collision avoidance features in their S60. I’ve talked about the S60 before, as it surprised me by putting pedestrian detection into a car before I expected it to happen. Unfortunately, in an extreme case of the demo disease known to all computer people, somebody made an error with the battery, and in front of a crowd of press, the car smashed into the truck it was supposed to avoid. The Wired article links to a video.

Poor Volvo, having this happen in front of all the press. Of course, their system is meant to be used in human-driven cars, warning the driver and braking if the driver fails to act — not in a self-driving vehicle. And they say that had there been a driver, there would have been an indication that the system was not operating.

While this mistake is the result of a lack of maturity in the technology, it is important to realize that as robocars are developed there will be crashes, and some of the crashes will hurt people and a few will quite probably kill people. It’s a mistake to assume this won’t happen, or not to plan for it. The public can be very harsh. Toyota’s problems with their car controllers (if that’s where the problems are — Toyota claims they are not) have been a subject of ridicule for what was (and probably still is) one of the world’s most respected brands. The public asks, if programmers can’t program simple parts of today’s cars, can they program one that does all the driving?

There are two answers to that. First of all, they can and do program computerized parts of today’s cars all the time and by and large have perfect safety records.

But secondly, no they can’t make a complete driving system perfectly safe, certainly not at first. It is a complex problem and we’ll wait a long time before the accident rate is zero. And while we wait, human drivers will kill millions.

Our modern society has always had a tough time with that trade-off. Of late we’ve come to demand perfect safety, though it is impossible. Few new products are allowed out if it is known that they will have any death rate due to their own flaws, even if those flaws are not known specifically but are highly likely to exist in some fashion. American juries, faced with minutes of a meeting where the company decided to “release the product, even though predictions show that bugs will kill X people,” will punish the company nastily, even though the alternative was “don’t release, and have human drivers kill 10X people.” The 9X who were saved will not be in the courtroom. This is one reason robocars may arise outside the USA first.

Of course, there might be cases the other way. A drunk who kills somebody when he could have taken a robocar might get a stiffer punishment. A corporation that had its employees drive when robotic systems were clearly superior might face a nasty judgment — but that would require that it was OK to have the cars on the road in the first place.

But however this plays out, developers must expect there will be bugs, and bugs with dire consequences. Nobody will want those bugs, and all the injuries will be tragic, but so is being too cautious about deployment. Can the USA figure out a way to make that happen?

The peril of the Facebook anti-privacy pattern

There’s been a well-justified storm about Facebook’s recent privacy changes. The EFF has a nice post outlining the changes in privacy policies at Facebook, which inspired this popular graphic showing those changes.

But the deeper question is why Facebook wants to do this. The answer, of course, is money, but in particular it’s because the market is assigning a value to revealed data. This force seems to push Facebook, and services like it, into wanting to remove privacy from their users in a steadily rising trend. Social network services often begin with decent privacy protections, both to avoid scaring users (when gaining users is the only goal) and because they have little motivation to do otherwise. The old world of PC applications tended to have strong privacy protection (by comparison) because data stayed on your own machine. Software that exported it got called “spyware” and tools were created to root it out.

Facebook began as a social tool for students. It even promoted the fact that those not at a school could not see in, could not even join. When this changed (for reasons I will outline below), older members were shocked at the idea that their parents and other adults would be on the system. But Facebook decided, correctly, that excluding them was not the path to being #1.  read more »

Data Hosting architectures and the safe deposit box

With Facebook seeming to declare some sort of war on privacy, it’s time to expand the concept I have been calling “Data Hosting” — encouraging users to have some personal server space where their data lives, and bringing the apps to the data rather than sending your data to the companies providing interesting apps.

I think of this as something like a “safe deposit box” that you can buy from a bank. While not as sacrosanct as your own home when it comes to privacy law, it’s pretty protected. The bank’s role is to protect the box — to let others into it without a warrant would be a major violation of the trust relationship implied by such boxes. While the company owning the servers that you rent could violate your trust, that’s far less likely than 3rd party web sites like Facebook deciding to do new things you didn’t authorize with the data you store with them. In the case of those companies, it is in fact their whole purpose to think up new things to do with your data.

Nonetheless, building something like Facebook using one’s own data hosting facilities is more difficult than the way it’s done now. That’s because you want to do things with data from your friends, and you may want to combine data from several friends to do things like search your friends.

One way to do this is to develop a “feed” of information about yourself that is relevant to friends, and to authorize friends to “subscribe” to this feed. Then, when you update something in your profile, your data host would notify all your friends’ data hosts about it. You need not notify all your friends, or tell them all the same thing — you might authorize closer friends to get more data than you give to distant ones.  read more »
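As a minimal sketch of what such a tiered feed could look like — the field names, tiers and URLs here are hypothetical, purely to illustrate notifying friends’ hosts with different levels of detail:

    # Hypothetical data-hosting feed sketch; names, tiers and URLs are illustrative.
    subscribers = {
        # friend's data-host inbox -> access tier granted to that friend
        "https://alice.example.net/inbox": "close",
        "https://bob.example.org/inbox": "distant",
    }

    def filter_update(update, tier):
        """Strip fields a given tier is not authorized to see."""
        allowed = {
            "close":   {"field", "old", "new", "note"},
            "distant": {"field", "new"},
        }[tier]
        return {k: v for k, v in update.items() if k in allowed}

    def notify_friends(update, post):
        """Push a profile change to each subscribed friend's data host."""
        for host_url, tier in subscribers.items():
            post(host_url, filter_update(update, tier))

    # Example: announce a changed city; only close friends see the private note.
    notify_friends(
        {"field": "city", "old": "Toronto", "new": "San Francisco",
         "note": "moved for work"},
        post=lambda url, payload: print(url, payload),
    )

The key property is that the filtering happens on your own host, before anything leaves it, rather than being a promise made by a third-party site after it already has the data.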

Review: Billy: The Early Years (DVD and book)

I have written in the past about my late father’s careers, most of which are documented in his memoirs and other places. In spite of being almost 60 years in the past, his religious career still gets a lot of attention, as I recently reported in the story of the strange exhibit about him in the infamous Creation Museum.

Recently, two movies have been released in which he is a character. I watched Billy: The Early Years, a movie about the early life of Billy Graham told from the supposed viewpoint of my father on his deathbed. Charles Templeton and Billy Graham were best friends for many years, touring and preaching together, and the story of how my father lost his faith as he studied more, while Graham grew closer to his, has become a popular story in the fundamentalist community.

While it doesn’t say that it’s fictional, this movie portrays an entirely invented interview with Charles Templeton, played by Martin Landau, in a hospital bed in 2001, shortly before his death. (In reality, while he did have a few hospital trips, he spent 2001 in an Alzheimer’s care facility and was not coherent most of the time.) Fleshed out in the novelization, the interview is supposedly conducted on orders from an editor trying to find some dirt on Billy Graham. Most of the movie consists of flashbacks to Graham’s early days (including times before they met) and their time together preaching and discussing the truth of the Bible.

It is disturbing to watch Landau’s portrayal of my father, as well as that by Mad Men’s Kristoffer Polaha as the younger version. I’m told it is always odd to see somebody you know played by an actor, and no doubt this is true. However, more disturbing is the role they have cast him in for this allegedly true story — namely Satan. As I believe is common in movies aimed at the religious market, Graham’s story is told in what appears to be an allegory of the temptation of Christ. In the film, Graham is stalwart, but my father keeps coming to him with doubts about the Bible. The lines written for the actors are based partly on his writings and partly on invention, and as such don’t sound at all like he spoke in real life, but they are there, I think, to play the part of the attempted temptation of the pure man.  read more »