Robocar challenge from Italy to China

Today marks the start of a remarkable robocar trek from Italy to China. The team behind VisLab’s International Autonomous Challenge starts in Italy and will travel all the way to Shanghai in electric autonomous vehicles, crossing borders, handling rough terrain and going over roads for which there are no maps, in areas where there is no high-accuracy GPS.

Fully autonomous driving over this route would be impossible today, so they are solving that problem with a lead car which drives mostly autonomously, but sometimes has humans take over, particularly in areas where there are no maps. This vehicle can be seen by the other vehicles and also transmits GPS waypoints to them, so they can follow those waypoints and use their sensors to fill in the rest. The other vehicles will also have humans to correct them in case of error, and the amount of correction needed will be recorded. Some of the earliest robocar experiments in Germany used this approach, driving the highways with occasional human correction. (The DARPA grand challenges required empty vehicles on a closed course, and no human intervention, except the kill switch, was allowed.) This should be a tremendous challenge, with much learned along the way about what works and what doesn’t. Since VisLab is a computer vision lab, its cars appear to rely on vision a lot more than other robocars, which have gone LIDAR all the way. (There are LIDARs on the VisLab cars, but not as fancy as the 64-line Velodyne.)
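In code terms, the follow-the-leader scheme might look something like this toy sketch (all names and thresholds here are my own invention, not VisLab’s actual software): the follower steers toward the next waypoint transmitted by the lead car, and any large disagreement between the autonomy and the safety driver gets logged as a correction.

```python
import math

def heading_to(p, q):
    """Bearing in radians from point p to point q (local x/y coordinates)."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def follow_step(pos, heading, waypoints, reached=2.0):
    """One control step: return (steering error, remaining waypoints).

    pos/heading come from the follower's own sensors; waypoints were
    transmitted by the lead vehicle. Waypoints closer than `reached`
    metres are considered passed and dropped.
    """
    while waypoints and math.dist(pos, waypoints[0]) < reached:
        waypoints = waypoints[1:]
    if not waypoints:
        return 0.0, waypoints
    error = heading_to(pos, waypoints[0]) - heading
    # normalize the angle into (-pi, pi]
    error = (error + math.pi) % (2 * math.pi) - math.pi
    return error, waypoints

# The trek also records how much human correction each follower needs:
correction_log = []

def record_correction(t, auto_steer, human_steer, threshold=0.05):
    """Log moments where the safety driver meaningfully overrode the autonomy."""
    if abs(human_steer - auto_steer) > threshold:
        correction_log.append((t, human_steer - auto_steer))
```

The interesting research output is arguably `correction_log`: where and how often the autonomy needed help is exactly the data this kind of trek can gather.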

They are using electric cars to send a green message. While I do believe that the robocars of the future will indeed be electric, and that self-recharge is a crucial element of the value of robocars, I am not as fond of this decision. “One thing at a time” is the philosophy that makes sense, so I think it’s better to start with proven and easy-to-refuel gasoline cars and get the autonomy working, then improve what’s underneath. But this is a minor quibble about an exciting project.

They have a live tracking tool (not up yet) and a blog you can follow.

More robocar news to come. Yesterday I had an interesting ride in Junior (Stanford’s entry in the DARPA Urban Challenge) and we trusted it enough to have Kathryn stand in the crosswalk while Junior drove up to it, then stopped and waited for her to walk out of it.

Can your computer be like your priest?

I’ve had a blogging hiatus of late because I was heavily involved last week with Singularity University, a new teaching institution about the future created by NASA, Google, Autodesk and various others. We’ve got 80 students, most from outside North America, here for the summer graduate program, and they are quite an interesting group.

On Friday, I gave a lecture to open the policy, law and ethics track and I brought up one of the central questions — should we let our technology betray us? Now our tech can betray us in a number of ways, but in this case I mean something more literal, such as our computer ratting us out to the police, or providing evidence that will be used against us in court. Right now this is happening a lot.

I put forward the following challenge: In history, certain service professions have been given a special status when it comes to being forced to betray us. Your lawyer, your doctor and your priest must keep most of what you tell them in confidence, and can’t be compelled to reveal it in court. We have given them this immunity because we feel their services are essential, and that people might be afraid to use them if they feared they could be betrayed.

Our computers are becoming essential too, and even more intimately entangled with our lives. We’re carrying our cell phone on our body all day long, with its GPS and microphone and camera, and we’re learning that it is telling our location to the police if they ask. Soon we’ll have computers implanted in our bodies — will they also betray us?

So can we treat our personal computer like a priest or doctor? Sadly, while people we trust have been given this exemption, technology doesn’t seem to get it. And there may be a reason, too. People don’t seem as afraid to disclose incriminating data to their computers as they are of disclosing it to other people. Right now, we know that people can blab, but we don’t seem to appreciate how much computers can blab. If we do, we’ll become more afraid to trust our computers and other technology, which hurts their value.

Can the ethics that developed around the trusted professions move to our technology? That’s for the future to see.

Using the phone as its own mouse, and trusting the keyboard

I’ve written a bunch about my desire to be able to connect an untrusted input device to my computer or phone, so that hotels and other locations could offer both a connection to the in-room HDTV as a monitor and a usable keyboard. This would let one travel with small devices like netbooks, tablet computers and smart phones yet still use them for serious typing and UI work while in the hotel or guest area.

I’ve proposed that the connection from device to monitor be wireless. This would make it not very good for full-screen video, but it would be fine for web surfing, email and the like. It would also allow us to use the phone as its own mouse, either by putting a deliberate mouse-style sensor on the back, or by using the camera on the back of the phone as a reader of the surface. (A number of interesting experiments have shown this is quite doable if the camera can focus close and an LED lights up the surface.) This provides a mouse which is more inherently trustable, and buttons on the phone (or on its touchscreen) can be the mouse buttons. This doesn’t work for tablets and netbooks — for them you must bring your own mini-mouse or use the device as a touchpad. I am still a fan of the “trackpoint” nubbins, and they can also make very small but usable mice.
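As a rough illustration of the camera-as-mouse idea (my own toy sketch, not any shipping product’s algorithm): compare successive close-up frames of the surface by exhaustive block matching, and report the best-matching shift as mouse motion.

```python
def frame_shift(prev, curr, max_shift=3):
    """Estimate (dx, dy) surface motion between two small grayscale
    frames (lists of lists of ints): the shift that minimizes the mean
    absolute difference over the overlapping region wins."""
    h, w = len(prev), len(prev[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad = count = 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        sad += abs(prev[y][x] - curr[sy][sx])
                        count += 1
            score = sad / count
            if best is None or score < best:
                best, best_shift = score, (dx, dy)
    return best_shift  # report as mouse movement deltas
```

In a real device this would run on subsampled camera frames many times per second, with the deltas scaled into cursor movement; exhaustive search over a few pixels of shift is cheap at that size.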

The keyboard issue is still tough. While it would seem a wired connection is more secure, not all devices will be capable of such a connection, while almost all will do Bluetooth. Wired USB devices can pretend to be all sorts of things, including CD-ROMs with autorun CDs in them. However, I propose the creation of a new Bluetooth HID profile for untrusted keyboards.

When connecting to an untrusted keyboard, the system would need to identify any privileged or dangerous operations. If such operations (like software downloads, destructive commands, etc.) come from the keyboard, the system would insist on confirmation from the main device’s touchscreen or keyboard. So while you would be able to type on the keyboard to fill text boxes or write documents and emails, other things would be better done with the mouse or would require a confirmation on the screen. It turns out this is how many people use computers these days anyway. We command-line people would feel a bit burdened, but could create shells that are good at spotting commands that might need confirmation.
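A toy sketch of what the host-side policy of such a profile might look like (the command list and function names are purely hypothetical): plain text passes through, but input that looks like a privileged command is held until confirmed on the device’s own trusted screen.

```python
# Hypothetical policy layer for an "untrusted keyboard" HID profile:
# ordinary typing is passed through, but commands that look privileged
# are held for confirmation on the device's own (trusted) touchscreen.

PRIVILEGED = {"rm", "dd", "mkfs", "sudo", "curl", "wget", "chmod"}

def classify(line):
    """Return 'pass' for harmless input, or 'confirm' if the command
    line should require confirmation on the trusted touchscreen."""
    words = line.strip().split()
    if not words:
        return "pass"
    # catch "sudo rm ..." as well as a bare "rm ..."
    cmd = words[1] if words[0] == "sudo" and len(words) > 1 else words[0]
    if words[0] == "sudo" or cmd in PRIVILEGED:
        return "confirm"
    return "pass"
```

A real shell would need a far richer notion of “dangerous” than a word list, but the shape of the idea, classify at the input boundary and demand trusted confirmation, is the point.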

Losing your passport

Last week, on my trip to Berlin, I managed to drop my passport. I don’t know where — it might have been in the bathroom of the Brussels airport while trying to change clothes in a tiny room after a long red-eye, or it might have been when Brussels Air made me gate-check a bag, requiring a big rearrangement of items, or somewhere else. But two days later, arriving at a pension in Berlin, I discovered it was missing, and a lot of calling around revealed nobody had turned it in.

In today’s document hungry world this can be a major calamity. I actually have a pretty pleasant story to report, though there were indeed lots of hassles. But it turned out I had prepared for this moment in a number of ways, and you may want to do the same.

The upshot was that I applied for a passport on Wednesday, got it on Thursday, flew on Friday and again on Monday, and got my permanent passport that same Monday — remarkable efficiency for a ministry with a reputation for slow bureaucracy.

After concluding it was lost, I called the Canadian Embassy in Berlin. Once you declare the passport lost, it is immediately canceled, even if you find it again, so you want to be sure that it’s gone. The Embassy was just a couple of U-Bahn stops away, so I ventured there. I keep all my documents on my computer, and the security guy was shocked I had brought it. He put all that gear in a locker, and even confiscated my phone — more on that later.

Rolling travel bag that plugs in

Since I’m on the road (Washington DC right now, then Berlin on Monday for a few days, and then Toronto for the weekend of the 11th), I will lament the problem I have noted before in travel power: we have to carry so many chargers. I have also found it’s a pain to take them all out and put them back in again.

So how about an electrified rollaboard travel bag? It would plug in, and of course you would have the right adapters for the countries you are going to. Then, along the bottom, it would offer a power strip of sorts, with receptacles in your home country’s plug format. The back of these units tends to have spare room due to the support bars.

It would also feature an internal USB powering hub with a few USB jacks, plus some built-in retractable cables with micro-USB (the new power standard for phones and some other devices) or mini-USB if you still need that. (Alternately, have one and adapters for the other.)

Next, a universal battery charger. They sell these now with plates that adapt to the various camera batteries, and they even have plates for NiMH AA batteries and the like. Perhaps even two plates.

And of course a universal laptop power supply, though this needs a somewhat long cord. Now I know, you need a power supply to carry with the laptop to meetings, so do you want to carry two? Perhaps not, but I actually like to when space is not super tight. It’s possible this supply could be made to snap out, so all you carry is an extra wall cord. Since I like retractables, however, you might want another laptop cord and a special tip for it.

The advantage: one thing to plug in and unplug when you go from room to room. And wheelies, because of their carry handle, tend to have some extra room for gear that is built in.

The downside: standards change, and your wheelie could become obsolete. The X-ray people may take a bit of time to get used to it as well.

Explicit interfaces for social media

The latest Facebook flap has caused me to write more about privacy of late, and that will continue as we head into the June 15 conference on Computers, Freedom and Privacy, where I will be speaking on the privacy implications of robots.

Social networks want nice, easy user interfaces, and complex privacy panels are hard to navigate for users who don’t want to spend the time learning all the nuances of a system. People usually end up using the defaults.

One option that might improve things is to make data publication more explicit in the interface, and to let users choose, in an easy way, the level of exposure for a specific act.

Consider Twitter. Instead of having a “Tweet” button, it should have a “Tweet to the world” button and a “Tweet to my followers” button. (Twitter wisely does not tweet when you hit Enter, as many people forget it is not the search box.) People tweeting by SMS or other means could define a special character to put at the front of the tweet, like starting your tweet with a “%” to make it private (or public, depending on your default). Of course, your followers could still log and republish your private tweets, but they would at least not go into public archives. (Unless you’ve accepted a follower who does that, which is admittedly a problem with the design.)

This interface might seem complex, but what’s important is that it’s clear: you know what you are doing. Here your choice makes sense to you, and you are not squeezed into a set of defaults, i.e. their choices.
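A sketch of how the SMS prefix idea might be parsed on the server side (the “%” marker is the hypothetical one from this post, and the names are my own):

```python
DEFAULT_AUDIENCE = "world"   # or "followers", per user preference
PRIVACY_PREFIX = "%"         # hypothetical marker for SMS tweets

def parse_tweet(raw):
    """Split an incoming SMS tweet into (audience, text). A leading
    '%' flips the tweet away from the user's default audience."""
    if raw.startswith(PRIVACY_PREFIX):
        flipped = "followers" if DEFAULT_AUDIENCE == "world" else "world"
        return flipped, raw[len(PRIVACY_PREFIX):].lstrip()
    return DEFAULT_AUDIENCE, raw
```

The point of the design is that the choice rides along with the act of posting, rather than living in a settings panel the user never visits.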

Facebook has come close to this. There is a little lock icon next to the Share button, and it becomes a select box where you can set who you will share a posting with. It has a bit too much UI, but it’s on the right track. A select box can make it smaller, but it should say “With the world” when that is the default state, to make your action explicit. This should be extended to many other actions on Facebook, so that buttons which will inform the world, or your friends, say so: “Share this photo with the world.” “Tell all 430 friends your strawberries are ripe.” The use of the number is a good idea, to make it clear just how many people you are publishing to.

Of course, “with the world” is somewhat bulky, and “with all friends of your friends” is even bulkier. The UI can start this way, but the user should be able to go to a page where they can switch to icons, once it is clear that they know what the icons mean. When Facebook again tries to move our social graph out into partner sites, this approach should follow. Instead of “Like” it would be “Tell your friends you Like” and so on. Verbose, but worth being verbose about.
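Generating such explicit labels is trivial, which is rather the point: the cost of this clarity is tiny. A sketch (wording and audience names are illustrative, not Facebook’s actual API):

```python
def share_button_label(action, audience, friend_count=None):
    """Make the audience explicit in the button itself, rather than
    hiding it behind a lock icon or settings panel."""
    if audience == "world":
        return f"{action} with the world"
    if audience == "friends":
        return f"{action} with all {friend_count} friends"
    if audience == "friends_of_friends":
        return f"{action} with all friends of your friends"
    raise ValueError(f"unknown audience: {audience}")
```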

This only applies to social media, of course, where there is a choice. If you comment on this blog, it doesn’t yet say “post your comment to everybody” because there really isn’t any other choice expected on public blogs. Private/public blog systems like LiveJournal have long featured a means to make postings available only to friends.

PR2 robots and open source

I don’t often write about robots that don’t go on roads, but last night I stopped by Willow Garage, the robot startup created by my old friend Scott Hassan. Scott is investing in building open robotics platforms and giving much of the work out free to the world, because he thinks progress in robotics has been far too slow. Last night they unveiled their beta PR2 robots and gave 11 of them to teams from 11 different schools and labs. Those institutions will all be trying to do something creative with the robots, just as a Berkeley team quickly made one able to fold towels a few months ago.

I must admit, as they marched out the 11 robots and had them do a synchronized dance, there was a moment (about 2 minutes 20 seconds into that video) when it reminded me of a scene from some techno-thriller, where the evil overlord unveils his new robots to an applauding crowd, and the robots then turn and kill all the humans. Fortunately this did not happen. The real world is very different, and these robots will do a lot of good. They have a lot of processing power, various nice sensors and two arms with 7 degrees of freedom. They run ROS, an open source robot operating system which now runs on many other robots.

I was interested because I have proposed that an open simulator platform for robocars could also spur development from people without the budgets to build their own robocars (and crash them during testing). A robocar test model will cost at least $150,000 today and will get damaged in development, and that’s beyond small developers. The PR2 beta models cost more than that, but Willow Garage’s donations will let these teams experiment in personal robotics.

Of course, it would be nice for robocars if there were an inexpensive robocar that teams could get and test. Right now though, everybody wants a sensor as nice as the $75,000 Velodyne LIDAR that powered most of the top competitors in the DARPA urban challenge, and you can’t get that cheaply yet — except perhaps in simulator.

When is “opt out” a “cop out”?

As many expected would happen, Mark Zuckerberg did an op-ed column with a mild about-face on Facebook’s privacy changes. Coming soon, you will be able to opt out of having your basic information defined as “public” and exposed to outside web sites. Facebook has a long pattern of introducing a new feature with major privacy issues, being surprised by a storm of protest, and then offering a fix which helps somewhat, but often leaves things more exposed than they were before.

For a long time, the standard “solution” to privacy exposure problems has been to allow users to “opt out” and keep their data more private. Companies like to offer it, because the reality is that most people have never been exposed to a bad privacy invasion, and don’t bother to opt out. Privacy advocates ask for it because compared to the alternative — information exposure with no way around it — it seems like a win. The companies get what they want and keep the privacy crowd from getting too upset.

Sometimes privacy advocates will say that disclosure should be “opt in” — that systems should keep information private by default, and only let it out with the explicit approval of the user. Companies resist that for the same reason they like opt-out. Most people are lazy and stick with the defaults. They fear if they make something opt-in, they might as well not make it, unless they can make it so important that everybody will opt in. As indeed is the case with their service as a whole.

Neither option seems to work. If there were some way to have an actual negotiation between the users and a service, something better in the middle would be found. But we have no way to make that negotiation happen. Even if companies were willing to negotiate their “I Agree” click contracts, there is no way they would have the time to do it.

Review of Everyman HD 720p webcam and Skype HD calling

I’ve been interested in videoconferencing for some time, both what it works well at, and what it doesn’t do well. Of late, many have believed that quality makes a big difference, and HD systems, such as very expensive ones from Cisco, have been selling on that notion.

A couple of years ago, Skype added what they call HQ calling — 640 x 480 at up to 30fps. That’s the resolution of standard broadcast TV, though due to heavy compression it never looks quite that good. But it is good and well worth it, especially at Skype’s price: free, though you are well advised to get a higher-end webcam, which they initially insisted on.

So there was some excitement about the new round of 720p HD webcams coming out this year, with support for them in Skype, though only in the Windows version. This new generation of cams has video compression hardware in the webcam. Real-time compression of 1280x720 video requires a lot of CPU, so this is a very good idea. In theory, almost any PC can send HD from such a webcam with minimal CPU usage. Even the “HQ” 640x480 video requires a fair bit of CPU, and initially Skype insisted on a dual-core system if you wanted to send it. Receiving 720p takes far less CPU, but still enough that Skype refuses to do it on slower computers, such as a 1.6 GHz Atom netbook. Such netbooks are able to play stored 720p videos, but Skype judges them unsuitable for playing this. On the other hand, modern video chips (such as all Nvidia 8xxx and above) contain hardware for decoding H.264 video and could play this form of video readily, but Skype does not support that.

The other problem is bandwidth. 720p takes a lot of it, especially when it must be generated in real time. Skype says that you need 1.2 megabits for HD, and in fact you are much better off if you have 2 or more. On a LAN, it will use about 2.5 megabits. Unfortunately, most DSL customers don’t have a megabit of upstream and can’t get it. In the ’90s, ISPs and telcos decided that most people would download far more than they uploaded, and designed DSL with limited upload in order to get more download. The latest cable systems using DOCSIS 3 are also asymmetric, but offer as much as 10 megabits if you pay for it, and 2 megabits upstream to base customers. HD video calling may push more people into cable as their ISP.

BigDog, and walking Robocars

Last week, I attended a talk by Marc Raibert, the former MIT professor who founded Boston Dynamics, makers of the BigDog four-legged walking robot. If you haven’t seen the various videos of BigDog, you should watch them immediately, as this is some of the most interesting work in robotics today.

Walking pack robots like BigDog have a number of obvious applications, but at present they are rather inefficient. BigDog is powered by a two-stroke engine that drives hydraulics. That works well because the legs don’t need motors of their own but can still exert a lot of force. However, its efficiency is in the range of 2 gallons per mile, though this is just a prototype. It is more efficient on flat terrain and pavement, but of course wheels are vastly more efficient there. As efficient as animals are, wheeled vehicles are better if you don’t make them heavy as tanks and SUVs.

BigDog walks autonomously but today is steered by a human, or in newer versions, can follow a human walking down a trail, walking where she walked. In the future they want to make an autonomous delivery robot that can be told to take supplies to troops in the field, or carry home a wounded soldier.

I wondered if BigDog isn’t trying too hard to be a mule, carrying all the weight up high. This makes it harder for it to do its job. If it could just tow a sledge (perhaps a container with a round teflon bottom and some low-profile or retractable wheels) it might be able to haul more weight, especially since it could pay out line while negotiating something particularly tricky and then, once stable again, reel the line back in. This would not work if you had to go through boulders that might catch the trailer, but for many forms of terrain it would be fine. Indeed, Boston Dynamics wants to see if this can work. On the other hand, they did not accept my suggestion that they put red dye in the hydraulic fluid so that it spurts red blood if damaged or shot.

The hydraulic design of BigDog made me wonder about applications to robocars. In particular, it seems as though it will be possible to build a light robocar that has legs folded up under the chassis. When the robocar got to the edge of the road, it could put down the legs and be able to climb stairs, go over curbs, and even go down dirt paths and rough terrain. At least a lightweight single-person robocar or deliverbot might do this.

Mini roads for robocars

At the positive end of my prediction that robocars will enable people to travel in “the right vehicle for the trip,” and given that most trips are short urban ones, it follows that most robocars, if we are efficient, will be small, light vehicles meant for 1-2 people, with a lesser number of larger ones for 4-5 people. Two-person cars can even be face-to-face, allowing them to be under 5’ wide, though larger ones will be as wide as today’s cars, with some as big as vans, RVs and buses.

Small, lightweight vehicles are not just greener than transit, they also require far less expensive road. While the initial attraction of robocars is that they can provide private, automated, efficient transportation without any new infrastructure, eventually we will begin building new development with robocars in mind. Various estimates I have seen for multi-use paths suitable for people, bikes and golf carts range around $100K to $200K per mile, though I have heard of projects which, thanks to the wonders of government contracting, soar up to $1M per mile. On the other hand, typical urban streets cost $2M to $3M per mile, an order of magnitude more.

Consider a residential robocar block. It might well be served by a single 10’ lightweight use lane. That lane might run along the backs of the houses — such back alley approaches are found in a number of cities, and people love them since the garage (if there is one) does not dominate the front of your home. It might also be in the front of the house. New construction could go either way. Existing areas might decide to reclaim their street into a block park or more land for the homeowners, with a robocar street, sidewalk and bike path where the road used to be.

We only need a single lane in one direction on most streets, though the desire to admit 8’-wide vehicles means the lane would be two lanes wide for the narrow vehicles. The lane would have no fixed direction; rather, it would be controlled by a local computer, which would tell incoming vehicles from which direction to enter and command waiting vehicles to get out of the way. Small widened spots or other temporary holding spots would readily allow cars to pass through even if another vehicle is doing something.
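Here is one toy way such a local lane controller might arbitrate entry (entirely illustrative; a real controller would handle timing, passing spots, fairness and failures): cars moving the same way may share the lane, while a car wanting the opposite direction is held until the lane empties.

```python
import collections

class LaneController:
    """Toy arbiter for a single bidirectional robocar lane: grant entry
    from one end at a time and queue requests from the other end."""

    def __init__(self):
        self.active_direction = None
        self.cars_in_lane = 0
        self.waiting = collections.deque()

    def request_entry(self, car, direction):
        """A car asks to enter; returns 'proceed' or 'hold'."""
        if self.cars_in_lane == 0 or direction == self.active_direction:
            self.active_direction = direction
            self.cars_in_lane += 1
            return "proceed"
        self.waiting.append((car, direction))
        return "hold"

    def exit_lane(self):
        """A car reports leaving; if the lane empties, release the
        queued cars that all want the same (new) direction."""
        self.cars_in_lane -= 1
        if self.cars_in_lane == 0:
            self.active_direction = None
            granted = []
            if self.waiting:
                _, d = self.waiting[0]
                while self.waiting and self.waiting[0][1] == d:
                    granted.append(self.request_entry(*self.waiting.popleft()))
            return granted
        return []
```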

You would not need a garage for your robocar as you can store it anywhere nearby that you can find space, or hire it out when you don’t need it. You might not even own any robocar, in which case you certainly don’t need a garage to store one. However, you probably will want a “delivery room,” which is something like a garage which has a driveway up to it. Deliverbots could use this room — they would be given the code to open the door — to drop off deliveries for you in a protected place. You could also have the “room of requirement” I describe in the deliverbots page.

This plan leaves out one important thing — heavy vehicles. We still need occasional heavy vehicles. They will deliver large and heavy items to our houses, ranging from hot tubs to grand pianos. But even heavier are the construction machines used in home construction and renovation, ranging from cranes to earth movers. How can they come in, when their weight would tear up a light-duty road?

The answer is, not surprisingly, in robotics. The heavy trucks, driven by robots, will be able to place their tires quite precisely. We can engineer our robocar paths to include two heavy-duty strips with deeper foundations and stronger asphalt, able to take the load.

Alternately, since the tires of the trucks will be further apart than our robocars, they might just run their tires on either side of a more narrow path, essentially on the shoulders of the path. These shoulders could be made not from heavy duty materials, but from cheap ones, like gravel or dirt. The trucks would move only very slowly on these residential blocks. If they did disturb things there, repair would be easy, and in fact it’s not too much of a stretch to predict either a road repair robot or a small road repair truck with a construction worker which moves in when problems are detected.

The volume and frequency of heavy trucks can be controlled. Their use can be avoided in most cases at times when the pavement is more fragile, such as when the ground is soaked or freezing. If they do damage the road, repair can be done swiftly — but in fact robocars can also be programmed to go slowly in such alleys (as they already would) and avoid any potholes until the gravel robot fills them. Robocars will be laser-scanning the road surface ahead of them at all times to avoid such things in other areas.

I keep coming up with dramatic savings that robocars offer, and the numbers, already in the trillions of dollars and gigatons of CO2, seem amazing, but this is another one. Urban “local roads” are 15% of all U.S. road mileage, and rural local roads are 54%. (There are just over 2.6 million paved road-miles in the USA.) To add to the value, road construction and asphalt are major greenhouse gas sources.

To extend this further, I speculate on what might happen if small robocars had legs, like BigDog.

Volvo collision avoidance fails and other things that will happen again

Last week, Volvo was demoing some new collision-avoidance features in their S60. I’ve talked about the S60 before, as it surprised me by putting pedestrian detection into a car before I expected it to happen. Unfortunately, in an extreme case of demo disease known to all computer people, somebody made an error with the battery, and in front of a crowd of press, the car smashed into the truck it was supposed to avoid. The Wired article links to a video.

Poor Volvo, having this happen in front of all the press. Of course, their system is meant to be used in human-driven cars, warning the driver and braking if the driver fails to act — not in a self-driving vehicle. And they say that had there been a driver, there would have been an indication that the system was not operating.

While this mistake is the result of a lack of maturity in the technology, it is important to realize that as robocars are developed there will be crashes, some of the crashes will hurt people, and a few will quite probably kill people. It’s a mistake to assume this won’t happen, or not to plan for it. The public can be very harsh. Toyota’s problems with their car controllers (if that’s where the problems are; Toyota claims they are not) have been a subject of ridicule for what was (and probably still is) one of the world’s most respected brands. The public asks: if programmers can’t program simple parts of today’s cars, can they program one that does all the driving?

There are two answers to that. First of all, they can and do program computerized parts of today’s cars all the time and by and large have perfect safety records.

But secondly, no they can’t make a complete driving system perfectly safe, certainly not at first. It is a complex problem and we’ll wait a long time before the accident rate is zero. And while we wait, human drivers will kill millions.

Our modern society has always had a tough time with that trade-off. Of late we’ve come to demand perfect safety, though it is impossible. Few new products are allowed out if it is known that they will cause any deaths due to their own flaws, even if those flaws are not known specifically but are highly likely to exist in some fashion. American juries, faced with minutes of a meeting where the company decided to “release the product, even though predictions show that bugs will kill X people,” will punish the company nastily, even though the alternative was “don’t release, and have human drivers kill 10X people.” The 9X who were saved will not be in the courtroom. This is one reason robocars may arise outside the USA first.

Of course, there might be cases the other way. A drunk who kills somebody when he could have taken a robocar might get a stiffer punishment. A corporation that had its employees drive when robotic systems were clearly superior might find a nasty judgement — but that would require that it was OK to have the cars on the road in the first place.

But however this plays out, developers must expect there will be bugs, and bugs with dire consequences. Nobody will want those bugs, and all the injuries will be tragic, but so is being too cautious about deployment. Can the USA figure out a way to make that happen?

The peril of the Facebook anti-privacy pattern

There’s been a well-justified storm about Facebook’s recent privacy changes. The EFF has a nice post outlining the changes in privacy policies at Facebook, which inspired this popular graphic showing those changes.

But the deeper question is why Facebook wants to do this. The answer, of course, is money, but in particular it’s because the market is assigning a value to revealed data. This force seems to push Facebook, and services like it, into wanting to remove privacy from their users in a steadily rising trend. Social network services often begin with decent privacy protections, both to avoid scaring users (when gaining users is the only goal) and because they have little motivation to do otherwise. The old world of PC applications tended to have strong privacy protection (by comparison) because data stayed on your own machine. Software that exported it got called “spyware,” and tools were created to root it out.

Facebook began as a social tool for students. It even promoted that those not at a school could not see in, could not even join. When this changed (for reasons I will outline below), older members were shocked at the idea that their parents and other adults would be on the system. But Facebook decided, correctly, that excluding them was not the path to being #1.

Data Hosting architectures and the safe deposit box

With Facebook seeming to declare some sort of war on privacy, it’s time to expand the concept I have been calling “Data Hosting” — encouraging users to have some personal server space where their data lives, and bringing the apps to the data rather than sending your data to the companies providing interesting apps.

I think of this as something like a “safe deposit box” that you can buy from a bank. While not as sacrosanct as your own home when it comes to privacy law, it’s pretty protected. The bank’s role is to protect the box — to let others into it without a warrant would be a major violation of the trust relationship implied by such boxes. While the company owning the servers that you rent could violate your trust, that’s far less likely than 3rd party web sites like Facebook deciding to do new things you didn’t authorize with the data you store with them. In the case of those companies, it is in fact their whole purpose to think up new things to do with your data.

Nonetheless, building something like Facebook using one’s own data hosting facilities is more difficult than the way it’s done now. That’s because you want to do things with data from your friends, and you may want to combine data from several friends to do things like search your friends.

One way to do this is to develop a “feed” of information about yourself that is relevant to friends, and to authorize friends to “subscribe” to this feed. Then, when you update something in your profile, your data host would notify all your friends’ data hosts about it. You need not notify all your friends, or tell them all the same thing — you might authorize closer friends to get more data than you give to distant ones.
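The feed/subscribe idea above can be sketched in a few lines of code. This is only a toy illustration of the concept; the `DataHost` class and its method names are hypothetical, not part of any real protocol:

```python
class DataHost:
    """A personal data host that pushes profile updates to subscribers."""

    def __init__(self, owner):
        self.owner = owner
        self.profile = {}
        self.cache = {}        # friend -> {field: value} received from friends
        self.subscribers = {}  # friend's host -> fields that friend may see

    def authorize(self, friend_host, fields):
        """Grant a friend's host a subscription to a subset of the profile."""
        self.subscribers[friend_host] = set(fields)

    def update(self, field, value):
        """Change a profile field and notify only the authorized subscribers."""
        self.profile[field] = value
        for host, allowed in self.subscribers.items():
            if field in allowed:
                host.receive(self.owner, field, value)

    def receive(self, friend, field, value):
        """Store a friend's update locally, so friend data can be searched."""
        self.cache.setdefault(friend, {})[field] = value
```

Note that a closer friend authorized for more fields simply receives more notifications; the sender never has to push the same data to everyone.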

Review: Billy: The Early Years (DVD and book)

I have written in the past about my late father’s careers most of which are documented in his memoirs and other places. In spite of being almost 60 years in the past, his religious career still gets a lot of attention, as I recently reported in the story of the strange exhibit about him in the infamous Creation Museum.

Recently, two movies have been released in which he is a character. I recently watched Billy: The Early Years which is a movie about the early life of Billy Graham told from the supposed viewpoint of my father on his deathbed. Charles Templeton and Billy Graham were best friends for many years, touring and preaching together, and the story of how my father lost his faith as he studied more while Graham grew closer to his has become a popular story in the fundamentalist community.

While it doesn’t say that it’s fictional, this movie portrays an entirely invented interview with Charles Templeton, played by Martin Landau, in a hospital bed in 2001, shortly before his death. (In reality, while he did have a few hospital trips, he spent 2001 in an Alzheimer’s care facility and was not coherent most of the time.) Fleshed out in the novelization, the interview is supposedly conducted on orders from an editor trying to find some dirt on Billy Graham. Most of the movie is flashbacks to Graham’s early days (including times before they met) and their time together preaching and discussing the truth of the Bible.

It is disturbing to watch Landau’s portrayal of my father, as well as that by Mad Men’s Kristoffer Polaha as the younger version. I’m told it is always odd to see somebody you know played by an actor, and no doubt this is true. However, more disturbing is the role they have cast him in for this allegedly true story — namely Satan. As I believe is common in movies aimed at the religious market, Graham’s story is told in what appears to be an allegory of the temptation of Christ. In the film, Graham is stalwart, but my father keeps coming to him with doubts about the Bible. The lines written for the actors are based in part on his writings and in part on invention, and as such don’t sound at all like he would speak in real life, but they are there, I think, to take the role of the attempted temptation of the pure man.

ROFLCon panel on USENET history Saturday in Boston

Just a note that I’ll be in Boston this weekend attending the 2nd day of ROFLCon, a convention devoted to internet memes and legends. They’re having a panel on USENET on Saturday and have invited me to participate. Alas, registration is closed, but there are some parties and events on the schedule that I suspect people can go to. See you there.

Robomagellan contest disappoints

This weekend I attended the annual “Robogames” competition, which took place here in the Bay Area. Robogames is mostly a robot battle competition, with a focus on heavily armed radio-controlled robots fighting in a protected arena. For several years robot fighting was big enough to rate some cable TV shows dedicated to it. The fighting is a lot of fun, but almost entirely devoid of automation — in fact efforts to use automation in battle robots have mostly been a failure.

The RC battles are fierce and violent, and today one of the weapons of choice is something heavy that spins at very high speed so that it builds up a lot of angular momentum and kinetic energy, to transfer into the enemy. People like to see robots flying through the air and losing parts to flying sparks. (I suspect this need to make robots very robust against attack makes putting sensors on the robots for automation difficult, as many weapons would quickly destroy a lot of popular sensor types.) The games also featured a limited amount of automated robot competition. This included some lightweight (3lb and 1lb) automated battles which I did not get to watch, and some hobby robot competitions for maze-running, line following, ribbon climbing and LEGO Mindstorms. There was also a semi-autonomous robot battle called “kung fu” where humanoid robots who take high level commands (like punch, and step) try to push one another over. There is also sumo, a game where robots must push the other robot out of the ring.

I had hoped the highlight would be the Robo-magellan contest. This is a hobbyist robot car competition, usually done with small robots 1 to 2 feet in length. Because it is hobbyists, and often students, the budgets are very small, and the contest is very simple. Robots must make it through a simple outdoor course to touch an orange cone about 100 yards away. They want to do this in the shortest time, but for extra points they can touch bonus cones along the way. Contestants are given GPS coordinates for the target cones. They get three tries. In this particular contest, to make it even easier, contestants were allowed to walk the course and create some extra GPS waypoints for their robots.

These extra waypoints should have made it possible to do the job with just a GPS and camera, but the hobbyists in this competition were mostly novices, and no robot reached the final cone. The winner got within 40 feet on their last run, but no performance was even remotely impressive. This was unlike past years, where I was told that 6 or more robots would reach the target and there would be real competition. This year’s poor showing was blamed on budgets, and the fact that old teams who had done well had moved on from the sport. Only 5 teams showed up.

The robots were poorly equipped with sensors. While all would have a GPS, in 1 or 2 cases the GPS systems failed and the robots quickly wandered into things. A few had sonar or touch-bars for obstacle detection, but others did not, and none of them did their obstacle detection well at all. For most, if they ran into something, that was it for that race. Some used a compass or accelerometers to help judge when to turn and where to aim, since a GPS is not very good as a compass.
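The GPS-versus-compass point deserves a small illustration: a single GPS fix tells a robot where it is, not which way it is pointing. A common hobbyist approach (not necessarily what these teams did) is to compute the great-circle bearing from the current fix to the next waypoint and compare it against the magnetic compass heading to decide how much to turn:

```python
import math

def bearing_to_waypoint(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing, in degrees clockwise from true north,
    from the robot's GPS fix (lat1, lon1) to a waypoint (lat2, lon2).
    Comparing this with a compass heading gives the turn the robot needs."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0
```

A moving robot can also derive a rough heading from two successive GPS fixes, but at walking speeds the fix-to-fix noise swamps the signal, which is why the compass or accelerometers help.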

YouTube makes statement on Content-ID takedowns

Last night, YouTube posted a note on the official YouTube Blog concerning the recent firestorm over Content-ID takedowns like the one I wrote about earlier in the week regarding my Downfall DMCA Parody.

In the post, they are kind enough to link to my video (now back up on YouTube thanks to my disputing the Content-ID takedown) as an example of a fair use parody, and to a talk by (former) fellow EFF director Larry Lessig which incorporated some copyrighted music.

However, some of the statements in the post deserve a response. Let me say first that I do understand a bit of YouTube’s motivations in creating the Content-ID system. YouTube certainly has a lot of copyright violations on it, and it’s staring down the barrel of a billion dollar lawsuit from Viacom and other legal burdens. I can understand why it wants to show the content owners that it wants to help them and wants to be their partner. It is a business and is free to host what it wants. However, it is also part of Google, whose mission is “to organize the world’s information and make it universally accessible and useful,” and of course to not “be evil” in the process of doing so. On the same blog, YouTube declares its dedication to free speech very eloquently.

Generating delicious fake regional cuisine

One of the greatest things that can give a region a sense of identity is the presence of a regional cuisine. In addition to identity it brings in tourists, so every region probably really wishes it had one.

Of course a real regional cuisine takes a long time to develop, even centuries. The world’s great cuisines all were a long time coming, and were often based on the presence of particular local ingredients as much as on the food culture. Some cuisines have arisen quickly, particularly fusion cuisines which arise due to immigrants mixing and from colonialism. Today the market for ingredients is global, though there are still places where particular ingredients are at their best.

One recent regional food, the “Buffalo” chicken wing, is believed to have come from a single restaurant (The Anchor Bar in Buffalo) and spread out to other local establishments and then around the world. Part of its success in spreading around the world is its simplicity and the fact that (unlike many other regional-source foods) it features ingredients found all around the world. Every town would like to have its equivalent of the Buffalo Wing.

To make this happen, I think towns should hold contests among local restaurants to develop such dishes. Restaurants might enter dishes they already specialize in, or come up with something new. The winner, by popular vote, would get their dish named after the town, and found on the menus of other competing restaurants for some period of time.

The following rules might make sense:

  • Ideally, the dish should try to be based on an ingredient which is available locally, and perhaps at its best locally, but which still can be found in the rest of the world so the dish can spread.
  • All restaurants submitting a dish must agree that should they win, they will publish recipes for the dish and claim no exclusive on it. They will, however, be the only restaurant to say they have the original dish and were the winner of the contest.
  • Ideally, recipes will be published in advance, so other restaurants can also make the dish during the contest, in particular restaurants that are not competing. (Competing chefs might deliberately make the dish badly.) In fact, advance publication (and a contest cookbook) might be part of the rules.
  • “None of the above” should be an encouraged choice on the voting form. The first round might not create a dish worthy of the town.
  • A panel of chefs would rate the dishes according to difficulty. Dishes that are easier would be encouraged, as these can spread more easily. The list of difficulties would be published for voters to use in making their decisions. That is, voters might pick the 2nd most tasty dish if it’s much easier to make.
  • Every dish must be available in “chef-approved” form at some minimum number of restaurants, so it is easy to try each dish. Private chefs can compete if they can recruit restaurants to offer their dish.
  • At the end of the contest, the city’s tourist board would have a budget to promote the dish to tourists.
  • Voting would be done online, but voters would need to get a token to vote somewhere based on a unique ID so they can’t vote more than once. They need not pick a single dish. The “Approval” voting system, where voters can list as many dishes as they find qualified, and the one with the most votes wins, can be used.
  • It is certainly possible as well to have multiple winners, and the creation of variations on the winning dish would be encouraged.

Would this be an authentic regional cuisine that “comes from the people?” Of course not. But it might be tasty, and if chosen by the people, might grow into something that really belongs to that city.

Studio does content-ID takedown of my Hitler video about takedowns

In a bizarre twist of life imitating art that may be too “meta” for your brain, Constantin Films, the producer of the war movie “Downfall,” has caused the takedown of my video which was put up to criticise their excessive use of takedowns.

Update: YouTube makes an official statement and I respond.

A brief history:

Starting a few years ago, people started taking a clip from Downfall where Hitler goes on a rampage, and adding fake English subtitles to produce parodies on various subjects. Some were very funny and hundreds of different ones were made. Some were even made about how many parodies there were. The German studio, Constantin, did some DMCA takedowns on many of these videos.

So I made, with considerable effort, my own video, which depicted Hitler as a producer at Constantin Films. He hears about all the videos and orders DMCA takedowns. His lawyers (generals) have to explain why you can’t just do that, and he gets angry. I have a blog post about the video, including a description of all the work I had to do to make sure my base video was obtained legally.

Later, when the video showed up on the EFF web site, Apple decided to block an RSS reader from the iPhone app store because it pointed to the video and Hitler says a bad word that shocked the Apple reviewers.

Not to spoil things too much, but the video also makes reference to an alternate way you can get something pulled off YouTube. Studios are able to submit audio and video clips to YouTube which are “fingerprinted.” YouTube then checks all uploaded videos to see if they match the audio or video of some allegedly copyrighted work. When they match, YouTube removes the video. That’s what I have Hitler decide to do instead of more DMCA takedowns, and lo, Constantin actually ordered this, and most, though not all, of the Downfall parodies are now gone from YouTube. Including mine.
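To make the fingerprint-and-match idea concrete, here is a deliberately naive sketch. Real systems like Content-ID use perceptual fingerprints that survive re-encoding, cropping and added noise; the plain chunk hashes below do not, so treat this purely as an illustration of the matching pipeline, not of YouTube’s actual algorithm:

```python
import hashlib

CHUNK = 4096  # bytes per fingerprinted chunk; a real system is far subtler

def fingerprint(data):
    """Hash fixed-size chunks of a media stream into a set of fingerprints."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

def matches(upload, registered_prints, threshold=0.5):
    """Flag an upload if enough of its chunks match some registered work."""
    up = fingerprint(upload)
    if not up:
        return False
    return any(len(up & reg) / len(up) >= threshold
               for reg in registered_prints)
```

The important structural point survives the simplification: the studio registers fingerprints once, every upload is checked automatically, and a partial match above a threshold triggers removal — no human judgment of fair use anywhere in the loop.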

Now I am sure people will debate the extent to which some of the parodies count as “fair use” under the law. But in my view, my video is about as good an example of a parody fair use as you’re going to see. It uses the clip to criticise the very producers of the clip and the takedown process. The fair use exemption to copyright infringement claims was created, in large part, to assure that copyright holders didn’t use copyright law to censor free speech. If you want to criticise content or a content creator — an important free speech right — often the best way to do that will make use of the content in question. But the lawmakers knew you would rarely get permission to use copyrighted works to make fun of them, and wanted to make sure critical views were not stifled.
