Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.

This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best-of-blog section. There are also various "topic" and "tag" sections (see menu on right), and some are sub-blogs, like Robocars, photography and Going Green. Try my home page for more info and contact data.

PR2 robots and open source

I don't often write about robots that don't go on roads, but last night I stopped by Willow Garage, the robot startup created by my old friend Scott Hassan. Scott is investing in building open robotics platforms, and giving much of the work away free to the world, because he thinks progress in robotics has been far too slow. Last night they unveiled their beta PR2 robots and gave 11 of them to teams from 11 different schools and labs. Those institutions will all be trying to do something creative with the robots, just as a Berkeley team quickly taught one to fold towels a few months ago.

I must admit, as they marched out the 11 robots and had them do a synchronized dance, there was a moment (about 2 minutes 20 seconds into that video) when it reminded me of a scene from some techno-thriller, where the evil overlord unveils his new robots to an applauding crowd, and the robots then turn and kill all the humans. Fortunately this did not happen. The real world is very different, and these robots will do a lot of good. They have a lot of processing power, various nice sensors and two arms with 7 degrees of freedom each. They run ROS, an open source robot operating system which now runs on many other robots.
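For a flavour of what programming against ROS looks like, here is a minimal publisher node in Python; the topic and message are the standard tutorial placeholders, nothing PR2-specific:

```python
# Minimal ROS 1 publisher node: broadcasts a string message once per second.
import rospy
from std_msgs.msg import String

rospy.init_node('talker')
pub = rospy.Publisher('chatter', String, queue_size=10)
rate = rospy.Rate(1)  # 1 Hz

while not rospy.is_shutdown():
    pub.publish(String(data='hello'))
    rate.sleep()
```

Any other node, on the same robot or across the network, can subscribe to the `chatter` topic; the same publish/subscribe model carries camera frames, joint states and arm commands.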

I was interested because I have proposed that having an open simulator platform for robocars could also spur development from people without the budgets to build their own robocars (and crash them during testing). A robocar test vehicle costs at least $150,000 today and will get damaged in development, and that's beyond small developers. The PR2 beta models cost more than that, but Willow Garage's donations will let these teams experiment in personal robotics.

Of course, it would be nice for robocars if there were an inexpensive robocar that teams could get and test. Right now though, everybody wants a sensor as nice as the $75,000 Velodyne LIDAR that powered most of the top competitors in the DARPA Urban Challenge, and you can't get that cheaply yet — except perhaps in a simulator.

When is "opt out" a "cop out?"

As many expected would happen, Mark Zuckerberg wrote an op-ed column announcing a mild about-face on Facebook's privacy changes. Coming soon, you will be able to opt out of having your basic information defined as "public" and exposed to outside web sites. Facebook has a long pattern of introducing a new feature with major privacy issues, being surprised by a storm of protest, and then offering a fix which helps somewhat, but often leaves things more exposed than they were before.

For a long time, the standard “solution” to privacy exposure problems has been to allow users to “opt out” and keep their data more private. Companies like to offer it, because the reality is that most people have never been exposed to a bad privacy invasion, and don’t bother to opt out. Privacy advocates ask for it because compared to the alternative — information exposure with no way around it — it seems like a win. The companies get what they want and keep the privacy crowd from getting too upset.

Sometimes privacy advocates will say that disclosure should be “opt in” — that systems should keep information private by default, and only let it out with the explicit approval of the user. Companies resist that for the same reason they like opt-out. Most people are lazy and stick with the defaults. They fear if they make something opt-in, they might as well not make it, unless they can make it so important that everybody will opt in. As indeed is the case with their service as a whole.

Neither option seems to work. If there were some way to have an actual negotiation between the users and a service, something better in the middle would be found. But we have no way to make that negotiation happen. Even if companies were willing to negotiate their "I Agree" click contracts, there is no way they would have the time to do it.

Review of Everyman HD 720p webcam and Skype HD calling

I’ve been interested in videoconferencing for some time, both what it works well at, and what it doesn’t do well. Of late, many have believed that quality makes a big difference, and HD systems, such as very expensive ones from Cisco, have been selling on that notion.

A couple of years ago Skype added what they call HQ calling — 640x480 at up to 30fps. That's the resolution of standard broadcast TV, though due to heavy compression it never looks quite that good. But it is good, and well worth it, especially at Skype's price: free, though you are well advised to get a higher-end webcam, which they initially insisted on.

So there has been some excitement about the new round of 720p HD webcams coming out this year, with support for them in Skype, though only in the Windows version. This new generation of cams has video compression hardware in the webcam itself. Real-time compression of 1280x720 video requires a lot of CPU, so this is a very good idea. In theory almost any PC can send HD from such a webcam with minimal CPU usage. Even the "HQ" 640x480 video requires a fair bit of CPU, and initially Skype insisted on a dual-core system if you wanted to send it. Receiving 720p takes far less CPU, but still enough that Skype refuses to do it on slower computers, such as a 1.6 GHz Atom netbook. Such netbooks are able to play stored 720p videos, but Skype judges them unsuitable for playing this. On the other hand, modern video chips (such as all Nvidia 8xxx series and above) contain hardware for decoding H.264 video and can play this form of video readily, but Skype does not support that.

The other problem is bandwidth. 720p takes a lot of it, especially when it must be compressed in real time. Skype says that you need 1.2 megabits for HD, and in fact you are much better off if you have 2 or more. On a LAN, it will use about 2.5 megabits. Unfortunately, most DSL customers don't have a megabit of upstream and can't get it. In the 90s, ISPs and telcos decided that most people would download far more than they uploaded, and designed DSL to have limited upload in order to get more download. The latest cable systems using DOCSIS 3 are also asymmetric, but offer as much as 10 megabits if you pay for it, and 2 megabits upstream to base customers. HD video calling may push more people into cable as their ISP.
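Some back-of-the-envelope arithmetic shows why compression hardware in the camera matters (the 2.5 megabit figure is the LAN usage noted above):

```python
# Back-of-envelope: raw 720p30 bitrate versus what Skype HD actually sends.
width, height, fps, bits_per_pixel = 1280, 720, 30, 24
raw_bps = width * height * fps * bits_per_pixel    # uncompressed video
skype_hd_bps = 2.5e6                               # observed LAN usage above

print(f"raw 720p30: {raw_bps / 1e6:.0f} Mbit/s")            # ~664 Mbit/s
print(f"compression ratio: {raw_bps / skype_hd_bps:.0f}:1") # ~265:1
```

Squeezing roughly 664 megabits down to 2.5 in real time is exactly the kind of job best done in dedicated silicon rather than on the host CPU.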

BigDog, and walking Robocars

Last week, I attended a talk by Marc Raibert, the former MIT professor who founded Boston Dynamics, the makers of the BigDog 4-legged walking robot. If you haven't seen the various videos of BigDog you should watch them immediately, as this is some of the most interesting work in robotics today.

Walking pack robots like BigDog have a number of obvious applications, but at present they are rather inefficient. BigDog is powered by a 2-stroke engine driving a hydraulic compressor. That works well because the legs don't need their own motors but can still exert a lot of force. However, its fuel efficiency is in the range of 2 gallons per mile, though this is just a prototype. It is more efficient on flat terrain and pavement, but of course wheels are vastly more efficient there. As efficient as animals are, wheeled vehicles are better if you don't make them as heavy as tanks and SUVs.

BigDog walks autonomously but today is steered by a human, or in newer versions can follow a human down a trail, stepping where she stepped. In the future they want to make an autonomous delivery robot that can be told to take supplies to troops in the field, or carry home a wounded soldier.

I wondered if BigDog isn't trying too hard to be a mule, carrying all the weight up high. This makes it harder for it to do its job. If it could just tow a sledge (perhaps a container with a round teflon bottom and some low-profile or retractable wheels) it might be able to haul more weight, especially since it could pay out line while negotiating something particularly tricky and then, once stable again, reel the line back in. This would not work if you had to go through boulders that might catch the trailer, but for many forms of terrain it would be fine. Indeed, Boston Dynamics wants to see if this can work. On the other hand, they did not accept my suggestion that they put red dye in the hydraulic fluid so that it spurts red blood if damaged or shot.

The hydraulic design of BigDog made me wonder about applications to robocars. In particular, it seems as though it will be possible to build a light robocar that has legs folded up under the chassis. When the robocar got to the edge of the road, it could put down the legs and be able to climb stairs, go over curbs, and even go down dirt paths and rough terrain. At least a lightweight single-person robocar or deliverbot might do this.

Mini roads for robocars

At the positive end of my prediction that robocars will enable people to travel in "the right vehicle for the trip," and given that most trips are short urban ones, it follows that most robocars, if we are efficient, will be small light vehicles meant for 1-2 people, with a lesser number of larger ones for 4-5 people. Two-person cars can even be face to face, allowing them to be under 5' wide, though larger ones will be as wide as today's cars, with some number as big as vans, RVs and buses.

Small, lightweight vehicles are not just greener than transit, they also require far less expensive roads. While the initial attraction of robocars is that they can provide private, automated, efficient transportation without any new infrastructure, eventually we will begin building new developments with robocars in mind. Various estimates I have seen for multi-use paths suitable for people, bikes and golf carts range around $100K to $200K per mile, though I have heard of projects which, thanks to the wonders of government contracting, soar up to $1M per mile. Typical urban streets, on the other hand, cost $2M to $3M per mile — an order of magnitude more.

Consider a residential robocar block. It might well be served by a single 10’ lightweight use lane. That lane might run along the backs of the houses — such back alley approaches are found in a number of cities, and people love them since the garage (if there is one) does not dominate the front of your home. It might also be in the front of the house. New construction could go either way. Existing areas might decide to reclaim their street into a block park or more land for the homeowners, with a robocar street, sidewalk and bike path where the road used to be.

We only need a single lane in one direction on most streets, though the desire to admit 8'-wide vehicles means the lane could also carry two of the narrower vehicles side by side. The lane would have no fixed direction; rather, it would be controlled by a local computer, which would tell incoming vehicles from which direction to enter the lane and command waiting vehicles to get out of the way. Small wider spots or other temporary holding spots would readily allow cars to pass through even if another vehicle is doing something.
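To make the idea concrete, here is a toy sketch of such a lane controller: a first-come direction grant that reverses once the lane empties. The names and policy are my own illustration, not any real protocol.

```python
# Toy single-lane arbiter: grant the lane to whichever end asks first,
# and free it to reverse once the last vehicle reports leaving.
class LaneController:
    def __init__(self):
        self.direction = None          # "north" or "south"; None when idle
        self.active = 0                # vehicles currently in the lane

    def request_entry(self, from_end):
        if self.direction in (None, from_end):
            self.direction = from_end
            self.active += 1
            return "proceed"
        return "wait"                  # or: pull into a holding spot

    def report_exit(self):
        self.active -= 1
        if self.active == 0:
            self.direction = None      # lane may now reverse

lane = LaneController()
print(lane.request_entry("north"))     # proceed
print(lane.request_entry("south"))     # wait: lane flowing northbound
lane.report_exit()
print(lane.request_entry("south"))     # proceed: lane is empty again
```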

You would not need a garage for your robocar, as you can store it anywhere nearby that you can find space, or hire it out when you don't need it. You might not even own any robocar, in which case you certainly don't need a garage to store one. However, you probably will want a "delivery room," which is something like a garage with a driveway up to it. Deliverbots could use this room — they would be given the code to open the door — to drop off deliveries for you in a protected place. You could also have the "room of requirement" I describe in the deliverbots page.

This plan leaves out one important thing — heavy vehicles. We still need occasional heavy vehicles. They will deliver large and heavy items to our houses, ranging from hot tubs to grand pianos. Even heavier are the machines used in home construction and renovation, ranging from cranes to earth movers. How can they come in, when their weight would tear up a light-duty road?

The answer is, not surprisingly, in robotics. Heavy trucks, driven by robots, will be able to place their tires quite precisely. We can engineer our robocar paths to include two heavy-duty strips with deeper foundations and stronger asphalt, able to take the load.

Alternatively, since the tires of the trucks will be further apart than our robocars', they might just run their tires on either side of a narrower path, essentially on the shoulders. These shoulders could be made not from heavy-duty materials, but from cheap ones, like gravel or dirt. The trucks would move only very slowly on these residential blocks. If they did disturb things, repair would be easy, and in fact it's not too much of a stretch to predict either a road repair robot or a small road repair truck with a construction worker, which moves in when problems are detected.

The volume and frequency of heavy trucks can also be controlled. Their use can be avoided in most cases at times when the pavement is more fragile, such as when the ground is soaked or freezing. If they do damage the road, repair can be done swiftly — but in fact robocars can also be programmed both to go slowly in such alleys (as they already would) and to avoid any potholes until the gravel robot fills them. Robocars will be laser-scanning the road surface ahead of them at all times to avoid such things in any case.

I keep coming up with dramatic savings that robocars offer, and the numbers, already in the trillions of dollars and gigatons of CO2, seem amazing, but this is another one. Urban "local roads" are 15% of all U.S. road mileage, and rural local roads are 54%. (There are just over 2.6 million paved road-miles in the USA.) To add to the value, road construction and asphalt are major greenhouse gas sources.
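Purely as illustration, here is the arithmetic with the rough figures above (mid-range path cost, midpoint street cost, and of course this applies only to roads built or rebuilt this way):

```python
# Illustrative only: construction-cost gap if urban local roads could be
# light-duty paths. Costs are the mid-range figures quoted above.
PAVED_ROAD_MILES = 2_600_000
URBAN_LOCAL_SHARE = 0.15
LIGHT_PATH_COST = 150_000        # $/mile, multi-use path estimate
URBAN_STREET_COST = 2_500_000    # $/mile, midpoint of $2M-$3M

urban_local_miles = PAVED_ROAD_MILES * URBAN_LOCAL_SHARE
gap = urban_local_miles * (URBAN_STREET_COST - LIGHT_PATH_COST)
print(f"{urban_local_miles:,.0f} urban local miles")
print(f"construction-cost difference: ${gap / 1e9:.0f} billion")   # ~$917B
```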

To extend this further, I speculate on what might happen if small robocars had legs, like BigDog.

Volvo collision avoidance fails and other things that will happen again

Last week, Volvo was demoing some new collision avoidance features in their S60. I've talked about the S60 before, as it surprised me by putting pedestrian detection into a car before I expected it to happen. Unfortunately, in an extreme case of the demo disease known to all computer people, somebody made an error with the battery, and in front of a crowd of press, the car smashed into the truck it was supposed to avoid. The Wired article links to a video.

Poor Volvo, having this happen in front of all the press. Of course, their system is meant to be used in human-driven cars, warning the driver and braking if the driver fails to act — not in a self-driving vehicle. And they say that had there been a driver, there would have been an indication that the system was not operating.

While this mistake is the result of a lack of maturity in the technology, it is important to realize that as robocars are developed there will be crashes, and some of the crashes will hurt people and a few will quite probably kill people. It's a mistake to assume this won't happen, or not to plan for it. The public can be very harsh. Toyota's problems with their car controllers (if that's where the problems are — Toyota claims they are not) have been a subject of ridicule for what was (and probably still is) one of the world's most respected brands. The public asks: if programmers can't program simple parts of today's cars, can they program one that does all the driving?

There are two answers to that. First of all, they can and do program computerized parts of today's cars all the time, and by and large those parts have perfect safety records.

But secondly, no they can’t make a complete driving system perfectly safe, certainly not at first. It is a complex problem and we’ll wait a long time before the accident rate is zero. And while we wait, human drivers will kill millions.

Our modern society has always had a tough time with that trade-off. Of late we've come to demand perfect safety, though it is impossible. Few new products are allowed out if it is known that they will have any death rate due to their own flaws — even if those flaws are not known specifically, but are known to be highly likely to exist in some fashion. American juries, faced with minutes of a meeting where the company decided to "release the product, even though predictions show that bugs will kill X people," will punish the company nastily, even though the alternative was "don't release, and have human drivers kill 10X people." The 9X who were saved will not be in the courtroom. This is one reason robocars may arise outside the USA first.

Of course, there might be cases the other way. A drunk who kills somebody when he could have taken a robocar might get a stiffer punishment. A corporation that had its employees drive when robotic systems were clearly superior might face a nasty judgement — but that would require that it was OK to have the cars on the road in the first place.

However this plays out, developers must expect there will be bugs, and bugs with dire consequences. Nobody will want those bugs, and all the injuries will be tragic, but so is being too cautious about deployment. Can the USA figure out a way to make that happen?

The peril of the Facebook anti-privacy pattern

There's been a well-justified storm about Facebook's recent privacy changes. The EFF has a nice post outlining the changes in privacy policies at Facebook, which inspired this popular graphic showing those changes.

But the deeper question is why Facebook wants to do this. The answer, of course, is money, but in particular it's because the market is assigning a value to revealed data. This force seems to push Facebook, and services like it, into wanting to remove privacy from their users in a steadily rising trend. Social network services often begin with decent privacy protections, both to avoid scaring users (when gaining users is the only goal) and because they have little motivation to do otherwise. The old world of PC applications tended to have strong privacy protection (by comparison) because data stayed on your own machine. Software that exported it got called "spyware" and tools were created to root it out.

Facebook began as a social tool for students. It even promoted the fact that those not at a school could not see in — could not even join. When this changed (for reasons I will outline below) older members were shocked at the idea that their parents and other adults would be on the system. But Facebook decided, correctly, that excluding them was not the path to being #1.

Data Hosting architectures and the safe deposit box

With Facebook seeming to declare some sort of war on privacy, it’s time to expand the concept I have been calling “Data Hosting” — encouraging users to have some personal server space where their data lives, and bringing the apps to the data rather than sending your data to the companies providing interesting apps.

I think of this as something like a “safe deposit box” that you can buy from a bank. While not as sacrosanct as your own home when it comes to privacy law, it’s pretty protected. The bank’s role is to protect the box — to let others into it without a warrant would be a major violation of the trust relationship implied by such boxes. While the company owning the servers that you rent could violate your trust, that’s far less likely than 3rd party web sites like Facebook deciding to do new things you didn’t authorize with the data you store with them. In the case of those companies, it is in fact their whole purpose to think up new things to do with your data.

Nonetheless, building something like Facebook using one’s own data hosting facilities is more difficult than the way it’s done now. That’s because you want to do things with data from your friends, and you may want to combine data from several friends to do things like search your friends.

One way to do this is to develop a "feed" of information about yourself that is relevant to friends, and to authorize friends to "subscribe" to this feed. Then, when you update something in your profile, your data host would notify all your friends' data hosts about it. You need not notify all your friends, or tell them all the same thing — you might authorize closer friends to get more data than you give to distant ones.
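Here is a minimal sketch of that notification scheme; every name in it (DataHost, notify, the tier labels) is my own illustration rather than any real protocol:

```python
# Sketch of the feed idea above: one data host pushes profile updates to
# subscribed friends' hosts, with closer friends seeing more fields.
from dataclasses import dataclass, field

PUBLIC_FIELDS = {"display_name", "city"}   # what distant friends may see

@dataclass
class Friend:
    host_url: str
    tier: str                              # "close" or "distant"

@dataclass
class DataHost:
    friends: list = field(default_factory=list)

    def update_profile(self, changes: dict):
        # Push each friend only the fields their tier is allowed to see.
        for friend in self.friends:
            visible = {k: v for k, v in changes.items()
                       if friend.tier == "close" or k in PUBLIC_FIELDS}
            if visible:
                notify(friend.host_url, visible)

def notify(url, payload):
    # Stand-in for real delivery, e.g. a signed HTTP POST to the friend's host.
    print(f"POST {url}: {payload}")

host = DataHost([Friend("https://alice.example", "close"),
                 Friend("https://bob.example", "distant")])
host.update_profile({"city": "Vancouver", "relationship": "it's complicated"})
```

In this toy version, the close friend's host is told about the new relationship status, while the distant friend's host hears only about the city.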

Review: Billy: The Early Years (DVD and book)

I have written in the past about my late father's careers, most of which are documented in his memoirs and other places. In spite of being almost 60 years in the past, his religious career still gets a lot of attention, as I recently reported in the story of the strange exhibit about him in the infamous Creation Museum.

Recently, two movies have been released in which he is a character. I recently watched Billy: The Early Years, a movie about the early life of Billy Graham told from the supposed viewpoint of my father on his deathbed. Charles Templeton and Billy Graham were best friends for many years, touring and preaching together, and the story of how my father lost his faith as he studied more, while Graham grew closer to his, has become a popular story in the fundamentalist community.

While it doesn't say that it's fictional, this movie portrays an entirely invented interview with Charles Templeton, played by Martin Landau, in a hospital bed in 2001, shortly before his death. (In reality, while he did have a few hospital trips, he spent 2001 in an Alzheimer's care facility and was not coherent most of the time.) Fleshed out in the novelization, the interview is supposedly conducted on orders from an editor trying to find some dirt on Billy Graham. Most of the movie is flashbacks to Graham's early days (including times before they met) and their time together preaching and discussing the truth of the Bible.

It is disturbing to watch Landau's portrayal of my father, as well as that by Mad Men's Kristoffer Polaha as the younger version. I'm told it is always odd to see somebody you know played by an actor, and no doubt this is true. However, more disturbing is the role they have cast him in for this allegedly true story — namely Satan. As I believe is common in movies aimed at the religious market, Graham's story is told in what appears to be an allegory of the temptation of Christ. In the film, Graham is stalwart, but my father keeps coming to him with doubts about the Bible. The lines written for the actors are based in part on his writings and in part on invention, and as such don't sound at all like he would speak in real life, but they are there, I think, to play the role of the attempted temptation of the pure man.

ROFLCon panel on USENET history Saturday in Boston

Just a note that I’ll be in Boston this weekend attending the 2nd day of ROFLCon, a convention devoted to internet memes and legends. They’re having a panel on USENET on Saturday and have invited me to participate. Alas, registration is closed, but there are some parties and events on the schedule that I suspect people can go to. See you there.

Robomagellan contest disappoints

This weekend I attended the annual “Robogames” competition, which took place here in the Bay Area. Robogames is mostly a robot battle competition, with a focus on heavily armed radio-controlled robots fighting in a protected arena. For several years robot fighting was big enough to rate some cable TV shows dedicated to it. The fighting is a lot of fun, but almost entirely devoid of automation — in fact efforts to use automation in battle robots have mostly been a failure.

The RC battles are fierce and violent, and today one of the weapons of choice is something heavy that spins at very high speed, so that it builds up a lot of angular momentum and kinetic energy to transfer into the enemy. People like to see robots flying through the air and losing parts in flying sparks. (I suspect this need to make robots very robust against attack makes putting sensors on them for automation difficult, as many weapons would quickly destroy the most popular sensor types.) The games also featured a limited amount of automated robot competition. This included some lightweight (3lb and 1lb) automated battles which I did not get to watch, and some hobby robot competitions for maze-running, line following, ribbon climbing and LEGO Mindstorms. There was also a semi-autonomous robot battle called "kung fu" where humanoid robots that take high-level commands (like punch and step) try to push one another over. There is also sumo, a game where robots must push the other robot out of the ring.

I had hoped the highlight would be the Robo-magellan contest. This is a hobbyist robot car competition, usually done with small robots 1 to 2 feet in length. Because it is hobbyists, and often students, the budgets are very small, and the contest is very simple. Robots must make it through a simple outdoor course to touch an orange cone about 100 yards away. They want to do this in the shortest time, but for extra points they can touch bonus cones along the way. Contestants are given GPS coordinates for the target cones. They get three tries. In this particular contest, to make it even easier, contestants were allowed to walk the course and create some extra GPS waypoints for their robots.

These extra waypoints should have made it possible to do the job with just a GPS and camera, but the hobbyists in this competition were mostly novices, and no robot reached the final cone. The winner got within 40 feet on their last run, but no performance was even remotely impressive. This was unlike past years, where I was told that 6 or more robots would reach the target and there would be real competition. This year’s poor showing was blamed on budgets, and the fact that old teams who had done well had moved on from the sport. Only 5 teams showed up.

The robots were poorly equipped with sensors. While all had a GPS, in one or two cases the GPS systems failed and the robots quickly wandered into things. A few had sonar or touch-bars for obstacle detection, but others did not, and none of them did obstacle detection well at all. For most, if they ran into something, that was it for that race. Some used a compass or accelerometers to help judge when to turn and where to aim, since a GPS is not very good as a compass.
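The core of the GPS-waypoint approach these robots attempt fits in a few lines. A hedged sketch (a flat-earth approximation that is fine over a 100-yard course; the function names and gain are my own):

```python
# Range and bearing to a GPS waypoint, plus a proportional steering command.
import math

EARTH_RADIUS_M = 6371000

def range_bearing(lat1, lon1, lat2, lon2):
    """Distance (m) and compass bearing (deg) from point 1 to point 2."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    dist = EARTH_RADIUS_M * math.hypot(dlat, dlon)
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360   # 0 = north
    return dist, bearing

def steer(heading_deg, bearing_deg, gain=0.05):
    """Steering command in [-1, 1] that turns toward the waypoint."""
    error = (bearing_deg - heading_deg + 180) % 360 - 180  # shortest turn
    return max(-1.0, min(1.0, gain * error))
```

Run in a loop against GPS and compass readings, this is most of what was needed to reach the cone, which makes the poor showing all the more striking.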

YouTube makes statement on Content-ID takedowns

Last night, YouTube posted a note on the official YouTube Blog concerning the recent firestorm over Content-ID takedowns like the one I wrote about earlier in the week regarding my Downfall DMCA Parody.

In the post, they are kind enough to link to my video (now back up on YouTube thanks to my disputing the Content-ID takedown) as an example of a fair use parody, and to a talk by (former) fellow EFF director Larry Lessig which incorporated some copyrighted music.

However, some of the statements in the post deserve a response. Let me say first that I do understand a bit of YouTube's motivations in creating the Content-ID system. YouTube certainly has a lot of copyright violations on it, and it's staring down the barrel of a billion-dollar lawsuit from Viacom and other legal burdens. I can understand why it wants to show the content owners that it wants to help them and be their partner. It is a business and is free to host what it wants. However, it is also part of Google, whose mission is "to organize the world's information and make it universally accessible and useful," and of course to not "be evil" in the process of doing so. On the same blog, YouTube declares its dedication to free speech very eloquently.

Generating delicious fake regional cuisine

One of the greatest things that can give a region a sense of identity is the presence of a regional cuisine. In addition to identity it brings in tourists, so every region probably wishes it had one.

Of course a real regional cuisine takes a long time to develop, even centuries. The world’s great cuisines all were a long time coming, and were often based on the presence of particular local ingredients as much as on the food culture. Some cuisines have arisen quickly, particularly fusion cuisines which arise due to immigrants mixing and from colonialism. Today the market for ingredients is global, though there are still places where particular ingredients are at their best.

One recent regional food, the “Buffalo” chicken wing, is believed to have come from a single restaurant (The Anchor Bar in Buffalo) and spread out to other local establishments and then around the world. Part of its success in spreading around the world is its simplicity and the fact that (unlike many other regional-source foods) it features ingredients found all around the world. Every town would like to have its equivalent of the Buffalo Wing.

To make this happen, I think towns should hold contests among local restaurants to develop such dishes. Restaurants might enter dishes they already specialize in, or come up with something new. The winner, by popular vote, would get their dish named after the town, and found on the menus of other competing restaurants for some period of time.

The following rules might make sense:

  • Ideally, the dish should try to be based on an ingredient which is available locally, and perhaps at its best locally, but which still can be found in the rest of the world so the dish can spread.
  • All restaurants submitting a dish must agree that should they win, they will publish recipes for the dish and claim no exclusive on it. They will, however, be the only restaurant to say they have the original dish and were the winner of the contest.
  • Ideally, recipes will be published in advance, so other restaurants can also make the dish during the contest, in particular restaurants that are not competing. (Competing chefs might deliberately make the dish badly.) In fact, advance publication (and a contest cookbook) might be part of the rules.
  • “None of the above” should be an encouraged choice on the voting form. The first round might not create a dish worthy of the town.
  • A panel of chefs would rate the dishes according to difficulty. Dishes that are easier would be encouraged, as these can spread more easily. The list of difficulties would be published for voters to use in making their decisions. I.e., voters might pick the 2nd most tasty dish if it's much easier to make.
  • Every dish must be available in “chef-approved” form at some minimum number of restaurants, so it is easy to try each dish. Private chefs can compete if they can recruit restaurants to offer their dish.
  • At the end of the contest, the city’s tourist board would have a budget to promote the dish to tourists.
  • Voting would be done online, but voters would need to get a token somewhere based on a unique ID so they can't vote more than once. They need not pick a single dish. The "Approval" voting system, where voters can list as many dishes as they find qualified and the one with the most votes wins, could be used (see the toy tally after this list).
  • It is certainly possible as well to have multiple winners, and the creation of variations on the winning dish would be encouraged.
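As a toy illustration of the approval tally described in the voting rule above (the dish names are invented):

```python
# Toy tally for approval voting: each ballot is the set of dishes the
# voter finds acceptable; the dish with the most approvals wins.
from collections import Counter

ballots = [                                   # invented example ballots
    {"smoked trout melt", "pepper pierogi"},
    {"pepper pierogi"},
    {"pepper pierogi", "none of the above"},
]

tally = Counter(dish for ballot in ballots for dish in ballot)
winner, votes = tally.most_common(1)[0]
print(f"winner: {winner} with {votes} approvals")
```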

Would this be an authentic regional cuisine that “comes from the people?” Of course not. But it might be tasty, and if chosen by the people, might grow into something that really belongs to that city.

Studio does content-ID takedown of my Hitler video about takedowns

In a bizarre twist of life imitating art that may be too "meta" for your brain, Constantin Films, the producer of the war movie "Downfall," has caused the takedown of my video, which was put up to criticise their excessive use of takedowns.

Update: YouTube makes an official statement and I respond.

A brief history:

Starting a few years ago, people started taking a clip from Downfall where Hitler goes on a rampage, and adding fake English subtitles to produce parodies on various subjects. Some were very funny, and hundreds of different ones were made. Some were even made about how many parodies there were. The German studio, Constantin, did DMCA takedowns on many of these videos.

So I made, with considerable effort, my own video, which depicted Hitler as a producer at Constantin Films. He hears about all the videos and orders DMCA takedowns. His lawyers (generals) have to explain why you can't just do that, and he gets angry. I have a blog post about the video, including a description of all the work I had to do to make sure my base video was obtained legally.

Later, when the video showed up on the EFF web site, Apple decided to block an RSS reader from the iPhone app store because it pointed to the video and Hitler says a bad word that shocked the Apple reviewers.

Not to spoil things too much, but the video also makes reference to an alternate way you can get something pulled off YouTube. Studios are able to submit audio and video clips to YouTube which are "fingerprinted." YouTube then checks all uploaded videos to see if they match the audio or video of some allegedly copyrighted work. When they match, YouTube removes the video. That's what I have Hitler decide to do instead of more DMCA takedowns, and lo, Constantin actually ordered this, and most, though not all, of the Downfall parodies are now gone from YouTube. Including mine.
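YouTube has never published how Content-ID's matching works, so anything concrete would be invented; but to give the general flavour of fingerprint matching, here is a toy sketch in which the hashing scheme, window size and threshold are all arbitrary:

```python
# Toy flavour of audio fingerprint matching; not YouTube's actual system.
import hashlib

def fingerprints(samples, window=4096):
    """Hash coarse snapshots of overlapping windows of audio samples in [-1, 1]."""
    prints = set()
    for i in range(0, max(0, len(samples) - window), window // 2):
        chunk = samples[i:i + window]
        coarse = bytes(int(abs(s) * 255) // 32 for s in chunk[::256])
        prints.add(hashlib.md5(coarse).hexdigest())
    return prints

def likely_match(upload, reference, threshold=0.5):
    """Flag the upload if enough of its windows also occur in the reference."""
    up, ref = fingerprints(upload), fingerprints(reference)
    return len(up & ref) / max(1, len(up)) >= threshold
```

The key property, whatever the real scheme is, is that matching happens automatically on every upload, with no human judging whether the use is a parody.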

Now I am sure people will debate the extent to which some of the parodies count as "fair use" under the law. But in my view, my video is about as good an example of parody fair use as you're going to see. It uses the clip to criticise the very producers of the clip and the takedown process. The fair use exemption to copyright infringement claims was created, in large part, to assure that copyright holders didn't use copyright law to censor free speech. If you want to criticise content or a content creator — an important free speech right — often the best way to do that will make use of the content in question. But the lawmakers knew you would rarely get permission to use copyrighted works to make fun of them, and wanted to make sure critical views were not stifled.

The radio will be a major innovation center in cars, near-term

I’ve been predicting a great deal of innovation in cars with the arrival of robocars and other automatic driving technologies. But there’s a lot of other computerization and new electronics that will be making its way into cars, and to make that happen, we need to make the car into a platform for innovation, rather than something bought as a walled garden from the car vendor.

In the old days, it was fairly common to get a car without a radio, and to buy the radio of your choice. This happened even in higher-end cars. However, the advantages in sound quality and dash integration of a factory-installed radio started to win out, especially with horizontal-market Japanese companies who were good at both cars and radios.

For real innovation, you want a platform, where aftermarket companies come in and compete. And you want early adopters to be able to replace what they buy whenever they get the whim. We replace our computers and phones far more frequently than our cars and the radios inside them.

To facilitate this, I think the car’s radio and “occupant computer” should be merged, but split into three parts:

  1. The speakers and power amplifier, which will probably last the life of the car, and be driven with some standard interface such as 7.1 digital audio over optical fiber.
  2. The "guts," which probably live in the trunk or somewhere else not space-constrained, and connect to the other parts.
  3. The “interface” which consists of the dashboard panel and screen, with controls, and any other controls and screens, all wired with a network to the guts.

Ideally the hookup between the interface and the guts is a standardized protocol. I think USB 3.0 can handle it; it has the bandwidth to drive screens on the dashboard, and on the backs of the headrests for rear passenger video. Though if you want to imagine an HDTV for the passengers, it's possible we would add a video protocol (like HDMI) alongside the USB. But otherwise USB is general enough for everything else that will connect to the guts. USB's main flaw is its master-slave approach, which means the guts needs to be both a master, for control of various things in the car, and a slave, for when you want to plug your laptop into the car and control elements in the car — and the radio itself.
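As a sketch of what messages over that standardized link might look like (field names entirely hypothetical, not any real automotive standard):

```python
# Hypothetical message types for the interface<->guts link.
from dataclasses import dataclass
from enum import Enum, auto

class Source(Enum):
    DASH_PANEL = auto()
    HEADREST_SCREEN = auto()
    EXTERNAL_LAPTOP = auto()   # the guts acting as USB slave

@dataclass
class ControlMessage:
    source: Source
    control: str               # e.g. "volume", "tuner_freq", "nav_dest"
    value: str

def dispatch(msg: ControlMessage):
    # The guts routes each control to the amplifier, tuner, nav, etc.
    print(f"{msg.source.name}: set {msg.control} = {msg.value}")

dispatch(ControlMessage(Source.DASH_PANEL, "volume", "11"))
dispatch(ControlMessage(Source.EXTERNAL_LAPTOP, "tuner_freq", "104.5"))
```

The point of a schema like this is that any vendor's interface panel, or your own laptop, can drive any vendor's guts, which is what makes the aftermarket competition possible.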

Of course there should be USB jacks scattered around the car to plug in devices like phones and memory sticks and music players, as well as to power devices up on the dash, down in the armrests, in the trunk, under the hood, at the mirror and right behind the grille.

Finally there need to be some antenna wires. That's harder to standardize, but you can bet we need antennas for AM/FM/TV, satellite radio, GPS, cellular bands, and various 802.11 protocols including the new 802.11p. In some cases, however, the right solution is just to run USB 3.0 to the places an antenna might go, and then have a receiver or transceiver with integrated antenna which mounts there. A more general solution is best.

This architecture lets us replace things with the newest and latest stuff, and lets us support new radio protocols as they appear. It lets us replace the guts if we have to, and replace the interface panels, or customize them readily to particular cars.

Houseguest from heaven

I recently stayed at the home of a friend up in Vancouver. She had some electrical wiring problems, and since I know wiring, I helped her with them as well as some computer networking issues. Very kindly she said that made me a houseguest from heaven (as opposed to the houseguests from hell we have all heard about). I was able to leave her place better than I found it. Well, mostly.

This immediately triggered a business idea in my mind which seems like it would be cool but is, alas, probably illegal. The idea would be a service where people with guest rooms, or even temporarily vacant homes, would provide free room (and board) to qualified tradespeople who want a cheap vacation. Electricians, handypeople, plumbers, computer wizards, housepainters, au pairs, gardeners and even housecleaners and organizers would stay in your house, and leave it having done some repairs or cleanup. In some cases, like cleanup, pool maintenance and yard sweeping, the people need not be skilled professionals; they could be just about anybody.

Obviously there would need to be a lot of logistics to work out. A reliable reputation system would be needed if you’re going to trust your house to such strangers, particularly if trusting the watching of your children. You would need to know both that they are able to do the work and not about to rob you. You would want to know if they will keep the relationship a business one or expect a more friendly experience, like couch surfing.

In addition, the homeowners would need reputations of their own. Because, for a skilled tradesperson, a night of room and board is only worth a modest amount of work. You can't give somebody a room and expect them to work the whole day on your project — or even much more than an hour. Perhaps if a whole house is given over, with rooms for the person and a whole family, more work could be expected. The homeowner may not be good at estimating the amount of work needed, and may come away disappointed when told that the guest spent 2 hours on the problem and decided it was a much bigger job.

Trading lodging for services is an ancient tradition, particularly on farms. In childcare, the “au pair” concept has institutionalized it and made it legal.

But alas, legality is the rub. The tax man will insist that both parties are earning income and want to tax it, as barter is taxable. The local contractor licensing agency will insist that work be done only by locally licensed contractors, to local codes, possibly with permits and inspections. Immigration officials will insist that foreign tourists are working illegally. And there would be the odd civil dispute. A union might tell its members not to take work even from remote members of cousin unions.

The civil disputes could be kept to a minimum by making the jobs short and a good deal for the guests, since for the homeowners the guest room was typically doing nothing anyway — thus the success of couch surfing — and making slightly more food is no big deal. But the other legal risks would probably make it illegal for a company to get in the middle of all this. At least in the company's home country. A company based in some small nation might not be subject to remote laws.

Robot car virtual contest and demolition derby

A couple of weeks ago I wrote about the need for a good robocar driving simulator. Others have been on the track even earlier and are arranging a pair of robotic driving contests in simulator for some upcoming AI conferences.

The main contest is a conventional car race. It will be done in the TORCS simulator I spoke of, where people have been building robot algorithms to control race cars for some time, though not usually academic AI researchers. In addition, they’re adding a demolition derby which should be a lot of fun, though not exactly the code you want to write for safety.

This is, however, not the simulator contest I wrote about. The robots people write for computer racing simulators are given a pre-distilled view of the world. They learn exactly where the vehicle is, where the road edges are and where other cars are, without error. Their only concern is to drive based on the road, the physics of their vehicle and the track, and not hit things — or in the case of the derby, to deliberately hit things.

The TORCS engine is a good one, but is currently wired to do only an oval racetrack, and the maintainers, I am told, are not interested in having it support more complex street patterns.

While simulation in an environment where all the sensing problems are solved is a good start, a true robocar simulation needs simulated sensors — cameras, LIDAR, radar, GPS and the works — and then software that takes that and tries to turn it into a map of where the road is and where the vehicles and other things are. Navigation is also an important thing to work out. I will try to attend the Portland version of this conference to see this contest, however, as it should be good fun and generate interest.

Let me print my boarding pass long before my flight

I love online check-in, and printing your boarding pass at home so you need do nothing but go to the gate at the airport. Airlines are even starting to do something I asked for many years ago: sending a boarding pass to your cell phone that can be held up to a scanner at check-in.

But if they can't do that, I want them to let me print my boarding pass long before my flight. In particular, to print my return boarding pass when I print my outgoing one. That's because I have a printer at home but often don't have one on the road.

Of course, you can't actually check in until close to the flight, so this boarding pass would be marked as preliminary, but still have bar codes identifying what they need to scan. On the actual day of the flight, I would check in from my phone or laptop, so they know I am coming to the plane. There's no reason the old boarding pass's bar codes can't then be activated as ready to work. Sure, it might not show the gate, and the seat may even change, but such seat changes are rare, and perhaps then I would need to go to a kiosk to swap the old pass for a new one. If the flight changes then I may also need to do the swap, but the swap can be super easy — hold up old pass, get new one.

I could also get a short code to write on the pass when I do my same-day check-in, such code being usable to confirm the old pass has been validated.
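A sketch of how simple the airline-side logic could be: the barcode payload printed weeks ahead never changes, and check-in just flips a status flag that the gate scanner consults. Names and data store are hypothetical:

```python
# Preliminary-pass scheme: same printed barcode, server-side status flip.
passes = {}   # barcode -> record; stand-in for the airline's database

def issue_preliminary(barcode, flight):
    passes[barcode] = {"flight": flight, "status": "preliminary", "seat": None}

def check_in(barcode, seat):
    passes[barcode].update(status="active", seat=seat)  # same paper, now valid

def gate_scan(barcode):
    rec = passes.get(barcode)
    return rec is not None and rec["status"] == "active"

issue_preliminary("BT123", "UA 871")
print(gate_scan("BT123"))    # False: printed, but not yet checked in
check_in("BT123", "21C")
print(gate_scan("BT123"))    # True: the old paper pass now works at the gate
```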

Police robots everywhere?

It is no coincidence that two friends of mine have both founded companies recently to build telepresence robots. These are easy-to-drive remote-controlled robots which have a camera and screen at head height. You can inhabit the robot, drive it around a flat area and talk to people by videoconferencing. You can join meetings, go visit people or inspect a factory. Companies building these robots, initially at high prices, intend to sell them both to executives who want to remotely tour remote offices and to companies who want to give cheaper remote employees a more physical presence back at HQ.

There are also a few super-cheap telepresence robots, such as the Spykee, which runs Skype video conferencing and can be had for as low as $150. It’s not very good, and the camera is very low down, and there’s no screen, but it shows just how cheap such a product can get.

(Photo: the Anybots "QA" telepresence robot)

When they get down to a price like that, it seems inevitable to me that we will see an emergency services robot on every block, primarily for use by the police. When there is a police, fire or ambulance call to an address, an officer could immediately connect to the robot on that block and drive it to the scene, to be telepresent. The robot would live in a small, powered protective closet, either paid for by the city or, more likely, donated by some neighbour on the block who wants the fastest possible emergency response. Called into action, the robot's garage door would open and the robot would drive out, and probably be at the location of the emergency within 60 to 120 seconds, depending on how densely the robots are placed. In the meantime actual first responders would also be on the way.

What could such a robot do?

Transit energy chart updated from latest DoE book

Back in 2008 I wrote a controversial article about whether green transit is a myth in the USA. Today I updated the main chart in that article based on the new release of the Department of Energy's Transportation Energy Data Book, 2009 edition. The car and SUV numbers have stayed roughly the same (at about 3,500 BTUs/passenger-mile for the average car under average passenger load). To put BTU/passenger-mile figures in more familiar terms, see the quick conversion after the list below.

What’s new?

  • Numbers for buses are now worse at 4300. Source data predates the $4/gallon gas crisis, which probably temporarily improved it.
  • Light (capacity) rail numbers are significantly worse — reason unknown. San Jose’s Light rail shows modest improvement to 5300 but the overall average reported at 7600 is more than twice the energy of cars!
  • Some light rail systems (See Figure 2.3 in Chapter 2) show ridiculously high numbers. Galveston, Texas shows a light rail that takes 8 times as much energy per passenger as the average SUV. Anybody ridden it and care to explain why its ridership is so low?
  • Heavy rail numbers also worsen.
  • Strangely, average rail numbers stay the same. This may indicate an error in the data or a change of methodology, because while Amtrak and commuter rail are mildly better than the average, it’s not enough to reconcile the new average numbers for light and heavy rail with the rail average.
  • I’ve made a note that the electric trike figure is based on today’s best models. Average electric scooters are still very, very good but only half as good as this.
  • I’ve added a figure I found for the East Japan railway system. As expected, this number is very good, twice as good as cars, but suggests an upper bound, as the Japanese are among the best at trains.
  • I removed the oil-fueled-agriculture number for cyclists, as that caused more confusion than it was worth.
  • There is no trolley bus number this year, so I have put a note on the old one.
  • It’s not on the chart, but I am looking into high speed rail. Germany’s ICE reports a number around 1200 BTU/PM. The California HSR project claims they are going to do as well as the German system, which I am skeptical of, since it requires a passenger load of 100M/year, when currently less than 25M fly these routes.
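As promised above, here is a quick conversion of BTU/passenger-mile into "passenger-mpg equivalent," using roughly 115,000 BTU per gallon of gasoline (a standard approximate figure) and the numbers above:

```python
# Convert BTU per passenger-mile into passenger-mpg equivalent.
BTU_PER_GALLON = 115_000

for mode, btu_per_pm in [("average car", 3500),
                         ("bus", 4300),
                         ("light rail average", 7600),
                         ("Germany ICE", 1200)]:
    print(f"{mode}: {BTU_PER_GALLON / btu_per_pm:.0f} passenger-mpg equivalent")
```

By this measure the average light rail system delivers about 15 passenger-mpg versus about 33 for the average car, which is the chart's point.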
