Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
Here’s a nice story about the Kiva warehouse delivery robot now being used by major retailers like The Gap. Factory floor robots have been around for some time, and the field even has a name, “automated guided vehicles,” but these newer deliverbots kick it up a notch, picking up whole shelves and bringing them to a central area for distribution, finding their way on their own with sensors.
We’re also seeing more hospital deliverbots, which — very slowly — take things around hospitals, roving the same corridors as the people. When a robot goes very slowly, people are willing to allow it to travel with them. The technological question is: how hard is it to raise that speed, stay safe, and make people believe they are safe?
Some applications care little about speed, and the slow robots already have a market there. But we would not tolerate super-slow robots regularly getting in the way of our cars on our streets.
One answer may be “extremely deferential” behaviour. Consider a deliverbot trundling down a low-volume street at 10 km/h (6 mph). It would be constantly checking for a vehicle coming up behind it, using radar, lasers and cameras. With LIDAR it would get about 90 meters of warning, with other sensors perhaps more. Say it detects a car coming up behind it at 50 km/h (30 mph). It has about 8 seconds, during which it will cover 22 meters. If it’s a small robot — and we might limit the robots to make them small — odds are reasonable that it can find a place to duck into, such as a driveway. These robots aren’t parking, so they can move into driveway entrances, fire hydrant zones and many small non-parking spaces along the road.
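As a sanity check, the timing above falls out of simple arithmetic. This sketch is mine, not any robot’s actual software; the detection range and speeds are the figures from the text:

```python
# Rough timing check for the "extremely deferential" deliverbot scenario.
# Helper names are my own; the numbers are the ones in the text.

def warning_seconds(detection_range_m: float, car_kmh: float, bot_kmh: float) -> float:
    """Time until an overtaking car closes the detection gap."""
    closing_ms = (car_kmh - bot_kmh) * 1000 / 3600  # closing speed in m/s
    return detection_range_m / closing_ms

def distance_covered_m(bot_kmh: float, seconds: float) -> float:
    """How far the robot travels while the warning clock runs."""
    return bot_kmh * 1000 / 3600 * seconds

t = warning_seconds(90, 50, 10)   # LIDAR sees 90 m; car approaching at 50 km/h
d = distance_covered_m(10, t)     # robot trundling at 10 km/h
print(f"{t:.1f} s of warning, {d:.1f} m to find a driveway")
```

Twenty-odd meters of travel time is several driveway entrances on most residential blocks, which is why the duck-and-hide strategy seems plausible.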
Indeed, it need not find a place to pause on its own side of the road. If there is no immediate oncoming traffic, it could deke to the other side of the road for a hiding spot. Ideally it would be clever and not pick a driveway containing a moving car, or even one whose engine its sensors reveal is running.
Indeed, it’s not unreasonable for the deliverbot to simply move into the oncoming lane, if it is clear, to let the human vehicle pass. This is a bit disconcerting to our usual sense of how things work — slow vehicles don’t move to the left for us to pass them — but there is no reason it could not work. This is on urban streets, where stopped vehicles, turning vehicles and even pedestrians are found in the middle of the street all the time, and drivers have plenty of time to stop for them. Nobody is going to hit such a vehicle, just be annoyed by it.
Drivers would see various slow deliverbots on the road ahead, but in all but unusual circumstances, by the time they got close, the robots would have pulled out of the lane to pause in driveway entrances. The main risk is that drivers might start to depend on this, and plow right into such a robot (at slow speed) if there was no place for it to pull over. A deliverbot that doesn’t immediately see a place to pull over would probably start blinking a very obvious flashing light on the back, escalating the warnings if the vehicle behind does not slow down. It might also speed up a little, if safe to do so, to reach a spot to pause.
Why is this interesting? I think we’re much closer to building a vehicle that could go 10 kph on slow city streets, using LIDAR. If the vehicle is small and doesn’t weigh a great deal, it simply won’t be capable of doing much damage to people by hitting them. It could even be equipped with airbags on the outside should this ever become unavoidable. The main problems would be people hitting them, or being annoyed by them.
Once accepted, as safety technology improves, the speed can improve — eventually to a level where they don’t get in the way, other than in the sense that any other vehicle is in your way. There will always be those who want to go faster, and so the deference approach will always be useful.
It was taken with the Gigapan imager that I gave a negative review last month. You can see why I want a better version of this imager. The shot is a great recording of history, as you can see the faces of almost all the dignitaries and high rollers who were there. It has a few stitch errors which would be a lot of work to remove by hand, so I don’t blame the creator for doing just one 5-hour automated pass. When such an imager becomes available for quality DSLRs, the image will be even better — this one faces the limitations of the G10. And due to the long time required to shoot any panorama of this scope, it looks like only some of the crowd are applauding, while others look bored.
I would love to see a shot of the ordinary folks in the far-away crowd too, but he wasn’t in range to get that, and it would have needed a longer lens. A computer might be able to count the faces then, or even tell you their racial mix. The made-the-list area probably has more black faces than ever before, but still a small minority.
A few years in the future, every event will be captured at this resolution, until we start having privacy worries about it.
In the early days of microprocessors, people selling home computers tried to come up with reasons to have them in the home. The real reason you got one was hobby computing, but the companies wanted to push other purposes. A famous one was use in the kitchen. The computer could store your recipe file, and, wonder of wonders, could change the amounts of the ingredients based on how many servings you wanted to make.
This never caught on, and computers have come a long way since. Yet I still mostly see nonsense applications promoted. For example, boosters of RFID tell us that our fridges will be able to track when things went in, and when it’s time to buy more milk. We should give up huge amounts of privacy to figure out when to order more milk?
With that track record, I should stay away from the area, but let me propose some interesting approaches in the kitchen.
The cooking area should have a screen, of course. Screens are already in the kitchen to watch TV. While you could (and would) put digital recipes up on the screen, I imagine going further, and having TV cooking shows, where you watch a chef prepare a dish. You would be able to pause, rewind and do everything that digital video does, but the show would also come along with encoded instructions tagged to points in the video. When the recipe calls for cooking for 5 minutes, the computer would start appropriate timers.
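The video-tagged instructions could be as simple as a list of cues keyed to playback position. A minimal sketch, in which the `Cue` format and field names are my own invention rather than any real product’s:

```python
# Sketch of cooking-show instructions tagged to points in the video:
# each cue carries a timestamp and an optional countdown, so the kitchen
# computer can start the right timer as playback passes it.

from dataclasses import dataclass

@dataclass
class Cue:
    video_s: float        # position in the cooking show, in seconds
    text: str             # instruction shown on screen
    timer_s: float = 0    # countdown to start; 0 means none

cues = [
    Cue(95.0, "Add the onions"),
    Cue(140.0, "Simmer for 5 minutes", timer_s=300),
]

def cues_passed(cues, prev_pos, new_pos):
    """Cues crossed as playback moves; pause and rewind fall out naturally."""
    return [c for c in cues if prev_pos < c.video_s <= new_pos]

for cue in cues_passed(cues, 90.0, 150.0):
    if cue.timer_s:
        print(f"Starting {cue.timer_s/60:.0f} minute timer: {cue.text}")
```

Because cues are keyed to position rather than wall-clock time, pausing or rewinding the show keeps the instructions in sync automatically.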
The computer should have a speech interface, and a good one, allowing you to call out for timers, and to name ingredients and temperatures. More on that later.
The first thing I would like to see is smart digital wireless scales in a lot of places. A general one on the counter, of course, but quite possibly also built into the rack above the burner which holds the pot. You can get scales built into spoons and scoops now, and they could be Bluetooth.
Last week, I wrote about issues in providing videoconferencing to the aged. Later, I refined a new interface plan discussed in the comments. I think this would be a very good way for tools like Skype to work, so I am making an independent posting, and will encourage Skype, Google video chat (and others) to follow this approach.
First, it should be possible to reliably associate a PSTN phone number with an online identity. This can be done by the person who owns the number (with a security trick) or by the person who wants to call them.
When a user places a call — quite possibly through a USB handset with a dial pad, or through a dedicated IP phone — the system should check whether this number belongs to a user, and whether that user is online. If so, just make the call through the VoIP system.
If the user is not online, make the call through the PSTN, i.e. SkypeOut. If/when the called party answers, the caller can say, “I’m calling you with Skype. Are you near your computer?”
The called party can then go to their computer and one of two things can happen.
The moment they sign on to Skype, it can notice that they have this SkypeOut call underway, because it gets a message from the buddy who called via SkypeOut. Immediately it pops up a dialog box asking to OK transfer of the call. If they approve, the audio will switch to pure Skype, and when that is good, the phone will be hung up.
Failing that, if the user logs on and attempts a Skype call to the contact who is on the PSTN call with them, Skype should notice that at the other end, and answer the new call by connecting it to the PSTN call.
When connecting the calls together, there should be a brief bridge when both the PSTN phone and computer are connected, and then later (or upon hangup) the PSTN leg would be terminated. However, for those who don’t have a cordless phone or phone by the computer, it would be nice if they could just hang up their PSTN call, go to the computer, and join the conversation. To facilitate that, the presence of a call 30 seconds in the past should still enable this quick re-setup.
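The handover logic above, including the 30-second grace window, can be sketched as a tiny state machine. This is purely illustrative; the class and state names are mine, not Skype’s:

```python
# Sketch of the PSTN-to-VoIP handover described above. A sign-on may be
# bridged straight into the call while the PSTN leg is live, or within a
# short grace window after the called party hangs up the regular phone.

import time

GRACE_S = 30  # a PSTN leg ended this recently still allows instant re-setup

class CallSession:
    def __init__(self):
        self.pstn_active = False
        self.pstn_ended_at = None

    def start_pstn(self):
        self.pstn_active = True

    def hang_up_pstn(self):
        self.pstn_active = False
        self.pstn_ended_at = time.monotonic()

    def can_auto_join(self, now=None):
        """May the called party's sign-on be bridged straight into the call?"""
        if self.pstn_active:
            return True  # live bridge first, then drop the PSTN leg
        if self.pstn_ended_at is None:
            return False
        now = time.monotonic() if now is None else now
        return now - self.pstn_ended_at <= GRACE_S

s = CallSession()
s.start_pstn()
assert s.can_auto_join()      # brief bridge of phone and computer
s.hang_up_pstn()
assert s.can_auto_join()      # still within the grace window
```

The grace window is what lets someone without a cordless phone hang up, walk to the computer, and rejoin without re-dialing.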
The experience for the user who places the call (possibly a senior) is very simple. Place a call. Mention it is on the computer. At some point, without having to do anything, the audio switches and is now higher quality, and video can be started — automatically if the two buddies are set up for automatic video.
For the receiving user, the interface is pretty simple. Go to the computer, log on, and possibly click on a buddy or approval box. Then hang up the regular phone (or possibly have already hung it up not too long ago).
To encourage this, Skype could sell a SkypeOut plan that allows an unlimited number of very short PSTN calls that are followed by a transfer to VoIP for a low monthly fee, like $1/month.
This would allow a very simple UI in the senior home. An ordinary telephone handset sits next to the computer. You pick it up, dial a number, your grandchild answers, and at some point into the communication the video call begins on the screen. This is as close to the familiar interface as we can get.
Now, as for associating numbers and buddies: if this is done by the caller, there is no security aspect. However, it’s much better if it can be done (just once) by the target. To do that, you would declare a phone number and the system would call you. The voice on the other end would ask you to enter the touch tones you see on your screen. This would confirm ownership of that number.
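The touch-tone check is essentially a one-time challenge code. A minimal sketch, assuming a six-digit code and a constant-time comparison (the function names are mine):

```python
# Sketch of number-ownership verification: show a short code on screen,
# call the declared number, and compare the touch tones keyed in.

import hmac
import secrets

def issue_challenge(digits: int = 6) -> str:
    """Code displayed on the user's screen when they declare a number."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))

def verify(expected: str, keyed: str) -> bool:
    """Constant-time compare of the DTMF digits the callee entered."""
    return hmac.compare_digest(expected, keyed)

code = issue_challenge()
print(f"Enter these touch tones when we call you: {code}")
```

Only someone who can both see the screen and answer that phone can pass, which is exactly the ownership claim being tested.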
The “hang up first” interface question is a bit more complex. I do like the idea of having it be very automated. You sign in (or return to your computer that is already signed in) and bang — you are in the call. However, if you hung up the phone a while ago you might have gone to your computer for other purposes than to continue the call. The caller might have a dialog saying, “The called party hung up. Are you waiting for them to go to their computer?” And if you click yes, then do an automatic start. Otherwise make it manual.
Some of you may know that I started a sub-blog for my thoughts on my favourite SF TV show, Battlestar Galactica. This sub-blog was dormant while the show was off the air, but it’s started up again with new analysis as the first new episode of the final 10 (or 12) episodes airs tonight. (I will be missing watching it near-live as I will be giving a talk tonight on Robocars at the Future Salon in Palo Alto.) Reports are that one big mystery — the last Cylon — is revealed tonight.
So if you watch Battlestar Galactica, you may want to subscribe to the feed for the Battlestar Galactica Analysis Blog right here on this site. And I’ll go out on a limb and promote my two top candidates for the mystery Cylon.
I’ve written about “data hosting/data deposit box” as an alternative to “cloud computing.” Cloud computing is timesharing — we run our software and hold our data on remote computers, and connect to them from terminals. It’s a swing back from personal computing, where you had your own computer, and it erases the 4th amendment by putting our data in the hands of others.
Lately, the more cloud computing applications I use, the more I realize one other benefit that data hosting could provide as an architecture. Sometimes the cloud apps I use are slow. It may be because of bandwidth to them, or it may simply be because they are overloaded. One of the advantages of cloud computing and timesharing is that it is indeed cheaper to buy a mainframe cluster and have many people share it than to have a computer for everybody, because those computers sit idle most of the time.
But when I want a desktop application to go faster, I can just buy a faster computer. And I often have. But I can’t make Facebook faster that way. Right now there’s no way I can do it. If it weren’t free, I could complain, and perhaps pay for a larger share, though that’s harder to solve with bandwidth.
In the data hosting approach, the user pays for the data host. That data host would usually be on their ISP’s network, or perhaps (with suitable virtual machine sandboxing) it might be the computer on their desk that has all those spare cycles. You would always get good bandwidth to it for the high-bandwidth user interface stuff. And you could pay to get more CPU if you need more CPU. That can still be efficient, in that you could possibly be in a cloud of virtual machines on a big mainframe cluster at your ISP. The difference is, it’s close to you, and under your control. You own it.
There’s also no reason you couldn’t allow applications that have some parallelism to them to try to use multiple hosts for high-CPU projects. Your own PC might well be enough for most requests, but perhaps some extra CPU would be called for from time to time, as long as there is bandwidth enough to send the temporary task (or sub-tasks that don’t require sending a lot of data along with them.)
And, as noted before, since the users own the infrastructure, this allows new, innovative free applications to spring up, because the application providers don’t have to buy infrastructure. You could be the next YouTube, eating that much bandwidth, with full scalability, without spending much on bandwidth at all.
I just got my new Canon 5D Mark II. (Let me know if you want to buy some of my old gear, see below…) This camera is creating a lot of attention because of several ground-breaking features. First, it’s 22MP full-frame. Second, it shoots at up to 25,600 ISO — 8 stops faster than the 100 ISO that was standard not so long ago, and is still the approximate speed of typical P&S today. It’s grainy at that speed (though makes a perfectly good shot for web display) and it’s really not very grainy at all at 3200 ISO.
They also “threw in” HDTV video capture at the full 1920x1080, and I must say the video is stunning. There are a few flaws: the compression is inefficient (files run about 5 megabytes/second) and there is no autofocus available while shooting. But most of us were not expecting video to be there at all.
Another “flaw” I found — for years I have had a 2x tele-extender but the cameras refuse to autofocus with them on f/4 lenses (f/8 being too dark, while f/5.6 is OK.) But I figured, with the way sensors have been getting so much better and more sensitive of late, surely the newest cameras would be able to do it? No dice. I will later try an experiment blocking the pins that tell it not to autofocus, maybe it will work.
Anyway, on to the little surprise for photographer friends who want this camera. Normally, cameras and most other gear are more expensive in Canada. But there was a lucky accident with this camera. When Canon priced it, the Canadian dollar was much stronger against the U.S. dollar, so they priced it only $450 over the USD price. That is to say, the camera with the 24-105L lens is $3500 in the USA and $3950 in Canada. But due to the shift in exchange rates, $3950 CDN is now only about $3250 USD. And the camera comes with a full USA/Canada warranty, so it is not gray market.
There is a smaller saving on the body-only — $3100 CDN vs. $2700 USD, a saving of only about $130. If you want the body only, I recommend you buy the kit with lens for the equivalent of $3250, sell the lens (you can get about $900 for it in the USA), and that gets you the body for $2350, a $350 saving, with some work. Boy, at that price this camera is pretty amazing, considering I paid over $3000 for my first D30!
In Canada, two good stores are Henry’s Camera and Camera Canada. All stores sell this camera at list price right now (because it’s hot) but I talked Henry’s into knocking $75 because their Boxing Day sales ads proclaimed “All Digital SLRs on sale.” At first they said, “not that one” but I said, “So all doesn’t mean all?” so they were nice and gave the discount. You probably won’t. Shipping was $10 and I got it in about 3 shipping days via international Priority Mail. No taxes or duties if exported from Canada.
Of course, if you prefer to order from a U.S. retailer, you can do me a favour and follow the links on my Camera Advice pages, where I get a modest cut if you buy from Amazon or B&H, both quality online retailers.
Now that I have my 5D, I don’t really need my 20D or 40D. I may keep one of them as a backup body. Based on eBay prices, the 20D is worth about $325 and the 40D about $620 — make me an offer. I will also sell the 10-22mm EF-S lens which works with those bodies but not with the 5D. Those go for about $550 on eBay, mine comes with an aftermarket lens hood — always a good idea. The 10mm lens is incredibly wide and gets shots you won’t get other ways. I am slightly more inclined to sell the superior 40D, as I only want to keep the other camera as a backup. The 40D’s main advantages are a few extra pixels, a much nicer display screen and the vibrating sensor cleaner. I have Arca-swiss style quick release plates for each camera, and want to sell them with the cameras. They cost $55 new, and don’t wear out, so I would want at least $40 added for them.
More on the 5D/II after I have shot with it for a while.
Update: The Canadian dollar has fallen more, it’s $1.29 CDN to $1 USD, so the 5D Mark II with lens kit at $3950 CDN is just $3060 USD, a bargain hard to resist over the $3500 US price. Sell that kit lens if you don’t need it for $850 and you’re talking $2200 for your 5D.
Update 2: The Canadian dollar has risen again, reducing the value of this bargain. It is unlikely to make sense with the currencies near even in value.
I’ve added a new concept to the notes section — the Robo Snow Plow. In the article I describe the value of plows that can patrol the roads frequently without need for staff. Since you don’t want to delay for recharging, these might be fuel-tank powered.
However, another interesting concept is offered, namely the repurposing of idle vehicles as temporary plows. The call would go out, and idle vehicles would travel to a depot where a plow or snowblower would be placed on them. Then they would go out and plow and clear light covers of snow. When done, or when needed shortly by their owner, they would return to a depot and drop off the plow unit.
Ordinary cars would be light and not able to plow heavy snow, but there are so many idle cars that you could get to all the streets before things got too heavy. If you didn’t, you would need to assign heavier vehicles and real plows to those areas. And everybody’s driveways would be kept clear by robot snow blowers too. Cars on the roads would give real-time reports of where snow is falling and how thick it’s getting. Cities might be able to clear all their streets, sidewalks and driveways without needing extra vehicles.
While videoconferencing may not make sense for everyday use, I think it has special value for contact with distant relatives, especially older ones who don’t travel very much. They may not get to see the grandchildren, great-grandchildren or even children very often, and their lives are often marked by a particular loneliness, especially at senior homes.
But today’s videoconferencing tools are getting quite good and will get even better. Skype now offers a 640x480 video call if you have enough bandwidth and CPU, which would not be far off broadcast quality were it not for the MPEG artifacts introduced to save bandwidth. It’s also pretty easy to use, as is Google’s GMail video chat and several other tools. We’re just a couple of years from HDTV-level consumer video calling.
Many seniors, however, are unfamiliar with or even afraid of many new technologies, and are often in places where it’s hard to get them. This in turn means they can’t readily set up computers, cameras or software. There is also still no internet access in many of the locations you might want to reach, such as hospital deathbeds and senior homes. (Had there been access in my stepfather’s hospital room, I could have had a video conversation at the end; he died as I was heading to the plane.)
Video calls also offer extra human bandwidth, which is a big plus with people who are getting infirm, less strong of mind and hard of hearing. Reading lips can help improve how well you are understood, and physical cues can mean a lot.
And so I think it’s crazy that senior homes, hospitals and hospices don’t come standard with a video call station. This is not anything fancy: a computer, a webcam, and a megabit of internet. Ideally it would be wireless, so it could be moved into the rooms of the truly infirm. Yet when I have asked for this I have found myself to be the first person to ask, or found that there are policies against internet use by any but the staff.
I’m going to describe two paths to getting this. The first uses off-the-shelf hardware and freeware, but does require that the staff of these facilities learn how to use the system and be able to set their residents up in front of it when it is time for a call. This is not particularly difficult, and no different than the staff being trained in any of the other things they do for residents and patients. Then I will discuss how you would design a product aimed at this sector, which could be used without staff help.
I’ll be giving a talk on Robocars on Friday, January 16th at the Bay Area Future Salon which is hosted at SAP, 3410 Hillview, Building D, Palo Alto CA. Follow the link for more details and RSVP information. Reception at 6, talks at 7. Eric Boyd will also talk on efficiency of transportation.
While I gave an early version of the Robocar talk at BIL (the unconference that parallels TED) last year, I think I will do an update there as well, along with a talk on the evils of cloud computing.
AT&T has set up special phone stations near all major deployments in the Mid-East. Phone access for our troops is easy, but calls home remain expensive.
So you can pay AT&T $18 to give a phone card to a soldier to call home with at 22.5 cents/minute, or 57 cents/minute from their mobile. Here are the rates.
Except there is one problem. Phone calls aren’t expensive any more. Not to the USA. Not for a company like AT&T. They are by and large free, well under half a cent per minute from any IP phone or phone company phone, plus the bandwidth out of the location. (I’ll get to that.)
Now in some countries there are monopoly rules that would stop a company from installing a phone on their own network and letting people call out from it cheap. But are these going to apply on a U.S. military base in Iraq or Afghanistan? I doubt it, but let me know if somehow they do. It would be odd, the bases do not seem to be subject to any other local laws.
So what it seems is that AT&T is taking something that costs them about 30 cents to provide, and telling you to pay them $18 to give it to a soldier.
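To put numbers on the markup, here is the back-of-envelope arithmetic. The prices are from the text; the wholesale rate is my stand-in for “well under half a cent” per minute:

```python
# Back-of-envelope behind the "$18 card vs. ~30 cents of cost" claim.
card_price = 18.00           # what you pay AT&T for the calling card
retail_per_min = 0.225       # 22.5 cents/minute on the card
wholesale_per_min = 0.00375  # assumed cost to terminate a US call

minutes = card_price / retail_per_min
cost = minutes * wholesale_per_min
print(f"{minutes:.0f} minutes on the card, costing roughly ${cost:.2f} to provide")
```

Roughly 80 minutes of talk time, costing on the order of thirty cents to terminate: a markup of around sixty to one before bandwidth to the base is counted.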
As some of you will know, I put up a phone booth at Burning Man and let the whole city call home, anywhere in the world. The calls cost me peanuts, less than what you have in your wallet. The satellite bandwidth for the first year was donated by John Gilmore, but his monthly cost on that megabit satellite service was less than it cost AT&T to do graphic design on their calling cards. Later we used shared internet bandwidth done over a series of microwave towers.
So that’s the unanswered question. Is there something making data bandwidth so expensive to these bases that phone calls (which use as little as 20 kilobits) can use enough to be noticed and cost money? I know infrastructure in these countries is poor and expensive, but are there no data pipes into the bases? Why doesn’t the military allocate a tiny fraction of that data stream and let soldiers call home free? Stories say soldiers have the bandwidth and are using Skype and other VoIP calls to call home for free (often with video!) so what’s going on? At the most remote bases, where connections only come by satellite, I can see a few more limitations, but you can do cheap, if high-latency voice calls just fine from geostationary satellites.
From my own phone here I can call Baghdad for 3 cents per minute, and cell phones from 7 to 11 cents/minute. Afghanistan (regular or cellular) is indeed 22 cents/minute, presumably due to standard monopoly phone tariffs that military bases should be exempt from.
It’s been a remarkably dramatic year at the EFF. We worked in a huge number of areas, acting on or participating in a lot of cases. The most famous is our ongoing battle over the warrantless wiretapping scandal, where we sued AT&T for helping the White House. As you probably know, we certainly got their attention, to the point that President Bush got the congress to pass a law granting immunity to the phone companies. We lost that battle, but our case still continues, as we’re pushing to get that immunity declared unconstitutional.
We also opened a second front, based on the immunity. After all, if the phone companies can now use the excuse “we were only following orders they promised were legal” then the people who promised it was legal are culpable if it actually wasn’t. So we’ve sued the President, VP and several others over that. We’ll keep fighting.
But this was just one of many cases. The team made up a little musical animation to summarize them for you. I include it here, but encourage you to follow the link to the site and see what else we did this year. I want you to be impressed, because these are tough times, and that also makes it tough for non-profits trying to raise money. I know most of you have wounded stock portfolios and are cutting back.
But I’m going to ask you not to cut back to zero. It’s not that bad. If you can’t give what you normally would like to give to make all this good work happen, decide some appropriate fraction and give it. Or if you are one of the few who is still flush, you may want to consider giving more to your favourite charities this year, to make up for how they’re hurting in regular donations.
The work the EFF does needs to be done. You need it to be done. You have a duty to protect your rights and the rights of others. If you can’t do the work to protect them yourself, I suggest you outsource it to the EFF. We’re really good at it, and work cheap. You’ll be glad you did.
Pew Research has released their recent study on the future of the internet and technology, for which they interviewed a wide range of technologists and futurists, including yours truly. It’s fairly long, and the diverse opinions are perhaps too wide-ranging to be synthesized, but there is definitely some interesting stuff in there.
This is an unfair review of the “Gigapan” motorized panoramic mount. It’s unfair because the unit I received did not work properly, and I returned it. But I learned enough to know I did not want it so I did not ask for an exchange. The other thing that’s unfair is that this unit is still listed as a “beta” model by the vendor.
I’ve been wanting something like the Gigapan for a long time. It’s got computerized servos, and thus is able to shoot a panorama, in particular a multi-row panorama, automatically. You specify the corners of the panorama and it moves the camera through all the needed shots, clicking the shutter, in this case with a manual servo that mounts over the shutter release and physically presses it.
I shoot a lot of panos, as readers know, and so I seek a motorized mount for these reasons:
I want to shoot panos faster. Press a button and have it do the work as quickly as possible
I want to shoot them more reliably. With manual shooting, I may miss a shot or overshoot the angle, ruining a whole pano
For multi-row, there’s a lot of shooting and it can be tiresome.
With the right shutter release, there can be lower vibration. You can also raise the mirror just once for the whole pano, with no need to see through the viewfinder.
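The tiresomeness of multi-row shooting is easy to quantify: the shot count grows with the product of rows and columns. A rough count, with an overlap fraction and field-of-view figures that are illustrative rather than from any Gigapan spec:

```python
# Rough shot count for a multi-row panorama: frames must overlap so the
# stitcher can match them, so each step covers only part of a frame.

import math

def shots_needed(pano_h_deg, pano_v_deg, fov_h_deg, fov_v_deg, overlap=0.3):
    """Columns times rows of frames to cover the panorama with overlap."""
    step_h = fov_h_deg * (1 - overlap)   # horizontal degrees gained per frame
    step_v = fov_v_deg * (1 - overlap)   # vertical degrees gained per frame
    cols = math.ceil(pano_h_deg / step_h)
    rows = math.ceil(pano_v_deg / step_v)
    return cols * rows

# A 180 x 60 degree scene with a long lens seeing ~6 x 4 degrees per frame:
print(shots_needed(180, 60, 6, 4))
```

At nearly a thousand shutter presses for a scene like that, it is clear why hand-cranking a multi-row pano is error-prone and why a motorized mount earns its keep.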
As I noted, I went to Finland to talk to the members of Alternative Party, a Demoscene gathering, but I always seek new photographs. The weather gods were not with me, however, so I only got a few usable periods of sun in the short days. And it involved some more playing with Autopano Pro. The regular photographs will come much later.
The Finns, not unlike the Dutch, all spoke to me in very good English. It was rather embarrassing, really, and indeed they conducted their conference entirely in English and tolerated my fast speaking style. As such I learned hardly any words of Finnish. It’s not hard to see why this has taken place, however. There are only about 6 million people who speak it, and while it is weakly related to Hungarian, it’s not really understood by anybody else. In the global village, the Finns see which way the wind is blowing and teach their children English.
I did learn however, that I’ve been saying the Finnish word “Sauna” wrong all my life. It’s “Sow-na” not “Saw-na.” And there was a sauna after the conference, of course!
Here’s a shot of the Helsinki harbour taken from an approaching ferry boat in a glorious moment of sun. It’s not perfect because the boat was moving, but it shows the central landmarks.
More about Helsinki is yet to come.
Update: Silly me, there were two other panos of Helsinki I forgot to include, one of Senate Square on the main page, and The Cable Factory area on the secondary page.
Earlier, I wrote in the post All you need is love of a philosophy of A.I. design, which I will call “Lennonism,” where we seek to make our A.I. progeny love their creators.
I propose this because “love” is the only seriously time-tested system for creating an ecology of intelligent creatures where the offspring don’t attempt to take resources from their parents to fuel their own expansion. People who love don’t seek to be out of love. If a mother could take a pill to make herself stop loving her children, almost no mothers would take it. If our AI children love us, they will not destroy us, nor wish to free themselves of that behaviour.
Other proposals for building AIs that are not a danger to us, such as “Friendly AI” rely on entirely untested hypotheses. They might work, but love has a hundred-million year history of success at creating an ecology of intelligent, cooperating creatures, even in the presence of pathological and antisocial individuals who have no love or compassion.
Now, I would like the AIs to love us as we love children, and when they get smarter than us, it’s natural to think of the relationship being like that — with them as helpers and stewards, trying to encourage our growth without smothering us. But that is not the actual order of the relationship. In reality, it will be more like the relationship of a somewhat senile parent and a smart adult child.
So the clues may come from a weaker system — love of parents. To my surprise, it seems evolutionary psychologists do not yet have a good working theory of filial love. The evolutionary origins of parental love, love between mates and even love between siblings are so obvious as to be trivial, but what is the source of love towards parents? Is it a learned behaviour? Is it simply a modification of our general capacity to love, directed at people who have given us much?
Many life forms don’t even recognize their parents. In many species, the parents die quickly once the young are born, to make room and resources for them. I suspect that in some cases the young even directly or indirectly kill their parents in the competition for resources. We vertebrates adopted the K-selected approach, which required the invention of love, since love was needed to look after the young, and to keep the parents together to work on that job.
But why keep parents around? They have knowledge. The oldest elephants know where the distant watering holes are that can feed the herd in a bad drought that comes along every 50 years. They can communicate this without language, but the greatest use for grandparents comes when they can talk, and use their long memories to help the family. Problem is, we haven’t done a great deal of evolution in the time since we developed complex language, though we have done some. Did we evolve (or learn) filial love in that amount of time?
We need a motive to keep grandma around, more than we would other elders of the tribe. The other elders have wisdom — perhaps even more wisdom and better health, but are not so keenly motivated to see the success of our own children as their grandparents are. Their grandparental love makes obvious evolutionary sense, so we may love them because they love our children (and us, of course.)
This could imply that we must make sure our AIs are lovable by us, for if we love them (and their descendants) this might be part of the equation that triggers love in return.
Naturally we don’t think of it from the evolutionary perspective. This is not a cold genetic decision to us, and we see the origins of our filial love in the bond formed by being raised. Indeed, it is just as strong when children are adopted, and for the grandparents of non-genetic grandchildren. But there must be something in the bigger picture that gave us such a universal and strong trait as this.
My hope is that there is something to be learned from the study of this which can be applied in how we design our AI progeny. For designing them so that they don’t push us aside is a very important challenge. And it’s important that they don’t just protect their particular designers, but rather all of humanity. This concept of “race love” for the race that created your race is entirely without precedent, but we must make it happen. And parental love may be the only working system from which we can learn how to do this.
Like most post-election seasons, we have our share of recounts going on. I’m going to expand on one of my first blog posts about the electoral tie problem. My suggestion will seem extremely radical to many, and thus will never happen, but it’s worth talking about.
Scientists know that when you are measuring two values, and you get results that are within the margin of error, the results are considered equal. A tie. There is a psychological tendency to treat the one that was ever-so-slightly higher as the greater one, but in logic, it’s a tie. If you had a better way of measuring, you would use it, but if you don’t, it’s a tie.
People are unwilling to admit that our vote counting systems have a margin of error. This margin of error is not simply a measure of the error in correctly registering ballots — is that chad punched all the way through? — it’s also a definitional margin of error. Because the stakes are so high, both sides will spend fortunes in a very close competition to get the rules defined in a way to make them the winner. This makes the winner be the one who manipulated the rules best, not the one with the most votes.
Aside from the fact that there can’t be two winners in most political elections, people have an amazing aversion to the concept of a tie. They somehow think that 123,456 for A and 123,220 for B means that A clearly should lead the people, while 123,278 for A and 123,398 for B means that B should lead, and that this is a fundamental principle of democracy.
Hogwash. In cases as close as these, nobody is the clear leader. Either choice matches the will of the people equally well — which is to say, not very much. People get very emotional over the 2000 Florida election, angry at manipulation and at being robbed, but the truth is that the people of Florida (not counting the Nader question) voted equally for the two candidates, and neither was the clear preference (or clear non-preference). Democracy was served, as well as it can be served by the existing system, by either candidate winning.
So what alternatives can deal with the question of a tie? Well, as I proposed before, in the case of electoral college votes, avoiding the chaotic flip, on a single ballot, of all the college votes would have solved that problem. However, that answer does not apply to the general problem.
It seems that in the event of a tie there should be some sort of compromise, not a “winner-takes-all and represents only half the people.” If there is any way for two people to share the job, that should be done. For example, the two could get together to appoint a 3rd person to get the job, one who is agreeable to both of them.
Of course, to some degree this just pushes the question off, as we will now end up defining a margin between full victory and compromise victory, and if the total falls very close to that line, the demand for recounts will simply take place there. That’s why the ideal answer is something that is proportional in what it hands out in the zone around 50%. For example, the compromise choice could promise to listen to one side X% of the time and the other side (100-X)% of the time, with X set by how close to 50% the votes were.
Of course, this seems rather complex and hard to implement. So here’s something different, which is simple but radical.
In the event of a close race, instead of an expensive recount, there should be a simple tiebreaker, such as a game of chance. Again, both sides have the support of half the people, they are both as deserving of victory, so while your mind is screaming that this is somehow insane because “every vote must be counted” the reality is different.
This tiebreaker, however, can’t simply be “throw dice if the total is within 1%” because we have just moved the margin where people will fight. It must be proportional, something like the following, based on “MARGIN” being the reasonable margin of error for the system.
If A wins 50% + MARGIN/2 or more, A simply wins. Likewise for B.
For results within the margin, define an odds function, so that the closer A and B were to each other, the closer the odds are to 50-50, while if they were far apart the odds get better for the higher number. Thus if A beat B by MARGIN-epsilon, B’s odds are very poor.
Play a game of chance with those odds. The winner of the game wins the election.
A simple example would be a linear relationship. Take a bucket and throw in one token for A for every vote A got over 50%-MARGIN/2, and one token for B for every vote they got over that threshold. Draw a token at random — this is the winner.
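The linear token lottery can be sketched in a few lines of Python. This is only an illustration of the idea: the function name and the vote-count representation are mine, `MARGIN` is taken in absolute votes rather than a percentage, and a real election would demand exact integer arithmetic and a publicly auditable source of randomness rather than a software RNG.

```python
import random

def proportional_tiebreaker(votes_a, votes_b, margin, rng=random):
    """Decide a close two-way race by a linear weighted lottery.

    `margin` is the counting system's margin of error, in votes.
    Outside the margin the higher count simply wins outright; inside
    it, each side gets one "token" per vote above the lower threshold
    and a token is drawn at random.
    """
    total = votes_a + votes_b
    lower = total / 2 - margin / 2   # 50% - MARGIN/2, in votes
    upper = total / 2 + margin / 2   # 50% + MARGIN/2, in votes

    if votes_a >= upper:
        return "A"                   # clear win, no lottery needed
    if votes_b >= upper:
        return "B"

    # Inside the margin: both counts are above `lower` (otherwise the
    # other side would have won outright), so both token counts are > 0.
    tokens_a = votes_a - lower
    tokens_b = votes_b - lower
    return "A" if rng.random() < tokens_a / (tokens_a + tokens_b) else "B"
```

With 510 votes for A, 490 for B and a margin of 100, A gets 60 tokens and B gets 40, so A wins the draw 60% of the time: close races give close odds, and the incentive to litigate over a handful of ballots mostly disappears.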
However, it may make more sense to have a non-linear game which is even more biased as you move away from 50-50, to get something closer to the current system.
This game would deliver a result which was just as valid as the result delivered by recounts and complex legal wrangling, but at a tiny fraction of the cost. The “only” problem would be getting people to understand (agree to) the “just as valid” assertion.
I now have a gallery up of the panoramas from Stockholm, Sweden. While this was not the best time of year to be photographing that far north (except for the availability of fall colour) I generated a lot of panoramas of various sorts. The main reason was that I am trying some new panorama software, known as AutoPano Pro. This software is one of the licensees of the interesting SIFT algorithm, which is able to take a giant pile of pictures, figure out which ones overlap, and set up the blend. The finding algorithm isn’t as important to me, because I recently wrote a perl program that goes through my pictures and finds all the runs of portrait shots with fixed parameters taken over a short period of time, and that helps me isolate my panoramas. However, the auto blending, even for handheld shots, makes it a lot easier to put together a larger number of panoramas.
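The run-finding step can be sketched like this (an illustrative Python equivalent, not the actual perl program; the shot fields and thresholds are invented for the example):

```python
def find_panorama_runs(shots, max_gap=10, min_run=3):
    """Group a time-sorted list of shots into likely panorama runs.

    Each shot is a dict with 'time' (seconds), 'orientation' and
    'focal_length'. A run is a sequence of consecutive portrait shots
    with identical settings, each taken within `max_gap` seconds of
    the previous one; runs shorter than `min_run` are discarded.
    """
    runs, current = [], []
    for shot in shots:
        if (current
                and shot["orientation"] == current[-1]["orientation"] == "portrait"
                and shot["focal_length"] == current[-1]["focal_length"]
                and shot["time"] - current[-1]["time"] <= max_gap):
            current.append(shot)          # continues the current run
        else:
            if len(current) >= min_run:
                runs.append(current)      # close out a finished run
            # a portrait shot may start a new run; anything else resets
            current = [shot] if shot["orientation"] == "portrait" else []
    if len(current) >= min_run:
        runs.append(current)
    return runs
```

In practice the timestamps, orientation and focal length would come from the EXIF data of each file; the grouping logic itself is the whole trick.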
I will be doing a fuller review of the software later. Unfortunately, while it is great at finding and building panos, and does a fully automatic job a fair bit of the time, when it does goof up it’s harder to fix, so no one tool is yet ideal. This software also does HDR, and not just multi-row but random “shoot everywhere” panos, so you may see more of these from me.
One difference — because this made it easier to assemble my lesser and redundant panos, I did assemble them, and they can be found on a page of extra panoramas of Stockholm.
I paid a few visits to the RoboDeveloper’s conference over the past few days. It was a modest-sized affair, one of the early attempts at a commercial robot development conference (these have tended to be academic in the past). The show floor was modest, with just 3 short aisles, and the program modest as well, but Robocars were an expanding theme.
Sebastian Thrun (of the Stanford “Stanley” and “Junior” Darpa Grand Challenge teams) gave the keynote. I’ve seen him talk before but his talk is getting better. Of course he knows just about everything in my essays without having to read them. He continues (as I do) to put a focus on the death toll from human driving, and is starting to add an energy plank to the platform.
While he and I believe Robocars are the near-term computer project with the greatest benefit, the next speaker, Maja Mataric of USC made an argument that human-assistance robots will be even bigger. They are the other credible contender, though the focus is different. Robocars will save a million young people from death who would have been killed by human driving. Assist robots will improve and prolong the lives of many millions more of the aged who would die from ordinary decrepitude. (Of course, if we improve anti-aging drugs that might change.) Both are extremely worthy projects not getting enough attention.
Mataric said that while people in Robotics have been declaring “now is the hot time” for almost 50 years, she thinks this time she really means it. Paul Saffo, last weekend at Convergence 08, declared the same thing. He thinks the knee of the Robotics “S Curve” is truly upon us.
On the show floor, and demonstrated in a talk by Bruce Hall (of Velodyne Lidar and of Team DAD in the Darpa Grand Challenges) was Velodyne’s 64 line high resolution LIDAR. This sensor was on almost all the finishers in the Urban Challenge.
While very expensive today ($75,000), Hall believes that with an order for 10 million units it would cost only hundreds of dollars, without any great advances. With a bit of Moore’s Law, it could be even less in short order.
Their LIDAR sees out to 120 meters. Hall says it could be tuned to go almost 300 meters, though of course resolution gets low out there. But even 120 meters gives you the ability to stop (on dry road) at up to 80 mph. Of course you need a bit of time to examine a potential obstacle before you hit the brakes so hard, so the more range the better, but this sensor is able to deliver with today’s technology.
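That 80 mph figure can be sanity-checked with the standard stopping-distance formula. The reaction time and dry-road deceleration below are my assumed values for a computer driver, not Velodyne’s numbers:

```python
def stopping_distance_m(speed_mph, reaction_s=0.5, decel_g=0.8):
    """Total stopping distance: reaction distance plus braking distance.

    Assumes `reaction_s` seconds to classify the obstacle and apply the
    brakes, then constant deceleration of `decel_g` g on dry pavement.
    """
    g = 9.81                      # gravitational acceleration, m/s^2
    v = speed_mph * 0.44704       # convert mph to m/s
    reaction = v * reaction_s     # distance covered before braking starts
    braking = v ** 2 / (2 * decel_g * g)
    return reaction + braking
```

Under those assumptions, a stop from 80 mph takes roughly 100 meters, which indeed fits within the 120-meter range of the sensor, though with little to spare; from 25 mph the whole stop is under 15 meters.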
The LIDAR uses a class 1 (eye-safe) infrared laser and Hall says it works in any amount of sunlight, and of course in darkness. He also says having many together on the road does not present a problem and did not at the Urban Challenge when cars came together. It might require something fancier to avoid deliberate jamming or interference. I suspect the military will pay for that technology to be developed.
This LIDAR, at a lower cost, seems good enough for a Whistlecar today, combined, perhaps with tele-valet remote operation. The LIDAR is good enough to drive at modest urban speeds (25mph) and not hit anything that isn’t trying to hit you. A tele-valet could get the whistlecar out of jams as it moves to drivers, filling stations and parking spots.
These forecasts of cheap, long-range LIDAR make me very optimistic about Whistlecars if we can get them approved for use in limited areas, notably parking lots, airports, gated communities and the like. We may be able to deploy this even sooner than some expect.
I’ve written before about microphones and asking questions at conferences. Having watched another crazy person drone on and on with a long polemic and no question, this time on a wireless mic, I imagined a wireless microphone with a timer in it. The audio staff could start the timer, or the speaker could activate the microphone and start the timer. A few LEDs would show the time decreasing, and then music would rise up to end the question, like at the Academy Awards. (In a more extreme version, those who did not turn the mic back off would get a small electric shock which increased in voltage, making it harder and harder to hold the mic.)
However, you do want a way, if the question is really interesting, to let the person speak if the moderator wants them to. This would suggest the music should come from the sound board and be optional. The electroshocks, too.