Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.
There are many opinions about whether the bailout and stimulus package are a good idea or not. But one thing that I hope everybody agrees is bad is that it teaches the lesson that if you screw up so badly that you hurt the global economy, we’re not going to let you fall. Take huge risks because in the event of catastrophe, the government has no choice but to make it better.
Is there a way to do a bailout that doesn’t end up rewarding, or even saving, the people responsible?
Well, outside of frauds like Madoff's, many of them did not break the law, or did not break it severely. Those who broke the law should get the punishment of the law. A lot of people just looked the other way as horribly bad loans were financed, resold and insured in strange ways. Some people had no idea what they were doing was so dangerous. Some didn't know but should have known. Some suspected but ignored the evidence. And some knew, but were happy if they were getting their share.
I propose taking a small fraction of the bailout and stimulus and using it for “punishment.” It need not be much. With a possible 2 trillion dollars to spend, even 1% would be 20 billion dollars which surely buys a lot of enforcement, and of course stimulates the industries of enforcement. But we don’t need even 1%.
The first step is to define a set of good practices and ethics defining who did wrong. They would be fairly narrow. They would not catch the people who didn’t know they were doing something wrong and were not at the level that they should have known. This is not a simple task but I think it can be done.
The next step is to say "no bailout or stimulus money for any company which employs or significantly compensates, above minimum wage, a person responsible for the collapse." They lose their jobs. If millions are to be out of work, start with the people responsible. The most adaptable of the laid off can take some of their jobs. If the government can fire all the air traffic controllers without catastrophe, I suspect a lot of bankers can be fired too. Those fired would get only a minimal dole, enough to survive, but not well. They will be incented to find other jobs, in industries not getting bailout and stimulus money. Or they can work for minimum wage in their old jobs.
Culpability will run up, as well. While there will still be standards of proof, and a presumption of innocence, if a group of people all working for one person are guilty, that person is going to have to work hard to convince a jury they had no knowledge of what went on underneath and that this was as it should be.
So yes, this means the CEOs and other top executives of most of the banks and brokerages involved will be out of work. I think they can handle it. If they are really civic minded, they can keep their jobs for minimum wage, no options, no bonus.
Now this is not my favoured plan. I think people who screw up should, wherever possible, be allowed to fail, and they and the stockholders will pay the price. If executives mislead stockholders, they should be subject to the rules. But if we have to not do that, somehow a message must get out that if you do something like this, you’re going down.
Note that I also expect, and hope, that many of these people have been fired already. But some of them haven’t. Some got fat bonuses instead.
There’s been some debate in the comments here about whether I and those like me are being far too picky about technical and plot elements in Battlestar Galactica. It got meaty enough that I wanted to summarize some thoughts about the nature of quality SF, and the reasons why it is important. BSG is quality SF, and it set out to be, so I hold it to a higher bar. When I criticise it for where it sometimes drops the ball, this is not the criticism of disdain, but of respect.
I wrote earlier about the nature of hard SF. It is traditionally hard to define, and people never fully agree about what it is, and what SF is in general. I don’t expect this essay to resolve that.
Broadly, SF is to me fiction which tries to explore the consequences of science, technology and the future. All fiction asks “what if?” but in SF, the “what if?” is often about the setting, and in particular the technology of the setting, and not simply about the characters. Hard SF makes a dedication to not break the laws of physics and other important principles of science while doing so. Fantasy, on the other hand, is free to set up any rules it likes, though all but the worst fantasy feels obligated to stick to those rules and remain consistent.
Hard SF, however, has another association in people’s minds. Many feel that hard SF has to focus on the science and technology. It is a common criticism of hard SF that it spends so much time on the setting that the characters and story suffer. In some cases they suffer completely; stories in Analog Science Fiction are notorious for this, and give hard SF a bad name.
Perhaps because of that name, Ron Moore declared that he would make BSG Naturalistic Science Fiction. He declared that he wanted to follow the rules of science, as hard SF does, but as you would expect in a TV show, character and story were still of paramount importance. His credo also described many of the tropes of TV SF he would avoid, including time travel, aliens and stock stereotyped characters.
I am all for this. While hard SF that puts its focus on the technology makes great sense in a Greg Egan novel, it doesn’t make sense in a drama. TV and movies don’t have the time to do it well, nor the audience that seeks this.
However, staying within the laws of physics has a lot of merit. I believe that it can be very good for a story if the writer is constrained, and can't simply make up anything they desire. Mystery writers don't feel limited because their characters can't fly or read minds. In fact, it would ruin most of their mystery plots if they could. Staying within the rules — rules you didn't set up — can be harder to do, but this often is good, not bad. This is particularly true for the laws of science, because they are real and logical. So often, writers who want to break the rules end up breaking the rules of logic. Their stories don't make any sense, regardless of questions of science. When big enough, we call these logical flaws plot holes. Sticking to reality actually helps reduce them. It also keeps the audience happy. Only a small fraction of the audience may understand enough science to know that something is bogus, but you never know how many there are, and they are often the smarter and more influential members of the audience.
I lament the poor quality of realism in TV SF. Most shows do an absolutely dreadful job. I lament this because they are not doing that bad job deliberately. They are just careless. For fees that would be a pittance to any Hollywood budget, they could make good use of a science and SF advisor. (I recommend both. The SF advisor will know more about drama and fiction, and also will know what's already been done, or done to death, in other SF.) Good use doesn't mean always doing what they say. While I do think it is good to be constrained, I recognize the right of creators to decide they do want to break the rules. I just want them to be aware that they are breaking the rules. I want them to have decided "I need to do this to tell the story I am telling" and not done it because they don't care or don't think the audience will care.
There does not have to be much of a trade-off between doing a good, realistic, consistent story and having good drama and characters. This is obviously true. Most non-genre fiction happily stays within the laws of reality. (Well, not action movies, but that’s another story.)
Why it’s important
My demand for realism is partly so I get a better, more consistent story without nagging errors distracting me from it. But there is a bigger concern.
TV and movie SF are important. They are the type of SF that most of the world will see. They are what will educate the public about many of the most important issues in science and technology, and these are some of the most important issues of the day. More people will watch even the cable-channel-rated Battlestar Galactica than read the most important novels in the field.
Because BSG is good, it will become a reference point for people’s debates about things like AI and robots, religion and spirituality in AIs and many other questions. This happens in two ways. First, popular SF allows you to explain a concept to an audience quickly. If I want to talk about a virtual reality where everybody is in a tank while they live in a synthetic world, I can mention The Matrix and the audience immediately has some sense of what I am talking about. Because of the flaws in The Matrix I may need to explain the differences between that and what I want to describe, but it’s still easier.
Secondly, people will have developed attitudes about what things mean from the movies. HAL-9000 from 2001 formed a lot of public opinion on AIs. Few get into a debate about robots without bringing up Asimov or, at worst, Star Wars.
If the popular stories get it wrong, then the public starts with a wrong impression. Because so much TV SF is utter crap, a lot of the public has really crappy ideas about various issues in science and technology. The more we can correct this, the better. So much TV SF comes from people who don’t really even care that they are doing SF. They do it because they can have fancy special effects, or know it will reach a certain number of fans. They have no excuse, though, for not trying to make it better.
BSG excited me because it set a high bar, and promised realism. And in a lot of ways it has delivered. Because it has FTL drives, it would not meet the hard SF fan's standard, but I understand how you are not going to do an interstellar chase show with sublight travel that would hold a TV audience. And I also know that Moore, the producer, knows this and made a conscious decision to break the rules. There are several other places where he did this.
This was good because the original show, which I watched as an 18-year-old, was dreadful. It had no concept of the geometry of space. TV shows and movies are notoriously terrible at this, but this was in the lower part of the spectrum. They just arrived at the planet of the week when the writers wanted them to. And it had this nonsense idea that the Earth could be a colony of ancient aliens. That pernicious idea, the "Ark" theory, is solidly debunked thanks to the fact that creationists keep bringing it up, but it does no good for SF to do anything to encourage it. BSG seemed to be ready to fix all these things. Yet since there are hints that the Ark question may not be addressed, I am disappointed on that count.
To some extent, the criticism that some readers have made — that too much attention to detail and demand for perfection can ruin the story for you — is fair. You do have to employ some suspension of disbelief to enjoy most SF. Even rule-following hard SF usually invents something new and magical that has yet to be invented. It might be possible, but the writer has no actual clue as to how. You just accept it and enjoy the story. Perhaps I do myself a disservice by getting bothered by minor nits. There are others who have it worse than I do, at least. But I'm not a professional TV science advisor. Perhaps I could be one, but for now, if I can see it, I think it means that they could have seen it. And I always enjoy a show more when it's obvious how much they care about the details. And so does everybody else, even when they don't know it. Attention to detail creates a sense of depth which enhances a work even if you never explore the depth. You know it's there. You feel it, and the work becomes stronger and more relevant.
Now some of the criticisms I am making here are not about science or niggling technical details. Some of the recent trends, I think, are errors of story and character. Of course, you're never going to be in complete agreement with a writer about where a story or character should go. But if characters become inconsistent, it hurts the story as much as, or more than, when the setting becomes inconsistent.
But still, after all this, let's see far more shows like Battlestar Galactica 2003, and fewer like Battlestar Galactica 1978, and I'll still be happy.
Product recalls have been around for a while. You get a notice in the mail. You either go into a dealer at some point, any point, for service, or you swap the product via the mail. Nicer recalls mail you a new product first and then you send in the old one, or sign a form saying you destroyed it. All well and good. Some recalls are done as “hidden warranties.” They are never announced, but if you go into the dealer with a problem they just fix it for free, long after the regular warranty, or fix it while working on something else. These usually are for items that don’t involve safety or high liability.
Today I had my first run-in with a recall of a connected electronic product. I purchased an "EyeFi" card for my sweetie for Valentine's Day. This is an SD memory card with a wifi transmitter in it. You take pictures, and it stores them until it encounters a wifi network it knows. It then uploads the photos to your computer or to photo sharing sites. All sounds very nice.
When she put in the card and tried to initialize it, up popped a screen: "This card has a defect. Please give us your address and we'll mail you a new one, and you can mail back the old one, and we'll give you a credit in our store for your trouble." All fine, but the software refused to let her register and use the product. We couldn't even use the card for a few days to try it out (knowing it might lose photos). What if I wanted to try it out to see if I was going to return it to the store? No luck. I could return it to the store as-is, but that's work, and I might just get another one on the recall list.
This shows us the new dimension of the electronic recall. The product was remotely disabled to avoid liability for the company. We had no option to say, “Let us use the card until the new one arrives, we agree that it might fail or lose pictures.” For people who already had the card, I don’t know if it shut them down (possibly leaving them with no card) or let them continue with it. You have to agree on the form that you will not use the card any more.
This can really put a damper on a gift, when it refuses to even let you do a test the day you get it.
With electronic recall, all instances of a product can be shut down. This is similar to problems that people have had with automatic "upgrades" that actually remove features (like adding more DRM) or which undo the jailbreaking of your iPhone. You don't own the product any more. Companies are very worried about liability. They will "do the safe thing," which is shut their product down rather than let you take a risk. With other recalls, things happened on your schedule. You were even able to just decide not to do the recall. The company showed it had tried its best to convince you to do it, and could feel satisfied for having tried.
This is one of the risks I list in my essays on robocars. If a software flaw is found in a robocar (or any other product with physical risk) there will be pressure to “recall” the software and shut down people’s cars. Perhaps in extreme cases while they are driving on the street! The liability of being able to shut down the cars and not doing so once you are aware of a risk could result in huge punitive damages under the current legal system. So you play it safe.
But if people find their car shutting down because of some very slight risk, they will start wondering if they even want a car that can do that. Or even a memory card. Only with public pressure will we get the right to say, “I will take my own responsibility. You’ve informed me, I will decide when to take the product offline to get it fixed.”
Just returned from BIL, an unconference which has, for the last two years, taken place opposite TED, the very expensive, very exclusive conference that you probably read a lot about this week. BIL, like many unconferences is free, and self-organized. Speakers volunteer, often proposing talks right at the conference. Everybody is expected to pitch in.
I've been very excited about this movement since I attended the first open unconference, known as BarCamp. The first BarCamp, in Palo Alto, was a reaction to an invite-only free unconference known as Foo Camp, which I had also attended but was not attending that year. That first camp was a great success, with a fun conference coming together in days, with sponsors buying food and offering space. The second BarCamp, in DC, was a complete failure, but the movement caught on and it seems there is a BarCamp somewhere in the world every week.
This year BIL was bigger, and tried some new approaches. In particular, a social networking site was used to sign up, where people could propose talks and then vote for the ones they liked. While it is not as ad-hoc as the originals, with the board created at the start of the conference, I like this method a lot. The array of sessions at a completely ad-hoc conference can be very uneven in quality, and assignment to rooms is up to a chaotic procedure that may put an unpopular talk in a big room while a small room is packed to the gills. (This even happens at fully curated conferences.)
Pre-voting allowed better allocation of rooms, and in theory better scheduling to avoid conflicts (i.e. noting that people want to go to two talks and not setting them against one another). BIL also had some spare slots for people who just showed up with a talk, to keep that original flavour.
Recently, some prosecutors, in efforts to crack down on drunk driving, are pushing for murder convictions. This is happening in the case of really blatant disregard on the part of the drunk drivers — people with multiple DUIs getting smashed, going out, and killing.
In watching coverage of this trend, over and over again I heard it said that the killer’s sin was “getting behind the wheel when drunk.” And that is in fact what we punish with DUI laws. Because so many people have done it (without killing anybody) there is surprising sympathy for the drunk drivers — there but for the grace of god go I.
But is that the right sin? That decision is always made once the person has impaired judgement. Something to me seems wrong about punishing a decision made when one has lost the ability to make good decisions. While I don’t drink, and have no sympathy for the actions of drunks, I think the real transgression comes much earlier.
The real transgression is allowing yourself to get impaired in circumstances where you would then be sufficiently likely to make deadly wrong decisions. A simple example of this would be having enough alcohol to move from sober to drunk when you have your car with you and plan to drive home. Of course, many people in that situation will do the right thing, and still be clear enough to know they should get a cab home, and then come back to pick up their car later. But of course, many don’t. And worse, there is often an incentive not to — such as paying for two taxi fares, and dealing with the car’s location becoming a no-parking zone in the morning.
I believe people should be punished for risky decisions they make while sober, more so than ones they make while drunk. It should be expected that people will make poor decisions and take unacceptable risks when drunk. That is what impairment means. It is the decisions they make when sober, when they know right from wrong, that the law should punish.
Now let me describe how this might work in theory, and then discuss the harder question of making it work in practice.
The simplest way to behave well is to never take your car to go drinking. That car parked outside is too much temptation once you are drunk. And this is what the designated driver concept is about. To get more specific, you must not take the drinks that make you impaired without first, while still not so impaired, making plans to get home so you have no temptation to drive your car. This can include arranging a ride with a sober person, pre-contracting with a taxi company for later pickup, or putting your car keys into escrow.
Car key escrow, for example, would involve giving the keys to a friend or the bartender, who will not return them to you until you are sober. A high-tech version might be a simple lockbox. You could put your keys in the lockbox (provided by a responsible bar) and only get them out by blowing into a breathalyzer on the box and testing below the limit. The act of escrow, taken while sober, makes you legal. The act of drinking beyond your limit without making alternate plans is the immoral act. Having any recorded plan for getting home — cab, designated driver, transit ticket, keys in escrow — is enough to be acting morally.
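The lockbox logic described above is simple enough to sketch in code. This is a hypothetical illustration, not a real product; the limit value and interface are assumptions.

```python
LEGAL_LIMIT_BAC = 0.08  # assumed jurisdiction limit; varies in practice

class KeyLockbox:
    """A bar-provided lockbox that releases car keys only to a sober owner."""

    def __init__(self):
        self.holding_keys = False

    def deposit(self):
        # Depositing the keys while still sober is the legally meaningful act
        self.holding_keys = True

    def try_release(self, breath_bac):
        """Unlock only if the breath sample reads under the limit."""
        if self.holding_keys and breath_bac < LEGAL_LIMIT_BAC:
            self.holding_keys = False
            return True
        return False
```

A real box would also need tamper resistance and a calibrated sensor, but the point is that the sober-time decision (deposit) is separated from the drunk-time one (release).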
Now how to enforce this? Well, we can’t really have police coming into bars, and asking all patrons who are beyond the limit to prove they made alternate plans. Police could check inebriated people leaving bars, but don’t typically have the time for this. If this sort of rule is to be enforced, it would have to be through legal liability on those who serve alcohol (bars, party hosts) to assure none of their guests go beyond the limit without plans, or at least the easy ability to make plans. (Cheap key lockboxes might help in this area.)
And of course, anybody who did drive drunk would be guilty since they obviously didn’t make adequate plans. This approach would simply expand the culpable act to the broader situation of having deliberately (while sober) put yourself in a situation where this has a real chance of taking place.
There are problems, of course. Often "guests" come to parties uninvited and get drunk. We've all had a fairly drunk person we barely know at a party. Or we may not know the drinking habits of the friends we do invite. Bartenders deal with people arriving who already got sauced at another bar and have just their last few drinks at the second bar before driving. We want people to act responsibly, not have to go overboard and be paranoid about each guest. Ideally we want the full weight of the law to fall on the sober person who got drunk while his or her car was outside.
One unconnected option might make sense. Parking laws might be changed to let you get out of certain kinds of parking tickets if you can show proof you took an alternate way home because you are drunk. Taxi drivers who take drunks home could issue such a dated receipt. Friends could testify under oath that they drove you home because you were drunk. This might make people more willing to leave cars behind in certain areas. It would have to be clear what those areas were (for example, parking that was free at night but becomes metered or prohibited at 7am) so that the parking does not become a problem. Still the extra parked cars are a better thing to have than cars with drunks behind the wheel.
The thesis of the essay is simple. The quest for flying cars has always had to deal with the very difficult compromise between a vehicle that flies and one that drives. It’s just really hard to make one vehicle to do both.
The robocar (or rather robotaxi) solution is to not try to do both in one vehicle, but adapt to the idea that you can hire a robotaxi to zip you right to your plane, and another one will be waiting on the taxiway when you land for a quick transition. It's not the "take off from your house" vision, though. Of course, independently, the planes themselves could become computer-flown, as is almost the case today. If this happened, and the planes were able to do short takeoff and landing, and do it quietly (perhaps with hybrid engines which use battery just for takeoff and landing), the world might accommodate airstrips in much more convenient places, even old stretches of road that don't have overhead wires.
And don’t forget, I’ll be giving a robocar talk at BIL in Long Beach this weekend.
As I move to get more paper out of my life, one thing I’m throwing away with more confidence is manuals. It’s pretty frequent that I can do a search for product model numbers or other things on a manual, and find a place to download the PDF. Then I can toss the manual. I need to download the PDF, because the company might die and their web site might go away.
I would like to make this even easier. For starters, it would be nice if the UPC database (UPC are the bar codes found on all retail products) would also offer a link to getting all manuals and paper that come with a product. I would then be able to just photograph the bar codes of all my products with my phone or camera, and cause automatic download or escrow of all manuals. Perhaps a symbol next to the UPC could tell me this is guaranteed to work.
It would be even better if companies escrowed the manuals, which is to say paid a one-time fee to a trustable company which would promise to keep the documents online forever. This company must be backed by a very solid company itself, perhaps a consortium of all the major vendors with a pact that if any of them go under, the rest take up the slack of maintaining the site.
In fact, all free, public documents should have a code on them that can be turned into a URL where I can fetch the document, as PDF, HTML or even MSWord. Any attempt to scan such a document would pick up this code and know it doesn't have to scan the rest unless it is marked up. For books, we should key off the ISBN as well as the UPC. Eventually one of the newer, compact 2-D "barcodes" could be used to code a number to find the docs.
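A minimal sketch of the code-to-URL idea: validate the UPC-A check digit (so a photographed barcode can be sanity-checked before lookup), then map the code to a document URL. The registry host and path scheme here are invented for illustration; no such service exists.

```python
def upc_check_digit_ok(upc: str) -> bool:
    """Validate the check digit of a 12-digit UPC-A code."""
    if len(upc) != 12 or not upc.isdigit():
        return False
    digits = [int(c) for c in upc]
    # Odd positions (1st, 3rd, ...) are tripled, even positions added as-is
    total = 3 * sum(digits[0:11:2]) + sum(digits[1:11:2])
    return (10 - total % 10) % 10 == digits[11]

def manual_url(code: str, registry: str = "https://manuals.example.org") -> str:
    """Map a product code to a hypothetical escrowed-manual URL.
    Books key off the ISBN; retail products off the 12-digit UPC."""
    kind = "upc" if len(code) == 12 else "isbn"
    return f"{registry}/{kind}/{code}.pdf"
```

A phone app could then batch-photograph barcodes and fetch every manual in the house in one pass.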
Of course, many products are now coming without manuals at all, and that’s largely fine with me.
Here's a nice story about the Kiva warehouse delivery robot now being used by major retailers like The Gap. Factory floor robots have been around for some time, and the field even has a name, "automated vehicle guidance systems," but these newer deliverbots kick it up a notch, picking up shelves and bringing them to a central area for distribution, finding their way on their own with sensors.
We're also seeing more hospital deliverbots, which — very slowly — take things around hospitals, roving the same corridors as the people. When a robot goes very slowly, people are willing to allow it to travel with them. The technological question is: how hard is it to raise that speed and stay safe, and to make people believe that they are safe?
Some applications care little about speed, and the slow robots already have a market there. We would not tolerate super slow robots on our streets, getting in the way of our cars, regularly.
One answer may be "extremely deferential" behaviour. Consider a deliverbot trundling down a low-volume street at 10 km/h (6 mph). It would be constantly checking for a vehicle coming up behind it, using radar, lasers and cameras. With LIDAR it would get about 90 meters of warning, with other sensors perhaps more. Say it detects a car coming behind it at 50 km/h (30 mph). It has 8 seconds, during which it will cover 22 meters. If it's a small robot — and we might limit the robots to make them small — odds are reasonable that it might find a place in which to duck, such as a driveway. These robots aren't parking, so they can move into driveway entrances, fire hydrant locations and many small non-parking spaces along the road.
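The arithmetic behind those numbers is worth making explicit: the warning time is the sensor gap divided by the closing speed, and that time budget determines how far the robot can travel while looking for a spot to duck into.

```python
def warning_budget(gap_m: float, car_kmh: float, robot_kmh: float):
    """Return (seconds of warning, meters the robot can cover) when a
    faster car is first detected gap_m behind the robot."""
    closing_ms = (car_kmh - robot_kmh) / 3.6   # closing speed in m/s
    warning_s = gap_m / closing_ms
    robot_travel_m = (robot_kmh / 3.6) * warning_s
    return warning_s, robot_travel_m

# The scenario from the text: 90 m LIDAR range, 50 km/h car, 10 km/h robot
t, d = warning_budget(90, 50, 10)
print(f"{t:.1f} s of warning, {d:.1f} m of maneuvering room")
```

This yields about 8 seconds and 22 meters, matching the figures above; a longer-range sensor or a slower street buys proportionally more room to hide.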
Indeed, it need not find a place to pause on its own side of the road. If there is no immediate oncoming traffic, it could deke to the other side of the road for a hiding spot. Ideally it would be clever and not pick a driveway which has a moving car, or even a car whose engine the sensors reveal is running.
Indeed, it’s not unreasonable for the deliverbot to simply move into the oncoming lane if it is clear, to let the human vehicle pass. This is a bit disconcerting to our usual sense of how things work — slow vehicles don’t move to the left for us to pass them — but there is no reason it could not be true. This is on urban streets where stopped vehicles, turning vehicles and even pedestrians are found in the middle of the street all the time, and drivers have plenty of time to stop for them. Nobody is going to hit such a vehicle, just get annoyed by it.
For the driver, they would see various slow deliverbots on the road ahead. But in all but unusual circumstances, by the time they got close to those robots, they would have pulled out of the lane, to pause in driveway entrances. The main risk is the driver might start to depend on this, and plow right into such a vehicle (at slow speeds) if there was no place for it to pull over. A deliverbot that doesn’t immediately see a place to pull over would probably start blinking a very obvious flashing light on the back, increasing the warnings if the vehicle does not slow down. It might also speed up a little bit, if safe to do so, to reach a spot to pause.
Why is this interesting? I think we’re much closer to building a vehicle that could go 10 kph on slow city streets, using LIDAR. If the vehicle is small and doesn’t weigh a great deal, it simply won’t be capable of doing much damage to people by hitting them. It could even be equipped with airbags on the outside should this ever become unavoidable. The main problems would be people hitting them, or being annoyed by them.
Once accepted, as safety technology improves, the speed can improve — eventually to a level where they don’t get in the way, other than in the sense that any other vehicle is in your way. There will always be those who want to go faster, and so the deference approach will always be useful.
It was taken with the gigapan imager that I gave a negative review to last month. You can see why I want a better version of this imager. The shot is a great recording of history, as you can see the faces of almost all the dignitaries and high rollers who were there. It has a few stitch errors which would be a lot of work to remove by hand, so I don’t blame the creator for doing just one 5 hour automated pass. When such an imager becomes available for quality DSLRs, the image will be even better — this one faces the limitations of the G10. And due to the long time required to shoot any panorama of this scope, it looks like only some of the crowd are applauding, while others are bored.
I would love to see a shot of the ordinary folks in the far-away crowd too, but he wasn’t in range to get that, and it would have needed a longer lens. A computer might be able to count the faces then, or even tell you their racial mix. The made-the-list area probably has more black faces than ever before, but still a small minority.
A few years in the future, every event will be captured at this resolution, until we start having privacy worries about it.
In the early days of microprocessors, people selling home computers tried to come up with reasons to have them in the home. The real reason you got one was hobby computing, but the companies wanted to push other purposes. A famous one was use in the kitchen. The computer could store your recipe file and, wonder of wonders, could change the amounts of the ingredients based on how many servings you wanted to make.
This never caught on, but computers have come a long way. But still, I mostly see nonsense applications promoted. For example, boosters of RFID tell us that our fridges will be able to track when things went in the fridge, and when it’s time to buy more milk. We should give up huge amounts of privacy to figure out when to order more milk?
With that track record, I should stay away from the area, but let me propose some interesting approaches in the kitchen.
The cooking area should have a screen, of course. Screens are already in the kitchen to watch TV. While you could (and would) put digital recipes up on the screen, I imagine going further, and having TV cooking shows, where you watch a chef prepare a dish. You would be able to pause, rewind and do everything that digital video does, but the show would also come along with encoded instructions tagged to points in the video. When the recipe calls for cooking for 5 minutes, the computer would start appropriate timers.
The computer should have a speech interface, and a good one, allowing you to call out for timers, and to name ingredients and temperatures. More on that later.
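As a sketch of how those time-tagged instructions might work, here is a toy cue format in Python. The cue fields, timestamps and timer lengths are all invented for illustration; a real cooking-show format would need a richer encoding:

```python
# A minimal sketch, assuming a hypothetical player that reports playback
# position in seconds. Each cue carries the instruction to display and an
# optional timer (in minutes) the kitchen computer should start.

def due_cues(cues, position_s):
    """Return (instruction, timer_minutes) pairs whose timestamp has passed."""
    fired = []
    for cue in sorted(cues, key=lambda c: c["at"]):
        if cue["at"] <= position_s:
            fired.append((cue["text"], cue.get("timer")))
    return fired

# Invented example cues for a show: dice onions at 0:30, simmer at 3:30.
show_cues = [
    {"at": 30, "text": "Dice the onions"},
    {"at": 210, "text": "Simmer the sauce", "timer": 5},
]
```

Pausing or rewinding the show just changes `position_s`, so the cues stay in sync with the video however you scrub it.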
The first thing I would like to see is smart, digital wireless scales in a lot of places. A general one on the counter, of course, but quite possibly also built into the rack above the burner which holds the pot. You can get scales built into spoons and scoops now, and they could be Bluetooth.
Last week, I wrote about issues in providing videoconferencing to the aged. Later, I refined a new interface plan discussed in the comments. I think this would be a very good way for tools like Skype to work, so I am making an independent posting, and will encourage Skype, Google video chat (and others) to follow this approach.
First, it should be possible to reliably attach a PSTN phone number to an online identity. This can be done by the person who owns the number (with a security trick) or by the person who wants to call them.
If a user goes to their tool — quite possibly through a USB handset with a dial pad, or through a dedicated IP phone — the system should check if this number belongs to a user, and if that user is online. If the user is online, then just make the call through the VoIP system.
If the user is not online, make the call through the PSTN, i.e. SkypeOut. If/when the called party answers, the caller can say, “I’m calling you with Skype, are you near your computer?”
The called party can then go to their computer and one of two things can happen.
The moment they sign on to Skype, it can notice that they have this SkypeOut call underway, because it gets a message from the buddy who called via SkypeOut. Immediately it pops up a dialog box asking to OK transfer of the call. If they approve, the audio will switch to pure Skype, and when that is good, the phone will be hung up.
Failing that, if the user logs on and attempts a Skype call to the contact who is on the PSTN call with them, Skype should notice that at the other end, and answer the new call by connecting it to the PSTN call.
When connecting the calls together, there should be a brief bridge when both the PSTN phone and computer are connected, and then later (or upon hangup) the PSTN leg would be terminated. However, for those who don’t have a cordless phone or a phone by the computer, it would be nice if they could just hang up their PSTN call, go to the computer, and join the conversation. To facilitate that, a call hung up within the past 30 seconds should still enable this quick re-setup.
The experience for the user who places the call (possibly a senior) is very simple. Place a call. Mention it is on the computer. At some point, without having to do anything, the audio switches and is now higher quality, and video can be started — automatically if the two buddies are set up for automatic video.
For the receiving user, the interface is pretty simple. Go to the computer, log on, possibly click on a buddy or approval box. Then hang up the regular phone (or possibly have already hung it up not too long ago).
To encourage this, Skype could sell a SkypeOut plan that allows an unlimited number of very short PSTN calls that are followed by a transfer to VoIP for a low monthly fee, like $1/month.
This would allow a very simple UI in the senior home. An ordinary telephone handset sits next to the computer. You pick it up, dial a number, your grandchild answers, and at some point into the communication the video call begins on the screen. This is as close to the familiar interface as we can get.
Now, as for associating numbers and buddies. If this is done by the caller, there is no security aspect. However, it’s much better if it can be done (just once) by the target. To do that, you would declare a phone number and the system would call you. The voice on the other end would ask you to enter, as touch tones, the code you see on your screen. This would confirm ownership of that number.
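The ownership check just described is a standard challenge-response. A rough sketch of the service side, with the actual call-placing and DTMF-capture machinery left out as hypothetical:

```python
# A sketch of the number-ownership check: show a short code on screen, call
# the claimed number, and compare the touch tones the user keys in. Placing
# the call and collecting DTMF digits are outside this sketch.
import hmac
import secrets

def new_challenge(n_digits=6):
    """Generate the random code the user will see on their screen."""
    return "".join(secrets.choice("0123456789") for _ in range(n_digits))

def verify(expected, dtmf_entered):
    """Compare in constant time so a timing attack learns nothing."""
    return hmac.compare_digest(expected, dtmf_entered)
```

If the digits keyed in over the phone match the code on screen, whoever answered that number is sitting at that account's screen, which is the association we wanted to prove.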
The “hang up first” interface question is a bit more complex. I do like the idea of having it be very automated. You sign in (or return to your computer that is already signed in) and bang — you are in the call. However, if you hung up the phone a while ago you might have gone to your computer for other purposes than to continue the call. The caller might have a dialog saying, “The called party hung up. Are you waiting for them to go to their computer?” And if you click yes, then do an automatic start. Otherwise make it manual.
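The transfer flow described above amounts to a small state machine. Here is a toy version in Python; the state and event names are my own invention for illustration, not anything from Skype's actual protocol:

```python
# A rough sketch of the call-transfer states described above. Names are
# invented; this is not Skype's API.

TRANSITIONS = {
    # (current state, event) -> next state
    ("idle", "dial"): "pstn_call",                 # caller dials via SkypeOut
    ("pstn_call", "callee_signs_on"): "bridged",   # both legs briefly live
    ("pstn_call", "callee_hangs_up"): "grace",     # callee walks to computer
    ("grace", "callee_signs_on"): "voip_call",     # within the re-setup window
    ("grace", "timeout"): "idle",                  # 30-second window expired
    ("bridged", "pstn_leg_dropped"): "voip_call",  # audio is now pure VoIP
}

def next_state(state, event):
    """Follow a transition, staying put on any unrecognized event."""
    return TRANSITIONS.get((state, event), state)
```

The point of writing it out this way is that both paths, the seamless bridge and the hang-up-first grace period, converge on the same `voip_call` state, so the rest of the client code doesn't care which path was taken.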
Some of you may know that I started a sub-blog for my thoughts on my favourite SF TV show, Battlestar Galactica. This sub-blog was dormant while the show was off the air, but it’s started up again with new analysis as the first new episode of the final 10 (or 12) episodes airs tonight. (I will be missing watching it near-live as I will be giving a talk tonight on Robocars at the Future Salon in Palo Alto.) Reports are that one big mystery — the last Cylon — is revealed tonight.
So if you watch Battlestar Galactica, you may want to subscribe to the feed for the Battlestar Galactica Analysis Blog right here on this site. And I’ll go out on a limb and promote my two top candidates for the mystery Cylon.
I’ve written about “data hosting/data deposit box” as an alternative to “cloud computing.” Cloud computing is timesharing — we run our software and hold our data on remote computers, and connect to them from terminals. It’s a swing back from personal computing, where you had your own computer, and it erases the 4th amendment by putting our data in the hands of others.
Lately, the more cloud computing applications I use, the more I realize one other benefit that data hosting could provide as an architecture. Sometimes the cloud apps I use are slow. It may be because of bandwidth to them, or it may simply be because they are overloaded. One of the advantages of cloud computing and timesharing is that it is indeed cheaper to buy a mainframe cluster and have many people share it than to have a computer for everybody, because those computers sit idle most of the time.
But when I want a desktop application to go faster, I can just buy a faster computer. And I often have. But I can’t make Facebook faster that way. Right now there’s no way I can do it. If it weren’t free, I could complain, and perhaps pay for a larger share, though that’s harder to solve with bandwidth.
In the data hosting approach, the user pays for the data host. That data host would usually be on their ISP’s network, or perhaps (with suitable virtual machine sandboxing) it might be the computer on their desk that has all those spare cycles. You would always get good bandwidth to it for the high-bandwidth user interface stuff. And you could pay to get more CPU if you need more CPU. That can still be efficient, in that you could possibly be in a cloud of virtual machines on a big mainframe cluster at your ISP. The difference is, it’s close to you, and under your control. You own it.
There’s also no reason you couldn’t allow applications that have some parallelism to them to try to use multiple hosts for high-CPU projects. Your own PC might well be enough for most requests, but perhaps some extra CPU would be called for from time to time, as long as there is bandwidth enough to send the temporary task (or sub-tasks that don’t require sending a lot of data along with them.)
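The "use your own host first, spill over when you need extra CPU" idea can be sketched as a simple greedy scheduler. Host names and capacity numbers here are invented for illustration:

```python
# A toy sketch of spilling parallelizable work over from your own data host
# to rented capacity. Capacities and task costs are in arbitrary CPU units.

def assign(tasks, hosts):
    """Greedily place each task on the first host with spare capacity.

    hosts is ordered by preference: your own data host first, the ISP's
    cluster (or other rented capacity) after it.
    """
    placement = []
    for cost in tasks:
        for host in hosts:
            if host["free"] >= cost:
                host["free"] -= cost
                placement.append(host["name"])
                break
        else:
            placement.append(None)  # nowhere to run it right now
    return placement
```

Most requests land on your own machine; only the occasional heavy burst pays for (and waits on) bandwidth to somebody else's CPU, which is why it matters that sub-tasks not need to drag a lot of data along with them.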
And, as noted before, since the users own the infrastructure, this allows new, innovative free applications to spring up because they don’t have to buy their infrastructure. You can be the next youtube, eating that much bandwidth, with full scalability, without spending much on bandwidth at all.
I just got my new Canon 5D Mark II. (Let me know if you want to buy some of my old gear, see below…) This camera is creating a lot of attention because of several ground-breaking features. First, it’s 22MP full-frame. Second, it shoots at up to 25,600 ISO — 8 stops faster than the 100 ISO that was standard not so long ago, and is still the approximate speed of typical P&S today. It’s grainy at that speed (though makes a perfectly good shot for web display) and it’s really not very grainy at all at 3200 ISO.
Third, they “threw in” HDTV video capture at the full 1920x1080, and I must say the video is stunning. There are a few flaws with it — the compression is inefficient (about 5 megabytes per second) and there is no autofocus available while shooting, but most of us were not expecting video to be there at all.
Another “flaw” I found — for years I have had a 2x tele-extender, but the cameras refuse to autofocus with it on f/4 lenses (the resulting f/8 being deemed too dark, while f/5.6 is OK). But I figured, with the way sensors have been getting so much better and more sensitive of late, surely the newest cameras would be able to do it? No dice. I will later try an experiment blocking the pins that tell it not to autofocus; maybe it will work.
Anyway, on to the little surprise for those photographing friends who want this camera. Normally, cameras and most other gear are more expensive in Canada. But there was a lucky accident on this camera. When they priced it, the Canadian dollar was much stronger compared to the U.S. dollar, and so they only priced it at $450 over the USD price. That’s to say that the camera with 24-105L lens is $3500 in the USA and $3950 in Canada. But due to the shift in the U.S. dollar, $3950 CDN is only about $3250 USD. And the camera comes with full USA/Canada warranty, so it is not gray market.
There is a smaller savings on the body-only — $3100 CDN vs. $2700 USD, a saving of only about $130. If you want the body only, I recommend you buy the kit with lens for $3250 and sell the lens (you can get about $900 for it in the USA), and that gets you the body for $2350, a $350 saving, with some work. Boy, at that price this camera is pretty amazing, considering I paid over $3000 for my first D30!
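The kit-and-sell-the-lens arithmetic works out like this, using the dollar figures above and the exchange rate they imply (roughly $0.82 USD per CDN at the time; the rate moves, so redo this with the current one):

```python
# Back-of-envelope for the buy-the-kit, sell-the-lens trick, using the
# prices quoted above. The exchange rate is approximate.

CDN_TO_USD = 0.8228  # rate implied by $3950 CDN being about $3250 USD

kit_cdn = 3950                              # 5D Mark II + 24-105L in Canada
kit_usd = round(kit_cdn * CDN_TO_USD)       # what the kit really costs you
lens_resale_usd = 900                       # typical used price in the USA
effective_body_usd = kit_usd - lens_resale_usd
us_body_price = 2700                        # body-only list price in the USA
saving = us_body_price - effective_body_usd
```

The saving comes entirely from the currency shift plus the lens resale; body-only, the currency gap alone is too small to be worth the hassle.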
In Canada, two good stores are Henry’s Camera and Camera Canada. All stores sell this camera at list price right now (because it’s hot), but I talked Henry’s into knocking off $75 because their Boxing Day sale ads proclaimed “All Digital SLRs on sale.” At first they said, “not that one,” but I said, “So all doesn’t mean all?” and they were nice and gave the discount. You probably won’t get the same deal. Shipping was $10 and I got it in about 3 shipping days via international Priority Mail. No taxes or duties if exported from Canada.
Of course, if you prefer to order from a U.S. retailer you can do me a favour and follow the links on my Camera Advice pages, where I get a modest cut if you buy from Amazon or B&H, both quality online retailers.
Now that I have my 5D, I don’t really need my 20D or 40D. I may keep one of them as a backup body. Based on eBay prices, the 20D is worth about $325 and the 40D about $620 — make me an offer. I will also sell the 10-22mm EF-S lens which works with those bodies but not with the 5D. Those go for about $550 on eBay, mine comes with an aftermarket lens hood — always a good idea. The 10mm lens is incredibly wide and gets shots you won’t get other ways. I am slightly more inclined to sell the superior 40D, as I only want to keep the other camera as a backup. The 40D’s main advantages are a few extra pixels, a much nicer display screen and the vibrating sensor cleaner. I have Arca-swiss style quick release plates for each camera, and want to sell them with the cameras. They cost $55 new, and don’t wear out, so I would want at least $40 added for them.
More on the 5D/II after I have shot with it for a while.
Update: The Canadian dollar has fallen more, it’s $1.29 CDN to $1 USD, so the 5D Mark II with lens kit at $3950 CDN is just $3060 USD, a bargain hard to resist over the $3500 US price. Sell that kit lens if you don’t need it for $850 and you’re talking $2200 for your 5D.
Update 2: The Canadian dollar has risen again, reducing the value of this bargain. It is unlikely to make sense with the currencies near even in value.
I’ve added a new concept to the notes section — the Robo Snow Plow. In the article I describe the value of plows that can patrol the roads frequently without need for staff. Since you don’t want to delay for recharging, these might be fuel-tank powered.
However, another interesting concept is offered, namely the repurposing of idle vehicles as temporary plows. The call would go out, and idle vehicles would travel to a depot where a plow or snowblower would be placed on them. Then they would go out and plow and clear light covers of snow. When done, or when needed shortly by their owner, they would return to a depot and drop off the plow unit.
Ordinary cars would be light and not able to plow heavy snow, but there are so many idle cars that you could get to all the streets before things got too heavy. If you didn’t, you would need to assign heavier vehicles and real plows to those areas. And everybody’s driveways would be kept clear by robot snow blowers too. Cars on the roads would give real-time reports of where snow is falling and how thick it’s getting. Cities might be able to clear all their streets, sidewalks and driveways without needing extra vehicles.
While videoconferencing may not make sense for everyday use, I think it has special value for contact with distant relatives, particularly older ones who don’t travel very much. They may not get to see the grandchildren, great-grandchildren or even children very often, and their lives are often marked by a particular loneliness, particularly at senior homes.
But today’s videoconferencing tools are getting quite good and will get even better. Skype now offers a 640x480 video call if you have enough bandwidth and CPU, which is not far off broadcast quality if not for the mpeg artifacts they have trying to save bandwidth. It’s also pretty easy, as is Google’s GMail video chat and several other tools. We’re just a couple of years from HDTV level consumer video calling.
Many seniors, however, are unfamiliar with or even afraid of many new technologies, and often live in places where it’s hard to get them. This in turn means they can’t readily set up computers, cameras or software. There is also still no internet access in many of the locations you might want to reach, such as hospital deathbeds and senior homes. (Had there been access in my stepfather’s hospital room, I could have had a video conversation at the end; he died as I was heading to the plane.)
Video calls also offer extra human bandwidth, which is a big plus with people who are getting infirm, less strong of mind and hard of hearing. Reading lips can help improve how well you are understood, and physical cues can mean a lot.
And so I think it’s crazy that senior homes, hospitals and hospices don’t come standard with a video call station. This is not anything fancy. It’s a computer, a webcam, and a megabit of internet. Ideally wireless to move into rooms for the truly infirm. Yet when I have asked for this I have found myself to be the first person to ask, or found that there are policies against internet use by any but the staff.
I’m going to describe two paths to getting this. The first uses off-the-shelf hardware and freeware, but does require that the staff of these facilities learn how to use the system and be able to set their residents up in front of it when it is time for a call. This is not particularly difficult, and no different than the staff being trained in any of the other things they do for residents and patients. Then I will discuss how you would design a product aimed at this sector, which could be used without staff help.
I’ll be giving a talk on Robocars on Friday, January 16th at the Bay Area Future Salon which is hosted at SAP, 3410 Hillview, Building D, Palo Alto CA. Follow the link for more details and RSVP information. Reception at 6, talks at 7. Eric Boyd will also talk on efficiency of transportation.
While I gave an early version of the Robocar talk at BIL (the unconference that parallels TED) last year, I think I will do an update there as well, along with a talk on the evils of cloud computing.
AT&T has set up special phone stations near all major deployments in the Mid-East. Phone access for our troops is easy, but calls home remain expensive.
So you can pay AT&T $18 to give a phone card to a soldier to call home with at 22.5 cents/minute, or 57 cents/minute from their mobile. Here are the rates.
Except there is one problem. Phone calls aren’t expensive any more. Not to the USA. Not for a company like AT&T. They are by and large free, well under half a cent per minute from any IP phone or phone company phone, plus the bandwidth out of the location. (I’ll get to that.)
Now in some countries there are monopoly rules that would stop a company from installing a phone on their own network and letting people call out from it cheap. But are these going to apply on a U.S. military base in Iraq or Afghanistan? I doubt it, but let me know if somehow they do. It would be odd, the bases do not seem to be subject to any other local laws.
So what it seems is that AT&T is taking something that costs them about 30 cents to provide, and telling you to pay them $18 to give it to a soldier.
As some of you will know, I put up a phone booth at Burning Man and let the whole city call home, anywhere in the world. The calls cost me peanuts, less than what you have in your wallet. The satellite bandwidth for the first year was donated by John Gilmore, but his monthly cost on that megabit satellite service was less than it cost AT&T to do graphic design on their calling cards. Later we used shared internet bandwidth done over a series of microwave towers.
So that’s the unanswered question. Is there something making data bandwidth so expensive to these bases that phone calls (which use as little as 20 kilobits per second) can use enough to be noticed and cost money? I know infrastructure in these countries is poor and expensive, but are there no data pipes into the bases? Why doesn’t the military allocate a tiny fraction of that data stream and let soldiers call home free? Stories say soldiers have the bandwidth and are using Skype and other VoIP calls to call home for free (often with video!) so what’s going on? At the most remote bases, where connections only come by satellite, I can see a few more limitations, but you can do cheap, if high-latency, voice calls just fine from geostationary satellites.
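To put a rough number on it, here is the back-of-envelope bandwidth cost of such a call. The link price is an assumption of mine for illustration, not a quoted rate, and real satellite contracts vary widely:

```python
# Back-of-envelope cost of a low-bitrate voice call over a pricey link.
# The link cost is an assumed figure for illustration only.

CODEC_KBPS = 20               # a low-bitrate voice codec, as mentioned above
LINK_KBPS = 1000              # a one-megabit satellite link
LINK_COST_PER_MONTH = 3000.0  # assumed monthly cost of that link, in USD

seconds_per_month = 30 * 24 * 3600
cost_per_kbps_second = LINK_COST_PER_MONTH / (LINK_KBPS * seconds_per_month)
cost_per_minute = CODEC_KBPS * 60 * cost_per_kbps_second
# Even at these assumed prices, a 20 kbps call costs a small fraction of a
# cent per minute in bandwidth.
```

Even if my assumed link price is off by a factor of ten, the bandwidth cost of a call stays far below a cent a minute, which is the point: the $18 is not paying for bandwidth.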
From my own phone here I can call Baghdad for 3 cents per minute, and cell phones from 7 to 11 cents/minute. Afghanistan (regular or cellular) is indeed 22 cents/minute, presumably due to standard monopoly phone tariffs that military bases should be exempt from.
It’s been a remarkably dramatic year at the EFF. We worked in a huge number of areas, acting on or participating in a lot of cases. The most famous is our ongoing battle over the warrantless wiretapping scandal, where we sued AT&T for helping the White House. As you probably know, we certainly got their attention, to the point that President Bush got the congress to pass a law granting immunity to the phone companies. We lost that battle, but our case still continues, as we’re pushing to get that immunity declared unconstitutional.
We also opened a second front, based on the immunity. After all, if the phone companies can now use the excuse “we were only following orders they promised were legal” then the people who promised it was legal are culpable if it actually wasn’t. So we’ve sued the President, VP and several others over that. We’ll keep fighting.
But this was just one of many cases. The team made up a little musical animation to summarize them for you. I include it here, but encourage you to follow the link to the site and see what else we did this year. I want you to be impressed, because these are tough times, and that also makes it tough for non-profits trying to raise money. I know most of you have wounded stock portfolios and are cutting back.
But I’m going to ask you not to cut back to zero. It’s not that bad. If you can’t give what you normally would like to give to make all this good work happen, decide some appropriate fraction and give it. Or if you are one of the few who is still flush, you may want to consider giving more to your favourite charities this year, to make up for how they’re hurting in regular donations.
The work the EFF does needs to be done. You need it to be done. You have a duty to protect your rights and the rights of others. If you can’t do the work to protect them yourself, I suggest you outsource it to the EFF. We’re really good at it, and work cheap. You’ll be glad you did.
Pew Research has released their recent study on the future of the internet and technology, for which they interviewed a wide range of technologists and futurists, including yours truly. It’s fairly long, and the diverse opinions are perhaps too wide to be synthesized, but there is definitely some interesting stuff in there.