Vernor Vinge is perhaps the greatest writer of hard SF and computer-related SF today. He has won 5 Hugo awards, including 3 in a row for best novel (nobody has done 4 in a row) and his novels have inspired many real technologies in cyberspace, augmented reality and more.
I invited him up to speak at Singularity University but before that he visited Google to talk in the Authors@Google series. I interviewed him about his career and major novels and stories, including True Names, A Fire Upon the Deep, Rainbow’s End and his latest novel Children of the Sky. We also talked about the concept of the Singularity, for which he coined the term.
There have been experiments with dedicated lanes in the past, including a special automated lane back in the 90s in San Diego. The problem is much easier to solve (close to trivial by today’s standards) if you have a dedicated lane, but this violates the first rule of robocars in my book — don’t change the infrastructure.
Aside from the huge cost of building the dedicated lanes, once you have built a lane you now have a car which can only drive itself in that dedicated lane. That’s a lot less valuable to the customer; effectively you only get customers who happen to commute on that particular route, rather than a product attractive to everybody. And you can’t self-drive on the way to or from the highway, so it is not clear what they mean when they say the driver sets a destination, other than perhaps the planned exit.
Yes, the car is a lot cheaper but this is a false economy. Robocar sensors are very expensive today but Moore’s law and volume will make them cheaper and cheaper over time. Highway lanes are not on any Moore’s law curve, in fact they are getting more expensive with time. And if the lane is dedicated, that has a number of advantages, though it comes with a huge cost.
Of course, today, nobody has a robocar safe enough to sell to consumers for public streets. But I think that by the early 2020s, when this study might recommend completing a highway, the engineers would open up the new lane and find that while it’s attractive for its regular nature and especially attractive if it is restricted and thus has lighter and more regular traffic, the cars are already able to drive on the regular lanes just fine.
A better proposal, once robocars start to grow in popularity, would be to open robocar lanes during rush hour, like carpool lanes. These lanes would not be anything special, though they would feature a few things to make the car’s job easier, such as well maintained markings, magnets in the road if desired, no changes in signage or construction without advance notice etc. But most of all they would be restricted during rush hour so that cars could take advantage of the smooth flow and predictable times that would come with all cars being self-driving. Unless humans kept taking over the cars and braking when they got scared or wanted to look at an accident in the other lanes, these lanes would be metered and remain free of traffic jams. However, you need enough robocar flow to justify them since if you only use half the capacity of a lane it is wasteful. On the other hand, such lanes could be driven by the more common “super cruise” style cars that just do lane following and ACC.
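As a rough illustration of the capacity argument, lane throughput is just seconds-per-hour divided by the following gap. The headway figures below are my own illustrative assumptions, not from any traffic study:

```python
def lane_capacity(headway_s: float) -> float:
    """Vehicles per hour one lane carries at a fixed following headway."""
    return 3600.0 / headway_s

# Illustrative assumptions: ~2 s gaps for human drivers, ~1 s for
# coordinated robocars that never brake out of fear or rubbernecking.
human = lane_capacity(2.0)
robocar = lane_capacity(1.0)
print(human, robocar)  # 1800.0 3600.0
```

Under these numbers, a robocar lane filled to only half its potential flow merely matches an ordinary lane, which is why the justification depends on having enough robocar traffic.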
Hats off to the video embedded below, which was prepared for a futuristic transportation expo in my home town of Toronto.
Called the PAT (People and Things) this video outlines the UI and shows a period in the day of a robotic taxi/delivery vehicle as it moves around Toronto picking up people and packages.
I first learned about the video from a new blog on the subject of consumer self driving cars — as far as I know the second serious one to exist after this one. The Driverless Car HQ started up earlier this year and posts with a pretty solid volume. They are more comprehensive in posting various items that appear in the media than I am, and cover some areas I don’t, so you may want to check them out. (That’s a conscious choice on my part, as I tend not to post links to stories that I judge don’t tell us much new. An example would be that the SARTRE road train just did a demo in Spain last month, but it was not much different from demos they had done before.)
Of course, as I said earlier, sadly “Driverless Car” is one of my least favourite terms for this technology, but that doesn’t impede the quality of the blog. In addition, while I do report news on the Google car on this blog, I tend to refrain from commentary due to being on that team, and the folks at DCHQ are not constrained this way.
Face recognition of passengers as they approach the car
Automatic playing of media for the passengers (apparently resuming from media paused earlier in some cases)
Doing package delivery work when needed
Self-cleaning after each passenger
Optional ride-share with friends
In-car video conferencing on the car’s screens
Offering the menu of a cafe which is the destination of a trip. (Some suspect this is a location-based ad spam, but I think it’s a more benign feature because the passenger is picking up his ride-share friend at the cafe.)
And the UIs are slick, if a bit busy, and nicely done.
The concept vehicle at the Brickworks is fairly simple but does present some ideas I have felt are worthwhile, such as single passenger vehicles, face to face seating etc. It’s a bit too futuristic, and not aerodynamic. In the concept, it adjusts for the handicapped. I actually think that’s the reverse of what is likely to happen. Rather than making all cars able to meet all needs, it makes more sense to me to have specialized cars that are cheaper and more cost effective at their particular task, and have dedicated (more expensive) vehicles for wheelchairs. (For example, I like the hollow vehicles like the Kenguru.) I think you serve the disabled better for the same money by having these specialized vehicles — the wait may be slightly longer, but the vehicle can be much better at serving the individual’s needs.
Ford, which has already touted the value of robocars, has announced plans to do a traffic-assist autopilot system sometime mid-decade. Ford joins Mercedes, VW/Audi and Cadillac in announcing such systems. Ford’s vehicle will also offer automatic parking in perpendicular parking spots. For some time many cars have offered automated parallel parking. Since most people do not find perpendicular parking all that difficult, perhaps their goal here is very tight spaces (though that would require getting out of the car and blocking the rude driver, which I have found out only gets your car vandalized) or possibly parking in a personal garage that is very thin.
AUVSI and Mercedes
On the negative front, Mercedes appears to have backed off their plan to offer a traffic jam assistant in the 2013 S class. Earlier in June I attended the AUVSI “Driverless Car Summit” in Detroit, and Mercedes indicated that while they do have that technology in their F.800 concept car, this is only a prototype. As currently set up, the Mercedes system requires you to touch the wheel every 8 seconds. Honda was promoting this in 2006. Mercedes also showed their “6D” stereo vision based system which demonstrated impressive object tracking. They also claimed it does as well in differing light conditions, which would be a major breakthrough.
Some other notes from the conference:
There was effectively universal hate for the term “driverless car.” I join the haters, since the car has a driver, but it’s a computer. No other term won big support, though.
While AUVSI is about unmanned military vehicles, they put on a nicely demilitarized conference, which was good.
There were still a lot of fans of DSRC (a car data radio protocol) and V2V communications. Some from that community have now realized they went down the wrong path but a lot had made major career investments and will continue to push it, including inside the government.
The NHTSA is doing a research project on how they might regulate safety standards. They have not laid out a strategy but will be looking at sensor quality, low-level control system quality, UI for the handoff between manual and self-driving, and testing methodology.
I liked Mercedes’ terms for various modes of self-driving: Feet off, Hands off, Eyes off and Body out. The car companies are aiming at hands off, Google is working on Eyes Off but Body out (which means being so good that the car can operate without anybody in it or without any attention from the occupant) is the true robocar and the long term goal for many but not all projects.
Continental showed more about their own cruising system that combines lane-keeping and automatic cruise control. They now say they have the 10,000 miles of on-road testing needed for the Nevada testing licence, but have not yet decided if they will get one. There is some question as to whether what they are doing requires a licence under the Nevada regulations. (I suspect it does not.) However, they were quizzed as to whether they were testing in Nevada without a licence, which they denied. Continental says their system is built entirely from parts that will be “production parts” as of early 2013.
Legal and states panels showed progress but not too much news. States seem to be pleased so far.
The National Federation for the Blind showed off their blind driving challenge. They have become keen on building a car which has enough automation for a blind person to operate but still uses the blind driver’s skills (such as hearing and thinking) to make the task possible. This is an interesting goal for the feeling of autonomy, but I suspect it is more likely they will just get full-auto cars sooner, and they accept this is likely.
Know me by my flyer number and don’t repeat things to me I’ve already certified as knowing, like safety rules
Know my language (I input it, after all) and don’t bother me with announcements other than in my best understood language
Show me most things as text, perhaps in a crawl under my show. If need be, have me confirm I understand.
Tailor the message to my age and my location in the plane. Show me exits on the screen for my seat.
Cut back on the spam about how great your airline is, how wonderful the FF plan is or why I should buy duty free.
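The wish list above boils down to a simple filtering rule per passenger. This is my own sketch of the logic — the message kinds and profile fields are invented for illustration; no airline system works this way today:

```python
def should_announce(msg: dict, passenger: dict) -> bool:
    """Decide whether one passenger's screen or audio should carry a message."""
    if msg["kind"] == "safety-critical":
        return True                              # never suppress real emergencies
    if msg["kind"] in passenger["already_certified"]:
        return False                             # e.g. the standard safety briefing
    if msg["kind"] in ("airline-promo", "loyalty-promo", "duty-free"):
        return False                             # cut back on the spam
    return msg["language"] == passenger["language"]

pax = {"language": "en", "already_certified": {"safety-briefing"}}
print(should_announce({"kind": "duty-free", "language": "en"}, pax))        # False
print(should_announce({"kind": "safety-critical", "language": "ru"}, pax))  # True
```

Everything that survives the filter could go to the seat-back screen as text in the flyer’s own language, with audio reserved for the genuinely critical items.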
Today, instead, you can see the visible frustration on the faces of flyers as their movie is interrupted so that they can hear the translation into Russian of the long announcement they just heard in German and English.
Having good custom in-flight entertainment is good, and considered a major competitive feature, but already I see more and more people preferring to put out a tablet, even when they have a super-fancy system in the first-class seats. The tablet of course does not have the interruptions (even for the tiny number of real announcements such as in our case last week, “we can’t get the landing gear doors to close so we’re dumping fuel and returning to the airport”) and it also has, if you prepare it, customized entertainment that you know you want to watch.
Frankly, I am not sure who programs the video selections on many of the airlines but I have to suspect they don’t just try to get the best movies with good reviews. They either try to get the cheapest movies or have deals with certain studios — it’s amazing how few quality films they might have in a selection of 100 movies.
I also remain disappointed at how badly implemented most of the in-flight systems I have seen are. They are all slow, with highly noticeable lags after keypresses, poor touchscreens, freezes and crashes. Any tablet or phone puts them to shame when it comes to UI and responsiveness. And to top it off, they are huge — on main airlines, many of the seats have reduced footroom to fit the box for the video system. (It also has other in-seat electronics, I presume, but still, it’s about 10x bigger than it needs to be.) This is odd particularly since in planes floor space and weight are at such a premium. A tablet computer, either fixed in the seat or on an anti-theft power/data tether, would provide a better system — smaller, lighter, better UI, cheaper, better screen — in just about every way. Of course, when they first designed these seats years ago they did not have cheap tablets, but there is little excuse to continue installing the old ones.
Wait, how could they have known? How could they not have known? It’s 2012. We’ve known for decades now that each year computer products get smaller, faster, cheaper and superior in major ways. When you are designing a system to install in the future, it’s a mistake to design it based on the current technology. You should bet that something better will be along, and make your design adaptable to it. If nothing else, your standard design is going to get faster and higher resolution — which makes the slow response time of the existing systems inexplicable.
Many airlines are starting to offer satellite TV. That’s better than the old limited selections (or in particular a single bad movie) but actually not too appealing. Aside from being full of commercials and ignoring your schedule, with TV the announcements and interruptions make you miss crucial parts of your show as they talk over them. More than once I’ve been watching a show on an airline only to have them talk over the climax of the film.
I’m whining a lot, but it’s because I do believe this is important. The truth is that on a flight you are often tired and cramped, and reading and working are not tremendously comfortable. I bring a book but read at a reduced speed. Having nice noise-cancelling headphones and a good in-flight entertainment system with quality content can make a flight much better, and it’s a shame that so many things are obviously wrong with the systems they have built. Today’s flights are stressful in any cabin, and a quiet and uninterrupted experience would do a lot to increase customer satisfaction.
There’s a lot of excitement about the potential of autonomous drones, be they nimble quadcopters or longer-range fixed wing or hybrid aircraft. A group of students from Singularity University, for example, has a project called MatterNet working to provide transportation infrastructure for light cargo in regions of Africa where roads wash out for half the year.
Closer to home, these drones are not yet legal for commercial use, while government agencies are using them secretly.
Here’s one useful idea: a small set of medical drones scattered around the city. Upon an emergency call, they can fly, via a combination of autonomous navigation and remote-human-operated flying at the end, to any destination in the city within a couple of minutes. Call 911, and as soon as you say it’s a medical emergency the drone is on the way. When it gets there, the human operator lands it, or even sends it onto a balcony in tall buildings. Somebody has to carry it to the patient if they are far from the outside.
When it gets to the patient it has a camera and conferencing ability so a remote doctor can examine the patient and talk to people around the patient to ask them questions or give them instructions. It also could contain one of those “foolproof defibrillator” modules able to deal with many kinds of heart attacks. They are already in many buildings, but this way they could be anywhere. It’s more useful than a taco.
The remote doctor could advise any medical staff who come, or give advice to the ambulance that’s on the way but not getting there for a few minutes. If a medicine that can be administered by a layperson is needed, there might be some in the drone but a second drone could be loaded and dispatched within a few minutes as well — that might take longer to fly but less time than an ambulance. You might not put any valuable medicines in the first drone to prevent people from summoning them just to steal them, though this might just happen for the valuable drone unless steps are taken to make that non-productive.
This should be combined with something I have felt is long overdue in the world of our mobile phones. People who are able to be on-call EMTs and doctors should have their phones updating their locations with a medical service while they are on call for such action. Then anybody with an emergency should be able to summon or get to the closest professional very quickly. (Of course there is no need to record this data after it changes, to avoid making a life-log of the doctor.) Nobody should ever have to ask “is there a doctor in the house?” 911 should be able to say, “There is a doctor 3 doors down, she’s been notified.” But the drone can always come, and bring a remote specialist if need be.
The other barrier to this is network dead zones. A map would need to be made of network dead zones, and the drone would not land in them, though it could fly through them. It would land just outside the dead zone and warn people not to carry it into one if the remote doctor’s services are needed.
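The dead-zone rule could be as simple as walking the planned route backwards to the last waypoint that still has coverage. This is a toy sketch — the coordinates and circular zones are made up, and a real system would use geodesic distances and a proper coverage map:

```python
import math

def in_dead_zone(point, zones):
    """True if (x, y) falls inside any circular network dead zone."""
    x, y = point
    return any(math.hypot(x - cx, y - cy) <= r for cx, cy, r in zones)

def landing_point(route, zones):
    """Walk the route backwards and return the last waypoint with coverage,
    so the drone lands just outside a dead zone rather than inside one."""
    for wp in reversed(route):
        if not in_dead_zone(wp, zones):
            return wp
    return None  # entire route is dark: abort or dispatch differently

zones = [(10.0, 0.0, 3.0)]                 # one dead zone, radius 3, centred at (10, 0)
route = [(0, 0), (5, 0), (6, 0), (11, 0)]  # the destination (11, 0) is inside it
print(landing_point(route, zones))         # (6, 0): just short of the zone
```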
Someday, the drone could contain a winner of the X-prize “Medical Tricorder” contest with sensors to diagnose all sorts of conditions, and it might even eventually be a robot able to administer emergency drugs — but the actual delivery and video feed is something we can do today.
One of my first rules of robocars is “you don’t change the infrastructure.” Changing infrastructure is very hard, very expensive, requires buy-in from all sorts of parties who are slow to make decisions, and even if you do change it, you then have a functionality that only works in the places you have managed to change it. New infrastructure takes many decades — even centuries, to become truly ubiquitous.
That’s why robocar enthusiasts have been skeptical of things like ITS plans for roadside to vehicle and vehicle to vehicle communications, plans for dedicated highway lanes with special markers, and for PRT which needs newly built guideways. You have to work with what you have.
There are some ways to bend this rule. Some infrastructure changes are not too hard — they might just require something as simple and cheap as repainting. Some new infrastructures might be optional — they make things better in the places you put them, but they are not necessary to operations. Some might focus on specific problem areas — like special infrastructure in heavy pedestrian areas or parking lots, enabling or improving optional forms of operation in those areas.
Another possibility is to have robocars enable a form of new infrastructure, turning it upside down. The infrastructure might need the robocars rather than the other way around. I wrote about that sort of plan when discussing a solar panel on a robocar.
A recent proposal from Siemens calls for having overhead electric wires for trucks. Trolley buses and trams use overhead electric wires, and there are hybrid trolley buses (like the Boston T line) which can run either on the wires or on an internal diesel. These trucks are of that type. The main plan for this is to put overhead wires in things like shipping ports, where trucks are running around all the time, and they would benefit greatly from this.
I’ve seen many proposals for electrification of the roads. Overhead wires are problematic because they need to be high enough to go over the trucks and other high vehicles, but that makes them harder to reach from low vehicles. You need two wires and must get good contact. They are also damn ugly. This has led to proposals for inductive power supplies buried in the road. This is very expensive as it requires tearing up the road. There are also inductive losses, and while you don’t need to make contact, precise driving is important for efficiency. In these schemes, battery-electric cars would be able to avoid using their batteries (and in fact charge them) while on the highway, vastly increasing their range and utility.
Robocars offer highly precise driving. This would make it easier to line up on overhead wires or inductive coils in the road. It even would make it possible to connect with rails in the roadbed, though right now people don’t want to consider having a high voltage rail on the ground, even on a highway.
It was proposed to me (I’m trying to remember by whom — my apologies) that one new option would be a rail on the side of the highway. This lane would be right up against the guardrail, and normally would be the shoulder. In the guardrail would be power rails, and a connector would come from the left side of the vehicle. Only a robot would be able to drive precisely enough to do this safely. Even with a long pole and more distance, I am not sure people would enjoy trying to drive like this. A grounding rail in the roadbed might also be an option — though again, tearing up the roadbed is very expensive to do and maintain.
There is still the problem of having a live rail or wire at reachable height. The system might be built with an enclosed master cable and then segments of live wire which are only live when a vehicle is passing by them. Obviously a person doesn’t want to be there when a car is zooming through. This requires robust switching equipment for the tens of kilowatts one wishes to transfer. You also have to face the potential that a car from the regular lanes could crash into the rail and wires, and while that’s never going to be safe, you don’t want to make it worse. You also need switching if you are going to have accounting, so only those who pay for it get power. (Alternately it could be sold by subscription, so you don’t account for the usage; you just identify cars without a subscriber tag that are sucking juice and fine them.)
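The switching-plus-accounting idea reduces to a small control loop: energize only the segments currently under paying vehicles, and flag anything on the rail without a subscriber tag. The segment length and tag names below are arbitrary assumptions of mine:

```python
SEGMENT_LEN = 50.0   # metres of rail per switched segment (assumed)

def segment_of(pos: float) -> int:
    """Map a vehicle's position along the rail to a segment index."""
    return int(pos // SEGMENT_LEN)

def energize(vehicle_positions: dict, subscribers: set):
    """Return (segment indices to switch live, tags to flag for a fine)."""
    live, freeloaders = set(), set()
    for tag, pos in vehicle_positions.items():
        if tag in subscribers:
            live.add(segment_of(pos))        # power follows the paying car
        else:
            freeloaders.add(tag)             # on the rail without a subscription
    return live, freeloaders

positions = {"CAR-1": 120.0, "CAR-2": 480.0, "CAR-X": 300.0}
live, cheats = energize(positions, {"CAR-1", "CAR-2"})
print(sorted(live), cheats)  # [2, 9] {'CAR-X'}
```

Every other segment stays dark, which is also what makes the reachable-height wire tolerable: it is only live in the instant a car is on it.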
There is also the problem that this removes the shoulder, which provides safety to other cars and a breakdown lane. If a vehicle does have to stop in this lane for emergency reasons, sensors in the rail could make sure that all robocars would know and leave the lane with plenty of margin. They would all have batteries or engines and be able to operate off the power — indeed, the power lines need not be continuous; you don’t have to build them in sections of the road where it’s difficult. If other cars are allowed to enter the lane, brushing the wires must not be dangerous for them beyond the physical contact itself.
It’s also possible that the rail could be inductive. The robocar could drive and keep its inductor contact just a short distance from the coils in the rail. This is more expensive than direct contact, and not as efficient, but it’s a lot cheaper than burying inductors in the roadbed. It’s safe for pedestrians and most impacts, and while a hard impact could expose conductors, a ground fault circuit could interrupt the power. Indeed, because all vehicles on the line will have alternate power, interruption in the event of any current not returning along the return is a reasonable strategy.
For commuters with electric cars, there is a big win. You can get by with far less battery and still go electric. The battery costs a lot of money — more than enough to justify the cost of installing the connection equipment. And having less battery means less weight, and that’s the big win for everybody, as you make the vehicles more efficient when you cut out that weight. Of course, if this lane is only for use by electrified robocars, it becomes a big incentive to get one just to use the special lane.
The power requirements are not small. Cars will want around 20 kW to go at highway speed, and trucks a lot more. This makes it hard to offer charging as well as operating current, but smaller cars might be able to get a decent charge while driving.
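A back-of-envelope calculation shows why the battery saving justifies the connection hardware. Every number here is an assumption of mine, not a vendor figure:

```python
# Rough numbers, all assumptions: a 20 kW draw at ~100 km/h works out to
# 0.2 kWh per km. If 40 km of a daily commute runs on the powered lane,
# that's battery capacity the car no longer needs to carry.
power_kw = 20.0
speed_kmh = 100.0
kwh_per_km = power_kw / speed_kmh               # 0.2 kWh/km

electrified_km = 40.0
battery_saved_kwh = kwh_per_km * electrified_km  # 8.0 kWh

cost_per_kwh = 500.0   # assumed battery pack cost in dollars, 2012-era
print(battery_saved_kwh * cost_per_kwh)          # 4000.0
```

Several thousand dollars of avoided battery (plus the weight saving every mile of the trip) is the kind of margin that could pay for the pickup equipment.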
Like most people, I have a lot of different passwords in my brain. While we really should have used a different system from passwords for web authentication, that’s what we are stuck with now. A general good policy is to use the same password on sites you don’t care much about and to use more specific passwords on sites where real harm could be done if somebody knows your password, such as your bank or email.
The problem is that over time you develop many passwords, and sometimes your browser does not remember them for you. So you go back to a site and try to log in, and you end up trying all your old common passwords. The problem: At many sites, if you enter the wrong password too many times, they lock you out, or at least slow you down. That’s not unwise on their part, but a problem for you.
One solution: Sites can remember hashes of your old passwords. If you type in an old password, they can say, “No, that used to be your password but you have a new one now.” And not count that as a failed attempt by a password cracker. This adds a very slight risk, in that it lets a very specific attacker who knows you super well get a few free hits if they have managed to learn your old passwords. But this risk is slight.
Of course they should store a hash of the password, not the actual password. No site should store the actual password. If a site can offer to mail you your old password rather than offering a link to reset the password, it means they are keeping it around. That’s a security risk for you, and it also means that if you use a common password on such sites, they now know it and can log in as you on all the other sites where you use that password. Alas, it’s hard to tell when creating an account whether a site stores the password or just a hash of it. (A hash allows them to tell if you have typed in the right password by comparing the hash of what you typed with the stored hash of the password from when you created it. A hash is one-way, so they can’t go from the hash to the actual password.) Only a small minority of sites do this right.
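A sketch of the scheme: store only salted, slow hashes, keep the hashes of retired passwords, and treat a match against an old one as “that used to be your password” rather than a cracking attempt. This is my own illustrative code, not any site’s actual implementation; a production system would salt each password separately:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 rather than a bare hash, so brute-forcing stored hashes is slow
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class Account:
    def __init__(self, password: str):
        self.salt = os.urandom(16)
        self.current = hash_password(password, self.salt)
        self.old = []          # hashes of retired passwords, never plaintext

    def change_password(self, new_password: str):
        self.old.append(self.current)
        self.current = hash_password(new_password, self.salt)

    def check(self, attempt: str) -> str:
        h = hash_password(attempt, self.salt)
        if hmac.compare_digest(h, self.current):
            return "ok"
        if any(hmac.compare_digest(h, o) for o in self.old):
            return "old-password"   # don't count this as a cracking attempt
        return "wrong"

acct = Account("hunter2")
acct.change_password("correct horse")
print(acct.check("hunter2"))        # old-password
print(acct.check("correct horse"))  # ok
```

The “old-password” response never reveals the retired password itself; it only tells the legitimate user to stop guessing their own history.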
This is just one of many things wrong with passwords. The only positive about them is that you can keep a password entirely in your memory, and thus go to a random computer and log in with nothing but your brain. That is also part of what is wrong with them, in that others can do that too, and the remote computer can quite easily be compromised and recording the password. The most secure systems use the combination of something in your memory and information in a device. Even today, though, people are wary of solutions that require them to carry a device. Pretty soon that will change, and not having your device will be so rare as to not be an issue.
I’m doing a former-cold-war tour this month and talking about robocars.
This Friday, May 11, I will be giving the 2301st lecture for the Philosophical Society of Washington with my new, Prezi-enabled robocars talk. This takes place around 8pm at the John Wesley Powell Auditorium. This lecture is free.
A week later it’s off to Moscow to enjoy the wonders of Russia.
There will be a short talk locally in between at a private charity event on May 14.
I found that this recent article from the editor of the MIT Tech Review on why apps for publishers are a bad idea touched on a number of key issues I have been observing since I first got into internet publishing in the 80s. I recommend the article, but if you insist, the short summary is that publishers of newspapers and magazines flocked to the idea of doing iPad apps because they could finally make something that they sort of recognized as similar to a traditional publication; something they controlled and laid out, that was a combined unit. So they spent lots of money, ran into nightmares (having to design for both landscape and portrait on the tablet, as well as possibly on the phones or even Android) and didn’t end up selling many subscriptions.
Since the dawn of publishing there has been a battle between design and content. This is not a battle that has or should have a single winner. Design is important to enjoyment of content, and products with better design are more loved by consumers and represent some of the biggest success stories. Creators of the content — the text in this case — point out that it is the text where you find the true value, the thing people are actually coming for. And on the technology side, the value of having a wide variety of platforms for content — from 30” desktop displays to laptops to tablets to phones, from colour video displays to static e-ink — is essential to a thriving marketplace and to innovation. Yet design remains so important that people will favour the iPhone just because they are all the same size, and most Android apps still can’t be used on Google TV.
This is also the war between things like PDF, which attempts to bring all the elements of paper-based design onto the computer, and the purest forms of SGML, including both original and modern HTML. Between WYSIWYG and formatting languages, between semantic markup and design markup. This battle is quite old, and still going on. For many designers, layout is all they do, and the idea that a program should lay out text and other elements to fit a wide variety of display sizes and properties is anathema. To technologists, the idea that layout should be fixed is almost equally anathema.
Also included in this battle are the forces of centralization (everything on the web or in the cloud) and the distributed world (custom code on your personal device) and their cousins online and offline reading. A full treatise on all elements of this battle would take a book for it is far from simple.
I sit mostly with the technologists, eager to divide design from content. I still write all my documents in text formatting languages with visible markup and use WYSIWYG text editors only rarely. An ideal system that does both is still hard to find. Yet I can’t deny the value and success of good design, and believe the best path is compromise in this battle. We need compromises in design and layout, and we need compromises between the cloud and the dedicated application. End-user control leads to some amount of chaos. It’s chaos that is feared by designers and publishers and software creators, but it is also the chaos that gives us most of our good innovations, which come from the edge.
Let’s consider all the battles I perceive for the soul of how computing, networks and media work:
The design vs. semantics battle (outlined above)
The cloud vs. personal device
Mobile, small and limited in input vs. tethered, large screen and rich in input
Central control vs. the distributed bazaar (with so many aspects, such as)
The destination (facebook) vs. the portal (search engine)
The designed, uniform, curated experience (Apple) vs. the semi-curated (Android) vs. the entirely open (free software)
The social vs. the individual (and social comment threads vs. private blogs and sites)
The serial (email/blogs/RSS/USENET) vs. the browsed (web/wikis) vs. the sampled (facebook/twitter)
The reader-friendly (fancy sites, well filtered feeds) vs. writer friendly (social/wiki)
In most of these battles both sides have virtues, and I don’t know what the outcomes will be, but the original MITTR article contained some lessons for understanding them.
I have not intended for this blog to become totally about robocars but the news continues to flow at a pace more rapid than most expected.
Nevada has issued its first licence for an autonomous car — to Google, of course. This is a testing licence with a special red plate with an infinity symbol on it. It’s a cool looking licence but what’s really cool is that even in the 2000s when I would give talks on this technology and get called a ridiculous optimist, I never expected that we would see an official licenced robocar in the USA in the spring of 2012 — even if only for testing.
This is a picture of one of the cars with its regular California plate. The new Nevada plate has licence number 001; you can see a picture here.
The Nevada law enabled both the testing of vehicles in the state and their eventual operation by regular owners. For testing, the vehicles need to have two people in them, as has been normal Google policy. They must do 10,000 miles first off of Nevada roads — either on test tracks, or in the case of the early vehicles, in other states that don’t have a 10,000 mile requirement. German auto and tire supplier Continental has said it’s been racking up the 10,000 miles and wants to apply, and press reports say other applicants are in the wings. As far as I know this is the first officially licenced car in the world, though several other research cars have gotten special one-off permits to allow them to be tested on the roads in places like Germany and China.
More information has come from the Google team (to which I am a consultant) at the Society of Automotive Engineers conference in Detroit. In a speech there, covered in the Detroit Free Press and many others, Anthony Levandowski outlined how Google has been talking to all significant car manufacturers about how they might work together to produce cars with Google’s technology. Google is not looking to become a car manufacturer, but does want to see a real car on the roads — and not next decade.
At the same time, talks with insurance companies about how to provide insurance for self-driving cars are also going on. Insurance companies pay the cost of all accidents, either directly through policies bought by the driver, or indirectly through insurance sold to manufacturers, and of course all these policies and cars are really paid for by car owner/drivers. As long as accidents are lowered, and the cost per accident remains the same, it’s a win.
At the same time, J.D. Power and Associates released a study on self-driving car markets. This survey shows around a third of buyers would like to get self-driving functionality in their car, and about 20% would pay $3,000 for it. While advanced laser-based scanners cost much more than that today, I am confident that Moore’s Law and higher volumes can bring things down to that price. These numbers are quite high for such a radical new technology. Such technologies normally only require a small volume of early adopters to get them going. The various basic autopilots announced by car manufacturers, which require you to still keep your attention on the road, will sell for well under $3,000.
Sebastian Thrun, leader of the Google X Lab, recently appeared on Charlie Rose where he spoke about the car, about Glass, and mostly about Udacity, his personal online education project. Sebastian also publicly posted that he took one of the Google self-driving Lexus cars up to Lake Tahoe this weekend. I do think those long vacation-home drives will be a big motivator for people to pay serious money for a self-driving car. Saving time on the average 30 minute commute is one thing, but the 4 hour drive to Lake Tahoe is a real change, especially if you can use the time to interact with your family or get in serious reading or video watching. Of course, right now, Sebastian was keeping his eyes on the road in case he needed to intervene, since this is still a prototype.
Finally, NHTSA has released a report saying that robocars could eliminate up to 80% of crashes. While they won’t get to that number right away, I think they can even do better in time. David Strickland, the head of NHTSA, has stated he has very high hopes for the technology, which is tremendous news, because it means that one of my biggest fears in my early days of forecasting this technology — too much government opposition — seems less likely.
Some accidents are caused by mechanical failures (like tire blowouts or bad brakes), freak weather and other situations a self-driving car can’t do much about. We may never get to zero. But this should still be the biggest lifesaver in the developed world until somebody cures some of the biggest diseases.
While Mercedes has been reported as promising a traffic-jam autopilot in the 2013 S class due later this year, I was surprised to learn that Honda briefly made claims that their 2006 “Accord ADAS” in the UK was a self-driving car.
However this car is, as the name suggests, an ADAS car with Honda’s lane-keeping system which will nudge the car back into the lane if you drift out of it. Such lane keeping systems have indeed been around for a while. This car notices if you keep your hands off the wheel for more than a short time, and sounds an alarm. In order to “self-drive” the demonstrator keeps his hands close to the wheel and touches it every so often to avoid the alarm. You get the impression that he and others have been using the car in this fashion.
It is no idle alarm. The LKAS nudge is not quite powerful enough to steer the car in any kind of real turn, and the camera finding lane markers of course occasionally fails to find them. This, again, is common in fancy ADAS cars. What is interesting is that Honda allowed this to be pitched as an attempt at self-driving. They have not done this recently, though lane-keep ADAS systems have continued to be available since then from Honda and other vendors.
Honda has generally not been very active in announcing self-driving cars. They have shown concept cars that listed self-driving as one of the features, but these were concept cars, not actual implementations. Toyota and Nissan have both made various announcements. The smaller Japanese companies (Mazda, Mitsubishi and Subaru/Fuji) also have no public projects.
On a second note, I will be speaking Wednesday morning at the MLOVE Conference in Monterey on self-driving cars. Then I will be heading over to the Asilomar Microcomputer Workshop — a 35-year-old conference I’ve been going to for decades which happens to be in the same place at the same time.
In the Cadillac video below, they explain the system as a combination of ACC, lane-keeping and GPS. This is similar to the other announced plans from many other car companies, including Mercedes, BMW, VW/Audi and others. The use of GPS suggests the car may also use map information, which is not known to be used by the other announced products, but is heavily used by Google and the various eyes-free projects.
It is pure speculation, but perhaps they are building maps of where the lane markers are reliable enough and where they have faded out so that they can refuse to super-cruise when approaching those zones. They might also use the GPS to assure you super-cruise only on the highway or other limited areas.
In the video, which shows a demo at about the 1:10 mark, they are driving on a test track, and always next to a blue line along the lane markers. Obviously a real product could not depend on special lane striping if it wants to be broadly usable, but this may assist them in testing their system with confidence (i.e. compare what their lane-finder detects to what an independent system that tracks the blue line detects).
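That kind of cross-check is easy to express in code. Here is a toy sketch of the idea — entirely my own illustration, not anything GM has described; the function, data and tolerance are hypothetical. It compares the lane-finder’s per-frame estimate of lateral position against the independent blue-line tracker’s reading and flags frames where they disagree:

```python
# Hypothetical validation harness for a lane-finder under test.
# All names and the tolerance value are illustrative assumptions.

def validate_lane_finder(lane_finder_offsets, blue_line_offsets, tolerance_m=0.15):
    """Compare the lane-finder's per-frame lateral offset estimates
    against an independent tracker following the painted blue reference
    line, and flag frames that disagree by more than tolerance_m metres."""
    disagreements = []
    for i, (est, ref) in enumerate(zip(lane_finder_offsets, blue_line_offsets)):
        error = est - ref
        if abs(error) > tolerance_m:
            disagreements.append((i, error))
    return disagreements
```

A test run on the track would then reduce to logging both signals and reviewing only the flagged frames.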
GM has had various self-driving projects, including the futuristic EN-V and the sponsorship of BOSS in the Darpa Urban Challenge. The Cadillac brand is well positioned. Self-driving is initially going to be a luxury feature, but companies that sell sporty performance cars don’t want to detract from their image as selling a fun driving experience. A pure luxury brand like Cadillac does not have as much of that problem as BMW and Mercedes have. At the same time, the video insists that they don’t want to take away from driving.
It’s been interesting to see how TV shows from the 60s and 70s are being made available in HDTV formats. I’ve watched a few of Classic Star Trek, where they not only rescanned the old film at better resolution, but also created new computer graphics to replace the old 60s-era opticals. (Oddly, because the relative budget for these graphics is small, some of the graphics look a bit cheesy in a different way, even though much higher in technical quality.)
The earliest TV was shot live. My mother was a TV star in the 50s and 60s, but this was before videotape was cheap. Her shows were all done live, and the only recording was a Kinescope — a film shot off the TV monitor. These kinneys are low quality and often blown out. The higher budget shows were all shot and edited on film, and can all be turned into HD. Then broadcast quality videotape got cheap enough that cheaper shows, and then even expensive shows, began being shot on it. This period will be known in the future as a strange resolution “dark ages” when the quality of the recordings dropped. No doubt they will find today’s HD recordings low-res as well, and many productions are now being shot on “4K” cameras which have about 8 megapixels.
But I predict the future holds a surprise for us. We can’t do it yet, but I imagine software will arise that can take old, low quality videos and turn them into something better. It will do this by actually modeling the scenes that were shot, to create higher-resolution images and models of all the things which appear in the scene. For this to work, everything must move: either the object has to move (as people do) or the camera must pan over it. In some cases having multiple camera views may help.
When an object moves relative to a video camera, it is possible to capture a static image of it in sub-pixel resolution. That’s because the multiple frames can be combined to generate more information than is visible in any one frame. A video taken with a low-res camera that slowly pans over an object (in both dimensions) can produce a hi-res still. In addition, for most TV shows, a variety of production stills are also taken at high resolution, and from a variety of angles. They are taken for publicity, and also for continuity. If these exist, it makes the situation even easier.
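The multi-frame trick can be sketched with a toy “shift-and-add” reconstruction. This is a deliberate simplification of real super-resolution methods, which must also estimate the shifts and undo blur; here the sub-pixel shifts are assumed known, and all names are my own:

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Toy multi-frame super-resolution: place each low-res sample of a
    static scene onto a finer grid using its known sub-pixel shift, then
    average wherever samples land on the same high-res cell.

    frames: list of equally sized 2D arrays (low-res frames)
    shifts: per-frame (dy, dx) offsets in low-res pixel units
    scale:  integer upsampling factor
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Nearest high-res cell for each low-res sample of this frame.
        yi = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xi = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        yy, xx = np.meshgrid(yi, xi, indexing="ij")
        np.add.at(acc, (yy, xx), frame)   # unbuffered accumulation
        np.add.at(hits, (yy, xx), 1.0)
    filled = hits > 0
    acc[filled] /= hits[filled]           # average; unfilled cells stay 0
    return acc
```

With two 4×4 frames offset by half a pixel, a 2× reconstruction interleaves samples from both frames on the finer grid — each frame fills high-res cells the other never touches, which is exactly why more frames yield more detail.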
By now, you’ve probably heard of the proposal from the White House to abolish April Fool’s Day as a national holiday starting in 2015. Some in the comedy community are upset at the end of an old tradition and a day devoted to what we love.
But it’s time to face facts. It’s just not working any more. When I was a kid, April 1st was mostly a day of physical pranks or very short gags. You would replace the sugar with salt or put a white powder in an envelope. But the internet changed it and made every gag global.
The key to a good gag was the person believing in the gag and then suddenly remembering what day it was. If you were lucky they didn’t clue in and you could exclaim “April Fool” for much hilarity.
It was common in days past for people to forget what day it was. One of my best pranks came decades ago, when I posted in Science Fiction forums on April 1 that Fred Saberhagen’s “Berserker” novels were a rip-off of the fine original Battlestar Galactica series. Over 70 different people posted rants about how stupid I was, and a serious fraction of them pointed out that the Saberhagen books long predated Galactica, and said things like “why don’t you check the dates on what you read?”
Now, nobody is surprised. Google has 13 different gags up today, including one on the front page. Every major web site has a gag, many have long traditions. Perhaps somebody is briefly surprised by the first one, but generally everybody knows what day it is and nobody is fooled.
Some have proposed that the national Fool’s day be moved to a random day each year, with not much promotion done about what the date is. People who were funny (or thought they were funny) would make sure they knew the date. I am not sure that’s enough — it would help make the first gag a surprise but soon the tolerance would build up.
A bit better is the proposal from the National Comedy & Gag Association to have a different day in each state, as proclaimed by the Governor, or even every city. This would allow surprise because when you read jokes from other geographic regions, you might see only half a dozen on any given day. You would then have to research the location of the joke and check to see if that location is having its local Fool’s day that day.
Can anything restore the sanctity of this holiday? It may be that this is one thing the internet has destroyed.
I recently updated my book recommendation box to list the very best recent SF to read from the last few years. This is SF that meets my goals for great SF. I seek somewhat “hard” SF that speaks about important and real ideas, while also being entertaining to read.
The Quantum Thief by Hannu Rajaniemi (2011)
This astounding first novel rates as best of 2011 for me. Except it came out in 2010, but in limited release in the UK, so most people did not see it until 2011. An amazingly constructed post-singularity world that deserves all the superlatives. The next book is eagerly awaited. Particularly remarkable is that Rajaniemi is a Finn, so I presume English is not his first language. It is disappointing that it did not receive a Hugo nomination.
Super Sad True Love Story by Gary Shteyngart (2010)
This novel received surprisingly little attention from the SF community, but in fact it’s the best SF novel of 2010. A wonderful dystopian view of a failing USA where only dollars backed by the Yuan are valuable and the coveted jobs are in retail and media. A dark view of whuffie-like reputation where everybody’s credit score is displayed everywhere they go, and at every gathering everybody is rated on fuckability (and you see where you stand.) The anti-hero works for an anti-aging company that is a marvelous parody but the topics are deep and serious. Not even nominated for the Hugo, which is a terrible mistake.
The City and the City by China Miéville (2009)
The best of 2009 (tied for the Hugo award, too.) The City and the City at first may not seem like SF because the cities are so implausible, but it’s really a fun experiment in social or political science to imagine two towns co-existing like this, partly overlaid in space while the residents are trained from birth to pay no notice to the other city. This is probably the weakest on this list, and indeed the co-winner that year (Windup Girl) was almost anti-SF, as the science in it was fully bogus. But CatC grew on me as I came to see it as alternate-social worldbuilding.
Anathem by Neal Stephenson (2008)
It came 2nd for the Hugo, but even the winner, Neil Gaiman, declared it should have won. Read my full review.
Rainbow’s End by Vernor Vinge (2006)
The Hugo Winner for 2006 is also my pick for the best of the decade. If you like your SF full of wonderful new ideas, in this case related to the near future rather than the more abstract distant ones seen in earlier Vinge triumphs, this is the book for you. The protagonist has recently been cured of Alzheimer’s but that doesn’t mean many of his memories weren’t destroyed. He tries to fit into a world where everybody wears augmented reality lenses and clothes, education and play are radically different and a conspiracy is trying to develop a drug that makes you more accepting of suggestions. Note that 2006 also included the excellent Blindsight by Peter Watts available free here.
Other great reads
As noted above check out Embassytown (nominated for the Hugo in 2012) and other Miéville works, and Blindsight by Peter Watts.
If you like Zombies, read Feed by Mira Grant — or rather read it for its treatment of a future, blogger-centered media world. It and its sequel were/are Hugo nominated. Several by Charlie Stross rate highly, such as Halting State, which is probably the best SF novel of 2007 — though the alternate history and Hugo winner The Yiddish Policemen’s Union is a better overall novel. And if you’re from the 80s like me you will want to read the recent Ready Player One, a novel about a world where the now richest man in the world created a globe-spanning MMORPG, and then willed it to whoever could solve a challenge in it. To win, you needed to know all the obscure 70s and 80s culture references that were dear to the deceased programmer.
Going back in the decade, 2004 was also a very strong year, with River of Gods being worthy of a best-of-decade list, and The Algebraist and Iron Sunrise (particularly for its wonderful reMastered cult of the unborn god) also very strong. 2006 had the very fun Old Man’s War as a fine debut novel, and Accelerando is superb (indeed unmatched until Rainbow’s End) for its ideas but lacking in its characters — Stross gets better at this later.
Today Google released a new 3 minute video highlighting advanced self-driving car use. Here I embed the video, discussion below includes some minor spoilers on surprises in the video. I’m pleased to see this released as I had a minor & peripheral role in the planning of it, but the team has done a great job on this project.
This video includes active operation of the vehicle on not just ordinary streets, but private parking lots for door-to-door transportation. You can click on it to see it in HD directly on Youtube.
For some time, the US Postal Service has allowed people to generate barcoded postage. You can do that on the expensive forms of mail such as priority mail and express mail, but if you want to do it on ordinary mail, like 1st class mail or parcel post, you need an account with a postage meter style provider, and these accounts typically include a monthly charge of $10/month or more. For an office, that’s no big deal, and cheaper than the postage meters that most offices used to buy — and the pricing model is based on them to some extent, even though now there is no hardware needed. But for an ordinary household, $120/year is far more than they are going to spend on postage.
There is one major exception I know of — if you buy something via PayPal, they allow you to print a regular postage shipping label with electronic postage. This is nice and convenient, but no good for sending ordinary letters and other small items.
I think the USPS is shooting itself in the foot by not letting people just buy postage online with no monthly fee. The old stamp system is OK for regular letters, and indeed they finally changed things so that old first class stamps still work after price raises, but for anything else you have to keep lots of stamps in supply and you often waste postage, or make a trip to a mailing office. This discourages people from using the post office, and will only hasten its demise. Make it trivial to mail things and people will mail more.
It could be a web-printed mailing label like the one you can use for priority mail, and most software vendors would quickly support such a system. If people wanted, they could even buy “stamps”: collections of electronic postage in various denominations that could be used by programs, so there is no need to handle transactions. Address label printers would all quickly also do postage.
Of course the official suppliers like Endicia and stamps.com would fight this completely. They love being official suppliers and charging large fees. They have more lobbying power than ordinary mailers. So the post office is going to quietly slip away into that good night, instead of taking advantage of the fact that it’s the one delivery company that comes to my door every day (for both pick up and delivery) and all the efficiencies that provides.