Brad Templeton is an EFF director, Singularity U faculty, software architect and internet entrepreneur, robotic car strategist, futurist lecturer, hobby photographer and Burning Man artist.
This is an "ideas" blog rather than a "cool thing I saw today" blog. Many of the items are not topical. If you like what you read, I recommend you also browse back in the archives, starting with the best of blog section. It also has various "topic" and "tag" sections (see menu on right) and some are sub blogs like Robocars, photography and Going Green. Try my home page for more info and contact data.
Submitted by brad on Wed, 2015-01-28 12:05.
As some of you may know, I have been working as chair of computing and networking at Singularity University. The most rewarding part of that job is our ten week summer Graduate Studies Program. GSP15 will be our 7th year of it. This program takes 80 students from around the world (typically over 30 countries and only 10-15% from North America) and gives them 5 weeks of lectures on technology trends in a dozen major fields, and then 5 weeks of forming into teams to try to apply that knowledge and thinking to launch projects that can seriously change the world. (We set them the goal of having the potential to help a billion people in 10 years.)
The classes have all been fantastic, and many of the projects have gone on to be going concerns. A lot of the students come in with one plan for their life and leave with another.
It’s about to get better. One big problem was that the program is expensive. Last year we charged almost $30,000 (it includes room and board) and most of the scholarships were sponsored competitions in different countries and regions. This limits who can come.
Larry Page and Google helped found Singularity U in 2009, and they have stepped up massively this year with a scholarship fund that assures that all accepted students will attend free of charge. Students will either get in through one of the global contests, or be accepted by the admissions team and given a full scholarship. It means we’ll be able to select from the best students in the world, regardless of whether they can afford the cost.
In spite of the name, SU is not really about “the singularity” and not anything like a traditional university. The best way to figure it out is to read the testimonials of the graduates.
Students come in many age ranges — we have had early 20s to late 50s, with a mix of backgrounds in technology, business, design and art. Show us you’re a rising star (or a star that has done it before and is ready to do it again even bigger) and consider applying.
Speaking at SU
In the rest of the year we do a lot of shorter programs, from a couple of days to a week, aimed at providing a compressed view of the future of technology and its implications to a different crowd — typically corporate, entrepreneur and investor based. As that grows, we need more speakers, and I’m particularly interested in finding new folks to add related to computing and networking technologies. We do this all over the planet, which can be a mix of rewarding and draining, though about half the events are in Silicon Valley. There are 3 things I am looking for:
- The chops and expertise in your field to do a cutting edge talk — why do we start listening to you?
- Great speaking skills — why do we keep listening to you?
- All else being equal, I seek more great female and minority speakers to reverse Silicon Valley’s imbalances, which we suffer as well.
Is this you, or do you have somebody to recommend? Contact me (email@example.com) for more details. While top-flight people generally have some of their own work to talk about, and I do use speakers sometimes on very specific topics, the ideal speaker is a great teacher who can cover many topics for audiences who are very smart but not always from engineering backgrounds.
Our next public event is March 12-14 in Seville, Spain — if you’re in Europe try to make it.
Submitted by brad on Sat, 2015-01-24 12:24.
Some new results from the NGV Team at the University of Michigan describe different approaches for perception (detecting obstacles on the road) and localization (figuring out precisely where you are). Ford helped fund some of the research, so they issued press releases about it and got some media stories. Here’s a look at what they propose.
Many hope to be able to solve robotics (and thus car) problems with just cameras. While LIDAR is going to become cheap, it is not yet, and cameras are much cheaper. I outline many of the trade-offs between the systems in my article on cameras vs lasers. Everybody hopes for a research or computer-vision breakthrough to make vision systems reliable enough for safe operation.
The Michigan lab’s approach is a special machine vision one. They map the road in advance in 3D and visible light by using a mapping car equipped with lots of expensive LIDAR and other sensors. They build a 3D representation of the road similar to what you need for a video game engine, and from that, with the use of GPUs, they can indeed create a 2D image of what a camera should see from any given point.
The car goes out into the world and its actual camera delivers a 2D frame of what it sees. Their system then compares that with generated 2D images of what the camera should see until it finds the closest match. Effectively, it’s like you looking out a window and then going into a video game and wandering around looking for a place that looks like what you see out that window, and then you know where the window is.
Of course it is not “wandering,” and they develop efficient search algorithms to quickly find the location that looks most like the real world image. We’ve all seen video games images, and know they only approximate the real world, so nothing will be an exact match, but if the system is good enough, there will be a “most similar” match that also corresponds with what other sensors, like your GPS and your odometer/dead reckoning system, tell you about where you probably are.
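The core of that search can be sketched in a few lines. This toy version is my own illustration, not the team’s code: `best_pose`, `toy_render`, and the brute-force scan over candidate poses are all assumed names, and a real system would use GPU rendering, a lighting-tolerant similarity metric, and a much smarter search than trying every pose.

```python
def best_pose(camera_frame, candidate_poses, render):
    """Return the candidate pose whose rendered view best matches the
    real camera frame, by sum-of-squared-differences over pixels."""
    def score(pose):
        predicted = render(pose)  # image the camera *should* see from this pose
        return sum((a - b) ** 2 for a, b in zip(camera_frame, predicted))
    return min(candidate_poses, key=score)

# Toy demo: the "world" is a 1D strip of brightness values, and the
# "renderer" just rotates it by the pose offset.
world = [(i * 37) % 11 for i in range(64)]
def toy_render(offset):
    return world[offset:] + world[:offset]

frame = toy_render(5)                           # the camera is really at offset 5
print(best_pose(frame, range(10), toy_render))  # → 5
```

In practice the raw squared difference would be replaced by something robust to illumination changes (normalized cross-correlation, mutual information), since matching under real-world lighting is exactly the hard part.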
Localization with cameras has been done before, and this is a new approach taking advantage of new generations of GPUs, so it’s interesting. The big challenge is simulating the lighting, because the real world is full of different lighting, high dynamic range, and shadows. The human system has no problem understanding a stripe on the road as it moves through the shadow of a tree, but computer systems have a pretty tough time with that. Sun shadows can be mapped well with GPUs, but shadows from things like the moving limbs of trees are not possible to simulate, nor are the shadows of other vehicles and road users. At night, light and shadows come from car headlights and urban lights. The team is optimistic about how well they will handle these problems.
The much larger challenge is object perception. Once you have a simulation of what the camera should see, you can notice when there are things present that are not in the prediction — like another car or pedestrian, or a new road sign. (Right now their system mostly is looking at the ground.) Once you identify the new region, you can attempt to classify it using computer vision techniques, and also by watching it move against the expected background.
This is where it gets challenging, because the bar is very high. To be used for driving, it must effectively always work. Even if you miss only 1 pedestrian in a million, you have a real problem, because there are billions of pedestrian encounters by a billion drivers every day. This is why people love LIDAR — if something (other than a mirror or sheet of glass) sufficiently large is sufficiently close to you, you’re going to get laser returns from it, and not from what’s behind it. It has the reliability number that is needed.
The challenge of vision systems is to meet that reliability goal.
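The back-of-envelope arithmetic behind that bar can be made concrete. The driver and encounter counts here are rough assumptions of my own, not figures from any study:

```python
miss_rate = 1e-6          # suppose the system misses 1 pedestrian in a million
drivers = 1e9             # rough worldwide driver count (assumption)
encounters_per_day = 100  # pedestrian encounters per driver per day (assumption)

expected_misses_per_day = miss_rate * drivers * encounters_per_day
print(expected_misses_per_day)  # → 100000.0 missed detections per day, worldwide
```

Even a one-in-a-million failure rate, applied at fleet scale, produces failures constantly; that is why "effectively always works" is the real requirement.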
This work is interesting because it does a lot without relying on AI “computer vision” techniques. It is not trying to look at a picture and recognize a person. Humans are able to look at 2D pictures with bizarre lighting and still tell you not just what the things in the picture are, but often how far away they are and what they are doing. While we can be fooled in a 2D image, once you have a moving dynamic world, humans are generally reliable enough at spotting other things on the road. (Though of course, with 1.2 million dead each year, and probably 50 million or more accidents, the majority because somebody was “not looking,” we are far from perfect.)
Some day, computer vision will be as good at recognizing and understanding the world as people are — and in fact surpass us. There are fields (like identifying traffic signs from photos) where they already surpass us. For those not willing to wait until that day, new techniques in perception that don’t require full object understanding are always interesting.
I should also point out that while lowering cost is of course a worthwhile goal, it is a false goal at this time. Today, maximal safety is the overriding goal, and as such, nobody will actually release a vehicle to consumers without LIDAR just to save the estimated 2017 cost of LIDAR, which will be sub-$500. Only later, when cameras get so good that they can completely replace LIDAR’s safety capabilities for less money, would people release such a system to save cost. On the other hand, improving cameras to be used together with LIDAR is a real goal: superior safety, not lower cost.
Submitted by brad on Thu, 2015-01-22 12:13.
Let me confess a secret fear. I suspect that the first “autopilot” functions on cars are going to be a bit boring.
I’m talking about offerings like traffic jam assist from Mercedes, super cruise from Cadillac and others: the faster highway assist versions which combine ADAS functions like lane-keeping and adaptive cruise control to keep the car in its lane and a fixed distance from the car in front of you. This is what Tesla has promoted and what scrappy startup “Cruise” plans to offer as a retrofit later this year. In NHTSA’s flawed “levels” document, it is what could be called supervised type 2.
Some of them also offer lane change, if you approve the safety of the move.
All these products will drive your car, slow or fast, on highways, but they require your supervision. They may fail to find the lane in certain circumstances, because the markers are badly painted, or confusing, or just missing, or the light is wrong. When they do, they’ll kick out and insist you drive. They’ll really insist, and you are expected to be behind the wheel, watching and grabbing it quickly — ideally even noticing the failure before the system does.
Some will kick out quite rarely. Others will do it several times during a typical commute. But the makers will insist you be vigilant, not just to cover their butts legally, but because in many situations you really do need to be vigilant.
Testing shows that operators of these cars get pretty confident, especially if the systems are not kicking out very often. They do things they are told not to do: pick up things to read, do e-mails and texts. This is no surprise — people are texting even now, when the car isn’t driving for them at all.
To reduce that, most companies are planning what they call “countermeasures” to make sure you are paying attention to the road. Some of them make you touch the wheel every 8 to 10 seconds. Some will have a camera watching your eyes that sounds an alarm if you look away from the road for too long. If you don’t keep alert and ignore the alarms, the cars will either come to a stop in the middle of the freeway, or perhaps even just steer wildly and run off the road. Some vendors are talking about how to get the car to pull off safely to the side of the road.
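Countermeasure logic of this sort amounts to a simple escalation timer. The class name, thresholds, and responses below are hypothetical illustrations of mine, not any vendor’s actual design:

```python
from dataclasses import dataclass

@dataclass
class AttentionWatchdog:
    """Hypothetical countermeasure sketch: escalate if the driver gives
    no sign of attention (wheel touch, eyes on road) for too long."""
    alarm_after: float = 8.0    # seconds without attention before the alarm
    stop_after: float = 15.0    # seconds before a controlled stop begins
    idle: float = 0.0           # seconds of inattention accumulated so far

    def tick(self, dt: float, attentive: bool) -> str:
        # Any sign of attention resets the timer; otherwise it accumulates.
        self.idle = 0.0 if attentive else self.idle + dt
        if self.idle >= self.stop_after:
            return "begin controlled stop"
        if self.idle >= self.alarm_after:
            return "sound alarm"
        return "ok"

w = AttentionWatchdog()
print(w.tick(5.0, attentive=False))   # ok (5 s idle)
print(w.tick(4.0, attentive=False))   # sound alarm (9 s idle)
print(w.tick(7.0, attentive=False))   # begin controlled stop (16 s idle)
print(w.tick(1.0, attentive=True))    # ok (timer reset)
```

The open design question is what the final escalation should be, which is exactly the freeway-stop versus pull-to-the-side debate above.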
There is debate about whether all this will work, whether the countermeasures or other techniques will assure safety. But let’s leave that aside for a moment, and assume it works, and people stay safe. I’m now asking the harder question: is this a worthwhile product?
I’ve touted it as a milestone — a first product put out to customers. That Mercedes offered traffic jam assist in the 2014 S-Class, and others followed with that and freeway autopilots, is something I tell people in my talks to make it clear this is not just science fiction ideas and cute prototypes. Real, commercial development is underway.
That’s all true, and I would like these products. What I fear, though, is whether it will be that much more useful or relaxing than adaptive cruise control (ACC). You probably don’t have ACC in your car. Uptake on it is quite low — as an individual add-on, usually costing $1,000 to $2,000, only 1-2% of car buyers get it. It’s much more commonly purchased as part of a “technology package” for more money, and it’s not clear what the driving force behind the purchase is.
Highway and traffic jam autopilot is just a “pleasant” feature, as is ACC. It makes driving a bit more relaxing, once you trust it. But it doesn’t change the world, not at all.
I admit to not having this in my car yet. I’ve sat in the driver’s seat of Google’s car a number of times, but there I’ve been on duty to watch it carefully. I got special driver training to assure I had the skills to deal with problem situations. It’s very interesting, but not relaxing. Some folks who have commuted long term in such cars have reported it to be relaxing.
A step to greater things?
If highway autopilot is just a luxury feature, and doesn’t change the world, is it a stepping stone to something that does? From a standpoint of marketing, and customer and public reaction, it is. From a technical standpoint, I am not so sure.
Submitted by brad on Mon, 2015-01-19 12:38.
For many decades, cameras have come with a machine screw socket (1/4”-20) in the bottom to mount them on a tripod. This is slow to use and easy to get loose, so most photographers prefer to use a quick-release plate system. You screw a plate on the camera, and your tripod head has a clamp to hold those plates. The plates are ideally custom made so they grip an edge on the camera to be sure they can’t twist.
There are different kinds of plates, but in the middle to high end, most people have settled on a metal dovetail plate first made by Arca Swiss. It’s very common with ball-heads, but still rare on pan-heads and lower end tripods, which use an array of different plate styles, including rectangles and hexagons.
The plates have issues — they add weight to your camera, and put something with protruding or semi-sharp edges on the bottom. They sometimes block doors on the bottom of the camera. If they are not custom, they can twist, and if they are custom, they can be quite expensive. They often have tripod holes, but those must be off-center.
Arca style dovetails are quite sturdy, but must be metal. With only the 2 sides clamped they can slide to help you position the camera. It is hard, but not impossible to make them snap in, so they usually are screwed and unscrewed which takes time and work and often involves a knob which can get in the way of other things. They are 38mm wide, and normally the dovetails are parallel to the sensor plane, though for strength the plates on big lenses are sometimes perpendicular, which is not an issue for most ball heads.
It’s time the camera vendors accepted that the tripod screw is a legacy part and moved to some sort of quick release system standardized and built right into the cameras. The dovetail can probably be improved on if you’re going to start from scratch, and I’m in favour of that, but for now it is almost universal among serious photographers, so I will discuss how to use that.
I have seen a few products like this — for example the E-mount to EOS adapter I bought includes a tripod wedge which has both a screw and ARCA dovetails. (Considering the huge difference in weight between my mirrorless cameras and old Canon glass, this mount is a good idea.)
Many cameras are deep enough that a 38mm wide dovetail (with tripod hole) could be built into the base of the camera. You would have to open the clamp fully to insert unless you wanted the dovetails to run the entire length, which you don’t, but I think most photographers would accept that to have something flush. It would expand the size of the camera slightly, perhaps, but much less than putting on a plate does — and everybody with high end cameras puts on a plate.
Today, though, many cameras have flip-up screens. They are certainly very handy. As people want their screens as big as possible, this can be an issue as the screen goes down flush with the bottom. If there’s a clamp on the bottom, it can block your screen from getting out. One idea would be to design clamps that taper away at the back, or to accept the screen won’t go down all the way.
The smaller cameras
A lot of new cameras are not 38mm deep, though. Putting plates on them is even worse, as they stick out a lot. While again, a new design would help solve this problem, one option would be to standardize on a narrower dovetail, and make clamps that have an adapter that can slide in, seat securely so it won’t pop when the pressure is applied, and hold the narrower plate. That, or have a clamp with a great deal of travel, but that tends to take a lot of time to adjust. (I will note that there are 2 larger classes of dovetails used for heavy telescopes, known as the Vixen and the Losmandy “D”. Some Vixen clamps are actually able to grab an Arca plate, even though they are not as deep, because of the valley often formed with the dovetail and the top of the plate.)
It’s also possible to have a 2 level clamp that can grab a smaller plate but there must be a height gap, which may or may not work.
Narrower plates would be used only on smaller and lighter cameras, where not as much strength is needed. However, here again it might be time to design something new.
A locking pin
For some time, camcorders have established a pattern of having a small hole forward of the tripod screw for a locking pin. This allows a much sturdier mount that can’t twist, with no need to grab edges of the camera body. Still cameras could do well to establish pin positions — perhaps one forward, and one to the side. All they have to do is have small indentations for these pins, which typically come spring-loaded on the plates so you can still use them if the hole is not there. (The camcorder pin is placed forward of the tripod hole, but often “forward” is in the direction of the rails.)
For small cameras, it would be necessary to put the dovetail rails perpendicular to the sensor, and they would be very short. That’s OK because those cameras are small and light. The clamp screws would need to be flush with the top of the clamp. (This is sometimes true but not always.)
The presence of a pin would allow small, generic clamps to sturdily hold many cameras. For larger cameras, bigger plates would be available. The cost and size of plates would go down considerably.
The tripod leg screw
The world also standardized on using a bigger machine screw — 3/8”-16 thread — to connect tripod legs to tripod heads. This is a stronger screw, but it could also use improvement. The fact that it takes time to switch tripod heads is not that big a deal for most photographers, but the biggest problem is that there is no way, other than friction, to lock it, and many is the time that my tripod head has come loose from my legs. Here, some sort of clamp or retractable pin would be good, but frankly another clamp (quick release or not) might make sense, and it could become a standard for heavier duty cameras as well.
Something entirely new
I would leave it to a professional mechanical engineer to design something new, but I think a great system would scale to different sizes, so that one can have variants of it for small, light devices, and variants for big, heavy gear, with a way that the larger clamps could easily adapt to hold some of the smaller sizes. I would also design it to be backwards compatible if practical — it is probably easy to leave a 1/4-20 hole in the center, and it may even be possible in the larger sizes to have dovetails that can be gripped by such clamps.
Submitted by brad on Fri, 2015-01-16 17:05.
In my earlier article on robocar challenges I gave very brief coverage to the issue of parking. Challenged on that, I thought it was time to expand.
The word “parking” means many things, and the many classes of parking problems have varying difficulties.
The taxi doesn’t park
One of the simplest solutions to parking involves robotaxi service. Such vehicles don’t really park, at least not where they dropped you off. They drop you off and go to their next customer. If they don’t have another ride, they can deliberately go to a place where they know they can easily park to wait. They don’t need to tackle a parking space that’s challenging at all.
Simple non-crowded lots
Parking in basic parking lots — typical open ground lots that are not close to full — is a pretty easy problem. So easy in fact, that we’ve seen a number of demonstrations, ranging back to Junior 3 and Audi Piloted Parking. Cars in the showroom now will identify parking spots for you (and tell you if you fit.) They have done basic parallel parking (with you on the brakes) for several years, and are starting to now even do it with you out of the car (but watching from a distance.) At CES VW showed the special case of parking in your own garage or driveway, where you show the car where it’s going to go.
The early demos required empty parking lots with no pedestrians, and even no other moving cars, but today reasonably well-behaved other cars should not be a big problem. That’s the thing about non-crowded lots: People are not hunting or competing for spaces. The robocars actually would be very happy to seek out the large empty sections at the back of most parking lots because you aren’t going to be walking out that far, the car is going to come get you.
The biggest issue is the question of pedestrians who can appear out from behind a minivan. The answer to this is simply that vehicles that are parking can and do go slow, and slow automatically gives you a big safety boost. At parking lot speed, you really can stop very quickly if a pedestrian appears out of nowhere. The car, after all, is not in a hurry, and can slow itself when close to minivans, or if it has noticed pedestrians who are moving near it and have disappeared behind vehicles. Out at the back of a parking lot, nobody cares if you go 5 km/h, or even right down the center of the lane to assure there are no surprises.
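The physics backs this up. Here is a short sketch using the standard stopping-distance formula (reaction travel plus v²/2a); the deceleration and sensor-latency values are illustrative assumptions, not measured figures:

```python
def stopping_distance(speed_kmh, decel=5.0, latency=0.2):
    """Total distance (m) to stop: travel during the reaction latency,
    plus braking distance v^2 / (2a). decel (m/s^2) and latency (s)
    are illustrative assumptions."""
    v = speed_kmh / 3.6                  # convert km/h to m/s
    return v * latency + v ** 2 / (2 * decel)

print(round(stopping_distance(5), 2))    # parking-lot crawl: under half a meter
print(round(stopping_distance(50), 2))   # typical city speed, for contrast
```

At 5 km/h the car stops in well under a meter, which is why low speed alone buys most of the safety margin around minivans and blind corners.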
To the right we see a picture of Junior 3 entering a parking lot, hunting for a space and taking it — in 2009.
Mapping is still desirable for parking lots. This is particularly true because parking lots, not being public roads, set up their own sets of rules and put up signs meant only for humans. They may direct traffic to be one-way in certain areas in nonstandard ways. They may have gates when you have to pay or insert tickets. Parking spots will be marked reserved for certain cars (Electric vehicle, expectant mother, wheelchair, employee of the month, CEO, customers of company X) with signs meant for humans.
It’s not necessarily super hard to map a parking lot, just time consuming to encode all these rules. Unlike roads, which everybody drives, any given parking lot likely only serves the people who live, work or shop next to it — you will never park in 95% of the lots in your city, though you will drive most of its main roads. Somebody has to pay for the cost of that mapping — either because lots of people want to use the lot, or because the owner of the lot wants to encourage robocars. Fortunately, with the robocars doing things like using the least popular spots, or even valet parking as described below, there is a strong incentive for the owner of a lot to get it mapped and keep it mapped. Only lots that never fill up would have no incentive, and those lots can often be parked in without a map.
While you want trained mappers to confirm the geometry of a parking lot, coding in the signs and special rules is a task easily left to the parking lot owner. If the lot manager forgets to tag the CEO’s space as reserved, nobody is hurt (except the lot manager when the CEO arrives.)
Robocar parking mistakes are easy to fix. Robocars can put a phone number or URL on the back where you can go to complain about a robocar that is parked badly or blocking things. As long as that doesn’t happen too often, the cost of the support desk is manageable. The folks at the support desk can look out with the robot’s sensors and tell it to move. It’s not like finding a human driven car blocking something, where you have to find the owner. In a minute, the robocar will be gone.
More crowded lots
The challenge of parking lots, in spite of the low speeds, is that they don’t have well defined rules of the road. People ignore the arrows on the ground. They pause and wait for cars to exit. In really crowded lots, cars follow people who are leaving at walking speed, hoping to get dibs on their spot. They wait, blocking traffic, for a spot they claim as theirs. People fight for spots and steal spots. People park badly and cross over the lines.
As far as I know, nobody has tried to solve this challenge, and so it remains unsolved. It is one of the few problems in robocars that actually deserves the label of “AI,” though some think all driving is AI.
Even so, on the grand scheme of things, my intuition is that this is not one of the grand unsolved challenges of AI. Parking lots don’t have legalized rules of the road, but they do have rules and principles, and we all learn them the more we park. Creating a system that can do well with these rules using various AI tools seems like a doable challenge when the time comes. My intuition is that it’s a lot easier than winning on Jeopardy. This system will be able to take advantage of a couple of special abilities of the robocars:
- They will be able to park and exit spots quickly and efficiently. They won’t be like the people you always see who do a 5 point turn to exit their parking spot when you (but not they) can see they still have 5 feet of room behind them.
- In general, they will be superb parkers, centering themselves as well as possible inside spots.
- They don’t need room to open their doors, so they can park right next to walls and pillars.
- Yes, they could also park right next to badly parked cars which have encroached into other spaces and thus made a space no human can use. There is a risk of course that the bad parker, who finds they can’t get in one side, might retaliate. (I’ve had a guy rip my mirror off in revenge.) In this case, though, they will have a photo of the licence plate and a sensor record of the revenge taking place!
- In the event of problems or deadlock, they are open to the idea of just giving up and parking somewhere farther away that is easier to park in. Unlike humans they could drive as quickly in reverse as forward to back out of situations.
In spite of all this, the cars will want to avoid the full parking lots where the chaos happens. If there is another lot not far away, they will just go there, and require a couple minutes more advance notice from their master when summoned to pick them up. If there is nowhere nearby to park, the car will tell its passenger that she has to do the parking.
Even in the most crowded lots, there is the potential to easily create zones of the parking lot that are marked:
“Robot Valet Parking only. All other cars may be blocked in or towed. No pedestrians.”
In the car’s map, it will indicate what server is handling the robo-valet section, though it is possible to have it work without any communication at all.
In the most basic version, the car would ask permission to enter the lot. The database might even assign it a spot, but generally it would just enter and take any spot. By “any spot”, I mean any piece of pavement, ignoring the lines on the ground. At first the cars would choose spots that let them have an unblocked path to leave. As soon as too many cars arrive to do that, they would switch to a more dense, valet pattern that blocks in some cars (the ones who said they were leaving latest). It would report where it parked to the database, as well as how to send it a message, and when it expects to leave.
Other cars would arrive. Eventually one would block in your car. If the database has given them a way to communicate (probably over the internet, though if they had V2V they could use that) they might discuss who plans to leave first, and the cars would adjust themselves to put the cars that will leave sooner at the front. This is strongly in the interests of the cars. If you plan to be there a while, you want to go to the back so you don’t have to keep moving to let cars behind you out. But it still works, just not as well, if the cars just take any available spot.
When it’s time to leave, the cars could try to send a message over the data networks to the cars in front of them, but a simpler approach might be to just nudge slightly forward — a few cm will do it. This will cause the car in the direction of the nudge to notice, and it too would nudge forward, and so on, and so on until the front car moves out, and then all the cars in that row can move out, including your car, which leaves the lot. Then the other cars can move in to fill the spot. If they have a database which maps the cars in that section, they could try to be clever in how they re-fill the empty column to minimize movement.
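The two ideas above (sorting the row by planned departure, and letting a blocked-in car out via the nudge chain) can be sketched as simple list operations. The function names and the departure-time table are my own illustration, not a proposed protocol:

```python
def pack_row(departures):
    """Order a valet row so the car leaving soonest sits at the open end,
    minimizing future shuffling. departures: name -> planned exit time."""
    return sorted(departures, key=departures.get)

def exit_row(row, leaver):
    """Nudge chain: every car between the leaver and the open end pulls
    out briefly, the leaver departs, and the rest re-fill in order."""
    i = row.index(leaver)
    return row[:i] + row[i + 1:]

departures = {"A": 9, "B": 17, "C": 12, "D": 14}
row = pack_row(departures)
print(row)                  # ['A', 'C', 'D', 'B'] -- earliest leaver at the front
print(exit_row(row, "C"))   # ['A', 'D', 'B'] -- C nudges A out and departs
```

Note that if the cars ignore the ordering and park in arrival order, `exit_row` still works; the sorting just reduces how often the chain has to run.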
There are even faster algorithms if you leave a few empty spaces. Robocars have the ability to move in concert to “move the space” and put it next to a car that wants to exit. It’s more efficient, but not needed.
The database becomes more useful if a human driver ignores the signs and tries to park in the lot. That’s because the database is the simplest way of spotting a vehicle that’s not supposed to be there. As a first step, the cars in the lot could start flashing their lights and honking their horns at the interloper, or even speak human language messages out a speaker. “Hey, this is the robot valet lot, you are blocking me in! We’re calling a tow truck to come remove you if you don’t leave.” Some idiots may still try, and the robots could arrange so that almost all of them can still get out, and if not, they might call that tow truck.
The robo-valet section can be at the back of the parking lot, or the top of a structure — those places the humans park in last. The owner of the lot has a huge incentive to do this, since they can make much more efficient use of their land with the tight valet-dense parking. All the owner has to do is register the lot section in a database — a database that a company like Google would probably be happy to offer for free to benefit their cars.
Human valets could also park cars in this area. They would just need to use an app on their smartphone that tells them where to park and allows them to register that they did it. The robots will want the human-parked cars to park at the back, because they will move out of the way when it’s time for the human parked car to be driven back out.
The main requirements for this parking area would be that it be reachable from the outside without going through a zone of chaos, and that it then be possible to also reach the pickup/dropoff point for passengers without the risk of getting stuck in chaos. Larger lots tend to have entrance lanes without spots on them that serve this purpose.
Pedestrians will still enter the lot, in spite of the sign. Just go extra slow if they are there, and perhaps talk to them and ask them to leave. While you won’t actually present a danger to them at your low speed, they probably will heed the advice of 3000lb robots. Perhaps tell them they have 15 seconds to put down their weapon.
To get really clever, the sign marking the border of the Robo-Valet area might itself be on a small robot. Thus, when the robo-valet area gets full, the sign can move to expand the area if space is available. You could expand even into areas occupied by human-parked cars — just know that they are there and don’t block them in — or move out of their way when needed. Eventually they leave and only robocars enter.
When the demand goes down, the sign can easily move to shrink the valet area.
Submitted by brad on Fri, 2015-01-16 13:33.
I’m sure, like me, you have lots of electronic gadgets that have status LEDs on them. Some of these just show the thing is on, some blink when it’s doing things. Of late, as blue LEDs have gotten cheap, it has been very common to put disturbingly bright blue LEDs on items.
These become much too bright at night, and can be a serious problem if the device needs to be in a bedroom or hotel room — which things like laptops, phone and camera chargers and many other devices need to do. I end up putting small pieces of electrical tape over these blue LEDs.
I call upon the factories of Shenzhen and elsewhere to produce low cost, standardized status LEDs. These LEDs will come with an included photosensor that measures the light in the room, and adjusts the LED so that it is just visible at that lighting level. Or possibly turns it off in the dark, because do we really need to know that our charger is on after we’ve turned off the lights?
Of course, one challenge is that the light from the LED gets into the photosensor. For most LEDs, the answer is pretty easy — put a filter that blocks out the colour of the LED over the photosensor. If you truly need a white LED, you could make a fancy circuit that turns it off for a few milliseconds every so often (the eye won’t notice that) and measures the ambient light while it’s off. All of this is very simple, and adds minimally to the cost. (In fact, the way you adjust the brightness of an LED is typically to turn it on and off very fast.)
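The adjustment logic above can be sketched in a few lines, assuming a 10-bit photosensor and a PWM-driven LED (the thresholds are illustrative, not from any real part; the ambient reading is assumed to be taken during a blanking interval so the LED’s own glow doesn’t feed back in):

```python
# Sketch of auto-dimming status LED logic (all values illustrative).
# The ambient reading is taken while the LED is briefly blanked, so the
# LED's own light doesn't pollute the photosensor measurement.

DARK_THRESHOLD = 5    # below this ambient reading, turn the LED off entirely
MAX_AMBIENT = 1023    # full-scale reading of a 10-bit photosensor

def led_duty_cycle(ambient):
    """Map an ambient light reading (0-1023) to a PWM duty cycle (0.0-1.0)."""
    if ambient < DARK_THRESHOLD:
        return 0.0                    # room is dark: nobody needs the LED
    # Scale brightness with ambient light so the LED stays just visible,
    # with a small floor so it never becomes imperceptibly dim.
    return max(0.02, min(1.0, ambient / MAX_AMBIENT))
```

Since PWM already switches the LED on and off thousands of times a second, stealing a few milliseconds of off-time for the ambient measurement costs nothing visible.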
Get these made and make it standard that all our gear uses them for status LEDs. Frankly, I think it would be a good idea even for consumer goods that don’t get into our bedrooms. My TV rooms and computer rooms don’t need to look like Christmas scenes.
Submitted by brad on Thu, 2015-01-15 17:45.
Robocar news continues after CES with announcements from the Detroit Auto Show (and a tiny amount from the TRB meeting.)
Google doesn’t talk a lot about their car, so an address by Chris Urmson at the Detroit Auto Show generated a lot of press. Notable statements from Chris included:
- A timeline of 2 to 5 years for deployment of a vehicle
- Public disclosure that Roush of Michigan acted as contract manufacturer to build the new “buggy” models — an open secret since May
- A list of other partners involved in building the car, such as Continental, LG (batteries), Bosch and others.
- A restatement that Google does not plan to become a car manufacturer, and feels working with Detroit is the best course to make cars
- A statement that Chris does not believe regulation will be a major barrier to getting the vehicles out, and they work regularly to keep NHTSA informed
- A few more details about Google’s own LIDAR, indicating that units are the size of coffee cups. (You will note the new image of the buggy car does not have a Velodyne on the roof.)
- More indication that things like driving in snow are not in the pipeline for the first vehicles
Almost all of this has been said before, though the date forecasts are moved back a bit. That doesn’t surprise me. As Google-watchers know, Google began by doing extensive, mostly highway based testing of modified hybrid cars, and declared last May that they were uncomfortable with the safety issues of doing a handoff to a human driver, and also that they have been doing a lot more on non-highway driving. This culminated with the unveiling of the small custom built buggy with no steering wheel. The shift in direction (though the Lexus cars are still out there) will expand the work that needs to be done.
Car company announcements out of the Detroit show were minor. The press got all excited when one GM executive said they “would be open to working with Google.” While I don’t think it was actually an official declaration, Google has said many times they have talked to all major car companies, so there would be no reason for GM to go out to the press to say they want to talk to Google. Much PR over nothing, I suspect.
Ford, on the other hand, actually backtracked and declared “we won’t be first” when it comes to this technology. I understand their trepidation. Being first does not mean being the winner in this game. But neither does being 2nd — there will be a time after which the game is lost.
There were concept vehicles displayed by Johnson Controls (a newcomer) and even a Chinese company which put a fish tank in the rear of the car. You could turn the driver’s seat around and watch your fish. Whaa?
In general, car makers were pushing their dates towards 2025. For some, that was a push back from 2020; for others, a push forward from 2030, as both of those numbers have been common in predictions. I guess now that it’s 2015, 2020 is just too realistic a number to make an uncertain prediction about.
Earlier, Boston Consulting Group released a report suggesting robocars would be a $42B market in 2025 — the car companies had better get on it. With the global ground transportation market in the range of $7 trillion in my guesstimate, that’s a drop in the bucket, but also a huge number.
News from the Transportation Research Board annual meeting has been sparse. The combined conference of the TRB and AUVSI on self-driving cars in the summer has been the go-to conference of late, and other things usually happen at the big meeting. Released research suggested 10% of vehicles could be robocars in 2035 — a number I don’t think is nearly aggressive enough.
There also was tons of press over the agreement between NASA Ames and Nissan’s Sunnyvale research lab to collaborate. Again, not a big surprise, since they are next door to one another, and Martin Sierhuis, the director of the research lab, made his career over at NASA. (Note of disclosure: I am good friends with Martin, and Singularity U is based at the NASA Research Park.)
Submitted by brad on Thu, 2015-01-08 19:55.
Day 3 at CES started with a visit to BMW’s demo. They were mostly test driving new cars like the i3 and M series cars, but for a demo, they made the i3 deliver itself along a planned corridor. It was a mostly stock i3 electric car with ultrasonic sensors — and the traffic jam assist disabled. When one test driver dropped off the car, they scanned it, and then a BMW staffer at the other end of a walled course used a watch interface to summon that car. It drove empty along the line waiting for test drives, and then a staffer got in to finish the drive to the parking spot where the test driver would actually get in, unfortunately.
Also on display were BMW’s collision avoidance systems in a much more fully equipped research car with LIDARs, radar etc. This car has some nice collision avoidance: it has obstacle detection — in the demo you deliberately drive at an obstacle, and the vehicle hits the brakes for you, more gently than the Volvo I did this in a couple of years ago.
More novel is detection of objects you might hit from the side or back in low speed operations. If it looks like you might sideswipe or back into a parking column or another car, the vehicle hits the brakes on you (harder) to stop it from happening.
Insurers will like this — low speed collisions in parking lots are getting to be a much larger fraction of insurance claims. The high speed crashes get all the attention, but a lot of the payout is in low speed.
I concluded with a visit to my favourite section of CES — Eureka Park, where companies get small lower cost booths, with a focus on new technology. Also in the Sands were robotics, 3D printing, health, wearables and more — never enough time to see it all.
I have added 12 more photos to my gallery, with captions — check the last part out for notes on cool products I saw, from self-tightening belts and regenerating roller skates to phone-charging camping pots.
Submitted by brad on Wed, 2015-01-07 23:44.
After a short Day 1 at CES, Day 2 was a fuller one, full of the usual equipment — cameras, TVs, audio and the like — and visits to several car booths.
I’ve expanded my gallery of notable things with captions with cars and other technology.
Lots of people were making demonstrations of traffic jam assist — simple self-driving at low speeds among other cars. All the demos were of a supervised traffic jam assist. This style of product (as well as supervised highway cruising) is the first thing that car companies are delivering (though they are also delivering various parking assist and valet parking systems.)
This makes sense as it’s an easy problem to solve. So easy, in fact, that many of them now admit they are working on making a real traffic jam assist, which will drive the jam for you while you do e-mail or read a book. This is a readily solvable problem today — you really just have to follow the other cars, and you are going slow enough that short of a catastrophic error like going full throttle, you aren’t going to hurt people no matter what you do, at least on a highway where there are no pedestrians or cyclists. As such, a full auto traffic jam assist should be the first product we see from car companies.
None of them will say when they might do this. The barrier is not so much technological as corporate — concern about liability and image. It’s a shame, because frankly the supervised cruise and traffic jam assist products are just in the “pleasant extra feature” category. They may help you relax a bit (if you trust them) as cruise control does, but they give you little else. A “read a book” level system would give people back time, and signal the true dawn of robocars. It would probably sell for lots more money, too.
The most impressive car is Delphi’s, a collaboration with folks out of CMU. The Delphi car, a modified Audi SUV, has no fewer than 6 4-plane LIDARs and an even larger number of radars. It helps if you make the radars, as otherwise this is an expensive bill of materials. With all the radars, the vehicle can look left and right, and back left and back right, as well as forward, which is what you need for dealing with intersections where cross traffic doesn’t stop, and for changing lanes at high speed.
As a refresher: Radar gives you great information, including speed on moving objects, and sucks on stationary ones. It goes very far and sees through all weather. It has terrible resolution. LIDAR has more resolution but does not see as far, and does not directly give you speed. Together they do great stuff.
For notes and photos, browse the gallery
Submitted by brad on Tue, 2015-01-06 23:11.
A reasonable volume of robocar related stuff here at CES. I just had a few hours today, and went to see the much touted Mercedes F015 “Luxury in Motion.” This is a concept and not a planned vehicle, but it draws together a variety of ideas — most of which we’ve seen before — with some new explorations.
The vehicle has a long wheelbase design to allow it to have a very large passenger compartment, which features just 4 bucket seats, the front two of which can rotate to create face to face seating. (In addition, they can rotate to make it easier to get into the car.) We’ve seen a number of face to face concepts and designs and I’ve been interested in the idea from the start, the idea of making car travel more social and better for both families and co-workers. As a plus, rear facing seats, though less comfortable for some fraction of the population, are going to be safer in a front end collision.
The vehicle features a bevy of giant touchscreens. We see a lot of this in concepts, but I will note that we don’t have giant touchscreens at our desks or in our homes. I suspect passengers in robocars will prefer the tablets they already have, though there is the issue that looking down at a tablet sometimes generates motion sickness.
The interior has an odd mix of carpet and hardwood, perhaps trying to be more like a living room.
More interesting, though not on display, are the vehicle’s systems for communicating with pedestrians and other road users. These include LEDs that can indicate if the car is self-driving (boring, and something I pushed to have removed from the Nevada law), but more interesting are indicators that help to tell pedestrians the vehicle has seen them. One feature, which is only likely to work at night, laser-projects a crosswalk in front of the vehicle when it stops, to tell a pedestrian it sees them and is expecting them to cross in front. It can also make LED words at the back for other cars (something that is, I think, illegal in some jurisdictions).
Also interesting has been the press reaction. Wired thinks it’s bonkers and not designed very well. The bonkers part is because the writer thinks it de-emphasizes driving too much. Of course, those of that stripe are quite upset at Google’s car with no controls. Other writers have liked the design, and find it quite superior to Google’s non-threatening design, suggesting the Google design is for regulators and the Mercedes design is for customers. Google plans to get approval for their car and operate it, while Mercedes is just using the F015 as a concept.
I have a gallery of several pictures of the car which I will add to during the week. In the gallery you will also see:
Audi Piloted Driving prototype
Audi drove one of their cars from the Bay Area to CES, letting press members take 100-mile stints. It also helped them learn things about different conditions. One prototype is in the booth; I will go out to see the real car outdoors tomorrow.
TRW was showing off their technology with a transparent model showing where they had put an array of radars to make 360 degree radar and camera coverage. No LIDAR, but they will probably get one eventually. Radar’s resolution is low, but they believe that by fusing the radar and the camera views they can get very good perception of the road.
There is more for me to see tomorrow. Ford showed more of their ADAS systems and also their Focus, which has 4 of the 32-plane Velodyne LIDARs on it. Toyota showed only a hydrogen fuel cell car. Valeo has some interesting demos I will want to see — they have promised a good traffic jam assist. While they have not said so, I think the most interesting car company robocar function will be a traffic jam assist which does not require supervision — i.e. you can read. While no car company is ready to have the driver out of the loop at high speeds, doing it at traffic jam speeds is much easier, because mainly you just have to follow the other cars, and you stop self-driving if the jam opens up. Several companies are working on a product like this, and I suspect it will be the first real robocar product to reach the market that is actually practical. The “super cruise” products which drive while you watch are pleasant, but not much more world-changing than adaptive cruise control. When the car can give people time back, even if it’s only the traffic jam time, then something interesting starts happening.
Submitted by brad on Mon, 2015-01-05 15:28.
When Southwest started using tablets for in-flight entertainment, I lauded it. Everybody has been baffled by just how incredibly poor most in-flight video systems are. They tend to be very slow, with poor interfaces and low resolution screens. Even today it’s common to face a small widescreen that takes a widescreen film, letterboxes it and then pillarboxes it, with only an option to stretch it and make it look wrong. All this driven by a very large box in somebody’s footwell.
I found out one reason why these systems are so outdated. Apparently, all seatback screens have to be safety tested, to make sure that if you are launched forward and hit your head on the screen, it is not more dangerous than it needs to be. Such testing takes time and money, so these systems are only updated every 10 years. The process of redesigning, testing and installing takes long enough that it’s pretty sure the IFE system will seem like a dinosaur compared to your phone or tablet.
One airline is planning to just safety test a plastic case for the seatback into which they can insert different panels as they develop. Other airlines are moving to tablets, or providing you movies on your own tablet, though primarily they have fallen into the Apple walled garden and are doing it only for the iPad.
The natural desire is just to forget the airline system and bring your own choice of entertainment on your own tablet. This is magnified by the hugely annoying system which freezes the IFE system on every announcement. Not just the safety announcements. Not just the announcements in your language, but also the announcement that duty free shopping has begun in English, French and Chinese. While a few airlines let you start your movie right after boarding, you don’t want to do it, as you will get so many interruptions until the flight levels off that it will drive you crazy. The airline provided tablet services also do this interruption, so your own tablet is better.
In the further interests of safety, new rules insist you can only use the airline’s earbud headphones during takeoff and landing, not your nice noise-cancelling headphones. But you didn’t pick up the earbuds, since you have the nicer ones. The theory is that your nice headphones might make you miss a safety announcement when landing, even though they tend to block background noise and actually make speech clearer.
One of the better IFE systems is the one on Emirates. This one, I am told, knows who you are, and if you pause a show on one flight, it picks up there on your next flight. (Compare that to so many systems that often forget where you were in the film on the same flight, and also don’t warn you if you won’t be able to finish the movie before the system is turned off.)
Using your own tablet
It turns out to be no picnic using your own tablet.
- You have to remember to pre-load the video, of course
- You have to pay for it, which is annoying if:
- The airline is already paying for it and providing it free in the IFE
- You have it on netflix/etc. and could watch it at home at no cost
- You wish to start a movie one day and finish it on another flight, but don’t want to pay to “own” the movie. (Because of this I mostly watch TV shows, which only have a $3 “own” price and no rental price.)
How to fix this:
- IFE systems should know who I am, know my language, know if I have already seen the safety briefing, and not interrupt me for anything but new or plane-specific safety announcements in my chosen language.
- Like the Emirates systems, they should know where I am in each movie, as well as my tastes.
- How to know the language of the announcement? Well, you could have a button for the FA to push, but today software is able to figure out the language pretty reliably, so an automated system could learn the languages and the order in which they are done on that flight. Software could also spot phrases like “Safety announcement” at the start of a public address, or there could be a button.
- Netflix should, like many other services, allow you to cache material for offline viewing. The material can have an expiration date, and the software can check when it’s online to update those dates, if you are really paranoid about people using the cache as a way to watch stuff after it leaves Netflix. Reportedly Amazon does this on the Kindle Fire.
- Online video stores (iTunes, Google Play, etc.) should offer a “plane rental” which allows you to finish a movie after the day you start it. In fact, why not have that ability for a week or two on all rentals? It would not let you restart, only let you watch material you have not yet viewed, plus perhaps a minute ahead of that.
- Perhaps I am greedy, but it would be nice if you could do a rental that lets 2 or more people in a household watch independently, so I watch it on my flight and she watches it on hers.
- If necessary, noise-cancelling headphones should have a “landing mode” that mixes in more outside sound, and a little airplane icon on them, so that we can keep them on during takeoff and landing. Or get rid of this pretty silly rule.
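The language-spotting idea in the list above can be sketched with a toy stopword counter — real systems would use a trained model, and the word lists here are purely illustrative:

```python
# Toy language identifier for cabin announcements -- a sketch only.
# Real systems use trained statistical models; this just counts how many
# common function words of each language appear in the announcement.

STOPWORDS = {
    "en": {"the", "and", "is", "of", "to", "we", "your", "for"},
    "fr": {"le", "la", "et", "est", "de", "vous", "votre", "pour"},
}

def guess_language(text):
    """Return the language whose stopwords best match the text."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)
```

An IFE or tablet app could run something like this on each public-address transcript and mute only announcements outside your chosen language.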
Choosing your film
There’s a lot of variance in the quality of in-flight films. Air Canada seems particularly good at choosing turkeys. Before they close the doors, I look up movies — if I can get the IFE system to work with all the announcements — in review sites to figure out what to watch. In November, at Dublin Web Summit, I met the developers of a travel app called Quicket, which specialized in having its resources offline. I suggested they include ratings for the movies on each flight — the airlines publish their catalog in advance — in the offline data, and in December they had implemented it. Great job, Quicket.
Submitted by brad on Fri, 2015-01-02 16:19.
One of air travel’s great curses is that you have to leave for the airport a long time before your flight. Airlines routinely “recommend” you be there 2 or 3 hours ahead, and airport ride companies often take it to heart and want to pick you up many hours before even short flights. The curse is strongest on short flights, where you can easily spend as much as twice the time getting to the flight as you spend in the air.
The reality, though, is that it’s not nearly that strict. I often arrive much later. I’ve missed 3 flights in my life — in two cases because cheap airlines literally had nobody at the counter past their cutoff deadline, and once because United’s automated bag check line was very long (I got there before the deadline) but their computer is fully strict on the deadline while humans usually are not. In all cases, I got on another flight, and the time lost to these missed flights is vastly less than the time gained by not being at the airport so early.
But it’s getting harder. Airlines are getting stricter, and in a few cases offering no flexibility.
The big curse is that many of the delays can’t be predicted. It may almost always take 20 minutes to get to the airport, but every so often traffic will make it 40. Security is usually only 5-10 minutes but there are times when it’s 30. Car rental return, parking shuttles, called taxis and Ubers can have unexpected delays. Parking lots can be full (as happened to me this xmas after Uber failed me.) Immigration can range from 2 minutes to 1.5 hours if you have to go to secondary screening. While in theory you could research this, sometimes at strange airports you are surprised to find it’s 30 minutes walk and people-mover to your gate.
If you ever fly privately, though, you will discover a different world, where even if you’re just a guest you can arrive a very short time before your flight. (If you’re the owner, of course, it doesn’t take off until you get there.) But there are many options that can speed your trip through the airport without needing to fly a private jet:
- Tools like Google Now track traffic and warn you when you need to leave earlier to get to the airport
- If you take a cab to the airport, you eliminate the delays of parking and car return
- Though rarer today, ability to check bags in advance at remote locations helps a lot
- Curb checking of bags is great, as of course is online check-in sent to your phone
- (Not checking bags is of course better, and any savvy flyer avoids it whenever they can, but sometimes you can’t.)
- Premium passengers get check-in gates with minimal lines, and premium security lines
- If you have a Global Entry or Nexus card, you can skip the immigration/customs line
- TSA PRE, “Clear” and premium passenger security lines provide a no-wait experience. Of course nobody should ever have to wait, ever.
- Failing that, offering appointments at security for a predictable security trip can remove the time risk
- Sometimes they also let people who are at risk of missing a flight skip past the security line (and some other lines)
- In some cases, premium passengers are shuttled in vehicles within the terminal or on the tarmac
- Business class passengers can board as late as they want (or as early) and still get a place in the bins on most flights
In addition, I believe that if you wanted to get your checked bag cleared quickly by the TSA for money, it could happen. Of course, we can’t have everybody do this all the time, or so I presume, because it would require too much in the way of resources. But what if we allowed you to do this occasionally, when factors beyond your control have made you late?
What is proposed is that every so often — perhaps one time in twenty — when factors like traffic, long security lines or other things mostly beyond your control made you late, you could invoke an urgent need, and still make your flight.
This would allow you to budget a more reasonable time to arrive.
What does this all add up to? It should be possible, at an extra cost, to get a quick trip through the airport. Say that cost is $200 (I don’t think it’s that much, but say that it is.) You could pay $10 extra per flight for “insurance” and be able to invoke an urgent trip every so often when things go wrong. It’s worth it to pay every trip because it gives you a benefit on every trip — you leave later, knowing you will make it even if traffic, security lines or similar factors would delay you too much.
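The premium arithmetic above can be checked with a two-line sketch (the $200 service cost and one-in-twenty invocation rate come from the post; the function name is mine):

```python
# The arithmetic behind the "urgency insurance" idea: a $200 fast-track
# service, invokable on roughly 1 flight in 20, is funded by a flat
# per-flight premium equal to the expected cost per flight.

def breakeven_premium(service_cost, invocation_rate):
    """Per-flight premium that exactly covers the expected service cost."""
    return service_cost * invocation_rate

premium = breakeven_premium(200, 1 / 20)   # -> 10.0 dollars per flight
```

A real price would add margin and administration, but the point stands: a modest flat fee covers an occasionally invoked expensive service.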
Some of the services you might get would include:
- Somebody meets your car at the curb, takes your keys, and then parks it or returns it to the car rental facility
- Another employee meets you and checks in your bags at the curb. Your bags are put in a special urgent queue in TSA inspection. If need be a staffer walks it through.
- A golf cart takes you to security if it’s not close, and you get to the front of the line.
- If your gate is far, another golf cart or escort takes you there
The natural question is, “why wouldn’t you want this all the time?” And indeed you would, and a large fraction of passengers would pay a fairly high fee to get this when they need it. Airlines might make it just part of the service with high-priced tickets or super-elite flyers, and I see no reason that should not happen. The price can be set so that the demand matches the supply, based on the cost of having extra employees to handle urgent passengers.
When it comes to more “public” resources like TSA screening, they have a simple rule. You can give premium services to premium passengers if what you do also speeds up the line for ordinary passengers. A simple implementation of this is to just pay for an extra screening station for the premium passengers, because now you don’t butt in line and in fact, by not being in the regular line at all, you speed it up for all in it. You don’t need to be so extravagant, however. For example, the “TSA PRE” line, which allows a faster trip through the X-ray (you don’t have to take anything out, or remove your shoes in this line) speeds up everybody, because we all wait behind people doing that. If you can show that the amount you speed up the whole process is greater than the delay you add by letting premium passengers jump the queue, it is allowed.
But as fancy as these services sound, with extra staff, they are really not that expensive. Perhaps just 20 minutes of employee time for most of it — more if they are driving your car to a parking lot for you. (Note that this curb hand-off is forbidden by most airports because car rental companies already would like to offer it to their top customers but it is believed that would be too popular and increase traffic. Special permission would need to be arranged.)
For the “insurance” approach, a few techniques could assure it was not being abused. The frequency of use is one of them, of course, but you could also give people an app for their phones. This app, using GPS and knowing a flight is coming, would know when you left for the airport. In fact, it could give you alerts as to when to leave based on information about traffic, parking and security wait times. If you left at the reasonable departure deadline, you would get the urgent service if traffic or other surprise factors made you late. If you left after that deadline, you would not be assured the fast track path.
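The left-by-the-deadline rule might look like this in the app (times are minutes since midnight; the 20-minute buffer and the yearly usage cap are my assumptions, not the post’s):

```python
# Sketch of the app's fast-track eligibility rule. Times are minutes
# since midnight; the buffer and usage cap are illustrative assumptions.

def recommended_leave_time(departure, travel_estimate, buffer=20):
    """Latest reasonable time to leave: travel estimate plus a small buffer,
    where travel_estimate comes from live traffic/parking/security data."""
    return departure - travel_estimate - buffer

def fast_track_eligible(left_at, deadline, uses_this_year, max_uses=3):
    """Grant the urgent service only if the traveller left by the app's
    recommended deadline and hasn't exhausted the yearly allowance."""
    return left_at <= deadline and uses_this_year < max_uses
```

The GPS trace gives the app `left_at` without the traveller doing anything, which is what makes the abuse check painless.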
What would be better would be an app that actually works with all the airport functions you will interact with — check-in, the gate, bag check, passenger screening, parking lots, rental cars, traffic etc. Their databases could know their state and any special conditions, and not only recommend a time to leave that will work, but even make appointments for you and tell you when to leave for them. Then your phone could guide you through the airport and do all the hard work. It would provide an ID to get you your appointment at security. It might tell you to not drive your own car and take a car service instead, if that’s easier than parking your car for you. It would coordinate for all the passengers using the system to make sure they flow through the airport in a well regulated manner, with no surprises, so that people don’t have to try to get there hours in advance.
Submitted by brad on Fri, 2014-12-19 13:39.
Yesterday’s note on Here’s maps brought up the question of the wisdom of map-based driving. While I addressed this a bit earlier let me add a bit more detail.
A common first intuition is that, because people can drive just fine on a road they have never seen before, this is how robots will do it too. They are bothered that present designs instead create a super-detailed map of the road by having human-driven cars scan the road with sensors in advance. After all, the geometry of the road can change due to construction; what happens then?
They hope for a car that, like a human, can build its model of the road in real time while driving the road for the first time. That would be nice, of course, and gives you a car that can drive most roads right away, without needing to map them. But it’s a much harder problem to solve, and unlikely to ever be solved perfectly. Car companies are building very simple systems which can follow the lines on a freeway under human supervision without need for a map. But real city streets are a different story.
The first thing to realize is that any system which could build the correct model as you drive is a system that could build a map with no human oversight, so the situations are related. But building a map in advance is always going to have several very large advantages:
- You build the map from not just one scan of the road, but several, and done in different lanes and directions. As a result, you get 3-D scans of everything from different angles, and can build a superior model of the world.
- Using multiple scans lets you learn about things that are stationary but move one day to the next, like parked cars.
- You can process the data using a cloud supercomputer in as much time, memory and data storage as you want. Your computer is effectively thousands of times more capable.
- Humans can review the map built by the software if there’s anything it is uncertain about (or even if there is nothing) at their leisure.
- Humans can also test the result of the automatic and guided mapping to assure accuracy with one extra drive down the road.
In turn, there are disadvantages:
- At times, such as construction, the road will have changed from when it was mapped
- This process costs effort, and so the vehicle either does not drive off the map, or only handles a more limited set of simpler roads off the map.
The advantages are so great that even if you did have a system which could handle itself without a map, it is still always going to be able to do better with a map. Even with a great independent system you would want to make an effort to map the most popular roads and the most complex roads, up to the limit of your budget. The cost is an issue, but the cost of mapping roads is nothing compared to the cost of building or maintaining them. It’s a few times driving down the road, and some medium-skilled labour.
The road has changed
Let’s get to the big issue — the map is wrong, usually because construction has changed it.
First of all, we must understand that the sensors always disagree with the map, because the sensors are showing all the other cars and pedestrians etc. Any car has to be able to perceive these and drive so as not to hit them. If a traffic cone, “road closed” sign or flagman appears in the road, a car is not going to just plow into them because they are not on the map! The car already knows where not to go, the question is where it should go when the lanes have changed.
Even vehicles not rated to drive any road without a map can probably still do basic navigation and stay within their lane markers without a map. For the 10,000 miles of driving you do in a year, you need a car that does that 99.99999% of the time (for which you want a map) but it may be acceptable to have a car that’s only 99.9% able to do that for the occasional mile of restriped road. Indeed, when there are other, human-driven cars on the road, a very good strategy is just to follow them — follow one in front, and watch cars to the side. If the car has a clear path following new lane markers or other cars, it can do so.
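As a rough sanity check on those figures, the expected number of help-needed events per year stays tiny. The per-mile rates come from the argument above; the off-map mileage is my illustrative assumption:

```python
# Hypothetical per-mile success rates from the argument above.
mapped_success = 0.9999999      # 99.99999% per mapped mile
offmap_success = 0.999          # 99.9% per restriped/unmapped mile

mapped_miles = 10_000           # typical annual driving, on the map
offmap_miles = 10               # assumed occasional miles of restriped road

expected_failures = (mapped_miles * (1 - mapped_success)
                     + offmap_miles * (1 - offmap_success))
print(f"Expected situations needing help per year: {expected_failures:.3f}")
```

Even granting a car that is a thousand times worse off the map than on it, the restriped miles are so rare that they barely move the total.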
Google, for example, has shown videos of their vehicle detecting traffic cones and changing lanes to obey the cones. That’s today — it is only going to get better at this.
But not all the time. There will be times when the lanes are unclear (sometimes the old lanes are still visible or the new ones are not well marked.) If there are no other cars to follow, there are also no other cars to hit, and no other traffic to block.
Still, there will be times when the car is not sure of where to go, and will need help. Of course, if there is a passenger in the car, as there would be most of the time, that passenger can help. They don’t need to be a licensed driver, they just need to be somebody who can point on the screen and tell the car which of the possible paths it is considering is the right one. Or guide it with something like a joystick — not physically driving but just guiding the car as to where to go, where to turn.
If the car is empty, and has a network connection, it can send a picture, 3-D scan and low-res video to a remote help station, where a person can draw a path for the car to go for its next 100 meters, and keep doing that. Not steering the car but helping it solve the problem of “where is my lane?” The car will be cautious and stop or pull over for any situation where it is not sure of where to go, and the human just helps it get over that, and confirms where it is safe to go.
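A toy sketch of what such a help exchange might contain. Every name and field here is hypothetical; a real system would add authentication, timeouts and live telemetry:

```python
from dataclasses import dataclass

# Hypothetical sketch of the remote-help exchange described above.
@dataclass
class HelpRequest:
    snapshot_jpeg: bytes     # low-res forward camera frame
    scan_points: list        # downsampled 3-D scan
    candidate_paths: list    # paths the car is considering

@dataclass
class HelpReply:
    approved_path: list            # waypoints for the next ~100 meters
    max_speed_mps: float = 2.0     # proceed cautiously

def remote_operator(req: HelpRequest) -> HelpReply:
    # Stand-in for a human drawing a path on a screen: here we
    # simply confirm the first candidate path.
    return HelpReply(approved_path=req.candidate_paths[0])

req = HelpRequest(b"", [], [[(0, 0), (0, 50), (5, 100)]])
print(remote_operator(req).approved_path[-1])   # the car's next target point
```

The key design point is that the operator never steers in real time; they only confirm or draw a short path, so network latency is not safety-critical.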
If the car is unmanned and has no network connection of any kind, and can’t figure out the road, then it will pull over, or worst case, stop and wait for a human to come and help. Is that acceptable? Turns out it probably is, due to one big factor:
This only applies to the first car to encounter an unplanned, unreported construction zone
We all drive construction zones every day. But it’s much more rare that we are the first car to drive the construction zone as they are setting it up. And most of the rules I describe above are only for the first connected car to encounter a surprise change to the road. In other words, it’s not going to happen very often. Once a car encounters a surprise change to the road, it will report the problem with the map. Immediately all other cars will know about the zone.
If that first car is able to navigate the new zone, it will be scanning it with sensors, and uploading that data, where a crew can quickly build a corrected map. Within a few minutes, the map and the road will no longer differ. And that first car will be able to navigate the new zone 99.999% of the time — either because it has a human on board, remote human help or it’s a simple enough change that the car is able to drive it with an incorrect map.
In addition, the construction zone has to be a surprise. That means that, in spite of regulations, the construction crews did not log plans for it in the appropriate databases. Today that happens fairly often, but over time it’s going to happen less. In fact, there are plans to have transponders on construction equipment and even traffic cones that make it impossible to create a new construction zone without it showing up in the databases. Setting up a road change has a lot of strongly enforced safety rules, and I predict we’ll see “Get out your smartphone and make sure the zone is in the database before you create it” as one of them, especially since that’s so easy to do.
(You have probably also seen that tools like Waze, driven by ordinary human driver smartphones, are already mapping all the construction zones when they pop up.)
If a complex zone is present and unmapped, unmanned cars just won’t route through there until the map is updated. The more important the zone, the more quickly it will get updated. If need be, a mapping worker will go out in a car before work even begins. If a plan was filed, we’ll also know the plan for the zone, and whether cars can handle it with an old map or not.
Most of the time, though, a human passenger will be there to guide the car through the zone. Not to steer — there may not be a steering wheel — but to guide. The car will go slowly and stay safe.
Once a car is through, it will send the scans up to the mapping center, and all future cars will have a map to guide them until the crew changes the road again without logging it. I believe that doing so should be made against safety regulations, and be quite rare.
So look at those numbers. I hope it’s reasonable to expect that 99% of construction zones will be logged in road authority databases before they begin. Of the 1% that aren’t, there will be a first robocar to encounter the zone. 90% of the time that car will have a passenger able to help. For the 10% of cars that are unmanned, I predict a data network will be available 99% of the time. (Some would argue 100% of the time, because unmanned cars will just not go where there is no data connection, and we may also get new data services like Google’s Loon or Facebook’s drone program to assure coverage everywhere.)
So now we are looking at one construction zone in 100,000 where there was no warning, there is no human, and there is no data. But we’ve rated our car as able to handle off-map driving 99.9% of the time. For the other 0.1%, it decides it can’t see a clear path, and pulls over. When it doesn’t report back in on the other side of the data dead zone, a service vehicle is dispatched and fixes the problem.
So now in one in 100,000,000 construction zones, we have a car deciding to pull over. Perhaps for half of those, it can’t figure out how to pull over, and it stops in the lane. Not great — but this is one in 200 million construction zones. In other words, it happens with much less frequency than accidents or stalled cars. And there is even a solution. If a construction worker flashes an ID card at the car’s camera when it’s in a confused state, the car can then follow that worker to a place to stop. In fact, since the confused state is so rare, there is probably not even a need for an ID card. Just walk up, make a “follow me” gesture and walk the car where it needs to go.
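The whole chain of estimates can be multiplied out. These are the illustrative numbers from the paragraphs above, not data:

```python
# Illustrative estimates from the text above; tweak as you like.
p_unlogged       = 0.01    # zone not logged in any database
p_unmanned       = 0.10    # first car to hit it has no passenger
p_no_network     = 0.01    # no data connection for remote help
p_cant_drive     = 0.001   # car can't navigate the change on its own
p_cant_pull_over = 0.5     # of those, car can't even find a safe stop

p_stops  = p_unlogged * p_unmanned * p_no_network * p_cant_drive
p_blocks = p_stops * p_cant_pull_over
print(f"1 in {1/p_stops:,.0f} zones: car pulls over and waits")
print(f"1 in {1/p_blocks:,.0f} zones: car stops in the lane")
```

Each factor is independent of the others, which is why the end result is so small: every layer of backup (databases, passengers, networks, off-map driving) has to fail at once.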
Tweak these numbers as you like. Perhaps you think there will be far more construction zones not logged in databases. Perhaps you think the car’s ability to drive a changed zone will only be 50%. Perhaps you think there will still be lots of unmanned cars running in wireless dead zones in 2020. Even so the number of cars that stop and give up will still be far fewer than the number of cars that block roads today due to accidents and mechanical problems. In other words, no big whoop.
It’s important to realize that unmanned cars are not in a hurry. They can avoid zones they are not comfortable with. If they can’t get through at all, the taxi company sending the car can just send another from a different direction in almost all cases.
It’s also important to realize that cars in an uncertain situation are also not in a big hurry. They will slow until they can be sure they are safe and able to handle the road. Slow, it turns out, is easy. Slow and heavy traffic (ie. a traffic jam) is actually also very easy — you don’t even need to see the lines on the road to handle that one; you usually can’t.
Once again this is only for the first car to encounter the surprise zone. Much more common will be a car that is the first to encounter a planned zone. This car will always have a competent passenger, because the service will not direct an unmanned car into an unknown construction zone where there is no data. This passenger will get plenty of warning, and their car may well pull over so there is no transition from full-auto to semi-auto while the car is moving. Then this person will guide the car through the zone at reduced speed. Probably just with a joystick, though possibly there will be handlebars that can pop out or plug in if true semi-manual driving is needed.
New road signs
Road signs are a different problem. Already there are very decent systems for recognizing road signs captured by the camera — systems that actually do better at it than human beings. But sometimes there are road signs with text, and the system may recognize them, but not understand them. Here again we may call upon human beings, either in the vehicle, or available via a data connection. Once again, this is only for the first unmanned car to encounter the new road sign.
I will propose something stronger, though. I believe there should be a government mandated database of all road signs. Further, I believe the law should say that no road sign has legal effect until it is entered in the database. Ie. if you put up a sign with a new speed limit, it is not a violation of the limit to ignore the sign until the sign is in the database. At least not for robots. Once again, all this needs is that the crews putting in the signs have smartphones so they can plonk the sign on the map and enter what it is.
We may never need this, though, because the ability of computers to read signs is getting very good. It may be faster to just make it even better than to wait for a law that mandates the database. With a 3-D map, you will never miss a brand new sign, but you might get confused by a changed sign — you will know it changed but may need to ask for help to understand it if it is non-standard. There are already laws that standardize road signs, but only to a limited extent. Even so, the number of sign styles in any given country is still a very manageable number.
Random road events
Sometimes driving geometry changes not due to construction, but due to accidents and the environment. Trees get knocked down. Roads flood. Power lines may fall. The trees will be readily seen, and for the first car to come to a fallen tree, the procedure will be similar, though in a low traffic area the vehicles will be programmed to go around them, as they are for stalled cars and slow moving vehicles. Flooding and power lines are more challenging because they are harder to see. Flooding, of course, does not happen by surprise. That there is flooding in a region will be well known so cars will be on the lookout for it. Human guides will again be key.
A plane is not a bird
Aircraft do not fly by flapping their wings, and robocars will not see the world as people do nor drive as they do. When they have accurate maps, it gives us much more confidence in their safety, particularly the ability to pick the right path reliably at speed. But they have a number of tools open to them for driving a road that doesn’t match the map precisely without needing to have the ability to drive unmapped roads 99.999999% of the time. That’s a human level ability and they don’t need it.
Submitted by brad on Thu, 2014-12-18 14:14.
I see new articles on robocars in the press every day now, though most don’t say a lot new. Here, however, are some of the recent meaningful stories from the last month or two while I’ve been on the road. There are other sites, like the LinkedIn self-driving car group and others, if you want to see all the stories.
Winners chosen in UK competition
Four cities in the UK have been chosen for testing and development of robocars through the £10 million funding contest. As expected, Milton Keynes was chosen along with Coventry, and also Greenwich and Bristol. The BBC has more.
Chinese competition has another round
Many don’t know it, but China has been running its own “DARPA Grand Challenge” style race for 6 years now. The entrants are mostly academic, and not super far along, but the rest of the world stopped having contests long ago, much to its detriment. I was recently in Beijing giving a talk about robocars for guests of Baidu — my venue was none other than the Forbidden City — and the Chinese energy is very high. Many, however, thought that an announcement that Baidu would provide map data for BMW car research meant that Baidu was doing a project the way Google is. It isn’t, at least for now.
LA Mayor wants the cars
I’ve seen lots of calls from cities and regions that robocars come there first. In the fall, the mayor of Los Angeles made such a call. What makes this interesting is that LA is indeed a good early target city, with nice wide and simple roads, lots of freeways, and relatively well-behaved drivers compared to the rest of the world. And it’s in California, which is where a lot of the best development is happening, although that’s all in the SF Bay Area.
Concept designs for CES and beyond
More interesting concept cars are arising, as designers realize what they can do when freed of having a driver’s seat that faces forward and has all the controls, and as electric drivetrains allow you to move around where the drivetrain goes. Our friends at the design firm IDEO came up with some concepts that are probably not realistic but illustrate worthwhile principles. In particular, their vision of the delivery robot is quite at odds with mine. I see delivery robots as being very small, just suitcase-sized boxes on wheels, except for the few that are built for very large cargo like furniture and industrial deliveries. Delivery robots will come to you on your schedule, not on the delivery company’s schedule. There will be larger robots with compartments that can serve a group of people who live together, but there is a limit to how many you can serve and still deliver at exactly the right time that people expect.
Everybody is also interested to see what Daimler will unveil at the Consumer Electronics Show. They showed off an interior with face-to-face seating and everybody wearing a VR headset, and have been testing a car under wraps.
It’s interesting to think about the VR headset. A lot of people would get sick if jostled in a car while wearing a VR headset. However, it might be possible to have the VR headset deliberately bounce the environment it’s showing you, so that it looks like you’re riding a car in that environment that’s bumping just the way you are. Or even walking.
Here (Nokia/Navteq) builds a big library of HD maps
Robocars work better if they get a really detailed map of their environment to drive with. Google’s project is heavily based on maps, and they have mapped out all the roads they test near Google HQ. Nokia’s “Here” division has decided to enter this in a big way. Nokia calls its projects “HD Maps,” which is a good name because you want to make it clear that these are quite unlike the navigation maps we are used to from Google, Here and other companies. These maps track every lane and path a car could take on the road, but also every lane marker, every curb, every tree — anything that might be seen by the cameras and 3D sensors.
Nokia makes the remarkable claim to have produced 1.2 million miles of HD Maps in 30 countries in the last 15 months. That’s remarkable because Google declared that one of their unsolved problems was the cost of producing maps, and they were working to bring that cost down. Either Nokia/Here has made great strides in reducing that cost, or their HD Maps are not quite at the level of accuracy and detail that might be needed.
Nonetheless, the cost of the mapping will come down. In fact, many people express surprise when they learn that the cars rely so heavily on maps, as they expect a vehicle that, like a human being, can easily drive on a road they’ve never seen before, with no map. Humans can do that, but a car that could do that is also a car that could build the sort of map we’re talking about, in real time. Making the map ahead of time has several advantages, and is easier to do than doing it in real time. Perhaps some day that real-time map builder (what roboticists call simultaneous localization and mapping, or SLAM) will arise, but for now, pre-mapping is the way to go.
510 Systems story told (sort of.)
There was recently press about the kept-quiet acquisition by Google of 510 Systems. I was at Google at the time, and it involves friends of mine, so I will have to say there are some significant errors in the story, but it’s interesting to see it come out. It wasn’t really that secret. What Anthony did with PriBot was hardly secret — he was on multiple TV shows for his work — and that he was at Google working at first on Streetview and later on the car was also far from secret. But it wasn’t announced so nobody picked up on it.
Submitted by brad on Tue, 2014-12-16 01:07.
Uber is spreading fast, and running into protests from the industries it threatens, and in many places, the law has responded and banned, fined or restricted the service. I’m curious what its battles might teach us about the future battles of robocars.
Taxi service has a history of very heavy regulation, including government control of fares, and quota/monopolies on the number of cabs. Often these regulations apply mostly to “official taxis” which are the only vehicles allowed to pick up somebody hailing a cab on the street, but they can also apply to “car services” which you phone for a pick-up. In addition, there’s lots of regulation at airports, including requirements to pay extra fees or get a special licence to pick people up, or even drop them off at the airport.
Why we have Taxi regulation and monopolies
The heavy regulation had a few justifications:
- When hailing a cab, you can’t do competitive shopping very easily. You take the first cab to come along. As such there is not a traditional market.
- Cab oversupply can cause congestion.
- Cab oversupply can drive the cost of a taxi so low the drivers don’t make a living wage.
- We want to assure public safety for the passengers, and driving safety for the drivers.
- A means, in some places, to raise tax revenue, especially taxing tourists.
Most of these needs are eliminated when you summon from an app on your phone. You can choose from several competing companies, and even among their drivers, with no market failure. Cabs don’t cruise looking for fares so they won’t cause much congestion. Drivers and companies can have reputations and safety records that you can look up, as well as safety certifications. The only remaining public interest is the question of a living wage.
Taxi regulations sometimes get stranger. In New York (the world’s #1 taxi city) you must have one of the 12,000 “medallions” to operate a taxi. These medallions over time grew to cost well north of $1 million each, and were owned by cab companies and rich investors. Ordinary cabbies just rented the medallions by the hour. To avoid this, San Francisco made rules insisting a large fraction of the cabs be owned by their drivers, and that no contractual relationship could exist between the driver and any taxi company.
This created the situation which led to Uber. In San Francisco, the “no contract” rule meant if you phoned a dispatcher for a cab, they had no legal power to make it happen. They could just pass along your desire to the cabbie. If the driver saw somebody else with their arm up on the way to get you, well, a bird in the hand is worth two in the bush, and 50% of the time you called for a cab, nobody showed up!
Uber came into that situation using limos, and if you summoned one you were sure to get one, even if it was more expensive than a cab. Today that’s only part of its value around the world, but crazy regulations prompted its birth.
The legal battles (mostly for Uber)
I’m going to call all these services (Uber, Lyft, Sidecar and to some extent Hail-O) "Online Ride" services.
Submitted by brad on Fri, 2014-12-12 09:24.
Dave Barry once wrote that there is a federal law that no two people on a plane can pay the same price for their seat. Airlines use complex systems to manage ticket prices, constantly changing them based on expected demand and competition, and with over a dozen fare classes with different rules.
When it comes to the rules, a usual principle is that only the more expensive tickets give you the flexibility to change your plans. For any reasonable price, you will have change and cancellation fees, and for the lowest cost tickets, changes are next to impossible. This is compounded by the fact that changes usually require paying the difference to the current price, but the current price in the few days before a flight is the very expensive flexible price. Missing a flight or deciding to move a flight a day can be hugely expensive.
The flexible tickets are ridiculously expensive as well, often 2x or even 3x the inflexible cost. In general, unless you change your plans a lot, you are still better off buying the cheap inflexible tickets and then eating the high cost on the relatively rare times you make changes. (Many airlines do offer cheap “same day” changes, particularly to status flyers.)
Flexible tickets can command this price because they are of greatest use to business passengers. We fly more on short notice, and need to make sudden changes, while people on vacation generally do have a fixed schedule. Airlines know business customers will pay more, and so they search for things that only business passengers want, and charge heavily for them.
Sell me a ticket where I have to be flexible
For leisure travel, here’s an alternative. Sell me a ticket that allows reasonable and low-cost changes when seats are available. Make it not a big deal to let me leave when I want to. To make this ticket cheap, I accept a big burden: the airline can also delay my flight.
What this would mean is that up to some amount of time, like 24 hours before the flight, the airline can email me and say, “Sorry, that flight is selling out, we’re moving you to another flight.” The other flight would be within a time window — the longer the window, the cheaper the ticket. 24 to 48 hours would usually be enough.
The typical business passenger is not going to tolerate this. In business, time is money and losing a day just isn’t an option.
Some leisure passengers would not tolerate it either. If you have other bookings that are hard to change, like sold-out hotels, or a cruise, you don’t want to miss them. (Though in the world of flight cancellations you have to prepare for this sometimes.) But many hotels and other things are pretty flexible.
Most could handle such a rule going home, unless they are going home and must get to work the next day. For retired people, and the many people who work flexible schedules (consultants, writers and many other self-employed) it is not a big issue to get home a day or two late. And for many of these people it’s also not a big issue to arrive at the destination a day late, and certainly not a few hours late. In addition, many people taking an extended trip to multiple cities would be perfectly fine with the idea that they might spend an extra day in Rome and a day less in London, or vice versa. (On shorter trips with several flights a day, the delay might well be only a few hours.)
You could also offer the airline the power to make you leave earlier, but they would have to give you more notice on most legs.
This is great for the airline. They get the power to move people off full planes to replace them with high revenue customers at no cost, and put them on planes that are less full, where the seats are almost free. (If both planes are full, they would not move you.) Today they do this by asking for volunteers and paying them with vouchers, or on some occasions doing a forced bumping.
This is like standby, in a way, but less uncertain than that. A bit more like the way employees fly free on their off-hours.
There is one class of business passenger who might tolerate this, namely those making a visit to a branch office. They might be able to continue work for another day at the branch rather than go home if they don’t have meetings scheduled. I don’t think there would be a lot of this, unless you could also do it for business class tickets.
As part of the deal, the airline would also offer you a guaranteed low rate on an airport hotel for your extra day. They already have negotiated rates and spaces. With advance notice, though, you will probably be able to stay at your own hotel unless you travel at a sold-out time. These fares might make more sense in shoulder seasons, where hotel changes are easy.
As a passenger
As a reminder, you do all this to save money on a flexible ticket. You get a ticket where you can leave whenever you want without a large change fee. For a certain class of voyager (the retired in particular) this is the sort of ticket they want. Of course, seats have to be available, you can’t switch to a sold-out flight, and seat selection may be limited if you do things on short notice. But it need not always be on short notice.
The notice from the airline could also be quite long. Their computers are estimating the load all the time, and they might send you a request to move even a week or month in advance. For a higher cost, you might lengthen the window so you need a week’s notice if you are going to be moved (and they might then move you forward or backward.)
Submitted by brad on Tue, 2014-12-09 13:25.
When I talk about robocars, I often get quite opposite reactions:
- Americans, in particular, will never give up car ownership! You can pry the bent steering wheel from my cold, dead hands.
- I can’t see why anybody would own a car if there were fast robotaxi service!
- Surely human drivers will be banned from the roads before too long.
I predict neither extreme will be true. I predict the market will offer all options to the public, and several options will be very popular. I am not even sure which will be the most popular.
- Many people will stick to buying and driving classic, manually driven cars. The newer versions of these cars will have fancy ADAS systems that make them much harder to crash, and their accident levels will be lower.
- Many will buy a robocar for their near-exclusive use. It will park near where it drops them off and always be ready. It will keep their stuff in the trunk.
- People who live and work in an area with robotaxi service will give up car ownership, and hire for all their needs, using a wide variety of vehicles.
- Some people will purchase a robocar mostly for their use, but will hire it out when they know they are not likely to use it, allowing them to own a better car. They will make rarer use of robotaxi services to cover specialty trips or those times when they hired it out and ended up needing it. Their stuff will stay in a special locker in the car.
In addition, people will mix these models. Families that own 2 or more cars will switch to owning fewer cars and hiring for extra use and special uses. For example, if you own a 2 person car, you would summon a larger taxi when 3 or more are together. In particular, parents may find that they don’t want to buy a car for their teen-ager, but would rather just subsidize their robotaxi travel. Parents will want to do this and get logs of where their children travel, and of course teens will resist that, causing a conflict.
Submitted by brad on Thu, 2014-12-04 09:12.
In August, I attended the World Science Fiction Convention (WorldCon) in London. I did it while in Coeur D’Alene, Idaho by means of a remote Telepresence Robot(*). The WorldCon is half conference, half party, and I was fully involved — telepresent there for around 10 hours a day for 3 days, attending sessions, asking questions, going to parties. Back in Idaho I was speaking at a local robotics conference, but I also attended a meeting back at the office using an identical device while I was there.
After doing this, I have written up a detailed account of what it’s like to attend a conference and social event using these devices, how fun it is now, and what it means for the future.
You can read Attending the World Science Fiction convention on the other side of the world by remote telepresence robot
For those of you in the TL;DR crowd, the upshot is that it works. No, it’s not as good as being there in person. But it is a substantial fraction of the way there, and it’s going to get better. I truly feel I attended that convention, but I didn’t have to spend the money and time required to travel to London, and I was able to do other things in Idaho and California at the same time.
When you see a new technology that seems not quite there yet, you have to decide: is this going to get better and explode, or is it going to fizzle? I’m voting for the improvement argument. It won’t replace being there all of the time, but it will replace being there some of the time, and thus have big effects on travel — particularly air travel — and socialization. There are also interesting consequences for the disabled, for the use of remote labour and many other things.
(*)As the maker will point out, this is not technically a robot, just a remote controlled machine. Robots have sensors and make some of their own decisions on how they move.
Submitted by brad on Mon, 2014-12-01 09:52.
On Saturday I wrote about how we’re now capturing the world so completely that people of the future will be able to wander around it in accurate VR. Let’s go further and see how we might shoot the video resolutions of the future, today.
Almost everybody has a 1080p HD camera with them — almost all phones and pocket cameras do this. HD looks great but the future’s video displays will do 4K, 8K and full eye-resolution VR, and so our video today will look blurry the way old NTSC video looks blurry to us. In a bizarre twist, in the middle of the 20th century, everything was shot on film at a resolution comparable to HD. But from the 70s to 90s our TV shows were shot on NTSC tape, and thus dropped in resolution. That’s why you can watch Star Trek in high-def but not “The Wire.”
I predict that complex software in the future will be able to do a very good job of increasing the resolution of video. One way it will do this is through making full 3-D models of things in the scene using data from the video and elsewhere, and re-rendering at higher resolution. Another way it will do this is to take advantage of the “sub-pixel” resolution techniques you can do with video. One video frame only has the pixels it has, but as the camera moves or things move in a shot, we get multiple frames that tell us more information. If the camera moves half a pixel, you suddenly have a lot more detail. Over lots of frames you can gather even more.
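A toy illustration of the sub-pixel idea, assuming the half-pixel shift is known exactly and ignoring blur and noise (real systems must estimate the shifts and fuse many noisy frames):

```python
import numpy as np

# Two low-res samplings of the same scene, offset by half a low-res
# pixel, interleave into one signal at twice the resolution.
rng = np.random.default_rng(0)
scene = rng.random(32)            # the "true" high-res 1-D scene

frame_a = scene[0::2]             # low-res frame, integer-pixel sampling
frame_b = scene[1::2]             # same scene, shifted half a low-res pixel

restored = np.empty_like(scene)
restored[0::2] = frame_a          # interleave the two samplings
restored[1::2] = frame_b

print(np.allclose(restored, scene))   # the extra detail is recoverable
```

In this idealized case the detail comes back perfectly; with real optics and noise you recover only part of it, which is why the extra hints below help.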
This will already happen with today’s videos, but what if we help them out? For example, if you have still photographs of the things in the video, this will allow clever software to fill in more detail. At first, it will look strange, but eventually the uncanny valley will be crossed and it will just look sharp. Today I suspect most people shooting video on still cameras also shoot some stills, so this will help, but there’s not quite enough information if things are moving quickly, or new sides of objects are exposed. A still of your friend can help render them in high-res in a video, but not if they turn around. For that the software just has to guess.
We might improve this process by designing video systems that capture high-res still frames as often as they can and embed them to the video. Storage is cheap, so why not?
A typical digital video/still camera has 16 to 20 million pixels today. When it shoots 1080p HD video, it combines those pixels, so that 6 to 10 still pixels go into every video pixel. Ideally this is done by hardware right in the imaging chip, but it can also be done to a lesser extent in software. A few cameras already shoot 4K, and this will become common in the next couple of years. In this case, they may just use the pixels one for one, since it’s not so easy to map a 16 megapixel 3:2 still array into an 8 megapixel 16:9 4K image. You can’t just combine 2 pixels per pixel.
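The mismatch is easy to see with back-of-envelope numbers (the sensor dimensions below are illustrative for a ~16 MP chip):

```python
# How many sensor pixels feed each video pixel?
sensor_w, sensor_h = 4899, 3266      # ~16 MP at 3:2 (illustrative)
hd_w, hd_h = 1920, 1080              # 1080p HD
uhd_w, uhd_h = 3840, 2160            # 4K UHD

# A 16:9 video crop of the 3:2 sensor uses the full width
# but only 9/16 of that width in rows.
crop_h = sensor_w * 9 // 16          # 2755 rows used

hd_ratio = (sensor_w * crop_h) / (hd_w * hd_h)     # sensor px per HD px
uhd_ratio = (sensor_w * crop_h) / (uhd_w * uhd_h)  # sensor px per 4K px

print(round(hd_ratio, 1), round(uhd_ratio, 1))     # → 6.5 1.6
```

About 6.5 sensor pixels per HD pixel bins cleanly enough, but 1.6 per 4K pixel is why you can't just combine whole pixels and the camera may use them one for one instead.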
Most still cameras won’t shoot a full-resolution video (i.e., a 6K or 8K video) for several reasons:
- As designed, you simply can’t pull that much data off the chip per unit time. It’s a huge amount of data. Even with today’s cheap storage, it’s also a lot to store.
- Still camera pipelines are built around JPEG compression, but to record video you want a video codec that exploits frame-to-frame redundancy; even if you could afford the storage, the camera isn’t built to compress full-resolution frames that way.
- Nobody has a display capable of showing 6K or 8K video, and only a few people have 4K displays — though this will change — so demand is not high enough to justify the cost.
- When you combine pixels, you get less noise and can shoot in lower light. That’s why your camera can make a decent night-time video without blurring, but it can’t shoot a decent still in that lighting.
What is possible is a sensor which is able to record video (at the desired 30fps or 60fps rate) and also pull off full-resolution stills at some lower frame rate, as long as the scene is bright enough. That frame rate might be something like 5 or even 10 fps as cameras get better. In addition, hardware compression would combine the stills and the video frames to eliminate the great redundancy, though only to a limited extent because our purpose is to save information for the future.
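How much would interleaved stills actually cost in storage before that hardware compression kicks in? A rough estimate, with the video bitrate and JPEG size as assumptions:

```python
# Rough per-hour storage budget: HD video plus interleaved 16 MP stills.
hours = 1
video_mbps = 20          # typical 1080p consumer bitrate (assumption)
still_mb = 6             # ~6 MB per 16 MP JPEG (assumption)
stills_per_sec = 5

video_gb = video_mbps / 8 * 3600 * hours / 1024
stills_gb = still_mb * stills_per_sec * 3600 * hours / 1024

print(f"video: {video_gb:.1f} GB, stills: {stills_gb:.1f} GB per hour")
# → video: 8.8 GB, stills: 105.5 GB per hour
```

The stills dominate by an order of magnitude, which is why combining them with the video stream to remove redundancy matters — while still keeping enough raw information for the future software to work with.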
Thus, if we hand the software of the future an HD video along with 3 to 5 frames/second of 16 megapixel stills, I am comfortable it will be able to make a very decent 4K video from it most of the time, and often a decent 6K or 8K video. As noted, a lot of that can happen even without the stills; they will just improve the situation. Those situations where it can’t — fast-changing objects — are also situations where video gets blurred and we are tolerant of lower resolution.
It’s a bit harder if you are already shooting 4K. To do this well, we might like a 38 megapixel still sensor, with 4 pixels for every pixel in the video. That’s the cutting edge in high-end consumer gear today, and will get easier to buy, but we now run into the limitations of our lenses. Most lenses can’t deliver 38 million pixels — not even many high-end professional lenses can do that. So it might not deliver that complete 8K experience, but it will get a lot closer than you can from an “ordinary” 4K video.
If you haven’t seen 8K video, it’s amazing. Sharp has been showing their one-of-a-kind 8K video display at CES for a few years. It looks much more realistic than 3D videos of lower resolution. 8K video can subtend over 100 degrees of viewing angle at one pixel per minute of arc, which is about the resolution of the sensors in your eye. (Not quite, as your eye also does sub-pixel tricks!) At 60 degrees — which is more than any TV is set up to subtend — it’s the full resolution of your eyes, and provides an actual limit on what we’re likely to want in a display.
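The arithmetic behind that claim is simple: at one pixel per arcminute, the field of view a format can fill at eye resolution is just its pixel width divided by 60:

```python
# Field of view each format fills at ~1 pixel per arcminute
# (roughly the eye's acuity limit).
ARCMIN_PER_DEG = 60
for name, width in [("1080p", 1920), ("4K", 3840), ("8K", 7680)]:
    fov_deg = width / ARCMIN_PER_DEG
    print(f"{name}: {fov_deg:.0f} degrees at eye resolution")
# → 1080p: 32 degrees
# → 4K: 64 degrees
# → 8K: 128 degrees
```

8K's 128 degrees is where the "over 100 degrees" figure comes from, and it shows why at a typical TV's 60-degree subtense 8K is already past what the eye's sensors can resolve.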
And we could be shooting video for that future display today, before the technology to shoot that video natively exists.
Submitted by brad on Thu, 2014-11-27 14:32.
Recently I tried the Facebook/Oculus Rift Crescent Bay prototype. It has more resolution (I will guess 1280 x 1600 per eye or similar) and runs at 90 frames/second. It also has better head tracking, so you can walk around a small space with some realism — but only a very small space. Still, it was much more impressive than the DK2 and a sign of where things are going. I could still see a faint screen-door effect, and they were annoyed that I could see it.
We still have a lot of resolution gain left to go. The human eye resolves about a minute of arc, which means about 5,400 pixels for a 90 degree field of view. Since we have some ability for sub-pixel resolution, perhaps 10,000 pixels of width would be needed to fully reproduce the world. But that’s not many Moore’s law generations from where we are today. The graphics rendering problem is harder, though with high frame rates, if you can track the eyes, you need only render full resolution where the fovea of the eye is looking. This actually gives a boost to onto-the-eye systems like a contact lens projector or the rumoured Magic Leap technology, which may project with lasers onto the retina, as they need to render far fewer pixels. (Get really clever, and realize the optic nerve only has about 600,000 neurons, so in theory you can get full real-world resolution with half a megapixel if you do it right.)
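A quick estimate shows how big the foveated-rendering win is (the fovea size and peripheral downscale below are rough assumptions): render full resolution only in a small central window and a much coarser image everywhere else.

```python
# Rough foveated-rendering pixel budget (illustrative numbers).
full_w = full_h = 5400      # ~1 arcmin/pixel over a 90-degree field
fovea_deg = 5               # full detail only near the fovea (assumption)
fovea_px = fovea_deg * 60   # 300 px across at 1 px/arcmin
periphery_scale = 8         # render the rest at 1/8 linear resolution

naive = full_w * full_h
foveated = fovea_px ** 2 + (naive - fovea_px ** 2) // periphery_scale ** 2

print(f"naive: {naive:,} px, foveated: {foveated:,} px "
      f"({naive / foveated:.0f}x fewer)")
# → naive: 29,160,000 px, foveated: 544,218 px (54x fewer)
```

Notably, the foveated budget lands right around that half-megapixel, 600,000-neuron figure for the optic nerve — the eye itself is doing something similar.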
Walking around Rome, I realized something else — we are now digitizing our world, at least the popular outdoor spaces, at a very high resolution. That’s because millions of tourists are taking billions of pictures every day of everything from every angle, in every lighting. Software of the future will be able to produce very accurate 3D representations of all these spaces, both with real data and reasonably interpolated data. They will use our photographs today and the better photographs tomorrow to produce a highly accurate version of our world today.
This means that anybody in the future will be able to take a highly realistic walk around the early 21st century version of almost everything. Even many interiors will be captured in smaller numbers of photos. Only things that are normally covered or hidden will not be recorded, but in most cases it should be possible to figure out what was there. This will be trivial for fairly permanent things, like the ruins in Rome, but even possible for things that changed from day to day in our highly photographed world. A bit of AI will be able to turn the people in photos into 3-D animated models that can move within these VRs.
It will also be possible to extend this VR back into the past. The 20th century, before the advent of the digital camera, was not nearly so photographed, but it was still photographed quite a lot. For persistent things, the combination of modern (and future) recordings with older, less frequent and lower resolution recordings should still allow the creation of a fairly accurate model. The further back in time we go, the more interpolation and eventually artistic interpretation you will need, but very realistic seeming experiences will be possible. Even some of the 19th century should be doable, at least in some areas.
This is a good thing, because as I have written, the world’s tourist destinations are unable to bear the brunt of the rising middle class. As the Chinese, Indians and other nations get richer and begin to tour the world, their greater numbers will overcrowd those destinations even more than the waves of Americans, Germans and Japanese that already mobbed them in the 20th century. Indeed, with walking chairs (successors of the BigDog Robot) every spot will be accessible to everybody of any level of physical ability.
VR offers one answer to this. In VR, people will visit such places and get the views and the sounds — and perhaps even the smells. They will get a view captured at the perfect time in the perfect light, perhaps while the location is closed for digitization and thus empty of crowds. It might be, in many ways, a superior experience. That experience might satisfy people, though some might find themselves more driven to visit the real thing.
In the future, everybody will have had a chance to visit all the world’s great sites in VR while they are young. In fact, doing so might take no more than a few weekends, changing the nature of tourism greatly. This doesn’t alter the demand for the other half of tourism — true experience of the culture, eating the food, interacting with the locals and making friends. But so much commercial tourism — people being herded in tour groups to major sites and museums, then eating at tour-group restaurants — can be replaced.
I expect VR to reproduce the sights and sounds and a few other things. Special rooms could also reproduce winds and even some movement (for example, the feeling of being on a ship.) Right now, walking is harder to reproduce. With the Oculus Rift Crescent Bay you could only walk 2-3 feet, but one could imagine warehouse-size spaces or even outdoor stadia where large amounts of real walking might be possible if the simulated surface is also flat. Simulating walking over rough surfaces and stairs offers real challenges. I have tried systems where you walk inside a sphere but they don’t yet quite do it for me. I’ve also seen a system where you are held in place and move your feet in slippery socks on a smooth surface. Fun, but not quite there. Your body knows when it is staying in one place, at least for now. Touching other things in a realistic way would require a very involved robotic system — not impossible, but quite difficult.
Also interesting will be immersive augmented reality. There are a few approaches I know of that people are developing:
- With a VR headset, bring in the real world with cameras, modify it and present that view to the screens, so they are seeing the world through the headset. This provides a complete image, but the real world is reduced significantly in quality, at least for now, and latency must be extremely low.
- With a semi-transparent screen, show the augmentation with the real world behind it. This is very difficult outdoors, and you can’t really stop bright items from the background mixing with your augmentation. Focus depth is an issue here (and is with most other systems.) In some plans, the screens have LCDs that can go opaque to block the background where an augmentation is being placed.
- CastAR has you place retroreflective cloth in your environment, and it can present objects on that cloth. They do not blend with the existing reality, but replace it where the cloth is.
- Projecting into the eye with lasers from glasses, or on a contact lens can be brighter than the outside world, but again you can’t really paint over the bright objects in your environment.
Getting back to Rome, my goal would be to create an augmented reality that let you walk around ancient Rome, seeing the buildings as they were. The people around you would be converted to Romans, and the modern roads and buildings would be turned into areas you can’t enter (since we don’t want to see the cars, and turning them into fast chariots would look silly.) There have been attempts to create a virtual walk through ancient Rome, but being able to do it in the real location would be very cool.