Spokesmen for the MPAA, RIAA and several other content industry companies recently issued a statement of support for the new “Stop Airline Piracy Act” or SAPA, now before Congress.
SAPA seeks to address the massive tide of copyright infringing material flowing into the USA on commercial airlines and delivery services. Today in China and many other countries, bootleg DVDs, CDs and software disks are being manufactured in bulk, and sold to visitors on the streets of these cities in illicit malls. Then, these visitors fly back to the USA with the pirate disks in their suitcases, taking them into the USA. Other Americans are ordering these pirate DVDs and having them shipped via both airlines and other shippers directly to their homes.
SAPA addresses this problem by giving content owners tools to cut down this pirate flow. A content owner, once they learn of an airline or shipping service which is regularly and repeatedly bringing pirated material into the country, can file claims alleging the presence of this infringement. The bill allows them to shut off the flow of money, traffic and customers to the airlines, by getting US companies to stop directing people to the airlines, and stopping payment services from transferring money to them.
“Last month, we worked with customs and border patrol to inspect planes coming into LAX from overseas,” said Pearl Alley, a spokesperson for the MPAA. “We found that every single plane of an unnamed airline had pirated material in passenger bags or in the hold. Not just a few planes, every single plane. Most planes had multiple pirated products, including DVDs and CDs, and files on laptops and music players.” Customs is able to seize any laptop or music player coming into the country for any reason and copy its drive to see what’s on it, according to CBP officials.
“These airlines and shippers are enabling and facilitating infringement. This has got to be stopped, and SAPA will stop it,” said Alley.
Under SAPA, an airline alleged to have been regularly carrying in pirated material can be blacklisted. Travel agents will be forbidden from booking passengers on the airline. Travel web sites can be ordered not to list flights or even the existence of the airline. Phone book and Yellow page companies can be ordered to remove any listings for the airline, and in some cases, phone switches can be ordered to not complete calls directed at airline phone numbers. Travel review books and sites can be ordered edited to delete mention of the airline or recommendations to fly on it.
To shut off the money flow, an accusation of alleged infringement under SAPA can result in an order to Visa, Mastercard, Paypal and other financial processors to not accept payments for the airline or shipping company. “They may be overseas, but we can stop them from destroying American jobs with tools we have at home,” said Senator Dianne Feinstein (D-CA), co-sponsor of the Senate version of the bill.
Airports can also be prohibited from allowing the planes to land. However, planes in the air can file a counter-notice within 5 days of a claim, providing they subject themselves to US jurisdiction and agree to be liable if they are found to have copyrighted material in their holds. Aircraft which can’t file a counter-notice are free to turn around on approach to LAX and return over the Pacific, but may not land at any airport in a country which has signed the Anti-Counterfeiting Trade Agreement with the USA.
“Legitimate Airlines, ones that are not carrying in pirated material every day, will not be harmed by this act, because of the counter-notice provision. In addition, if a rightsholder files a false claim, and there are no copyright violations on board the plane, the airline has a right to sue for damages over misuse of the act — so it’s all safe and does not block legitimate trade,” said Alley.
Several airlines, travel agencies and travel sites have, not surprisingly, filed opposition to this bill, but it is supported by a broad coalition of US job creators in Hollywood and Redmond, as well as domain name site GoDaddy.
This time of year I do a lot of online shopping, and my bell rings with many deliveries. But today and tomorrow, not Saturday. The post office comes Saturday but has announced it wants to stop doing that to save money. They do need to save money, but this is the wrong approach. I think the time has come for Saturday and Sunday delivery to be the norm for UPS, Fedex and the rest.
When I was young almost all retailers closed on Sunday and even had limited hours on Saturday. Banks never opened on the weekend either. But people soon realized that because the working public had the weekend off, the weekend was the right time for consumer services to be operating. The weekend days are the busiest days at most stores.
The shipping companies like Fedex and UPS started up for business to business, but online shopping has changed that. They now do a lot of delivery to residences, and not just at Christmas. But Thursday and Friday are these odd days in that business. An overnight package on Friday gets there 3 days later, not 1. (If you use the post office courier, you get Saturday delivery as part of the package, and the approximately 2 day Priority mail service is a huge win for things sent Thursday.) In many areas, the companies have offered Saturday and even Sunday delivery, but only as a high priced premium service. Strangely, the weekend also produces a gap in ground shipping times — the truck driving cross-country presumably pauses for 2 days.
We online shoppers shop 7 days a week and we want our stuff as soon as we can get it. I understand the desire to take the weekend off, but usually there are people ready to take these extra shifts. This will cost the delivery companies more, as they will have to hire more workers to operate on the weekend. And they can’t just do it for ground (otherwise a 3 day package sent Friday arrives at the same time as an overnight package.)
Update: I will point out that while online shopping is the David to the Goliath of brick & mortar, changing shipping to 7 days a week will mean a bunch more stuff gets bought online, and shipped, and will bring new revenue to the shipping companies. It’s not just a cost of hiring more people. It also makes use of infrastructure that sits idle 2 days a week.
This is particularly good for those who are not at home to sign for packages that come during the work week. The trend is already starting. OnTrak, which has taken over a lot of the delivery from Amazon’s Nevada warehouse to Californians, does Saturday delivery, and it’s made me much more pleased with Amazon’s service. When Deliverbots arrive, this will be a no brainer.
Here the court held that Citizens United, a group which had produced an anti-Hillary Clinton documentary, had the right to run ads promoting their documentary and its anti-Clinton message. The lower court had held that because the documentary, and thus the ads, advocated against a candidate, they were restricted under campaign finance rules. Earlier, however, the court had held that it was OK for Michael Moore to run ads for Fahrenheit 9/11, his movie which strongly advocated against re-electing George W. Bush. The court could not find the fine line between these cases that the lower court had drawn, but the result was a decision that has people very scared, because it strips most restrictions on campaigning by groups and in particular corporations. Corporations have most of the money, and money equals influence in elections.
Most attempts at campaign finance reform and control have run into a constitutional wall. That’s because when people talk about freedom of speech, it’s hard to deny that political speech is the most sacred, most protected of the forms of speech being safeguarded by the 1st amendment. Rules that try to say, “You can’t use your money to get out the message that you like or hate a candidate” are hard to reconcile with the 1st amendment. The court has made that more clear and so the only answer is an amendment, many feel.
It seems like that should not be hard. After all, the court only ruled 5-4, and partisan lines were involved. Yet in the dissent, it seems clear to me that the dissenters don’t so much claim that political speech is not being abridged by the campaign finance rules, but rather that the consequences of allowing big money interests to dominate the political debate are so grave that it would be folly to allow it, almost regardless of what the bill of rights says. The courts have kept saying that campaign finance reform efforts don’t survive first amendment tests, and the conclusion many have come to is that CFR is so vital that we must weaken the 1st amendment to get it.
With all the power of an amendment to play with, I have found most of the proposed amendments disappointing and disturbing. Amendments should be crystal clear, but I find many of the proposals to be muddy when viewed in the context of the 1st amendment, even though as later amendments they have the right to supersede it.
The problem is this: When they wrote that the freedom of the press should not be abridged, they were talking about the big press. They really meant organizations like the New York Times and Fox News. If those don’t have freedom of the press, nobody does. And these are corporations. Until very recently it wasn’t really possible to put out your political views to the masses on your terms unless you were a media corporation, or paid a media corporation to do it for you. The internet is changing that but the change is not yet complete.
Many of the amendments state that they do not abridge freedom of the press. But what does that mean? If the New York Times or Fox News wish to use their corporate money to endorse or condemn a candidate — as they usually do — is that something we could dare let the government restrict? Would we allow the NYT to do it in their newspaper, but not by other means, such as buying ads in another newspaper, should they wish to do so? Is Fox News to be defined as something different from Citizens United?
I’m hard pressed to reconcile freedom of the press with removing the ability of corporations (including media ones) to use money to put out a political message. What I fear is that to do so requires that the law — nay, the constitution — try to define what is being “press” and what is not. This is something we’ve been afraid to do in every other context, and something I and my associates have fought to prevent, as lawsuits have tried to declare that bloggers, for example, were not mainstream press and thus did not have the same freedom of the press as the big boys.
Earlier I wrote about desires for the next generation of DSLR camera and a number of readers wrote back that they wanted to be able to swap the sensor in their camera, most notably so they could put in a B&W sensor with no colour filter mask on it. This would give you better B&W photos and triple your light gathering ability, though for now only astronomers are keen enough on this to justify filterless cameras.
I’m not sure how easy it would be to make a sensor that could be swapped, due to a number of problems — dust, connectivity and more. In fact I wonder if an idea I wrote about earlier — lenses with integrated sensors — might have a better chance of being the future.
Here’s another step in that direction — a “foveal” digital camera that has tiny sensors in the middle of the frame and larger ones out at the edges. Such sensors have been built for a variety of purposes in the past, but might they have application for serious photography?
For example, the 5D Mark II I use has 22 million 6.4 micron pixels. Being that large, they are low noise compared to the smaller pixels found in P&S cameras. But the full frame requires very large, very heavy, very expensive lenses. Getting top quality over the large image circle is difficult and you pay a lot for it.
Imagine that this camera has another array, perhaps of around 16 million pixels of 1.6 micron size in the center. This allows it to shoot a 16MP picture in the small crop zone or a 22MP picture on the full frame. (It also allows it to shoot a huge 252 megapixel image that is sharp in the center but interpolated around the edges.) The central region would have transistors that could combine all the wells of a particular colour in the 4x4 array that maps to one large pixel. This is common in the video modes on DSLR cameras, and helps produce pixels that are much lower noise than the tiny pixels are on their own, but not as good as the 16x larger big pixels, though the green pixels, which make up half the area, would probably do decently well.
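The binning arithmetic above can be sketched in a few lines. This is a back-of-envelope illustration using only the figures quoted in the text (6.4 and 1.6 micron pitches, a 16 MP central patch); the square-root rule for shot-noise-limited SNR is the standard approximation, not a measured figure for any real sensor.

```python
import math

# Pixel pitches quoted in the text, in microns.
LARGE_PITCH_UM = 6.4
SMALL_PITCH_UM = 1.6

# Each large pixel covers a 4x4 block of small ones.
linear_ratio = LARGE_PITCH_UM / SMALL_PITCH_UM        # 4.0
wells_per_bin = round(linear_ratio) ** 2              # 16 small wells per big pixel

# A 16 MP patch of small pixels occupies the same area as 1 MP of
# large pixels, i.e. a 4x linear crop region of the full frame.
small_patch_mp = 16.0
equivalent_large_mp = small_patch_mp / wells_per_bin  # 1.0 MP

# Shot-noise-limited SNR grows with the square root of the number of
# wells combined: 4x4 binning buys a 4x SNR gain (two stops), still
# short of a native large pixel, which gathers 16x the light in a
# single low-read-noise sample.
snr_gain = math.sqrt(wells_per_bin)                   # 4.0

print(wells_per_bin, equivalent_large_mp, snr_gain)
```

This is why the binned central region would sit between a good P&S and the native large pixels in low light, as the text says.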
As a result, this camera would not be as good in low light, and the central region would be no better in low light than today’s quality P&S cameras. But that’s actually getting pretty good, and the results at higher light levels are excellent.
The win is that you would be able to use a 100mm/f2 lens with the field of view of a 400mm lens for a 16MP picture. It would not be quite as good as a real 400mm f/2.8L Canon lens of course. But it could compare decently — and that 400mm lens is immense, heavy and costs $10,000 — far more than the camera body. On the other hand a decent 100mm f/2.8 lens aimed at the smaller image circle would cost a few hundred dollars at most, and do a very good job. A professional wildlife or sports photographer might still seek the $10K lens but a lot of photographers would be much happier to carry the small one, and not just for the saved cost. You would not get the very shallow depth of field of the 400mm f/2.8 — it would be about double with a small sensor 100mm f/2 — but many would consider that a plus in this situation, not a minus.
You could also use 3.2 or 2.1 micron pixels for lower noise and less of a crop (or focal length multiplier, as it is sometimes incorrectly called).
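The crop-factor equivalences for these pixel sizes work out as follows. This is a sketch using the standard equivalent-aperture rule of thumb (multiply the f-number by the crop factor to compare depth of field at matched framing); that rule is my addition, not something stated in the text.

```python
# Full-frame pixel pitch quoted in the text, in microns.
FULL_FRAME_PITCH_UM = 6.4

def crop_factor(pitch_um: float) -> float:
    """Linear crop factor when shooting only on pixels of this pitch."""
    return FULL_FRAME_PITCH_UM / pitch_um

# Field of view and approximate depth-of-field behaviour of a
# 100mm f/2 lens on each candidate central patch.
for pitch_um in (1.6, 2.1, 3.2):
    c = crop_factor(pitch_um)
    equiv_focal_mm = 100.0 * c   # frames like this focal length
    equiv_f_number = 2.0 * c     # f/2's DOF behaves roughly like this
    print(f"{pitch_um} um: crop {c:.2f}x -> 100mm f/2 frames like "
          f"{equiv_focal_mm:.0f}mm at ~f/{equiv_f_number:.1f} for DOF")
```

At the 1.6 micron patch, a 100mm f/2 frames like a 400mm lens but, by this rule, with roughly the depth of field of f/8 on full frame, noticeably deeper than a native 400mm f/2.8, consistent with the note above.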
One other benefit is that, if your lens can deliver it, and particularly when you have decent lighting, you would get superb resolution in the center of your full frame photos, as the smaller pixels are combined. You would get better colour accuracy, without as many Bayer interpolation artifacts, as you would truly sense each colour in every pixel, and much better contrast in general. You would be making use of the fact that your lens is sharper in the center. Jpeg outputs would probably never do the 250 megapixel interpolated image, but the raw output could record all the pixels if it is not necessary to combine the wells to improve signal/noise.
In the case of the helicopter, which was still moving as it was just a regular tour helicopter, the challenge is to shoot very fast and still not make mistakes in coverage. I took several panos but only a few turned out. Victoria Falls can really only be viewed from the air — on the ground, the viewing spots during high water season are in so much mist that it’s actually raining hard all around you, and in any event you can’t see the whole falls. One lesson is to try not to be greedy and go for a 200mm pano. Stick to 50 to 100mm at most.
On this trip I took along a 100-400mm lens, and it was my first time shooting with such a long lens routinely. I knew intellectually about the much smaller depth of field at 400mm, but in spite of this I still screwed up a number of panoramas, since I normally set focus at one fixed distance for the whole pano. Stopping down 400mm only helps a little bit. Wildlife will not sit still for you, creating extra challenges. I already showed you this elephant shot but I am also quite fond of this sunset on the Okavango delta. While this shot may not appear to have wildlife, the sun is beaming through giant spiderwebs which are the work of “social spiders” which live in nests, all building the same web. I recommend zooming in on the scene in the center. I also have some nice regular photos of this which will be up later.
I am still a bit torn about the gallery of ordinary aspect ratio photos. I could put them up on my photo site easily enough, but I’ve noticed photos get a lot more commentary and possibly viewing when done on Google+/Picasa. This is a sign of a disturbing trend away from the distributed web, where people and companies had their own web sites and got pagerank and subscribers, to the centralized AOL style model of one big site (be it Facebook or Google Plus) which is attractive because of its social synergies.
The video is very science-fictional, though they have built a concept to look at without the auto-driving. Amusingly, they do show something I have always thought would be a nice ironic demo — playing a car racing game while in a self-driving car. While we are some distance away from a car where the entire surface, inside and out, is a display, I do think we’ll see display panels on robocars to help them act as taxis. Those display panels will say who the taxi is for, and might even have your favourite bumper sticker slogan while you move inside. Inside displays will be useful for all the things you would expect — dashboard, work and entertainment.
Toyota is also showing a Prius with a system they call AVOS (Automatic Vehicle Operation System). While this is said to be a longer-term self-driving system, reports suggest that what will be done at the motor show is back-seat rides demonstrating parking ability and pickup at the door, similar to Nissan’s Pivo and Stanford’s Junior 3, but with some added obstacle avoidance. I have not seen reports of rides as yet. The Prius itself uses more basic sensors than the Google car and other major robocars.
Nissan has announced a new version of their Pivo concept car. The Pivo 3 (here’s a story with a video) offers 4 wheel steering and automatic parking, including a claimed functionality for automated valet parking. In the AVP case, the car requires a special parking lot, though it is not said what changes are needed. A few years ago the Stanford team demonstrated Junior 3, which could valet park in a lot to which it had a map, and which had no civilian pedestrians.
The Pivo 3, it is reported, “will come and pick you up when you summon it.” Presumably this involves both the parking lot and the path to the door where you summon it containing the special infrastructure it needs, but this is not described. What is described, though, is something fairly important — automatic charging, where the car takes itself to a charging station and hooks up.
They say they have no commercial plans for the car, but that they do expect to put such functionality into other cars around “2016 to 2017.”
With the Tokyo motor show about to start, expect new announcements from Japan in the days to come — for example Toyota has promised a self-driving Prius at the show, in a similar parking lot mode to the Pivo.
In contrast to the optimism I usually present here, and last week’s article about a self-driving Mercedes just a year away, it’s worth noting this interview with various BMW folks where they provide a much more cautious timeline of at least a decade. Part of their concern comes from the use of computer vision systems. These are much cheaper than laser scanners but do not provide the reliability needed; it’s no accident that all the successful teams in the DARPA urban challenge relied very heavily on laser scanning.
I’m enough of an optimist that I am ready to bring forward the question “When will a child be born that never drives because of robocars?” Of course there are many people in the developed world who never get a licence for a variety of reasons, particularly people who live their lives in Manhattan and other transit-heavy cities. But for most of us, getting a licence and getting on the road is a rite of passage. Yet studies are showing that teens are now waiting longer to get a licence with various reasons speculated.
Nonetheless eventually we will see somebody who would normally have been jumping at the chance to get a licence and get out on the road who never gets one because they have a robocar. It won’t be easy of course, since even those who have robocars will still need to travel to places that don’t have them and rent cars, but many people who don’t have licences today just make use of taxis and transit in those situations.
I will put forward the proposal that this child may already have been born. When I see a baby today, I wonder, “will this child ever learn to drive?” While 16 years is aggressive for the ubiquitous fully autonomous operation needed for this, I do think we’re on the cusp, and if that child has not yet been born, it’s not too far away.
One reason for this is all the forces that are already reducing teen driving. A teen debating whether to take the effort to learn to drive might easily be swayed not to because mom has bought him a robocar. Once a successful safety record for robocars is demonstrated, parents will buy them for teens — instead of buying them driving lessons — and pressure the teens to not take the risk of driving themselves.
In other news, here’s a pointer to work by designer Charles Rattray on the look of future robocars. His designs match with my position that many robocars should be half the width of today’s cars, carrying only 1-2 people, since the vast majority of cars today only carry 1-2 people. Today’s car buyers insist on 5 passenger sedans (or larger) but when you have mobility-on-demand you can use the right vehicle for the trip on every trip, and that’s going to mostly be one person vehicles. This in turn, is the real key to efficient transportation, because while you can do great things with more efficient or electric power trains and more aerodynamic cars, nothing compares to making the car smaller, lighter and narrower in a major way. He has many design sketches and a video of how he sees the cars in action.
For the first time, a car company has put a date on shipment of a car with self-driving ability.
According to British site Auto Express, Mercedes has revealed that their 2013 S-class will feature self-driving. Not clear if there is an official company press release, though the company has been talking about such features, as have many other companies. Realize that the 2013 model year is just a year away.
The car will feature radar based automatic cruise control, combined with lane-marker following, and the automatic driving will only operate below 40kph. In other words, this is designed to let you take your hands off the wheel in stop-and-go traffic jams, not to drive you at actual open driving speeds. You’ll need to pay attention to the road, not read a book, but at that low speed you’ll have decent warning if something goes wrong and the car starts drifting, so I suspect that in spite of warnings not to do so, people will get away with minor tasks like reading a few e-mails or even sending some.
While a very basic level introduction, this is still a milestone and will pave the way (love those road metaphors) for other companies. While the focus of the DARPA grand challenges and most visions of the robocar future has been on cars that can drive completely on their own, there are now strong signals that the technology will arrive in the form of driving assist, and human drivers will be called upon to still do much of the driving, in particular the tricky bits the systems can’t yet handle safely. In my article a few years ago, the roadmap to robocars, I suspected we might see a few specialized applications first, such as robot valet parking and even autonomous vehicles for military delivery applications, but now the autopilot is on track for showing up commercially first.
I shoot with the Canon 5D Mark II. While officially not a pro camera, the reality is that a large fraction of professional photographers use this camera rather than the EOS-1D cameras, which are faster but much bulkier and in some ways even inferior to the 5D. But it’s been out a long time now, and everybody is wondering when its successor will come and what features it will have.
Each increment in the DSLR world has been quite dramatic over the last decade. There’s always been a big increase in resolution with the new generation, but now at 22 megapixels there’s less call for that. While there are lenses that deliver more than 22 megapixels sharply, they are usually quite expensive, and while nobody would turn down 50mp for free, there just wouldn’t be nearly as much benefit from it as there was from the last doubling. Here’s a look at features that might come, or at least be wished for.
More pixels may not be important, but everybody wants better pixels.
Low noise / higher ISO: The 5D2 astounded us with ISO 3200 shots that aren’t very noisy. Unlike megapixels, there is almost no limit to how high we would like ISO to go at low noise levels. Let’s hope we see 12,500 or more at low noise, plus even 50,000 noisy. Due to physics, smaller pixels have higher noise, so this is another reason not to increase the megapixel count.
3 colour: The value of full 3-colour samples at every pixel has been overstated in the past. The reason is that Bayer interpolation is actually quite good, and almost every photographer would rather have 18 million Bayer pixels than 6 million full RGB pixels. It’s not even a contest. As we start maxing out our megapixels to match our lenses, this is one way to get more out of a picture. But if it means smaller pixels, it causes noise. The Foveon approach, which stacked the 3 pixels, would be OK here — finally. But I don’t expect this to be very likely.
Higher dynamic range: How about 16 bits per pixel, or even 24? HDR photography is cool but difficult. But nobody doesn’t want more range, if only for the ability to change exposure decisions after the fact and bring out those shadows or highlights. Automatic HDR in the camera would be nice but it’s no substitute for true high-range pixels.
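To see why those bit depths matter, here is the simple ceiling arithmetic, assuming linear encoding: each extra bit doubles the largest representable signal, so an N-bit sample can span at most N stops from its smallest step to clipping. Real sensors deliver fewer usable stops than this theoretical ceiling.

```python
# Theoretical upper bound on dynamic range from bit depth alone,
# assuming a linear raw encoding. One photographic stop is a doubling
# of light, so an N-bit linear value covers at most N stops.

for bits in (14, 16, 24):
    contrast_ratio = 2 ** bits     # brightest value : smallest step
    max_stops = bits               # log2(contrast_ratio)
    print(f"{bits}-bit linear: at most {max_stops} stops "
          f"({contrast_ratio:,}:1)")
```

So moving from today’s 14-bit raws to 16 or 24 bits raises the ceiling by 2 or 10 stops, though noise, not bit depth, is usually the binding limit.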
Video & Audio
Due to the high quality video in the 5D2, many professional videographers now use it. Last week Canon announced new high-end video cameras aimed at that market, so they may not focus on improvements in this area. If they do, people might like to see things like 60 frame video, ability to focus while shooting, higher ISO, and 4K video.
Last week, new studies came back on the California High Speed Rail project. They have raised the estimated cost to $99 billion, and dropped the ridership estimate to 36.8 million and $5.5 billion in annual revenue. Note that only around 20 million people currently fly the SF to LA corridor — they expect to not just capture most of those but large numbers of central valley trips.
Even at the earlier estimates the project was an obvious mistake, and there’s no way to financially justify spending $99 billion to pull in $5.5 billion/year even subbing zero in for the large operating cost. But for various political reasons involving getting federal money, some are still pushing for this project, and we may well build a short train to nowhere in the central valley just to get the federal bucks.
They’re planning there because the various cities in the populated areas have been fighting legal battles to block the train there, not wanting its disruption. Because the train can only stop in a very few places at the speed it wants to go, a lot of towns would end up having construction and noise and street blockage and not get a lot of use from the train.
The local opposition is a tough barrier, because the train ends up really only being useful where the people are. While I have doubts about how many people would ride the long haul, since few want to go from downtown SF to downtown LA, lots of people would ride a fast train in the urban areas. In particular, what nobody talks about is running the HSR primarily to the airport, and streamlining both security clearance and the connection with new technology. The only reason HSR is pushed as possibly competing with flights is because of the nightmare we have made of flying, where people have to get to airports 45 minutes ahead of even short-haul flights and take a fair bit of time to get out of airports on the other end and make it through traffic to their destinations. A fast train from a downtown to the airport where you clear security (and check bags) right on the train, and the train drops you right at the central gate areas post security would create an unbeatable trip from downtown anywhere to downtown anywhere.
For fast trains, the San Francisco to San Jose route is so short that a 250mph HSR could do the 48 mile trip between the towns in 12 minutes without stopping, call it 15 with the start and stop at each end. This opens up an interesting cost saving — you could build a single track, and have a train zip back and forth on it, and still provide service every 30 minutes. You could put a double-track section in the middle and have service every 15 minutes, with lots of safety interlocks of course. A single track requires less land, less of everything and could probably be built along easier routes, even highway medians in some cases. You could avoid turnaround time by having double track at the endpoints, so one train is leaving on the opposite route the moment the other train arrives, giving each train quite a long turnaround — with double rolling stock.
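The timetable above can be checked with a quick sketch. The distance, speed, and "call it 15 minutes" trip time are the figures from the text; the headway logic is just the standard observation that one train on one track can depart a given end once per round trip.

```python
# Single-track shuttle timetable for the SF-San Jose run described above.
DISTANCE_MI = 48.0
TOP_SPEED_MPH = 250.0

running_min = DISTANCE_MI / TOP_SPEED_MPH * 60.0   # ~11.5 min nonstop
trip_min = 15.0   # with acceleration/deceleration at each end

# One train on a single track can depart a given end once per round trip.
single_track_headway_min = 2.0 * trip_min          # service every 30 min

# A double-track passing section mid-route plus a second train
# halves the headway.
two_train_headway_min = single_track_headway_min / 2.0   # every 15 min

print(round(running_min, 1), single_track_headway_min, two_train_headway_min)
```

Each stop added along the way stretches trip_min and thus the headway, which is why stops erode the single-track trick, as noted below.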
Of course, having no stops is not that valuable because only a few people want to go from SJ to SF. People would want a stop at the airport as I have indicated, and at least one in Mountain View or Palo Alto. Each stop costs a bunch of time, and eventually the trip gets long enough that the single-track trick becomes less useful. For a while I’ve wondered if you could make trains that could dock, so that the main train runs non-stop and is able to shed cars which stop at local stops (not that hard) and to dock with cars coming from local stops (harder.) I proposed this 7 years ago near the start of this blog, and there are serious rail designers thinking along the same lines — see the video in that link.
In the Priestman Goode proposal, they have trains docking side to side. That seems much more challenging though it offers fast transfer. If you combine the two ideas, you would have two tracks — one for the nonstop trains and one for the docking shuttles which serve all the local stops. Indeed, if you could do this you could get rid of the old regular speed rail service running on existing track pairs because this would be superior in all ways except cost. My own proposals attempted to dock on a single track, which seems easier to me.
Robocars play a role in all this too. Even the HSR authority realizes they have a big problem, in that once people get quickly to an HSR station, they still have to get to their real destination. Using local transit may mean spending more time on a local bus than on the HSR. The mobility on demand of robocars is a great answer, and I’m pretty sure that with a 2030 forecast completion date (if they’re lucky) we’ll have robocars long before then. And the one thing cars can’t readily do is go very fast efficiently between cities.
The docking approach, should it work, has another advantage. The main train can take the best route (cheapest or shortest) without too much regard for where the stations are. People like stations in urban centers, but bringing the high speed train right through such areas (like Palo Alto) is hard and has caused the lawsuits. If the train goes through the industrial space along the Bay, and a spur goes into downtown for the shuttle that docks with it, you get a win all around.
Another approach that doesn’t require dock/undock works when you have a solid terminus like SF. You have 3 trains leave SF at the same time. The first one goes express to San Jose. The second goes express to Palo Alto and Mountain View and then switches to low speed tracks to go to Sunnyvale and Santa Clara. The third goes to SFO airport. Because SFO airport is also an origination point, it sends a train to SJ just before or after the one from SF, and another train to Mountain View right after that one. Mountain View to SJ service might be able to fit in or have to be local service. These sub-trains are just a few cars. This is not as energy efficient, though it can be if the trains are able to get close to one another and draft, sort of a virtual coupling without physical contact. You need perfect sync, and special long-spring collision bumpers in case the sync fails and they bump. The risk of higher-speed bumping must be prevented by failsafes that don’t even let the trains get on the same track until speed is matched close enough. This requires more than just a single track of course.
Congestion on the roads has a variety of sources. These include accidents of course, reductions in road capacity, irrational human driving behaviours and others, but most of all you get congestion when more cars are trying to use a road than it has capacity for.
That’s why the two main success stories in congestion today are metering lights and downtown congestion charging. Metering lights limit how fast cars can enter the highway, so that you don’t overload it and traffic flows smoothly. By waiting a bit at the metering light you get a fast ride once on the highway. Sometimes though, especially when the other factors like accidents come into play, things still gum up.
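The metering logic itself is simple. The widely deployed ALINEA controller, for example, nudges the on-ramp release rate up or down based on how occupied the highway is just downstream. A minimal sketch in Python (the gain, target occupancy and rate limits here are illustrative values, not from any real deployment):

```python
def alinea_rate(prev_rate, measured_occupancy,
                target_occupancy=18.0, gain=70.0,
                min_rate=240, max_rate=1800):
    """One update step of the ALINEA ramp-metering controller.

    prev_rate          -- previous metering rate, vehicles/hour
    measured_occupancy -- downstream detector occupancy, percent (0..100)
    Returns the new metering rate, clamped to physical limits.
    """
    rate = prev_rate + gain * (target_occupancy - measured_occupancy)
    return max(min_rate, min(max_rate, rate))

# Highway over-occupied: let fewer cars on, so traffic keeps flowing smoothly.
rate = alinea_rate(900, measured_occupancy=25.0)
# Highway under-occupied: the light cycles faster and the rate rises again.
rate = alinea_rate(rate, measured_occupancy=10.0)
```

The clamping matters: without a minimum rate the ramp queue would back up onto surface streets, which is exactly the trade-off metering lights make.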
Now that more and more cars are connected (by virtue of just the smartphone the driver carries, if nothing else), the potential will open up for something else in congestion — finding ways to encourage drivers to leave a congested road.
A little self-plug: I have an article introducing panoramic photographic technique in the November issue of Photo Technique, with a few panos in it. This is old-world journalism, folks — you have to read it on paper, at least for now.
We decided to go to Harvey’s Pan in Savuti one afternoon and lucked upon a large breeding group of elephants just on their way there. I caught them in one of my first long-lens panoramas. Long-lens panos are fairly difficult due to the limited depth of field, but they capture great detail on the baby elephants.
I’m just back from the “ITS World Congress,” an annual meeting of people working on “Intelligent Transportation Systems,” which means all sorts of applications of computers and networking to transportation, particularly cars. A whole bunch of stuff gets covered there, including traffic monitoring and management, toll collection, transit operations, etc., but what’s of interest to robocar enthusiasts is what goes into cars and streets. People started networking cars with systems like OnStar, now known in the generic sense as “telematics,” but things have grown since then.
The big effort involves putting digital radios into cars. The radio system, known by names like 802.11p, WAVE and DSRC, involves an 802.11-derived protocol in a new dedicated band at 5.9 GHz. The goal is a protocol suitable for safety applications, with super-fast connections and reliable data. Once the radio is in the car, the car will be able to use it to talk to other cars (known as V2V) or to infrastructure facilities such as traffic lights (known as V2I). The initial plan figured that V2I services would give you internet in your car, but the reality is that 4G cellular networks have taken over that part of the value chain.
Coming up with value for V2V is a tricky proposition. Since you can only talk to cars very close to you, it’s not a reliable way to talk with any particular car. Relaying through the wide area network is best for that unless you need lots of bandwidth or really low latency. There’s not much that needs lots of bandwidth, but safety applications do demand both low latency and a robust system that doesn’t depend on infrastructure.
The current approach to safety applications is to have equipped cars transmit status information. Formerly called a “here I am,” this is a broadcast of location, direction, speed and signals like brake lights, turn signals, etc. If somebody else’s car is transmitting that, your car can detect their presence, even if you can’t see them. This lets your car detect and warn about things like:
The car 2 or 3 in front of you, hidden by the truck in front of you, that has hit the brakes or stalled
People in your blind spot, or who are coming up on you really fast when you’re about to change lanes
Hidden cars coming up when you want to turn left, or want to pass on a rural highway
Cars about to run red lights or blow stop signs at an intersection you’re about to go through
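The scenarios above all reduce to the same computation: compare received broadcasts against your own state and flag the dangerous ones. As a sketch of the idea, here is a toy version of the hidden-braking-car check (the field names, the 30° heading tolerance and the 150 m range are my own illustrative choices, not the actual SAE message format):

```python
import math
from dataclasses import dataclass

@dataclass
class StatusMsg:
    """A simplified 'here I am' broadcast (illustrative, not the real standard)."""
    temp_id: str      # rotating pseudonym, changed every minute for privacy
    x: float          # position east, metres
    y: float          # position north, metres
    heading: float    # degrees
    speed: float      # metres/second
    braking: bool     # brake lights on

def braking_hazards(me: StatusMsg, received: list, max_range=150.0):
    """Return IDs of nearby braking cars going roughly our direction --
    including ones hidden behind the truck directly in front of us."""
    hazards = []
    for msg in received:
        dist = math.hypot(msg.x - me.x, msg.y - me.y)
        # Smallest angle between the two headings, in degrees.
        same_dir = abs((msg.heading - me.heading + 180) % 360 - 180) < 30
        if msg.braking and same_dir and dist <= max_range:
            hazards.append(msg.temp_id)
    return hazards
```

The point the radio makes possible is the first line of the check: `dist` comes from a broadcast, so the hazard is detected even with no line of sight.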
Privacy is a big issue. The boxes change their ID every minute so you can’t track a car over a long distance unless you can follow it over every segment, but is that enough? They say a law is needed so the police don’t use the speed broadcast to ticket you, but will it stay that way?
It turns out that intersection collisions are a large fraction of crashes, so there’s a big win there, if you can do it. The problem is one of critical mass. Installed in just a few cars, such a system is extremely unlikely to provide aid. For things like blindspot detection, existing systems that use cameras or radars are far better because they see all cars, not just those with radios. Even with 10% penetration, there’s only a 1% chance any given collision could be prevented with the system, though it’s a 10% chance for the people who seek out the system. (Sadly, those who seek out fancy safety systems are probably less likely to be the ones blowing through red lights, and indeed another feature of the system — getting data from traffic lights — already can do a lot to stop an equipped car from going through a red light by mistake.)
They say that famous deaths come in threes. That’s no doubt just an artifact of our strange sense of coincidence, but after Jobs and Ritchie, tonight we learn of the death of John McCarthy, AI pioneer and creator of LISP.
My first personal encounter with John was part of a big story of my life, the banning of rec.humor.funny. To summarize the story told there: RHF had been banned at Waterloo and later, due to a comedy of errors, got banned at Stanford. Shortly after the ban, John called me up and said he wanted to be a champion against it. He had been worried for some time about the growing tide of speech codes at supposed bastions of academic freedom, and the idea of banning publications on the internet took things to a new level. John used his sway to get some press, organize a protest march and have the matter fixed by the academic senate. Strangely, just a few days ago I was at a dinner for a group called FIRE, which fights against crazy academic bans, and I was recounting the story of what John did at Stanford for the first time in many years.
Later, I moved to Silicon Valley and got to know John in person a bit more. He was an incredible force of character long after the age at which most have shrunk away. If the AIs of the future are able to resurrect the figures of the past, you know he’ll be one of the first in line for them.
RIP John. And Dennis (who I praised over on Google+). And Steve Jobs. Let’s really limit it to three for a while.
Since getting involved with Google’s self-driving-car team, I’ve had to keep silent about its internals, but for those who are interested in the project, a recent presentation at the intelligent robotics conference in San Francisco is now up on YouTube. The talk is by Sebastian Thrun (overall project leader) and Chris Urmson, lead developer. Sebastian led the Stanley and Junior teams in the DARPA Grand Challenges and Chris led the CMU teams, including Boss, which won the Urban Challenge.
The talk begins in part one with the story of the grand challenges. If you read this blog you probably know most of that story.
Part two (above) shows video that’s been seen before in Sebastian’s TED talk and my own talks, and maps of some of the routes the car has driven. Then you get Chris showing some hard technical details about mapping and sensors.
Part three shows the never before revealed story of a different project called “Caddy”: self-driving, self-delivering golf carts for use in campus transportation. The golf carts are an example of what I’ve dubbed a WhistleCar — a car that delivers itself and then you drive it in any complex situations.
If you want to see what’s inside the project, these videos are a must-watch, particularly part 2 (embedded above) and the start of part 3.
There’s lots of other robocar news after the Intelligent Transportation Systems conference, which I attended this week in Orlando, FL. The ITS community is paying only minimal attention to robocars, which is an error on their part, but a lot of the technology there will eventually affect how robocars develop — though a surprising amount of it will become obsolete because it focuses on the problems caused by lots of human driving.
The list of robocar teams grows again with a new project from Oxford University, led by Paul Newman. Nissan is also involved, though the base vehicle is a Bowler Wildcat off-road vehicle.
The project sports a LIDAR design I have not seen before, with 4 laser units on a mount spinning at what looks like 1-2 Hz, though they claim a 40 Hz sampling rate, and they do have very nice mapping results. They claim their localizer is very good, and demos show it working on rough off-road terrain. Some videos also show it doing waypoint driving without the LIDAR, though they talk about why GPS is not adequate.
The claims about the vehicle have a British understatement to them. They say it will be 10-15 years before it’s ready for the roads, and talk mostly about simple problems like handling traffic jams — something Audi, BMW and VW have all claimed they will release in the middle of this decade, using simpler sensor systems. Newman also envisions a future arms race, where a car that can do 10 minutes/day of self-driving competes with one that can do 15.
Congestion is their main message, it seems, citing the Dept. for Transport’s figure of a 25 billion pound cost for congestion in the UK in 2025.
Boston Dynamics has gone even further with their latest model, AlphaDog.
The AlphaDog’s legs are hydraulic, so adding legs like this to a car which has a motor and compressor is not so far-fetched. In this design they could easily fold up into the sides of a single-person wheeled vehicle. In the video, the robot is shown carrying 400lbs of weights, and a range of 20km is claimed. You might not quite want to ride it yet, but that’s coming.
Let’s look at some of the consequences for transportation and cities:
Houses need not be on streets to have full access by small vehicles and cargo delivery robots. They can be on the side of hills and up stairs. Neighbourhoods can be built with just small lightly paved or graded paths so that the robot’s legs don’t disturb the terrain.
The robots may well, in a controlled environment, be able to place their feet with good precision. As such the path for a walking robot might look like just a series of stone pads dotting the grass — the way some paths for people look. In reality they would be more sturdy, but that’s what they could look like.
In developing countries which do not have infrastructure, they may never have to put in that much infrastructure. Combined with flying robots, delivery of goods can become possible to any location, and at high speed.
The world’s tourist destinations may become swamped with people who can ride a walking robot to remote locations where before the daunting hike kept the crowds down. There will be efforts to ban walking chairs, but the elderly and disabled will be able to fight such bans as discriminatory.
Indeed, for the disabled and aged, the walking chair robot might well open up lots of the world that is now closed. The main issues would be power and noise. The motors that power BigDog are very noisy, and AlphaDog in the video is using external power.
Robotic cargo delivery (deliverbots) need no longer be limited to places you can roll up to. That can include places inside buildings, even up stairs.
I’m actually not a fan of login and sessions on the web, and in fact prefer a more stateless concept I call authenticated actions to the more common systems of login and “identity.”
But I’m not going to win the day soon on that, and I face many web sites that think I should have a login session, and that the session should in fact terminate if I don’t click on the browser often enough. This frequently has really annoying results — you can be working on a complex form or other activity, then switch off briefly to other web sites or email, only to come back and find that “your session has expired” and you have to start from scratch.
There are times when there is an underlying reason for this. For example, when booking things like tickets, the site needs to “hold” your pending reservation until you complete it, but if you’re not going to complete it, they need to return that ticket or seat to the pool for somebody else to buy. But many times sessions expire without that reason. Commonly the idea is that for security, they don’t want to leave you logged on in a way that might allow somebody to come to your computer after you leave it and take over your session to do bad stuff. That is a worthwhile concept, particularly for people who will do sessions at public terminals, but it’s frustrating when it happens on the computer in your house when you’re alone.
Many sites also overdo it. While airlines need to cancel your pending seat requests after a while, there is no reason for them to forget everything and make you start from scratch. That’s just bad web design. Other sites are happy to let you stay “logged on” for a year.
To help, it would be nice if the browser had a way of communicating things it knows about your session with the computer to trusted web sites. The browser knows if you have just switched to other windows, or even to other applications where you are using your mouse and keyboard. Fancier tools have even gone so far as to use your webcam and microphone to figure out whether you are still at your desk or have left the computer. And you know whether your computer is in a public space, semi-public space or entirely private space. If a browser, or browser plug-in, had a standardized way to let a site query session status, or be informed of session changes and per-machine policy, sites could be smarter about logging you out. That doesn’t mean your bank shouldn’t still be paranoid when you are logged in to a session where you can spend your money, but they can be more informed about it.
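Server-side, part of the fix needs no browser help at all: expire on idleness rather than on a fixed clock, and let low-risk state like a half-finished form outlive the high-risk login. A minimal sketch of that split, with made-up timeout values:

```python
import time

class Session:
    """A session that expires on *idleness*, not on a fixed clock, and that
    keeps low-risk state (form contents) far longer than the login itself.
    The timeout values here are illustrative."""
    LOGIN_IDLE_LIMIT = 15 * 60        # high-risk: actions that spend money
    FORM_IDLE_LIMIT  = 24 * 60 * 60   # low-risk: your half-finished form

    def __init__(self, now=None):
        self.last_activity = now if now is not None else time.time()
        self.form_state = {}          # saved independently of the login

    def touch(self, now=None):
        """Call on any user activity the client reports."""
        self.last_activity = now if now is not None else time.time()

    def login_valid(self, now=None):
        now = now if now is not None else time.time()
        return now - self.last_activity < self.LOGIN_IDLE_LIMIT

    def form_valid(self, now=None):
        now = now if now is not None else time.time()
        return now - self.last_activity < self.FORM_IDLE_LIMIT
```

With this split, when the airline logs you out it can still greet your re-login with the form exactly as you left it, instead of making you start from scratch.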
But there’s one place this might make sense. I think you should get a chance to do a survey after every interaction with the police, as well as with others who have some color of authority over you (judges, security guards, border patrol, etc.). The data you enter would be anonymous, and the survey conducted by a different party bonded to protect your privacy. There would also be some means of recording (perhaps with different classes of card) whether the encounter was assistive, was a stop, or led to arrest, though there are limits on this while keeping the data anonymous. If you are required to identify yourself as part of the encounter, this can be your means of getting a card later, though again the data entered must not be tied to your name.
Police would get small cards which have a cryptographic code which allows the bearer to fill out the survey. They would be required to hand one out in any incident. The number handed out would need to be close to the count in their own incident report, so that they don’t just keep the cards to fill out positive surveys on themselves. If police won’t give you a card that’s a serious matter itself.
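The cryptographic part of such a card is not exotic: each card carries a random one-time code, and the survey site holds only hashes of the valid codes, so a submitted form proves a real card was handed out without revealing which batch (and hence which officer or encounter) it came from. A sketch of that scheme, assuming a trusted issuer who shuffles all officers' codes together before printing (the function names and code length are my own):

```python
import hashlib
import secrets

def issue_cards(n):
    """Issuer prints n one-time codes; the survey site receives only hashes,
    so it can verify cards without being able to generate or link them."""
    codes = [secrets.token_urlsafe(16) for _ in range(n)]
    valid_hashes = {hashlib.sha256(c.encode()).hexdigest() for c in codes}
    return codes, valid_hashes

def redeem(code, valid_hashes):
    """Accept a survey exactly once per card; reject reuse and forgeries."""
    h = hashlib.sha256(code.encode()).hexdigest()
    if h in valid_hashes:
        valid_hashes.remove(h)   # burn the code so it can't be resold or reused
        return True
    return False
```

Burning the hash on redemption also helps with the resale problem mentioned below: a short validity window plus one-time use makes a card worth little on any secondary market.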
Of course, people who have been stopped, rather than assisted by police will have a naturally antagonistic view. What would matter in these surveys would be how each officer compares to the other officers. You would not judge officers on their absolute score, but their score relative to other officers with similar duties. These scores would be admissible in court when an officer testifies. An officer with a seriously bad record would become less trusted by judges and juries. The worst cops would have to leave the force, being unable to testify in court without being doubted. And the absolute numbers would also tell us something. On the forms, people could complain about misuse of authority and corruption, and could also leave positive remarks.
The 3rd party taking in the data would have to have impeccable credentials so people trust that it truly destroys any association between submitter and data. They would also have to be trained at how to protect against re-linking. (For example, if dates can be figured out, officers may well be able to connect people with forms. As such data must be released slowly, and only after a large enough number of forms are in the batch, and forms with unique profiles must be merged with care.) In most cases the 3rd party would have to be in another state, and possibly another country to assure it is not under the sway of those it is collecting data on.
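The batch-and-merge discipline described above amounts to a k-anonymity rule: release a form only when enough other forms share its quasi-identifying profile, and hold back (for generalization or merging) any form whose profile is rare. A rough sketch, with hypothetical field names and an illustrative k:

```python
from collections import Counter

def split_batch(forms, key_fields, k=5):
    """Split survey forms into (release, hold) using a simple k-anonymity
    rule on quasi-identifying fields (e.g. precinct, week, type of stop).
    A form is released only if at least k forms share its profile."""
    profiles = Counter(tuple(f[field] for field in key_fields) for f in forms)
    release, hold = [], []
    for f in forms:
        profile = tuple(f[field] for field in key_fields)
        (release if profiles[profile] >= k else hold).append(f)
    return release, hold
```

Held forms are not discarded; they wait for later batches, or get their fields generalized (week into month, precinct into district) until their profile is no longer unique enough to let an officer guess who filed them.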
We also would have to assure that people don’t try to sell the survey cards. That’s hard, if they are to be truly anonymous. You might have to use them quickly, to avoid giving you time to find a buyer. The 3rd party could run regular stings trying to buy and sell cards and pierce anonymity on just those. I’m sure that there are other ways officers would try to game the system that would have to be found and dealt with. Over time, the data should become public in amalgamated form, not just available to defence lawyers.