
Ride-sharing apps instead of Bus Rapid Transit?

You may have heard of Bus Rapid Transit: a system that gives a bus line a private or semi-private right-of-way, along with bus stops that are more akin to stations than bus shelters (with ticket-taking machines and loading platforms for multiple doors). The idea is to make bus transit competitive with light rail (LRT) in terms of speed and convenience; aside from getting caught in slow traffic, buses are also slow to board. BRT is hoped to be vastly less expensive than light rail, which is not hard because LRT (which means light-capacity rail, not lightweight rail) has gotten up to $80 to $100M per mile. When BRT runs down the middle of regular roads, it gets signal-timing assistance so it stops less often at lights. It’s the “hot new thing” in transit. Some cities even give it bits of underground or elevated ROW (the Boston Silver Line), and others just want to wall off the center of a road to make an express bus corridor. Sometimes BRT gets its own highway lane or shares a special carpool lane.

At the same time, just about anybody who has looked at transit and the internet has noticed that as the buses go down the street, they travel alongside tons of cars carrying only one person and lots of empty seats. Many have wondered, “how could we use those empty private car seats to carry the transit load?” There are a number of ride-sharing and carpooling apps on web sites and smartphones, but success has been modest. Drivers tend not to want to take the time to declare their route, and if money is offered, it’s usually not enough to outweigh the inconvenience. Some apps are based on social networks so friends can give rides to friends; that’s great when it works, but not something you can easily do on demand.

But one place I’ve seen a lot of success at this is the casual carpooling system found in a number of cities. In the San Francisco Bay Area it’s very popular for crossing the Oakland-SF Bay Bridge, which has a $6 toll to enter SF. It used to be free for 3-person carpools, and now costs them $2.50, but the carpools also get a faster lane for access to the highly congested bridge both going into and out of SF.

Almost all the casual carpool pickup spots coming in are at BART (subway) stations, which are both easy for everybody to get to and which let those who can’t find a carpool just take the train. There is some irony in the fact that the carpools mostly carry people who would otherwise have ridden BART trains, not people who would have driven, which is the official purpose of carpool subsidies. In the reverse direction the carpools are far fewer, since there is no toll to be saved, though you do get a better onramp.

People drive the casual carpools because they get something big for it: savings of over $1,000/year, and hopefully a shorter line to the bridge. This is the key factor in ride-sharing success. The riders save a similar amount of money in BART tickets, and even more if their alternative was driving.

Let’s consider what would happen if you put in the dedicated lane for BRT, but instead of buses you created an internet-mediated carpooling system. Drivers could enter the dedicated lane only if:

  • They declare their exit in advance in the app on their phone, and that exit is far enough away to be useful to riders.
  • They agree to pick up the riders their phone directs them to.
  • They optionally get a background check, which they pay for, so they can be bonded in some way to do this. (Only the score of the background check is recorded, not the details.)

Riders would declare their own need for a ride, and the destination, on their own phones, or on screens mounted at “stops” (or possibly in nearby businesses like coffee shops). When a rider is matched to a car, the rider will be informed and get to see the approach of their ride on the map, as well as a picture of the car and its plate number. The driver will be signaled and told by voice command where to go and who to pick up. I suggest calling this Carpool-Rapid-Transit or CRT.
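To make that concrete, here is a minimal sketch of the kind of rider-driver matching such a CRT app would need to do. The stop numbering, data structures and function names (Driver, RideRequest, match_rider) are hypothetical illustrations under simple assumptions, not a description of any real system.

```python
# Minimal sketch of matching for an internet-mediated carpool lane.
# Stops along the corridor are numbered in order of travel; everything
# here is a hypothetical illustration.

from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    current_stop: int      # where the driver is now (corridor stop index)
    declared_exit: int     # exit declared in advance in the phone app
    seats_free: int

@dataclass
class RideRequest:
    rider_id: str
    pickup_stop: int
    dropoff_stop: int

def match_rider(request: RideRequest, drivers: list[Driver]) -> Driver | None:
    """Pick the nearest upstream driver who passes the rider's pickup stop
    and stays in the lane at least until the rider's dropoff."""
    candidates = [
        d for d in drivers
        if d.seats_free > 0
        and d.current_stop <= request.pickup_stop      # hasn't passed the rider yet
        and d.declared_exit >= request.dropoff_stop    # exit far enough to be useful
    ]
    if not candidates:
        return None
    # Prefer the driver closest to the pickup stop so the rider waits least.
    best = max(candidates, key=lambda d: d.current_stop)
    best.seats_free -= 1
    return best

# Example: a rider at stop 3 headed for stop 9.
drivers = [Driver("car-A", 1, 12, 2), Driver("car-B", 2, 5, 3)]
print(match_rider(RideRequest("rider-1", 3, 9), drivers))  # car-A; car-B exits too early
```

A real system would of course also weigh driver detour time, rider wait limits and reputation scores; this only shows the basic eligibility test implied by the rules above.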

Watson, game 2

Not much new to report after the second game of the Watson Jeopardy Challenge. I’ve added a few updates to yesterday’s post on Watson. The result was as expected, though Watson struggled a lot more in this game than in the prior round, declining to answer many questions due to low confidence and making a few mistakes. In a few cases it was saved by not buzzing fast enough even when it had over 50% confidence, because its answer would have been slightly wrong.

Some quick updates from yesterday, which you will also find in the comments:

  • Toronto’s 2nd-busiest airport, the small Island airport, has the official but rarely used name of Billy Bishop. Bishop was one of the top flying aces of WWI, not WWII. Why Watson answered as it did is still not clear, but that it made mistakes like this is not surprising. That it made so few is surprising.
  • You can buzz in as soon as Trebek stops speaking. If you buzz early, you can’t buzz again for 0.2 seconds. Watson gets an electronic signal when it is time to buzz, and then physically presses the button. The humans get a light, but they don’t bother looking at it; they try to time when Trebek will finish. I think this is a serious advantage for Watson.
  • This IBM Blog Post gives the details on the technical interface between Watson and the game.
  • Watson may have seemed confident with its large bet of $17,973, but in fact the bet was fixed in advance (see the sketch after this list):
    • Had Jennings bet his whole purse (and got it right) he would have ended up with $41,200.
    • If Watson had lost its bet of $17,973, it would have ended up with $41,201 and a bare victory.
    • Both got it right, and Jennings bet low, so it ended up being $77,147 to $24,000.
    • Jennings’ low bet was wise, as it assured him of 2nd place and a $300K purse instead of $200K. Knowing he could not beat Watson unless Watson bet stupidly, he did the right thing.
    • Jennings still could have bet more and kept 2nd, but there was no value in it; the 2nd-place purse is always $300K.
    • If Watson had wanted to second-guess, it might have realized Jennings would do this and bet accordingly, but that’s not a game you can play more than once.
    • As you might expect, the team put a bunch of thought into the betting algorithm, as that is one thing computers can sometimes do perfectly. I’ve often seen Jeopardy players lose from bad betting.
  • It still sure seemed like a program sponsored by IBM. But I think it would have been nice if the PI of DeepQA had been allowed up on stage for the handshake.
  • I do wish they had programmed a bit of a sense of humour into Watson. Fake, but fun.
  • Amusingly Watson got a category about computer keyboards and didn’t understand it.
  • Unlike the human players, who will hit the buzzer before they have formed the answer in their minds in the hope that they know it, Watson does not buzz unless it has computed a high-confidence answer.
  • Watson would have bombed on visual or audio clues. The show has a rule allowing those to be removed from the game for a disabled player, and those rules were applied.
  • A few of the questions had some interesting ironies based on what was going on. I wonder if that was deliberate or not. To be fair, I would think the question-writers would not be told what contest they were writing for.
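To make the “fixed in advance” bet concrete, here is the arithmetic as a tiny sketch. The dollar figures are the ones quoted in the list above; the function name leader_safe_bet is just for illustration, not anything from IBM.

```python
# The arithmetic behind Watson's $17,973 Final Jeopardy bet, as described above.
# A leader's rule of thumb: bet so that even a miss leaves you $1 above the most
# the challenger can reach by doubling up.

def leader_safe_bet(leader_total: int, challenger_max: int) -> int:
    """Largest bet the leader can make and still finish $1 ahead even if wrong."""
    return leader_total - (challenger_max + 1)

watson_before_final = 41_201 + 17_973   # $59,174, implied by the figures above
jennings_max = 41_200                    # Jennings betting his whole purse and getting it right

bet = leader_safe_bet(watson_before_final, jennings_max)
print(bet)                               # 17973 -- exactly what Watson wagered
print(watson_before_final - bet)         # 41201 -- still $1 ahead even on a miss
print(watson_before_final + bet)         # 77147 -- the actual final total
```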

Watson, come here, I want you

The computer science world is abuzz, along with the game show world, over the showdown between IBM’s “Watson” question-answering system and the best human players ever to play Jeopardy. The first game has been shown, with a crushing victory by Watson (in spite of a tie after the first half of the game).

Tomorrow’s outcome is not in doubt. IBM would not have declared itself ready for the contest without being confident it would win, and they wouldn’t be putting all the advertising out about the contest if they had lost. What’s interesting is how they did it and what else they will be able to do with it.

Dealing with a general question has long been one of the hard problems in AI research. Watson isn’t quite there yet, but it’s managed a great deal with a combination of algorithmic parsing and understanding and machine learning based on prior Jeopardy games. That’s a must, because Jeopardy “answers” (clues) are often written in obfuscated styles, with puns and many idioms, which are exactly the sorts of things most natural language systems have had a very hard time with.

Watson’s problem is almost entirely understanding the question. Looking up obscure facts is not nearly so hard if you have a copy of Wikipedia and other databases on hand, particularly one parsed with other state-of-the-art natural language systems, which is what I presume they have. In fact, one would predict that Watson would do best on the hardest $2,000 questions, because those are usually hard because they refer to obscure knowledge, not because the question is harder to understand. I expect that an evaluation of its results may show that its performance on hard questions is not much worse than on easy ones. (The main thing that would make easy questions easier is the larger number of articles in its database confirming the answer, presumably boosting its confidence in that answer.) However, my intuition may be wrong here, in that most of Watson’s problems came on the high-value questions.

Its confidence is important. If it does not feel confident, it doesn’t buzz in. And it has a serious advantage at buzzing in, since you can’t buzz in right away on this game, and if you’re an encyclopedia like the two human champions and Watson, buzzing in is a large part of the game. In fact, a fairer game, which Watson might not do as well at, would involve randomly choosing which of the players who buzz in within the first few tenths of a second gets to answer the question, eliminating any reaction-time advantage. Watson gets the questions as text, which is also a bit unfair, unless it is given them one word at a time at human reading speed. It could do OCR on the screen, but chances are it would read faster than the humans. Its confidence numbers and results are extremely impressive. One reason it doesn’t buzz in is that, even with 3,000 cores, it takes 2 to 6 seconds to answer a question.
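Here is a minimal sketch of the kind of buzz decision described: only buzz when the top candidate answer clears a confidence threshold and the answer is ready before the buzzer window opens. The function name, threshold and timing model are illustrative assumptions, not IBM’s actual parameters; the post only mentions roughly 50% confidence and 2 to 6 seconds of compute.

```python
# Sketch of a confidence-gated buzz decision (illustrative, not IBM's code).

CONFIDENCE_THRESHOLD = 0.50   # the post cites roughly 50% confidence

def should_buzz(candidates: dict[str, float],
                compute_seconds: float,
                clue_read_seconds: float) -> tuple[bool, str | None]:
    """Return (buzz?, answer). Buzz only if the top answer is confident enough
    and was computed by the time the host finished reading the clue."""
    if not candidates:
        return False, None
    answer, confidence = max(candidates.items(), key=lambda kv: kv[1])
    ready_in_time = compute_seconds <= clue_read_seconds
    if confidence >= CONFIDENCE_THRESHOLD and ready_in_time:
        return True, answer
    return False, None

# Example: a confident answer computed while the clue was still being read.
print(should_buzz({"What is Chicago?": 0.87, "What is Toronto?": 0.14}, 3.1, 5.0))
# (True, 'What is Chicago?')
```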

Indeed, a totally fair contest would not have a buzzing-in time competition at all; it would just allow all players who buzz in to answer, and gain or lose points based on their answers. (Answers would need to be given in parallel.)

Watson’s coders know by now that they probably should have coded it to receive the wrong answers given by other contestants. In one instance it repeated a wrong answer, and in another case it said “What is Leg?” after Jennings had incorrectly answered “What is missing an arm?” in a question about an Olympic athlete. The host declared that correct, but the judges reversed the ruling, saying it would have been right if a human following up the wrong answer had said it, but was a wrong answer without that context. This was edited out. Also edited out were 4 crashes by Watson that made the game take 4 hours instead of 30 minutes.

It did not happen in what has aired so far, but in the trials, another error I saw Watson make was declining a request to be more specific about an answer. Watson was programmed to give minimalist answers, which the host will often accept as correct, so why take a risk? If the host doesn’t think you said enough, he asks for a more specific answer. Watson sometimes said “I can be no more specific.” From a pure gameplay standpoint, that’s like saying, “I admit I am wrong.” For points, one should say the best longer phrase containing the one-word answer, because it just might be right, though it carries a larger chance of looking really stupid (see below for thoughts on that).

The shows also contain total love-fest pieces about IBM, which make me amazed that IBM is not listed as a sponsor of the shows, other than perhaps in the name “The IBM Challenge.” I am sure Jeopardy is getting great ratings (just having their two champions back would do that on its own, but this will do even more), but I have to wonder if any other money is flowing.

Being an idiot savant

Watson doesn’t really understand the Jeopardy clues, at least not as a human does. Like so many AI breakthroughs, this result comes from figuring out a way to attack the problem that is different from the method humans use. As a result, Watson sometimes puts out answers that are nonsense “idiot” answers from a human perspective. The team cut back a lot on this by only having it answer when it has 50% confidence or higher, and in fact for most of its answers it has very impressive confidence numbers. But sometimes it gives such an answer. To the consternation of the Watson team, it did this on the Final Jeopardy clue, where it answered “Toronto” in the category “U.S. Cities.”

Air New Zealand "Cuddle Class"

Some years ago I made the proposal that airlines sell half of a middle seat at half price or less, so that two coach passengers could be assured of an empty middle seat between them.

I learned a while ago about one approach to this plan: a new “cuddle class” from Air New Zealand, also known as the Skycouch. It’s a row of 3 coach seats that folds down into a very narrow and short bed for two. The idea is that couples can book the whole row for 2.5x the cost of one seat, i.e. the empty middle is being sold at a pretty reasonable half price, or a quarter of a fare per person.
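For concreteness, the arithmetic works out as in the small sketch below; the fare figure is just an illustration, not Air New Zealand pricing.

```python
# The Skycouch arithmetic from the paragraph above: the couple pays 2.5x a
# single fare for the whole row, so the empty middle is effectively sold at
# half price, or a quarter of a fare per person. Fare value is hypothetical.

base_fare = 1000.0                       # hypothetical coach fare
row_price = 2.5 * base_fare              # what the couple pays for the row of 3
middle_surcharge = row_price - 2 * base_fare
print(middle_surcharge)                  # 500.0 -> the middle seat at half price
print(middle_surcharge / 2)              # 250.0 -> a quarter of a fare per person
```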

As I noted earlier, that alone would be worthwhile. Many people would gladly pay 25% more for an aisle or window with a guarantee that nobody was in the middle, and would get together with other solo voyagers to do this. Air New Zealand has for some time offered what it calls the “Twinseat” which is the ability to buy (for a fairly low price around $60) an assured empty adjacent seat “subject to availability.” This is something different — it’s simply saying that, if there are going to be empty middles on the plane anyway, the people who pay more at the gate will get those next to them. You can’t assure it on a flight unless you make sure you take a flight that won’t fill up.

This Skycouch row, however, has armrests that really go all the way up, and a footrest that comes up to make the whole thing a platform. Frankly, since 3 seats are only about 4.5’ long and the bed is narrower than a twin bed, you need a couple that sleeps together very comfortably while spooning. While everybody likes doing that for a little while, fewer can do it for a whole night. One person could buy the whole row, I guess, but at 2.5x it starts to approach the price of a nice business class seat, many of which now lie flat. (Mind you, I’m picky enough that I don’t sleep that well in the business class flat seats, and I have yet to want to pay for the 1st class ones.)

It’s nice to see the innovation, though. I mean, some airlines even have coach armrests that don’t go up all the way when the seats are reclined, and that’s a real pain for couples who want to relax together even in the old seating designs.

What would be more interesting, if less romantic, would be a way to install a portable platform on top of this row to turn it into two bunk beds. From a physical standpoint, you could have 4 slots for poles, some reinforcing straps to form X braces on the poles, and a board with an inflatable mattress on top, with such boards packed compactly somewhere in the ceiling when not in use. The poles would have to go up and hold a net and bars to stop the top bunkmate from falling out. But the hard part would be making this strong enough to qualify as safe in an emergency landing, since an emergency might arise while these are still assembled, even though they would all be dismantled well before landing and would only be used on flights of 10 hours and up. If there were a section of these, you could help it along by having no recline in those seats, so the seat backs are solid and able to support the upper berth.

In this case, you could have strangers happily paying 125% of the base ticket price for one of these bunks. Lot of work to set up and tear down, though. Probably need a weight limit in the upper bunk. If you can do it at all.

Definition of pixels for the world's biggest photos

I shoot lots of large panoramas, and the arrival of various cheaper robotic mounts to shoot them, such as the Gigapan Epic Pro and the Merlin/Skywatcher (which I have), has resulted in a bit of a “mine’s bigger than yours” contest to take the biggest photo. Some would argue that the stitched version of the Sloan Digital Sky Survey, which has been rated at a trillion pixels, is the winner, but most of the competition has been on the ground.

Many of these photos have special web sites to display them, such as Paris 26 gigapixels; the rest are usually found at the Gigapan.org site, where you can even view the gigapans sorted by size to see which ones claim to be the largest.

Most of these big ones are stitched with AutopanoPro, which is the software I use, or with the Gigapan stitcher. The largest I have done so far is smaller: a 1.4-gigapixel shot of Burning Man 2010, which you will find on my page of my biggest panoramas, most of which are in the 100 to 500 megapixel range.

The Paris one is pretty good, but some of the other contenders provide a misleading number, because as you zoom in, you find the panorama at its base is quite blurry. Some of these panoramas have even just been expanded with software interpolation, which is a complete cheat, and some have been shot at mixed focal lengths, where sections of the panorama are sharp but others are not. I have done this myself: for example, in my Gigapixel San Francisco from the end of the Golden Gate, I shot the city close up, but shot the sky and some of the water at 1/4 the resolution, because there isn’t really any fine detail in the sky. I think this is partially acceptable, though having real landscape features not at full resolution should otherwise disqualify a panorama. However, the truth is that sections of sky perhaps should not count at all, and anybody can make their panorama larger by just including more sky all the way to the zenith if they choose to.

There is a difficult craft to making such large photos, and there are also aesthetic elements. To really count the pixels for the world’s largest photos, I think we should count “quality” pixels. As such, sky pixels are not generally quality pixels, and distant terrain lost in haze also does not provide quality pixels. The haze is not the technical fault of the photographer, but it is an artistic fault, at least if the goal is to provide a sharp photo to explore. You can get rid of haze only through the hard work of being there at the right time, and in some cities you may never get the chance.

Some of the shots are done through less-than-ideal lenses, and many of them are done using tele-extenders. These extenders do capture more detail, but the truth is a 2x tele-extender does not provide 4 times as many quality pixels. A common setup today is a 400mm lens with a 2x extender to get 800mm. That’s fairly expensive, but a lot cheaper than a quality 800mm lens. I think using that big expensive glass should count for more in the race to the biggest, even though some might view it as unfair. (A lens that long costs a ton and also weighs a lot, making it harder to get a mount to hold it and keep it stable.) One can get very long mirror “lens” setups that are inexpensive, but they don’t deliver the quality, and I don’t believe work done with them should score as high as work with higher-quality lenses. (It may be the case that images from a long telescope, which tend to be poor, could be scaled down to match the quality of a shorter but more expensive lens, and this is how it should be done.)

Ideally we should seek an objective measure of this. I would propose:

  • There should be a sufficient number of high contrast edges in the image — sharp edges where the intensity goes from bright to dark in the space of just 1 or 2 pixels. If there are none of these, the image must be shrunk until there are.
  • The image can then be divided up into sections and the contrast range in each evaluated. If a segment is very low contrast, such as sky, it is not counted in the pixel count. Possibly each block would be given a score based on how sharp it is, so that background items which are hazy count for more than nothing, but not as much as good sharp sections. (A short sketch of this block scoring follows the list.)
  • I believe that, to win, a pano should not contain gross flaws. Examples of such flaws include stripes of brightness or shadow due to cloud movement, big stitching errors, and checkerboard patterns due to bad overlap or stitching software. In general that means manual exposure, rather than shots where the stitcher tries to fix mixed exposures, unless it can do so undetectably.
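Here is a minimal sketch of the block-based “quality pixel” count proposed above. It uses a crude intensity-range test as a stand-in for detecting sharp, high-contrast content; the function name, block size and threshold are arbitrary illustrations under my own assumptions, not a calibrated standard.

```python
# Sketch: split the image into blocks, measure local contrast, and count only
# the pixels in blocks that clear a threshold. Featureless sky or deep haze
# produces low-contrast blocks, which are excluded from the count.

import numpy as np

def quality_pixels(gray: np.ndarray, block: int = 64, min_contrast: float = 0.15) -> int:
    """Count pixels in blocks whose intensity range exceeds min_contrast
    (image values assumed normalized to 0..1)."""
    h, w = gray.shape
    count = 0
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = gray[y:y + block, x:x + block]
            if tile.max() - tile.min() >= min_contrast:
                count += tile.size
    return count

# Example: a synthetic image that is half flat "sky" and half textured detail.
rng = np.random.default_rng(0)
img = np.zeros((512, 512))
img[256:, :] = rng.random((256, 512))     # textured lower half
print(quality_pixels(img))                 # 131072, i.e. only half the pixels count
```

A more serious scorer would look for genuine 1-2 pixel edges (e.g. gradient magnitude) rather than raw intensity range, and would grade blocks on a sliding scale as the list suggests, but the counting principle is the same.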

Some will argue with the last one in particular, since for some the goal is just to get as many useful pixels as possible for browsing around. Gigapixel panoramas, after all, are only good for zooming around in with a digital viewer. No monitor can display them, and sometimes even printing them 12 feet high won’t show all their detail, and people rarely do that. (Though you can see my San Francisco picture above as the back wall of a bar in SF.) Still, I believe it should be a minimum bar that when you look at the picture at more normal sizes, or print it out a few feet in size, it still looks like an interesting, if extremely sharp, picture.

Ideally, an objective formula can be produced for how much you have to shrink what is present to get a baseline. It’s very rare that any such panorama does not contain a fair number of segments with high-contrast edges and lines in them. For starters, one could just put in the requirement that the picture be shrunk until you have a frame that just about anybody would agree is sharp, like an ordinary quality photo, when viewed 1:1. Ideally there would be lots of frames like that, all over the photo.

Under these criteria a number of the large shots on gigapan fall short. (Though not as short as you might think: the gigapan.org zoom viewer lets you zoom in well past 1:1, so even sharp images are blurry when zoomed in fully. On my own site I set the maximum zoom at 200%.)

These requirements are quite strict. Some of my own photos would have to be shrunk to meet these tests, but I believe the test should be hard.

Blind man drives, sort of, with a robocar

A release from the National Federation of the Blind reports a blind person driving and avoiding obstacles on the Daytona speedway. They used a car from the TORC team at Virginia Tech, one of the competitors in the DARPA Grand Challenges. In effect, the blind driver replaced the “drive by wire” component of a robocar with a more intelligent, thinking human who could also feel acceleration and make some judgements. As the laser and other sensors in the car detected obstacles and turns, the computer sent audio and vibratory signals telling the driver to turn, speed up or slow down.
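As a rough illustration of that interface (not the actual NFB/TORC design), here is a sketch of how a perception system’s steering and speed commands might be turned into audio and vibration cues for a blind driver. The class and function names, cue wording and thresholds are all hypothetical.

```python
# Sketch of a sensor-to-driver cue mapping: the car's perception stack produces
# a desired steering and speed change, and this layer turns that into simple
# audio or vibration cues. Entirely illustrative; not TORC's implementation.

from dataclasses import dataclass

@dataclass
class DrivingCommand:
    steering_deg: float      # positive = turn right, negative = turn left
    speed_delta_mph: float   # positive = speed up, negative = slow down

def cues_for(command: DrivingCommand) -> list[str]:
    """Translate a driving command into cues for the human driver."""
    cues = []
    if abs(command.steering_deg) > 2:
        side = "right" if command.steering_deg > 0 else "left"
        cues.append(f"vibrate {side}: turn {side} {abs(command.steering_deg):.0f} degrees")
    if command.speed_delta_mph > 1:
        cues.append("audio tone: speed up")
    elif command.speed_delta_mph < -1:
        cues.append("audio tone: slow down")
    return cues

print(cues_for(DrivingCommand(steering_deg=-10, speed_delta_mph=-5)))
# ['vibrate left: turn left 10 degrees', 'audio tone: slow down']
```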

While this demo is pretty simple, it was part of a larger project the NFB has to encourage computer and robotic technologies to let the blind do what the sighted can do. In my robocar roadmap I outlined a number of bodies who might promote and lobby for robocar technology, in particular the blind, so it’s good to see that step underway. They did it as well in 2009 with a simpler dune buggy.

This car did not use the fancy and expensive 64-line Velodyne LIDAR sensor that has become the norm on most other working robocars. The Virginia Tech team (Victor Tango) was the only one of the 6 teams that completed the Urban Challenge not to use that LIDAR. The car shown isn’t nearly as decorated with sensors as Victor Tango was, at least judging visually, indicating good improvements in their system.