Submitted by brad on Tue, 2009-11-03 15:41.
I suggested this as a feature for my Canon 5D SLR, which shoots video, but let me expand it to all video cameras, indeed all cameras. They should all include bluetooth, notably the new high-speed bluetooth 3.0, which can reach 24 megabits. It’s cheap and the chips are readily available.
The first application is the use of the high-fidelity audio profile for microphones. Everybody knows the worst thing about today’s consumer video cameras is the sound. Good mics are often large, heavy and expensive, and people don’t want to carry them on the camera. Mics on the subjects of the video are always better. While they are not readily available today, if consumer video cameras supported them, there would be a huge market in remote bluetooth microphones for use in filming.
For quality, you would want to support an error-correcting protocol, which means mixing the sound onto the video a few seconds after the video is laid down. That’s not a big deal with digital recording to flash.
Such a system easily supports multiple microphones too, mixing them or ideally just recording them as independent tracks to be mixed later. And that includes an off-camera microphone for ambient sounds. You could even put down multiples of those, and then do clever noise reduction tricks after the fact with the tracks.
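Recording the mics as independent tracks keeps the mix trivial to redo later. A minimal sketch of the per-sample mix step in Python (the sample values are invented; real software would time-align buffered frames and layer the noise-reduction tricks on top of this):

```python
# Toy sketch: mix several independently recorded mono tracks into one.
# Tracks are equal-length lists of float samples in [-1.0, 1.0].

def mix_tracks(tracks, gains=None):
    """Average equal-length sample lists, with optional per-track gain."""
    if gains is None:
        gains = [1.0] * len(tracks)
    return [
        sum(g * t[i] for g, t in zip(gains, tracks)) / len(tracks)
        for i in range(len(tracks[0]))
    ]

on_subject = [0.5, 0.5]     # bluetooth mic on the subject
ambient    = [0.25, -0.25]  # off-camera ambient mic
print(mix_tracks([on_subject, ambient]))  # [0.375, 0.125]
```

Because the tracks stay separate until this step, you can re-run it with different gains (or drop a noisy mic entirely) long after the shoot.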
The cameraman or director could also have a bluetooth headset on (those are cheap but low fidelity) to record a track of notes and commentary, something you can’t do if there is an on-camera mic being used.
I also noted a number of features for still cameras as well as video ones:
- Notes by the photographer, as above
- Universal protocol for control of remote flashes
- Remote firing of the camera, with all the control that USB offers today
- At 24 megabits, downloading of photos and even live video streams to a master recorder somewhere
It might also be interesting to experiment with smart microphones. A smart microphone would be placed away from the camera, nearer the action being filmed (sporting events, for example). The camera user would zoom in on the microphone, determine how far away it is with the camera’s autofocus, and determine its direction with a compass. Then the microphone, which could either be motorized or an array, could be aimed at the action. (It would be told the distance and direction of the action from the camera in the same fashion as the mic was located.) When you pointed the camera at something, the off-camera mic would also point at it, except during focus hunts.
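The aiming step is simple plane trigonometry once the camera has ranged both the mic and the action. A sketch, assuming flat ground and compass bearings in degrees clockwise from north (the function names and the numbers are mine):

```python
import math

# The camera measures range (autofocus) + compass bearing to the mic and
# to the action, then tells the mic which way to point.

def to_xy(distance, bearing_deg):
    """Convert a range/compass-bearing measurement to local x (east), y (north)."""
    rad = math.radians(bearing_deg)
    return distance * math.sin(rad), distance * math.cos(rad)

def mic_aim(mic_range, mic_bearing, action_range, action_bearing):
    """Bearing and distance from the mic to the action, both measured from the camera."""
    mx, my = to_xy(mic_range, mic_bearing)
    ax, ay = to_xy(action_range, action_bearing)
    dx, dy = ax - mx, ay - my
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    return bearing, math.hypot(dx, dy)

# Mic 20 m due north of the camera, action 50 m due east:
bearing, dist = mic_aim(20, 0, 50, 90)
print(round(bearing, 1), round(dist, 1))  # 111.8 53.9
```

The mic would re-run this whenever the camera reports a new target, and freeze its aim during focus hunts, when the range reading is unreliable.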
There could, as before, be more than one of these, and this could be combined with on-person microphones as above. And none of this has to be particularly expensive. The servo-controlled mic would be a high-end item but within consumer range, and fancy versions would be of interest to pros. Remote mics would also be good for getting better stereo on scenes.
Key to all this is that adding the bluetooth to the camera is a minor cost (possibly compensated for by dropping the microphone jack) but it opens up a world of options, even for cheap cameras.
And of course, the most common cameras out there now — cell phones — already have bluetooth and compasses and these other features. In fact, cell phones could readily be your off-camera microphones. If there were a nice app with a quick pairing protocol, you could ask all the people in the scene to just run it on their cell phone and put the phone in their front pocket. Suddenly you have a mic on each participant (up to the limit of bluetooth, which is about 8 devices at once).
Submitted by brad on Sun, 2009-11-01 12:16.
Last week saw the DVD release of what may be the final Battlestar Galactica movie/episode, a flashback movie called “The Plan.” It was written by Jane Espenson and is the story of the attack and early chase from the point of view of the Cylons, most particularly Number One (Cavil.) (Review first, spoilers after the break.)
I’ve been highly down on BSG since the poor ending, but that lowered my expectations, giving me a better chance of enjoying The Plan. Sadly, however, it fell short even of those lowered expectations. Critics have savaged it as a clip show, and while it does contain about 20% re-used footage (though none featuring the actors who refused to participate), it is not a clip show. Sadly, it is mostly a “deleted scenes” show.
You’ve all seen DVDs with “deleted scenes.” I stopped watching these because it was often quite apparent why they were deleted. The scene didn’t really add anything the audience could not figure out on its own, or anything the story truly needed. Of course, in The Plan we are seeing not deleted material but retroactive continuity. Once season four established Cavil as the mastermind of the attack, done to impress his creators (who themselves were not written as Cylons until season three), most of the things you will see become obvious. You learn very little that you could not have imagined.
There is some worthwhile material. The more detailed nuking of the colonies is chilling, particularly with the Cylon models smiling at the explosions — the same models the audience came to forgive later. Many will like the backstory given to a hidden “Simon” model on board the fleet, never seen in the show. He turns out (in a retcon) to be one of the first to become more loving and human: we see him at the opening having secretly married a human woman. But we also don’t forget the other Simon models we saw, who were happy to run medical experiments on humans, smile at nukes, and lobotomize their fellow Cylons to meet Cavil’s needs.
We learn the answers to a few mysteries that fans asked about — who did Six meet after leaving Baltar on Caprica? The shown meeting is anticlimactic. How did Shelley Godfrey disappear after accusing Baltar? The answer is entirely mundane, and better left as a mystery. (Though it does put to rest speculation that she was actually a physical appearance of the Angel in Baltar’s head, who mysteriously was not present during Godfrey’s scenes.)
We get more evidence that Cavil is cold and heartless. Stockwell enjoys playing him that way. But I can’t say it told me much new about his character.
More disappointing is what we don’t get. We don’t learn what was going on in the first episode, “33,” or what was really on the Olympic Carrier, a source of much angst for Apollo and Starbuck during the series. We don’t learn how the Cylons managed to be close enough to resurrect those tossed out airlocks, but not to catch the fleet. We don’t learn how Cavil convinced the other Cylons to kill all the humans, or their thoughts on it. We don’t learn how that decision got reversed. We learn more about what made Boomer do her sabotages and shooting of Adama, but we don’t learn anything about why she was greeted above Kobol by 100 naked #8s who then let her nuke their valuable base star. Now that the big secret of the god of Galactica is revealed, we learn nothing more about that god, and the angels don’t even appear.
In short, we learn almost nothing, which is odd for a flashback show aired after the big secrets have been revealed. Normally that is the chance to show things without having to hide the big secrets. Of course, they didn’t know most of these big secrets in the first season.
Overall verdict: you won’t miss a lot if you miss this; feel free to wait for it to air on TV.
Some minor spoiler items after the break.
Submitted by brad on Fri, 2009-10-30 14:39.
While giving a talk on robocars to a Stanford class on automotive innovation on Wednesday, I outlined the growing problem of software recalls and how they might affect cars. If a company discovers a safety problem in a car’s software, it may be advised by its lawyers to shut down or cripple the cars by remote command until a fix is available. Sebastian Thrun, who had invited me to address this class, felt this could be dealt with through the ability to remotely patch the software.
This brings up an issue I have written about before — the giant dangers of automatic software updates. Automatic software updates are a huge security hole in today’s computer systems. On typical home computers, there are now many packages that do automatic updates. Due to the lack of security in these OSs, a variety of companies have been “given the keys” to full administrative access on the millions of computers which run their auto-updaters. Companies which go to great lengths to secure their computers and networks are routinely granting all these software companies top-level access (i.e., the ability to run arbitrary code on demand) without thinking about it. Most of these software companies are good and would never abuse this, but that doesn’t mean they don’t have employees who can be bribed or suborned, or security holes in their own networks which would let an attacker in to make a malicious update which is automatically sent out.
I once asked the man who ran the server room where the servers for Pointcast (the first big auto-updating application) were housed how many fingers somebody would need to break to get into his server room. “They would not have to break any. Any physical threat and they would probably get in,” I was told. This is not unusual, and there are often ways in that require far less than this.
So now let’s consider software systems which control our safety. We are trusting our safety to computers more and more these days. Every elevator or airplane has a computer which could kill us if maliciously programmed. More and more cars have them, and more will over time, long before we ride in robocars. All around the world are electric devices with computer controls which could, if programmed maliciously, probably overload and start many fires, too. Of course, voting machines with malicious programs could even elect the wrong candidates and start baseless wars. (Not that I’m saying this has happened, just that it could.)
However, these systems do not have automatic update. The temptation for automatic update will become strong over time, both because it is cheap and because it allows safety problems to be fixed quickly, and we like that for critical systems. While the internal software systems of a robocar would not be connected to the internet in a traditional way, they might be programmed to, every so often, request and accept certified updates to their firmware from the components of the car’s computer systems which are connected to the net.
Imagine a big car company with 20 million robocars on the road, and an automatic software update facility. This would allow a malicious person, if they could suborn that automatic update ability, to load in nasty software which could kill tens of millions. Not just the people riding in the robocars would be affected, because the malicious software could command idle cars to start moving and hit other cars or run down pedestrians. It would be a catastrophe of grand proportions, greater than a major epidemic or multiple nuclear bombs. That’s no small statement.
There are steps that can be taken to limit this. Software updates should be digitally signed, and they should be signed by multiple independent parties. This stops any one official party who is suborned (by being a mole, being tortured, having a child kidnapped, etc.) from sending out an update alone. But it doesn’t change the fact that the 5 executives who have to sign an update will still be trusting the programming team to have delivered them a safe update. Assuring that requires a major code review of every new update, by a team that carefully examines all source changes and compiles the source themselves. Right now this just isn’t common practice.
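The multiple-signature check itself is easy to sketch. This toy verifier accepts an update only when every independent party’s signature checks out; it uses shared-key HMACs purely to stay self-contained, where a real system would use asymmetric signatures (Ed25519 or similar), and the signer names and keys are invented:

```python
import hmac
import hashlib

# Hypothetical: one independent key per signing party. In practice each
# party would hold a private key and publish only the public half.
SIGNER_KEYS = {
    "release-eng": b"key-1",
    "safety-team": b"key-2",
    "exec-office": b"key-3",
}

def sign(update: bytes, key: bytes) -> str:
    return hmac.new(key, update, hashlib.sha256).hexdigest()

def verify_update(update: bytes, signatures: dict) -> bool:
    """Install only if every required party has validly signed this exact update."""
    return all(
        name in signatures
        and hmac.compare_digest(signatures[name], sign(update, key))
        for name, key in SIGNER_KEYS.items()
    )

update = b"firmware v2.1"
sigs = {name: sign(update, key) for name, key in SIGNER_KEYS.items()}
print(verify_update(update, sigs))  # True
```

The point of requiring all parties is exactly the one above: compromising any single signer is not enough, though it does nothing about a compromised build upstream of the signing step.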
However, it gets worse than this. An attacker can also suborn the development tools, such as the C compilers and linkers which build the final binaries. The source might be clean, but few companies keep perfect security on all their tools. Doing so requires that all the tool vendors have a similar attention to security in all their releases. And on all the tools they use.
One has to ask if this is even possible. Can such a level of security be maintained on all the components, enough to stop a terrorist programmer or a foreign government from inserting a trojan into a tool used by a compiler vendor who then sends certified compilers to the developers of safety-critical software such as robocars? Can every machine on every network at every tool vendor be kept safe from this?
We will try, but the answer is probably no. As such, one result may be that automatic updates are a bad idea. If updates spread more slowly, with the individual participation of each machine owner, there is more time to spot malicious code. It doesn’t mean that malicious code can’t be spread, as individual owners who install updates certainly won’t be checking everything they approve. But it can stop the instantaneous spread, and give a chance to find logic bombs set to go off later.
Normally we don’t want to go overboard worrying about “movie plot” threats like these. But when a single person can kill tens of millions because of a software administration practice, it starts to be worthy of notice.
Submitted by brad on Wed, 2009-10-28 22:03.
I just returned from Jeff Pulver’s “140 Characters” conference in L.A., which was about Twitter. I asked many people if they get Twitter — not whether they understand how it’s useful, but whether they understand why it is such a hot item, and whether it deserves to be, with billion-dollar valuations and many talking about it as the most important platform.
Some suggested Twitter is not as big as it appears, with a larger churn than expected and some plateau appearing in new users. Others think it is still shooting for the moon.
The first value I found in Twitter was as broadcast SMS. While I would not text all my friends when I go to a restaurant or a club, having a way for them to easily know that (and perhaps join me) is valuable. Other services have tried to do things like this, but Twitter is the one that succeeded, in spite of not being aimed at any specific application like this.
This explains the secret of Twitter. By being simple (and forcing brevity) it was able to be universal. By being more universal it could more easily attain critical mass within groups of friends. While an app dedicated to some social or location based application might do it better, it needs to get a critical mass of friends using it to work. Once Twitter got that mass, it had a leg up at being that platform.
At first, people wondered if Twitter’s simplicity (and requirement for brevity) was a bug or a feature. It definitely seems to have worked as a feature. By keeping things short, Twitter makes it less scary to follow people. It’s hard for me to get new subscribers to this blog, because subscribing means you will see my moderately long posts every day or two, and that’s an investment in reading. Subscribing to somebody’s Twitter feed is no big commitment. Thus people can get a million followers there, something no blog has. In addition, the brevity makes it a good match for the mobile phone, which is the primary way people use Twitter. (Though usually via smart phone, not the old SMS way.)
And yet it is hard not to be frustrated at Twitter for being so simple. There are so many things people do with Twitter that could be done better by some more specialized or complex tool. Yet it does not happen.
Twitter has made me revise slightly my two axes of social media — serial vs. browsed, and reader-friendly vs. writer-friendly. Twitter is generally serial, and I would say it is writer-friendly (it is easy to tweet) but not so reader-friendly (the volume gets too high).
However, Twitter, in its latest mode, is something different. It is “sampled.” In normal serial media, you usually consume all of it. You come in to read, and the tool shows you all the new items in the stream. Your goal is to read them all, and the publishers tend to expect it. Most Twitter users now follow far too many people to read it all, so the best they can do is sample — they come in at various times of day and find out what their stalkees are up to right then. Of course, other media have also been sampled, including newspapers and message boards, just because people don’t have time, or because they go away for too long to catch up. On Twitter, however, going away for even a couple of hours will give you too many tweets to catch up on.
This makes Twitter an odd choice as a publishing tool. If I publish on this blog, I expect most of my RSS subscribers will see it, even if they check a week later. If I tweet something, only a small fraction of my followers will see it — only if they happen to read shortly after I write it, and sometimes not even then. Perhaps some who follow only a few people will see it later, as will those who specifically check on my postings. (You can’t; mine are protected, which turns out to be a mistake on Twitter, but there are nasty privacy results from not being protected.)
TV has an unusual history in this regard. In the early days, there were so few stations that many people watched, at one time or another, all the major shows. As TV grew to many channels, it became a sampled medium. You would channel surf, and stop at things that were interesting, knowing that most of the stream was going by. When the TiVo arose, TV became a subscription medium, where you identify the programs you like and see only those, with perhaps some suggestions thrown in to sample from.
Online media, however, and social media in particular were not intended to be sampled. Sure, everybody would just skip over the high volume of their mailing lists and news feeds when coming back from a vacation, but this was the exception and not the rule.
The question is, will Twitter’s nature as a sampled medium be a bug or a feature? It seems like a bug but so did the simplicity. It makes it easy to get followers, which the narcissists and the PR flacks love, but many of the tweets get missed (unless they get picked up as a meme and re-tweeted) and nobody loves that.
On Protection: It is typical to tweet not just blog-like items but the personal story of your day: where you went and when. This is fine as a thing to tell friends in the moment, but with a public Twitter feed, it’s being recorded forever by many different players. The ephemeral aspects of your life become permanent.

But if you do protect your feed, you can’t do a lot of things on Twitter. What you write won’t be seen by others who search for hashtags. You can’t reply to people who don’t follow you. You’re an outsider. The only way to solve this would be to make Twitter really proprietary, blocking all the services that are republishing it, analysing it and indexing it.

In this case, dedicated applications make more sense. For example, while location-based apps need my location, they don’t need to record it for more than a short period. They can safely erase it, and still provide me a good app. They can only do this if they are proprietary, because if they give my location to other tools it is hard to stop those tools from recording it and making it all public. There’s no good answer here.
Submitted by brad on Mon, 2009-10-26 14:42.
Saturday saw the dedication of a new autonomous vehicle research center at Stanford, sponsored by Volkswagen. VW provided the hardware for Stanley and Junior, which came 1st and 2nd in the 2nd and 3rd DARPA Grand Challenges, and Junior was on display at the event, driving through the parking lot and along the Stanford streets, then parking itself before a cheering crowd.
Junior continues to be a testing platform with its nice array of sensors and computers, though the driving it did on Saturday was largely done with the Velodyne LIDAR that spins on top of it, and an internal map of the geometry of the streets at Stanford.
New and interesting was a demonstration of the “Valet Parking” mode of a new test vehicle, for now just called Junior 3. What’s interesting about J3 is that it is almost entirely stock. All that is added are two lower-cost LIDAR sensors on the rear fenders. It also has a camera at the rear-view mirror (which is stock in cars with the night-assist option) and a few radar sensors used in the distance-keeping cruise control system. J3 is otherwise a Passat. Well, the trunk is filled with computers, but there is no reason what it does could not be done with a hidden embedded computer.
What it does is valet park itself. This is an earlier-than-expected implementation of one of the steps I outlined in the roadmap to robocars as robo-valet parking. J3 relies on the fact that the “valet” lot is empty of everything but cars and pillars. Its sensors are not good enough to deal well with random civilians, so this technology would only work in an enclosed lot where only employees enter, and only when needed. To use it, the driver brings the car to an entrance marked by 4 spots on the ground that the car can see. Then the driver leaves and the car takes over. In this case, it has a map of the garage in its computer, but it could also download that on arrival in a parking lot. Using the map, and just the odometer, it is able to cruise the lanes of the parking lot, looking for an empty spot, which it sees using the radar. (Big metal cars of course show clearly on the radar.) It then drives into the spot.
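The search loop is conceptually simple. A toy sketch of map-plus-radar spot finding (the stall coordinates, echo positions and the occupancy test are all invented for illustration):

```python
# The car carries a map of stall positions and uses radar returns (strong
# echoes from parked metal) to mark stalls occupied; it cruises the lanes
# until it finds an empty one.

def find_empty_stall(stall_map, radar_returns, hit_radius=1.5):
    """stall_map: {stall_id: (x, y)}; radar_returns: list of (x, y) echoes."""
    def occupied(pos):
        return any(
            (pos[0] - rx) ** 2 + (pos[1] - ry) ** 2 < hit_radius ** 2
            for rx, ry in radar_returns
        )
    for stall_id in sorted(stall_map):   # visit stalls in lane order
        if not occupied(stall_map[stall_id]):
            return stall_id
    return None  # lot full

stalls = {1: (0.0, 0.0), 2: (3.0, 0.0), 3: (6.0, 0.0)}
echoes = [(0.2, 0.1), (6.1, -0.3)]      # cars detected in stalls 1 and 3
print(find_empty_stall(stalls, echoes))  # 2
```

The real system of course has to follow the lanes physically (odometry against the map) rather than teleport between stalls, but the stall-level decision reduces to something like this.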
Submitted by brad on Fri, 2009-10-23 17:54.
I’ve written a lot about how to do better power connectors for all our devices, and the quest for universal DC and AC power plugs that negotiate the power delivered with a digital protocol.
While I’ve mostly been interested in some way of standardizing power plugs (at least within a given current range, and possibly even beyond) today I was thinking we might want to go further, and make it possible for almost every connector we use to also deliver or receive power.
I came to this realization plugging my laptop into a projector which we generally do with a VGA or DVI cable these days. While there are some rare battery powered ones, almost all projectors are high power devices with plenty of power available. Yet I need to plug my laptop into its own power supply while I am doing the video. Why not allow the projector to send power to me down the video cable? Indeed, why not allow any desktop display to power a laptop plugged into it?
As you may know, a Power-over-ethernet (PoE) standard exists to provide up to 13 watts over an ordinary ethernet connector; it is commonly used to power switches, wireless access points and VoIP phones.
In all the systems I have described, all but the simplest devices would connect, and one or both ends would provide an initial very-low-current +5vdc offering that is enough to power only the power-negotiation chip. The two ends would then negotiate the real power offering — what voltage, how many amps, how many watt-hours are needed or available, etc. — and, for special connectors, what wires to send the power on.
An important part of the negotiation would be to understand the needs of devices and their batteries. In many cases, a power source may only offer enough power to run a device but not charge its battery. Many laptops will normally run on only 10 watts, and less with the screen off, but their power supplies are much larger in order to handle the laptop under full load plus the charging of a fully discharged battery. A device’s charging system would have to know not to charge the battery at all in low-power situations, or to offer it only minimal power for very slow charging. An ethernet cable offering 13 watts might well tell the laptop that it will need to go to its own battery if the CPU goes into high-usage mode. A laptop drawing an average of 13 watts (not including battery charging) could run forever, with the battery providing for peaks and absorbing valleys.
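One way to sketch that policy decision (the wattage figures and mode names are invented, and a real protocol would also negotiate voltage and wiring):

```python
# Both ends start at a +5 V trickle, then the device asks the source what
# it can deliver and picks a charging policy to fit.

def negotiate(offered_watts, run_watts, full_charge_watts):
    """Return (accepted_watts, charge_mode) for the device."""
    if offered_watts < run_watts:
        return 0, "refuse"                    # can't even run; stay on battery
    spare = offered_watts - run_watts
    if spare >= full_charge_watts:
        return run_watts + full_charge_watts, "full-charge"
    if spare > 0:
        return offered_watts, "trickle-charge"
    return run_watts, "run-only"              # run the device, don't charge

# A 13 W PoE-class port, and a laptop that runs on 10 W but wants
# 35 W to charge its battery at full rate:
print(negotiate(13, 10, 35))   # (13, 'trickle-charge')
print(negotiate(60, 10, 35))   # (45, 'full-charge')
```

The same exchange run in the other direction is what lets a laptop decide whether it can afford to power a small display or projector over the cable.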
Now a VGA or DVI cable, though it has thin wires, has many of them, and at 48 volts it could actually deliver plenty of power to a laptop. Thus there would be no need for a separate power supply when the laptop is on a projector or monitor. Indeed, one could imagine a laptop that uses this as its primary power jack, with the power plug having a VGA male and female on it to power the laptop.
I think it is important that these protocols go both directions. There will be times when the situation is reversed, when it would be very nice to be able to power low power displays over the video cable and avoid having to plug them in. With the negotiation system, the components could report when this will work and when it won’t. (If the display can do a low power mode it can display a message about needing more juice.) Tiny portable projectors could also get their power this way if a laptop will offer it.
Of course, this approach can apply everywhere, not just video cables and ethernet cables, though they are prime candidates. USB of course is already power+data, though it has an official master/slave hierarchy and thus does not go both directions. It’s not out of the question to even see a power protocol on headphone cables, RF cables, speaker cables and more. (Though there is an argument that for headphones and microphones there should just be a switch to USB and its cousins.)
Laptops have tried to amalgamate their cables before, through the use of docking stations. The problem was that these stations were all custom to the laptop, and often priced quite expensively. As a result, many prefer the simple USB docking station, which can provide USB, wired ethernet, keyboard, mouse, and even slowish video through one wire — all standardized and usable with any laptop. However, it doesn’t provide power because of the way USB works. Today our video cables are the highest-bandwidth connectors on most devices, and as such they can’t easily be replaced by lower-bandwidth ones, so throwing power through them makes sense, and even throwing a USB data bus through them for everything else might well make a lot of sense too. This would bring us back to having just a single connector to plug in. (It creates a security problem, however, as you should not let a randomly plugged-in device act as an input such as a keyboard or drive; such a device could take over your computer if somebody has hacked it to do so.)
Submitted by brad on Wed, 2009-10-21 17:31.
Some time ago I proposed the “School of Fish Test” as a sort of Turing test for robocars. In addition to being a test for the cars, it is also intended to be a way to demonstrate to the public when the vehicles have reached a certain level of safety. (In the test, a swarm of robocars moves on a track, and a skeptic in a sports car is unable to hit one no matter what they do, like a diver trying to touch fish when swimming through a school.)
I was interested to read this month that Nissan has built test cars based on fish-derived algorithms as part of a series of experiments based on observing how animals swarm. (I presume this is coincidental, and the Nissan team did not know of my proposed test.)
The Nissan work (building on earlier work on bees) is based upon a swarm of robots which cooperate. The biggest test involves combining cooperating robots, non-cooperating robots and (mostly non-cooperating) human drivers, cyclists and pedestrians. Since the first robocars on the road will be alone, it is necessary to develop fully safe systems that do not depend on any cooperation with other cars. It can of course be useful to communicate with other cars, determine how much you trust them, and then cooperate with them, but this is something that can only be exploited later rather than sooner. In particular, while many people propose to me that building convoys of cars which draft one another is a good initial application of robotics (and indeed you can already get cars with cruise control that follows at a fixed distance) the problem is not just one of critical mass. A safety failure among cooperating cars runs the risk of causing a multi-car collision, with possible multiple injured parties, and this is a risk that should not be taken in early deployments of the technology.
My talk at the Singularity Summit on robocars was quite well received. Many were glad to see a talk on more near-future modest AI after a number of talks on full human level AI, while others wanted only the latter. A few questions raised some interesting issues:
- One person asked about the insurance and car repair industries. I got a big laugh by saying, “fuck ‘em.” While I am not actually that mean spirited about it, and I understand why some would react negatively to trends which will obsolete their industries, we can’t really be that backwards-looking.
- Another wondered if, after children discover that the nice cars will never hit them, they might then travel to less safe roads without having learned proper safety instincts. This is a valid point, though I have already worried about the disruption to passengers whose cars have to swerve around kids who play in the streets when it is not so clearly dangerous. Certain types of jaywalking that interfere with traffic will need to be discouraged or punished, though safe jaywalking, when no car is near, should be allowed and even encouraged.
- One woman asked if we might become disassociated with our environments if we spend our time in cars reading or chatting, never looking out. This is already true in a taxicab city like New York, though only limos offer face-to-face chat. I think the ability to read or work instead of focus on the road is mostly a feature and not a bug, but she does have a point. Still, we get even more divorced from the environment on things like subways.
As expected, the New York audience, unlike other U.S. audiences, saw no problem with giving up driving. Everywhere else I go, people swear that Americans love their cars and love driving and will never give it up. While some do feel that way, it’s obviously not a permanent condition.
Some other (non-transportation) observations from Singularity Summit are forthcoming.
BTW, I will be giving a Robocar talk next Wednesday, Oct 28 at Stanford University for the ME302 - Future of the Automobile class. (This is open to the general Stanford community, affiliates of their CARS institute, and a small number of the public. You can email email@example.com if you would like to go.)
Submitted by brad on Thu, 2009-10-15 12:01.
I’m impressed with a great interactive map of the U.S. power grid produced by NPR. It lets you see the location of existing and proposed grid lines, and all power plants, plus the distribution of power generation in each state.
On this chart you can see which states use coal most heavily — West Virginia at 98%, Utah, Wyoming, North Dakota, Indiana at 95% and New Mexico at 85%. You can see that California uses very little coal but 47% natural gas, that the NW uses mostly Hydro from places like Grand Coulee and much more. I recommend clicking on the link.
They also have charts of where solar and other renewable plants are (almost nowhere) and the solar radiation values.
Seeing it all together makes something clear that I wrote about earlier. If you want to put up solar panels, the best thing to do is to put them somewhere with good sun and lots of coal burning power plants. That’s places like New Mexico and Utah. Putting up a solar panel in California will give it pretty good sunlight — but will only offset natural gas. A solar panel in the midwest will offset coal but won’t get as much sun. In the Northeast it gets even less sun and offsets less coal.
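A back-of-envelope calculation makes the point (the panel-yield and coal-fraction numbers are rough illustrations, and real marginal-emissions accounting is more subtle than multiplying by the grid mix):

```python
# Crude model: a panel's coal offset scales with both its sun and the
# coal share of the local grid.

def coal_offset_kwh(annual_kwh_per_kw, coal_fraction):
    """kWh of coal generation displaced per kW of panel per year (crude)."""
    return annual_kwh_per_kw * coal_fraction

new_mexico = coal_offset_kwh(1800, 0.85)   # great sun, 85% coal grid
california = coal_offset_kwh(1600, 0.01)   # good sun, almost no coal
print(new_mexico, california)
```

Under even these rough assumptions the New Mexico panel displaces roughly a hundred times as much coal generation as the California one, despite only slightly better sun.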
Much better than putting up solar panels anywhere, however, is actually using the money to encourage real conservation in the coal-heavy areas like West Virginia, Wyoming, North Dakota or Indiana.
While, as I have written, solar panels are a terrible means of greening the power grid from a cost standpoint, people still want to put them up. If that’s going to happen, what would be great would be a way for those with money and a desire to green the grid to make that money work in the places it will do the best. This is a difficult challenge. People sadly are more interested in feeling they are doing the right thing rather than actually doing it, and they feel good when they see solar panels on their roof, and see their meter going backwards. It makes up for the pain of the giant cheque they wrote, without actually ever recovering the money. Writing that cheque so somebody else’s meter can go backwards (even if you get the savings) just isn’t satisfying to people.
It would make even more sense to put solar-thermal plants (at least at today’s prices,) wind or geothermal in these coal-heavy areas.
It might be interesting to propose a system where rich greens can pay to put solar panels on the roofs of houses where it will do the most good. The homeowner would still pay for power, but at a lower price than they paid before. This money would mostly go to the person who financed the solar panels. The system would include an internet-connected control computer, so the person doing the financing could still watch the meter go backwards, at least virtually, and track power generated and income earned. The only problem is, the return would be sucky, so it’s hard to make this satisfying. To help, the display would also show tons of coal that were not burned, and compare it to what would have happened if they had put the panels on their own roof.
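To make the "tons of coal not burned" display concrete, here is a rough sketch of how it might credit a remotely financed panel. The state coal fractions echo the map figures above, but the coal-per-kWh factor and the panel outputs are illustrative assumptions, not measured values.

```python
# A hedged sketch of the "coal not burned" display. The state coal
# fractions echo the map figures above; the kg-of-coal-per-kWh factor
# and the panel outputs are illustrative assumptions only.
COAL_FRACTION = {"WV": 0.98, "NM": 0.85, "CA": 0.01}  # share of generation from coal
KG_COAL_PER_COAL_KWH = 0.5   # rough assumption: ~0.5 kg of coal per coal-fired kWh

def coal_displaced_kg(kwh_generated, state):
    """Kilograms of coal not burned, crediting the panel only against
    the coal share of the local grid (a big simplification)."""
    return kwh_generated * COAL_FRACTION[state] * KG_COAL_PER_COAL_KWH

# The same panel financed in sunny, coal-heavy New Mexico vs. on the
# funder's own California roof (assumed annual outputs in kWh).
nm = coal_displaced_kg(1600, "NM")
ca = coal_displaced_kg(1400, "CA")
```

Even on toy numbers the comparison is stark: the remote panel displaces on the order of a hundred times as much coal, which is exactly the contrast the display would show.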
Of course, another counter to this is that California and a few other places have very high tiered electrical rates which may not exist in the coal states. Because of that — essentially a financial incentive set up by the regulators to encourage power conservation — it may be much more cost-effective to have the panels in low-coal California than in high-coal areas, even if it’s not the greenest thing.
An even better plan would be to find a way for “rich greens” (people willing to spend some money to encourage clean power) to finance conservation in coal-heavy areas. To do this, the cooperation of the power companies would be required. For example, one of the best ways to do this would be to replace old fridges with new ones. (Replacing fridges costs $100 per MWH removed from the grid compared to $250 for solar panels.)
- The rich green would provide money to help buy the new fridge.
- An inspector comes to see the old fridge and confirms it is really in use as the main fridge. Old receipts may be demanded though these may be rare. A device is connected to assure it is not unplugged, other than in a local power failure.
- A few months later — to also assure the old fridge was really the one in use — the new fridge would be delivered by a truck that hauls the old one away. Inspectors confirm things and the buyer gets a rebate on their new fridge thanks to the rich green.
- The new, energy-efficient fridge has built in power monitoring and wireless internet. It reports power usage to the power company.
- The new fridge owner pays the power company 80% of what they used to pay for power for the old fridge. I.e., they pay more than the cost of their actual new power usage.
- The excess money goes to the rich green who funded the rebate on the fridge, until the rebate plus a decent rate of return is paid back.
To the person with the old fridge, they get a nice new fridge at a discount price — possibly even close to free. Their power bill on the fridge goes down 20%. The rest of the savings (about 30% of the power, typically) goes to the power company and then to the person who financed the rebate.
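The money flow above can be sketched in a few lines. The specific figures here (fridge consumption, electricity price, rebate size) are invented for illustration; only the 80%-of-the-old-bill rule comes from the plan itself.

```python
# The money flow above in numbers. The consumptions, price, and rebate
# are invented for illustration; the 80% rule is from the plan itself.
OLD_FRIDGE_KWH_YR = 1500    # assumed annual usage of the old fridge
NEW_FRIDGE_KWH_YR = 750     # assumed usage of the efficient replacement
PRICE_PER_KWH = 0.12        # assumed flat rate, $/kWh

old_bill = OLD_FRIDGE_KWH_YR * PRICE_PER_KWH     # what the fridge used to cost per year
owner_pays = 0.80 * old_bill                     # owner keeps a 20% saving
actual_cost = NEW_FRIDGE_KWH_YR * PRICE_PER_KWH  # true cost of the new fridge's power
to_financier = owner_pays - actual_cost          # the ~30% slice that repays the rebate

rebate = 400.0                                   # assumed rebate the rich green funded
years_to_repay = rebate / to_financier           # ignoring any rate of return
```

On these toy numbers the financier's slice is about 30% of the old bill, matching the split described above, and a $400 rebate pays back in roughly seven to eight years. The real return depends entirely on local rates and how bad the old fridge was.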
A number of the steps above are there to minimize fraud. For example, you don’t want people deliberately digging out an ancient fridge and putting it in place to get a false rebate. You also don’t want them taking the old fridge and moving it into the garage as a spare, which would actually make things worse. The latter is easy to assure by having the delivery company haul away the old one. The former is a bit tricky. The above plan at least demands that the old fridge be in place in their kitchen for a couple of months, and there be no obvious signs that it was just put in place. The metering plan demands wireless internet in the home, and the ability to configure the appliance to use it. That’s getting easier to demand, even of poor people with old fridges. Unless the program is wildly popular, this requirement would not be hard to meet.
Instead of wireless internet, the fridge could also just communicate the usage figures to a device the meter-reader carries when she visits the home to read the regular meter. Usage figures for the old fridge would be based on numbers for the model, not the individual unit.
It’s a bit harder to apply this to light bulbs, which are the biggest conservation win. Yes, you could send out crews to replace incandescent bulbs with CFLs, but it is not cost effective to meter them and know how much power they actually saved. For CFLs, the program would have to be simpler with no money going back to the person funding the rebates.
All of this depends on a program which is popular enough to make the power monitoring chips and systems in enough quantity that they don’t add much to the cost of the fridge at all.
Submitted by brad on Wed, 2009-10-14 14:30.
This press release describes a European research project on various intelligent vehicle technologies which will take place next year. As I outline in the roadmap, a number of pre-robocar technologies are making their way into regular cars, so they can be sold as safer and more convenient. This project will actively collect data to learn about and improve the systems.
Today’s systems are fairly simple, of course, and researchers will learn a lot from this. It matches my prediction for how a robocar test suite will be developed: by gathering millions and later billions of miles of sample data, including all accidents and anomalous events, with better and better sensors over time. Today’s sensors are basic, but that will change.
Initial reaction to these systems (which will have early flaws) may colour user opinion of them. For example, some adaptive cruise controls reportedly are too eager to decide there is a stopped car and will suddenly stop a vehicle. One of the challenges of automatic vehicle design will be finding ways to keep it safe without it being too conservative because real drivers are not very conservative. (They are also not very safe, but this defines the standards people expect.)
Submitted by brad on Mon, 2009-10-12 15:06.
Just back from a weeklong tour including speaking at Singularity Summit, teaching classes at Cushing Academy and a big Thanksgiving dinner (well, Thanksgiving is actually today but we had it earlier) and drive through fabulous fall colour in Muskoka.
This time United Airlines managed to misplace my luggage in both directions. (A reminder of why I don’t like to check luggage.) The first time they had an “excuse” in that we checked it only about 10 minutes before the baggage check deadline and the TSA took extra time on it. On the way back it missed a 1 hour, 30 minute connection — no excuse for that.
However, again, my rule for judging companies is how they handle their mistakes as well as how often they make them. And, in JFK, when we went to baggage claim, they actually had somebody call our name and tell us the bag was not on the flight, so we went directly to file the missing luggage report. However, on the return flight, connecting in Denver to San Jose, we got the more “normal” experience — wait a long time at the baggage claim until you realize no more bags are coming and you’re among the last people waiting, and then go file a lost luggage report.
This made me realize — with modern bag tracking systems, the airline knows your bag is not on the plane at the time they close the cargo hold door, well before takeoff. They need to know that as this is part of the passenger-to-bag matching system they tried to build after the Pan Am 103 Lockerbie bombing. So the following things should be done:
- If they know my mobile number (and they do, because they text me delays and gate changes) they should text me that my luggage did not make the plane.
- The text should contain a URL where I can fill out my lost luggage report or track where my luggage actually is.
- Failing this, they should have a screen at the gate when you arrive with messages for passengers, including lost luggage reports. Or just have the gate agent print it and put it on the board if a screen costs too much.
- Failing this, they should have a screen at the baggage claim with notes for passengers about lost luggage so you don’t sit and wait.
- Failing this, an employee can go to the baggage claim and page the names of passengers, which is what they did in JFK.
- Like some airlines do, they should put a box with “Last Bag, Flight nnn” written on it on the luggage conveyor belt when the last bag has gone through, so people know not to wait in vain.
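The first few suggestions amount to simple bookkeeping over data the airline already has. A hedged sketch, with hypothetical record fields and a stand-in send_sms() function rather than any airline's real API:

```python
# Hedged sketch of the notification step: once the cargo door closes,
# compare the bags actually scanned aboard against the checked-bag
# records, and text anyone whose bag missed the flight. The record
# fields, URL, and send_sms() are hypothetical stand-ins.

def notify_missed_bags(checked_bags, loaded_tags, send_sms):
    """checked_bags: list of dicts with 'tag', 'passenger', 'mobile'.
    loaded_tags: set of bag tags scanned into the hold."""
    missed = [b for b in checked_bags if b["tag"] not in loaded_tags]
    for bag in missed:
        send_sms(
            bag["mobile"],
            f"Bag {bag['tag']} missed your flight. "
            f"File a report or track it: https://example.com/bags/{bag['tag']}",
        )
    return missed

# Example: two bags checked, only one made it aboard.
sent = []
missed = notify_missed_bags(
    [{"tag": "UA123", "passenger": "Alice", "mobile": "+15550100"},
     {"tag": "UA124", "passenger": "Bob", "mobile": "+15550101"}],
    {"UA124"},
    lambda number, message: sent.append(number),
)
```

The matching data exists anyway for the passenger-to-bag security check, so the marginal cost of the text message is close to zero.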
I might very well learn my luggage is not on board before the plane closes the door. In that case I might even elect not to take the flight, though I can see that the airline might not want people to do this as they are usually about to close the door, if they have not already closed it.
Letting me fill out the form on the web saves the airline time and saves me time. I can probably do it right on the plane after it lands and cell phone use is allowed. I don’t even have to go to baggage claim. Make it mobile browser friendly of course.
Submitted by brad on Wed, 2009-09-30 14:47.
I have several sheetfed scanners. They are great in many ways — though not nearly as automatic as they could be — but they are expensive and have their limitations when it comes to real-world documents, which are often not in pristine shape.
I still believe in sheetfed scanners for the home, in fact one of my first blog posts here was about the paperless home, and some products are now on the market similar to this design, though none have the concept I really wanted — a battery powered scanner which simply scans to flash cards, and you take the flash card to a computer later for processing.
My multi-page document scanners will do a whole document, but they sometimes mis-feed. My single-page sheetfed scanner isn’t as fast or fancy but it’s still faster than using a flatbed because the act of putting the paper in the scanner is the act of scanning. There is no “open the top, remove old document, put in new one, lower top, push scan button” process.
Here’s a design that might be cheap and just what a house needs to get rid of its documents. It begins with a table with an arm extending from one side, holding a tripod screw for a digital camera. A USB cable runs up the arm to the camera. Also on the arm, at enough of an angle to avoid glare and reflections, are lights, either white LEDs or CCFL tubes.
In the bed of the table is a capacitive sensor able to tell if your hand is near the table, as well as a simple photosensor to tell if there is a document on the table. All of this plugs into a laptop for control.
You slap a document on the table. As soon as you draw your hand away, the light flashes and the camera takes a picture. Then go and replace or flip the document and it happens again. No need to push a button, the removal of your hand with a document in place causes the photo. A button will be present to say “take it again” or “erase that” but you should not need to push it much. The light should be bright enough so the camera can shoot fairly stopped down, allowing a sharp image with good depth of field. The light might be on all the time in the single-sided version.
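The hand-away trigger can be expressed as a tiny state machine. The sensor readings and camera call here are hypothetical stand-ins for the capacitive sensor, photosensor, and USB camera control:

```python
# Hedged sketch of the hand-away trigger. Sensor and camera functions
# are hypothetical; a real build would wire them to the capacitive
# sensor, the photosensor, and USB camera control.

def should_fire(hand_near, document_present, already_shot):
    """Fire the camera exactly once per placed document: a document on
    the table, the hand withdrawn, and not yet photographed."""
    return document_present and not hand_near and not already_shot

def run_loop(events, take_picture):
    """events: sequence of (hand_near, document_present) sensor readings."""
    already_shot = False
    for hand_near, document_present in events:
        if not document_present:
            already_shot = False          # document removed; re-arm
        elif should_fire(hand_near, document_present, already_shot):
            take_picture()
            already_shot = True           # wait for a new document

# Place a page, pull the hand away, lift it out, place a second page.
shots = []
run_loop(
    [(True, True), (False, True),        # page 1: hand leaves -> shoot
     (True, True), (False, False),       # hand returns, page lifted
     (True, True), (False, True)],       # page 2: hand leaves -> shoot
    lambda: shots.append(1),
)
```

The re-arm step is what makes the "no button" workflow possible: removing the document, not pressing anything, is what readies the next shot.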
The camera can’t be just any camera, alas, but many older cameras in the 6MP range would get about 300dpi colour from a typical letter sized page, which is quite fine. Key is that the camera has macro mode (or can otherwise focus close) and can be made to shoot over USB. An infrared LED could also be used to trigger many consumer cameras. Another plus is manual focus. It would be nice if the camera could just be locked in focus at the right distance, as that means much faster shooting for typical consumer digital cameras. Ideally all of this (macro mode, manual focus) could be set over USB, under the control of the computer.
Of course, 3-D objects can also be shot in this way, though they might get glare from the lights if they have surfaces at the wrong angles. A fancier box would put the lights behind cloth diffusers, making things bulkier, though it can all pack down pretty small. In fact, since the arm can be designed to be easily removed, the whole thing can pack down into a very small box. A sheet of plexi would be available to flatten crumpled papers, though with good depth of field, this might not strictly be necessary.
One nice option might be a table filled with holes and a small suction pump. This would hold paper flat to the table. It would also make it easy to determine when paper is on the table. It would not help stacks of paper much but could be turned off, of course.
A fancier and bulkier version would have legs and support a 2nd camera below the table, which would now be a transparent piece of plexiglass. Double sided shots could then be taken, though in this case the lights would have to be turned off on the other side when shooting, and a darkened room or shade around the bottom and part of the top would be a good idea, to avoid bleed through the page. Suction might not be such a good idea here. The software should figure if the other side is blank and discard or highly compress that image. Of course the software must also crop images to size, and straighten rectangular items.
There are other options besides the capacitive hand sensor. These include a button, of course, a simple voice command detector, and clever use of the preview video mode that many digital cameras now have over USB. (ie. the computer can look through the camera and see when the document is in place and the hand is removed.) This approach would also allow gesture commands, little hand signals to indicate if the document is single sided, or B&W, or needs other special treatment.
The goal, however, is a table where you can just slap pages down, move your hand away slightly, and then slap down another. For stacks of documents one could even put down the whole stack and take pages off one at a time, though this would surely bump the stack a bit, requiring some cleverness in straightening and cropping. Many people would find they could do this as fast as some of the faster professional document scanners, and with no errors on imperfect pages. The scans would not be as good as true scanner output, but good enough for many purposes.
In fact, digital camera photography’s speed (and ability to handle 3-D objects) led both Google Books and the Internet Archive to use it for their book scanning projects. This was of course primarily because they were unwilling to destroy books. Google came up with the idea of using a laser rangefinder to map the shape of the curved book page to correct any distortions in it. While this could be done here it is probably overkill.
One nice bonus here is that it’s very easy to design this to handle large documents, and even to be adjustable to handle both small and large documents. Normally scanners wide enough for large items are very expensive.
Submitted by brad on Mon, 2009-09-28 12:43.
A serious proportion of the computer users I know these days have gone multi-monitor. While I strongly recommend to everybody the 30” monitor (Dell 3007WFP and cousins, or Apple) which I have, at $1000 it’s not the most cost effective way to get a lot of screen real estate. Today 24” 1080p monitors are down to $200, and flat panels don’t take up much space, so it makes a lot of sense to have two monitors or more.
Except there’s a big gap between them. And while there are a few monitors that advertise a thin bezel, even these have at least half an inch, so two monitors together will still have an inch of (usually black) bezel between them.
I’m quite interested in building a panoramic photo wall with this new generation of cheap panels, but the 1” bars will be annoying, though tolerable from a distance. But does it have to be?
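Some quick arithmetic on what that bar costs in image terms. The panel width is an assumed figure for a typical 16:9 24” screen, and the quarter-inch bezel case anticipates the thinnest panels on the market:

```python
# Rough arithmetic for the seam: how many "missing" pixels a 1-inch
# gap between two 24" 1080p panels represents. Panel width is an
# assumption based on a typical 16:9 24" screen.

PANEL_WIDTH_IN = 20.9          # approx. horizontal width of a 24" 16:9 panel
PIXELS_ACROSS = 1920

pixels_per_inch = PIXELS_ACROSS / PANEL_WIDTH_IN     # ~92 ppi
gap_pixels = 1.0 * pixels_per_inch                   # image lost to a 1" seam
quarter_bezel_gap = 0.5 * pixels_per_inch            # two 1/4" bezels butted together
```

So a one-inch seam swallows roughly 92 columns of image, while two butted quarter-inch bezels would cost about half that, which is why the bezel width matters so much for a photo wall.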
There are 1/4” bezel monitors made for the video wall industry, but it’s all very high end, and in fact it’s hard to find these monitors for sale on the regular market from what I have seen. If they are, they no doubt cost 2-3x as much, being “specialty” market monitors. I really think it’s time to push multi-monitor as more than a specialty market.
I accept that you need to have something strong supporting and protecting the edge of your delicate LCD panel. But we all know from laptops it doesn’t have to be that wide. So what might we see?
- Design the edges of the monitor to interlock, and have the supporting substrate further back on the left and further forward on the right. Thus let the two panels get closer together. Alternately let one monitor go behind the other and try to keep the distance to a minimum.
- Design monitors that can be connected together by removing the bezel and protection/mounting hardware and carefully inserting a joiner unit which protects the edges of both panels but gets them as close together as it can, and firmly joins the two backs for strength. May not work as well for 2x2 grids without special joiners.
- Just sell a monitor that has 2, 3 or 4 panels in it, mounted as close as possible. I think people would buy these, allowing them to be priced even better than two monitors. Offer rows of 1, 2 or 3 and a 2x2 grid. I will admit that a row of 4, which is what I want, is not likely to be as big a market.
- Sell components to let VARs easily build such multi-panel monitors.
When it comes to multi-panel, I don’t know how close you could get the panels but I suspect it could be quite close. So what do you put in the gap? Well, it could be a black strip or a neutral strip. It could also be a translucent one that deliberately covers one or two pixels on each side, and thus shines and blends their colours. It might be interesting to see how much you could reduce the visual effect of the gap. The eye has no problem looking through grid windows at a scene and not seeing the bars, so it may be that bars remain the right answer.
It might even be possible to cover the gap with a small thin LCD display strip. Such a strip, designed to have a very sharp edge, would probably go slightly in front of the panels, and appear as a bump in the screen — but a bump with pixels. From a distance this might look like a video wall with very obscured seams.
For big video walls, projection is still a popular choice, other than the fact that such walls must be very deep. With projection, you barely need the bezel at all, and in fact you can overlap projectors and use special software to blend them for a completely seamless display. However, projectors need expensive bulbs that burn out fairly quickly in constant use, so they have a number of downsides. LCD panel walls have enough upsides that people would tolerate the gaps if they can be made small using techniques above.
Anybody know how the Barco wall at the Comcast center is done? Even in the video from people’s camcorders, it looks very impressive.
If you see LCD panels larger than 24” with thin bezels (3/8 inch or less) at a good price (under $250) and with a good quality panel (doesn’t change colour as you move your head up and down) let me know. The Samsung 2443 looked good until I learned that it, and many others in this size, have serious view angle problems.
Submitted by brad on Fri, 2009-09-25 00:33.
Tonight I watched the debut of FlashForward, which is based on the novel of the same name by Rob Sawyer, an SF writer from my hometown whom I have known for many years. However, “based on” is the correct phrase because the TV show features Hollywood’s standard inability to write a decent time travel story. Oddly, just last week I watched the fairly old movie “Deja Vu” with Denzel Washington, which is also a time travel story.
Hollywood absolutely loves time travel. It’s hard to find a Hollywood F/SF TV show that hasn’t fallen to the temptation to have a time travel episode. Battlestar Galactica’s producer avowed he would never have time travel, and he didn’t, but he did have a god who delivered prophecies of the future which is a very close cousin of that. Time travel stories seem easy, and they are fun. They are often used to explore alternate possibilities for characters, which writers and viewers love to see.
But it’s very hard to do it consistently. In fact, it’s almost never done consistently, except perhaps in shows devoted to time travel (where it gets more thought) and not often even then. Time travel stories must deal with the question of whether a trip to the past (by people or information) changes the future, how it changes it, who it changes it for, and how “fast” it changes it. I have an article in the works on a taxonomy of time travel fiction, but some rough categories from it are:
- Calvinist: Everything is cast, nothing changes. When you go back into the past it turns out you always did, and it results in the same present you came from.
- Alternate world: Going into the past creates a new reality, and the old reality vanishes (at varying speeds) or becomes a different, co-existing fork. Sometimes only the TT (time traveler) is aware of this, sometimes not even she is.
- Be careful not to change the past: If you change it, you might erase yourself. If you break it, you may get a chance to fix it in some limited amount of time.
- Go ahead and change the past: You won’t get erased, but your world might be erased when you return to it.
- Try to change the past and you can’t: Some magic force keeps pushing things back the way they are meant to be. You kill Hitler and somebody else rises to do the same thing.
Inherent in several of these is the idea of a second time dimension, in which there is a “before” the past was changed and an “after” the past was changed. In this second time dimension, it takes time (or rather time-2) for the changes to propagate. This is mainly there to give protagonists a chance to undo changes. We see Marty McFly slowly fade away until he gets his parents back together, and then instantly he’s OK again.
In a time travel story, it is likely we will see cause follow effect, reversing normal causality. However, many writers take this as an excuse to throw all logic out the window. And almost all Hollywood SF inconsistently mixes up the various modes I describe above in one way or another.
Spoilers below for the first episode of FlashForward, and later for Deja Vu.
Update note: The fine folks at io9 asked FlashForward’s producers about the flaw I raise but they are not as bothered by it.
Submitted by brad on Wed, 2009-09-23 17:50.
It seems that with more and more of the online transactions I engage in — and sometimes even when I don’t buy anything — I will get a request to participate in a customer satisfaction survey. Not just some of the time in some cases, but with every purchase. I’m also seeing it on web sites — sometimes just for visiting a web site I will get a request to do a survey, either while reading, or upon clicking on a link away from the site.
On the surface this may seem like the company is showing they care. But in reality it is just the marketing group’s thirst for numbers both to actually improve things and to give them something to do. But there’s a problem with doing it all the time, or most of the time.
First, it doesn’t scale. I do a lot of transactions, and in the future I will do even more. I can’t possibly fill out a survey on each, and I certainly don’t want to. As such I find the requests an annoyance, almost spam. And I bet a lot of other people do.
And that actually means that if you ask too much, you now will get a self-selected subset of people who either have lots of free time, or who have something pointed to say (ie. they got a bad experience, or perhaps rarely a very good one.) So your survey becomes valueless as data collection the more people you ask to do it, or rather the more refusals you get. Oddly, you will get more useful results asking fewer people.
Sort of. Because if other people keep asking everybody, it creates the same burn-out and even a survey that is only requested from 1 user out of 1000 will still see high rejection and self-selection. There is no answer but for everybody to truly only survey a tiny random subset of the transactions, and offer a real reward (not some bogus coupon) to get participation.
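A toy simulation makes the self-selection effect visible. Every rate and distribution here is invented purely for illustration: "true" satisfaction is gaussian around 7 out of 10, unhappy customers are assumed likelier to answer, and heavy asking is modeled as a much lower baseline willingness to respond.

```python
# Toy model of survey self-selection. All numbers are invented for
# illustration: satisfaction is gaussian around 7/10, the disgruntled
# answer more often, and over-surveying burns out baseline willingness.
import random

def observed_mean(ask_fraction, n=100_000, seed=1):
    random.seed(seed)
    responses = []
    for _ in range(n):
        satisfaction = random.gauss(7.0, 1.5)   # true feelings, mean ~7
        if random.random() > ask_fraction:
            continue                            # this customer wasn't asked
        # Burnout: when everyone is surveyed, baseline willingness drops.
        base_rate = 0.5 if ask_fraction < 0.05 else 0.05
        # The disgruntled have something to say, so they answer more often.
        gripe_boost = 0.2 * max(0.0, 7.0 - satisfaction) / 3.0
        if random.random() < base_rate + gripe_boost:
            responses.append(satisfaction)
    return sum(responses) / len(responses)

rare = observed_mean(0.01)   # survey few: decent response rate, small bias
spam = observed_mean(0.90)   # survey everyone: burnout plus self-selection
```

In this toy world the heavily surveyed case reports a noticeably lower average than the lightly surveyed one, even though the underlying customers are identical, which is the point: asking everyone measures who answers, not how customers feel.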
I also get phone surveys today from companies I have actually done business with. I ask them, “Do you have this survey on the web?” So far, they always say no, so I say, “I won’t do it on the phone, sorry. If you had it on the web I might have.” I’m lying a bit, in that the probability is still low I would do it, but it’s a lot higher. I can do a web survey in 1/10th the time it takes to get quizzed on the phone, and my time is valuable. Telling me I need to do it on the phone instead of the web says the company doesn’t care about my time, and so I won’t do it and the company loses points.
Sadly, I don’t see companies learning these lessons, unless they hire better stats people to manage their surveys.
Also, I don’t want a reminder from everybody I buy from on eBay to leave feedback. In fact, remind me twice and I’ll leave negative feedback if I’m in a bad mood. I prefer to leave feedback in bulk; that way every purchase doesn’t turn into multiple transactions. Much better if eBay sends me a reminder once a month to leave feedback for those I didn’t report on, and takes me right to the bulk feedback page.
Submitted by brad on Tue, 2009-09-22 16:05.
I have put up a gallery of panoramas for Burning Man 2009. This year I went with the new Canon 5D Mark II, which has remarkable low-light shooting capabilities. As such, I generated a number of interesting new night panoramas in addition to the giant ones of the day.
In particular, you will want to check out the panorama of the crowd around the burn, as seen from the Esplanade, and the night scene around the Temple, and a twilight shot.
Below you see a shot of the Gothic Raygun Rocket, not because it is the best of the panoramas, but because it is one of the shortest and thus fits in the blog!
Some of these are still in progress. Check back for more results, particularly in the HDR department. The regular sized photos will also be processed and available in the future.
Finally, I have gone back and rebuilt the web pages for the last 5 years of panoramas at a higher resolution and with better scaling. So you may want to look at them again to see more detail. A few are also up as gigapans including one super high-res 2009 shot in a zoomable viewer.
Submitted by brad on Mon, 2009-09-21 14:40.
Two events I will be at…
Tonight, at 111 Minna Gallery in San Francisco, we at EFF will be hosting a reading by Randall Munroe, creator of the popular nerd comic “xkcd.” There is a regular ticket ($30) and a VIP reception ticket ($100) and just a few are still available. Payments are contributions to the EFF.
In two weeks, on Oct 3-4, I will be speaking on the future of robot cars at the Singularity Summit in New York City. Lots of other good speakers on the podium too.
See you there…
Submitted by brad on Wed, 2009-09-16 13:05.
Two years ago, I discussed solutions for Burning Man Exodus. The problem: Get 45,000 people off the playa in 2 days, 95% of them taking a single highway south which goes through a small town which has a chokepoint capacity of about 450 cars/hour. Quite often wait times to get onto the road are 4 hours or more, though this year things were smoother (perhaps due to a lower attendance) and the number of people with 4 hour waits was lower. In a bad year, we might imagine that 25,000 people wait an average of 3 hours, or 75,000 person hours, almost 40 man-years of human labour.
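For the record, the arithmetic behind that estimate. The wait figures come from the paragraph above; the 2,000-hour work year is my assumption for the conversion:

```python
# Figures from the paragraph above: 25,000 people waiting an average
# of 3 hours. The 2,000-hour work year is an assumed conversion.
WAITING_PEOPLE = 25_000
AVG_WAIT_HOURS = 3

person_hours = WAITING_PEOPLE * AVG_WAIT_HOURS   # 75,000 person-hours
work_years = person_hours / 2_000                # about 37.5 man-years
```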
Some judge my prior solution, with appointments, as too complex. Let’s try something which is perhaps simpler, at least at its base, though I have also thought up some complexities that may have value.
When you are ready to leave, drive to the gate. There, as was tried in 2000, you would be directed into a waiting lot, shaped with cones at the front. The lot would have perhaps 20 rows of 10 cars (holding around 150 vehicles, as there are so many trailers and RVs). The lot would have a big number displayed. There you would park. You would then have three options:
- Stay in the lot and party with the other people in the lot, or sit in your car. This is in fact what you would do in the current situation, except there you start your engine and go forward 30 feet every minute. Share leftover food. Give donations to DPW crew. Have a good time.
- Go to the exodus station near the parking lots. Get a padded envelope and write your address on it, and your plate number, DL number and car description. Put your spare set of keys in the envelope. Get on your bike, or walk back to the city, and have a good time there. Listen to Exodus Radio. They will give reports on when your lot is going to move, in particular a 30 minute warning. When you get it, go back, pick up your keys and get ready.
- Volunteer to help with city cleanup. Do that by driving to the premium section of the waiting lot. Park there, wait a bit, and then get on a bus which takes you somewhere to do an hour shift of clean-up. You moop check the playa, clean, take down infrastructure, or spend an hour doing Exodus work which you trained for earlier. At the end of your shift, you are free to take a bus back to the lot, or wait in the city with friends. Listen to Exodus Radio. They will call your premium lot. It will be called well before the regular lot. Ideally give an hour, gain an hour! Get there and be ready.
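A quick sanity check on the staging-lot timing, using the ~450 cars/hour chokepoint and ~150-vehicle lots described above:

```python
# Throughput figures from the post: the chokepoint passes ~450
# cars/hour and each staging lot holds ~150 vehicles.
CHOKEPOINT_CARS_PER_HOUR = 450
LOT_SIZE = 150

# One lot streams out through the chokepoint in about 20 minutes...
lot_drain_minutes = LOT_SIZE / CHOKEPOINT_CARS_PER_HOUR * 60
# ...so roughly three lots get called per hour.
lots_per_hour = CHOKEPOINT_CARS_PER_HOUR / LOT_SIZE
```

At three lots an hour, the 30-minute warning on Exodus Radio gives a listener about a lot and a half of lead time to bike back and collect their keys, which seems workable.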
When your lot is called, the Exodus worker pulls back the cones and the lanes stream out, non-stop (but still 10mph) off the playa. The road does not have to be lengthened to hold more cars. At the blacktop entrance, an Exodus worker with a temporary traffic light has it set to a green left arrow except when other traffic is coming, when it’s red. You turn without hesitation (people do that on green arrows, but slow down for flag workers.) Off you go.
As noted, it seems a good idea that people who want to leave the lot leave a set of keys. Not their only set — it is foolish to bring only one set to the playa anyway, and if you read the instructions you knew this. This allows exodus workers to easily move vehicles for people who don’t return, and there will be some. Even so the lots should be designed so it’s not hard to get around them. If the first lane has a spare lane to the other direction that works. It does require somebody to hand back the keys. If you read the instructions, they will say to bring photos of yourself (and alternate drivers.) Tape that photo to the key envelope and it makes it very quick and easy for the key wrangler to hand you the right keys. Don’t bring a photo and they must confirm a DL number, which they won’t have time to do if time is short — so get there in plenty of time.
If you don’t get there in time to get your keys, you can wait, or you can pay $10 (or whatever it costs) and the BM Org will mail you the keys after the event. Of course with rental vehicles this is not an option, so be there early.
However, it may also be simpler to not do the key system, and tow people who don’t show up, and charge them a fat fee for that. Or tow those who don’t leave a key. People might leave a fake key, which would result in a tow and an even larger fine, perhaps. As such I am not wedded to the key desk idea and it may be simpler to first see if no-shows are a big problem. No-shows can be punished in lots of ways if they signed a contract before leaving.
There is an issue for people who do volunteer work and then head for the city. They need to have left keys before the volunteer shift, or return to the lot to leave them, or not leave them and risk a tow.
Volunteers would get a leader who directs them what they will be doing. A common task will be doing a playa walk/MOOP sweep. The leader will listen to Exodus Radio and know if things move quickly and the volunteers must return. Normally, however, volunteer shifts would be taken only when the line is very long, much longer than a volunteer shift. People can of course offer to do more than one shift when the line is long but in that case they should bring their own portable radio, just as people who leave the lot should bring one or be near one. The shift leaders could also have a radio on loud enough for all to hear, hopefully the DJ will be doing something fun between exodus lot announcements.
As noted, one of the things people can volunteer for is exodus work itself. The offer of early exodus in exchange for an hour of exodus work assures there can never be a shortage of workers as long as there is a base of workers that does it without that reward. You’re helping the people ahead of you in line get out earlier. However, regular (non-leaving) volunteers are needed for when the line is short and first bunches up, and for when it shortens again.
To do exodus work you would have to attend training in advance, and be certified as able to do it. Probably done in SF, but possibly on-playa. Some other volunteer jobs (such as cleanup crew leader) would require some training and approval.
Staff needed:
- An exodus DJ (in a tower overlooking the line and the lots) with assistant or two who are controlling the whole operation.
- Flag worker controlling the traffic light at the blacktop. Possibly others in Gerlach.
- 2 crews of 1-2 workers directing cars into the lot currently filling up. They also prepare the lot, replacing the exit cones and possibly moving no-show cars to the side. May have a golf cart.
- 1-2 workers diverting cars from the main exit lane merge (the “fallopian tubes”) to the staging lots when needed. A cop would be very handy here.
- 1 worker to remove the cones at the lot being emptied and wave cars out of it. (The Exodus DJ is also telling those people to get going.) When only one lane is left, this person moves to the next lot. Worker probably has a scooter or golf cart.
- 1-2 workers to man the key desk.
The police come in huge numbers and spend a lot of time on victimless crimes. Managing traffic is a great way to make really effective use of their police powers. Police can be there to deal with people who ignore signs, bypass or cut out of lots, or who leave their car without doing a key drop or contract.
How to start the lots
It is an interesting problem how to start the lots close to the city. Initially the volume is low and people exit directly, and they will tend to spread into multiple lanes without a lot of work, eager as faster vehicles will be to pass slow ones. Eventually they will bunch up at the forced merge, and then the bunch-up will spread backwards, traffic-jam style. However, this is taking place three miles from the city, at least a 15-minute drive at 10 mph. There is a magic amount of back-up at which point you should start holding cars, and then a point at which you should release a lot full of them. Fortunately any short gaps you put in the stream are not wasted, as they are re-smoothed on the blacktop before Gerlach-Empire, which is believed to be the primary choke point. However, it will take experience to learn the exact right times, so the first year will not do as well as later years. Car-count data has been kept from past years, presumably broken down by hour, which could help.
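The timing question can be made concrete with a toy queue model. Every number below is a guess for illustration (the choke-point rate, car spacing, and the three miles of queue room are assumptions, not measured exodus data):

```python
# Toy model of when the exodus back-up would reach the holding lots.
# All constants are hypothetical placeholders, not measured playa data.

CAR_LENGTH_FT = 25           # assumed car length + gap in a stopped queue
CHOKE_RATE = 700             # assumed cars/hour past the Gerlach-Empire choke
QUEUE_ROOM_FT = 3 * 5280     # assumed queue space between choke point and lots

def hours_until_backup(arrival_rate):
    """Hours until the stopped queue reaches the lots; None if it never does."""
    excess = arrival_rate - CHOKE_RATE        # cars/hour joining the queue
    if excess <= 0:
        return None                           # the choke point keeps up
    cars_to_fill = QUEUE_ROOM_FT / CAR_LENGTH_FT
    return cars_to_fill / excess

print(hours_until_backup(1000))   # about 2.1 hours at 1000 cars/hour
```

Plugging in different arrival rates shows how quickly the "magic amount of back-up" arrives once departures exceed the choke point's capacity, and thus roughly when holding should begin.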
Submitted by brad on Tue, 2009-09-15 17:21.
There is a number that should not be horribly hard to calculate by the actuaries of the health insurance companies. In fact, it’s a number that they have surely already calculated. What would alternate health insurance systems cost?
A lot of confusion in the health debate comes from two views the public holds of health insurance. On the one hand, it's insurance, which means that of course insurers would not cover things like most pre-existing conditions. In every other field, insurance is sold only to cover unknown risk. If your neighbours regularly shoot flaming arrows onto your house, you will not get fire insurance to cover that, except at an extreme price. Viewed purely as insurance, it is silly to ask insurance companies to cover these things, or to cover known and voluntary expenses like preventative care or birth control pills. (Rather, an insurance company should raise your prices if you don't take preventative care, or have you allocate funds for the ordinary costs of planned events, because it doesn't want to cover choices, just risks.)
However, we also seek social goals for the health insurance system. So we put rules on health insurance companies of all sorts. And now the USA is considering a very broad change — “cover everybody, and don’t ding them for pre-existing conditions.”
From a purely business standpoint, if you don't have pre-existing conditions, you don't want your insurance company to cover them. While your insurer may not be a mutual company, in a free market every company's prices should be not too far from what a mutual plan would charge. Everything your company covers that is expensive and not going to happen to you raises your premiums. If you are a healthy-living, healthy person, you want to insure with a company that covers only such people's unexpected illnesses, as this will give you the lowest premiums by a wide margin.
However, several things are changing the game. First of all, taxes are paying for highly inefficient emergency room care for the uninsured, and society is paying other costs for a sick populace, including the spread of disease. Next, insurance companies have discovered that if the application process is complex enough, it becomes possible to find a flaw in the application of many patients who make expensive claims, and thus deny them coverage. Generally you don't want to insure with a company that would do this: while your premiums will be lower, it is too hard to predict whether this might happen to you. The more complex the policy rules, the harder the prediction. Unfortunately, it is hard to discover this in advance when buying a policy, and hard to shop on that basis.
But when an insurance company decides on a set of rules, it does so under the guidance of its actuaries. They tell the officers, “If you avoid covering X, it will save us $Y” and they tell it with high accuracy. It is their job.
As such, these actuaries should already know the cost of a system where a company must take any client at a premium decided by some fairly simple factors (age being the prime one) compared to a system where they can exclude or surcharge people who have higher risks of claims. Indeed, one might argue that while clearly older people have a higher risk of claims, that is not their fault, and even this should not be used. Every factor a company uses to deny or surcharge coverage is something that reduces its costs (and thus its premiums) or they would not bother doing it.
On the other hand, elimination of such factors of discrimination would reduce costs in selling policies and enforcing policies, though not enough to make up for it, or they would already do it, at least in a competitive market. (It’s not, since any company that took all comers at the same price would quickly be out of business as it would get only the rejects of other companies.)
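The kind of calculation the actuaries do can be sketched in miniature: a community-rated premium is just the pool-weighted average of expected claims, and it rises when healthy buyers opt out, which is the take-all-comers spiral described above. The group shares and claim costs below are invented purely for illustration:

```python
# Toy illustration of a community-rated premium and the adverse-selection
# spiral.  All group shares and claim costs are invented numbers.

def community_premium(groups):
    """Single premium charged to everyone: pool-weighted expected claims."""
    return sum(share * cost for share, cost in groups.values())

pool = {                     # group: (share of pool, expected annual claims $)
    "healthy": (0.80, 2000),
    "high_risk": (0.20, 14000),
}
print(community_premium(pool))     # 0.8*2000 + 0.2*14000 = 4400.0

# If half the healthy group opts out, renormalize the shares: the pool
# is sicker and the break-even premium jumps.
shrunk = {"healthy": (0.40, 2000), "high_risk": (0.20, 14000)}
total_share = sum(share for share, _ in shrunk.values())
renorm = {g: (share / total_share, cost) for g, (share, cost) in shrunk.items()}
print(round(community_premium(renorm)))   # 6000
```

The jump from 4400 to 6000 is why, in a competitive market, a lone company that took all comers at one price would quickly attract only the rejects of other companies.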
Single payer systems give us some suggestion on what this costs, but since they all cost less than the current U.S. system it is hard to get guidance. They get these savings for various reasons that people argue about, but not all of them translate into the U.S. system.
There is still a conundrum in a “sell to everybody” system. Insurance plans will still compete on how good the care they will buy is. What doctors can you go to? HMO or PPO? What procedures will they pay for, what limits will they have? The problem is this: If I’m really sick, it is very cost effective for me to go out and buy a very premium plan, with the best doctors and the highest limits. Unlike a random person, I know I am going to use them. It’s like letting people increase the coverage on their fire insurance after their house is on fire. If people can change their insurance company after they get sick then high-end policies will not work. This leaves us back at trying to define pre-existing conditions, and for example allowing people only to switch to an equivalent-payout plan for those conditions, while changing the quality of the plan on unknown risks. This means you need to buy high-end insurance when you are young, which most people don’t. And it means companies still have an incentive to declare things as pre-existing conditions to cap their costs. (Though at least it would not be possible for them to deny all coverage to such customers, just limit it.)
Some would argue that this problem is really just a progressive tax — the health plans favoured by the wealthy end up costing 3 times what they normally would while poorer health plans are actually cheaper than they should be. But it should put pressure on all the plans up the chain, as many poor people can’t afford a $5,000/month premium plan no matter that it gives them $50,000/month in benefits, but the very wealthy still can. So they will then switch to the $2,000/month plan the upper-middle class prefer, and go broke paying for it, but stay alive.
Or let’s consider a new insurance plan, the “well person’s insurance” which covers your ordinary medical costs, and emergencies, but has a lifetime cap of $5,000 on chronic or slow-to-treat conditions like cancer, diabetes and heart disease. You can do very well on this coverage, until you get cancer. Then you leave the old policy and sign up for premium coverage that includes it, which can’t be denied in spite of your diagnosis.
This may suggest that single-payer may be the only plan which works if you want to cover everybody. But single-payer (under which I lived for 30 years in Canada) is not without its issues. Almost all insurance companies ration care, including single payer ones, but in single payer you don’t get a choice on how much there will be.
However, it would be good if the actuaries would tell us the numbers here. Just what will the various options truly cost and what premiums will they generate? Of course, the actuaries have a self-interest or at least an employer’s interest in reporting these numbers, so it may be hard to get the truth, but the truth is at least out there.
Submitted by brad on Tue, 2009-09-15 14:06.
Yes, any device that is going to engage in some long activity that freezes it up for more than a few seconds should offer a way to cancel, abort or undo it. You would think designers would know that by now.
My latest peeve is cell phones and other smart devices which are complex enough to “boot” now. In many cases, if you want to see whether they are on, you touch the power button, and if they were not on, they start their 30 to 60 second boot process, which you must wait through so that you can then turn them off again. On some devices there is still a physical power button (and on many laptops you can fake one by holding down the soft power button for 4 seconds), but that's not a great solution. Sure, at some point the booting device reaches a state where it can't easily abort the boot because it is writing state, but this usually takes at least several seconds if not much longer to reach, so you should be able to abort right away.
Submitted by brad on Sat, 2009-09-12 15:15.
I just decided to cancel my AAdvantage credit card for a 1% cashback card with no annual fee. Many people have the frequent flyer cards so let’s consider the math on them. They typically come with a high annual fee (around $80) while other cards have no fee and other rewards.
Let’s say you spend $25,000 per year on the card, which is enough for 25,000 miles or one domestic flight on the typical airline. With a typical cashback card you get 1% back though some cards give 2% or even 4% back on certain classes of purchases. I have an Amex from Costco that gives 3% on gasoline and 2% on travel expenses, but Amex is not as accepted as Visa or MC.
- Your cash cost for the 25K miles is $250 plus the $80 annual fee = $320
- There are varying taxes and fees on award tickets, as low as $8 but sometimes much higher
- If you are booking less than 3 weeks in advance, fees of $50 to $100 will apply
- Finding available award seats can be quite difficult, the supply is far lower than for cash seats in most cases. There are also blackouts.
- You will not receive miles for your award trip. A typical cross-country return is 5,000 miles, or $50 at the 1% rate, $80-$100 at the value airlines claim
- Most people use miles long after they earn them, and in fact have a large balance. So a time discount should apply. Miles sitting in accounts earn no interest, cash does.
As such the free trip is harder to get and costs $400 to $500. But that is not far from (and sometimes more than) the cash price of a ticket.
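The arithmetic above can be packaged so it re-runs for other spending levels. The fee, tax, and mileage figures below are the illustrative ones from this post (the $80 fee, $8 minimum award taxes, 5,000 forgone trip miles), not universal rates:

```python
# Back-of-envelope cost of the "free" 25,000-mile award ticket versus
# a no-fee 1% cashback card.  Dollar figures are this post's examples.

def cost_of_award_ticket(annual_spend, annual_fee=80, cashback_rate=0.01,
                         award_taxes=8, forgone_trip_miles=5000):
    """Cash given up per 25,000-mile award ticket vs. a 1% cashback card."""
    miles_needed = 25000
    years = miles_needed / annual_spend          # years of spending to earn it
    cashback_forgone = miles_needed * cashback_rate   # the 1% you didn't take
    fees = annual_fee * years                    # annual fees paid while earning
    missed_trip_value = forgone_trip_miles * cashback_rate  # award trips earn no miles
    return cashback_forgone + fees + award_taxes + missed_trip_value

print(cost_of_award_ticket(25000))   # 250 + 80 + 8 + 50 = 388.0
print(cost_of_award_ticket(12500))   # 250 + 160 + 8 + 50 = 468.0
```

At $25,000/year the ticket costs roughly $390 in forgone cash even before late-booking fees and any time discount; at $12,500/year it takes two years (and two annual fees) to earn, pushing the cost near $470.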
But cash is of course a much more flexible thing: you can use it for anything, not just airline tickets. There are a raft of cards out there now which tout “miles on any airline,” and what they really give you is a 1% cashback that is only good on airlines. General 1% cashback is much better.
There is an argument that miles do much better when used for upgrades. Upgrading with miles can be cheaper than upgrading with cash, since the cash price of business class seats is very high. However, as you learn if you are not a top elite flyer, upgrades are quite hard to get; others are ahead of you in line. AA has also instituted a cash co-pay on upgrades, making mileage upgrades more expensive than before.
If you spend less than $25K per year on the card, the math gets even worse. At $12.5K per year, you gave up at least $460 to $550 for your free ticket, and when the tickets are available on miles, the cash fare is often lower. If you spend much more a year, the cost may make some sense.
A common trick for people who have mileage cards is to pick up group checks at restaurants and have everybody pay you cash. However, the cards that give 3% cashback at restaurants like the Amex are much better for this.