Submitted by brad on Tue, 2008-10-21 19:25.
We need renewable energy, such as solar power, so companies are working hard to make it cheaper. They can do this by developing new, cheaper-to-manufacture technologies, by finding cheaper ways of installing, or simply by gaining economies of scale as demand and production increase. Solar hasn’t managed to follow Moore’s law, though some new-technology developers predict it someday will.
However, there is a disturbing paradox in these activities. Unlike with computers, it does not make financial sense to buy solar (or any other low- or zero-operating-cost energy technology) if you have reasonable confidence the price is going to improve at even modest rates.
Imagine you have an energy technology with effectively zero operating cost, like PV panels. Let’s say that it’s reached the point that it can match the price of grid power over a 20-year lifetime. That means that, if it costs $10,000, it costs $72 per month or $872 per year at a 6% cost of funds. (Since $872 buys 9,688 kWh at the national average grid price of 9 cents, that means you need a 4,800 watt PV system to match the grid, which is hard to do for $10,000, but someday it won’t be.)
But here’s the problem. Let’s say that it’s very reasonable to predict that the cost of solar will drop by more than 9% over the coming year. That’s a modest decrease, entirely doable just with increased production, and much less than people hope for from new technology. That means that your $10,000 system will cost you $9,100 to buy a year down the road. Since we are talking about a grid-equivalent-price system, the cost of grid power in this example is $872. So you can buy the power from the grid, wait a year, and save money. The more you expect the price of solar to drop, the more it makes financial sense to delay. (Note that at this lower price the system is now beating the grid. What really matters is whether the dollar cost reduction of the solar system exceeds the dollar cost of the grid electricity purchased.)
Indeed, if you predict the cost drops will continue for many years, it sadly does not make sense to buy for a long time: effectively, until your predictions show that the annual cost decrease of the system no longer exceeds the cost of the power it would generate. That point has to come eventually, since as the system gets very cheap it can’t really drop in price by more than the cost of grid power, especially while there are physical install costs in the mix. But it can certainly drop by 6% per year for a decade, which would take it down to roughly half its original cost, and possibly longer.
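The arithmetic above can be sketched in a few lines, using the figures from the text (a $10,000 system, 6% cost of funds, 20-year life, and an expected 9% price drop over the next year):

```python
def annuity_payment(principal, rate, years):
    """Annual payment that amortizes `principal` over `years` at interest `rate`."""
    return principal * rate / (1 - (1 + rate) ** -years)

system_cost = 10_000.0
grid_equivalent = annuity_payment(system_cost, 0.06, 20)  # roughly $872/year

# Buy-vs-wait: waiting a year saves the expected price drop on the system,
# but costs a year of grid power at the grid-equivalent price.
drop_rate = 0.09
savings_from_waiting = system_cost * drop_rate  # $900 cheaper system next year
cost_of_waiting = grid_equivalent               # ~$872 of grid power meanwhile

print(f"grid-equivalent annual cost: ${grid_equivalent:,.0f}")
if savings_from_waiting > cost_of_waiting:
    print("waiting a year comes out ahead")
else:
    print("buy now")
```

At a 9% expected drop, waiting wins by about $28; the break-even drop rate is where the annual price decline exactly equals the annual cost of grid power.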
Now, you will note I speak of the financial cost. This ignores any motivation based on trying to be greener; it is the analysis that would be done by somebody simply looking for the best price on power, which is frankly how most people think. It can be altered both by government incentives to buy solar and by externality taxes on polluting grid power.
This also applies not simply to solar, but any technology where you invest a lot of money up front, and have close to zero operating cost. Thus wind, geothermal and certain other technologies face the same math. Even nuclear to some extent.
All of this also depends on your confidence in your predictions. The more uncertain your predictions of price drops, the more you might be pushed in other directions to obtain certainty.
The paradox is this. We may be in a situation where solar is competing with grid power, and many are poised to buy it. If many do buy it, economies of scale will drive the price down. Thus, nobody should buy it, as they should wait for that price decrease! But if nobody buys it, it won’t decrease in price as much, creating a chaotic system. Some will buy it (to be green, for example) so it’s not a total loss, but it becomes harder to understand.
We’re used to dealing with computers, which drop in price not just 6% a year but 40 to 50%. We’ve all felt the dilemma over whether to buy a computer or other electronic device that will lose its value so quickly, or whether to wait. In that case, though, if we wait, we don’t get our nice new computer or camera, and thus lose out; you could wait forever, which makes no sense. The logic is not the same with power. With power, we’re talking about a commodity that you can buy elsewhere, and get all the benefit, for less than the depreciation on what you considered buying.
What can keep the market for solar going if it looks like it will drop in price? Well, first of all, many people want to buy solar other than to save money. (Indeed, only a few people today with high local electricity prices and fat government rebates can save money with it.) Secondly, it seems that few people, even if their goal is to save money, think this way. And if they do, they are uncertain of their predictions, and would rather get the solar now than risk grid power going up or solar going down. But curiously, it remains the case that if people make predictions of cheap solar in the near term, and they are believed, it should kill most sales of solar in the present term.
Submitted by brad on Mon, 2008-10-20 13:46.
The latest tome — and at 900 pages, I mean tome — from Neal Stephenson (author of Snow Crash, The Diamond Age and Cryptonomicon) is Anathem. I’m going to start with a more general review, then delve into deep spoilers after the jump.
This book is highly recommended, with the caveat that you must have an interest in philosophy and metaphysics to avoid being turned off by a few fairly large sections which involve complex debate on these topics. On the other hand, if you enjoy such exploration, this is the book for you.
Anathem is set on a planet which is not Earth, but is full of parallels to Earth. The culture is much older than ours, but not vastly more advanced, because on this world scientists, mathematicians and philosophers live a cloistered life. They live in walled-off communities called Concents, whose internal divisions have contact with the outside world, and with each other, for only one 10-day period out of each year, decade, century or millennium.
As such, the Avout, as they are called, lead a simple life, mostly free of technology, devoted to higher learning. It’s a non-religious parallel to monastic life. In the outside “saecular” world, people live in a crass, consumer-oriented society both like and unlike ours.
I give the recommendation because he pulls this off really well. Anathem is a masterwork of world-building. You really get to identify with these mathematical monks and understand their life and worldview. He really builds a world that is different but understandable.
One way he does this, which does frustrate the reader at first, is through the creation of a lot of new coined terms. Some terms are used without introduction, some get a dictionary entry to help you into them. The terms are of course in a non-Earth language, but they are constructed from Latin and English roots, so they make sense to your brain. Soon you will find yourself using them.
So, if you like clever, complex worldbuilding and the worlds of science and philosophy, this book, long as it is, is worth it for you. However, I will shortly talk about the ending. Stephenson has a curse — his world building is superb, and his skill at satisfying endings is not up to it. Anathem actually has a decently satisfying ending in many ways — better than he has done before. There is both an ending to the plot, and some revelations at the very end which make you rethink all you have read before in the book. This time, I find fault with the consistency of the metaphysics, and mainly because I have explored the same topic myself and found it very difficult to make it work.
It’s not too much of a spoiler to say that after we are shown this remarkable monastic world, events transpire to turn it all upside-down. You won’t be disappointed, but I can’t go further without getting into spoilers. You will also find spoilers in my contributions to the Anathem Wiki. That Wiki may be handy to you after you read the book to understand some of the complex components.
Submitted by brad on Thu, 2008-10-16 21:09.
Two disturbing trends are on the rise in the area of blog comment spam.
You may want to note that I have changed the challenge question for posting comments on this blog. It is no longer my last name.
The first has been taking place for a while — it’s hand-written comment spam. Spammers are paying people, probably low-wage people in 3rd world countries, to write comments on blog posts that are very roughly on-topic. Then those comments will contain a link to the spammer’s site, with the keywords the spammer wants. Sometimes the link will just be on the userid.
The spammers do this even though I tell them that all links in comments get the “nofollow” tag which makes Google and other search engines ignore them and not assign rank to them. They are thus wasting their time, other than to get a few clickthroughs from readers here. The people they hire are smart enough to pass the Turing test and write a comment that is roughly on topic, but they either don’t understand the nofollow warning or don’t worry about it because they are paid by the comment.
Truth be told, they don’t write very good comments. Any real examination shows they are not really appropriate. And more to the point, unlike the majority of comments, they have links, and of course those links are to commercial sites. The mere existence of links is enough to make a comment worthy of examination. And I now have spam filters that put posts with possibly bad links into an approval queue rather than posting them immediately, unfortunately.
Today I discovered a new type of spam on the blog. A spammer was creating userids, but not posting any comments. They just put a link to their spam pages in their user description. Userid creation does require a challenge question but at least one spammer wrote code to fill it in, since I don’t change the question every time as perhaps I should.
The userids would have names like “Brittney nude” and thus they show up in the blog user directory and are parsed by search engines. Since my pagerank is high, people are finding these userid pages for searches, and then perhaps following links to the spammers.
Mostly I want my challenge to be very simple, to make it as easy as possible to participate. I don’t like image captchas; I find them a pain when I go to other sites. And most of them have been broken on the big, high-value sites, though they probably would not get broken for a smaller site like mine. Other options include simple math problems (but those may get broken by code as well).
My general rule has been that unless you are a high-value target (and perhaps I’m going up in value), you should not have to do very much. The key is to not be the same as other sites, and to not do anything like use a standard module for Drupal so you are the same as all other Drupal sites. As a collection, Drupal sites are a high-value target.
I deleted the users of course, but the interesting trick here was that since they did not post, I only noticed them by seeing referer logs coming from search engines.
Update: They are keeping at it, so I decided to put user creation on administrator approval. Truth is, not very many readers here create accounts, and there are only minor reasons to do so. If you create an account it takes away the “Not Verified” after your name and you don’t have to enter any parameters again. You can also edit and remove your comments after the fact if you post them with an account.
Submitted by brad on Tue, 2008-10-14 21:00.
I’ve written a few times about the “Selfish Merge” problem. Recently, reading the new book Traffic: Why We Drive the Way We Do by Tom Vanderbilt, I came upon some new research that has changed and refined my thinking.
The selfish merge problem occurs when two lanes reduce to one. Typically, most people try to be “good” and merge early, and that leaves the right lane, which is ending, mostly vacant. So some people zoom ahead of everybody in the right lane, and then merge at the very end. This is selfish in the sense that butting into any line is selfish. Even if overall traffic flow is not reduced (and even if it is increased) the person butting in moves everybody back one slot so they can get ahead by many slots. This angers people and generates more counter-productive behaviour, including road rage, and attempts to straddle the lanes so that the selfish mergers can’t move up to the merge point.
In Traffic, Vanderbilt writes of surprising research that changed his mind, which showed that, in simulations, some merging forms provided up to 15% more traffic throughput than proper attempts at a zipper merge. In particular, a non-selfish merge fully using the vanishing lane worked better than the typical butt-in situation described at the top.
In this merge, which I’ll call the “slow and fair merge,” drivers are told to use both lanes up to the merge-point, and then to fairly “take their turn” at the merge point entering the continuing lane. Nobody is selfish here, in that nobody butts ahead of anybody else, but both lanes are fully utilized up to the merge point.
This problem is complex, I believe, because there is a switch-over point, which I call the “collapse” point. This is the point at which the merge flow becomes high enough that traffic collapses to “stop and go” mode, before and at the merge-point. Before that point, in lighter traffic, there is little doubt (for reasons you will see below) that the “cooperating fast zipper” merge results in the best traffic flow. In particular, there are traffic volumes where you could either have cooperating zipper or “slow and fair” but cooperating zipper would do a fair bit better. There are also traffic volumes where cooperating zipper just isn’t possible any more, and we will either have “slow and fair” (which has the best volume) or “selfish merge” which has a worse volume.
Real-world experiments show different results from the theoretical. In particular, many drivers, used to the anarchic selfish-merge approach, don’t understand slow and fair, even when signs are explicit about it, and so they resist using both lanes and try to merge early. They also try to straddle, devolving to selfish merge. An experiment with digital signs which changed from advising drivers to zipper-merge in light traffic to advising “use both lanes” and “merge here, take your turn” in heavier traffic was disobeyed in slow-and-fair mode by too many drivers. The experiment ended before people could learn the system.
Submitted by brad on Tue, 2008-10-14 17:02.
This week, as part of a 3-part series on the future of driving, Ars Technica has published a feature article derived from, and covering, my series on Robocars. While it covers less than I do here, it does present it from a different perspective that you may find of interest.
Due to their large audience, there is also a stream of comments. Frankly, most indicate that the commenter has not read my underlying articles and my FAQ section, but one commenter did bring up something interesting that I have incorporated into my section on Freedom.
Their point was this: Today, the police use traffic laws as a way to diminish the rule of law. Everybody violates traffic laws regularly, so the police can always find an excuse to pull over a vehicle that they wish to pull over for other reasons. In essence, this ability has seriously eroded our privacy and freedom while we travel on the roads. Generally, robocars would never offer the police an excuse to detain any random driver. They would have to observe something inside the vehicle, perhaps, in order to have the probable cause needed to stop it. It would be more akin to being in your house. Of course, the police can often still find a way if they try hard enough, but this should make that task a great deal harder.
This does not mean that robocars still don’t present lots of privacy and freedom risks. We must work to avoid those. But this is an upside I hadn’t thought of.
There are also a lot of Diggs on the Ars Technica article, with their own comments, even more removed from my base articles, which never got too many Diggs on their own.
If you didn’t see it, back a few months ago, the series was also featured on slashdot with a lively thread.
Submitted by brad on Sat, 2008-10-11 12:48.
I have tripods with both 3 segments and 4 segments. A 4-segment tripod has 3 clamps per leg, which means 9 of them to open and close in extending and collapsing the tripod. That’s a pain. Enough of one that you sometimes find yourself asking whether a shot is worth setting up the tripod. But even 3 segment tripods are only a bit better.
I have my 4-segment legs because I can pack the tripod down into a reasonably small suitcase. I do most shooting when I travel, so this is actually my best carbon fiber tripod. But when I am out carrying the tripod, or more commonly carrying it in the car, it doesn’t need to be this short. Unfortunately, the tripod fully extended, with camera and pano mount on it, is too long to fit in most cars, so I have to collapse one set of legs. That’s not so hard, but it’s still very long and unwieldy with just one set collapsed.
Here’s a possible answer: A 4 segment tripod where the bottom two segments join not with an external clamp, but which screw or snap together to make a smooth double-length segment. You used to be able to get monopods like this. Of course, the threaded join is not very convenient, and is not adjustable. However, you could readily take it apart to pack the tripod in a suitcase. If it can be made strong enough, a snap-together join would be best, with some recessed buttons to push to pull the legs apart. Then takedown and setup could be quick enough that you would also use it to put the pod into a backpack.
However, what you would have when put together is a 2-segment tripod, because the lower pair of segments, with no bumpy clamp, could feed up into the upper two segments when both of those are extended. In other words, you would have a nice tripod you could quickly reduce to half its length and back with just 3 clamps. A reasonable length for carrying and a very easy length to put into a car trunk or back seat.
You would not, however, be able to make the tripod any shorter than half-length without undoing the bottom join. With it undone, you could get the tripod down to 1/4 length for low shots and for placing on tables and stone walls if half-length was just too high. That use is rare enough that I could handle it, especially if the join is just snaps.
The same approach could apply to your center column, or you could have just a 1/4 length center column, which is fine for most applications, since you don’t want to extend the column unless you have to, normally.
Note that the top join would be normal, so you would have 2 clamps per leg, and one hard-join. You don’t want a hard join at the top because presumably that will thin the inner diameter of the pole if you want it strong, stopping the lower segment from telescoping inside.
The 3rd segment (2nd from the ground) into which the bottom segment snaps, could also possibly have a spike or small foot coming out the center, which goes into a hole in the bottom segment. Or a place to attach such a foot. This would allow you to also configure a shorter, 3-segment standard tripod when you don’t want to snap in the lowest segment.
Submitted by brad on Thu, 2008-10-09 17:28.
Some upcoming events I will be involved in:
Burning Man Decompression, Sunday Oct 12
As I have for the past several years, I will show off my newest giant photographs of Burning Man at the “decompression” party, which takes place from noon to midnight on Sunday, Oct 12 (this coming Sunday) on Indiana St. south of Mariposa in San Francisco.
While decompression won’t get you to understand what Burning Man truly is, it’s the closest you’ll come while staying in the city. Come by, I will be easy to find with the giant photo wall. Come in “playa wear,” which means anything out of the ordinary, to get in for only $10.
Meet Jean Bartik, one of the world’s first programmers
The world’s first software team was a group of six women recruited to program the ENIAC. Jean Bartik was one of those six, and is giving a talk at the Computer History Museum in Mountain View on October 22 at 7pm. Prior to the talk, I am helping with a special VIP reception where you can meet Bartik, and see clips of a documentary-in-progress being done about the six earliest programmers. The producer is my friend Kathy Kleiman who needs financial contributions to complete the documentary. Unfortunately 3 of the women are now gone, but video interviews were made with them for this documentary.
If you would like to attend the VIP reception, send me a note.
Convergence conference on the Future
Foresight Institute, of which I am a director, is one of the organizations sponsoring Convergence 08, a futurist gathering with both scheduled debates on issues in AI, synthetic biology and longevity, and an unconference component where the attendees make the program. I’ll deliver my robot car talk, with video. This takes place the weekend of November 15, and Foresight Institute Senior Associates are all invited.
On a side note, futurists can also attend (at a higher price) this year’s Singularity Summit in San Jose on Oct 25, though I won’t be there, as I will be at Alternative Party in Finland.
Further in the calendar, check out eComm, a conference on emerging telephony. This conference took up the mantle of the O’Reilly conference on the same topic, and now takes the mantle of the recently deceased VON conference. To find out what’s happening in VoIP and not-plain-old-telephone-service, check it out in early March of ’09. I’ll be speaking on the EFF’s battle with AT&T over wiretapping and what it means for the new generation of telephony.
Submitted by brad on Thu, 2008-10-09 12:26.
Ford is making a new car-limiting system called MyKey standard in future models. This allows the car owner to enable various limits and permissions on the keys they give to their teenagers. The current system’s limits include an 80 mph speed limit, a 40% volume limit on the stereo, never-ending seatbelt reminders, earlier low-fuel warnings, audio speed alerts and the inability to disable various safety systems.
My reaction is of course mixed. If you own something, it is reasonable for you to be able to constrain its use by people you lend it to. At the same time it is easy to see this literal paternalism turn into social paternalism. While it’s always been possible to build cars that, for example, can’t go over the speed limit, it’s always been seen as a “non-starter” with the public. The more cars that are out there which have governors on them, the more used to the idea people will get. (“Valet” keys that can’t go over 25mph or open the trunk have been common for some time.)
This is going to be one of the big questions on the path to Robocars: will they be able to violate traffic laws at the command of their owners? I have an essay on that coming, in which I will also ask how much sense traffic laws make in a robocar world.
The Ford key limits speed to 80 mph to allow the teen to pass on the highway. Of course, on some highways here you could not go in the fast lane with that governor on, which probably suits the parents just fine. What they probably want is more a limit on average speed, allowing the teen, for short periods, to burst to the full power of the car if it’s needed, but not from a standing start, and of course with advance warning when the car has gone too fast for too long, to give a chance to safely slow down.
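One way such an average-speed limit might work is a burst-credit governor, in the spirit of a token bucket. This is purely my own sketch with made-up numbers, not anything Ford has announced:

```python
class BurstSpeedGovernor:
    """Allows short bursts above a base limit but caps sustained speeding.

    `credit` is the number of seconds of over-limit driving remaining; it
    drains while above the base limit and slowly refills while below it.
    """

    def __init__(self, base_limit=70.0, burst_seconds=30.0, refill_rate=0.5):
        self.base_limit = base_limit
        self.max_credit = burst_seconds
        self.credit = burst_seconds
        self.refill_rate = refill_rate  # seconds of credit regained per second

    def tick(self, requested_speed, dt=1.0):
        """Advance one time step; return the speed the car actually permits."""
        if requested_speed > self.base_limit:
            if self.credit <= 0:
                return self.base_limit  # burst allowance exhausted: govern
            self.credit = max(0.0, self.credit - dt)
            return requested_speed
        self.credit = min(self.max_credit, self.credit + self.refill_rate * dt)
        return requested_speed

    def warn(self):
        """Advance warning, so the driver can slow down safely on their own."""
        return self.credit < 5.0
```

With these numbers, 30 seconds above the base limit exhausts the allowance and the governor clamps speed; cruising below the limit gradually restores it, so a standing-start drag race is impossible but a brief passing maneuver is not.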
The earlier low-gas warning is just silly. The earlier you make a warning, the more you teach people to ignore it. If you have an early warning (subtle) and then a “this time we really mean it” warning most people will probably just use the second one. Many cars with digital fuel meters refuse to estimate fuel left below a certain amount, because they don’t want to be blamed for making you think you have more gas than you do. So they tell you nothing instead, which is silly.
What might make more sense would be the ability to make full use of speed, but with the threat of reporting it to mom &amp; dad if it’s over-used. (Such a product would be easy to add to existing cars; I wonder if anybody has made one?) Ideally the product would warn the teen when they were getting close to the limit, to let them govern themselves, knowing that they would face a lecture and complete loss of car privileges if they go over the limitations.
On one hand, this is less paternalistic, because it does not constrain the vehicle and teaches the child to discipline themselves rather than making technology enforce the discipline. On the other hand, it is somewhat Orwellian, though the system need not report the particulars of the infringement, just the fact of it. Though we can certainly see parents wanting to know all the details.
Of course, we’ll see a lot more of that sort of surveillance asked for: track logs from the GPS, in fact. Logging GPSes that can be hidden in cars cost only $80, and I am sure parents are buying them. (I have one; they are handy for geotagging photos.) We might also start seeing “smart” logging systems that measure speed infractions based on what road you are on. I.e., 80 mph nowhere near a highway is an infraction, but on the highway it isn’t.
I doubt we’ll be able to stop this sort of governing or monitoring technology — so how can we bend it to protect freedom and privacy?
Submitted by brad on Tue, 2008-10-07 23:41.
The worst thing about political debates occurs when the candidates break into their canned speeches, often repeating ones they have given before, and often ones that have very little to do with the question that was asked. This happens because the candidates’ teams, in negotiating debate rules, want it to happen. They want a boring debate, because they know that while it’s hard (but not impossible) to win an election with a great debate performance, it is certainly easy to lose one with a bad one. So they avoid risks.
We won’t stop that, but some of the questions asked by Gwen Ifill, Jim Lehrer and those selected by Brokaw could have been much better. Better, in that they could have pushed the debate towards real answers and away from canned ones, just a little. With so many questions, it is obvious before the question is finished either what the candidate will say, or what they won’t say. There are questions you just know no candidate will answer. Some questions are better than others.
So I want the moderators to workshop the questions in advance, with a small, dedicated team of political reporters who have followed the campaigns. Each proposed question should be tried out before the reporters, who will then think of how the candidate is likely to dodge the question, or what canned speech they will give.
Eventually you get a set of questions where the reporters, who have seen the candidates speak for weeks, don’t know the answers in advance, or think the candidate might actually give a real answer. Care must be taken not to bias the questions, but they should be real reporters’ questions. As I said, a good candidate can dodge anything, but you can make it more obvious when they dodge, and give them better chances not to dodge. And certainly not give them questions that make you shout “there’s no way they would answer that one.”
Next, in my dreamworld, I would like to see some sort of punishment for dodging. In this case, I would give a balanced audience voting meters where they indicate “Did the candidate actually try to answer the question?” And up on the board, like a baseball score, would go a series of Y and Ns, or 1s and 0s, for each question. The candidate will “win” this score if the crowd felt they actually tried to answer the question. Obviously there is a risk that the judges would bias towards the candidate they like. Reporters, who are used to asking questions and know when they have gotten a dodge would be the best judges. I guess if I can dream, I can dream that, because the candidates would never agree to that. One of them would always fear it was going to be against their interests.
Which is why the question rehearsal is possible, since that’s something the candidates can’t control in setting out the rules. Most other good ideas their teams can stomp out.
Submitted by brad on Mon, 2008-10-06 16:37.
Coming up in a couple of weeks I will be speaking as a special guest at Alternative Party, a digital culture conference in Helsinki, Finland. I’ll be doing my main talk on October 25th plus an extra session on either the 24th or 26th depending on schedules.
After that I will head to do some touristing in Stockholm for a few days, then for my first trip to Russia to visit St. Petersburg on the 31st.
Have some recommendations in Stockholm or St. Petersburg area? Let me know. My hosts will take care of me in Finland.
Submitted by brad on Thu, 2008-10-02 20:36.
I’ve added a new Robocars article, this time expanding on ideas about how parking works in the world of robocars. The main conclusion is that parking ceases to be an issue, even in fairly parking-sparse cities, because robocars can do so many things to increase, and balance, capacity.
One new idea detailed (inspired by some comments in another post) is an approach for both valet parking and multi-row triple-parked street parking. This algorithm takes advantage of the fact that all the robocars in a row can be asked to move in concert, thus moving a “gap” left in any line to the right space in just a few seconds. Thus if there is just one gap per row, any car can leave the dense parking area in seconds, even from deep inside, as the other cars move to create a gap for that car to leave.
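The gap-shifting idea can be sketched in a few lines. This is my own minimal illustration, not a worked-out parking system: a row is a list of cars with one gap, and every car between the gap and the target slot slides one space, which robocars can do simultaneously.

```python
def shift_gap(row, target):
    """Move the single gap (None) in `row` to index `target` by sliding
    each car between the gap and the target one slot. Since robocars can
    move in concert, the whole shift takes one car-length of motion."""
    gap = row.index(None)
    step = 1 if target > gap else -1
    while gap != target:
        # The car adjacent to the gap slides into it, moving the gap along.
        row[gap], row[gap + step] = row[gap + step], None
        gap += step
    return row

# A blocking row of triple-parked robocars with its gap at the left end.
row = [None, "A", "B", "C", "D"]
shift_gap(row, 3)  # open a space beside slot 3 so the car behind can leave
# row is now ["A", "B", "C", None, "D"]
```

Each car moves only one slot regardless of where the gap starts, which is why a single gap per row is enough to let any car out of a dense block in seconds.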
But there are many more ideas of how parking just should not be an issue in a robocar world. That is, until people realize that, and we start converting parking lots to other uses because we don’t need them. Eventually the market will find a balance.
Read Parking and Robocars.