Submitted by brad on Tue, 2005-08-23 22:51.
A mantra in the security community, at least among some, has been that crypto that isn’t really strong is worse than having no crypto at all. The feeling is that a false sense of security can be worse than having no security, as long as you know you have none. The bad examples include of course truly weak systems (like 40-bit SSL and even DES), systems that appear strong but have not been independently verified, and perhaps the greatest villain, “security through obscurity,” where the details of the security are kept secret (and thus unverified by 3rd parties) in the hope that this might make them safer from attack.
On the surface, all of these arguments are valid. From a cryptographer’s standpoint, since we know how to design good cryptography, why would we use anything less?
However, the problem is more complex than that, for it is not simply a problem of cryptography, but of business models, user interface and deployment. I fear that the attitude of “do it perfectly or not at all” has left the public with “not at all” far more than it should have.
An interesting illustration of the conflict is Skype. Skype encrypts all its calls as a matter of course. The user is unaware it’s even happening, and does nothing to turn it on. It just works. However, Skype is proprietary. They have not allowed independent parties to study the quality of their encryption. They advertise they use AES-256, which is a well trusted cypher, but they haven’t let people see if they’ve made mistakes in how they set it up.
This has caused criticism from the security community. And again, there is nothing wrong with the criticism in an academic sense. It certainly would be better if Skype laid bare their protocol and let people verify it. You could trust it more. Read on…
Submitted by brad on Sun, 2005-08-21 02:05.
Just back from a day at Bar Camp, which was quickly put together as a tongue-in-cheek response to Tim O’Reilly’s Foo Camp and folks who had not been invited. Foo Camp is great fun, and Tim does it all for free, so it’s not surprising he has to turn people away, even me :-), but Bar Camp was surprisingly good for something thrown together at the last minute with no costs.
It makes you wonder why some conferences have to cost so much. In Foo Camp, Tim provides his campus of course, which he already owns, and some rental facilities and most expensively, food, and lets people come free. Programming is ad-hoc, in recognition that at so many conferences, people come not because of the program but because of their fellow attendees. I haven’t asked Tim what it costs him per attendee but I suspect it’s much more modest than the fees at comparable conferences. People literally camp in empty cubicles and offices, though those not up for that can get hotel rooms or bring RVs.
Bar Camp was even cheaper. Socialtext provided the office space in downtown Palo Alto. In just a short time, sponsors such as Technorati and others provided all the food people could want, and attendees brought snacks and drinks. Fewer folks camped because it was in Silicon Valley, but some of the younger set did. Talks were quickly put together, but interesting, and covered whole ranges of new and interesting software developments. And as at Foo Camp and everywhere else, hallway conversation was the real action.
With the glut of office space in this valley, and in other places, such ad-hoc conferences should not be hard to set up. Nor should sponsors be hard to find for modest food and other needs. If people become interested in having a conference rather than a business, they can do for nothing what could cost $1000 per person and with less work.
Not that I wouldn’t enjoy going back to Foo Camp, but Bar was its own rewarding experience too.
Submitted by brad on Sat, 2005-08-20 01:30.
I’ve called before for a system of Universal DC Power and I still want it, but there is a partial step we could take.
I have a laptop power supply that comes with a variety of tips. The tips tell (through something as simple as a resistor) the power supply how much voltage and current to supply for the laptop they are designed for. I bought mine for use in an airplane, others are sold that do both 12v and AC power.
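That resistor-coded tip scheme could be sketched as a simple lookup table; the resistor ranges and power profiles below are invented for illustration, not any vendor’s actual coding:

```python
# Hypothetical tip-ID scheme: each tip carries a distinct resistor,
# and the supply maps the measured resistance to an output profile.
# All resistor values and profiles below are invented for illustration.

TIP_TABLE = [
    # (min_ohms, max_ohms, volts, max_amps, label)
    (900, 1100, 16.0, 4.5, "16V laptop tip"),
    (1900, 2100, 19.0, 4.7, "19V laptop tip"),
    (4700, 5300, 20.0, 4.5, "20V laptop tip"),
]

def select_profile(measured_ohms):
    """Return (volts, max_amps, label) for the tip, or None if unknown.

    A real supply should refuse to power up on an unrecognized tip
    rather than guess a voltage.
    """
    for lo, hi, volts, amps, label in TIP_TABLE:
        if lo <= measured_ohms <= hi:
            return (volts, amps, label)
    return None
```

The key safety property is the last line: an unrecognized tip gets no power at all, never a guessed voltage.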
I would like to see one designed for the corporate market, rather than the carry-around market. Ones to be left in offices and under conference tables, so that when somebody visits with a laptop, they can plug it in. No need to get out their own supply or eventually no need to bring it.
Unlike the carry-around units, where you pick your tip and leave the rest, this would have an array of tips, possibly rotating on a click-wheel, or all connected to a switch where one can dial the voltage/polarity/etc.
Some companies take more drastic steps. At Google for example, I notice they have standardized on ThinkPads, and so all desks and conference tables have ThinkPad supplies. Everybody is able to roam the building and be sure of laptop power. These supplies, while a bit more expensive, could solve the same problem.
An alternate would be to standardize the special tip that describes the power needed. Everybody could get a tip or pigtail for their laptop and carry just that around. Conference rooms could in fact have single supplies that let you plug in several of the pigtail. Of course that is halfway to my original proposal.
Now it turns out a considerable majority of laptops take either 16 volts or 19 volts. The main rebel is Dell, which uses funny plugs and often over 20v. Some need more current than others, I don’t know if any need current limiting or if simply making the PS capable of 100w would do the trick. Anyway, in this case, we could develop a standard 16v plug (the thinkpad one) and a different standard 19v plug (probably an HP one), in two different shapes and colours, and people with laptops could carry a cheap converter to plug their laptop into it. Over time, laptops might come directly able to use this, if they aren’t already — on our path to a smarter power bus. Then people could say, “Oh, you have the orange plug. Great, I can plug my laptop into that.” Vendors who make laptops that won’t plug into one of these two will probably think about switching.
Submitted by brad on Fri, 2005-08-19 18:30.
Hot in the blogosphere these days, as a result of the Creationism/ID vs. Evolution debate, is Pastafarianism, the worship of the Giant Flying Spaghetti Monster. The idea is to show that something as made up as the GFSM is as consistent as ID.
Now as I’ve written before, I think we should teach creationism and ID in the schools as an example of bad science. All students should learn how to identify when bad science and bad math (in particular bad stats) are being used to lie to them. They should take exams where they are given examples of bad science and must spot the flaws to pass.
However, never one to pass up a joke religion, I still think the Pastafari go too far. I propose Monolithism. (Perhaps monolitheism?).
Monolithism is a variation of Intelligent Design which proposes that man was given the gift of intelligent thought by giant black alien monoliths. They came to Earth long ago and are waiting out there for us to show them the results.
Now Monolithism has great visual aids already to show the process. It meets many of the requirements of I.D. except that most people know it’s a movie.
Submitted by brad on Tue, 2005-08-16 11:52.
One of the scourges of urban areas is the requirement (I presume) that heavy equipment make a loud beeping noise when it’s backing up. It’s meant to warn anybody standing behind the vehicle, presumably because the driver doesn’t have the same field of vision to see you, and because people are more wary of standing in front of a moving vehicle than behind it.
As such, as we all know, the sound is really piercing. And more to the point, it travels, often for miles. It’s a major source of noise pollution anywhere near any work site. I presume part of the problem as well is that workers wearing hearing protection need it even louder.
So my challenge is: can we do a better job? Can we make an attention-getting sound that is more directional (aimed backwards, and perhaps down from the top of the vehicle) so it won’t travel as far or distract people not behind the machine?
Can we standardize rear-view cameras, which are so cheap now, so that the operator’s view of what’s behind is top notch?
Can we combine a quieter sound with really bright, moving lights, the kind you would see on the ground if your back were to the beeping machine? Could we blow air with high-pressure streams or those long-distance vortexes like the AirZooka makes, or would this be too much of a problem with dust (or in wind)?
Can we have object detectors that spot objects to the rear of the machine and make the beeps louder when there is something? (Admittedly they are going to go off for a wall or wheelbarrow as much as a guy, and they have to be really reliable because people would start depending on them to know how much caution to use.) Perhaps they can detect that everything they have seen has left the area and reduce the beeping, because if there is one person behind the truck, that assures you somebody is watching and will move anybody who doesn’t see the lights or hear the beeping.
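The detector-modulated beeper above might look something like this sketch, where the alarm gets louder as a detected object gets closer but never goes fully silent (all thresholds are made up):

```python
def beep_level(detections, max_range_m=10.0):
    """Map rear-detector readings (distances in meters) to an alarm level 0-10.

    Never drops below a quiet floor: people would otherwise learn to
    trust the detector completely, and detectors miss things.
    """
    QUIET_FLOOR = 3   # always beep at least this much while reversing
    if not detections:
        return QUIET_FLOOR
    nearest = min(detections)
    if nearest >= max_range_m:
        return QUIET_FLOOR
    # Scale linearly: closer object -> louder alarm.
    scale = 1.0 - nearest / max_range_m
    return max(QUIET_FLOOR, round(10 * scale))
```

The quiet floor captures the reliability worry in the parenthetical above: the alarm degrades gracefully if the detector fails.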
I solicit other ideas to safely warn people about moving equipment that don’t ruin the peace.
Update: I received information from a firm called Brigade which claims to have an answer. They use white-noise alarms. They claim these are easier for us to localize than less natural pure-tone sounds, and I agree that they disperse into the environment more quickly, so they won’t travel. The piercing alarm has been chosen in the past because it is un-natural and thus stands out more from the background, but that means it travels further. Natural sounds fade from notice more quickly but possibly are just as recognizable close up.
Submitted by brad on Mon, 2005-08-15 01:01.
As some will know, I got heavily into the Hugo awards 13 years ago during my efforts at becoming an eBook publisher in the SF field. The Hugo award is voted on by the fans who attend the annual World Science Fiction Convention, or Worldcon, a moderately small voting pool (under 1000 of the typical 4000 to 7000 attendees will vote.)
The most important award and 2nd most voted on is the one for best Novel. The least important, but most voted on award is the one for best movie.
But still, for a long time, though both SF and Fantasy qualified for the award, the best Novel went exclusively to Science Fiction (with one foray into alternate history by Philip K. Dick) and usually to hard, ideas-based SF. This went on until 2000, when the superb hard-SF novel “A Deepness in the Sky” won. The drama award was also heavily into SF, though it had some deviations, such as the coverage of Apollo XI and a few films in the 80s.
But in 2001, for the first time, a Fantasy novel won the best novel Hugo. Not just any fantasy novel, but a children’s novel, Harry Potter 4. Of course, the Harry Potter series is the most remarkable success not just in fantasy, but in publishing, so this is not too shocking. What’s surprising is that in 2002, 2004 and 2005 a fantasy novel would win best novel. At the same time, fantasies won all the best movie awards and all of the new best TV episode award until 2005. (Read on…)
Submitted by brad on Sat, 2005-08-13 19:29.
I recently picked up a surplus battery-powered motor assist for a bicycle, and it's a lot of fun. Due to its modest power you have to pedal up to 3mph, and then it can run the bike for 10 miles at 10mph (for normal-weight people, not me.)
All-electric cars didn't do well in the market in part because people were scared of their limited range, slow charging and high cost, and the annoyance of plugging them in. They love hybrids because they don't have the range problem. Some folks are promoting plug-in hybrids, which are hybrids with lots of batteries. You can and should charge them from the grid, but you don't have to, so your range is the same as a gas car (or better) and on most trips you are much more efficient.
But perhaps cars are the wrong target. Electric bikes are heavy and a little more unstable when slow or being walked, and get really bad if you put enough batteries on them to go 20 or 30 miles. But trikes on the other hand are stable and you can load a lot more batteries onto them for serious range. And electric trikes are wicked efficient, in terms of cost (and fuel burned) per mile of travel. Orders of magnitude ahead of hybrid cars.
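A back-of-envelope check of that claim, using rough 2005-era numbers that are my assumptions rather than measurements:

```python
# Rough energy-cost-per-mile comparison; all inputs are assumptions.
TRIKE_WH_PER_MILE = 20       # light electric trike at modest speed
ELECTRICITY_PER_KWH = 0.10   # dollars
HYBRID_MPG = 45
GAS_PER_GALLON = 2.50        # dollars, circa 2005

trike_cost = TRIKE_WH_PER_MILE / 1000 * ELECTRICITY_PER_KWH  # $/mile
hybrid_cost = GAS_PER_GALLON / HYBRID_MPG                    # $/mile

print(f"trike:  ${trike_cost:.4f}/mile")
print(f"hybrid: ${hybrid_cost:.4f}/mile")
print(f"ratio:  {hybrid_cost / trike_cost:.0f}x")
```

With these assumptions the hybrid burns roughly 28 times the energy cost per mile, though this leaves out batteries, purchase price and everything else.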
And all this is quite cheap to make if done in quantity. If our cities made more bike paths and bike lanes these trikes could become a major commute form, especially in California with its assured good weather. Yes, it's not perfect -- you have to recycle the batteries, and you do have rain to worry about, and the speed is definitely lower. But for shopping trips, neighbourhood trips and short commutes it seems a giant win.
Submitted by brad on Mon, 2005-08-08 16:19.
Newspapers won’t like this idea, but the truth is that most of the funnies aren’t funny, certainly not every day. There are some talented people doing comic strips, but it’s hard to do on a 7 days a week schedule, so they are almost all inconsistent.
You can read most of the strips on the web, so the next step is to build a system where we do shared editorial on their quality. People would read the funnies and vote on them. Then, you could present a page which showed you only the ones that made a certain cut. You could tune the cut — “Show me the top 90% of Dilbert, only the top 10% of Blondie” as you like it.
And you could even ask for the top few percent of comics you don’t normally read, though of course some of the jokes only make sense to semi-regular readers, so this won’t always be a winner. But it should be often enough.
Of course, some people have to read the comics before they have been graded. There are fans willing to do that, but if there aren’t enough, you can make a trading system: to make use of the ratings, you have to contribute some. (Though if you get too hardnosed about it, people would start to introduce fake ratings to game this.)
Users’ ratings would not be absolute, but rather based on their past history, and on where in their own spectrum of ratings for that comic a particular rating falls. So it doesn’t help to rank every Dilbert a 10 out of 10; such scoring would be discarded. Nor can individual comic publishers bump their own ratings on an absolute level, since again it’s a percentile result; they can only promote a personal favourite at the expense of others.
This would not be so hard to code. Who wants to code it?
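A sketch of the percentile normalization described above; the function name and the neutral 0.5 default for new raters are arbitrary choices of mine:

```python
from bisect import bisect_left

def percentile_score(user_history, new_rating):
    """Where does new_rating fall within this user's own past ratings (0..1)?

    A user who gives every strip the same rating produces no signal:
    every one of their ratings lands at the same percentile.
    """
    if not user_history:
        return 0.5  # arbitrary neutral default for a brand-new rater
    ranked = sorted(user_history)
    below = bisect_left(ranked, new_rating)
    return below / len(ranked)
```

A strip's page-worthiness would then come from averaging these percentiles across raters, so the "top 10% of Blondie" cut falls out directly.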
Submitted by brad on Sun, 2005-08-07 00:44.
A lot of older computers that people are ready to throw away can be decent linux boxes, in schools or in other charitable locations.
I propose a simple small program (possibly fitting on a floppy as well as CD) which can be inserted into an old computer. It scans the hardware and compares it with hardware databases of chipsets, cards and other parts which are known to work well under linux (or your favourite BSD or other OS) and to work well together. It would also evaluate the machine and put it in a “performance class” to describe just how good it is. It might connect to the net (if it can) to download the latest such lists and info and software updates.
The goal is to test if the machine can do a problem-free install, one that asks almost no questions, and converts the system to a nice linux box, ready for some student to run for e-mail, web, and writing. There are so many machines to donate that we can insist on perfection. The program could also tell the owner what upgrades it might need to be good or to reach a higher performance class. “This machine is good, but with 128MB of RAM it would reach performance class N.” “This machine would be perfect if you swapped the ethernet card for one of these models,” and so on.
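The scan-and-match step might be sketched like this, collecting PCI vendor:device IDs (as something like `lspci -n` reports them) and requiring every one to appear in a known-good database; the IDs, thresholds and class names here are all hypothetical, not a real compatibility database:

```python
KNOWN_GOOD = {          # hypothetical PCI vendor:device IDs known to work
    "8086:1229",
    "8086:7190",
    "1013:00b8",
}

RAM_CLASSES = [(256, "A"), (128, "B"), (64, "C")]  # MB thresholds, invented

def evaluate(pci_ids, ram_mb):
    """Return (ok_to_install, performance_class, unsupported_devices).

    The install is only "problem-free" if every device is known good;
    RAM alone decides the (invented) performance class.
    """
    unsupported = [d for d in pci_ids if d not in KNOWN_GOOD]
    perf = next((c for mb, c in RAM_CLASSES if ram_mb >= mb), "too low")
    return (not unsupported, perf, unsupported)
```

The `unsupported` list is exactly what the owner's upgrade advice would be built from.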
Next, of course, is a simple distribution, to install from CD-rom or over the network, that can be quickly installed with no questions asked except perhaps time-zone (if it can’t figure that out from the old OS.) The goal is a system that can be run by untrained admins who may never have seen the insides of linux or any other OS.
Submitted by brad on Fri, 2005-08-05 12:04.
I’m not the first to think about it, since I see a bunch of patent filings related to it, but how hard would it be to have a sensor for windshield fog? Seems to me you could bounce light off the windshield (perhaps UV, which water scatters, though other colours might work) to detect if there’s fog on the inside and use that to control the defogger.
In particular, modern defoggers use the air conditioner, which provides dehumidified air once you heat it up again: great for defogging, but also fuel-inefficient. And while today’s AC-based defoggers are better than the old pure-heat ones, I swear the AC-off mode of modern defoggers is not as good as the AC-less mode of the old ones.
Anyway, the sensor could at least control if the AC is used, if nothing else.
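That control loop is mostly just a threshold with hysteresis, so the compressor doesn’t flap on and off; the reflectance values here are placeholders, not real sensor figures:

```python
def defog_controller(readings, on_at=0.6, off_at=0.4):
    """Turn the AC defog on above on_at, off below off_at (hysteresis).

    `readings` are normalized fog-scatter values from a windshield
    sensor: 0 = clear glass, 1 = heavy fog. Thresholds are invented.
    Yields the AC state after each reading.
    """
    ac_on = False
    for r in readings:
        if r >= on_at:
            ac_on = True
        elif r <= off_at:
            ac_on = False
        # readings between the thresholds leave the state unchanged
        yield ac_on
```

The gap between the two thresholds is what keeps a reading hovering near the trip point from cycling the compressor.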
Submitted by brad on Thu, 2005-08-04 15:50.
In this new category, “What a great idea” I will document interesting ideas I have seen in my travels. Things that make you go “why didn’t I think of that?” Some may be new, others just new to me.
At a recent symphony concert, I came out at intermission to see a table laid out with drinks and snacks, each with a little numbered placard. People had placed and prepaid orders before the show, and thus could get their drink without any line.
This made tremendous sense. We have this drink station that gets used for just 15 minutes a night which is terribly inefficient, and they found a way to spread out the work. This fits into one of my major themes these days, “Why should we ever have to wait in line?”
Of course, such an idea may only work with affluent symphony-goers, who are far less likely to try and steal somebody else’s drink order. You could have claim checks and that would still be faster than ordering, pouring and paying, if you didn’t trust the patrons. And, if they didn’t want you to turn off your cell phone during the show, you could even have people TXT in their orders (with keypress beep turned off of course.)
Texting in orders might make more sense in places like ballparks and amusement parks, with a message back when it’s ready. Why miss the game to stand in line for a hot dog?
Submitted by brad on Wed, 2005-08-03 17:39.
Recently, Joel on Software wrote an essay on good programmers and how they are qualitatively different from average ones. This is not a new realization, and he knows it and references sources like "The Mythical Man Month." It was accepted wisdom decades ago that a small team of really brilliant programmers would make a better product than a giant team of lesser ones.
That wisdom, however, failed to predict the rise of Microsoft. That wisdom says a software monopoly is impossible because there are reverse economies of scale in software development. So how did Microsoft do it? The answer to that is perhaps the true genius of Bill Gates.
The trick, in part, was finding ways to make software tremendously broad in scope and features. Microsoft Word has bazillions of features, as most people know. Windows in its kernel isn't much more complex than other systems but the real Windows also includes a vast collection of DLLs (libraries) that seem external but are really part of the OS. To clone the OS you must make these DLLs -- and many other things.
A program like MS Word, with so many features, takes raw money to clone. You need that core team of great programmers -- and Microsoft has many great programmers, make no mistake -- but you also need a giant team of lesser ones to keep all the features going, to QA and document them, to translate them and make them work in so many environments.
This does have an economy of scale in the development. Combine that with the immense economies of scale that exist in the distribution of all soft things that can be copied for free, and this permitted a monopoly.
Of course, no single user makes use of all the features of MS Word, so it took even more skill to get them to demand such a complex program, when they might be better served by a leaner, more elegant system. Like I said, this is only part of it.
Submitted by brad on Mon, 2005-08-01 12:06.
Mapping programs, and fancy GPSs come with map databases that will, among other things, plot routes for you and estimate the time to travel them. That’s great, but they are often wrong in a number of ways. Sometimes the streets are wrong (missing, really just a trail, etc.) and they just do a rough estimation of travel time.
Yet all the information is there, being collected constantly by every car that drives the roads with a GPS. Aggregating this data will tell you which roads are real, which roads might be missing, which are one-way, and where freeway entrances and exits really are.
And it will also tell you real-world speed examples at various times and dates, at rush hour or otherwise. Even a range of speeds so you can know the speeds for faster and slower drivers and get a really good estimate of your own likely speed on a given road at a given time. After removing the anomalies (like people stopping for coffee) of course.
Rental cars with GPSs are collecting this all the time (sometimes for nefarious uses, like charging whopping fees for brief trips out of state). Technically this data can be had.
But here’s the bad part — there is a potential for giant privacy troubles unless this is done very well, and some may be impossible to do without a privacy risk. After all, until you upload the data, there is clearly a log of your travels sitting there to be used against you. Only a system with rapid upload (and which discards data that gets old, even if it’s not uploaded) would not create a large risk of something coming back to haunt you.
The data would have to be anonymized, of course, and that’s harder than it sounds. After all, your GPS logs say a lot about you even without your name. Most would identify where you live, though that can be mitigated by breaking them up into anonymized fragments to a degree. Likewise they’ll identify where you work or shop or who you visit, all of which could be traced back to you.
So here’s the Solve This aspect of this problem. Getting good data would be really handy. So how do we do it without creating a surveillance nightmare?
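One naive starting point for the fragmenting idea mentioned above: split the trace wherever the car sat still, trim both ends of each fragment so endpoints like home and work vanish, and upload the pieces unlinked. This is only an illustrative sketch, with made-up parameters; real anonymization is much harder, as noted:

```python
def anonymize(points, stop_gap_s=300, trim=5):
    """points: list of (timestamp_s, lat, lon). Returns unlinked fragments.

    Splits at time gaps of stop_gap_s or more (the car was parked),
    then trims `trim` points off both ends of each fragment so trip
    endpoints (home, work, who you visit) don't appear.
    """
    fragments, current = [], []
    for p in points:
        if current and p[0] - current[-1][0] >= stop_gap_s:
            fragments.append(current)
            current = []
        current.append(p)
    if current:
        fragments.append(current)
    # Trim endpoints; discard fragments too short to trim safely.
    return [f[trim:-trim] for f in fragments if len(f) > 2 * trim]
```

Even this leaks plenty (a fragment's shape can identify a route), which is exactly why the problem is worth solving carefully.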
Submitted by brad on Sat, 2005-07-30 22:13.
Recently there was a big fuss (including denouncements from many I know) over a U.S. effort to do away with the leap second. People claimed this was like trying to legislate pi to be 3.

I am amazed at the leap to the defense of the leap second. I would be glad to see it go. All our computers keep track of time internally as a number of seconds since some epoch, typically Jan 1 1970 or 1980. They go through various contortions to turn that absolute time into the local time. This includes knowing all the time-zone calculations and the leap calculations. It’s complicated by knowing that sometimes the day is Feb 29, and by knowing that a very, very few minutes have 61 seconds in them (or if you prefer, that a very few hours have 3601 seconds and rare days have 86401 seconds.)

That’s a mess. A minute should always have 60 seconds. Special-casing all time code to deal with this was the wrong approach, and as noted, is subject to errors because the code is very rarely tested in that rare case.

I’m astounded to see people saying this is the same as declaring pi to be 3. It’s having 86400 seconds in most days and rare leap seconds that is the integerization of a real number. The truly scientific approach would be to declare the day to be 86400.002 seconds, and lengthen that number over the centuries, would it not?

Astronomers, like computers, can and should keep track of time as an absolute number of seconds since some epoch. They actually care very little about what the local time is other than to know when it’s dark, something leap seconds have insignificant bearing on. Indeed, astronomers might be happiest using sidereal time (where a day is 23 hours, 56 minutes and 4 seconds, the true rotational period of the Earth.)

Our system of time is not one scientists would pick in the first place. It is clearly designed for the convenience of ordinary people, and the legacy of the traditional means of telling time. It’s silly to use this legacy system and at the same time demand that the general public and its timekeeping systems jump through error-prone hoops to make it reflect noon correctly to the second. Nobody even uses local solar time anyway; they all use a time zone. The time zone is off by a huge margin from local solar time, so why does it matter in the slightest if it’s off by a few more seconds? In many centuries, the drift will be noticeable. If we still care about local time, we can fix it then.
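Notably, the simplification argued for here is exactly how POSIX time already works inside computers: leap seconds don’t exist in the count, every day is 86400 seconds, and date math is plain division. A quick illustration in Python:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def posix_to_utc(ts):
    """Convert POSIX seconds to a UTC datetime with plain arithmetic.

    This only works because POSIX time pretends every minute has 60
    seconds and every day 86400 -- exactly the simplification that
    civil timekeeping could adopt by dropping the leap second.
    """
    days, rem = divmod(ts, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return EPOCH + timedelta(days=days, hours=hours,
                             minutes=minutes, seconds=seconds)
```

The same arithmetic applied to real UTC would be wrong by the accumulated leap seconds, which is the special-casing complained about above.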
Submitted by brad on Fri, 2005-07-29 22:46.
This is a tricky puzzle question I thought up some time ago, but I figured I would blog it.
As people who study physics know, the acceleration a falling body undergoes if dropped (in a vacuum) at the surface of the earth is known as “g”, or 9.8 meters per second per second.
This is so close to 10 that most students and people doing back of envelope calculations often use 10 as the value of a “g”. It’s easy. Fall for one second and you’re going 10 meters/second.
So I got to wondering, how much would the earth have to change to make “g” equal to 10 instead of 9.80665?
So here’s the puzzle. By what ratio would you have to increase the diameter of the Earth so that the people on this bigger planet would have “g” equal to 10? Assume the average density of the Earth remains the same (5.46 g/cc).
Then click to the Puzzle Answer
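(Spoiler, for checking your answer afterward: since M = ρ·(4/3)πR³ at fixed density, g = GM/R² = (4/3)πGρR grows linearly with R, so the required diameter ratio is just the ratio of the two values of g.)

```python
# Spoiler check: at fixed density, g is proportional to R,
# so the diameter must grow by the same factor as g.
g_now, g_target = 9.80665, 10.0
ratio = g_target / g_now
print(f"diameter ratio: {ratio:.5f} (about {100 * (ratio - 1):.1f}% bigger)")
```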
Submitted by brad on Wed, 2005-07-27 15:47.
Ok, this idea will make no sense to those who have not gone RV camping. RVs have 3 water tanks — one for fresh water, one for the toilet sewage (known as “black water”) and one for the other drains (shower, sinks) known as “grey water.” When you camp in unserviced campsites for a while you become very aware of the capacities of your tanks.
However, the RV uses the fresh water tank to “flush” the toilet. It seems to me that with a small extra water pump, one could use the grey water, or a mixture — grey with a final spurt of fresh to rinse the bowl.
RVs don’t really flush the toilet; that would use way too much water. You rinse the bowl after #1 and you pre-fill the bowl before #2 and rinse later.
Submitted by brad on Thu, 2005-07-21 20:10.
The EFF is holding a Blog-a-thon on our 15th anniversary inviting people to describe things that made them decide to fight for freedom.
That seemed like a good time for me to add some details to one of my early stories, about the banning of my moderated newsgroup rec.humor.funny. I've told the RHF ban story before, and even the story of how it led to the creation of ClariNet, and I'll be adding more details in my upcoming history of ClariNet later this year.
Today, at 18 years old (founded Aug 7, 1987), rec.humor.funny and the netfunny.com site qualify as one of the oldest blogs in existence, if not the oldest. The first blog as far as I can tell was the moderated newsgroup mod.ber, created by Brian Reid in 1984. Mod.ber is long gone, so something else is now the oldest blog. Blog, short for weblog, means a personally created serialized publication on the web. The web, though many people have forgotten it, is and was defined by Tim Berners-Lee as including not just HTTP and HTML but the other protocols such as USENET, FTP, Gopher and Telnet that existed before HTTP. So the rare USENET groups that were moderated for content were the first blogs, and some remain today, and this is thus the story of the first banned blog.
(Other suggested candidates for oldest blog include RISKS digest, Telecom Digest and Human-Nets, though they were more discussion boards than blogs.)
I had always been a defender of free speech to that point, but nothing brings it home like being banned yourself. It's also remarkable to me how many threads of my life run through that banning. These include business threads (the creation of ClariNet) and even personal ones (it's how I met John McCarthy, who introduced me to a past girlfriend a decade later.) I wasn't unknown before but the events did a lot to boost my visibility.
Being censored was a remarkably emotional experience. It didn't help that it was on the front pages of the newspaper every day and that the best (if most frustrating) thing to do was to keep silent and let the press coverage blow over. It did teach me the truth of the aphorism that censorship doesn't protect people from exposure to violent ideas because censorship is violence.
The EFF didn't exist during this period. Had it existed, it would probably have come to my aid. But many others did, which was heartening. And I learned a bit more about how useful satire is as a tool in these battles. Having fought in the online trenches, I was ready to support it when (in another strange coincidence) my friend of 10 years earlier, Mitch Kapor, led the drive to create it. And later, of course, I have become very proud to be involved in it.
Submitted by brad on Wed, 2005-07-20 00:32.
I blogged earlier about my being in the Silicon Valley 100, a group generated by a marketing company to send out free stuff to hopefully influential folks. In that posting, I link to Dan Gillmor’s reaction to the program, where he writes about how “spooky” it is to him. I didn’t agree that it was that spooky, but there is a definite irony to the fact that I recently got a set of books via the SV100, and in that set was Dan’s own book “We the Media.”
Dan’s eyes rolled up when I told him that at dinner this evening. Of course, it was his publisher Tim O’Reilly who put the book out to the group, and again I don’t find much spooky about it. Publishers have sent out free copies of books to folks, hoping for reviews and buzz, since the dawn of books.
I was with Dan at the first event in our program celebrating the EFF’s 15th birthday, a BayFF panel on blogging and blogger’s rights which attracted an overflow crowd with an engaged audience. I’ll remember to announce some of the other events in our program in advance. We sang Happy Birthday based on the older lyrics which are out of copyright.
Submitted by brad on Mon, 2005-07-18 13:31.
On 9/11, we all wondered how 19 men had lived for a year among us and still given their lives to carry out such acts. Now people wonder even more at how young native British men would give their lives to kill random fellow Britons.
But there is something different here that troubles me. Most suicide terrorists ostensibly use this tactic because there are targets you can only attack if you give your life. Suicide was an essential part of flying a plane into a building.
But you very plainly don’t need to kill yourself to set off a subway bomb. Even with increased vigilance, anybody could leave a backpack behind, rush out the doors just as they are about to close, and then blow up the bomb as the train enters the tunnel, without dying or even getting caught (except perhaps on camera.) There was almost no tactical need for these men to kill themselves. Yes, it makes it a little more certain to do it that way, but we hope that committed terrorists (especially with such clean pedigrees) are not so many in number that they can be wasted this way.
But they were wasted this way, and I think deliberately. I suspect they were chosen not as the most committed, but as the most unlikely, just for the shock value, the idea that your neighbour could be a suicide bomber. And strong shock value it is, because while you don’t have to die to bomb a train, there are a lot of targets where willingness to die is tactically necessary to carry it off, and it’s close to impossible to defend against such attacks.
Those who think careful ID checks and national ID cards will stop terrorists now must step back. These kids had clean ID. The security cameras have helped discover the story but of course could do nothing to prevent it.
I have contended (to much opposition) that terrorism and suicide bombing are tools used against democracies that are accused of oppression. A recent book (Dying to Win by Robert Pape) now backs this up. The answers are not good.
Submitted by brad on Sun, 2005-07-17 22:52.
I wrote before on the ideal car dock for an MP3 player but the truth is we could use something even simpler sooner. On my recent trip, we brought the cassette adapter but there was no tape player in the rental car. We forgot the FM transmitter, but that’s not as good anyway.
So right away, let’s see a small headphone-style jack on car stereos to serve as a nice aux input, especially if you are taking away the tape. Duh.
But we can go beyond that with a USB jack, since all music players can plug into that, though with different results. A few of them will be clever enough to draw power and recharge from it. Indeed, it is time for cars to have USB jacks just for power, since my cell phones and PDA can all charge that way today (via a cigarette lighter plug with a USB jack), but we want something with the data.
With some music players, plug into USB and they look like a hard drive with music files on it. The stereo could be an MP3 player itself, but might have trouble with DRMed music. We could also leave the MP3 player in control, but develop a protocol for it to stream digital audio to the stereo, and for the stereo to send back commands (FF/Rew, Pause, skip track etc.) to the player. Yes, you could also do this over Bluetooth, but you want power in the car anyway, so wires remain the right choice.
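Such a stream-plus-control protocol could be as simple as typed, length-prefixed messages; the framing and command codes below are pure invention, not any real standard:

```python
import struct

# Hypothetical framing: 1-byte message type, 2-byte payload length, payload.
AUDIO_FRAME, CMD = 0x01, 0x02
COMMANDS = {"play": 0, "pause": 1, "next": 2, "prev": 3, "ff": 4, "rew": 5}

def pack_audio(pcm_bytes):
    """Player -> stereo: a chunk of digital audio."""
    return struct.pack("!BH", AUDIO_FRAME, len(pcm_bytes)) + pcm_bytes

def pack_command(name):
    """Stereo -> player: a control message (pause, skip track, etc.)."""
    payload = bytes([COMMANDS[name]])
    return struct.pack("!BH", CMD, len(payload)) + payload

def unpack(msg):
    """Split a received message into (type, payload)."""
    kind, length = struct.unpack("!BH", msg[:3])
    return kind, msg[3:3 + length]
```

The point is only that the player stays in control of the music while the stereo becomes a dumb speaker-plus-remote, which sidesteps the DRM problem above.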
Perhaps down the road we might see music players splitting into two halves — drive and UI electronics/power. The drive unit, be it flash stick or hard disk, holds your music and files, and the UI unit does the rest, and can be mated with any drive, as can the computer and as can the car stereo.