Submitted by brad on Sat, 2006-10-21 12:49.
I’m enjoying the new version of Battlestar Galactica. Unlike the original, which was cheesy space opera, the new show is the best SF on TV. Yes, I watched the original when I was 18. I knew it was terrible (and full of bad science), but in the 70s TV SF was extremely rare, and often even worse.
The original show began with Patrick Macnee narrating an opening: “There are those who believe that life here, began out there, with tribes of humans who may have been the forefathers of the Egyptians…” They sought the lost tribe of Earth, and in a truly abysmal sequel finally came to 1980 Earth, which was of course technologically backward compared to them and unable to help in their fight.
This idea was a common one in science fiction of the 20th century. It was frequent in written SF, and Star Trek twice took it up. In one 60s episode, the Enterprise met Sargon, who claimed to have seeded most of the humanoid races. Spock states this meshes with Vulcan history, but another character notes that humans appear to have evolved on Earth. A later episode of Star Trek: The Next Generation reverses this, and Picard follows clues left in DNA to discover the common ancestry of all the humanoids.
Back in the 60s and 70s, when Battlestar Galactica and Star Trek were written, you could get away with this plot. It had a romantic appeal. While there was tons of evidence, as even Star Trek of the 60s knew, that humans were from Earth, we had not yet come to the 90s and the DNA sequencer. Today we know we share 25% of our DNA with cabbages. We’re descended from a long line in the fossil record that goes back a billion years. If life on this planet was seeded from other planets, it was over a billion years ago. It certainly wasn’t during the lifetime of humanity, nor were all the animals seeded here at the same time we were, unless the aliens who did it deliberately created a fake fossil record.
(Of course creationists try very hard to make the case that this could be true, but they don’t even remotely succeed. If you think they do have a point, you may want to stop reading. You can read on for more SF theory though.)
Submitted by brad on Thu, 2006-10-19 17:44.
My Canon cameras have a variety of ways you can change their settings to certain specialty ones. You can set a manual white balance. You can set an exposure compensation for regular exposures or flash (to make it dimmer or brighter than the camera calculates it should be.) You can change various shooting parameters (saturation etc.) and how the images will be stored (raw or not, large/medium/small etc.) You can of course switch (this time with a physical dial) from manual exposure to various automatic and semi-automatic exposure modes. On the P&S cameras you can disable or enable flash with such settings. You can change shooting modes (single-shot, multi-shot.)
You can turn on bracketing of various functions.
And let’s face it, I bet all of you who have such cameras have found yourselves shooting by accident in a very wrong mode, not discovering it for quite some time. If you’re in a fast shooting mode, not looking at the screen, it can be easy to miss things like a manual white balance or even a small exposure compensation.
The camera already features an option to auto-revert on exposure bracketing, since they decided few would want to leave such a feature on full time. But auto bracketing isn’t dangerous, it just wastes a couple of shots that you can just delete later. And it’s also very obvious when it’s on. Of all the things to consider auto-revert for, this was the least necessary.
To my mind, the thing I would like auto-revert on most of all is manual white balance. I was recently shooting fast and furious in a plane, and learned after lots of shots that I still had the camera on an artificial-light balance setting from the night before. The camera can do a good job here because it can usually tell what the temperature of the ambient light is, and can notice that the balance is probably wrong. In addition, it can tell that lots of time has passed since the white balance was set manually. It really should have a good idea if it’s out in daylight or indoors, if it’s night or day.
And I’m not even asking for an auto-revert here. Rather, an error beep which also pops a message on the screen that the white balance may be wrong. And yes, those who don’t want this feature could disable it. However, what would be cool is if the screen that pops up to warn about a possibly bad retained setting offered the ability, then and there, to say, “Thanks, revert” or “Don’t warn me about this again” or “Don’t warn me about this until the next ‘session.’” The camera knows about ‘sessions’ because it sees pauses in shooting with the camera off, and as noted, changes from night to day, indoors to out.
Of course it would still keep shooting. For extra credit if it suspected something wrong, it could hold the image in RAW mode in its buffer memory, and if you ask to go to another setting that only changes the jpegs, it could actually redo the jpeg right.
Now of course, photographers often shoot in manual modes for a very good reason, and they are doing it because they don’t want the camera’s automatic settings. But that doesn’t mean they can’t be reminded if, after a longish bout with the camera off, they are shooting in a way that’s very different from what the camera wants. That can include exposure. I’ve often left the camera in manual and then forgotten about it until I saw the review screen. (Of course P&S users almost always look at the review screen, so they don’t get this trouble.)
Again, I want the camera to shoot when I tell it to, but to consider warning me, if I turn warnings on, that the image is totally overexposed or underexposed. At night it would take a more serious warning, since in night shots there is often no “right” exposure to compare with.
A smart camera could even notice when you aren’t looking at the review screen, because you are shooting so fast. But like I said, those who want the old way could always turn such warnings off.
Another option would be an explicit button to say, “I’m going to make a bunch of specialty settings now. Please warn me if I don’t revert them at the next session.” This could extend even to warning you that you turned off autofocus. Review screens don’t show minor focus errors, so it would be nice to be reminded of this.
(I actually think an even better warning would be one where the camera beeps if nothing in the shot is in focus, as is often the case here. The camera can easily tell if there are no high contrast edges in the shot. Yes, there are a few scenes that have nothing sharp in them, I don’t mind the odd beep on those.)
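The “nothing in focus” check is simple enough to sketch; this toy version (my own invention, with a made-up threshold) just looks for any high-contrast edge between adjacent pixels, which a real camera would run on a downsampled preview image:

```python
def nothing_in_focus(pixels, threshold=40):
    """Return True if no row of this grayscale image (a list of lists of
    0-255 values) contains a high-contrast edge, i.e. no pair of
    horizontally adjacent pixels differs by more than `threshold`."""
    for row in pixels:
        for a, b in zip(row, row[1:]):
            if abs(a - b) > threshold:
                return False  # found a sharp edge somewhere
    return True
```

A uniform gray frame trips the beep; a frame with even one crisp edge does not.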
Submitted by brad on Tue, 2006-10-17 20:12.
People are always looking for location aware services for their mobile devices, including local info. But frankly the UIs on small mobile devices often are poor. When you are on a cell phone, voice to a smart person is the interface you often want.
So here’s a possible location aware service. Let people register as a “local expert” for various coordinates. That’s probably folks who live in a neighbourhood or know it very well. They would then, using a presence system on their own phone or computer, declare when they are available to take calls about that location.
Somebody sitting with a cell phone in a location could call a special 900-like number. Their phone could just transmit their location, or they would quickly say it to a human for entry. Then, their call would be routed to a local expert who is marked as available for calls. (In some cases it may simultaneously ring several experts of possible but unsure availability and give the call to whoever answers first.)
Then they could, for a fee (perhaps $1/minute?) ask the expert questions.
- “Where’s the best Thai food?”
- “How do I get transit to such and such location?”
- “What’s a good Taxi company to call? Can you call me one?”
- “Is there a shop around here that sells widgets?”
- “Is this museum worth it?”
- “What parts of the area are dangerous?”
- “How much is real estate here?”
The expert would be expected to know how to answer questions about most of the restaurants, bars and shops. And they could also — so long as they disclosed any kickbacks very clearly — provide coupon codes to people that would rebate the cost of the call.
At the end of any call, the caller would stay on the line and be asked to rate the quality of the expert. They could also rate later. Experts would gain reputations for their skill, and the ones with the highest ratings would be given more calls, or be able to charge more.
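The routing step is straightforward; here is a minimal sketch (the expert record layout and coverage-radius model are my own assumptions) that picks the best-rated available expert whose declared neighbourhood covers the caller:

```python
import math

def approx_km(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; plenty accurate at neighbourhood scale.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371 * math.hypot(x, y)

def pick_expert(experts, lat, lon):
    """Route the caller to the best-rated available expert whose
    declared coverage radius includes the caller's position.
    Each expert is a dict with lat, lon, radius_km, available, rating."""
    candidates = [e for e in experts
                  if e["available"]
                  and approx_km(e["lat"], e["lon"], lat, lon) <= e["radius_km"]]
    return max(candidates, key=lambda e: e["rating"], default=None)
```

The simultaneous-ring variant would simply take the whole candidate list instead of the single max.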
Charging could be per minute, fixed-rate, or as noted, rebated with validation from a recommended merchant (though I would want to design a system so that advice is never biased by this.)
This could also be done by texting, which would be easier for experts to do, and probably be cheaper, but of course is slower for the mobile user. Many mobile users are getting pretty good at their texting. The experts would presumably be at computers with IM clients, but they could be at mobile phones as well.
To make this cheaper, one could arrange for trading minutes. Which is to say, if you put minutes into the system advising others, you can in turn use minutes getting advice when you need it. Some people might prefer to do this in a friendly way rather than charge or pay.
Experts could very well be just around the corner, physically, if they are being an expert on their local neighbourhood. It’s not out of the question they could then agree to help in person. In this case you would need to have some way to certify they’re not up to something nefarious. The fact that the call is logged and you know the home address of the expert in the database should be enough. The client might be up to something nefarious, but this seems a pretty low risk.
Submitted by brad on Tue, 2006-10-17 00:09.
Just on the heels of my prior post on the bad math often found around alternative energy, I see a Google Blog post on Google’s solar installation. It claims Google will save money with their 1.6 megawatt solar installation.
I would be very interested to see Google’s numbers — what are they paying for this PV system, and what do they pay the power company for their grid power? Did they get rebates on the PV install? Rebates can help a single customer save money but they do it at taxpayer expense which makes it a wash, other than as a means to try to increase the market for solar and bring down the price.
Now, I’m not in any way saying that it’s bad for Google to go solar. Large grid-tie solar arrays are quite green, with minimal emissions (only those from their manufacture, shipping and install) and so it’s good to have them, even if they are more expensive than non-green grid energy.
But I want to know, is my math bad, or is Google’s? If companies can really save money with a PV array they should be springing up like weeds.
Today I also read announcements of companies hoping to bring to market new solar panel technologies with thin films that are vastly cheaper than existing tech. When that happens, the panels really should sprout everywhere, and to very positive effect.
Update: The press releases say the system is 1.6MW, and provides 2.6 million kWh/year for a saving of $393K per year (about 15 cents/kWh, which is about right in California.) The press release also says the system will pay for itself in 7.5 years, which at a 7% interest rate means its total cost was $2.2M. (Truth is, Google is able to make far better than 7% with its money, I suspect.)
This means an astounding $1.38 per watt for installed solar. I’ve never heard of anything remotely like this.
Even with a bad-math 0% interest rate, a 7.5-year payoff is $1.84/watt, so it’s not just bad math here. Even with the California rebates of $2.60/watt and the 30% federal tax credit, it’s still amazingly cheap — and almost all the savings are coming from the taxpayer.
The release also suggests that $393K per year will result in $15 million saved over a 30 year lifespan. I can’t figure the math in this number. The bad-math 30*393 is under 12 million. The real saving over 30 years at 7% interest has a present value of 4.8 million. The future value, in 30 years time, of $393K/year is well over 30M at 7%. You need an interest rate of 1.5% to have a FV near $15M. I suppose the risk-free-rate-above-inflation might correspond to this, but it’s not typical in expressing these numbers.
So what are the real numbers?
Submitted by brad on Wed, 2006-10-11 13:18.
Last week I wrote about how the 800 number you get on a web page should be special and understand your context, and how frustrating it is to get an 800 number from the Contact-Us page of a web site and then be taken through a series of menus that are a waste of time for somebody who was just at the site.
While the best thing to do is to get an eCRM system which connects the user with a session fully informed about what they were doing on the web, that’s expensive. However, a few more thoughts have come to me.
a) Most IVRs for large companies offer the choice to use a different language, such as Spanish or French, which is good. But if I was on the web site I probably made a language choice there. So the “Contact Us” page in Spanish should give an 800 number that doesn’t bother to ask me, and the “Contact Us” page in English should probably be the same.
b) “Listen carefully because some of our options have changed” is one of the biggest lies out there. But if the Contact-Us page is going to lead the customer to an IVR, why not offer a page with a basic diagram of the IVR menus? Yes, I would like it to include the “path to an agent” sequence, and I know many companies don’t want to provide that in order to keep costs down. But at the very least you can tell me about the other choices that will be on the menu, and the fact that after I press 3 I’m going to be entering my account number followed by a pound sign.
And when the options do change, you can update the web site menus, and put a date on them so we can spot if they are old.
c) Ideally, track what I’ve been doing in my web session. Did I just book a flight? Did I just place an order? Did I just try to place an order and fail? Your web server knows this stuff. Now for some reason I’m phoning. Look at what I did and if you can’t offer me a custom 800 number just for that, at least spell out my likely path through the IVR. For example, “To amend this order in a way that can’t be done HERE, Call 1-800-xxx-yyy, press 3, wait for voice and enter your order number 123456 and then the pound sign.” (Yes, it should know my order number if I just placed an order.)
Submitted by brad on Tue, 2006-10-10 16:26.
I think it’s important that we stop burning petrofuels or indeed any fuels and get energy from better sources.
But there’s a disturbing phenomenon I have seen from people who believe the same thing too much. They want to believe so much, they forget their math. (Or I may be being charitable. Some of them, trying too hard to sell an idea or a product, may be deliberately forgetting their math.)
I see this over and over again in articles about photovoltaic solar, wind and other forms of power. They suggest you could put in a PV panel array for $20,000, have it provide you with $1,000 worth of electricity per year and thus “pay for itself” in 20 years. Again and again I see people take a series of payments that happen over a long time and just divide the total by the monthly or annual amount.
Submitted by brad on Fri, 2006-10-06 22:43.
When you call most companies today, you get a complex “IVR” (menu with speech or touch-tone commands.) In many cases the IVR offers you a variety of customer service functions which can be done far more easily on the web site. And indeed, the prompts usually tell you to visit the web site to do such things.
However, have we all not shouted, “I am already at your damned web site, I would not be calling you to do those things!”
And they should know this. So if you’re on the web site, and you’ve done more than just click on the “Contact Us” tab, then when you finally do click on the tab asking for a phone number, you should not get the same phone number that is given to newcomers or printed in non-web locations.
You should get a special phone number that says, “This customer is already on the web site. Don’t bother offering things that can be done far more easily on the web site.”
Now I understand why they offer these things. Agents cost money and they want to divert customers to automated systems if at all possible. But if I’m already at the automated system, I am usually calling for just a few reasons. Perhaps I want web site support, but I probably need an agent to do something that’s hard or impossible to do on the web site. Why frustrate me?
Of course, even better is if you have an eCRM system that integrates the call center and the web experience. Many companies now have a click-to-call link on their page. Some even connect you with an agent who has your information already from the history on the web site, but this is annoyingly rare. All this stuff is expensive and involves buying new tools and fancy reprogramming. What I propose is pretty trivial — a much simpler menu gated by the phone number the person came in on. Any IVR can do that with a small amount of work.
Now I see one hole. The “Gets to an agent fast” number might of course be spread around, and people would want to use it for all their calls, defeating (to the company) the purpose of all those menus. But today, numbers are cheap. You can get a block of 100 numbers and change the magic one every day. Or, with a little bit of programming, really not that much, you can have the web site tell the true web-sourced callers “Dial extension xxxx when you get connected.” That’s a little fancier, requires the IVR be programmed to know about a changing extension, but again it’s not nearly so hard as buying a whole eCRM system.
I know that companies don’t want to frustrate their customers; they think the IVRs are saving them enough money to offset the frustration. But in this case, they are costing money, as the person wastes time listening to a pointless IVR. Let’s stop it!
Submitted by brad on Thu, 2006-10-05 21:56.
Every driver of a regular car knows this frustration well. You’re behind a big SUV or Minivan and you can no longer see what’s happening ahead of you, the way you can with ordinary cars. This is not simply because the ordinary cars are shorter, it’s because you can see through the windows of the ordinary car — they are at your level.
Of course trucks have always blocked the way but in the past they were few in number. Now that half the cars on the road are tall, being blocked is becoming the norm. This is dangerous, since good driving requires tracking the cars in front of the one you are following, and reacting to their brake lights as well.
Now that flat panel displays are plummeting in price, I propose that any vehicle that can’t be easily seen through by a driver in a standard-height car must put a flat screen display on the back, said display showing the view of a camera on the front of the vehicle, ideally configured to act like a window would for a car at some modest distance behind the screen.
(A really clever display would track the distance of the car behind and zoom the view so it acts exactly like a window if it were big enough, or at least show what a big window would.)
I’m not talking HDTV here, though of course that would be nice and would become the norm a few years later. It might just be a 20” widescreen style display. For computers, these are dropping under $500 with HD resolution, and less with TV resolution. Admittedly car-mounted units would start off being more expensive in order to be rugged enough, though lots of people are putting small panels in their cars today.
It would of course need a very bright backlight for daytime, and an automatic adjustment of brightness for the night.
Quite a bit cheaper would be to just have the SUV/Minivans have the camera, and transmit the video over RF. The drivers of cars could be the ones to have to buy screens, in this case small dashboard screens which are cheaper than big ones and already exist in many cars for GPS. The big problem here is only receiving the signal of the car in front of you. You would need a protocol where cars that transmit also receive with highly directional antennas. Thus they would examine the direction of all signals they receive from other cameras, automatically pick a free band, and then transmit, “I’m car X. Car Y is in front of me, car Z in front of it. Cars A and B are right front and direct right, car C is left, car D is behind me (probably you!)”
In fact it would be giving signal strength info from all directionals. It should be pretty easy then to tell, with all that info from all the cars around you, which is the car directly in front of you.
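A toy version of that selection, assuming each receiver gets a per-antenna signal strength for every nearby transmitter (the data layout here is purely illustrative):

```python
def car_in_front(readings):
    """readings: {car_id: {antenna_direction: signal_strength}}.
    A minimal heuristic: the car directly ahead is the transmitter whose
    signal is strongest on our forward antenna AND stronger there than on
    any of our other antennas (so it really is ahead, not beside us)."""
    best_id, best = None, 0.0
    for car, sig in readings.items():
        fwd = sig.get("front", 0.0)
        if fwd > best and fwd == max(sig.values()):
            best_id, best = car, fwd
    return best_id
```

A real system would also cross-check against the neighbour lists the other cars broadcast, which is where all that “car Y is in front of me” information earns its keep.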
Then display it on the dash or even in a heads up display where the tail of the car is.
For privacy reasons, cars could change their serial number from time to time so this can’t be used to track them, though there is a virtue in broadcasting the licence plate so you can confirm you are really seeing the view of the car ahead of you by reading the plate.
This solution would cost under $50 for the camera and transmitter, much easier to mandate. The receiver would be an option car owners could buy. Not as fair of course, since the vision blockers should be the ones paying for this.
Submitted by brad on Tue, 2006-10-03 12:07.
We should all be disturbed by the story of a man who was questioned and missed his flight because he spoke on his cell phone in Tamil. Some paranoid thought it was suspicious, reported it, and so the guy gets pulled and misses his flight.
This is not the first time. People have been treated as suspicious for speaking in all sorts of languages, including Arabic, Hebrew, Urdu, or just for being Arabs or Sikhs. Sometimes it’s been a lot worse than just missing your flight.
So here’s a simple rule. If you want to report something as suspicious, then you don’t fly until the matter is resolved. After all, if you are really afraid, you wouldn’t want to fly. Even with the nasty foreigner pulled off the plane, you should be afraid of conspiracies with teams of villains. So you go into the holding cell and get a few questions too.
Now frankly, I would want to do much worse when it turns out the suspect is very obviously innocent. But I know that won’t get traction because people will not want to overly discourage reports lest they discourage a real report. But based on my logic above, this should not discourage people who think they really have something. At least not the first time.
TSA employees are of course in a CYA mode. They can’t screen out the paranoia because they aren’t punished for harassing the innocent, but they will be terribly punished if they ignore a report of somebody suspicious and decide to do nothing. That’s what we need to fix long term, as I’ve written before. There must be negative consequences for people who implement security theatre and strip the innocent of their rights, or that’s what we will get.
Submitted by brad on Mon, 2006-10-02 12:29.
More cars are being made “drive-by-wire” where the controls are electronic, and even in cars with mechanical steering, throttle and brake linkages, there also exist motorized controls for power steering and cruise control. (It’s less common on the brakes.)
As this becomes more common, it would be nice if one could pop in a simple, short-duration control console on the passenger’s side. It need not be a large, full set of controls; it might be more of the video game console size.
The goal is to make it possible for the driver to ask the passenger to “take the wheel” for a short period of time in a situation where the driving is not particularly complex. For example, if the driver wants to take a phone call, or eat a snack, or even just stretch for a minute. For long term driving, the two people should switch. It could also be used in an emergency, if the driver should conk out, but that’s rare enough I don’t think it’s all that likely people would have the presence of mind to pop out the auxiliary controls and use them well.
The main question is, how dangerous is this? Disabled people drive with hand controls for throttle and brakes, though of course they train with this and practice all the time. You would want people to practice driving with the mini-console before using it on a live road. A small speed display would be needed.
While it’s possible to just pass over steering, and have the person in the driver’s seat stay ready with the brakes, that seems risky to me, even if it’s cheaper. Driving from the other side of a car has poorer visibility, of course, but it’s legal and doable. However, I wouldn’t recommend this approach for complex city driving.
We’re used to a big wheel, but almost everybody is also comfortable with something like fold out handlebars that could pop out from the glovebox. (There is an airbag problem with this, perhaps having the bars be low would be better. As they are electronic, they can even pop up from under the front of the seat, or the console between the two seats.) Motorcycle style throttle — clutch would be too much work.
Driving schools would like to buy this of course. They already get cars with a passenger side brake pedal.
Submitted by brad on Thu, 2006-09-28 11:14.
Some time ago I modified this blog software (Drupal) to ask a very simple question of people without accounts posting comments. It generally works very well at stopping robot posting; however, the volume of spam has been increasing, so I changed the question. Volume may have dropped a touch but I still got a bunch, which means the spammers are actually live humans, not robots.
It’s also possible that asking natural language questions (rather than captcha style entry of text from a graphic) has gotten common enough that spammers have modified their software so they can figure out the answer once and easily code it, but I don’t think this is the case.
What’s curious is that my comment form also clearly explains that any links in comments will be done with the rel=nofollow tag, which tells Google and other search engines not to treat the link as a valid one when ranking pages. This means that, other than readers of the blog clicking on the links, which should be very rare, these spams should be unproductive for the spammer. But they’re still doing them.
The change however was prompted by a new breed of comment spam, where the spammers were copying other comments from inside large threads, but inserting their link on the author’s name. (This also uses rel=nofollow.) Indeed, such a technique does not automatically trigger my instincts to delete the spam, but they chose one of my own comments, so I recognized it. Right now my methods cut the spam enough that it is productive to manually delete what gets posted, though if the volume got high enough I would have to find other automated techniques.
(Drupal could of course help by having a much easier to use delete, including a ‘delete all from this IP address’ option.)
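The question gate itself is almost trivially simple; a sketch (these questions and the normalization rules are invented examples, not my actual ones, for obvious reasons):

```python
import re

# Hypothetical question pool; rotating the active question raises the
# cost for human spammers who learn one answer and hard-code it.
QUESTIONS = {
    "What colour is the sky on a clear day?": {"blue"},
    "How many legs does a dog have?": {"4", "four"},
}

def answer_ok(question: str, answer: str) -> bool:
    """Accept a comment only if the free-form answer, lowercased and
    stripped of punctuation, matches one of the accepted forms."""
    normalized = re.sub(r"[^a-z0-9 ]", "", answer.strip().lower())
    return normalized in QUESTIONS.get(question, set())
```

The normalization matters: “Blue.” and “ BLUE” should both pass, or you end up rejecting real readers instead of spammers.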
Submitted by brad on Fri, 2006-09-22 11:46.
As most people in the VoIP world know, the FCC mandated that “interconnected” VoIP providers must provide E911 (which means 911 calling with transmission of your location) service to their customers. It is not optional, they can’t allow the customer to opt out to save money.
It sounds good on the surface, if there’s a phone there you want to be able to reach emergency services with it.
The meaning of interconnected is still being debated. It was mostly aimed at the Vonages of the world. The current definition applies to service that has a phone-like device that can make and receive calls from the PSTN. Most people don’t think it applies to PBX phones in homes and offices, though that’s not explicit. It doesn’t apply to the Skype client on your PC, one hopes, but it could very well apply if you have a more phone like device connecting to Skype, which offers Skype-in and Skype-out services on a pay per use basis and thus is interconnected with the PSTN.
Here’s the kicker. There are a variety of companies which will provide E911 connectivity services for VoIP companies. This means you pay them and they will provide a means for you to route your user’s calls to the right emergency public service access point, and pass along the address the user registered with the service. Seems like a fine business, but as far as I can tell, all these companies are charging by the customer per month, with fees between $1 and $2 per month.
This puts a lot of constraints on the pricing models of VoIP services. There’s a lot of room for innovative business models that include offering limited or trial PSTN connection for free, or per-usage billing with no monthly fees. (All services I know of do the non-PSTN calling for free.) Or services that appear free but are supported by advertising or other means. You’ve seen that Skype decided to offer free PSTN services for all of 2006. AIM Phoneline offers a free number for incoming calls, as do many others.
Submitted by brad on Sun, 2006-09-17 10:34.
It’s common in the blogosphere for bloggers to comment on the posts of other bloggers. Sometimes blogs show trackbacks to let you see those comments with a posting. (I turned this off due to trackback spam.) In some cases we effectively get a thread, as might appear in a message board/email/USENET, but the individual components of the thread are all on the individual blogs.
So now we need an RSS aggregator to rebuild these posts into a thread one can see and navigate. It’s a little more complex than threading in USENET, because messages can have more than one parent (ie. link to more than one post) and may not link directly at all. In addition, timestamps only give partial clues as to position in a thread since many people read from aggregators and may not have read a message that was posted an hour ago in their “thread.”
At a minimum, existing aggregators (like bloglines) could spot sub-threads existing entirely among your subscribed feeds, and present those postings to you. You could also define feeds which are unsubscribed but which you wish to see or be informed of postings from in the event of a thread. (Or you might have a block-list of feeds you don’t want to see contributions from.) They could just have a little link saying, “There’s a thread including posts from other blogs on this message” which you could expand, and that would mark those items as read when you came to the other blog.
Blog search tools, like Technorati, could also spot these threads, and present a typical thread interface for perusing them. Both readers and bloggers would be interested in knowing how deep the threads go.
Submitted by brad on Sat, 2006-09-16 15:33.
At the blogger panel at Fall VON (repurposed to be both video on the net as well as voice) Vlogger and blip.tv advocate Dina Kaplan asked bloggers to start vlogging. It’s started a minor debate.
My take? Please don’t.
I’ve written before on what I call the reader-friendly vs. writer-friendly dichotomy. My thesis is that media make choices about where to be on that spectrum, though ideal technology reduces the compromises. If you want to encourage participation, as in Wikis, you go for writer friendly. If you have one writer and a million readers, like the New York Times, you pay the writer to work hard to make it as reader friendly as possible.
When video is professionally produced and tightly edited, it can be reader (viewer) friendly. In particular if the video is indeed visual. Footage of tanks rolling into a town can convey powerful thoughts quickly.
But talking head audio and video has an immediate disadvantage. I can read material ten times faster than I can listen to it. At least with podcasts you can listen to them while jogging or moving where you can’t do anything else, but video has to be watched. If you’re just going to say your message, you’re putting quite a burden on me to force me to take 10 times as long to consume it — and usually not be able to search it, or quickly move around within it or scan it as I can with text.
So you must overcome that burden. And most videologs don’t. It’s not impossible to do, but it’s hard. Yes, video allows better expression of emotion. Yes, it lets me learn more about the person as well as the message. (Though that is often mostly for the ego of the presenter, not for me.)
Recording audio is easier than writing well. It’s writer friendly. Video has the same attribute if done at a basic level, though good video requires some serious work. Good audio requires real work too — there’s quite a difference between “This American Life” and a typical podcast.
Indeed, there is already so much pro quality audio out there like This American Life that I don’t have time to listen to the worthwhile stuff, which makes it harder to get my attention with ordinary podcasts. Ditto for video.
There is one potential technological answer to some of these questions. Anybody doing an audio or video cast should provide a transcript. That’s writer-unfriendly but very reader friendly. Let me decide how I want to consume it. Let me mix and match by clicking on the transcript and going right to the video snippet.
With the right tools, this could be easy for the vlogger to do. Vlogger/podcaster tools should all come with trained speech recognition software which can reliably transcribe the host, and with a little bit of work, even the guest. Then a little writer-work to clean up the transcript and add notes about things shown but not spoken. Now we have something truly friendly for the reader.
In fact, speaker-independent speech recognition is almost good enough for this, but it’s still clearly best to have the producer make the transcript, even if the transcript is full of recognition errors. At least I can search it and quickly click to the good parts, or hear the mis-transcribed words.
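The searchable-transcript idea can be sketched in a few lines: if each transcript segment carries the time offset where it occurs (as formats like WebVTT do), a plain text search can hand back the point to seek to in the audio or video. The sample segments below are invented for illustration.

```python
def find_in_transcript(segments, query):
    """Return (start_seconds, text) of the first segment whose text
    contains the query, case-insensitively, or None if absent."""
    q = query.lower()
    for start, text in segments:
        if q in text.lower():
            return start, text
    return None

# Hypothetical transcript: (start time in seconds, spoken text).
segments = [
    (0.0,  "Welcome to the show."),
    (12.5, "Today we talk about speech recognition."),
    (47.0, "Even an error-filled transcript is searchable."),
]
```

A player UI would then seek the video to the returned offset, which is exactly the “click on the transcript and go right to the snippet” experience described above.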
If you’re making podcaster/vlogger tools, this is the direction to go. In addition, it’s absolutely the right thing for the hearing or vision impaired.
Submitted by brad on Fri, 2006-09-15 22:59.
In an earlier blog post I attempted to distinguish TVoIP (TV over the internet) from IPTV, a buzzword for cable/telco live video offerings. My goal was to explain that we can be very happy with TV, movies and video that come to us over the internet after some delay.
The two terms aren’t really very explanatory, so now I suggest VaD, for video-after-demand. Tivo and Netflix have taught us that people are quite satisfied if they pick their viewing choices in advance, and then later, sometimes weeks or months later, get the chance to view them. The key is that when they sit down to watch something, they have a nice selection of choices they actually want to see.
The video on demand dream is to give you complete live access to all the video in the world that’s available. Click it and watch it now. It’s a great dream, but it’s an expensive one. It needs fast links with dedicated bandwidth. If your movie viewing is using 4 of your 6 megabits, somebody else in the house can’t use those megabits for web surfing or other interactive needs.
With VaD you don’t need much in your link. In fact, you can download shows that you don’t have the ability to watch live at all, or get them at higher quality. You just have to wait. Not staring at a download bar, of course, nobody likes that, but wait until a later watching session, just as you do when you pick programs to record on a PVR like the Tivo.
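The back-of-envelope arithmetic behind this is simple: a 4 Mbit/s live stream monopolizes most of a 6 Mbit/s link, while a background download can trickle in at whatever rate is spare. A quick sketch (the show size and spare rate are example numbers, not claims about any real service):

```python
def download_hours(show_gb, spare_mbps):
    """Hours to fetch a show of show_gb gigabytes at a background
    trickle of spare_mbps megabits per second."""
    bits = show_gb * 8e9          # gigabytes -> bits
    seconds = bits / (spare_mbps * 1e6)
    return seconds / 3600

# A 2 GB show over a spare 1 Mbit/s trickle:
hours = download_hours(2, 1)      # roughly 4.4 hours
```

So even a modest leftover trickle delivers tonight’s pick well before tomorrow’s viewing session, with no dedicated-bandwidth provisioning at all.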
I’ve said these things before, but the VaD vision is remarkably satisfying and costs vastly less, both to the consumer and to those building out the networks. It can someday be combined with IP multicasting to become tremendously efficient. (Multicasting can be used for streaming, but if packets are lost you have only a limited time to recover them, based on how big your buffer is.)
Submitted by brad on Wed, 2006-09-13 07:00.
Trade show booths are always searching for branded items to hand out to prospects. Until they fix the airport bans, how about putting your brand on a tube of toothpaste and/or other travel liquids now banned from carry-on bags?
(Yeah, most hotels will now give you these, but it’s the thought that counts and this one would be remembered longer than most T-shirts.)
Submitted by brad on Sun, 2006-09-10 18:18.
As a hirsute individual, I beg the world’s makers of medical tapes and band-aids to work on an adhesive that is decent at sticking to skin, but does not stick well to hair.
Not being versed in the adhesive chemistries of these things, I don’t know how difficult this is, but if one can be found, many people would thank you.
Failing that would be an adhesive with a simple non-toxic solvent that unbinds it, which could be swabbed on while slowly undoing tape.
Submitted by brad on Fri, 2006-09-08 12:24.
While it will be a while before I get the time to build all my panoramas of this year’s Burning Man, I did do some quick versions of some of those I shot of the burn itself. This year, I arranged to be on a cherry picker above the burn. I wish I had spent more time actually looking at the spectacle, but I wanted to capture panoramas of Burning Man’s climactic moment. The entire city gathers, along with all the art cars for one shared experience. A large chunk of the experience is the mood and the sound which I can’t capture in a photo, but I can try to capture the scope.
This thumbnail shows the man going up, shooting fireworks and most of the crowd around him. I will later rebuild it from the raw files for the best quality.
Shooting panoramas at night is always hard. You want time exposures, but if any exposure goes wrong (such as vibration) the whole panorama can be ruined by a blurry frame in the middle. On a boomlift, if anybody moves — and the other photographer was always adjusting his body for different angles — a time exposure won’t be possible. It’s also cramped and if you drop something (as I did my clamp knob near the end) you won’t get it back for a while. In addition, you can’t have everybody else duck every time you do a sweep without really annoying them, and if you do you have to wait a while for things to stabilize.
It was also an interesting experience riding to the burn with DPW, the group of staff and volunteers who build the city’s infrastructure. They work hard, in rough conditions, but it gives them an attitude toward the other participants that crosses the line some of the time. When we came to each parked cherry picker, people had leaned bikes against it, and in one case locked a bike to one. Though we would not actually move the bases, the crew quickly grabbed all the bikes and tossed them on top of one another, tangling pedal in spoke, probably damaging some and certainly making some hard to find. The locked bike had its lock smashed with a mallet. The people who put their bikes on the pickers weren’t thinking very well, I agree, and the DPW crew did have to get us around quickly, but I couldn’t help cringing with guilt at being part of the cause of this, especially when we didn’t move the pickers. (Though I understand the safety concern of needing to be able to.)
Anyway, things “picked up” quickly and the view was indeed spectacular. Tune in later for more and better pictures, and in the meantime you can see the first set of trial burn panoramas for a view of the burn you haven’t seen.
Submitted by brad on Wed, 2006-09-06 11:54.
I’m back from Burning Man (and Worldcon), and though we had a decently successful internet connection there this time, you don’t want to spend time at Burning Man reading the web. This presents an instance of one of the oldest problems in the “serial” part of the online world: how do you deal with the huge backlog of stuff to read from tools that expect you to read regularly?
You get backlogs of your E-mail of course, and your mailing lists. You get them for mainstream news, and for blogs. For your newsgroups and other things. I’ve faced this problem for almost 25 years as the net gave me more and more things I read on a very regular basis.
When I was running ClariNet, my long-term goal list always included a system that would attempt to judge the importance of a story as well as its topic areas. I had two goals in mind for this. First, you could tune how much news you wanted about a particular topic in ordinary reading. By setting how important each topic was to you, a dot product of your own priorities and the importance ratings of the stories would bring to the top the news most important to you. Second, the system would know how long it had been since you last read news, and could dial down the volume to show you only the most important items from the time you were away. News could also simply be presented in importance order, and you could read until you got bored.
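The dot-product idea above is easy to sketch. Here, each story carries editor-assigned importance ratings per topic and each reader sets topic priorities; the dot product of the two ranks the story for that reader, and a `limit` caps the list after a long absence. All topic names and numbers are invented for illustration, not anything ClariNet actually shipped.

```python
def score(story_ratings, reader_priorities):
    """Dot product of a story's per-topic importance ratings and the
    reader's per-topic priorities (missing topics count as zero)."""
    return sum(rating * reader_priorities.get(topic, 0)
               for topic, rating in story_ratings.items())

def rank(stories, reader_priorities, limit=None):
    """Sort stories by personal importance, most important first;
    cap with limit to dial down volume after time away."""
    ordered = sorted(stories,
                     key=lambda s: score(s["ratings"], reader_priorities),
                     reverse=True)
    return ordered[:limit] if limit else ordered
```

Reading in importance order until you get bored is then just walking the ranked list from the top.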
There are options to do this for non-news, where professional editors would rank stories. One advantage you get when items (be they blog posts or news) get old is you have the chance to gather data on reading habits. You can tell which stories are most clicked on (though not as easily with full RSS feeds) and also which items get the most comments. Asking users to rate items is usually not very productive. Some of these techniques (like using web bugs to track readership) could be privacy invading, but they could be done through random sampling.
I propose, however, that one way or another popular, high-volume sites will need to find some way to prioritize their items for people who have been away a long time and regularly update these figures in their RSS feed or other database, so that readers can have something to do when they notice there are hundreds or even thousands of stories to read. This can include sorting using such data, or in the absence of it, just switching to headlines.
It’s also possible for an independent service to help here. Already several toolbars like Alexa and Google’s track net ratings, and get measurements of net traffic to help identify the most popular sites and pages on the web. They could adapt this information to give you a way to get a handle on the most important items you missed while away for a long period.
For E-mail, there is less hope. There have been efforts to prioritize non-list e-mail, mostly around spam, but people are afraid that any real mail actually sent to them has to be read, even if there are 1,000 messages waiting, as there can be after two weeks away.
Submitted by brad on Mon, 2006-08-21 11:44.
One of the few positive things about the recent giant AOL data spill (which we have asked the FTC to look into) is that it has hopefully taught a few lessons about just how hard it is to truly anonymize data. With luck, the lesson will be “don’t be fooled into thinking you can do it” and not “just avoid what AOL did.”
There is some irony in the fact that, in general, AOL is one of the better performers. They don’t keep a permanent log of searches tied to userid, though reports say it is tied to a virtual ID. (I have seen other reports suggesting even this is erased after a while.) AOL also lets you turn off short-term logging of the association with your real ID. Google, MSN, Yahoo and others keep the data effectively forever.
Everybody has pointed out that for many people, just the search queries themselves can be enough to identify a person, because people search for things that relate to them. But many people’s searches will not be trackable back to them.
However, the AOL records maintain the exact time of the search, to the second or perhaps more accurately. They also maintain the site the user clicked on after doing the search. AOL may have wiped logs, but most sites don’t. Let’s say you go through the AOL logs and discover an AOL user searched and clicked on your site. You can go into your own logs and find that search, both from the timestamp, and the fact the “referer” field will identify that the user came via an AOL search for those specific terms.
Now you can learn the IP address of the user, and their cookies or even account with your site, if your site has accounts.
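The re-identification attack described above amounts to a simple log join. The sketch below matches one AOL log row (query plus click time) against a site’s own access log, where the Referer header records the same search terms; the log record formats are invented for illustration, since real server logs would need parsing first.

```python
def match_visitor(aol_row, site_log, window_seconds=5):
    """Return site-log entries whose referer contains the AOL user's
    query terms and whose timestamp falls within window_seconds of the
    recorded search time. Matching entries carry the visitor's IP,
    cookies, or account id."""
    hits = []
    for entry in site_log:
        same_query = aol_row["query"] in entry["referer"]
        close_time = abs(entry["time"] - aol_row["time"]) <= window_seconds
        if same_query and close_time:
            hits.append(entry)
    return hits
```

An uncommon query plus a to-the-second timestamp will usually narrow this to a single visitor, which is exactly why releasing “anonymized” click data with exact times is so dangerous.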
If you’re a lawyer, however, doing a case where you can subpoena information, you could use that tool to identify almost any user in the AOL database who did a modest volume of searches. And the big sites with accounts could probably identify all their users who are in the database, getting their account id (and thus often name and email and the works.)
So even if AOL can’t uncover who many of these users are due to an erasure policy, the truth is that’s not enough. Even removing the clicked-on site from the data does not stop the big sites from tracking their own users, because their own logs have the timestamped searches. And an investigator could look at a query, run it themselves, see which sites you would likely have clicked on, and search the logs of those sites. They would still find you. Even without the timestamp this is possible for an uncommon query. And uncommon queries are surprisingly common. :-)
I have a static IP address, so my IP address links directly to me. Broadband users with dynamic IP addresses may think they are safer, but if you have a network gateway box, or leave your sole computer on, your address can stay stable for months at a time. That’s almost as close a tie as a static IP.
The point here is that once the data are collected, making them anonymous is very, very hard. Harder than you think, even when you take into account this rule about how hard it is.