Media

Battlestar Galactica sub-blog returns to activity

Some of you may know that I started a sub-blog for my thoughts on my favourite SF TV show, Battlestar Galactica. This sub-blog was dormant while the show was off the air, but it’s started up again with new analysis as the first of the final 10 (or 12) episodes airs tonight. (I will miss watching it near-live, as I will be giving a talk tonight on Robocars at the Future Salon in Palo Alto.) Reports are that one big mystery — the last Cylon — will be revealed tonight.

So if you watch Battlestar Galactica, you may want to subscribe to the feed for the Battlestar Galactica Analysis Blog right here on this site. And I’ll go out on a limb and promote my two top candidates for the mystery Cylon.


Being the greatest athlete ever

NBC has had just a touch of coverage of Michael Phelps and his 8 gold medals, which, in breaking Mark Spitz’s 7 from 1972, have him declared the greatest Olympic athlete, or even athlete of all time. And there’s no doubt he’s one of the greatest swimmers of all time, and this is an incredible accomplishment. Couch potato that I am, I can hardly criticise him.

(We are of course watching the Olympics in HDTV using MythTV, but fast-forwarding over the major bulk of it. Endless beach volleyball, commercials and boring events whiz by. I can’t imagine watching without such a box. I would probably spend more time, which they would like, but be less satisfied and see fewer of the events I wish to.)

Phelps got 8 golds, but 3 of them were relays. He certainly contributed to those relays, and may well have made the difference that allowed the U.S. team to win golds it would not have won without him. So it seems fair to count them, no?

No. The problem is you can’t win relay gold unless you are lucky enough to be a citizen of one of a few powerhouse swimming nations, in particular the USA and Australia, along with a few others. Almost no matter how brilliant you are, if you don’t compete for one of these countries, you have no chance at those medals. So only a subset of the world’s population even gets to compete for the chance to win 7 or 8 medals at the games. This applies to almost all team medals, be they relay or otherwise. Perhaps the truly determined can emigrate to a contending country. A pretty tall order.

Phelps won 5 individual golds, which is also a record, though one shared with 3 others. He has more golds than anybody, though other athletes have more total medals.

Of course, swimming is one of the special sports in which there are enough similar events that it is possible to attain a total like this. There are many sports that don’t even have 7 events a single person could compete in. (They may have more events but they will be divided by sex, or weight class.)

Shooting has potential for a star. It was even mixed (men and women) until they split it. It has 9 men’s events, and one could in theory be master of them all.

Track and field has 47 events split over men and women. However, training is so specialized by muscle type that nobody expects sprinters to compete in long events or vice versa. Often the best sprinters also do well in the long jump or triple jump, allowing the potential of a giant medal run for somebody with range from the 100m to the 400m. In theory there are 8 individual events of 400m or shorter.

And there are a few other places. But the point is that to do what Phelps (or Spitz) did, you have to be in a small subset of sports, and be from a small set of countries. There have been truly “cross-sport” athletes at the Olympics, but in today’s world of specialized training it’s rare. If anybody managed to win multiple golds over different sports and beat this record, the title of greatest Olympian would be very deserved. One place I could see some crossover is between high diving and trampoline. While a new event, trampoline seems to be like doing 20 vaults or high dives in a row. None of which is to say it wasn’t exciting to watch Phelps race.


Guarantee CPM if you want me to join your ad network

If you run a web site of reasonable popularity, you probably get invitations to sign up for ad networks from time to time. They want you to try them out, and will sometimes talk a great talk about how well they will do.

I always tell them “put your money where your mouth is — guarantee at least some basic minimum during the trial.”

Most of them shut up when I ask for that, indicating they don’t really believe their own message. I get enough that I wrote a page outlining what I want, and why I want it — and why everybody should want it.

If you have a web site with ads, and definitely if you have an ad network, consider reading what I want before I’ll try your ad network.

Just when you thought it was safe to buy a Blu-ray player

The last week saw some serious signs that Blu-ray could win the high-def DVD war over HD-DVD. Many people have been waiting for somebody to win the war so that they don’t end up buying a player and a video collection in the format that loses. (Strangely, the few players that supported both formats tended to cost much more than two individual players.)

Now there’s a report that the new profile for Blu-ray will obsolete many old players. So even those who made the right bet and didn’t get a PS3 may be just as screwed.

Something has amazed me since the days of the first audio CD players in the 80s. The audio CD Red Book format was defined early, and because of that it was a lot of work to get reasonable combined audio + data disks. And long after burnable CDs became popular (and on into DVDs), it’s been the case that many home players can’t read the disks at all until they are “finalized” and unable to take more data. There were many other problems. And that’s not itself the problem, as there will always be demands you don’t anticipate.

But it’s not as though these devices don’t have a readily available means of being given new programming. They have a drive in them, and it would have been easy to issue CDs or DVDs with signed new firmware on them. Indeed, since the disks have always been vastly huge compared to the firmware of the devices they played on, a disk wishing to use a new format could probably include new firmware for every known player in a small part of the disk. That’s certainly true for Blu-ray.

Now of course if a player doesn’t have enough memory or CPU or graphics power, you can’t update it to do things it simply isn’t capable of doing. But you should always be able to update it to understand at least the structure of new formats, and know what it can use and what it can’t. Of course, all updates must be signed by a highly protected manufacturer’s key, so that attackers can’t hack your firmware, and the user should have to confirm on their remote that they want to accept the update. And yes, if that key is compromised and people don’t insert a disk with a revocation command quickly enough, there can be trouble. But it’s better than having players that slow down progress in the business.
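To make that concrete, here is a minimal sketch, in Python for readability, of the vetting a player could apply to a firmware image found on an inserted disk. The Ed25519 key format, the `cryptography` package and all the names here are my assumptions for illustration, not anything a real player does:

```python
# Hypothetical sketch: vet a disk-delivered firmware image against a
# manufacturer public key burned into the player's ROM at the factory.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def vet_firmware(image: bytes, signature: bytes,
                 rom_pubkey: bytes, revoked_keys: set[bytes]) -> bool:
    """Accept the image only if signed by a non-revoked manufacturer key."""
    if rom_pubkey in revoked_keys:
        return False  # a previous disk carried a revocation for this key
    try:
        Ed25519PublicKey.from_public_bytes(rom_pubkey).verify(signature, image)
    except InvalidSignature:
        return False
    return True


def should_flash(image: bytes, signature: bytes, rom_pubkey: bytes,
                 revoked_keys: set[bytes], user_confirmed: bool) -> bool:
    # The user still has to confirm on the remote before anything is flashed.
    return user_confirmed and vet_firmware(image, signature,
                                           rom_pubkey, revoked_keys)
```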

(And yes, I realize that many early CD players did not have rewritable firmware, since ROM was cheaper than EEPROM and flash didn’t come along for a while. But it would have been worth it, and there’s no excuse for not having safely flashable firmware today in just about anything.)

And on another rant, I’ve always been amazed at the devices that do allow firmware flashing but don’t have a safety mechanism. There are many devices, some still made today, that can be turned into “bricks” if you flash buggy firmware, in that you can no longer flash new firmware. Every device should have, in unwritable storage, the most basic and well tested firmware reloader that can be invoked if the recently installed firmware has failed. Some devices have this but it’s taken a long time.

While I don’t really seek a game machine, because it would suck up too much time, it may be time for a PS3 as a Blu-ray player. They are not much more expensive than the standalone players, and of course do much more. If I wanted a game machine it would probably be a Wii. We found one this year as a gift for the nephews, but they got another one, so I ended up selling it for a $100 profit on Craigslist, the prevailing market being what it was. Made an Egyptian boy very happy, as they are very hard to get over there.

Old think on data storage for movies

A story from the New York Times suggests it costs over $12,000/year to store a movie in digital form.

This number is entirely bogus, based on old thinking, namely the assumption of offline storage on DVDs and tapes. Offline media do degrade, and you must copy them before they have a chance to do so, which takes people, though frankly it still should not be as expensive as this. For my calculations, I am going to assume a movie needs 100gb of storage with low-loss lossy compression. You can scale the numbers up if you want to assume more; even at 1TB it doesn’t change that much.

A film occupying 100gb of storage can go on about 20 DVDs (or 11 dual layer), costing about $8. It can go on 4 independent sets of 20 DVDs for $32 in media. Ideally you would rack these in a DVD jukebox, but if they are just sleeved, then once a year a person could pull out the DVDs and put them in a reader to test them. Any that tested fine would be re-sleeved; any that did not would be flagged, so the other copies could be pulled and copied to new media (probably better media, like Blu-ray). There are algorithms to distribute the data so that a large number of the disks must fail in a given year to actually lose anything. Of course, you use different vaults around the world. When approaching the point where failure rates for the media rise, you re-burn new copies even if the old ones still test fine.
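As a sketch of the redundancy arithmetic (I’m assuming a k-of-n erasure code such as Reed-Solomon, which the scheme above doesn’t name, and the failure rate is invented):

```python
# Survivability of a k-of-n erasure-coded archive: the film is cut into
# n disk-sized shares, any k of which are enough to reconstruct it.
from math import comb


def p_file_lost(n: int, k: int, p_disk_fail: float) -> float:
    """Probability the file is unrecoverable: more than n - k shares lost."""
    return sum(comb(n, f) * p_disk_fail**f * (1 - p_disk_fail)**(n - f)
               for f in range(n - k + 1, n + 1))


# Treat 4 sets of 20 DVDs as 80 shares, any 20 sufficient, and assume a
# pessimistic 5% chance that a given disk fails its annual test:
print(p_file_lost(80, 20, 0.05))  # astronomically small
```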

This takes human time, though not all that much: perhaps half an hour of actual human time swapping disks, though much more real time to burn them. But you don’t do just one film at a time.

However, even better is the new style of archival: online storage. Hard disks are 20 cents/gigabyte and continuing to fall. NAS boxes are more expensive now, but there is no reason they won’t drop to very reasonable prices, so that a NAS case adds perhaps 5 cents/gigabyte (i.e. $100 for a 4x500gb drive box that lasts 10-15 years). (NAS boxes are small boxes that hold a collection of drives and allow access to them over ethernet. No computer is needed.) They also cost about 2 cents/gb/year for power if on all the time, and some small amount for space, though they would tend to sit in computer centers that already exist.

Those are today’s prices, which will just get cheaper, except for the power. Much cheaper. If a drive lasts an average of 4 years before failing and a NAS lasts 10 years, this works out to 7.5 cents/gigabyte/year. Of course you will store your files redundantly, in 4 different places (which is actually overkill) and so it’s 30 cents/gigabyte/year.

Which is still just $30 for a 100gb file, or $300 for a TB.
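A quick check of that arithmetic, using the figures above:

```python
# Reproducing the cost estimate for online archival storage.
drive_cost_per_gb = 0.20      # $100 for a 500gb drive
drive_life_years = 4
nas_cost_per_gb = 0.05        # case cost amortized over its drives
nas_life_years = 10
power_per_gb_year = 0.02      # if spinning all the time

per_gb_year = (drive_cost_per_gb / drive_life_years
               + nas_cost_per_gb / nas_life_years
               + power_per_gb_year)
print(per_gb_year)                  # 0.075 -> 7.5 cents/gigabyte/year
print(4 * per_gb_year * 100)        # 4 copies of a 100gb film: $30/year
```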

Online storage is live. You can check the integrity regularly, all the time. You can either leave the disks off and spin them up every few days (to save power) or just leave them on all the time. If one, two or three of the 4 disks fail, computers can copy the data to fresh disks in the network, and you stay alive. Your disks should last 3 to 4 years, but many will last much longer. You need a computer system to control all this, but you only need one for the entire cloud of NAS boxes, or at most a few. Its cost is low.

The real cost is people. But companies like Google have solved the problem of running large server farms. They tolerate single drive failures. The computers copy the data to new drives right away, and technicians go by every few days to pull old ones and slot in fresh ones for the next need — not for the same file. This takes just a few minutes of the tech’s time. And there is no rush to their work. For each 100gb file, you should expect to need a replacement about once every 4 years (i.e. the lifetime of an average drive).

Now all this is at today’s price of $100 for a 500gb drive. But that’s dropping fast, faster than Moore’s law. The replacements will be 1TB and 2TB drives before long, and the cost will continue to fall. And this is with 4 copies of every file. You can actually get by with less using modern data distribution algorithms, which can scatter a 100gb file into 200 1gb pieces, of which more than half must be lost before the whole file is lost. Several data centers could burn down without losing any files if things are done right. I have not accounted for bandwidth for replacements here, which would usually happen within the same data center except in unusual circumstances.

The biggest cost is the people to set all this up. However, presuming big demand, the cost per gigabyte for those people should become modest.

Writers' Strike threatening Porn Industry

The strike by screenwriters in the Porn Writers Guild of America is wreaking a less public havoc on the pornography industry. Porn writers, concerned about declining revenue from broadcast TV, also seek a greater share of revenue from the future growth areas of DVD and online sales.

“Online sales and DVD may one day be the prime sources of revenue in our industry,” stated union spokesman Seymour Beaver. “We want to be sure we get our fair share of that for providing the writing that makes this industry tick.”

“It’s getting terrible,” reported one porn consumer who refused to give his name. “I just saw Horny Nurses 14 and I have to tell you it was just a rehash of the plots from Horny Nurses 9 and 11. It’s like they didn’t even have a writer.”

“Fans are not going to put up with movies lacking in plot, character and dialogue, and that’s what they’ll get if they don’t meet our terms,” said Beaver. Beaver, who claims to have a copyright on the line “Oh yes, baby, do it just like that, oh yeah,” says he will not allow use of his lines without proper payment of residuals.

Some writers also fear that the move to online will result in customers simply downloading individual scenes rather than seeking movies with a cohesive story thread that makes you care about the characters. “I saw one movie with 5 scenes, and no character was in 2 of them,” complained one writer.

“What do people want? Movies where the actors just walk into a room, strip and just go at it? Where they always start with oral sex, then doggy, and then a money shot? Fans will walk if that’s all they get,” according to PWGA member Dick Member. “And don’t think about doing the lonely housewife and the pool-boy again. I own that.”

An industry spokesman said they had not yet seen any decline in revenues due to the strike, as they have about 2 million already-written scripts on the shelves. In addition, Hot Online Corporation spokesman Ivana Doit claimed their company is experimenting with a computer program that creates scripts through a secret algorithm. Scripts penned by the computer have already brought in a million in sales, claims Doit, but she would not indicate which films this applied to.

Converting vinyl to digital, watch the tone arm

After going through the VHS-to-digital process, which I lamented earlier, I started wondering what the state of digitizing old vinyl albums and tapes is.

There are a few turntable/CD-writer combinations out there, but like most people today, I’m interested in the convenience of compressed digital audio, which means I don’t want to burn to CDs at all; nor would I want to burn 70-minute CDs I have to change all the time just so I can compress later. But all this means I am probably not looking for audiophile quality, or I wouldn’t be making MP3s at all. (I might be making FLACs or sampling at a high rate, I suppose.)

What I would want is convenience and low price. Because if I have to spend $500 I probably would be better off buying my favourite 500 tracks at online music stores, which is much more convenient. (And of course, there is the argument over whether I should have to re-buy music I already own, but that’s another story. Some in the RIAA don’t even think I should be able to digitize my vinyl.)

For around $100 you can get a “USB turntable.” I don’t have one yet, but the low-end ones are very simple — a basic turntable with a USB sound chip in it. They just have you record into Audacity. Nothing very fancy. But I feel this is missing something.

Just as the VHS/DVD combo can make use of information like tape speed and length, index marks and blank tape, so should our album recorder. It should have a simple sensor to track the tone arm as it moves over the album (for example, a disk on the axis of the arm with rings of very fine lines and an optical sensor). It should be able to tell us when the album starts, when it ends, and also detect those 2-second periods between tracks when the tone arm is suddenly moving inward much faster than it normally does. That’s a far better way to break the album into tracks than silence detection. (Of course, you can also use CDDB/freedb to get track lengths, but they are never perfect, so combining arm motion, net data and silence detection should get you perfect track splits.) It would also detect skips and repeats this way.
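Here is a sketch of what that detection might look like in software, assuming the sensor yields periodic (time, radial position) samples; the thresholds are invented:

```python
# Hypothetical track-gap detector for a sensored tone arm. Input is a
# list of (seconds, arm radius in mm from the spindle) samples.
NORMAL_INWARD_MM_PER_S = 0.01   # ordinary groove-pitch crawl (assumed)
GAP_FACTOR = 5.0                # how much faster counts as a gap (assumed)


def track_boundaries(samples: list[tuple[float, float]]) -> list[float]:
    """Return times at which the arm sweeps inward abnormally fast."""
    boundaries: list[float] = []
    for (t0, r0), (t1, r1) in zip(samples, samples[1:]):
        inward_speed = (r0 - r1) / (t1 - t0)   # radius shrinks toward spindle
        if inward_speed > GAP_FACTOR * NORMAL_INWARD_MM_PER_S:
            if not boundaries or t1 - boundaries[-1] > 2.0:
                boundaries.append(t1)   # merge hits within one 2-second gap
    return boundaries
```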

The scarcity of Talent

At Supernova 2007, several of us engaged Andrew Keen over his controversial book "The Cult of the Amateur." I will admit to not yet having read the book. Reviews in the blogosphere are scathing, but of course the book is entirely critical of the blogosphere so that's not too unexpected.

However, one of the things Keen said he worries about is what he calls the "scarcity of talent." He believes the existing "professional" media system did a good enough job of encouraging, discovering and promoting the talent that's out there, and so the world gets nothing more than slush from all the new online media. The amount of talent, he felt, was very roughly constant.

I presented one interesting counter to this concept. I am from Canada. As you probably know, we excel at hockey. Per capita certainly, and often on an absolute scale, Canada will beat any other nation at hockey. This is only in part because of the professional leagues. We all play hockey when we are young, with no formal organization, and the result is that more talented players arise. The same is true for the USA in baseball but not in soccer, and so on.

This suggests that however much one might view YouTube as a vaster wasteland of terrible video, the existence of things like YouTube will eventually generate more and better videographers, and the world will be richer for it, at least if the world wants videographers. One could argue this just takes them away from something else, but I doubt that accounts for all of it.

The Efficiency of Attention in Advertising

I’ve written before about the problems with TV advertising. Recently I’ve been thinking more about the efficiency of various methods of advertising — to the target, not to the advertiser. Almost all studies of advertising concern how effectively advertising turns into leads or sales, but rarely are the interests of the target of the ad considered directly.

I think that has to change, because we’re getting more tools to avoid advertising and getting more resistant. I refuse to watch TV with ads, because at $1.20 per hour of advertising watched, it’s a horrible bargain. I would rather pay if I could, and do indeed buy the DVDs in many cases, but mostly my MythTV skips the ads for me. The more able I am to do this, the more my desires as a target must be addressed.

Advertising isn’t totally valueless to the target. In fact, Google feels one big reason for their success is that they deliver ads you might actually care to look at. There are other forms of advertising with the same mantra out there, and they tend to do well, such as movie trailers and Superbowl ads.

Consider a video ad lasting 30 seconds, with a $10 CPM. That means the advertiser pays one cent per viewer of the ad. The viewer spends 30 seconds. On the other hand, a box with 3 or 4 Google ads, as you might see on this page, is typically scanned in well under a second. These ads also earn (as a group) about a $10 CPM though they are paid per click. Google doesn’t publish numbers, but let’s assume a $10 CPM and a 1% click-through on the box. It’s actually higher than this.

In the 30 seconds a TV ad takes, I can peruse perhaps 50 boxes, bars or banners of web ads. That will expose me to over 100 product offers that in theory match my interests, compared to 1 for the video ad. The video ad will of course be far more convincing as it is getting so much attention, but in terms of worthwhile products offered to me per second, it’s terrible.
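Quantifying that with the numbers assumed above (a rough model, not measured data):

```python
# Offers seen per second of attention, using the post's assumed figures.
video_offers_per_second = 1 / 30          # one product pitch in 30 seconds
web_offers_per_second = (50 * 3) / 30     # 50 boxes of ~3 ads in the same 30 s
print(web_offers_per_second / video_offers_per_second)   # 150x the offers
```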

It isn’t quite this simple though, since I will click on one ad for every minute spent looking at ads (not every minute on the web), and perhaps spend another minute looking in detail at what the ad has to offer. That particular, very well targeted site gains the wealth of attention the video ad demands, but far more efficiently.

I think this area is worthy of more study in the industry, and I think it’s a less understood reason why Google is getting rich and old media are running scared. In the future, people will tolerate advertising less and less unless it is clear to them what value they are getting for it. Simply being able to get free programming is not the value we’re looking for, or if it is, we want a better deal — more programming in exchange for our valuable attention. But we want more than that better deal. We want to be advertised to efficiently, in a way that considers our needs and value. The companies that get that will win; the dinosaurs will find themselves in the movie “The Sixth Sense” — dead people who don’t know they’re dead.

Making instruments with the human voice

The human voice is a pretty versatile instrument, and many skilled vocalists have been able to do convincing imitations of other sounds, and we’ve all heard “human beat box” artists work with a microphone to do great sounds.

That got me thinking, could we train a choir to work together to sound like anything, starting with violins, and perhaps even a piano or more?

The idea would be to get some vocalists to make lots of sounds, both pure tones and more complex ones, and break them apart with spectrum analysis.   Do the same for the target sound — try to break it up into components that might be made by human vocal cords with appropriate spectrum analysis.

Then find a way to easily add the human sounds together to sound like the instrument.  Each singer might focus on one of the harmonics or other tonal qualities of the instrument.  Do it first in the computer, and then see if the people can do it together, without being distracted.  Then work on doing the attack and decay and other artifacts of the start and end of notes.
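A sketch of the first, computer-only step, assuming numpy: find the strongest partials of a recorded instrument note, which are what you would then assign to individual singers. The vocal-range limits are my guesses:

```python
# Decompose a target instrument note into its loudest singable partials.
# `note` is a mono float signal sampled at `rate` Hz.
import numpy as np


def strongest_partials(note: np.ndarray, rate: int, n_singers: int = 8):
    """Return (frequency, amplitude) pairs for the loudest singable partials."""
    spectrum = np.abs(np.fft.rfft(note * np.hanning(len(note))))
    freqs = np.fft.rfftfreq(len(note), d=1.0 / rate)
    singable = (freqs > 80) & (freqs < 1100)   # rough human vocal range
    loudest = np.argsort(spectrum * singable)[-n_singers:]
    return sorted((freqs[i], spectrum[i]) for i in loudest)
```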

If it all worked, it would be a fun gag for a choir to suddenly sound like a piano or violin playing a popular piece.   Purer tones like a flute might be harder than complex tones.  Percussion is obviously possible though it might need some amplification.  Indeed, amplification to adjust the levels properly might help a lot but would be slightly more artificial than hearing this without any electronics.   Who knows, perhaps a choir could even sound like an orchestra playing the opening to Beethoven’s 5th, something everybody knows well.

Please release HD movies on regular DVDs

If you’ve looked around, you’ve probably noticed that a high-def DVD player, be it HD-DVD or Blu-ray, is expensive. Expect to pay $500 or so unless you get one bundled with a game console, where they are subsidized.

Now they won’t follow this suggestion, but the reality is they didn’t need to make the move to these new DVD formats. Regular old DVD can actually handle pretty decent HDTV movies. Not as good as the new formats, but a lot better than plain DVD. I’ve seen videos with the latest codecs that pack quite a nice HD picture into 2.5 to 3 gigabytes for an hour. I’ve even seen it in less, down to 1.5 gigabytes (actually less than SD DVDs) at 720p 24fps, though you do notice some problems. But it’s still way better than a standard DVD. Even so, a dual layer DVD holds about 8.5gb, and a double-sided dual layer DVD gives you 17gb if you are willing to flip the disk over to get at special features or the 2nd half of a very long movie. Or of course just do 2-disk sets.
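The arithmetic behind those figures, for the curious:

```python
# What the quoted file sizes mean in bitrate and disk capacity.
gb_per_hour = 3.0
mbps = gb_per_hour * 8 * 1000 / 3600        # ~6.7 megabits/second
hours_per_side = 8.5 / gb_per_hour          # ~2.8 hours per dual-layer side
print(round(mbps, 1), round(hours_per_side, 1))
```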

Now you might feel that the DVD industry would not want to make a new slew of regular DVD players with the fancier chips needed to decode these MP4 codecs when something clearly better is around the corner. And if they did, it would delay adoption of whatever high-def DVD format they were backing in the format wars. But in fact, these disks would be playable already, with no change, by the millions who watch DVDs on laptops and media center PCs. That's more people than will have HD-DVD or Blu-ray for some time to come, even with the boost the Playstation 3 gives to Blu-ray.

A made-up backstory for Battlestar Galactica

When I watch SF TV shows, I often try to imagine a backstory that might make the story even better and more SF-like. My current favourite show is Battlestar Galactica, which is one of those shows where a deep mystery is slowly revealed to the audience.

So based on my own thoughts, and other ideas inspired from newsgroups, I’ve jotted down a backstory to explain the results you see in the show. Of course, much of it probably won’t end up being true, but there are hints that some of it might.

In my Battlestar Galactica back-story I explain:

  • Why everybody — even the so-called humans — is a Cylon
  • Who the Final 5 are and what they are doing
  • Why all this has happened before and is happening again
  • How the Cylons were made, and where they got their biotech

Of course, ignore this if you don’t watch the show. It’s pure fanfic/speculation.

The show remains one of the great SF TV shows, though it has been bogging down of late. This timeline may be a plea to return the show to some good hard SF roots. Posthumanism and strife between humans and AIs are hot themes in modern SF, and BSG is most interesting if it’s set in our future with things to say about the relationship between man, machine and artificial biological intelligence.

Update: I have updated the article based on the season finale, which confirmed a number of my speculations though of course not all of them.

Peerflix goes to dollar prices

I have written several times before about Peerflix — now that I’ve started applying some tags as well as categories to my items, you can see all the Peerflix stories using that link — and about the issues behind doing a P2P media trading/loaning system. Unlike my own ideas in this area, Peerflix took a selling approach. You sold and bought DVDs, initially for their own internal currency: 3 “Peerbux” for new releases, 2 for older ones, and 1 for bargain-bin disks.

That system, however, was failing. You would often be stuck for months or more with an unpopular disk. Getting box sets was difficult. So in December they moved to pricing videos in real dollars. I found that interesting because it makes them, in a way, much closer to a specialty eBay. There are still a lot of differences from eBay — only unboxed disks are traded, they provide insurance for broken disks, and most importantly, they set the price on disks.

One can trade DVDs on eBay fairly efficiently, but it requires a lot of brain effort because you must put time into figuring good bid and ask prices for items of inconsequential price. Peerflix agreed that this is probably a poor idea, so they decided to set the prices. I don’t know how they set their initial prices, but it may have been by looking at eBay data or similar information.

It's OK, the internet will scale fine

I’ve been seeing a lot of press lately worrying that the internet won’t be able to handle the coming video revolution, that as more and more people try to get their TV via the internet, it will soon reach a traffic volume we don’t have capacity to handle. (Some of this came from a Google TV exec’s European talk, though Google has backtracked a bit on that.)

I don’t actually believe that, even given the premise behind that statement, which is traditional centralized download from sites like YouTube or MovieLink. I think we have the dark fiber and other technology already in place, with terabits over fiber in the lab, to make this happen.

However, the real thing that they’re missing is that we don’t have to have that much capacity. I’m on the board of Bittorrent Inc., which was created to commercialize the P2P file transfer technology developed by its founder, and Monday we’re launching a video store based on that technology. But in spite of the commercial interest I may have in this question, my answer remains the same.

The internet was meant to be a P2P network. Today, however, most people download more than they upload, and have a connection which reflects this. But even with the reduced upload capacity of home broadband, there is still plenty of otherwise unused upstream sitting there ready. That’s what Bittorrent and some other P2P technologies do: they take this upstream bandwidth, which was not being used before, and use it to feed a desired file to other people wishing to download it. It’s a trade: you get pieces from others and they get pieces from you. It allows a user with an ordinary connection to publish a giant file where this would otherwise be impossible.

Yes, as the best technology for publishing large files on the cheap, it does get used by people wanting to infringe copyrights, but that’s because it’s the best, not because it inherently infringes. It also has a long history of working well for legitimate purposes: it is one of the primary means of publishing new Linux distros today, and will be distributing major Hollywood studio movies on Feb 26.

Right now the clients connect with whoever they can connect with, but they favour other clients that send them lots of stuff. That biases them towards other clients to whom they have a good connection. While I don’t set the tech roadmap for the company, I expect that over time the protocol will become aware of network topology, so that it does an even better job of mostly peering with network neighbours: customers of the same ISP, or students at the same school, for example. There is tons of bandwidth available on the internal networks of ISPs, and it’s cheap to provide there; more than enough for everybody to have a few megabits for a few hours a day to get their HDTV. In the future, an ideal network cloud would send each file just once over any external backbone link, or at most once every few days, becoming almost as efficient as multicasting.
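To be clear, this is not how the protocol picks peers today; it’s just a sketch of the locality bias I mean, using shared address prefix as a crude stand-in for network distance:

```python
# Crude sketch of topology-aware peer preference (not BitTorrent's actual
# selection logic): prefer peers whose IPv4 address shares the longest
# prefix with ours, a rough proxy for "same ISP or neighbourhood".
import ipaddress


def prefix_score(a: str, b: str) -> int:
    """Number of leading bits two IPv4 addresses share."""
    xa, xb = int(ipaddress.ip_address(a)), int(ipaddress.ip_address(b))
    return 32 - (xa ^ xb).bit_length()


def rank_peers(me: str, candidates: list[str]) -> list[str]:
    return sorted(candidates, key=lambda p: prefix_score(me, p), reverse=True)


print(rank_peers("24.36.10.5", ["81.2.69.1", "24.37.0.9", "24.36.99.7"]))
# ['24.36.99.7', '24.37.0.9', '81.2.69.1']
```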

(Indeed, we could also make great strides if we were to finally get multicasting deployed, as it does a great job of distributing the popular material that still makes up most of the traffic.)

So no, we’re not going to run out. Yes, a central site trying to broadcast the Academy Awards to 50 million homes won’t be able to work. And in fact, for cases like that, radio broadcasting and cable (or multicasting) continue to make the most sense. But if we turn up the upstream, there is more than enough bandwidth to go around within every local ISP network. Right now most people buy ADSL, but in fact it’s not out of the question that we might see devices in this area become soft-switchable as to how much bandwidth they do up and how much down, so that if upstream is needed, it can be had on demand. It doesn’t really matter to the ISP — in fact, since most users don’t do upstream normally, they have wasted capacity out to the network unless they also do hosting to make up for it.

There are some exceptions to this. In wireless ISP networks there is no separate up and downstream, and that’s also true on some ethernets. For wireless users, it’s better to have a central cache just send the data, or to use multicasting. But for the wired users it’s all 2-way, and if the upstream isn’t used, it just sits there when it could be sending data to another customer on the same DSLAM.

So let’s not get too scared. And check out the early version of bittorrent’s new entertainment store and do a rental download (sadly only with Windows XP based DRM, sigh — I hope for the day we can convince the studios not to insist on this) of multiple Oscar winner “Little Miss Sunshine” and many others.

How to stop people from putting widescreen TVs in stretch mode

(Note I have a simpler article for those just looking for advice on how to get their Widescreen TV to display properly.)

Very commonly today I see widescreen TVs being installed, both HDTV and normal. Flat panel TVs are a big win in public places since they don’t have the bulk and weight of the older ones, so this is no surprise, even in SDTV. And they are usually made widescreen, which is great.

Yet almost all the time, I see them configured to take standard def TV programs, which are made for a 4:3 aspect ratio, and stretch them to fill the 16:9 screen. As a result everybody looks a bit fat. The last few hotel rooms I have stayed in have had widescreen TVs configured like this. Hotel TVs lock you out of the setup mode, offering a remote control that includes the special hotel menus and pay-per-view movie rentals, so you can’t change it. I’ve called down to the desk to get somebody to fix the TV, and they often don’t know what I’m talking about; if somebody does come, it takes quite a while to find someone who understands the problem.

This is probably because I routinely meet people who claim they want to set their TV this way. They just “don’t like” having the blank bars on either side of the 4:3 picture that you get on a widescreen TV. They say they would rather see a distorted picture than see those bars. Perhaps they feel cheated that they aren’t getting to use all of their screen. (Do they feel cheated with a letterbox movie on a 4:3 TV?)

It is presumably for those people that the TVs are set this way. For broadcast signals, a TV should be able to figure out the aspect ratio. NTSC broadcasts are all in 4:3, though some are letterboxed inside the 4:3 which may call for doing a “zoom” to expand the inner box to fill the screen, but never a “stretch” which makes everybody fat. HDTV broadcasts are all natively in widescreen, and just about all TVs will detect that and handle it. (All U.S. stations that are HD always broadcast in the same resolution, and “upconvert” their standard 4:3 programs to the HD resolution, placing black “pillarbox” bars on the left and right. Sometimes you will see a program made for SDTV letterbox on such a channel, and in that case a zoom is called for.)

The only purpose the “stretch” function has is for special video sources like DVD players. Today, almost all widescreen DVDs use the superior “anamorphic” widescreen method, where the full DVD frame is used, as it is for 4:3 or “full frame” DVDs. Because TVs have no way to tell DVD players what shape they are, and DVD players have no way to tell TVs whether the movie is widescreen or 4:3, you need to tell one or both of them about the arrangement. That’s a bit messy. If you tell a modern DVD player what shape TV you have, it will do OK because it knows what type of DVD it is. DVD players, presented with a widescreen movie and a 4:3 TV will letterbox the movie. However, if you have a DVD player that doesn’t know what type of TV it is connected to, and you play a DVD, you have to tell the TV to stretch or pillarbox. This is why the option to stretch is there in the first place.
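The correct decision is simple enough to state in code. Here is a sketch of the logic a TV or player could apply; note that "stretch" never appears as an answer:

```python
# Sketch of sane display framing: fit by aspect ratio, never stretch.
def framing(source_w: int, source_h: int, screen_w: int, screen_h: int) -> str:
    source_ar = source_w / source_h
    screen_ar = screen_w / screen_h
    if abs(source_ar - screen_ar) < 0.01:
        return "fill"        # shapes already match
    if source_ar < screen_ar:
        return "pillarbox"   # e.g. a 4:3 show on a 16:9 panel
    return "letterbox"       # e.g. a 2.39:1 film on a 16:9 panel


print(framing(4, 3, 16, 9))    # pillarbox
print(framing(16, 9, 16, 9))   # fill
```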

However, now that the stretch mode is there, people are using it in really crazy ways. I would personally disable stretch for any source that is not a direct video input from a player, but as I said, people are actually asking for the image to be incorrectly stretched to avoid seeing the bars.

So what can we do to stop this, and to get the hotels and public TVs to be set right, aside from complaining? Would it make sense to create “cute” pillarbars perhaps with the image of an old CRT TV’s sides in them? Since HDTVs have tons of resolution, they could even draw the top and bottom at a slight cost of screen size, but not of resolution. Some TVs offer the option of gray, black and white pillars, but perhaps they can make pillars that somehow match the TV’s frame in a convincing way, and the frame could even be designed to blend with the pillars.

Would putting up fake drapes do the job? In the old days of the cinema, movies came in different widths sometimes, and the drapes would be drawn in to cover the left and right of the screen if the image was going to be 4:3 or something not as wide. They were presumably trying to deal with the psychological problem people have with pillarbars.

Or do we have to go so far as to offer physical drapes or slats which are pulled in by motors, or even manually? The whole point of flatscreen TVs is we don’t have a lot of room to do something like this, which is why it’s better if virtual. And of course it’s crazy to spend the money such things would cost, especially if motorized, to make people feel better about pillarbars.

I should also note that most TVs have a “zoom” mode, designed to take shows that end up both letterboxed and pillarbarred and zoom them to properly fit the screen. That’s a useful feature to have — but I also see it being used on 4:3 content to get rid of the pillarbars. In this case at least the image isn’t stretched, but it does crop off the top and bottom of the image. Some programs can tolerate this fine (most TV broadcasts expect significant overscan, meaning that the edges will be behind the frame of the TV) but of course on others it’s just as crazy as stretching. I welcome other ideas.

Update: Is it getting worse, rather than better? I recently flew on Virgin America airlines, which has widescreen displays on the back of each seat. They offer you movies (for $8) and live satellite TV. The TV is stretched! No setting box to change it, though if you go to their “TV chat room” you will see it in proper aspect, at 1/3 the size. I presume the movies are widescreen at least.

Darfur movie, with white actors

There’s a great tragedy going on in the Sudan, and not much is being done about it. Among the people trying to get out the message are Hollywood celebrities. I am not faulting them for doing that, but I have a suggestion that is right up their alley.

Which is to make a movie to tell the story, a true movie that is, hopefully as moving as a Schindler’s List or The Pianist. Put the story in front of the first-world audience.

And, I suggest with a sad dose of cynicism, do it with whitebread American actors. Not that African actors can’t do a great job and make a moving film like Hotel Rwanda. I just have a feeling that first-world audiences would be more affected if they saw it happening to people like them, rather than to people who live in tiny, poor Muslim villages in a remote desert. The skin colour is only part of what seems to have distanced this story to the point that little is being done. We may have to never again believe that people will keep the vow of “never again.”

So change the setting a bit and the people, but keep the story and the atrocities, and perhaps it can have the same effect that seeing a Schindler’s List can have on white, Euro-descended Jews and non-Jews. And the Hollywood folks would be doing exactly what they are best at.

Time for RSS and the aggregators to understand small changes

Over 15 years ago I proposed that USENET support the concept of “replacing” an article (which would mean updating it in place, so people who had already read it would not see it again) in addition to superseding an article, which presented the article as new to those who read it before, but not in both versions to those who hadn’t. Never did get that into the standard, but now it’s time to beg for it in USENET’s successor, RSS and cousins.

I’m tired of the fact that my blog reader offers only two choices — see no updates to articles, or see the articles as new when they are updated. Often the updates are trivial — even things like fixing typos — and I should not see them again. Sometimes they are serious additions or even corrections, and people who read the old one should see them.

Because feed readers aren’t smart about this, it not only means annoying minor updates, but also that people are hesitant to make minor corrections, because they don’t want to make everybody see the article again.

Clearly, we need a checkbox in updates to say if the update is minor or major. More than a checkbox, the composition software should be able to look at the update, and guess a good default. If you add a whole paragraph, it’s major. If you change the spelling of a word, it’s minor. In addition to providing a good guess for the author, it can also store in the RSS feed a tag attempting to quantify the change in terms of how many words were changed. This way feed readers can be told, “Show me only if the author manually marked the change as major, or if it’s more than 20 words” or whatever the user likes.
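A sketch of how composition software might make that guess, using Python’s difflib; the 20-word threshold is just the example above:

```python
# Guess a default minor/major flag for a post edit from how many words changed.
import difflib


def words_changed(old: str, new: str) -> int:
    matcher = difflib.SequenceMatcher(a=old.split(), b=new.split())
    return sum(max(a1 - a0, b1 - b0)
               for op, a0, a1, b0, b1 in matcher.get_opcodes()
               if op != "equal")


def default_update_kind(old: str, new: str, threshold: int = 20) -> str:
    return "major" if words_changed(old, new) >= threshold else "minor"


print(default_update_kind("teh best answer", "the best answer"))   # minor
```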

Wikis have had the idea of a minor change checkbox for a while, it’s time for blogs to have it too.

Of course, perhaps better would be a specific type of update or new post that preserves thread structure, so that an update is a child of its parent post. It would then be seen together with the parent by those who have not yet seen the parent, but as an update on its own for those who did see it. For those who skipped the parent (if we know they skipped it), the update also need not be shown.

Please don't videoblog (vlog)

At the blogger panel at Fall VON (repurposed to cover both video on the net and voice), Vlogger and blip.tv advocate Dina Kaplan asked bloggers to start vlogging. It’s started a minor debate.

My take? Please don’t.

I’ve written before on what I call the reader-friendly vs. writer-friendly dichotomy. My thesis is that media make choices about where to be on that spectrum, though ideal technology reduces the compromises. If you want to encourage participation, as in Wikis, you go for writer friendly. If you have one writer and a million readers, like the New York Times, you pay the writer to work hard to make it as reader friendly as possible.

When video is professionally produced and tightly edited, it can be reader (viewer) friendly. In particular if the video is indeed visual. Footage of tanks rolling into a town can convey powerful thoughts quickly.

But talking head audio and video has an immediate disadvantage: I can read material ten times faster than I can listen to it. At least podcasts can be listened to while jogging or moving about, when you can’t do anything else, but video has to be watched. If you’re just going to say your message, you’re putting quite a burden on me by forcing me to take 10 times as long to consume it — and usually not be able to search it, quickly move around within it, or scan it as I can with text.

So you must overcome that burden. And most videologs don’t. It’s not impossible to do, but it’s hard. Yes, video allows better expression of emotion. Yes, it lets me learn more about the person as well as the message. (Though that is often mostly for the ego of the presenter, not for me.)

Recording audio is easier than writing well. It’s writer friendly. Video has the same attribute if done at a basic level, though good video requires some serious work. Good audio requires real work too — there’s quite a difference between “This American Life” and a typical podcast.

Indeed, there is already so much pro quality audio out there like This American Life that I don’t have time to listen to the worthwhile stuff, which makes it harder to get my attention with ordinary podcasts. Ditto for video.

There is one potential technological answer to some of these questions. Anybody doing an audio or video cast should provide a transcript. That’s writer-unfriendly but very reader friendly. Let me decide how I want to consume it. Let me mix and match by clicking on the transcript and going right to the video snippet.

With the right tools, this could be easy for the vlogger to do. Vlogger/podcaster tools should all come with trained speech recognition software which can reliably transcribe the host, and with a little bit of work, even the guest. Then a little writer-work to clean up the transcript and add notes about things shown but not spoken. Now we have something truly friendly for the reader. In fact, speaker-independent speech recognition is starting to almost get good enough for this but it’s still obviously the best solution to have the producer make the transcript. Even if the transcript is full of recognition errors. At least I can search it and quickly click to the good parts, or hear the mis-transcribed words.
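The data such tools would have to emit is tiny. A sketch of a searchable transcript index (the format is invented for illustration):

```python
# Hypothetical transcript index: timed segments you can search, then seek
# the player to the hit.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Segment:
    start_seconds: float
    text: str


def seek_offset(transcript: list[Segment], query: str) -> Optional[float]:
    """Where to seek the video for the first mention of `query`."""
    for seg in transcript:
        if query.lower() in seg.text.lower():
            return seg.start_seconds
    return None


talk = [Segment(0.0, "welcome to the show"),
        Segment(42.5, "the tanks roll into the town square")]
print(seek_offset(talk, "tanks"))   # 42.5
```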

If you’re making podcaster/vlogger tools, this is the direction to go. In addition, it’s absolutely the right thing for the hearing or vision impaired.

VAD (Video After Demand) instead of VoD

In an earlier blog post I attempted to distinguish TVoIP (TV over internet) from IPTV, a buzzword for cable/telco live video offerings. My goal was to explain that we can be very happy with TV, movies and video that come to us over the internet after some delay.

The two terms aren’t really very explanatory, so now I suggest VAD, for Video After Demand. Tivo and Netflix have taught us that people are quite satisfied if they pick their viewing choices in advance, and then later — sometimes weeks or months later — get the chance to view them. The key is that when they sit down to watch something, they have a nice selection of choices they actually want to see.

The video on demand dream is to give you complete live access to all the video in the world that’s available. Click it and watch it now. It’s a great dream, but it’s an expensive one. It needs fast links with dedicated bandwidth. If your movie viewing is using 4 of your 6 megabits, somebody else in the house can’t use those megabits for web surfing or other interactive needs.

With VAD you don’t need much in your link. In fact, you can download shows that you don’t have the ability to watch live at all, or get them at higher quality. You just have to wait. Not staring at a download bar, of course, nobody likes that, but wait until a later watching session, just as you do when you pick programs to record on a PVR like the Tivo.

I said these things before, but the VAD vision is remarkably satisfying and costs vastly less, both to the consumer and to those building out the networks. It can be combined with IP multicasting (someday) to be tremendously efficient. (Multicasting can be used for streaming, but if packets are lost you have only a limited time to recover them, based on how big your buffer is.)

Better handling of reading news/blogs after being away

I’m back from Burning Man (and Worldcon), and though we had a decently successful internet connection there this time, you don’t want to spend time at Burning Man reading the web. This presents an instance of one of the oldest problems in the “serial” part of the online world: how do you deal with the huge backlog of stuff to read from tools that expect you to read regularly?

You get backlogs of your E-mail of course, and your mailing lists. You get them for mainstream news, and for blogs. For your newsgroups and other things. I’ve faced this problem for almost 25 years as the net gave me more and more things I read on a very regular basis.

When I was running ClariNet, my long-term goal list always included a system that would attempt to judge the importance of a story as well as its topic areas. I had two goals in mind for this. First, you could tune how much news you wanted about a particular topic in ordinary reading: by setting how important each topic was to you, a dot-product of your own priorities and the importance ratings of the stories would bring the news most important to you to the top. Secondly, the system would know how long it had been since you last read news, and could dial down the volume to show you only the most important items from the time you were away. News could also simply be presented in importance order, and you could read until you got bored.
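In code, the scheme is just a dot product plus a volume knob; a sketch, with the away-time scaling invented:

```python
# ClariNet-style priority reading: score stories by the dot product of
# editor-assigned topic importances and the reader's own priorities, and
# raise the cutoff the longer the reader has been away.
def score(story_topics: dict[str, float], user_prefs: dict[str, float]) -> float:
    return sum(w * user_prefs.get(topic, 0.0)
               for topic, w in story_topics.items())


def catch_up(stories: list[dict], user_prefs: dict[str, float],
             days_away: float) -> list[dict]:
    cutoff = min(0.9, 0.1 * days_away)   # longer away -> higher bar (invented)
    ranked = sorted(stories, key=lambda s: score(s["topics"], user_prefs),
                    reverse=True)
    return [s for s in ranked if score(s["topics"], user_prefs) >= cutoff]
```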

There are options for doing this with non-news items, which lack professional editors to rank them. One advantage you get when items (be they blog posts or news) age is the chance to gather data on reading habits. You can tell which stories are most clicked on (though not as easily with full RSS feeds) and which items get the most comments. Asking users to rate items is usually not very productive. Some of these techniques (like using web bugs to track readership) could be privacy invading, but they could be done through random sampling.

I propose, however, that one way or another, popular high-volume sites will need to find some way to prioritize their items for people who have been away a long time, and to regularly update these figures in their RSS feed or other database, so that readers have somewhere to start when they notice there are hundreds or even thousands of stories to read. This can include sorting using such data or, in its absence, just switching to headlines.

It’s also possible for an independent service to help here. Already several toolbars like Alexa and Google’s track net ratings, and get measurements of net traffic to help identify the most popular sites and pages on the web. They could adapt this information to give you a way to get a handle on the most important items you missed while away for a long period.

For E-mail, there is less hope. There have been efforts to prioritize non-list e-mail, mostly around spam, but people are afraid any real mail actually sent to them has to be read, even if there are 1,000 of them as there can be after two weeks away.
