Since 1992 I have had a long association with the Hugo Awards for SF & Fantasy given by the World Science Fiction Society/Convention. In 1993 I published the Hugo and Nebula Anthology, which was for some time the largest anthology of current fiction ever published, and one of the earliest major e-book projects. While I did it as a commercial venture, in the years to come it became the norm for the award organizers to publish a free electronic anthology of willing nominees for the voters.
This year, things are highly controversial, because a group of fans/editors/writers calling themselves the "Sad Puppies" had great success with a campaign to dominate the nominations for the awards. They published a slate of recommended nominations, and enough people sent in nominating ballots with that slate that it dominated most of the award categories. Some categories consist entirely of the slate; only one was unaffected. It's important to understand that nominating and voting on the Hugos is done by members of the World SF Society, which is to say people who attend the World SF Convention (Worldcon) or who purchase special "supporting" memberships which don't let you attend but do give you voting rights. This is a self-selected group, but in spite of that, it has mostly managed to run a reasonably independent vote to select the greatest works of the year. The group is not large, and in many categories it can take only a score or two of nominations to make the ballot, and victory margins are often small. As such, it has always been possible, and not even particularly hard, to subvert the process with any concerted effort. It's even possible to do it with money, because you can simply buy memberships which can nominate or vote, so long as a real, unique person is behind each ballot.
The nominating group is self-selected, but it's mostly a group that joins because they care about SF and its fandom, and as such, this keeps the award voting more independent than you would expect for a self-selected group. But this has changed.
The reasoning behind the Sad Puppy effort is complex and there is much contentious debate you can find on the web, and I'm about to get into some inside baseball, so if you don't care about the Hugos, or the social dynamics of awards and conventions, you may want to skip this post.
I'm sure you've seen it. Shop for something and pretty quickly, half the ads you see on the web relate to that thing. And you keep seeing those ads, even after you have made your purchase, sometimes for weeks on end.
Musings on the economies of cutting the cord.
Over the past 14 years, there has been only one constant in my TV viewing, and that's The Daily Show. I first loved it with Craig Kilborn, and even more under Jon Stewart. I've seen almost all of them, even after going away for a few weeks, because when you drop the interview and commercials, it's a pretty quick play. Jon Stewart's decision to leave got a much stronger reaction from me than any other TV show news, though I think the show will survive.
When Southwest started using tablets for in-flight entertainment, I lauded it. Everybody has been baffled by just how incredibly poor most in-flight video systems are. They tend to be very slow, with poor interfaces and low resolution screens. Even today it's common to face a small widescreen that takes a widescreen film, letterboxes it and then pillarboxes it, with only an option to stretch it and make it look wrong. All this driven by a very large box in somebody's footwell.
On Saturday I wrote about how we're now capturing the world so completely that people of the future will be able to wander around it in accurate VR. Let's go further and see how we might shoot the video resolutions of the future, today.
Recently I tried the Facebook/Oculus Rift Crescent Bay prototype. It has more resolution (I will guess 1280 x 1600 per eye or similar) and runs at 90 frames/second. It also has better head tracking, so you can walk around a small space with some realism -- but only a very small space. Still, it was much more impressive than the DK2 and a sign of where things are going. I could still see a faint screen-door effect; they were annoyed that I could.
The Olympics are coming up, and I have a request for you, NBC Sports. It's the 21st century, and media technologies have changed a lot. It's not just the old TV of the 1900s.
The blogging world was stunned by the recent announcement by Google that it will be shutting down Google Reader later this year. Due to my consulting relationship with Google I won't comment too much on their reasoning, though I will note that I believe it's possible the majority of regular readers of this blog, and many others, come via Google Reader, so this shutdown has a potentially large effect here. Of particular note is Google's statement that usage of Reader has been in decline, and that social media platforms have become the way to reach readers.
The effectiveness of those platforms is strong. I have certainly noticed that when I make blog posts and put up updates about them on Google Plus and Facebook, it is common that more people will comment on the social network than comment here on the blog. It's easy, and indeed more social. People tend to comment in the community in which they encounter an article, even though in theory the most visibility should be at the root article, where people go from all origins.
However, I want to talk a bit about online publishing history, including USENET and RSS, and the importance of concepts within them. In 2004 I first commented on the idea of serial vs. browsed media, and later expanded this taxonomy to include sampled media such as Twitter and social media in the mix. I now identify the following important elements of an online medium:
- Is it browsed, serial or to be sampled?
- Is there a core concept of new messages vs. already-read messages?
- If serial or sampled, is it presented in chronological order or sorted by some metric of importance?
- Is it designed to make it easy to write and post or easy to read and consume?
Online media began with E-mail and the mailing list in the 60s and 70s, with the 70s seeing the expansion to online message boards including Plato, BBSs, Compuserve and USENET. E-mail is a serial medium. In a serial medium, messages have a chronological order, and there is a concept of messages that are "read" and "unread." A good serial reader, at a minimum, has a way to present only the unread messages, typically in chronological order. You can thus process messages as they come, and when you are done with them, they move out of your view.
E-mail is largely used to read messages one-at-a-time, but the online message boards, notably USENET, advanced this with the idea of moving messages from unread to read in bulk. A typical USENET reader presents the subject lines of all threads with new or unread messages. The user selects which ones to read -- almost never all of them -- and after this is done, all the messages, even those that were not actually read, are marked as read and not normally shown again. While it is generally expected that you will read all the messages in your personal inbox one by one, with message streams it is expected you will only read those of particular interest, though this depends on the volume.
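The read/unread model described above can be sketched in a few lines. This is an illustrative toy with made-up names, not how any real newsreader is implemented; real ones (like trn) persist this state per-group in files such as a .newsrc.

```python
# Toy sketch of a serial reader's read/unread model.
class SerialFeed:
    def __init__(self):
        self.messages = []        # (msg_id, subject) in arrival order
        self.read_ids = set()     # ids the user has read or skipped past

    def post(self, msg_id, subject):
        self.messages.append((msg_id, subject))

    def unread(self):
        """Only unread messages, in chronological order."""
        return [m for m in self.messages if m[0] not in self.read_ids]

    def mark_read(self, msg_id):
        self.read_ids.add(msg_id)

    def catch_up(self):
        """USENET-style bulk mark: everything, read or not, becomes read."""
        self.read_ids.update(m[0] for m in self.messages)

feed = SerialFeed()
feed.post(1, "Welcome")
feed.post(2, "Meeting notes")
feed.mark_read(1)
print([s for _, s in feed.unread()])   # only "Meeting notes" remains
feed.catch_up()
print(feed.unread())                   # [] -- skipped items are gone too
```

The `catch_up` step is the key difference from an inbox: items you chose not to read still leave your view.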
Echoes of this can be found in older media. With the newspaper, almost nobody would read every story, though you would skim all the headlines. Once done, the newspaper was discarded, even the stories that were skipped over. Magazines were similar, but being less frequent, more stories would actually be read.
USENET newsreaders were the best at handling this mode of reading. The earliest ones had keyboard interfaces that allowed touch typists to process many thousands of new items in just a few minutes, glancing over headlines, picking stories and then reading them. My favourite was TRN, based on RN by Perl creator Larry Wall and enhanced by Wayne Davison (whom I hired at ClariNet in part because of his work on that.) To my great surprise, even as the USENET readers faded, no new tool emerged capable of handling a large volume of messages as quickly.
In fact, the 1990s saw a switch for most to browsed media. Most web message boards were quite poor and slow to use; many did not even do the most fundamental thing of remembering what you had read and offering a "what's new for me?" view. In reaction to the rise of browsed media, people wishing to publish serially developed RSS. RSS was a bit of a kludge, in that your reader had to regularly poll every site to see if something was new, but outside of mailing lists, it became the most usable way to track serial feeds. In time, people also learned to like doing this online, using tools like Bloglines (which became the leader and then foolishly shut down for a few months) and Google Reader (which also became the leader and is now shutting down). Online feed readers let you roam from device to device and read your feeds, and people like that.
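The polling kludge works roughly like this: on each poll the reader fetches the feed and diffs item identifiers against what it has already seen. A minimal sketch using only the standard library, with the feed XML inlined rather than fetched over HTTP (a real reader would poll each URL on a timer):

```python
# Sketch of the RSS polling model: diff item GUIDs against a "seen" set.
import xml.etree.ElementTree as ET

def new_items(feed_xml, seen_guids):
    """Return (title, guid) pairs not yet seen; update seen_guids."""
    root = ET.fromstring(feed_xml)
    fresh = []
    for item in root.iter("item"):
        guid = item.findtext("guid")
        if guid not in seen_guids:
            seen_guids.add(guid)
            fresh.append((item.findtext("title"), guid))
    return fresh

FEED = """<rss><channel>
  <item><title>Post one</title><guid>p1</guid></item>
  <item><title>Post two</title><guid>p2</guid></item>
</channel></rss>"""

seen = set()
print(new_items(FEED, seen))   # both items are new on the first poll
print(new_items(FEED, seen))   # [] -- nothing new until the feed changes
```

The inefficiency is visible here: every poll re-fetches and re-parses the whole feed just to discover, most of the time, that nothing changed.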
Last month, I invited Gregory Benford and Larry Niven, two of the most respected writers of hard SF, to come and give a talk at Google about their new book "Bowl of Heaven." Here's a Youtube video of my session. They did a review of the history of SF about "big dumb objects" -- stories like Niven's Ringworld, where a huge construct is a central part of the story.
I haven't bothered quickly reporting on the robocar story every other media outlet covered, the signing by Jerry Brown of California's law to enable robocars. For those with the keenest interest, the video of the signing ceremony has a short talk by Sergey Brin on some of his visions for the car where he declares that the tech will be available for ordinary people within 5 years.
I'm watching the Olympics, and my primary tool as always is MythTV. Once you do this, it seems hard to imagine watching them almost any other way. Certainly not real time with the commercials, and not even with other DVR systems. MythTV offers a really wide variety of fast forward speeds and programmable seeks. This includes the ability to watch at up to 2x speed with the audio still present (pitch adjusted to be natural) and a smooth 3x speed which is actually pretty good for watching a lot of sports. In addition you can quickly access 5x, 10x, 30x, 60x, 120x and 180x for moving along, as well as jumps back and forth by some fixed amount you set (like 2 minutes or 10 minutes) and random access to any minute. Finally it offers a forward skip (which I set to 20 seconds) and a backwards skip (I set it to 8 seconds.)
MythTV even lets you customize these numbers, so you can use different values for the Olympics than for other recordings. For example, the jumps are normally +/- 10 minutes, with a 30-second skip for commercials, though Myth also has automatic commercial skip.
A nice mode allows you to go to smooth 3x speed with closed captions, though it does not feature the very nice ability I've seen elsewhere of turning on CC when the sound is off (by mute or FF) and turning it off when sound returns. I would like a single button to put me into 3xFF + CC and take me out of it.
Anyway, this is all very complex, but well worth learning: once you master it you can consume your sports much, much faster than in other ways, which means you can see more of the sports that interest you, and less of the sports, commercials and heart-warming stories of triumph over adversity that you don't. With more than 24 hours a day of coverage, it is essential to have tools to help you do this.
I have a number of improvements I would like to see in MythTV like a smooth 5x or 10x FF (pre-computed in advance) and the above macro for CC/FF swap. In addition, since the captions tend to lag by 2-3 seconds it would be cool to have a time-sync for the CC. Of course the network, doing such a long tape delay, should do that for you, putting the CC into the text accurately and at the moment the words are said. You could write software to do that even with human typed captions, since the speech-recognition software can easily figure out what words match once it has both the audio and the words. Nice product idea for somebody.
Watching on the web
This time, various networks have put up extensive web offerings, and indeed on NBC this is the only way to watch many events live, or at all. Web offerings are good, though not quite at the quality of over-the-air HDTV, and quality matters here. But the web offerings have some failings.
It's been interesting to see how TV shows from the 60s and 70s are being made available in HDTV formats. I've watched a few of Classic Star Trek, where they not only rescanned the old film at better resolution, but also created new computer graphics to replace the old 60s-era opticals. (Oddly, because the relative budget for these graphics is small, some of the graphics look a bit cheesy in a different way, even though much higher in technical quality.)
The earliest TV was shot live. My mother was a TV star in the 50s and 60s, but this was before videotape was cheap. Her shows were all done live, and the only recording was a Kinescope -- a film shot off the TV monitor. These kinnies are low quality and often blown out. The higher budget shows were all shot and edited on film, and can all be turned into HD. Then broadcast-quality videotape got cheap enough that cheaper shows, and then even expensive shows, began being shot on it. This period will be known in the future as a strange resolution "dark ages" when the quality of the recordings dropped. No doubt they will find today's HD recordings low-res as well, and many productions are now being shot on "4K" cameras which have about 8 megapixels.
But I predict the future holds a surprise for us. We can't do it yet, but I imagine software will arise that can take old, low-quality videos and turn them into something better. It will do this by actually modeling the scenes that were shot, creating higher-resolution images and models of all the things which appear in the scene. For this to work, everything must move: either the object moves (as people do) or the camera must pan over it. In some cases having multiple camera views may help.
When an object moves relative to a video camera, it is possible to capture a static image of it at sub-pixel resolution. That's because the multiple frames can be combined to generate more information than is visible in any one frame. A video taken with a low-res camera that slowly pans over an object (in both dimensions) can produce a hi-res still. In addition, for most TV shows, a variety of production stills are also taken at high resolution, and from a variety of angles. They are taken for publicity, and also for continuity. If these exist, it makes the situation even easier.
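The core idea, that shifted frames carry sub-pixel information, can be shown with a toy 1-D example. This is a deliberately idealized sketch (known shifts, no blur or noise); real multi-frame super-resolution also has to estimate the shifts and deconvolve the camera's blur.

```python
# Toy 1-D multi-frame super-resolution: a "camera" that samples every
# other point records two frames offset by half its own pixel pitch.
# Knowing the shift, interleaving the frames recovers the full signal.

hi_res = [3, 1, 4, 1, 5, 9, 2, 6]   # the scene we wish we could record

frame_a = hi_res[0::2]              # camera samples at offset 0
frame_b = hi_res[1::2]              # same camera, shifted sub-pixel

recovered = []
for a, b in zip(frame_a, frame_b):
    recovered += [a, b]

print(recovered == hi_res)          # True: two shifted frames double resolution
```

Each low-res frame alone has lost half the detail; only the pair, plus knowledge of the shift between them, contains the full-resolution scene.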
In media today, it's common to talk about three screens: Desktop, mobile and TV. Many people watch TV on the first two now, and tools like Google TV and the old WebTV try to bring interactive, internet style content to the TV. People like to call the desktop the "lean forward" screen where you use a keyboard and have lots of interactivity, while the TV is the "lean back" couch-potato screen. The tablet is also distinguishing itself a bit from the small screen normally found in mobile.
More and more people also find great value in having an always-on screen where they can go to quickly ask questions or do tasks like E-mail.
I forecast we will soon see the development of a "fourth screen" which is a mostly-always-on wall panel meant to be used with almost no interaction at all. It's not a thing to stare at like the TV (though it could turn into one) nor a thing to do interactive web sessions on. The goal is to have minimal UI and be a little bit psychic about what to show.
One could start by showing stuff that's always of use. The current weather forecast, for example, and selected unusual headlines. Whether each member of the household has new mail, and if it makes sense from a privacy standpoint, possibly summaries of that mail. Likewise the most recent status from feeds on twitter or Facebook or other streams. One could easily fill a screen with these things so you need a particularly good filter to find what's relevant. Upcoming calendar events (with warnings) also make sense.
Some things would show only when important. For example, when getting ready to go out, I almost always want to see the traffic map. Or rather, I want to see it if it has traffic jams on it, no need to show it when it's green -- if it's not showing I know all is good. I may not need to see the weather if it's forecast sunny either. Or if it's raining right now. But if it's clear now and going to rain later I want to see that. Many city transit systems have a site that tracks when the next bus or train will come to my stop -- I want to see that, and perhaps at morning commute time even get an audio alert if something unusual is up or if I need to leave right now to catch the street car. A view from the security camera at the door should only show if somebody is at the door.
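The "show only when important" behaviour above amounts to a set of rules, each a condition plus something to display. A minimal sketch, with entirely made-up widget names and state fields:

```python
# Sketch of conditional widgets for the wall panel: each is a function
# that returns a line to display, or None when it isn't worth showing.

def traffic_widget(state):
    if state["traffic_jam"]:
        return "Traffic: jam on your usual route"

def weather_widget(state):
    if state["clear_now"] and state["rain_later"]:
        return "Weather: rain expected later today"

def door_widget(state):
    if state["person_at_door"]:
        return "Camera: someone at the front door"

WIDGETS = [traffic_widget, weather_widget, door_widget]

def render(state):
    """Show only the widgets whose condition fires."""
    return [line for w in WIDGETS if (line := w(state))]

state = {"traffic_jam": False, "clear_now": True,
         "rain_later": True, "person_at_door": False}
print(render(state))   # only the weather line appears
```

The interesting design work is in the predicates, not the rendering: "green traffic map" and "sunny all day" should evaluate to nothing, so absence itself carries information.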
There are so many things I want to see that we will need some UI for the less popular ones. But it should be a simple UI, with no need to find a remote (though if I have a remote -- any remote -- it should be able to use it.) Speech commands would be good to temporarily see other screens and modes. A webcam (and eventually Kinect style sensor) for gestural UI would be nice, letting me swipe or wave to get other screens.
Like me, you probably have a dozen "universal" remote controls gathered over the years. With each new device and remote you go through a process to try to figure out special codes to enter into the remote to train it to operate your other devices. And it's never very good, except perhaps in the expensive remotes with screens and macros.
Some notes from the biennial Olympics crackfest...
I'm starting to say that Curling might be the best Olympic sport. Why?
- It's the most dominated by strategy. It also requires precision and grace, but above all the other Olympic sports, long pauses to think about the game are part of the game. If you haven't guessed, I like strategy.
- Yes, other sports have in-game strategy, of course, particularly the team sports. And since the gold medalist from 25 years ago in almost every sport would barely qualify, you can make a case that all the sports are mostly mental in their way. But with curling, it's right there, and I think it edges out the others in how important it is.
- While it requires precision and athletic skill, it does not require strength and endurance to the human limits. As such, skilled players of all ages can compete. (Indeed, the fact that out-of-shape curlers can compete has caused some criticism.) A few other sports, like sharpshooting and equestrian events, also demand skill over youth. All the other sports give a strong advantage to those at the prime age.
- Mixed curling is possible, and there are even tournaments. There's debate on whether completely free mixing would work, but I think there should be more mixed sports, and more encouragement of it. (Many of the team sports could be made mixed; mixed tennis used to be in the Olympics and is returning.)
- The games are tense and exciting, and you don't need a clock, judge or computer to tell you who is winning.
On the downside, not everybody is familiar with the game, the games can take quite a long time and the tournament even longer for just one medal, and compared to a multi-person race it's a slow game. It's not slow compared to an event that is many hours of time trials, though those events have brief bursts of high-speed excitement mixed in with waiting. And yes, I'm watching Canada-v-USA hockey now too.
These days it is getting very common to make videos of presentations, and even to do live streams of them. And most of these presentations have slides in Powerpoint or Keynote or whatever. But this always sucks, because the camera operator -- if there is one -- never moves between the speaker and the slide the way I want. You can't please everybody of course.
I think URL shorteners are a curse, but thanks to Twitter they are growing vastly in use. If you don't know, URL shorteners are sites that generate a compact encoded URL for you, turning a very long link into a short one that's easier to cut and paste and, in particular these days, one that fits in the 140-character constraint on Twitter.
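The trick behind most shorteners is simple: store the long URL in a table, then encode its integer row id in a large alphabet to get a short code. A toy sketch (the domain and all names are made up; real services add collision handling, persistence and abuse checks):

```python
# Toy URL shortener: base-62 encode a row id into a short code.
import string

ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode(n):
    """Integer -> base-62 string, e.g. 0 -> '0', 62 -> '10'."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

db = []   # row id -> long URL

def shorten(url):
    db.append(url)
    return "http://sho.rt/" + encode(len(db) - 1)

def expand(short):
    code = short.rsplit("/", 1)[1]
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return db[n]

long_url = "http://example.com/a/very/long/path?with=query&strings=1"
short = shorten(long_url)
print(short, "->", expand(short))
```

This also makes the complaint concrete: the short code is just a database index, so where the link actually goes is invisible until the shortener's server redirects you.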
It's now becoming common to kludge a conference "backchannel" onto Twitter. I am quite ambivalent about this. I don't think Twitter works nearly as well as an internal backchannel, even though there are some very nice and fancy twitter clients to help make this look nicer.
But the real problem comes from the public/private confusion. Tweets are (generally) public, and even if tagged by a hashtag to be seen by those tracking an event, they are also seen by your regular followers. This has the following consequences, good and bad.
- Some people tweet a lot while in a conference. They use it as a backchannel. That's overwhelming to their followers who are not at the conference, and it fills up the feed.
- When multiple people do it, it's almost like spam. I believe that conferences like using Twitter as a backchannel because it causes constant mentions of their conference to be broadcast out into the world.
- While you can filter out a hashtag in many twitter clients, it's work to do so, and the general flooding of the feed is annoying to many.
- People tweeting at a conference are never sure about who they are talking to. Some tweets will clearly be aimed at fellow conference attendees. But many are just repeats of salient lines said on stage, aimed only at the outsiders.
- While you can use multiple tags and filters to divide up different concurrent sessions of a conference, this doesn't work well.
- The interface on Twitter is kludged on, and poor.
- Twitter's 140 character limit is a burden on backchannel. Backchannel comments are inherently short, and no fixed limit is needed on them. Sure, sometimes you go longer but never much longer.
- The Twitter limit forces URLs to be put into URL shorteners, which obscure where they go and are generally a bane of the world.
Dedicated backchannels are better, I think. They don't reach the outside world unless outsiders decide to subscribe to them, but I think that's a plus. I think the right answer is a dedicated, internal-only backchannel, combined with a minimal amount of tweeting to the public (not the meeting audience) for those who want to give their followers some snippets of the conferences their friends are going to. The public tweets might use no hashtag at all, or a different one from the "official" backchannel, as they are not meant for people at the conference.
The most common dedicated backchannel tool is IRC. While IRC has its flaws, it is much better at many things than any of the web applications I have seen for backchannel. It's faster and has a wide variety of clients available to use with it. While this is rarely done, it is also possible for conferences to put an IRC server on their own LAN so the backchannel is entirely local, and even keeps working when the connection to the outside world gets congested, as is common on conference LANs. I'm not saying IRC is ideal, but until something better comes along, it works. Due to the speed, IRC backchannels tend to be much more rapid fire, with dialog, jokes, questions and answers. Some might view this as a bug, and there are arguments that slowing things down is good, but Twitter is not the way to attain that.
However, we won't stop those who like to do it via Twitter. As noted, conferences like it because it spams the tweetsphere with mentions of their event.
I would love to see an IRC Bot designed to gateway with the Twitter world. Here are some of the features it might have.