
Speaking on Personal Clouds in SF, and Robocars in Phoenix

Two upcoming talks:

Tomorrow (April 4) I will give a very short talk at the meeting of the personal clouds interest group. As far as I know, I was among the first to propose the concept of the personal cloud in my essays on the Data Deposit Box back in 2007, and while my essays are not the reason for it, the idea is gaining some traction now as more and more people think about the consequences of moving everything into the corporate clouds.

My lightning talk will cover what I see as the challenges of getting the public to accept a system where the computing resources are responsible to them rather than to various web sites.

On April 22, I will be at the 14th International Conference on Automated People Movers and Automated Transit speaking in the opening plenary. The APM industry is a large, multi-billion dollar one, and it’s in for a shakeup thanks to robocars, which will allow automated people moving on plain concrete, with no need for dedicated right-of-way or guideways. APMs have traditionally been very high-end projects, costing hundreds of millions of dollars per mile.

The best place to find me otherwise is at Singularity University events. While schedules are still being worked on, with luck you will see me this year in Denmark, Hungary and a few other places overseas, in addition to here in Silicon Valley of course.

V2V and connected car part 3: Broadcast data

Earlier in part one I examined why it’s hard to make a networked technology based on random encounters. In part two I explored how V2V might be better achieved by doing things phone-to-phone.

For this third part of the series on connected cars and V2V I want to look at the potential for broadcast data and other wide area networking.

Today, the main thing that “connected car” means in reality is cell phone connectivity. That began with “telematics” — systems such as OnStar — but has grown to using data networks to provide apps in cars. The ITS community hoped that DSRC would provide data service to cars, and that this would be one reason for people to deploy it, but the cellular networks took that over very quickly. Unlike DSRC, which is, as the name says, short range, the longer range of cellular data means you are connected most of the time, and all of the time in some places, and people will accept nothing less.

I believe there is a potential niche for broadcast data to mobile devices and cars. This would be a high-power shared channel. One obvious way to implement it would be to use a spare TV channel, and use the new ATSC-M/H mobile standard. ATSC provides about 19 megabits per second. Because TV channels can be broadcast with very high power transmitters, they reach almost everywhere in a large region around the transmitter. For broadcast data, that’s good.

Today we use the broadcast spectrum for radio and TV. It turns out that this makes sense for very popular items, but it’s a waste for homes, and largely a waste for music — people are quite satisfied instead with getting music and podcasts that are pre-downloaded when their device is connected to WiFi or cellular. The amount of data we need live is pretty small — generally news, traffic and sports. (Call-in talk shows need to be live, but their audiences are not super large.)

A nice broadcast channel could transmit a lot of data of interest to cars:

  • Timing and phase information on all traffic signals in the broadcast zone.
  • Traffic data, highly detailed
  • Alerts about problems, stalled vehicles and other anomalies.
  • News and other special alerts — you could fit quite a few voice-quality station streams into one 19 megabit channel.
  • Differential GPS correction data, and even supplemental GPS signals.

The latency of the broadcast would be very low of course, but what about the latency of uploaded signals? This turns out not to be a problem for traffic lights because they don’t change suddenly on a few milliseconds’ notice, even if an emergency vehicle is sending them a command to change. If you know the signal is going to change 2 seconds in advance, you can transmit the time of the change over a long latency channel. If need be, a surprise change can even be delayed until the ACK is seen on the broadcast channel, within certain limits. Most emergency changes have many seconds before the light needs to change.
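
As a sketch of how this can work, here is a hypothetical announcement format (the field names and layout are my invention, not any real standard) in which the controller broadcasts the absolute time of the change rather than an immediate command, so a couple of seconds of uplink latency costs nothing:

    import json, time

    # Hypothetical message format (field names invented for illustration):
    # the controller announces a change at an absolute future time, so
    # uplink and queueing latency do not matter.
    def make_phase_announcement(signal_id, new_phase, change_at):
        return json.dumps({
            "signal": signal_id,
            "new_phase": new_phase,   # e.g. "green-NS"
            "change_at": change_at,   # absolute epoch time of the change
        })

    # The controller knows at time t that the light will change at t+2s.
    # Even if the message takes 500ms to reach the tower and get queued
    # into the stream, receivers still get 1.5s of warning.
    msg = make_phase_announcement("signal-1404", "green-NS", time.time() + 2.0)

    # A receiver acts on the absolute timestamp, not on arrival time:
    decoded = json.loads(msg)
    print(f"Signal changes in {decoded['change_at'] - time.time():.2f}s")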

Stalled car warnings also don’t need low latency. If a car finds itself getting stalled on the road, it can send a report of this over the cellular modem that’s already inside so many cars (or over the driver’s phone.) This may take a few seconds to get into the broadcast stream, but then it will be instantly received. A stalled car is a problem that lasts minutes; you don’t need to learn about it in the first few milliseconds.

Indeed, this approach can even be more effective. Because of the higher power of the radios involved, information can travel between vehicles in places where line of sight communications would not work, or would actually deliver the message later than the server-relayed signal. This is true even in the “classic” DSRC example of a car running a red light. While a line of sight transmission is the fastest way to send that warning, the main time we want it is on blind corners, where line of sight may have problems. That is a perfect case for longer range, higher power communications on the longer waves.

Most phones don’t have ATSC-M/H and neither do cars. But receiver chips for this are cheap and getting cheaper, and it’s a consumer technology that would not be hard to deploy. However, this sort of broadcast standard could also be done in the cellular bands, at some cost in bandwidth for them.

19 megabits per second is actually a lot, and since traffic incidents and light changes are few, a fair bit of bandwidth would be left over. It could be sold to companies who want a cheaper way to update phones and cars with more proprietary data, including map changes, their own private traffic and so on. Anybody with a lot of customers might find this more efficient. Very popular videos and audio streams for mobile devices could also use the extra bandwidth. If only a few people want something, point to point is the answer, but once something is wanted by many, broadcast can be the way to go.
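
Here is a back-of-the-envelope sketch of that budget. Every count in it is invented for illustration, but it shows how little of the channel the core data would need:

    # Rough budget for a 19 Mbit/s broadcast channel.
    # All counts below are invented for illustration, not measured data.
    CHANNEL_BPS = 19_000_000

    signals = 10_000              # traffic signals in the broadcast zone
    signal_record_bits = 100 * 8  # one timing/phase record per signal
    repeat_every_s = 2            # each record rebroadcast every 2 seconds
    signal_bps = signals * signal_record_bits / repeat_every_s

    incidents_per_minute = 20     # stalls, alerts and other anomalies
    incident_bits = 2_000 * 8
    incident_bps = incidents_per_minute * incident_bits / 60

    used = signal_bps + incident_bps
    print(f"signals:   {signal_bps / 1e6:.2f} Mbit/s")    # 4.00
    print(f"incidents: {incident_bps / 1e6:.3f} Mbit/s")  # 0.005
    print(f"left over: {(CHANNEL_BPS - used) / 1e6:.1f} Mbit/s")  # ~15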

What else might make sense to broadcast to cars and mobile phones in a city? While I’m not keen to take away some of the nice whitespaces, there are many places with lots of spare channels, and a well-designed system could make good use of them.

Solving V2V Part 2: Make it Phone to Phone

Last week, I began in part 1 by examining the difficulty of creating a new network system in cars when you can only network with people you randomly encounter on the road. I contend that nobody has had success in making a new networked technology when faced with this hurdle.

This has been compounded by the fact that the radio spectrum at 5.9 GHz which was intended for use in short range communications (DSRC) from cars is instead going to be released as unlicensed spectrum, like the WiFi bands. I think this is a very good thing for the world, since unlicensed spectrum has generated an unprecedented radio revolution and been hugely beneficial for everybody.

But surprisingly it might be something good for car communications too. The people in the ITS community certainly don’t think so. They’re shocked, and see this as a massive setback. They’ve invested huge amounts of effort, and whole careers, in the DSRC and V2V concepts, and see it all as being taken away or seriously impeded. But here’s why it might be the best thing to ever happen to V2V.

The innovation in mobile devices and wireless protocols of the last 1-2 decades is a shining example to all technology. Compare today’s mobile handsets with 10 years ago, when the Treo was just starting to make people think about smartphones. (Go back a couple more years and there weren’t any smartphones at all.) Every year there are huge strides in hardware and software, and as a result, people are happily throwing away perfectly working phones every 2 years (or less) to get the latest, even without subsidies. Compare that to the electronics in cars. There is little in your car that wasn’t planned many years ago, and usually nothing changes over the 15-20 year life of the car. Car vendors are just now toying with the idea of field upgrades and over-the-air upgrades.

Car vendors love to sell you fancy electronics for your central column. They can get thousands of dollars for the packages — packages that often don’t do as much as a $300 phone and get obsolete quickly. But customers have had enough, and are now forcing the vendors to give up on owning that online experience in the car and ceding it to the phone. They’re even getting ready to cede their “telematics” (things like OnStar) to customer phones.

I propose this: Move all the connected car (V2V, V2I etc.) goals into the personal mobile device. Forget about the mandate in cars.

The car mandate would have started getting deployed late in this decade. And it would have been another decade before deployment got seriously useful, and another decade until deployment was over 90%. In that period, new developments would have made all the decisions of the 2010s wrong and obsolete. In that same period, personal mobile devices would have gone through a dozen complete generations of new technology. Can there be any debate about which approach would win?

V2V vs. the paths to a successful networked technology (Part 1)

A few weeks ago, in my article on myths, I explained why the development of “vehicle to vehicle” (V2V) communications is mostly orthogonal to that of robocars. That’s very far from the view of many authors, and most of those in the ITS community. I remain puzzled by the V2V plan and how it might actually come to fruition. Because there is some actual value in V2V, and we would like to see that value realized in the future, I am afraid that the current strategy will not work out and will thus misdirect a lot of resources.

This is particularly apropos because recently, the FCC issued an NPRM saying it wants to open up the DSRC band at 5.9 GHz, which was meant for V2V, for unlicensed WiFi-style use. This has been anticipated for some time, but the ITS community is concerned about losing the band it received in the late 90s but has yet to use in anything but experiments. The demand for new unlicensed spectrum is quite appropriately very large — the opening up of 2.4 GHz decades ago generated the greatest period of innovation in the history of radio — and the V2V community has a daunting task resisting it.

In this series I will examine where V2V approaches went wrong and what they might do to still attain their goals.


I want to begin by examining what it takes to make a successful cooperative technology. History has many stories of cooperative technologies (either peer-to-peer or using central relays) that grew, some of which managed to do so in spite of appearing to need a critical mass of users before they were useful.

Consider the rise and fall of fax (or for that matter, the telephone itself.) For a lot of us, we did not get a fax machine until it was clear that lots of people had fax machines, and we were routinely having people ask us to send or receive faxes. But somebody had to buy the first fax machine; in fact, others had to buy the first million fax machines before this could start happening.

This was not a problem because while one fax machine is useless, two are quite useful to a company with a branch office. Fax started with pairs or small networks of machines, and one day two companies noticed they both had fax and started communicating inter-company instead of intra-company.

So we see rule one: The technology has to have strong value to the first purchaser. Use by a small number of people (though not necessarily just one) needs to be able to financially justify itself. This can be a high-cost, high-value “early adopter” proposition, but the value must be real.

This was true for fax, e-mail, phone and many other systems, but a second principle has applied in many of the historical cases. Most, but not all, systems were able to build themselves on top of an underlying layer that already existed for other reasons. Fax came on top of the telephone. E-mail on top of the phone and later the internet. Skype was on top of the internet and PCs. The underlying system made it possible for two people to adopt a technology which was useful to just those two, and the two people could be anywhere. Any two offices could get a fax or an e-mail system and communicate; only the ordinary phone was needed.

The ordinary phone had it much harder. To join the phone network in the early days you had to go out and string physical wires. But anybody could still do it, and once they did it, they got the full value they were paying for. They didn’t pay for phone wires in the hope that others would some day also pay for wires and they could talk to them — they found enough value calling the people already on that network.

Social networks are also interesting. There is a strong critical mass factor there. But with social networks, they are useful to a small group of friends who join. It is not necessary that other people’s social groups join, not at first. And they have the advantage of viral spreading — the existing infrastructure of e-mail allows one person to invite all their friends to join in.

Enter Car V2V

Car V2V doesn’t satisfy these rules. There is no value for the first person to install a V2V radio, and very tiny value for the first thousands of people. An experiment is going on in Ann Arbor with 3,000 vehicles, all belonging to people who work in the same area, and another experiment in Europe will equip several hundred vehicles.

Top Myths of Robocars (and why V2V is not the answer)

There’s been a lot of press on robocars in the last few months, and a lot of new writers expressing views. Reading this, I have encountered a recurring set of issues and concerns, so I’ve prepared an article outlining these top myths and explaining why they are not true.

Perhaps of strongest interest will be one of the most frequent statements — that Vehicle to Vehicle (V2V) communication is important, or even essential, to the deployment of robocars. The current V2V (and Vehicle to Infrastructure) efforts, using the DSRC radio spec are quite extensive, and face many challenges, but to the surprise of many, this is largely orthogonal to the issues around robocars.

So please read The top 10 (or so) myths of robocars.

They are:

  • They won’t be safe
  • The big issue is who will be liable in a crash
  • The cars will need special dedicated roads and lanes
  • This only works when all cars are robocars and human driving is banned
  • We need radio links between cars to make this work
  • We won’t see self-driving cars for many decades
  • It is a long time before this will be legal
  • How will the police give a robocar a ticket?
  • People will never trust software to drive their car
  • They can’t make an OS that doesn’t crash, how can they make a safe car?
  • We need the car to be able to decide between hitting a schoolbus and going over a cliff
  • The cars will always go at the speed limit

You may note that this is not my first myths FAQ, as I also have Common objections to Robocars written when this site was built. Only one myth is clearly in both lists, a sign of how public opinion has been changing.

Larry Niven and Greg Benford on "Bowl of Heaven" and Big, Dumb Objects

Last month, I invited Gregory Benford and Larry Niven, two of the most respected writers of hard SF, to come and give a talk at Google about their new book “Bowl of Heaven.” Here’s a YouTube video of my session. They did a review of the history of SF about “big dumb objects” — stories like Niven’s Ringworld, where a huge construct is a central part of the story.

My interview with Vernor Vinge

Vernor Vinge is perhaps the greatest writer of hard SF and computer-related SF today. He has won 5 Hugo awards, including 3 in a row for best novel (nobody has done 4 in a row) and his novels have inspired many real technologies in cyberspace, augmented reality and more.

I invited him up to speak at Singularity University, but before that he visited Google to talk in the Authors@Google series. I interviewed him about his career and major novels and stories, including True Names, A Fire Upon the Deep, Rainbows End and his latest novel Children of the Sky. We also talked about the concept of the Singularity, for which he coined the term.

The "Forgetful Broker" is needed for Data Deposit Box

For some time I’ve been advocating a concept I call the Data Deposit Box as an architecture for providing social networking and personal data based applications in a distributed way that tries to find a happy medium between the old PC (your data live on your machine) and the modern cloud (your data live on 3rd party corporate machines) approach. The basic concept is to have a piece of cloud that you legally own (a data deposit box) where your data lives, and code from applications comes and runs on your box, but displays to your browser directly. This is partly about privacy, but mostly about interoperability and control.

This concept depends on the idea of publishing and subscribing to feeds from your friends (and other sources.) Your friends are updating data about themselves, and you might want to see it — i.e. things like the Facebook wall or Twitter feed. Feeds themselves would go through brokers just for the sake of efficiency, but would be encrypted so the brokers can’t actually read them.
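
As a minimal sketch of that broker, assume a symmetric feed key that you give to your friends out of band (key distribution is hand-waved here; this uses the Fernet recipe from the Python cryptography package). The broker relays nothing but opaque ciphertext:

    from cryptography.fernet import Fernet  # pip install cryptography

    # The feed owner encrypts each update with a key shared only with
    # authorized friends. The broker never holds the key.
    feed_key = Fernet.generate_key()   # given to friends, not the broker
    owner = Fernet(feed_key)

    update = b'{"type": "wall-post", "text": "Moved to a new city!"}'
    ciphertext = owner.encrypt(update)
    # broker.relay(ciphertext)  -- the broker sees only this opaque blob

    # A friend's data host, holding feed_key, can decrypt:
    friend = Fernet(feed_key)
    assert friend.decrypt(ciphertext) == update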

In certain cases, however, there is a need for brokers which do see the data, and in fact some types of data must never be shown directly to your friends.

Crush

One classic example is the early social networking application the “crush” detector. In this app you get to declare a crush on a friend, but this is only revealed when both people have a mutual crush. Clearly you can’t just be sending your crush status to your friends. You need a 3rd party who gets the status of both of you, and only alerts you when the crush is mutual. (In some cases applications like this can be designed to work without the broker knowing your data, through the cryptographic process known as blinding.)
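
Here is a toy sketch of the broker's side. It uses keyed hashes rather than true cryptographic blinding, and a real design would also have to keep the broker from simply guessing pairs; all names and the shared secret are invented:

    import hmac, hashlib

    # Each user submits a keyed hash of the (admirer, crush) pair. The
    # broker only learns anything when two submissions collide.
    SHARED_SECRET = b"app-wide secret, stands in for real blinding"

    def commitment(admirer, crush):
        # Canonical ordering so A->B and B->A produce the same digest.
        pair = "|".join(sorted([admirer, crush])).encode()
        return hmac.new(SHARED_SECRET, pair, hashlib.sha256).hexdigest()

    submissions = {}

    def declare_crush(admirer, crush):
        token = commitment(admirer, crush)
        if token in submissions and submissions[token] != admirer:
            return f"Mutual crush: {admirer} and {submissions[token]}!"
        submissions[token] = admirer
        return "Recorded. Nobody is told anything."

    print(declare_crush("alice", "bob"))   # Recorded...
    print(declare_crush("bob", "alice"))   # Mutual crush: bob and alice!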

Banks: Give me two passwords

Passwords are in the news thanks to Gawker Media, which had its database of userids, emails and passwords hacked and published on the web. A big part of the fault is Gawker’s, which was saving user passwords (so it could email them) and thus was vulnerable. As I have written before, you should be very critical of any site that is able to email you your password if you forget it.
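
For contrast, here is a minimal sketch of the standard alternative: store only a salted, slow hash, so the site can verify a password but cannot possibly email it back. (This is a textbook pattern, not Gawker's actual code.)

    import hashlib, hmac, os

    ITERATIONS = 200_000  # deliberately slow, to resist brute force

    def hash_password(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                     ITERATIONS)
        return salt, digest   # store these; the password itself is gone

    def verify(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                        ITERATIONS)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    assert verify("correct horse battery staple", salt, digest)
    assert not verify("wrong guess", salt, digest)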

Some of the advice in the wake of this to users has been to not use the same password on multiple sites, and that’s not at all practical in today’s world. I have passwords for many hundreds of sites. Most of them are like gawker — accounts I was forced to create just to leave a comment on a message board. I use the same password for these “junk accounts.” It’s just not a big issue if somebody is able to leave a comment on a blog with my name, since my name was never verified in the first place. A different password for each site just isn’t something people can manage. There are password managers that try to solve this, creating different passwords for each site and remembering them, but these systems often have problems when roaming from computer to computer, or trying out new web browsers, or when sites change their login pages.

The long term solution is not passwords at all, it’s digital signatures (though those have all the problems listed above), and it’s not to even have logins at all, but instead to use authenticated actions, so we are neither creating accounts to do simple actions nor using a federated identity monopoly (like Facebook Connect). This is better than OpenID too.

The peril of the Facebook anti-privacy pattern

There’s been a well-justified storm about Facebook’s recent privacy changes. The EFF has a nice post outlining the changes in privacy policies at Facebook which inspired this popular graphic showing those changes.

But the deeper question is why Facebook wants to do this. The answer, of course, is money, but in particular it’s because the market is assigning a value to revealed data. This force seems to push Facebook, and services like it, into wanting to remove privacy from their users in a steadily rising trend. Social network services often will begin with decent privacy protections, both to avoid scaring users (when gaining users is the only goal) and because they have little motivation to do otherwise. The old world of PC applications tended to have strong privacy protection (by comparison) because data stayed on your own machine. Software that exported it got called “spyware” and tools were created to root it out.

Facebook began as a social tool for students. It even promoted the fact that those not at a school could not see in, and could not even join. When this changed (for reasons I will outline below) older members were shocked at the idea their parents and other adults would be on the system. But Facebook decided, correctly, that excluding them was not the path to being #1.

Data Hosting architectures and the safe deposit box

With Facebook seeming to declare some sort of war on privacy, it’s time to expand the concept I have been calling “Data Hosting” — encouraging users to have some personal server space where their data lives, and bringing the apps to the data rather than sending your data to the companies providing interesting apps.

I think of this as something like a “safe deposit box” that you can buy from a bank. While not as sacrosanct as your own home when it comes to privacy law, it’s pretty protected. The bank’s role is to protect the box — to let others into it without a warrant would be a major violation of the trust relationship implied by such boxes. While the company owning the servers that you rent could violate your trust, that’s far less likely than 3rd party web sites like Facebook deciding to do new things you didn’t authorize with the data you store with them. In the case of those companies, it is in fact their whole purpose to think up new things to do with your data.

Nonetheless, building something like Facebook using one’s own data hosting facilities is more difficult than the way it’s done now. That’s because you want to do things with data from your friends, and you may want to combine data from several friends to do things like search your friends.

One way to do this is to develop a “feed” of information about yourself that is relevant to friends, and to authorize friends to “subscribe” to this feed. Then, when you update something in your profile, your data host would notify all your friends’ data hosts about it. You need not notify all your friends, or tell them all the same thing — you might authorize closer friends to get more data than you give to distant ones.
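
A toy sketch of that fan-out, with invented tier names and fields, might look like this:

    # Each friend subscribes at a tier; the host pushes only the fields
    # that tier is authorized to see. Names and tiers are invented.
    TIER_FIELDS = {
        "close":   {"status", "location", "photos"},
        "distant": {"status"},
    }
    subscribers = {"alice": "close", "bob": "distant"}

    def notify(friend, payload):
        print(f"-> {friend}'s data host: {payload}")

    def publish(update):
        for friend, tier in subscribers.items():
            allowed = {k: v for k, v in update.items()
                       if k in TIER_FIELDS[tier]}
            notify(friend, allowed)

    publish({"status": "New job!", "location": "Palo Alto",
             "photos": ["office.jpg"]})
    # alice gets everything; bob gets only the status line.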

Everybody is your 16th cousin

In my article two weeks ago about the odds of knowing a cousin I puzzled over the question of how many 3rd cousins a person might have. This is hard to answer, because it depends on figuring out how many successful offspring per generation the various levels of your family (and related families) have. Successful means that they also create a tree of descendants. This number varies a lot among families, it varies a lot among regions and it has varied a great deal over time. An Icelandic study found a number of around 2.8 but it’s hard to conclude a general rule. I’ve used 3 (81 great-great-grandchildren per couple) as a rough number.

There is something, however, that we can calculate without knowing how many children each couple has. That’s because we know, pretty accurately, how many ancestors you have. Our number gets less accurate over time because ancestors start duplicating — people appear multiple times in your family tree. And in fact by the time you go back large numbers of generations, say 600 years, the duplication is massive; all your ancestors appear many times.

To answer the question of “How likely is it that somebody is your 16th cousin” we can just look at how many ancestors you have back there. 16th cousins share with you a couple 17 generations ago. (You can share just one ancestor, which makes you a half-cousin.) So your ancestor set from 17 generations ago will be 65,536 different couples. Actually it is less than that due to duplication, but at this level, in a large population, the duplication isn’t as big a factor as it becomes later, and where it is a factor, it’s because of a close-knit community, which means you are even more related.

So you have 65K couples and so does your potential cousin. The next question is, what is the size of the population in which they lived? Well, back then the whole world had about 600 million people, so that’s an upper bound. So we can ask, if you take two random sets of 65,000 couples from a population of 300M couples, what are the odds that none of them match? With your 65,000 ancestors being just 0.02% of the world’s couples, and your potential cousin’s ancestors being a similarly tiny set, you would think it likely they don’t match.

It turns out that’s almost nil. Like the famous birthday paradox, where a room of 30 people usually has 2 who share a birthday, the probability that there is no intersection between these large groups is quite low. It is 99.9999% likely from these numbers that any given person is at least a 16th cousin. And 97.2% likely that they are a 15th cousin — but only 1.4% likely that they are an 11th cousin. It’s a double exponential explosion. The rough formula used is that the probability of no match will be (1-2^C/P)^(2^C) where C is the cousin number and P is the total source population. To be strict this should be done with factorials, but the numbers are large enough that pure exponentials work.
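
You can check these numbers yourself; the formula takes only a few lines of Python:

    # P(no shared ancestor couple) = (1 - 2^C / P) ** (2^C), for cousin
    # number C and a source population of P couples, as described above.
    def cousin_match_probability(c, population_couples):
        ancestors = 2 ** c
        p_no_match = (1 - ancestors / population_couples) ** ancestors
        return 1 - p_no_match

    # A world of ~300M couples, as in the text:
    for c in (11, 15, 16):
        print(f"{c}th cousin: {cousin_match_probability(c, 300e6):.4%}")
    # -> about 1.4%, 97.2% and 99.9999%, the figures quoted above.

    # A close-knit city of 50,000 couples (see below): 9th cousins are
    # near-certain.
    print(f"9th cousin: {cousin_match_probability(9, 50_000):.2%}")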

Now, of course, the couples are not selected at random, and nor are they selected from the whole world. For many people, their ancestors would have all lived on the same continent, perhaps even in the same country. They might all come from the same ethnic group. For example, if you think that all the ancestors of the two people came from the half million or so Ashkenazi Jews of the 18th century then everybody is a 10th cousin.

Many populations did not interbreed much, and in some cases of strong ethnic or geographic isolation, barely at all. There are definitely silos, and they sometimes existed in the same town, where there might be far less interbreeding between races than within them. Over time, however, the numbers overwhelm even this. Within close-knit communities, like say a city of 50,000 couples who bred mostly with each other, everybody will be a 9th cousin.

These numbers provide upper bounds. Due to the double exponential, even when you start reducing the population numbers due to out-breeding and expansion, it still catches up within a few generations. This is just another measure of how we are all related, and also how meaningless very distant cousin relationships, like 10th cousins, are. As I’ve noted in other places, if you leave aside the geographic isolation that some populations lived in, you don’t have to go back more than a couple of thousand years to reach the point where we are not just all related, but we all have the same set of ancestors (ie. everybody who procreated) just arranged in a different mix.

The upshot of all this: If you discover that you share a common ancestor with somebody from the 17th century, or even the 18th, it is completely unremarkable. The only thing remarkable about it is that you happened to know the path.

Caprica, uploading and gods

Caprica’s first half-season is almost over, but I started watching late due to travel and the Olympics. Here’s my commentary on the show to this point. I already commented last week on the lack of protagonists we can identify with. Now onto bigger issues.

Caprica is, I think, the first TV series to have uploading — the transfer of a human mind into computerized form — as its central theme. While AI is common in movies and TV, uploading is quite uncommon in Hollywood, even though it’s an important theme in written SF. This is what interests me in Caprica. Its connection to Battlestar Galactica is fairly loose, and we won’t find the things we liked in that show showing up much in this one.

God, again

In fact, I mostly fear encroachment of material from BSG, in particular the “God” who was revealed to be the cause of all the events of that series. What we don’t yet know is whether the monotheistic “Soldiers of the One” are just yet another religion to have invented a “one true god” or if they really have received signs or influence from that god.

When I was critical of the deus ex machina ending of BSG, many people wanted to point out that religion had been present from the start in the show. But the presence of religion is not the same as the presence of a real god, and if not for BSG, I doubt any viewer would suspect the “One” was real. However, knowing that there is a one true god, we must fear the worst. Since that god set up all the events of BSG long ago, including various timings of the Cylon wars, it’s hard to believe that the god is not also setting up the timing of the creation of the Cylons, and thus directly or indirectly arranged Zoe’s death and transfer. I hope not, but it’s hard to avoid that conclusion. The best we can hope for is that no direct influence of the god is shown to us.

Alas, for a show about uploading, the writers do need some more education about computers. Much of the stuff we see is standard Hollywood. Nonetheless the virtual worlds and the two uploaded beings (Zoe-A and Tamara-A) are by far the most interesting thing in the show, and fan ratings which put the episode “There is Another Sky” at the top indicate most viewers agree. We’re not getting very much of them, though.

Worldbuilding

The colonial world is interesting, with many elements not typically shown on TV, such as well accepted homosexuality, group marriage, open drug use and kinky holo-clubs. There’s a lot of focus on the Tauron culture, but right now this impresses me as mostly a mish-mash, not the slow revelation of a deeply constructed background. I get the real impression that they just make up something they like when they want to display Tauron culture. As far as what’s interesting in Caprican or other culture, we mostly see that only in Sister Clarice and her open family.

I was hoping for better worldbuilding and it is still not too late. The pilot did things decently enough but there has not been much expansion. The scenes of the city are now just establishing shots, not glimpses into an alien world. The strange things — like the world’s richest man and his wife not having bodyguards after open attacks on their person — might be a different culture or might just be writing errors.

William Adama

For BSG fans, there is strong interest in William Adama, the only character shared between the shows. But this one seems nothing like the hero of the original show. And he seems inconsistent. We learn that the defining event of his life was the terrorist murder of his sister and mother by a monotheist cult. (Well, defining event in a life that goes on to have more big events, I suppose.) Yet he shows no more than average mistrust of monotheism when it is revealed that the Cylons are monotheists and believe in a “one true god.” He doesn’t like Baltar, but he’s pretty tolerant when Baltar starts a cult of a one true god on the ship, and even gives him weapons at some point. He just doesn’t act like somebody who would have a knee-jerk initial jolt at monotheists preaching one true god.

There was also no sign of Tamara Adama in BSG. The original script plans called for her avatar to also contribute to the minds of the early Cylons, and this may not happen. If it does happen, it is odd that we never see any sign of Cylons remembering being his sister. We also have to presume that neither he, nor anybody else, knows that his father played a pivotal role in the creation of the Cylons. That would make his father quite infamous; nobody would remember his law career.

New Cap City

New Cap City started out as the most interesting place on Caprica, though it’s getting a bit slower. I’m not certain who Emanuelle is — is she Tamara, or working for Tamara? If so, why is she hooking Joseph on the Amp? When she was winged, it was odd that she had her arm flicker out — I would assume that in a world trying to appear real, non-fatal wounds would look like wounds, and even killed people would leave bodies as far as the other players were concerned.

If Zoe enters New Cap City, she should not be like Tamara, unable to be killed. She is now running in a robot body and interfacing with a holoband like humans do.

Will Tamara’s popularity with viewers turn her from a minor character into something more important?

Origin of the Cylons

The big question remains, where do the minds of the Cylon armies come from? Are they all copies of Zoe? Has Zoe given Philemon the clue as to how to create other copies, perhaps more mindless ones? Does Tamara provide a mind to a Cylon? Do the Soldiers of the One get access to the upload generator that Daniel used on Tamara and make their own uploads, and do they become the Cylon minds? We know that something of Zoe or the STO ends up in Cylon minds.

Who is the hero of Caprica?

As some readers may know, I maintained a sub-blog last year for analysis of Battlestar Galactica. BSG was very good for a while, but sadly had an extremely disappointing ending. Postings in the Battlestar Galactica Analysis Blog did not usually show up on the front page of the main blog; you had to read or subscribe to it independently.

There is a new prequel spin-off series on the air called Caprica, which has had 6 episodes and has just 2 more before going on a mid-season hiatus. I will use the old battlestar blog for more limited commentary on that show, which for now I am watching. (However, not too many people are, so it’s hard to say how long it will be on.)

My first commentary is not very science-fiction related, though I will be getting to that later — since the reason I am watching Caprica is my strong interest in fiction about mind uploading and artificial intelligence, and that is a strong focus of the show.

Instead, I will ask a question that may explain the poor audiences the show is getting. Who is the hero of Caprica? The character the audience is supposed to identify with? The one we care about, the one we tune in so we can see what happens to them? This is an important question, since while a novel or movie can be great without a traditional protagonist or even an anti-hero, it’s harder for a TV series to pull that off.

Haplogroups, Haplotypes and genealogy, oh my

I received some criticism the other day over my own criticism of the use of haplogroups in genealogy — the finding and tracing of relatives. My language was imprecise so I want to make a correction and explore the issue in a bit more detail.

One of the most basic facts of inheritance is that while most of your DNA is a mishmash of your parents (and all their ancestors before them) two pieces of DNA are passed down almost unchanged. One is the mitochondrial DNA, which is passed down from the mother to all her children. The other is the Y chromosome, which is passed down directly from father to son. Girls don’t get one. Most of the mother’s X chromosome is passed down unchanged to her sons (but not her daughters) but of course they can’t pass it unchanged to anybody.

This allows us to track the ancestry of two lines. The maternal line tracks your mother, her mother, her mother, her mother and so on. The paternal line tracks your father, his father and so on. The paternal line should, in theory, match the surname, but for various reasons it sometimes doesn’t. Females don’t have a Y, but they can often find out what Y their father had if they can sequence a sample from him, his sons, his brothers or other male relatives who share his surname.
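
In code terms, these two lines are just two fixed paths through the family tree. A toy sketch with invented names:

    # person -> (father, mother); people absent from the dict are unknown.
    tree = {
        "you": ("dad", "mom"),
        "dad": ("grandpa_p", "grandma_p"),
        "mom": ("grandpa_m", "grandma_m"),
    }

    def follow(person, pick):
        # Walk one parent at each step until the tree runs out.
        while person in tree:
            person = pick(tree[person])
            yield person

    y_line     = list(follow("you", lambda parents: parents[0]))
    mtdna_line = list(follow("you", lambda parents: parents[1]))
    print(y_line)      # ['dad', 'grandpa_p']  -- the Y chromosome path
    print(mtdna_line)  # ['mom', 'grandma_m']  -- the mitochondrial path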

The ability to do this got people very excited. DNA that can be tracked back arbitrarily far in time has become very useful for the study of human migrations and population genetics. The DNA is normally passed down completely but every so often there is a mutation. These mutations, if they don’t kill you, are passed down. The various collections of mutations are formed into a tree, and the branches of the tree are known as haplogroups. For both kinds of DNA, there are around a couple of hundred haplogroups commonly identified. Many DNA testing companies will look at your DNA and tell you your MTDNA haplogroup, and if male, your Y haplogroup.

The privacy risks of genetic genealogy (23andMe part 2)

Last week, I wrote about interesting experiences finding cousins who were already friends via genetic testing. 23andMe’s new “Relative Finder” product identifies the other people in their database of about 35,000 customers to whom you are related, and guesses how close the relationship is. Surprisingly, 2 of the 4 relatives I made contact with were already friends of mine, but not known to be relatives.

Many people are very excited about the potential for services like Relative Finder to take the lid off the field of genealogy. Some people care deeply about genealogy (most notably the Mormons) and others wonder what the fuss is. Genetic genealogy offers the potential to finally link all the family trees built by the enthusiasts and to provably test already known or suspected relationships. As such, the big genealogy web sites are all getting involved, and the Family Tree DNA company, which previously did mostly worthless haplogroup studies (and more useful haplotype scans) is opening up a paired-chromosome scan service for $250 — half the price of 23andMe’s top-end scan. (There is some genealogical value to the deeper clade Y studies FTDNA does, but the Mitochondrial and 12-marker Y studies show far less than people believe about living relatives. I have a followup post about haplogroups and haplotypes in genealogy.) Note that in March 2010, 23andMe is offering a scan for just $199.

The cost of this is going to keep decreasing and soon will be sub-$100. At the same time, the cost of full sequencing is falling by a factor of 10 every year (!) and many suspect it may reach the $100 price point within just a few years. (Genechip sequencing only finds the SNPs, while a full sequencing reads every letter (base) of your genome, and perhaps in the future your epigenome.)

Discovery of relatives through genetics has one big surprising twist to it. You are participating in it whether you sign up or not. That’s because your relatives may be participating in it, and as it gets cheaper, your relatives will almost certainly be doing so. You might be the last person on the planet to accept sequencing but it won’t matter.

The odds of knowing your cousins: 23andme Part 1

Bizarrely, Jonathan Zittrain turns out to be my cousin — which is odd because I have known him for some time and he is also very active in the online civil rights world. How we came to learn this will be the first of my postings on the future of DNA sequencing and the company 23andMe.

(Follow the genetics for part two and other articles.)

23andMe is one of a small crop of personal genomics companies. For a cash fee (ranging from $400 to $1000, but dropping with regularity) you get a kit to send in a DNA sample. They can’t sequence your genome for that amount today, but they can read around 600,000 “single-nucleotide polymorphisms” (SNPs) which are single-letter locations in the genome that are known to vary among different people, and the subject of various research about disease. 23andMe began hoping to let their customers know about how their own DNA predicted their risk for a variety of different diseases and traits. The result is a collection of information — some of which will just make you worry (or breathe more easily) and some of which is actually useful. However, the company’s second-order goal is the real money-maker. They hope to get the sequenced people to fill out surveys and participate in studies. For example, the more people fill out their weight in surveys, the more likely they might notice, “Hey, all the fat people have this SNP, and the thin people have that SNP, maybe we’ve found something.”
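
A toy sketch of that survey-mining idea, with an invented SNP name and made-up participants (real studies need large samples and proper statistics):

    from collections import Counter

    # Compare how often each variant of one SNP shows up among people
    # who report a trait versus those who don't. All data invented.
    participants = [
        {"rs0000": "A", "overweight": True},
        {"rs0000": "A", "overweight": True},
        {"rs0000": "A", "overweight": False},
        {"rs0000": "G", "overweight": False},
        {"rs0000": "G", "overweight": False},
    ]

    by_trait = {True: Counter(), False: Counter()}
    for p in participants:
        by_trait[p["overweight"]][p["rs0000"]] += 1

    for trait, counts in by_trait.items():
        total = sum(counts.values())
        print(f"overweight={trait}: 'A' variant in {counts['A']}/{total}")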

However, recently they added a new feature called “Relative Finder.” With Relative Finder, they will compare your DNA with all the other customers, and see if they can find long identical stretches which are very likely to have come from a common ancestor. The more of this they find, the more closely related two people are. All of us are related, often closer than we think, but this technique, in theory, can identify closer relatives like 1st through 4th cousins. (It gets a bit noisy after this.)
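
Here is a toy version of that matching. Real matching works across both chromosome copies and must handle genotyping errors and genetic map distances; this just shows the core idea of hunting for long identical runs:

    # Scan two customers' calls at the same ordered positions for long
    # matching runs, which suggest descent from a common ancestor.
    def shared_segments(a, b, min_len=8):
        segments, run_start = [], None
        for i, (x, y) in enumerate(zip(a, b)):
            if x == y:
                run_start = i if run_start is None else run_start
            else:
                if run_start is not None and i - run_start >= min_len:
                    segments.append((run_start, i))
                run_start = None
        if run_start is not None and len(a) - run_start >= min_len:
            segments.append((run_start, len(a)))
        return segments

    me     = "AGGTCATTGCAAGTCCGGAT"
    cousin = "AGGTCATTGCTAGTCCGGAT"
    print(shared_segments(me, cousin))  # [(0, 10), (11, 20)]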

Relative Finder shows you a display listing all the people you are related to in their database, and for some people, it turns out to be a lot. You don’t see the name of the person but you can send them an E-mail, and if they agree and respond, you can talk, or even compare your genomes to see where you have matching DNA.

For me it showed one third cousin, and about a dozen 4th cousins. Many people don’t get many relatives that close. A third cousin, if you were wondering, is somebody who shares a great-great-grandparent with you, or more typically a pair of them. It means that your grandparents and their grandparents were “1st” cousins (ordinary cousins.) Most people don’t have much contact with 3rd cousins or care much to. It’s not a very close relationship.

However, I was greatly shocked to see the response that this mystery cousin was Jonathan Zittrain. Jonathan and I are not close friends, more appropriately we might be called friendly colleagues in the cyberlaw field, he being a founder of the Berkman Center and I being at the EFF. But we had seen one another a few times in the prior month, and both lectured recently at the new Singularity University, so we are not distant acquaintances either. Still, it was rather shocking to see this result. I was curious to try to figure out what the odds of it are.

Olympic sports and failure

I wanted to post some follow-up to my prior post about sports where the contestants go to the edge and fall down. As I see it, the sports fall into the following rough classes:

  1. You push as hard as you can. The athlete who does the best — goes higher, stronger, faster — wins. The 100 metres is a perfect example.
  2. You push hard, but you must make judgments as you compete, such as how hard to push, when to pass etc. If you judge incorrectly, it lowers your final performance, but only modestly.
  3. You must judge your abilities and the condition of the course and your equipment, and choose what is difficult enough to score well, but which you can be confident you will execute. To win, you must skate (literally or figuratively) close to the edge of what you can do. If you misjudge, or have bad luck, you are penalized significantly, and will move from 1st place to the rear of the pack.

My main concern is with sports in the third group, like figure skating, half-pipe and many others, including most of the judged sports with degrees of difficulty. The concern is that sudden shift from leader to loser because you did what you are supposed to do — go close to the edge. Medals come either from being greatly superior, from knowing your edge very well (which is the intention), or from being lucky that particular day — which I think is not the intention.

Many sports seek to get around this. In high jump and pole vault, you just keep raising the bar until you can’t get over it, and somebody gets over the highest bar. This is a good system, but difficult when a sport takes a long time or is draining to do even once. You could do a figure skating contest where they all keep trying more and more difficult moves until they all fail but one, but it would take a long time, be draining, and cause injuries as there is no foam pit like in high jump.

Other sports try to solve this by letting you do 2 or more runs, and taking the best run. This is good, but we also have an instinct that the person who can do it twice is better than the one who did it once and fell down the other time. Sports that sum up several times demand consistent performance, which is good, but in essence they also put a major punishment on a single failure, perhaps an even greater one. This approach requires you be a touch conservative, just under your edge, so you know you will deliver several very good runs, and beat somebody who dares beyond their ability, but falls in one or more runs. At least it reduces the luck.

The particular problem is this. “Big failure” sports will actually often give awards either to a top athlete who got a good run, or to the athlete who was more conservative in choosing what to do, and thus had a very high probability of a clean run. Fortunately this will not happen too often, as one of the top tier who went for broke will have that clean run and get 1st place. But the person who does that may not be the one who is overall most capable.

Imagine if high jump were done with each competitor choosing what height they wanted the bar to be at in advance, and getting one try at it, and getting a medal if it’s the highest, but nothing if they miss.

The sports like short-track speed skating, which are highly entertaining, have this problem in spades, for athletes who wipe out can also impede other athletes. While the rules try to make it up to the athlete who was knocked out, they have a hard time doing this perfectly. For example in the semi-finals of short-track, if you get knocked out while you were in 3rd, you are not assured to get a consolation qualification even if you were just about to try for 2nd with the strength you were saving.

In some cases the chaos is not going away because they know audiences like it. Time trials are the purest and fairest competition in most cases but are dead-boring to watch.

Curling is the best Olympic sport

Some notes from the biennial Olympics crackfest…

I’m starting to say that Curling might be the best Olympic sport. Why?

  • It’s the most dominated by strategy. It also requires precision and grace, but above all the other Olympic sports, long pauses to think about the game are part of the game. If you haven’t guessed, I like strategy.
  • Yes, other sports have in-game strategy, of course, particularly the team sports. And since the gold medalist from 25 years ago in almost every sport would barely qualify, you can make a case that all the sports are mostly mental in their way. But with curling, it’s right there, and I think it edges out the others in how important it is.
  • While it requires precision and athletic skill, it does not require strength and endurance to the human limits. As such, skilled players of all ages can compete. (Indeed, the fact that out-of-shape curlers can compete has caused some criticism.) A few other sports, like sharpshooting and equestrian events, also demand skill over youth. All the other sports give a strong advantage to those at the prime age.
  • Mixed curling is possible, and there are even tournaments. There’s debate on whether completely free mixing would work, but I think there should be more mixed sports, and more encouragement of it. (Many of the team sports could be made mixed, of course; mixed tennis used to be in the Olympics and is returning.)
  • The games are tense and exciting, and you don’t need a clock, judge or computer to tell you who is winning.

On the downside, not everybody is familiar with the game, the games can take quite a long time and the tournament even longer for just one medal, and compared to a multi-person race it’s a slow game. It’s not slow compared to an event that is many hours of time trials, though those events have brief bursts of high-speed excitement mixed in with waiting. And yes, I’m watching Canada-v-USA hockey now too.

Avatar isn't Dances With Wolves, it's another plot

Everybody has an Avatar review. Indeed, Avatar is a monument of moviemaking in terms of the quality of its animation and 3-D. Its most interesting message for Hollywood may be “soon actors will no longer need to look pretty.” Once the generation of human forms passes through the famous uncanny valley there will be many movies made with human characters where you never see their real faces. That means the actors can be hired based strictly on their ability to act, and their bankability, not necessarily their looks, or more to the point their age. Old actors will be able to play their young selves before too long, and be romantic leading men and women again. Fat actors will play thin, supernaturally beautiful leads.

And our images of what a good looking person looks like will get even more bizarre. We’ll probably get past the age thing, with software to make an old star look like their young self, before we break through the rest of the uncanny valley. If the old star keeps him or herself in shape, the skin, hair and shapes of things like the nose and earlobes can be fixed, perhaps even today.

But this is not what I want to speak about. What I do want to speak about involves Avatar spoilers.