Last week, Volvo was demoing some new collision avoidance features in their S60. I’ve talked about the S60 before, as it surprised me by putting pedestrian detection into a car sooner than I expected. Unfortunately, in an extreme case of the demo disease known to all computer people, somebody made an error with the battery, and in front of a crowd of press, the car smashed into the truck it was supposed to avoid. The Wired article links to a video.
Poor Volvo, having this happen in front of all the press. Of course, their system is meant to be used in human-driven cars, warning the driver and braking if the driver fails to act — not in a self-driving vehicle. And they say that had there been a driver, there would have been an indication that the system was not operating.
While this mistake is the result of a lack of maturity in the technology, it is important to realize that as robocars are developed there will be crashes, and some of the crashes will hurt people and a few will quite probably kill people. It’s a mistake to assume this won’t happen, or not to plan for it. The public can be very harsh. Toyota’s problems with their car controllers (if that’s where the problems are — Toyota claims they are not) have been a subject of ridicule for what was (and probably still is) one of the world’s most respected brands. The public asks, if programmers can’t program simple parts of today’s cars, can they program one that does all the driving?
There are two answers to that. First of all, they can and do program computerized parts of today’s cars all the time, and by and large those parts have excellent safety records.
But secondly, no they can’t make a complete driving system perfectly safe, certainly not at first. It is a complex problem and we’ll wait a long time before the accident rate is zero. And while we wait, human drivers will kill millions.
Our modern society has always had a tough time with that trade-off. Of late we’ve come to demand perfect safety, though it is impossible. Few new products are allowed out if it is known that they will have any death rate due to their own flaws, even when those flaws are not known specifically but are highly likely to exist in some fashion. American juries, faced with minutes of a meeting where the company decided to “release the product, even though predictions show that bugs will kill X people,” will punish the company nastily, even though the alternative was “don’t release and have human drivers kill 10X people.” The 9X who were saved will not be in the courtroom. This is one reason robocars may arise outside the USA first.
Of course, there might be cases the other way. A drunk who kills somebody when he could have taken a robocar might get a stiffer punishment. A corporation that had its employees drive when robotic systems were clearly superior might find a nasty judgement — but that would require that it was OK to have the cars on the road in the first place.
But however this plays out, developers must expect there will be bugs, and bugs with dire consequences. Nobody will want those bugs, and all the injuries will be tragic, but so is being too cautious about deployment. Can the USA figure out a way to make that happen?
But the deeper question is why Facebook wants to do this. The answer, of course, is money, but in particular it’s because the market is assigning a value to revealed data. This force seems to push Facebook, and services like it, into wanting to remove privacy from their users in a steadily rising trend. Social network services often will begin with decent privacy protections, both to avoid scaring users (when gaining users is the only goal) and because they have little motivation to do otherwise. The old world of PC applications tended to have strong privacy protection (by comparison) because data stayed on your own machine. Software that exported it got called “spyware” and tools were created to root it out.
Facebook began as a social tool for students. It even promoted that those not at a school could not see in, could not even join. When this changed (for reasons I will outline below) older members were shocked at the idea their parents and other adults would be on the system. But Facebook decided, correctly, that excluding them was not the path to being #1.
With Facebook seeming to declare some sort of war on privacy, it’s time to expand the concept I have been calling “Data Hosting” — encouraging users to have some personal server space where their data lives, and bringing the apps to the data rather than sending your data to the companies providing interesting apps.
I think of this as something like a “safe deposit box” that you can buy from a bank. While not as sacrosanct as your own home when it comes to privacy law, it’s pretty protected. The bank’s role is to protect the box — to let others into it without a warrant would be a major violation of the trust relationship implied by such boxes. While the company owning the servers that you rent could violate your trust, that’s far less likely than 3rd party web sites like Facebook deciding to do new things you didn’t authorize with the data you store with them. In the case of those companies, it is in fact their whole purpose to think up new things to do with your data.
Nonetheless, building something like Facebook using one’s own data hosting facilities is more difficult than the way it’s done now. That’s because you want to do things with data from your friends, and you may want to combine data from several friends to do things like search your friends.
One way to do this is to develop a “feed” of information about yourself that is relevant to friends, and to authorize friends to “subscribe” to this feed. Then, when you update something in your profile, your data host would notify all your friends’ data hosts about it. You need not notify all your friends, or tell them all the same thing — you might authorize closer friends to get more data than you give to distant ones.
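A minimal sketch of that notification step might look like the following Python. Everything here is invented for illustration — the hostnames, the trust tiers, the field names and the payload format — and a real data host would of course also need authentication and an actual transport.

```python
import json

# Hypothetical subscriber list: each friend's data host, and the trust
# tier that friend was granted when they subscribed to your feed.
SUBSCRIBERS = [
    {"host": "https://alice.example/inbox", "tier": "close"},
    {"host": "https://bob.example/inbox", "tier": "distant"},
]

# Which profile fields each tier is allowed to see (invented for this sketch).
TIER_FIELDS = {
    "close": {"status", "phone", "photos"},
    "distant": {"status"},
}

def build_notifications(update):
    """Return one (host, JSON payload) pair per subscriber, containing
    only the fields that subscriber's tier is allowed to see."""
    out = []
    for sub in SUBSCRIBERS:
        allowed = {k: v for k, v in update.items()
                   if k in TIER_FIELDS[sub["tier"]]}
        if allowed:  # skip hosts that may see nothing from this update
            payload = json.dumps({"type": "profile-update", "fields": allowed})
            out.append((sub["host"], payload))
    return out
```

The point of the sketch is that the filtering happens at your host, before anything leaves it: a distant friend’s host is simply never sent the fields they weren’t authorized to see.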
Recently, two movies have been released in which my father is a character. I recently watched Billy: The Early Years, a movie about the early life of Billy Graham told from the supposed viewpoint of my father on his deathbed. Charles Templeton and Billy Graham were best friends for many years, touring and preaching together, and the story of how my father lost his faith as he studied more, while Graham grew closer to his, has become a popular story in the fundamentalist community.
While it doesn’t say that it’s fictional, this movie portrays an entirely invented interview with Charles Templeton, played by Martin Landau, in a hospital bed in 2001, shortly before his death. (In reality, while he did have a few hospital trips, he spent 2001 in an Alzheimer’s care facility and was not coherent most of the time.) Fleshed out in the novelization, the interview is supposedly conducted on orders from an editor trying to find some dirt on Billy Graham. Most of the movie is flashbacks to Graham’s early days (including times before they met) and their time together preaching and discussing the truth of the Bible.
It is disturbing to watch Landau’s portrayal of my father, as well as that by Mad Men’s Kristoffer Polaha as the younger version. I’m told it is always odd to see somebody you know played by an actor, and no doubt this is true. However, more disturbing is the role they have cast him in for this allegedly true story — namely Satan. As I believe is common in movies aimed at the religious market, Graham’s story is told in what appears to be an allegory of the temptation of Christ. In the film, Graham is stalwart, but my father keeps coming to him with doubts about the Bible. The lines written for the actors are based in part on his writings and in part on invention, and as such don’t sound at all like he would speak in real life, but they are there, I think, to take the role of the attempted temptation of the pure man.
Just a note that I’ll be in Boston this weekend attending the 2nd day of ROFLCon, a convention devoted to internet memes and legends. They’re having a panel on USENET on Saturday and have invited me to participate. Alas, registration is closed, but there are some parties and events on the schedule that I suspect people can go to. See you there.
This weekend I attended the annual “Robogames” competition, which took place here in the Bay Area. Robogames is mostly a robot battle competition, with a focus on heavily armed radio-controlled robots fighting in a protected arena. For several years robot fighting was big enough to rate some cable TV shows dedicated to it. The fighting is a lot of fun, but almost entirely devoid of automation — in fact efforts to use automation in battle robots have mostly been a failure.
The RC battles are fierce and violent, and today one of the weapons of choice is something heavy that spins at very high speed so that it builds up a lot of angular momentum and kinetic energy, to transfer into the enemy. People like to see robots flying through the air and losing parts to flying sparks. (I suspect this need to make robots very robust against attack makes putting sensors on the robots for automation difficult, as many weapons would quickly destroy a lot of popular sensor types.)
The games also featured a limited amount of automated robot competition. This included some lightweight (3lb and 1lb) automated battles which I did not get to watch, and some hobby robot competitions for maze-running, line following, ribbon climbing and LEGO Mindstorms. There was also a semi-autonomous robot battle called “kung fu” where humanoid robots that take high-level commands (like punch, and step) try to push one another over. There was also sumo, a game where robots must push the other robot out of the ring.
I had hoped the highlight would be the Robo-magellan contest. This is a hobbyist robot car competition, usually done with small robots 1 to 2 feet in length. Because it is hobbyists, and often students, the budgets are very small, and the contest is very simple. Robots must make it through a simple outdoor course to touch an orange cone about 100 yards away. They want to do this in the shortest time, but for extra points they can touch bonus cones along the way. Contestants are given GPS coordinates for the target cones. They get three tries. In this particular contest, to make it even easier, contestants were allowed to walk the course and create some extra GPS waypoints for their robots.
These extra waypoints should have made it possible to do the job with just a GPS and camera, but the hobbyists in this competition were mostly novices, and no robot reached the final cone. The winner got within 40 feet on their last run, but no performance was even remotely impressive. This was unlike past years, where I was told that 6 or more robots would reach the target and there would be real competition. This year’s poor showing was blamed on budgets, and the fact that old teams who had done well had moved on from the sport. Only 5 teams showed up.
The robots were poorly equipped with sensors. While all had a GPS, in one or two cases the GPS systems failed and the robots quickly wandered into things. A few had sonar or touch-bars for obstacle detection, but others did not, and none of them did obstacle detection well at all. For most, if they ran into something, that was it for that race. Some used a compass or accelerometers to help judge when to turn and where to aim, since a GPS is not very good as a compass.
In the post, they are kind enough to link to my video (now back up on YouTube thanks to my disputing the Content-ID takedown) as an example of a fair use parody, and to a talk by (former) fellow EFF director Larry Lessig which incorporated some copyrighted music.
However, some of the statements in the post deserve a response. Let me say first that I do understand a bit of YouTube’s motivations in creating the Content-ID system. YouTube certainly has a lot of copyright violations on it, and it’s staring down the barrel of a billion dollar lawsuit from Viacom and other legal burdens. I can understand why it wants to show the content owners that it wants to help them and wants to be their partner. It is a business and is free to host what it wants. However, it is also part of Google, whose mission is “to organize the world’s information and make it universally accessible and useful,” and of course to not “be evil” in the process of doing so. On the same blog, YouTube declares its dedication to free speech very eloquently.
One of the greatest things that can give a region a sense of identity is the presence of a regional cuisine. In addition to identity it brings in tourists, so every region probably really wishes it had one.
Of course a real regional cuisine takes a long time to develop, even centuries. The world’s great cuisines all were a long time coming, and were often based on the presence of particular local ingredients as much as on the food culture. Some cuisines have arisen quickly, particularly fusion cuisines which arise due to immigrants mixing and from colonialism. Today the market for ingredients is global, though there are still places where particular ingredients are at their best.
One recent regional food, the “Buffalo” chicken wing, is believed to have come from a single restaurant (The Anchor Bar in Buffalo) and spread out to other local establishments and then around the world. Part of its success in spreading around the world is its simplicity and the fact that (unlike many other regional-source foods) it features ingredients found all around the world. Every town would like to have its equivalent of the Buffalo Wing.
To make this happen, I think towns should hold contests among local restaurants to develop such dishes. Restaurants might enter dishes they already specialize in, or come up with something new. The winner, by popular vote, would get their dish named after the town, and found on the menus of other competing restaurants for some period of time.
The following rules might make sense:
Ideally, the dish should try to be based on an ingredient which is available locally, and perhaps at its best locally, but which still can be found in the rest of the world so the dish can spread.
All restaurants submitting a dish must agree that should they win, they will publish recipes for the dish and claim no exclusive rights to it. They will, however, be the only restaurant able to say they have the original dish and were the winner of the contest.
Ideally, recipes will be published in advance, so other restaurants can also make the dish during the contest, in particular restaurants that are not competing. (Competing chefs might deliberately make the dish badly.) In fact, advance publication (and a contest cookbook) might be part of the rules.
“None of the above” should be an encouraged choice on the voting form. The first round might not create a dish worthy of the town.
A panel of chefs would rate the dishes according to difficulty. Dishes that are easier would be encouraged, as these can spread more easily. The list of difficulties would be published for voters to use in making their decisions. That is, voters might pick the second most tasty dish if it’s much easier to make.
Every dish must be available in “chef-approved” form at some minimum number of restaurants, so it is easy to try each dish. Private chefs can compete if they can recruit restaurants to offer their dish.
At the end of the contest, the city’s tourist board would have a budget to promote the dish to tourists.
Voting would be done online, but voters would need to get a token tied to a unique ID so they can’t vote more than once. They need not pick a single dish. The “approval” voting system, where voters can list as many dishes as they find qualified, and the one with the most votes wins, can be used.
It is certainly possible as well to have multiple winners, and the creation of variations on the winning dish would be encouraged.
Would this be an authentic regional cuisine that “comes from the people?” Of course not. But it might be tasty, and if chosen by the people, might grow into something that really belongs to that city.
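The approval voting rule proposed in the list above is simple enough to tally in a few lines of Python. This is just a sketch of the counting step (the dish names are invented, and the token/unique-ID check would happen before ballots reach this function):

```python
from collections import Counter

def approval_winner(ballots):
    """Approval voting: each ballot is a set of approved dishes.
    The dish approved by the most voters wins. Returns (dish, votes)."""
    tally = Counter()
    for ballot in ballots:
        tally.update(set(ballot))  # each voter counts at most once per dish
    return tally.most_common(1)[0]

# Four hypothetical ballots; voters may approve any number of dishes.
ballots = [
    {"chowder", "tart"},
    {"tart"},
    {"chowder", "wing"},
    {"tart", "wing"},
]
print(approval_winner(ballots))  # the tart, approved on 3 of 4 ballots
```

A “none of the above” option fits naturally here too: it is just another name on the ballot, and if it wins, the contest runs another round.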
In a bizarre twist of life imitating art that may be too “meta” for your brain, Constantin Films, the producer of the war movie “Downfall,” has caused the takedown of my video, which was put up to criticise their excessive use of takedowns.
Update: YouTube makes an official statement and I respond.
A brief history:
Starting a few years ago, people started taking a clip from Downfall where Hitler goes on a rampage, and adding fake English subtitles to produce parodies on various subjects. Some were very funny and hundreds of different ones were made. Some were even made about how many parodies there were. The German studio, Constantin, did some DMCA takedowns on many of these videos.
Not to spoil things too much, but the video also makes reference to an alternate way you can get something pulled off YouTube. Studios are able to submit audio and video clips to YouTube which are “fingerprinted.” YouTube then checks all uploaded videos to see if they match the audio or video of some allegedly copyrighted work. When they match, YouTube removes the video. That’s what I have Hitler decide to do instead of more DMCA takedowns, and lo, Constantin actually ordered this, and most, though not all of the Downfall parodies are now gone from YouTube. Including mine.
Now I am sure people will debate the extent to which some of the parodies count as “fair use” under the law. But in my view, my video is about as good an example of a parody fair use as you’re going to see. It uses the clip to criticise the very producers of the clip and the takedown process. The fair use exemption to copyright infringement claims was created, in large part, to assure that copyright holders didn’t use copyright law to censor free speech. If you want to criticise content or a content creator — an important free speech right — often the best way to do that will make use of the content in question. But the lawmakers knew you would rarely get permission to use copyrighted works to make fun of them, and wanted to make sure critical views were not stifled.
I’ve been predicting a great deal of innovation in cars with the arrival of robocars and other automatic driving technologies. But there’s a lot of other computerization and new electronics that will be making its way into cars, and to make that happen, we need to make the car into a platform for innovation, rather than something bought as a walled garden from the car vendor.
In the old days, it was fairly common to get a car without a radio, and to buy the radio of your choice. This happened even in higher end cars. However, the advantages in sound quality and dash integration from a factory-installed radio started to win out, especially with horizontal market Japanese companies who were both good at cars and good at radios.
For real innovation, you want a platform, where aftermarket companies come in and compete. And you want early adopters to be able to replace what they buy whenever they get the whim. We replace our computers and phones far more frequently than our cars and the radios inside them.
To facilitate this, I think the car’s radio and “occupant computer” should be merged, but split into three parts:
The speakers and power amplifier, which will probably last the life of the car, and be driven with some standard interface such as 7.1 digital audio over optical fiber.
The “guts,” which probably live in the trunk or somewhere else not space-constrained, and connect to the other parts.
The “interface” which consists of the dashboard panel and screen, with controls, and any other controls and screens, all wired with a network to the guts.
Ideally the hookup between the interface and the guts is a standardized protocol. I think USB 3.0 can handle it and has the bandwidth to display screens on the dashboard, and on the back of the headrests for rear passenger video. Though if you want to imagine an HDTV for the passengers, it’s possible that we would add a video protocol (like HDMI) to the USB. But otherwise USB is general enough for everything else that will connect to the guts. USB’s main flaw is its master-slave approach, which means the guts needs to be both a master, for control of various things in the car, and a slave, for when you want to plug your laptop into the car and control elements in the car — and the radio itself.
Of course there should be USB jacks scattered around the car to plug in devices like phones and memory sticks and music players, as well as to power devices up on the dash, down in the armrests, in the trunk, under the hood, at the mirror and right behind the grille.
Finally there need to be some antenna wires. That’s harder to standardize, but you can bet we need antennas for AM/FM/TV, satellite radio, GPS, cellular bands, and various 802.11 protocols including the new 802.11p. In some cases, however, the right solution is just to run USB 3.0 to places an antenna might go, and then have a receiver or transceiver with integrated antenna which mounts there. A more general solution is best.
This architecture lets us replace things with the newest and latest stuff, and lets us support new radio protocols which appear. It lets us replace the guts if we have to, and replace the interface panels, or customize them readily to particular cars.
I recently stayed at the home of a friend up in Vancouver. She had some electrical wiring problems, and since I know wiring, I helped her with them as well as some computer networking issues. Very kindly she said that made me a houseguest from heaven (as opposed to the houseguests from hell we have all heard about.) I was able to leave her place better than I found it. Well, mostly.
This immediately triggered a business idea in my mind which seems like it would be cool but is, alas, probably illegal. The idea would be a service where people with guestrooms, or even temporarily vacant homes, would provide free room (and board) to qualified tradespeople who want to have a cheap vacation. Electricians, handypeople, plumbers, computer wizards, housepainters, au pairs, gardeners and even housecleaners and organizers, would stay in your house, and leave it having done some repairs or cleanup. In some cases, like cleanup, pool maintenance and yard sweeping, the people need not be skilled professionals, they could be just about anybody.
Obviously there would need to be a lot of logistics to work out. A reliable reputation system would be needed if you’re going to trust your house to such strangers, particularly if trusting the watching of your children. You would need to know both that they are able to do the work and not about to rob you. You would want to know if they will keep the relationship a business one or expect a more friendly experience, like couch surfing.
In addition, the homeowners would need reputations of their own. Because, for a skilled tradesperson, a night of room and board is only worth a modest amount of work. You can’t give somebody a room and expect them to work the whole day on your project — or even much more than an hour. Perhaps if a whole house is given over, with rooms for the person and a whole family, more work could be expected. The homeowner may not be good at estimating the amount of work needed, and come away disappointed when told that the guest spent 2 hours on the problem and decided it was a much bigger problem.
Trading lodging for services is an ancient tradition, particularly on farms. In childcare, the “au pair” concept has institutionalized it and made it legal.
But alas, legality is the rub. The tax man will insist that both parties are making income and want to tax it, as barter is taxable. The local contractor licencing agency will insist that work be done only by locally licenced contractors, to local codes, possibly with permits and inspections. And immigration officials will insist that foreign tourists are illegally working. And there would be the odd civil disputes. A union might tell members not to take work even from remote members of cousin unions.
The civil disputes could be kept to a minimum by making the jobs short and a good deal for the guests, since for the homeowners, the guest room was typically doing nothing anyway — thus the success of couch surfing — and making slightly more food is no big deal. But the other legal risks would probably make it illegal for a company to get in the middle of all this. At least in the company’s home country. A company based in some small nation might not be subject to remote laws.
A couple of weeks ago I wrote about the need for a good robocar driving simulator. Others have been on this track even earlier and are arranging a pair of robotic driving contests in simulation for some upcoming AI conferences.
The main contest is a conventional car race. It will be done in the TORCS simulator I spoke of, where people have been building robot algorithms to control race cars for some time, though not usually academic AI researchers. In addition, they’re adding a demolition derby which should be a lot of fun, though not exactly the code you want to write for safety.
This is, however, not the simulator contest I wrote about. The robots people write for use in computer racing simulators are given a pre-distilled view of the world. They learn exactly where the vehicle is, where the road edges are and where other cars are, without error. Their only concern is to drive based on the road and the physics of their vehicle and the track, and not hit things — or in the case of the derby, to deliberately hit things.
The TORCS engine is a good one, but is currently wired to do only an oval racetrack, and the maintainers, I am told, are not interested in having it support more complex street patterns.
While simulation in an environment where all the sensing problems are solved is a good start, a true robocar simulation needs simulated sensors — cameras, LIDAR, radar, GPS and the works — and then software that takes that and tries to turn it into a map of where the road is and where the vehicles and other things are. Navigation is also an important thing to work out. I will try to attend the Portland version of this conference to see this contest, however, as it should be good fun and generate interest.
But if they can’t do that, I want them to let me to print my boarding pass long before my flight. In particular, to print my return boarding pass when I print my outgoing one. That’s because I have a printer at home but often don’t have one on the road.
Of course, you can’t actually check in until close to the flight, so this boarding pass would be marked as preliminary, but still have bar codes identifying what they need to scan. On the actual day of the flight, I would check in from my phone or laptop, so they know I am coming to the plane. There’s no reason the old boarding pass’s bar codes can’t then be activated as ready to work. Sure, it might not know the gate, and the seat may even change, but such seat changes are rare and perhaps then I would need to go to a kiosk to swap the old pass for a new one. If the flight changes then I may also need to do the swap but the swap can be super easy — hold up old pass, get new one.
I could also get a short code to write on the pass when I do my same-day check-in, such code being usable to confirm the old pass has been validated.
It is no coincidence that two friends of mine have both founded companies recently to build telepresence robots. These are easy-to-drive remote-control robots which have a camera and screen at head height. You can inhabit the robot, and drive it around a flat area and talk to people by videoconferencing. You can join meetings, go visit people or inspect a factory. Companies building these robots, initially at high prices, intend to sell them both to executives who want to remotely tour remote offices and to companies who want to give cheaper remote employees a more physical presence back at HQ.
There are also a few super-cheap telepresence robots, such as the Spykee, which runs Skype video conferencing and can be had for as low as $150. It’s not very good, and the camera is very low down, and there’s no screen, but it shows just how cheap such a product can get.
“Anybots” QA telepresence robot
When they get down to a price like that, it seems inevitable to me that we will see an emergency services robot on every block, primarily for use by the police. When there is a police, fire or ambulance call to an address, an officer could immediately connect to the robot on that block and drive it to the scene, to be telepresent. The robot would live in a small, powered protective closet, either paid for by the city or, more likely, donated by some neighbour on the block who wants the fastest possible emergency response. Called into action, the robot’s garage door would open and the robot would drive out, and probably be at the location of the emergency within 60 to 120 seconds, depending on how densely they are placed. In the meantime actual first responders might also be on the way.
Numbers for buses are now worse at 4300. Source data predates the $4/gallon gas crisis, which probably temporarily improved it.
Light rail numbers are significantly worse — reason unknown. San Jose’s light rail shows modest improvement to 5300, but the overall average reported at 7600 is more than twice the energy of cars!
Some light rail systems (See Figure 2.3 in Chapter 2) show ridiculously high numbers. Galveston, Texas shows a light rail that takes 8 times as much energy per passenger as the average SUV. Anybody ridden it and care to explain why its ridership is so low?
Heavy rail numbers also worsen.
Strangely, average rail numbers stay the same. This may indicate an error in the data or a change of methodology, because while Amtrak and commuter rail are mildly better than the average, it’s not enough to reconcile the new average numbers for light and heavy rail with the rail average.
I’ve made a note that the electric trike figure is based on today’s best models. Average electric scooters are still very, very good but only half as good as this.
I’ve added a figure I found for the East Japan railway system. As expected, this number is very good, twice as good as cars, but suggests an upper bound, as the Japanese are among the best at trains.
I removed the oil-fueled-agriculture number for cyclists, as that caused more confusion than it was worth.
There is no trolley bus number this year, so I have put a note on the old one.
It’s not on the chart, but I am looking into high speed rail. Germany’s ICE reports a number around 1200 BTU/PM. The California HSR project claims they are going to do as well as the German system, which I am skeptical of, since it requires a passenger load of 100M/year, when currently less than 25M fly these routes.
In my article two weeks ago about the odds of knowing a cousin I puzzled over the question of how many 3rd cousins a person might have. This is hard to answer, because it depends on figuring out how many successful offspring per generation the various levels of your family (and related families) have. Successful means that they also create a tree of descendants. This number varies a lot among families, it varies a lot among regions and it has varied a great deal over time. An Icelandic study found a number of around 2.8 but it’s hard to conclude a general rule. I’ve used 3 (81 great-great-grandchildren per couple) as a rough number.
There is something, however, that we can calculate without knowing how many children each couple has. That’s because we know, pretty accurately, how many ancestors you have. Our number gets less accurate over time because ancestors start duplicating — people appear multiple times in your family tree. And in fact by the time you go back large numbers of generations, say 600 years, the duplication is massive; all your ancestors appear many times.
To answer the question of “How likely is it that somebody is your 16th cousin” we can just look at how many ancestors you have back there. 16th cousins share with you a couple 17 generations ago. (You can share just one ancestor, which makes you a half-cousin.) So your ancestor set from 17 generations ago will be 65,536 different couples. Actually fewer than that due to duplication, but at this level in a large population the duplication isn’t as big a factor as it becomes later, and when it is, it’s because of a close-knit community, which means you are even more related.
So you have 65K couples and so does your potential cousin. The next question is, what is the size of the population in which they lived? Well, back then the whole world had about 600 million people, so that’s an upper bound. So we can ask, if you take two random sets of 65,000 couples from a population of 300M couples, what are the odds that none of them match? With your 65,000 ancestors being just 0.02% of the world’s couples, and your potential cousin’s ancestors also being that set, you would think it likely they don’t match.
Turns out that’s almost nil. Like the famous birthday paradox, where a room of 30 people usually has 2 who share a birthday, the probability that there is no intersection between these large groups is quite low. It is 99.9999% likely from these numbers that any given person is at least a 16th cousin, and 97.2% likely that they are a 15th cousin — but only 1.4% likely that they are an 11th cousin. It’s a double exponential explosion. The rough formula used is that the probability of no match will be (1-2^C/P)^(2^C), where C is the cousin number and P is the total source population (in couples). To be strict this should be done with factorials, but the numbers are large enough that pure exponentials work.
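The rough formula is easy to check directly. A quick sketch in Python (the function name is my own) reproduces the percentages above from a pool of 300 million couples:

```python
def cousin_match_probability(c, population_couples):
    """Probability that a random person is at least a c-th cousin.

    c-th cousins share an ancestral couple c+1 generations back, where you
    have roughly 2**c ancestral couples. The chance that two random sets of
    2**c couples drawn from the population are disjoint is approximately
    (1 - 2**c / P) ** (2**c), so the match probability is one minus that.
    """
    ancestors = 2 ** c
    p_no_match = (1 - ancestors / population_couples) ** ancestors
    return 1 - p_no_match

# A world of ~600M people, so roughly 300M couples:
for c in (11, 15, 16):
    print(f"{c}th cousin: {100 * cousin_match_probability(c, 300e6):.4f}%")
```

Running this gives about 1.4% for 11th cousins, 97.2% for 15th and well over 99.99% for 16th, matching the figures in the text.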
Now, of course, the couples are not selected at random, nor are they selected from the whole world. For many people, their ancestors would all have lived on the same continent, perhaps even in the same country. They might all come from the same ethnic group. For example, if you think that all the ancestors of the two people came from the half million or so Ashkenazi Jews of the 18th century, then everybody is a 10th cousin.
Many populations did not interbreed much, and in some cases of strong ethnic or geographic isolation, barely at all. There are definitely silos, and they sometimes existed in the same town, where there might be far less interbreeding between groups than within them. Over time, however, the numbers overwhelm even this. Within a close-knit community, say a city of 50,000 couples who bred mostly with each other, everybody will be a 9th cousin.
These numbers provide upper bounds. Due to the double exponential, even when you start reducing the population numbers due to out-breeding and expansion, it still catches up within a few generations. This is just another measure of how we are all related, and also how meaningless very distant cousin relationships, like 10th cousins, are. As I’ve noted in other places, if you leave aside the geographic isolation that some populations lived in, you don’t have to go back more than a couple of thousand years to reach the point where we are not just all related, but we all have the same set of ancestors (i.e. everybody who procreated), just arranged in a different mix.
The upshot of all this: If you discover that you share a common ancestor with somebody from the 17th century, or even the 18th, it is completely unremarkable. The only thing remarkable about it is that you happened to know the path.
Today an interesting paper (written with the assistance of the EFF) was released. The authors have found evidence that governments are compromising trusted “certificate authorities” by issuing warrants to them, compelling them to create a false certificate for a site whose encrypted traffic they want to snoop on.
That’s just one of the many ways in which web traffic is highly insecure. The biggest reason, though, is that the vast majority of all web traffic takes place “in the clear” with no encryption at all. This happens because SSL/TLS, the “https” system is hard to set up, hard to use, considered expensive and subject to many false-alarm warnings. The tendency of security professionals to deprecate anything but perfect security often leaves us with no security at all. My philosophy is different. To paraphrase Einstein:
Ordinary traffic should be made as secure as can be made easy to use, but no more secure
In this vein, I have prepared a new article on how to make the web much more secure, and it makes sense to release it today in light of the newly published threat. My approach, which involves new browser behaviour and some optional new practices for sites, calls for the following:
Make TLS more lightweight so that nobody is bothered by the cost of it
Automatic provisioning (Zero UI) for self-signed certificates for domains and IPs.
A different meaning for the lock icon: Strong (Locked), Ordinary (no icon) and in-the-clear (unlocked).
A new philosophy of browser warnings with a focus on real threats and on changes in security, rather than static states deemed insecure.
A means for sites to provide a file advising browsers about which warnings make sense at that site.
There is one goal in mind here: the web must become encrypted by default, with no effort on the part of site operators and users. False-positive warnings that go off too frequently make security poor and hard to use, and must be eliminated.
Watching and managing children is one of the major occupations of the human race. A true robot babysitter is still some time in the future, and getting robocars to the level that we will trust them as safe to carry children is also somewhat in the future, but it will still happen much sooner.
Today I want to explore the implications of a robocar that is ready to safely carry children of certain age ranges. This may be far away because people are of course highly protective of their children. They might trust a friend to drive a child, even though human driving records are poor, because the driver is putting her life on the line just as much as the child’s, while the robot is just programmed to be safe, with no specific self-interest.
A child’s robocar can be designed to higher safety standards than an adult’s, with airbags in all directions, crumple zones designed for a single occupant in the center and the child in a 5-point seatbelt. With today’s safety systems, racecar drivers routinely walk away from crashes at 150mph. Making a car that won’t hurt the child in a 40mph crash is certainly doable, though not without expense. A robocar’s ability to anticipate an accident might even allow it to swivel the seat around so that the child’s back is to the accident, something even better than an airbag.
The big issue is supervision of smaller children. It’s hard to say what age ranges of children people might want to send via robocar. In some ways infants are easiest, as you just strap them in and they don’t do much. All small children today are strapped in solidly, and younger ones are in a rear facing seat where they don’t even see the parent. (This is now recommended as safest up to age 4 but few parents do that.) Children need some supervision, though real problems for a strapped in child are rare. Of course, beyond a certain age, the children will be fully capable of riding with minimal supervision, and by 10-12, no direct supervision (but ability to call upon an adult at any time.)
One of the things that’s harder to predict about robocars is what they will mean for how cities are designed and how they evolve. We’re notoriously bad at predicting such things, but it is still tempting.
A world of robocars offers the potential for something I am dubbing the “poor man’s teleporter.” That’s a fleet of comfortable robotaxis, each of which is, while you are in it, a fully functional working or relaxing environment. Such robotaxis would have a desk, a large screen and a very high speed wireless net connection. They have a comfy reclining chair (or bed) and anything else you need from the office environment. (Keyboards and mice are problematic, as I have discussed elsewhere, but there may be ways to solve that.)
The robotaxi will deliberately pick the most comfortable route for a trip, with few turns, few stops and gentle acceleration. It will gimbal in corners and have an active suspension system eliminating bumps. The moment you enter it, your desktop could appear on the screen, copied from the desk you left (thanks to communication with one of your wearable devices, probably.) You can do high quality videoconferencing, work on the net, or just watch a video or read a book — the enclosed book reader could be set to the page you were last reading elsewhere. If you work in a building with a lobby, the electric robotaxi could enter the lobby and meet you right at the elevator. It might even go vertical and ride up the elevator to get you during less busy times. (For some real science fiction, the robotaxis in Minority Report somehow climbed the buildings and parked in people’s homes.)
For many it would be as though they had not left their desks. Almost all the trip will be productive time. As such, while people won’t want to spend forever in the car, many might find distance and trip time to not be particularly important, at least not for trips around town during the workday. While everybody wants to get home to family sooner, even commute times could become productive times with employers who let the employee treat the travel time as work time. Work would begin the moment you stepped into the car in the morning.
We’ve seen a taste of this in Silicon Valley, as several companies like Google and Yahoo run a series of commute vans for their employees. These vans have nice chairs, spaces for laptops and wireless connectivity into the corporate network. Many people take advantage of these vans and live in places like San Francisco, which may be an hour-long trip to the office. The companies pay for the van because the employees start the workday when they get on it.
This concept will continue to expand, and I predict it will expand into robocars. The question is, what does it mean to how we live if we eliminate the time-cost of distance from many trips? What if we started viewing our robotaxis as almost like a teleporter, something that takes almost no time to get us where we want to go? It’s not really no-time, of course, and if you have to make a meeting you still have to leave in time to get there. It might be easier for some to view typical 15 minute trips around a tight urban area as no-time while viewing 30-60 minute trips as productive but “different time.”
Will this make us want to sprawl even more, with distance not being important? Or will we want to live closer, so that the trips are more akin to teleportation by being productive, short and highly predictable in duration? It seems likely that if we somehow had a real Star-Trek style transporter, we might all live in country homes and transport on demand to where the action is. That’s not coming, but the no-lost-time ride is. We might not be able to afford a house on the nice-walkable-shops-and-restaurants street, but we might live 2 miles from it and always be able to get to it, with no parking hassle, in 4 minutes of productive time.
What will the concept of a downtown mean in such a world? “Destination” retailers and services, like a movie house, might decide they have no real reason to be in a downtown when everybody is coming by robotaxi. Specialty providers will also see no need to pay a premium to be in a downtown. Right now they don’t get walk-by traffic, but they do like to be convenient to the customers who seek them out. Stores that do depend on walk-by traffic (notably cafes and many restaurants) will want to be in places of concentration and walking.
But what about big corporate offices that occupy the towers of our cities? They go there for prestige, and sometimes to make it easy to have meetings with other downtown companies. They like having lots of services for their employees and for the business. They like being near transit hubs to bring in those employees who like transit. What happens when many of these needs go away?
For many people, the choice of where to live is overwhelmingly dominated by their children — getting them nice, safe neighbourhoods to play in, and getting them to the most desired schools. If children can go to schools anywhere in a robocar, how does that alter the equation? Will people all want bigger yards in which to cocoon their children, relying on the robocar to take the children to play-dates and supervised parks? Might they create a world where the child goes into the garage, gets in the robocar and tells it to go to Billy’s house, and it deposits the child in that garage, never having been outside — again like a teleporter to the parents? Could this mean a more serious divorce between community and geography?
While all this is going on, we’re also going to see big strides in videoconferencing and virtual reality, both for adults, and as play-spaces for adults and children. In many cases people will be interacting through a different sort of poor man’s teleporter, this one taking zero time but not offering physical contact.
Clearly, not all of these changes match our values today. But what steps that make sense could we actually take to promote our values? It doesn’t seem possible to ban the behaviours discussed above, or even to bend them much. What do you think the brave new city will look like?
It is often said that cars caused the suburbanization of cities. However, people didn’t decide they wanted a car lifestyle and thus move where they could drive more. They sought bigger lots and yards, and larger detached houses. They sought quieter streets. While it’s not inherent to suburbs, they also sought better schools for kids and safer neighbourhoods. They gave up having nearby shops and restaurants and people to get those things, and accepted the (fairly high) cost of the car as part of the price. Most often for the kids. Childless and young people like urban life; the flight to the suburbs was led by the parents.
This doesn’t mean they stopped liking the aspects of the “livable city.” Having stuff close to you. Having your friends close to you. Having pleasant and lively spaces to wander, and in which you regularly see your friends and meet other people. Walking areas with interesting shops and restaurants and escape from the hassles of parking and traffic. They just liked the other aspects of sprawl more.
They tried to duplicate these livable areas with shopping malls. These are too sterile and corporate, but they are also climate controlled and safer, and they caused the downfall of many downtowns. Then big box stores, more accessible from the burbs, kept at that tack.
The robotaxi will allow people to get more of what they sought from the “livable city” while still in sprawl. It will also let them get more of what they sought from the suburbs, in terms of safety and options for their children. They may still build pleasant pedestrian malls in which one can walk and wander among interesting things, but people who live 5 miles away will be able to get to them in under 10 minutes. They will be delivered right into the pedestrian zone, not to a sprawling parking lot. They won’t have to worry about parking, and what they buy could be sent to their home by delivery robot — no need to even carry it while walking among shops. They will seek to enjoy the livable space from 5 miles away the same way that people today who live 4 blocks away enjoy those spaces.
But there’s also no question that there will continue to be private malls trying to meet this need. Indeed the private malls will probably offer free or validated robotaxi service to the mall, along with delivery, if robotaxi service is as cheap as I predict it can be. Will the public spaces, with their greater variety and character, be able to compete? They will also have weather and homeless people and other aspects of street life that private malls try to push away.
The arrival of the robocar baby-sitter, which I plan to write about more, will also change urban family life. Stick the kid in the taxi and send him to the other parent, or a paid sitter service, all while some adult watches on the video and redirects the vehicle to one of a network of trusted adults if some contingency arises. Talk about sending a kid to a time-out!
Here’s a suggestion that will surely rankle some in the free software/GPL community, but which might be of good benefit to the overall success of such systems.
What I propose is a GPL-like licence under which source code could be published, but which effectively forbids one thing: work to make it run on proprietary operating systems, in particular Windows and MacOS.
The goal would be to allow the developers of popular programs for Windows, in particular, to release their code and allow the FOSS community to generate free versions which can run on Linux, *BSD and the like. Such companies would do this after deciding that there isn’t enough market on those platforms to justify a commercial venture in the area. Rather than, as Richard Stallman would say, “hoarding” their code, they could release it in this fashion. However, they would not fear they were doing much damage to their market on Windows. They would have to accept that they were disclosing their source code to their competitors and customers, and some companies fear that and will never do this. But some would, and in fact some already have, even without extra licence protection.
An alternate step would be to release it specifically so the community can make sure the program runs under WINE, the Windows API platform for Linux and others. Many Windows programs already run under WINE, but almost all of them have little quirks and problems. If the programs are really popular, the WINE team patches WINE to deal with them, but it would be much nicer if the real program just got better behaved. In this case, the licence would have some rather unusual terms, in that people would have to produce versions and EXEs that run only under WINE — they would not run on native Windows. They could do this by inserting calls to check if they are running on WINE and aborting if not, or they could do something more complex, like making use of some specific APIs added to WINE that are not found in Windows. Of course, coders could readily remove these changes and make binaries that run on Windows natively, but coders can also just pirate the raw Windows binaries — both would be violations of copyright, and the latter is probably easier to do.
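Such a WINE-only check is straightforward to write: WINE’s ntdll exports a wine_get_version function that Microsoft’s native ntdll does not. A minimal sketch in Python (a real product would do the equivalent in its own language at startup; the helper name here is my own):

```python
import ctypes

def running_under_wine():
    """Return True under WINE, False on native Windows,
    and None when there is no Windows API at all (e.g. plain Linux)."""
    if not hasattr(ctypes, "WinDLL"):
        return None  # not a Windows API environment
    ntdll = ctypes.WinDLL("ntdll")
    # WINE's ntdll exports wine_get_version; Microsoft's does not.
    return hasattr(ntdll, "wine_get_version")

# A WINE-only build could refuse to start on native Windows:
if running_under_wine() is False:
    raise SystemExit("This build runs only under WINE.")
```

As the text notes, this is only a speed bump: stripping the check out of the source is trivial, so the real enforcement is the licence, not the code.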