My 4-camera 4K eclipse video, and thoughts on traffic from the Eclipse

The Eclipse of 2017 caused dire traffic warnings, even from myself. Since a total eclipse is the most amazing thing you will see, and one was coming to a rich country where almost everybody owns a car, and hundreds of millions live within a day’s drive — I wondered how we would not have horrendous traffic. (You can see my main Eclipse report and gallery here or see all my Eclipse articles.)

Also look out below for a new 4K video I made with 4 different video cameras running during the eclipse. I have started you 3 minutes in for the short-attention-span world, but you might also enjoy the 3 minutes leading up as the excitement builds. Even on an HD display, be sure to click through to YouTube to watch it full screen.

As described, the 4 cameras are two 4K cell phones facing forward and back, plus an HD video from a 1200mm superzoom camera and snippets of 4K video and stills from the main telescope and Sony A7rII.

The big places for predicted bad traffic were central Oregon, because it had the best weather of any location within reach of everybody from Seattle to Los Angeles, and parts of South Carolina, which were closest for the whole eastern seaboard. One popular eclipse site published a detailed analysis of potential traffic, but in many cases it was quite wrong.

The central Oregon spine around the tiny town of Madras did get really bad traffic, with reports of 4 to 6 hours to get out. That was not unexpected, since the area does not have very many roads, and is close to Washington and relatively close to California. At the same time, a lot of traffic diverted to the Salem area, which got a nice clear sky forecast and has an interstate and many other roads. For those planning ahead, Madras was the best choice, because the weather is much more unpredictable west of the Cascades. But once the clear-sky forecast firmed up, many people from Seattle, Portland and California should have shifted to the more populated areas with the larger roads.

Since Weiser (on the Oregon/Idaho border) was only 2 hours more driving but promised much less traffic, I decided to go to the Snake River valley. It was the right choice — there was almost no traffic leaving Weiser. In fact, Weiser did not get overwhelmed with people as had been expected, disappointing the businesses. Many thought that a large fraction of Boise would try to get up to that area, but they didn’t. We actually wandered a bit and ended up over the river in a school field in Annex, Oregon.

There was no problem finding space, even for free.

This is a pattern we’ve seen many times now — dire predictions of terrible traffic, then almost nothing. It turns out the predictions work too well. The famous “Carmageddon” in Los Angeles never materialized — even with a major link cut, traffic was lighter than normal.

This is, in turn, a tragedy. It seems a lot of people did not go see the eclipse because they were scared of bad traffic. What a great shame.

4K Video

At my site I had 4 cameras recording video. I set up two cell phones, both able to do 4K, looking at our group from in front and behind. The one behind I put in portrait mode, almost capturing the sun, to show that view, while the one in front showed us looking at the eclipse and also the shadow approaching on the hills.  read more »

New NHTSA Robocar regulations are a major, but positive, reversal

NHTSA released their latest draft robocar regulations just a week after the U.S. House passed a new regulatory regime and the Senate started working on its own. The proposed regulations preempt state regulation of vehicle design, and allow companies to apply for high volume exemptions from the standards that exist for human-driven cars.

It’s clear that the new approach will be quite different from the Obama-era one, much more hands-off. There are not a lot of things to like about the Trump administration but this could be one of them. The prior regulations reached 116 pages with much detail, though they were mostly listed as “voluntary.” I wrote a long critique of the regulations in a 4 part series which can be found in my NHTSA tag. They seem to have paid attention to that commentary and the similar commentary of others.

At 26 pages, the new report is much more modest, and actually says very little. Indeed, I could sum it up as follows:

  • Do the stuff you’re already doing
  • Pay attention to where and when your car can drive and document that
  • Document your processes internally and for the public
  • Go to the existing standards bodies (SAE, ISO etc.) for guidance
  • Create a standard data format for your incident logs
  • Don’t forget all the work on crash avoidance, survival and post-crash safety in modern cars that we worked very hard on
  • Plan for how states and the feds will work together on regulating this

Goals vs. Approaches

The document does a better job of understanding the difference between goals — public goods that it is the government’s role to promote — and approaches to those goals, which should be entirely the province of industry.

The new document is much more explicit that the 12 “safety design elements” are voluntary. I continue to believe that there is a risk they may not be truly voluntary, as there will be great pressure to conform with them, and possible increased liability for those who don’t, but the new document tries to avoid that, and its requests are much milder.

The document reflects the important realization that developers in this space will be creating new paths to safety and establishing new and different concepts of best practices. Existing standards have value, but they can at best encode conventional wisdom. Robocars will not be created using conventional wisdom. The new document instead mostly recommends that the existing standards be considered, which is a reasonable plan.

A lightweight regulatory philosophy

My own analysis is guided by a lightweight regulatory approach which has been the norm until now. The government’s role is to determine important public goals and interests, and to use regulations and enforcement when, and only when, it becomes clear that industry can’t be trusted to meet these goals on its own.

In particular, the government should very rarely regulate how something should be done, and focus instead on what needs to happen as the end result, and why. In the past, all automotive safety technologies were developed by vendors and deployed, sometimes for decades, before they were regulated. When they were regulated, it was more along the lines of “All cars should now have anti-lock brakes.” Only with the more mature technologies have the regulations had to go into detail on how to build them.

Worthwhile public goals include safety, of course, and the promotion of innovation. We want to encourage both competition and cooperation in the right places. We want to protect consumer rights and privacy. (The prior regulations proposed a mandatory sharing of incident data which is watered down greatly in these new regulations.)  read more »

NTSB Tesla Crash report (New NHTSA regs to come)

The NTSB (National Transportation Safety Board) has released a preliminary report on the fatal Tesla crash with the full report expected later this week. The report is much less favourable to autopilots than their earlier evaluation.

(This is a giant news day for Robocars. Today NHTSA also released their new draft robocar regulations which appear to be much simpler than the earlier 116 page document that I was very critical of last year. It’s a busy day, so I will be posting a more detailed evaluation of the new regulations — and the proposed new robocar laws from the House — later in the week.)

The earlier NTSB report indicated that though the autopilot had its flaws, overall the system was working. That is to say, though some drivers were misusing the autopilot, the combined population of drivers, those misusing it together with those who were not, was overall safer than drivers with no autopilot. The new report makes it clear that this does not excuse the autopilot being so easy to abuse. (By abuse, I mean ignoring the warnings and treating it like a robocar, letting it drive you without actively monitoring the road, ready to take control.)

While the report mostly faults the truck driver for turning at the wrong time, it blames Tesla for not doing a good enough job to assure that the driver is not abusing the autopilot. Tesla makes you touch the wheel every so often, but NTSB notes that it is possible to touch the wheel without actually looking at the road. NTSB also is concerned that the autopilot can operate in this fashion even on roads it was not designed for. They note that Tesla has improved some of these things since the accident.

This means that “touch the wheel” systems will probably not be considered acceptable in the future, and there will have to be some means of assuring the driver is really paying attention. Some vendors have decided to put in cameras that watch the driver, or in particular the driver’s eyes, to check for attention. After the Tesla accident, I proposed a system that tests driver attention from time to time and penalizes drivers who are not paying attention, which could do the job without adding new hardware.
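For the sake of illustration only, here is a minimal sketch of how such a periodic attention test might be scheduled and enforced in software; the timing values, the idea of counting “strikes” as the penalty, and all of the function names are assumptions of mine, not a description of any real system.

```python
# Hypothetical sketch of a periodic driver attention test (illustrative only).
import random
import time

class AttentionMonitor:
    def __init__(self, min_gap_s=120, max_gap_s=600, response_window_s=5):
        self.min_gap_s = min_gap_s
        self.max_gap_s = max_gap_s
        self.response_window_s = response_window_s
        self.strikes = 0          # e.g. enough strikes could disable the autopilot

    def next_check_delay(self):
        # Randomize the timing so the test cannot be anticipated and gamed.
        return random.uniform(self.min_gap_s, self.max_gap_s)

    def run_check(self, prompt_driver, driver_responded):
        """prompt_driver(): chime/visual cue; driver_responded(): poll for input."""
        prompt_driver()
        deadline = time.monotonic() + self.response_window_s
        while time.monotonic() < deadline:
            if driver_responded():
                return True       # attentive: nothing happens
            time.sleep(0.1)
        self.strikes += 1         # inattentive: record a strike (the "punishment")
        return False
```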

It also seems that autopilot cars will need to have maps of what roads they work on and which they don’t, and limit features based on the type of road you’re on.

Photo gallery from 2017 total solar eclipse

I was just outside Weiser Idaho, a small town on the Snake river, for the 2017 Eclipse, which was an excellent, if short, spectacle which reawakened U.S. interest in total eclipses. They are, as I wrote earlier, the most spectacular natural phenomenon you can see on the Earth, but due to their random pattern, it’s been a long time since one has covered so much of the world’s richest country.

For me, it was my sixth total eclipse, but the first I could drive to. I began this journey in Mexico in 1991, with the super-eclipse of that year, which was also the last to visit the United States (it was visible on the big island of Hawai`i). Since then I have flown around the world to the Curacao area, to the Black Sea, to the Marshall Islands (more photos) and French Polynesia to see other total eclipses. And I will continue to do so, starting 2 years from now in Argentina.

See the gallery

Before you read on, I recommend you enjoy my Gallery of 2017 Eclipse Photos in HD resolution. When going through them, I recommend you click the “i” button so you can read the descriptions; they do not show in the slide show.

HDR from main camera

Why it’s impossible (today) to photograph

I did not photograph my first eclipse (nor should anybody photograph their first), but every photographer, seeing such a spectacle, hopes to capture it. We can’t, because in addition to being the most spectacular natural event, it’s also the one with the greatest dynamic range. In one small field you have brilliant jets of fire coming off the sun, its hot inner atmosphere, its giant glowing outer atmosphere and a dimly lit dark sky in which you can see stars. And then there is the unlit side of the moon, which appears to be the blackest thing you have ever seen. While you can capture all these light values with a big bracket, no display device can come close to showing that 24 stop range. Only the human eye and visual system can perceive it.

Some day, though, they will make display devices that can do this, but even then it will be tough. The eclipse covers just a few degrees of sky, but in reality it’s a full 360 degree experience, with eerie light and the temporary glow of twilight in every direction. Still, we try.

In the future, when there is a retinal resolution VR headset with 24 bits of HDR light level ability, we might be able to show people an eclipse without going to one. Though you should still go.

Moment of 3rd contact

That’s why these photographs are so different. Every exposure reveals a different aspect of the eclipse. Short exposures show the prominences and the “chromosphere” — the inner atmosphere of the sun visible only at the start and end of the eclipse. Longer exposures reveal more of the giant corona. The fingers of the outer corona require 2 to 4 second exposures! The most interesting parts happen at 2nd and 3rd contact (the start and end) and they also have many aspects. About 1/60th of a second shows the amazing diamond ring by letting the tiny sliver of sun blow out the sensor to make the diamond, as it does to the eye.
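As a rough sketch of that bracketing arithmetic, the snippet below lists one shutter speed per stop at a fixed aperture and ISO. The 1/4000 s starting point and the 14-stop span are my own illustrative assumptions; a shutter bracket alone covers only part of the roughly 24 stop scene range, with aperture and ISO changes making up the rest.

```python
# Illustrative exposure bracket: one shutter speed per photographic stop.
def shutter_bracket(shortest=1/4000, stops=14):
    # Each step doubles the exposure time, i.e. adds one stop of light.
    return [shortest * (2 ** i) for i in range(stops + 1)]

for t in shutter_bracket():
    # Fast speeds print as fractions, slow ones in seconds.
    print(f"1/{round(1 / t)} s" if t < 1 else f"{t:g} s")
# Runs from 1/4000 s (diamond ring, prominences) to about 4 s (outer corona).
```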

Time to rename the partial eclipse

One thing that saddens and frustrates me is that all of this is only visible in a band less than 100 miles wide where the eclipse is total. Outside that, for thousands of miles, one can see (with eye protection) a “partial eclipse.” They both get called an eclipse but the difference is night and day. Yet I think the naming makes people not understand the difference. They think a “90% partial eclipse” is perhaps 90% as interesting as a total eclipse. Nothing could be more wrong. There are really three different things:

  1. The total eclipse, the most amazing thing you will ever see.
  2. The >98% partial eclipse (and the annular eclipse), which are definitely interesting events, but still just a tiny shadow of what a total eclipse is.
  3. The ordinary partial eclipse, which is a fun and educational curiosity.

I constantly meet people who think they saw “the eclipse” when, to me and all others who have seen one, only the total eclipse is the eclipse. While the 98% partial is interesting, nobody should ever settle for it, because if you are that close to the band of totality, you would be nuts not to make the effort to go the extra distance. In a total eclipse, you see all that the partial has to offer, and even a few partial effects not seen except at 99.9%.

A wider angle HDR with deep corona

As such, I propose we rename the partial eclipse, calling it something like a “grazing transit of the moon.” An eclipse technically is a transit of the moon over the sun, but my main goal is to use a different term for the partial and total so that people don’t get confused. We would tell people in the partial zone, “you saw a transit, hope it was interesting,” while telling people in the total zone, “You saw a solar eclipse, wasn’t that the most amazing thing you’ve ever seen?”

Automating the photography

This was the first eclipse I have ever driven to, and because of that I went a bit overboard, since I was able to bring all sorts of gear. I had to stop myself and scale back, but I still brought 2 telescopes, 4 cameras, one long lens, 5 tripods and more.  read more »

Your eclipse guide (with the things not in many eclipse guides)

I will be heading to western Idaho this weekend to watch my sixth total Eclipse. That makes me a mid-grade eclipse chaser, so let me tell you some important things you need to know, which are not in some of the other eclipse guides out there. For good general sites look at places like NASA’s Eclipse Guide which has nice maps or this map.

Totality is everything

The difference between a total solar eclipse and a partial one — even a 98% partial one — is literally night and day. It’s like the difference between sex and holding hands. They are really two different things with a similar sounding name. And a lunar eclipse is again something vastly different. This does not mean a high-partial eclipse is not an interesting thing, but the total eclipse is by far the most spectacular natural phenomenon visible on this planet. Beyond the Grand Canyon, Yosemite, Norway, etc. So if you can get to totality, get there. Do not think you are seeing the eclipse if you don’t get into the zone of totality.

People debate about how total it should be

Many people seek to get close to the centerline of the eclipse. This provides the longest eclipse for your area. You will only lose a modest number of seconds if you are within 15 miles of the centerline, so you don’t have to get exactly there, and in fact it may be too crowded there.

On the other hand there are those who deliberately get close to the edge, giving up 30-40% of their eclipse time in order to see more “edge effects.” Near the edge, the edge effects are longer and a bit more spectacular. In particular the diamond ring will be a fair bit longer, and you may see more prominences and chromosphere for longer. If this is your first eclipse, I am not sure you want to get too close to the edge. But try any of the map web sites that will tell you your duration, and get somewhere within 30-40 seconds of the centerline time.

You look at the total eclipse with zero eye protection

You’ve been hearing endless talk about eclipse glasses and how well made they are. Eclipse glasses are only for the boring partial phase. They give you a way to track the progress of the moon while waiting for the main event. Once totality is over, everybody packs up and does not even bother to watch the 2nd half of the partial eclipse; that’s how boring the partial part is.

But don’t be one of those people who, having been told about the danger of eclipses, does not watch totality with bare eyes. In fact, use binoculars in addition to your naked eyes, and perhaps take a short look through a telescope — but not during the diamond rings or any partial phase.

Update: There is a nice large sunspot group that should still be there on Eclipse day, making the partial phase more interesting to those with good eyesight.

In totality you are looking not at the sun, but its amazing atmosphere — the “corona” — full of streamers, and many times the size of the sun or moon. You may also see jets of fire coming off the sun, and at the start and end of totality you will see the hot red inner atmosphere of the sun, known as the chromosphere.

If you are crazy enough to be outside the total zone but close to it, you still can’t look with your bare eyes at any part of the eclipse.

There are some cool things in a 99% partial eclipse (which you see just before and after totality).

An eclipse is most glorious in the sky but a lot of other things happen around it. As it gets very close to total you will see the nature of the sunlight change and become quite eerie. Shadows of trees will turn into collections of crescents. About 20-60 seconds before and after totality, if you have a white sheet on the ground, you will see ripples of light waving, like on the bottom of a giant swimming pool. And the shadow. You will see it approach. If you are up on a mountain or in a plane this will be more obvious. It is going at 1,000 to 2,000 miles per hour.  read more »

Digitizing your papers, literally, for the future, with 4K video

I have so much paper that I’ve been on a slow quest to scan things. So I have high speed scanners and other tools, but it remains a great deal of work to get it done, especially reliably enough that you would throw away the scanned papers. I have done around 10 posts on digitizing and gathered them under that tag.

Recently, a friend who could not figure out what to do with the papers of a deceased parent asked me for ideas. Scanning them on your own or in scanning shops is time consuming and expensive, so a new thought came to me.

Set up a scanning table by mounting a camera that shoots 4K video looking down on the table. I have tripods that have an arm that extends out but there are many ways to mount it. Light the table brightly, and bring your papers. Then start the 4K video and start slapping the pages down (or pulling them off) as fast as you can.

There is no software today that can turn that video into a well scanned document. But there will be. Truth is, we could write it today, but nobody has. If you scan this way, you’re making the bet that somebody will. Even if nobody does, you can still go into the video and find any page and pull it out by hand; it will just be a lot of work, and you would only do this for single pages, not whole documents. You are literally saving the document “for the future” because you are depending on future technology to easily extract it.  read more »
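As a hedged sketch of what such future software might do, here is one way to pull pages out of the video with OpenCV: watch for the image to stop changing after each page is slapped down, then save that settled frame. The thresholds and file names are guesses of mine, and a real tool would also need cropping, deskewing and OCR.

```python
# Hypothetical sketch: extract "settled" frames from a 4K video of pages
# being placed on a table. Thresholds are illustrative and would need tuning.
import cv2

def extract_pages(video_path, out_prefix="page", still_frames=12, motion_thresh=4.0):
    cap = cv2.VideoCapture(video_path)
    prev, still, count = None, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downscale and convert to grayscale for a cheap motion estimate.
        gray = cv2.cvtColor(cv2.resize(frame, (480, 270)), cv2.COLOR_BGR2GRAY)
        if prev is not None:
            motion = cv2.absdiff(gray, prev).mean()   # mean per-pixel change
            if motion < motion_thresh:
                still += 1
                if still == still_frames:             # scene settled: a new page
                    count += 1
                    cv2.imwrite(f"{out_prefix}_{count:04d}.png", frame)
            else:
                still = 0
        prev = gray
    cap.release()
    return count
```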

Vendors push back on California Robocar regulations - plus Tesla and Apple news

California Hearings

Wednesday, California held hearings on the latest draft of their regulations. The new regulations heavily incorporate the new NHTSA guidelines released last month, and now incorporate language on the testing and deployment of unmanned vehicles.

The earlier regulations caused consternation because they correctly identified that nobody had sufficient understanding of unmanned vehicle operations to write regulations, but incorrectly proceeded to forbid those vehicles until later. Once you ban something, it’s very hard to un-ban it. The new approach does not ban the vehicles, but instead attempts to write regulations for them that are premature.

Comments from developers of the vehicles reflected the sentiment that all the regulations are premature. California worked together with NHTSA on their regulations, and incorporated them. In particular, while NHTSA’s regulations lay out a 15 point list of functional domains that creators of vehicles should certify, the federal regulations technically declare this certification to be optional. A vendor submitting a report can explicitly state that they decline to certify most of the items.

California suggests that this certification might be mandatory here. For all my criticism of NHTSA’s plan, they do have an understanding that it is still far too early to be writing detailed rules for vehicles that don’t yet exist, and left these avenues for change and disagreement within their regulations. The avenues are not great — I feel that vendors will be concerned that truly treating the regulations as voluntary will be done at their peril — but at least they exist.

Several vendors also pointed out the serious problems with traditional regulatory timelines and the speed of development of computer technologies. The California regulations may require that a car be tested for a year before it is deployed. On the surface that sounds normal by old standards, but the reality of development is very different. Pretty much all the vendors I know are producing new builds of their vehicle software and testing them out on the roads the next day — with trained safety drivers behind the wheel. The software goes through extensive “regression testing,” running through every tricky situation the team has encountered anywhere, as well as simulated situations, but the safety driver is there to deal with any problem not found with that testing.

Vendors won’t release into production cars with only one night of testing, but neither can they wait a year. This is particularly true because in the early days of this technology, new problems will be found during deployment, and you want to get the fixes out on the road as quickly as is safe to do. An arbitrary timeline makes no sense.

This is just the start of the problems. While one may argue that it was always going to be hard for startups and tinkerers to develop these cars, these regulations (and the federal ones) put more nails in the coffin of the small innovator. The amount of bureaucracy, the size of the insurance bonds and many other factors will make it hard for teams the size of the DARPA challenge teams, who kickstarted this technology and made it real, to actually play in the game. The auto industry has a long history of allowing tinkerers to innovate, even at the cost of relaxing safety requirements applied to them. We may end up with a world where only the big players can play at all, and we know that this is generally not good at all for the pace of innovation.

Delivery Robots

The new regulations allowing unmanned vehicles might seem to open doors for delivery robots like we’re working on at Starship. Unfortunately they seem aimed primarily at large vehicles. Since California rules define the sidewalk as part of the street, these regulations might end up demanding that a small, slow, light delivery robot comply with the bulky Federal Motor Vehicle Safety Standards (which are meant for passenger cars), which is impossible without major exceptions being made. (More reading is needed to tell if this is truly how this will play out.)

Tesla says all future cars will have full sensor suite

Tesla has declared that all their future cars, including the lower cost Model 3, will include the full suite of radars, cameras and other sensors needed for self driving. That’s good news, though the Tesla sensor suite, lacking LIDAR, is not currently sufficient for a full self-driving car. Tesla is making a bet of sorts that by the time this comes into play, cameras and radars will be sufficient to make an acceptably safe system. If not, they will have to stick with autopilot function on those cars. Since there is strong evidence that LIDAR will be inexpensive in a couple of years, I don’t believe anybody should plan to deploy their first (and riskiest) robocars without every sensor that’s at all affordable. Why make it less safe than you could just to save a few hundred dollars?

Today, Tesla can’t do that because no production low cost LIDAR is available. Most other teams are betting it will be. In the future, when cost becomes a bigger issue, vendors will decide to eliminate sensors based on cost.

Apple might have changed their plans

Apple hasn’t said anything official about their rumoured car project. All we know has come from leaks and from looking at who has been hired or who has departed. (I do know one secret thing about the Apple car — it will only work if you have a new iPhone.) Many rumours came out this week that Apple may have cancelled plans to actually make an Apple Car, and instead will take an approach more like Google — building the software and self-driving systems and letting others worry about car manufacture. That is a good strategy, so Apple is hardly out of the game, but it does mean it’s less likely the world will see a car with the particular Apple flair and marketing genius.

The relationship between powerful self-drive system developers (like Apple, Google and Uber) and car manufacturers will be an interesting one. Car makers are used to being in charge, owning the process and owning the customer. So are these hi-tech companies. But many companies will do “contract manufacturing” in the auto industry. If Apple shows up with a purchase order for 100,000 cars to be built to their spec, there are many companies who will take the order, even if the high end Daimlers and Toyotas of the world won’t. So just as Apple doesn’t build the iPhone and gets Foxconn to do it, the fact that Apple will stick to the software systems doesn’t mean their design will not appear in a car.

Here is a summary of Apple car rumours.

NHTSA Regulations part 4: Crashes, Training, Certification, State Law, Operation, Validation and Autopilots

After my initial reactions and Overall Analysis, here is a point by point consideration of the second set of elements from NHTSA’s 15 point certification list for robocars. See my series for other articles or the first half of the list.

Crashworthiness

In this section, they remind vendors that they still need to meet the same standards as regular cars do. We are not ready to start removing heavy passive safety systems just because the vehicles get in fewer crashes. In the future we might want to change that, as those systems can be 1/3 of the weight of a vehicle.

They also note that different seating configurations (like rear facing seats) need to protect occupants as well. It’s already the case that rear facing seats will likely be better in forward collisions. Face-to-face seating may present some challenges in this environment, as it is less clear how to deploy the airbags. Taxis in London often feature face-to-face seating, though that is less common in the USA. Will this be possible under these regulations?

The rules also call for unmanned vehicles to absorb energy like existing vehicles. I don’t know if this is a requirement on unusual vehicle design for regular cars or not. (If it were, it would have prohibited SUVs with their high bodies that can cause a bad impact with a low-body sports-car.)

Consumer Education and Training

This seems like another mild goal, but we don’t want a world where you can’t ride in a taxi unless you are certified as having taken a training course, especially one for a vehicle in which you have very little to do. These rules are written more for people buying a car (for whom training can make sense) than for those just planning to be a passenger.

Registration and Certification

This section imagines labels for drivers. It’s pretty silly and not very practical. Is a car going to have a sticker saying “This car can drive itself on Elm St. south of Pine, or on highway 101 except in Gilroy”? This should be communicated some other way, not with labels, especially because it will change all the time.

Post-Crash Behavior

This set is fairly reasonable — it requires a process describing what you do to a vehicle after a crash before it goes back into service.

Federal, State and Local Laws

This section calls for a detailed plan on how to assure compliance with all the laws. Interestingly, it also asks for a plan on how the vehicle will violate laws that human drivers sometimes violate. This is one of the areas where regulatory effort is necessary, because strictly speaking, cars are not allowed to violate the law — doing things like crossing the double-yellow line to pass a car blocking your path.  read more »

NHTSA Regulations part 3: Data Sharing, Privacy, Safety, Security and HMI

After my initial reactions and Overall Analysis, here is a point by point consideration of the elements from NHTSA’s 15 point certification list for robocars. See also the second half and the whole series.

Let’s dig in:

Data Recording and Sharing

These regulations require a plan about how the vehicle keeps logs around any incident (while following privacy rules). This is something everybody already does — in fact they keep logs of everything for now — since they want to debug any problems they encounter. NHTSA wants the logs to be available to NHTSA for crash investigation.

NHTSA also wants recordings of positive events (where the system avoided a problem).

Most interesting is a requirement for a data sharing plan. NHTSA wants companies to share their logs with their competitors in the event of incidents and important non-incidents, like near misses or detection of difficult objects.

This is perhaps the most interesting element of the plan, but it has seen some resistance from vendors. And it is indeed something that might not happen at scale without regulation. Many teams will consider their set of test data to be part of their crown jewels. Such test data is only gathered by spending many millions of dollars to send drivers out on the roads, or by convincing customers or others to voluntarily supervise while their cars gather test data, as Tesla has done. A large part of the head-start that leaders have in this field is the amount of different road situations they have been able to expose their vehicles to. Recordings of mundane driving activity are less exciting and will be easier to gather. Real world incidents are rare and gold for testing. The sharing is not as golden, because each vehicle will have different sensors, located in different places, so it will not be easy to adapt logs from one vehicle directly to another. While a vehicle system can play its own raw logs back directly to see how it performs in the same situation, other vehicles won’t readily do that.

Instead this offers the ability to build something that all vendors want and need, and the world needs, which is a high quality simulator where cars can be tested against real world recordings and entirely synthetic events. The data sharing requirement will allow the input of all these situations into the simulator, so every car can test how it would have performed. This simulation will mostly be at the “post perception level” where the car has (roughly) identified all the things on the road and is figuring out what to do with them, but some simulation could be done at lower levels.

These data logs and simulator scenarios will create what is known as a regression test suite. You test your car in all the situations, and every time you modify the software, you test that your modifications didn’t break something that used to work. It’s an essential tool.
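A minimal sketch of what such a regression run can look like appears below; the scenario file format, the sim.replay() call and the pass criteria are all invented here for illustration, since every team’s simulator and log format will differ.

```python
# Illustrative regression harness: replay every recorded scenario against the
# current build of the driving software and report anything that now fails.
import glob
import json

def run_regression(sim, scenario_dir="scenarios/"):
    failures = []
    for path in sorted(glob.glob(scenario_dir + "*.json")):
        with open(path) as f:
            scenario = json.load(f)       # e.g. logged tracks of other road users
        result = sim.replay(scenario)     # assumed simulator API
        # A scenario passes only if there is no collision and no rule violation.
        if result.collisions or result.rule_violations:
            failures.append((path, result))
    return failures

# Typical use: run on every new software build; an empty failure list means the
# change did not break anything that used to work.
```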

In the history of software, there have been shared public test suites (often sourced from academia) and private ones that are closely guarded. For some time, I have proposed that it might be very useful if there were a public and open source simulator environment which all teams could contribute scenarios to, but I always expected most contributions would come from academics and the open source community. Without this rule, the teams with the most test miles under their belts might be less willing to contribute.

Such a simulator would help all teams and level the playing field. It would allow small innovators to even build and test prototype ideas entirely in simulator, with very low cost and zero risk compared to building it in physical hardware.

This is a great example of where NHTSA could use its money rather than its regulatory power to improve safety, by funding the development of such test tools. In fact, if done open source, the agencies and academic institutions of the world could fund a global one. (This would face opposition from companies hoping to sell test tools, but there will still be openings for proprietary test tools.)

Privacy

This section demands a privacy policy. I’m not against that, though of course the history of privacy policies is not a great one. They mostly involve people clicking “I agree” to things they don’t read. More important is the requirement that vendors be thinking about privacy.

The requirement for user choice is an interesting one, and it conflicts with the logging requirements. People are wary of technology that will betray them in court. Of course, as long as the car is not a hybrid car that mixes human driving with self-driving, and the passenger is not liable in an accident, there should be minimal risk to the passenger from accidents being recorded.

The rules require that personal information be scrubbed from any published data. This is a good idea but history shows it is remarkably hard to do properly.  read more »

Detailed analysis of NHTSA robocar regulations: Overview

The recent Federal Automated Vehicles Policy is long. (My same-day analysis is here and the whole series is being released.) At 116 pages (to be fair, less than half is policy declarations and the rest is plans for the future and associated materials) it is much larger than many of us were expecting.

The policy was introduced with a letter attributed to President Obama, where he wrote:

There are always those who argue that government should stay out of free enterprise entirely, but I think most Americans would agree we still need rules to keep our air and water clean, and our food and medicine safe. That’s the general principle here. What’s more, the quickest way to slam the brakes on innovation is for the public to lose confidence in the safety of new technologies. Both government and industry have a responsibility to make sure that doesn’t happen. And make no mistake: If a self-driving car isn’t safe, we have the authority to pull it off the road. We won’t hesitate to protect the American public’s safety.

This leads into an unprecedented effort to write regulations for a technology that barely exists and has not been deployed beyond the testing stage. The history of automotive regulation has been the opposite, and so this is a major change. The key question is what justifies such a big change, and the cost that will come with it.

Make no mistake, the cost will be real. The cost of regulations is rarely known in advance but it is rarely small. Regulations slow all players down and make them more cautious — indeed it is sometimes their goal to cause that caution. Regulations result in projects needing “compliance departments” and the establishment of procedures and legal teams to assure they are complied with. In almost all cases, regulations punish small companies and startups more than they punish big players. In some cases, big players even welcome regulation, both because it slows down competitors and innovators, and because they usually also have skilled governmental affairs teams and lobbying teams which are able to subtly bend the regulations to match their needs.

This need not even be nefarious, though it often is. Companies that can devote a large team to dealing with regulations, and can always send staff to meetings, negotiations and public comment sessions, will naturally do better than those which can’t.

The US has had a history of regulating after the fact. Of being the place where “if it’s not been forbidden, it’s permitted.” This is what has allowed many of the most advanced robocar projects to flourish in the USA.

The attitude has been that industry (and startups) should lead and innovate. Only if the companies start doing something wrong or harmful, and market forces won’t stop them from being that way, is it time for the regulators to step in and make the errant companies do better. This approach has worked far better than the idea that regulators would attempt to understand a product or technology before it is deployed, imagine how it might go wrong, and make rules to keep the companies in line before any of them have shown evidence of crossing a line.

In spite of all I have written here, the robocar industry is still young. There are startups yet to be born which will develop new ideas yet to be imagined that change how everybody thinks about robocars and transportation. These innovative teams will develop new concepts of what it means to be safe and how to make things safe. Their ideas will be obvious only well after the fact.

Regulations and standards don’t deal well with that. They can only encode conventional wisdom. “Best practices” are really “the best we knew before the innovators came.” Innovators don’t ignore the old wisdom willy-nilly; they often ignore it or supersede it quite deliberately.

What’s good?

Some players — notably the big ones — have lauded these regulations. Big players, like car companies, Google, Uber and others have a reason to prefer regulations over a wild west landscape. Big companies like certainty. They need to know that if they build a product, it will be legal to sell it. They can handle the cost of complex regulations, as long as they know they can build it.  read more »

Critique of NHTSA's newly released regulations

The long awaited list of recommendations and potential regulations for Robocars has just been released by NHTSA, the federal agency that regulates car safety and safety issues in car manufacture. Normally, NHTSA does not regulate car technology before it is released into the market, and the agency, while it says it is wary of slowing down this safety-increasing technology, has decided to do the unprecedented — and at a whopping 116 pages.

Broadly, this is very much the wrong direction. Nobody — not Google, Uber, Ford, GM or certainly NHTSA — knows the precise form these cars will have when deployed. Almost surely something will change from our existing knowledge today. They know this, but still wish to move. Some of the larger players have pushed for regulation. Big companies like certainty. They want to know what the rules will be before they invest. Startups thrive better in the chaos, making up the rules as they go along.

NHTSA hopes to define “best practices” but the best anybody can do in 2016 is lay down existing practices and conventional wisdom. The entirely new methods of providing safety that are yet to be invented won’t be in such a definition.

The document is very detailed, so it will generate several blog posts of analysis. Here I present just initial reactions. Those reactions are broadly negative. This document is too detailed by an order of magnitude. Its regulations begin today, but fortunately they are also accepting public comment. The scope of the document is so large, however, that it seems extremely unlikely that they would scale back this document to the level it should be at. As such, the progress of robocar development in the USA may be seriously negatively affected.

Vehicle performance guidelines

The first part of the regulations is a proposed 15 point safety standard. It must be certified (by the vendor) that the car meets these standards. NHTSA wants the power, according to an Op-Ed by no less than President Obama, to be able to pull cars from the road that don’t meet these safety promises.

  • Data Recording and Sharing
  • Privacy
  • System Safety
  • Vehicle Cybersecurity
  • Human Machine Interface
  • Crashworthiness
  • Consumer Education and Training
  • Registration and Certification
  • Post-Crash Behavior
  • Federal, State and Local Laws
  • Operational Design Domain
  • Object and Event Detection and Response
  • Fall Back (Minimal Risk Condition)
  • Validation Methods
  • Ethical Considerations

As you might guess, the most disturbing is the last one. As I have written many times, the issue of ethical “trolley problems” where cars must decide between killing one person or another is a philosophy class tool, not a guide to real world situations. Developers should spend as close to zero effort on these problems as possible, since they are not common enough to warrant special attention, and would get none if not for our morbid fascination with machines making life or death decisions in hypothetical situations. Let the policymakers answer these questions if they want to; programmers and vendors don’t need to.

For the past couple of years, this has been a game that’s kept people entertained and ethicists employed. The idea that government regulations might demand solutions to these problems before these cars can go on the road is appalling. If these regulations are written this way, we will delay saving lots of real lives in the interest of debating which highly hypothetical lives will be saved or harmed in ridiculously rare situations.

NHTSA’s rules demand that ethical decisions be “made consciously and intentionally.” Algorithms must be “transparent” and based on input from regulators, drivers, passengers and road users. While the section makes mention of machine learning techniques, it seems in the same breath to forbid them.

Most of the other rules are more innocuous. Of course all vendors will know and have little trouble listing what roads their car works on, and they will have extensive testing data on the car’s perception system and how it handles every sort of failure. However, the requirement to keep the government constantly updated will be burdensome. Some vehicles will be adding streets to their route map literally every day.

While I have been a professional privacy advocate, and I do care about just how the privacy of car users is protected, I am frankly not that concerned during the pilot project phase about how well this is done. I do want a good regime — and even the ability to do anonymous taxi — so it’s perhaps not too bad to think about these things now, but I suspect these regulations will be fairly meaningless unless written in consultation with independent privacy advocates. The hard reality is that during the test phase, even a privacy advocate has to admit that the cars will need to make very extensive recordings of everything they can, so that any problems encountered can be studied and fixed and placed into the test suite.

50 state laws

NHTSA’s plan has been partially endorsed by the self-driving coalition for safer streets (whose members include big players Ford, Google, Volvo, Uber and Lyft.) They like the fact that it has guidance for states on how to write their regulations, fearing that regulations may differ too much state to state. I have written that having 50 sets of rules may not be that bad an idea because jurisdictional competition can allow legal innovation and having software load new parameters as you drive over a border is not that hard.

In this document NHTSA asks the states to yield to the DOT on regulating robocar operation and performance. States should stick to registering cars, rules of the road, safety inspections and insurance. States will regulate human drivers as before, but the feds will regulate computer drivers.

States will still regulate testing, in theory, but the test cars must comply with the federal regulations.

New Authorities

A large part of the document just lists the legal justifications for NHTSA to regulate in this fashion and is primarily for policy wonks. Section 4, however, lists new authorities NHTSA is going to seek in order to do more regulation.

Some of the authorities they may seek include:

  • Pre-market safety assurance: Defining testing tools and methods to be used before selling
  • Pre-market approval authority: Vendors would need approval from NHTSA before selling, rather than self-certifying compliance with the regulations
  • Hybrid approaches of pre-market approval and self-certification
  • Cease and desist authority: The ability to demand cars be taken off the road
  • Exemption authority: An ability to grant rule exemptions for testing
  • Post-sale authority to regulate software changes
  • Much more

Other quick notes:

  • NHTSA has abandoned their levels in favour of the SAE’s. The SAE’s were almost identical of course, with the addition of a “level 5” which is meaningless because it requires a vehicle that can drive literally everywhere, and there is not really a commercial reason to make a car at present that can do that.
  • NHTSA is now pushing the acronym “HAV” (highly automated vehicle) as yet another contender in the large sea of names people use for this technology. (Self-driving car, driverless car, autonomous vehicle, automated vehicle, robocar etc.)

This was my preliminary report. More analysis can be found under the NHTSA tag.

Actually, 50 different state regulations is not that bad an idea

At the recent AUVSI/TRB conference in San Francisco, there was much talk of upcoming regulation, particularly from NHTSA. Secretary of Transportation Foxx and his NHTSA staff spoke with just vague hints about what might come in the proposals due this fall. Generally, they said good things, namely that they are wary of slowing down the development of the technology. But they said things that suggest other directions.

Secretary Foxx began by agreeing that the past history of automotive driving systems was quite different. Regulations have typically been written years or decades after technologies have been deployed. And the written regulations have tended to involve standards which the vendors self-certify their compliance with. What this means is that there is not a government test center which confirms a car complies with the rules in the safety standards. Instead, the vendor certifies they are following the rules. If they certify falsely, that can get them in trouble later with regulators and more importantly in lawsuits. It’s by far the best approach unless the vendors have shown that they can’t be trusted in spite of the fear of these actions.

But Foxx said that they were going to go against that history and consider “pre-market regulation.” Regular readers will know I think that’s an unwise idea, and so do many regulators, who admit that we don’t know enough about the final form of the technology to regulate yet.

Fortunately it was also suggested that NHTSA’s new documents would be more in the form of “guidance” for states. Many states have asked NHTSA to help them write self-driving car regulations, which gets us to a statement that was echoed by several speakers to justify federal regulation: “Nobody wants 50 different regulations” on these cars.

At first, that seems obvious. I mean, who would want it to be that complex? Clearly it’s simpler to have to deal with only one set of regulations. But while that’s true, it doesn’t mean it’s the best idea. They are overestimating the work involved in dealing with different regulations, and underestimating the value of having the ability for states to experiment with new ideas in regulation, and the value of having states compete on who can write the best regulations.

If regulations differed so much between states as to require different hardware, that makes a stronger case. But most probably we are talking about rules that affect the software. That can be annoying, but it’s just annoying. A car can switch what rules it follows in software when it crosses a border with no trouble. It already has to, just because of the different rules of the road found in every state, and indeed every city and even every street! Having a few different policies state by state is no big addition.
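As a sketch of how little is involved, the snippet below swaps in a different table of driving parameters when the car crosses a state border; the state codes, parameter names and values are made up for illustration.

```python
# Illustrative per-jurisdiction parameters, swapped in software at a border.
JURISDICTION_RULES = {
    "OR": {"right_turn_on_red": True,  "school_zone_speed_mph": 20},
    "CA": {"right_turn_on_red": True,  "school_zone_speed_mph": 25},
    "NY": {"right_turn_on_red": False, "school_zone_speed_mph": 15},
}

class DrivingPolicy:
    def __init__(self, state):
        self.state = state
        self.params = dict(JURISDICTION_RULES[state])

    def on_border_crossing(self, new_state):
        # A table swap in software, not a hardware change, which is the point.
        self.state = new_state
        self.params = dict(JURISDICTION_RULES[new_state])

policy = DrivingPolicy("CA")
policy.on_border_crossing("OR")   # e.g. triggered by map data or a geofence
```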

Jurisdictional competition is a good thing though, particularly with emerging technologies. Let some states do it wrong, and others do it better, at least at the start. Let them compete to bring the technology first to their region, and invent new ideas on how to regulate something the world has never seen. Over time these regulations can be normalized. By the time people are making 10s of millions of robocars, that normalization will make more sense. But most vendors only plan to deploy in just a few states to begin with, anyway. If a state feels its regulations are making it harder for the cars to spread to its cities, it can copy the rules of the other state it likes best.

The competition assures any mistake is localized — and probably eventually fixed. If California follows through with banning unmanned operation, as they have proposed, Texas has said it won’t.

I noted that if the hardware has to change, that’s more of an issue. It’s still not that much of an issue, because cars that operate as taxi services will probably never leave their base state. Most of them will have limited operational zones, and except in cities that straddle state borders, they won’t even leave town, let alone leave the state. Some day, the cars might do interstate trips, but even then you can solve this by having one car drive you to the border, where you transfer to a car for the other state. Annoying, but only slightly, and not a deal-breaker on the service. A car you own and take on road trips is a different story.

The one way having different state regulations would be a burden would be if there were 50 different complex certification processes to go through. Today, the federal government regulates how cars are made and the safety standards for that. The states regulate how cars operate on the roads. Robocars do blur that line, because how they are made controls how they drive.

For now, I still believe the tort system — even though it differs in all 50 states — is the best approach to regulation. It already has all developers highly paranoid about safety. When the day comes for certification, a unified process could make sense, but that day is still very far away. But for the regulations of just how these cars will operate, it might make sense to keep that with the states, even though it’s now part of the design of the car rather than the intentions of a human driver.

In time, unified regulations will indeed be desired by all, once we’ve had the time to figure out what the right regulations should be. But today? It’s too soon. Innovation requires variety.

An alternative to specific regulations for robocars: A liability doubling

Executive summary: Can our emotional fear of machines and the call for premature regulation be mollified by a temporary increase in liability which takes the place of specific regulations to keep people safe?

So far, most new automotive technologies, especially ones that control driving such as autopilot, forward collision avoidance, lanekeeping, anti-lock brakes, stability control and adaptive cruise control, have not been covered by specific regulations. They were developed and released by vendors, sold for years or decades, and when (and if) they got specific regulations, those often took the form of “Electronic stability control is so useful, we will now require all cars to have it.” It’s worked reasonably well.

That there are no specific regulations for these things does not mean they are unregulated. There are rafts of general safety regulations on cars, and the biggest deterrent to the deployment of unsafe technology is the liability system, and the huge cost of recalls. As a result, while there are exceptions, most carmakers are safety paranoid to a rather high degree just because of liability. At the same time they are free to experiment and develop new technologies. Specific regulations tend to come into play when it becomes clear that automakers are doing something dangerous, and that they won’t stop doing it because of the liability. In part this is because today, it’s easy to assign blame for accidents to drivers, and often harder to assign it to a manufacturing defect, or to a deliberate design decision.

The exceptions, like GM’s famous ignition switch problem, arise because of the huge cost of doing a recall for a defect that will have rare effects. Companies are afraid of having to replace parts in every car they made when they know they will fail — even fatally — just one time in a million. The one person killed or injured does not feel like one in a million, and our system pushes the car maker (and thus all customers) to bear that cost.

I wrote an article on regulating Robocar Safety in 2015, and this post expands on some of those ideas.

Robocars change some of this equation. First of all, in robocar accidents, the maker of the car (or driving system) is going to be liable by default. Nobody else really makes sense, and indeed some companies, like Volvo, Mercedes and Google, have already accepted that. Some governments are talking about declaring it but frankly it could never be any other way. Making the owner or passenger liable is technically possible, but do you want to ride in an Uber where you have to pay if it crashes for reasons having nothing to do with you?

Due to this, the fear of liability is even stronger for robocar makers.

Robocar failures will almost all be software issues. As such, once fixed, they can be deployed for free. The logistics of the “recall” will cost nothing. GM would have no reason not to send out a software update once they found a problem like the faulty ignition switch; they would be crazy not to. Instead, there is the difficult question of what to do between the time a problem is discovered and a fix has been declared safe to deploy. Shutting down the whole fleet is not a workable answer; it would kill deployment of robocars if several times a year they all stopped working.

In spite of all this history and the prospect of it getting even better, a number of people — including government regulators — think they need to start writing robocar safety regulations today, rather than 10-20 years after the cars are on the road as has been traditional. This desire is well-meaning and understandable, but it’s actually dangerous, because it will significantly slow down the deployment of safety technologies which will save many lives by making the world’s 2nd most dangerous consumer product safer. Regulations and standards generally codify existing practice and conventional wisdom. They are very bad ideas with emerging technologies, where developers are coming up with entirely new ways to do things, and entirely new ways to be safe. The last thing you want is to tell vendors they must apply old-world thinking when they can come up with much better thinking.

Sadly, there are groups who love old-world thinking, namely the established players. Big companies start out hating regulation but eventually come to crave it, because it writes the way they already do and understand things into the law. This stops upstarts from figuring out how to do it better, and established players love that.

The fear of machines is strong, so it may be that something else needs to be done to satisfy all desires: The desire of the public to feel the government is working to keep these scary new robots from being unsafe, and the need for unconstrained innovation. I don’t desire to satisfy the need to protect old ways of doing things.

One option would be to propose a temporary rule: For accidents caused by robocar systems, the liability, if the system is at fault, shall be double what it would be if a similar accident were caused by driver error. (Punitive damages for willful negligence would not be governed by this rule.) We know the cost of accidents caused by humans. We all pay for it with our insurance premiums, at an average rate of about 6 cents/mile. This would double that cost, pushing vendors to make their systems at least twice as safe as the average human in order to match that insurance cost.
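To make the incentive arithmetic explicit, here is a tiny worked example using the rough 6 cents/mile figure from the text; the safety factors chosen are purely illustrative.

```python
# Worked example of the liability-doubling incentive (illustrative numbers).
HUMAN_LIABILITY_PER_MILE = 0.06   # dollars/mile, the rough figure cited above
MULTIPLIER = 2.0                  # the proposed temporary doubling

def robocar_liability_per_mile(safety_factor):
    """safety_factor = how many times safer than the average human driver."""
    return HUMAN_LIABILITY_PER_MILE * MULTIPLIER / safety_factor

for k in (1, 2, 4):
    print(f"{k}x as safe as a human: ${robocar_liability_per_mile(k):.3f}/mile")
# 1x -> $0.120, 2x -> $0.060 (break-even with human drivers), 4x -> $0.030
```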

Victims of these accidents (including hapless passengers in the vehicles) would now be doubly compensated. Sometimes no compensation is enough, but for better or worse, we have settled on values, and doubling them is not a bad deal. Creators of systems would have a higher bar to reach, and the public would know it.

While doubling the cost is a high price, I think most system creators would accept this as part of the risk of a bold new venture. You expect those to cost extra as they get started. You invest to make the system sustainable.

Over time, the liability multiplier would be reduced, and eventually the rule would go away entirely. I suspect that might take about a decade. The multiplier does present a barrier to entry for small players, and we don’t want something like that around for too long.

Fears confirmed on failure of fix to Hugo awards

Last year, I wrote a few posts on the attack on Science Fiction’s Hugo awards, concluding in the end that only human defence can counter human attack. A large fraction of the SF community felt that one could design an algorithm to reduce the effect of collusion, which in 2015 dominated the nomination system. (It probably will dominate it again in 2016.) The system proposed, known as “E Pluribus Hugo,” attempted to defeat collusion (or “slates”) by giving each nomination entry less weight when a nomination ballot was doing very well and getting several of its choices onto the final ballot. More details can be found on the blog where the proposal was worked out.

The process passed the first round of approval, but does not come into effect unless it is ratified at the 2016 meeting, and then it applies to the 2017 nominations. As such, the 2016 awards will be as vulnerable to the slates as before; however, there are vastly more slate nominators this year — presuming all those who joined last year to support the slates continue to do so.

Recently, my colleague Bruce Schneier was given the opportunity to run the new system on the nomination data from 2015. The final results of that test are not yet published, but a summary was reported today in File 770, and the results are very poor. This is, sadly, what I predicted when I did my own modelling. In my models, I considered some simple strategies a clever slate might apply, but it turns out that these strategies may have been naturally present in the 2015 nominations, and as predicted, the “EPH” system only marginally improved the results. The slates still massively dominated the final ballots, though they no longer swept all 5 slots. I consider the slates taking 3 or 4 slots, with only 1 or 2 non-slate nominees making the cut, to be a failure almost as bad as the sweeps that did happen. In fact, I consider any nomination gained through collusion to be a failure, though there are obviously degrees of failure. As I predicted, a slate of the size seen in the final Hugo results of 2015 should be able to obtain between 3 and 4 of the 5 slots in most cases. The new test suggests they could do this even with the much smaller slate group they had in the 2015 nominations.

Another proposal — that there be only 4 nominations on each nominating ballot but 6 nominees on the final ballot — improves this. If the slates can take only 3, then this means 3 non-slate nominees probably make the ballot.

An alternative - Make Room, Make Room!

First, let me say I am not a fan of algorithmic fixes to this problem. Changing the rules — which takes 2 years — can only “fight the last war.” You can create a defence against slates, but it may not work against modifications of the slate approach, or other attacks not yet invented.

Nonetheless, it is possible to improve the algorithmic approach to attain the real goal, which is to restore the award as closely as possible to what it was when people nominated independently. To allow the voters to see the top 5 “natural” nominees, and award the best one the Hugo award, if it is worthy.

The approach is as follows: When slate voting is present, automatically increase the number of nominees so that 5 non-slate candidates are also on the ballot along with the slates.

To do this, you need a formula which estimates if a winning candidate is probably present due to slate voting. The formula does not have to be simple, and it is OK if it occasionally identifies a non-slate candidate as being from a slate.

  1. Calculate the top 5 nominees by the traditional “approval” style ballot.
  2. If 2 or more pass the “slate test” which tries to measure if they appear disproportionately together on too many ballots, then increase the number of nominees until 5 entries do not meet the slate condition.

As a result, if there is a slate of 5, you may see the total pool of nominees increased to 10. If there are no slates, there would be only 5 nominees. (Ties for last place, as always, could increase the number slightly.)
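
Here is a minimal Python sketch of the expansion rule just described. The slate_test predicate is a placeholder for whatever formula is eventually chosen (see “What formula?” below), and ties for last place are ignored.

```python
# Sketch of the "Make Room" expansion rule described above.  `slate_test`
# is a placeholder predicate for whatever co-occurrence formula the
# administrators settle on; ties for last place are ignored here.

def expand_ballot(candidates, slate_test, base=5):
    """candidates: list of (work, nomination_count) pairs.
    Returns the final list of nominees, slate and non-slate mixed."""
    ranked = [w for w, _ in sorted(candidates, key=lambda c: c[1], reverse=True)]
    # Step 1: the traditional approval-style top 5.
    top = ranked[:base]
    if sum(1 for w in top if slate_test(w)) < 2:
        return top                      # no significant slate presence
    # Step 2: keep extending until `base` entries fail the slate test.
    nominees, non_slate = [], 0
    for work in ranked:
        nominees.append(work)
        if not slate_test(work):
            non_slate += 1
        if non_slate >= base:
            break
    return nominees
```

With no slate present this returns exactly the traditional top 5; with a slate of 5 it would typically return around 10 nominees, as noted above.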

Let’s consider the advantages of this approach:

  • While ideally it’s simple, the slate test formula does not need to be understood by the typical voter or nominator. All they need to know is that the nominees listed are the top nominees.
  • Likewise, there is no strategy in nominating. Your ballot is not reduced in strength if it has multiple winners. It’s pure approval.
  • If a candidate is falsely identified as passing the slate test — for example a lot of Doctor Who fans all nominate the same episodes — the worst thing that happens is we get a few extra nominees we should not have gotten. Not ideal, but pretty tame as a failure mode.
  • Likewise, those promoting slates can’t claim their nominations are denied to them by a cabal or conspiracy.
  • All the nominees who would have been nominated in the absence of slate efforts get nominated; nobody’s work is displaced.
  • Fans can decide for themselves how they want to consider the larger pool of nominees. Based on 2015’s final results (with many “No Awards”) it appears fans wish to judge some works as being there unfairly and discount them. Fans who wish it would have the option of deciding for themselves which nominees are important, and acting as though those are all that was on the ballot.
  • If it is effective, it gives the slates so little that many of them are likely to just give up. It will be much harder to convince large numbers of supporters to spend money to become members of conventions just so a few writers can get ignored Hugo nominations with asterisks beside them.

It has a few downsides, and a vulnerability.

  • The increase in the number of nominees (only while under slate attack) will frustrate some, particularly those who feel a duty to read all works before voting.
  • All the slate candidates get on the ballot, along with all the natural ones. The first is annoying, but it’s hardly a downside compared to having some of the natural ones not make it. A variant could block any work that fits the slate test but scored below 5th, but that introduces a slight (and probably un-needed) bit of bias.
  • You need a bigger area for nominees at the ceremony, and a bigger party, if they want to show up and be sneered at. The meaning of “Hugo Nominee” is diminished (but not as much as it’s been diminished by recent events.)
  • As an algorithmic approach it is still vulnerable to some attacks (one detailed below) as well as new attacks not yet thought of.
  • In particular, if slates are fully coordinated and can distribute their strength, it is necessary to combine this with an EPH style algorithm or they can put 10 or more slate candidates on the ballot.

All algorithmic approaches are vulnerable to a difficult but possible attack by slates. If the slate knows its strength and knows the likely range of the top “natural” nominees, it can in theory choose a number of slots it can safely win, and name only that many choices, and divide them up among supporters. Instead of having 240 people cast ballots with the same 3 choices, they can have 3 groups of 80 cast ballots for one choice only. No simple algorithm can detect that or respond to it, including this one. This is a more difficult attack than the current slates can carry off, as they are not that unified. However, if you raise the bar, they may rise to it as well.

All algorithmic approaches are also vulnerable to a less ambitious colluding group, that simply wants to get one work on the ballot by acting together. That can be done with a small group, and no algorithm can stop it. This displaces a natural candidate and wins a nomination, but probably not the award. Scientologists were accused of doing this for L. Ron Hubbard’s work in the past.

What formula?

The best way to work out the formula would be through study of real data with and without slates. One candidate would be to take all nominees present on more than 5% of ballots, and pairwise compare them to find out what fraction of the time each pair is found together on ballots. Then detect pairs which appear together far more often than independent nomination would predict. Exactly how much more would be learned from analysis of real data. Of course, the slates will know the formula, so it must be difficult to defeat even for those who know it. As noted, false positives are not a serious problem if they are uncommon. False negatives are worse, but still better than the alternatives.
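
To illustrate one possible shape of such a test, here is a small Python sketch of the pairwise co-occurrence idea. The 5% cut-off comes from the paragraph above; the lift threshold of 3 is purely an illustrative assumption, and the real value would come from studying past nomination data.

```python
# Flag pairs of works that appear together on ballots far more often than
# independent nomination would predict.  Thresholds are illustrative only.

from collections import Counter
from itertools import combinations

def flag_slate_pairs(ballots, min_share=0.05, lift_threshold=3.0):
    """ballots: list of sets of nominated works.  Returns suspicious pairs."""
    n = len(ballots)
    singles = Counter()
    pairs = Counter()
    for ballot in ballots:
        singles.update(ballot)
        pairs.update(combinations(sorted(ballot), 2))
    # Only consider works present on at least `min_share` of ballots.
    popular = {w for w, c in singles.items() if c / n >= min_share}
    flagged = set()
    for (a, b), together in pairs.items():
        if a in popular and b in popular:
            # Expected co-occurrence count if nominations were independent.
            expected = singles[a] * singles[b] / n
            if together / expected >= lift_threshold:
                flagged.add((a, b))
    return flagged
```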

So what else?

At the core is the idea of providing voters with information on who the natural nominees would have been, and allowing them to use the STV voting system of the final ballot to enact their will. This was done in 2015, but simply to give No Award in many of the categories — it was necessary to destroy the award in order to save it.

As such, I believe there is a reason why every other system (including the WSFS site selection) uses a democratic process, such as write-in, to deal with problems in nominations. Democratic approaches use human judgment, and so they are a response not just to slates, but to any attack.

As such, I believe a better system is to publish a longer list of nominees — 10 or more — but to publish them sorted according to how many nominations they got. This allows voters to decide what they think the “real top 5” was and to vote on that if they desire. Because a slate can’t act in secret, this is robust against slates and even against the “slate of one” described above. Revealing the sort order is a slight compromise, but a far lesser one than accepting that most natural nominees are pushed off the ballot.

The advantages of this approach:

  • It is not simply a defence against slates, it is a defence against any effort to corrupt the nominations, as long as it is detected and fans believe it.
  • It requires no algorithms or judgment by officials. It is entirely democratic.
  • It is completely fair to all comers, even the slate members.

The downsides are:

  • As above, there are a lot more nominees, so the meaning of being a nominee changes
  • Some fans will feel bound to read/examine more than 5 nominees, which produces extra work on their part
  • The extra information (sorting order) was never revealed before, and may have subtle effects on voting strategy. So far, this appears to be pretty minor, but it’s untested. With STV voting, there is about as little strategy as can be. Some voters might be very slightly more likely to rank a work that sorted low in first place, to bump its chances, but really, they should not do that unless they truly want it to win — in which case it is always right to rank it first.
  • It may be necessary to add EPH-style counting if slates reach a high level of coordination.

Human judgment

Another surprisingly strong approach would be simply to add a rule saying, “The Hugo Administrators should increase the number of nominees in any category if their considered analysis leaves them convinced that some nominees made the final ballot through means other than the nominations of fans acting independently, adding one slot for each work judged to fail that test, but adding no more than 6 slots.” This has tended to be less popular, in spite of its simplicity and flexibility (it even deals with single-candidate campaigns), because some fans have an intense aversion to any use of human judgment by the Hugo administrators.

Advantages:

  • Very simple (for voters at least)
  • Very robust against any attempt to corrupt the nominations that the admins can detect. So robust that it makes it not worth trying to corrupt the nominations, since that often costs money.
  • Does not require constant changes to the WSFS constitution to adapt to new strategies, nor give new strategies a 2 year “free shot” before the rules change.
  • If administrators act incorrectly, the worst they do is just briefly increase the number of nominees in some categories.
  • If there are no people trying to corrupt the system in a way admins can see, we get the original system we had before, in all its glory and flaws.
  • The admins get access to data which can’t be released to the public to make their evaluations, so they can be smarter about it.

Disadvantages:

  • Clearly a burden for the administrators to do a good job and act fairly
  • People will criticise and second guess. It may be a good idea to have a post-event release of any methodology so people learn what to do and not do.
  • There is the risk of admins acting improperly. This is already present of course, but traditionally they have wanted to exercise very little judgment.

California DMV regulations may kill the state's robocar lead

Be careful what you wish for — yesterday the California DMV released its proposed regulations for the operation of robocars in California. All of this sprang from Google’s request to states that they start writing such regulations to ensure that their cars were legal, and California’s DMV took much longer than expected to release these regulations, which Google found quite upsetting.

The testing regulations did not bother too many, though I am upset that they effectively forbid the testing of delivery robots like the ones we are making at Starship because the test vehicles must have a human safety driver with a physical steering system. Requiring that driver makes sense for passenger cars but is impossible for a robot the size of a breadbox.

Needing a driver

The draft operating rules effectively forbid Google’s current plan, making it illegal to operate a vehicle without a licenced and specially certified driver on board and ready to take control. Google’s research led them to feel that having a transition between human driver and software is dangerous, and that the right choice is a vehicle with no controls for humans. Most car companies, on the other hand, are attempting to build “co-pilot” or “autopilot” systems in which the human still plays a fundamental role.

The state proposes banning Google style vehicles for now, and drafting regulations on them in the future. Unfortunately, once something is banned, it is remarkably difficult to un-ban it. That’s because nobody wants to be the regulator or politician who un-bans something that later causes harm that can be blamed on them. And these vehicles will cause harm, just less harm than the people currently driving are doing.

The law forbids unmanned operation, and requires the driver/operator to be “monitoring the safe operation of the vehicle at all times and be capable of taking over immediate control.” This sounds like it certainly forbids sleeping, and might even forbid engrossing activities like reading, working or watching movies.

Special certificate

Drivers must not just have a licence, they must have a certificate showing they are trained in operation of a robocar. On the surface, that sounds reasonable, especially since the hand-off has dangers which training could reduce. But in practice, it could mean a number of unintended things:

  • Rental or even borrowing of such vehicles becomes impossible without a lot of preparation and some paperwork by the person trying it out.
  • Out of state renters may face a particular problem as they can’t have California licences. (Interstate law may, bizarrely, let them get by without the certificate while Californians would be subject to this rule.)
  • Car sharing or delivered car services (like my “whistlecar” concept or Mercedes Car2Come) become difficult unless sharers get the certificate.
  • The operator is responsible for all traffic violations, even though several companies have said they will take responsibility. They can take financial responsibility, but can’t help you with points on your licence or criminal liability, rare as that is. People will be reluctant to assume that responsibility for things that are the fault of the software in the car they use, as they have little ability to judge that software.

No robotaxis

With no robotaxis or unmanned operation, a large fraction of the public benefits of robocars are blocked. All that’s left is the safety benefit for car owners. This is not a minor thing, but it’s a small part of the whole game (and active safety systems can attain a fair chunk of it in non-robocars).

The state says it will write regulations for proper robocars, able to run unmanned. But it doesn’t say when those will arrive, and unfortunately, any promises about that will be dubious and non-binding. The state was very late with these regulations — which is actually perfectly understandable, since not even vendors know the final form of the technology, and it may well be late again. Unfortunately, there are political incentives for delay, perhaps indeterminate delay.

This means vendors will be uncertain. They may know that someday they can operate in California, but they can’t plan for it. With other states and countries around the world chomping at the bit to get vendors to move their operations, it will be difficult for companies to choose California, even though today most of them have.

People already in California will continue their R&D in California, because it’s expensive to move such things, and Silicon Valley retains its attractions as the high-tech capital of the world. But they will start making plans for first operation outside California, in places that have an assured timetable.

It will be less likely that somebody will move operations to California because of the uncertainty. Why start a project here — which in spite of its advantages is also the most expensive place to operate — without knowing when you can deploy here? And people want to deploy close to home if they have the option.

It might be that the car companies, whose prime focus is on co-pilot or autopilot systems today, may not be bothered by this uncertainty. In fact, it’s good for their simpler early goals because it slows the competition down. But most of them have also announced plans for real self-driving robocars where you can act just like a passenger. Their teams all want to build them. They might enjoy a breather, but in the end, they don’t want these regulations either.

And yes, it means that delivery robots won’t be able to go on the roads, and must stick to the sidewalks. That’s the primary plan at Starship today, but not the forever plan.

California should, after receiving comment, alter these regulations. They should allow unmanned vehicles which meet appropriate functional safety goals to operate, and they should have a real calendar date when this is going to happen. If they don’t, they won’t be helping to protect Californians. They will take California from being the envy of the world as the place that has attracted robocar development from all around the planet to just another contender. And that won’t just cost jobs, it will delay the deployment in California of a technology that will save the lives of Californians.

I don’t want to pretend that deploying full robocars is without risk. Quite the reverse, people will be hurt. But people are already being hurt, and the strategy of taking no risk is the wrong one.

Facebook makes less than $10/user, can we find alternatives to advertising?

Facebook’s ARPU (average revenue per user, annualized) in the last quarter was just under $10, declining slightly in the USA and Canada, and a much lower 80 cents in the rest of the world. This is quite a bit less than Google’s which hovers well over $40.

That number has been mostly growing (it shrank last quarter for the first time) but it’s fairly low. I can solidly say I would happily pay $10 a year — even $50 a year — for a Facebook which was not simply advertising-free, but more importantly motivated only to please its customers and not advertisers. Why can’t I get that?

One reason is that it’s not that simple. If Facebook had to actually charge, it would not get nearly as many users as it does being free and ad-supported. It is frictionless to join and participate in FB, and that’s important with the natural monopolies that apply to social media. You dare not do anything that would scare away users.

Valley of Distraction

Being advertising supported bends how Facebook operates, as it will any company. The most obvious thing is the annoying ads. Particularly annoying are the ads which show up in my feed, often marked with “Friend X liked this company.” I am starting to warn my friends to please not like the pages of anybody who buys ads on FB, because these ads are even more distracting than regular ads. Also extra distracting are ads which are “just off the bulls-eye,” which is to say they are directed at me (based on what FB knows about me) and thus likely to distract me, but which turn out to be completely useless. That’s worse than an ad which was not well aimed and so doesn’t distract me at all with its uselessness. There is a “valley of distraction” when it comes to targeting ads:

  • Ads about things I am researching or may want to buy can be actually valuable to me, and also rewarding to the advertiser.
  • Ads about things I am interested in, but have already bought or would not buy via an ad are highly distracting but provide no value to the advertiser and negative value to me.
  • Ads about things I have no interest in tend to be only mildly distracting if they are off to the side and not blinky/flashy/pop-up style.

As sites get better at ad targeting, they generate more of the middle type.

Privacy

Facebook’s need to monetize with advertising gives them strong incentives to be less protective of privacy. All social networks have an anti-privacy incentive, because the more they can get you to share with more people, the more they can make things happen on their site, and the more other users they can attract. But advertising adds to this. Without ads, FB would focus only on attracting and retaining customers by serving them, which would be good for users.

As the old saying goes, “If you’re not paying, you’re not the customer, you’re the product.” To give credit to many web companies, in spite of the reality of this, they actually work hard to reduce the truth of this statement, but they can never do it entirely.

How we monetize the web

When I created the first internet-based publication in 1989, I did it by selling subscriptions. There really wasn’t a way to do it with advertising at that time, but I lamented the switch that later came, which made advertising the overwhelmingly dominant means of monetizing the web. For-pay sites exist, but they are very few and specialized. I lament that forces pushed the web that way, and have always wished for a mechanism to make it easier, if not as easy, to monetize a web site with payment from customers. That’s why I promoted ideas like microrefunds as well as selling books in flat-rate pools like my Library of Tomorrow back in 1992. (Fortunately this concept is now starting to get some traction in some areas, like Amazon’s Kindle Unlimited.)

I’m also very interested in the way that low-friction digital currencies like Bitcoin and in particular Dogecoin have made it workable to give donations and tips. Dogecoin started as a joke, but because people viewed it as a joke, they were willing to build easy and low-security means of tipping people. The lack of value attached to Dogecoin meant people were more willing to play around with such approaches. Perhaps Bitcoin’s greatest flaw is that because its transactions are irrevocable, you must make the engine that spends them secure, and in turn, that makes it harder to use. Easy to spend means easy to lose, or easy to steal, and that’s a rule that’s hard to break. The credit card system, in order to be easy to spend, solves the problem of being easy to steal by allowing chargebacks or other human fixes when problems occur. While we can do better at making digital money easy to spend and not quite so easy to steal, it’s hard to figure out how to be perfect at that without something akin to chargebacks.

To monetize the web without advertising, we need a truly frictionless money. Advertising provides a money whose only friction is the annoyance of the advertising. To consume an ad-supported product you need do nothing but waste a little time. It’s a fairly passive thing. To consume a consumer-paid product, you must pay, and that creates three frictions:

  1. The spending itself — though if it’s low that should be tolerable
  2. The mental cost of thinking about the spending — which often exceeds the monetary cost on tiny transactions
  3. The user interface cost of your means of payment.

You can’t eliminate #1 of course, but you can realize that the monetary cost is less than the negatives introduced by advertising. Eliminating #2 and #3 in a secure way is the challenge, and indeed it is the challenge which I devised the microrefund concept to address.

Will we pay the cost?

I think lots of people would pay $10/year for Facebook, particularly if alternatives also charged money. It’s a bargain at that price. But would people pay the $50 that Google makes from them? Again, I think Google is a bargain at that price, but for a lot of the world, that could be a lot of money, and that’s Google’s average revenue, not its revenue for me. (I click on ads so rarely that I think their revenue from me is actually a lot lower.)

I already bought my ticket on Iberia!

At the same time, Google’s ads are among the least painful. The ads on search are marked and isolated, and largely text based. The only really bad ads Google is doing are the ones in the valley of distraction in Adsense. As I wrote earlier, we are all constantly seeing ads for things we already bought.

And so, even though a Google search might only cost you a couple of pennies, I doubt we could move Google to payment supported even if we could remove all the friction from it.

This is not true for many other sites, though. Video sites would be a great target for frictionless payment, since showing a 30 second video ad to watch a 2 minute video is a terrible bargain, yet we see it happen frequently. There are many sites that do much worse than Google at monetizing themselves through advertising, and that would welcome a way to get more decent revenue via payment — though of course they can’t get greedy, or the friction of the payment itself will reduce their business.

In addition, there are zillions of small sites and sites about topics of no commercial value who can’t make much money from advertising at all. Some of these sites probably don’t even exist because they can’t become going concerns in the current regime of monetizing the web — what fraction of the web are we missing because we have only one practical way to monetize it?

Replacing E-mail: The calendar as communications tool

I want to begin a series of thoughts on how E-mail has failed us and what we should do about it.

Yes, E-mail has failed, and not, as we thought, because it got overwhelmed with spam. There is tons of spam but we seem to be handling it. The problem might be better described as “too much signal” rather than the signal/noise ratio. There are three linked problems:

  1. There is just too much E-mail from people we actually have relationships with. Part of this is the over-reach of businesses, who think that because you bought a tube of toothpaste you should fill out a customer satisfaction survey and get the weekly bargains mail-out, but part of it is that there really are a lot of people who want to interact with you, and e-mail makes it very easy for them to do that, particularly to “cc” you on mail in which you have only a marginal interest.
  2. Because of problem 1, people are moving away from E-mail to other tools, particularly the younger generation. They (and we) are using Facebook mail and other social tools, instant messengers, texting and more.
  3. The volume means that you can’t handle it all. Important mails scroll off the main screen and are forgotten about. And some people are just not using their E-mail, so it is losing its place as the one universal and reliable way to send somebody a message.

One of the key differences the new media have is they focus on person to person communications — while there are group tools, they don’t even have the concept of a “cc” or mailing list, or even sending to two people.

I’m going to write more on these topics in the future, but today I want to talk about

The shared calendar as the communications tool

I’ve been pushing people I work with to use the calendar as the means of telling me about anything that is going to happen at a specific time. If people send me an E-mail saying, “Can we talk at 3?” I say, “don’t tell me that in an E-mail. Create an event on your calendar and invite me to it. Put the details of the conversation into the calendar entry.”

In general, I want to create a pattern of communication where if any message you send would cause the other person to put something on their calendar, you instead communicate it through the calendar by creating an event that they are an attendee of.

Our calendar and E-mail tools need to improve to make this work better. When everybody uses a shared calendar like Google Calendar, it is a lot easier, but we need tools that make it just as easy when people don’t use the same calendar tool.

When things do get into the calendar, you get a lot of nice benefits:

  • You are much less likely to forget about or miss the task or event
  • When you want to find the data on the event near the time of the event, you don’t have to hunt around for it — it is highlighted, in my case right on the home screen of my phone
  • If the event has a location, your phone typically is able to generate a map and even warn you when you need to leave based on traffic
  • If the event has a phone call/hangout/whatever, your devices can join that with a single click, no hunting for URLs or meeting codes — particularly while driving. (Google put in a tool to add one of their hangouts to any event in the calendar.)
  • Calendar events remove any confusion on time zones when people are in different zones.

Here are some features I want, some of which exist in current tools (particularly if you attach an ICS calendar entry to an E-mail) but which don’t yet work seamlessly.

  • Your email tool, when you are writing a message, should notice if you’re talking about an event that’s not already in your calendars, parse out dates and other data, and turn it into a calendar invitation
  • Likewise your receiving tool should parse messages and figure this out, since the sender might not have done that.
  • E-mails that create calendar events should be linked together, so that from your calendar you can read all the email threads around the event, find any associated files or other resources.
  • Likewise it should be easy to contact any others tied to a calendar event by any means, not just the planned means of communication. For example, a good calendar should have a system where I can be phoned or texted on my cell phone by any other member of the event during the time around the event, without having to reveal my cell phone number. How often have you been waiting for a conference call to have somebody say, “does anybody know John’s number? Let’s find where he is.”
  • When I accept a calendar entry from outside and confirm, that should give them some access to use that calendar entry as a means of communication, even across calendar and mail platforms.
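
As a concrete illustration of the ICS hand-off mentioned above, here is a minimal Python sketch that builds an iCalendar invitation suitable for attaching to an e-mail. The addresses, UID and event details are made-up placeholders.

```python
# Build a minimal iCalendar (ICS) invitation of the kind discussed above.
# Attaching this to an e-mail lets the recipient's tools add the event
# rather than burying the time in prose.  All names and addresses below
# are placeholders.

from datetime import datetime, timedelta, timezone

def ics_invite(summary, start, duration_minutes, organizer, attendee, description=""):
    end = start + timedelta(minutes=duration_minutes)
    fmt = "%Y%m%dT%H%M%SZ"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//example//calendar-as-communication//EN",
        "METHOD:REQUEST",
        "BEGIN:VEVENT",
        "UID:20150601-0001@example.com",
        f"DTSTAMP:{datetime.now(timezone.utc).strftime(fmt)}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        f"DESCRIPTION:{description}",
        f"ORGANIZER:mailto:{organizer}",
        f"ATTENDEE;RSVP=TRUE:mailto:{attendee}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(ics_invite("Call about the contract",
                 datetime(2015, 6, 1, 15, 0, tzinfo=timezone.utc), 30,
                 "alice@example.com", "bob@example.com",
                 "Agenda and dial-in details go here, not in the e-mail body."))
```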

For example, when I book a flight or hotel or rent a car, the company should respond by putting that in my calendar. I might give them a token enabling that, or manually approve their invitation. Of course the confirmation numbers, links on how to change the reservation and more will be in the calendar entry. If the flight is delayed, they should be able to use this linkage to contact me — my calendar tool should know best where I am and the best ways to reach me — and push updates to me. When I get to the check-in desk, our shared calendar entry should make my phone and their computer immediately connect and make the process seamless.

When I approach the desk of a hotel, my phone should notice this, do the handshake and by the time I walk up they should say, “Good evening, Mr. Templeton, could you please sign this form? Here’s your room key, you’re in suite 1207.” (Of course, even better if I don’t have to sign the form and my phone, or any of the magstripe, chip or NFC cards I have in my wallet automatically become my room key.)

When you think this way, you start realizing that a surprisingly large amount of our E-mails are about events with times. And, as I wrote 8 years ago, most e-mails involve tasks, and E-mail and time management should be merged. Sadly my ideas of so long ago remain unrealized, and since then, E-mail has declined.

One caveat — if we do start using calendars for communication more, we must be able to prevent spam, and even over-use by people we know. We can’t do what we did with e-mail. Invitations to an event with just one or two people can be made easy — even automatic for those with authorization. Creating multi-person events needs to be a harder thing for people who aren’t whitelisted, though not impossible. The meaning of the word “invite” also needs to be more tightly understood. A solicitation for me to buy a ticket is not an invite.

Second musings on the Hugo Awards and the fix

Last week’s Hugo Awards crisis caused a firestorm even outside the SF community. I felt it time to record some additional thoughts beyond the summary of the many proposals that I wrote earlier.

It’s not about the politics

I think all sides have made an error by bringing the politics and personal faults of either side into the mix. Making it about the politics legitimises the underlying actions for some. As such, I want to remove that from the discussion as much as possible. That’s why in the prior post I proposed an alternate history.

What are the goals of the award?

Awards are funny beasts. They are almost all given out by societies. The Motion Picture Academy does the Oscars, and the Worldcons do the Hugos. The Hugos, though, are overtly a “fan” award (unlike the Nebulas which are a writer’s award, and the Oscars which are a Hollywood pro’s award.) They represent the view of fans who go to the Worldcons, but they have always been eager for more fans to join that community. But the award does not belong to the public, it belongs to that community.

While the award is done with voting and ballots, I believe it is really a measurement, which is to say, a survey. We want to measure the aggregate opinion of the community on what the best of the year was. The opinions are, of course, subjective, but the aggregate opinion is an objective fact, if we could learn it.

In particular, I would venture we wish to know which works would get the most support among fans, if the fans had the time to fairly judge all serious contenders. Of course, not everybody reads everything, and not everybody votes, so we can’t ever know that precisely, but if we did know it, it’s what we would want to give the award to.

To get closer to that, we use a 2 step process, beginning with a nomination ballot. Survey the community, and try to come up with a good estimate of the best contenders based on fan opinion. This both honours the nominees but more importantly it now gives the members the chance to more fully evaluate them and make a fair comparison. To help, in a process I began 22 years ago, the members get access to electronic versions of almost all the nominees, and a few months in which to evaluate them.

Then the final ballot is run, and if things have gone well, we’ve identified what truly is the best loved work of the informed and well-read fans. Understand again, the choices of the fans are opinions, but the result of the process is our best estimate of a fact — a fact about the opinions.

The process is designed to help obtain that winner, and there are several sub-goals:

  • The process should, of course, get as close to the truth as it can. In the end, the most people should feel it was the best choice.
  • The process should be fair, and appear to be fair
  • The process should be easy to participate in, administer and to understand
  • The process should not encourage any member to hide their true opinion on their ballot. If they lie on their ballot, how can we know the true aggregate of their opinions?
  • As such, ballots should be generated independently, and there should be very little “strategy” to the system which encourages members to falsely represent their views to help one candidate over another.
  • It should encourage participation, and the number of nominees has to be small enough that it’s reasonable for people to fairly evaluate them all

A tall order, when we add a new element — people willing to abuse the rules to alter the results away from the true opinion of the fans. In this case, we had this through collusion. Two related parties published “slates” — the analog of political parties — and their followers carried them out, voting for most or all of the slate instead of voting their own independent and true opinion.

This corrupts the system greatly because when everybody else nominates independently, their nominations are broadly distributed among a large number of potential candidates. A group that colludes and concentrates their choices will easily dominate, even if it’s a small minority of the community. A survey of opinion becomes completely invalid if the respondents collude or don’t express their true views. Done in this way, I would go so far as to describe it as cheating, even though it is done within the context of the rules.

Proposals that are robust against collusion

Collusion is actually fairly obvious if the group is of decent size. Their efforts stick out clearly in a sea of broadly distributed independent nominations. There are algorithms which make it less powerful. There are other algorithms that effectively promote ballot concentration even among independent nominators so that the collusion is less useful.

A wide variety have been discussed. Their broad approaches include:

  • Systems that diminish the power of a nominating ballot as more of its choices are declared winners. Effectively, the more you get of what you asked for, the less likely you are to get more of it. This mostly prevents a sweep of all nominations, and also increases diversity in the final result, even beyond the true diversity of the independent nominators. (A simplified sketch of this point-splitting approach appears after this list.)
  • Systems which attempt to “maximize happiness,” which is to say try to make the most people pleased with the ballot by adding up for each person the fraction of their choices that won and maximizing that. This requires that nominators not all nominate 5 items, and makes a ballot with just one nomination quite strong. Similar systems allow putting weight on nominations to make some stronger than others.
  • Public voting, where people can see running tallies, and respond to collusion with their own counter-nominations.
  • Reduction of the number of nominations for each member, to stop sweeps.
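
As promised above, here is a simplified Python sketch of the first family of approaches, the point-splitting elimination method of which “E Pluribus Hugo” is the best-known member. The real proposal has additional tie-breaking rules that are omitted here, so this is an illustration of the idea rather than the exact algorithm.

```python
# Each nominating ballot contributes one point, split evenly among its
# choices still in contention.  Each round, the two works with the fewest
# points are compared by raw nomination count and the weaker is eliminated.

from collections import Counter

def point_splitting_finalists(ballots, finalist_count=5):
    """ballots: list of sets of nominated works.  Returns the surviving set."""
    remaining = set().union(*ballots)
    raw_counts = Counter(work for ballot in ballots for work in ballot)
    while len(remaining) > finalist_count:
        points = Counter()
        for ballot in ballots:
            live = ballot & remaining
            for work in live:
                points[work] += 1.0 / len(live)
        # The two works with the fewest points face elimination...
        lowest_two = sorted(remaining, key=lambda w: points[w])[:2]
        # ...and the one with fewer raw nominations is removed.
        remaining.discard(min(lowest_two, key=lambda w: raw_counts[w]))
    return remaining
```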

The proposals work to varying degrees, but they all significantly increase the “strategy” component for an individual voter. It becomes the norm that if you have just a little information about what the most popular choices will be, your wisest course to get the ballot you want will be to deliberately remove certain works from your ballot.

Some members would ignore this and nominate honestly. Many, however, would read articles about strategy, and either practice it or wonder if they were doing the right thing. In addition to debates about collusion, there would be debates on how strategy affected the ballot.

Certain variants of multi-candidate STV help against collusion and have less strategy, but most of the methods proposed have a lot.

In addition, all the systems let at least one, and as many as 2 or 3, slate-choice nominees onto the final ballot. While members will probably know which ones those are, this is still not desired. First of all, these placements displace other works which would otherwise have made the ballot. You could increase the size of the final ballot to compensate, but you would need to know how many slate choices will be on it.

It should be clear that, when others do not collude, slate collusion is very powerful. In many political systems, it is actually considered a great result if a party with 20% of the voters gains 20% of the “victories.” Here, we have a situation with 2,000 nominators, where just 100 colluding members can saturate some categories and get several entries into all of them, and where 10% (the likely amount in 2015) can capture a large fraction of the slots. As such it is not proportional representation at all.

Fighting human attackers with human defence

After considering the risks of confusion and strategy in all these systems, I have been led to the conclusion that the only solid response to organized attackers on the nomination system is a system of human judgement. Instead of hard and fast voting rules, the time has come, regrettably, to have people judge whether the system is under attack, and to give them the power to fix it.

This is hardly anything new; it’s how almost all systems of governance work. It may be hubris to suggest the award can get by without it. Like the good systems of governance, this must be done with impartiality, transparency and accountability, but it must be done.

I see a few variants which could be used. Enforcement would most probably be done by the Hugo Committee, which is normally a special subcommittee of the group running the Worldcon. However, it need not be them, and could be a different subcommittee, or an elected body.

While some of the variants I describe below add complexity, it is not necessary to adopt them. One important thing about the rule of justice is that you don’t have to get it exactly precise. You get it in broad strokes and you trust people. Sometimes it fails. Mostly it works, unless you bring in the wrong incentives.

As such, some of these proposals work by not changing almost anything about the “user experience” of the system. You can do this with people nominating and voting as they always did, and relying on human vigilance to deflect attacks. You can also use the humans for more than that.

A broad rule against collusion and other clear ethical violations

The rule could be as broad as to prohibit “any actions which clearly compromise the honesty and independence of ballots.” There would be some clarifications, to indicate this does not forbid ordinary lobbying and promotion, but does prohibit collusion, vote buying, paying for memberships which vote as you instruct and similar actions. The examples would not draw hard lines, but give guidance.

Explicit rules about specific acts

The rule could be much more explicit, with less discretion, naming specific unethical acts. It turns out that collusion can be detected by the appearance of patterns in the ballots which are extremely unlikely to occur in a proper independent sample. You don’t even need to know who was involved or prove that anybody agreed to any particular conspiracy.

The big challenge with explicit rules (which take 2 years to change) is that clever human attackers can find holes, and exploit them, and you can’t fix it then, or in the next year.

Delegation of nominating power or judicial power to a sub group elected by the members

Judicial power to fix problems with a ballot could fall to a committee chosen by members. This group would be chosen by a well established voting system, similar to those discussed for the nomination. Here, proportional representation makes sense, so if a group is 10% of the members it should have 10% of this committee. That won’t do it much good, though, if the others all oppose it. Unlike fixed rules, the delegates would be human beings, able to learn and reason. With 2,000 members, and 50 members per delegate, there would be 40 on the judicial committee, and it could probably be trusted to act fairly with that many people. In addition, action could require some sort of supermajority. If a 2/3 supermajority were needed, attackers would need to be 1/3 of all members.

This council could perhaps be given only the power to add nominations — beyond the normal fixed count — and not to remove them. Thus if there are inappropriate nominations, they could only express their opinion on that, and leave it to the voters what to do with those candidates, including not reading them and not ranking them.

Instead of judicial power, it might be simpler to grant pure nominating power to delegates. Collusion is useless here because in effect all members are now colluding about their different interests, but in an honest way. Unlike pure direct democracy, the delegates, not unlike an award jury, would be expected to listen to members (and even look at nominating ballots done by them) but be charged with coming up with the best consensus on the goal stated above. Such jurors would not simply vote their preferences. They would swear to attempt to examine as many works as possible in their efforts. They would suggest works to others and expect them to be likely to look at them. They would expect to be heavily lobbied and promoted to, but as long as it’s pure speech (no bribes other than free books and perhaps some nice parties) they would be expected not to be fooled so easily by such efforts.

As above, a nominating body might also only start with a member nominating system and add candidates to it and express rulings about why. In many awards, the primary function of the award jury is not to bypass the membership ballot, but to add one or two works that were obscure and the members may have missed. This is not a bad function, so long as the “real ballot” (the one you feel a duty to evaluate) is not too large.

Transparency and accountability

There is one barrier to transparency, in that releasing preliminary results biases the electorate in the final ballot, which would remain a direct survey of members with no intermediaries — though still with the potential to look for attacks and corruption. There could also be auditors, who are barred from voting in the awards and are allowed to see all that goes on. Auditors might be people from the prior Worldcon or some other source, or fans chosen at random.

Finally, decisions could be appealed to the business meeting. This requires a business meeting after the Hugos. Attackers would probably always appeal any ruling against them. Appeals can’t alter nominations, obviously, or restore candidates who were eliminated.

Comprehensive plan

All the above requires the two year ratification process and could not come into effect (mostly) until 2017. To deal with the current cheating and the promised cheating in 2016, the following are recommended.

  1. Downplay the 2015 Hugo Award, perhaps with sufficient fans supporting this that all categories (including untainted ones) have no award given.
  2. Conduct a parallel award under a new system, and fête it like the Hugos, though it would not use that name.
  3. Pass new proposed rules including a special rule for 2016
  4. If 2016’s award is also compromised, do the same. However, at the 2016 business meeting, ratify a short-term amendment proposed in 2015 declaring the alternate awards to be the Hugo awards if run under the new rules, and discarding the uncounted results of the 2016 Hugos conducted under the old system. Another amendment would permit winners of the 2015 alternate award to say they are Hugo winners.
  5. If the attackers gave up, and 2016’s awards run normally, do not ratify the emergency plan, and instead ratify the new system that is robust against attack for use in 2017.

Hugo awards suborned, what can or should be done?

Since 1992 I have had a long association with the Hugo Awards for SF & Fantasy given by the World Science Fiction Society/Convention. In 1993 I published the Hugo and Nebula Anthology, which was for some time the largest anthology of current fiction ever published, and one of the earliest major e-book projects. While I did it as a commercial venture, in the years to come it became the norm for the award organizers to publish an electronic anthology of willing nominees for free to the voters.

This year, things are highly controversial, because a group of fans/editors/writers calling themselves the “Sad Puppies” had great success with a campaign to dominate the nominations for the awards. They published a slate of recommended nominations, and a sufficient number of people sent in nominating ballots with that slate so that it dominated most of the award categories. Some categories are entirely the slate; only one was not affected. It’s important to understand that the nominating and voting on the Hugos is done by members of the World SF Society, which is to say people who attend the World SF Convention (Worldcon) or who purchase special “supporting” memberships which don’t let you go but give you voting rights. This is a self-selected group, but in spite of that, it has mostly managed to run a reasonably independent vote to select the greatest works of the year. The group is not large, and in many categories, it can take only a score or two of nominations to make the ballot, and victory margins are often small. As such, it’s always been possible, and not even particularly hard, to subvert the process with any concerted effort. It’s even possible to do it with money, because you can just buy memberships which can nominate or vote, so long as a real unique person is behind each ballot.

The nominating group is self-selected, but it’s mostly a group that joins because they care about SF and its fandom, and as such, this keeps the award voting more independent than you would expect for a self-selected group. But this has changed.

The reasoning behind the Sad Puppy effort is complex and there is much contentious debate you can find on the web, and I’m about to get into some inside baseball, so if you don’t care about the Hugos, or the social dynamics of awards and conventions, you may want to skip this post.

Issues in regulating robocars, and the case for a light hand

All over the world, people (and governments) are debating regulations for robocars: first for testing, and then for operation. It mostly began when Google encouraged the state of Nevada to write regulations, but now the debate is in full force. The topic is so hot that there is a danger that regulations might be drafted long before the first commercial deployments of the technology take shape.

As such I have prepared a new special article on the issues around regulating robocars. The article concludes that, in spite of frequent claims that we should regulate and standardize even before the technology has been out in the market for a while, this is in fact both a highly unusual approach and possibly even a dangerous one.

Read:

Regulating Robocar Safety: An examination of the issues around regulating robocar safety and the case for a very light touch