Software recalls and quick fixes to safety-critical computers in robocars

While giving a talk on robocars to a Stanford class on automotive innovation on Wednesday, I outlined the growing problem of software recalls and how they might affect cars. If a company discovers a safety problem in a car’s software, it may be advised by its lawyers to shut down or cripple the cars by remote command until a fix is available. Sebastian Thrun, who had invited me to address this class, felt this could be dealt with through the ability to remotely patch the software.

This brings up an issue I have written about before — the giant dangers of automatic software updates. Automatic software updates are a huge security hole in today’s computer systems. On typical home computers, there are now many packages that do automatic updates. Due to the lack of security in these OSs, a variety of companies have been “given the keys” to full administrative access on the millions of computers which run their auto-updaters. Companies which go to great lengths to secure their computers and networks routinely grant all these software companies top-level access (i.e., the ability to run arbitrary code on demand) without thinking about it. Most of these software companies are honest and would never abuse this, but that doesn’t mean they don’t have employees who can be bribed or suborned, or security holes in their own networks which would let an attacker in to make a malicious update which is automatically sent out.

I once asked the man who ran the server room where the servers for PointCast (the first big auto-updating application) were housed how many fingers somebody would need to break to get into his server room. “They would not have to break any. Any physical threat and they would probably get in,” he told me. This is not unusual, and often there are ways in that require far less than this.

So now let’s consider software systems which control our safety. We are trusting our safety to computers more and more these days. Every elevator or airplane has a computer which could kill us if maliciously programmed. More and more cars have them, and more will over time, long before we ride in robocars. All around the world are electric devices with computer controls which could, if programmed maliciously, probably overload and start many fires, too. Of course, voting machines with malicious programs could even elect the wrong candidates and start baseless wars. (Not that I’m saying this has happened, just that it could.)

However, these systems do not have automatic update. The temptation to add automatic update will grow over time, both because it is cheap and because it lets safety problems be fixed quickly, which we want for critical systems. While the internal software systems of a robocar would not be connected to the internet in a traditional way, they might be programmed to, every so often, request and accept certified updates to their firmware from the components of the car’s computer systems which are connected to the net.

Imagine a big car company with 20 million robocars on the road, and an automatic software update facility. This would allow a malicious person, if they could suborn that automatic update ability, to load in nasty software which could kill tens of millions. Not just the people riding in the robocars would be affected, because the malicious software could command idle cars to start moving and hit other cars or run down pedestrians. It would be a catastrophe of grand proportions, greater than a major epidemic or multiple nuclear bombs. That’s no small statement.

There are steps that can be taken to limit this. Software updates should be digitally signed, and they should be signed by multiple independent parties. This stops any one of the official parties who is suborned (by being a mole, or being tortured, or having a child kidnapped, etc.) from sending out an update alone. But it doesn’t change the fact that the five executives who have to sign an update will still be trusting the programming team to have delivered them a safe update. Assuring that requires a major code review of every new update, by a team that carefully examines all source changes and compiles the source themselves. Right now this just isn’t common practice.
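To make that concrete, here is a minimal sketch of what k-of-n signature checking could look like, in Python using the cryptography package’s Ed25519 primitives. The 3-of-5 threshold and the key handling are my own illustrative assumptions, not any automaker’s actual scheme.

```python
# Minimal k-of-n update verification sketch. Uses the Python
# "cryptography" package; the 3-of-5 threshold and key list are
# illustrative assumptions, not any vendor's actual scheme.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

REQUIRED = 3  # e.g. 3 of 5 independent signers must approve an update

def valid_signature_count(update: bytes, signatures: list[bytes],
                          trusted_keys: list[Ed25519PublicKey]) -> int:
    """Count distinct trusted keys that produced a valid signature."""
    count = 0
    for key in trusted_keys:
        for sig in signatures:
            try:
                key.verify(sig, update)  # raises InvalidSignature on failure
                count += 1
                break  # credit each key at most once
            except InvalidSignature:
                continue
    return count

def may_install(update: bytes, signatures, trusted_keys) -> bool:
    return valid_signature_count(update, signatures, trusted_keys) >= REQUIRED
```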

However, it gets worse than this. An attacker can also suborn the development tools, such as the C compilers and linkers which build the final binaries. The source might be clean, but few companies keep perfect security on all their tools. Doing so requires that all the tool vendors have a similar attention to security in all their releases. And on all the tools they use.

One has to ask if this is even possible. Can such a level of security be maintained on all the components, enough to stop a terrorist programmer or a foreign government from inserting a trojan into a tool used by a compiler vendor who then sends certified compilers to the developers of safety-critical software such as robocars? Can every machine on every network at every tool vendor be kept safe from this?

We will try but the answer is probably not. As such, one result may be that automatic updates are a bad idea. If updates spread more slowly, with the individual participation of each machine owner, it gives more time to spot malicious code. It doesn’t mean that malicious code can’t be spread, as individual owners who install updates certainly won’t be checking everything they approve. But it can stop the instantaneous spread, and give a chance to find logic bombs set to go off later.
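A rough sketch of what such a staggered, owner-approved rollout might look like; the day-by-day caps are invented for illustration.

```python
# Sketch of a staged, owner-approved rollout: each day the update is
# offered to only a capped fraction of the fleet, giving a logic bomb
# time to surface. The caps are illustrative assumptions.
ROLLOUT_CAPS = [0.001, 0.01, 0.05, 0.25, 1.0]  # fraction of fleet per day

def offer_update(days_since_release: int, vehicle_index: int,
                 fleet_size: int) -> bool:
    """True if this vehicle's owner should be asked to approve the update."""
    cap = ROLLOUT_CAPS[min(days_since_release, len(ROLLOUT_CAPS) - 1)]
    return vehicle_index < cap * fleet_size  # indices assumed 0..fleet_size-1
```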

Normally we don’t want to go overboard worrying about “movie plot” threats like these. But when a single person can kill tens of millions because of a software administration practice, it starts to be worthy of notice.

Do you get Twitter? Is a "sampled" medium good or bad?

I just returned from Jeff Pulver’s “140 Characters” conference in L.A. which was about Twitter. I asked many people if they get Twitter — not if they understand how it’s useful, but why it is such a hot item, and whether it deserves to be, with billion dollar valuations and many talking about it as the most important platform.

Some suggested Twitter is not as big as it appears, with a larger churn than expected and some plateau appearing in new users. Others think it is still shooting for the moon.

The first value I found in Twitter was as a broadcast SMS. While I would not text all my friends when I go to a restaurant or a club, having a way for them to easily know that (and perhaps join me) is valuable. Other services have tried to do things like this, but Twitter is the one that succeeded, in spite of not being aimed at any specific application like this.

This explains the secret of Twitter. By being simple (and forcing brevity) it was able to be universal. By being more universal, it could more easily attain critical mass within groups of friends. While a tool dedicated to some specific social or location-based application might do the job better, it needs a critical mass of friends using it to work. Once Twitter got that mass, it had a leg up on being that platform.

At first, people wondered if Twitter’s simplicity (and requirement for brevity) was a bug or a feature. It definitely seems to have worked as a feature. By keeping things short, Twitter makes it less scary to follow people. It’s hard for me to get new subscribers to this blog, because subscribing to the blog means you will see my moderately long posts every day or two, and that’s an investment in reading. To subscribe to somebody’s Twitter feed is no big commitment. Thus people can get a million followers there, when no blog has that. In addition, the brevity makes it a good match for the mobile phone, which is the primary way people use Twitter. (Though usually via smartphone, not the old SMS way.)

And yet it is hard not to be frustrated at Twitter for being so simple. There are so many things people do with Twitter that could be done better by some more specialized or complex tool. Yet it does not happen.

Twitter has made me revise slightly my two axes of social media — serial vs. browsed, and reader-friendly vs. writer-friendly. Twitter is generally serial, and I would say it is writer-friendly (it is easy to tweet) but not so reader-friendly (the volume gets too high).

However, Twitter, in its latest mode, is something different. It is “sampled.” In normal serial media, you usually consume all of it. You come in to read, and the tool shows you all the new items in the stream. Your goal is to read them all, and the publishers tend to expect it. Most Twitter users now follow far too many people to read it all, so the best they can do is sample — they come in at various times of day and find out what their stalkees are up to right then. Of course, other media have also been sampled, including newspapers and message boards, simply because people don’t have time, or because they go away too long to catch up. On Twitter, however, going away for even a couple of hours will leave you too many tweets to catch up on.

This makes Twitter an odd choice as a publishing tool. If I publish on this blog, I expect most of my RSS subscribers will see it, even if they check a week later. If I tweet something, only a small fraction of my followers will see it — only if they happen to read shortly after I write it, and sometimes not even then. Perhaps some who follow only a few people will see it later, or those who specifically check on my postings. (You can’t: mine are protected, which turns out to be a mistake on Twitter, but there are nasty privacy consequences to not being protected.)

TV has an unusual history in this regard. In the early days, there were so few stations that many people watched, at one time or another, all the major shows. As TV grew to many channels, it became a sampled medium. You would channel surf, and stop at things that were interesting, knowing that most of the stream was going by. When the TiVo arose, TV became a subscription medium, where you identify the programs you like and see only those, with perhaps some suggestions thrown in to sample from.

Online media, however, and social media in particular were not intended to be sampled. Sure, everybody would just skip over the high volume of their mailing lists and news feeds when coming back from a vacation, but this was the exception and not the rule.

The question is, will Twitter’s nature as a sampled medium be a bug or a feature? It seems like a bug but so did the simplicity. It makes it easy to get followers, which the narcissists and the PR flacks love, but many of the tweets get missed (unless they get picked up as a meme and re-tweeted) and nobody loves that.

On Protection: It is typical to tweet not just blog-like items but the personal story of your day — where you went and when. This is fine as a thing to tell friends in the moment, but with a public Twitter feed it’s being recorded forever by many different players. The ephemeral aspects of your life become permanent.

But if you do protect your feed, you can’t do a lot of things on Twitter. What you write won’t be seen by others who search for hashtags. You can’t reply to people who don’t follow you. You’re an outsider. The only way to solve this would be to make Twitter truly proprietary, blocking all the services that are republishing it, analysing it and indexing it. In that case, dedicated applications make more sense. For example, while location-based apps need my location, they don’t need to record it for more than a short period. They can safely erase it and still provide me a good app. But they can only do this if they are proprietary, because if they give my location to other tools, it is hard to stop those tools from recording it and making it all public. There’s no good answer here.

New Robocar center at Stanford, Audi TT to race up Pikes Peak

Saturday saw the dedication of a new autonomous vehicle research center at Stanford, sponsored by Volkswagen. VW provided the hardware for Stanley and Junior, which came 1st and 2nd in the 2nd and 3rd DARPA Grand Challenges, and Junior was on display at the event, driving through the parking lot and along the Stanford streets, then parking itself before a cheering crowd.

Junior continues to be a testing platform with its nice array of sensors and computers, though the driving it did on Saturday was largely done with the Velodyne LIDAR that spins on top of it, and an internal map of the geometry of the streets at Stanford.

New and interesting was a demonstration of the “Valet Parking” mode of a new test vehicle, for now just called Junior 3. What’s interesting about J3 is that it is almost entirely stock. All that is added are two lower-cost LIDAR sensors on the rear fenders. It also has a camera at the rear-view mirror (which is stock in cars with night-assist mode) and a few radar sensors used in the fixed-distance cruise control system. J3 is otherwise a Passat. Well, the trunk is filled with computers, but there is no reason what it does could not be done with a hidden embedded computer.

What it does is valet park itself. This is an earlier-than-expected implementation of one of the steps I outlined in the roadmap to robocars: robo-valet parking. J3 relies on the fact that the “valet” lot is empty of everything but cars and pillars. Its sensors are not good enough to deal well with random civilians, so this technology would only work in an enclosed lot, with only employees entering when needed. To use it, the driver brings the car to an entrance marked by four spots on the ground that the car can see. Then the driver leaves and the car takes over. In this case it has a map of the garage in its computer, but it could also download one on arrival at a parking lot. Using the map, and just the odometer, it is able to cruise the lanes of the parking lot, looking for an empty spot, which it sees using the radar. (Big metal cars of course show clearly on the radar.) It then drives into the spot.
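For illustration, here is a toy version of that spot search in Python; the map format, radar interface and geometry are my own assumptions, not Stanford’s actual code.

```python
# Toy version of the spot search described above: with a garage map and
# only odometry for position, check each mapped stall for radar returns
# and take the first free one. All structures here are assumed.
from dataclasses import dataclass

@dataclass
class Stall:
    stall_id: int
    x: float  # map coordinates of the stall centre, in metres
    y: float

def find_empty_stall(stalls: list[Stall],
                     radar_returns: list[tuple[float, float]],
                     stall_radius: float = 1.5) -> Stall | None:
    """Return the first stall with no radar hit inside it (cars show clearly)."""
    for stall in stalls:
        occupied = any(
            (rx - stall.x) ** 2 + (ry - stall.y) ** 2 < stall_radius ** 2
            for rx, ry in radar_returns
        )
        if not occupied:
            return stall
    return None  # lot is full
```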


Every connector, including video, should send power both ways

I’ve written a lot about how to do better power connectors for all our devices, and the quest for universal DC and AC power plugs that negotiate the power delivered with a digital protocol.

While I’ve mostly been interested in some way of standardizing power plugs (at least within a given current range, and possibly even beyond) today I was thinking we might want to go further, and make it possible for almost every connector we use to also deliver or receive power.

I came to this realization plugging my laptop into a projector which we generally do with a VGA or DVI cable these days. While there are some rare battery powered ones, almost all projectors are high power devices with plenty of power available. Yet I need to plug my laptop into its own power supply while I am doing the video. Why not allow the projector to send power to me down the video cable? Indeed, why not allow any desktop display to power a laptop plugged into it?

As you may know, a Power-over-ethernet (PoE) standard exists to provide up to 13 watts over an ordinary ethernet connector, and is commonly used to power switches, wireless access points and VoIP phones.

In all the systems I have described, all but the simplest devices would connect, and one or both would provide an initial very-low-current +5 VDC offering that is enough to power only the power-negotiation chip. The two ends would then negotiate the real power offering — what voltage, how many amps, how many watt-hours are needed or available, and so on — and, for special connectors, which wires to send the power on.

An important part of the negotiation would be to understand the needs of devices and their batteries. In many cases, a power source may only offer enough power to run a device but not charge its battery. Many laptops will run on only 10 watts, normally, and less with the screen off, but their power supplies will be much larger in order to deal with the laptop under full load and the charging of a fully discharged battery. A device’s charging system will have to know to not charge the battery at all in low power situations, or to just offer it minimal power for very slow charging. An ethernet cable offering 13 watts might well tell the laptop that it will need to go to its own battery if the CPU goes into high usage mode. A laptop drawing an average of 13 watts (not including battery charging) could run forever with the battery providing for peaks and absorbing valleys.
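To make this concrete, here is a toy model of such a negotiation in Python. The message fields, the decision logic and the 13-watt budget in the example are all my own assumptions about how such a protocol might look, not any standard.

```python
# Toy model of the two-way power negotiation described above. All field
# names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PowerOffer:
    voltage: float      # volts the source can supply
    max_watts: float    # continuous power budget

@dataclass
class PowerRequest:
    run_watts: float    # typical load to run the device
    peak_watts: float   # worst case, e.g. full CPU load
    charge_watts: float # extra wanted for battery charging

def negotiate(offer: PowerOffer, need: PowerRequest) -> dict:
    """Decide what the device may draw and whether it can charge."""
    if offer.max_watts < need.run_watts:
        return {"accept": False, "reason": "cannot even run the device"}
    surplus = offer.max_watts - need.run_watts
    return {
        "accept": True,
        "charge_watts": min(surplus, need.charge_watts),  # may be ~0: run, don't charge
        "battery_covers_peaks": offer.max_watts < need.peak_watts,
    }

# Example: a 13 W PoE-class source and a laptop averaging 10 W. It runs,
# trickle-charges at 3 W, and dips into its battery when the CPU spikes.
print(negotiate(PowerOffer(48.0, 13.0), PowerRequest(10.0, 25.0, 30.0)))
```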

Now a VGA or DVI cable, though it has thin wires, has many of them, and at 48 volts it could actually deliver plenty of power to a laptop — and thus there would be no need to plug in the laptop separately when it is on a projector or monitor. Indeed, one could imagine a laptop that uses this as its primary power jack, with the power plug having a VGA male and female on it to power the laptop.
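Some back-of-envelope arithmetic supports this. VGA’s 15-pin connector is real, but the idea that four conductors could be spared at 0.5 A each is my own assumption.

```python
# Back-of-envelope check of the claim. The per-conductor rating of
# 0.5 A for thin wires, and sparing 4 conductors for +48 V (returning
# on the cable's ground wires), are assumptions.
volts = 48.0
amps_per_conductor = 0.5
spare_conductors = 4
watts = volts * amps_per_conductor * spare_conductors
print(f"{watts:.0f} W available")  # 96 W: ample for a laptop drawing 10-25 W
```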

I think it is important that these protocols go both directions. There will be times when the situation is reversed, when it would be very nice to be able to power low power displays over the video cable and avoid having to plug them in. With the negotiation system, the components could report when this will work and when it won’t. (If the display can do a low power mode it can display a message about needing more juice.) Tiny portable projectors could also get their power this way if a laptop will offer it.

Of course, this approach can apply everywhere, not just video cables and ethernet cables, though they are prime candidates. USB of course is already power+data, though it has an official master/slave hierarchy and thus does not go both directions. It’s not out of the question to even see a power protocol on headphone cables, RF cables, speaker cables and more. (Though there is an argument that for headphones and microphones there should just be a switch to USB and its cousins.)

Laptops have tried to amalgamate their cables before, through the use of docking stations. The problem was that these stations were all custom to the laptop, and often priced quite expensively. As a result, many prefer the simple USB docking station, which can provide USB, wired ethernet, keyboard, mouse and even slowish video through one wire — all standardized and usable with any laptop. However, it doesn’t provide power, because of the way USB works. Today our video cables are the highest-bandwidth connectors on most devices, and as such they can’t easily be replaced by lower-bandwidth ones, so throwing power through them makes sense, and throwing in a USB data bus for everything else might well make a lot of sense too. This would bring us back to having just a single connector to plug in. (It creates a security problem, however, as you should not let a randomly plugged-in device act as an input such as a keyboard or drive; such a device could take over your computer if somebody has hacked it to do so.)

Nissan emulates school of fish, and Singularity Summit Robocar notes

Some time ago I proposed the “School of Fish Test” as a sort of Turing test for robocars. In addition to being a test for the cars, it is also intended to be a way to demonstrate to the public when the vehicles have reached a certain level of safety. (In the test, a swarm of robocars moves on a track, and a skeptic in a sports car is unable to hit one no matter what they do, like a diver trying to touch fish when swimming through a school.)

I was interested to read this month that Nissan has built test cars based on fish-derived algorithms as part of a series of experiments based on observing how animals swarm. (I presume this is coincidental, and the Nissan team did not know of my proposed test.)

The Nissan work (building on earlier work on bees) is based upon a swarm of robots which cooperate. The biggest test involves combining cooperating robots, non-cooperating robots and (mostly non-cooperating) human drivers, cyclists and pedestrians. Since the first robocars on the road will be alone, it is necessary to develop fully safe systems that do not depend on any cooperation with other cars. It can of course be useful to communicate with other cars, determine how much you trust them, and then cooperate with them, but this is something that can only be exploited later rather than sooner. In particular, while many people propose to me that building convoys of cars which draft one another is a good initial application of robotics (and indeed you can already get cars with cruise control that follows at a fixed distance) the problem is not just one of critical mass. A safety failure among cooperating cars runs the risk of causing a multi-car collision, with possible multiple injured parties, and this is a risk that should not be taken in early deployments of the technology.

My talk at the Singularity Summit on robocars was quite well received. Many were glad to see a talk on more near-future modest AI after a number of talks on full human level AI, while others wanted only the latter. A few questions raised some interesting issues:

  • One person asked about the insurance and car repair industries. I got a big laugh by saying, “fuck ‘em.” While I am not actually that mean spirited about it, and I understand why some would react negatively to trends which will obsolete their industries, we can’t really be that backwards-looking.
  • Another wondered if, after children discover that the nice cars will never hit them, they will then travel to less safe roads without having learned proper safety instincts. This is a valid point, though I have already worried about the disruption to passengers whose cars have to swerve around kids who play in the streets when it is not so clearly dangerous. Certain types of jaywalking that interfere with traffic will need to be discouraged or punished, though safe jaywalking, when no car is near, should be allowed and even encouraged.
  • One woman asked if we might become disassociated with our environments if we spend our time in cars reading or chatting, never looking out. This is already true in a taxicab city like New York, though only limos offer face-to-face chat. I think the ability to read or work instead of focus on the road is mostly a feature and not a bug, but she does have a point. Still, we get even more divorced from the environment on things like subways.

As expected, the New York audience, unlike other U.S. audiences, saw no problem with giving up driving. Everywhere else I go, people swear that Americans love their cars and love driving and will never give it up. While some do feel that way, it’s obviously not a permanent condition.

Some other (non-transportation) observations from Singularity Summit are forthcoming.

BTW, I will be giving a Robocar talk next Wednesday, Oct 28 at Stanford University for the ME302 - Future of the Automobile class. (This is open to the general Stanford community, affiliates of their CARS institute, and a small number of the public. You can email btm@templetons.com if you would like to go.)

Great power graphic tells us -- put solar power in New Mexico, but how?

I’m impressed with a great interactive map of the U.S. power grid produced by NPR. It lets you see the location of existing and proposed grid lines, and all power plants, plus the distribution of power generation in each state.

On this chart you can see which states use coal most heavily — West Virginia at 98%, Utah, Wyoming, North Dakota, Indiana at 95% and New Mexico at 85%. You can see that California uses very little coal but 47% natural gas, that the NW uses mostly Hydro from places like Grand Coulee and much more. I recommend clicking on the link.

They also have charts of where solar and other renewable plants are (almost nowhere) and the solar radiation values.

Seeing it all together makes something clear that I wrote about earlier. If you want to put up solar panels, the best thing to do is to put them somewhere with good sun and lots of coal burning power plants. That’s places like New Mexico and Utah. Putting up a solar panel in California will give it pretty good sunlight — but will only offset natural gas. A solar panel in the midwest will offset coal but won’t get as much sun. In the Northeast it gets even less sun and offsets less coal.
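Here is the arithmetic behind that claim, with rough, assumed emission factors and annual panel yields; the exact numbers vary by site and year, but the ordering is the point.

```python
# Illustrative arithmetic for the claim above. Emission factors and
# annual yields per installed kW of panels are rough, assumed values.
KG_CO2_PER_KWH = {"coal": 1.0, "natural_gas": 0.45}
ANNUAL_KWH_PER_KW = {"New Mexico": 1800, "California": 1600, "Northeast": 1250}

def annual_offset_kg(location: str, displaced_fuel: str) -> float:
    return ANNUAL_KWH_PER_KW[location] * KG_CO2_PER_KWH[displaced_fuel]

print(annual_offset_kg("New Mexico", "coal"))         # ~1800 kg CO2/yr per kW
print(annual_offset_kg("California", "natural_gas"))  # ~720 kg: less than half
print(annual_offset_kg("Northeast", "coal"))          # ~1250 kg: less sun
```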

Much better than putting up solar panels anywhere, however, is actually using the money to encourage real conservation in the coal-heavy areas like West Virginia, Wyoming, North Dakota or Indiana.

While, as I have written, solar panels are a terrible means of greening the power grid from a cost standpoint, people still want to put them up. If that’s going to happen, what would be great would be a way for those with money and a desire to green the grid to make that money work in the places it will do the best. This is a difficult challenge. People, sadly, are more interested in feeling they are doing the right thing than in actually doing it, and they feel good when they see solar panels on their roof and see their meter going backwards. It makes up for the pain of the giant cheque they wrote, without ever actually recovering the money. Writing that cheque so somebody else’s meter can go backwards (even if you get the savings) just isn’t satisfying to people.

It would make even more sense to put solar-thermal plants (at least at today’s prices), wind or geothermal in these coal-heavy areas.

It might be interesting to propose a system where rich greens can pay to put solar panels on the roofs of houses where it will do the most good. The homeowner would still pay for power, but at a lower price than they paid before. This money would mostly go to the person who financed the solar panels. The system would include an internet-connected control computer, so the person doing the financing could still watch the meter go backwards, at least virtually, and track power generated and income earned. The only problem is, the return would be sucky, so it’s hard to make this satisfying. To help, the display would also show tons of coal that were not burned, and compare it to what would have happened if they had put the panels on their own roof.

Of course, another counter to this is that California and a few other places have very high tiered electrical rates which may not exist in the coal states. Because of that — essentially a financial incentive set up by the regulators to encourage power conservation — it may be much more cost-effective to have the panels in low-coal California than in high-coal areas, even if it’s not the greenest thing.

An even better plan would be to find a way for “rich greens” (people willing to spend some money to encourage clean power) to finance conservation in coal-heavy areas. To do this, the cooperation of the power companies would be required. For example, one of the best ways to do this would be to replace old fridges with new ones. (Replacing fridges costs $100 per MWh removed from the grid, compared to $250 for solar panels.)

  • The rich green would provide money to help buy the new fridge.
  • An inspector comes to see the old fridge and confirms it is really in use as the main fridge. Old receipts may be demanded, though few people will have kept them. A device is connected to assure it is not unplugged, other than in a local power failure.
  • A few months later — to also assure the old fridge was really the one in use — the new fridge would be delivered by a truck that hauls the old one away. Inspectors confirm things and the buyer gets a rebate on their new fridge thanks to the rich green.
  • The new, energy-efficient fridge has built in power monitoring and wireless internet. It reports power usage to the power company.
  • The new fridge owner pays the power company 80% of what they used to pay for power for the old fridge. I.e., they pay more than their actual new power usage costs.
  • The excess money goes to the rich green who funded the rebate on the fridge, until the rebate plus a decent rate of return is paid back.

To the person with the old fridge: they get a nice new fridge at a discount price — possibly even close to free. Their power bill on the fridge goes down 20%. The rest of the savings (about 30% of the old bill, typically) goes to the power company and then to the person who financed the rebate.
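A worked example of those cash flows, with assumed numbers for usage, rates and rebate size:

```python
# Worked example of the cash flows described above. All numbers
# (usage, rate, rebate) are assumptions for illustration.
old_kwh, new_kwh = 1500, 750      # annual usage, old vs. new fridge
rate = 0.10                       # $/kWh
old_bill = old_kwh * rate         # $150/yr on the old fridge
owner_pays = 0.80 * old_bill      # owner pays 80% of the old bill: $120
actual_cost = new_kwh * rate      # true cost of running the new fridge: $75
to_financier = owner_pays - actual_cost  # $45/yr (~30% of the old bill)

rebate = 300.0                    # assumed rich-green contribution
years_to_repay = rebate / to_financier
print(f"Owner saves ${old_bill - owner_pays:.0f}/yr; "
      f"financier repaid in about {years_to_repay:.1f} years")
```

With these numbers the financier waits nearly seven years just to break even, which is exactly the “sucky return” problem noted above.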

A number of the steps above are there to minimize fraud. For example, you don’t want people deliberately digging out an ancient fridge and putting it in place to get a false rebate. You also don’t want them taking the old fridge and moving it into the garage as a spare, which would actually make things worse. The latter is easy to assure by having the delivery company haul away the old one. The former is a bit tricky. The above plan at least demands that the old fridge be in place in their kitchen for a couple of months, and there be no obvious signs that it was just put in place. The metering plan demands wireless internet in the home, and the ability to configure the appliance to use it. That’s getting easier to demand, even of poor people with old fridges. Unless the program is wildly popular, this requirement would not be hard to meet.

Instead of wireless internet, the fridge could also just communicate the usage figures to a device the meter-reader carries when she visits the home to read the regular meter. Usage figures for the old fridge would be based on numbers for the model, not the individual unit.

It’s a bit harder to apply this to light bulbs, which are the biggest conservation win. Yes, you could send out crews to replace incandescent bulbs with CFLs, but it is not cost effective to meter them and know how much power they actually saved. For CFLs, the program would have to be simpler with no money going back to the person funding the rebates.

All of this depends on a program which is popular enough to make the power monitoring chips and systems in enough quantity that they don’t add much to the cost of the fridge at all.

European intelligent vehicle test

Robocar news:

This press release describes a European research project on various intelligent vehicle technologies which will take place next year. As I outline in the roadmap, a number of pre-robocar technologies are making their way into regular cars so they can be sold as safer and more convenient. This project will actively collect data to learn about and improve these systems.

Today’s systems are fairly simple, of course, and researchers will learn a lot from this project. It matches my prediction for how a robocar test suite will be developed: by gathering millions, and later billions, of miles of sample data, including all accidents and anomalous events, with better and better sensors over time.

Initial reaction to these systems (which will have early flaws) may colour user opinion of them. For example, some adaptive cruise controls reportedly are too eager to decide there is a stopped car and will suddenly stop a vehicle. One of the challenges of automatic vehicle design will be finding ways to keep it safe without it being too conservative because real drivers are not very conservative. (They are also not very safe, but this defines the standards people expect.)

Text me if you lose my luggage

Just back from a weeklong tour including speaking at Singularity Summit, teaching classes at Cushing Academy, a big Thanksgiving dinner (well, Thanksgiving is actually today, but we had it earlier) and a drive through fabulous fall colour in Muskoka.

This time United Airlines managed to misplace my luggage in both directions. (A reminder of why I don’t like to check luggage.) The first time they had an “excuse,” in that we checked it only about 10 minutes before the baggage check deadline and the TSA took extra time on it. On the way back it missed a 1-hour, 30-minute connection — no excuse for that.

However, again, my rule for judging companies is how they handle their mistakes as well as how often they make them. And, in JFK, when we went to baggage claim, they actually had somebody call our name and tell us the bag was not on the flight, so we went directly to file the missing luggage report. However, on the return flight, connecting in Denver to San Jose, we got the more “normal” experience — wait a long time at the baggage claim until you realize no more bags are coming and you’re among the last people waiting, and then go file a lost luggage report.

This made me realize — with modern bag tracking systems, the airline knows your bag is not on the plane at the time they close the cargo hold door, well before takeoff. They need to know this, as it is part of the passenger-to-bag matching system they tried to build after the Pan Am 103 Lockerbie bombing. So the following things should be done:

  • If they know my mobile number (and they do, because they text me delays and gate changes) they should text me that my luggage did not make the plane.
  • The text should contain a URL where I can fill out my lost luggage report or track where my luggage actually is. (Sketched below.)
  • Failing this, they should have a screen at the gate when you arrive with messages for passengers, including lost luggage reports. Or just have the gate agent print it and put it on the board if a screen costs too much.
  • Failing this, they should have a screen at the baggage claim with notes for passengers about lost luggage so you don’t sit and wait.
  • Failing this, an employee can go to the baggage claim and page the names of passengers, which is what they did in JFK.
  • Like some airlines do, they should put a box with “Last Bag, Flight nnn” written on it on the luggage conveyor belt when the last bag has gone through, so people know not to wait in vain.
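As a sketch of how the first two items on the list might work server-side: every function, field and URL here is hypothetical, not any airline’s actual system.

```python
# Sketch of the first two bullets: when the cargo door closes, diff the
# loaded-bag scans against the manifest and text affected passengers a
# link. All names and the URL scheme are hypothetical.
from dataclasses import dataclass

@dataclass
class Passenger:
    name: str
    mobile_number: str | None
    bag_tags: list[str]

def send_sms(number: str, message: str) -> None:
    print(f"SMS to {number}: {message}")  # stand-in for a real SMS gateway

def on_cargo_door_closed(flight_number: str, loaded_tags: set[str],
                         manifest: list[Passenger]) -> None:
    """Notify each passenger whose checked bags did not get loaded."""
    for p in manifest:
        missing = [t for t in p.bag_tags if t not in loaded_tags]
        if missing and p.mobile_number:
            send_sms(p.mobile_number,
                     f"Your bag did not make flight {flight_number}. "
                     f"File a report or track it: "
                     f"https://airline.example/bags?tags={','.join(missing)}")
```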

I might very well learn my luggage is not on board before the plane closes the door. In that case I might even elect not to take the flight, though I can see that the airline might not want people to do this, as they are usually about to close the door if they have not already closed it.

Letting me fill out the form on the web saves the airline time and saves me time. I can probably do it right on the plane after it lands and cell phone use is allowed. I don’t even have to go to baggage claim. Make it mobile browser friendly of course.