Submitted by brad on Wed, 2005-08-03 17:39.
Recently, Joel on Software wrote an essay on good programmers and how they are qualitatively different from average ones. This is not a new realization, and he knows it and references sources like "The Mythical Man Month." It was accepted wisdom decades ago that a small team of really brilliant programmers would make a better product than a giant team of lesser ones.
That wisdom, however, failed to predict the rise of Microsoft. That wisdom says a software monopoly is impossible because there are reverse economies of scale in software development. So how did Microsoft do it? The answer to that is perhaps the true genius of Bill Gates.
The trick, in part, was finding ways to make software tremendously broad in scope and features. Microsoft Word has bazillions of features, as most people know. Windows in its kernel isn't much more complex than other systems but the real Windows also includes a vast collection of DLLs (libraries) that seem external but are really part of the OS. To clone the OS you must make these DLLs -- and many other things.
A program like MS Word, with so many features, takes raw money to clone. You need that core team of great programmers -- and Microsoft has many great programmers, make no mistake -- but you also need a giant team of lesser ones to keep all the features going, to QA and document them, to translate them and make them work in so many environments.
This does have an economy of scale in the development. Combine that with the immense economies of scale that exist in the distribution of all soft things that can be copied for free, and a monopoly became possible.
Of course, no single user makes use of all the features of MS Word, so it took even more skill to get them to demand such a complex program, when they might be better served by a leaner, more elegant system. Like I said, this is only part of it.
Submitted by brad on Mon, 2005-08-01 12:06.
Mapping programs, and fancy GPSs come with map databases that will, among other things, plot routes for you and estimate the time to travel them. That’s great, but they are often wrong in a number of ways. Sometimes the streets are wrong (missing, really just a trail, etc.) and they just do a rough estimation of travel time.
Yet all the information is there, being collected constantly by every car that drives the roads with a GPS. Aggregating this data will tell you which roads are real, which roads might be missing, which are one-way, and where freeway entrances and exits really are.
And it will also tell you real-world speed examples at various times and dates, at rush hour or otherwise. Even a range of speeds so you can know the speeds for faster and slower drivers and get a really good estimate of your own likely speed on a given road at a given time. After removing the anomalies (like people stopping for coffee) of course.
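A rough sketch of that aggregation step, in Python (the names and thresholds are just illustrative): group speed samples by road segment, discard near-zero readings as likely stops, and keep a slow/typical/fast range per segment.

```python
from statistics import quantiles

def speed_profile(samples, min_speed=2.0):
    """Aggregate GPS speed samples per road segment into a fast/typical/slow
    range. Samples are (road_id, speed_mph) tuples; readings below
    min_speed are treated as stops (coffee breaks, red lights) and dropped."""
    by_road = {}
    for road_id, speed in samples:
        if speed >= min_speed:            # discard anomalies: stopped cars
            by_road.setdefault(road_id, []).append(speed)
    profile = {}
    for road_id, speeds in by_road.items():
        if len(speeds) < 4:
            continue                      # not enough data to be meaningful
        q = quantiles(speeds, n=4)        # quartile cut points
        profile[road_id] = {"slow": q[0], "typical": q[1], "fast": q[2]}
    return profile
```

A real system would also bucket by time of day and day of week, so rush hour and Sunday morning get separate profiles.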
Rental cars with GPSs are collecting this all the time (sometimes put to nefarious uses, like charging whopping fees for brief trips out of state). Technically this data can be had.
But here’s the bad part — there is a potential for giant privacy troubles unless this is done very well, and some may be impossible to do without a privacy risk. After all, until you upload the data, there is clearly a log of your travels sitting there to be used against you. Only a system with rapid upload (and which discards data that gets old, even if it’s not uploaded) would not create a large risk of something coming back to haunt you.
The data would have to be anonymized, of course, and that’s harder than it sounds. After all, your GPS logs say a lot about you even without your name. Most would identify where you live, though that can be mitigated by breaking them up into anonymized fragments to a degree. Likewise they’ll identify where you work or shop or who you visit, all of which could be traced back to you.
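One mitigation along the fragmentation lines above, sketched in Python (the trim and fragment sizes are arbitrary): drop the points nearest the trip's endpoints, which are the most identifying, and give each remaining fragment a random ID that can't be linked back to the others.

```python
import uuid

def anonymize_trip(points, trim=5, fragment_len=20):
    """Split one GPS trip (a list of (lat, lon, t) points) into short
    fragments with unlinkable random IDs. The first and last `trim`
    points, the ones most likely to reveal home or workplace, are
    discarded entirely."""
    core = points[trim:len(points) - trim]      # drop revealing endpoints
    fragments = []
    for i in range(0, len(core), fragment_len):
        chunk = core[i:i + fragment_len]
        if chunk:
            # A fresh random ID per fragment: fragments of the same trip
            # cannot be linked back together by the aggregator.
            fragments.append((uuid.uuid4().hex, chunk))
    return fragments
```

Even this is only a partial fix, of course: a fragment that starts in your driveway identifies you no matter what ID it carries, which is why the endpoints get thrown away rather than anonymized.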
So here’s the Solve This aspect of this problem. Getting good data would be really handy. So how do we do it without creating a surveillance nightmare?
Submitted by brad on Sat, 2005-07-30 22:13.
Recently there was a big fuss (including denouncements from many I know) over a U.S. effort to do away with the leap second. People claimed this was like trying to legislate pi to be 3. I am amazed at the leap to the defense of the leap second. I would be glad to see it go. All our computers keep track of time internally as a number of seconds since some epoch, typically Jan 1, 1970 or 1980. They go through various contortions to turn that absolute time into the local time. This includes knowing all the leap-year and leap-day calculations. It’s complicated by knowing that sometimes the day is Feb 29, and by knowing that a very, very few minutes have 61 seconds in them (or if you prefer, that a very few hours have 3601 seconds and rare days have 86401 seconds).
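In fact, Unix time already pretends the problem away: it counts every day as exactly 86400 seconds and leap seconds simply do not exist in its arithmetic. A minimal Python sketch of that conversion:

```python
def epoch_to_hms(epoch_seconds):
    """Convert seconds-since-epoch to (days, hh, mm, ss) the way POSIX
    time does: every day is exactly 86400 seconds, every minute exactly
    60. Leap seconds don't exist in this arithmetic, which is why code
    built on it breaks when a real 61-second minute comes along."""
    days, rem = divmod(epoch_seconds, 86400)
    hh, rem = divmod(rem, 3600)
    mm, ss = divmod(rem, 60)
    return days, hh, mm, ss
```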
That’s a mess. A minute should always have 60 seconds. Special-casing all time code to deal with this was the wrong approach, and as noted, is subject to errors because the code is very rarely tested in that state. I’m astounded to see people saying this is the same as declaring pi to be 3. It’s having 86400 seconds in most days and rare leap seconds that is the integerization of a real number. The truly scientific approach would be to declare the day to be 86400.002 seconds, and lengthen that number over the centuries, would it not?
Astronomers, like computers, can and should keep track of time as an absolute number of seconds since some epoch. They actually care very little about what the local time is other than to know when it’s dark, something leap seconds have insignificant bearing on. Indeed, astronomers might be happiest using sidereal time (where a day is 23 hours, 56 minutes and 4 seconds, the true rotational period of the Earth).
Our system of time is not one scientists would pick in the first place. It is clearly designed for the convenience of ordinary people, and is the legacy of traditional means of telling time. It’s silly to use this legacy system and at the same time demand that the general public and its timekeeping systems jump through error-prone hoops to make it reflect noon correctly to the second. Nobody even uses true local time anyway; they all use a time zone. Zone time is already off by a huge margin from local solar time, so why does it matter in the slightest if it’s off by a few more seconds? In many centuries, the drift will be noticeable. If we still care about local time, we can fix it then.
Submitted by brad on Fri, 2005-07-29 22:46.
This is a tricky puzzle question I thought up some time ago, but I figured I would blog it.
As people who study physics know, the acceleration a falling body undergoes if dropped (in a vacuum) at the surface of the earth is known as “g”, or 9.8 meters per second per second.
This is so close to 10 that students and people doing back-of-the-envelope calculations often use 10 as the value of “g”. It’s easy: fall for one second and you’re going 10 meters/second.
So I got to wondering, how much would the earth have to change to make “g” equal to 10 instead of 9.80665?
So here’s the puzzle. By what ratio would you have to increase the diameter of the Earth so that the people on this bigger planet would have “g” equal to 10? Assume the average density of the Earth remains the same (5.46 g/cc).
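If you want to check your work afterward (mild spoiler in the code): for a uniform sphere, g = (4/3)·π·G·ρ·r, so at fixed density g is simply proportional to the radius, and the required ratio falls right out. A quick Python sketch, using the density from the post:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
rho = 5460.0           # mean density from the post, kg/m^3 (5.46 g/cc)

def surface_gravity(radius_m):
    """g at the surface of a uniform sphere: g = G * M / r^2 with
    M = (4/3) * pi * r^3 * rho, which simplifies to
    (4/3) * pi * G * rho * r."""
    return (4.0 / 3.0) * math.pi * G * rho * radius_m

# Because g is linear in r at fixed density, the required diameter
# ratio is just the ratio of the two gravities; the current radius
# cancels out entirely.
ratio = 10.0 / 9.80665
```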
Then click to the Puzzle Answer
Submitted by brad on Wed, 2005-07-27 15:47.
Ok, this idea will make no sense to those who have not gone RV camping. RVs have 3 water tanks — one for fresh water, one for the toilet sewage (known as “black water”) and one for the other drains (shower, sinks) known as “grey water.” When you camp in unserviced campsites for a while you become very aware of the capacities of your tanks.
However, the RV uses the fresh water tank to “flush” the toilet. It seems to me that with a small extra water pump, one could use the grey water, or a mixture — grey with a final spurt of fresh to rinse the bowl.
RVs don’t really flush the toilet, that would use way too much water. You rinse the bowl after #1 and you pre-fill the bowl before #2 and rinse later. read more »
Submitted by brad on Thu, 2005-07-21 20:10.
The EFF is holding a Blog-a-thon on our 15th anniversary inviting people to describe things that made them decide to fight for freedom.
That seemed like a good time for me to add some details to one of my early stories, about the banning of my moderated newsgroup rec.humor.funny. I've told the RHF ban story before and even the story of how it led to the creation of ClariNet, and I'll be adding more details in my upcoming history of ClariNet later this year.
Today, at 18 years old (created Aug 7, 1987), rec.humor.funny and the netfunny.com site qualify as one of the oldest blogs in existence, if not the oldest. The first blog as far as I can tell was the moderated newsgroup mod.ber, created by Brian Redmond in 1984. Mod.ber is long gone, so something else is the oldest blog. Blog, short for weblog, means a personally created serialized publication on the web. The web, though many people have forgotten it, is and was defined by Tim Berners-Lee as including not just HTTP and HTML but the other protocols such as USENET, FTP, Gopher and Telnet that existed before HTTP. So the rare USENET groups that were moderated for content were the first blogs, and some remain today, and this is thus the story of the first banned blog.
(Other suggested candidates for oldest blog include RISKS digest, Telecom Digest and Human-Nets, though they were more discussion boards than blogs.)
I had always been a defender of free speech to that point, but nothing brings it home like being banned yourself. It's also remarkable to me how many threads of my life run through that banning. These include business threads (the creation of ClariNet) and even personal ones (it's how I met John McCarthy, who introduced me to a past girlfriend a decade later.) I wasn't unknown before but the events did a lot to boost my visibility.
Being censored was a remarkably emotional experience. It didn't help that it was on the front pages of the newspaper every day and that the best (if most frustrating) thing to do was to keep silent and let the press coverage blow over. It did teach me the truth of the aphorism that censorship doesn't protect people from exposure to violent ideas because censorship is violence.
The EFF didn't exist during this period. Had it existed, it would probably have come to my aid. But many others did, which was heartening. And I learned a bit more about how useful satire is as a tool in these battles. Having fought in the online trenches, I was ready to support the EFF when (in another strange coincidence) my friend of 10 years earlier, Mitch Kapor, led the drive to create it. And later, of course, I have become very proud to be involved in it.
Submitted by brad on Wed, 2005-07-20 00:32.
I blogged earlier about my being in the Silicon Valley 100, a group generated by a marketing company to send out free stuff to hopefully influential folks. In that posting, I link to Dan Gillmor’s reaction to the program, where he writes about how “spooky” it is to him. I didn’t agree that it was that spooky, but there is a definite irony to the fact that I recently got a set of books via the SV100, and in that set was Dan’s own book “We the Media.”
Dan’s eyes rolled up when I told him that at dinner this evening. Of course, it was his publisher Tim O’Reilly who put the book out to the group, and again I don’t find much spooky about it. Publishers have sent out free copies of books to folks, hoping for reviews and buzz, since the dawn of books.
I was with Dan at the first event in our program celebrating the EFF’s 15th birthday, a BayFF panel on blogging and blogger’s rights which attracted an overflow crowd with an engaged audience. I’ll remember to announce some of the other events in our program in advance. We sang Happy Birthday based on the older lyrics which are out of copyright.
Submitted by brad on Mon, 2005-07-18 13:31.
On 9/11, we all wondered how 19 men had lived for a year among us and still given their lives to carry out such acts. Now people wonder even more at how young native British men would give their lives to kill random fellow Britons.
But there is something different here that troubles me. Most suicide terrorists ostensibly use this tactic because there are targets you can only attack if you give your life. Suicide was an essential part of flying a plane into a building.
But you very plainly don’t need to kill yourself to set off a subway bomb. Even with increased vigilance, anybody could leave a backpack behind, rush out the doors just as they are about to close, and then blow up the bomb as the train enters the tunnel, without dying or even getting caught (except perhaps on camera). There was almost no tactical need for these men to kill themselves. Yes, it makes it a little more certain to do it that way, but we hope that committed terrorists (especially with such clean pedigrees) are not so many in number that they can be wasted this way.
But they were wasted this way, and I think deliberately. I suspect they were chosen not as the most committed, but as the most unlikely, just for the shock value, the idea that your neighbour could be a suicide bomber. And strong shock value it is, because while you don’t have to die to bomb a train, there are a lot of targets where willingness to die is tactically necessary to carry it off, and it’s close to impossible to defend against such attacks.
Those who think careful ID checks and national ID cards will stop terrorists now must step back. These kids had clean ID. The security cameras have helped discover the story but of course could do nothing to prevent it.
I have contended (to much opposition) that terrorism and suicide bombing is a tool used against democracies that are accused of oppression. A recent book (Dying to Win by Robert Pape) now backs this up. The answers are not good.
Submitted by brad on Sun, 2005-07-17 22:52.
I wrote before on the ideal car dock for an MP3 player but the truth is we could use something even simpler sooner. On my recent trip, we brought the cassette adapter but there was no tape player in the rental car. We forgot the FM transmitter, but that’s not as good anyway.
So right away let’s see a small headphone plug on the car stereos to do a nice aux input, especially if you are taking away the tape. Duh.
But we can go beyond that with a USB jack, since all music players can plug into that, though with different results. A few of them will be clever enough to draw power and recharge from it — indeed, it is time for cars to have USB jacks just for power, since now my cell phones and PDA can all charge from that, using a cigarette lighter plug with USB jack to do so — but we want something with the data.
With some music players, plug into USB and they look like a hard drive with music files on it. The stereo could be an MP3 player, but might have trouble with the DRMed music. We could also leave the MP3 player in control, but develop a protocol for it to stream digital audio to the stereo, and for the stereo to send back commands (FF/Rew, Pause, skip track etc.) to the player. Yeah, you could also do this over Bluetooth, but since you want power when in the car anyway, wires remain the right choice.
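A sketch of what the command half of such a protocol might look like, in Python. The frame format, sync byte and command names here are invented for illustration; a real standard would of course be negotiated between the stereo and player makers.

```python
from enum import Enum

class StereoCommand(Enum):
    """Commands the head unit might send back to the player over the
    same USB link that carries the audio (names are illustrative)."""
    PLAY = 1
    PAUSE = 2
    FAST_FORWARD = 3
    REWIND = 4
    NEXT_TRACK = 5
    PREV_TRACK = 6

def encode_command(cmd):
    """Pack a command into a tiny two-byte frame: a sync byte followed
    by the command code. A real protocol would add checksums, track
    metadata queries, and a version handshake."""
    return bytes([0xA5, cmd.value])

def decode_command(frame):
    """Reverse of encode_command; rejects frames without the sync byte."""
    assert frame[0] == 0xA5, "bad sync byte"
    return StereoCommand(frame[1])
```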
Perhaps down the road we might see music players splitting into two halves — drive and UI electronics/power. The drive unit, be it flash stick or hard disk, holds your music and files, and the UI unit does the rest, and can be mated with any drive, as can the computer and as can the car stereo.
Submitted by brad on Sat, 2005-07-16 15:18.
As I noted earlier, last weekend I was at Oregon Country Fair, which is a great time. OCF has permanent facilities and has become more popular than it wants to be. All the booths, including food, have to be juried in and can in theory be kicked out to allow new ones in if popularity drops.
This results in much, much better food booths than you see at a typical random fair with vendors coming in simply if they pay their money.
And I wondered, can we extend this concept into the everyday restaurant world? For example, a food mall, where the restaurant tenants are regularly judged for quality, and kicked out if they don’t make the cut. Where you are assured a good meal at a reasonable price. If the idea works, people would go to this mall and make it worth the effort by the restaurants to stay.
This might work the same way movieplexes took over from solo cinemas. People go to a movieplex for the hot movie, but it often is sold out, so they go to a 2nd or 3rd or sometimes even 10th choice of what they want to see. This sells a lot more tickets and avoids people driving home without a movie at all — though in my case I still sometimes bail out. Here, you could go to the restaurant mall with a particular restaurant in mind, but know that if it’s too busy a fine meal is assured unless the whole mall is packed. There could even be a central line for “the next available restaurant.”
Has this been done before? And what about going further and combining facilities… read more »
Submitted by brad on Thu, 2005-07-14 19:40.
All my sites were off today as I did an emergency switch of servers.
The whole story is amusing, so I’ll tell it. I used to host my web sites with Verio shared hosting, but they were overpriced and did some bad censorship acts, so I was itching to leave. One day my internet connection went out, so I went onto my deck with my laptop to see what free wireless there was in the area. One strong one had an e-mail address as the SSID, though it was WEP-locked. Later, I e-mailed that address with a “hi neighbour” and met the guy around the corner. He had set the SSID that way to get just such a mail as mine. (I have a URL as my SSID now for the same purpose.)
My neighbour, it turned out, knew some people I knew in the biz, and told me about a special club he was in, called “Root Club.” The first rule of Root Club, he joked, was that you do not talk about root club. Now that I’m out, I can tell the story. Root Club was started as a group of sysadmins who shared a powerful colocated web server, and all shared the root password and sysadmin duties. read more »
Submitted by brad on Wed, 2005-07-13 12:00.
Having completed a long fly-n-drive road trip, I have some lessons and observations.
If you will be driving a lot, use a rental car even if leaving your own city. We put 3000 miles on our rental car for $300 — far less than the depreciation cost would have been on my own car.
It’s great to have a cooler in the car: you can buy perishables and get cold drinks when you want them. But forget about those $5 styrofoam coolers for any long trip. Within a few days ours was leaking; we fixed it by putting a plastic bag inside and out, but they are not very sturdy. There are collapsible coolers, and we have one, but we didn’t have luggage room. You can buy a cheap solid cooler for under $20 at Wal-Mart or Costco, but it seems wasteful to throw it away. If you have extra luggage, however, you can fill a cooler with stuff, duct tape it, and check it as luggage. read more »
Submitted by brad on Tue, 2005-07-12 16:08.
When you take pictures on the road, you would love to have the latitude and longitude coordinates of each picture stored with it. Indeed, if combined with a digital compass clever software could even tell you what landmark was in the photograph. (ie. if standing on rim of Grand Canyon looking north, it's probably a picture of the canyon.)
To attain this, some digital cameras allow you to plug a GPS into the camera, which is unwieldy to say the least. There's been talk of a bluetooth connection which is better but uses power. On a recent trip Kathryn suggested that the log from the GPS could later be matched up with the timestamps of the photos, which is a great idea -- and a web search reveals a few software packages out there do indeed do this. (And thus also allow photo organizing by geographic location, map-based browsing of photos and other such useful features.)
For the user not wanting to hook up all the devices and use software, I came up with a possible interesting design. Place a memory card slot in the GPS, or allow it to plug in USB or other memory card interfaces. The GPS could then look over the photos on an inserted memory card, read their timestamps, and use its own onboard history of where the GPS was at those exact times, and write coordinates into the files on the flash card. If it can write them on the end of the file that's easiest, if it has to rewrite each entire file that would be a bit slower.
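The timestamp-matching core is simple either way. Here is a minimal Python sketch, assuming the camera and GPS clocks agree, using linear interpolation between the two track fixes that bracket each photo:

```python
import bisect

def geotag(track, photo_time):
    """Look up a photo's position from a GPS track log by timestamp.
    `track` is a time-sorted list of (t, lat, lon) fixes; the photo's
    coordinates are linearly interpolated between the two surrounding
    fixes. Photos outside the log get clamped to the nearest endpoint."""
    times = [t for t, _, _ in track]
    i = bisect.bisect_left(times, photo_time)
    if i == 0:
        return track[0][1:]
    if i == len(track):
        return track[-1][1:]
    (t0, lat0, lon0), (t1, lat1, lon1) = track[i - 1], track[i]
    f = (photo_time - t0) / (t1 - t0)
    return (lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0))
```

In practice the camera clock drifts, so a real tool would also let you enter a fixed offset (photograph the GPS screen once and compare timestamps).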
Most digital cameras also have their own USB interface, so the GPS could simply have a USB controller and the camera could be plugged into the GPS after shooting to update the photo files with their location stamps. Most, though perhaps not all digital cameras can act like a USB drive in addition to doing camera control. Of course a standard protocol for updating locations would make this easier, but the main idea allows work with existing digital cameras. (Though they all have their own custom USB plugs and provide their own cable.)
As noted, this can give you great photo organizing. You can see your photos as thumbnails or pushpins on a map. You could link photos to Google Maps or satellite imagery of the area. Directories on disk could be created by placename, or even without names photos could be grouped by each major shooting area, instead of just one new directory per 100 photos.
The cameras will eventually get smart enough to be the smart device, but for now the GPS can easily be it. Older GPSs don't have very large track log memories, but today memory is cheap and that's not as much of an issue.
Submitted by brad on Tue, 2005-07-12 12:02.
I recently visited the Oregon Country Fair, which among many other things has entertainment acts which pass the hat to earn their living. (OCF only costs about $13 to attend, not enough to pay much if anything to acts.) This is a pretty common setup.
And perhaps this has been done where I haven't seen it, but I was wondering about a solution to what one busker called the "magic disappearing audience trick." Most people don't put into the hat. So along the lines of my microrefunds concept, where I suggest a solution may be to push people into making one decision, instead of many, over whether they will pay for things that don't have compulsory payment, I propose a system for busker fairs.
The plan would be for the Fair to raise their price and provide each fairgoer with "busker chips" to put in the hats of buskers. Once paid for, the chips would, at least officially, be good only for that. The Fair would also probably keep a small fraction of the money, ie. pay each busker 90 cents for each $1 busker chip turned in. People could of course also toss regular money into the hats.
These chips, aside from providing more revenue for the entertainment, would allow the fair to know what acts were the most popular, and thus who to bring back and who to leave out.
There are some other issues to discuss below. Such as the probable black market in the chips, and what price to charge for them... read more »
Submitted by brad on Mon, 2005-06-27 10:54.
Last year at Burning Man, I built a free phone booth out on the desert. Using VoIP, 802.11, batteries and a satellite uplink, it sat there on the playa floor and let you make free calls anywhere in the world. I blogged about that story, but there was an untold part of the story.
The phone had a number that outsiders could call, and they did, and sometimes people there answered. If not they left voice mail. The voice mail told them to describe the target of their message and offer a bribe to the listener to deliver it. Alas, due to technical problems, we never really got an active system in place to deliver the voice mails, but people still left some. Recently I pulled them out and listened, and they are great fun, especially if you know Burning Man.
Within the mails are calls of love from moms, little kids, dads, lovers and friends. There’s a joke (I hope) firing from a boss and a proposal of marriage. There’s a hurricane warning and many descriptions that could never have found their target in our giant city (“She’s a blonde camped along 4:30 I think.”) Also fun are the offered bribes to deliver the messages.
Since everybody knew they were leaving a message for any random stranger to hear, I think it’s fine to have them on the web.
I don’t think you have to be a burner to enjoy these. Just imagine the context of an entire city of 40,000 people with one phone, one voice mail, and people trying to get messages in.
They can be heard at this page of Burning man voice mails. You can either read the short summaries to pick the best voicemails, or like me, just listen to them raw from the combined file or ZIP archive.
Submitted by brad on Sun, 2005-06-26 20:14.
This special forum topic exists to help people identify the best local company to use for a temporary prepaid GSM SIM card when you visit that country. If you research this, put your results here. In particular look for the best results for a short term visitor, who thus won’t care much about when the minutes expire and may or may not care when the number expires. A typical cost to compare would be the cost of the card and say 60 to 100 anytime minutes. However, if there is a major difference for somebody planning mostly night/weekend calling, note that.
Here are things to note in your comment:
- Company and their URL
- Price for SIM, price for a cost-effective prepaid card
- Ease of getting the card
- Other companies to check if this one isn’t convenient
- When will cheapest minutes expire, and how long after that does number expire
- Can you refill from overseas (ie. with non-local credit card)
- For comparison, cost of a prepaid account including (probably subsidy locked) phone. This bundle can be cheaper than an unlock and a naked SIM.
Important note: If you have any affiliation with a company you talk about or link to, you must disclose it. No affiliate links allowed. Furthermore, you must post your prices (create an account so you can come back and edit your posting when they change), and they must be among the best deals out there. We want real information on the best deals, not self-promotion or typical vastly overpriced cards.
Submitted by brad on Sat, 2005-06-25 17:26.
Everybody is having a great time these days with the new and increasing satellite imagery found at Google Maps, finding their own houses and world landmarks.
I found a database built by a Keyhole user describing all the coordinates of the 788 Unesco World Heritage Sites. With a bit of perl magic I turned the Keyhole format into a series of web pages with links to Google satellite imagery.
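The Perl in question isn't shown here, but the core transformation is tiny. A Python sketch of the same idea, generating a satellite link per site; the `ll`/`t=k` URL parameters are the ones Google Maps accepts as I write this, and URL formats do change, so treat them as illustrative:

```python
def maps_link(name, lat, lon, zoom=5):
    """Build a Google Maps satellite-view link for one site.
    t=k selects the satellite ('keyhole') layer; ll centers the map."""
    url = f"http://maps.google.com/maps?ll={lat:.5f},{lon:.5f}&t=k&z={zoom}"
    return name, url

def sites_to_html(sites):
    """Turn a list of (name, lat, lon) records into simple link markup,
    one anchor per line, ready to paste into a page per country."""
    lines = []
    for name, lat, lon in sites:
        title, url = maps_link(name, lat, lon)
        lines.append(f'<a href="{url}">{title}</a>')
    return "\n".join(lines)
```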
Some of these landmarks are very cool from above; some are totally boring. Some are in high-res; many, however (especially outside the USA and Canada), are in lower resolution, so you can’t see as much. There are links to the Unesco web site for more information on the sites in any event.
You can start with the master page of Google Maps for World Heritage Sites and click to any particular country to see the sites and the links. For example you can click on the USA World Heritage page, and there you will find the link to the satellite image of the Statue of Liberty — where you should then zoom in to see the statue up close. The Mammoth Cave system is not so remarkable from the air. :-)
On pages with low res, you can usually zoom in once or twice; on hi-res pages, you can go in several times to see lots of detail. I fear you can waste a passel of time seeing many of these sites.
Thanks to Aladdin of the Keyhole message board for tabulating the data in Keyhole format.
Just as a side note — right now I am out visiting some real World Heritage Sites (3 in one week) and not correcting my own list in real time.
Submitted by brad on Thu, 2005-06-23 22:43.
Right now the push in displays is all for computer and TV displays, with fast response time, and ideally in a flat form-factor. But these are expensive, really expensive if you want more than 2 megapixels.
What if we bring back an old technology — long persistence phosphors — and use them to make displays intended for still images, such as photography and art, at high resolution? They are cheap and bright. And if you don’t need to do 60 frames/second, you can also get away with cheap electronics and more resolution per persisting frame.
It would be easy to start with black and white. B&W displays require just a screen of phosphor and a way to excite it. The resolution can be extremely high. Colour requires a shadow mask style technology, or projection such as CRT projection. A portal in the wall for say 10 to 20 megapixel B&W photographs and art might be a desirable product for the home. But there’s hardly any limit for B&W.
There is a limit for CRTs; it gets expensive to make a tube that big. New technologies allow the electron gun to sit closer to the screen so the tube need not be as deep as it is wide, but these are still heavy and fragile. CRT projection (mounted in the roof) might be a good answer.
There are however lots of ultraviolet phosphors which could be triggered by a UV laser or other such source, for rooftop projection or rear projection. If the persistence is long enough so you only have to do a few frames per second you can get in lots of resolution I would think. What would you pay for a 30 megapixel portal in your wall, one as sharp as a window (but not moving and only in 2D of course) showing scenes of the world, and great photography and art?
Submitted by brad on Thu, 2005-06-23 17:49.
Well, the Supreme Court ruled today that expropriation for private development can still be legal if the town council seems to think there’s a public benefit. It’s a terrible decision, with strange logic, and strange votes from the judges, but you will probably read many other articles about that today. What I want to figure is, given this ruling, what can we do to make it better?
What we will see happening is a land developer coming to the city with a plan to demolish and redevelop a block in a way that they claim will be good for the city — perhaps bringing in tourists, jobs, business, whatever. Of course the deal is very good for the land developer, or they would not be drafting it.
I suggest we make it less sweet for the developer in such cases and give some of that sweetness to the expropriation victims. Today they get a “fair market value” for their property (that part of the 5th amendment wasn’t shredded) but I say, if the expropriation is for private use, let’s give them more.
First, start by paying them this fair market value at the date of expropriation, as we do now.
Then, after the deal is complete (with some time limits and other good constraints) we want to determine just how much “value” came from aggregating the properties. Right now this value goes to the developer. We’re going to give most or all of it to the expropriated folks. So we come up with a value for the amalgamated property. (More below on how to do that.) This pre-opening profit would go, all or most of it, to the landowners. The developer keeps any further appreciation of the property as they operate it — they need an upside too, of course.
More ideas follow… read more »
Submitted by Monty Zukowski on Wed, 2005-06-22 22:39.
First I would like to thank Brad for setting up my account so I can post my ideas here.
I own 80 acres of woodlands in Southern Oregon. I would love to be able to inventory every tree on it. The aerial photos the county has of my property are not quite detailed enough, and they show the crown of a tree but not the size of the trunk. Seedlings are completely hidden.
Using a video camera I could do video panoramas at various spots on the property. To obtain depth would require either dual video cameras for parallax or a laser mounted a foot or so off of the side of the video camera. Dual video cameras would be out of phase with each other, and that would need to be accounted for in creating the depth information. Would a level's laser be powerful enough to see at 100' off of bark? If it were then the position of the laser spot on the video image would be an indication of the depth of the object.
Or maybe mount the camera on a sliding track. Leave the tripod in the same place, but the first pass has the camera in the center of tripod rotation, the second pass moves it a foot away from that center. Having a marker, like a stick with a reflector on it, at a fixed distance from the tripod (using a string) would help with calibrating and converging the images. Also by mounting the camera sideways I could get a little bit more vertical information since that would make the picture higher than it is wide.
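The geometry of the sliding-track idea is the standard stereo relation: distance = baseline * focal length / disparity. A small Python sketch, with illustrative units and numbers:

```python
def depth_from_parallax(baseline_ft, focal_px, disparity_px):
    """Estimate distance to a tree trunk from two panorama passes taken
    `baseline_ft` apart (e.g. 1 foot on the sliding track). `focal_px`
    is the camera focal length expressed in pixels; `disparity_px` is
    how far the trunk shifts between the two images. Standard stereo
    relation: depth = baseline * focal / disparity."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or images misaligned")
    return baseline_ft * focal_px / disparity_px
```

With a one-foot baseline and a focal length around 1000 pixels, a trunk 100 feet away shifts only about 10 pixels between passes, which is exactly why the reflector stakes and careful alignment matter.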
I would print out a map beforehand and mark the spots roughly where I captured the panorama. I might even leave a stake in the ground for next year's inventory. My GPS doesn't work well under dense canopy, so I wouldn't rely on having it for this project. It might make it easier to process if I had a directional indicator, like always starting the panorama from magnetic north according to a compass. Leaving a colored stake in the ground and being sure to capture it on my next panorama would help align the panoramas as well.
From there a map should be able to be constructed. Each panorama would be turned into a disc on the map, with depth information showing how far the tree is from the center of the disc. Ideally the discs would overlap enough to have redundant information for some trees and the stakes, which would help align the rest of the trees on the map. The map should be able to show the trees and also areas of the property with "unknown" information, from which I could figure out what other spots would be good to take more panoramas from.
The panoramas themselves would be useful to see how the forest is changing over time. They could possibly be aligned and shown one above the other. In the ten years I've been here seedlings have grown taller than myself. I've thinned some areas, removing dense areas of small trees to allow just a few of the biggest trees to thrive, on the theory they will be getting more nutrients that the other seedlings are no longer "stealing."
This would also be a useful tool for monitoring timber sales to see the before & after of a harvest. There are many different ways to sustainably manage a forest. I would have a much better mental picture of the effects of various practices if I could really see the before and after of the work I do, in six or 12 month intervals. Forestry inventories are typically done by sampling a fixed area, counting stems and measuring their diameter at breast height. A tool like this could automate the capture of that kind of data, and help people get a good picture of what is going on in the forest.