
Make virtual conferences live, not pre-recorded

There is a disturbing trend in virtual conferences. Tempted by the technical advantages -- chiefly avoiding live glitches -- many of them are switching to pre-recorded talks rather than live ones. It's obvious why organizers like this, but it sucks the soul out of the event. Nobody would go to a physical conference to watch pre-recorded video of the speakers. Here's some advice on how to resist the temptation.


Virtual meeting tools need to interoperate

There are many tools now being used to replace physical conferences and meetings -- not just Zoom. No one system is complete, or even best-of-breed, in all the functions it provides. It's time for these tools to develop a way to interoperate, so organizers can build an event by mixing and matching tools while attendees flow smoothly between them without needing to create different accounts, re-authenticate, or face a steep learning curve.


Twitter and FB shouldn't ban political ads. They should give them away to registered candidates

Twitter's decision to no longer take political advertising is causing a stir, and people are calling on Facebook to do the same. Political advertising isn't just an issue now that we've learned Russians are using it to screw with elections. It's the sink for almost all the money spent by campaigns, and thus all the money they raise from donors. The reason people in office spend more than half their time fundraising is that they feel they have no choice.

Reflections on 30 years of the dot-com

Tomorrow, June 8, marks the 30th anniversary of my launch of ClariNet.com. In the 1980s, policy forbade commercial use of the internet backbone, but I wanted to build a business there. I found a loophole and got the managers of NSFNet to agree, making ClariNet the first company created to use the internet as a platform -- the common meaning of a "dot-com."


Why should first-run movies at home cost $3,000?

A new service called Red Carpet has been announced, which will offer first-run movies in the homes of the very wealthy. You need a $15,000 DRM box, and rentals run $1,500 to $3,000 per movie. That price is not a typo.

So I wrote an article pondering why that is, and why this could not be done at a price that ordinary people could afford, similar to the price of going to the movies.


Review of the LG OLEDs -- it's time for a 4K HDR TV, but it still thinks it's a TV

[Image: A decent impression of an impressionist]

I recently purchased an LG 4K OLED HDR TV. In spite of the high price, I am pleased with it, and it's made old HDTV look somewhat dull. There is now enough content to upgrade.

Read my review and also my comments on how the TV hasn't yet figured out that many of us just want it for streaming.


Before the next museum fire, make 4K video of all your documents

There are special machines for this, but it's easy to make your own setup.

Many of you will have read of the tragic fire that destroyed the National Museum of Brazil. Many of the artifacts and documents in the museum were never photographed or otherwise backed up, and so are lost forever.


Is there a limit on how much advertising can make?

In my article about how advertising won't pay for robotaxi rides, I hinted at one surprising source of the problem: maybe advertising can never be very valuable.

Right now, the most popular type of advertising makes about 60 cents for one hour of TV watching. This is with what's known as a $20 CPM (cost per thousand). That's 2 cents per ad shown to a person, and an hour of TV has around 15 minutes of ads, or 30 spots.
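To make the arithmetic concrete, here is the same calculation as a tiny Python sketch. The CPM, ad load, and spot count are the assumptions above, not industry constants.

```python
# Revenue from one hour of ad-supported TV at a $20 CPM.
# All figures are the article's assumptions, not measured values.

cpm_dollars = 20.0                    # cost per thousand impressions
per_impression = cpm_dollars / 1000   # $0.02 per ad shown to one viewer

ad_minutes_per_hour = 15              # roughly 15 minutes of ads per hour
spots_per_hour = 30                   # 30-second spots -> 30 per hour

revenue_per_viewer_hour = spots_per_hour * per_impression
print(f"${revenue_per_viewer_hour:.2f} per viewer-hour")  # $0.60
```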


The decline of blogging, and what replaces it?

You, by definition, read blog posts. But the era of lots of individual personal web sites seems to be on the wane. It used to be that everybody had a "home page," and many had one that updated frequently (a blog), but I, and many other bloggers, have noticed a change of late. It can be seen in the "referer" summaries from your web server, which show who is making popular links to your site.

Olympics Notebook 2018 -- streaming and Curling

Every two years I watch the Olympics and publish notes on the games -- or, more particularly, on the coverage. Each time the technology has changed, and that alters the coverage.

This year the big change is much more extensive and refined streaming coverage. Since I want to "cut the cord" and have no cable or satellite, this has become more important. Unfortunately, the story is not all good.


E-mail is more secure than we think, so we should use it

E-mail is facing a decline. This is something I lament, and I plan to write more about that general problem, but today I want to point out something that is true but usually not recognized: E-mail today is often secure in transit, and we can make better use of that and improve it.

The right way to secure any messaging service is end-to-end. That means that only the endpoints -- i.e., your mail client -- have the keys and encrypt or decrypt the message. If the crypto works, it is impossible for anybody along the path, including the operators of the mail servers as well as the pipes, to decode anything but the target address of your message.

We could have built an end-to-end secure E-mail system. I even proposed just how to do it over a decade ago and I still think we should do what I proposed and more. But we didn't.

Along the way, though, we have mostly secured the individual links an E-mail follows. Most mail servers use encrypted SMTP over TLS when exchanging mail. The major web-mail programs like Gmail use encrypted HTTPS web sessions for reading it. The IMAP and POP servers generally support encrypted connections with clients. My own server supports only IMAPS and never IMAP or POP, and there are others like that.
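As an aside, it's easy to verify whether a given mail server offers encrypted SMTP. Here is a minimal sketch using Python's standard smtplib; "mail.example.com" is a placeholder host (a real check would first look up the domain's MX records).

```python
# Minimal sketch: check whether a mail server advertises STARTTLS
# on port 25 and that the TLS upgrade actually succeeds.
import smtplib
import ssl

def supports_starttls(host: str) -> bool:
    with smtplib.SMTP(host, 25, timeout=10) as server:
        server.ehlo()
        if not server.has_extn("starttls"):
            return False
        # Actually upgrade the connection to confirm TLS works.
        server.starttls(context=ssl.create_default_context())
        server.ehlo()
        return True

print(supports_starttls("mail.example.com"))  # placeholder host
```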

What this means is that if I send a message to you on Gmail, while my SMTP proxy and Google can read that message, nobody tapping the wire can. Governments and possibly attackers can get into those servers and read that E-mail, but it's not an easy thing to do. This is not perfect, but it's actually pretty useful, and could be more useful.

How to do a low bandwidth, retinal resolution video call

Not everybody loves video calls, but there are times when they are great. I like them with family, and I try to insist on them when negotiating, because body language is important. So I've watched as we've increased the quality and ease of use.

The ultimate goals would be "retinal" resolution -- where the resolution surpasses your eye -- along with high dynamic range, stereo, light field, telepresence mobility, and VR/AR with headset image removal. Eventually we'll be able to make a video call or telepresence experience so good it's a little hard to tell from actually being there. This will affect how much we fly for business meetings and travel inside towns, improve life for bedridden and low-mobility people, and more.

Here's a proposal for how to provide that very high or retinal resolution without needing hundreds of megabits of high quality bandwidth.

Many people have observed that the human eye is high resolution only in the center of attention, served by the fovea centralis. If you make a display that's sharp where a person is looking and blurry out at the edges, the eye won't notice -- until, of course, it quickly moves to another section of the image and you see the tunnel vision.
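To give a feel for the savings, here is a hedged sketch using one common approximation of acuity falloff: resolution needed drops roughly as 1 / (1 + eccentricity / e2), with e2 around 2.3 degrees. The exact constants vary by study; treat these as illustrative.

```python
# Approximate fraction of peak (foveal) resolution needed at a given
# angle from the fixation point. E2 is an illustrative constant.
E2_DEGREES = 2.3

def relative_acuity(eccentricity_deg: float) -> float:
    """Fraction of full foveal resolution needed at this eccentricity."""
    return 1.0 / (1.0 + eccentricity_deg / E2_DEGREES)

for ecc in (0, 2, 5, 10, 20, 40):
    print(f"{ecc:>3} deg: render at {relative_acuity(ecc):.0%} of full resolution")
```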

Decades ago, people designing flight simulators combined "gaze tracking" -- spotting in real time where a person is looking -- with the foveal concept, so that the simulator rendered the scene in high resolution only where the pilot's eyes were. In those days in particular, rendering a whole immersive scene at high resolution wasn't possible. Even today it's a bit expensive. The trick is that you have to be fast: when the eye darts to a new location, you have to render it in high resolution within milliseconds, or we notice. Of course, to an outside viewer such a system looks crazy, and with today's technology it's still challenging to make it work.

With a video call, it's even more challenging. If a person moves their eyes (or, in AR/VR, their head) and you need a high resolution stream of the new point of attention, it can take a long time -- perhaps hundreds of milliseconds -- to send that signal to the remote camera, have it adjust the feed, and then get the new feed back to you. The user will see their new target as blurry for far too long. It would still be workable, but it would not be comfortable or seem real. For VR video conferencing it's an issue even for people turning their heads. For now, a high resolution remote VR experience would probably require sending a half-sphere of full resolution video. The delay is probably tolerable only when the person turns their head far enough to look behind them.
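A rough latency budget shows the problem. All numbers below are assumptions for the sake of example, not measurements.

```python
# Back-of-envelope budget for re-pointing the remote camera after a
# saccade. Every figure here is an illustrative assumption.

network_rtt_ms = 80     # one round trip, viewer <-> remote camera
encode_ms = 30          # remote encoder keyframes the new foveal region
buffer_ms = 50          # jitter buffer and decode on the viewer side

total_ms = network_rtt_ms + encode_ms + buffer_ms
saccade_deadline_ms = 10  # roughly when the eye expects sharp detail

print(f"new foveal stream arrives after ~{total_ms} ms")
print(f"gap to bridge with local rendering: ~{total_ms - saccade_deadline_ms} ms")
```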

An opposite approach being taken for low bandwidth video is the use of "avatars" -- animated cartoons of the other speaker, driven by motion capture on the other end. You've seen such characters in movies: Sméagol, the blue Na'vi of Avatar, and perhaps the young Jeff Bridges (acted by the older Jeff Bridges) in Tron: Legacy. Cartoon avatars are preferred because of what we call the uncanny valley -- people notice flaws in attempts at total realism but forgive them in cartoonish renderings. But we are now able to do moderately decent realistic renderings, and this is slowly improving.

My thought is to combine foveal video with animated avatars for brief moments after saccades, then gently blend toward the true image when it arrives. Here's how (a rough code sketch of the viewer side follows the list):

  1. The remote camera will send video with increasing resolution towards the foveal attention point. It will also be scanning the entire scene and making a capture of all motion of the face and body, probably with the use of 3D scanning techniques like time-of-flight or structured light. It will also be, in background bandwidth, updating the static model of the people in the scene and the room.
  2. Upon a saccade, the viewer's display will immediately (within milliseconds) combine the blurry image of the new target with the motion capture data, along with the face model data received, and render a generated view of the new target. It will transmit the new target to the remote.
  3. The remote, when receiving the new target, will now switch the primary video stream to a foveal density video of it.
  4. When the new video stream starts arriving, the viewer's display will attempt to blend them, creating a plausible transition between the rendered scene and the real scene, gradually correcting any differences between them until the video is 100% real.
  5. In addition, both systems will be making predictions about what the likely target of next attention is. We tend to focus our eyes on certain places, notably the mouth and eyes, so there are some places that are more likely to be looked at next. Some portion of the spare bandwidth would be allocated to also sending those at higher resolution -- either full resolution if possible, or with better resolution to improve the quality of the animated rendering.
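Here is a hypothetical Python sketch of the viewer-side blending in steps 2 and 4. Every name and number in it is invented for illustration; real code would hook into an actual decoder, gaze tracker, and renderer.

```python
# Hypothetical viewer-side blend logic. The blend duration and event
# timings are invented numbers chosen only to show the mechanism.
from dataclasses import dataclass

BLEND_MS = 150.0  # how long to cross-fade from rendered avatar to real video

@dataclass
class ViewerState:
    target: tuple          # current fixation point (x, y)
    blend_start_ms: float  # when the new foveal stream began arriving
    foveal_ready: bool     # has real video of the new target arrived?

def real_video_weight(now_ms: float, state: ViewerState) -> float:
    """0.0 = pure model-driven avatar rendering, 1.0 = pure real video."""
    if not state.foveal_ready:
        return 0.0  # step 2: nothing real yet; render from face model + mocap
    t = (now_ms - state.blend_start_ms) / BLEND_MS
    return max(0.0, min(1.0, t))  # step 4: gradual correction toward real

# Example timeline: a saccade at t=0; the foveal stream arrives at t=90 ms.
state = ViewerState(target=(640, 360), blend_start_ms=0.0, foveal_ready=False)
for now in (10, 50, 90, 140, 200):
    if now >= 90 and not state.foveal_ready:
        state.foveal_ready = True
        state.blend_start_ms = 90.0  # blend clock starts when real pixels land
    print(f"t={now:3d} ms  real-video weight = {real_video_weight(now, state):.2f}")
```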

The animated rendering will, today, be both slightly wrong and subject to the uncanny valley problem. My hope is that if it is short-lived enough, it will be less noticeable, or not that bothersome. It will be possible to trade off how long the generated video takes to blend over to the real video: the longer you take, the less jarring any error correction will be, but the longer the image is "uncanny."

While there are 100 million photoreceptors in the whole eye, only about a million nerve fibers go out of it. It would still be expensive to deliver full resolution in the attention spot and the most likely next spots, but it's much less bandwidth than sending the whole scene. Even if full resolution is not delivered, much better resolution can be offered.
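A back-of-envelope comparison, with frankly guessed figures, suggests the scale of the savings:

```python
# Rough bandwidth comparison: full scene at retinal density vs a
# foveated stream. All pixel counts and codec figures are assumptions.

full_pixels = 8000 * 4000              # retinal-ish full field (assumption)
foveal_pixels = 1000 * 1000            # sharp patch around the fixation point
periphery_pixels = full_pixels // 64   # heavily downsampled remainder

bits_per_pixel = 0.1                   # plausible for modern video codecs
fps = 60

def mbps(pixels: int) -> float:
    return pixels * bits_per_pixel * fps / 1e6

print(f"full scene: {mbps(full_pixels):7.1f} Mbps")                       # ~192
print(f"foveated:   {mbps(foveal_pixels + periphery_pixels):7.1f} Mbps")  # ~9
```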

Stereo and simulated 3D

You can also do this in stereo to provide 3D. Another interesting approach, called pseudo-3D, was done at CMU; I recommend you check out the video. That system captures the background and moves the flat head against it as the viewer moves their head. The result looks surprisingly good.

Digitizing your papers, literally, for the future, with 4K video

I have so much paper that I've been on a slow quest to scan things. I have high speed scanners and other tools, but it remains a great deal of work to get it done, especially reliably enough that you would throw away the scanned papers. I have done around 10 posts on digitizing, gathered under that tag.

Recently, a friend who could not figure out what to do with the papers of a deceased parent asked me for advice. Scanning them on your own or at a scanning shop is time consuming and expensive, so a new thought came to me.

Set up a scanning table by mounting a camera that shoots 4K video looking down on the table. I have tripods with an arm that extends out, but there are many ways to mount it. Light the table brightly and bring your papers. Then start the 4K video and start slapping the pages down (or pulling them off) as fast as you can.

There is no software today that can turn that video into a well-scanned document. But there will be. In truth, we could write it today, but nobody has. If you scan this way, you're betting that somebody will. Even if nobody does, you can still go into the video, find any page, and pull it out by hand -- it will just be a lot of work, so you would only do it for single pages, not whole documents. You are literally saving the documents "for the future," because you are depending on future technology to extract them easily.
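The core of that future software is not exotic. Here is a hedged sketch of the page-detection step using OpenCV frame differencing; the thresholds and the file name are assumptions you would tune for your own camera and lighting.

```python
# Sketch: save a frame each time a new page has settled on the table,
# detected by consecutive frames that barely differ. Thresholds are
# guesses; tune them for your lighting and camera.
import cv2

def extract_pages(video_path: str, still_frames: int = 12,
                  diff_threshold: float = 2.0) -> None:
    cap = cv2.VideoCapture(video_path)
    prev, still, page_no = None, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute difference between consecutive frames:
            # low while a page lies flat, high while hands are moving.
            diff = cv2.absdiff(gray, prev).mean()
            still = still + 1 if diff < diff_threshold else 0
            if still == still_frames:  # page steady for ~half a second
                page_no += 1
                cv2.imwrite(f"page_{page_no:04d}.png", frame)
        prev = gray
    cap.release()

extract_pages("scanning_session.mp4")  # placeholder file name
```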

