Will search engines focus on the negative?


I'm at DLD in Munich, and going to Davos tomorrow. During a DLD panel on identity and tracking, I briefly mentioned my concept of the privacy dangers of the AIs of the future, which will be able to extract things from recorded data (like faces) that we can't extract today.

I mentioned a new idea, however: a search engine that focuses on the negative because, through advanced algorithms, it can tell the difference between positive and negative content.

We're quite interested in dirt. Every eBay user who looks at a seller's feedback would like to see only the negative comments, since the positive ones convey almost no information. eBay doesn't want to show the feedback this way; it wants people to see its sellers as positive and to bid.

But a lot of the time, if we are investigating a company we might do business with, or even a person, we want to focus on the negative. Even a company with only a few complaints against it is of interest: those complaints are what we want to see. AI software will exist to find such complaints, and possibly even to do things like understand photos and know which ones might be a source of embarrassment, or read postings on message boards and tell which ones are damning. This is hard to do well today, but that will change over time.
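To make the idea concrete, here is a toy sketch of what "dirt-first" ranking might look like. It is only an illustration under my own assumptions: a real system would use a trained sentiment model rather than the tiny hand-made word lists below, and the lexicon and documents are invented.

```python
# Toy sketch: rank documents so the most "negative" ones surface first.
# A real system would use a trained sentiment model; this just counts
# words from a tiny, illustrative hand-made lexicon.

NEGATIVE = {"scam", "fraud", "complaint", "broken", "refund", "lawsuit", "rude"}
POSITIVE = {"great", "excellent", "fast", "recommended", "reliable"}

def negativity(text: str) -> int:
    """Crude score: negative-word hits minus positive-word hits."""
    words = text.lower().split()
    return sum(w in NEGATIVE for w in words) - sum(w in POSITIVE for w in words)

def rank_dirt_first(documents: list[str]) -> list[str]:
    """Return documents ordered from most negative to least."""
    return sorted(documents, key=negativity, reverse=True)

if __name__ == "__main__":
    docs = [
        "Great seller, fast shipping, highly recommended",
        "Item arrived broken and the seller refused a refund",
        "Reliable company, no complaints here",
    ]
    for doc in rank_dirt_first(docs):
        print(negativity(doc), doc)
```

The point is only that once software can score text as positive or negative at all, inverting the usual ranking to surface the negative first is trivial.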

This will have deep consequences for concepts of reputation. Those with a big online presence certainly have bad stuff written by or about them out there. Normally, however, it is buried in the large volume of material and doesn't get high search engine rankings. Our human thirst for gossip and dirt will lead some search engines to push it to the top. In addition, there will be those wanting to game this with deliberate libel of their enemies and competitors. They can do this today, but their libels are hidden in the large volume of information.

Some have proposed that in the future it will be necessary to pay a service to libel you deliberately, spreading so much false material that it buries and discredits any libel left by enemies (as well as any true negative comments). The AIs may be able to spot the difference, but that's an arms race whose outcome can't easily be predicted.

It is likely that all the bad things in our lives will haunt us even more than we already fear. Efforts by some countries to pass laws letting people delete alleged libels will not work, and may bring even more attention to the materials. While you might be able to remove your tag from a photo on Facebook, once that photo makes it into a system that can do face recognition, the tag will come back, and in ways beyond your control.

Comments

A couple of years ago, people were being warned that prospective employers would Google them, the intention of the warning being that they shouldn't have anything visible to Google, or at least nothing embarrassing. Today, for some employers it is a mark against a candidate if Google turns up nothing on them. Most of this stuff is not bad; it is just embarrassing based on some arbitrary set of morals. But, fortunately, those morals can change, and what would have been embarrassing 30 years ago is now touted as positive. So-called data-protection activists have been complaining for years about all the work they have done on privacy, only to see it ignored on the internet by the young generation. However, in many cases the young generation just doesn't have the same concerns, rightly in my view.

Of course, if something is truly bad, not merely negative, then one doesn't want it on the web, but the issue here is avoiding a justly bad reputation, not trying to avoid offending someone one doesn't even know. Also, security in the sense of passwords and PINs is a real issue, and people should make informed decisions (I bet 9 out of 10 don't even know what "longer is stronger" means (no, it's not the title of a porn film) or why it is the first rule one should learn with respect to password security (the zeroth being "don't write it down")).
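For readers wondering what "longer is stronger" means, here is a rough back-of-envelope illustration, under the usual assumption of randomly chosen characters: the brute-force search space is alphabet_size ** length, i.e. length × log2(alphabet_size) bits of entropy, so extra length adds bits faster than a fancier character set does.

```python
# Back-of-envelope illustration of "longer is stronger" for random passwords.
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for a password of `length` random symbols."""
    return length * math.log2(alphabet_size)

print(entropy_bits(26, 8))    # 8 lowercase letters   ~= 37.6 bits
print(entropy_bits(94, 8))    # 8 printable ASCII     ~= 52.4 bits
print(entropy_bits(26, 16))   # 16 lowercase letters  ~= 75.2 bits
```

Doubling the length roughly doubles the bits; more than tripling the alphabet (26 to 94 symbols) adds far less.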

I find it strange that many of the same folks who demand that copyright laws be changed, because technology makes them easier to violate, cling to the same old morals with respect to other things technology has made easier. Just as many people today think it is actually OK to disrespect copyright (wrongly, in my view), maybe soon they just won't care about having a bad reputation, or what is today a bad reputation might in the future be a good one.

Indeed, a search on a typical person active online will return many hits. The first hits will generally be positive for a person, though perhaps not for a company. The question is how well people will deal with seeing only the negative, even though they sought it out. We tend to get all concerned if we learn a few bad things out of context, even if that was what we were looking for.

...doesn't make any sense. In this model, if I read something bad about someone, I either think it's true or that they posted it about themselves to bury some even more damaging facts. It's hard to think of an equilibrium where people are posting substantial amounts of negative information about themselves.

Back during the 20th century, I could assume any hit on my name was actually about me, but during the last few years I have seen hits for other people with the same name. In fact, another Joel Upchurch actually owns the JoelUpchurch.com domain. The trouble is that my name is unusual enough that some people might assume that anything they read with my name is about me.

I think a lot of people are interested in software that is able to make a decent guess about which "John Smith" is being referred to in a posting. It may never manage it for John Smith, but it probably will for less common names. You have to know something about the various people by that name, of course, but the AIs of the future will.
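As a very rough sketch of how such a guess might work (the candidate profiles, keywords, and example posting below are entirely made up for illustration), one can score each known person of that name by how many of their context keywords appear in the posting:

```python
# Toy sketch of guessing which "John Smith" a posting refers to:
# score each known candidate by keyword overlap with the posting.
# Profiles and keywords are invented for illustration only.

PROFILES = {
    "John Smith (Orlando dentist)": {"orlando", "dentist", "florida", "clinic"},
    "John Smith (Seattle programmer)": {"seattle", "programmer", "software", "linux"},
}

def best_match(posting: str) -> tuple[str, int]:
    """Return the candidate whose keywords overlap the posting most, with the score."""
    words = set(posting.lower().split())
    scores = {name: len(words & keywords) for name, keywords in PROFILES.items()}
    name = max(scores, key=scores.get)
    return name, scores[name]

print(best_match("John Smith of Orlando was sued over his dental clinic"))
```

Real disambiguation would need much richer context than keyword overlap, but the basic idea of matching a posting against what is already known about each candidate is the same.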
