Anti-Phishing -- warn if I send a password somewhere I've never sent it

There are many proposals out there for tools to stop phishing: web sites that display a custom photo you provide, or "pet names" given to web sites so you can confirm you're where you were before.

I think we have a good chunk of one anti-phishing technique already in place with the browser password vaults. Now, I don't store my most important passwords (bank, etc.) in my password vault, but I do store most medium-importance ones there (accounts at various billing entities, etc.). I just use a simple common password for web boards, blogs and other places where the damage from compromise is nil to minimal.

So when I go to such a site, I expect the password vault to fill in the password. If it doesn't, that's a big warning flag for me, and so I can't easily be phished for those sites. Even skilled people can be fooled by clever phishes. For example, a test phish to a lookalike domain (two "v"s instead of a "w", which looks identical in many fonts) fooled even skilled users who check the SSL lock icon, etc.

The browser should store passwords in the vault, and even the "don't store this" passwords should have a hash stored in the vault unless I really want to turn that off. Then, the browser should detect if I ever type a string into any box which matches the hash of one of my passwords. If my password for bankofthewest is "secretword" and I use it there, no problem: "secretword" isn't stored in my password vault, but the hash of it is. If I ever type "secretword" into any other site at all, I should get an alert. If it really is another site of the bank, I will examine that and confirm sending the password. Hopefully I'll do a good job of examining -- it's still possible I'll be fooled by a convincing lookalike, but other tricks won't fool me.
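The core of this idea can be sketched in a few lines. This is a hypothetical illustration, not any browser's actual vault code; the class and method names are mine, and a real implementation would use a random per-entry salt and a slow password hash rather than plain SHA-256:

```python
import hashlib

class PasswordAlertVault:
    """Sketch of the vault described above: it keeps only salted hashes of
    passwords, each tied to the site they belong to. When the user types a
    string anywhere, the browser checks it against those hashes and warns
    if it matches a password bound to a different site."""

    def __init__(self):
        self._entries = {}  # site -> (salt, digest)

    def remember(self, site, password):
        # Illustrative salt derived from the site name; real code would
        # generate a random salt and store it alongside the digest.
        salt = hashlib.sha256(site.encode()).digest()
        digest = hashlib.sha256(salt + password.encode()).hexdigest()
        self._entries[site] = (salt, digest)

    def check_typed(self, site, typed):
        """Return the list of OTHER sites whose stored password matches
        what was just typed -- a non-empty list means 'sound the alarm'."""
        hits = []
        for known_site, (salt, digest) in self._entries.items():
            if known_site == site:
                continue  # typing your bank password at your bank is fine
            if hashlib.sha256(salt + typed.encode()).hexdigest() == digest:
                hits.append(known_site)
        return hits

vault = PasswordAlertVault()
vault.remember("bankofthewest.com", "secretword")
# Typing the bank password at an unknown site triggers a warning:
print(vault.check_typed("evil-lookalike.example", "secretword"))
```

Note that the vault never needs the cleartext of the "don't store this" passwords -- the hash alone is enough to recognize re-use.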

The key needs in any system like this are that it warns you of a phish, and that it rarely gives you a false warning. The latter is hard to do, but this comes decently close. However, since I suspect most people are like me and have a common password they use again and again at "who-cares" sites, we don't want to be warned all the time. The second time we use that password, we'll get a warning, and we need a box to say, "Don't warn me about re-use of this password."

Read on for subtleties... Password re-use is of course a bad idea at sites that matter. Most of them store your password in their own databases in the clear. If you use the same password at your bank as at paypal, somebody who breaks into one (or an unscrupulous insider) can then freely break into the other, with no way to track back how they learned it. But you can only remember so many passwords, and you need access to passwords on the road sometimes (risky as that is.) Systems which do unique passwords for every site are great but you're toast if you are on the road or if you lose the master password. Some of your passwords you just need to keep in memory.

Now there is one big hole in what I've described. Phishers can write live, one-character-at-a-time applications to simulate a password box, and bypass the checks I have above. After you have typed "secretword" we could stop the sending of the "d" but they might have learned all the rest. We could hash some prefixes of your password, if you make them strange enough, so they can be spotted, but this is harder to make perfect.

Browsers need to make password entry boxes more special. They must, in fact, present a UI that no other tool or common plugin can emulate. That probably means they do something like pop open a special typing box, in a way that no javascript, java applet or flash program can do -- a box with a magic colour scheme or shape that can't be duplicated, which says, "I'm the Firefox password box." If you click on a password box and start typing, you had better see it, or fear phishing.

We can't stop signed, trusted apps like activeX controls and signed java apps from presenting the same interface. That we have to live with. And we can't stop trojans at internet cafes from presenting the interface, which may create a false trust that leaves us worse off. Even today we could stop javascript from reading individual characters and echoing them as a "*", the UI of the existing password box -- though even that's a tough challenge as there are many ways to draw a star, including sending GIF files.

Of course, once you get ready to change the behaviour of the users, you can do all sorts of things, including the pet names, or password generators, even roaming ones. But to stop phishing we need solutions that don't require much or any user change. One simple step might be to encourage the user to start all important passwords with some special characters of their choice, like "%&" or similar. If they ever type these characters and they're going into a javascript reader or applet or flash program, we should be on immediate alert. We could also enable checking of prefixes. I'm willing to accept the risk that my password vault would hold, encrypted, a hash of the first 5 characters of my bank password. (Meaning somebody might get the first 4 by character stream phishing, or get the first 5 in a brute force attack by getting ahold of my decrypted password vault.)
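The prefix-checking idea above can also be sketched. This is my own illustrative shape for it (the function names and the choice of HMAC are assumptions, not part of the proposal): keying the prefix hash with a vault secret makes brute-forcing the short prefix harder for someone who steals the stored digest alone.

```python
import hashlib
import hmac

def prefix_digest(master_key, password, prefix_len=5):
    """Store a keyed hash of the first few characters of an important
    password, so character-at-a-time phishing can be spotted before the
    whole password leaks."""
    prefix = password[:prefix_len]
    return hmac.new(master_key, prefix.encode(), hashlib.sha256).hexdigest()

def typed_matches_prefix(master_key, typed, stored_digest, prefix_len=5):
    """As keystrokes accumulate, check whether they match a known prefix."""
    if len(typed) < prefix_len:
        return False
    candidate = hmac.new(master_key, typed[:prefix_len].encode(),
                         hashlib.sha256).hexdigest()
    return hmac.compare_digest(candidate, stored_digest)

key = b"vault-master-secret"   # assumed to live encrypted inside the vault
stored = prefix_digest(key, "secretword")
# The alarm can sound as soon as five matching characters have been typed,
# before the rest of the password goes out character-by-character:
print(typed_matches_prefix(key, "secre", stored))
```

This matches the trade-off noted above: an attacker streaming keystrokes can still learn the first few characters before the alarm fires, and someone with the decrypted vault could brute-force the five-character prefix.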

Another clue might be a different cursor for use when I'm typing characters to a text box in the browser (especially password box) from when my keystrokes are going anywhere else (like javascript.)

Updates, based on comments: Ping's suggestion that the real browser password-entry box display a graphic or photo chosen by the user is an excellent one. Then java applets, flash programs or javascript would be unable to emulate a password box. (I have always felt Mozilla's password vault system should not use a standard dialog box to ask for the master vault password, either.) For a while, people trained with old password boxes would still be fooled, but after some time people would come to realize all password entry boxes have a custom look, just for them, and should not type a password in anywhere else. (Of course drawing into a password box must also be forbidden.)

I also will add that there are many levels of solutions to phishing. The above proposal is one aimed at helping users, one browser at a time. Far more involved proposals, that change how sites do login, can do more, but they require major efforts to get adoption. Any new system must be able to be adopted one site at a time, one user at a time. In the long term, the answer is to move authentication into a personal token (ie. cell phone) we carry around with us, with PINs or personal biometrics, and no transmission of passwords at all (ie. hash based challenge/response or digital signature.)


There are lots of good ideas in this post. Related work:

Browsers need to make password entry boxes more special. They must, in fact, present a UI that no other tool or common plugin can emulate.

My design for Passpet does this. The user only enters a secret into the toolbar, after clicking a custom icon that is hard to emulate because the icon differs from user to user. Password fields are always filled by clicking a button.

One simple step might be to encourage the user to start all important passwords with some special characters of their choice, like “%&” or similar. If they ever type these characters and they’re going into a javascript reader or applet or flash program, we should be on immediate alert.

Stanford's PwdHash uses this trick. They ask users to start all passwords with "@@". Upon detecting "@@" the browser enters a special mode where keypress events are diverted away from normal event processing. Any JavaScript in the page will think the user is typing in "abcdefg" after that. When you submit the login form, the browser then replaces "abcdefg" with the password for transmission.
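The replacement step can be sketched roughly as follows. This is a simplification, not PwdHash's actual algorithm or encoding (the real extension also handles the keystroke-diversion machinery and has its own derivation details); the point is only that the value the site receives is a function of the user's secret and the domain, so a phishing domain receives a different, useless password:

```python
import base64
import hashlib
import hmac

def domain_password(master_secret, domain):
    """Derive a site-specific password from the user's typed secret and
    the domain the form is being submitted to (illustrative derivation)."""
    digest = hmac.new(master_secret.encode(), domain.encode(),
                      hashlib.sha256).digest()
    # Truncate to something typable; the exact encoding is arbitrary here.
    return base64.urlsafe_b64encode(digest)[:12].decode()

real = domain_password("secretword", "bank.example")
phish = domain_password("secretword", "bank-login.example")
# The phisher captures a value that is useless at the real site:
print(real != phish)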

Another clue might be a different cursor for use when I’m typing characters to a text box in the browser (especially password box) from when my keystrokes are going anywhere else (like javascript.)

Web Wallet had a feature similar to this. Any characters you type into the webpage are animated, flying out of the page in a huge font, which is supposed to make you uncomfortable if you are typing in your password. However, the participants in their user study didn't seem to notice or care.

I like passpet better than the pet name concept, and the custom icon is a good idea. (Indeed, some web sites are using a user-chosen photo as their anti-phish technology already.) I still have doubt that users will take to extensively assigning pet names to sites (auto-generated pet names of course will exist) but I am interested in the research. I might use them personally but I'm already fairly phish-resistant.

As you know, generated passwords present a roaming problem. Your goal 6 -- only one password to remember -- is desirable but difficult. In particular there is the troublesome problem of the random internet cafe. I have found myself on the road and needing to use such a terminal to access travel sites and even money sites. This is of course risky, since there could be a trojan keylogger. But sometimes I make a judgement that my need to use a site outweighs the risk.

Of course, there will not be any special plugin or browser mod on the random machine. Probably vanilla IE until such time as a better password roaming system becomes standard in all browsers. One can provide an SSL web site where I can enter my master PW and get a domain specific PW to cut and paste, however.

But while I might be willing to risk entering my paypal password into the random computer, dangerous as that would be, I really don't want to enter my master password anywhere but a fully trusted machine. Especially if that master password is also used in other places (such as being my unix logon password etc.) That's far more dangerous.

So you are stuck with having to remember different passwords for the most sensitive accounts, I think. While I noted passpet lets you modify the domain name, I think you need a way to say that you want a different master password for the most crucial sites.

I would then combine this with two more functions. First, my own hint system, so that remotely I can get a password hint to help me remember which password I use for the special site. The hint is something in my own words, "That woman you dated in 1982 spelled backwards plus your grandfather's birth year" or however you form passwords. Possibly abbreviated to be harder to read, "babe82 + gp" or somesuch.

Then at the untrusted cafe, you can go to your hint site and, using yet another pw (sigh), see your hints. Still risky but not as risky. A sheet of paper in wallet with the hints might be simpler and wiser, if you remember to update it.

Secondly, it might be good if web sites, after you log on, refused to let another machine log on for a few minutes, during which you could issue a command to lock your account and e-mail you the unlocking code. That means no more access until you can get to your email securely, of course.

As noted in other papers, SMS to your cell phone may help in these cases. A site where you can command your password be sent by SMS to your cell phone would let you not reveal your master PW. It needs a password itself, however, if you fear your phone being stolen, and you can be in trouble if SMS is sniffed but at least you will know about it.

As I blogged earlier, the only decent long term solution is for us to carry (presumably in our phones) a challenge/response engine for login to these sites. It can even happen over bluetooth with a single confirmation press on the phone. Cell carriers could even make it happen over radio and charge us money, which they love to do.

You can split the problem to simplify solutions.

For simple phishing take two steps: 1) require that all automatic password transactions take place over an SSL connection (easy), and 2) have a user interaction to add any new hosts to a white list. This second step also flags spoofing attempts, so the whitelist message needs to both warn and educate the user. If it really is a new legitimate host for password storage, save the certificate information and proceed. If you get the warning for an old site you need to investigate before proceeding.

I would compare and display the certificate information about the organization. Organizations change infrequently, and the user is more likely to be aware of these changes. Also do a partial match search and flag likely spoofing attempts (e.g., homoglyph lookalikes).
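The partial-match idea might look something like this. The substitution table and function names below are my own toy illustration (a real detector would use a full confusables database, edit distance, and Unicode normalization), but it shows the shape of the check: an unknown hostname that collapses onto a whitelisted one after lookalike characters are normalized is a spoofing candidate.

```python
# Tiny illustrative table of lookalike substitutions phishers use.
HOMOGLYPHS = {"0": "o", "1": "l", "vv": "w", "rn": "m"}

def normalize(name):
    """Collapse common character-lookalike tricks to a canonical form."""
    name = name.lower()
    for fake, real in HOMOGLYPHS.items():
        name = name.replace(fake, real)
    return name

def likely_spoof(candidate, known_sites):
    """Flag a hostname that is NOT whitelisted itself but normalizes to a
    whitelisted name -- e.g. two 'v's standing in for a 'w'."""
    if candidate in known_sites:
        return False
    return normalize(candidate) in {normalize(s) for s in known_sites}

known = {"wellsfargo.example"}
print(likely_spoof("vvellsfargo.example", known))  # the two-v trick
```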

Then there is the issue of the untrustworthy local machine. This is fundamentally hard to solve and need not be solved. Accept that you need a trustworthy local machine for important transactions, but minimize what you need on it. For my corporate access the trustworthy local machine is my ID fob. All it does is generate one-time use, time limited passwords. I enter a PIN into it, and it shows me the password on an LCD. It limits my exposure to a few minutes and is a trustworthy source. This is not perfect. For example, an attacker could piggyback an unauthorized transaction along with an authorized transaction. But this would have to be over the same SSL connection, so it calls for a much more extensive local penetration than a simple password sniffer.

For many purposes an SMS capable cell phone is an acceptable trustworthy local machine. The programmable ones are even better. You can use a standard algorithm, the accurate time inherent in cell phones, account specific seeds (delivered by SMS or physical mail), and an account PIN to generate the one time use, time limited password. This is easily within the capability of a cell phone. The function can be bundled in with the rest of the address book functions so that it is a minor variation on dialing a number from the cell phone address book.
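A derivation of that shape might be sketched as below. This is an illustrative construction of my own (real deployed schemes standardized later as HOTP/TOTP differ in details like the truncation step), combining the three ingredients named above: an account seed, the current time window, and a PIN.

```python
import hashlib
import hmac
import struct
import time

def one_time_password(seed, pin, timestamp=None, step=60, digits=6):
    """Time-limited one-time code from an account seed, the current time
    window, and a user PIN. Parameters here are illustrative only."""
    if timestamp is None:
        timestamp = int(time.time())
    window = timestamp // step          # codes change every `step` seconds
    msg = struct.pack(">Q", window) + pin.encode()
    digest = hmac.new(seed, msg, hashlib.sha256).digest()
    code = int.from_bytes(digest[:4], "big") % (10 ** digits)
    return f"{code:0{digits}d}"

seed = b"account-specific-seed"   # delivered once, by SMS or physical mail
print(one_time_password(seed, "4321", timestamp=1_200_000_000))
```

The phone and the server each compute the same code independently, so nothing secret crosses the network; a sniffed code expires with its time window.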

Second, demand bi-directional mutual authentication for some transactions. This is very inconvenient if done wrong. But for some transactions it is worthwhile. The cell phone companies take advantage of the SMS messaging for some of their products. As part of account setup they know your cell phone. So they send a portion of the transaction over SMS, or demand that you send a token or passphrase via SMS. By splitting the transaction over these two paths you force the attacker to penetrate both the computer and the cell phone. This is immensely harder than implementing a password sniffer or transaction piggy back.

For example, suppose the transaction confirmation displays a transaction ID confirmation (say 10 digits). You are required to SMS that transaction ID plus a PIN to the account confirmation number within 5 minutes. The transaction does not complete until the confirmation is received, and then you get the final confirmation by computer. This is annoying and might cost a few cents for the SMS message, so you only do it for transactions with significant value. Even if the attacker is watching all this, they will not be able to generate the SMS with the proper return number unless they have also cloned your cell phone and stolen your transaction PIN.
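The server side of that flow can be sketched as follows. This is a hypothetical, stripped-down sketch (the class and method names are mine, and the five-minute expiry window is omitted for brevity): the transaction completes only when the matching ID and PIN arrive over the second channel, and a confirmation can be used only once.

```python
import hashlib
import hmac
import secrets

class TwoChannelConfirm:
    """The site shows a transaction ID in the browser; completion requires
    that ID plus the account PIN to arrive via SMS from the user's phone."""

    def __init__(self):
        self._pending = {}  # txn_id -> expected PIN digest

    def start(self, pin):
        """Begin a transaction; returns the 10-digit ID shown on screen."""
        txn_id = f"{secrets.randbelow(10**10):010d}"
        self._pending[txn_id] = hashlib.sha256(pin.encode()).hexdigest()
        return txn_id

    def confirm_sms(self, txn_id, pin):
        """Called when an SMS arrives; True completes the transaction."""
        expected = self._pending.get(txn_id)
        if expected is None:
            return False   # unknown ID, expired, or already confirmed
        ok = hmac.compare_digest(
            expected, hashlib.sha256(pin.encode()).hexdigest())
        if ok:
            del self._pending[txn_id]   # one confirmation only
        return ok

svc = TwoChannelConfirm()
txn = svc.start("2468")                # ID displayed in the browser
print(svc.confirm_sms(txn, "2468"))    # arrives via SMS: completes
print(svc.confirm_sms(txn, "2468"))    # replay attempt: rejected
```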

Another variation is generation of one time use credit card numbers, requested and delivered by SMS. They might be vulnerable to highly specialized piggyback attacks, but you are not vulnerable to other losses due to exposure of the credit card.

This doesn't deal with integrated browser/cellphone, but it does reduce the problem space.

But the issue with phishing is that by definition, phishers trick people into thinking they are logging in to their bank or similar site. Whatever methods we put in place, can you ever assure a person can't be tricked around them? We're talking ordinary users with ordinary browsers who will not, it's been demonstrated, tolerate cumbersome security techniques.

Thus my proposal of seeing them type in a known password to an unknown site and saying "whoa." The advantage here is the user doesn't do anything, instead the system notices the end result of any trickery -- a password going to a place it's not known to be meant to go. It doesn't matter how they trick them, you still spotted it.

What you're left with is the trickery being so good that even after you sound the alarm and give them hints on how to double-check for trickery, they still approve sending the password. That is not solved by my technique, and other systems may help better there.

I was thinking of multiple levels of security. The whitelist with SSL is just an extension of the password vault. The user change involved is small and the browser change required is small. Operationally the sequence of events would be:

1) Site X asks for a password over an SSL channel.
2) The browser checks the organization information in the server certificate, sees that it is a whitelist match, and replies with the password. (no extra user effort).

for a new site or phishing site, step 2) becomes

2) The browser sees that organization information is new. It starts its anti-phishing logic. Is there some reason to be suspicious?
3a) The browser displays a neutral query -- "Is this a new site? What password should it have?" -- together with text indicating that if this is not a new site then phishing is likely.
3b) The browser displays a phishing warning when there is reason to suspect phishing, and requires extra steps to persuade it to add this site to the whitelist.
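The decision logic in those steps is compact enough to sketch directly. Everything here is a hypothetical illustration of the sequence described (the names are mine, and `suspicious` stands in for whatever anti-phishing heuristics the browser runs):

```python
def vault_decision(whitelist, hostname, cert_org, suspicious):
    """What should the browser do when a site asks for a password over SSL?
    whitelist maps hostname -> the certificate organization seen before."""
    if whitelist.get(hostname) == cert_org:
        return "autofill"            # step 2: known org, no extra user effort
    if suspicious:
        return "phishing-warning"    # step 3b: extra steps to whitelist
    return "ask-user-new-site"       # step 3a: neutral new-site query

wl = {"typepad.example": "Six Apart Ltd."}
print(vault_decision(wl, "typepad.example", "Six Apart Ltd.", False))
print(vault_decision(wl, "typepad.example", "Some Other Org", True))
```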

This is not burdensome for regular use. It is vulnerable to phishing, but now phishing requires more effort and is a bit more obvious. It is reasonably robust against simple spoofing because it is hard for the spoofer to get a matching organization certificate. It is vulnerable to several kinds of man in the middle attacks, as are the other proposals above. Phishing attacks against the certificate system of Windows have already been detected and those will be expanded to attack this, so something stronger is needed when the transaction is valuable.

This is why I think we will need to start encouraging the use of somewhat less convenient but much stronger mechanisms for transactions that involve significant funds. The single use credit card numbers are one form of this. One time passwords from cell phones is another, as is the use of second channel confirmation numbers. It is less convenient, but given current technology I assume that many machines are penetrated and that all public communications channels are watched by bad guys.

It would have been nice if Trusted Computing had evolved to generate PCs that could be trusted by their owners. Unfortunately it has evolved to mean PCs that cannot be trusted by their owners. So instead I move towards approaches that require multiple unrelated systems be penetrated before valuable transactions are compromised.

I'm not quite clear. You are saying the browser replies with a password, which is to say automatically logs you in? Or are you referring to the constant authentication you get with HTTP Auth, which is rarely used by these web sites? Auto-login would be a major new feature in browsers; usually right now the most they do is fill in the userid and password and let you click if you want to log in. That's usually the right thing to do because there are other choices you may want from the login screen.

However, for that we already have this level of protection. The password auto-fill is only for a site you've confirmed you want to save a password for (whitelist), not for anywhere else. SSL is not involved, though I have in the past suggested browsers should detect, and seriously warn, if a page that used to be accessed by SSL suddenly becomes non-SSL or changes certificate, and should not password auto-fill.

Right now if you go to a phisher and enter your password in a normal password box, the browser will say, "Do you want to remember this password?" This is a phishing wakeup for smarter users, though others are fooled in spite of this. My proposal was in effect to change this warning to be "You're using the userid/password you use for your bank at this site, which is not, as far as we can tell, your bank. This could be somebody trying to trick you. Please read the anti-phish guide, and then confirm that either this is another login page for your bank, or that you wish to re-use your bank userid and password with another web site"

But your proposal is, as far as I can see, what already happens at least in Firefox (for both SSL and non-SSL).

I am suggesting that browsers stop filling in for non-SSL, and that the SSL whitelist selection be based on the organization information in the certificate rather than on hostname/DNS. (At least the non-SSL and hostname based password traffic should be denigrated.) These two small steps make a significant difference. This defends against:

a) Man in the middle attacks by phishing.

The password threat is basically a MIM threat. As an old fart I still read email in text mode, and I frequently see phishing emails that have used the http://good-site for all but the password portion of the interaction. That one little piece frequently has a text label of https://good-site, but the actual link is http://bad-site. All of the icons, pictures, and other site interactions are with the actual good-site, so the victim is correct in thinking that this is the actual site. Anything sent via http will be from the good-site. The only thing that the bad-site cannot do is generate an SSL connection that shows the exact same organization information as the original good site. Therefore, I base the password logic on the organization information of the good-site.

I avoid DNS because DNS can be hijacked also. I have detected criminal domain hijacking through my insistence on using SSH host authentication. Domain attacks are reality. The SSL certificate organization information is not vulnerable to hijacking. Even those criminal organizations that do get valid SSL server certificates are unable to duplicate the organization information content.

This does mean educating both users and web sites to demand that password traffic be SSL protected. This is a growing practice by the better web sites, so I think that the time is ripe to start pressuring them by changing browsers to highlight all non-SSL password traffic as probably phishing attempts. I would only apply this to the password portion of the traffic. Use of SSL elsewhere depends on the other traffic's value.

This small step still leaves a major technical vulnerability:

b) penetrating the home computer

SSL protection only protects against a MIM outside the home computer. Once inside the browser, you can resume MIM attacks. Technologies like AJAX, browser helper objects (BHO) in IE, and extensions in Firefox all enable penetration. The Web 2.0 enthusiasts rhapsodize over the potential wonders of AJAX and overlook the fact that they are encouraging you to allow some remote sociopath to replace your browser's user interface. None of these new technologies were designed to limit the damage that can be done when the source is a sociopath. They were designed to enable great things by good people, and thus allow great harm when used by sociopaths.

Technology is limited in what it can do about phishing. The core of phishing is the re-purposing of desirable intended functions for fraudulent use. It can do better, and I hope that the AJAX, etc. designers start considering restrictions to limit the harm that can be done by malicious code. But in the end, the technology will be unable to detect well implemented fraudulent uses, and the users are vulnerable to fraud.

This is why I look to independent path verification as the value of the transaction increases. I like the growing use of one-time credit card numbers because they limit the financial exposure unless the attacker has penetrated multiple sites at different times with a coordinated attack. The users need only a small increase in effort to take advantage of this, and it flows through the credit card protection system easily.

The enduring anti-fraud mechanisms of audit trails, detections, restitutions, and punishment of the defrauders also have to be part of the process. It is time to start thinking about how this can be done without destroying personal privacy. What extra steps can be integrated into mail, browsers, etc. to let investigators track back once the fraud has taken place so that the sociopaths can be caught and punished? The audit technology has not been thought about and is almost entirely missing from current systems. It's time to start thinking about it.

But as I noted, there are many different levels of solution, and one that forces all sites that take passwords to go to SSL is quite a high level of change. You can't thrust that on the net all at once.

If you're going to go to that level, requiring changes at half the sites on the net, you might as well do a number of better changes. And are you going to force every site to buy an "official" cert? Or will there be free certs, or self-signed certs? If you make self-signed certs the norm, you are losing the benefits of authentication because now people don't notice them. Self-signed certs are useful, for starting up encryption and internal use, but they provide no auth the first time.

I think that we've reached the point where requesting SSL for password entry is no longer a massive change. I question your "half the sites". But even if true, this is the minimum step needed to substantially reduce spoofing. The spoofers are quite skilled and will defeat the lesser changes with ease.

As for authentication, I think we have overemphasized the value of "official" authentication. Consider the spoofing case alternatives:

a) With http, the first time I know nothing about the other side except what I see on the screen. The second time I also know nothing except what I see on the screen.

b) With a self-signed cert, the first time I know nothing about the other side except what I see on the screen. The second time I know that it is the same people as before. (Spoofing is successfully defeated.) You know about the other side through their behavior in the past.

c) With an "official" cert, I know what I see on the first screen and know that someone has checked that the organization information in the certificate is correct. This is the small increment that is overemphasized. On the second time, it is like b). Since the issuers of "official" certs assume no legal liability for the results of a loss due to an error in the cert, you have an indication of how much the issuers value their authentication.

There is a simple meaningful coexistence of various levels of certificate authority. To a very large degree our actual mechanisms are based on watching past behavior of people. I just need to know with confidence that it is the same person.
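Case (b) is essentially trust-on-first-use, as SSH already does for host keys. A minimal sketch, with names of my own invention: remember a fingerprint of the (possibly self-signed) certificate the first time, and on later visits only verify that it is the same one -- "the same people as before."

```python
import hashlib

class CertPinStore:
    """Trust-on-first-use pinning: no authority vouches for the first
    visit, but any later substitution of the certificate is detected."""

    def __init__(self):
        self._pins = {}  # hostname -> certificate fingerprint

    def verify(self, hostname, der_cert_bytes):
        fp = hashlib.sha256(der_cert_bytes).hexdigest()
        if hostname not in self._pins:
            self._pins[hostname] = fp
            return "first-use"   # nothing known yet; remember this cert
        return "same" if self._pins[hostname] == fp else "CHANGED"

store = CertPinStore()
cert = b"stand-in for real DER certificate bytes"
print(store.verify("blog.example", cert))                # first-use
print(store.verify("blog.example", cert))                # same
print(store.verify("blog.example", b"a different cert")) # CHANGED
```

A "CHANGED" result is exactly the spoofing alarm: either the site legitimately rotated its certificate, or someone is standing in the middle.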

But it's simply not possible for you to have a browser that makes it a pain (or makes it impossible) to login at sites which don't have SSL for the login process. This blog has a login, and because there is no great security consequence to your blog password being stolen, I have not set it up for SSL, and neither have a zillion other blogs and sites.

Anything that says, "You can't take passwords until you upgrade" or "You can't login until you upgrade" had better be a truly wonderful solution, because it won't happen otherwise.

Stage 1 solutions are those that allow either me as a site to protect my users and me from phishing, or me as a user to protect myself, with no change required elsewhere.

Current phishing technology will successfully emulate all of the simpler approaches. A stage 1 that requires no cooperation between both sides won't work. It is at best an entertaining coffee hour pastime. At worst, people will think that they are actually protecting themselves and behave unsafely out of ignorance. SSL is the only widely implemented mutual cooperation approach that will defeat MIM phishing. (Others exist, but are not widely implemented.)

So you need to decide whether phishing is enough of a problem that you ask everyone to make changes. My proposal is not "no login" it is "no automatic assistance to phishable login". That is more than your stage 1. It does pressure both sides to change. It does not prevent manual login by retyping the password every time.

If your protection needs are so low that phishing protection is not needed, you can offer an equally effective automation using site specific login cookies. Don't pretend that you have security when you don't have security.

For high value transactions, the attacks of today will change into endpoint attacks instead of MIM phishing attacks. You already see some of this as attackers keep advancing. The attacker modifies the browser or OS. It then either gathers information or piggybacks unauthorized transactions on top of valid transactions. SSL does not interfere with these attacks because it protects the transit from browser to server. It does not protect the browser.

Protecting transit is still valuable. The malicious wireless access point is now commonplace in public places like convention centers. It is so cheap that I expect it to remain common because it only takes 1% of the public being foolish to recover the cost to the criminal.

Pardon my ignorance, but have we seen significant man in the middle phishing? Or do you refer to DNS poisoning phishing as MITM? While TLS is always good, DNS poisoning should be fixed by authenticating DNS. Try as I might, I can't yet convince sites to go all encrypted; they think it costs too much.

One of the errors of the original design was that you put whether you wanted encryption in the protocol part of the URL, ie. http vs. https. The right way would have been to also have the browser provide encryption information in the fetch request, and the answer come back encrypted and certified if the server supports the encryption. This does not encrypt the URL itself, so it's good to have https or just remember certificates, but it's a lot better than what we have now.

I can send you, via https, to the EFF Web site, but in practice nobody does this. In theory, you can't because you can't be sure the browser supports https, though in fact they almost all do today.

However, while a MITM can defeat any non-secured connection, I think that's very rare today and so lesser approaches can work.

If we're ready to move to a whole new system, then ordinary passwords over SSL are way too little to grasp at. We could have a serious authentication system that is much harder to crack, and even lets you roam to internet cafes with some safety, etc.

The phishing emails that I receive and dissect all have used MITM techniques to varying degrees. The most common is to use the real site for all but the password send. That one transaction is spoofed. I don't usually dissect them further because it's not my job. I'm curious, so I figure out how the initial spoof works.

Other experts inform me that you then find either more partial MITM, where the presented information is from the genuine site but the send action in the submit is to the attacker, or complete MITM where they forward on the password.

The extra step of complete MITM is growing in frequency, as are the even more difficult to stop piggybacking attacks. There is even one phish going around that directly attacks the SSL system by persuading people to "update" their system with bogus new trusted certificates for the specific bank being targeted. (See some of the SANS logs.)

As for the other side issues:
1) The genuine DNS attack that I experienced was done by attacking the registrar, and would not have been stopped by authentication. (It was the panix attack.) I use panix and my ssh authentication method caught this one a few hours after the attack. It was a great surprise to me, since I had previously viewed my security fetish as an "eat your own dogfood" effort with no expectation that I would detect anything. Protecting the DNS system is a good idea, but it is insufficient.
2) On general authentication, too many people have drunk the Kool-Aid. For all but a few cases, all that you need to know is that it is the same person as the previous times. You don't need (or even want) a major authentication system for this. Simple SSL with a whitelist suffices.

For example, typepad uses an https submit action. Everything else is http. I don't need to know who typepad really is. What I need to know is that this is the same people as it was all the previous times. This simple step, if taken by all the legitimate password users, would defeat all the simple MITM phishing.
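The "same people as all the previous times" check argued for here can be sketched as trust-on-first-use certificate pinning, much like ssh's known_hosts file. This is an illustrative sketch, not a real browser API; the pin store and certificate bytes are stand-ins:

```python
import hashlib

def check_site(known_pins: dict, host: str, cert_der: bytes) -> str:
    """Trust-on-first-use: pin the first certificate seen for a host,
    then warn whenever a later visit presents a different one."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    if host not in known_pins:
        known_pins[host] = fingerprint    # first visit: remember it
        return "pinned"
    if known_pins[host] == fingerprint:   # same cert as before: OK
        return "match"
    return "WARNING: certificate changed" # possible MITM (or a legitimate rekey)

pins = {}
print(check_site(pins, "typepad.example", b"cert-bytes-A"))  # pinned
print(check_site(pins, "typepad.example", b"cert-bytes-A"))  # match
print(check_site(pins, "typepad.example", b"cert-bytes-B"))  # WARNING: ...
```

Note that this accepts a self-signed certificate just as readily as a CA-issued one, which is exactly the point being made: continuity of identity matters more than a root authority's blessing.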

Even when it comes to payment transactions, since I use one-time credit card numbers, I have very limited exposure, so all that really matters is that the people to whom I gave the credit card number are the same people providing the service. I don't need to know much more. If there is a dispute, there might be added value in knowing more, but the dollar amounts are small enough that it isn't worth much.

The liability limits on credit cards result in the credit card companies doing a good job of detecting the major frauds. This, plus the use of one-time credit card numbers, is a simple step that substantially shifts the attackers' targets. (It probably increases phishing attacks, since the goal now becomes obtaining enough information to fraudulently establish an account, rather than stealing the credit card number.)

The present "trust anyone who is trusted by ABA.ECOM" logic is actually much weaker. Who is this outfit? (It is the American Bankers Association, but you learn that only with some serious digging. I still haven't found where they publish information that would let me independently confirm that the certificate I have is the one they issued.) I know who "Staat der Nederlanden" is, but why should I be letting them make these decisions? In fact, I want to know that the https submit of the password to typepad actually went to typepad. The present setup is that the browser is happy as long as it goes to somebody who is trusted by a root authority. That is silly. It's much more important that it be the same people (perhaps using a self-signed cert) than that it be someone trusted by America Online.

When dealing with large amounts of money or really critical transactions, I ask for an independent channel verification such as telephone or SMS. At present, the only way to crack that one is to crack the people that I am dealing with. I know of no attack that will successfully perform a coordinated penetration of both my computer and my cellphone.

PS: another growing trend in phishing is tricking the target into installing a keystroke logger. That kind of end-point attack is not stopped by SSL; only defenses that take some of the input away from the keyboard work against it. The growing countermeasure is to add "pick a picture" to the password cycle to interfere with the keystroke loggers. These keystroke loggers are a better argument against investing too much effort in authentication activities.

You can defeat keystroke loggers and many other attacks if you start using mutual authentication instead of mere server authentication. This is a lot of work, but it has much higher eventual value.
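A minimal form of mutual authentication is a two-way challenge-response over a shared secret: each side proves knowledge of the key without ever sending it, so a keystroke-logged password never appears on the wire. A toy sketch, where the shared key is a placeholder established at enrollment and a real protocol would also bind the result to the channel:

```python
import hmac, hashlib, os

def respond(key: bytes, challenge: bytes) -> bytes:
    # Prove knowledge of the key without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

key = b"shared-secret-established-at-enrollment"  # placeholder

# Server authenticates the client with a fresh nonce...
server_nonce = os.urandom(16)
client_proof = respond(key, server_nonce)   # computed on the client side
assert hmac.compare_digest(client_proof, respond(key, server_nonce))

# ...and the client authenticates the server with its own nonce.
client_nonce = os.urandom(16)
server_proof = respond(key, client_nonce)   # computed on the server side
assert hmac.compare_digest(server_proof, respond(key, client_nonce))
```

A phisher who does not hold the key cannot answer either challenge, and replaying an old proof fails because each side picks a fresh nonce.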

The first attacks you describe are not MITM attacks. DNS (and registrar) attacks could be classed as a special type of MITM. They aren't in the middle of your communications with the other host, but they are compromising your database fetch of the mapping from name to address.

Anyway, again, of course it would be good to get more verification, and indeed it is good to know you are talking to the same site.

For logon, however, what really makes the most sense is not the signature of the site but the signature of the user. If the host presents a challenge on the login screen and the user signs it, and at some point a public key was associated with the userid, then there is no password to steal. It also offers the option of keeping your private key in a personal device like your cell phone, so you can answer login challenges even on a compromised internet-cafe terminal with a trojan.
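The challenge-signing login described above can be illustrated with textbook RSA. The parameters here are tiny demo values and utterly insecure; a real system would use a vetted library and proper padding such as RSA-PSS:

```python
import hashlib, os

# Textbook RSA with toy parameters -- for illustration only, not security.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign(challenge: bytes) -> int:
    # Hash the server's challenge, then sign the digest with the private key.
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, sig: int) -> bool:
    # The server checks the signature against the stored public key (n, e);
    # no password ever crosses the wire.
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(sig, e, n) == h

challenge = os.urandom(16)          # fresh nonce from the login screen
assert verify(challenge, sign(challenge))
```

Because the server stores only the public key, a database breach or a spoofed login page yields nothing that lets an attacker answer the next challenge.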

That's not complete immunity, since the trojan could still hijack your session if it recognizes where you're logging in, but it's a lot better. (My broker adds the security of re-demanding authentication when transferring a large sum of money, so I would be fairly secure there.)

Authenticating the site and then using a plain text password typed in is a poor way of attaining the real goal of authenticating the user. Doing both is best, of course.

OK, I might not be typical, but despite doing lots and lots of stuff online (including lots of financial transactions), I've never had a problem.

Since I read email with a text-based reader (VMS MAIL, actually), and occasionally read through spam emails, it seems that most phishing attempts send HTML via email and hope that the reader will follow the instructions ("please update your account") and as a result type a password, TAN or whatever into a bogus website. Thus, if people would simply stop clicking on links in emails, many phishing attempts would fail. (Does any real bank send emails like these anyway?)

If one doesn't type in URLs directly, but rather uses links on one's own bookmark page or whatever, then the bankofthevvest trick won't work. I have a page of such links running on a secure web server which I can access from anywhere. Thus, at this stage, I'm only vulnerable to DNS spoofing, at least if I am accessing the site from an internet cafe or wherever. (The connection to the bank or whatever itself would be HTTPS, so that is probably OK. The main danger is not sniffing the connection, but rather being connected to somewhere one doesn't want to be which appears to be somewhere one wants to be.) If I am accessing the site from home (or from a browser running at home with the display directed elsewhere), I can have the DNS records in my local DNS database, so even hijacking the registrar wouldn't hurt me here. Any high-risk sites I access would have fixed IP addresses.
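The local pinning of name-to-address mappings described above can be approximated with static hosts-file entries. The hostnames and addresses below are placeholders (203.0.113.0/24 is a reserved documentation range), not real banking sites:

```
# /etc/hosts -- example static pins; resolver consults these before DNS
203.0.113.10   www.examplebank.com
203.0.113.20   broker.example.net
```

This only helps for sites whose addresses genuinely stay fixed, and it must be updated by hand if they ever move.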

Of course, this depends on my home system being secure, but I would say that a properly managed VMS system is as close to unhackable as one can get.
