During the 1990s, the US Government made a major effort to block the deployment of encryption by banning its export. We won that fight, but during the formative years of most internet protocols, the policy made it hard to add good authentication and privacy to internet tools. It forced vendors to jump through hoops, made users download special "encryption packs," and made encryption the exception rather than the norm in online life.
This, combined with bad design decisions made even without the government's help, is the cause of some of the security holes that are bugging people today.
A recent issue is DNS poisoning, now becoming known by the name "pharming." The scammers send fake DNS answers in advance to buggy DNS servers -- those running on MS Windows Service Pack 2 or earlier, or very old *nix copies of BIND. They tell the server that www.yourbank.com should really resolve to their own address, where they host a fake version of the site.
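The attack above boils down to a race: a vulnerable resolver caches whichever answer arrives first, with no check that it came from the server it asked. Here is a minimal toy model of that buggy behavior (the function names and the IP addresses, drawn from the documentation ranges, are invented for illustration):

```python
# Toy model of DNS cache poisoning: a naive resolver caches whichever
# answer arrives first, so an attacker who races the real nameserver
# plants a bogus address for the bank's hostname.
cache = {}

def accept_answer(name, address, source):
    # Buggy behavior: no check of the query's transaction ID, source
    # port, or origin -- the first answer to arrive wins.
    if name not in cache:
        cache[name] = address

# The attacker's forged answer arrives just before the real one.
accept_answer("www.yourbank.com", "203.0.113.66", source="attacker")
accept_answer("www.yourbank.com", "198.51.100.10", source="real nameserver")
print(cache["www.yourbank.com"])  # 203.0.113.66 -- the fake bank site
```

A hardened resolver rejects answers whose transaction ID and source port don't match an outstanding query, which is exactly the "very basic" protection the next paragraph refers to.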
Now of course we should have made DNS reliable and secure to stop this, or at least applied the very basic protections found in the most up-to-date DNS servers. But even so, this attack should not have been enough.
That's because SSL certificates were supposed to assure you that you were really talking to yourbank.com when the browser said you were, even if somebody hijacked the connection like this. And they do. The phisher can't pretend to be yourbank.com while the little "lock" icon on the status bar of your browser shows locked. But they can pretend it while the icon shows unlocked.
And surprise, surprise, people forget to look at the icon. A lot. They turn off the warnings about transitions to insecure pages because they go off all the time, and nobody pays attention to an alarm that's always going off. Encryption and SSL are rare, special things limited to login screens. We tolerate all the rest of life being unencrypted and in the clear -- and vulnerable, just like the US DoJ wanted it. Yet how much harm have they done by leaving the public vulnerable to these attacks? Probably more harm than the crime they catch by snooping on unencrypted traffic.
There are some fixes that could be done. For example, a certificate could include a directive that says, "Now that you've seen this certificate for foo.com, require that every future visit to foo.com present a valid certificate." This would stop impostors from using an unlocked clone of the site. (It would also put the site in real trouble if it didn't keep its certificates in place and updated at all times.) You would only have to look at the lock icon the first time you went to a site. (The first time on every machine -- if you visited from an internet cafe or a friend's computer, it would not necessarily remember.)
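The proposal above is a trust-on-first-use pin. A minimal sketch of the logic a browser could keep, assuming a hypothetical per-host store (none of these names are any real browser's API):

```python
# Trust-on-first-use pinning sketch: once a host has presented a valid
# certificate, remember it and refuse unencrypted visits afterwards.
pinned_hosts = set()  # hosts that have ever shown a valid certificate

def record_certificate(host, cert_valid):
    """Remember that this host served a valid certificate."""
    if cert_valid:
        pinned_hosts.add(host)

def allow_visit(host, uses_tls):
    """Permit a visit only if the host was never pinned, or TLS is on."""
    if host in pinned_hosts and not uses_tls:
        return False  # an impostor serving an unlocked clone is blocked
    return True

record_certificate("www.yourbank.com", cert_valid=True)
print(allow_visit("www.yourbank.com", uses_tls=False))  # False: blocked
print(allow_visit("www.example.com", uses_tls=False))   # True: never pinned
```

The failure mode in the parenthetical is visible here too: once pinned, the site must serve a valid certificate forever, or legitimate visits get blocked along with the impostor's.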
Of course, if encryption were transparent and the default, then we could get alerted when it wasn't in operation. If that were a rare event, it would be an alert worth paying attention to.
But this doesn't happen. For example, I can link to any page on our EFF web server with an https link like this: https://www.eff.org. That connects you encrypted, as should be the norm. But nobody ever makes links like that, since by the strict standards of the web they would break for anybody on a browser that can't handle https. SSL was also very expensive to do in the early days (and still is for very large sites), particularly in connection set-up. So nobody wanted it.
A better solution would have been for default http requests to include, in every fetch, "I can encrypt, and here's my key to use in your response." The response would be encrypted with that key, and would contain a certificate proving the server's keys and identity too. A client that could not encrypt would simply omit this and get normal http. With enough of this, encryption could have become the norm, not the exception.
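The shape of that proposal can be sketched in a few lines. This is a toy, not a real scheme: the header name is invented, and the XOR "cipher" is a stand-in for a real algorithm, used only to show how an old server can ignore the offer while a capable one encrypts:

```python
import base64
import secrets

def make_request():
    """Client side: attach a fresh key to every fetch."""
    key = secrets.token_bytes(16)
    headers = {"X-Accept-Encryption": base64.b64encode(key).decode()}
    return key, headers

def server_respond(headers, body):
    """Server side: encrypt with the offered key, or fall back to plain."""
    offered = headers.get("X-Accept-Encryption")
    if offered is None:
        return body, False  # legacy client: ordinary cleartext http
    key = base64.b64decode(offered)
    # Stand-in for a real cipher: XOR the body with the repeated key.
    cipher = bytes(b ^ key[i % len(key)] for i, b in enumerate(body))
    return cipher, True

key, hdrs = make_request()
resp, encrypted = server_respond(hdrs, b"hello")
plain = bytes(b ^ key[i % len(key)] for i, b in enumerate(resp))
```

The key property is graceful fallback: a server that has never heard of the header just ignores it and answers in the clear, so nothing breaks during the transition -- which is what would let encryption spread without flag days.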
(If the site provides a cert, a man-in-the-middle can pretend to be the user to the site, but not the site to the user.) But MITMs are not the big worry here. We worry about them so much that we've forgotten the real enemies, like these phishers.