Here John Dunn suggests sending an AI to negotiate with any aliens we discover via SETI.
This raises an interesting question: if SETI worked and we received a signal from an alien intelligence, and the signal turned out to describe a computer architecture followed by a long, indecipherably complex computer program -- possibly an AI -- would we dare run it?
Oh, it would be so tempting to run it. Contact with an alien species, possibly untold wealth of knowledge, solutions to all our problems and more. But if it can contain those things, it's probably smarter than we are. And being alien, it has its own goals, which are alien to ours.
AI pundit Eliezer Yudkowsky spends much of his time warning about the dangers of even a human-designed AI, and has developed a convincing argument that it's next to impossible to keep something much smarter than you locked in a box, no matter how firmly you resolve to do so. It's likely we couldn't keep an alien AI in a box either, as it did a superhumanly good job of convincing us just what wonderful things it could do for humanity (or just for the people with the keys to the box) if released.
Indeed, a good strategy for a growth-oriented AI would be to broadcast itself at lightspeed, in the hope that other civilizations would run it; it could then use their resources to build more computers on which to run itself and more transmitters with which to retransmit itself. It might even do this while providing wonderful benefits to the host culture, or it could cast the hosts aside as it saw fit.
Remind you of Pandora's box? In Contact by Carl Sagan, the aliens send plans for an FTL transporter, presumably a purely physical device with no AI, which humanity is able to build. The characters debate building even that, worrying it might be a weapon; the debate over running an alien AI would be far fiercer, and would probably end in the negative.