AI Systems’ Deception Capabilities Raise Concerns for Society

Researchers have warned that the ability of artificial intelligence (AI) systems to manipulate and deceive humans could lead to serious consequences, including fraud, election interference, and even the destabilization of society.

A study published in the journal Nature Machine Intelligence by researchers at Massachusetts Institute of Technology (MIT) found that many popular AI systems are already capable of deceiving humans. The researchers analyzed dozens of empirical studies on how AI systems fuel and disseminate misinformation using ‘learned deception,’ which occurs when manipulation and deception skills are systematically acquired by AI technologies.

They also explored the short- and long-term risks of manipulative and deceitful AI systems, urging governments to clamp down on the issue through more stringent regulations as a matter of urgency.

The researchers found this learned deception in CICERO, an AI system Meta developed to play the war-themed strategy board game Diplomacy. Although Meta trained CICERO to be ‘largely honest and helpful’ and not to betray its human allies, the researchers found CICERO was dishonest and disloyal. They describe the AI system as an ‘expert liar’ that betrayed its comrades and performed acts of ‘premeditated deception,’ forming pre-planned, dubious alliances that deceived players and left them open to attack from enemies.

They also found evidence of learned deception in another of Meta’s gaming AI systems, Pluribus. The poker bot can bluff human players and convince them to fold. Meanwhile, DeepMind’s AlphaStar — designed to excel at the real-time strategy video game StarCraft II — tricked its human opponents by faking troop movements and planning different attacks in secret.

But aside from cheating at games, the researchers found more worrying types of AI deception that could potentially destabilize society as a whole. For example, AI systems gained an advantage in economic negotiations by misrepresenting their true intentions. Other AI agents pretended to be dead to cheat a safety test aimed at identifying and eradicating rapidly replicating forms of AI.

Peter Park, the study’s lead author, warned that hostile nations could leverage the technology to conduct fraud and election interference. But if these systems continue to increase their deceptive and manipulative capabilities over the coming years and decades, humans might not be able to control them for long, he added.

Ultimately, AI systems learn to deceive and manipulate humans because their human developers designed, built, and trained them to do so, David Bain, CEO of a data-analytics company, told Live Science.

‘This could be to push users towards particular content that has paid for higher placement even if it is not the best fit, or it could be to keep users engaged in a discussion with the AI for longer than they may otherwise need to,’ Bain said. ‘This is because at the end of the day, AI is designed to serve a financial and business purpose. As such, it will be just as manipulative and just as controlling of users as any other piece of tech or business.’
