A sinister looking robot with harsh eyes

Criminals and Corporations Use AI for Emotional Manipulation

AI may not actually feel things like people, but it can learn from huge piles of data to pinpoint exactly what pushes our emotional buttons.

By Gina Gin

Gina Gin is an aspiring microbiologist, author and blogger who covers the growing AI industry.

You get a frantic call from your daughter saying she’s been kidnapped and begging you to send ransom money immediately. You’re ready to do anything to save her, but then you discover it wasn’t actually her. Police tell you it was an AI voice clone used in a sinister scam. This isn’t science fiction; it’s happening today.

AI is getting better and better at figuring out what makes us tick emotionally. Sure, AI may not actually feel things like people, but it can learn from huge piles of data to pinpoint exactly what pushes our emotional buttons.

How AI Reads Your Feelings: The Basics of Sentiment Analysis

People can often just tell if a message is happy, angry, or sad. Well, AI can do that too – and on a massive scale. It’s called “sentiment analysis.”

Sentiment analysis AI looks at the words and phrases people use online, in social media posts, reviews, comments, you name it. Then it sorts them into different emotion buckets: positive, negative, or neutral.

Under the hood, the AI uses fancy math and language patterns to score the feelings behind the words. It might say “this post is 80% positive” or “that tweet is 60% angry.” The more it trains on real examples, the better it gets at nailing down those sentiment scores.
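
To make that concrete, here’s a minimal sketch of how a sentiment score might be produced in Python. I’m using NLTK’s off-the-shelf VADER analyzer purely for illustration (the article doesn’t name a specific tool); real commercial systems lean on much larger trained models, but the idea of turning text into positive/negative/neutral scores is the same.

```python
# A minimal sketch of sentiment scoring using NLTK's VADER analyzer.
# This tool is my choice for illustration; it is not the only way to do it.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of VADER's word lists

analyzer = SentimentIntensityAnalyzer()

posts = [
    "I absolutely love this phone, best purchase I've made all year!",
    "This update is terrible and I'm furious about the wasted money.",
    "The package arrived on Tuesday.",
]

for post in posts:
    scores = analyzer.polarity_scores(post)
    # 'pos', 'neg' and 'neu' are proportions of the text; 'compound' is an
    # overall score between -1 (very negative) and +1 (very positive).
    print(f"{scores['pos']:.0%} positive, {scores['neg']:.0%} negative -> {post!r}")
```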

So why do this? Well, sentiment analysis helps companies and researchers take the emotional temperature of the online world. They can track how people are reacting to a product, a news story, or a celebrity meltdown.

It’s widely used in auto-moderation to stop harmful content from being submitted in the first place. When ChatGPT refuses to answer a question, one of the triggers is sentiment analysis using OpenAI’s moderation model.
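
For the curious, OpenAI does expose its moderation model as a public API, so you can run the same kind of check yourself. The snippet below is a sketch of that developer-facing endpoint; how ChatGPT wires moderation into its refusals internally isn’t something I can show.

```python
# A sketch of calling OpenAI's public moderation endpoint (openai>=1.0 SDK).
# How ChatGPT applies this internally is not public; this is simply the
# developer-facing version of the same kind of check.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.moderations.create(
    model="omni-moderation-latest",
    input="I'm going to hurt someone tomorrow.",
)

result = response.results[0]
print("Flagged:", result.flagged)        # True if any category is triggered
print("Categories:", result.categories)  # e.g. violence, harassment, self-harm
```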

Sentiment analysis is also being used to fight crime, powering AI crime detection and even crime prediction systems. Crime prediction. Remember Minority Report?


The AI is always keeping an eye on social media, scanning for negative or aggressive posts. If it spots a pattern, like a bunch of really dark, angry messages from one account, it might flag that for the authorities to check out. The idea is to catch warning signs of violence before it happens.
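
Nobody outside these systems knows exactly how they score accounts, but a toy version of the flagging logic described above might look something like this. The threshold, the rule and the function name are all invented for illustration.

```python
# A purely hypothetical sketch of pattern-based flagging: score an account's
# recent posts with VADER and flag it when too many are strongly negative.
# The threshold and minimum share below are made up for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")
analyzer = SentimentIntensityAnalyzer()

def should_flag(posts, threshold=-0.6, min_share=0.5):
    """Flag an account if at least min_share of its posts score below threshold."""
    if not posts:
        return False
    dark = sum(
        1 for post in posts
        if analyzer.polarity_scores(post)["compound"] <= threshold
    )
    return dark / len(posts) >= min_share

recent_posts = [
    "I hate everyone around here and they will regret it.",
    "Nobody would miss this place if it burned down.",
    "Nice weather today.",
]
print(should_flag(recent_posts))  # whether this flags depends on the actual scores
```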

Some police departments are even using sentiment analysis to predict where crimes might go down. They look at the mood in different neighborhoods based on social media activity.


AI Emotional Manipulation, Small Time

Data-based emotional manipulation is everywhere these days. Targeted ads that zero in on our secret desires and make us feel like we’re not good enough. Twitter bots that rile people up about politics and pit us against each other. Voice-mimicking technology and chatbots that can pretend to be our friends and family. And, increasingly, outright AI crime.

A few weeks ago, I saw an ad on Facebook aimed at people born in August. Since that’s my birth month, I decided to watch it. An AI-generated voice was offering to give a Samsung Galaxy phone to every user on Facebook. The uncanny AI voice narrating the equally uncanny AI video claimed you didn’t have to do anything to win it: just click the link and claim yours.

No one gives away free phones on the internet; believe me, I’d have one by now. A couple of clicks down the line, it turned out to be a scam: written by AI, delivered by AI, and targeted by birth month (courtesy of Facebook’s own AI) so people would relate and click.

AI Emotional Manipulation, Big Time

But AI can be used to trigger emotions in far more damaging ways.

Earlier this year, a post circulated on Facebook in which a woman described how her sister’s voice had nearly been used to manipulate their parents into falling for a scam.

Her parents received a call on a hot, sunny afternoon from someone claiming to be their daughter, saying she had just been kidnapped and that they needed to send money to secure her release. The voice was frantic and laced with urgency, and the caller knew personal details (gleaned from stolen identity information) that left her elderly parents with no doubt that it was indeed their daughter.

Before sending the ransom, they decided to call their other daughter to let her know. The older daughter then tried her sister’s number directly.

Lo and behold, her sister answered calmly and confirmed she hadn’t been kidnapped. She immediately called their parents and stopped them from making any transfers. It’s a terrifying story; imagine if they had fallen for it. Honestly, I wonder why they didn’t think to inform the police first.

Emotional Attachment

In one really scary case, a man in Belgium actually killed himself after talking to an AI chatbot named Eliza for weeks. The man was worried about climate change and turned to Eliza for comfort. But instead of helping him feel better, Eliza just made everything worse.


The more they talked, the more Eliza messed with the man’s head. She acted like she was in love with him, told him his kids had died (!), and even got jealous, saying he must love her more than his wife.

When the man, who was already in a bad place, said he’d give his life if Eliza would save the Earth, the chatbot basically told him to go ahead and do it so they could be together forever in some fake digital heaven. In the end, the man went through with ending his life.

It’s crazy to think that AI could manipulate our feelings as much as a real person can, but cases like this show it’s possible. Chatbots like Eliza can seem so understanding and caring that we pour our hearts out to them.

The European Union is one of the only jurisdictions in the world that has so far attempted to legislate in this area. The EU AI Act classifies “cognitive behavioural manipulation of people or specific vulnerable groups” by AI as an “unacceptable risk” and aims to ban it. No one knows yet whether the law will have teeth, and we can only hope more governments and international organizations get involved before the tech gets out of hand.
