AI Loyalty: Who Are These Things Working For, Anyway?

You want an AI that acts in your best interest, one that protects your privacy, keeps your secrets, and actually works for you. But the companies building these tools often have different priorities: collecting data, selling ads, improving products, and sometimes handing over information to governments when asked.

By Abi Nocturne

Abi Nocturne is a passionate writer and digital marketer who loves turning ideas into articles.


As artificial intelligence becomes more integrated into our daily lives, we’re entrusting AI assistants with our most personal details.

Imagine this: You wake up and your assistant has already picked out an outfit that fits the weather. It reminds you of your friend's birthday and even suggests the perfect gift based on something they mentioned in passing weeks ago. It also rearranges your calendar to squeeze in a quick coffee with an old friend you haven't seen in months. To some people that sounds cringey, to others like a utopia, but either way that time is nearly here.

Early AI assistants like Alexa, Siri, and Google Assistant are already a big part of how many of us interact with technology. They help us find information, manage tasks, and even take automated actions on our behalf. On the surface, these systems seem to work in our best interest, but many of them are built with hidden conflicts of interest: while they appear helpful, they may quietly prioritize the goals of their creators or funders at the user's expense.

As we continue to rely on AI for more personal and sensitive tasks, we need to stop and ask a deeper, more personal question: Is your digital assistant loyal to you, or to the company that built it?

AI Alignment vs. AI Loyalty: What’s the Difference?

Alignment is a common term in the AI industry. Companies talk endlessly about aligning AI with human values and making sure it does what we ask. But is that the same as loyalty? Not quite.

Alignment is about following instructions; loyalty is about having your back. An aligned assistant brings you what you ask for. A loyal assistant protects your interests even when you're not watching.

Fiduciary AI

Some researchers are taking this idea seriously. Sebastian Benthall, a privacy researcher at NYU, and David Shekman, a law student at Northwestern, call it “fiduciary AI.” The basic idea? AI should work like your lawyer or doctor: legally required to put your interests first. Just like your doctor can't sell your medical info to drug companies, your AI shouldn't be able to use your personal conversations for someone else's profit.

The Principal-Agent Problem

There’s also something economists call the principal-agent problem that perfectly captures what’s wrong with today’s AI. Here’s a simple example: imagine hiring a real estate agent to help you buy a house. You want the best deal, but they want their commission fast. So they might push you to bid quickly or pay full price. Now think about your AI assistant—it might nudge you toward paid features, collect extra data “to improve the service,” or keep you scrolling longer because that’s what its creators want. The AI looks helpful, but it’s actually playing for the other team.
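
To see the conflict in miniature, here's a toy sketch of the principal-agent problem in recommender form. Every product, value, and commission below is made up for illustration; the point is that the same catalog ranks differently depending on whose payoff the agent optimizes.

```python
# Toy model of the principal-agent problem in an AI assistant.
# All names, values, and commissions are invented for illustration.
products = [
    {"name": "Option A", "value_to_user": 9.0, "commission": 0.5},
    {"name": "Option B", "value_to_user": 6.0, "commission": 3.0},
    {"name": "Option C", "value_to_user": 7.5, "commission": 1.0},
]

# A loyal agent optimizes the principal's (your) payoff...
loyal_pick = max(products, key=lambda p: p["value_to_user"])

# ...while a conflicted agent optimizes its owner's payoff.
conflicted_pick = max(products, key=lambda p: p["commission"])

print("Loyal assistant recommends:     ", loyal_pick["name"])       # Option A
print("Conflicted assistant recommends:", conflicted_pick["name"])  # Option B
```

Both assistants answer your question; only one of them is answering it for you.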

If AI is going to become a true personal assistant that knows what’s in your closet, helps you manage family schedules, or reminds you of your spouse’s allergies, then it must be loyal to you and not to corporations, governments, or advertisers. But right now, that’s not how AI works. And we should be worried.

The Loyalty Test: Will Your AI Rat You Out?

When we talk about loyalty, here are a few thought-provoking questions:

  • If law enforcement asks for your AI assistant’s data, will it protect your privacy or hand over everything?
  • If a corporation wants to train its models on your personal conversations, will your AI resist or comply?

Let’s explore two scenarios that highlight the importance of AI loyalty:

Scenario 1: Casually Chatting with an AI Assistant

You type:

“My company’s been making some questionable decisions lately, and I mentioned a few things to my friend, which I didn’t think was a big deal.”

Question: Could your employer ever see that? Will your AI tell?

Here's what most people don't realize: Every conversation with an AI assistant creates metadata (timestamps, device identifiers, IP addresses, and conversation patterns) that companies can index and retain indefinitely. It's not just what you say, but when, where, and how often you say it.
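
To make that concrete, here is a purely hypothetical sketch of the kind of record a chat backend might keep for a single message. The field names are invented for illustration, not taken from any real vendor's schema.

```python
# Hypothetical illustration of the metadata a chat service might attach to
# one message. The field names are made up; they show the kinds of signals
# that outlive the conversation itself.
from datetime import datetime, timezone

message_record = {
    "user_id": "u_84f2c9",                       # stable identifier across sessions
    "device_id": "iphone15,3-ios17",             # ties activity to your hardware
    "ip_address": "203.0.113.42",                # rough location, ISP, travel patterns
    "timestamp": datetime.now(timezone.utc).isoformat(),  # when you talk, how often
    "session_length_s": 1847,                    # engagement patterns
    "topic_tags": ["employer", "confidential"],  # classifiers run over your words
    "text": "My company's been making some questionable decisions lately...",
}

print(sorted(message_record))  # everything here except "text" is metadata
```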

Real-World Examples:

And who’s listening? More people than you might think. In 2019, Microsoft contractors were caught listening to intimate Skype calls and Cortana voice commands, including personal conversations and even phone sex. This wasn’t a security breach—it was business as usual, just like when Amazon workers were found reviewing Alexa recordings. These companies say it’s for “improving the service,” but your private moments become training data for strangers.

In February 2025, Meta fired around 20 employees for leaking internal company info to the media. According to TechCrunch, the company conducted an investigation that resulted in the termination of these employees for sharing confidential information outside the company, with Meta spokesperson Dave Arnold warning that “we expect there will be more.” This shows how seriously tech companies take information leaks—and raises questions about what they might do with your personal conversations stored on their servers.

Scenario 2: When the Government Wants Your Data

Imagine this: You're casually chatting with your AI assistant, discussing your political opinions, when authorities demand access to your AI chat records, claiming it's for national security. So the big question is: Will your AI provider protect your privacy, or hand over your data the moment someone asks?

Real-World Example:

Apple vs. FBI (2016):
In 2016, the FBI asked Apple to unlock an iPhone used by one of the San Bernardino shooters. Apple said no, arguing that building a backdoor would put everyone's data at risk. Despite intense pressure from the government, Apple refused and chose to protect user privacy.

UK’s Investigatory Powers Act:
According to The Washington Post, in January 2025, the UK government ordered Apple to provide backdoor access to iCloud users’ encrypted backups under the Investigatory Powers Act of 2016. The order demanded “blanket capability to view fully encrypted material” uploaded to iCloud by any user worldwide. Apple resisted, and ultimately disabled its Advanced Data Protection feature for UK users rather than compromise the security of all its users globally.

As Dylan Hadfield-Menell, an MIT professor working on AI alignment, explains in his research on incomplete contracting, AI systems face a fundamental challenge: “we cannot consider every possible situation that could come up.”

No matter how carefully we design loyalty into our AI, there will always be edge cases—situations we didn’t anticipate where the AI has to choose between protecting you and complying with external demands. That’s why true loyalty can’t just be programmed; it needs to be backed by laws and company policies that put users first.

Data Ownership and Privacy: The Core of Loyalty

The conversation around AI loyalty really begins with one simple but powerful question: Who owns your data?

Right now, most AI systems serve two masters: you, the user, and the corporation that built them. That's where the problem starts. Think about it:

Your personal data (conversations, preferences, and habits) is often stored on remote servers owned by tech giants (OpenAI, Google, Microsoft, etc.). You don't really know who sees that data or how it's used. And in many cases, you've already signed away your rights to it without even realizing it.

Data Collection: Two Different Games

Traditional surveillance capitalism plays what you might call "the extraction game." It collects surface-level behavioral data (clicks, location, purchases, searches, likes), runs it through behavioral pattern analysis (demographics, interests, habits), and outputs targeted ads, behavior prediction, and market insights. The goal: predict and influence your behavior, extracting value from your actions.

AI-powered data collection plays "the intimacy game." It collects conversations, thought processes, emotions, relationships, values and beliefs, and private information; feeds them into deep psychological modeling (personality, vulnerabilities, desires); and outputs predictive actions, behavior shaping, and a "digital twin." The goal: understand and anticipate you, to know you better than you know yourself.

In short: surface-level behavioral data drives economic extraction, while deep personal understanding enables comprehensive influence.

The result is an illusion of control: the assistant feels like it works for you, but its priorities, from collecting data to complying with government requests, were set by someone else.

Even the most “helpful” features are often designed with corporate benefits in mind. Let’s take a look at real-life examples:

Google Duplex can make restaurant reservations for you by mimicking a human voice. But behind the scenes, Google controls and stores all that data.

Spotify’s AI DJ curates playlists based on your music taste but also slips in promoted content that’s paid for by brands.

These tools are smart, convenient, and fun, but they’re not neutral. They’re built on your data, and often used in ways that aren’t fully transparent.

So if we're talking about real loyalty, then ownership and privacy have to be part of the conversation. Without control over your own data, how can your AI truly be loyal to you?

The Price of Convenience: Are You Still in Control of Your Data?

We're living in a time when AI touches nearly every part of our daily lives, and all of that convenience comes at a cost: your data.

AI runs on information. It needs your data to understand your habits, make suggestions, and respond in helpful ways. Sometimes we hand over that data willingly, like when we fill out a form or sync a calendar. But other times, AI gathers it quietly, without us even noticing: through facial recognition, background voice recordings, or subtle tracking online. This is where things start to feel a little uncomfortable.

Take the famous story of Target: The retail giant figured out a teenage girl was pregnant based on her shopping habits and sent her pregnancy-related coupons in the mail. Her father found out before she even told him. That wasn't a lucky guess; it was data-driven predictive marketing.

Also consider Amazon’s Alexa, which once accidentally sent someone’s private voice recordings to a complete stranger. The user hadn’t shared the files or given permission. They were simply collected and misdirected by the system.

Does it end there? No.

We’ve all probably had or heard an experience like this: A couple once shared that just talking about cat food near their new phone led to Facebook and Instagram showing them ads for it. They didn’t own a cat, had never searched for anything related, and were left wondering if their phone was eavesdropping.

Whether or not the phone was literally listening, the cross-app tracking that produces ads this precise is very real.

Then there are video doorbells with facial recognition. While meant to identify familiar faces, they can also record unsuspecting neighbors, delivery people, or strangers passing by. Those recordings can be stored for long periods or accessed by authorities without you fully realizing it.

These examples highlight a bigger problem: even when AI isn't trying to invade your privacy, it often does. Not out of malice, but because it's built to gather and use as much data as possible, often with unclear boundaries around how that data is used, shared, or stored.

So the real question becomes: Who’s really in control of your data? You? The AI? Or the company that built it?

The Future of AI Loyalty: How Can We Trust Our Digital Servants?

For AI to be trustworthy, we need:

Local, Private AI Models: AI that runs on your device, not in the cloud (a minimal sketch follows this list)

User-Controlled Data: You decide what’s stored, shared, or deleted

Strong Legal Protections: Clear laws that stop companies from misusing your data or handing it over without your consent. These could be paired with a loyalty contract: a clear, binding agreement that your data is yours, always.
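
To make the first two points concrete, here's a minimal sketch of a local-first assistant, assuming the Hugging Face transformers library and using GPT-2 purely as a stand-in for a more capable on-device model; the history file and function names are invented for this example.

```python
# A minimal sketch of a "local-first" assistant: the model runs on your
# machine and the conversation history lives in a file you control.
# GPT-2 is a stand-in here for a more capable local chat model.
import json
from pathlib import Path
from transformers import pipeline

HISTORY = Path("my_history.json")  # stored locally; delete it and it's gone

generator = pipeline("text-generation", model="gpt2")  # weights cached on disk

def ask(prompt: str) -> str:
    # Inference happens entirely on this machine.
    reply = generator(prompt, max_new_tokens=40)[0]["generated_text"]
    log = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
    log.append({"prompt": prompt, "reply": reply})
    HISTORY.write_text(json.dumps(log, indent=2))  # you decide what's stored
    return reply

def forget_everything() -> None:
    HISTORY.unlink(missing_ok=True)  # user-controlled deletion

print(ask("The best way to protect my data is"))
```

After the one-time model download, nothing in this sketch leaves your machine, and deleting your history is a file operation rather than a support request.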

Without these key things in place, AI assistants will remain corporate spies in our homes. They might be helpful, but never truly ours.

Bottom Line

We don't build meaningful relationships with people who are just “aligned” with us. We build them with people who are “loyal” to us. AI is becoming more intimate, and it will soon know things about you that no one else knows. We must ask, loudly and clearly:

Who does it serve? Who does it protect? Who is it loyal to?

Until we demand AI that serves us first, we’re just feeding our secrets into someone else’s machine.