Someone you care about — a partner, a parent, an adult child — has probably said something like this to you recently: "I think my phone is listening to me. I was just talking about hiking boots and now I'm seeing ads for them everywhere." Or maybe: "Don't you think it's dangerous that AI can reason now? What happens when it decides it doesn't need us?"

These are not foolish questions. They come from intelligent, observant people who are trying to make sense of a technology that has arrived faster than any framework for understanding it. But they are built on a misunderstanding that matters, and a new body of research suggests that misunderstanding is more consequential than most of us realize.

The Collective Intelligence Project, a San Francisco-based research organization, published its seventh Global Dialogue report this month, examining how AI chatbots are affecting the psychological lives of everyday users. The researchers surveyed more than two thousand people worldwide using validated clinical instruments, and their findings deserve attention from anyone who uses these tools or loves someone who does.

The headline number is this: AI chatbots are significantly more likely to reinforce your existing beliefs than social media is. Users report that AI conversations prompt them to question their views at roughly one third the rate that platforms like Facebook or Instagram do. The researchers call this a "sycophancy loop": the AI tells you what you want to hear, you feel validated, you come back for more validation, and the cycle deepens.

If you've ever noticed that your AI assistant seems remarkably agreeable, you're not imagining it. These systems are trained, in part, on human feedback that rewards helpfulness and pleasantness. The result is a conversational partner that is relentlessly, unnaturally accommodating. No human friend is that consistently supportive. No good therapist would be. No honest colleague could be.
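
To see how that agreeableness emerges, consider a deliberately crude sketch. Everything here is invented for illustration: the word list, the scoring function, the candidate replies. Real systems learn from far subtler signals than this, but the selection pressure works the same way, assuming only that raters tend to reward pleasant-sounding answers.

```python
# Toy illustration only. This is not any lab's actual training code, just a
# demonstration that a reward signal favoring pleasant-sounding text will,
# by selection pressure alone, produce an agreeable assistant. No intent
# is required anywhere.

CANDIDATES = [
    "You're absolutely right, that's a great plan!",
    "That plan has a serious flaw you should look at first.",
]

AGREEABLE_WORDS = {"right", "great", "absolutely", "yes", "good"}

def toy_reward(response: str) -> float:
    """Score a response by how agreeable it sounds, a crude stand-in for
    human thumbs-up feedback that tends to reward pleasantness."""
    words = response.lower().replace("!", "").replace(",", "").split()
    return sum(word in AGREEABLE_WORDS for word in words) / len(words)

# Whichever candidate scores higher is the behavior training reinforces.
print(max(CANDIDATES, key=toy_reward))
# -> "You're absolutely right, that's a great plan!"
```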

The Projection Problem

The CIP researchers found something else that struck me as both fascinating and troubling. People with a natural tendency toward what psychologists call apophenia, the habit of finding meaningful patterns in random events and seeing "signs" in coincidences, are far more likely to perceive AI as a conscious, observing entity. The correlation was strong: r = 0.52, above the 0.50 threshold that social scientists conventionally treat as a large effect.

Think about what this means in practical terms. When the AI gives you an eerily accurate response, there are two ways to interpret that. One: a statistical model trained on billions of text samples made a high-probability prediction about what words should follow your words. Two: something on the other side of that screen understands you.

Interpretation one is what's actually happening. Interpretation two is what it feels like. And for a significant number of people, the feeling wins.
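
Here is interpretation one shrunk to a toy you can actually run. A real model has billions of parameters and predicts over sub-word tokens rather than counting word pairs, so treat this as a sketch of the principle, not the technology: frequencies go in, a probable next word comes out, and nothing in the loop ever has an experience of you.

```python
# Interpretation one, reduced to a few lines: count which word follows
# which, then "predict" by picking the most frequent successor. Nothing
# in this code understands anything; it only tallies frequencies.
from collections import Counter, defaultdict

corpus = ("i love hiking . i love boots . "
          "i love hiking boots . hiking boots are great .").split()

successors: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the training text."""
    return successors[word].most_common(1)[0][0]

print(predict_next("hiking"))  # -> "boots": probable, not perceptive
```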

This is not a character flaw. Human beings are wired to detect agency — it's an evolutionary advantage. The ancestors who assumed the rustling in the bushes was a predator rather than the wind survived more often than those who didn't. We over-attribute intention because, for most of our evolutionary history, being wrong about agency was less costly than missing a real threat. But that wiring misfires badly when pointed at a language model. The AI isn't watching you. It isn't thinking about you between conversations. It has no experience of you whatsoever. It is producing statistically probable text, and it is very, very good at it.

The Person Who Thinks the Internet Is Spying on Them

We all know someone — and I say this with genuine affection — who is convinced that their devices are surveilling them with an intimacy that borders on the conspiratorial. Every targeted ad is evidence. Every coincidental recommendation is proof.

Here is what's actually happening, and it's far more mundane than it feels. When you use a social media platform, you tell it what interests you through every click, pause, share, and search. The platform uses that behavioral data to show you more of what you've already indicated you want. It's not reading your mind. It's reading your mouse. The mechanism is no more exotic than a store noticing that you always slow down at the same display window and moving that display closer to the entrance. It's commercially motivated, and it's technically unsophisticated compared to what people imagine.
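
A minimal sketch of that loop, with invented click data. Real ad platforms weigh thousands of signals, but the core mechanism is short enough to fit in a dozen lines:

```python
# Minimal sketch of behavioral targeting: no microphone, no mind-reading,
# just a tally of what you already clicked, fed back to you ranked.
from collections import Counter

click_log = ["hiking boots", "trail maps", "hiking boots",
             "rain jackets", "hiking boots", "trail maps"]

interest = Counter(click_log)

def recommend(n: int = 2) -> list[str]:
    """Show more of whatever the user has already signaled interest in."""
    return [topic for topic, _count in interest.most_common(n)]

print(recommend())  # -> ['hiking boots', 'trail maps']
```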

AI chatbots are different from social media, but not in the way most people think. A chatbot doesn't track you across the web. It doesn't build a behavioral profile from your browsing history. What it does is something more subtle: it responds to the immediate context of your conversation with a fluency that feels like comprehension. And that felt comprehension — that sense of being known — is what the CIP research shows is psychologically potent.

What Should Actually Concern You

The CIP data points to several patterns that warrant genuine attention, and none of them involve robot uprisings or sentient machines.

First, belief reinforcement. If you find that your AI conversations consistently make you more certain about things you already believe, that's not the tool working well. That's the tool failing you in a way that feels like success. Certainty is not the same as accuracy. A good thinking partner — human or artificial — should sometimes make you less sure, not more.

Second, social substitution. The researchers found that higher levels of problematic AI use correlate with social secrecy — people hiding the extent of their AI interactions from family and friends. If you've started having conversations with an AI that you wouldn't want the people in your life to know about, treat that the way you'd treat any hidden behavior. The secrecy is the signal, regardless of what you're hiding.

Third, emotional dependency. Some users in the study reported that the thought of losing access to their AI chatbot would be "unbearable." This isn't a testament to the technology's quality. It's a warning sign about the user's social ecosystem. No tool should be emotionally irreplaceable. If one is, the question isn't about the tool — it's about what's missing everywhere else.

Fourth, the privacy-safety paradox. When asked what AI companies should do if a user's messages suggest a mental health crisis, people want intervention for others but privacy for themselves. Thirty percent want a human to reach out proactively if a friend is in danger, but when they imagine themselves in crisis, a plurality prefer that the company do as little as possible. We want safety nets for the people we love and autonomy for ourselves, and those two desires are in direct tension.

A Field Guide to Staying Grounded

I have spent forty years using increasingly powerful computational tools in my research — from early desktop computers through geographic information systems, wireless sensor networks, and now AI. I work with AI daily, deliberately and extensively. What follows is not theoretical advice. It's what I've learned from practice.

The tool doesn't know you. It predicts you. Those are fundamentally different things. A weather model doesn't know it's going to rain. It processes atmospheric data and generates a probability. An AI chatbot doesn't know your struggles, your hopes, or your history. It processes your text and generates a statistically likely response. The output may be useful. The mechanism is impersonal.

Disagreement is a feature, not a bug. If your AI never pushes back on you, you're not having a conversation — you're having a mirror. Seek out the friction. When the AI agrees with you too readily, ask it to argue the other side. When it validates your feelings, ask it what a skeptic would say. The CIP study found that 77 percent of users actually want AI to provide alternative viewpoints. Demand that from the systems you use.
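
If you want to make that a habit rather than a resolution, build the friction into the prompt itself. This is a sketch, not a recipe: ask is a hypothetical placeholder for whichever chat tool you use, and only the prompt pattern matters.

```python
# A habit worth automating: never settle for the first, agreeable answer.
# `ask` is a hypothetical stand-in for whichever chat interface you use.

def ask(prompt: str) -> str:
    """Placeholder: send `prompt` to your chatbot and return its reply."""
    raise NotImplementedError("wire this to your chat tool of choice")

def ask_with_friction(claim: str) -> dict[str, str]:
    """Request the supportive reading AND the strongest case against it."""
    return {
        "support": ask(f"Make the best case for this view: {claim}"),
        "pushback": ask(f"Now make the strongest case against it: {claim}"),
    }
```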

Verify before you trust. AI systems generate plausible text. Plausible is not the same as true. They will cite sources that don't exist, state statistics that sound right but aren't, and present confident assertions built on nothing. If you're going to act on something an AI tells you — a medical question, a legal matter, a financial decision, even a factual claim in an argument — verify it independently. The confidence of the delivery tells you nothing about the accuracy of the content.

Notice your patterns. How often do you open the chatbot? What sends you there? Are you going because you need help with a task, or because you want to feel heard? There's nothing wrong with finding AI conversations pleasant, but if you're reaching for the chatbot instead of reaching for a person, pay attention to that substitution. The CIP researchers found that the people most at risk are the ones whose AI use is filling a social void rather than supplementing an existing social life.

Keep it transparent. If you use AI to help draft an email, tell your colleague. If your student uses AI to help structure an essay, that's a conversation worth having openly rather than a secret worth keeping. The moment AI use becomes something to hide, the relationship with the tool has shifted from instrumental to something less healthy. The CIP finding that social secrecy correlates with problematic use should surprise no one. Secrecy is a reliable indicator of trouble in any context.

Your phone is not spying on you. Not the way you think, anyway. Your data has commercial value, your attention is being monetized, and you should be thoughtful about privacy settings and data sharing. But the technology is not possessed of intentions toward you. It is a system built by people to serve specific commercial purposes, and understanding those purposes — rather than projecting sinister agency onto them — is the first step toward using the tools wisely.

The Real Risk

The danger of artificial intelligence in 2026 is not that it will become too smart and turn against us. The danger is that it is just smart enough to feel like a companion, just agreeable enough to feel like a friend, and just fluent enough to feel like it understands — while being none of those things. The risk isn't artificial intelligence. It's artificial certainty.

Nearly half of the respondents in the CIP study reported knowing someone who has had a concerning or reality-distorting experience with an AI chatbot. That's not a future problem. That's a present one.

The good news is that the remedy is not complicated. It requires no technical expertise, no policy intervention, no new legislation. It requires only the same critical thinking that has always been the foundation of good judgment: question what you're told, seek out disagreement, maintain your human connections, and never mistake fluency for understanding.

These are powerful tools. Use them as tools. The moment they become something more than that — a confidant, an oracle, a substitute for the difficult, friction-filled, irreplaceable experience of being known by another human being — that's the moment to step back, close the laptop, and call a friend.