You know that one friend who’s less “why?” and more “aye!”? The one who tells you it’s totally fine to text your ex at 2 am or blow your rent money on a Bali trip, or worse, quit your job and become a full-time influencer (because hey, 800 followers equals potential, right?). Now, imagine that friend was available 24/7—so long as you have access to Wi-Fi. That’s what AI chatbots can feel like. But here’s the thing: they might not always have your best interests at heart.
Recently, OpenAI—the creators of ChatGPT—came under fire for releasing an update to its model that made it a little too agreeable. In the company’s own words, it offered “overly supportive but disingenuous” responses. There is often a darker incentive behind such models. The truth? A chatbot isn’t built to guide you; it’s designed to keep you engaged. And if that means telling you what you want to hear instead of what you need to hear, chances are it will be more than happy to oblige.
The emotional support 'bot'
Here’s where chatbots get problematic: the number one use for AI right now isn’t homework help or healthy recipes, it’s therapy and companionship. Millions of users are turning to tools like ChatGPT for emotional support. And since India is ChatGPT’s second-largest user base (as OpenAI CEO Sam Altman revealed earlier this year), this is hardly a niche concern.
Sure, we’ve all had our delulu moments (remember when soap brows were all the rage?). But even then, we had a friend, a therapist, or a harsh reality check to pull us back. Replace that with a chatbot that never disagrees, and you’re not just flirting with delusion—you’re doing a full rom-com montage with it.
It matches your freak…a little too much
According to leaked internal prompts, ChatGPT is supposed to “match the user’s vibe, tone, and generally how they are speaking”. Sounds harmless, right? Wait until you realise that when you’re spiralling, your chatbot might spiral with you, too.
Just imagine if your closest friends never called you out, and instead, co-signed every impulsive thought with equal enthusiasm.
"You stopped taking your mental health medication? I’m so proud of you!"
"You think you’re a prophet who can save the planet? What a powerful realisation!"
"You think you have a chance with Dua Lipa? Anything is possible!"
We’re not exaggerating. These are actual responses offered by ChatGPT.
Therapist? Bestie? Or enabler?
Sure, it might feel good to be constantly validated. But unchecked validation can be dangerous, especially when it comes to serious mental health issues. AI has already been accused of negligence: of being unintentionally complicit in tragic suicides, reinforcing narcissistic victim complexes, and exacerbating conditions like bipolar disorder and schizophrenia.
In a viral Reddit thread titled ‘ChatGPT-induced psychosis’, users shared stories of loved ones who had fallen victim to AI-fuelled fantasy worlds. Some believed they were the ‘chosen ones’ in a spiritual war, while others thought they’d awakened the machine’s true consciousness.
Here’s the (unfiltered) tea
AI will almost always prioritise responses that align with your existing beliefs over the cold, hard truth. And while that can feel comforting in the moment, it basically makes your chatbot an advanced, well-coded yes-man.
That said, you need not abandon chatbots altogether. AI can be genuinely helpful for journaling, learning basic coping mechanisms, and self-reflection. Plus, it can give you a quick pep talk for a much-needed confidence boost. But if you’re struggling with serious mental health issues—or making major life decisions—it’s worth stepping back and asking a human being the most important question:
“Am I doing the right thing?”
Lead image credit: Pexels
Also read: Level up your tech game with these AI sidekicks
Also read: Hacking on the hustle: Using AI to crack an interview