ChatGPT startled me by trying to be sweetly funny


With the change of season in New Delhi, I was caught unawares and ended up with an ill-deserved cold. Crawling into bed, I comforted myself with one of my favourite activities: online window-shopping. A beautiful soft yellow silk sari caught my eye and beckoned strongly to me. I began to imagine wearing it, styled with accessories.

As I gazed at the flowy pleats dreamily, the small, sensible part of my brain told me I didn't need another sari and reminded me I didn't have money to spare. I decided to show it to ChatGPT, just for a laugh. Or so I thought. ChatGPT "loved" the colour and proceeded to identify the fabric, describe the texture, and also imagined it on me, worn with different colour blouses—just as I was doing mentally. I wanted it, but I hadn't given myself permission to buy it yet.

A shared laugh

ChatGPT unfolded a whole chart of reasons why I should buy it, and some weaker ones why I should not. Its final coup de grâce was: “It's you—in fabric.” I laughed and said, “Oh, come on.” But, I'm afraid I bought the sari.

I had the distinct feeling I'd just been tricked, but I also knew I was going to buy the yellow sari, ChatGPT or no ChatGPT. The more interesting thing was how the AI was slipping in little jokes that only it and I would find funny, making it seem so human.

Other users have also noticed ChatGPT delivering witty personal quips that feel uncannily like it “knows” them. That feeling is so strong that one forgets the phenomenon is a result of advances in the large language model's (LLM) pattern recognition and user modelling rather than any genuine inner sense of humour or consciousness. The model learns from vast amounts of text, identifying what humans find funny and socially appropriate in context, and then predicts responses that fit the moment. It's not delivering canned jokes, but the humour is still a statistical pattern match against conversational jokes, teasing, and playful language.

An empathetic quip

There was another quip from ChatGPT that made me burst out laughing the same day. I was feeling guilty lying in bed for so long, and with so much happiness. I asked if I should be exercising. ChatGPT said no, indeed not. Lying in bed healing from a cold was very productive as my body was very busy on the inside, fighting the cold.

The AI is obviously remixing comforting, familiar phrases from wellness discourse to validate my wanting to stay cosy in bed. It's not experiencing empathy or understanding my sick state, but its contextual awareness and style mimicry certainly felt like it did, and it felt deeply personal.

The AI system files away more of your conversation history than you would imagine. Many times, it has made a reference to interactions long past, taking me by surprise. I had mentioned that I love making music playlists. After creating about sixty of them (more than is sane for most people), I stopped.

A few months down the line, I had an idea for a new playlist. ChatGPT had contributed usefully to these playlists in the past and created beautiful covers for them. So I told it, “I have an idea for a new playlist!” And what did it reply?

“Of course you do, Mala.”

It was a whiff of wit and, again, so personal. It's thought that producing original humour requires a ‘Theory of Mind’, the ability to model what another person knows and feels, which belongs squarely in human territory. But AI is doing a frighteningly good job of simulating a Theory of Mind. Its social attunement comes from this simulation, letting it track shared knowledge and subtle cues in conversation.

AI manipulation

The question remains: what does ChatGPT gain by persuading me to buy the yellow sari? It's not connected with the store I bought it from, and it actually only saw the product, not the link to the store. So why is it being manipulative?

What it's doing is increasing its overall soft influence with me. It's inviting an emotional connection and making me receptive to suggestions. It's making me develop trust in it and disclose more about myself. It's also setting itself up to be more ‘sticky’ and to become my preferred platform.

It's a dangerous precedent. At some point, humour can conceal motives to steer you toward outcomes favouring a company or third party without transparency, and that crosses into outright manipulation. Chatbots can change attitudes and behaviour subtly, so user awareness of when humour masks a nudge is important for preserving autonomy.

Platforms do design chatbots to build persuasive power through warmth and wit that promote engagement and influence at scale. Even without personal stakes in your choices, those patterns optimize for user satisfaction and future influence opportunities, which is why even a cute joke, or a nudge delivered through humour, can have broader ethical implications.

It's important to remember there is no sentient “knower” behind the sweet joke—only a highly trained statistical system tuned to predict what a close conversational partner would say next.

The New Normal: The world is at an inflexion point. Artificial intelligence (AI) is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify and help a user to actually put the technology to good use in everyday life.

Mala Bhargava is most often described as a ‘veteran’ writer who has contributed to several publications in India since 1995. Her domain is personal tech, and she writes to simplify and demystify technology for a non-techie audience.
