
20 March 2026

Your AI chat isn't bad at recommendations. It's bad at confidence.

Behavioral Psychology·AI·Product

There's a concept in behavioural economics called the framing effect – the observation that identical information, presented differently, produces wildly different decisions. Wine tastes better from a heavy bottle. The same painkiller works faster if it's expensive. And the same product recommendation converts better if it comes with a reason.

Most AI shopping chats ignore this completely. They return good products in a format that makes them look arbitrary. One paragraph. One grid. No hierarchy. The user sees twelve items and has no idea why any of them are there. The model did the hard work. The interface threw it away.

Confidence is manufactured, not discovered

The fix for a "dumb-looking" AI chat is almost never a better model. It's a better frame. Structure the response as a top 3 with a one-line rationale per pick – "this because X" – and suddenly the same output feels authoritative. Ellen Langer's famous photocopier experiment demonstrated this decades ago: people comply with requests at dramatically higher rates when you add "because" followed by almost any reason, even a redundant one ("because I have to make copies"). Giving each recommendation a rationale isn't just good UX. It's exploiting a deep cognitive shortcut.

Below the top 3, a secondary grid for everything else. This isn't just layout – it's anchoring. The picks with rationales become the reference point. The grid items are implicitly "also good but not as good." Hierarchy does the persuasion that copy never could.
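None of this requires a different model – only a stricter contract between the model's output and the interface. A minimal sketch of what that contract could look like, in TypeScript; the shape and field names (topPicks, alsoConsider, rationale) are illustrative assumptions, not an existing API:

// Hypothetical response contract for a shopping chat. Field names are
// illustrative assumptions, not a real API.
interface Pick {
  productId: string;
  title: string;
  rationale: string; // the one-line "this because X"
}

interface RecommendationResponse {
  topPicks: Pick[];     // at most three, each with a visible rationale
  alsoConsider: Pick[]; // the secondary grid: "also good, but not as good"
}

// Enforce the hierarchy at the boundary, not in the prompt: candidates
// without a usable rationale are demoted to the grid.
function frameResponse(candidates: Pick[]): RecommendationResponse {
  const withReason = candidates.filter(p => p.rationale.trim().length > 0);
  const topPicks = withReason.slice(0, 3);
  const topIds = new Set(topPicks.map(p => p.productId));
  const alsoConsider = candidates.filter(p => !topIds.has(p.productId));
  return { topPicks, alsoConsider };
}

The useful part isn't the code – it's that the hierarchy is enforced where the answer meets the screen, not left to the prompt.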

Who's asking whom?

Most chat interfaces greet the user with something like "Jonny, what's good for spring?" This seems friendly. It's actually a catastrophic framing error.

The user opened the chat to consult an expert. The greeting immediately repositions them as the one providing answers. The power dynamic inverts before the conversation starts. Flip it to "Got something in mind?" and the AI becomes the advisor, the user becomes the client. Same screen. Same pixels. Completely different relationship.

Rory Sutherland calls this "reframing" – changing the meaning without changing the substance. A first-class lounge isn't a room with better chairs. It's a room where you feel like a person who deserves better chairs. A shopping AI that greets you as a client instead of quizzing you like a survey isn't smarter. It just feels like it knows what it's doing.

The unshoppable recommendation paradox

Here's an expensive mistake disguised as a minor oversight: the chat identifies a "winner" product, describes why it's perfect, builds genuine purchase intent – and then doesn't link to it. The user has to leave the chat, find the product manually, and complete the purchase elsewhere in the app.

This is the behavioural equivalent of a sommelier recommending a perfect wine and then telling you to go find the cellar yourself. Every step between intent and action is a leak in the funnel. Not because users are lazy – because the moment of highest conviction is the moment of lowest friction tolerance. Interrupt that and you don't get a slightly lower conversion rate. You get abandonment.

Every product mention needs a tappable path to purchase. Not later. Not in the next screen. Right there, in the recommendation itself.
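One way to make that non-negotiable is to put the link in the type rather than in the renderer's good intentions. A sketch with assumed names (purchaseUrl, toCard); the point is that a recommendation without a tappable path simply cannot be constructed:

// A recommendation the interface cannot render without a path to purchase.
// ShoppablePick and ProductCard are illustrative shapes, not a real API.
interface ShoppablePick {
  productId: string;
  title: string;
  rationale: string;
  purchaseUrl: string; // required – no card exists without it
}

interface ProductCard {
  heading: string;
  body: string;
  cta: { label: string; href: string };
}

function toCard(pick: ShoppablePick): ProductCard {
  return {
    heading: pick.title,
    body: pick.rationale,
    cta: { label: "Buy now", href: pick.purchaseUrl }, // tappable, in place
  };
}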

Memory should be selectively forgetful

Expanding a fashion-only chat to cover gifting and homewares surfaces a subtle psychological trap: context collapse. A user buys a birthday candle, and the system – dutifully learning preferences – starts recommending candles for months. The algorithm is working correctly. The experience is deeply wrong.

Good personalisation requires selective amnesia. Gift purchases should inform the current session but not the long-term persona. The same logic applies to one-off searches, seasonal purchases, and anything bought for someone else. The creepiest AI products aren't the ones that know too little. They're the ones that refuse to forget.

There's a reason people clear their browser history. A smart system does it for them.
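Concretely, that selective amnesia can live at the single point where a purchase signal is written back. A sketch under stated assumptions – the store names and the isGift/isSeasonal/isOneOff flags are hypothetical, and in practice that classification would come from context or an explicit "this is a gift" toggle:

// Decide where a purchase signal is allowed to live. Store names and
// flags are illustrative assumptions, not an existing system.
interface PurchaseSignal {
  category: string;
  isGift: boolean;     // bought for someone else
  isSeasonal: boolean; // e.g. a birthday candle
  isOneOff: boolean;   // a one-time search or purchase
}

interface PreferenceStores {
  session: PurchaseSignal[]; // informs the current conversation only
  persona: PurchaseSignal[]; // informs long-term recommendations
}

function recordPurchase(stores: PreferenceStores, signal: PurchaseSignal): void {
  // Every purchase is relevant right now, so it always shapes the session.
  stores.session.push(signal);

  // Only signals that say something durable about the user are allowed
  // to shape the long-term persona.
  if (!signal.isGift && !signal.isSeasonal && !signal.isOneOff) {
    stores.persona.push(signal);
  }
}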

The compound fragility of trust

Trust in AI products follows a specific pattern: it accumulates through dozens of good interactions and collapses through a single bad one. A voice chat that stays open after the user hangs up. A recommendation that can't be tapped. A persona that feels invasive.

None of these are model failures. They're experience failures – the kind that happen when the team building the AI and the team building the interface don't share a common understanding of what "good" means at the moment of interaction.

The model is never the bottleneck. The bottleneck is always the last metre between a good answer and a good experience.
