April 10, 2025

ChatGPT's new memory feature is all about lock-in

OpenAI announced a new version of ChatGPT's "memory" feature today:

  • Previously, ChatGPT occasionally noticed when it should remember something about the user, such as what their job is or where they live.
  • Now, ChatGPT will draw from the user's entire chat history, not just from the intentionally remembered fact base.

(I haven't noticed more relevant responses in the product yet – but let's assume they've achieved what their tweet claims.)

We've discussed switching costs for AI products before, and for ChatGPT in particular. This change is a major step toward lock-in for ChatGPT users.

Switching costs often emerge from accumulating data in a product. A consumer example is Spotify: the more you use it, the better it learns your musical taste, and the better its recommendations get. Switching to a different streaming service would mean resetting that recommendation quality to essentially zero.

The same dynamic applies to ChatGPT's new approach to memory: the more you interact with it, the better (in theory) the interactions get. Switching to Claude or Gemini would reset that personalization. (I imagine OpenAI is not going out of its way to make it easy to port your chat history to another service.)

But the switching cost is even steeper in the enterprise context. If you've been using ChatGPT for work, your chat history will be full of business data and context, which makes future answers far more useful.

This is also why AI products like Cursor actually do have switching costs (despite what Sam Lessin said recently).

A friend recently pointed out that Cursor works much better with rule files, which in a business context can enforce style guides, coding standards, test procedures, and more. Building out and maintaining those rule files creates a meaningful switching cost, akin to the ChatGPT chat history.
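
For illustration, here's a minimal sketch of what such a rules file might contain, using Cursor's plain-text .cursorrules convention (the specific rules, and the redactForLogs() helper, are invented for this example):

    # .cursorrules: checked into the repo root so every teammate's
    # Cursor sessions follow the same conventions.

    - Write all new backend code in TypeScript with strict mode enabled.
    - Use named exports only; no default exports.
    - Every new function needs a unit test in the adjacent __tests__ directory.
    - Never log customer PII; route anything sensitive through
      redactForLogs() (a hypothetical in-house helper).

No individual rule matters much; what matters is that months of accumulated, team-specific instructions like these don't transfer automatically to a competing tool.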