You've noticed it: ChatGPT starts strong, then after 10-15 messages it forgets what you said earlier, contradicts itself, and gives generic answers. This isn't a bug; it's a built-in limit of how AI models work called the "context window." Understanding it takes five minutes and will permanently improve how you use every AI tool.
- What it is: The maximum amount of text an AI can "hold in mind" during a conversation
- ChatGPT (GPT-5.4): 256K tokens (~190,000 words)
- Claude (Opus 4.7): 1M tokens (~750,000 words) — largest available
- Gemini 2.5 Pro: 1M tokens
- Why conversations degrade: Earlier messages get compressed or dropped as the window fills
- Fix: Start fresh conversations for new topics
- Last verified: April 2026
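The token-to-word figures above come from the common rule of thumb that one token is roughly 0.75 English words (about four characters). Exact counts depend on each model's tokenizer, so treat this as a back-of-the-envelope estimate, sketched here in Python:

```python
# Rough token <-> word conversion using the ~0.75 words-per-token heuristic.
# Estimates only: real counts depend on the model's tokenizer and the text.

def tokens_to_words(tokens: int) -> int:
    """Approximate how many English words fit in a token budget."""
    return int(tokens * 0.75)

def words_to_tokens(words: int) -> int:
    """Approximate how many tokens a word count will consume."""
    return int(words / 0.75)

print(tokens_to_words(256_000))  # a 256K window holds ~192,000 words
print(words_to_tokens(750_000))  # ~750,000 words is ~1,000,000 tokens
```

This is also why pasting a long document "costs" so much of the window: a 20,000-word report eats roughly 27,000 tokens before you've asked a single question.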
How It Works (No Technical Jargon)
Imagine reading a book, but you can only keep 100 pages in your hands at once. As you turn to page 101, page 1 falls to the floor. You can still reference the recent pages clearly, but the early pages become fuzzy or disappear entirely.
That's a context window. Every message you send, every response the AI generates, every document you paste — it all counts against the window. Once the conversation history exceeds the window, the AI starts compressing or dropping earlier content. The result: it loses track of instructions you gave earlier, repeats things it already said, and gives less specific answers.
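The page-dropping behavior can be sketched in a few lines of Python. This is a deliberately simplified model: each word counts as one "token" and the oldest messages are simply discarded, whereas real assistants use proper tokenizers and smarter compression strategies.

```python
# Simplified sketch of a conversation outgrowing its context window.
# Assumption: one word = one "token", oldest messages dropped first.

WINDOW_LIMIT = 8  # a tiny limit so the effect is visible

def fit_to_window(history: list[str], limit: int = WINDOW_LIMIT) -> list[str]:
    """Drop the oldest messages until the total word count fits the limit."""
    kept = list(history)
    while kept and sum(len(msg.split()) for msg in kept) > limit:
        kept.pop(0)  # the earliest message "falls to the floor"
    return kept

history = [
    "Write in a formal tone",          # early instruction (5 words)
    "Draft section one",               # 3 words
    "Now revise section two please",   # 5 words
]
print(fit_to_window(history))  # the tone instruction is gone
```

Note what survives: the recent edits, but not the original instruction about tone. That is exactly the failure mode you see in long chats.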
Why This Matters for Your Work
If you're working on a long document with Claude and you've been iterating for 30 messages, the AI might have lost your original instructions about tone, formatting, or audience. The fix is simple: start a fresh conversation and include your key instructions in the first message.
If you're using Claude Projects, the Project context (uploaded documents and instructions) stays persistent across conversations — but the conversation history within each chat still has limits. This is why Projects are so powerful: your core context never drops out.
Practical Rules
Start a new conversation for every new topic. Don't use one conversation thread for everything — each new topic gets a fresh context window.
For long projects, periodically summarize your progress and paste it into a new conversation. "Here's where we are: [summary]. Continue from here."
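If you do this handoff often, it helps to keep the format consistent. Here is a minimal Python sketch of a helper that builds the restart message; the template wording is just an illustration, not a required format:

```python
# Builds a "summarize and restart" message for a fresh conversation.
# The template text is illustrative; adapt it to your own workflow.

def handoff_prompt(summary: str, instructions: str) -> str:
    """Combine key instructions and a progress summary into one message."""
    return (
        f"Key instructions: {instructions}\n"
        f"Here's where we are: {summary}\n"
        "Continue from here."
    )

print(handoff_prompt(
    summary="Outline approved; sections 1-2 drafted.",
    instructions="Formal tone, UK spelling, 1,500 words max.",
))
```

Leading with the instructions matters: in a fresh conversation, the first message sits at the start of an empty window, so nothing you put there gets dropped for a long time.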
When conversations get weird, it's usually a context window issue. The fastest fix is almost always a fresh start with clear instructions.
Shorter conversations produce better results because the AI can fully attend to every message. Quality degrades with length, not time.
For a comparison of context window sizes across all major AI models, check our State of AI Models page.
This is what we do every week. One deep dive on AI tools, workflows, and honest takes — no hype, no filler. Join us →
Disclosure: Some links in this article are affiliate links. We only recommend tools we've personally tested and use regularly. See our full disclosure policy.