Claude Opus 4.7 shipped on April 16, 2026. Within 48 hours, users across Reddit, Hacker News, and X reported the same thing: prompts that worked on Claude 4.6 suddenly produced shorter, terser, sometimes worse results.
The model didn't get dumber. It got literal. Anthropic's Claude 4.7 prompting guide states it directly: "Prompts written for earlier models can sometimes now produce unexpected results. Where previous models interpreted instructions loosely or skipped parts entirely, Opus 4.7 takes the instructions literally."
Claude 4.6 guessed what you meant. Claude 4.7 does exactly what you say — nothing more, nothing less. That single change breaks every vague prompt you've been using. If you're newer to structured prompts, skim our prompt engineering for beginners piece first. This guide covers the 7 changes that matter most and gives you concrete fixes drawn from Anthropic prompting best practices and the 4.7-specific release notes.
Key Takeaway
Claude 4.7 is stronger than 4.6. But it stopped compensating for sloppy prompts — which is largely why Claude 4.7 feels worse in everyday use even though benchmarks improved. If your output feels worse, the problem is your prompt — not the model.
What Actually Changed in Claude 4.7?
Anthropic tuned 4.7 for more predictable, instruction-precise behavior. This is valuable for developers building AI agents and API pipelines where you want predictable output. The tradeoff is that the model is less likely to volunteer information or expand on a task unless you explicitly ask.
Five major shifts — same vendor, different defaults (when you're choosing between assistants entirely, our ChatGPT vs Claude breakdown covers cross-model tradeoffs):
| Behavior | Claude 4.6 (Old) | Claude 4.7 (New) |
|---|---|---|
| Instruction following | Interpreted loosely, filled gaps | Takes instructions literally |
| Response length | Roughly consistent regardless of input | Sizes output to perceived task complexity |
| Tool usage | Called tools frequently | Reasons more, uses fewer tools |
| Tone | Warm, validation-forward, emoji-friendly | Direct, opinionated, almost zero emoji |
| Effort levels | Less differentiation between levels | Respects effort levels strictly — low effort = minimal output |
Why Do Vague Prompts Break on Claude 4.7?
"Review this contract" used to get you a comprehensive analysis. On 4.7, it gets you exactly that — a review. Not a risk assessment, not severity ratings, not rewrite suggestions. Just a review.
❌ OLD PROMPT
Review this contract.
✅ 4.7 PROMPT
Review this contract. Flag risks per clause. Rate severity 1-5. Suggest one rewrite per risky clause. Return as a table.
The fix: Name every output you want. Name the order. Name the boundaries. If you want a table, say "return as a table." If you want bullet points, say "use bullet points." 4.7 won't guess your preferred format.
Why Does Claude 4.7 Return 8 Paragraphs for 'Summarize This'?
Claude 4.6 gave roughly the same length regardless of input size. Claude 4.7 calibrates response length to how complex it judges the task to be. A long document with "summarize" produces a long summary. If you want 5 bullet points, you need to say "5 bullet points."
❌ OLD PROMPT
Summarize this report.
✅ 4.7 PROMPT
Summarize this report in exactly 5 bullet points. Each bullet under 15 words. First word of each bullet: an action verb.
The fix: Always define the format and the cap. "Under 200 words." "Exactly 5 bullets." "3 paragraphs maximum." Anthropic's own guide says: if you want concise output, add "Provide concise, focused responses. Skip non-essential context, and keep examples minimal."
Why Don't Negative Instructions Work on Claude 4.7?
This is the most counterintuitive change. "Don't use jargon" doesn't work well on 4.7 because negative instructions get followed too literally — the model focuses on what NOT to do rather than what TO do.
❌ OLD PROMPT
Don't use jargon. Don't use buzzwords. Don't sound like a marketer.
✅ 4.7 PROMPT
Write in plain English a 16-year-old could read aloud. Use short, concrete words. Replace "leverage" with "use." Replace "scalable" with "works at any size."
The fix: Anthropic's guide directly recommends telling Claude what to do instead of what not to do. Positive examples showing the desired communication style are more effective than negative instructions.
---

📬 Getting value from this? We publish one deep dive per week on AI tools and workflows. Join readers who get it in their inbox →

---

How Should You Start Every Claude 4.7 Prompt?
"Can you help me with the email?" is a question. 4.7 might answer "yes" and wait. Action verbs tell 4.7 to ship something specific.
❌ OLD PROMPT
Can you help me with the email?
✅ 4.7 PROMPT
Write a follow-up email. Goal: book a meeting by Friday. Under 90 words. Tone: confident, casual, specific.
The fix: Start every prompt with a verb. Write. Analyze. Compare. List. Draft. Build. Action verbs eliminate ambiguity about what 4.7 should produce.
What Are Claude 4.7's Effort Levels and Which Should You Use?
Claude 4.7 introduced a new effort level: xhigh, sitting between high and max. Anthropic's own recommendation is xhigh for coding and agentic work, and a minimum of high for intelligence-sensitive tasks.
Here's why this matters: low-effort Claude 4.7 is roughly equivalent to medium-effort Claude 4.6. If you left the effort setting on default, you were getting 4.6-equivalent output from a model you expected to be better. That explains most of the "4.7 is worse" complaints.
| Effort Level | Best For | Watch Out |
|---|---|---|
| max | Hardest reasoning problems | Prone to overthinking, diminishing returns |
| xhigh (new) | Coding, agentic work — Anthropic's recommendation | Higher token usage |
| high | Most intelligence-sensitive tasks | Minimum recommended for quality work |
| medium | Cost-sensitive, moderate complexity | May under-think complex problems |
| low | Quick lookups, simple tasks | Risk of shallow reasoning — 4.7 takes "low" literally |
The fix: If you're using Claude Pro in the chat interface, make sure "Adaptive thinking" is turned on — it lets 4.7 allocate reasoning based on task complexity. If you're on the API, set effort to xhigh for coding and high for everything else. Don't use low unless the task is truly simple.
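On the API side, that per-task guidance can be wrapped in a small helper. A minimal sketch, assuming the Messages API accepts an `effort` parameter as described above — the parameter name, model ID, and task categories here are illustrative, not confirmed API details:

```python
# Sketch: pick an effort level per task type and build request kwargs
# for client.messages.create(). The `effort` parameter and model ID
# are illustrative, based on the levels described in the table above.

EFFORT_BY_TASK = {
    "coding": "xhigh",   # Anthropic's recommendation for coding work
    "agent": "xhigh",    # ...and for agentic work
    "analysis": "high",  # minimum recommended for quality work
    "lookup": "low",     # only for truly simple tasks
}

def build_request(task_type: str, prompt: str) -> dict:
    """Return request kwargs, defaulting to 'high' for unknown task types."""
    return {
        "model": "claude-opus-4-7",          # illustrative model ID
        "max_tokens": 4096,
        "effort": EFFORT_BY_TASK.get(task_type, "high"),
        "messages": [{"role": "user", "content": prompt}],
    }

params = build_request("coding", "Refactor this function and add tests.")
```

You'd then pass `**params` to the client call. The point of the lookup-with-default is that nothing silently runs at low effort: anything you haven't classified falls back to high.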
How Do You Get Creative Output from Claude 4.7?
The phrase "Go beyond the basics" comes directly from Anthropic's own documentation. It pushes 4.7 past the literal minimum on creative or open-ended tasks. Without it, 4.7's literal nature means it does exactly what you asked — which for creative work often means the minimum viable output.
📋 CREATIVE TASK TEMPLATE
Build a landing page for my AI consultancy.
Sections (in this order): Hero (headline + subheadline + CTA), Logo bar (6 client placeholders), 3 case-study cards (problem / what I did / result), Service blocks, Testimonial carousel (3 quotes), About me (180-word bio), Newsletter signup, Footer.
Style: editorial, serif headlines, sans-serif body, generous whitespace. Animations: subtle on scroll.
Go beyond the basics. Polish like it's a real client deliverable.
The fix: For any creative, design, or open-ended task, add "Go beyond the basics" at the end. It's the difference between 4.7 doing the minimum and 4.7 doing its best work.
How Do You Get Claude 4.7 to Sound Less Robotic?
Claude 4.7 is more direct, less validation-forward, and uses almost zero emoji compared to 4.6. If your use case needs warmth — customer support, coaching, educational content — you need to explicitly request it.
💡 Pro Tip
Paste 2-3 sentences in the voice you want, and tell Claude to match the rhythm. Example: "Match this tone: 'Hey! Great thinking on that approach. Here's what I'd tweak...'" This is more effective than saying "be warm" because 4.7 follows concrete examples better than abstract instructions.
Anthropic's guide recommends: "Use a warm, collaborative tone. Acknowledge the user's framing before answering." Add this to your system prompt or Claude Projects instructions.
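If you're calling the API rather than using Projects, that tone guidance belongs in the `system` parameter so it persists across every turn. A minimal sketch — the model ID is illustrative, and the quoted instructions are the ones from this section:

```python
# Sketch: bake a warm tone into the system prompt rather than
# repeating it in every user message. Model ID is illustrative.

SYSTEM_PROMPT = (
    "Use a warm, collaborative tone. "
    "Acknowledge the user's framing before answering. "
    "Match this voice: 'Hey! Great thinking on that approach. "
    "Here's what I'd tweak...'"
)

def build_chat_request(user_message: str) -> dict:
    """Return kwargs for client.messages.create() with the tone baked in."""
    return {
        "model": "claude-opus-4-7",   # illustrative model ID
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,      # persistent instructions, not a user turn
        "messages": [{"role": "user", "content": user_message}],
    }
```

Keeping the voice sample inside the system prompt mirrors the pro tip above: a concrete example in every request, not an abstract "be warm" you have to remember per message.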
Try it yourself
Paste any prompt and get a better version in seconds — works for Claude 4.7's literal style.
Open Prompt Optimizer — Free →

Quick Cheat Sheet: 4.6 Prompts → 4.7 Prompts
| Old Habit | 4.7 Fix |
|---|---|
| "Review this" | Name every output + format + constraints |
| "Summarize this" | Define length, format, and structure |
| "Don't use jargon" | Positive instructions: "Write at a 10th-grade level" |
| "Can you help me?" | Action verbs: "Write / Analyze / Build" |
| Default effort level | Set to xhigh (coding) or high (everything else) |
| Expecting creative extras | "Go beyond the basics" |
| Assuming warm tone | Explicitly set tone + paste voice examples |
The Bigger Picture: This Is What All Models Are Moving Toward
Claude 4.7's literal behavior isn't a quirk — it's the direction all AI models are heading. OpenAI updated their model spec to emphasize "consider not just the literal wording but the underlying intent." Both companies are converging from opposite directions: Anthropic adding precision, OpenAI adding intent inference. The same skill — being explicit about what you want — is becoming the unlock on both sides.
If your prompts work well on Claude 4.7, they'll work well everywhere. The ICCSSE framework (Identity, Context, Constraints, Steps, Specifics, Examples) was built for exactly this kind of literal model behavior — every element forces you to be explicit about what you want.
Want to upgrade your prompts right now? Paste any prompt into the free Prompt Optimizer and see it restructured for the kind of explicit, literal style that 4.7 demands. Or take the Prompt Grader to score your existing prompts against the ICCSSE framework.
---

📬 Want more like this? We publish weekly on the AI skills and tools that actually matter. Subscribe free →

---

Try it yourself
Not sure if Claude 4.7 is right for your task? Take the 60-second quiz.
Open Model Picker Quiz — Free →

Frequently Asked Questions
Is Claude 4.7 worse than 4.6?
No. It's more capable — it scored 87.6% on SWE-bench vs 80.8% for 4.6. But it stopped compensating for vague prompts. If your output feels worse, the prompt needs updating, not the model.
Do I need to rewrite all my prompts?
Not all. Simple, specific prompts still work fine. The prompts that break are the vague ones — "review this," "summarize this," "help me with this." Any prompt that relied on 4.6 guessing your intent needs updating.
What's the "adaptive thinking" setting?
In Claude's chat interface, adaptive thinking lets the model decide how much reasoning to apply based on task complexity. Turn it on for best results. On the API, this maps to the effort parameter — start with xhigh for coding, high for everything else.
Where can I read Anthropic's full guide?
The full Anthropic prompting best practices live at platform.claude.com/docs — search for "Prompting best practices." It covers everything from basic prompting to tool use, thinking configuration, and agentic systems.
---

Disclosure: Some links in this article are affiliate links. We only recommend tools we've personally tested and use regularly. See our full disclosure policy.