Key Takeaways
- Anthropic released Claude Opus 4.7 on April 16, 2026 — generally available across all Claude products and major cloud platforms at the same price as Opus 4.6.
- The new model clears 70% of coding tasks on CursorBench, up from 58% for Opus 4.6 — and handles long, multi-step agentic work with far fewer errors.
- Claude Cowork, the desktop task-automation platform, now creates polished PowerPoint presentations, Excel workbooks, and document deliverables autonomously — and new Skills integrations with Figma and Canva push Anthropic deeper into the design and creation space.
- For everyday users, the upside is simple: fewer steps between an idea and a finished product.
What just happened with Claude Opus 4.7?
Anthropic released Claude Opus 4.7 on April 16, 2026 — generally available across all Claude products, major cloud platforms, and the API.
The pricing stayed exactly the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens, according to Anthropic's official announcement. That's relevant because you're getting a meaningfully better model for the same money.
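Those per-million-token rates make per-request cost a quick back-of-the-envelope calculation. A minimal sketch, using the rates from the announcement (the token counts in the example are illustrative, not from the source):

```python
# API rates for Opus 4.7 (unchanged from 4.6), per Anthropic's announcement
INPUT_PER_MTOK = 5.00    # USD per million input tokens
OUTPUT_PER_MTOK = 25.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single API call at the published rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PER_MTOK

# Example: a 12k-token prompt that produces a 3k-token response
print(round(request_cost(12_000, 3_000), 4))  # 0.135
```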
The short version of what's different: Opus 4.7 is significantly better at coding, reasoning through long tasks, following instructions precisely, and seeing images in much higher resolution than before. We'll get into each of those. But the bigger story is what this release signals about where Anthropic is going.
What Claude Opus 4.7 actually brings to the table
Think of this as Opus 4.6 with a genuine intelligence upgrade — not a marketing refresh, but measurable improvements across the tasks people actually care about.
On CursorBench, the real-world coding evaluation run by the makers of Cursor, Opus 4.7 cleared 70% of tasks. Opus 4.6 cleared 58%. That's not a small delta — it's the difference between a tool that handles your complex refactor and one that gives up halfway.
Bolt, the app-building platform, reported up to 10% better performance on longer app-building sessions with Opus 4.7. Notion saw a 14% improvement over Opus 4.6 on multi-step agent tasks — and Opus 4.7 did it with fewer tokens and a third of the tool errors. Fewer errors with less compute is exactly the kind of efficiency that makes AI genuinely cheaper to use, not just theoretically capable.
Instruction following is notably tighter. Anthropic notes that prompts written for earlier models can sometimes produce unexpected results now because Opus 4.7 takes instructions literally. That's a good problem to have — it means the model does what you said, not what it guesses you meant.
Vision got a meaningful upgrade too. Opus 4.7 now accepts images up to 2,576 pixels on the long edge — roughly 3.75 megapixels — which is more than three times the resolution accepted by prior Claude models. That opens up real use cases: reading dense screenshots, working from complex technical diagrams, extracting data from charts.
| What Changed | Opus 4.6 | Opus 4.7 |
|---|---|---|
| CursorBench coding score | 58% | 70% |
| Max image resolution (long edge) | ~860px | 2,576px (~3.75 megapixels) |
| Notion agent tasks vs. Opus 4.6 | Baseline | +14%, 1/3 fewer tool errors |
| Bolt app-building sessions | Baseline | Up to 10% better |
| Effort levels | low / medium / high / max | Adds xhigh (between high and max) |
| Price (API) | $5 in / $25 out per M tokens | Same — $5 in / $25 out per M tokens |
Why does this release actually matter?
Every AI company ships model updates. The reason Opus 4.7 matters beyond the version number is the combination of improvements and where they land.
The gains show up specifically in the areas where AI has frustrated developers the most: long multi-step tasks that fall apart halfway through, agentic workflows that keep needing babysitting, and coding help that produces functional-looking but subtly wrong code. Opus 4.7 addresses all three in ways that early users measured and reported.
Anthropic also added a new effort level called xhigh — sitting between high and max — giving developers finer control over the tradeoff between reasoning depth and speed. In Claude Code, Anthropic raised the default effort level to xhigh for all plans. That's a quiet but important signal: they're confident enough in the model's efficiency to set a higher default without worrying users will hit walls.
Replit's president Michele Catasta said Opus 4.7 was "an easy upgrade decision" because it delivers the same quality at lower cost. When cost drops while quality holds or improves, adoption accelerates. That's the kind of unit economics that reshapes how developers build.
The bigger story: Claude Cowork and the creation platform angle
The second headline from today's release isn't about benchmarks. It's about where Anthropic is pushing Claude next.
Claude Cowork — Anthropic's desktop task-automation product — gives non-technical users a way to hand off a goal and receive a finished deliverable. That means PowerPoint presentations, Excel workbooks, Word documents, structured reports, and market research packages created from a natural-language request. A Jamf engineering director told Anthropic his team turned a complex performance review process — one that would otherwise have required a React development team — into a guided, interactive Cowork experience in 45 minutes, according to the Claude Cowork product page.
That's not a chatbot. That's delegation.
Claude Skills — a system for encoding procedures and workflows that Claude applies automatically — has now added integrations with Figma, Canva, and Box. Canva's head of ecosystem Anwar Haneef confirmed that Skills will let teams "create stunning, high-quality designs effortlessly." Figma's product director Matt Colyer said Skills enable teams to "build Figma assets that stay true to their craft and vision, and move fluidly between code and the canvas."
Combined with Opus 4.7's improved ability to produce "higher-quality interfaces, slides, and docs" (Anthropic's own language from the announcement), the message is clear: Anthropic is building toward a platform where you describe what you want and get a finished product back.
How is this different from what AI tools could do before?
The gap Opus 4.7 closes is the last mile — tasks that stall halfway, code that works in isolation but breaks in context, long workflows that need constant supervision.
The common frustration with AI tools has been exactly that. You'd get a strong first draft and spend an hour fixing it. Or you'd try a multi-step task and the model would stall on step three.
Opus 4.7's improvements target that gap directly. The model catches its own logical faults during the planning phase, according to Clarence Huang, VP of Technology at a major fintech platform. CodeRabbit reported a 10%+ lift in recall on difficult-to-detect bugs without losing precision. XBOW, which runs autonomous penetration testing, reported a jump from 54.5% to 98.5% on visual-acuity tasks — effectively removing their biggest blocker for using Claude on a whole class of security work.
For non-developers, Cowork's approach is more direct: Anthropic observed that knowledge workers were bypassing Claude's chat interface for Claude Code because they needed something that could handle complex, multi-step work without requiring them to coordinate each step. Cowork is the response to that — the same underlying capability, packaged for people who work in files, folders, and applications rather than code editors.
What can regular users do with this starting today?
Here's the practical breakdown for people who aren't deep into the AI ecosystem but want to know if this affects them.
If you write code, the upgrade is real. Opus 4.7 handles the hard tasks — long refactors, debugging across files, multi-step architectural decisions — with significantly more reliability. You can hand it tougher work and expect it to follow through. Vercel's Joe Haddad noted that Opus 4.7 even "does proofs on systems code before starting work — new behavior we haven't seen from earlier Claude models."
If you build presentations, reports, or documents as part of your job, Claude Cowork with Opus 4.7 as its backend can now produce finished deliverables — not outlines, not raw material to clean up, but structured documents — from a goal you describe. The Cowork platform is included in Pro ($17/month) and Max plans.
If you work with images or diagrams — reading dense charts, reviewing technical documentation, doing patent research — the higher-resolution vision is a genuine capability expansion. Tasks that weren't practical before are now.
For anyone thinking about AI agent workflows, the improvements to multi-agent coordination and long-running task reliability mean agents built on Opus 4.7 will run further and break less often. That's the difference between a workflow that requires monitoring and one you can actually set and trust.
What this tells us about Anthropic's product strategy
This release looks like the clearest signal yet that Anthropic is building toward a vertically integrated Claude platform — not just a model vendor. The combination of Opus 4.7, Claude Cowork, Claude Code, Claude Skills, and integrations with Figma, Canva, and enterprise tools like Slack and Excel points toward a single answer to the question: "What do you need to get complex work done?" The answer Anthropic is developing is: Claude handles it.
That's a significant competitive expansion. It puts Anthropic into overlap with productivity software, creative tools, and enterprise automation in ways that model releases alone never did. Whether Anthropic can execute well across all of these simultaneously is the open question. Building a model is a different discipline than building a consumer product at scale. But the direction is unmistakable: Claude is no longer positioned as an assistant you talk to. It's being positioned as an engine that builds things for you.
The timing, with Opus 4.7 releasing the same week as CursorBench results, Bolt usage data, and the Cowork enterprise announcement, suggests coordinated momentum rather than a scattered product rollout. Whether that holds together in practice is something to watch over the next few months.
What should you actually do right now?
Start with your current workflows. The upgrade is a direct drop-in, but plan for one tokenizer change that affects cost and output volume.
If you're already using Opus 4.6 via API, the migration is straightforward. One thing to plan for: Opus 4.7 uses an updated tokenizer that can produce 1.0–1.35× more tokens on the same input depending on content type. It also thinks more at higher effort levels. Anthropic's testing shows net token efficiency is favorable on coding workloads, but measure it on your real traffic before assuming costs go down.
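The practical move is to bound your exposure before migrating. A rough projection under the stated 1.0–1.35× multiplier range (the monthly token volumes below are illustrative assumptions, not figures from the source):

```python
# Rough cost-impact check for the Opus 4.7 tokenizer change.
# The 1.0-1.35x range comes from Anthropic's guidance; the actual
# multiplier depends on content type, so measure on real traffic.
INPUT_PER_MTOK = 5.00    # USD per million input tokens
OUTPUT_PER_MTOK = 25.00  # USD per million output tokens

def monthly_cost(input_tok: int, output_tok: int, multiplier: float = 1.0) -> float:
    """Project monthly spend if token counts scale by `multiplier`."""
    return (input_tok * multiplier * INPUT_PER_MTOK
            + output_tok * multiplier * OUTPUT_PER_MTOK) / 1_000_000

# Example: a workload of 200M input / 40M output tokens per month
baseline = monthly_cost(200_000_000, 40_000_000)          # no change
worst_case = monthly_cost(200_000_000, 40_000_000, 1.35)  # full 1.35x
print(f"${baseline:.0f} -> up to ${worst_case:.0f}")
```

Note this brackets the tokenizer effect only; it does not capture the offsetting efficiency gains Anthropic reports on coding workloads, which is why measuring on live traffic is the real answer.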
If you've been sitting on the fence about Claude for creative or productivity work, today is a reasonable time to revisit that decision. The combination of a stronger model and Cowork's deliverable-focused workflow is a genuinely different value proposition than a chat window.
If you're a developer building on Claude, the new xhigh effort level and task budgets in public beta give you more levers to tune quality against cost. Those tools didn't exist in Opus 4.6.
Claude Opus 4.7 is available today at claude.ai. The API uses model ID claude-opus-4-7.
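For API users, switching is a one-line model-ID change. A minimal sketch of the request, assuming the Messages API shape used by earlier Claude models (the prompt text is illustrative):

```python
# Request payload for Opus 4.7; only the model ID changes vs. Opus 4.6.
request = {
    "model": "claude-opus-4-7",  # model ID from the announcement
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize the attached diff."}
    ],
}

# With the official Python SDK this payload would be sent as:
#   client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
#   message = client.messages.create(**request)
print(request["model"])
```

How the new xhigh effort level is selected per request is not specified in the source, so it is omitted here; check the API documentation before relying on a particular parameter name.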
Fact-checked by Jim Smart
