Cursor, the vibe-coding tool from startup Anysphere, has introduced Composer, its first in-house, proprietary coding large language model (LLM), as part of its Cursor 2.0 platform update.
Composer is designed to perform coding tasks quickly and accurately in production environments, representing a new step in AI-powered programming. It is already used by Cursor’s own engineers in day-to-day development – a testament to its maturity and stability.
According to Cursor, Composer completes most interactions in under 30 seconds while maintaining a high level of reasoning ability across large, complex codebases.
The model is described as four times faster than similarly intelligent systems and is trained for “agentic” workflows – where autonomous programming agents collaboratively plan, write, test and review code.
Cursor previously supported “vibe coding” – using AI to write or complete code from a user’s natural-language instructions, even if the user has no development training – through leading proprietary LLMs from companies such as OpenAI, Anthropic, Google and xAI. Those options remain available to users.
Composer’s capabilities are evaluated with “Cursor Bench,” an internal evaluation suite derived from real developer and agent requests. The benchmark measures not only correctness, but also the model’s adherence to existing abstractions, stylistic conventions, and engineering practices.
In this benchmark, Composer achieves frontier-level coding intelligence while generating about 250 tokens per second – roughly twice as fast as leading fast-inference models and four times faster than comparable frontier systems.
Cursor’s published comparison groups models into several categories: Best Open (e.g. Qwen Coder, GLM 4.6), Fast Frontier (Haiku 4.5, Gemini Flash 2.5), Frontier 7/2025 (the strongest model available mid-year), and Best Frontier (including GPT-5 and Claude Sonnet 4.5). Composer achieves the intelligence of mid-frontier systems while delivering the highest recorded generation speed of any class tested.
Cursor research scientist Sasha Rush provided insights into the model’s development in posts on the social network X, describing Composer as a reinforcement-learning-trained (RL) mixture-of-experts (MoE) model:
“We used RL to train a large MoE model to be really good at real-world programming and also really fast.”
Rush explained that the team co-designed Composer and the Cursor environment so the model could operate efficiently at production scale:
“Unlike other ML systems, you can’t abstract much from the overall system. We designed this project and Cursor together to enable the agent to run at the required scale.”
Composer was trained on real software-engineering tasks rather than static datasets. During training, the model ran inside complete codebases and used a range of production tools – including file editing, semantic search and terminal commands – to solve complex engineering problems. Each training iteration focused on a concrete challenge, such as producing a code edit, drafting a plan, or generating a targeted answer.
The reinforcement loop optimized for both correctness and efficiency. Composer learned to make effective tool choices, exploit parallelism, and avoid unnecessary or speculative responses. Over time, the model developed emergent behaviors such as running unit tests, fixing linter errors, and autonomously performing multi-step code searches.
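The idea of rewarding correct but efficient tool use can be illustrated with a toy RL loop. Everything below – the tool names, success rates, cost penalties, and the epsilon-greedy update – is an illustrative assumption for teaching purposes, not Cursor’s actual training code:

```python
# Toy sketch of a reward loop that trades off correctness against tool cost:
# the agent picks a tool, receives (correctness - cost), and updates a
# running value estimate. All names and numbers are hypothetical.
import random

random.seed(0)

TOOLS = {
    # tool: (chance of solving the task, cost penalty per call)
    "semantic_search": (0.9, 0.1),
    "grep_everything": (0.9, 0.5),   # just as correct, but wasteful
    "guess_blindly":   (0.2, 0.0),   # cheap but usually wrong
}

q = {tool: 0.0 for tool in TOOLS}    # value estimate per tool
counts = {tool: 0 for tool in TOOLS}

def reward(tool: str) -> float:
    p_correct, cost = TOOLS[tool]
    solved = random.random() < p_correct
    return (1.0 if solved else 0.0) - cost

for step in range(2000):
    # epsilon-greedy: mostly exploit the best-known tool, sometimes explore
    if random.random() < 0.1:
        tool = random.choice(list(TOOLS))
    else:
        tool = max(q, key=q.get)
    counts[tool] += 1
    q[tool] += (reward(tool) - q[tool]) / counts[tool]  # incremental mean

best = max(q, key=q.get)
print(best)
```

Under these assumed rewards, the accurate-but-cheap tool dominates over time, which is the same pressure that pushes a real agent away from speculative or redundant tool calls.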
This design allows Composer to work in the same runtime context as the end user, making it more adaptable to real-world coding conditions – version control, dependency management, and iterative testing.
Composer’s development followed an earlier internal prototype called Cheetah, which explored low-latency inference for coding tasks inside Cursor.
“Cheetah was the v0 of this model, mainly to test speed,” Rush said on X. “Our metrics say it (Composer) has the same speed, but much, much smarter.”
Cheetah’s success in reducing latency helped Cursor recognize speed as a key factor in developer trust and usability.
Composer maintains this responsiveness while significantly improving reasoning and task generalization.
Developers who used Cheetah in early testing found that the speed changed the way they worked. One user commented that it was “so fast I can use it to stay updated at work.”
Composer maintains this speed but extends functionality to multi-step coding, refactoring, and testing tasks.
Composer is fully integrated into Cursor 2.0, a major update to the company’s agent development environment.
The platform introduces a multi-agent interface in which up to eight agents can run in parallel, each in an isolated workspace backed by Git worktrees or remote machines.
Within this system, Composer can act as one or more of these agents, performing tasks independently or collaboratively. Developers can compare multiple results of concurrent agent executions and select the best output.
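The fan-out-and-select pattern described above can be sketched in a few lines. The agent behavior and scoring function here are placeholder assumptions (real agents would edit code in Git worktrees or on remote machines, and selection might involve running tests or human review):

```python
# Minimal sketch: N "agents" attempt the same task in isolated throwaway
# workspaces, and the highest-scoring attempt is kept. Hypothetical stand-in
# for Cursor's worktree-isolated parallel agents.
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def run_agent(agent_id: int, task: str) -> dict:
    # Each agent writes into its own temp dir, standing in for a Git worktree.
    workspace = Path(tempfile.mkdtemp(prefix=f"agent-{agent_id}-"))
    attempt = f"// {task}: attempt by agent {agent_id}\n" + "x" * agent_id
    (workspace / "solution.txt").write_text(attempt)
    return {"agent": agent_id, "workspace": workspace, "output": attempt}

def score(result: dict) -> int:
    # Placeholder heuristic: prefer the longest output. A real selector
    # might run the test suite or compare diffs.
    return len(result["output"])

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(lambda i: run_agent(i, "fix flaky test"), range(8)))

best = max(results, key=score)
print(best["agent"])
```

Isolation is the key design choice: because no two agents share a working directory, their edits cannot clobber each other, and the losing attempts can simply be discarded.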
Cursor 2.0 also includes supporting features that increase Composer’s effectiveness:
In-editor browser (GA) – lets agents run and test their code directly in the IDE and pass DOM information to the model.
Improved code review – summarizes diffs across multiple files for faster review of model-generated changes.
Sandboxed terminals (GA) – isolate shell commands executed by agents for secure local execution.
Voice mode – adds speech-to-text controls for starting and managing agent sessions.
While these platform updates expand the overall Cursor experience, Composer is positioned as the technical core that enables fast, reliable agentic coding.
To train Composer at scale, Cursor built a custom reinforcement learning infrastructure that combines PyTorch and Ray for asynchronous training on thousands of NVIDIA GPUs.
The team developed custom MXFP8 MoE kernels and hybrid sharded data parallelism that enabled large-scale model updates with minimal communication overhead.
This configuration allows Cursor to train models natively at low precision, without post-training quantization, improving both inference speed and training efficiency.
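The core idea behind block-scaled low-precision formats such as MXFP8 – a small block of values sharing one scale so each value fits in 8 bits – can be shown with a simplified sketch. This is a teaching illustration using integer codes, not Cursor’s GPU kernels or the exact MX floating-point encoding:

```python
# Simplified block-scaled quantization: each block of values shares one
# scale (analogous to the per-block exponent in MX formats), so values
# can be stored as signed 8-bit codes and recovered approximately.
def quantize_block(values, bits=8):
    qmax = 2 ** (bits - 1) - 1            # 127 for signed 8-bit
    scale = max(abs(v) for v in values) / qmax or 1.0
    return scale, [round(v / scale) for v in values]

def dequantize_block(scale, codes):
    return [c * scale for c in codes]

block = [0.11, -0.52, 0.98, 0.005]
scale, codes = quantize_block(block)
restored = dequantize_block(scale, codes)

max_err = max(abs(a - b) for a, b in zip(block, restored))
print(codes, round(max_err, 4))
```

The round-trip error stays within one quantization step of the shared scale, which is why training directly in a block-scaled low-precision format can preserve accuracy while cutting memory traffic.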
Composer training was based on hundreds of thousands of concurrent sandbox environments—each a standalone programming workspace—running in the cloud. The company has adapted its background agent infrastructure to dynamically schedule these virtual machines to support the bursty nature of large RL runs.
Composer’s performance improvements are supported by infrastructure-level changes across Cursor’s code intelligence stack.
The company has optimized its language server protocol (LSP) servers for faster diagnostics and navigation, particularly in Python and TypeScript projects. These changes reduce latency when Composer interacts with large repositories or generates multi-file updates.
Enterprise users gain administrative control over Composer and other agents through team rules, audit trails, and sandbox enforcement. Cursor’s Teams and Enterprise tiers also support the use of pooled models, SAML/OIDC authentication, and analytics to monitor agent performance across organizations.
Pricing for individual users ranges from Free (Hobby) to Ultra ($200/month) tiers, with extended usage limits for Pro+ and Ultra subscribers.
Business pricing for Teams starts at $40 per user per month, with enterprise contracts offering customized usage and compliance options.
Composer’s focus on speed, reinforcement learning, and integration into live coding workflows sets it apart from other AI development assistants like GitHub Copilot or Replit’s Agent.
Rather than acting as a passive suggestion engine, Composer is designed for continuous, agent-driven collaboration, where multiple autonomous systems interact directly with a project’s codebase.
This model-level specialization – training AI to function in the real-world environment in which it will operate – represents a significant step towards practical, autonomous software development. Composer is trained not just on text data or static code, but within a dynamic IDE that reflects production conditions.
Rush described this approach as essential to achieving real-world reliability: the model learns not only how to generate code, but also how to integrate, test, and improve it in context.
With Composer, Cursor introduces more than a fast model – it delivers an AI system optimized for real-world use and designed to operate with the same tools developers already rely on.
The combination of reinforcement learning, mix-of-experts design, and tight product integration gives Composer a practical edge in speed and responsiveness that differentiates it from general-purpose language models.
While Cursor 2.0 provides the infrastructure for multi-agent collaboration, Composer is the core innovation that makes these workflows viable.
It’s the first coding model designed specifically for production-level agent coding – and an early look at everyday programming when human developers and autonomous models share the same workspace.