Side-by-side comparison of AI visibility scores, market position, and capabilities
Open-source AI agent framework (formerly MemGPT). Three-tier persistent memory. $10M seed from Felicis. Letta Code #1 on Terminal-Bench. Jeff Dean backed.
Letta (formerly MemGPT) is an open-source AI agent framework developed by researchers at UC Berkeley to solve one of the core limitations of large language models in production: the lack of persistent memory across conversations and tasks. Founded on academic work demonstrating that LLMs can manage their own context windows using a tiered memory system, Letta evolved from a research project into a full-featured agent development platform for building stateful, long-running AI agents.

Letta's three-tier memory architecture — separating in-context working memory, external archival storage, and recall memory — enables agents that remember past interactions, learn from experience, and maintain coherent long-term task execution. The framework supports multi-agent orchestration, tool use, and human-in-the-loop workflows, making it suitable for complex enterprise automation tasks. Letta Code, the company's coding-focused agent, achieved the #1 ranking on Terminal-Bench, the leading benchmark for AI coding agents operating in real terminal environments.

Letta raised a $10M seed round from Felicis Ventures, with backing from Google Distinguished Engineer Jeff Dean — a notable endorsement from one of the architects of modern deep learning infrastructure. The Terminal-Bench leadership demonstrates that Letta's memory architecture translates to measurable performance advantages in real-world agentic tasks. As enterprises move from LLM experimentation to deploying persistent AI agents in production, Letta's open-source foundation and research-backed memory system position it as a foundational framework in the agentic AI stack.
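The tiered design described above can be illustrated with a minimal, self-contained sketch. This is plain Python, not Letta's actual SDK; the class and method names are invented for illustration. The idea: a bounded working memory lives in the prompt context, every message is logged to recall memory, and durable facts go to an archival store that survives context eviction and is retrieved on demand.

```python
from collections import deque

class TieredMemory:
    """Illustrative three-tier agent memory (hypothetical names, not
    Letta's API): bounded in-context working memory, a recall log of
    all past messages, and a searchable archival store."""

    def __init__(self, working_capacity=4):
        self.working = deque(maxlen=working_capacity)  # in-context window
        self.recall = []                               # full message history
        self.archive = []                              # durable long-term facts

    def observe(self, message):
        # Every message enters recall; the deque evicts the oldest
        # working-memory entry automatically once capacity is reached.
        self.recall.append(message)
        self.working.append(message)

    def archive_fact(self, fact):
        self.archive.append(fact)

    def search_archive(self, query):
        # Naive keyword match standing in for vector retrieval.
        return [f for f in self.archive if query.lower() in f.lower()]

    def context(self):
        # What would be assembled into the LLM prompt on the next turn.
        return list(self.working)

mem = TieredMemory(working_capacity=2)
mem.observe("user: my name is Ada")
mem.archive_fact("User's name is Ada")
mem.observe("user: I prefer Python")
mem.observe("user: schedule a code review")  # evicts the first message

print(mem.context())            # only the two most recent messages
print(mem.search_archive("name"))  # archived fact survives eviction
```

The key property this sketch captures is that eviction from the context window does not mean forgetting: the name stated in the first (now-evicted) message remains retrievable from the archive, which is what lets an agent stay coherent across long-running tasks.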
$2.3B raised at $29.3B valuation; $2B+ ARR (Q1 2026); used by 50%+ of Fortune 500. Dominant commercial AI coding tool; built on VSCode fork with native agent mode. Competing with GitHub Copilot, Windsurf, and Lovable in the vibe-coding wave.
Cursor is an AI-first code editor founded in 2022 by a small team of MIT researchers, built as a fork of Visual Studio Code with large-language-model intelligence woven directly into the editing experience. Its mission is to make software engineers dramatically more productive by embedding AI reasoning into every layer of the IDE — from autocomplete to multi-file edits to natural-language code generation — rather than bolting AI on as an afterthought.

The platform centers on a VSCode-compatible editor that developers can adopt with zero workflow disruption, layering in features like Tab (predictive multi-line completion), Chat (context-aware in-editor assistant), and Composer (autonomous multi-file refactoring agent). Cursor reads and indexes entire codebases, allowing it to propose coherent changes that span dozens of files. It supports all major languages, integrates with existing extensions, and lets teams configure which underlying model — GPT-4o, Claude, or others — powers suggestions. It is used by individual developers and by engineering teams at more than half of Fortune 500 companies.

Cursor reached $2 billion in annualized recurring revenue by early 2026 and raised at a $29.3 billion valuation, cementing its position as the dominant commercial AI coding tool. The company has raised $2.3 billion in total funding and is widely regarded as the category-defining product in agentic IDE software, outpacing GitHub Copilot on developer mindshare metrics in multiple surveys.
Monitor how your brand performs across ChatGPT, Gemini, Perplexity, Claude, and Grok daily.