Side-by-side comparison of AI visibility scores, market position, and capabilities
AI lab building world models using the JEPA architecture; $1.03B seed at a $3.5B valuation; founded in 2025 in Paris by Yann LeCun (Turing Award winner, Meta's Chief AI Scientist); an alternative to LLMs.
AMI Labs is an AI research company founded in 2025 in Paris by Yann LeCun — Meta's Chief AI Scientist and Turing Award winner — built around the thesis that large language models are a fundamentally limited path to human-level intelligence and that a different architectural approach, grounded in how biological intelligence works, is required. The company was established to pursue world models: AI systems that build rich internal representations of how the physical and social world functions, enabling reasoning, planning, and generalization that current LLMs cannot perform. AMI Labs' core technology centers on the Joint Embedding Predictive Architecture (JEPA), a learning framework LeCun developed at Meta that trains AI on the structure of the world rather than on next-token prediction.

AMI Labs' research agenda positions it as a fundamental alternative to the transformer-based LLM paradigm that has dominated AI development since 2017. Rather than building systems that predict text sequences, JEPA-based world models learn to predict abstract representations of future states — a capability that LeCun and the AMI Labs team argue is necessary for AI systems to achieve genuine planning, causal reasoning, and physical-world understanding. The company is building its research and engineering team in Paris, with the French AI ecosystem and proximity to LeCun's academic network providing a talent and institutional foundation.

AMI Labs raised $1.03 billion in seed funding at a $3.5 billion valuation, making it one of the most heavily capitalized AI research startups at the founding stage. The round reflects LeCun's scientific reputation and investor conviction that JEPA-based world models represent a credible path beyond current LLMs. AMI Labs competes with OpenAI, Anthropic, and DeepMind for talent and research mindshare, differentiating through its architectural heterodoxy and explicit post-LLM positioning.
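To make the architectural contrast concrete, the following is a minimal, illustrative PyTorch sketch of a JEPA-style training step: the model predicts the embedding of a hidden target from the embedding of the visible context, and the loss is computed in representation space rather than over tokens or pixels. This is a conceptual sketch only, not AMI Labs' code; the encoder shapes, loss choice, and EMA momentum are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative JEPA-style step: predict the *embedding* of a masked target
# view from the embedding of the visible context view. All shapes and
# module choices here are hypothetical, for exposition only.
DIM = 256

context_encoder = nn.Sequential(nn.Linear(784, DIM), nn.GELU(), nn.Linear(DIM, DIM))
target_encoder = nn.Sequential(nn.Linear(784, DIM), nn.GELU(), nn.Linear(DIM, DIM))
predictor = nn.Sequential(nn.Linear(DIM, DIM), nn.GELU(), nn.Linear(DIM, DIM))

# The target encoder is typically an EMA copy of the context encoder and is
# not trained by gradient descent (this helps prevent representational collapse).
target_encoder.load_state_dict(context_encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def train_step(context_view: torch.Tensor, target_view: torch.Tensor) -> float:
    """One JEPA-style step: predict target embeddings from context embeddings."""
    z_context = context_encoder(context_view)      # representation of what is visible
    with torch.no_grad():
        z_target = target_encoder(target_view)     # representation to be predicted
    z_pred = predictor(z_context)                  # prediction made in embedding space
    loss = F.smooth_l1_loss(z_pred, z_target)      # loss on representations, not pixels/tokens
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # EMA update of the target encoder (momentum 0.996 is a common choice)
    with torch.no_grad():
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.lerp_(p_c, 1 - 0.996)
    return loss.item()

# Example: two "views" of the same observation (e.g., visible vs. masked patches)
x_context = torch.randn(32, 784)
x_target = torch.randn(32, 784)
print(train_step(x_context, x_target))
```

The key point the sketch captures is that nothing in the objective reconstructs raw inputs: the system is graded on whether its abstract prediction of the hidden state matches the target's abstract representation, which is the sense in which JEPA learns the structure of the world rather than next-token statistics.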
Serverless GPU cloud platform for AI/ML with Python-native deployment and per-second billing; a developer favorite that scales from zero, competing with Replicate and Beam for AI compute.
Modal is a serverless cloud computing platform purpose-built for AI and machine learning workloads — providing on-demand GPU compute that scales instantly from zero with per-second billing, container management, distributed training support, and a Python-native developer experience that makes running ML workloads in the cloud feel as simple as running code locally. Founded in 2021 in New York City and backed by Redpoint Ventures and other investors, Modal has grown rapidly as AI development has accelerated demand for flexible, developer-friendly GPU infrastructure.

Modal's developer experience is its primary differentiator — engineers write Python functions, decorate them with Modal's @app.function() decorator, and deploy them to the cloud with a single command, with Modal handling container building, GPU provisioning, auto-scaling, and execution. The platform supports training jobs that need distributed compute across multiple GPUs, model-serving endpoints that scale to zero when unused (eliminating idle GPU costs), and batch inference jobs that process large datasets. The per-second billing model means developers pay only for actual compute time, not for provisioned instances.

In 2025, Modal competes in the AI infrastructure market with Replicate, Beam, Banana, and the major cloud providers' managed ML services (AWS SageMaker, Google Vertex AI, Azure ML) for serverless GPU compute. The market for AI-specific cloud infrastructure has grown dramatically as the number of ML engineers deploying models to production has expanded — traditional cloud providers require significant DevOps expertise to use GPU instances effectively, while Modal's Python-native approach lowers the barrier to entry. Modal has attracted a strong developer following among AI researchers and ML engineers building production AI applications. The 2025 strategy focuses on growing the developer community, adding enterprise features (dedicated GPU capacity, private networking, compliance), and expanding the available hardware options (H100 GPUs, custom accelerators).
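As a concrete illustration of that workflow, a minimal Modal app might look like the sketch below, based on Modal's documented Python API (modal.App, @app.function(), modal.Image, .remote()). The app name, model, task, and GPU type are placeholder assumptions, not a prescribed setup.

```python
import modal

# A minimal Modal app: define the container image in code, attach a GPU,
# and get a function that scales from zero and bills per second of use.
# App name, model, and GPU choice below are illustrative placeholders.
app = modal.App("sentiment-demo")

image = modal.Image.debian_slim().pip_install("transformers", "torch")

@app.function(gpu="A10G", image=image)
def classify(text: str) -> dict:
    # Runs in the cloud inside the container defined above; Modal provisions
    # the GPU on demand and tears it down when the function goes idle.
    from transformers import pipeline  # imported remotely, where it's installed
    clf = pipeline("sentiment-analysis")
    return clf(text)[0]

@app.local_entrypoint()
def main():
    # .remote() executes the function in Modal's cloud rather than locally.
    print(classify.remote("Serverless GPUs make deployment pleasant."))
```

Running `modal run app.py` executes main() locally while classify() runs remotely; `modal deploy app.py` publishes it as a persistent deployment that scales to zero between requests, which is where the per-second billing and no-idle-cost properties described above come from.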