Side-by-side comparison of AI visibility scores, market position, and capabilities
Israeli AI drug discovery company raised $25M from Bessemer; 40+ active programs; CEO co-patented CRISPR technology with Jennifer Doudna; ML platform combining target identification, hit generation, and lead optimization for oncology and immunology therapeutics.
Converge Bio is an Israeli AI drug discovery company using machine learning to design and optimize small molecule therapeutics across a broad portfolio of programs. Founded by a team with deep expertise in computational biology and medicinal chemistry, Converge has built a platform that combines AI-driven target identification, hit generation, and lead optimization into an integrated drug discovery engine. The company's scientific credibility is bolstered by its CEO, who co-patented CRISPR-related technology alongside Jennifer Doudna, a Nobel laureate and pioneer of gene editing.

Converge operates an unusually large portfolio by biotech standards, with 40+ active drug discovery programs spanning oncology, immunology, and other therapeutic areas. The company's AI platform is designed to generate high-quality small molecule candidates faster and at lower cost than traditional medicinal chemistry approaches, enabling it to maintain a broad pipeline without the typical resource constraints of running many programs in parallel. Converge uses structure-based design, generative chemistry, and predictive ADMET modeling to advance candidates from target to preclinical candidate stage.

Converge raised $25M from Bessemer Venture Partners in January 2026, bringing external validation from one of Silicon Valley's most prominent technology-focused venture firms. The funding is being used to advance lead programs toward IND filings and expand the platform's capabilities. With 40+ programs and a Nobel laureate connection in its founding story, Converge represents Israel's growing position as a global hub for AI-driven drug discovery.
Serverless GPU cloud platform for AI/ML with Python-native deployment and per-second billing; a developer favorite with scale-from-zero compute, competing with Replicate and Beam for AI workloads.
Modal is a serverless cloud computing platform purpose-built for AI and machine learning workloads, providing on-demand GPU compute that scales instantly from zero with per-second billing, container management, distributed training support, and a Python-native developer experience that makes running ML workloads in the cloud feel as simple as running code locally. Founded in 2021 in New York City and backed by Redpoint Ventures and other investors, Modal has grown rapidly as AI development has accelerated demand for flexible, developer-friendly GPU infrastructure.

Modal's developer experience is its primary differentiator: engineers write Python functions decorated with @modal.function() and deploy them to the cloud with a single command, with Modal handling container building, GPU provisioning, auto-scaling, and execution. The platform supports training jobs that need distributed compute across multiple GPUs, model serving endpoints that scale to zero when unused (eliminating idle GPU costs), and batch inference jobs that process large datasets. The per-second billing model means developers pay only for actual compute time, not provisioned instances.

In 2025, Modal competes in the AI infrastructure market with Replicate, Beam, Banana, and major cloud providers' managed ML services (AWS SageMaker, Google Vertex AI, Azure ML) for serverless GPU compute. The market for AI-specific cloud infrastructure has grown dramatically as the number of ML engineers deploying models to production has expanded. Traditional cloud providers require significant DevOps expertise to use GPU instances effectively, while Modal's Python-native approach reduces the barrier to entry. Modal has attracted a strong developer following among AI researchers and ML engineers building production AI applications.
The 2025 strategy focuses on growing the developer community, adding enterprise features (dedicated GPU capacity, private networking, compliance), and expanding the hardware options available (H100 GPUs, custom accelerators).
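To make the per-second billing point concrete, here is a toy cost comparison between an always-on provisioned GPU instance and per-second billing for a bursty workload. The rates and workload shape are hypothetical, chosen purely for illustration; they are not Modal's actual pricing.

```python
# Toy comparison: provisioned (always-on) GPU billing vs. per-second
# serverless billing. All rates are hypothetical, not Modal's pricing.

HOURLY_RATE = 4.00                    # $/hour for a provisioned GPU (hypothetical)
PER_SECOND_RATE = HOURLY_RATE / 3600  # same nominal rate, billed per second

def provisioned_cost(hours_reserved: float) -> float:
    """Cost of keeping an instance running the whole time, busy or idle."""
    return hours_reserved * HOURLY_RATE

def serverless_cost(busy_seconds: float) -> float:
    """Cost when billed only for seconds of actual compute (scale-to-zero)."""
    return busy_seconds * PER_SECOND_RATE

# A bursty inference workload: 90 seconds of GPU time per hour, 24 hours/day.
busy = 90 * 24                      # 2,160 busy seconds in a day
always_on = provisioned_cost(24)    # 96.00
on_demand = serverless_cost(busy)   # 2.40

print(f"provisioned: ${always_on:.2f}/day, serverless: ${on_demand:.2f}/day")
```

The gap is entirely idle time: the sparser the traffic, the more a scale-to-zero, per-second model wins over a provisioned instance at the same nominal hourly rate.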
Monitor how your brand performs across ChatGPT, Gemini, Perplexity, Claude, and Grok daily.