Side-by-side comparison of AI visibility scores, market position, and capabilities
Thermodynamic computing chips for AI. CN101, the world's first thermodynamic computing chip, taped out August 2025. $85M+ raised, including $50M from Samsung (March 2026). 1,000x energy-efficiency target.
Normal Computing was founded by physicists and engineers who identified a fundamental mismatch between the mathematics of modern AI and the digital hardware used to run it. Neural network inference is inherently probabilistic and statistical, yet it runs on deterministic digital chips that must simulate randomness inefficiently. Normal Computing's founding thesis is that thermodynamic computing (hardware that natively operates according to the laws of statistical physics) can perform AI workloads with orders-of-magnitude better energy efficiency than conventional silicon.

Normal Computing's CN101 is the world's first thermodynamic computing chip, taped out in August 2025. The chip is designed to accelerate sampling-based AI workloads, including inference for large language models, Bayesian reasoning, and generative AI tasks that are computationally expensive on digital hardware. By exploiting thermal noise and stochastic physics rather than fighting them, the CN101 is designed to perform these computations at a fraction of the energy of GPU-based alternatives. The company claims a potential 1,000x improvement in energy efficiency for targeted workloads, a figure that, if validated at scale, would have transformative implications for AI infrastructure economics.

Normal Computing has raised over $85 million, including a $50 million strategic investment from Samsung in March 2026. Samsung's involvement signals both financial validation and the potential for integration with Samsung's semiconductor manufacturing and memory ecosystems. The company sits at the intersection of AI compute and energy efficiency, two of the most pressing concerns in the technology industry, giving it relevance to hyperscalers, AI hardware vendors, and government initiatives focused on AI energy consumption.
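Normal Computing has not published a public SDK for CN101, so any code here is purely illustrative. The NumPy sketch below shows overdamped Langevin dynamics, a representative example of the sampling-based workloads described above: on digital hardware, every noise draw must be generated explicitly in software, whereas a thermodynamic chip would source that randomness from physical thermal fluctuations. All function names and parameters are hypothetical.

```python
# Illustrative sketch only: Normal Computing has not published a CN101 SDK.
# Overdamped Langevin dynamics draws approximate samples from p(x) using
# only the gradient of log p(x) plus injected noise -- the class of
# stochastic workload thermodynamic hardware aims to run natively.
import numpy as np

def langevin_sample(grad_log_p, x0, steps=1000, step_size=1e-2, seed=0):
    """Return one approximate sample from p(x) after `steps` updates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        noise = rng.normal(size=x.shape)  # simulated here; physical on-chip
        x = x + step_size * grad_log_p(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Sample from a 2D standard Gaussian, where grad log p(x) = -x.
samples = np.stack([
    langevin_sample(lambda x: -x, x0=np.zeros(2), seed=i) for i in range(500)
])
print(samples.mean(axis=0), samples.std(axis=0))  # near [0, 0] and [1, 1]
```

Every iteration of that loop spends digital cycles generating pseudo-random numbers; the CN101's pitch is that physics supplies the noise for free, which is the intuition behind the claimed energy advantage.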
Serverless GPU cloud platform for AI/ML with Python-native deployment and per-second billing; a developer favorite that scales from zero, competing with Replicate and Beam for AI compute.
Modal is a serverless cloud computing platform purpose-built for AI and machine learning workloads. It provides on-demand GPU compute that scales instantly from zero, with per-second billing, container management, distributed training support, and a Python-native developer experience that makes running ML workloads in the cloud feel as simple as running code locally. Founded in 2021 in New York City and backed by Redpoint Ventures and other investors, Modal has grown rapidly as AI development has accelerated demand for flexible, developer-friendly GPU infrastructure.

Modal's developer experience is its primary differentiator. Engineers write ordinary Python functions, mark them with Modal's @app.function() decorator, and deploy them to the cloud with a single command, with Modal handling container building, GPU provisioning, auto-scaling, and execution (see the sketch below). The platform supports training jobs that need distributed compute across multiple GPUs, model serving endpoints that scale to zero when unused (eliminating idle GPU costs), and batch inference jobs that process large datasets. The per-second billing model means developers pay only for actual compute time, not for provisioned instances.

In 2025, Modal competes in the AI infrastructure market with Replicate, Beam, Banana, and the major cloud providers' managed ML services (AWS SageMaker, Google Vertex AI, Azure ML) for serverless GPU compute. The market for AI-specific cloud infrastructure has grown dramatically as the number of ML engineers deploying models to production has expanded: traditional cloud providers require significant DevOps expertise to use GPU instances effectively, while Modal's Python-native approach lowers the barrier to entry. Modal has attracted a strong developer following among AI researchers and ML engineers building production AI applications. The 2025 strategy focuses on growing the developer community, adding enterprise features (dedicated GPU capacity, private networking, compliance), and expanding hardware options (H100 GPUs, custom accelerators).
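A minimal sketch of that workflow, following the pattern in Modal's public documentation; the app name, GPU string, and model choice here are illustrative, and details may vary by SDK version:

```python
import modal

app = modal.App("sentiment-demo")  # hypothetical app name

# Container image with the Python dependencies the remote function needs.
image = modal.Image.debian_slim().pip_install("transformers", "torch")

# Runs in the cloud on one GPU, billed per second, scaling to zero when idle.
@app.function(gpu="A10G", image=image)
def classify(text: str) -> dict:
    from transformers import pipeline  # imported inside the remote container
    clf = pipeline("sentiment-analysis")
    return clf(text)[0]

@app.local_entrypoint()
def main():
    # main() executes locally; .remote() dispatches classify to Modal's cloud.
    print(classify.remote("Serverless GPUs feel like local code."))
```

Running `modal run app.py` executes main() locally while classify runs remotely; `modal deploy app.py` publishes it as a persistent, auto-scaling deployment. This is the single-command deploy step described above.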