Side-by-side comparison of AI visibility scores, market position, and capabilities
Thermodynamic computing chips for AI. World's first thermodynamic computing chip, CN101, taped out (Aug 2025). $85M+ raised ($50M from Samsung Mar 2026). 1000x energy efficiency target.
Normal Computing was founded by physicists and engineers who identified a fundamental mismatch between the mathematics of modern AI and the digital hardware used to run it. Neural network inference is inherently probabilistic and statistical, yet it runs on deterministic digital chips that must simulate randomness inefficiently. Normal Computing's founding thesis is that thermodynamic computing — hardware that natively operates according to the laws of statistical physics — can perform AI workloads with orders-of-magnitude better energy efficiency than conventional silicon.

Normal Computing's CN101 is the world's first thermodynamic computing chip, taped out in August 2025. The chip is designed to accelerate sampling-based AI workloads, including inference for large language models, Bayesian reasoning, and generative AI tasks that are computationally expensive on digital hardware. By exploiting thermal noise and stochastic physics rather than fighting them, the CN101 performs these computations using a fraction of the energy of GPU-based alternatives. The company claims a potential 1,000x improvement in energy efficiency for targeted workloads, a figure that, if validated at scale, would have transformative implications for AI infrastructure economics.

Normal Computing has raised over $85 million, including a $50 million strategic investment from Samsung in March 2026. Samsung's involvement signals both financial validation and the potential for integration with Samsung's semiconductor manufacturing and memory ecosystems. The company is positioned at the intersection of AI compute and energy efficiency — two of the most pressing concerns in the technology industry — giving it relevance to hyperscalers, AI hardware vendors, and government initiatives focused on AI energy consumption.
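To make the "sampling-based workload" idea concrete, here is a minimal illustrative Metropolis sampler in Python. This is not Normal Computing's algorithm or anything specific to the CN101; it simply shows the class of computation at issue: drawing samples from a Boltzmann distribution. On a digital chip, every "random" move must be generated by an explicit pseudorandom computation, whereas a thermodynamic chip would let physical noise drive the walk natively.

```python
import math
import random

def metropolis_sample(energy, x0, steps, beta=1.0, step_size=0.5, rng=None):
    """Sample from p(x) proportional to exp(-beta * energy(x)) via the
    Metropolis algorithm. Each iteration burns pseudorandom numbers that a
    deterministic chip must compute explicitly -- the inefficiency the
    thermodynamic-computing thesis targets."""
    rng = rng or random.Random(0)
    x = x0
    samples = []
    for _ in range(steps):
        proposal = x + rng.uniform(-step_size, step_size)
        # Accept with probability min(1, exp(-beta * (E(x') - E(x))))
        if rng.random() < math.exp(-beta * (energy(proposal) - energy(x))):
            x = proposal
        samples.append(x)
    return samples

# Example: a quadratic energy well E(x) = x^2 / 2, i.e. a unit Gaussian.
samples = metropolis_sample(lambda x: x * x / 2, x0=0.0, steps=20000)
mean = sum(samples) / len(samples)
```

With enough steps, the empirical mean approaches 0 and the variance approaches 1, matching the unit Gaussian the energy function encodes.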
OpsLevel is a developer portal and service catalog for tracking service ownership, maturity scorecards, and production readiness across microservices.
OpsLevel is a developer portal platform that gives engineering organizations visibility into the services they operate, who owns them, and how mature they are relative to internal engineering standards. At its core, OpsLevel maintains a service catalog that maps every microservice, repository, and infrastructure component to a team owner, populating metadata automatically from integrations with GitHub, GitLab, PagerDuty, Datadog, and cloud providers. This catalog becomes the authoritative source of truth for answering questions like who to contact about a service, what tier of reliability it requires, and what dependencies it has — questions that often go unanswered in engineering organizations that have grown past the point where everyone knows everything.
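The kind of lookup a service catalog enables can be sketched in a few lines of Python. The field names and services below are hypothetical for illustration; this is not OpsLevel's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    # Hypothetical fields modeling a catalog record, not OpsLevel's schema.
    name: str
    owner_team: str
    tier: int  # 1 = highest reliability requirement
    repository: str
    dependencies: list = field(default_factory=list)

# An in-memory catalog; in a real portal this is populated automatically
# from source-control, on-call, and monitoring integrations.
catalog = {
    e.name: e
    for e in [
        CatalogEntry("checkout", "payments-team", 1,
                     "github.example.com/acme/checkout",
                     dependencies=["payment-gateway", "inventory"]),
        CatalogEntry("inventory", "fulfillment-team", 2,
                     "github.example.com/acme/inventory"),
    ]
}

def who_owns(service: str) -> str:
    """Answer the 'who do I contact about this service?' question."""
    return catalog[service].owner_team
```

Once ownership, tier, and dependencies live in one queryable structure, questions like "which tier-1 services depend on inventory?" become simple lookups instead of tribal knowledge.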