Side-by-side comparison of AI visibility scores, market position, and capabilities
Raised $500M Series B at $4.2B valuation (March 2026) for AI-optimized Ethernet switches; targets hyperscaler GPU cluster networking; replaces InfiniBand with open, scalable fabric
Nexthop AI is a networking hardware company building AI-optimized Ethernet switches purpose-built for hyperscaler AI data centers. Founded by veterans of the networking industry, the company recognized that as AI training clusters grew to tens of thousands of GPUs, the networking fabric connecting them became a critical performance bottleneck. Standard data center switches were not designed for the all-to-all communication patterns of distributed AI training, and InfiniBand, the traditional high-performance interconnect, carried significant cost and vendor lock-in. Nexthop AI is building Ethernet-based switching silicon and systems that deliver InfiniBand-class performance for AI at Ethernet-class economics.

Nexthop's switches are architected for the specific traffic patterns of large-scale AI workloads: high bandwidth, ultra-low and consistent latency, and support for collective communication operations like AllReduce that are central to distributed training. The company targets hyperscalers and large cloud providers building GPU clusters at the scale of tens of thousands to hundreds of thousands of accelerators. By offering a high-performance, open-standards alternative to InfiniBand, Nexthop AI competes in a market where even small per-port cost reductions translate to hundreds of millions in savings at hyperscaler scale.

In March 2026, Nexthop AI raised a $500M Series B at a $4.2B valuation, reflecting the enormous market opportunity in AI networking as hyperscalers invest trillions in data center buildout. The round positions the company to scale its silicon development, manufacturing partnerships, and go-to-market motion with the world's largest AI infrastructure buyers. Nexthop both competes and collaborates with established vendors like Arista and Broadcom, as well as emerging players like Enfabrica, as the AI networking market undergoes rapid transformation.
Real-time voice and video infrastructure powering ChatGPT Voice Mode, xAI, Meta, and Spotify; raised $100M Series C at $1B valuation in Jan 2026; open-source WebRTC platform specifically engineered for low-latency AI applications.
LiveKit is an open-source real-time audio and video infrastructure company providing the communication backbone for AI voice and video applications at scale. Founded to make production-grade real-time communication infrastructure accessible without the prohibitive cost and complexity of building it in-house, LiveKit developed a WebRTC-based platform optimized for the specific latency, reliability, and scale requirements of AI-powered voice and video experiences.

LiveKit's platform handles the real-time transport layer for voice calls, video conferencing, and multimodal AI interactions, abstracting the complexity of WebRTC, TURN servers, codec optimization, and global distribution into a developer-friendly SDK. Its infrastructure is specifically engineered for the low-latency, high-reliability requirements of AI voice agents, where even 200ms of added latency degrades the conversational experience. The company provides SDKs for every major platform and has built a reputation as the most production-ready open-source option for real-time AI communication.

LiveKit powers ChatGPT's Voice Mode, xAI's voice products, Meta, and Spotify, a client roster that validates its ability to operate at extreme scale and reliability. The company raised $100M in a Series C at a $1B valuation in January 2026, bringing total funding to $183M. As conversational AI products proliferate across consumer and enterprise applications, LiveKit's position as the de facto real-time infrastructure layer for AI voice gives it a durable and expanding role in the AI application stack.