# Panmnesia

**Source:** https://geo.sig.ai/brands/panmnesia  
**Vertical:** Artificial Intelligence  
**Subcategory:** CXL Memory Switch Silicon  
**Tier:** Emerging  
**Website:** panmnesia.com  
**Last Updated:** 2026-04-14

## Summary

Raised $80M+ total. CXL 3.1 Switch silicon chip in fab. $30M government project for chiplet AI accelerators. Pure-play CXL switch chip company for AI memory pooling.

## Company Overview

Panmnesia is a pure-play CXL (Compute Express Link) switch chip company developing the silicon fabric that allows AI systems to pool, share, and disaggregate memory across heterogeneous compute nodes. The company has raised $80 million+ in total financing with its CXL 3.1 switch chip entering fabrication, and received a $30 million government project to develop chiplet-based AI accelerators using CXL memory interconnects.

CXL is the emerging standard for high-bandwidth, low-latency memory connectivity between processors, accelerators, and memory devices. As AI models grow larger than the memory available in individual servers, CXL-based memory pooling allows multiple servers to share a common memory pool — dramatically increasing the effective memory available to large model inference without physically adding RAM to each server. Panmnesia's switch chip is the missing infrastructure layer that enables this memory disaggregation at data center scale.

The government project funding adds a sovereign AI infrastructure angle: Korea and other governments are investing in domestic CXL infrastructure capability as AI hardware becomes strategically important. Panmnesia's CXL 3.1 compliance means its switch chips work with the latest generation of Intel Xeon Sapphire Rapids and AMD EPYC Genoa processors that include native CXL support — addressing a real deployment-ready market.

## Frequently Asked Questions

### What does Panmnesia make?
CXL switch silicon chips that enable AI servers to pool and share memory across compute nodes — the missing infrastructure layer for memory disaggregation in large AI deployments.

### How much has Panmnesia raised?
$80M+ total. CXL 3.1 switch chip in fabrication. $30M government project for chiplet-based AI accelerators.

### What is CXL memory pooling?
CXL allows multiple servers to share a common memory pool — enabling large AI model inference on combined memory that exceeds what any single server can hold.

### What processors support Panmnesia's chips?
CXL 3.1 compatible with Intel Xeon Sapphire Rapids and AMD EPYC Genoa — both current-generation server processors with native CXL support.

### What is CXL and why does it matter for AI memory?
CXL (Compute Express Link) is an open interconnect standard that allows CPUs and accelerators to access memory on other devices as if it were local memory — enabling memory pooling, disaggregation, and sharing across a server or rack. For AI inference, CXL allows GPUs to access terabytes of shared memory that would not fit in GPU HBM, reducing per-GPU memory costs while maintaining low-latency access required for large model KV caches.

### What does Panmnesia's CXL memory switch do?
Panmnesia's CXL memory switch chip enables multiple CPUs or GPU hosts to share a pool of CXL-attached memory, with the switch managing access arbitration and bandwidth allocation. This creates a disaggregated memory architecture where memory can be scaled independently of compute — adding memory capacity to support larger AI models without replacing GPUs, and sharing memory across multiple inference nodes serving different AI applications.

### What LLM inference problem does Panmnesia solve?
LLM inference requires large KV (key-value) caches that store attention state during generation — the longer the context window, the larger the required KV cache. A 128K context LLM can require 10s of gigabytes of KV cache per concurrent request. Panmnesia's memory expansion via CXL allows inference servers to handle more concurrent long-context requests than GPU HBM alone allows, directly improving inference throughput economics.

### What is the timeline for CXL memory commercialization?
CXL 2.0 (enabling memory pooling and sharing) is supported in Intel Sapphire Rapids and AMD Genoa server processors shipping since 2023. CXL-attached DIMMs and memory expanders from Samsung, SK Hynix, and Micron are entering production. Panmnesia's switch silicon targets the CXL 3.0/3.1 era (2025-2026) when rack-scale memory pooling across multiple hosts becomes standard in AI data center architectures.

## Tags

ai-powered, b2b, saas

---
*Data from geo.sig.ai Brand Intelligence Database. Updated 2026-04-14.*