Nvidia's New AI Chip Design Raises HBM Demand Questions Ahead of GTC 2026
News18•16-03-2026, 12:30
•Nvidia may unveil a new AI inference chip architecture using on-chip SRAM at GTC 2026 in San Jose, California, potentially reshaping the AI memory market.
•The proposed design places large SRAM blocks directly on the chip, reducing data movement and lowering processing latency, unlike current HBM-dependent GPUs.
•SRAM requires 5-10 times more silicon area than DRAM for the same capacity, making it significantly more expensive; it has traditionally served as cache rather than main memory.
•Industry experts believe SRAM is unlikely to replace HBM directly, as they serve fundamentally different roles; HBM remains crucial for large-scale AI training and data centers.
•Analysts suggest SRAM-centered architectures will likely complement existing memory technologies, targeting specific ultra-low-latency workloads or edge applications.
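The area gap behind the "5-10 times" figure comes down to cell size: a 6T SRAM cell occupies far more silicon than a 1T1C DRAM cell. The sketch below illustrates the arithmetic with hypothetical round-number cell sizes chosen only to fall in the article's stated range; real values vary widely by process node and are not from the article.

```python
# Illustrative back-of-envelope comparison of silicon area for SRAM vs DRAM.
# Cell areas are ASSUMED round numbers, not measured process data.

BITS_PER_GB = 8 * 1024**3

def cell_array_area_mm2(capacity_gb: float, cell_area_um2: float) -> float:
    """Area (mm^2) of the memory cell array alone, ignoring peripheral logic."""
    total_um2 = capacity_gb * BITS_PER_GB * cell_area_um2
    return total_um2 / 1e6  # 1 mm^2 = 1e6 um^2

sram_cell_um2 = 0.030   # assumed 6T SRAM cell (hypothetical)
dram_cell_um2 = 0.0045  # assumed 1T1C DRAM cell (hypothetical)

sram_area = cell_array_area_mm2(1.0, sram_cell_um2)
dram_area = cell_array_area_mm2(1.0, dram_cell_um2)
print(f"SRAM needs ~{sram_area / dram_area:.1f}x the area of DRAM per GB")
```

Under these assumptions the ratio lands around 6.7x, inside the 5-10x range the article cites; the exact multiple depends entirely on the cell geometries of the processes being compared.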