At ‘CES 2026’, the world’s largest consumer electronics and information technology (IT) trade show, SK hynix will unveil for the first time its 16-layer 48GB HBM4, a next-generation high-bandwidth memory (HBM) product currently under development.
SK hynix announced on the 6th that it will operate a customer exhibition hall at the Venetian Expo in Las Vegas, United States, from the 6th to the 9th (local time) to showcase its next-generation artificial intelligence (AI) memory solutions. The exhibition theme is ‘Creating a sustainable future with innovative AI technology’.
At this exhibition, SK hynix will unveil the 16-layer 48GB HBM4 for the first time. HBM4 is a high-performance memory semiconductor mounted in AI accelerators, including graphics processing units (GPUs). The company said, “It is the successor to the 12-layer 36GB HBM4 that achieved the industry’s top speed of 11.7 Gbps (gigabits per second),” adding, “Development is progressing smoothly in line with customer timelines.”
Bird’s-eye view of the SK hynix exhibit at CES 2026. Provided by SK hynix
This year, while the HBM market has been led by fifth-generation HBM3E, sixth-generation HBM4 is expected to see full-scale adoption. SK hynix will also present the 12-layer 36GB HBM3E along with NVIDIA’s latest AI server GPU module equipped with it.
It will also showcase SOCAMM2, a low-power memory module specialized for AI servers, and ‘LPDDR6’, a low-power memory optimized for on-device AI.
In NAND, it will unveil a ‘321-layer 2Tb (terabit) quad-level cell (QLC)’ product for ultra-high-capacity enterprise SSDs, where demand is surging as AI data center build-outs expand. SK hynix said, “Compared with the previous generation, it greatly improves power efficiency and performance, showing strengths in AI data center environments that require low power.”
The company has also prepared an ‘AI System Demo Zone’ where visitors can see how the memory solutions it is preparing for future AI systems interconnect organically. It will demonstrate offerings such as ‘cHBM’, a customized HBM optimized for specific AI chips or system requirements, and ‘AiMX’, a low-cost, high-efficiency accelerator card for generative AI based on ‘PIM (processing-in-memory)’ semiconductors, which add compute capability to memory.