MSI Showcases DC-MHS and MGX Server Platforms for Cloud-Scale and AI Infrastructure at OCP APAC 2025
TAIPEI, Aug. 5, 2025 /PRNewswire/ -- At OCP APAC 2025 (Booth S04), MSI, a global leader in high-performance server solutions, presents modular server platforms for modern data center needs. The lineup includes AMD DC-MHS servers for 21" ORv3 and 19" EIA racks, built for scalable and energy-efficient cloud infrastructure, and an NVIDIA MGX-based GPU server optimized for high-density AI workloads such as LLM training and inference. These platforms demonstrate how MSI is powering what's next in computing with open, modular, and workload-optimized infrastructure.
"Open and modular infrastructure is shaping the future of compute. With OCP-aligned and MGX-based platforms, MSI helps customers reduce complexity, accelerate scale-out, and prepare for the demands of cloud-native and AI-driven environments." – Danny Hsu, General Manager of MSI's Enterprise Platform Solutions
AMD DC-MHS Platforms for Cloud Infrastructure
Powered by a single AMD EPYC™ 9005 processor with up to 12 DDR5 DIMM slots per node, MSI's DC-MHS open compute and core compute platforms deliver strong compute performance and high memory bandwidth to meet the demands of data-intensive and parallel workloads. Built on the modular OCP DC-MHS architecture and equipped with DC-SCM2 management modules, these systems offer cross-vendor interoperability, streamlined integration, and easier serviceability, making them ideal for modern, scalable infrastructure in hyperscale and cloud environments.
The CD281-S4051-X2 targets 21" ORv3 rack deployments with 48Vdc power, featuring a 2OU 2-node design and EVAC cooling that supports up to 500W TDP per node. With 12 E3.S PCIe 5.0 NVMe bays per node, it offers high-density, front-access storage for throughput-heavy applications.
The CD270-S4051-X4 fits into a standard 2U 4-node 19" EIA chassis, maximizing compute density for environments with limited rack space. Supporting up to 400W air-cooled or 500W liquid-cooled CPUs, and equipped with front-access U.2 NVMe bays, it's built for flexible deployment across general-purpose and scale-out workloads.
NVIDIA MGX AI Server for Scalable AI Workloads
Built on the NVIDIA MGX modular architecture, the CG480-S5063 is optimized for large-scale AI workloads with a 2:8:5 CPU:GPU:NIC topology. It supports dual Intel® Xeon® 6 processors and up to eight 600W FHFL dual-width GPUs, including NVIDIA H200 NVL and RTX PRO 6000 Blackwell Server Edition. With 32 DDR5 DIMMs and 20 PCIe 5.0 E1.S bays, it delivers high compute density, fast storage, and modular scalability for next-gen AI infrastructure.