The Cisco UCS X-Series Modular System simplifies your data center, adapting to the unpredictable needs of modern applications while also providing for traditional scale-out and enterprise workloads. It reduces the number of server types to maintain, helping to improve operational efficiency and agility as it helps reduce complexity. Powered by the Cisco Intersight™ cloud operations platform, it shifts your thinking from administrative details to business outcomes with hybrid cloud infrastructure that is assembled from the cloud, shaped to your workloads, and continuously optimized. The Cisco UCS X210c M6 Compute Node is the first computing device to integrate into the Cisco UCS X-Series Modular System. Up to eight compute nodes can reside in the 7-Rack-Unit (7RU) Cisco UCS X9508 Chassis, offering one of the highest densities of compute, IO, and storage per rack unit in the industry.
Spec Sheet: Cisco UCS X210c M6 Compute Node
Cisco Systems, 170 West Tasman Dr., San Jose, CA 95134, www.cisco.com
Publication history: Rev A.47, December 03, 2024
A printed version of this document is only a copy and not necessarily the latest version. Refer to the following link for the latest released version: https://www.cisco.com/c/en/us/products/servers-unifiedcomputing/ucs-x-series-modular-system/datasheetlisting.html
OVERVIEW

The Cisco UCS X210c M6 Compute Node harnesses the power of the latest 3rd Gen Intel® Xeon® Scalable Processors (Ice Lake) and offers the following:
■ CPU: Up to 2x 3rd Gen Intel® Xeon® Scalable Processors with up to 40 cores per processor and 1.5 MB Level 3 cache per core.
■ Memory: Up to 32x 256 GB DDR4-3200 DIMMs for up to 8 TB of main memory. Configuring up to 16x 512 GB Intel Optane™ persistent memory DIMMs can yield up to 12 TB of memory.
■ Storage: Up to 6 hot-pluggable Solid-State Drives (SSDs) or Non-Volatile Memory Express (NVMe) 2.5-inch drives with a choice of enterprise-class Redundant Array of Independent Disks (RAID) or pass-through controllers with four lanes each of PCIe Gen 4 connectivity, plus up to 2 M.2 SATA drives for flexible boot and local storage capabilities.
■ Optional Front Mezzanine GPU module: The Cisco UCS Front Mezzanine GPU module is a passive PCIe Gen 4 front mezzanine option with support for up to two U.2/U.3 NVMe drives and two GPUs.
■ mLOM virtual interface cards:
■ Cisco UCS Virtual Interface Card (VIC) 14425 occupies the server's Modular LAN on Motherboard (mLOM) slot, enabling up to 50 Gbps of unified fabric connectivity to each of the chassis Intelligent Fabric Modules (IFMs) for 100 Gbps connectivity per server.
■ Cisco UCS Virtual Interface Card (VIC) 15230 occupies the server's mLOM slot, enabling up to 100 Gbps of unified fabric connectivity to each of the chassis Intelligent Fabric Modules (IFMs) for 200 Gbps connectivity per server, with secure boot capability.
■ Cisco UCS Virtual Interface Card (VIC) 15420 occupies the server's mLOM slot, enabling up to 50 Gbps (2x 25 Gbps) of unified fabric connectivity to each of the chassis Intelligent Fabric Modules (IFMs) for 100 Gbps connectivity per server, with secure boot capability.
■ Optional Mezzanine card:
■ Cisco UCS Virtual Interface Card (VIC) 14825 can occupy the server's mezzanine slot at the bottom rear of the chassis. This card's I/O connectors link to Cisco UCS X-Fabric technology. An included bridge card extends this VIC's 2x 50 Gbps of network connections through IFM connectors, bringing the total bandwidth to 100 Gbps per fabric (for a total of 200 Gbps per server).
■ Cisco UCS PCI Mezz card for X-Fabric can occupy the server's mezzanine slot at the bottom rear of the chassis. This card's I/O connectors link to Cisco UCS X-Fabric modules and enable connectivity to the X440p PCIe Node.
■ Cisco UCS Virtual Interface Card (VIC) 15422 can occupy the server's mezzanine slot at the bottom rear of the chassis. An included bridge card extends this VIC's 100 Gbps (4x 25 Gbps) of network connections through IFM connectors, bringing the total bandwidth to 100 Gbps per VIC 15420 and 15422 (for a total of 200 Gbps per server). In addition to IFM connectivity, the VIC 15422 I/O connectors link to Cisco UCS X-Fabric technology.
■ Security: Includes secure boot silicon root of trust FPGA, ACT2 anti-counterfeit provisions, and optional Trusted Platform Module (TPM).

NOTE: All options listed in this spec sheet are compatible with Intersight Managed Mode and UCSM Managed Mode configurations. To see the most recent list of components that are supported in Intersight Managed Mode, see Supported Systems.

Figure 1 on page 5 shows front views of the Cisco UCS X210c M6 Compute Node.
Figure 1 Cisco UCS X210c M6 Compute Node (front view with drives, and front view with drives and GPU)

DETAILED VIEWS

Cisco UCS X210c M6 Compute Node Front View
Figure 2 and Figure 3 show front views of the Cisco UCS X210c M6 Compute Node.

Figure 2 Cisco UCS X210c M6 Compute Node Front View (Drives option), shown with six UCSC-NVME2H-I1600 1.6 TB NVMe SSDs populating the drive bays
1 Locate button/LED
2 Power button/LED
3 Status LED
4 Network activity LED
5 Warning LED (one per drive)
6 Disk drive activity LED (one per drive)
7 Drive Bay 1 (shown populated)
8 Drive Bay 2 (shown populated)
9 Drive Bay 3 (shown populated)
10 Drive Bay 4 (shown populated)
11 Drive Bay 5 (shown populated)
12 Drive Bay 6 (shown populated)
13 OCuLink console port (see note 1)
14 Ejector handle retention button
15 Upper ejector handle
16 Lower ejector handle
Notes:
1. An adapter cable (PID UCSX-C-DEBUGCBL) is required to connect the OCuLink port to the transition serial USB and video (SUV) octopus cable.

Figure 3 Cisco UCS X210c M6 Compute Node Front View (Drives and GPU option)
1 U.2/U.3 drive slot 1
2 GPU slot 1
3 GPU slot 2
4 U.2/U.3 drive slot 2
5 Power button/LED
6 Activity LED
7 Health LED
8 Locator LED
9 Console port

COMPUTE NODE STANDARD CAPABILITIES and FEATURES

Table 1 lists the capabilities and features of the base Cisco UCS X210c M6 Compute Node. Details about how to configure the compute node for a listed feature or capability (for example, number of processors, disk drives, or amount of memory) are provided in CONFIGURING the Cisco UCS X210c M6 Compute Node on page 10.

Table 1 Capabilities and Features (Capability/Feature: Description)
Chassis: The Cisco UCS X210c M6 Compute Node mounts in a Cisco UCS X9508 chassis.
CPU:
■ One or two 3rd Gen Intel® Xeon® Scalable Processors (Ice Lake).
■ Each CPU has 8 memory channels with up to 2 DIMMs per channel, for up to 16 DIMMs per CPU.
Chipset: Intel® C621A series chipset
Memory:
■ 32 total 3200-MHz DIMM slots (16 per CPU)
■ Support for Advanced ECC
■ Support for registered ECC DIMMs (RDIMMs)
■ Support for load-reduced DIMMs (LRDIMMs)
■ Support for Intel® Optane™ Persistent Memory Modules (PMem), only in designated slots
■ Up to 8 TB DDR4 DIMM memory capacity (32x 256 GB DIMMs)
■ Up to 12 TB memory capacity (16x 256 GB DIMMs and 16x 512 GB PMem)
Mezzanine Adapter (Rear):
■ An optional Cisco UCS Virtual Interface Card 14825 can occupy the server's mezzanine slot at the bottom of the chassis. A bridge card extends this VIC's 2x 50 Gbps of network connections up to the mLOM slot and out through the mLOM's IFM connectors, bringing the total bandwidth to 100 Gbps per fabric, for a total of 200 Gbps per server.
■ An optional UCS PCIe Mezz card for X-Fabric is also supported in the server's mezzanine slot. This card's I/O connectors link to the Cisco UCS X-Fabric modules for UCS X-Series Gen4 PCIe node access.
mLOM: The modular LAN on motherboard (mLOM) cards (the Cisco UCS VIC 14425, 15420, and 15230) are located at the rear of the compute node.
■ The Cisco UCS VIC 14425/15420 is a Cisco-designed PCI Express (PCIe) based card that supports two 2x25G-KR network interfaces to provide Ethernet communication to the network by means of the Intelligent Fabric Modules (IFMs) in the Cisco UCS X9508 chassis. The Cisco UCS VIC 14425 mLOM can connect to the rear mezzanine adapter card with a bridge connector.
■ The Cisco UCS VIC 15230 is a Cisco-designed PCI Express (PCIe) based card that supports two 2x100G-KR network interfaces to provide Ethernet communication to the network by means of the Intelligent Fabric Modules (IFMs) in the Cisco UCS X9508 chassis.
Mezzanine Adapters (Front): One front mezzanine connector that supports:
■ Up to 6x 2.5-inch SAS and SATA RAID-compatible SSDs
■ Up to 6x 2.5-inch NVMe PCIe drives
■ A mixture of up to six SAS/SATA or NVMe drives
■ A mixture of up to two GPUs and up to two NVMe drives
Note: Drives require a RAID or pass-through controller in the front mezzanine module slot or a front mezzanine GPU module.
Additional Storage: Dual 80 mm SATA 3.0 M.2 cards (up to 960 GB per card) on a boot-optimized hardware RAID controller
Video: Video uses a Matrox G200e video/graphics controller.
■ Integrated 2D graphics core with hardware acceleration
■ DDR4 memory interface supports up to 512 MB of addressable memory (16 MB is allocated by default to video memory)
■ Supports display resolutions up to 1920 x 1200, 32 bpp at 60 Hz
■ Video is available through an OCuLink connector on the front panel. An adapter cable (PID UCSX-C-DEBUGCBL) is required to connect the OCuLink port to the transition serial USB and video (SUV) octopus cable.
Front Panel Interfaces: OCuLink console port. Note that an adapter cable is required to connect the OCuLink port to the transition serial USB and video (SUV) octopus cable.
Power subsystem: Power is supplied from the Cisco UCS X9508 chassis power supplies. The Cisco UCS X210c M6 Compute Node consumes a maximum of 1300 W.
Fans: Integrated in the Cisco UCS X9508 chassis.
Integrated management processor: The built-in Cisco Integrated Management Controller enables monitoring of Cisco UCS X210c M6 Compute Node inventory, health, and system event logs.
Baseboard Management Controller (BMC): ASPEED Pilot IV
ACPI: Advanced Configuration and Power Interface (ACPI) 6.2 Standard Supported. ACPI states S0 and S5 are supported.
There is no support for states S1 through S4.
Front Indicators:
■ Power button and indicator
■ System activity indicator
■ Location button and indicator
Management:
■ Cisco Intersight software (SaaS, Virtual Appliance, and Private Virtual Appliance)
■ Cisco UCS Manager (UCSM) 4.3(2) or later
Fabric Interconnect: Compatible with the Cisco UCS 6454, 64108, and 6536 fabric interconnects
Chassis: Compatible with the Cisco UCS X9508 X-Series Server Chassis

CONFIGURING the Cisco UCS X210c M6 Compute Node

Follow these steps to configure the Cisco UCS X210c M6 Compute Node:
■ STEP 1 CHOOSE BASE Cisco UCS X210c M6 Compute Node SKU, page 11
■ STEP 2 CHOOSE CPU(S), page 12
■ STEP 3 CHOOSE MEMORY, page 16
■ STEP 4 CHOOSE REAR mLOM ADAPTER, page 23
■ STEP 5 CHOOSE OPTIONAL REAR MEZZANINE VIC/BRIDGE ADAPTERS, page 27
■ STEP 6 CHOOSE OPTIONAL FRONT MEZZANINE ADAPTER, page 30
■ STEP 7 CHOOSE OPTIONAL GPU PCIe NODE, page 31
■ STEP 8 CHOOSE OPTIONAL GPUs, page 32
■ STEP 9 CHOOSE OPTIONAL DRIVES, page 33
■ STEP 10 CHOOSE OPTIONAL TRUSTED PLATFORM MODULE, page 39
■ STEP 11 CHOOSE OPERATING SYSTEM AND VALUE-ADDED SOFTWARE, page 40
■ STEP 12 CHOOSE OPTIONAL OPERATING SYSTEM MEDIA KIT, page 43
■ SUPPLEMENTAL MATERIAL, page 44

STEP 1 CHOOSE BASE Cisco UCS X210c M6 Compute Node SKU

Verify the product ID (PID) of the Cisco UCS X210c M6 Compute Node as shown in Table 2.

Table 2 PID of the Base Cisco UCS X210c M6 Compute Node (Product ID (PID): Description)
UCSX-210C-M6: Cisco UCS X210c M6 Compute Node 2S Intel 3rd Gen CPU without CPU, memory, drive bays, drives, VIC adapter, or mezzanine adapters (ordered as a UCS X9508 chassis option)
UCSX-210C-M6-U: Cisco UCS X210c M6 Compute Node 2S Intel 3rd Gen CPU without CPU, memory, drive bays, drives, VIC adapter, or mezzanine adapters (ordered standalone)

A base Cisco UCS X210c M6 Compute Node ordered in Table 2 does not include any components or options; they must be selected during product ordering. Follow the steps on the following pages to order the components required in a functional compute node, such as:
■ CPUs
■ Memory
■ Cisco storage RAID or pass-through controller with drives (or blank, for no local drive support)
■ SAS, SATA, NVMe, or M.2 drives
■ Cisco adapters (such as the 14000 series VIC, 15000 series VIC, or Bridge)

STEP 2 CHOOSE CPU(S)

The standard CPU features are:
■ 3rd Gen Intel® Xeon® Scalable Processors (Ice Lake)
■ Intel® C621A series chipset
■ Cache size of up to 60 MB
■ Up to 40 cores

Select CPUs
The available CPUs are listed in Table 3. See Table 4 on page 14 for CPU suffix notations.
Table 3 Available CPUs Product ID (PID) Clock Freq (GHz) Power (W) Cache Size (MB) Cores UPI1 Links (GT/s) Highest DDR4 DIMM Clock Support (MHz)2 PMem Support 8000 Series Processors UCSX-CPU-I8380 2.3 270 60 40 3 at 11.2 3200 Yes UCSX-CPU-I8368 2.4 270 57 38 3 at 11.2 3200 Yes UCSX-CPU-I8362 2.8 265 48 32 3 at 11.2 3200 Yes UCSX-CPU-I8360Y 2.4 250 54 36 3 at 11.2 3200 Yes UCSX-CPU-I8358P 2.6 240 54 32 3 at 11.2 3200 Yes UCSX-CPU-I8358 2.6 250 48 32 3 at 11.2 3200 Yes UCSX-CPU-I8352M 2.3 185 48 32 3 at 11.2 3200 Yes UCSX-CPU-I8352Y 2.2 205 48 32 3 at 11.2 3200 Yes UCSX-CPU-I8352V 2.1 195 54 36 3 at 11.2 2933 Yes UCSX-CPU-I8352S 2.2 205 48 32 3 at 11.2 3200 Yes UCSX-CPU-I8351N3 2.4 225 54 36 0 2933 Yes 6000 Series Processors UCSX-CPU-I6354 3.0 205 39 18 3 at 11.2 3200 Yes UCSX-CPU-I6348 2.6 235 42 28 3 at 11.2 3200 Yes UCSX-CPU-I6346 3.1 205 36 16 3 at 11.2 3200 Yes UCSX-CPU-I6342 2.8 230 36 24 3 at 11.2 3200 Yes UCSX-CPU-I6338T 2.1 165 36 24 3 at 11.2 3200 Yes UCSX-CPU-I6338N 2.2 185 48 32 3 at 11.2 2666 Yes UCSX-CPU-I6338 2.0 205 48 32 3 at 11.2 3200 Yes UCSX-CPU-I6336Y 2.4 185 36 24 3 at 11.2 3200 Yes UCSX-CPU-I6334 3.6 165 18 8 3 at 11.2 3200 Yes UCSX-CPU-I6330N 2.2 165 48 28 3 at 11.2 2666 Yes UCSX-CPU-I6330 2.0 205 42 28 3 at 11.2 2933 Yes UCSX-CPU-I6326 2.9 185 24 16 3 at 11.2 3200 Yes Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 13 UCSX-CPU-I6312U4 2.4 185 36 24 0 3200 Yes UCSX-CPU-I6314U5 2.3 205 48 32 0 3200 Yes 5000 Series Processors UCSX-CPU-I5320T 2.3 150 30 20 3 at 11.2 2933 Yes UCSX-CPU-I5320 2.2 185 39 26 3 at 11.2 2933 Yes UCSX-CPU-I5318Y 2.1 165 36 24 3 at 11.2 2933 Yes UCSX-CPU-I5318S 2.1 165 36 24 3 at 11.2 2933 Yes UCSX-CPU-I5318N 2.1 150 36 24 3 at 11.2 2666 Yes UCSX-CPU-I5317 3.0 150 18 12 3 at 11.2 2933 Yes UCSX-CPU-I5315Y 3.2 140 12 8 3 at 11.2 2933 Yes 4000 Series Processors UCSX-CPU-I4316 2.3 150 30 20 2 at 10.4 2666 No UCSX-CPU-I4314 2.4 135 24 16 2 at 10.4 2666 Yes UCSX-CPU-I4310T 2.3 105 15 10 2 at 10.4 2666 No UCSX-CPU-I4310 2.1 120 18 12 2 at 10.4 2666 No UCSX-CPU-I4309Y 2.8 105 12 8 2 at 10.4 2666 No Notes: 1. UPI = Ultra Path Interconnect 2. If higher or lower speed DIMMs are selected than what is shown in Table 5 on page 17 for a given CPU speed, the DIMMs will be clocked at the lowest common denominator of CPU clock and DIMM clock. 3. The maximum number of UCSX-CPU-I8351N CPUs is one 4. The maximum number of UCSX-CPU-I6312U CPUs is one 5. The maximum number of UCSX-CPU-I6314U CPUs is one Table 3 Available CPUs Product ID (PID) Clock Freq (GHz) Power (W) Cache Size (MB) Cores UPI1 Links (GT/s) Highest DDR4 DIMM Clock Support (MHz)2 PMem Support Table 4 CPU Suffixes CPU Suffix Description Features N Networking Optimized Optimized for use in networking applications like L3 forwarding, 5G UPF, OVS DPDK, VPP FIB router, VPP IPsec, web server/NGINX, vEPC, vBNG, and vCMTS. 
SKUs have higher base frequency with lower TDPs to enable best performance per watt.
P (Cloud Optimized): SKU specifically designed for cloud IaaS environments to deliver higher frequencies at constrained TDPs
V (Cloud Optimized): SKUs specifically designed for cloud environments to deliver high rack density and maximize VM/cores per TCO$
T (High T case): SKUs designed for Network Environment-Building System (NEBS) environments
U (1-socket only): Optimized for targeted platforms adequately served by the cores, memory bandwidth, and I/O capacity available from a single processor
S (Max SGX enclave size): Supports the maximum SGX enclave size (512 GB) to enhance and protect the most sensitive portions of a workload or service
M (Media and AI optimized): Media, AI, and HPC segment; optimized for lower TDP and higher frequencies, delivering better performance per watt
Y (Speed Select, Performance Profile): Intel® Speed Select Technology provides the ability to set a guaranteed base frequency for a specific number of cores and assign this performance profile to a specific application/workload to guarantee performance requirements. It also provides the ability to configure settings during runtime and provides additional frequency profile configuration opportunities.

Supported Configurations
(1) DIMM-only configurations:
■ Select one or two identical CPUs listed in Table 3 on page 12
(2) DIMM/PMem mixed configurations:
■ You must select two identical CPUs listed in Table 3 on page 12
(3) Configurations with NVMe PCIe drives:
■ Select one or two identical CPUs listed in Table 3 on page 12
(4) Configurations with GPUs:
■ Select one or two identical CPUs listed in Table 3 on page 12
(5) One-CPU configuration
— Choose one CPU from any one of the rows of Table 3 Available CPUs, page 12
(6) Two-CPU configuration
— Choose two identical CPUs from any one of the rows of Table 3 Available CPUs, page 12

NOTE: You cannot have two I8351N, two I6312U, or two I6314U CPUs in a two-CPU configuration.
NOTE: If you configure a server with one I8351N, I6312U, or I6314U CPU, you cannot later upgrade to a 2-CPU system with two of these CPUs.

STEP 3 CHOOSE MEMORY

The available memory for the Cisco UCS X210c M6 Compute Node is as follows:
■ Clock speed: 3200 MHz
■ Ranks per DIMM: 1, 2, 4, or 8
■ Operational voltage: 1.2 V
■ Registered ECC DDR4 DIMMs (RDIMMs), load-reduced DIMMs (LRDIMMs), or Intel® Optane™ Persistent Memory Modules (PMem)

Memory is organized with eight memory channels per CPU, with up to two DIMMs per channel, as shown in Figure 4.

Figure 4 Cisco UCS X210c M6 Compute Node Memory Organization: 2 CPUs, 8 memory channels per CPU (channels A through H, with slot 1 and slot 2 per channel), up to 2 DIMMs per channel, 32 DIMMs total (16 DIMMs per CPU), 8 TB maximum memory (with 256 GB DIMMs). A sketch of this capacity arithmetic appears below.

Select DIMMs and Memory Mirroring
Select the memory configuration and whether or not you want the memory mirroring option.
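Before selecting specific DIMMs from Table 5, note that the slot counts above and the DIMM clock rule from the note under Table 3 reduce to simple arithmetic. The following minimal Python sketch is a hypothetical illustration only (not a Cisco tool); the function names and example values are assumptions:

```python
# Minimal sketch: capacity and effective-clock arithmetic for the X210c M6
# memory subsystem described above. Example values are illustrative only.

CPUS = 2                 # two 3rd Gen Intel Xeon Scalable sockets
CHANNELS_PER_CPU = 8     # channels A through H
DIMMS_PER_CHANNEL = 2    # slots 1 and 2

def max_dimm_slots() -> int:
    # 2 CPUs x 8 channels x 2 slots = 32 DIMM slots
    return CPUS * CHANNELS_PER_CPU * DIMMS_PER_CHANNEL

def max_capacity_gb(dimm_size_gb: int) -> int:
    # Maximum DDR4 capacity with every slot populated by the same DIMM size
    return max_dimm_slots() * dimm_size_gb

def effective_dimm_clock(cpu_max_mhz: int, dimm_mhz: int) -> int:
    # Per the note under Table 3: DIMMs are clocked at the lowest common
    # denominator of the CPU-supported clock and the DIMM clock.
    return min(cpu_max_mhz, dimm_mhz)

if __name__ == "__main__":
    print(max_dimm_slots())                  # 32
    print(max_capacity_gb(256))              # 8192 GB = 8 TB with 256 GB LRDIMMs
    print(effective_dimm_clock(2933, 3200))  # 2933 MHz for a 2933-MHz CPU with 3200-MHz DIMMs
```

With 32 slots of 256 GB LRDIMMs this reproduces the 8 TB maximum cited above, and pairing a 2933-MHz CPU with 3200-MHz DIMMs yields 2933 MHz operation, consistent with Table 6.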
The available memory DIMMs and mirroring option are listed in Table 5. NOTE: When memory mirroring is enabled, the memory subsystem simultaneously writes identical data to two channels. If a memory read from one of the channels returns incorrect data due to an uncorrectable memory error, the system automatically retrieves the data from the other channel. A transient or soft error in one channel does not affect the mirrored data, and operation continues unless there is a simultaneous error in exactly the same location on a DIMM and its mirrored DIMM. Memory mirroring reduces the amount of memory available to the operating system by 50% because only one of the two populated channels provides data. Table 5 Available DDR4 DIMMs Product ID (PID) PID Description Voltage Ranks /DIMM 3200-MHz DIMMs UCSX-MR-X16G1RW 16 GB RDIMM SRx4 3200 (8Gb) 1.2 V 1 UCSX-MR-X32G1RW 32GB RDIMM SRx4 3200 (16Gb) 1.2 V 1 UCSX-MR-X32G2RW 32 GB RDIMM DRx4 3200 (8Gb) 1.2 V 2 UCSX-MR-X64G2RW 64 GB RDIMM DRx4 3200 (16Gb) 1.2 V 2 UCSX-ML-128G4RW 128 GB LRDIMM QRx4 3200 (16Gb) 1.2 V 4 UCSX-ML-256G8RW 256 GB LRDIMM 8Rx4 3200 (16Gb) 1.2 V 8 Intel® Optane™ Persistent Memory (PMem)1 UCSX-MP-128GS-B0 Intel® OptaneTM Persistent Memory, 128GB, 3200 MHz UCSX-MP-256GS-B0 Intel® OptaneTM Persistent Memory, 256 GB, 3200 MHz UCSX-MP-512GS-B0 Intel® OptaneTM Persistent Memory, 512 GB, 3200 MHz DIMM Blank2 UCS-DIMM-BLK UCS DIMM Blank Intel® Optane™ Persistent Memory (PMem) Operational Modes UCS-DCPMM-AD App Direct Mode UCS-DCPMM-MM Memory Mode Memory Mirroring Option N01-MMIRROR Memory mirroring option Notes: 1. All 3rd Generation Intel® Xeon® Scalable Processors (Ice Lake) support PMem products, except 4309Y, 4310, 4310T, and 4316 processor. 2. Any empty DIMM slot must be populated with a DIMM blank to maintain proper cooling airflow. 18 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node Approved Configurations (1) 1-CPU configuration without memory mirroring: ■ Select from 1 to 16 DIMMs. — 1, 2, 4, 6, 8, 12, or 16 DIMMs allowed — 3, 5, 7, 9, 10, 11, 13, 14, 15 DIMMs not allowed — DIMMs for both CPUs must be configured identically. The DIMMs will be placed by the factory as shown in the following table. #DIMMs 1 (A1) 2 (A1, E1) 4 (A1, C1); (E1, G1) 6 (A1, C1); (D1, E1); (G1, H1) 8 (A1, C1); (D1, E1); (G1, H1); (B1, F1) 12 (A1, C1); (D1, E1); (G1, H1); (A2, C2); (D2, E2); (G2, H2) 16 (A1, B1); (C1, D1); (E1, F1); (G1, H1); (A2, B2); (C2, D2); (E2, F2); (G2, H2) (2) 1-CPU configuration with memory mirroring: ■ Select 2, 4, 8, 12, or 16 DIMMs per CPU (DIMMs for all CPUs must be configured identically). In addition, the memory mirroring option (N01-MMIRROR) as shown in Table 5 on page 17 must be selected. The DIMMs will be placed by the factory as shown in the following table. # DIMMs Per CPU CPU 1 DIMM Placement in Channels (for identical ranked DIMMs) 2 (A1, E1) 4 (A1, C1); (E1, G1) 8 (A1, C1); (D1, E1); (G1, H1); (B1, F1) 12 (A1, C1); (D1, E1); (G1, H1); (A2, C2); (D2, E2); (G2, H2) 16 (A1, B1); (C1, D1); (E1, F1); (G1, H1); (A2, B2); (C2, D2); (E2, F2); (G2, H2) ■ Select the memory mirroring option (N01-MMIRROR) as shown in Table 5 on page 17. CPU 1 DIMM Placement in Channels (for identically ranked DIMMs) Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 19 (3) 2-CPU configuration without memory mirroring: ■ Select from 1 to 16 DIMMs per CPU. 
— 1, 2, 4, 6, 8, 12, or 16 DIMMs allowed — 3, 5, 7, 9, 10, 11, 13, 14, 15 DIMMs not allowed — DIMMs for both CPUs must be configured identically. The DIMMs will be placed by the factory as shown in the following tables. #DIMMs 1 (A1) (A1) 2 (A1, E1) (A1, E1) 4 (A1, C1); (E1, G1) (A1, C1); (E1, G1) 6 (A1, C1); (D1, E1); (G1, H1) (A1, C1); (D1, E1); (G1, H1) 8 (A1, C1); (D1, E1); (G1, H1); (B1, F1) (A1, C1); (D1, E1); (G1, H1); (B1, F1) 12 (A1, C1); (D1, E1); (G1, H1); (A2, C2); (D2, E2); (G2, H2) (A1, C1); (D1, E1); (G1, H1); (A2, C2); (D2, E2); (G2, H2) 16 (A1, B1); (C1, D1); (E1, F1); (G1, H1); (A2, B2); (C2, D2); (E2, F2); (G2, H2) (A1, B1); (C1, D1); (E1, F1); (G1, H1); (A2, B2); (C2, D2); (E2, F2); (G2, H2) (4) 2-CPU configuration with memory mirroring: ■ Select 2, 4, 8, 12, or 16 DIMMs per CPU (DIMMs for all CPUs must be configured identically). In addition, the memory mirroring option (N01-MMIRROR) as shown in Table 5 on page 17 must be selected. The DIMMs will be placed by the factory as shown in the following tables. # DIMMs Per CPU CPU 1 DIMM Placement in Channels (for identical ranked DIMMs) 2 (A1, E1) (A1, E1) 4 (A1, C1); (E1, G1) (A1, C1); (E1, G1) 8 (A1, C1); (D1, E1); (G1, H1); (B1, F1) (A1, C1); (D1, E1); (G1, H1); (B1, F1) 12 (A1, C1); (D1, E1); (G1, H1); (A2, C2); (D2, E2); (G2, H2) (A1, C1); (D1, E1); (G1, H1); (A2, C2); (D2, E2); (G2, H2) 16 (A1, B1); (C1, D1); (E1, F1); (G1, H1); (A2, B2); (C2, D2); (E2, F2); (G2, H2) (A1, B1); (C1, D1); (E1, F1); (G1, H1); (A2, B2); (C2, D2); (E2, F2); (G2, H2) ■ Select the memory mirroring option (N01-MMIRROR) as shown in Table 5 on page 17. CPU 1 DIMM Placement in Channels (for identically ranked DIMMs) CPU 2 DIMM Placement in Channels (for identically ranked DIMMs) CPU 2 DIMM Placement in Channels (for identically ranked DIMMs) NOTE: System performance is optimized when the DIMM type and quantity are equal for both CPUs, and when all channels are filled equally across the CPUs in the server. Table 6 3200-MHz DIMM Memory Speeds with Different 3rd Gen Intel® Xeon® Scalable Processors (Ice Lake) DIMM and CPU Frequencies (MHz) DPC LRDIMM (8Rx4)- 256 GB (MHz) LRDIMM (QRx4) – 128 GB (MHz) RDIMM (2Rx4) – 64 GB (MHz) RDIMM (DRx4) – 32 GB (MHz) RDIMM (SRx4) – 16 GB (MHz) 1.2 V 1.2 V 1.2 V 1.2 V 1.2 V DIMM = 3200 CPU = 3200 1DPC 3200 3200 3200 3200 3200 2DPC 3200 3200 3200 3200 3200 DIMM = 3200 CPU = 2933 1DPC 2933 2933 2933 2933 2933 2DPC 2933 2933 2933 2933 2933 DIMM = 3200 CPU = 2666 1DPC 2666 2666 2666 2666 2666 2DPC 2666 2666 2666 2666 2666 20 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node DIMM Rules ■ Allowed DIMM count for 1 CPU: ■ Minimum DIMM count = 1; Maximum DIMM count = 16 ■ 1, 2, 4, 6, 8, 12, or 16 DIMMs allowed ■ 3, 5, 7. 9, 10, 11, 13, 14, or 15 DIMMs not allowed. ■ Allowed DIMM count for 2 CPUs ■ Minimum DIMM count = 2; Maximum DIMM count = 32 ■ 2, 4, 8, 12, 16, 24, or 32 DIMMs allowed ■ 6, 10, 14, 18, 20, 22, 26, 28, or 30 DIMMs not allowed. ■ DIMM Mixing: ■ Mixing different types of DIMM (RDIMM with any type of LRDIMM or 3DS LRDIMM with non-3DS LRDIMM) is not supported within a server. ■ Mixing RDIMM with RDIMM types is allowed if they are mixed in same quantities, in a balanced configuration. ■ Mixing 16 GB, 32 GB, and 64 GB RDIMMs is supported. 
■ 128 GB and 256 GB LRDIMMs cannot be mixed with other RDIMMs Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 21 ■ 128 GB non-3DS LRDIMMs cannot be mixed with 256 GB 3DS LRDIMMs NOTE: DIMM mixing is not allowed when PMem are installed; in these cases, all DIMMs must be the same type and size. See the detailed mixing DIMM configurations at the following link Cisco UCS X210c M6 Compute Node Memory Guide Table 7 Intel® Optane™ Persistent Memory Modes Intel® Optane™ Persistent Memory Modes App Direct Mode: PMem operates as a solid-state disk storage device. Data is saved and is non-volatile. Both PMem and DIMM capacities count towards the CPU capacity limit. Memory Mode: PMem operates as a 100% memory module. Data is volatile and DRAM acts as a cache for PMem. Only the PMem capacity counts towards the CPU capacity limit. This is the factory default mode. Table 8 3rd Gen Intel® Xeon® Scalable Processors (Ice Lake) DIMM and PMem1 Physical Configuration DIMM + PMem Count CPU 1 or CPU 2 ICX: IMC2 ICX: IMC3 ICX: IMC1 ICX: IMC0 Chan 0 (F) Chan 1 (E) Chan 0 (H Chan 1 (G) Chan 0 (C) Chan 1 (D) Chan 0 (A) Chan 1 (B) Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 Slot 2 Slot 1 4 + 42 PMem DIMM PMem DIMM DIMM PMem DIMM PMem 8 + 13 DIMM DIMM DIMM DIMM DIMM DIMM PMem DIMM DIMM 8 + 44 DIMM DIMM PMem DIMM DIMM PMem PMem DIMM DIMM PMem DIMM DIMM 8 + 85 DIMM PMem DIMM PMem DIMM PMem DIMM PMem PMem DIMM PMem DIMM PMem DIMM PMem DIMM NOTE: AD = App Direct Mode, MM = Memory Mode 22 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node See Table 7 for PMem memory modes. For detailed Intel PMem configurations, refer to https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/210c-m6/install /b-cisco-ucs-x210c-m6-install.html For detailed DIMM/PMem informations, refer to Cisco UCS X210c M6 Compute Node Memory Guide Notes: 1. All systems must be fully populated with two CPUs when using PMem at this time. 2. AD, MM 3. AD 4. AD, MM 5. AD, MM Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 23 STEP 4 CHOOSE REAR mLOM ADAPTER The Cisco UCS X210c M6 Compute Node must be ordered with a Cisco VIC mLOM Adapter. The adapter is located at the back and can operate in a single-CPU or dual-CPU configuration. Table 9 shows the mLOM adapter choices. Table 9 mLOM Adapters Product ID (PID) Description Connection type UCSX-V4-Q25GML UCS VIC 14425 4x25G mLOM for X Compute Node mLOM UCSX-ML-V5D200GV2 Cisco UCS VIC 15230 modular LOM w/Secure Boot X Compute Node mLOM UCSX-ML-V5Q50G UCS VIC 15420 4x25G secure boot mLOM for X Compute Node mLOM NOTE: ■ VIC 14425, 15420, or 15230 are supported with both X9108-IFM-25G and X9108-IFM-100G. VIC 14425 and VIC 15420 will operate at 4x 25G with both X9108-IFM-25G and X9108-IFM-100G. While, VIC 15231 will operate at 4x 25G with X9108-IFM-25G and at 2x 100G with X9108-IFM-100G. ■ The mLOM adapter is mandatory for the Ethernet connectivity to the network by means of the IFMs and has x16 PCIe Gen3 connectivity with Cisco UCS VIC 14425, x16 Gen4 connectivity with Cisco UCS VIC 15230, and x16 Gen4 connectivity with Cisco UCS VIC 15420 towards the CPU1. ■ There is no backplane in the Cisco UCS X9508 chassis; thus the compute nodes directly connect to the IFMs using Orthogonal Direct connectors. ■ Figure 5 shows the location of the mLOM and rear mezzanine adapters on the Cisco UCS X210c M6 Compute Node. 
The bridge adapter connects the mLOM adapter to the rear mezzanine adapter.

Figure 5 Location of mLOM and Rear Mezzanine Adapters (rear mezzanine adapter, bridge adapter, and mLOM adapter)

Figure 6 shows the network connectivity from the mLOM out to the 25G IFMs.
Figure 6 Network Connectivity, 25G IFMs (the mLOM and mezzanine adapter ASICs connect through the bridge adapter and the mLOM OD connectors to IFM-1 and IFM-2 over 25G-KR lanes; each IFM connects onward to a fabric interconnect)

Figure 7 shows the network connectivity from the mLOM out to the 100G IFMs.
Figure 7 Network Connectivity, 100G IFMs (the mLOM adapter ASIC connects through the mLOM OD connectors to IFM-1 and IFM-2 over 100G-KR4 lanes, with the mezzanine slot empty; each IFM connects onward to a fabric interconnect)

STEP 5 CHOOSE OPTIONAL REAR MEZZANINE VIC/BRIDGE ADAPTERS

The Cisco UCS X210c M6 Compute Node has one rear mezzanine adapter connector, which can hold a UCS VIC 14825/15422 mezzanine card that can be used as a second VIC card on the compute node for network connectivity or as a connector to the X440p PCIe node via X-Fabric modules. The same mezzanine slot on the compute node can also accommodate a pass-through mezzanine adapter for X-Fabric, which enables compute node connectivity to the X440p PCIe node. Refer to Table 10 for supported adapters.

Table 10 Available Rear Mezzanine Adapters (Product ID (PID): PID Description; CPUs Required; Connector Type)
Cisco VIC Card
UCSX-V4-Q25GME: UCS VIC 14825 (note 1) 4x25G mezz for X Compute Node; 2 CPUs required; rear mezzanine connector on motherboard
UCSX-ME-V5Q50G: UCS VIC 15422 (note 2) 4x25G secure boot mezz for X Compute Node; 2 CPUs required; rear mezzanine connector on motherboard
UCSX-V4-PCIME: UCS PCI Mezz Card for X-Fabric; 2 CPUs required; rear mezzanine connector on motherboard
Cisco VIC Bridge Card
UCSX-V4-BRIDGE (note 3): UCS VIC 14000 bridge to connect mLOM and mezz on X Compute Node; 2 CPUs required; one connector on mezz card and one connector on mLOM card
UCSX-V5-BRIDGE (note 4): UCS VIC 15000 bridge to connect mLOM and mezz on X Compute Node (this bridge connects the Cisco VIC 15420 mLOM and the Cisco VIC 15422 mezz for the X210c M6 Compute Node); 2 CPUs required; one connector on mezz card and one connector on mLOM card

NOTE: The UCSX-V4-PCIME rear mezzanine card for X-Fabric has PCIe Gen4 x16 connectivity towards each of CPU1 and CPU2. Additionally, the UCSX-V4-PCIME also provides two PCIe Gen4 x16 connections to each X-Fabric. This rear mezzanine card enables connectivity from the X210c M6 Compute Node to the X440p PCIe node.

Notes:
1. Cisco UCS VIC 14825 can only be used with the Cisco UCS VIC 14425 mLOM
2. Cisco UCS VIC 15422 can only be used with the Cisco UCS VIC 15420 mLOM
3. Included with the Cisco VIC 14825
4.
Included with the Cisco VIC 15422 Table 11 Throughput Per UCS X210c M6 Server X210c M6 Compute Node FI-6536 + X9108-IFM-100G FI-6536/6400 + X9108-IFM-25G FI-6536 + X9108-IFM-25G/100G or FI-6400 + X9108-IFM-25G FI-6536 + X9108-IFM-25G/100G or FI-6400 + X9108-IFM-25G x210c configuration VIC 15230 VIC 15230 VIC 14425/15420 VIC 14425 + VIC 14825 Throughput per node 200G (100G per IFM) 100G (50G per IFM) 100G (50G per IFM) 200G (100G per IFM) vNICs needed for max BW 2 2 2 4 KR connectivity from VIC to each IFM 1x 100GKR 2x 25GKR 2x 25GKR 4x 25GKR Single vNIC throughput on VIC 100G (1x100GKR) 50G (2x25G KR) 50G (2x25G KR) 50G (2x25G KR) 50G (2x25G KR) Max Single flow BW per vNIC 100G 25G 25G 25G 25G Single vHBA throughput on VIC 100G 50G 50G 50G 50G 28 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node Supported Configurations ■ One of the mLOM VIC from Table 9 is always required . ■ If a UCSX-V4-Q25GME rear mezzanine VIC card is installed, a UCSX-V4-BRIDGE VIC bridge card is included and connects the mLOM to the mezzanine adapter. ■ If a UCSX-ME-V5Q50G rear mezzanine VIC card is installed, a UCSX-V5-BRIDGE VIC bridge card is included and connects the mLOM to the mezzanine adapter. ■ The UCSX-V4-Q25GME rear mezzanine card has Ethernet connectivity to the IFM using the UCSX-V4-BRIDGE and has a PCIE Gen3 x16 connectivity towards CPU2. Additionally, the UCSX-V4-Q25GME also provides two PCIE Gen4 x16 to each X-fabric. ■ The UCSX-ME-V5Q50G rear mezzanine card has Ethernet connectivity to the IFM using the UCSX-V5-BRIDGE and has a PCIE Gen4 x16 connectivity towards CPU2. Additionally, the UCSX-ME-V5Q50G also provides two PCIe Gen4 x16 to each X-fabric. Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 29 ■ All the connections to Cisco UCS X-Fabric 1 and Cisco UCS X-Fabric 2 are through the Molex Orthogonal Direct (OD) connector on the mezzanine card. ■ The rear mezzanine card has 32 x16 PCIe lanes to each Cisco UCS X-Fabric for I/O expansion to enable resource consumption from the PCIe resource nodes. 30 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node STEP 6 CHOOSE OPTIONAL FRONT MEZZANINE ADAPTER The Cisco UCS X210c M6 Compute Node has one front mezzanine connector that can accommodate one of the following mezzanine cards: ■ Pass-through controller for up to 6 U.2/U.3 NVMe drives ■ RAID controller (RAID levels 0, 1, 5, 6, 10, and 50) for 6 SAS/SATA/U.3 NVMe drives or up to 4 U.2 NVMe drives (drive slots 1-4) and SAS/SATA/U.3 NVMe (drive slots 5-6) ■ GPU Front Mezz to Support up to 2 U.2/U.3 NVMe drives and 2 NVIDIA T4 GPUs The Cisco UCS X210c M6 Compute Node can be ordered with or without the front mezzanine adapter. Refer to Table 12 Available Front Mezzanine Adapters. Table 12 Available Front Mezzanine Adapters Product ID(PID) PID Description Connector Type UCSX-X10C-PT4F Cisco UCS X210c M6 Compute Node compute pass through controller for up to 6 NVMe drives Front Mezzanine UCSX-X10C-RAIDF Cisco UCS X210c M6 Compute Node RAID controller w/4GB Cache, with LSI 3900 for up to 6 SAS/SATA drives or up to 4 NVMe drives (SAS/SATA and NVMe drives can be mixed). 
Front Mezzanine UCSX-X10C-GPUFM UCS X210c M6 Compute Node Front Mezz to support up to 2 NVIDIA T4 GPUs and 2 NVMe drives Front Mezzanine NOTE: Only one Front Mezzanine connector or Front GPU can be selected per Server Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 31 STEP 7 CHOOSE OPTIONAL GPU PCIe NODE Table 13 GPU PCIe Node Product ID(PID) PID Description UCSX-440P UCS X-Series Gen4 PCIe node Refer to Table 13 for GPU PCIe Node NOTE: ■ If UCSX-440P-D is selected, then rear mezzanine is required. 32 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node STEP 8 CHOOSE OPTIONAL GPUs Select GPU Options The available Compute node GPU options are listed in Table 14 Table 14 Available PCIe GPU Card supported on the Compute Node Front Mezz GPU Product ID (PID) PID Description UCSX-GPU-T4-MEZZ NVIDIA T4 GPU PCIE 75W 16GB, MEZZ form factor . The available PCIe node GPU options are listed in Table 15. Table 15 Available PCIe GPU Cards supported on the PCIe Node GPU Product ID (PID) PID Description UCSX-GPU-T4-161 NVIDIA T4 PCIE 75W 16GB UCSX-GPU-A162 NVIDIA A16 PCIE 250W 4X16GB UCSX-GPU-A402 TESLA A40 RTX, PASSIVE, 300W, 48GB UCSX-GPU-A100-802 TESLA A100, PASSIVE, 300W, 80GB3 Notes: 1. The maximum number of GPUs per node is 4 2. The maximum number of GPUs per node is 2 3. Required power cables are included with the riser cards Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 33 STEP 9 CHOOSE OPTIONAL DRIVES The Cisco UCS X210c M6 Compute Node can be ordered with or without drives. The drive options are: ■ One to six 2.5-inch small form factor SAS/SATA SSDs or PCIe U.2/U.3 NVMe drives — Hot-pluggable — Sled-mounted ■ Up to two SATA M.2 RAID modules can be selected to be installed in the 6GB/s SATA boot-optimized M.2 RAID controller. The boot-optimized RAID controller plugs into the motherboard. NOTE: It is recommended that M.2 SATA SSDs be used as boot-only devices. Select one or two drives from the list of supported drives available in Table 16. Table 16 Available Drive Options Product ID (PID) Description Drive Type Speed Performance/ Endurance/ Value Size SAS/SATA SSDs1,2,3 Self-Encrypted Drives (SED) UCSX-SD38TBKNK9 3.8 TB Enterprise value SAS SSD (1X DWPD, SED) SAS/ SED Ent. Value 1X 3.8 TB UCSX-SD76TBKANK9 7.6TB Enterprise value SAS SSD (1 DWPD, SED-FIPS) SAS/ SED Ent. Value 1X 7.6 TB UCSX-SD38TBKANK9 3.8TB 2.5in Enterprise value 12G SAS SSD (1DWPD, SED-FIPS) SAS/ SED Ent. Value 1X 3.8 TB UCSX-SD960GM2NK9 960GB Enterprise value SATA SSD (1X , SED) SAS/ SED Ent. Perf 3X 960 GB UCSX-SD76TEM2NK9 7.6TB Enterprise value SATA SSD (1X, SED) SAS/ SED Ent. Perf 3X 7.6 TB UCSX-SD16TBKANK9 1.6TB 2.5 Enterprise performance 12GSAS SSD(3DWPD,SED-FIPS) SAS/ SED Ent. Perf 3X 1.6 TB Enterprise Performance SSDs (high endurance, supports up to 3X DWPD (drive writes per day)) UCSX-SD19T63X-EP 1.9 TB 2.5 inch Enterprise performance 6G SATA SSD(3X endurance) SATA 6G Ent. Perf 3X 1.9 TB UCSX-SD480G63X-EP 480 GB 2.5in Enterprise performance 6G SATA SSD (3X endurance) SATA 6G Ent. Perf 3X 480 GB UCSX-SD960G63X-EP 960 GB 2.5 inch Enterprise performance 6G SATA SSD (3X endurance) SATA 6G Ent. Perf 3X 960 GB UCSX-SD19TBM3X-EP 1.9TB 2.5in Enterprise performance 6GSATA SSD(3X endurance) SATA 6G Ent. Perf 3X 1.9 TB 34 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node UCSX-SD960GBM3XEP 960GB 2.5in Enterprise performance 6GSATA SSD(3X endurance) SATA 6G Ent. 
Perf 3X 960 GB UCSX-SD480GBM3XEP 480GB 2.5in Enterprise Performance 6GSATA SSD(3X endurance) SATA 6G Ent. Perf 3X 480 GB UCSX-SD800GK3X-EP 800 GB 2.5in Enterprise Performance 12G SAS SSD(3X endurance) SAS 12G Ent. Perf 3X 800 GB UCSX-SD32TKA3X-EP 3.2TB 2.5in Enter Perf 12G SAS Kioxia G2 SSD (3X) SAS 12G Ent. Perf 3X 3.2 TB UCSX-SD16TKA3X-EP 1.6TB 2.5in Enterprise Performance 12G SAS SSD(3X endurance) SAS 12G Ent. Perf 3X 1.6 TB Enterprise Value SSDs (Low endurance, supports up to 1X DWPD (drive writes per day)) UCSX-SD960GK1X-EV 960 GB 2.5 inch Enterprise Value 12G SAS SSD SAS 12G Ent. Value 960 GB UCSX-SD15TKA1X-EV 15.3TB 2.5in Enter Value 12G SAS Kioxia G2 SSD SAS 12G Ent. Value 15.3 TB UCSX-SD76TKA1X-EV 7.6TB 2.5 inch Enterprise Value 12G SAS SSD SAS 12G Ent. Value 7.6 TB UCSX-SD38TKA1X-EV 3.8TB 2.5in Enter Value 12G SAS Kioxia G2 SSD SAS 12G Ent. Value 3.8 TB UCSX-SD19TKA1X-EV 1.9TB 2.5 inch Enterprise Value 12G SAS SSD SAS 12G Ent. Value 1.9 TB UCSX-SD480G6I1XEV 480 GB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 480 GB UCSX-SD960G6I1XEV 960 GB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 960 GB UCSX-SD38T6I1X-EV 3.8 TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 3.8 TB UCSX-SD19T61X-EV 1.9 TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 1.9 TB UCSX-SD38T61X-EV 3.8 TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 3.8 TB UCSX-SD19T6S1X-EV 1.9TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 1.9 TB UCSX-SD76T6S1X-EV 7.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 7.6 TB UCSX-SD960G6S1XEV 960GB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 960 GB UCSX-SD76TBM1X-EV 7.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 7.6 TB UCSX-SD38TBM1X-EV 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 3.8 TB UCSX-SD19TBM1X-EV 1.9TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 1.9 TB UCSX-SD16TBM1X-EV 1.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 1.6 TB UCSX-SD960GBM1XEV 960GB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 960 GB UCSX-SD480GBM1XEV 480 GB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 480 GB UCSX-SD240GBM1XEV 240GB 2.5in Enter Value 6G SATA Micron G2 SSD SATA 6G Ent. Value 240 GB UCSX-SD19TM1X-EV 1.9TB 2.5in Enter Value 6G SATA Micron G1 SSD SATA 6G Ent. Value 1.9 TB UCSX-SDB960SA1V 960GB 2.5in 6G SATA Enter Value 1X Samsung G1PM893A SSD SATA 6G Ent. Value 960 GB UCSX-SDB1T9SA1V 1.9TB 2.5in 6G SATA Enter Value 1X Samsung G1PM893A SSD SATA 6G Ent. Value 1.9 TB UCSX-SDB3T8SA1V 3.8TB 2.5in 6G SATA Enter Value 1X Samsung G1PM893A SSD SATA 6G Ent. Value 3.8 TB Table 16 Available Drive Options (continued) Product ID (PID) Description Drive Type Speed Performance/ Endurance/ Value Size Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 35 UCSX-SDB7T6SA1V 7.6TB 2.5in 6G SATA Enter Value 1X Samsung G1PM893A SSD SATA 6G Ent. Value 7.6 TB UCSX-SD19T61X-EV 1.9TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 1.9 TB UCSX-SD38T61X-EV 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G Ent. Value 3.8 TB NVMe4, 5,6 UCSX-NVME4-6400 6.4TB 2.5in U.2 15mm P5620 Hg Perf Hg End NVMe (3X) NVMe U.2 High Perf High End. 6.4 TB UCSX-NVME4-3840 3.8TB 2.5in U.2 15mm P5520 Hg Perf Med End NVMe NVMe U.2 High Perf Med End. 3.8 TB UCSX-NVMEM6-W3200 3.2TB 2.5in U.2 WD SN840 NVMe Extreme Perf. High Endurance NVMe U.2 Ext Perf High End. 3.2 TB UCSX-NVMEM6-W6400 6.4TB 2.5in U.2 WD SN840 NVMe Extreme Perf. 
High Endurance NVMe U.2 Ext Perf High End. 6.4 TB UCSX-NVMEM6-W7680 7.6TB 2.5in U.2 WD SN840 NVMe Extreme Perf. Value Endurance NVMe U.2 Ext Perf Value End. 7.6 TB UCSX-NVMEM6W15300 15.3TB 2.5in U.2 WD SN840 NVMe Extreme Perf. Value Endurance NVMe U.2 Ext Perf Value End. 15.3 TB UCSX-NVMEXP-I400 400GB 2.5in U.2 15mm P5800X Optane Ext Perf NVMe (30/100X) NVMe U.2 Ext Perf 400 GB UCSX-NVMEXP-I800 800GB 2.5in U.2 15mm P5800X Optane Ext Perf NVMe (30/100X) NVMe U.2 Ext Perf 800 GB UCSX-NVME4-15360 15.3TB 2.5in U.2 15mm P5520 Hg Perf Med End NVMe NVMe U.2 High. Perf Med End. 15.3 TB UCSX-NVME4-1600 1.6TB 2.5in U.2 15mm P5620 Hg Perf Hg End NVMe (3X) NVMe U.2 High. Perf High End. 1.6 TB UCSX-NVME4-1920 1.9TB 2.5in U.2 15mm P5520 Hg Perf Med End NVMe NVMe U.2 High. Perf Med End. 1.9 TB UCSX-NVME4-3200 3.2TB 2.5in U.2 15mm P5620 Hg Perf Hg End NVMe (3X) NVMe U.2 High. Perf High End. 3.2 TB UCSX-NVME4-7680 7.6TB 2.5in U.2 15mm P5520 Hg Perf Med End NVMe NVMe U.2 Ext Perf Med End. 7.6 TB UCSX-NVMEI4-I1600 1.6TB 2.5in U.2 Intel P5600 NVMe High Perf High Endurance NVMe U.2 High. Perf High End. 1.6 TB UCSX-NVMEQ-1536 15.3TB 2.5in U.2 15mm P5316 Hg Perf Low End NVMe NVMe U.2 High. Perf low End. 15.3 TB UCSX-NVMEG4-M1536 15.3TB 2.5in U.3 15mm P7450 Hg Perf Med End NVMe NVMe U.3 High. Perf Med End. 15.3 TB UCSX-NVMEG4-M1600 1.6TB 2.5in U.3 15mm P7450 Hg Perf Hg End NVMe (3X) NVMe U.3 High. Perf High End. 1.6 TB UCSX-NVMEG4-M1920 1.9TB 2.5in U.3 15mm P7450 Hg Perf Med End NVMe NVMe U.3 High. Perf Med End. 1.9 TB Table 16 Available Drive Options (continued) Product ID (PID) Description Drive Type Speed Performance/ Endurance/ Value Size 36 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node Cisco 6GB/s SATA Boot-Optimized M.2 RAID Controller You can optionally select the Boot-Optimized RAID controller (UCS-M2-HWRAID) for hardware RAID across two SATA M.2 storage modules. The Boot-Optimized RAID controller plugs into the motherboard and the M.2 SATA drives plug into the Boot-Optimized RAID controller. Note: The Boot-Optimized RAID controller supports VMware, Windows and Linux Operating Systems. Table 17 Boot-Optimized RAID controller Product ID (PID) PID Description UCS-M2-HWRAID Cisco Boot optimized M.2 RAID controller UCSX-NVMEG4-M3200 3.2TB 2.5in U.3 15mm P7450 Hg Perf Hg End NVMe (3X) NVMe U.3 High Perf High End. 3.2 TB UCSX-NVMEG4-M3840 3.8TB 2.5in U.3 15mm P7450 Hg Perf Med End NVMe NVMe U.3 High Perf Med End. 3.8 TB UCSX-NVMEG4-M6400 6.4TB 2.5in U.3 15mm P7450 Hg Perf Hg End NVMe (3X) NVMe U.3 High Perf High End. 6.4 TB UCSX-NVMEG4-M7680 7.6TB 2.5in U.3 15mm P7450 Hg Perf Med End NVMe NVMe U.3 High Perf Med End. 7.6 TB UCSX-NVMEG4-M960 960GB 2.5in U.3 15mm P7450 Hg Perf Med End NVMe NVMe U.3 High. Perf Med End. 960 GB SATA M.2 Storage Modules (plug into Boot-Optimized RAID controller on motherboard) UCSX-M2-240G 240GB SATA M.2 SATA M.2 240 GB UCSX-M2-480G 480GB M.2 SATA SSD SATA M.2 480 GB UCSX-M2-960G 960GB SATA M.2 SATA M.2 960 GB UCSX-M2-I240GB 240GB M.2 Boot SATA Intel SSD SATA M.2 240 GB UCSX-M2-I480GB 480GB M.2 Boot SATA Intel SSD SATA M.2 480 GB NOTE: Cisco uses solid state drives from a number of vendors. All solid state drives are subject to physical write limits and have varying maximum usage limitation specifications set by the manufacturer. Cisco will not replace any solid state drives that have exceeded any maximum usage specifications set by Cisco or the manufacturer, as determined solely by Cisco. Notes: 1. 
SSD drives require the UCSX-X10C-RAIDF front mezzanine adapter
2. For SSD drives to be in a RAID group, two identical SSDs must be used in the group.
3. If SSDs are in JBOD mode, the drives do not need to be identical.
4. NVMe drives require the UCSX-X10C-PT4F front mezzanine pass-through controller, the UCSX-X10C-RAIDF RAID controller, or the X10c Front Mezzanine GPU module.
5. A maximum of 4x NVMe drives can be ordered with the RAID controller.
6. A maximum of 2x NVMe drives can be ordered with the Front Mezzanine GPU module.

NOTE:
■ The UCS-M2-HWRAID controller supports RAID 1 and JBOD mode and is available only with 240 GB, 480 GB, and 960 GB M.2 SATA SSDs.
■ Cisco IMM is supported for configuring volumes and monitoring the controller and installed SATA M.2 drives.
■ The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.
■ Hot-plug replacement is not supported. The compute node must be powered off to replace the drives.

Intel® Virtual RAID on CPU (Intel® VROC)
The server supports Intel® Virtual RAID on CPU (Intel® VROC). VROC is an enterprise RAID solution used with Intel NVMe SSDs (see Table 16 for supported Intel NVMe SSDs). The Intel® Volume Management Device (Intel® VMD) is a controller integrated into the CPU PCIe root complex. Intel® VMD NVMe SSDs are connected to the CPU, which allows the full performance potential of fast Intel® Optane™ SSDs to be realized. Intel® VROC, when implemented, replaces traditional hardware RAID host bus adapter (HBA) cards placed between the drives and the CPU.

NOTE:
■ Intel® VROC is only supported with Intel drives.
■ The Intel® VROC enablement key is factory pre-provisioned in the BIOS; no additional licensing is required.

VROC has the following features:
■ Small Form Factor (SFF) drive support (only)
■ No battery backup (BBU) or external SuperCap needed
■ Software-based solution utilizing Intel SFF NVMe drives directly connected to the Intel CPU
■ RAID 0/1/5/10 support
■ Windows, Linux, and VMware OS support
■ Host tools: Windows GUI/CLI, Linux CLI
■ UEFI support: HII utility, OBSE
■ Intel VROC NVMe operates in UEFI mode only
See the instructions on setting up and managing VROC for Intel NVMe SSDs for more information.

STEP 10 CHOOSE OPTIONAL TRUSTED PLATFORM MODULE

Trusted Platform Module (TPM) is a computer chip or microcontroller that can securely store artifacts used to authenticate the platform or Cisco UCS X210c M6 Compute Node. These artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to store platform measurements that help ensure that the platform remains trustworthy. Authentication (ensuring that the platform can prove that it is what it claims to be) and attestation (a process helping to prove that a platform is trustworthy and has not been breached) are necessary steps to ensure safer computing in all environments.
Table 18 Available TPM Option Product ID (PID) Description UCSX-TPM-002C Trusted Platform Module 2.0, FIPS140-2 Compliant, UCS M6 server UCSX-TPM-002D TPM 2.0 TCG FIPS140-2 CC+ Cert M6 Intel MSW2022 Compliant UCSX-TPM-OPT-OUT OPT OUT, TPM 2.0, TCG, FIPS140-2, CC EAL4+ Certified NOTE: ■ The TPM module used in this system conforms to TPM v2.0 as defined by the Trusted Computing Group (TCG). TPM installation is supported after-factory. However, a TPM installs with a one-way screw and cannot be replaced, upgraded, or moved to another compute node. If a Cisco UCS X210c M6 Compute Node with a TPM is returned, the replacement Cisco UCS X210c M6 Compute Node must be ordered with a new TPM. If there is no existing TPM in the Cisco UCS X210c M6 Compute Node, you can install a TPM 2.0. Refer to the following document for Installation location and instructions: https://www.cisco.com/content/en/us/td/docs/unified_computing/uc s/x/hw/210c-m6/install/b-cisco-ucs-x210c-m6-install.html 40 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node STEP 11 CHOOSE OPERATING SYSTEM AND VALUE-ADDED SOFTWARE NOTE: ■ See this link for operating system guidance: https://ucshcltool.cloudapps.cisco.com/public/ ■ VMware is on Compliance Hold. Contact the Compute-Vmware-Hold@cisco.com mailer to see if you are allowed to receive VMware Licenses Select ■ Cisco Software (Table 19) ■ Operating System (Table 20) Table 19 OEM Software Product ID (PID) PID Description VMware vCenter VMW-VCS-STD-1A VMware vCenter 7 Server Standard, 1 yr support required VMW-VCS-STD-3A VMware vCenter 7 Server Standard, 3 yr support required VMW-VCS-STD-5A VMware vCenter 7 Server Standard, 5 yr support required VMW-VCS-FND-1A VMware vCenter 7 Server Foundation (4 Host), 1 yr supp reqd VMW-VCS-FND-3A VMware vCenter 7 Server Foundation (4 Host), 3 yr supp reqd VMW-VCS-FND-5A VMware vCenter 7 Server Foundation (4 Host), 5 yr supp reqd Table 20 Operating System Product ID (PID) PID Description Microsoft Windows Server MSWS-19-DC16C Windows Server 2019 Data Center (16 Cores/Unlimited VMs) MSWS-19-DC16C-NS Windows Server 2019 DC (16 Cores/Unlim VMs) – No Cisco SVC MSWS-19-ST16C Windows Server 2019 Standard (16 Cores/2 VMs) MSWS-19-ST16C-NS Windows Server 2019 Standard (16 Cores/2 VMs) – No Cisco SVC MSWS-22-DC16C Windows Server 2022 Data Center (16 Cores/Unlimited VMs) MSWS-22-DC16C-NS Windows Server 2022 DC (16 Cores/Unlim VMs) – No Cisco SVC Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 41 MSWS-22-DCA2C Windows Server 2022 Data Center – Additional 2 Cores MSWS-22-DCA2C-NS Windows Server 2022 DC – Additional 2 Cores – No Cisco SVC MSWS-22-ST16C Windows Server 2022 Standard (16 Cores/2 VMs) MSWS-22-ST16C-NS Windows Server 2022 Standard (16 Cores/2 VMs) – No Cisco SVC MSWS-22-STA2C Windows Server 2022 Standard – Additional 2 Cores MSWS-22-STA2C-NS Windows Server 2022 Stan – Additional 2 Cores – No Cisco SVC Red Hat RHEL-2S2V-1A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 1-Yr Support Req RHEL-2S2V-3A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 3-Yr Support Req RHEL-2S2V-5A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 5-Yr Support Req RHEL-VDC-2SUV-1A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr Supp Req RHEL-VDC-2SUV-3A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr Supp Req RHEL-VDC-2SUV-5A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 5 Yr Supp Req Red Hat Ent Linux/ High Avail/ Res Strg/ Scal RHEL-2S2V-1S Red Hat Enterprise Linux (1-2 CPU,1-2 VN); Prem 1-Yr SnS RHEL-2S2V-3S Red Hat 
Enterprise Linux (1-2 CPU,1-2 VN); Prem 3-Yr SnS RHEL-2S-HA-1S RHEL High Availability (1-2 CPU); Premium 1-yr SnS RHEL-2S-HA-3S RHEL High Availability (1-2 CPU); Premium 3-yr SnS RHEL-2S-RS-1S RHEL Resilent Storage (1-2 CPU); Premium 1-yr SnS RHEL-2S-RS-3S RHEL Resilent Storage (1-2 CPU); Premium 3-yr SnS RHEL-VDC-2SUV-1S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr SnS Reqd RHEL-VDC-2SUV-3S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr SnS Reqd Red Hat SAP RHEL-SAP-2S2V-1S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 1-Yr SnS RHEL-SAP-2S2V-3S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 3-Yr SnS VMware VMW-VSP-STD-1A VMware vSphere 6 Standard (1 CPU), 1-yr, Support Required VMW-VSP-STD-3A VMware vSphere 6 Standard (1 CPU), 3-yr, Support Required VMW-VSP-STD-5A VMware vSphere 6 Standard (1 CPU), 5-yr, Support Required Table 20 Operating System (continued) Product ID (PID) PID Description 42 Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node VMW-VSP-EPL-3A VMware vSphere 6 Ent Plus (1 CPU), 3-yr, Support Required VMW-VSP-EPL-1A VMware vSphere 6 Ent Plus (1 CPU), 1-yr, Support Required VMW-VSP-EPL-5A VMware vSphere 6 Ent Plus (1 CPU), 5-yr, Support Required SUSE SLES-2S2V-1A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 1-Yr Support Req SLES-2S2V-3A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 3-Yr Support Req SLES-2S2V-5A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 5-Yr Support Req SLES-2S2V-1S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 1-Yr SnS SLES-2S2V-3S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 3-Yr SnS SLES-2S2V-5S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 5-Yr SnS SLES-2S-HA-1S SUSE Linux High Availability Ext (1-2 CPU); 1yr SnS SLES-2S-HA-3S SUSE Linux High Availability Ext (1-2 CPU); 3yr SnS SLES-2S-HA-5S SUSE Linux High Availability Ext (1-2 CPU); 5yr SnS SLES-2S-GC-1S SUSE Linux GEO Clustering for HA (1-2 CPU); 1yr Sns SLES-2S-GC-3S SUSE Linux GEO Clustering for HA (1-2 CPU); 3yr SnS SLES-2S-GC-5S SUSE Linux GEO Clustering for HA (1-2 CPU); 5yr SnS SLES-2S-LP-1S SUSE Linux Live Patching Add-on (1-2 CPU); 1yr SnS Required SLES-2S-LP-3S SUSE Linux Live Patching Add-on (1-2 CPU); 3yr SnS Required SLES-2S-LP-1A SUSE Linux Live Patching Add-on (1-2 CPU); 1yr Support Req SLES-2S-LP-3A SUSE Linux Live Patching Add-on (1-2 CPU); 3yr Support Req SLES and SAP SLES-SAP-2S2V-1A SLES for SAP Apps (1-2 CPU, 1-2 VM); 1-Yr Support Reqd SLES-SAP-2S2V-3A SLES for SAP Apps (1-2 CPU, 1-2 VM); 3-Yr Support Reqd SLES-SAP-2S2V-5A SLES for SAP Apps (1-2 CPU, 1-2 VM); 5-Yr Support Reqd SLES-SAP-2S2V-1S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 1-Yr SnS SLES-SAP-2S2V-3S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 3-Yr SnS SLES-SAP-2S2V-5S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 5-Yr SnS Table 20 Operating System (continued) Product ID (PID) PID Description Cisco UCS X210c M6 Compute Node CONFIGURING the Cisco UCS X210c M6 Compute Node 43 STEP 12 CHOOSE OPTIONAL OPERATING SYSTEM MEDIA KIT Select the optional operating system media listed in Table 21. 
Table 21  OS Media

Product ID (PID)  Description
MSWS-19-ST16C-RM  Windows Server 2019 Stan (16 Cores/2 VMs) Rec Media DVD Only
MSWS-19-DC16C-RM  Windows Server 2019 DC (16Cores/Unlim VM) Rec Media DVD Only
MSWS-22-ST16C-RM  Windows Server 2022 Stan (16 Cores/2 VMs) Rec Media DVD Only
MSWS-22-DC16C-RM  Windows Server 2022 DC (16Cores/Unlim VM) Rec Media DVD Only

SUPPLEMENTAL MATERIAL

Simplified Block Diagram

Simplified block diagrams of the Cisco UCS X210c M6 Compute Node system board are shown in Figure 8 through Figure 11.

Figure 8  Cisco UCS X210c M6 Compute Node Simplified Block Diagram (IFMs 25G with Drives)

Figure 9  Cisco UCS X210c M6 Compute Node Simplified Block Diagram (IFMs 100G with Drives)

Figure 10  Cisco UCS X210c M6 Compute Node Simplified Block Diagram (IFMs 25G with Drives and GPUs)

Figure 11  Cisco UCS X210c M6 Compute Node Simplified Block Diagram (IFMs 100G with Drives and GPUs)

System Board

A top view of the Cisco UCS X210c M6 Compute Node system board is shown in Figure 12.

Figure 12  Cisco UCS X210c M6 Compute Node System Board
1  Front mezzanine slot for SAS/SATA or NVMe drives
2  DIMM slots (32 maximum)
3  CPU 1 slot (shown populated)
4  CPU 2 slot (shown unpopulated)
5  Rear mezzanine slot, which supports a mezzanine card or a standard or extended mLOM. If an extended mLOM is used, it occupies this slot, such that no rear mezzanine card can be installed.
6  Bridge adapter (for connecting the mLOM to the rear mezzanine card)
7  mLOM slot for a standard or extended mLOM

Memory Configuration

Each CPU has eight DIMM channels:
■ CPU1 (P1) has channels A, B, C, D, E, F, G, and H
■ CPU2 (P2) has channels A, B, C, D, E, F, G, and H

Each DIMM channel has two slots: slot 1 and slot 2. The blue-colored DIMM slots are for slot 1 and the black slots are for slot 2. Figure 12 on page 48 shows how slots and channels are physically laid out on the motherboard. The DIMM slots on the left are for channels A, B, C, D, E, F, G, and H and are associated with CPU 1 (P1), while the DIMM slots on the right are for channels A, B, C, D, E, F, G, and H and are associated with CPU 2 (P2). The slot 1 (blue) DIMM slots are always located farther away from a CPU than the corresponding slot 2 (black) slots.

For all allowable DIMM populations, please refer to the "Memory Population Guidelines" section of the Cisco UCS X210c M6 Compute Node Installation Guide, at the following link:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/210c-m6/install/b-cisco-ucs-x210c-m6-install.html

For more details, see the Cisco UCS C220/C240/B200 M6 Memory Guide at the following link: Cisco UCS X210c M6 Compute Node Memory Guide

When considering the memory configuration, consider the following items:
■ Each channel has two DIMM slots (for example, channel A = slots A1 and A2) and a channel can operate with one or two DIMMs installed.
■ When both CPUs are installed, populate the DIMM slots of each CPU identically.
■ Any DIMM installed in a DIMM socket for which the CPU is absent is not recognized.
■ For further details, see STEP 3 CHOOSE MEMORY, page 16.

Table 22  DIMM Rules for Cisco UCS X210c M6 Compute Nodes

DIMM Capacity
■ DIMMs in the same channel: RDIMM = 16, 32, or 64 GB; LRDIMM = 128 or 256 GB. DIMMs in the same channel (for example, A1 and A2) can have different capacities. Do not mix RDIMMs with LRDIMMs.
■ DIMMs in the same slot (Note 1): For best performance, DIMMs in the same slot (for example, A1, B1, C1, D1, E1, F1, G1, H1) should have the same capacity. Do not mix RDIMMs with LRDIMMs.

DIMM Speed
■ DIMMs in the same channel: 3200-MHz DIMMs will run at the highest memory speed supported by the CPU installed.
■ DIMMs in the same slot: DIMMs will run at the highest memory speed supported by the CPU installed.

DIMM Type
■ DIMMs in the same channel: RDIMMs or LRDIMMs; do not mix DIMM types in a channel.
■ DIMMs in the same slot: Do not mix DIMM types in a slot.

Note 1: Although different DIMM capacities can exist in the same slot, this will result in less than optimal performance. For optimal performance, all DIMMs in the same slot should be identical.
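The population rules above can be expressed as a simple pre-order sanity check. The following minimal Python sketch is illustrative only and is not a Cisco tool; the slot naming and the subset of rules it checks are assumptions drawn from Table 22 and the considerations listed above.

```python
# Illustrative sketch only. The slot naming (A1..H2) and the rule subset are taken
# from Table 22 and the guidelines above; this is not a Cisco configuration tool.

RDIMM_SIZES_GB = {16, 32, 64}     # RDIMM capacities listed in Table 22
LRDIMM_SIZES_GB = {128, 256}      # LRDIMM capacities listed in Table 22

def check_population(cpu1, cpu2=None):
    """Return warnings for a proposed DIMM population.

    cpu1/cpu2 map slot names ('A1', 'A2', ... 'H2') to DIMM sizes in GB;
    leave a slot out of the dict to leave it empty.
    """
    warnings = []
    sizes = set(cpu1.values()) | (set(cpu2.values()) if cpu2 else set())
    # Rule: do not mix RDIMMs with LRDIMMs.
    if sizes & RDIMM_SIZES_GB and sizes & LRDIMM_SIZES_GB:
        warnings.append("RDIMMs and LRDIMMs are mixed; do not mix DIMM types.")
    # Rule: when both CPUs are installed, populate them identically.
    if cpu2 is not None and cpu1 != cpu2:
        warnings.append("CPU1 and CPU2 populations differ; populate both CPUs identically.")
    # Performance rule: DIMMs in the same slot position (A1, B1, ...) should match.
    for pos in ("1", "2"):
        caps = {size for slot, size in cpu1.items() if slot.endswith(pos)}
        if len(caps) > 1:
            warnings.append("Slot-%s DIMMs have mixed capacities %s; performance is reduced." % (pos, sorted(caps)))
    return warnings

# Example: 8 x 32 GB RDIMMs per CPU (slot 1 of every channel populated)
population = {channel + "1": 32 for channel in "ABCDEFGH"}
print(check_population(population, population) or "No warnings")
```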
Memory Support for 3rd Generation Intel® Xeon® Scalable Processors (Ice Lake)

PMem Support

The Ice Lake CPUs support two memory modes:
■ App Direct Mode
■ Memory Mode

App Direct Mode

PMem operates as a solid-state disk storage device. Data is saved and is non-volatile. Both PMem and DRAM DIMM capacities count towards the CPU capacity limit. For example, if App Direct mode is configured and the DIMM sockets for a CPU are populated with 8 x 256 GB DRAMs (2 TB total DRAM) and 8 x 512 GB PMem (4 TB total PMem), then 6 TB total counts towards the CPU capacity limit. Follow the Intel recommended DRAM:PMem ratio for App Direct Mode.

Memory Mode

PMem operates as a 100% memory module. Data is volatile and DRAM acts as a cache for PMem. Only the PMem capacity counts towards the CPU capacity limit. This is the factory default mode. For example, if Memory mode is configured and the DIMM sockets for a CPU are populated with 8 x 256 GB DRAMs (2 TB total DRAM) and 8 x 512 GB PMem (4 TB total PMem), then only 4 TB total (the PMem memory) counts towards the CPU capacity limit. All of the DRAM capacity (2 TB) is used as cache and does not factor into CPU capacity. The recommended Intel DRAM:PMem ratio for Memory Mode is 1:4, 1:8, or 1:16.

For 3rd Generation Intel® Xeon® Scalable Processors (Ice Lake):
■ DRAMs and PMem are supported.
■ Each CPU has 16 DIMM sockets and supports the following maximum memory capacities:
■ 4 TB using 16 x 256 GB DRAMs, or
■ 6 TB using 8 x 256 GB DRAMs and 8 x 512 GB Intel® Optane™ Persistent Memory Modules (PMem)

Only the following mixed DRAM/PMem memory configurations are supported per CPU socket:
■ 4 DRAMs and 4 PMem, or 8 DRAMs and 4 PMem, or 8 DRAMs and 1 PMem, or 8 DRAMs and 8 PMem

The available DRAM capacities are 32 GB, 64 GB, 128 GB, or 256 GB. The available PMem capacities are 128 GB, 256 GB, or 512 GB.

For further details see the following link: Cisco UCS X210c M6 Compute Node Memory Guide
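The two accounting rules above reduce to a short calculation. The following minimal Python sketch is illustrative only; the function name is an assumption for this example rather than a Cisco or Intel API, and it simply reproduces the 8 x 256 GB DRAM plus 8 x 512 GB PMem example from the text.

```python
# Illustrative sketch only; the function is an assumption for this example, not a
# Cisco or Intel API. It reproduces the capacity accounting described above.

def capacity_toward_cpu_limit(dram_gb, pmem_gb, mode):
    """Return the memory (GB) that counts toward the per-CPU capacity limit."""
    if mode == "app_direct":
        # App Direct Mode: both DRAM and PMem capacities count toward the limit.
        return sum(dram_gb) + sum(pmem_gb)
    if mode == "memory_mode":
        # Memory Mode: only PMem counts; the DRAM serves as a cache for the PMem.
        return sum(pmem_gb)
    raise ValueError("mode must be 'app_direct' or 'memory_mode'")

dram = [256] * 8   # 8 x 256 GB DRAM = 2 TB per CPU
pmem = [512] * 8   # 8 x 512 GB PMem = 4 TB per CPU
print(capacity_toward_cpu_limit(dram, pmem, "app_direct") / 1024, "TB")   # 6.0 TB
print(capacity_toward_cpu_limit(dram, pmem, "memory_mode") / 1024, "TB")  # 4.0 TB
```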
SPARE PARTS

This section lists the upgrade and service-related parts for the Cisco UCS X210c M6 Compute Node. Some of these parts are configured with every compute node or with every Cisco UCS X9508 chassis.

Table 23  Spare Parts

Product ID (PID)  Description
Debug Cable
UCSX-C-DEBUGCBL=  UCSX Compute Node Debug Cable
CPUs
Note: If you are ordering a second CPU, see the CPU Accessories section in this table for additional parts you may need to order for the second CPU.
8000 Series Processors: UCSX-CPU-I8380=, UCSX-CPU-I8368=, UCSX-CPU-I8362=, UCSX-CPU-I8360Y=, UCSX-CPU-I8358P=, UCSX-CPU-I8358=, UCSX-CPU-I8352M=, UCSX-CPU-I8352Y=, UCSX-CPU-I8352V=, UCSX-CPU-I8352S=, UCSX-CPU-I8351N= (Note 1)
6000 Series Processors: UCSX-CPU-I6354=, UCSX-CPU-I6348=, UCSX-CPU-I6346=, UCS-CPU-I6342=, UCS-CPU-I6338T=, UCSX-CPU-I6336Y=, UCSX-CPU-I6334=, UCS-CPU-I6334=, UCSX-CPU-I6330N=, UCSX-CPU-I6330=, UCSX-CPU-I6326=, UCSX-CPU-I6312U= (Note 2), UCS-CPU-I6326=, UCSX-CPU-I6314U= (Note 3)
5000 Series Processors: UCSX-CPU-I5320T=, UCSX-CPU-I5320=, UCSX-CPU-I5318Y=, UCSX-CPU-I5318S=, UCSX-CPU-I5318N=, UCSX-CPU-I5317=, UCSX-CPU-I5315Y=
4000 Series Processors: UCSX-CPU-I4316=, UCSX-CPU-I4314=, UCSX-CPU-I4310T=, UCSX-CPU-I4310=, UCSX-CPU-I4309Y=
CPU Accessories
UCSX-C-M6-HS-F=  CPU Heat Sink for UCS B-Series M6 CPU socket (Front)
UCSX-C-M6-HS-R=  CPU Heat Sink for UCS B-Series M6 CPU socket (Rear)
UCSX-CPU-TIM=  Single CPU thermal interface material syringe for M6 server HS seal
UCSX-HSCK=  UCS Processor Heat Sink Cleaning Kit (when replacing a CPU)
UCSX-CPUAT=  CPU Assembly Tool for M6 Servers
UCSX-M6-CPU-CAR=  UCS M6 CPU Carrier
UCSX-CPUATI-4=  CPX-4 CPU Assembly Tool for M6 Servers
UCSX-CPUATI-3=  ICX CPU Assembly Tool for M6 Servers
Memory
UCSX-MR-X16G1RW=  16 GB RDIMM SRx4 3200 (8Gb)
UCSX-MR-X32G1RW  32 GB RDIMM SRx4 3200 (16Gb)
UCSX-MR-X32G2RW=  32 GB RDIMM DRx4 3200 (8Gb)
UCSX-MR-X64G2RW=  64 GB RDIMM DRx4 3200 (16Gb)
UCSX-ML-128G4RW=  128 GB LRDIMM QRx4 3200 (16Gb)
UCSX-MP-128GS-B0=  Intel® Optane™ Persistent Memory, 128GB, 2666-MHz
UCSX-MP-256GS-B0=  Intel® Optane™ Persistent Memory, 256GB, 2666-MHz
UCSX-MP-512GS-B0=  Intel® Optane™ Persistent Memory, 512GB, 2666-MHz
DIMM Blank
UCSX-DIMM-BLK=  Cisco UCS DIMM Blank
Rear Mezzanine Adapters
UCSX-V4-Q25GML=  UCS VIC 14425 4x25G mLOM for X Compute Node
UCSX-V4-Q25GME=  UCS VIC 14825 4x25G mezz for X Compute Node
UCSX-V4-PCIME=  UCS PCI Mezz Card for X-Fabric
UCSX-ML-V5D200GV2=  Cisco UCS VIC 15230 modular LOM w/Secure Boot X Compute Node
Front Mezzanine Adapters
UCSX-X10C-PT4F=  UCS X10c Compute Pass Through Controller (Front)
UCSX-X10C-RAIDF=  UCS X10c Compute RAID Controller with LSI 3900 (Front)
UCSX-X10C-FMBK=  UCS X10c Compute Node Front Mezz Blank
GPUs
UCSX-X10C-GPUFM=  UCS X210c M6 Compute Node Front Mezz to support up to 2 NVIDIA T4 GPUs and 2 NVMe drives
UCSX-GPUFM-BLK=  UCSX GPU Front Mezz slot blank
UCSX-GPU-T4-MEZZ=  NVIDIA T4 GPU PCIE 75W 16GB, MEZZ form factor
SSD Enterprise Performance Drives
UCSX-SD19T63X-EP=  1.9TB 2.5in Enterprise Performance 6G SATA SSD (3X endurance)
UCSX-SD480G63X-EP=  480GB 2.5in Enterprise Performance 6G SATA SSD (3X endurance)
UCSX-SD960G63X-EP=  960GB 2.5in Enterprise Performance 6G SATA SSD (3X endurance)
UCSX-SD19TBM3X-EP=  1.9TB 2.5in Enterprise Performance 6G SATA SSD (3X endurance)
UCSX-SD960GBM3XEP=  960GB 2.5in Enterprise Performance 6G SATA SSD (3X endurance)
UCSX-SD480GBM3XEP=  480GB 2.5in Enterprise Performance 6G SATA SSD (3X endurance)
UCSX-SD800GK3X-EP=  800GB 2.5in Enterprise Performance 12G SAS SSD (3X endurance)
UCSX-SD32TKA3X-EP=  3.2TB 2.5in Enter Perf 12G SAS Kioxia G2 SSD (3X)
UCSX-SD16TKA3X-EP=  1.6TB 2.5in Enterprise Performance 12G SAS SSD (3X endurance)
SSD Enterprise Value Drives
UCSX-SD960GK1X-EV=  960 GB 2.5 inch Enterprise Value 12G SAS SSD
UCSX-SD15TKA1X-EV=  15.3TB 2.5in Enter Value 12G SAS Kioxia G2 SSD
UCSX-SD76TKA1X-EV=  7.6TB 2.5 inch Enterprise Value 12G SAS SSD
UCSX-SD38TKA1X-EV=  3.8TB 2.5in Enter Value 12G SAS Kioxia G2 SSD
UCSX-SD19TKA1X-EV=  1.9TB 2.5 inch Enterprise Value 12G SAS SSD
UCSX-SD480G6I1XEV=  480 GB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD960G6I1XEV=  960 GB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD38T6I1X-EV=  3.8 TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD19T61X-EV=  1.9 TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD38T61X-EV=  3.8 TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD19T6S1X-EV=  1.9TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD38T6S1X-EV=  3.8TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD76T6S1X-EV=  7.6TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD76TBM1X-EV=  7.6TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD38TBM1X-EV=  3.8TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD19TBM1X-EV=  1.9TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD16TBM1X-EV=  1.6TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD960GBM1XEV=  960GB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD480GBM1XEV=  480 GB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD240GBM1XEV=  240GB 2.5in Enter Value 6G SATA Micron G2 SSD
UCSX-SD19TM1X-EV=  1.9TB 2.5in Enter Value 6G SATA Micron G1 SSD
UCSX-SDB960SA1V=  960GB 2.5in 6G SATA Enter Value 1X Samsung G1PM893A SSD
UCSX-SDB1T9SA1V=  1.9TB 2.5in 6G SATA Enter Value 1X Samsung G1PM893A SSD
UCSX-SDB3T8SA1V=  3.8TB 2.5in 6G SATA Enter Value 1X Samsung G1PM893A SSD
UCSX-SDB7T6SA1V=  7.6TB 2.5in 6G SATA Enter Value 1X Samsung G1PM893A SSD
UCSX-SD19T61X-EV=  1.9TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD38T61X-EV=  3.8TB 2.5 inch Enterprise Value 6G SATA SSD
UCSX-SD960G6S1XEV=  960GB 2.5 inch Enterprise Value 6G SATA SSD
Self-Encrypted Drives (SED)
UCSX-SD38TBKNK9=  3.8TB Enterprise Value SAS SSD (1X DWPD, SED)
UCSX-SD960GM2NK9=  960GB Enterprise Value SATA SSD (1X, SED)
UCSX-SD76TEM2NK9=  7.6TB Enterprise Value SATA SSD (1X, SED)
UCSX-SD76TBKANK9=  7.6TB Enterprise Value SAS SSD (1 DWPD, SED-FIPS)
UCSX-SD38TBKANK9=  3.8TB 2.5in Enterprise Value 12G SAS SSD (1 DWPD, SED-FIPS)
UCSX-SD16TBKANK9=  1.6TB 2.5in Enterprise Performance 12G SAS SSD (3 DWPD, SED-FIPS)
NVMe Drives
UCSX-NVMEI4-I3840=  3.8TB 2.5in U.2 Intel P5500 NVMe High Perf Medium Endurance
UCSX-NVMEI4-I1600=  1.6TB 2.5in U.2 Intel P5600 NVMe High Perf High Endurance
UCSX-NVMEXP-I400=  400GB 2.5in U.2 Intel P5800X Optane NVMe Extreme Perform SSD
UCSX-NVMEXP-I800=  800GB 2.5in U.2 Intel P5800X Optane NVMe Extreme Perform SSD
UCSX-NVME4-1600=  1.6TB 2.5in U.2 15mm P5620 Hg Perf Hg End NVMe (3X)
UCSX-NVME4-3200=  3.2TB 2.5in U.2 15mm P5620 Hg Perf Hg End NVMe (3X)
UCSX-NVME4-6400=  6.4TB 2.5in U.2 15mm P5620 Hg Perf Hg End NVMe (3X)
UCSX-NVMEQ-1536=  15.3TB 2.5in U.2 15mm P5316 Hg Perf Low End NVMe
UCSX-NVMEM6-W3200=  3.2 TB 2.5in U.2 WD SN840 NVMe Extreme Perf. High Endurance
UCSX-NVMEM6-W7680=  7.6 TB 2.5in U.2 WD SN840 NVMe Extreme Perf. Value Endurance
UCSX-NVMEM6W15300=  15.3 TB 2.5in U.2 WD SN840 NVMe Extreme Perf. Value Endurance
UCSX-NVMEM6-W1600=  1.6TB 2.5in U.2 WD SN840 NVMe Extreme Perf. High Endurance
UCSX-NVMEM6-W6400=  6.4TB 2.5in U.2 WD SN840 NVMe Extreme Perf. High Endurance
UCSX-NVMEG4-M1536=  15.3TB 2.5in U.3 Micron 7450 NVMe High Perf Medium Endurance
UCSX-NVMEG4-M1600=  1.6TB 2.5in U.3 Micron 7450 NVMe High Perf High Endurance
UCSX-NVMEG4-M1920=  1.9TB 2.5in U.3 Micron 7450 NVMe High Perf Medium Endurance
UCSX-NVMEG4-M3200=  3.2TB 2.5in U.3 Micron 7450 NVMe High Perf High Endurance
UCSX-NVMEG4-M3840=  3.8TB 2.5in U.3 Micron 7450 NVMe High Perf Medium Endurance
UCSX-NVMEG4-M6400=  6.4TB 2.5in U.3 Micron 7450 NVMe High Perf High Endurance
UCSX-NVMEG4-M7680=  7.6TB 2.5in U.3 Micron 7450 NVMe High Perf Medium Endurance
UCSX-NVMEG4-M960=  960GB 2.5in U.3 Micron 7450 NVMe High Perf Medium Endurance
SATA M.2 Storage Modules
UCSX-M2-240G=  240GB SATA M.2
UCSX-M2-480G=  480GB M.2 SATA SSD
UCSX-M2-960G=  960GB SATA M.2
UCSX-M2-I240GB=  240GB M.2 Boot SATA Intel SSD
UCSX-M2-I480GB=  480GB M.2 Boot SATA Intel SSD
Boot-Optimized RAID Controller
UCS-M2-HWRAID=  Cisco Boot optimized M.2 RAID controller
Drive Blank
UCSC-BBLKD-S2=  Cisco UCS X210c M6 Compute Node 7mm Front Drive Blank
TPM
UCSX-TPM-002C=  Trusted Platform Module 2.0, FIPS140-2 Compliant, UCS M6 svr
UCSX-TPM-002D=  TPM 2.0 TCG FIPS140-2 CC+ Cert M6 Intel MSW2022 Compliant
Software/Firmware
Windows Server Recovery Media
MSWS-19-ST16C-RM=  Windows Server 2019 Stan (16 Cores/2 VMs) Rec Media DVD Only
MSWS-19-DC16C-RM=  Windows Server 2019 DC (16Cores/Unlim VM) Rec Media DVD Only
MSWS-22-ST16C-RM=  Windows Server 2022 Stan (16 Cores/2 VMs) Rec Media DVD Only
MSWS-22-DC16C-RM=  Windows Server 2022 DC (16Cores/Unlim VM) Rec Media DVD Only
RHEL SAP
RHEL-SAPSP-3S=  RHEL SAP Solutions Premium – 3 Years
RHEL-SAPSS-3S=  RHEL SAP Solutions Standard – 3 Years
RHEL-SAPSP-R-1S=  Renew RHEL SAP Solutions Premium – 1 Year
RHEL-SAPSS-R-1S=  Renew RHEL SAP Solutions Standard – 1 Year
RHEL-SAPSP-R-3S=  Renew RHEL SAP Solutions Premium – 3 Years
RHEL-SAPSS-R-3S=  Renew RHEL SAP Solutions Standard – 3 Years
VMware vSphere
VMW-VSP-STD-1A=  VMware vSphere 7 Std (1 CPU, 32 Core) 1-yr, Support Required
VMW-VSP-STD-3A=  VMware vSphere 7 Std (1 CPU, 32 Core) 3-yr, Support Required
VMW-VSP-STD-5A=  VMware vSphere 7 Std (1 CPU, 32 Core) 5-yr, Support Required
VMW-VSP-EPL-1A=  VMware vSphere 7 Ent Plus (1 CPU, 32 Core) 1Yr, Support Reqd
VMW-VSP-EPL-3A=  VMware vSphere 7 Ent Plus (1 CPU, 32 Core) 3Yr, Support Reqd
VMW-VSP-EPL-5A=  VMware vSphere 7 Ent Plus (1 CPU, 32 Core) 5Yr, Support Reqd
VMW-VSP-STD-1S=  VMware vSphere 7 Std (1 CPU, 32 Core), 1-yr Vmware SnS Reqd
VMW-VSP-STD-3S=  VMware vSphere 7 Std (1 CPU, 32 Core), 3-yr Vmware SnS Reqd
VMW-VSP-STD-1YR  VMware vSphere 7 Std SnS – 1 Year (reports to PID VMW-VSP-STD-1S=)
VMW-VSP-STD-3YR  VMware vSphere 7 Std SnS – 3 Year (reports to PID VMW-VSP-STD-3S=)
VMW-VSP-EPL-1S=  VMware vSphere 7 EntPlus (1 CPU 32 Core) 1Yr VMware SnS Reqd
VMW-VSP-EPL-3S=  VMware vSphere 7 EntPlus (1 CPU 32 Core) 3Yr VMware SnS Reqd
VMW-VSP-EPL-1YR  VMware vSphere 7 Enterprise Plus SnS – 1 Year (reports to PID VMW-VSP-EPL-1S=)
VMW-VSP-EPL-3YR  VMware vSphere 7 Enterprise Plus SnS – 3 Year (reports to PID VMW-VSP-EPL-3S=)
VMware vCenter
VMW-VCS-STD-1A=  VMware vCenter 7 Server Standard, 1 yr support required
VMW-VCS-STD-3A=  VMware vCenter 7 Server Standard, 3 yr support required
VMW-VCS-STD-5A=  VMware vCenter 7 Server Standard, 5 yr support required
VMW-VCS-STD-1S=  VMware vCenter 7 Server Standard, 1-yr Vmware SnS Reqd
VMW-VCS-STD-3S=  VMware vCenter 7 Server Standard, 3-yr Vmware SnS Reqd
VMW-VCS-STD-1YR  VMware vCenter 6 Server Standard SnS – 1 Year (reports to PID VMW-VCS-STD-1S=)
VMW-VCS-STD-3YR  VMware vCenter 6 Server Standard SnS – 3 Year (reports to PID VMW-VCS-STD-3S=)
VMW-VCS-FND-1A=  VMware vCenter Server 7 Foundation (4 Host), 1 yr supp reqd
VMW-VCS-FND-3A=  VMware vCenter Server 7 Foundation (4 Host), 3 yr supp reqd
VMW-VCS-FND-5A=  VMware vCenter Server 7 Foundation (4 Host), 5 yr supp reqd
VMW-VCS-FND-1S=  VMware vCenter Server 7 Foundation (4 Host), 1yr VM SnS Reqd
VMW-VCS-FND-3S=  VMware vCenter Server 7 Foundation (4 Host), 3yr VM SnS Reqd
VMW-VCS-FND-1YR  VMware vCenter Server 6 Foundation (4 Host) SnS – 1 Year (reports to PID VMW-VCS-FND-1S=)
VMW-VCS-FND-3YR  VMware vCenter Server 6 Foundation (4 Host) SnS – 3 Year (reports to PID VMW-VCS-FND-3S=)
VMware vSphere Upgrades
VMW-VSS2VSP-1A=  Upgrade: vSphere 7 Std to vSphere 7 Ent Plus (1 yr Supp Req)
VMW-VSS2VSP-3A=  Upgrade: vSphere 7 Std to vSphere 7 Ent Plus (1 yr Supp Req)

Notes:
1. The maximum number of UCSX-CPU-I8351N CPUs is one.
2. The maximum number of UCSX-CPU-I6312U CPUs is one.
3. The maximum number of UCSX-CPU-I6314U CPUs is one.

Please refer to the Cisco UCS X210c M6 Compute Node Installation Guide for installation procedures.

UPGRADING or REPLACING CPUs

NOTE: Before servicing any CPU, do the following:
■ Decommission and power off the compute node.
■ Slide the Cisco UCS X210c M6 Compute Node out from its chassis.
■ Remove the top cover.

To replace an existing CPU, follow these steps:

(1) Have the following tools and materials available for the procedure:
■ T-30 Torx driver—Supplied with replacement CPU.
■ #1 flat-head screwdriver—Supplied with replacement CPU.
■ CPU assembly tool—Supplied with replacement CPU. Can be ordered separately as Cisco PID UCSX-CPUAT=.
■ Heatsink cleaning kit—Supplied with replacement CPU. Can be ordered separately as Cisco PID UCSX-HSCK=.
■ Thermal interface material (TIM)—Syringe supplied with replacement CPU. Can be ordered separately as Cisco PID UCSX-CPU-TIM=.
(2) Order the appropriate replacement CPU from Available CPUs on page 12.
(3) Carefully remove and replace the CPU and heatsink in accordance with the instructions found in "Cisco UCS X210c M6 Compute Node Installation and Service Note," found at:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/210c-m6/install/b-cisco-ucs-x210c-m6-install.html

To add a new CPU, follow these steps:

(1) Have the following tools and materials available for the procedure:
■ T-30 Torx driver—Supplied with new CPU.
■ #1 flat-head screwdriver—Supplied with new CPU.
■ CPU assembly tool—Supplied with new CPU. Can be ordered separately as Cisco PID UCSX-CPUAT=.
■ Thermal interface material (TIM)—Syringe supplied with replacement CPU. Can be ordered separately as Cisco PID UCSX-CPU-TIM=.
(2) Order the appropriate new CPU from Table 3 on page 12.
(3) Order one heat sink for each new CPU. Order PID UCSX-C-M6-HS-F= for the front CPU socket and PID UCSX-C-M6-HS-R= for the rear CPU socket.
(4) Carefully install the CPU and heatsink in accordance with the instructions found in "Cisco UCS X210c M6 Compute Node Installation and Service Note," found at:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/210c-m6/install/b-cisco-ucs-x210c-m6-install.html

UPGRADING or REPLACING MEMORY

NOTE: Before servicing any DIMM or PMem, do the following:
■ Decommission and power off the compute node.
■ Slide the Cisco UCS X210c M6 Compute Node out from its chassis.
■ Remove the top cover.

To add or replace DIMMs or PMem, follow these steps:

Step 1  Open both DIMM connector latches.
Step 2  Press evenly on both ends of the DIMM until it clicks into place in its slot.
Note: Ensure that the notch in the DIMM aligns with the slot. If the notch is misaligned, it is possible to damage the DIMM, the slot, or both.
Step 3  Press the DIMM connector latches inward slightly to seat them fully.
Step 4  Populate all slots with a DIMM or DIMM blank. A slot cannot be empty.

Figure 13  Replacing Memory

For additional details on replacing or upgrading DIMMs, see "Cisco UCS X210c M6 Compute Node Installation and Service Note," found at
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/210c-m6/install/b-cisco-ucs-x210c-m6-install.html.

DISCONTINUED EOL PRODUCTS

Below is the list of parts that were previously available for this product and are no longer sold. Please refer to the EOL bulletin links in Table 24 below to determine whether they are still supported.

Table 24  EOS

Product ID  Description  EOL/EOS link
Operating system
SLES-2SUV-1A  SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); 1-Yr Support Req
SLES-2SUV-1S  SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); Prio 1-Yr SnS
SLES-2SUV-3A  SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); 3-Yr Support Req
SLES-2SUV-3S  SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); Prio 3-Yr SnS
SLES-2SUV-5A  SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); 5-Yr Support Req
SLES-2SUV-5S  SUSE Linux Enterprise Svr (1-2 CPU,Unl VM); Prio 5-Yr SnS
SLES-SAP-2SUV-1A  SLES for SAP Apps w/ HA (1-2 CPU, Unl VM); 1-Yr Support Reqd
SLES-SAP-2SUV-1S  SLES for SAP Apps (1-2 CPU, Unl VM); Priority 1-Yr SnS
SLES-SAP-2SUV-3A  SLES for SAP Apps w/ HA (1-2 CPU, Unl VM); 3-Yr Support Reqd
SLES-SAP-2SUV-3S  SLES for SAP Apps (1-2 CPU, Unl VM); Priority 3-Yr SnS
SLES-SAP-2SUV-5A  SLES for SAP Apps w/ HA (1-2 CPU, Unl VM); 5-Yr Support Reqd
SLES-SAP-2SUV-5S  SLES for SAP Apps (1-2 CPU, Unl VM); Priority 5-Yr SnS
UCSX-NVMEI4-I3840  3.8TB 2.5in U.2 Intel P5500 NVMe High Perf Medium Endurance
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-access-eol-15074.html
UCSX-NVMEI4-I7680  7.6TB 2.5in U.2 Intel P5500 NVMe High Perf Medium Endurance
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-access-eol-15074.html
UCSX-SD76T61X-EV  7.6TB 2.5 inch Enterprise Value 6G SATA SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-hyperflex-accessories-eol2.html
UCSX-SD76TBEM2NK9  7.6TB 2.5in Enter Value 6G SATA Micron G1 SSD (SED)
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-hyperflex-accessories-eol2.html
UCSX-SD960GBM2NK9  960GB 2.5in Enter Value 6G SATA Micron G1 SSD (SED)
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/unified-computing-accessories-eol.html
UCSX-SD16TM1X-EV  1.6TB 2.5in Enter Value 6G SATA Micron G1 SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/unified-computing-accessories-eol.html
UCSX-SD240GM1X-EV  240GB 2.5in Enter Value 6G SATA Micron G1 SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/unified-computing-accessories-eol.html
UCSX-SD38TM1X-EV  3.8TB 2.5in Enter Value 6G SATA Micron G1 SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/unified-computing-accessories-eol.html
UCSX-SD480GM1X-EV  480 GB 2.5in Enter Value 6G SATA Micron G1 SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/unified-computing-accessories-eol.html
UCSX-SD76TM1X-EV  7.6TB 2.5in Enter Value 6G SATA Micron G1 SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/unified-computing-accessories-eol.html
UCSX-SD960G61X-EV  960GB 2.5 inch Enterprise Value 6G SATA SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-hyperflex-accessories-eol2.html
UCSX-NVMEI4-I6400  6.4TB 2.5in U.2 Intel P5600 NVMe High Perf High Endurance
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-SD32TK3X-EP  3.2TB 2.5in Enter Perf 12G SAS Kioxia G1 SSD (3X)
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-SD38TK1X-EV  3.8TB 2.5in Enter Value 12G SAS Kioxia G1 SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-SD76TBKNK9  7.6TB 2.5in Enter Value 12G SAS Kioxia G1 SSD (SED-FIPS)
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-SD76TK1X-EV  7.6TB 2.5in Enter Value 12G SAS Kioxia G1 SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-SD15TK1X-EV  15.3TB 2.5in Enter Value 12G SAS Kioxia G1 SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-SD16TBKNK9  1.6TB 2.5in Enter Perf 12G SAS Kioxia G1 SSD (3X SED-FIPS)
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-SD16TK3X-EP  1.6TB 2.5in Enter Perf 12G SAS Kioxia G1 SSD (3X)
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-SD19TK1X-EV  1.9TB 2.5in Enter Value 12G SAS Kioxia G1 SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-NVMEI4-I1920  1.9TB 2.5in U.2 Intel P5500 NVMe High Perf Medium Endurance
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-access-eol-15074.html
UCSX-ML-V5D200G  Cisco VIC 15231 2x 100G mLOM X-Series
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-ucsx-accessories-eol.html
UCSX-NVMEI4-I3200  3.2TB 2.5in U.2 Intel P5600 NVMe High Perf High Endurance
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-NVMEXP-I750  750GB 2.5in Intel Optane NVMe Extreme Perf
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-hci-accessories-eol.html
UCSX-NVMEXPB-I375  375GB 2.5in Intel Optane NVMe Extreme Performance SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-hci-accessories-eol.html
UCSX-SD120GM1X-EV  120 GB 2.5 inch Enterprise Value 6G SATA SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-b-series-blade-servers/select-ucs-accessories-eol.html
UCSX-SD38T6S1X-EV  3.8TB 2.5in Enter Value 6G SATA Samsung SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-hci-accessories-eol.html
UCSX-SD800GBKNK9  800GB 2.5in Enter Perf 12G SAS Kioxia G1 SSD (3X SED-FIPS)
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-accessories-eol15420.html
UCSX-SD960G6S1XEV  960GB 2.5in Enter Value 6G SATA Samsung SSD
  EOL/EOS link: https://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/select-ucs-hci-accessories-eol.html

TECHNICAL SPECIFICATIONS

Table 25  Cisco UCS X210c M6 Compute Node Dimensions and Weight

Parameter  Value
Height  1.80 in. (45.7 mm)
Width  11.28 in. (286.5 mm)
Depth  23.7 in. (602 mm)
Weight
■ Minimally configured node weight = 12.84 lbs (5.83 kg)
■ Fully configured compute node weight = 25.1 lbs (11.39 kg)

Table 26  Cisco UCS X210c M6 Compute Node Environmental Specifications

Parameter  Value
Operating temperature  50° to 95°F (10° to 35°C)
Non-operating temperature  –40° to 149°F (–40° to 65°C)
Operating humidity  5% to 90% noncondensing
Non-operating humidity  5% to 93% noncondensing
Operating altitude  0 to 10,000 ft (0 to 3,000 m); maximum ambient temperature decreases by 1°C per 300 m
Non-operating altitude  40,000 ft (12,000 m)

For configuration-specific power specifications, use the Cisco UCS Power Calculator at: http://ucspowercalc.cisco.com

NOTE: The Cisco UCS X210c M6 Compute Node has a power cap of 1300 Watts for all combinations of components (CPUs, DIMMs, drives, and so on). Also, the ambient temperature must be less than 35 °C (95 °F).
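For planning purposes, the altitude derating and the power cap can be combined in a short check. The following minimal Python sketch is illustrative only; it assumes the 1°C-per-300 m derating is applied from sea level (the spec sheet does not state the starting altitude), and the helper names are assumptions for this example, not Cisco tooling.

```python
# Illustrative sketch only. It assumes the 1 °C-per-300 m derating is applied from
# sea level against the 35 °C limit in Table 26; consult the installation guide for
# the authoritative derating rule.

MAX_AMBIENT_C = 35.0      # operating temperature limit (Table 26)
MAX_ALTITUDE_M = 3000.0   # operating altitude limit (Table 26)
POWER_CAP_W = 1300.0      # compute node power cap (see the NOTE above)

def max_ambient_at_altitude(altitude_m):
    """Maximum supported ambient temperature (°C) at a given operating altitude."""
    if not 0 <= altitude_m <= MAX_ALTITUDE_M:
        raise ValueError("altitude outside the 0-3000 m operating range")
    return MAX_AMBIENT_C - altitude_m / 300.0

def within_power_cap(estimated_draw_w):
    """True if an estimated draw (for example, from the UCS Power Calculator) fits the cap."""
    return estimated_draw_w <= POWER_CAP_W

print(max_ambient_at_altitude(3000.0))   # 25.0 °C at the maximum operating altitude
print(within_power_cap(1150.0))          # True for a hypothetical 1150 W configuration estimate
```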