NVIDIA MCX653106A-HDAT ConnectX-6 Dual-Port 200Gb/s InfiniBand Adapter
Product Overview
The NVIDIA® ConnectX®-6 MCX653106A-HDAT is a high-performance, dual-port smart adapter card engineered to accelerate the most demanding data centers. As a cornerstone of the NVIDIA Quantum InfiniBand platform, this network interface card provides two QSFP56 ports, each capable of 200Gb/s InfiniBand HDR or Ethernet connectivity. Its defining innovation is In-Network Computing, which transforms the network card from a passive data pipe into an active compute element, offloading operations from the CPU to boost application performance and scalability for High-Performance Computing (HPC), Artificial Intelligence (AI), and hyperscale cloud infrastructures.
Core Value Proposition
- Extreme Performance: Delivers up to 200Gb/s per port (400Gb/s aggregate) with ultra-low latency and up to 215 million messages per second
- In-Network Computing: Revolutionary technology offloads computation and memory access to dramatically increase system efficiency
- Built-in Security: Hardware-offloaded XTS-AES 256/512-bit block-level encryption ensures data-in-transit security without CPU overhead
- Dual-Protocol Flexibility: Supports both InfiniBand HDR (200G) and high-speed Ethernet (up to 200GbE) on the same card
- Comprehensive Offloads: Advanced offloads for NVMe-oF, storage, MPI, and virtualization (SR-IOV)
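To put the aggregate figure in perspective, a quick back-of-envelope calculation (using standard PCIe 4.0 numbers, not figures from this datasheet) compares the card's combined port bandwidth against what a single PCIe 4.0 x16 slot can carry:

```python
# Back-of-envelope check: can one PCIe 4.0 x16 slot carry both ports at line rate?
# Figures below are standard PCIe/port numbers, not taken from the datasheet.

PCIE4_GTPS_PER_LANE = 16.0       # PCIe 4.0: 16 GT/s per lane
LANES = 16
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding

# Raw PCIe 4.0 x16 bandwidth per direction, in Gb/s (before protocol overhead)
pcie_gbps = PCIE4_GTPS_PER_LANE * LANES * ENCODING_EFFICIENCY

ports_aggregate_gbps = 2 * 200   # two QSFP56 ports at HDR 200 Gb/s

print(f"PCIe 4.0 x16 raw bandwidth: {pcie_gbps:.1f} Gb/s per direction")
print(f"Aggregate port bandwidth:   {ports_aggregate_gbps} Gb/s")
print(f"Slot can saturate both ports at once: {pcie_gbps >= ports_aggregate_gbps}")
```

The arithmetic shows the 400 Gb/s aggregate exceeds the roughly 252 Gb/s a single PCIe 4.0 x16 slot provides per direction, so sustained full line rate on both ports simultaneously is bounded by the host interface; single-port or multi-slot configurations avoid this bottleneck.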
Technical Specifications
| Parameter | Specification |
| --- | --- |
| Product Model | MCX653106A-HDAT |
| Port Configuration | 2 x QSFP56 |
| Max Speed per Port | 200 Gb/s (InfiniBand HDR / 200GbE) |
| Host Interface | PCIe 4.0 x16 (PCIe 3.0 compatible) |
| Form Factor | Full-Height, Full-Length PCIe Card |
| Security | XTS-AES 256/512-bit Block Encryption |
| Virtualization | SR-IOV (up to 1000 Virtual Functions) |
| Protocol Support | InfiniBand (HDR/EDR), Ethernet (200/100/50/40/25/10/1 GbE) |
| Software Support | Linux, Windows Server, VMware, OFED |
Core Technologies
- In-Network Computing: Offloads collective communication operations and memory access directly into the network
- RDMA (Remote Direct Memory Access): Enables direct memory-to-memory data transfer between servers
- NVIDIA GPUDirect Technologies: Direct GPU-to-GPU communication and GPU-to-storage access
- Hardware-Based I/O Virtualization (ASAP²): High-performance network isolation for virtual machines and containers
- Packet Pacing: Sub-nanosecond accuracy for smooth, predictable data flow
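To see why pacing granularity matters at these line rates, a short illustrative calculation (example packet size and rates, not datasheet figures) of how long a single packet occupies the wire:

```python
# Serialization time of one packet at various line rates.
# Illustrative arithmetic only; the MTU and rates are example values.

def serialization_time_ns(packet_bytes: int, rate_gbps: float) -> float:
    """Time to put one packet on the wire, in nanoseconds."""
    return packet_bytes * 8 / rate_gbps  # bits / (Gb/s) gives ns

for rate in (10, 100, 200):
    t = serialization_time_ns(1500, rate)
    print(f"1500B packet at {rate:3d} Gb/s: {t:7.1f} ns")
```

At 200 Gb/s a full 1500-byte frame occupies the wire for only 60 ns, so the inter-packet gaps that pacing controls must be timed at nanosecond-scale granularity to keep flows smooth.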
Application Scenarios
- AI & Deep Learning Training Clusters
- High-Performance Computing (HPC)
- Hyperscale & Cloud Data Centers
- High-Frequency Trading (HFT)
- Converged Storage & Compute
Related Models
| Ordering Part Number | Description | Key Difference |
| --- | --- | --- |
| MCX653106A-HDAT | Dual-Port, 200Gb/s, PCIe 4.0 x16 | Featured Model |
| MCX653105A-HDAT | Single-Port, 200Gb/s, PCIe 4.0 x16 | Single QSFP56 port |
| MCX653105A-HDAL | Single-Port, 200Gb/s with Cold Plate | Liquid-Cooled Thermal Solution |
| MCX653435A-HDAT | Single-Port, 200Gb/s, OCP 3.0 SFF | Form Factor for OCP servers |
Frequently Asked Questions
Q1: What is the difference between the MCX653106A-HDAT and the MCX653105A-HDAT?
The digit before the "A" in the part number denotes the port count: the MCX653106A-HDAT is the dual-port card, while the MCX653105A-HDAT is its single-port counterpart. For critical compatibility, always verify the exact OPN against the system's hardware compatibility list.
Q2: Does this card require special drivers or software?
It is supported by standard NVIDIA MLNX_OFED drivers for Linux and WinOF-2 for Windows. For In-Network Computing features, specific libraries like HPC-X or NVIDIA's SHARP software may be required.
Q3: Can I use this card in an Ethernet-only network?
Absolutely. The ConnectX-6 is a dual-protocol card. When connected to an Ethernet switch, it will operate as a high-performance Ethernet network card, supporting RoCE (RDMA over Converged Ethernet). If needed, the port protocol can be set explicitly to Ethernet using NVIDIA's mlxconfig utility.
Q4: What cables are compatible with the QSFP56 ports?
The ports support a wide range of QSFP56 and QSFP28 cables and transceivers for both InfiniBand and Ethernet.
Q5: What is meant by "Block-Level Encryption"?
It encrypts data at the storage block level (typically 512 bytes) rather than at the file or full-disk level. The hardware offload encrypts and decrypts data transparently as it is stored or retrieved, with minimal latency impact.
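The key property of block-level (XTS-style) encryption is that each fixed-size block is encrypted independently, with a tweak derived from its block number, so any block can be read or rewritten without touching its neighbors. The sketch below illustrates that structure only; the hash-based "keystream" is a deliberate stand-in, not real AES-XTS:

```python
import hashlib

BLOCK_SIZE = 512  # bytes; matches the typical storage-sector granularity

def _keystream(key: bytes, block_no: int) -> bytes:
    """Stand-in keystream derived from key + block number (NOT real AES-XTS)."""
    stream = b""
    counter = 0
    while len(stream) < BLOCK_SIZE:
        stream += hashlib.sha256(
            key + block_no.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return stream[:BLOCK_SIZE]

def encrypt_block(key: bytes, block_no: int, plaintext: bytes) -> bytes:
    ks = _keystream(key, block_no)
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt_block = encrypt_block  # XOR stream cipher: encryption is its own inverse

key = b"k" * 32
data = bytes(range(256)) * 4  # two identical 512-byte blocks
blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
enc = [encrypt_block(key, n, b) for n, b in enumerate(blocks)]

# Block 1 decrypts on its own, without reading block 0:
assert decrypt_block(key, 1, enc[1]) == blocks[1]
# Identical plaintext blocks encrypt differently because the tweak differs:
assert blocks[0] == blocks[1] and enc[0] != enc[1]
```

The per-block tweak is what lets identical sectors produce different ciphertext while still supporting random access, which is why this scheme suits storage traffic and can run in adapter hardware with negligible added latency.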
Service & Support
- Warranty: Full manufacturer warranty provided with all units
- Availability: In stock for fast shipment
- Technical Support: 24/7 access to a specialized networking engineering team
- Driver & Firmware: Access to latest stable drivers and firmware
- Custom Solutions: Consultation available for large-scale deployments
About Our Company
We are a leading global provider of enterprise-grade networking hardware with over ten years of industry expertise. Our foundation is built upon a large-scale manufacturing operation and a deeply experienced technical team, enabling us to deliver superior products and support.
As an authorized partner for top-tier brands including NVIDIA Mellanox, Ruckus, Aruba, and Extreme Networks, we maintain a vast, diverse inventory exceeding $10 million. Our portfolio encompasses original, brand-new equipment such as high-performance network switches, network interface cards (NICs), wireless solutions, and high-speed cabling.