Mellanox **40Gb/s** Dual Port InfiniBand Network Adapter PCIe Card MHQH29-XTC

£485.50
Product Code: MHQH29-XTC
Specifications

Mellanox 40Gb/s Dual Port InfiniBand Network Adapter PCIe Card MHQH29-XTC

ConnectX adapter cards provide the highest-performing and most flexible interconnect solution for Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.

– Industry-standard technology
– World-class cluster performance
– High-performance networking and storage access
– Guaranteed bandwidth and low-latency services
– Reliable transport
– I/O consolidation
– Virtualization acceleration
– Scales to tens-of-thousands of nodes
– 1us MPI ping latency
– Selectable 10, 20, or 40Gb/s InfiniBand or 10GigE per port
– Single- and Dual-Port options available
– PCI Express 2.0 (up to 5GT/s)
– CPU offload of transport operations
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– TCP/UDP/IP stateless offload
– Fibre Channel encapsulation (FCoIB or FCoE)
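The 1us MPI ping latency figure above refers to the half round-trip time for small messages between two ranks. A minimal ping-pong sketch along those lines is shown below; it is a hypothetical test harness, not a vendor benchmark, and assumes any of the MPI stacks listed in the Protocol Support section (run with two ranks, e.g. `mpirun -np 2 ./pingpong`):

```c
/* Minimal MPI ping-pong latency sketch (hypothetical harness, not a vendor benchmark). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char byte = 0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* Rank 0 sends a 1-byte message and waits for the echo. */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Rank 1 echoes each message back. */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("half round-trip latency: %.2f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```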

INFINIBAND
– IBTA Specification 1.2.1 compliant
– 10, 20, or 40Gb/s per port
– RDMA, Send/Receive semantics
– Hardware-based congestion control
– Atomic operations
– 16 million I/O channels
– 256 to 4Kbyte MTU, 1Gbyte messages
– 9 virtual lanes: 8 data + 1 management
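The RDMA and Send/Receive semantics listed above are exposed to applications through the verbs API (libibverbs) shipped with OFED. A minimal sketch, assuming libibverbs is installed and the HCA link is on port 1, that enumerates adapters and prints each port's state, width, and speed (compile with `-libverbs`):

```c
/* Minimal libibverbs sketch: enumerate HCAs and print port attributes.
 * Assumes an OFED/libibverbs installation; port 1 is an assumption. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list)
        return 1;

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0)   /* query port 1 */
            printf("%s: state=%d active_width=%d active_speed=%d\n",
                   ibv_get_device_name(list[i]),
                   port.state, port.active_width, port.active_speed);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(list);
    return 0;
}
```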

ENHANCED INFINIBAND
– Hardware-based reliable transport
– Hardware-based reliable multicast
– Extended Reliable Connected transport
– Enhanced Atomic operations
– Fine-grained end-to-end QoS

HARDWARE-BASED I/O VIRTUALIZATION
– Single Root IOV
– Address translation and protection
– Multiple queues per virtual machine
– VMware NetQueue support
– PCISIG IOV compliant

ADDITIONAL CPU OFFLOADS
– TCP/UDP/IP stateless offload
– Intelligent interrupt coalescence
– Compliant with Microsoft RSS and NetDMA

STORAGE SUPPORT
– T10-compliant Data Integrity Field support
– Fibre Channel over InfiniBand or Ethernet

CONNECTIVITY
– Interoperable with InfiniBand or 10GigE switches
– microGiGaCN or QSFP connectors
– 20m+ (10Gb/s), 10m+ (20Gb/s), or 7m+ (40Gb/s) of passive copper cable
– External optical media adapter and active cable support

MANAGEMENT AND TOOLS (InfiniBand)
– OpenSM
– Interoperable with third-party subnet managers
– Firmware and debug tools (MFT, IBDIAG)

OPERATING SYSTEMS/DISTRIBUTIONS
– Novell SLES, Red Hat Enterprise Linux (RHEL), Fedora, and other Linux distributions
– Microsoft Windows Server 2003/2008/CCS 2003
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF)
– VMware ESX Server 3.5, Citrix XenServer 4.1

PROTOCOL SUPPORT
– Open MPI, OSU MVAPICH, HP MPI, Intel MPI, MS MPI, Scali MPI
– TCP/UDP, IPoIB, SDP, RDS
– SRP, iSER, NFS RDMA, FCoIB, FCoE
– uDAPL
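IPoIB in the list above means unmodified TCP/IP applications run over the InfiniBand fabric through a standard IP interface (typically ib0). A minimal sketch under that assumption; the peer address 192.168.10.2 and port 5000 are placeholders for an IPoIB address and service on the remote node:

```c
/* Minimal sketch: an ordinary TCP socket running over an IPoIB interface.
 * The address and port below are placeholders, not defaults. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);                        /* placeholder port */
    inet_pton(AF_INET, "192.168.10.2", &peer.sin_addr); /* placeholder IPoIB address */

    /* connect() behaves exactly as on Ethernet; IPoIB is transparent to the app */
    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) == 0)
        printf("connected over the IPoIB interface\n");

    close(fd);
    return 0;
}
```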