Intel Ethernet Network Adapter E810-CQDA2 Internal Fiber 100000 Mbit/s

SKU
E810CQDA2
In stock
Intel Ethernet Network Adapter E810CQDA2
More Information
Connectivity technology Wired
Host interface PCI Express
Interface Fiber
Maximum data transfer rate 100000 Mbit/s
SKU E810CQDA2
EAN 5032037172141
Manufacturer Intel
Availability Y
iWARP/RDMA
iWARP delivers converged, low-latency fabric services to data centers through Remote Direct Memory Access (RDMA) over Ethernet. The key iWARP components that deliver low latency are Kernel Bypass, Direct Data Placement, and Transport Acceleration.
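
As a point of reference only (not vendor documentation), the sketch below shows one way to check, on a Linux host, which RDMA devices the kernel has registered and which network interfaces they back. It only reads standard sysfs paths and assumes the adapter's RDMA driver (for example irdma for E810-class controllers) is loaded.

```python
#!/usr/bin/env python3
"""List RDMA devices registered under /sys/class/infiniband on a Linux host.

Minimal sketch: it only reads standard sysfs paths and assumes the adapter's
RDMA driver (e.g. irdma for E810-class controllers) is loaded.
"""
from pathlib import Path

IB_ROOT = Path("/sys/class/infiniband")

def rdma_devices():
    """Yield (rdma_device, [netdev, ...]) pairs, if any are registered."""
    if not IB_ROOT.is_dir():
        return  # no RDMA-capable devices registered
    for ibdev in sorted(IB_ROOT.iterdir()):
        net_dir = ibdev / "device" / "net"  # netdev(s) backed by this RDMA device
        netdevs = sorted(p.name for p in net_dir.iterdir()) if net_dir.is_dir() else []
        yield ibdev.name, netdevs

if __name__ == "__main__":
    found = False
    for name, netdevs in rdma_devices():
        found = True
        print(f"{name}: {', '.join(netdevs) or 'no netdev mapping found'}")
    if not found:
        print("No RDMA devices registered (is the RDMA driver loaded?)")
```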

Intel® Data Direct I/O Technology
Intel® Data Direct I/O Technology is a platform technology that improves I/O data processing efficiency for data delivery and data consumption from I/O devices. With Intel DDIO, Intel® Server Adapters and controllers talk directly to the processor cache without a detour via system memory, reducing latency, increasing system I/O bandwidth, and reducing power consumption.

PCI-SIG* SR-IOV Capable
Single-Root I/O Virtualization (SR-IOV) involves natively (directly) sharing a single I/O resource between multiple virtual machines. SR-IOV provides a mechanism by which a Single Root Function (for example, a single Ethernet port) can appear to be multiple separate physical devices.
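
For illustration only, here is a minimal Python sketch of the standard Linux sysfs interface for creating SR-IOV Virtual Functions on a physical port. The interface name and VF count are placeholders, and it assumes root privileges and SR-IOV enabled in the adapter and platform firmware.

```python
#!/usr/bin/env python3
"""Enable SR-IOV Virtual Functions (VFs) on a physical function via sysfs.

Minimal sketch assuming a Linux host, root privileges, and SR-IOV enabled
in firmware; the interface name and VF count below are placeholders.
"""
from pathlib import Path
import sys

def set_num_vfs(iface: str, num_vfs: int) -> None:
    sriov = Path(f"/sys/class/net/{iface}/device/sriov_numvfs")
    total = Path(f"/sys/class/net/{iface}/device/sriov_totalvfs")
    if not sriov.exists():
        raise RuntimeError(f"{iface}: driver does not expose SR-IOV controls")
    max_vfs = int(total.read_text()) if total.exists() else num_vfs
    if num_vfs > max_vfs:
        raise ValueError(f"{iface} supports at most {max_vfs} VFs")
    # The kernel requires resetting to 0 before changing a non-zero VF count.
    if int(sriov.read_text()) != 0:
        sriov.write_text("0")
    sriov.write_text(str(num_vfs))

if __name__ == "__main__":
    iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"  # placeholder
    count = int(sys.argv[2]) if len(sys.argv) > 2 else 4   # placeholder
    set_num_vfs(iface, count)
    print(f"Enabled {count} VFs on {iface}")
```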

Flexible Port Partitioning
Flexible Port Partitioning (FPP) technology uses the industry-standard PCI-SIG SR-IOV specification to efficiently divide a physical Ethernet device into multiple virtual devices, providing Quality of Service by assigning each process to a Virtual Function and giving it a fair share of the bandwidth.
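
As a hedged example of carving out bandwidth shares per Virtual Function, the sketch below drives the iproute2 `ip link set ... vf ... max_tx_rate` option from Python. The interface name and rates are placeholders, the VFs must already exist (see the SR-IOV sketch above), and per-VF rate limiting support depends on the driver.

```python
#!/usr/bin/env python3
"""Assign per-VF transmit rate limits with iproute2 ("ip link set ... vf ...").

Hedged sketch for a Linux host with iproute2 installed and VFs already
created; the interface name and rates are placeholders.
"""
import subprocess

def limit_vf_rate(pf: str, vf: int, max_mbps: int) -> None:
    # max_tx_rate caps the VF's transmit rate in Mbit/s (driver-dependent).
    subprocess.run(
        ["ip", "link", "set", "dev", pf, "vf", str(vf), "max_tx_rate", str(max_mbps)],
        check=True,
    )

if __name__ == "__main__":
    pf = "eth0"  # placeholder physical function
    for vf, rate in enumerate([25000, 25000, 25000, 25000]):  # e.g. 4 x 25 Gbit/s shares
        limit_vf_rate(pf, vf, rate)
        print(f"VF {vf}: capped at {rate} Mbit/s")
```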

Virtual Machine Device Queues (VMDq)
Virtual Machine Device Queues (VMDq) is a technology designed to offload some of the switching done in the VMM (Virtual Machine Monitor) to networking hardware specifically designed for this function. VMDq drastically reduces the overhead associated with I/O switching in the VMM, which greatly improves throughput and overall system performance.
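
VMDq queue assignment itself is handled by the hypervisor and the adapter driver; as a loosely related, hedged illustration, the sketch below merely inspects and adjusts the adapter's hardware queue (channel) counts with ethtool, with the interface name and channel count as placeholders.

```python
#!/usr/bin/env python3
"""Inspect and adjust an adapter's hardware queue (channel) counts with ethtool.

Hedged sketch: VMDq queue assignment is done by the hypervisor/driver; this
only shows channel counts via "ethtool -l" and sets them via "ethtool -L".
The interface name and channel count are placeholders.
"""
import subprocess

def show_channels(iface: str) -> str:
    return subprocess.run(["ethtool", "-l", iface],
                          capture_output=True, text=True, check=True).stdout

def set_combined_channels(iface: str, count: int) -> None:
    subprocess.run(["ethtool", "-L", iface, "combined", str(count)], check=True)

if __name__ == "__main__":
    iface = "eth0"  # placeholder
    print(show_channels(iface))
    set_combined_channels(iface, 16)  # example: 16 combined RX/TX queues
    print(show_channels(iface))
```
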
Ports & interfaces
Internal Yes
PCI version 4.0
Fiber ports quantity 2
Fiber optic connector QSFP28
Interface Fiber
Host interface PCI Express
Connectivity technology Wired
Bandwidth
Maximum data transfer rate 100000 Mbit/s
Technical details
Interface type PCIe v4.0 (16 GT/s)
Speed & slot width 16.0 GT/s, x16 Lane
PCI-SIG* SR-IOV Capable Yes
On-chip QoS and Traffic Management Yes
iWARP/RDMA Yes
Intelligent Offloads Yes
Intel Virtualization Technology for Connectivity (VT-c) Yes
Intel Virtual Machine Device Queues (VMDq) Yes
Intel Flexible Port Partitioning Yes
Intel Data Direct I/O Technology Yes
Storage Over Ethernet iSCSI, NFS
Harmonized System (HS) code 85176990
Export Control Classification Number (ECCN) 5A991
Commodity Classification Automated Tracking System (CCATS) NA
Product type Network Interface Card
Processor family 800 Network Adapters (up to 100GbE)
Status Launched
Network interface card type Server
Network interface card cable medium Copper, Fiber
Launch date Q3'20
Network
LAN controller Intel E810
Ethernet LAN data rates 1000, 10000, 25000, 50000, 100000 Mbit/s
Maximum data transfer rate 100000 Mbit/s
Design
Internal Yes
Component for Server/workstation
Product series 800 Series Network Adapters (up to 100GbE)
Operational conditions
Operating temperature (T-T) 0 - 60 °C
Storage temperature (T-T) -40 - 70 °C
Other features
Cable type QSFP28 ports - DAC, Optics, and AOCs
Controller type Intel(R) Ethernet Controller E810
Bracket height Low-Profile (LP) / Full-Height (FH)

Compare Products
Product
Intel Ethernet Network Adapter E810-CQDA2 Internal Fiber 100000 Mbit/s
QNAP QXG-100G2SF-E810 network card Internal Fiber 100000 Mbit/s
Lenovo 01CV830 network card Internal Fiber 16000 Mbit/s
QNAP QXG-100G2SF-CX6 network card Internal Fiber 100000 Mbit/s
Broadcom BCM957508-P2100G network card Internal Fiber 100000 Mbit/s
Nvidia ConnectX-6 Dx Internal Fiber 100000 Mbit/s
SKU
E810CQDA2
QXG-100G2SF-E810
01CV830
QXG-100G2SF-CX6
BCM957508-P2100G
900-9X658-0056-SB1
Description
The dual-port QXG-100G2SF-E810 100 GbE network expansion card with Intel® Ethernet Controller E810 supports PCIe 4.0 and provides up to 100Gbps bandwidth to overcome performance bottlenecks. With iWARP/RDMA and SR-IOV support (available soon), the QXG-100G2SF-E810 greatly boosts network efficiency and is ideal for I/O-intensive and latency-sensitive virtualization and data centers. The QXG-100G2SF-E810 is a perfect match for all-flash storage to realize the highest performance potential for ever-demanding IT challenges.

Ensure reliable data transfer with Forward Error Correction (FEC)
The QXG-100G2SF-E810 supports FEC to overcome packet loss – a potentially common occurrence in fast, long-distance networks. FEC helps the receiving side detect and recover lost data to correct bit errors, thus ensuring reliable data transmission over “noisy” communication channels. With three FEC modes, you can use a suitable cable and switch to build an optimal network environment.
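
If the host runs Linux, FEC can typically be queried and set with ethtool; the hedged sketch below shows this, with the interface name and encoding as placeholders. The modes actually supported depend on the adapter, cable, and switch.

```python
#!/usr/bin/env python3
"""Query and set the Forward Error Correction (FEC) mode with ethtool.

Hedged sketch assuming a Linux host with ethtool installed; the interface
name and FEC encoding below are placeholders.
"""
import subprocess

def show_fec(iface: str) -> str:
    return subprocess.run(["ethtool", "--show-fec", iface],
                          capture_output=True, text=True, check=True).stdout

def set_fec(iface: str, encoding: str) -> None:
    # Common encodings: auto, off, rs (Reed-Solomon), baser (BASE-R / Fire code).
    subprocess.run(["ethtool", "--set-fec", iface, "encoding", encoding], check=True)

if __name__ == "__main__":
    iface = "eth0"  # placeholder
    print(show_fec(iface))
    set_fec(iface, "rs")  # example: Reed-Solomon FEC, common on 100G links
    print(show_fec(iface))
```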

Pair with high-speed switches for high-performance, low-latency data centers
The QXG-100G2SF-E810 network expansion card can be connected to a switch either with a QSFP28 cable or a QSFP28 to (4) SFP28 cable. You can also configure network redundancy to achieve network failover via the switch for continuous service and high availability.
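
As one hedged illustration of host-side failover (complementing redundancy configured on the switch), the sketch below builds an active-backup Linux bond from two interfaces using iproute2 commands. The interface and bond names are placeholders, and production setups would normally persist this in the distribution's network configuration.

```python
#!/usr/bin/env python3
"""Create an active-backup bond over two interfaces with iproute2 commands.

Hedged sketch assuming a Linux host with the bonding module available and
root privileges; interface names and the bond name are placeholders.
"""
import subprocess

def run(*args: str) -> None:
    subprocess.run(list(args), check=True)

def make_active_backup_bond(bond: str, slaves: list[str]) -> None:
    # Create the bond with MII link monitoring every 100 ms.
    run("ip", "link", "add", bond, "type", "bond",
        "mode", "active-backup", "miimon", "100")
    for iface in slaves:
        run("ip", "link", "set", iface, "down")        # must be down before enslaving
        run("ip", "link", "set", iface, "master", bond)
    run("ip", "link", "set", bond, "up")

if __name__ == "__main__":
    make_active_backup_bond("bond0", ["eth0", "eth1"])  # placeholder names
    print("bond0 is up; assign addressing with your usual tooling")
```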

Unleash the full potential of the NVMe all-flash TS-h2490FU
Supporting twenty-four (24) U.2 NVMe Gen 3 x4 SSDs, QNAP’s flagship TS-h2490FU NVMe all-flash storage features a PCIe Gen 4 x16 slot that enables the QXG-100G2SF-E810 to fully realize 100Gbps performance and eliminate bottlenecks in modern data centers, virtualization, cloud applications, and mission-critical backup/restore tasks.

Supports Windows and Linux servers/workstations
Besides QNAP devices, the QXG-100G2SF-E810 supports many platforms (including Windows 10, Windows Server, and Linux/Ubuntu) allowing you to attain optimal business performance for a wider range of system applications and services. The higher bandwidth density with reduction in links helps reduce cabling footprint and operational costs.
The Emulex 16 Gb (Generation 6) Fibre Channel (FC) host bus adapters (HBAs) are an ideal solution for all Lenovo System x servers requiring high-speed data transfer in storage connectivity for virtualized environments, data backup, and mission-critical applications. They are designed to meet the needs of modern networked storage systems that utilize high performance and low latency solid state storage drives for caching and persistent storage as well as hard disk drive arrays.

The Emulex 16 Gb Gen 6 FC HBAs feature ExpressLane™, which prioritizes mission-critical traffic in congested networks, ensuring maximum application performance on flash storage arrays. They also seamlessly support Brocade ClearLink™ diagnostics through Emulex OneCommand® Manager, ensuring the reliability and management of the storage network when connected to Brocade Gen 5 FC SAN fabrics.
36 Months Service (Bring-in)
Dual-Port 100GbE/Single-Port 200GbE SmartNIC
ConnectX-6 Dx SmartNIC is the industry’s most secure and advanced cloud network interface card to accelerate mission-critical data-center applications, such as security, virtualization, SDN/NFV, big data, machine learning, and storage. The SmartNIC provides up to two ports of 100Gb/s or a single-port of 200Gb/s Ethernet connectivity and delivers the highest return on investment (ROI) of any smart network interface card. ConnectX-6 Dx is a member of NVIDIA's world-class, award-winning ConnectX series of network adapters powered by leading 50Gb/s (PAM4) and 25/10Gb/s (NRZ) SerDes technology and novel capabilities that accelerate cloud and data-center payloads.
Short Description
Intel Ethernet Network Adapter E810CQDA2
2x 100G QSFP28, PCIe 4.0 x16
Emulex 16Gbps Gen 6 FC Single-Port HBA Adapter for Lenovo System x Servers
Dual-port 100GbE network adapter; 2x QSFP28; Mellanox ConnectX-6 Dx controller
Dual-Port 100 Gb/s QSFP56 Ethernet PCI Express 4.0 x16 Network Interface Card
ConnectX-6 Dx EN adapter card, 100GbE, OCP3.0, With Host Management, Dual-port QSFP56, PCIe 4.0 x16, No Crypto, Thumbscrew (Pull Tab) Bracket
Manufacturer
Intel
QNAP
Lenovo
QNAP
Broadcom
Nvidia