Cables and Interconnects: DAC, AOC, AEC, and Fiber Selection for 800G AI Data Centers

December 2025 Update: 800G becoming default for new AI buildouts with 1.6T in development trials. AEC technology reaching 9 meters at 800G (Marvell/Infraeo OCP demo), bridging DAC-AOC gap with 25-50% less power than optical. NVIDIA LinkX AOC cables installed in majority of TOP500 HPC systems. QSFP-DD800 and OSFP form factors supporting PAM4 for production deployments.

Generative AI data centers require ten times more fiber than conventional setups to support GPU clusters and low-latency interconnects.¹ The cable infrastructure connecting 800G switches to thousands of GPUs determines whether expensive compute resources achieve full utilization or bottleneck on network connectivity. With 800G becoming the default choice for new AI data center buildouts and 1.6T already in development trials, cable selection decisions made today determine infrastructure flexibility for years ahead.²

The interconnect landscape has grown more complex than the traditional DAC versus AOC choice. Active Electrical Cables (AEC) now bridge the gap between copper and optical solutions, reaching 9 meters while consuming 25-50% less power than active optical alternatives.³ Form factors evolved from QSFP-DD to OSFP, each optimized for different thermal and density requirements. Organizations deploying AI infrastructure must navigate distance requirements, power budgets, cooling constraints, and upgrade paths across cable types that each excel in specific scenarios.

DAC delivers lowest cost and latency for short runs

Direct Attach Copper (DAC) cables remain the optimal choice for intra-rack connections where distance permits. The copper-based interconnects require no photoelectric conversion, transmitting signals directly with almost zero additional latency.⁴ Simple structure and high reliability reduce operational complexity while keeping costs substantially below optical alternatives.

800G DAC products use QSFP-DD800 or OSFP packaging supporting PAM4 technology. Passive DAC consumes essentially no power (below 0.15W), while active DAC power consumption remains far lower than optical modules.⁵ The cost advantage compounds at scale, with large deployments saving significant sums compared to optical alternatives.

Distance limitations constrain DAC applicability. Passive DAC reaches approximately 3 meters at 800G speeds, with active DAC extending to 5 meters.⁶ When interconnection distances exceed these thresholds, transmission loss and cable inflexibility make DAC impractical.
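
As a rough planning aid, the sketch below encodes those thresholds. The ~3-meter passive and ~5-meter active figures cited above are planning numbers, not guarantees; actual reach depends on vendor, wire gauge, and host channel budget.

```python
# Minimal sketch: check whether a planned 800G link fits within DAC reach.
# Thresholds follow the figures cited in this article (passive ~3 m, active ~5 m);
# real limits vary by vendor, wire gauge, and host SerDes channel budget.

def dac_option(link_length_m: float) -> str:
    """Return the copper option that covers a given link length, if any."""
    if link_length_m <= 3.0:
        return "passive DAC"
    if link_length_m <= 5.0:
        return "active DAC"
    return "beyond DAC reach - consider AEC or AOC"

for length in (1.5, 4.0, 7.0):
    print(f"{length} m -> {dac_option(length)}")
```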

Physical characteristics present additional challenges. Copper cables run thicker than optical alternatives, with larger bending radius and heavier weight complicating dense deployments.⁷ Cable management in high-density racks becomes more difficult as DAC counts increase.

Best applications for DAC include server-to-ToR (Top of Rack) switch connections within racks, high-speed interconnection between adjacent servers, and environments with extreme latency sensitivity at limited distances.⁸ Meta's AI cluster architecture uses DAC cables for rack training switch connections to GPUs, demonstrating the technology's role in production AI infrastructure.⁹

AOC extends reach with optical performance

Active Optical Cables (AOC) integrate photoelectric conversion modules, converting electrical signals to optical for transmission. The technology supports transmission distances from 30 to 100 meters in 800G configurations while maintaining QSFP-DD800 or OSFP packaging.¹⁰

Performance characteristics favor AOC for medium-distance connections. Light weight, excellent flexibility, and immunity to electromagnetic interference enable denser deployments without performance tradeoffs.¹¹ Better heat dissipation than copper alternatives helps manage thermal loads in GPU-dense environments.

Power consumption runs higher than DAC at 1-2W per cable, but remains acceptable for the distance capabilities provided.¹² Cost reaches approximately 4x DAC for similar specifications, reflecting the more complex internal electronics and optical components.¹³
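
To see how those per-cable differences add up, here is a rough fleet-level comparison. The 4x cost multiple and the 1-2W AOC draw come from the figures above; the DAC unit price is a placeholder assumption, not a quote.

```python
# Rough fleet-level AOC-versus-passive-DAC comparison using this article's figures:
# ~1-2 W per AOC, <0.15 W per passive DAC, and roughly 4x DAC pricing for AOC.
# dac_unit_cost_usd is an illustrative placeholder, not a market price.

def fleet_delta(n_links: int, dac_unit_cost_usd: float = 150.0,
                aoc_watts: float = 1.5, dac_watts: float = 0.15) -> dict:
    """Estimate extra power (W) and capex (USD) of choosing AOC over passive DAC."""
    return {
        "extra_power_w": n_links * (aoc_watts - dac_watts),
        "extra_capex_usd": n_links * dac_unit_cost_usd * (4 - 1),  # AOC ~4x DAC price
    }

print(fleet_delta(n_links=2048))  # e.g., a 2,048-cable fabric tier
```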

800G AOC cables target emerging applications in AI data centers, machine learning training facilities, and hyperscale cloud environments where bandwidth demands exceed 400G.¹⁴ The technology proves ideal for GPU cluster interconnections, inter-row connections, and large-scale AI training environments requiring flexible cabling with medium transmission distances.¹⁵

NVIDIA LinkX AOC cables demonstrate vendor-specific optimization for AI workloads. Designed and manufactured by NVIDIA, LinkX cables install in the majority of TOP500 HPC systems.¹⁶ Products span QSFP and OSFP form factors supporting speeds from QDR through NDR (400Gb/s) at distances up to 150 meters, with 100% testing in actual NVIDIA networking and GPU systems ensuring optimal signal integrity.¹⁷

AEC bridges the DAC-AOC gap

Active Electrical Cables (AEC) represent the emerging middle ground between DAC and AOC solutions. The technology integrates retimer or DSP chips inside cables to enhance signal transmission, amplifying signals, equalizing them, and performing clock data recovery to address copper transmission challenges.¹⁸

Distance capabilities exceed DAC significantly. AEC supports cable lengths from 2 to 9 meters, enabling reliable connections across racks within dense data center layouts.¹⁹ Marvell and Infraeo demonstrated a 9-meter 800G AEC at the 2025 OCP Global Summit, enabling copper connections spanning seven racks and bringing data center architecture closer to full row-scale AI system design.²⁰

Power efficiency advantages over AOC prove substantial. AEC consumes approximately 20% less power than optical alternatives while supporting 8 lanes of 106.25G-PAM4 signaling for bidirectional 800G traffic.²¹ Total power consumption of approximately 10W represents 25-50% lower consumption than AOC, improving airflow and weight management in high-density environments.²²
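
A quick back-of-the-envelope check ties those numbers together. The lane math below follows standard 800G signaling (8 x 106.25 Gb/s PAM4); the implied AOC power band is simply the article's 25-50% framing inverted, not a measured figure.

```python
# Lane arithmetic for an 800G AEC: 8 lanes at 106.25 Gb/s PAM4 carry 850 Gb/s on
# the wire; the ~50 Gb/s above the 800 Gb/s payload is FEC and encoding overhead.
lanes = 8
line_rate_gbps = 106.25
print(f"aggregate line rate: {lanes * line_rate_gbps} Gb/s")  # 850.0 Gb/s

# If a ~10 W AEC draws 25-50% less than a comparable AOC, the implied AOC band is:
aec_w = 10.0
aoc_low, aoc_high = aec_w / (1 - 0.25), aec_w / (1 - 0.50)
print(f"implied AOC power: {aoc_low:.1f}-{aoc_high:.1f} W per cable")
```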

Cost-performance positioning makes AEC attractive for large deployments. The cables cost less than AOC while providing capabilities exceeding DAC, making them a smart investment for bandwidth-demanding environments.²³ Industry analysts at 650 Group note that hyperscalers need solutions with high bandwidth, low power, and low cost, positioning AEC as the optimal solution for generative AI infrastructure.²⁴

The AEC market projects 28.2% CAGR through 2031, reaching $1.257 billion as the technology becomes standard in AI cluster deployments.²⁵ Leading vendors including Amphenol, TE Connectivity, Molex, and Credo invest in next-generation modules capable of 112Gbps per lane scaling toward 224Gbps for 800G and 1.6T systems.²⁶

OSFP versus QSFP-DD form factor selection

Transceiver form factors determine switch port density, thermal management requirements, and upgrade flexibility. Two standards compete for 400G and 800G deployments: OSFP and QSFP-DD.

OSFP (Octal Small Form-factor Pluggable) provides a larger mechanical form factor optimized for high-thermal-capacity applications. The design accommodates power dissipation up to 15-20W, supporting native 8x lanes for 400G and 800G connectivity.²⁷ OSFP excels in next-generation AI cluster interconnects where link reliability and power management outweigh concerns about form factor size.²⁸

Twin-port OSFP 800G configurations house eight channels of electrical signaling, with two 400Gbps optical or copper engines exiting through two ports. Extra cooling fins support transceivers up to 17W; these modules are designated "2x400G twin-port OSFP finned-top" products.²⁹

QSFP-DD (Quad Small Form-factor Pluggable Double Density) offers flexibility through backward compatibility. QSFP-DD ports typically run both 400G and 800G modules, enabling incremental upgrades without switch replacement.³⁰ Full compatibility with QSFP+, QSFP28, and QSFP56 standards enables seamless migration paths.³¹

QSFP-DD 400G remains the most widely deployed standard in AI-focused Ethernet environments, particularly within NVIDIA-based GPU clusters.³² The form factor dominates in networks upgrading incrementally from lower speeds.

Selection guidance depends on deployment strategy. QSFP-DD suits networks upgrading step by step, while OSFP favors new deployments prioritizing long-term scalability over backward compatibility.³³ Organizations anticipating expansion to 1.6T should favor OSFP architecture for easier future scaling.
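
The guidance above reduces to a simple rule of thumb, sketched below. The thresholds mirror this article's framing (thermal headroom plus the 1.6T roadmap); real selection also depends on the specific switch platform and optics portfolio.

```python
# Hypothetical form-factor chooser reflecting the guidance above, not a vendor rule.

def pick_form_factor(greenfield: bool, plans_1_6t: bool, module_watts: float) -> str:
    # OSFP's larger shell handles ~15-20 W modules and eases the path to 1.6T.
    if module_watts > 15 or (greenfield and plans_1_6t):
        return "OSFP"
    # QSFP-DD preserves backward compatibility with QSFP+/QSFP28/QSFP56 fleets.
    return "QSFP-DD"

print(pick_form_factor(greenfield=False, plans_1_6t=False, module_watts=12))  # QSFP-DD
print(pick_form_factor(greenfield=True, plans_1_6t=True, module_watts=17))    # OSFP
```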

Distance-based cable strategy

Optimal cable selection follows distance requirements across the data center topology:

Intra-rack connections (0-3m): DAC provides lowest cost, lowest latency, and lowest power consumption. Use passive DAC where distances permit, active DAC when additional signal conditioning benefits performance.

Adjacent rack connections (3-7m): AEC extends copper benefits with active signal restoration. The 25-50% power savings over AOC compound across thousands of connections in large GPU clusters.

Inter-row connections (7-100m): AOC delivers the reach required for spine-leaf architectures spanning data halls. SR8 multimode modules with MTP/MPO connectors support distances to 100 meters, with DR8 parallel single-mode available where runs stretch further.³⁴

Cross-building connections (100m-2km+): Single-mode fiber with FR4/LR4 modules provides the reach for connecting clusters across facilities. Install SMF for core-backbone or cross-building links planning for future bandwidth growth.³⁵

Server-to-leaf GPU connections typically span 100-300 meters.³⁶ Leaf-to-spine links using 400G/800G interfaces span 300-800 meters across data halls.³⁷ Matching cable technology to distance requirements optimizes cost while ensuring performance.
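
The whole strategy collapses into a distance-to-media lookup, sketched below. The boundaries are this article's planning figures rather than hard standards; always validate against the reach specs of the actual optics and cables being quoted.

```python
# Distance-based media selection mirroring the strategy above (planning figures only).

def select_interconnect(distance_m: float) -> str:
    if distance_m <= 3:
        return "passive DAC"
    if distance_m <= 7:
        return "AEC (retimed copper)"
    if distance_m <= 100:
        return "AOC or parallel optics over MTP/MPO"
    if distance_m <= 2000:
        return "single-mode fiber with FR4/LR4 optics"
    return "single-mode fiber with longer-reach optics"

for d in (2, 5, 40, 300, 1500):
    print(f"{d} m -> {select_interconnect(d)}")
```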

NVIDIA LinkX portfolio spans all requirements

The LinkX product family provides the industry's most complete interconnect line spanning 10G through 1600G in Ethernet and EDR through XDR in InfiniBand protocols.³⁸ Products address every distance and speed requirement for AI infrastructure.

800G and 400G products link Quantum-2 InfiniBand and Spectrum-4 SN5600 Ethernet switches with ConnectX-7 adapters, BlueField-3 DPUs, and DGX H100 systems.³⁹ The product line includes DAC reaching 3 meters, linear active copper cables from 3-5 meters, multimode optics to 50 meters, and single-mode optics to 100 meters, 500 meters, and 2 kilometers.⁴⁰

Dual protocol support simplifies inventory management. 100G-PAM4 LinkX cables and transceivers support both InfiniBand and Ethernet protocols in the same device using identical part numbers.⁴¹ Protocol determination occurs when inserting into Quantum-2 NDR InfiniBand or Spectrum-4 Ethernet switches.

Quality assurance exceeds industry standards. Beyond IBTA compliance, LinkX-certified cables undergo 100% testing in actual NVIDIA networking and GPU systems ensuring optimal signal integrity and end-to-end performance.⁴² The testing requirements exceed Ethernet AOC industry standards, meeting supercomputer-grade quality levels.

Bend radius and cable management at density

High-density deployments require careful attention to cable routing and bend radius maintenance. Improper bending causes signal attenuation and permanent fiber damage that degrades performance over time.

Standard bend radius guidelines specify that fiber cables should never bend tighter than ten times their outer diameter.⁴³ Installation phases require a more conservative minimum of twenty times the diameter.⁴⁴ Temperature variations, vibration, and movement change fiber bend characteristics, requiring a 35% larger bend radius in high-vibration or seismic environments.⁴⁵
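
A small helper makes those rules of thumb concrete. It assumes only the multipliers quoted above (10x in service, 20x during installation, plus 35% for high-vibration sites) and should always defer to the cable datasheet.

```python
# Bend-radius rule-of-thumb calculator; datasheet values take precedence.

def min_bend_radius_mm(outer_diameter_mm: float, installing: bool = False,
                       high_vibration: bool = False) -> float:
    radius = outer_diameter_mm * (20 if installing else 10)
    if high_vibration:
        radius *= 1.35  # 35% margin for vibration or seismic movement
    return radius

print(min_bend_radius_mm(3.0))                        # 30.0 mm in service (3 mm jumper)
print(min_bend_radius_mm(3.0, installing=True))       # 60.0 mm while pulling
print(min_bend_radius_mm(3.0, high_vibration=True))   # 40.5 mm in seismic racks
```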

Bend-insensitive fiber options reduce constraints. The ITU G.657 specification defines bend-insensitive single-mode fibers with minimum bend radii from 5mm (G.657.B3) to 10mm (G.657.A1), compared to 30mm for standard G.652 fiber.⁴⁶ However, high-fiber-count data center cables create stiff constructions that cannot physically achieve these tight radii without damage, making cable-level specifications (typically 10x diameter) the practical constraint.⁴⁷

High-density environments tempt operators to overload cable pathways. Crowded pathways strain cables, increase signal loss probability, and complicate bend radius maintenance.⁴⁸ Space constraints in dense racks require high-density MPO panels and slim cables to maximize capacity while maintaining routing compliance.⁴⁹

Pre-terminated assemblies reduce installation time and errors in large deployments.⁵⁰ Consistent labeling, polarity verification, and trunk layout planning before installation prevent future maintenance challenges as fiber counts multiply for AI workloads.

Migration to 800G and beyond

The transition to 800G networking accelerates through 2025 as AI workloads demand bandwidth increases. Strategic cable infrastructure decisions enable smooth migration paths.

800G modules consume 14-20W or more, stressing switch cooling and rack power budgets.⁵¹ OSFP form factors designed for higher power dissipation accommodate these thermal loads better than QSFP-DD alternatives in dense configurations.
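
The per-module figures translate directly into switch-level load. The sketch below assumes a fully populated 64-port faceplate, which is typical of current 51.2T platforms but is an illustrative count, not a spec for any particular product.

```python
# Optics power per switch at the 14-20 W per-module range cited above.
ports = 64  # illustrative fully populated faceplate
for module_watts in (14, 17, 20):
    print(f"{ports} ports x {module_watts} W = {ports * module_watts / 1000:.2f} kW of optics")
```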

Migrating to 800G requires higher fiber counts, MTP cabling, and stricter polarity and cleanliness requirements.⁵² Organizations should plan connector strategies and cleaning protocols before deployment scales.

Interoperability testing across switch vendors and NICs becomes critical. Ensuring 800G optics work across RoCE, Ethernet, and InfiniBand gateway deployments requires robust lab validation and ongoing firmware alignment.⁵³

Introl's field engineering teams deploy cable infrastructure for AI clusters across 257 global locations, from single-rack deployments to 100,000-GPU installations. Cable selection and routing decisions directly impact GPU utilization rates and cluster reliability.

The interconnect decision framework

Cable infrastructure investments span the operational lifetime of AI deployments. A systematic approach to selection optimizes performance, cost, and flexibility:

Map distance requirements across the topology before selecting technologies. Measure actual rack-to-rack and row-to-row distances rather than assuming standard configurations.

Match cable types to distances: DAC for intra-rack, AEC for adjacent racks and short inter-row runs, AOC for longer spine-leaf connections, SMF for cross-building links.

Select form factors based on upgrade strategy: QSFP-DD for incremental migration, OSFP for greenfield deployments planning 1.6T expansion.

Budget for higher fiber counts in AI deployments. The 10x increase over conventional data centers requires pathway capacity planning before installation begins.

Specify bend-insensitive fiber where routing constraints exist, while respecting cable-level (not fiber-level) bend radius specifications.
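
To support the fiber-count step above, a quick pathway-capacity check helps before trays and conduits are ordered. The 40% fill ratio and trunk diameter below are illustrative assumptions; substitute values from the pathway vendor and cable datasheets.

```python
import math

def trunks_per_tray(tray_width_mm: float, tray_depth_mm: float,
                    trunk_od_mm: float, max_fill: float = 0.40) -> int:
    """Estimate how many fiber trunks fit in a tray at a given fill ratio."""
    tray_area = tray_width_mm * tray_depth_mm
    trunk_area = math.pi * (trunk_od_mm / 2) ** 2
    return int(tray_area * max_fill / trunk_area)

# Example: 300 x 100 mm tray, 12 mm OD trunks, 40% fill.
print(f"~{trunks_per_tray(300, 100, trunk_od_mm=12)} trunks per tray")
```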

The cables connecting expensive GPUs and switches represent a fraction of total infrastructure cost but determine whether that equipment performs optimally. Organizations deploying AI infrastructure should treat interconnect selection as a strategic decision affecting performance for the deployment lifetime.

Key takeaways

For network architects:
- AI data centers require 10x more fiber than conventional setups; 800G is now the default for new buildouts, with 1.6T in development
- Distance-based selection: DAC (0-3m intra-rack), AEC (3-7m adjacent racks), AOC (7-100m inter-row), SMF (100m-2km+ cross-building)
- Form factor: QSFP-DD for incremental migration, OSFP for greenfield deployments planning 1.6T expansion

For cable selection:
- DAC: lowest cost and latency, passive power below 0.15W, passive reach to 3m and active to 5m; best for server-to-ToR connections
- AOC: 30-100m reach, 1-2W power, ~4x DAC cost; EMI immunity, flexibility, and heat dissipation suit GPU-dense environments
- AEC: 2-9m reach, 25-50% less power than AOC, bridging the DAC-AOC gap; Marvell and Infraeo demonstrated a 9m 800G AEC spanning 7 racks

For procurement teams:
- AEC market projected to reach $1.257B by 2031 (28.2% CAGR), becoming standard for AI clusters as hyperscalers demand high bandwidth, low power, and low cost
- NVIDIA LinkX cables install in the majority of TOP500 HPC systems and undergo 100% testing in actual NVIDIA networking and GPU systems
- 800G modules consume 14-20W or more; OSFP accommodates these thermal loads better than QSFP-DD in dense configurations

For installation teams:
- Bend radius: 10x outer diameter in service, 20x during installation, plus 35% in high-vibration or seismic environments
- G.657 bend-insensitive fiber minimums (5-10mm) do not apply at the cable level; cable specifications (typically 10x diameter) are the practical constraint
- Pre-terminated assemblies reduce installation time; verify polarity before deployment because MPO/MTP systems demand consistency

For strategic planning:
- 800G migration requires higher fiber counts, MTP cabling, and stricter polarity and cleanliness requirements
- Interoperability testing across switch vendors and NICs is critical for RoCE, Ethernet, and InfiniBand gateway deployments
- Budget for the 10x fiber increase over conventional data centers; pathway capacity planning must happen before installation begins

References

  1. FS.com, "Data Center Cabling for AI Workloads: What's Changing in 2025?" blog, 2025.

  2. QSFP DD 800G, "2025 800G Optical Module Trends for AI Data Centers," 2025.

  3. AscentOptics Blog, "Comprehensive Guide to Active Electrical Cables," 2025.

  4. AscentOptics Blog, "800G DAC and AOC Cables for Data Center and AI Interconnects," 2025.

  5. Network-Switch.com, "DAC vs AOC Cables: Complete 2025 Data Center Guide (with AEC)," 2025.

  6. AscentOptics Blog, "800G DAC and AOC Cables for Data Center and AI Interconnects," 2025.

  7. AscentOptics Blog, "800G DAC and AOC Cables," 2025.

  8. AscentOptics Blog, "800G DAC and AOC Cables," 2025.

  9. Meta Engineering, "RoCE networks for distributed AI training at scale," August 2024.

  10. AscentOptics Blog, "800G DAC and AOC Cables," 2025.

  11. AscentOptics Blog, "800G DAC and AOC Cables," 2025.

  12. Network-Switch.com, "DAC vs AOC Cables," 2025.

  13. Network-Switch.com, "DAC vs AOC Cables," 2025.

  14. AscentOptics Blog, "800G DAC and AOC Cables," 2025.

  15. AscentOptics Blog, "800G DAC and AOC Cables," 2025.

  16. NVIDIA, "Mellanox LinkX InfiniBand AOC Cables," product page, 2025.

  17. NVIDIA, "LinkX Cables and Transceivers Guide to Key Technologies," documentation, 2025.

  18. AscentOptics Blog, "800G AOC vs AEC: Choosing the Right Interconnect for Your Network," 2025.

  19. Equal Optics, "Active Electrical Cables Explained," 2025.

  20. Marvell, "AI Scale Up Goes for Distance with 9-meter 800G AEC from Infraeo and Marvell," blog, 2025.

  21. Microchip Technology, "Active Electrical Cable Technology—A Growing Necessity in the Era of Generative AI," blog, 2024.

  22. AscentOptics Blog, "Comprehensive Guide to Active Electrical Cables," 2025.

  23. AscentOptics Blog, "Comprehensive Guide to Active Electrical Cables," 2025.

  24. Microchip Technology, "Active Electrical Cable Technology," blog, 2024.

  25. Valuates Reports, "Active Electrical Cables (AEC) Market, Report Size, Worth, Revenue, Growth," 2025.

  26. Credo, "Active Electrical Cables (AEC) Becoming an Important Part of Data Center Architectures," 2024.

  27. Link-PP, "Understanding the OSFP Standard: The Open 400G/800G Optical Transceiver Platform Powering Next-Gen AI Networks," 2025.

  28. Link-PP, "Understanding the OSFP Standard," 2025.

  29. NVIDIA Documentation, "LinkX Cables and Transceivers Guide to Key Technologies," 2025.

  30. QSFP DD 800G, "2025 800G Optical Module Trends for AI Data Centers," 2025.

  31. SDGICable, "OSFP Transceivers vs QSFP-DD: Which Is Better for Your 400G Data Center Network?" 2025.

  32. QSFP DD 800G, "QSFP-DD 400G in NVIDIA AI Infrastructure: Scalable Connectivity," 2025.

  33. QSFP DD 800G, "2025 800G Optical Module Trends," 2025.

  34. FiberMall, "800G/400G AI Data Center Product Architecture," 2025.

  35. Cablify, "Data Center Cabling Best Practices for 2025," 2025.

  36. FiberMall, "800G/400G AI Data Center Product Architecture," 2025.

  37. FiberMall, "800G/400G AI Data Center Product Architecture," 2025.

  38. NVIDIA, "LinkX Cables and Transceivers," product page, 2025.

  39. NVIDIA Documentation, "LinkX Cables and Transceivers Guide," 2025.

  40. NVIDIA Documentation, "LinkX Cables and Transceivers Guide," 2025.

  41. NVIDIA Documentation, "LinkX Cables and Transceivers Guide," 2025.

  42. NVIDIA, "Mellanox LinkX InfiniBand AOC Cables," product page, 2025.

  43. Cablify, "Data Center Cabling Best Practices for 2025," 2025.

  44. Topfiberbox, "Fiber Optic Bend Radius Standards 2025," 2025.

  45. Topfiberbox, "Fiber Optic Bend Radius Standards 2025," 2025.

  46. FS Community, "Bend-Insensitive Fiber Optic Cables," 2025.

  47. Data Center Frontier, "Single-Mode Fiber Bend Performance in Data Center Networks," 2025.

  48. Cablify, "Data Center Cabling Best Practices for 2025," 2025.

  49. Cables and Kits, "High-Speed Ethernet Cables for Data Centers in 2025," 2025.

  50. ZGSM Wire Harness, "Guide to AI Data Center Cabling: Strategies, Solutions, and Future Trends," 2025.

  51. QSFP DD 800G, "2025 800G Optical Module Trends," 2025.

  52. Vitex Technology, "AI Data Center Upgrades for 2025," 2025.

  53. Vitex Technology, "AI Data Center Upgrades for 2025," 2025.

