DPUs and SmartNICs: the third pillar of data center computing

Updated December 11, 2025

December 2025 Update: DPU SmartNIC market reaching $1.11B in 2024, projected $4.44B by 2034 (15% CAGR). 50% of cloud providers now using DPUs; 35% of AI training offloaded to DPUs. BlueField-3 delivering equivalent of 300 CPU cores in services offload. BlueField-4 announced with 800Gbps and 6x compute. AMD Pensando Elba shipping dual 200GbE with P4 programmability.

The DPU SmartNIC market reached $1.11 billion in 2024 and is projected to grow to $4.44 billion by 2034, a 14.89% compound annual growth rate.¹ Close to 50% of cloud service providers now rely on DPUs for workload optimization.² Around 35% of AI model training tasks are offloaded to DPUs for better efficiency and performance.³ Industry leaders increasingly view DPUs as the third pillar of computing alongside CPUs and GPUs: the dedicated processors that securely move data across infrastructure.⁴
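As a sanity check, compound-growth arithmetic recovers the headline projection from the 2024 base and the stated CAGR:

```python
# Sanity-check the market projection: $1.11B in 2024 compounding at
# 14.89% annually for ten years should land near the $4.44B 2034 figure.

def project(value: float, cagr: float, years: int) -> float:
    """Compound a starting value at a fixed annual growth rate."""
    return value * (1 + cagr) ** years

projected_2034 = project(1.11, 0.1489, 10)
print(f"Projected 2034 market: ${projected_2034:.2f}B")  # ≈ $4.45B
```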

AI clusters transformed traffic patterns within data centers. Most traffic now flows east-west between GPUs during model training and checkpointing rather than north-south between applications and the internet.⁵ The DPU evolved from an optional accelerator to necessary infrastructure that prevents CPU bottlenecks from constraining GPU utilization.⁶ Organizations building AI infrastructure must evaluate DPU selection as carefully as GPU and CPU choices.

NVIDIA BlueField-3: the infrastructure standard

NVIDIA BlueField-3 represents the third-generation data center infrastructure-on-a-chip, enabling organizations to build software-defined, hardware-accelerated IT infrastructure from cloud to core data center to edge.⁷ The 22-billion transistor DPU offloads, accelerates, and isolates software-defined networking, storage, security, and management functions.⁸

Network connectivity reaches 400 gigabits per second via Ethernet or NDR InfiniBand.⁹ Port configurations span 1, 2, or 4 ports with options for various bandwidth combinations.¹⁰ On-board memory includes 16 gigabytes of DDR5 with form factor options including half-height half-length and full-height half-length PCIe cards.¹¹

BlueField-3 delivers 10 times the accelerated compute power of the previous generation.¹² The processor complex features 16 ARM A78 cores with 4 times the cryptography acceleration of BlueField-2.¹³ Network bandwidth doubled while compute power quadrupled and memory bandwidth increased nearly 5 times.¹⁴

The performance equivalence tells the story. One BlueField-3 DPU delivers the equivalent data center services of up to 300 CPU cores, freeing valuable CPU cycles for business-critical applications.¹⁵ The offload ratio justifies DPU investment for organizations where CPU capacity constrains workload deployment.
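A back-of-the-envelope sketch makes the offload ratio concrete. The 300-core equivalence comes from NVIDIA's datasheet; the fleet size and per-server core count below are hypothetical illustration values:

```python
# CPU capacity freed by DPU offload across a fleet (rough sketch).
# CORES_PER_DPU_OFFLOAD is NVIDIA's stated services equivalence;
# SERVERS and CORES_PER_SERVER are illustrative assumptions.

CORES_PER_DPU_OFFLOAD = 300   # datacenter-services equivalent per BlueField-3
SERVERS = 1_000               # hypothetical fleet size, one DPU each
CORES_PER_SERVER = 128        # hypothetical dual-socket server

freed = SERVERS * CORES_PER_DPU_OFFLOAD
print(f"Service-equivalent cores offloaded: {freed:,}")
print(f"Equivalent whole servers of capacity: {freed / CORES_PER_SERVER:,.0f}")
```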

BlueField-3 is the first DPU to support fifth-generation PCIe and offer time-synchronized data center acceleration.¹⁶ Power consumption does not exceed 150 watts.¹⁷

Use cases span the full infrastructure stack: hyperconverged infrastructure with encryption, data integrity, deduplication, decompression, and erasure coding for storage; distributed firewalls, IDS/IPS, root of trust, microsegmentation, and DDoS prevention for security; cloud-native supercomputing with multi-tenancy and communication acceleration for HPC/AI; and Cloud RAN, virtualized edge gateways, and VNF acceleration for telco and edge applications.¹⁸

NVIDIA announced BlueField-4 as the successor—an 800 gigabits per second infrastructure platform for gigascale AI factories delivering 6 times the compute of BlueField-3 with accelerations for networking, data storage, and cybersecurity.¹⁹

AMD Pensando: the hyperscaler choice

AMD acquired Pensando Systems in 2022, bringing P4-programmable DPU technology into AMD's data center portfolio.²⁰ The Pensando DPUs have been widely adopted, validated, and tested as the front-end networking solution in some of the largest hyperscale data centers.²¹

The second-generation AMD Pensando Elba DPU is fully P4 programmable and optimized for high throughput, enabling advanced offload of networking, storage, and security services at dual 200 gigabits per second line rate.²²

The Elba SoC contains 16 ARM Cortex-A72 cores, dual DDR4/DDR5 memory controllers, 32 lanes of PCIe Gen3 or Gen4 connectivity, up to dual 200GbE or quad 100GbE networking, and storage and crypto offloading capabilities.²³

The architecture centers on Match-Processing Units (MPUs) where software-in-silicon executes and provides accelerated fast-path services.²⁴ System memory connects to both the general-purpose ARM cores and the domain-specific MPUs.²⁵ The P4 pipeline handles networking, storage, telemetry, SDN, security, congestion management, and RDMA simultaneously without compromising performance.²⁶

The programmable pipeline provides VxLAN tunnel encapsulation and decapsulation, IPv4/v6 routing, stateless and stateful security rules, network address translation, server load balancing, encryption services, VLAN to VPC mapping, and VPC peering at line rate.²⁷
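To make one of those operations concrete, the VXLAN header the pipeline adds and strips is just 8 bytes per RFC 7348: an I-flag marking a valid VNI and a 24-bit VNI. The DPU does this at line rate in hardware; the slow software version below only shows the layout:

```python
# Minimal VXLAN encap/decap per RFC 7348 (8-byte header: I flag + 24-bit VNI).
# Shown in software purely to make the header layout concrete.

import struct

VXLAN_FLAG_VNI_VALID = 0x08000000  # "I" flag in the first 32-bit word

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    header = struct.pack("!II", VXLAN_FLAG_VNI_VALID, vni << 8)
    return header + inner_frame

def vxlan_decap(datagram: bytes) -> tuple[int, bytes]:
    word0, word1 = struct.unpack("!II", datagram[:8])
    assert word0 & VXLAN_FLAG_VNI_VALID, "I flag must be set"
    return word1 >> 8, datagram[8:]

pkt = vxlan_encap(5001, b"\x00" * 14)   # dummy inner Ethernet frame
vni, inner = vxlan_decap(pkt)
print(vni)  # 5001
```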

AMD offers a SAI (Switch Abstraction Interface) reference pipeline running SONiC OS on Pensando DPUs.²⁸ The integration enables SONiC-provided services including the routing stack, management interface, and monitoring while leveraging full DPU capabilities via the SSDK.²⁹

AMD introduced the Pensando Salina as the 400G successor designed to compete directly with NVIDIA BlueField-3 in front-end network applications.³⁰ The Pensando Pollara 400 AI NIC became commercially available in the first half of 2025, optimizing AI and HPC networking through advanced capabilities including RDMA and congestion control.³¹

The newer Giglio DPU builds upon Elba with source-code compatibility, enabling existing customers to adopt the newer platform with minimal software changes.³²

For enterprises running VMware, the practical choices narrow to NVIDIA BlueField-2 or AMD Pensando DSC2.³³ The VMware ecosystem support limits options for organizations committed to that virtualization platform.

Intel IPU E2100: the cloud-native approach

Intel's Infrastructure Processing Unit (IPU) Adapter E2100 delivers infrastructure acceleration, virtual storage enablement, and enhanced security features.³⁴ The E2100 SoC is an infrastructure acceleration platform optimized for power, performance, and scale.³⁵

The hardware features a rich packet-processing pipeline with 200GbE bandwidth and includes NVMe, compression, and crypto accelerators.³⁶ The ARM Neoverse N1 compute complex allows customer-provided software to execute features ranging from complex packet-processing pipelines to storage transport, device management, and telemetry.³⁷

The E2100 contains 16 ARM Neoverse N1 cores with 32 megabytes of cache and 3 channels of 16GB LPDDR4x memory totaling 48 gigabytes.³⁸

Model variants address different deployment requirements. The E2100-CCQDA2 launched in Q1 2024 with 150W TDP in a dual-port configuration supporting 200/100/50/25/10GbE data rates over PCIe 4.0 in a half-length, full-height, single-slot form factor.³⁹ The E2100-CCQDA2HL launched in Q4 2024 with reduced 75W TDP in the same dual-port configuration.⁴⁰

Connectivity uses QSFP56 ports supporting DAC, optics, and AOC cables.⁴¹ Virtualization support includes Virtual Machine Device Queues (VMDq), PCI-SIG SR-IOV, and RoCEv2/RDMA.⁴²

The Intel IPU lineage traces to the Mount Evans project, co-designed with Google Cloud to function like AWS Nitro, offloading NVMe over Fabrics and network security.⁴³ The E2100 represents the first iteration available to non-Google customers.⁴⁴

Use cases include separation and isolation of infrastructure workloads, offloading virtualized networks to the IPU where accelerators process tasks more efficiently, and replacing local disk storage with detached virtualized storage.⁴⁵

Market dynamics and adoption patterns

The DPU market divides into distinct use case segments. Data center offload leads, propelled by hyperscale data center expansion and growing demands of complex, data-heavy computing workloads.⁴⁶ North America holds the largest revenue share, driven by escalating cybersecurity threats, growing adoption of zero-trust security frameworks, and significant investments in AI and machine learning infrastructure.⁴⁷

Adoption patterns show clear workload alignment. About 30% of deployments focus on AI workloads while 20% target zero-trust security architecture.⁴⁸ DPUs with hardware-based security acceleration see a 30% increase in adoption, reflecting the industry's priority on zero-trust principles.⁴⁹

AI traffic patterns drive DPU necessity. East-west traffic between GPUs during training dominates modern AI cluster communication.⁵⁰ The host CPU cannot process this traffic at line rate without becoming a bottleneck. DPUs handle the network processing that would otherwise consume CPU cycles needed for orchestration and control plane functions.
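Rough arithmetic shows why. The cycles-per-packet figure below is a commonly cited rule of thumb for a kernel network stack, not a measurement, and the other constants are illustrative:

```python
# Why line-rate packet processing swamps a host CPU (rough sketch).
# CYCLES_PER_PKT is an assumed kernel-stack cost, not a measurement.

LINK_BPS = 400e9          # 400 Gb/s link
PKT_BYTES = 1500          # MTU-sized packets (small packets are far worse)
CYCLES_PER_PKT = 1_000    # assumed software cost per packet
CPU_HZ = 3e9              # 3 GHz core

pps = LINK_BPS / (PKT_BYTES * 8)
cores_needed = pps * CYCLES_PER_PKT / CPU_HZ
print(f"{pps/1e6:.1f} Mpps -> ~{cores_needed:.0f} cores just to keep up")
```

At 33 Mpps even MTU-sized traffic would consume on the order of a dozen cores; DPU hardware pipelines absorb that cost instead.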

The competitive landscape features three primary vendors with distinct positioning. NVIDIA leads with BlueField integration into its broader AI infrastructure ecosystem and the strongest InfiniBand support.⁵¹ AMD Pensando dominates hyperscaler deployments with proven production scale and P4 programmability.⁵² Intel targets cloud-native architectures with the Nitro-inspired IPU design.⁵³

Marvell's OCTEON 10 represents the next-generation challenger: the industry's first 5nm DPU, with ARM Neoverse N2 cores delivering 3 times the computing performance and 50% lower power consumption of previous generations.⁵⁴ Hardware accelerators for inline ML/AI provide a 100-times performance boost over software-based inference.⁵⁵

Zero-trust security implementation

DPUs enable zero-trust security enforcement at the network edge without involving host CPUs.⁵⁶ The architecture places policy enforcement at the data source rather than at network aggregation points.

L4 firewalls run directly on the DPU, enforcing policy before traffic reaches the host.⁵⁷ NVIDIA's BlueField DPU supports microsegmentation, allowing operators to apply zero-trust principles to GPU workloads without host CPU involvement.⁵⁸
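The shape of such an L4 policy check can be sketched as a rule table with a default-deny fallback. The rules here are invented; a real DPU evaluates equivalent policy in hardware tables before the host ever sees the packet:

```python
# Sketch of the L4 policy check a DPU-resident firewall enforces.
# Rules are invented for illustration.

from ipaddress import ip_address, ip_network

# (source network, destination port, protocol, verdict)
RULES = [
    (ip_network("10.1.0.0/16"), 443, "tcp", True),   # tenant subnet to HTTPS
    (ip_network("0.0.0.0/0"),    22, "tcp", False),  # block inbound SSH
]

def allowed(src_ip: str, dst_port: int, proto: str) -> bool:
    for net, port, p, verdict in RULES:
        if ip_address(src_ip) in net and dst_port == port and proto == p:
            return verdict
    return False  # default-deny, in keeping with zero-trust principles

print(allowed("10.1.2.3", 443, "tcp"))  # True
print(allowed("8.8.8.8", 22, "tcp"))    # False
```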

The security model matters particularly for multi-tenant AI infrastructure. When multiple customers share GPU clusters, the DPU enforces isolation between tenants at the network level.⁵⁹ The host operating system never sees traffic destined for other tenants, reducing the attack surface.

Root of trust establishes cryptographic verification of infrastructure components.⁶⁰ The DPU validates firmware, operating systems, and applications before allowing network access. Compromised hosts cannot communicate on the network without passing DPU-enforced verification.

DPUs enable network monitoring, telemetry, and observability functions in highly distributed zero-trust environments across cloud and edge instances.⁶¹ The visibility extends to encrypted traffic through hardware-accelerated TLS inspection without the performance penalty of software-based decryption.

AI infrastructure integration

AI clusters present specific DPU requirements that differ from general data center workloads. The east-west traffic pattern between GPUs during distributed training creates sustained bandwidth demands that traditional NICs cannot handle without CPU assistance.⁶²

Collective operations—all-reduce, all-gather, and broadcast—form the communication backbone of distributed training.⁶³ DPUs can accelerate these operations through hardware offload, reducing latency and freeing GPU compute for actual model execution.
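The traffic these collectives generate is easy to quantify for the common ring algorithm: each rank transmits 2(N−1)/N times the buffer size per all-reduce. The gradient size below is illustrative:

```python
# Per-GPU bytes on the wire for a ring all-reduce: each of N ranks
# transmits 2*(N-1)/N times the buffer size. Buffer size is illustrative.

def ring_allreduce_bytes(buffer_bytes: int, n_ranks: int) -> float:
    return 2 * (n_ranks - 1) / n_ranks * buffer_bytes

grads = 10 * 2**30          # 10 GiB of gradients (illustrative)
per_gpu = ring_allreduce_bytes(grads, 8)
print(f"Each GPU moves {per_gpu / 2**30:.2f} GiB per all-reduce step")
```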

RDMA support proves essential for AI workloads. DPUs handle RoCEv2 (RDMA over Converged Ethernet) or InfiniBand RDMA processing in hardware, bypassing the host network stack entirely.⁶⁴ The zero-copy data transfer between GPU memory and network minimizes latency and maximizes bandwidth utilization.
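Keeping such a link full requires enough data in flight to cover the bandwidth-delay product, which the DPU's RDMA engine manages in hardware. The round-trip time below is an assumed intra-cluster value:

```python
# Bandwidth-delay product: outstanding data needed to keep a 400 Gb/s
# RDMA link saturated. RTT is an assumed intra-cluster value.

LINK_BPS = 400e9
RTT_S = 5e-6               # assumed 5 microsecond fabric round trip

bdp_bytes = LINK_BPS / 8 * RTT_S
print(f"BDP: {bdp_bytes / 1024:.0f} KiB must be in flight")
```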

Congestion control becomes critical at AI cluster scale. DPUs implement DCQCN (Data Center Quantized Congestion Notification) or similar algorithms in hardware, responding to congestion signals faster than software implementations.⁶⁵ The hardware response prevents the packet loss that would otherwise degrade training performance.
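The sender-side rate control at the heart of DCQCN can be sketched in a few lines. The update rules follow the published algorithm in spirit, but this is a toy software model with simplified constants, not the hardware implementation:

```python
# Simplified DCQCN sender-side rate control (toy model, not hardware).

G = 1 / 256  # EWMA gain for the congestion estimate alpha

class DcqcnSender:
    def __init__(self, line_rate_gbps: float):
        self.rate = line_rate_gbps       # current sending rate
        self.target = line_rate_gbps     # rate to recover toward
        self.alpha = 1.0                 # congestion estimate

    def on_cnp(self):
        """Congestion notification received: cut rate, remember the target."""
        self.target = self.rate
        self.rate *= 1 - self.alpha / 2
        self.alpha = (1 - G) * self.alpha + G

    def on_recovery_timer(self):
        """Fast recovery: decay alpha, close half the gap to the target."""
        self.alpha = (1 - G) * self.alpha
        self.rate = (self.rate + self.target) / 2

s = DcqcnSender(400.0)
s.on_cnp()
print(f"{s.rate:.0f} Gb/s after first CNP")           # 200 Gb/s
s.on_recovery_timer()
print(f"{s.rate:.0f} Gb/s after one recovery step")   # 300 Gb/s
```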

Storage acceleration extends DPU value beyond networking. Checkpoint writes during training require high-throughput storage access that DPUs can accelerate through NVMe-oF offload and compression.⁶⁶ The storage path bypasses the host CPU entirely, maintaining GPU utilization during checkpoint operations.
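The checkpoint math is simple but instructive. Checkpoint size, storage throughput, and the inline compression ratio below are all illustrative assumptions:

```python
# Checkpoint write time with and without DPU-side inline compression.
# All constants below are illustrative assumptions.

CKPT_BYTES = 500 * 10**9     # 500 GB checkpoint
NVMEOF_BPS = 25 * 10**9      # 200 Gb/s NVMe-oF path = 25 GB/s
COMPRESSION = 2.0            # assumed inline compression ratio

raw_s = CKPT_BYTES / NVMEOF_BPS
offloaded_s = CKPT_BYTES / COMPRESSION / NVMEOF_BPS
print(f"Uncompressed: {raw_s:.0f} s, with inline compression: {offloaded_s:.0f} s")
```

Halving checkpoint stalls directly raises effective GPU utilization across a training run.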

Infrastructure planning considerations

DPU selection depends on the broader infrastructure context. NVIDIA BlueField integrates naturally with NVIDIA GPU deployments through consistent management and the DOCA software framework.⁶⁷ Organizations standardizing on NVIDIA GPUs should evaluate BlueField for consistency.

AMD Pensando fits deployments requiring P4 programmability and proven hyperscale production. The P4 pipeline enables custom packet processing that predefined hardware cannot match.⁶⁸ Organizations with specific networking requirements should evaluate Pensando's flexibility.

Intel IPU suits cloud-native architectures where the Nitro-style separation of infrastructure and tenant workloads matters.⁶⁹ The design philosophy differs from BlueField and Pensando's approach, fitting organizations building public cloud-style infrastructure.

Power consumption affects total cost of ownership. BlueField-3's 150W maximum, Intel E2100's 75-150W range, and Pensando's similar envelope all add to rack power budgets.⁷⁰ At scale, DPU power consumption becomes a meaningful line item.
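Annualizing that draw shows the scale of the line item. Fleet size and electricity price are illustrative; 150 W is BlueField-3's stated maximum:

```python
# Annualized DPU energy cost at fleet scale. Fleet size and electricity
# price are illustrative; 150 W is BlueField-3's stated maximum draw.

DPUS = 10_000
WATTS = 150
PRICE_PER_KWH = 0.10          # assumed $/kWh
HOURS_PER_YEAR = 8_760

kwh = DPUS * WATTS / 1_000 * HOURS_PER_YEAR
print(f"{kwh:,.0f} kWh/yr -> ${kwh * PRICE_PER_KWH:,.0f}/yr")
```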

Software ecosystem maturity varies by vendor. NVIDIA's DOCA provides the most comprehensive SDK for DPU application development.⁷¹ AMD's SSDK and SONiC integration enable programmable pipelines. Intel's SDK targets the specific IPU architecture. Development team expertise should influence vendor selection.

The DPU market will consolidate as AI infrastructure standardizes. Organizations investing now should evaluate roadmaps—BlueField-4, Pensando Salina, and future Intel IPUs—to ensure multi-year deployment alignment.⁷² The third pillar of computing is still taking shape, and early architectural decisions will compound over time.

Key takeaways

For infrastructure architects:

- One BlueField-3 DPU delivers equivalent data center services of up to 300 CPU cores; offload justifies investment when CPU capacity constrains workload deployment
- East-west traffic between GPUs dominates AI cluster communication; DPUs prevent host CPU bottlenecks from constraining GPU utilization
- 35% of AI model training tasks now offloaded to DPUs; 30% of deployments focus on AI workloads, 20% on zero-trust security

For procurement teams:

- DPU SmartNIC market: $1.11B (2024) → $4.44B by 2034 (14.89% CAGR); ~50% of cloud providers now rely on DPUs
- NVIDIA BlueField-3: 400Gbps (Ethernet/InfiniBand), 16 ARM A78 cores, 16GB DDR5, ≤150W; BlueField-4 announced with 800Gbps
- AMD Pensando Elba: dual 200GbE, 16 ARM A72 cores, P4 programmable; Salina 400G and Pollara 400 AI NIC available 2025
- Intel IPU E2100: 200GbE, 16 ARM Neoverse N1, 48GB LPDDR4x, 75-150W TDP; Nitro-inspired architecture

For security teams:

- Zero-trust enforcement at network edge without host CPU involvement; L4 firewalls run directly on DPU
- Microsegmentation enables zero-trust for GPU workloads; multi-tenant AI infrastructure enforces isolation at network level
- Root of trust validates firmware, OS, and applications before network access; compromised hosts cannot communicate without DPU verification

For AI cluster operators:

- RDMA essential: DPUs handle RoCEv2 or InfiniBand in hardware, bypassing host stack for zero-copy GPU-to-network transfer
- Congestion control (DCQCN) in hardware responds faster than software, preventing packet loss that degrades training
- Storage acceleration: NVMe-oF offload and compression maintain GPU utilization during checkpoint operations

For vendor selection:

- NVIDIA BlueField: DOCA SDK ecosystem, consistent NVIDIA GPU management, strongest InfiniBand support
- AMD Pensando: proven hyperscale production, P4 programmability for custom packet processing
- Intel IPU: cloud-native Nitro-style separation of infrastructure and tenant workloads


References

  1. OpenPR. "DPU SmartNIC Market expected to hit USD 4.44 Billion by 2034, growing at a CAGR of 14.89%." 2024. https://www.openpr.com/news/4100580/dpu-smartnic-market-expected-to-hit-usd-4-44-billion-by-2034

  2. Network Switch. "SmartNICs and DPUs Explained 2025: Architecture, Use Cases & Future." 2025. https://network-switch.com/blogs/networking/smart-nics-and-dpus-explained

  3. Network Switch. "SmartNICs and DPUs Explained 2025."

  4. Dell. "Entering the Next Frontier - SmartNIC/Data Processing Unit." 2024. https://www.dell.com/en-us/blog/entering-the-next-frontier-smartnic-data-processing-unit/

  5. TechWrix. "DPUs/SmartNICs for AI fabrics: Offload Patterns for East–West Traffic." 2025. https://www.techwrix.com/dpus-smartnics-for-ai-fabrics-practical-offload-patterns-for-east-west-traffic/

  6. TechWrix. "DPUs/SmartNICs for AI fabrics."

  7. NVIDIA Datasheet. "NVIDIA BlueField-3 DPU: Programmable Data Center Infrastructure On-A-Chip." 2024. https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/documents/datasheet-nvidia-bluefield-3-dpu.pdf

  8. NVIDIA Newsroom. "NVIDIA Extends Data Center Infrastructure Processing Roadmap with BlueField-3." 2021. https://nvidianews.nvidia.com/news/nvidia-extends-data-center-infrastructure-processing-roadmap-with-bluefield-3

  9. NVIDIA Datasheet. "NVIDIA BlueField-3 DPU."

  10. NVIDIA Datasheet. "NVIDIA BlueField-3 DPU."

  11. NVIDIA Datasheet. "NVIDIA BlueField-3 DPU."

  12. NVIDIA Newsroom. "NVIDIA Extends Data Center Infrastructure Processing Roadmap."

  13. TechPowerUp. "NVIDIA Extends Data Center Infrastructure Processing Roadmap with BlueField-3 DPU." 2021. https://www.techpowerup.com/280932/nvidia-extends-data-center-infrastructure-processing-roadmap-with-bluefield-3-dpu

  14. TechPowerUp. "NVIDIA Extends Data Center Infrastructure Processing Roadmap."

  15. NVIDIA Datasheet. "NVIDIA BlueField-3 DPU."

  16. NVIDIA Datasheet. "NVIDIA BlueField-3 DPU."

  17. NVIDIA Documentation. "Specifications." 2024. https://docs.nvidia.com/networking/display/bf3dpucontroller/specifications

  18. NVIDIA Datasheet. "NVIDIA BlueField-3 DPU."

  19. NVIDIA. "BlueField Networking Platform." 2025. https://www.nvidia.com/en-us/networking/products/data-processing-unit/

  20. AMD. "AMD Pensando DPU Technology." 2025. https://www.amd.com/en/products/data-processing-units/pensando.html

  21. AMD. "AMD Pensando DPU Technology."

  22. AMD Product Brief. "AMD Pensando 2nd Generation ('Elba') Data Processing Unit." 2024. https://www.amd.com/content/dam/amd/en/documents/pensando-technical-docs/product-briefs/pensando-elba-product-brief.pdf

  23. ServeTheHome. "Hands-on with an AMD Pensando DSC2-100G Elba DPU in a Secret Lab." 2024. https://www.servethehome.com/hands-on-with-an-amd-pensando-elba-dpu-in-a-secret-lab-arm-nvidia-dell-vmware-esxi-upt/

  24. AMD White Paper. "Architecture Matters: Comparison of DPU Hardware Strategies." 2024. https://www.amd.com/content/dam/amd/en/documents/pensando-technical-docs/white-papers/pensando-comparison-of-dpu-hardware-strategies.pdf

  25. AMD White Paper. "Architecture Matters."

  26. AMD Product Brief. "AMD Pensando 2nd Generation ('Elba')."

  27. AMD Product Brief. "AMD Pensando 2nd Generation ('Elba')."

  28. AMD. "AMD Pensando Networking Solutions for the Modern Data Center." 2025. https://www.amd.com/en/solutions/data-center/networking.html

  29. AMD. "AMD Pensando Networking Solutions."

  30. ColoCrossing. "AMD Pensando Salina 400 DPU: New Features and Insights Unveiled." 2024. https://www.colocrossing.com/blog/amd-pensando-salina-400-dpu-new-features/

  31. Hard|Forum. "AMD takes the AI networking battle to Nvidia with new DPU & smart NIC launch." 2025. https://hardforum.com/threads/amd-takes-the-ai-networking-battle-to-nvidia-with-new-dpu-smart-nic-launch.2037378/

  32. AMD Product Brief. "Pensando Giglio Product Brief." 2024. https://www.amd.com/content/dam/amd/en/documents/pensando-technical-docs/product-briefs/pensando-giglio-product-brief.pdf

  33. ServeTheHome. "Hands-on with an AMD Pensando DSC2-100G Elba DPU."

  34. Intel. "Intel Infrastructure Processing Unit (Intel IPU) Adapter E2100." 2025. https://www.intel.com/content/www/us/en/products/details/network-io/ipu/adapter-e2100.html

  35. Intel Product Brief. "Intel Infrastructure Processing Unit (Intel IPU) SoC E2100." 2024. https://www.intel.com/content/www/us/en/content-details/818147/intel-infrastructure-processing-unit-intel-ipu-soc-e2100-product-brief.html

  36. Intel Product Brief. "Intel IPU SoC E2100."

  37. Intel Product Brief. "Intel IPU SoC E2100."

  38. ColoCrossing. "Intel IPU E2100 DPU Launch: Mass Market Review & Features." 2024. https://www.colocrossing.com/blog/the-launch-of-intel-ipu-e2100-dpu/

  39. Intel. "Intel IPU Adapter E2100-CCQDA2 - Product Specifications." 2024. https://www.intel.com/content/www/us/en/products/sku/235724/intel-ipu-adapter-e2100ccqda2/specifications.html

  40. Intel. "Intel IPU Adapter E2100-CCQDA2HL - Product Specifications." 2024. https://www.intel.com/content/www/us/en/products/sku/242097/intel-ipu-adapter-e2100-ccqda2hl/specifications.html

  41. Intel. "Intel IPU Adapter E2100-CCQDA2 - Product Specifications."

  42. Intel. "Intel IPU Adapter E2100-CCQDA2 - Product Specifications."

  43. ColoCrossing. "Intel IPU E2100 DPU Launch."

  44. ColoCrossing. "Intel IPU E2100 DPU Launch."

  45. Intel. "Intel Infrastructure Processing Unit (Intel IPU)." 2025. https://www.intel.com/content/www/us/en/products/details/network-io/ipu.html

  46. OpenPR. "DPU SmartNIC Market expected to hit USD 4.44 Billion by 2034."

  47. OpenPR. "DPU SmartNIC Market expected to hit USD 4.44 Billion by 2034."

  48. Network Switch. "SmartNICs and DPUs Explained 2025."

  49. Network Switch. "SmartNICs and DPUs Explained 2025."

  50. TechWrix. "DPUs/SmartNICs for AI fabrics."

  51. Yole Group. "NVIDIA DPU BlueField-3." 2024. https://www.yolegroup.com/product/report/nvidia-dpu-network-card-bluefield-3/

  52. AMD. "AMD Pensando DPU Technology."

  53. ColoCrossing. "Intel IPU E2100 DPU Launch."

  54. CloudSwitch. "DPU & SmartNIC Vendors: Complete Product Line Guide." 2024. https://cloudswit.ch/blogs/the-most-complete-dpu-smartnic-vendors-with-its-product-line-summary/

  55. CloudSwitch. "DPU & SmartNIC Vendors."

  56. VMware Blog. "The Next Generation in Data Center Security – SmartNICs and DPUs." February 2023. https://blogs.vmware.com/cloud-foundation/2023/02/20/the-next-generation-in-data-center-security-smartnics-and-dpus/

  57. TechWrix. "DPUs/SmartNICs for AI fabrics."

  58. TechWrix. "DPUs/SmartNICs for AI fabrics."

  59. VMware Blog. "The Next Generation in Data Center Security."

  60. NVIDIA Datasheet. "NVIDIA BlueField-3 DPU."

  61. VMware Blog. "The Next Generation in Data Center Security."

  62. TechWrix. "DPUs/SmartNICs for AI fabrics."

  63. TechWrix. "DPUs/SmartNICs for AI fabrics."

  64. NVIDIA Datasheet. "NVIDIA BlueField-3 DPU."

  65. AMD Product Brief. "AMD Pensando 2nd Generation ('Elba')."

  66. Intel Product Brief. "Intel IPU SoC E2100."

  67. NVIDIA Developer Blog. "Power the Next Wave of Applications with NVIDIA BlueField-3 DPUs." 2023. https://developer.nvidia.com/blog/power-the-next-wave-of-applications-with-nvidia-bluefield-3-dpus/

  68. AMD White Paper. "Architecture Matters."

  69. ColoCrossing. "Intel IPU E2100 DPU Launch."

  70. NVIDIA Documentation. "Specifications."

  71. NVIDIA Developer Blog. "Power the Next Wave of Applications."

  72. NVIDIA. "BlueField Networking Platform."


