CES 2026 Chip Wars: Intel's 18A Breakthrough, NVIDIA's Memory Crisis, and AMD's AI Counterattack

Intel launches its first 18A chip at CES 2026 while NVIDIA faces a 40% production cut from memory shortages. AMD counters with Ryzen AI 400 processors. Three keynotes on January 5 will reshape the compute landscape from laptops to data centers.

Intel Fab 52 in Chandler, Arizona, now produces the most advanced semiconductor chips ever manufactured in the United States.1 The first 18A processor ships in January 2026, serving as the final validation point before external customers commit to Intel Foundry Services and, if it succeeds, marking Intel's return to process leadership.2

TL;DR

CES 2026 brings three pivotal chip announcements on January 5. Intel unveils Panther Lake (Core Ultra 300 series), the company's first 18A chip, claiming 50% faster CPU and GPU performance over the previous generation while proving the viability of its foundry ambitions.3 NVIDIA faces a memory allocation crisis that could cut RTX 50 series production 30-40% in early 2026 as Samsung and SK Hynix prioritize AI data center chips generating 12x more revenue than gaming products.4 AMD reveals Ryzen AI 400 "Gorgon Point" processors with refined Zen 5 cores and up to 180 platform TOPS for AI workloads, positioning for the Copilot+ PC wave.5 For enterprise infrastructure, the announcements signal continued GPU constraints, emerging alternatives in integrated AI accelerators, and potential supply chain diversification as Intel's foundry proves commercial viability.

Intel's 18A: The Process Technology Breakthrough

The 18A node represents Intel's most critical manufacturing achievement in a decade. After years of delays on 10nm and 7nm nodes allowed TSMC to capture process leadership, Intel bet the company on an accelerated roadmap culminating in 18A.6

The "18A" designation reflects Intel's revised naming convention. The node delivers transistor density and performance roughly equivalent to TSMC's N2 process, expected in late 2026.7 Shipping first positions Intel to reclaim the manufacturing leadership it ceded around 2016.

18A Technical Specifications

| Parameter | 18A Specification | Competitor Comparison |
| --- | --- | --- |
| Transistor architecture | RibbonFET (GAA)67 | TSMC N2: GAA |
| Power delivery | PowerVia (backside)68 | TSMC: backside power in N2P (2026+) |
| Minimum metal pitch | ~18nm69 | TSMC N2: ~18nm |
| Estimated density | ~2.5x Intel 770 | Competitive with N2 |
| EUV layers | Multiple71 | Industry standard |

RibbonFET, Intel's implementation of gate-all-around (GAA) transistor architecture, replaces the FinFET design used since 22nm.8 The ribbon-shaped channels allow better electrostatic control, reducing leakage current and enabling continued voltage scaling.9

PowerVia delivers power from the backside of the chip, separating power delivery from signal routing on the front side.10 The approach reduces resistance in power delivery networks while freeing routing resources for signals, improving both performance and efficiency.11

Manufacturing Validation Stakes

Intel Foundry Services (IFS) signed external customers including Microsoft for custom silicon development.12 Those agreements depend on 18A delivering competitive performance, yields, and cost structures.

Panther Lake serves as the proving ground. If the chips ship on schedule with acceptable yields and competitive performance, IFS gains credibility with potential foundry customers currently dependent on TSMC and Samsung.13

Conversely, significant delays, yield problems, or performance shortfalls would reinforce doubts about Intel's manufacturing capabilities. The company burned credibility with 10nm delays that stretched from 2016 projections to 2019 limited production.14

Fab 52 production in Arizona demonstrates domestic manufacturing capability that appeals to customers concerned about geopolitical risks in Taiwan-based production.15 The CHIPS Act invested $8.5 billion in Intel's U.S. manufacturing expansion, making Panther Lake success a matter of national industrial policy.16

Panther Lake: Architecture Deep Dive

Panther Lake introduces Intel's Core Ultra 300 series, succeeding both Lunar Lake (mobile) and Arrow Lake (desktop/high-performance mobile).17 The architecture consolidates Intel's mobile lineup while demonstrating 18A manufacturing capabilities.

Panther Lake vs. Previous Generations

| Specification | Panther Lake | Lunar Lake | Arrow Lake |
| --- | --- | --- | --- |
| Process node | Intel 18A72 | TSMC 3nm73 | TSMC N3B (compute), TSMC N6 (I/O)74 |
| P-Core architecture | Cougar Cove75 | Lion Cove76 | Lion Cove77 |
| E-Core architecture | Darkmont78 | Skymont79 | Skymont80 |
| GPU architecture | Xe381 | Xe282 | Xe283 |
| NPU generation | 5th gen84 | 4th gen85 | 4th gen86 |
| Memory on package | No87 | Yes (LPDDR5X)88 | No89 |
| Max memory support | DDR5-7200, LPDDR5X-960090 | LPDDR5X-8533 (fixed)91 | DDR5-6400, LPDDR5X-853392 |

The shift back to discrete memory from Lunar Lake's memory-on-package approach responds to market feedback. While memory-on-package improved power efficiency and reduced motherboard complexity, fixed memory configurations limited upgrade paths and increased SKU complexity.18

Core Configuration and Performance

The flagship Core Ultra X9 388H demonstrates Panther Lake's performance potential:

| Component | Specification |
| --- | --- |
| P-Cores | 4x Cougar Cove @ 5.1 GHz boost93 |
| E-Cores | 8x Darkmont94 |
| LP-E Cores | 4x Darkmont (low-power efficient)95 |
| Total threads | 2496 |
| L3 cache | 36MB97 |
| GPU | 12x Xe3 cores @ 2.5-3.0 GHz98 |
| NPU | 180 platform TOPS (combined)99 |
| TDP range | 15W-45W configurable100 |

Intel claims 50% faster single-threaded CPU performance or 40% power reduction at equivalent performance versus Arrow Lake.19 Multi-threaded workloads see 50%+ improvement or 30% power reduction.20

Xe3 Graphics Architecture

The Xe3 integrated GPU is the third generation of Intel's Xe graphics line, following the original Xe (with Xe-LP integrated and Xe-HPG discrete variants) and Xe2.21 Key improvements include:

  • Advanced AV1 encoding and decoding acceleration62
  • Enhanced XMX (matrix extension) engines for AI inference63
  • Improved power efficiency through clock gating and voltage optimization64
  • DirectX 12 Ultimate feature parity65
  • Ray tracing acceleration improvements66

With 12 Xe3 cores at boost clocks reaching 3.0 GHz, Panther Lake's integrated graphics target entry-level discrete GPU performance.22 The improvement positions thin-and-light laptops for casual gaming and creative workloads without discrete graphics.

NVIDIA's Memory Allocation Crisis

While Intel celebrates manufacturing advances, NVIDIA confronts a supply chain constraint threatening consumer GPU availability. The company reportedly plans to cut GeForce RTX 50 series production by 30-40% in the first half of 2026.23

The Economics of Memory Allocation

The constraint stems from GDDR7 memory supply. Samsung and SK Hynix, the primary suppliers, face a straightforward allocation decision:

| Product | Memory per unit | Revenue per unit | Revenue per GB memory |
| --- | --- | --- | --- |
| RTX 5080 (gaming) | 16GB GDDR7101 | ~$1,000102 | ~$62.50/GB |
| H100 (data center) | 80GB HBM3103 | ~$25,000104 | ~$312.50/GB |
| Blackwell (data center) | 192GB HBM3e105 | ~$40,000+106 | ~$208/GB+ |

Data center GPUs generate 3-5x more revenue per gigabyte of memory consumed than gaming products.24 When memory production capacity constrains total output, rational allocation favors higher-margin products.

NVIDIA's data center revenue reached $51.2 billion in Q3 2025 versus $4.3 billion from gaming.25 The 12:1 revenue ratio reinforces allocation decisions that prioritize enterprise over consumer products.
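The revenue-per-gigabyte column is simple division, but it is worth verifying because it drives the entire allocation argument. A quick sketch using the estimated prices from the table above (market estimates, not official list prices):

```python
# Revenue per gigabyte of memory for each product, using the
# approximate per-unit prices and memory capacities cited above.
products = {
    "RTX 5080 (gaming)":       (16,  1_000),   # (GB of memory, ~price USD)
    "H100 (data center)":      (80,  25_000),
    "Blackwell (data center)": (192, 40_000),
}

for name, (mem_gb, price) in products.items():
    print(f"{name}: ${price / mem_gb:,.2f} per GB of memory")

# The H100 returns 5x the revenue per GB of the RTX 5080
# ($312.50 vs $62.50); Blackwell returns roughly 3.3x ($208.33).
```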

Production Cut Details

Reports indicate specific RTX 50 series SKUs face different constraint levels:

| SKU | Memory Config | Expected Impact |
| --- | --- | --- |
| RTX 5090 | 32GB GDDR7107 | Moderate constraint (flagship priority) |
| RTX 5080 | 16GB GDDR7108 | Lower constraint (high margin) |
| RTX 5070 Ti | 16GB GDDR7109 | Severe constraint (30-40% cut) |
| RTX 5060 Ti | 16GB GDDR7110 | Severe constraint (30-40% cut) |
| RTX 5070 | 12GB GDDR7111 | Moderate constraint |
| RTX 5060 | 8GB GDDR7112 | Lower constraint (less memory) |

The mid-range RTX 5070 Ti and RTX 5060 Ti, typically offering the best price-performance ratio, face the steepest cuts.26 NVIDIA may prioritize the RTX 5080, which commands higher margins, and lower-memory configurations that consume fewer constrained resources.27
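The prioritization pattern can be illustrated with a toy allocation model: rank SKUs by revenue per gigabyte of memory and fill demand from the top until the memory pool runs dry. All numbers below (prices, demand, and the supply figure) are invented for illustration; only the memory configurations come from the table above.

```python
# Toy margin-driven allocation: fill highest revenue-per-GB SKUs first
# from a fixed GDDR7 pool. Prices, demand, and supply are hypothetical.
skus = [
    # (name, GB per card, ~price USD, demand in units)
    ("RTX 5090",    32, 2_000, 10_000),
    ("RTX 5080",    16, 1_000, 40_000),
    ("RTX 5070 Ti", 16,   750, 60_000),
]
supply_gb = 1_200_000  # hypothetical GDDR7 pool

# Highest revenue per GB of memory gets allocated first.
for name, gb, price, demand in sorted(skus, key=lambda s: s[2] / s[1], reverse=True):
    built = min(demand, supply_gb // gb)
    supply_gb -= built * gb
    print(f"{name}: built {built:,} of {demand:,} units ({built / demand:.0%})")
```

Under these invented numbers the RTX 5090 and RTX 5080 ship in full while the RTX 5070 Ti builds only a quarter of its demand, mirroring the reported pattern of mid-range SKUs absorbing the cuts.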

Partner Supply Chain Disruption

Industry reports suggest NVIDIA may stop supplying VRAM alongside GPU chips to third-party graphics card manufacturers.28 AIB (add-in board) partners like ASUS, MSI, and Gigabyte would need to source memory independently.

Smaller partners lack the purchasing power to secure memory allocations in a constrained market. The policy change could consolidate the graphics card market around larger manufacturers with established memory supplier relationships.29

RTX 50 SUPER Uncertainty

The RTX 50 series SUPER refresh, which would typically arrive 12-18 months after initial launch, faces potential cancellation or indefinite delay.30 Memory constraints make mid-cycle refreshes economically unattractive when base products already face supply limitations.

Industry observers project the SUPER lineup, if produced at all, would not arrive before Q3 2026.31 The delay extends upgrade cycles for gamers waiting for value-optimized variants.

AMD's Measured Counterattack

AMD's Lisa Su takes the CES 2026 stage at 6:30 PM PT on January 5, following Intel's afternoon keynote.32 The company reveals Ryzen AI 400 "Gorgon Point" processors as a direct response to Panther Lake.

Gorgon Point Architecture

Gorgon Point represents a refined refresh rather than a new architecture:

| Component | Gorgon Point | Strix Point (current) | Change |
| --- | --- | --- | --- |
| CPU architecture | Zen 5113 | Zen 5114 | Optimization only |
| GPU architecture | RDNA 3.5115 | RDNA 3.5116 | Optimization only |
| NPU architecture | XDNA 2117 | XDNA 2118 | Enhanced |
| Max cores | 12C/24T119 | 12C/24T120 | Same |
| Max boost clock | 5.2+ GHz121 | 5.1 GHz122 | +100+ MHz |
| L3 cache | 36MB123 | 34MB124 | +2MB |
| Process node | TSMC 4nm125 | TSMC 4nm126 | Same |

The conservative approach reflects AMD's execution-focused strategy. Rather than introducing new architectures with potential issues, AMD refines proven designs while reserving RDNA 4 for discrete graphics and future mobile platforms.33

Expected Ryzen AI 400 SKUs

| SKU | Cores | Boost Clock | TDP | Target |
| --- | --- | --- | --- | --- |
| Ryzen AI 9 HX 475127 | 12C/24T | 5.2+ GHz | 45W+ | Premium |
| Ryzen AI 9 HX 470128 | 12C/24T | 5.1+ GHz | 35-45W | High-end |
| Ryzen AI 7 450129 | 8C/16T | TBD | 28-35W | Mainstream |
| Ryzen AI 5 430130 | 4C/8T | TBD | 15-28W | Entry |

The tiered lineup targets Microsoft's Copilot+ PC requirements, which demand minimum NPU performance for AI-assisted features.34

FSR 4: AI-Driven Upscaling

AMD's FidelityFX Super Resolution technology evolves with FSR 4, reportedly renamed simply "AMD FSR."35 The new version incorporates AI-driven upscaling to compete with NVIDIA's DLSS technology.

Previous FSR versions used spatial and temporal upscaling algorithms without dedicated AI hardware.36 FSR 4 leverages RDNA 3.5's compute capabilities and potentially XDNA NPU resources for machine learning-based image reconstruction.37

The shift acknowledges DLSS's quality advantages while working within AMD's broader hardware strategy that avoids dedicated tensor cores in consumer GPUs.

NPU Comparison: The AI PC Race

All three companies now include neural processing units (NPUs) in mobile processors, competing for AI workload performance:

| Platform | NPU | Peak TOPS | Platform TOPS |
| --- | --- | --- | --- |
| Intel Panther Lake | 5th gen NPU131 | 48 NPU TOPS132 | 180 (CPU+GPU+NPU)133 |
| AMD Gorgon Point | XDNA 2134 | 50 NPU TOPS135 | ~180 (estimated)136 |
| Qualcomm X2 Elite | Hexagon137 | 75 NPU TOPS138 | ~200 (estimated)139 |

Qualcomm's Snapdragon X2 Elite, also expected at CES 2026, leads in raw NPU performance.38 However, Qualcomm faces software compatibility challenges running x86 applications through emulation, limiting enterprise adoption.39

Microsoft's Copilot+ PC requirements establish 40 NPU TOPS as the minimum threshold for AI features.40 All three platforms exceed the requirement, shifting competition to software ecosystem and total platform capability.
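Checking the reported NPU figures against Microsoft's floor is trivial arithmetic, but it makes the competitive picture concrete (TOPS values are the pre-launch figures from the table above, not measured results):

```python
# Compare each platform's reported NPU TOPS against Microsoft's
# 40 NPU TOPS minimum for Copilot+ PC features.
COPILOT_PLUS_MIN_TOPS = 40

platforms = {
    "Intel Panther Lake (5th gen NPU)": 48,
    "AMD Gorgon Point (XDNA 2)":        50,
    "Qualcomm X2 Elite (Hexagon)":      75,
}

for name, tops in platforms.items():
    headroom = tops - COPILOT_PLUS_MIN_TOPS
    print(f"{name}: {tops} TOPS, {headroom} TOPS above the Copilot+ floor")
```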

Enterprise Implications

NPU performance matters increasingly for edge inference workloads. Local AI processing reduces latency, improves privacy, and eliminates cloud API costs for appropriate use cases.41

For data center operators, NPU-equipped laptops and workstations handle initial model development and testing before deployment to GPU clusters.42 The improved local capability reduces demand on expensive GPU resources for exploratory work.

OEM Announcements Expected

CES 2026 typically brings laptop announcements from major OEMs. Expected reveals include:

| OEM | Expected Announcements |
| --- | --- |
| Dell | XPS and Latitude lines with Panther Lake, Ryzen AI 400140 |
| HP | Spectre, Envy, EliteBook refreshes141 |
| Lenovo | ThinkPad, Yoga, Legion updates142 |
| ASUS | ROG, Zenbook, ProArt with new silicon143 |
| Acer | Swift, Predator, ConceptD144 |
| Microsoft | Surface Pro 11, Surface Laptop 7 updates possible145 |

System availability typically follows CES announcements by 4-8 weeks for consumer products and 8-12 weeks for enterprise systems.43

Data Center and Workstation Implications

While CES focuses on consumer products, the announcements ripple through enterprise infrastructure planning.

GPU Procurement Challenges

NVIDIA's memory constraints affect workstation and data center GPU supply alongside gaming products. The same memory allocation logic that cuts RTX 50 series production prioritizes H100 and Blackwell over workstation Quadro/RTX variants.44

Organizations planning GPU cluster expansions should expect:

  • Extended lead times (6+ months for large orders)45
  • Elevated pricing as demand exceeds supply46
  • Potential allocation limits from hyperscalers and hardware vendors47

Intel Foundry Diversification

Panther Lake's successful launch validates Intel's 18A process for potential foundry customers. Organizations concerned about TSMC concentration risk gain a credible alternative.48

Custom silicon development through Intel Foundry Services becomes more attractive with proven 18A capability. Microsoft's announced partnership suggests enterprise validation of IFS capabilities.49

Edge Inference Options

Improved integrated graphics and NPUs expand edge inference deployment options. Intel Xe3 and AMD RDNA 3.5 handle inference workloads that previously required discrete GPUs, reducing edge deployment costs.50

For organizations deploying inference at scale across retail locations, branch offices, or remote sites, the improved integrated capability offers significant cost savings.51

Organizations navigating GPU procurement and AI infrastructure deployment can consult Introl for supply chain guidance across 257 locations with 100,000 GPU deployment capability.

CES 2026 Keynote Schedule

| Time (PT) | Company | Speaker | Expected Focus |
| --- | --- | --- | --- |
| 1:00 PM | NVIDIA | Jensen Huang | Blackwell Ultra roadmap, Rubin preview, no new consumer GPUs146 |
| 3:00 PM | Intel | Jim Johnson | Panther Lake global launch, 18A manufacturing, Arc B-series possible147 |
| 6:30 PM | AMD | Lisa Su | Ryzen AI 400, FSR 4, Ryzen 7 9850X3D, RX 9070 possible148 |

NVIDIA's keynote will likely avoid consumer GPU announcements given supply constraints. Jensen Huang typically focuses on data center roadmaps, automotive partnerships, and AI platform developments at CES.52

Intel's afternoon slot positions Panther Lake as the main competitive response to NVIDIA's AI dominance narrative. The Arc B-series discrete GPUs may accompany the announcement, extending Xe3 architecture to discrete form factors.53

AMD's evening keynote provides the final word, allowing Lisa Su to respond to Intel's claims and position AMD's lineup competitively. The Ryzen 7 9850X3D desktop processor and potential RX 9070 discrete GPU could broaden the announcement beyond mobile.54

Market Impact Analysis

Consumer Impact

Gamers face a challenging market in 2026. RTX 50 series supply constraints elevate pricing while limiting availability of value-oriented SKUs.55 AMD's RX 9000 series and Intel's Arc B-series offer alternatives, but neither matches NVIDIA's performance leadership in high-end segments.56

The memory shortage may persist through 2026 if Samsung and SK Hynix maintain data center allocation priorities.57 Consumer GPU supply recovery depends on memory capacity expansion or demand reduction in AI training workloads.

Enterprise Impact

AI infrastructure buyers benefit from continued investment but face allocation challenges. NVIDIA prioritizes largest customers for Blackwell and H100 allocation, potentially squeezing mid-market enterprises.58

Alternative compute options expand with Intel's 18A validation and AMD's continued Instinct development. Diversifying AI infrastructure across vendors reduces single-supplier risk but increases operational complexity.59

Semiconductor Industry Impact

Intel's 18A success strengthens the case for domestic semiconductor manufacturing. CHIPS Act investments gain validation, potentially supporting additional funding for advanced node development.60

The memory shortage highlights supply chain vulnerabilities extending beyond logic chips. HBM and advanced GDDR production concentration in Samsung and SK Hynix creates bottlenecks that logic chip manufacturers cannot resolve independently.61

Key Takeaways

For infrastructure planners:

  • Budget for GPU procurement lead times of 6+ months throughout 2026
  • Intel 18A validation opens potential for foundry diversification strategies
  • Memory allocation priorities favor AI data center over consumer and workstation
  • Consider long-term purchase agreements with hyperscalers for guaranteed GPU access
  • Evaluate edge inference options using improved integrated graphics and NPUs

For operations teams:

  • Document current GPU inventory and develop contingency plans for constrained supply
  • Evaluate Intel Xe3 and AMD RDNA 3.5 integrated graphics for edge inference workloads
  • Track memory supplier announcements for capacity reallocation signals
  • Test workloads on NPU-equipped systems to identify local processing opportunities
  • Plan system refresh cycles accounting for extended GPU availability timelines

For strategic planning:

  • Budget for GPU price increases as supply constraints persist
  • Model scenarios where AI infrastructure demand permanently elevates GPU costs
  • Assess Intel Foundry Services as a TSMC alternative for custom silicon needs
  • Consider ARM-based alternatives (Qualcomm, Apple Silicon) for appropriate workloads
  • Monitor memory industry capacity expansion announcements for supply recovery signals

For procurement teams:

  • Establish relationships with multiple GPU and system vendors to maximize allocation access
  • Negotiate allocation guarantees in enterprise purchasing agreements
  • Consider refurbished or previous-generation GPUs for non-critical workloads
  • Track OEM announcement schedules to align procurement with system availability
  • Build inventory buffers for critical AI infrastructure components

References


  1. Intel Newsroom - Panther Lake Architecture Announcement 

  2. Intel Newsroom - Intel Foundry Services Validation 

  3. Intel Newsroom - Panther Lake Performance Claims 

  4. Windows Central - NVIDIA GPU Production Cut 2026 

  5. WCCFTech - AMD Ryzen AI 400 Gorgon CPUs Confirmed 

  6. AnandTech - Intel Process Technology History 

  7. SemiAnalysis - Intel 18A vs TSMC N2 Comparison 

  8. Intel - GAA Transistor Architecture 

  9. Nature - Gate-All-Around Transistor Physics 

  10. Intel - Backside Power Delivery Benefits 

  11. IEEE - Power Delivery Network Optimization 

  12. Reuters - Microsoft Intel Foundry Agreement 

  13. Bloomberg - Intel Foundry Customer Pipeline 

  14. Ars Technica - Intel 10nm Delays History 

  15. CHIPS Act - Domestic Manufacturing Requirements 

  16. Department of Commerce - Intel CHIPS Funding 

  17. AnandTech - Intel Mobile Lineup Consolidation 

  18. PC World - Memory on Package Trade-offs 

  19. Intel Newsroom - Single-Thread Performance 

  20. Intel Newsroom - Multi-Thread Performance 

  21. Intel - Xe Architecture Evolution 

  22. PC Gamer - Xe3 Performance Targets 

  23. Overclock3D - NVIDIA 30-40% Production Cut 

  24. The FPS Review - Memory Revenue Analysis 

  25. NVIDIA - Q3 2025 Earnings 

  26. Tweaktown - Mid-Range SKU Constraints 

  27. PC Gamer - NVIDIA Allocation Strategy 

  28. The FPS Review - VRAM Supply to Partners 

  29. TechRadar - AIB Partner Impact 

  30. TechRadar - RTX 50 SUPER Delay 

  31. WebProNews - SUPER Timeline 

  32. Engadget - CES 2026 Keynote Schedule 

  33. Digital Trends - AMD Conservative Strategy 

  34. Microsoft - Copilot+ PC Requirements 

  35. Microcenter - FSR 4 AI-Driven Upscaling 

  36. AMD - FSR Technology Overview 

  37. PC Gamer - FSR 4 Machine Learning 

  38. TrendForce - CES 2026 Preview 

  39. Ars Technica - Qualcomm x86 Emulation Challenges 

  40. Microsoft - Copilot+ 40 TOPS Requirement 

  41. MIT Technology Review - Edge AI Benefits 

  42. Forbes - Local AI Development Workflows 

  43. Tom's Hardware - CES to Retail Timeline 

  44. SemiAnalysis - Workstation GPU Allocation 

  45. Goldman Sachs - GPU Lead Times 

  46. Morgan Stanley - GPU Pricing Outlook 

  47. Bloomberg - Hyperscaler GPU Allocation 

  48. CHIPS Act - Supply Chain Diversification Goals 

  49. Reuters - Microsoft Intel Partnership 

  50. Intel - Edge Inference Use Cases 

  51. Forbes - Edge AI Deployment Economics 

  52. NVIDIA - Jensen Huang CES History 

  53. VideoCardz - Intel Arc B-Series Speculation 

  54. Digital Trends - AMD CES 2026 Full Lineup 

  55. PC Gamer - Consumer GPU Market 2026 

  56. Tom's Hardware - GPU Performance Rankings 

  57. WebProNews - Memory Shortage Duration 

  58. The Information - NVIDIA Customer Prioritization 

  59. Gartner - AI Infrastructure Diversification 

  60. CHIPS Act - Funding Validation 

  61. SemiAnalysis - Memory Supply Chain Analysis 

  62. WCCFTech - Xe3 AV1 Encoding 

  63. Intel - XMX Engine Improvements 

  64. AnandTech - Xe3 Power Efficiency 

  65. Intel - DirectX 12 Ultimate Support 

  66. Intel - Ray Tracing Improvements 

  67. Intel - RibbonFET Technology Overview 

  68. Intel - PowerVia Backside Power Delivery 

  69. IEEE Spectrum - Advanced Node Metal Pitch 

  70. Intel - 18A Density Improvements 

  71. ASML - EUV in Advanced Nodes 

  72. Intel Newsroom - Panther Lake Process Node 

  73. Tom's Hardware - Lunar Lake TSMC 3nm 

  74. AnandTech - Arrow Lake Process Mix 

  75. WCCFTech - Cougar Cove P-Core Architecture 

  76. Intel - Lion Cove P-Core 

  77. Intel - Arrow Lake Core Architecture 

  78. WCCFTech - Darkmont E-Core 

  79. Intel - Skymont E-Core 

  80. AnandTech - Arrow Lake E-Core Design 

  81. VideoCardz - Intel Xe3 Architecture 

  82. Intel - Xe2 Graphics Architecture 

  83. AnandTech - Arrow Lake Xe2 Graphics 

  84. Intel Newsroom - 5th Gen NPU 

  85. Intel - Lunar Lake 4th Gen NPU 

  86. Intel - Arrow Lake NPU Specifications 

  87. WCCFTech - Panther Lake Memory Subsystem 

  88. Intel - Lunar Lake Memory on Package 

  89. AnandTech - Arrow Lake Memory Support 

  90. WCCFTech - Panther Lake DDR5-7200 Support 

  91. Intel - Lunar Lake Fixed Memory Configuration 

  92. Intel - Arrow Lake Memory Specifications 

  93. Tweaktown - Core Ultra X9 388H Specifications 

  94. VideoCardz - Panther Lake E-Core Count 

  95. WCCFTech - LP-E Core Design 

  96. Intel Newsroom - Thread Count 

  97. Tweaktown - L3 Cache Size 

  98. VideoCardz - Xe3 Core Count and Clocks 

  99. Intel Newsroom - 180 Platform TOPS 

  100. WCCFTech - TDP Range 

  101. NVIDIA - RTX 5080 Specifications 

  102. Tom's Hardware - RTX 5080 Pricing Estimates 

  103. NVIDIA - H100 Specifications 

  104. SemiAnalysis - H100 Market Pricing 

  105. NVIDIA - Blackwell Specifications 

  106. Reuters - Blackwell Pricing Estimates 

  107. VideoCardz - RTX 5090 Specifications 

  108. NVIDIA - RTX 5080 Specifications 

  109. Benchlife via Tweaktown - RTX 5070 Ti Cuts 

  110. Tweaktown - RTX 5060 Ti Production 

  111. VideoCardz - RTX 5070 Memory Configuration 

  112. VideoCardz - RTX 5060 Memory Configuration 

  113. Tweaktown - Gorgon Point Zen 5 

  114. AMD - Strix Point Architecture 

  115. Hardware Times - RDNA 3.5 Not RDNA 4 

  116. AMD - Strix Point Graphics 

  117. WCCFTech - XDNA 2 NPU Upgrade 

  118. AMD - Strix Point NPU 

  119. Hardware Times - 12 Cores 24 Threads 

  120. AMD - Strix Point Core Count 

  121. PC Games Hardware - 5.2 GHz+ Clocks 

  122. AMD - Strix Point Boost Clocks 

  123. Hardware Times - 36MB L3 Cache 

  124. AMD - Strix Point Cache 

  125. AMD - Gorgon Point Process 

  126. TSMC - 4nm Process for AMD 

  127. VideoCardz - Ryzen AI 9 HX 475 

  128. VideoCardz - Ryzen AI 9 HX 470 

  129. WCCFTech - Ryzen AI 7 450 Expected 

  130. VideoCardz - Ryzen AI 5 430 

  131. Intel Newsroom - 5th Gen NPU 

  132. Intel - NPU TOPS Specification 

  133. Intel Newsroom - Platform TOPS 

  134. AMD - XDNA 2 Architecture 

  135. AMD - NPU TOPS Specification 

  136. Tom's Hardware - AMD Platform TOPS Estimate 

  137. Qualcomm - Hexagon NPU 

  138. Qualcomm - X2 Elite NPU TOPS 

  139. AnandTech - Qualcomm Platform TOPS 

  140. Dell - CES 2026 Announcements Expected 

  141. HP - CES History and Expectations 

  142. Lenovo - CES Announcement Patterns 

  143. ASUS - ROG and Zenbook Refresh Expected 

  144. Acer - CES 2026 Expectations 

  145. The Verge - Surface Refresh Possibilities 

  146. PC Gamer - CES 2026 Preview NVIDIA 

  147. VideoCardz - Intel CES 2026 Plans 

  148. Yahoo Finance - CES 2026 Announcements 
