South Korea's HBM4 Moment: How Samsung and SK Hynix Became the Gatekeepers of AI
Two companies in South Korea control the future of artificial intelligence infrastructure. Samsung Electronics and SK Hynix produce roughly 90% of the world's High Bandwidth Memory (HBM), the specialized DRAM that every frontier AI model depends on.[1] With HBM4 mass production launching in February 2026, server DRAM prices surging 60-70%, and a 900,000 wafer-per-month commitment to OpenAI's Stargate project, memory has transformed from commodity component into strategic national asset.[2][3][4]
The implications extend far beyond quarterly earnings. Organizations planning GPU deployments now face memory-constrained timelines stretching into 2027. NVIDIA's next-generation Rubin architecture depends entirely on HBM4 availability. And South Korea has positioned itself as an irreplaceable chokepoint in global AI infrastructure.[5]
The February Revolution: HBM4 Arrives
February 2026 marks the most significant architectural overhaul in HBM's history. Samsung and SK Hynix will both begin mass production of HBM4 that month, ending months of speculation about which company would reach volume first.[6]
Technical Specifications
The JEDEC JESD270-4 standard, published in April 2025, defines HBM4's capabilities.[7] The architecture doubles the interface width from 1024 to 2048 bits, enabling bandwidth that previous generations could not approach.[8]
| Specification | HBM3E | HBM4 | Improvement |
|---|---|---|---|
| Interface Width | 1024 bits | 2048 bits | 2x |
| Channels | 16 | 32 | 2x |
| Bandwidth (per stack) | ~1.2 TB/s | 2+ TB/s | ~1.7x |
| Max Stack Height | 12-high | 16-high | +33% |
| Max Capacity | 36 GB | 64 GB | +78% |
The expansion from 16 to 32 independent channels allows greater flexibility and parallelism in memory access.[9] For AI workloads that depend on massive parallel data access, this architectural change may matter more than the raw bandwidth numbers suggest.
Power efficiency improvements accompany the performance gains. JESD270-4 supports vendor-specific voltage levels for VDDQ (0.7 V, 0.75 V, 0.8 V, or 0.9 V) and VDDC (1.0 V or 1.05 V), enabling lower power consumption.[10] Micron claims its HBM4 delivers over 20% better power efficiency than its HBM3E products.[11]
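The headline bandwidth figures follow directly from the interface arithmetic: pins multiplied by per-pin transfer rate, divided by eight bits per byte. A minimal sketch (the per-pin speeds are assumptions drawn from commonly cited bins — JESD270-4 fixes the 2048-bit interface, while shipping speeds vary by vendor):

```python
def hbm_stack_bandwidth_tbs(interface_bits: int, pin_rate_gtps: float) -> float:
    """Theoretical per-stack bandwidth in TB/s:
    pins * transfers per second / 8 bits per byte, scaled from GB/s to TB/s."""
    return interface_bits * pin_rate_gtps / 8 / 1000

# HBM3E: 1024-bit interface at a ~9.6 GT/s top bin -> ~1.2 TB/s per stack
hbm3e = hbm_stack_bandwidth_tbs(1024, 9.6)   # ~1.23 TB/s
# HBM4: 2048-bit interface at the 8 GT/s JEDEC baseline -> ~2 TB/s per stack
hbm4 = hbm_stack_bandwidth_tbs(2048, 8.0)    # ~2.05 TB/s
```

The doubled interface width is why HBM4 clears 2 TB/s per stack even at a modest per-pin rate, consistent with the table above.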
Samsung's Approach
Samsung adopted a turnkey approach for HBM4, handling chip design, manufacturing, and advanced packaging in-house.[12] The company reportedly uses a 10nm-class fabrication process for the base die of its HBM4 chips, which Samsung claims delivers better performance than competitors' parts.[13]
Production begins at Samsung's Pyeongtaek campus in February 2026.[14] The company plans to increase HBM production capacity by 50% during 2026, targeting approximately 250,000 wafers per month by year-end.[15]
Samsung's HBM4 chips reportedly achieved 11.7 Gbps in internal evaluations, an industry-leading figure if validated in production.[16] Most of the output will go into NVIDIA's Vera Rubin AI accelerator systems, launching in H2 2026, with additional supply going to Google's seventh-generation Tensor Processing Units.[17]
SK Hynix's Strategy
SK Hynix takes a different approach, outsourcing the logic base die to TSMC through its "One-Team" alliance.[18] This collaboration keeps HBM4 synchronized with TSMC's manufacturing processes for NVIDIA GPUs, potentially reducing integration complexity.
Production launches at SK Hynix's M16 plant in Icheon and the M15X fab in Cheongju.[19] The company uses a 12nm fabrication process for the base die and adopted the Advanced MR-MUF (Mass Reflow Molded Underfill) process to minimize production risk.[20]
SK Hynix maintains market leadership with an estimated 53-60% share of HBM production.[21] The company completed HBM4 development first, showcasing working samples in April 2025, and delivered paid samples to NVIDIA ahead of Q1 2026 contract finalization.[22][23]
The Stargate Commitment: 40% of Global DRAM Output
OpenAI's Stargate infrastructure initiative secured unprecedented memory commitments from both Korean manufacturers. The agreements, announced in October 2025, lock in supply through 2029 at volumes that will reshape global memory markets.[24]
Scale of Commitment
Samsung and SK Hynix confirmed that OpenAI's anticipated demand could grow to 900,000 DRAM wafers monthly.[25] That volume represents approximately 40% of total global DRAM output and more than double current global HBM production capacity.[26][27]
| Metric | Current | Stargate Target | Multiple |
|---|---|---|---|
| Monthly wafers (vs. current HBM capacity) | ~400,000 | 900,000 | 2.25x |
| Share of global DRAM output | n/a | ~40% | n/a |
| Timeline | 2025 | 2026-2029 | n/a |
Analysts estimate the commitment amounts to more than 100 trillion won ($72 billion) of incremental demand for Korean chipmakers over four years.[28]
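The 40% figure implies a global baseline that can be back-computed as a sanity check. In this sketch the wafer counts are the reported figures; the implied global output is a derived estimate, not a published number:

```python
stargate_wafers = 900_000        # monthly wafers committed (reported)
share_of_global = 0.40           # ~40% of global DRAM output (reported)
current_hbm_capacity = 400_000   # ~current monthly HBM-class wafer capacity (reported)

# Back out the implied global DRAM wafer output: 900,000 / 0.40 = 2,250,000/month
implied_global_output = stargate_wafers / share_of_global

# The commitment relative to today's HBM capacity: 900,000 / 400,000 = 2.25x
multiple_of_hbm = stargate_wafers / current_hbm_capacity
```

The 2.25x multiple is why the commitment is described as "more than double" current HBM production capacity.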
Data Center Construction
The partnership extends beyond chip supply to physical infrastructure. OpenAI plans to construct Stargate data centers in South Korea, with Samsung and SK Hynix participating in development.[29]
These facilities will serve dual purposes: demonstrating Korean memory technology in production AI workloads while establishing local infrastructure for OpenAI's global expansion. The Korean government views the data centers as strategic assets that reinforce the country's position in AI value chains.[30]
The 60-70% Price Surge
Samsung and SK Hynix are raising server DRAM prices by 60-70% for Q1 2026 compared to Q4 2025.[31] The increases reflect a fundamental shift in market power that began with HBM demand and now affects all server memory categories.
Pricing Dynamics
Both companies have rejected long-term agreements of two to three years, instead requiring quarterly contracts that allow stepwise price increases.[32] Major customers including Microsoft and Google face the same terms; even the largest buyers cannot negotiate volume discounts deep enough to offset the seller's market.[33]
| Period | Price Movement | Contract Terms |
|---|---|---|
| Q4 2025 | Baseline | LTAs still available |
| Q1 2026 | +60-70% | Quarterly only |
| H1 2026 | Additional increases expected | Quarterly only |
| 2027 | Relief possible | Depends on new fabs |
Combined with the roughly 50% increases already imposed through 2025, cumulative hikes could nearly double server DRAM costs by mid-2026.[34] Industry consensus holds that tight supply will persist through 2026, and Micron CEO Sanjay Mehrotra has stated the shortage could extend beyond that year.[35]
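Quarterly hikes compound rather than add. A rough model of the trajectory from a Q4 2025 baseline (the Q1 2026 midpoint comes from the reporting above; the size of the H1 2026 follow-on increase is an illustrative assumption, since it has not been announced):

```python
def compound(baseline: float, increases: list[float]) -> float:
    """Apply a sequence of fractional price increases to a baseline price."""
    price = baseline
    for pct in increases:
        price *= 1 + pct
    return price

# From a Q4 2025 baseline of 1.0: +65% (Q1 2026 midpoint of the 60-70% range),
# then an assumed +15% follow-on hike in Q2 2026 -> ~1.9x the baseline,
# i.e. nearly double by mid-2026.
mid_2026 = compound(1.0, [0.65, 0.15])
```

Even a modest second-quarter increase on top of the announced hike is enough to approach a doubling, which is why buyers treat the quarterly contract structure as the real risk.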
Downstream Effects
The server DRAM shortage cascades into consumer markets. DDR5 retail prices surged to two to three times prior levels within weeks of the Stargate announcement, with some modules rising more than 150% in under a month.[36]
Enterprise buyers face delivery timelines stretching to 10-14 months, with certain orders pushed into late 2026.[37] Organizations planning AI infrastructure deployments must now factor memory availability into project timelines with the same rigor applied to GPU allocation.
Supply Allocation Strategy
Both manufacturers prioritize HBM production over conventional DRAM, squeezing output of standard server memory.[38] This choice maximizes revenue per wafer but exacerbates shortages for customers who need traditional memory products.
SK Hynix's CFO previously stated the company has "already sold out our entire 2026 HBM supply."[39] Micron confirmed similar constraints, with its HBM capacity for 2025 and 2026 fully booked.[40] New capacity will not meaningfully improve availability until 2027.
NVIDIA Rubin: The Demand Driver
NVIDIA's Rubin architecture, entering production in Q1 2026, demonstrates why HBM4 has become strategically critical.[41] The platform's specifications require memory performance that only HBM4 can deliver.
Rubin Specifications
The first Rubin GPU features 288 GB of 6.4 GT/s HBM4 memory arranged in eight stacks, providing approximately 13 TB/s of aggregate bandwidth (reported bandwidth figures for the platform vary; the comparison table below lists a higher number from a separate disclosure).[42] This represents nearly triple the memory bandwidth of Blackwell, the current-generation architecture.[43]
| Component | Rubin (2026) | Rubin Ultra (2027) |
|---|---|---|
| HBM Type | HBM4 | HBM4E |
| Memory Capacity | 288 GB | 1 TB |
| Memory Bandwidth | ~22 TB/s | ~32 TB/s |
| FP4 Inference | 50 PFLOPS | 100 PFLOPS |
| Process Node | 3nm | 3nm |
NVIDIA quotes up to 50 PFLOPS of NVFP4 inference and 35 PFLOPS of training performance.[44] Rubin Ultra, targeted for 2027, will double performance by moving from two compute chiplets to four, with memory capacity expanding to 1 TB of HBM4E delivering approximately 32 TB/s of bandwidth.[45]
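The aggregate bandwidth figure quoted for Rubin decomposes into per-stack arithmetic using the launch-spec numbers above. A sketch (the per-stack capacity is derived from the stated totals, not separately published):

```python
def aggregate_bandwidth_tbs(stacks: int, interface_bits: int, pin_rate_gtps: float) -> float:
    """Aggregate HBM bandwidth in TB/s across all stacks on a package:
    stacks * pins * transfers per second / 8 bits per byte / 1000."""
    return stacks * interface_bits * pin_rate_gtps / 8 / 1000

# Rubin launch specs: 8 stacks of 2048-bit HBM4 at 6.4 GT/s -> ~13.1 TB/s
rubin_bw = aggregate_bandwidth_tbs(8, 2048, 6.4)

# Capacity check: 288 GB across 8 stacks is 36 GB per stack,
# matching a 12-high stack at current die densities
per_stack_gb = 288 / 8
```

The same arithmetic explains why HBM4E matters for Rubin Ultra: reaching ~32 TB/s requires some combination of more stacks and faster pins than the HBM4 launch configuration provides.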
Supply Chain Dependencies
Rubin's production depends on TSMC's 3nm process for compute dies and on Korean HBM4 for memory.[46] Both represent potential bottlenecks. NVIDIA has secured HBM4 samples from all major DRAM manufacturers, but volume production allocation remains competitive.[47]
The accelerated February 2026 HBM4 production timeline directly responds to NVIDIA's Rubin requirements.[48] Both Samsung and SK Hynix adjusted their schedules to align with NVIDIA's platform launch, a measure of the customer's leverage in partnership negotiations.
The 600 Trillion Won Yongin Cluster
SK Hynix announced plans to increase investment in its Yongin semiconductor cluster from 128 trillion won ($85.5 billion) to 600 trillion won ($410 billion).[49] The expansion would make Yongin the world's largest HBM production hub.
Cluster Specifications
| Metric | Original Plan | Revised Plan |
|---|---|---|
| Investment | 128 trillion won | 600 trillion won |
| Dollar Value | $85.5 billion | $410 billion |
| First Fab | 2027 | 2027 |
| Focus Products | HBM3E | HBM4, HBM4E, 321-layer NAND |
| Area | 7.28 million m² | 7.28 million m² |
The first fab at Yongin will focus on next-generation HBM products, including HBM4 and HBM4E, alongside 321-layer NAND, the world's highest-density NAND flash.[50] Operations begin in 2027.
Samsung operates complementary facilities at Pyeongtaek, where a new P5 fab will come online in 2028.[51] Samsung's $14.3 billion Research & Development Center in Yongin, operational since mid-2025, focuses exclusively on next-generation AI semiconductors.[52]
National Investment Framework
The Korean government accelerated construction of the Yongin Semiconductor National Industrial Complex to begin in December 2026.[53] When complete, the complex is expected to attract up to 360 trillion won ($246.4 billion) in private investment from major industry players.[54]
President Lee Jae Myung announced a blueprint to invest more than 700 trillion won ($534 billion) in the semiconductor sector through 2047.[55] Near-term allocations include:
| Initiative | Investment | Timeline |
|---|---|---|
| AI-specific semiconductors | 1.27 trillion won | By 2030 |
| Next-generation memory | 215.9 billion won | By 2032 |
| Advanced packaging | 360.6 billion won | By 2031 |
| Compound semiconductors | 260.1 billion won | By 2031 |
The government's 150 trillion won ($110 billion) National Growth Fund targets SK Hynix's Yongin cluster as its first investment destination.[56] This public-private coordination represents one of the largest industrial policy commitments in Korean history.
Strategic Implications
South Korea's dominance in HBM production creates advantages and vulnerabilities that will shape global AI infrastructure development.
National Security Dimensions
Memory has become a strategic asset comparable to energy resources or critical minerals.[57] Control over HBM production provides leverage in trade negotiations and geopolitical disputes, and the Korean government increasingly treats semiconductor manufacturing as a national security priority.
The geographic concentration of production creates risk. Samsung and SK Hynix facilities in Korea represent single points of failure for global AI infrastructure.[58] Natural disasters, military conflict, or even industrial accidents could disrupt supply chains worldwide.
Both companies maintain operations in other countries, but advanced HBM production remains concentrated in Korea. Replicating that capability elsewhere would require years of investment and technology transfer that neither governments nor the companies have prioritized.
For AI Infrastructure Operators
Organizations planning large-scale AI deployments face new constraints:[59]
- **Memory-first planning:** GPU allocation depends on HBM availability. Secure memory commitments before finalizing compute purchases.
- **Extended timelines:** Plan for 12-18 month lead times on server DRAM and even longer for HBM-equipped systems.
- **Budget adjustments:** Factor 60-100% memory cost increases into infrastructure projections.
- **Supplier relationships:** Direct relationships with memory manufacturers may prove as valuable as GPU allocations.
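Under constraints like these, the planning logic reduces to a critical-path calculation: deployment is gated by whichever component arrives last. A hypothetical sketch (the component names and lead-time values are illustrative, loosely drawn from the ranges reported above):

```python
from datetime import date, timedelta

def earliest_deployment(order_date: date,
                        lead_times_months: dict[str, int]) -> tuple[str, date]:
    """Return the gating component and the earliest deployment date,
    assuming all orders are placed on order_date and one month ~ 30 days."""
    gate = max(lead_times_months, key=lead_times_months.get)
    return gate, order_date + timedelta(days=30 * lead_times_months[gate])

# Illustrative lead times for an order placed in February 2026
gate, ready = earliest_deployment(date(2026, 2, 1), {
    "gpus": 9,           # assumed GPU allocation window
    "server_dram": 14,   # upper end of the reported 10-14 month range
    "hbm_systems": 18,   # HBM-equipped systems: longer still
})
```

With these assumptions the memory-bearing systems, not the GPUs, set the schedule, which is the practical meaning of "memory-first planning."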
For the AI Industry
The HBM bottleneck challenges assumptions about AI scaling. When memory constrains deployment, compute-density improvements driven by architectural advances (like those demonstrated by DeepSeek) may prove more practical than raw GPU accumulation.[60]
Inference optimization gains strategic importance. Models that require less memory bandwidth per token can serve more users on constrained hardware. Architecture decisions that seemed like engineering details become competitive advantages.
For Korea
Success creates its own challenges. The concentration of AI infrastructure value in Korean memory manufacturing invites regulatory scrutiny, potential trade restrictions, and competitive responses from other nations seeking to build domestic capacity.[61]
China's memory manufacturers continue development despite export controls, and the United States has increased investment in domestic semiconductor manufacturing. Neither represents a near-term threat to Korean HBM dominance, but both signal long-term strategic responses to concentration risk.[62]
What Comes Next
The HBM shortage will ease, eventually. New fabrication plants planned between 2026 and 2030 will begin reaching scale, and memory manufacturers will expand capacity in response to sustained demand.[63] But relief remains years away.
2026 Outlook
HBM4 production ramps through 2026, but demand from Rubin deployments will absorb most output. Server DRAM prices will remain elevated through the year, with quarterly increases likely to continue until new capacity comes online.
The February production launch marks the beginning of the HBM4 era, not the solution to supply constraints. Expect allocation battles between hyperscalers, with smaller buyers facing extended delivery windows.
2027 and Beyond
SK Hynix's Yongin cluster begins operations in 2027, potentially adding meaningful HBM4 capacity. Samsung's P5 fab reaches production in 2028. These additions should improve availability, though NVIDIA's Rubin Ultra and subsequent architectures will drive continued demand growth.
HBM4E stacks reaching 64 GB will enable new AI system configurations, but they will also require additional production capacity that may not exist at launch.[64] The cycle of demand outpacing supply shows no signs of breaking.
The Longer View
South Korea's semiconductor industry has achieved a position that few countries can challenge. The combination of manufacturing capability, R&D investment, and government support creates durable competitive advantages that will persist for the foreseeable future.
For AI infrastructure operators, this reality demands strategic adaptation. Memory is no longer a commodity to be purchased on demand—it is a constrained resource requiring long-term planning and supplier relationships. Those who recognize this shift early will secure positions in the next generation of AI systems. Those who don't will wait.
References
1. TrendForce. "OpenAI's Stargate 900k DRAM Wafers Could Hit 40% of Global Output — Led by Samsung & SK hynix." October 2025. https://www.trendforce.com/news/2025/10/02/news-openais-stargate-900k-dram-wafers-could-hit-40-of-global-output-led-by-samsung-sk-hynix/
2. Sammy Fans. "February 2026 is when Samsung HBM4 arrives, stapled with a competitor." December 2025. https://www.sammyfans.com/2025/12/25/samsung-sk-hynix-hbm4-mass-production-february-2026/
3. KED Global. "Samsung, SK Hynix seek up to 70% server DRAM price hikes as AI boom tightens supply." January 2026. https://www.kedglobal.com/korean-chipmakers/newsView/ked202601050006
4. OpenAI. "Samsung and SK join OpenAI's Stargate initiative to advance global AI infrastructure." October 2025. https://openai.com/index/samsung-and-sk-join-stargate/
5. Financial Content. "The HBM Scramble: Samsung and SK Hynix Pivot to Bespoke Silicon for the 2026 AI Supercycle." January 2026. https://markets.financialcontent.com/wral/article/tokenring-2026-1-2-the-hbm-scramble-samsung-and-sk-hynix-pivot-to-bespoke-silicon-for-the-2026-ai-supercycle
6. DigiTimes. "Samsung, SK Hynix reportedly accelerate HBM4 production to early 2026." December 2025. https://www.digitimes.com/news/a20251226PD223/samsung-sk-hynix-production-hbm4-2026.html
7. JEDEC. "JEDEC and Industry Leaders Collaborate to Release JESD270-4 HBM4 Standard: Advancing Bandwidth, Efficiency, and Capacity for AI and HPC." April 2025. https://www.jedec.org/news/pressreleases/jedec%C2%AE-and-industry-leaders-collaborate-release-jesd270-4-hbm4-standard-advancing
8. Tom's Hardware. "JEDEC finalizes HBM4 memory standard with major bandwidth and efficiency upgrades." April 2025. https://www.tomshardware.com/pc-components/ram/jedec-finalizes-hbm4-memory-standard-with-major-bandwidth-and-efficiency-upgrades
9. Ibid.
10. JEDEC, op. cit.
11. Micron Technology. "Micron Ships HBM4 to Key Customers to Power Next-Gen AI Platforms." 2025. https://investors.micron.com/news-releases/news-release-details/micron-ships-hbm4-key-customers-power-next-gen-ai-platforms
12. SamMobile. "Samsung could start HBM4 AI memory chip mass production in February 2026." December 2025. https://www.sammobile.com/news/samsung-start-mass-production-hbm4-memory-chips-february/
13. Ibid.
14. Sammy Fans, op. cit.
15. TrendForce. "Samsung Reportedly Plans 50% HBM Capacity Surge in 2026, Spotlight on HBM4." December 2025. https://www.trendforce.com/news/2025/12/30/news-samsung-reportedly-plans-50-hbm-capacity-surge-in-2026-spotlight-on-hbm4
16. Financial Content, op. cit.
17. SamMobile, op. cit.
18. TrendForce. "SK hynix 2026 Outlook: HBM3E Dominates, HBM4 Dual Strategy Amid 3 Market Headwinds." January 2026. https://www.trendforce.com/news/2026/01/05/news-sk-hynix-2026-outlook-hbm3e-remains-mainstream-hbm4-dual-strategy-amid-triple-market-headwinds/
19. Sammy Fans, op. cit.
20. TrendForce (January 2026), op. cit.
21. Ibid.
22. SK Hynix Newsroom. "SK hynix Completes World-First HBM4 Development." April 2025. https://news.skhynix.com/sk-hynix-completes-worlds-first-hbm4-development-and-readies-mass-production/
23. TrendForce. "SK hynix, Samsung Reportedly Deliver Paid HBM4 Samples to NVIDIA Ahead of 1Q26 Contract Finalization." December 2025. https://www.trendforce.com/news/2025/12/16/news-sk-hynix-samsung-reportedly-deliver-paid-hbm4-samples-to-nvidia-ahead-of-1q26-contract-finalization/
24. OpenAI, op. cit.
25. Tom's Hardware. "OpenAI's Stargate project to consume up to 40% of global DRAM output — inks deal with Samsung and SK hynix to the tune of up to 900,000 wafers per month." October 2025. https://www.tomshardware.com/pc-components/dram/openais-stargate-project-to-consume-up-to-40-percent-of-global-dram-output-inks-deal-with-samsung-and-sk-hynix-to-the-tune-of-up-to-900-000-wafers-per-month
26. TrendForce (October 2025), op. cit.
27. Astute Group. "Samsung and SK Hynix to Supply 900,000 DRAM Wafers Monthly for OpenAI's $500 Billion Stargate Project." October 2025. https://www.astutegroup.com/news/general/samsung-and-sk-hynix-to-supply-900000-dram-wafers-monthly-for-openais-500-billion-stargate-project/
28. CoinCodex. "AI Workloads Are Set to Consume Over 40% of Global RAM Output by 2029." 2025. https://coincodex.com/article/78966/openai-memory-supply-deals-reshape-global-dram-market/
29. Data Center Dynamics. "OpenAI plans Stargate data center in South Korea; Samsung Electronics, SK Hynix to supply memory chips." October 2025. https://www.datacenterdynamics.com/en/news/openai-plans-stargate-data-center-in-south-korea-samsung-electronics-sk-hynix-to-supply-memory-chips/
30. Ibid.
31. KED Global, op. cit.
32. DigiTimes. "Samsung, SK Hynix reportedly reject long-term DRAM contracts and raise prices by up to 70%." January 2026. https://www.digitimes.com/news/a20260106PD224/samsung-sk-hynix-dram-price-increase.html
33. TrendForce. "Samsung, SK Reportedly Hike Server DRAM Prices 60-70% – Google, Microsoft in the Queue." January 2026. https://www.trendforce.com/news/2026/01/06/news-samsung-sk-reportedly-hike-server-dram-prices-60-70-google-microsoft-in-the-queue/
34. The Register. "AI chip frenzy to wallop DRAM prices with 70% hike." January 2026. https://www.theregister.com/2026/01/06/memory_firm_profits_up_as/
35. BuySellRam. "Samsung and SK Hynix Signal Up to 70% Server DRAM Price Increases for 2026." January 2026. https://www.buysellram.com/blog/samsung-and-sk-hynix-signal-up-to-70-server-dram-price-increases-for-2026/
36. CoinCodex, op. cit.
37. Ibid.
38. Financial Content (December 2025). "AI-Driven DRAM Shortage Intensifies as SK Hynix and Samsung Pivot to HBM4 Production." https://markets.financialcontent.com/wral/article/tokenring-2025-12-26-ai-driven-dram-shortage-intensifies-as-sk-hynix-and-samsung-pivot-to-hbm4-production
39. Medium. "Memory Supercycle: How AI's HBM Hunger Is Squeezing DRAM." December 2025.
40. Ibid.
41. TweakTown. "NVIDIA's next-gen Rubin GPUs enter production, gets HBM4 samples from all major DRAM makers." December 2025. https://www.tweaktown.com/news/108770/nvidias-next-gen-rubin-gpus-enter-production-gets-hbm4-samples-from-all-major-dram-makers/index.html
42. NVIDIA Developer Blog. "Inside the NVIDIA Rubin Platform: Six New Chips, One AI Supercomputer." 2025. https://developer.nvidia.com/blog/inside-the-nvidia-rubin-platform-six-new-chips-one-ai-supercomputer/
43. Tom's Hardware. "Nvidia's Vera Rubin platform in depth — Inside Nvidia's most complex AI and HPC platform to date." 2025. https://www.tomshardware.com/pc-components/gpus/nvidias-vera-rubin-platform-in-depth-inside-nvidias-most-complex-ai-and-hpc-platform-to-date
44. WCCFTech. "NVIDIA Rubin Is The Most Advanced AI Platform On The Planet: Up To 50 PFLOPs With HBM4, Vera CPU With 88 Olympus Cores, And Delivers 5x Uplift Vs Blackwell." 2025. https://wccftech.com/nvidia-rubin-most-advanced-ai-platform-50-pflops-vera-cpu-5x-uplift-vs-blackwell/
45. VideoCardz. "NVIDIA Vera Rubin NVL72 Detailed: 72 GPUs, 36 CPUs, 260 TB/s Scale-Up Bandwidth." 2025. https://videocardz.com/newz/nvidia-vera-rubin-nvl72-detailed-72-gpus-36-cpus-260-tb-s-scale-up-bandwidth
46. Next Platform. "Nvidia's Vera-Rubin Platform Obsoletes Current AI Iron Six Months Ahead Of Launch." January 2026. https://www.nextplatform.com/2026/01/05/nvidias-vera-rubin-platform-obsoletes-current-ai-iron-six-months-ahead-of-launch/
47. TweakTown (December 2025), op. cit.
48. DigiTimes (December 2025), op. cit.
49. TweakTown. "SK hynix expands investment plan for new semiconductor cluster in South Korea to $410 billion." November 2025. https://www.tweaktown.com/news/108938/sk-hynix-expands-investment-plan-for-new-semiconductor-cluster-in-south-korea-to-dollars410-billion/index.html
50. Ibid.
51. Data Center Dynamics. "Samsung and SK Hynix to scale up memory production capacity in 2026 to meet AI demand." January 2026. https://www.datacenterdynamics.com/en/news/samsung-and-sk-hynix-to-scale-up-memory-production-capacity-in-2026-to-meet-ai-demand/
52. Korea Tech Today. "Korea Inc. Comes Home: How Samsung, Hyundai and SK Are Reshaping the Domestic Tech Economy." 2025. https://koreatechtoday.com/korea-inc-comes-home-how-samsung-hyundai-and-sk-are-reshaping-the-domestic-tech-economy/
53. SiliconANGLE. "South Korea accelerates construction of Yongin semiconductor hub to start in 2026." December 2024. https://siliconangle.com/2024/12/26/south-korea-accelerates-construction-yongin-semiconductor-hub-start-2026/
54. Ibid.
55. TrendForce. "South Korea to Pour KRW 700 Trillion in Future Chip Industry." December 2025. https://www.trendforce.com/news/2025/12/16/news-south-korea-to-pour-krw-700-trillion-in-future-chip-industry/
56. Korea Tech Desk. "Korea's National Growth Fund Targets SK Hynix's Yongin Cluster: Redefining Industrial Investment with $11B Deep-Tech Mandate." 2025. https://koreatechdesk.com/korea-national-growth-fund-sk-hynix-semiconductor-cluster-yongin
57. Financial Content (January 2026), op. cit.
58. Analysis based on geographic concentration.
59. Strategic analysis based on market conditions.
60. DeepSeek. "DeepSeek-V3 Technical Report." arXiv:2412.19437. December 2024.
61. Korea Herald. "S. Korea eyes AI processors, southern expansion in semiconductor blueprint." 2025. https://www.koreaherald.com/article/10633799
62. Korea Times. "'Do-or-die' chip investment in AI era." December 2025. https://www.koreatimes.co.kr/opinion/editorial/20251212/ed-do-or-die-chip-investment-in-ai-era
63. CoinCodex, op. cit.
64. Tom's Hardware. "HBM undergoes major architectural shakeup as TSMC and GUC detail HBM4, HBM4E and C-HBM4E — 3nm base dies to enable 2.5x performance boost with speeds of up to 12.8GT/s by 2027." 2025. https://www.tomshardware.com/pc-components/dram/hbm-undergoes-major-architectural-shakeup-as-tsmc-and-guc-detail-hbm4-hbm4e-and-c-hbm4e-3nm-base-dies-to-enable-2-5x-performance-boost-with-speeds-of-up-to-12-8gt-s-by-2027