
The Death of Data Center Geography: Why Traditional Markets Won't Survive the AI Era

Updated December 11, 2025

December 2025 Update: US data center power demand is projected to grow from 33 GW (2024) to 120 GW by 2030—a near-quadrupling in six years. Northern Virginia and Phoenix face terminal power and water constraints. Dominion Energy acknowledges that grid capacity cannot match demand. New transmission requires 7-10 years to permit. Power availability now determines site selection over traditional factors.

Northern Virginia hosts more data center capacity than any market on Earth. Companies spent decades building there because fiber density, customer proximity, and regulatory familiarity made it the obvious choice. Phoenix rose on similar logic: favorable tax treatment, available land, and enough grid connectivity to matter.

Both markets are positioned to lose the next decade.

The AI infrastructure buildout requires power at a scale that existing data center geography cannot provide. US data center power demand will grow from 33 GW in 2024 to 120 GW or more by 2030—a near-quadrupling in six years.¹ No grid was planned for growth like this. Traditional markets face hard physical constraints that no amount of investment can overcome on the necessary timeline. Organizations continuing to build in Northern Virginia and Phoenix are making strategic errors that will take years to unwind.

The winning markets of 2030 will be determined by power availability, not by where data centers exist today. Nuclear capacity, renewable generation at scale, and grid headroom will matter more than fiber routes and customer proximity. Geography is about to undergo its most dramatic redistribution since the industry's founding.

Why traditional markets face terminal constraints

Northern Virginia built its dominance on a specific set of advantages: proximity to federal customers, density of fiber interconnections, and an ecosystem of skilled labor and supporting services. These advantages created a flywheel where each new facility made the market more attractive for the next one.

Power demand broke the flywheel.

Dominion Energy, the primary utility serving Northern Virginia, has publicly acknowledged that grid capacity cannot keep pace with data center demand.² New transmission infrastructure requires 7-10 years to permit and build. Substations require 3-5 years. Demand is arriving at least twice as fast as the infrastructure that would serve it can be built. Companies can secure land and construction permits in Northern Virginia faster than they can secure power.

Phoenix faces parallel constraints with additional complications. The Maricopa County grid was built to serve residential and commercial loads with predictable daily patterns. Data centers demand constant baseload power at densities that residential infrastructure never anticipated.

Water availability compounds the problem in ways that liquid cooling doesn't fully solve. Traditional data center cooling consumes 1.8-4.0 liters of water per kWh of IT load.³⁵ A 100 MW facility using evaporative cooling consumes 300-500 million gallons annually—equivalent to 3,500 average households. Arizona's groundwater crisis has forced Maricopa County to restrict new housing developments in areas without assured 100-year water supply.³⁶
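These figures follow from straightforward arithmetic. A minimal sketch, using the low end of the 1.8-4.0 L/kWh range cited above and assuming a flat 100 MW IT load year-round; the household comparison figure of roughly 110,000 gallons per year is a typical US value assumed here, not taken from the cited reports:

```python
# Back-of-envelope water use for a 100 MW evaporatively cooled facility.
# 1.8 L/kWh is the low end of the range cited above; the flat-load and
# household-usage (~110,000 gal/yr) assumptions are illustrative.

LITERS_PER_GALLON = 3.785

def annual_water_gallons(it_load_mw: float, liters_per_kwh: float) -> float:
    kwh_per_year = it_load_mw * 1000 * 8760  # MW -> kW, hours per year
    return kwh_per_year * liters_per_kwh / LITERS_PER_GALLON

gallons = annual_water_gallons(100, 1.8)
print(f"{gallons / 1e6:.0f} million gallons/year")   # ~417M at the low end
print(f"~{gallons / 110_000:,.0f} households")
```

At the low end of the consumption range the result lands inside the 300-500 million gallon figure above; the 4.0 L/kWh end would roughly double it.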

Data centers face increasing scrutiny. Phoenix has approved facilities consuming 765 million gallons of groundwater annually—about 5% of the city's residential water use, from just a handful of data centers.³⁷ Operators must now compete for water rights alongside residential developers, agriculture, and manufacturing. The political environment grows less favorable as water becomes the region's defining constraint.

Liquid cooling reduces but doesn't eliminate water consumption. Direct-to-chip systems still require heat rejection, often through cooling towers that evaporate water. Closed-loop dry cooling systems that eliminate water consumption require more energy and work less efficiently in Phoenix's summer heat. The tradeoff exists regardless of cooling approach. New facilities face longer approval timelines and higher costs for water rights that didn't exist five years ago.

The markets that dominated the 2010s optimized for the constraints of the 2010s. Fiber connectivity mattered when data needed to travel short distances to reach users. Real estate costs mattered when facilities ran at 5-10 kW per rack. Labor markets mattered when operations required large local teams.

AI infrastructure inverts these priorities. Data travels at the speed of light; an extra few hundred miles of fiber adds single-digit milliseconds of latency that most workloads cannot detect. Real estate costs become rounding errors when a single rack draws 100+ kW of power. Operations increasingly centralize into remote monitoring, reducing the importance of local labor markets.
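The latency claim is easy to check. Light in optical fiber travels at roughly two-thirds the vacuum speed of light, about 200 km per millisecond. A quick sketch, where the 500-mile distance is illustrative rather than any specific route:

```python
# One-way latency added by extra fiber distance.
# Light in fiber travels at ~2/3 c (refractive index ~1.47),
# i.e. roughly 200,000 km/s. Textbook values, not route measurements.

FIBER_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond

def added_latency_ms(extra_km: float, round_trip: bool = False) -> float:
    """Latency added by extra_km of fiber path."""
    one_way = extra_km / FIBER_KM_PER_MS
    return 2 * one_way if round_trip else one_way

# 500 extra miles is ~805 km: ~4 ms one-way, ~8 ms round trip
print(added_latency_ms(805))
print(added_latency_ms(805, round_trip=True))
```

Even the round-trip penalty stays under 10 ms, well below the perception threshold of most inference workloads and irrelevant to training.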

Power availability becomes the only constraint that matters, and traditional markets have less of it than they need.

The physics forcing geographic redistribution

The shift from traditional computing to AI fundamentally changes the relationship between data centers and electrical grids.

A 2020-era data center running enterprise workloads drew perhaps 20-30 MW at full capacity. Utility interconnection at that scale, while not trivial, fit within the planning horizons and capacity reserves of most major markets. A utility could accommodate a new 30 MW load with relatively minor grid investments.

A 2025-era AI training cluster requires 100-300 MW for a single facility.⁴ The numbers get larger. Microsoft's planned Wisconsin campus will draw 1 GW.⁵ The Stargate project anticipates facilities requiring 1-5 GW each.⁶ Individual buildings will consume more power than small cities.

No existing grid can absorb these loads without massive upstream investment. The transformers, transmission lines, and generation capacity required to serve gigawatt-scale facilities simply don't exist in most markets. Building them takes longer than AI companies are willing to wait.

The physics of power transmission constrains solutions. Electricity experiences losses proportional to distance and inversely proportional to voltage. High-voltage transmission reduces losses but requires expensive infrastructure. Practically, large power consumers must locate near generation sources or accept the costs and complexity of dedicated transmission.

AI data centers are relocating to power sources rather than expecting power sources to reach them. The geographic implications are profound.

Where the power exists

The markets that will dominate AI infrastructure through 2030 share a common characteristic: abundant generation capacity that existing customers don't fully utilize.

Quebec offers hydroelectric power at rates among the lowest in North America—roughly $0.05/kWh for large industrial consumers compared to $0.10+ in Virginia.⁷ The province's massive hydro infrastructure generates more electricity than Quebec consumes, creating available capacity for export or new large loads. The cold climate reduces cooling costs. The political environment welcomes data center investment.

The hyperscalers have noticed. Google announced a $735 million expansion in Beauharnois in 2024.²⁴ Microsoft committed $1.3 billion across multiple Quebec investments.²⁵ Amazon continues expanding its Montreal region. Hydro-Québec reports 3,000+ MW of available capacity specifically for data center development—enough to power facilities that would take Virginia a decade to interconnect.²⁶ Quebec will capture significant AI infrastructure share that would otherwise have flowed to US markets.

The US Southeast combines existing nuclear generation with a regulatory environment favorable to new nuclear development. Georgia's Vogtle Units 3 and 4 represent the first new nuclear construction in the US in decades.⁸ Tennessee Valley Authority operates seven nuclear reactors with 9,000 MW of available capacity for economic development.²⁷ Duke Energy's service territory includes substantial nuclear generation. Georgia Power offers 20-year fixed rates for large industrial customers—the kind of long-term pricing certainty that AI infrastructure investors require.²⁸

Capital is flowing accordingly. Meta expanded its Georgia campus with investments exceeding $800 million.²⁹ Google committed $1 billion to Tennessee expansion.³⁰ QTS, Digital Realty, and Equinix all expanded their Atlanta market presence. The Southeast can offer baseload power that intermittent renewable markets cannot.

Nordic countries provide the optimal combination for liquid-cooled AI infrastructure: renewable power at scale (primarily hydro and wind), naturally cold ambient temperatures that reduce cooling energy consumption, stable regulatory environments, and strong connectivity to European markets.⁹

The track record speaks clearly. Meta built its first non-US data center in Luleå, Sweden, specifically for the power and cooling advantages.³¹ Google expanded its Hamina, Finland facility beyond 1 GW capacity.³² Microsoft committed multi-billion dollar investments across the Nordic region.³³ The average Power Usage Effectiveness in Nordic facilities runs 1.15 compared to 1.4+ globally—a 20% efficiency advantage that compounds annually.³⁴ The region operates on 100% renewable power as standard, not as premium option. Norway, Sweden, and Finland will capture European AI infrastructure investment that might otherwise locate in traditional markets like Frankfurt, London, or Amsterdam.

Iceland represents an extreme case with geothermal power providing carbon-free baseload electricity at costs competitive with any market globally.¹⁰ The isolation creates latency challenges for real-time applications but works well for training workloads where latency doesn't matter. Iceland will grow from a niche curiosity to a meaningful AI infrastructure market.

These markets share the characteristic of having solved the power problem before the AI demand wave arrived. They had excess generation capacity for historical reasons unrelated to data centers. That historical accident becomes strategic advantage.

Why air cooling is already dead for AI infrastructure

The thermal management requirements of AI hardware make traditional air cooling obsolete, and this obsolescence accelerates the geographic redistribution.

NVIDIA's Blackwell GPUs dissipate roughly 1,200 watts per chip under full load.¹¹ A fully populated GB200 NVL72 rack—72 Blackwell GPUs plus their Grace CPUs, networking, and power conversion—draws over 100 kW. Training clusters push toward 150 kW per rack. Air cannot remove heat at these densities efficiently enough to maintain chip operating temperatures.

The physics are straightforward. Air has low thermal conductivity and low heat capacity compared to liquids. Removing 150 kW of heat with air requires massive airflow volumes that create their own energy costs and noise problems. The approach doesn't scale.
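The gap can be quantified with the basic heat-removal relation Q = m_dot * c_p * dT. A sketch using standard property values for air and water; the 10 °C coolant temperature rise is an illustrative design point, not any vendor's specification:

```python
# Volumetric coolant flow needed to carry away Q watts at a given
# temperature rise: Q = m_dot * c_p * dT, then divide by density.
# Property values are standard; the 10 C rise is illustrative.

def vol_flow_l_per_s(q_watts: float, dt_c: float,
                     cp_j_per_kg_k: float, density_kg_m3: float) -> float:
    mass_flow = q_watts / (cp_j_per_kg_k * dt_c)  # kg/s
    return mass_flow / density_kg_m3 * 1000       # m^3/s -> L/s

air = vol_flow_l_per_s(150_000, 10, 1005, 1.2)    # ~12,400 L/s of air
water = vol_flow_l_per_s(150_000, 10, 4186, 998)  # ~3.6 L/s of water
print(f"air needs ~{air / water:,.0f}x the volumetric flow of water")
```

A roughly 3,500-fold difference in volumetric flow for the same 150 kW rack is why the argument above is physics, not preference.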

Direct-to-chip liquid cooling, where cold plates attached to processors circulate water or specialized coolant, handles rack densities up to approximately 80-100 kW.¹² The technology works with existing raised-floor data center designs and requires less radical infrastructure changes than full immersion.

Beyond 100 kW per rack, immersion cooling becomes necessary. Servers are submerged entirely in dielectric fluid that absorbs heat directly from all components.¹³ Single-phase immersion keeps the fluid liquid throughout; two-phase immersion allows the fluid to boil at component surfaces, dramatically increasing heat transfer efficiency.

The current installed base reflects the past, not the future. The global average rack density remains only 12 kW.¹⁴ Less than 10% of data centers operate any immersion cooling.¹⁵ These statistics describe facilities built for workloads that no longer represent the growth vector.

New AI-focused construction defaults to liquid cooling infrastructure. The question isn't whether to deploy liquid cooling but which approach and at what density. Existing air-cooled facilities face retrofit decisions—expensive conversions that may not be economically justified given the facility's remaining useful life.

The cooling transition reinforces the geographic redistribution. Facilities designed for air cooling in temperate markets need more aggressive liquid cooling than facilities in cold climates. The energy required for cooling scales with the temperature difference between the heat rejection point and ambient conditions. Northern locations that seemed unnecessary when air cooling dominated become advantageous when liquid cooling must reject heat to ambient conditions.

The nuclear timeline: aggressive but achievable

Skeptics of the geographic redistribution argue that small modular reactors remain unproven, timelines will slip, and traditional markets will adapt before nuclear power reshapes the landscape.

The skepticism underweights how seriously technology companies have committed to nuclear.

Google's agreement with Kairos Power targets 500 MW of operational capacity by 2030.¹⁶ Microsoft signed a 20-year power purchase agreement to restart Three Mile Island Unit 1.¹⁷ Amazon acquired a nuclear-powered data center campus from Talen Energy.¹⁸ Oracle has discussed nuclear-powered facilities. Collectively, technology companies have announced plans to finance more than 20 GW of nuclear capacity.¹⁹

The first-generation SMR designs rely on proven light-water reactor technology, reducing regulatory and technical risk. NuScale's 77 MW modules use the same physics as existing reactors at smaller scale.²⁰ GE Hitachi's BWRX-300 similarly builds on established technology.²¹ These aren't experimental fusion concepts or untested designs—they're engineering exercises in modularizing proven approaches.

Romania will deploy NuScale's six-module VOYGR plant by 2029, becoming the first European nation with operational SMR capacity.²² China's Linglong One SMR is scheduled for completion in 2026.²³ The technology works. The question concerns deployment speed and cost, not technical feasibility.

Even if SMR timelines slip by 2-3 years—a reasonable assumption given nuclear project history—the facilities will still arrive before traditional markets can build comparable grid infrastructure. The competition isn't between SMRs and existing grids. The competition is between SMRs and transmission line permitting processes that take a decade.

The stranded asset question

Northern Virginia hosts over 3 GW of operational data center capacity. Phoenix hosts another 1+ GW. The redistribution thesis raises an uncomfortable question: what happens to these facilities when the growth shifts elsewhere?

The optimistic scenario: Existing facilities retain value for non-AI workloads. Enterprise IT, content delivery, cloud services—these workloads don't require 100 kW racks or gigawatt campuses. Traditional data centers continue serving traditional customers. The bifurcation doesn't destroy existing value; it redirects growth.

The pessimistic scenario: AI workloads generate the highest margins and drive the most demand growth. Facilities that cannot serve AI workloads become commodity infrastructure competing on price in a market with expanding supply. Utilization rates drop as hyperscalers shift new AI deployments elsewhere. Operators face the choice between expensive retrofits and gradual obsolescence.

The retrofit economics: Converting an air-cooled facility to liquid cooling costs $15-25 per watt of IT capacity—equivalently, $15-25 million per MW.³⁸ A 100 MW facility faces $1.5-2.5 billion in retrofit costs. Even then, the facility may not access sufficient grid power for expanded AI loads. The power constraint persists regardless of cooling investment. The retrofit math often doesn't close.
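The unit conversion is worth making explicit, since dollars per watt and millions of dollars per megawatt are the same number. A minimal check:

```python
# Retrofit cost: $/W of IT capacity scales to $M/MW directly (1 MW = 1e6 W),
# so $15-25/W on a 100 MW facility is $1.5-2.5B. Figures from the range
# cited above; the 100 MW facility size is the article's example.

def retrofit_cost_usd(it_capacity_mw: float, usd_per_watt: float) -> float:
    return it_capacity_mw * 1e6 * usd_per_watt

low = retrofit_cost_usd(100, 15)   # $1.5B
high = retrofit_cost_usd(100, 25)  # $2.5B
print(f"${low / 1e9:.1f}B - ${high / 1e9:.1f}B")
```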

The realistic outcome: Northern Virginia and Phoenix don't become ghost towns. They become what they were before the AI wave: major data center markets serving enterprise IT, government, and latency-sensitive applications. The growth goes elsewhere. The existing facilities operate but don't expand meaningfully. Over 10-15 years, relative importance declines as AI-optimized markets absorb the majority of new capacity.

The federal government represents perhaps 10-15% of Northern Virginia data center demand—ensuring baseline utilization regardless of AI trends.³⁹ Commercial hyperscalers drive the rest. The hyperscalers will build where power exists. They've already announced facilities in Ohio, Wisconsin, and other non-traditional markets. The ecosystem follows the investment, not the other way around.

The counterargument: why traditional markets might adapt

Intellectual honesty requires engaging with the strongest arguments against geographic redistribution.

Fiber density and interconnection still matter for some workloads. Financial services requiring microsecond latency to specific counterparties need proximity. Content delivery benefits from edge presence near users. Real-time gaming demands low latency. These workloads will keep traditional markets relevant for specific use cases.

The counterargument weakens when examining what drives the growth. AI training and inference represent the majority of new capacity demand. Training workloads are entirely latency-insensitive—it doesn't matter if the training cluster is 50 or 500 miles from researchers. Inference latency matters more, but the threshold is tens of milliseconds, not microseconds. Most inference applications function fine with geographic diversity.

Grid investments could theoretically solve traditional market constraints. Utilities could accelerate transmission projects. Regulators could expedite permitting. Distributed generation including on-site natural gas could bridge gaps.

The counterargument underweights timeline mismatches. Transmission projects that take 7-10 years cannot serve facilities that companies want operational in 2-3 years. Regulatory acceleration faces genuine obstacles: permitting processes exist to address legitimate concerns about environmental impact, property rights, and community input. Even aggressive acceleration gains years, not the half-decade required to match demand timelines.

Labor markets and ecosystems in traditional hubs provide advantages that new markets lack. Northern Virginia has thousands of experienced data center operators. Phoenix has established construction supply chains. Relocating to Quebec or the Nordics means building these capabilities from scratch.

The counterargument assumes operations remain labor-intensive. Remote monitoring, automation, and centralized expertise reduce the importance of on-site personnel. A 2030 data center requires fewer local operators than a 2015 data center at the same scale. The ecosystem advantages erode as the ecosystem matters less.

Who wins, who loses, and when

The redistribution will not happen instantly, but it will happen faster than the industry expects.

Winners by 2030:

- Quebec and Ontario (nuclear + hydro combination)
- US Southeast (existing nuclear, favorable new development environment)
- Nordic countries (renewable power, cold climate, stable governance)
- Iceland (geothermal baseload, extreme cooling advantage)
- Regions with stranded nuclear capacity (anywhere with underutilized nuclear plants)

Losers by 2030:

- Northern Virginia (grid constrained, no path to sufficient power within timeline)
- Phoenix (power constrained, water constrained, cooling disadvantaged)
- Traditional European hubs (Frankfurt, London, Amsterdam) without nuclear access
- Markets dependent on natural gas baseload facing emissions pressure

Timeline markers:

- 2025-2026: Major hyperscalers announce facilities in non-traditional markets at scale
- 2027-2028: First SMRs reach commercial operation serving data centers
- 2029-2030: Geographic redistribution visible in capacity statistics; traditional market share declines

The companies building significant AI infrastructure in traditional markets today face a difficult choice in 3-5 years: continue operating facilities that cannot access sufficient power for next-generation hardware, or write off investments and relocate. The earlier organizations recognize this trajectory, the more options they retain.

What organizations should do now

The strategic implications for organizations building AI infrastructure are concrete.

For hyperscalers and large enterprises: Begin securing power access in emerging markets before competition intensifies. The land is available now; the power agreements are available now; waiting means competing with everyone else who waited.

For colocation providers: Evaluate portfolio exposure to constrained markets. Facilities in Northern Virginia retain value for non-AI workloads but face ceiling constraints for the highest-value future use case. Geographic diversification reduces concentration risk.

For enterprises planning AI deployments: Factor geographic constraints into cloud and colocation provider selection. A provider with capacity only in traditional markets will struggle to offer competitive AI infrastructure within 3-5 years.

Power determines everything else. Organizations that secure power access in the right markets position themselves for the AI infrastructure era. Organizations that assume traditional markets will adapt are betting against physics and timelines.

Key takeaways

For geographic strategy:

- US data center power demand: 33 GW (2024) → 120 GW+ by 2030; traditional markets face terminal constraints
- Winners by 2030: Quebec (hydro + capacity), US Southeast (nuclear), Nordics (renewable + cold), Iceland (geothermal)
- Losers: Northern Virginia (grid-constrained), Phoenix (power + water constrained), Frankfurt/London/Amsterdam (no nuclear)

For power planning:

- Quebec: ~$0.05/kWh vs $0.10+ Virginia; 3,000+ MW available capacity specifically for data centers
- Nuclear commitments: Google/Kairos 500 MW by 2030; Microsoft/Three Mile Island 20-year PPA; Amazon acquired Talen campus
- Grid timeline mismatch: transmission needs 7-10 years; substations 3-5 years; AI demand timeline 2-3 years

For cooling strategy:

- Blackwell GPUs: 1,200 W per chip; a GB200 NVL72 rack exceeds 100 kW; training clusters push 150 kW/rack
- Air cooling obsolete for AI; direct-to-chip handles 80-100 kW; immersion required beyond 100 kW
- Nordic PUE averages 1.15 vs 1.4+ global—20% compounding efficiency advantage on 100% renewable

For existing facilities:

- Retrofit costs: $15-25M per MW for air-to-liquid conversion; 100 MW facility faces $1.5-2.5B
- Stranded asset risk: facilities may retain value for non-AI workloads but face ceiling constraints for highest-margin use
- Federal government ensures ~10-15% Northern Virginia baseline utilization regardless of AI trends

For strategic planning:

- SMR timeline: NuScale Romania 2029, China Linglong One 2026; even with 2-3 year slips, arrives before grid expansion
- Phoenix water: 765M gallons annually approved for data centers (5% residential use); increasingly competitive environment
- Timeline markers: 2025-26 hyperscaler announcements in non-traditional markets; 2027-28 first SMRs; 2029-30 visible redistribution


Teams at Introl deploy AI infrastructure across 257 global locations, including emerging markets where power availability creates strategic advantage. Geographic planning for AI infrastructure requires understanding where the power will be, not where the data centers are today.

References

  1. Deloitte. "2025 Data Center Industry Outlook." Deloitte Research, 2025.

  2. Dominion Energy. "Data Center Load Growth and Grid Planning." Regulatory Filing, Virginia State Corporation Commission, 2024.

  3. Arizona Department of Water Resources. "Groundwater Management and Industrial Use in Maricopa County." ADWR Report, 2025.

  4. McKinsey & Company. "AI Power: Expanding Data Center Capacity to Meet Growing Demand." McKinsey TMT Practice, 2025.

  5. Microsoft. "Microsoft Wisconsin Data Center Campus Announcement." Microsoft News, 2024.

  6. Stargate Project. "AI Infrastructure Investment Announcement." Press Release, January 2025.

  7. Hydro-Québec. "Large Industrial Electricity Rates." Hydro-Québec Commercial, 2025.

  8. Georgia Power. "Vogtle Units 3 and 4: America's First New Nuclear Units in Decades." Georgia Power News, 2024.

  9. JLL. "2025 Global Data Center Outlook: Nordic Markets." JLL Research, 2025.

  10. Landsvirkjun. "Data Center Power Pricing and Availability." Landsvirkjun Commercial, 2025.

  11. NVIDIA. "Blackwell Architecture Technical Brief." NVIDIA Corporation, 2024.

  12. JLL. "2025 Global Data Center Outlook."

  13. Data Center Dynamics. "Immersion Cooling: Technology Overview and Market Status." DCD Research, 2025.

  14. JLL. "2025 Global Data Center Outlook."

  15. ———. "2025 Global Data Center Outlook."

  16. IEEE Spectrum. "Big Tech Embraces Nuclear Power to Fuel AI and Data Centers." IEEE Spectrum, 2025.

  17. Constellation Energy. "Crane Clean Energy Center Power Purchase Agreement with Microsoft." Constellation News, 2024.

  18. Amazon. "Amazon Acquires Nuclear-Powered Data Center Campus." Amazon News, 2024.

  19. IEA. "Energy Supply for AI – Energy and AI – Analysis." International Energy Agency, 2025.

  20. NuScale Power. "VOYGR SMR Technology Overview." NuScale Technical Documentation, 2025.

  21. GE Hitachi Nuclear. "BWRX-300 Small Modular Reactor." GE Hitachi Technical, 2025.

  22. Nuclear Business Platform. "Top 6 Ways Leading Nations Are Using Nuclear Energy to Power AI and Data Centers." Nuclear Business Platform, 2025.

  23. ———. "Top 6 Ways Leading Nations."

  24. Google. "Google Announces Beauharnois Data Center Expansion." Google Cloud Blog, 2024.

  25. Microsoft. "Microsoft Quebec Data Center Investments." Microsoft News, 2024.

  26. Hydro-Québec. "Data Center Power Availability Report." Hydro-Québec Commercial, 2025.

  27. Tennessee Valley Authority. "Economic Development Power Availability." TVA Commercial, 2025.

  28. Georgia Power. "Large Industrial Customer Rate Programs." Georgia Power Commercial, 2024.

  29. Meta. "Meta Georgia Data Center Expansion." Meta Newsroom, 2024.

  30. Google. "Google Tennessee Data Center Investment." Google Blog, 2024.

  31. Meta. "Luleå Data Center: Meta's First Facility Outside the United States." Meta Engineering, 2013.

  32. Google. "Hamina Data Center Expansion to 1GW." Google Cloud Blog, 2024.

  33. Microsoft. "Microsoft Nordic Data Center Investments." Microsoft News, 2024.

  34. Uptime Institute. "Nordic Data Center Efficiency Benchmarks." Uptime Institute Research, 2024.

  35. U.S. Department of Energy. "Water Usage in Data Center Cooling Systems." DOE Office of Energy Efficiency, 2024.

  36. Arizona Department of Water Resources. "Maricopa County Groundwater Restrictions." ADWR, 2023.

  37. Arizona Republic. "Phoenix Data Center Water Consumption Report." Arizona Republic, 2024.

  38. JLL. "Data Center Retrofit Cost Analysis." JLL Research, 2025.

  39. Northern Virginia Technology Council. "Federal Data Center Demand Analysis." NVTC, 2024.


