Power demand from data centers threatens to scuttle utility decarbonization goals, push grid infrastructure to the brink, and drive up electricity costs for everyday customers already struggling to pay their bills.
But a new report identifies a strategy utility planners can use to avoid these problems while still providing data centers with the massive amounts of power they require. They simply need to convince data centers to use less electricity from time to time — and they need to do so early in the utility planning process, when it’s still a win-win for both developers and utilities.
The report, based on research conducted by analysis firms GridLab and Telos Energy, used NV Energy, Nevada’s biggest utility, as a case study. According to its numbers, NV Energy could save hundreds of millions of dollars and defer hundreds of megawatts of “new firm capacity needs” — i.e., fossil-gas-fired power plants — if the proposed new data centers in its territory agree to be flexible.
But all these benefits are predicated on that flexibility being “factored into resource planning early on rather than being an afterthought,” Priya Sreedharan, a senior program director at GridLab, said during a webinar last week. Without that vital early work, utilities will lock in multibillion-dollar investments to manage the grid peaks that they assume inflexible data centers will cause.
And once those plans are in motion, the chief incentive for data-center developers to commit to flexible energy use — a faster grid interconnection — will evaporate.
Grid planners and utilities face an unprecedented wave of power demand as tech giants race to build data centers to support their artificial-intelligence ambitions. In many cases, plans for new data centers — the largest of which can use as much power as a small city — are spurring the construction of new fossil-fueled power plants, putting decarbonization further out of reach and raising costs for consumers.
The GridLab–Telos Energy report adds to a growing body of work identifying flexibility as a way for data centers to connect to the grid quickly without causing utility costs and emissions to skyrocket.
To become flexible, data centers will need to invest in gas-fired generators, batteries, solar panels, or other resources to supply their own power needs during times of peak demand. Or they’ll need to take on the technically complex task of ramping down power-hungry computing processes when the grid is under the greatest stress.
Data centers won’t do that just to save money on their electric bills, said Derek Stenclik, founding partner at Telos Energy. But they might do it to get connected to the grid sooner — to shorten what’s known in data-center parlance as their “time to power.”
In some parts of the country, data centers are struggling to get the grid connections they need even though they’re willing to pay extremely high power prices to secure them. That’s because building the power plants and grid infrastructure to meet their demands can take years.
“If you go to a prospective data center and say, ‘Hey, with our queue, it’s going to take five years for us to bring on new resources to build the transmission to get to you and you can wait five years, or we can interconnect you in two years if you’re willing to curtail 10 to 12 hours a year,’ the answer there will be much, much different than if you’re asking them after they’ve been designed,” Stenclik said.
GridLab and Telos Energy chose NV Energy as a test case for a few reasons.
First, the utility has a ton of new data centers trying to connect to its grid — enough to add 2 gigawatts of peak load by 2030 — and keeping up with that demand will be expensive. Former NV Energy CEO Doug Cannon told the Nevada Appeal in February that the utility may need “billions of dollars of investment” to “double, triple, even quadruple the size of the total electric grid” in the northern Nevada region where most of the new data centers are being built.
Second, GridLab and Telos were ready to model the impact of flexible data centers in the region because they served as experts for groups intervening in the utility’s 2024 integrated resource plan. Utilities, regulators, and other stakeholders use these plans to figure out what mix of generation resources is required to meet future grid needs.
NV Energy’s latest plan calls for converting a coal-fired power plant in northern Nevada to run on fossil gas, rather than building solar and batteries at the site, as it had previously proposed — a decision opponents are formally challenging on the grounds that it will increase customer costs. Like many U.S. utilities, NV Energy faces backlash over rising rates, compounded in its case by an overcharging scandal that coincided with Cannon’s resignation in May.
Similar load-growth pressures driven by the AI data-center boom are pushing utilities across the country to plan far more new gas-fired power plants, at great cost not only to the climate but also to customers, who will pay higher bills to cover the cost of building and fueling them. Data centers are already pushing up electricity rates in some parts of the country.
Flexible data centers could make a big dent in these costs by allowing utilities to rely more on solar and batteries, which are less costly and faster to build than gas plants. GridLab and Telos Energy’s fact sheet on their analysis of NV Energy found that “even modest levels of load flexibility can yield large capacity savings.”
Specifically, the report found that 1 GW of data-center flexibility could defer 665 to 865 megawatts of new firm capacity needs and save $300 million to $400 million through 2050. Those savings would come from alleviating the utility’s need to build more gas-fired power plants and from substituting more “lower cost ‘energy’ focused resources such as solar plus storage.”
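The report doesn’t publish its underlying model, but the core mechanic is straightforward: if a flexible load ramps down during the few hours when system demand peaks, the utility doesn’t need to build firm capacity to cover it. The sketch below illustrates that peak-shaving arithmetic with an invented hourly load shape, curtailment window, and curtailment depth; none of these numbers come from the NV Energy analysis.

```python
# Illustrative only: a toy peak-shaving calculation with made-up numbers,
# not the GridLab/Telos capacity-expansion model.
import numpy as np

rng = np.random.default_rng(0)
hours = 8760

# Hypothetical system load (MW): seasonal and daily cycles plus noise.
t = np.arange(hours)
base = 5000 + 1200 * np.sin(2 * np.pi * t / 8760) + 800 * np.sin(2 * np.pi * t / 24)
system_load = base + rng.normal(0, 150, hours)

dc_load = 1000.0      # new data-center load, MW (1 GW)
curtail_hours = 50    # hours per year the data center agrees to ramp down
curtail_depth = 0.75  # fraction of its load shed during those hours

inflexible = system_load + dc_load

# Flexible case: shed data-center load only in the highest-demand hours.
flexible = inflexible.copy()
peak_hours = np.argsort(inflexible)[-curtail_hours:]
flexible[peak_hours] -= dc_load * curtail_depth

print(f"Peak with inflexible data center: {inflexible.max():,.0f} MW")
print(f"Peak with flexible data center:   {flexible.max():,.0f} MW")
print(f"Firm capacity deferred:           {inflexible.max() - flexible.max():,.0f} MW")
```

Run against a utility’s actual load forecast rather than toy data, the same comparison is what determines how many megawatts of new firm capacity a flexible data center lets planners defer.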
Getting data centers to commit to energy-flexible operations could make a huge difference across the country, according to Tyler Norris, a Duke University doctoral fellow who is a former solar developer and special adviser at the Department of Energy. He co-authored an analysis released in February that found existing U.S. grids could absorb nearly 100 gigawatts of new load from data centers willing to commit to a certain level of flexibility.
Getting data centers to ease off during specific hours of the year is eminently feasible, Norris argued in an August presentation to state utility regulators. Data centers’ “capacity utilization” rates — a measure of how much of their total potential power demand they’re using across all hours of the year — are all over the map, with some analyses estimating rates as low as 50%.
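To make that metric concrete: capacity utilization is simply the energy a facility actually consumes over a year divided by what it would consume running at full power every hour. A toy example with made-up numbers, in the same illustrative spirit as the sketch above:

```python
# Illustrative only: capacity utilization for a hypothetical data center.
hours_per_year = 8760
peak_capacity_mw = 100.0   # assumed maximum power draw
average_load_mw = 50.0     # assumed average draw across the year

energy_used_mwh = average_load_mw * hours_per_year
energy_at_full_power_mwh = peak_capacity_mw * hours_per_year

print(f"Capacity utilization: {energy_used_mwh / energy_at_full_power_mwh:.0%}")  # 50%
```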
But utility planners can’t build a grid around estimates, and data-center developers don’t have good reasons to commit to using less power unless they see a clear reward.
“Not even the most sophisticated data center owner-operators necessarily know what their utilization rates and load shapes will look like,” Norris wrote in an August blog post. “Their preference is generally to maintain maximal optionality” — that is, to lock in access to as much always-available power as they can get.
Nor do data centers have a clear path to achieve the kind of flexibility that utility planners may demand, said Ben Hertz-Shargel, global head of grid-edge research for analytics firm Wood Mackenzie.
“There are two main ways to make data centers flexible,” Hertz-Shargel said. “You can make the compute flexible. Or you can use backup generation, which is almost always diesel today.”
But data centers can’t run megawatts of noisy, polluting, and expensive diesel generators without running afoul of air-quality regulations and enraging neighbors, he said. True flexibility will require more novel options like gas-fired generators and batteries charged from the grid or on-site solar systems, he added.
Meanwhile, flexible computing is in its early stages. Of the major tech giants, only Google has actively engaged with utilities to shift computing to match grid needs. Experiments from companies such as Emerald AI have shown “some auspicious results,” Hertz-Shargel said. “But for the industry to count on that, it’s too early.”
Utilities and regulators will also need to adapt how they plan for serving flexible data centers, Telos Energy’s Stenclik said. Today, they’re tackling rising data-center costs in a variety of ways, from crafting special tariffs to govern data centers’ grid impacts to letting tech giants contract for 24/7 clean energy resources to meet their power demands. But he wasn’t aware of any utility that has undertaken a real-world version of the kind of demand-side flexibility analysis that GridLab and Telos did.
Utilities should start working on it, given the alternatives, he said. “We’re leading to higher total capacity needs. We’ve seen huge challenges on the supply chain. We’re out five, six years from new gas turbines now,” he estimated.
“I think there’s a ton of latent flexibility,” he concluded. “We’re just asking for it at the wrong time. If you ask for it when they’re already built and designed and on the system, the answer is going to be no. If we trade speed to interconnect for flexibility, I think the answer will absolutely be yes.”