This is just PR and carbon greenwashing.
The use case of "cloud provider buys power plant to power data center" does not exist: all current power plants are completely unsuitable as a single power source for a data center, because their uptime/reliability is way lower than what you would need. Decoupling datacenters from the grid is just a losing move, which is why no credible operator is even trying.
Are we gonna see more vertical integration between power generation and datacenter operation in the future? Maybe. But I'm very confident that we're not gonna see datacenters "leave the electrical grid" to be powered directly by nuclear plants, not now and not within 30 years either.
> This is just PR and carbon greenwashing.
> All current power plants are completely unsuitable as single power source for a data center
I don’t think this is a fair assessment.
For example, there is a data center being built in New Zealand that will be grid-connected, but the power will be supplied from a huge hydro dam that has to shut down generators if the load is not great enough. Its primary purpose is to power an aluminium smelter, and there is not enough transmission capacity to transfer all the electricity elsewhere.
Another counterexample is home solar. If a house is grid-connected it's still producing green energy, even if it sips from gas generation at night.
I call this greenwashing because Meta throws a minor amount of money at an existing plant, then basically claims the credit for all the CO2-free electricity that is produced there.
This is IMO just a CO2 accounting trick that does not really achieve anything: all that happens is that the average electricity user in that area gets a bit "dirtier" (on paper) while Meta becomes "zero emission" (on paper), and meanwhile nothing actually changes.
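To make the accounting point concrete, here's a toy calculation with entirely made-up numbers (the generation, consumption and emission figures are all hypothetical, just to show the shape of the argument):

    # Toy example: one regional grid with a nuclear plant and a gas plant,
    # before and after Meta "buys" a slice of the nuclear output on paper.
    nuclear_mwh = 8_000_000                 # hypothetical annual nuclear generation
    gas_mwh = 2_000_000                     # hypothetical annual gas generation
    gas_co2_t = gas_mwh * 0.4               # ~0.4 tCO2/MWh is a rough figure for gas

    total_mwh = nuclear_mwh + gas_mwh
    avg_before = gas_co2_t / total_mwh      # everyone shares the average intensity

    meta_mwh = 1_000_000                    # Meta's consumption, now attributed to nuclear
    residual = gas_co2_t / (total_mwh - meta_mwh)  # everyone else's paper intensity rises

    print(avg_before, residual)             # 0.08 vs ~0.089 tCO2/MWh
    print(gas_co2_t)                        # total emissions: unchanged either way

Meta reports zero, the residual grid mix reports slightly more, and the actual tonnes of CO2 emitted are exactly the same.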
Don't get me wrong, this is not harmful, but building new emission-free power or replacing fossil plants is much more useful than paying some cash to basically shift blame around.
> For example, there is a data center being built in New Zealand that will be grid connected but the power will be supplied from a huge hydro dam
Sure, but you still really want that grid connection, both to sell power when you have too much and to buy power when the turbines are being maintained or water is running low. My point here is just that it almost never makes sense to couple the plant and datacenter directly and skip the grid connection.
The technical term for this is "additionality", and Google has been aware of it for over a decade when purchasing green power, so there's no real excuse for Meta to ignore it.
https://sustainability.google/operating-sustainably/stories/...
> To ensure that Google is the driver for bringing new clean energy onto the grid, we insist that all projects be “additional.” This means that we seek to purchase energy from not-yet-constructed generation facilities that will be built above and beyond what’s required by existing energy regulations.
Unless the nuclear plant would shut down without their money, they're just taking carbon credits from the wider grid. Amazon had one of their nuclear plans rejected for this exact reason.
Depends on the use case. Dedicated datacenters for ML training can trade off power reliability vs. other factors like cost or carbon emissions.
You absolutely could do that if you wanted to basically burn money.
Datacenters, and especially ML training hardware, are highly capital intensive and depreciate at a basically constant rate regardless of utilization.
I currently see no scenario where you would be willing to idle this expensive infrastructure just to save pennies on the dollar on a grid connection; carbon credits would have to be nonsensically expensive for this to happen.
Adding more 9s is costly, and AI training is very well suited to being throttled and/or interrupted. I'm not talking about days or weeks of downtime, but these things are definitely being considered. Source: I'm working at a Google datacenter.
See e.g. this post from Urs Hölzle, one of the fathers of hyperscale computing: https://www.linkedin.com/posts/urs-h%C3%B6lzle_rethinking-lo...
Hm... I'm still not really buying the "turn datacenter off during peak electricity demand" scenario at all, because the ratios just don't seem credible to me:
Assuming ~$10M of capex (to buy the datacenter) per MW of electrical power (required by the datacenter), and hardware that is obsolete after 5 years (or even 10!), turning that datacenter off for an hour just to save ~$50/MWh (or whatever the spot price is) seems extremely counterproductive: the hardware sitting idle for that hour costs you multiple times that (you're spending >$100 per operating hour on the hardware alone, even assuming a 10-year lifetime).
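Rough back-of-envelope with the same assumed numbers (the capex, lifetime and spot price are the guesses from above, not real figures):

    # Implied hardware cost per operating hour vs. electricity saved by pausing.
    capex_per_mw = 10_000_000               # assumed $ of datacenter capex per MW of load
    lifetime_hours = 10 * 8760              # generous 10-year depreciation window
    hardware_per_hour = capex_per_mw / lifetime_hours   # ~$114 per hour per MW

    spot_price = 50                         # assumed $/MWh saved by switching off
    print(hardware_per_hour, spot_price)    # ~114 vs 50

    # On a 5-year schedule the idle hardware costs ~$228/h, so shutting down
    # to save ~$50/h of electricity is a 2-4x net loss per MW.

So demand response only pencils out when spot prices spike well above the amortized hardware cost.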
It seems much more attractive (and credible) to just install more batteries (or even a gas turbine), instead of chasing demand-side-regulation pretensions.
edit: thx for the link though, that is a very interesting study/data even if I disagree with that conclusion!
A power plant can be turned off for a month at a time for major maintenance.
Tbf, nuclear power plants have a capacity factor of more than 90%. Sure, you still need a backup (like the grid), but not having to use the grid 90% of the time is a huge amount.
Yes, but even 90% is completely insufficient for a datacenter, and you would have to substitute power for days or weeks (during refueling/maintenance), which makes backup systems unsuitable for the task.
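Back-of-envelope on what "days or weeks" means for backup, assuming a hypothetical 100 MW datacenter and a ~3-week refueling outage (both numbers made up for illustration):

    # Energy needed to bridge a single refueling outage without the grid.
    load_mw = 100                  # hypothetical datacenter load
    outage_hours = 21 * 24         # ~3-week refueling/maintenance outage
    backup_mwh = load_mw * outage_hours
    print(backup_mwh)              # ~50,400 MWh to bridge one outage

    # Even the largest grid-scale battery installations today store on the
    # order of a few thousand MWh, so you'd be running diesel/gas generators
    # for weeks, or, more realistically, just using the grid.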
Taking this approach would also basically lock your datacenter power use to the exact output power of the reactor, preventing you from scaling either side of the setup freely.
I think looking at moves like this from a power perspective is wrong, and I strongly believe that this is just minimum effort hedging against increasing CO2 costs (both monetary and reputational), i.e. greenwashing.