Earth's oceans contain approximately 1.35 billion cubic kilometers of water. To raise this entire volume from an average temperature of 3.5 °C to boiling (100 °C), we'd need roughly:
1.35 x 10^21 kg x 4,184 J/(kg·°C) x 96.5 °C is approximately 5.45 x 10^25 joules
That's 545 million exajoules or about 10,000 times humanity's annual energy consumption.
If you tried to brute-force AES-256 with conventional computers, you'd need to check 2^256 possible keys. Even with a billion billion (10^18) attempts per second:
2^256 operations / 10^18 operations/second is approximately 10^59 seconds. You'd need about 2.7 x 10^41 universe lifetimes to crack AES-256.
At about 10 watts per computer, this would require approximately 10^60 joules, or roughly 2 x 10^34 times the energy needed to boil the oceans. You could boil the oceans, refill them, and repeat this process 200 trillion trillion trillion times.
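The time figures above are easy to check; here's a quick back-of-envelope sketch in Python (using the comment's round numbers of 2^256 keys and 10^18 attempts per second):

```python
# Back-of-envelope check of the brute-force timeline above.
KEYS = 2**256                 # full AES-256 key space
RATE = 10**18                 # a billion billion attempts per second

seconds = KEYS / RATE
print(f"{seconds:.2e} s")     # ~1.16e59 seconds

UNIVERSE_AGE_S = 13.79e9 * 3.156e7   # 13.79 billion years, in seconds
lifetimes = seconds / UNIVERSE_AGE_S
print(f"{lifetimes:.1e} universe lifetimes")   # ~2.7e41
```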
For RSA-2048, the best classical algorithms would need about 2^112 operations. This would still require around 10^27 joules, or about 20 times what's needed to boil the oceans.
ECC with a 256-bit key would need roughly 2^128 operations to crack, requiring approximately 10^31 joules. That's enough to boil the oceans about 2,000 times over.
Quantum computers could theoretically use Shor's algorithm to break RSA and ECC much faster. But to break RSA-2048, we'd need a fault-tolerant quantum computer with millions of qubits. Current quantum computers have fewer than 1,000 stable qubits.
Even with quantum computing, the energy requirements would still be astronomical. Perhaps enough to boil all the oceans once or twice, rather than thousands of times.
That's assuming no attacks are found against a given algorithm. If a feasible attack is found, the math changes, sometimes dramatically. And we'll never know, because they sure as hell aren't gonna announce it.
Anyway, I'm not worried because governments don't need to crack encryption to do dastardly shit. They have far easier methods to get what they want.
There's also the matter of picking constants for encryption algorithms that are supposed to be "nothing up my sleeve" numbers, like the first n digits of pi.
DJB had a good talk about how many degrees of freedom you still get when picking such numbers, and how much you can weaken crypto algorithms with them (even without outright breaking them), but I can't find it at the moment.
You need to account for the heat of vaporization if you plan on boiling away and refilling the oceans for your brute-force scheme, so you're overestimating how many times you can boil away the oceans by a factor of 6 or so.
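That factor of 6 can be checked against standard handbook values (specific heat of ~4,184 J/(kg·°C) for fresh water, latent heat of vaporization of ~2.26 MJ/kg at 100 °C — sea water differs slightly, but this is a ballpark):

```python
# Heating water from 3.5 °C to 100 °C vs. actually boiling it away.
SPECIFIC_HEAT = 4184        # J/(kg*°C), fresh water
LATENT_HEAT = 2.26e6        # J/kg, heat of vaporization at 100 °C

heat_to_boiling = SPECIFIC_HEAT * (100 - 3.5)   # ~4.0e5 J/kg
total_to_vapor = heat_to_boiling + LATENT_HEAT  # ~2.7e6 J/kg
print(total_to_vapor / heat_to_boiling)         # ~6.6: the "factor of 6"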
The "boiling the ocean" argument has been coming up every once in a while for some time now, just a lot more structured and number-packed in the age of LLMs. There are even funny "security" levels based on this [0], like "lake security".
The picture they paint is very useful for helping people grasp the scale of "worst case" brute forcing, while being completely misleading about the effort needed to break encryption "somehow". Cracking encryption usually isn't about brute-forcing every possible combination; it's about finding or building a flaw in the algorithm.
Bike thieves don't go through the 10000 combinations on your lock, scammers don't try every possible email password, etc.
Brute forcing a key finds you one answer at a time, hacking the algorithm finds you all answers at once. Without boiling the ocean.
An adversary with full Intel Management Engine (IME) access could intercept AES-NI instruction calls before execution, replacing them with compromised implementations that maintain superficial compliance with expected behavior. The encryption would still appear to function, much as a funeral home makeup artist makes the deceased appear lifelike. These direct instruction interceptions occur at a level below the operating system and hypervisor, making them essentially invisible to security monitoring.
The IME's DMA capabilities enable memory inspection without host awareness. Cryptographic keys residing in RAM become visible to this subsystem, essentially placing the combination to the digital vault in plain view of an entity designed never to be seen. One might say the keys to the kingdom are being displayed on a billboard visible only to those standing in another dimension. This extraction could happen before legitimate AES-NI operations even process the key material.
Random number generation becomes particularly vulnerable. By introducing subtle biases to hardware entropy sources like CPU thermal or timing sensors, an adversary could ensure generated keys fall within a predictable pattern while presenting all appearances of randomness.
Statistical tests would show nothing amiss, like a coin biased toward heads so slightly that no test you think to run reveals it, yet the casino still goes bankrupt. These manipulations would bias the PRNG toward predictable entropy patterns that drastically reduce the effective key space.
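A toy sketch of the idea (not a real attack, and nothing to do with actual IME internals): "256-bit" keys are derived from a backdoored entropy source that secretly has only 16 bits of state. Each individual key looks statistically random, but anyone who knows the trick can enumerate every candidate.

```python
import hashlib
import secrets

# Hypothetical backdoored key generator: the hash whitens a tiny seed,
# so the output passes simple randomness checks despite a 2**16 key space.
def backdoored_key() -> bytes:
    seed = secrets.randbelow(2**16)            # the only real entropy
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()

key = backdoored_key()

# Naive frequency test: roughly half the 256 bits should be set. It passes.
ones = sum(bin(b).count("1") for b in key)
print(ones, "of 256 bits set")

# The "attacker" recovers the key by brute-forcing the hidden seed.
candidates = {hashlib.sha256(s.to_bytes(2, "big")).digest()
              for s in range(2**16)}
print(key in candidates)                       # True: effective key space is 2**16
```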
Microcode updates deployed through IME channels could modify AES-NI instruction behavior at its core, ensuring the cryptographic equivalent of building a vault door with steel exterior panels but papier-mache hinges. Everything looks secure until someone approaches from the correct angle. These updates could specifically target the AES-NI implementation to use reduced key space or introduce mathematical weaknesses into the diffusion properties of the algorithm.
Side-channel attack facilitation presents another avenue for compromise. The IME could enable precise timing measurements of AES operations, deliberately increase susceptibility to cache-timing attacks, and manipulate power states to enhance the effectiveness of power analysis techniques while appearing to function normally.
The most effective entropy reduction strategy would likely combine several approaches: replacing the AES-NI implementation with one that only explores a fraction of the key space, creating deterministic but seemingly random patterns for key generation, leaking key material via covert channels to the IME's persistent storage, and maintaining the outward appearance of full entropy while drastically reducing actual security margins.
Detection of such tampering remains virtually impossible given the IME's isolated execution environment.
Security researchers can only examine the results of cryptographic operations, unable to observe the process directly, similar to trying to determine whether someone has tampered with your food while you're blindfolded.
The mathematics of AES remain sound, of course. But mathematics requires faithful execution to maintain security guarantees, and therein lies the fundamental issue.
AES-NI itself doesn't provide an avenue for key-entropy reduction (it doesn't generate keys), for exfiltration of stolen keys through the encrypted output, or for introducing mathematical weaknesses into the diffusion properties of the algorithm. If an AES implementation produces output that differs by even one bit from a correct implementation, decryption will fail.
Non-constant timing would also be detectable, though as you say cache side channels are feasible. Power-side-channel key exfiltration is certainly feasible if the attacker can measure power consumption, but AES-NI isn't relevant to many threat models that permit power side channels; amd64 CPUs aren't used in smartcards.
But certainly the IME could steal AES or other cryptographic keys from memory, store them in its own storage, and leak them through some other channel.
"... brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space."
This is an excellent comment, but I think it's worth pointing out some lacunae.
The most important one is that we're assuming that nobody finds a weakness in AES-256, so we have to brute-force it instead of taking some kind of shortcut. Historically speaking, that doesn't seem like a sure bet. (Some slight progress has been made on AES, but nothing practically useful yet: https://en.wikipedia.org/wiki/Advanced_Encryption_Standard#K...) Similar comments apply to factoring large semiprimes and ECDLP; algorithmic improvements could remove many orders of magnitude from these estimates.
Sometimes, even when weaknesses aren't known in the algorithms themselves, there are weaknesses in how they are applied. The Debian OpenSSL fiasco, which seems to have been accidental, may be the best-known example: all secret keys were generated with only 16 bits of entropy. Reusing IVs for OFB or CTR mode is also catastrophic.
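The IV-reuse catastrophe is easy to demonstrate. The sketch below uses a toy SHA-256-based keystream standing in for real AES-CTR (to stay stdlib-only); the failure mode is identical: the keystream depends only on (key, IV), so XORing two ciphertexts that share them cancels the keystream entirely.

```python
import hashlib

# Toy stream cipher: keystream derived from (key, IV, counter).
def keystream(key: bytes, iv: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, iv: bytes, msg: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(msg, keystream(key, iv, len(msg))))

key, iv = b"k" * 32, b"same-iv!"          # the fatal mistake: one IV, two messages
p1, p2 = b"attack at dawn!!", b"retreat at dusk!"
c1 = encrypt(key, iv, p1)
c2 = encrypt(key, iv, p2)

# Ciphertext XOR equals plaintext XOR -- the key never enters into it.
leak = bytes(a ^ b for a, b in zip(c1, c2))
print(leak == bytes(a ^ b for a, b in zip(p1, p2)))   # True
```

With one plaintext known (or guessable), the other falls out immediately.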
A somewhat pedantic note is that you seem to be using two conflicting definitions of "boil the oceans" in different parts of your comment: to raise them to the boiling temperature while leaving them liquid, at first, and to convert them to vapor, later, since you talk about "refilling them". Converting them to vapor requires several times more energy than that. Also, you dropped an order of magnitude somewhere; raising the oceans to boiling requires 5.46 × 10²⁶ J, not 5... × 10²⁵ as you say. ("545 million exajoules" is correct.)
I used `cal_mean` from units(1) to do the calculation, which is based on the mean specific heat of water from 1° to 100°. I'm not sure that's correct for salt water, though, and in any case that's a minor error.
"about 10,000 times humanity's annual energy consumption" is wrong. 545 million exajoules is about a million years of humanity's energy consumption, which is only about 18 terawatts, excluding agriculture.
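Redoing the arithmetic makes both corrections concrete; a quick Python check, taking ~18 TW for world energy consumption as above:

```python
# Heating the oceans from 3.5 °C to 100 °C, with the corrected magnitude.
OCEAN_MASS = 1.35e21          # kg (~1.35 billion km^3 of water)
SPECIFIC_HEAT = 4184          # J/(kg*°C)
DELTA_T = 100 - 3.5           # °C

energy = OCEAN_MASS * SPECIFIC_HEAT * DELTA_T
print(f"{energy:.2e} J")      # ~5.45e26 J, i.e. 545 million EJ

HUMANITY_POWER = 18e12        # W, ~18 TW
years = energy / (HUMANITY_POWER * 3.156e7)
print(f"{years:.1e} years of world energy consumption")   # ~1e6, not 1e4
```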
As gosub100 pointed out, on average you only have to try 2²⁵⁵ possible keys before finding the right one, not all 2²⁵⁶, but that's only a factor of 2.
10¹⁸ AES attempts per second does seem like a reasonable upper bound, but it's much faster than currently existing encryption hardware. 10¹⁸ Hz is the frequency of 0.3-nanometer X-rays with an energy of about 4000 electron volts. I feel like any computer hardware that is performing operations that fast probably cannot be made out of molecules or atoms. You might be able to build it on the surface of a neutron star or a black hole. Seth Lloyd's Nature paper from 02000 on the "ultimate laptop", "Ultimate physical limits to computation", explores some of the physical phenomena involved, and how fast they could possibly compute: https://faculty.pku.edu.cn/_resources/group1/M00/00/0D/cxv0B...
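The X-ray comparison checks out with textbook constants (λ = c/f, E = hf):

```python
# Wavelength and photon energy of electromagnetic radiation at 10^18 Hz.
H = 6.626e-34     # Planck constant, J*s
C = 2.998e8       # speed of light, m/s
EV = 1.602e-19    # joules per electron volt

f = 1e18          # Hz
wavelength_nm = C / f * 1e9
photon_ev = H * f / EV
print(f"{wavelength_nm:.2f} nm")   # ~0.30 nm
print(f"{photon_ev:.0f} eV")       # ~4100 eV, about 4 keV
```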
If we take 10¹⁸ Hz and 2²⁵⁶ cycles as given, it is true that one computer would need 10⁵⁹ seconds to finish the job (4×10⁵¹ years), which is indeed about 2.7 × 10⁴¹ times longer than the universe has existed so far (13.79 billion years). But it's worth pointing out that the universe's lifetime is not yet over; it is expected to continue existing much longer than that: https://en.wikipedia.org/wiki/Timeline_of_the_far_future lists various stages of its future evolution, including the end of star formation in 10¹²–10¹⁴ years, the last star burning out in 1.2 × 10¹⁴ years, 10³⁰ years until all the galaxies fall apart, 2×10³⁶–3×10⁴³ years until all protons and neutrons are gone (if protons decay), 10⁹¹ years until the Milky Way's black hole evaporates, and 10¹⁰⁶–2.1×10¹⁰⁹ years until the last black holes evaporate. If protons are stable, you could definitely build a computer that kept computing for the necessary 10⁵² years.
And (as you point out next!) you could use more than one computer. If you could somehow use 10⁵⁹ computers, you could finish the job in a second, rather than in untold eons. It depends on how many computers you can get!
"10 watts" is a somewhat handwavy estimate. Most of the computers around me, in things like my multimeter and my MicroSD card, use a lot less power than that, often a few milliwatts. (The fact that the MicroSD card doesn't have a monitor and keyboard is irrelevant to using it for AES cracking.) I'm currently working on a project called the Zorzpad, to build a self-sufficient portable personal computing environment on under a milliwatt, something that has become possible recently due to advancements in subthreshold digital logic.
But even a milliwatt may be an overestimate for AES cracking on classical hardware, because reversible logic may be able to drop power consumption by one or more additional orders of magnitude, and as far as we know, there's no lower limit (not even the ones Lloyd's article talks about apply). AES cracking is especially suited for reversible computing, which is why I used it as an example in this comment a week ago: https://news.ycombinator.com/item?id=43850835
It may be worth pointing out that 10⁶⁰ joules (which, despite the possible weaknesses above in its derivation, is certainly a plausible ballpark) is a large number not just measured against Earth, but measured against the Sun and indeed the energy output of the entire Milky Way galaxy.
It's even large compared to the available energy in the Milky Way. If you divide it by c² you get 1.2 × 10⁴³ kg. The Milky Way weighs 1.15 × 10¹² solar masses (https://en.wikipedia.org/wiki/Milky_Way) which turns out to be 2.29 × 10⁴² kg, which is 2.06 × 10⁵⁹ J. So even if you converted the entire galaxy into energy to power your AES crackers, you wouldn't get 10⁶⁰ J.
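The mass-energy comparison is a one-liner to verify via E = mc²:

```python
# Total mass-energy of the Milky Way vs. the ~10^60 J cracking budget.
C = 2.998e8                       # m/s
SOLAR_MASS = 1.989e30             # kg

milky_way_kg = 1.15e12 * SOLAR_MASS
print(f"{milky_way_kg:.2e} kg")   # ~2.29e42 kg

energy = milky_way_kg * C**2
print(f"{energy:.2e} J")          # ~2.06e59 J -- still short of 10^60 J
```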
It's probably worth including AES performance numbers on currently available hardware. You'll still get galactic numbers demonstrating that AES-256 is not currently brute-forceable.
Thank you for this correction and additional perspective.
The Debian vulnerability was particularly bad. An AES key with 16 bits of entropy can be broken with the energy used by a single LED for a fraction of a nanosecond.
Reducing entropy covertly is probably the sole purpose of the so-called Intel Management Engine
I'm not sure the Debian vulnerability affected AES keys, but it definitely affected RSA keys.
A single LED is somewhere between 1 milliwatt and 1 watt, so in a tenth of a nanosecond it uses between 100 femtojoules and 100 picojoules. 2¹⁵ AES encryption operations currently require a lot more energy than that. I'm not sure how much, but it's a lot more.
How much does an AES encryption operation take? https://calomel.org/aesni_ssl_performance.html suggests AES-256-GCM runs at 2957 megabytes per second on each core of an "Intel Gold 5412U", which https://www.intel.la/content/www/xl/es/products/sku/232374/i... tells me is a 24-core CPU launched in Q1 of 02023 with a TDP of 185 watts. https://en.wikipedia.org/wiki/Advanced_Encryption_Standard says the AES block size is 128 bits, so 2957MB/s is 185 million blocks per second per core. Dividing 185 watts by 24 cores of that gives 41.7 nanojoules per block. This is probably reasonably representative of energy requirements for current AES hardware implementations. It presumably doesn't include key setup time, and brute-force cracking will do more key setup than normal encryption, but it's probably in the ballpark, especially for dedicated chips ticking through closely related keys. In any case, key setup surely cannot take less than zero energy, so this represents a lower bound.
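The per-block energy figure falls straight out of those throughput and TDP numbers:

```python
# Energy per AES block from the calomel.org throughput figures above.
THROUGHPUT = 2957e6         # bytes/s per core, AES-256-GCM
BLOCK = 16                  # bytes per AES block
TDP = 185                   # watts, whole CPU
CORES = 24

blocks_per_s = THROUGHPUT / BLOCK                 # ~1.85e8 blocks/s/core
joules_per_block = (TDP / CORES) / blocks_per_s
print(f"{joules_per_block * 1e9:.1f} nJ/block")   # ~41.7 nJ
```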
Running
openssl speed -elapsed -evp aes-256-gcm
on my own laptop (without -evp, I get "speed: Unknown algorithm aes-256-gcm"), I get 3900 megabytes per second for large block sizes, or 2300 megabytes per second running on battery power.
The 'numbers' are in 1000s of bytes per second processed.
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes 16384 bytes
AES-256-GCM 353329.81k 1012347.01k 2190564.18k 3178319.19k 3791358.63k 3676427.61k
According to
cat /sys/class/power_supply/BAT0/power_now
I'm using about 12–16 million microwatts to do this, compared to about 6–8 watts when idle. So we can ballpark the AES energy consumption around 7 watts. Dividing that by 2300 megabytes per second, it comes out to about 49 nanojoules per block. This is reassuringly similar to the calomel numbers.
The number for 16-byte blocks is much lower, like 240 megabytes per second on battery and 360 megabytes per second on AC power. This probably tells us key setup takes about an order of magnitude more energy than encrypting a block, but maybe that's just because AMD was optimizing encryption speed over key setup speed.
2¹⁵ times 40 nanojoules is 1.3 millijoules. This is between 13 million and 13 billion times more than the energy used by a single LED for a fraction of a nanosecond.
Also, 2²⁵⁵ times 40 nanojoules is 2.3 × 10⁶⁹ J, a couple billion times larger than your estimate upthread. It's pretty amazing that in 67 nanoseconds my CPU can encrypt something such that it would require, as far as we know, the resources of billions of galaxies to decrypt without knowing the key.
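Both figures fall out of the same ~40 nJ/block estimate:

```python
# Energy for 2^15 trial encryptions (Debian-style key space) and for
# 2^255 (the expected half of AES-256's key space), at ~40 nJ each.
NJ_PER_TRIAL = 40e-9

led_cracking = 2**15 * NJ_PER_TRIAL
print(f"{led_cracking * 1e3:.1f} mJ")     # ~1.3 mJ

full_space = 2**255 * NJ_PER_TRIAL
print(f"{full_space:.1e} J")              # ~2.3e69 J
```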
The IME is probably a backdoor, but I don't think we have enough information to say clearly what kind of backdoor.
2¹²⁷ nanoseconds would be only 390 billion times longer than the universe has existed so far (13.79 billion years). If you wanted to crack AES-128 with brute force using one-billion-key-per-second cracking computers and could only wait a year, you would need 5.4 sextillion computers. If each of those computers weighed 100 milligrams, in the neighborhood of many current chips, their total mass would be 539 trillion tonnes (5.39 × 10¹⁷ kg, 539 exagrams).
That's only about a hundred thousandth of the mass of the Moon, and there are dozens of asteroids larger than this. Since it's clearly physically possible to disassemble an asteroid, or even the entire Moon, and build computers out of it, AES-128 should not be considered secure against currently known attacks. However, currently, it is not publicly known that the NSA has converted any asteroids into computers, and it seems unlikely to have happened secretly.
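The fleet arithmetic, spelled out (Moon mass ~7.35 × 10²² kg):

```python
# How much hardware to crack AES-128 by brute force in one year.
KEYS = 2**127                 # expected trials for AES-128
RATE = 1e9                    # keys/s per cracking computer
YEAR = 3.156e7                # seconds

computers = KEYS / (RATE * YEAR)
print(f"{computers:.1e} computers")       # ~5.4e21, i.e. 5.4 sextillion

mass_kg = computers * 100e-6              # 100 mg per computer
MOON_KG = 7.35e22
print(f"{mass_kg:.2e} kg, {mass_kg / MOON_KG:.1e} of the Moon's mass")
```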
It's worth noting that when the NSA invented DES, they took a cipher from IBM and made it more resistant (to differential cryptanalysis, a technique that at the time wasn't known outside the NSA itself).
> NSA gave Tuchman a clearance and brought him in to work jointly with the Agency on his Lucifer modification. . . . NSA tried to convince IBM to reduce the length of the key from 64 to 48 bits. Ultimately, they compromised on a 56-bit key.
> The cryptographic core of NSA's sabotage of DES was remarkably blunt: NSA simply convinced Tuchman to limit the key size to 56 bits, a glaring weakness.
> Whit Diffie and Marty Hellman wrote a paper explaining in considerable detail how to build a machine for $20 million that would break each DES key with an amortized cost of just $5000/key using mid-1970s technology. They predicted that the cost of such a brute-force attack would drop "in about 10 years time" to about $50/key, simply from chip technology improving.
> Diffie and Hellman already distributed drafts of their paper before DES was standardized. Did NSA say, oh, oops, you caught us, this isn't secure?
> Of course not. NSA claimed that, according to their own estimates, the attack was 30000 times more expensive: "instead of one day he gets something like 91 years".
The main source here is https://archive.org/details/cold_war_iii-nsa/cold_war_iii-IS..., "American Cryptology during the Cold War, 1945-1989", DOCID: 523696, REF ID: A523696, a declassified internal NSA history. A longer version of the quote above, originally classified TOP SECRET UMBRA, from p. 232 (p. 240/271):
> (S CCO) The decision to get involved with NBS was hardly unanimous. From the SIGINT standpoint, a competent industry standard could spread into undesirable areas, like Third World government communications, narcotics traffickers, and international terrorism targets. But NSA had only recently discovered the large-scale Soviet pilfering of information from U.S. government and defense industry telephone communications. This argued the opposite case - that, as Frank Rowlett had contended since World War II, in the long run it was more important to secure one's own communications than to exploit those of the enemy.
> (FOUO) Once that decision had been made, the debate turned to the issue of minimizing the damage. Narrowing the encryption problem to a single, influential algorithm might drive out competitors, and that would reduce the field that NSA had to be concerned about. Could a public encryption standard be made secure enough to protect against everything but a massive brute force attack, but weak enough to still permit an attack of some nature using very sophisticated (and expensive) techniques? NSA worked closely with IBM to strengthen the algorithm against all except brute force attacks and to strengthen substitution tables, called S-boxes. Conversely, NSA tried to convince IBM to reduce the length of the key from 64 to 48 bits. Ultimately, they compromised on a 56-bit key.
This may sound like a paranoid conspiracy theory, but it is the point of view of an NSA insider, writing in 01998 for an audience of NSA cryptanalysts and cryptographers to educate them on the history of cryptology during the Cold War. It is understandable that Schneier and others believed that the overall influence of the NSA on DES was to increase its security, because they did not have access to this declassified material when they formed those opinions; it wasn't declassified until July 26, 02013.
That's true, but the fact that NSA wanted to make brute force cheaper also suggests that they didn't have any particular offensive tricks up their sleeve (they had differential cryptanalysis but they used their knowledge defensively) like they did with Dual_EC_DRBG.
Yes; also, if they had had such tricks, they probably would have mentioned them in that document, perhaps in a following paragraph that was censored from the declassified version. But there seems to have been no such paragraph, further supporting your inference.
I like to tell myself that everyone at NSA is a fine upstanding patriot, and that the agency only ever does what is in the best interest of the American People, but that does feel naive at times. Like when they infiltrate international standards bodies to introduce backdoors.
Is downsizing the NSA something we're upset about?
It doesn't really matter what character most of them have. Most information is on a need-to-know basis for a reason: the ones giving the orders can tell a tale about foreign terrorists while the grunts happily surveil the common man.