Hacker News



> Do we just make it act like /dev/urandom by default, and add a new flag for "wait for entropy"?

Dear God. The CSPRNG situation on Linux is deeply depressing.

/dev/urandom is useless because it spews non-random data if it hasn't been seeded yet.

/dev/random is useless because it starts blocking if you try to read too much data from it, because of a mistaken belief that a properly seeded CSPRNG can run out of entropy.

Plus they're both slow as hell, so people try to implement their own PRNGs, often having bugs in the generator or seeding, leading to security issues.

Meanwhile the BSDs have handled this correctly for years. But inexplicably, instead of actually fixing /dev/(u)random, the Linux engineers decide to add a new getrandom() syscall which implements the correct behaviour of only blocking if the PRNG hasn't been seeded.

So finally with getrandom() Linux has a way to securely generate random data without unnecessarily blocking, and now Linus seems to be floating the idea to break it again!

The kernel has plenty of ways to securely seed a PRNG at boot on modern systems; IRQ timings, multicore tricks, sensor data, etc. Run some statistical tests on it to ensure you have a couple hundred bits of randomness and you're done.


> So finally with getrandom() Linux has a way to securely generate random data without unnecessarily blocking, and now Linus seems to be floating the idea to break it again!

Yes, getrandom() works pretty much the "right" way. But the problem is that it can still block indefinitely during boot. And nobody really wants their computer to simply stop working because the kernel can't guarantee that its entropy isn't theoretically "bad". Real users do not want this. But it happens.
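The seeded-or-not state is observable from userspace without risking a hang. A minimal sketch, assuming Linux >= 3.17 with glibc's getrandom() wrapper (the helper name is mine):

```c
#include <errno.h>
#include <sys/random.h>

/* Returns 1 if the kernel CSPRNG is seeded (bytes obtained), 0 if the
 * pool has not been initialised yet, -1 on any other error. */
int csprng_seeded(void)
{
    unsigned char buf[16];
    ssize_t n = getrandom(buf, sizeof buf, GRND_NONBLOCK);
    if (n == (ssize_t)sizeof buf)
        return 1;
    if (n < 0 && errno == EAGAIN)   /* pool not initialised yet */
        return 0;
    return -1;
}
```

GRND_NONBLOCK turns the "block until seeded" behaviour into an EAGAIN, so a boot-time service could poll or fall back instead of hanging.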

The root of this is security paranoia. Security people didn't want the RDRAND instruction to be trusted. systemd didn't credit the entropy pool when writing in the saved seed file from the previous boot; only very recently did it gain an option to credit the pool. All of these things are mixed into the pool, and on any desktop machine /dev/urandom is absolutely fine, but pressure from security experts has forced these systems not to trust that real entropy has been added from the many sources that are already implemented. You might be surprised how many people make this problem go away by running havaged, which provides very dubious entropy.


> Security people didn't want the RDRAND instruction to be trusted.

The recent AMD issues have shown that you certainly shouldn't trust rdrand blindly. Even using it after running some statistical tests would still have blocked the kernel bootup on affected machines.

> And nobody really wants their computer to just stop working

I would rather have my computer stop working than generate cryptographic keys from an all-1s seed.

Of course filling the entropy pool should keep making progress, however slowly, e.g. via the jitter RNG, and eventually unblock the system. But as long as there isn't enough entropy available for userspace, it shouldn't pretend there is.


> You might be surprised how many people make this problem go away by running havaged, [sic] which provides very dubious entropy.

My personal favorite is:

  # rngd -r /dev/urandom -o /dev/random


TBF that's a proper fix for the stupidity that is the entropy estimator.


Whenever I read these discussions, I always see references to BSD vs Linux. The BSD "way" seems cleaner to me, but I'm certainly no expert.

But does anyone know what Windows and macOS (and iOS, for that matter) do to "warm up" entropy?


On macOS, /dev/random never blocks. From the beginning up until 2014 or so, this was blatantly insecure: the only initial entropy was the system clock! In microseconds, not seconds, thankfully, but that's still a very low amount of entropy. securityd in userland would send the kernel more entropy once it came up, but before that point, /dev/random would just spew low-quality random numbers.

Since 2014, however, macOS has expected to get a random seed from the bootloader, which in turn gets it from rdrand if available, or some complicated timer stuff if not. I'm not sure how secure the latter is, but as of Mojave, there are no longer any supported Macs without rdrand, making the issue moot...


On Windows, I believe RtlGenRandom is pretty similar to getrandom() etc. Including the "it won't fail due to failure to open a device" aspect.


> inexplicably, instead of actually fixing /dev/(u)random, the Linux engineers decide to add a new getrandom() syscall which implements the correct behaviour of only blocking if the PRNG hasn't been seeded.

FWIW the OpenBSD folks first implemented getentropy() and recommended that Linux do the same[0]: because device files cause various issues (e.g. chrooting, attacker-controlled FD exhaustion, …), having a reliable syscall is extremely valuable.

Sadly the Linux folks way over-engineered the thing with all kinds of tuning knobs so that you can get any preexisting behaviour you want: by default getrandom will read from the "urandom source" but block until the entropy pool has been initialised once.

However, you can also:

* GRND_RANDOM to have it read from the "random source" and block if "no random bytes are available"

* GRND_NONBLOCK to have it never block on lack of entropy, whether from the entropy pool not being initialised at all (!GRND_RANDOM) or because "there are no random bytes" (GRND_RANDOM), in which case it can also fail (getentropy should only fail if you give it an invalid buffer address or you request more than 256 bytes)

[0] https://www.openbsd.org/papers/hackfest2014-arc4random/mgp00... also prompted by the libressl project complaining about the lack of a safe way to get good random data[1]

[1] https://github.com/libressl-portable/openbsd/blob/4e9048830a...
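The flag combinations above map to calls like these (a sketch; the two wrapper names are mine, and on kernels since 5.6 GRND_RANDOM no longer blocks once the pool is initialised):

```c
#include <sys/types.h>
#include <sys/random.h>

/* Default behaviour: "urandom source", blocks only until the entropy
 * pool has been initialised once. */
ssize_t read_urandom_source(void *buf, size_t len)
{
    return getrandom(buf, len, 0);
}

/* GRND_RANDOM | GRND_NONBLOCK: "random source", but failing with EAGAIN
 * instead of blocking when the kernel decides no random bytes are
 * available (pre-5.6 semantics). */
ssize_t read_random_source_nonblock(void *buf, size_t len)
{
    return getrandom(buf, len, GRND_RANDOM | GRND_NONBLOCK);
}
```

Note getrandom() can also return fewer bytes than requested for large reads, which getentropy() sidesteps by capping requests at 256 bytes.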


> /dev/urandom is useless because it spews non-random data if it hasn't been seeded yet.

Does it? Under what circumstances? Where can I read about it?


`man 4 random` states,

> When read during early boot time, /dev/urandom may return data prior to the entropy pool being initialized. If this is of concern in your application, use getrandom(2) or /dev/random instead.
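The device-only equivalent of that advice looks roughly like this sketch (function names are mine): block once on /dev/random to confirm the pool has been initialised, then read bulk data from /dev/urandom.

```c
#include <stdio.h>

/* Reads exactly len bytes from path; returns 0 on success, -1 on error. */
static int read_exact(const char *path, unsigned char *buf, size_t len)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    size_t got = fread(buf, 1, len, f);
    fclose(f);
    return got == len ? 0 : -1;
}

int secure_random_bytes(unsigned char *buf, size_t len)
{
    unsigned char probe;
    /* Blocks until the kernel pool has been seeded at least once. */
    if (read_exact("/dev/random", &probe, 1) != 0)
        return -1;
    return read_exact("/dev/urandom", buf, len);
}
```

On any kernel with getrandom(2) you would just call that instead; this pattern only matters for code that must stick to the device files.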


It happened on my system during the last boot. dmesg shows dbus-daemon tried (twice!) to read urandom before it was seeded, and the next message (same second, about 200 ms later) says urandom has been seeded. It's a race condition!


> Meanwhile the BSDs have handled this correctly for years.

How do the BSDs handle it correctly?


And before people start bashing systemd: the cause of the boot hangs was actually the collection of randomness for an Xorg/Xwayland MIT cookie through gdm. So Linus' e-mail linked by the parent is somewhat inaccurate.

https://lore.kernel.org/linux-ext4/20190915081747.GA1058@dar...

https://lore.kernel.org/linux-ext4/20190916014050.GA7002@dar...


Yet it is implicitly directed at systemd, which has not hesitated to break userspace in the past. More than once.


> The correct fix is to fix getrandom() to not block when it's not appropriate,...

I'm not a cryptography expert, but this suggestion doesn't look right.

Edit: IMO the main problem is the lack of a forward progress guarantee for entropy generation, even if there are suitable sources for entropy in the system.


I've worked in crypto before (not now though) and your statement is actually a common misconception. The information about whether the randomness is sourced from "true" entropy or not is in and of itself a form of entropy from the perspective of an assailant. It's enough that given some period of time, "some" amount of entropy is present and how much is dependent on the degree to which the system is used. You can use a game theoretic approach to determine how much that is.

edit: to clarify, occasionally not using /dev/random when it may block is not actually a security issue (in most cases)


How do we know which cases?


The only case where it could be an issue is at bootup, when the subsystem simply does not have enough entropy. OpenBSD solved that by storing a seed for next bootup at the end of boot and during shutdown.


So what happens at the very first boot (e.g. after system installation, or cloud instance just being spawned)? Is that the only circumstance where it would be OK to block? Does OpenBSD trust RdRand for such occasions?


You can set /etc/random.seed (or /var/db/host.random for spawned instances) prior to first boot. That's what cloud providers do IIRC. It also mixes in hardware randomness (if available).
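Provisioning that seed from the host can be as simple as this sketch (the path, the 512-byte size, and the function name are illustrative, and I'm using Linux's getrandom() here as an assumption about the host; OpenBSD itself would use getentropy() in chunks):

```c
#include <stdio.h>
#include <sys/random.h>

/* Writes a 512-byte seed from the host's own seeded CSPRNG into the
 * guest image at path; returns 0 on success, -1 on error. */
int write_seed_file(const char *path)
{
    unsigned char seed[512];
    if (getrandom(seed, sizeof seed, 0) != (ssize_t)sizeof seed)
        return -1;
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    size_t n = fwrite(seed, 1, sizeof seed, f);
    fclose(f);
    return n == sizeof seed ? 0 : -1;
}
```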


> Does OpenBSD trust RdRand for such occasions?

It is used as one source among many to seed the PRNG, so I think the answer is no.


A new cloud instance seems like a small problem. The host could just generate an entropy file for the first boot.


Yes - the hard case is a little flash-based low power ARM router, cloned by the million.


From 2012, but still at least somewhat relevant (https://factorable.net/weakkeys12.extended.pdf).

> RSA and DSA can fail catastrophically when used with malfunctioning random number generators ... network survey of TLS and SSH servers and present evidence that vulnerable keys are surprisingly widespread ... we are able to obtain RSA private keys for 0.50% of TLS hosts and 0.03% of SSH hosts, because their public keys shared nontrivial common factors due to entropy problems, and DSA private keys for 1.03% of SSH hosts, because of insufficient signature randomness ... the vast majority appear to be headless or embedded devices ...


I hope the seed is very well protected.


wdym? If an attacker can access your filesystem, it's already game over


The same could be said for some of the entropy sources that were nerfed or removed over time due to security concerns. That's the main reason why we even are in this situation today.


PRNG needs entropy, but it doesn't consume it. You don't need to feed entropy into it continuously.


> PRNG needs entropy, but it doesn't consume it. You don't need to feed entropy into it continuously.

In the extreme case, this means you can run a PRNG with a fixed seed indefinitely, which is definitely wrong because such a PRNG will necessarily loop.

That might not be feasible to exploit, however.
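The "necessarily loops" point is just the pigeonhole principle applied to the state: a generator with k bits of state must revisit some state within 2^k steps and then repeat. A toy 8-bit LCG (constants chosen per the Hull-Dobell conditions for full period) makes the cycle short enough to observe:

```c
/* 8-bit linear congruential generator: full period of 256 because the
 * increment 11 is odd and 37 - 1 is divisible by 4 (Hull-Dobell). */
static unsigned char lcg8(unsigned char s)
{
    return (unsigned char)(s * 37 + 11);
}

/* Steps the generator until it returns to the seed; returns the cycle
 * length, or -1 if the seed is never revisited within 256 steps. */
int period8(unsigned char seed)
{
    unsigned char s = lcg8(seed);
    int steps = 1;
    while (s != seed && steps <= 256) {
        s = lcg8(s);
        steps++;
    }
    return s == seed ? steps : -1;
}
```

A 256-bit CSPRNG has exactly the same property, just with a period so astronomically long that it never matters in practice, which is the point the parent comment is making.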


> such a PRNG will necessarily loop

Given infinite time, energy, and computing power, yes. Given computers made out of matter and running on energy for use by meat-based intelligences, no.

This is really analogous to saying "technically a 256-bit encryption key is brute-forceable". In fact, this is so close to being the actual underlying situation it's barely even an analogy.


> even if there are suitable sources for entropy in the system

What might those be on, say, a freshly booted RasPi that hasn't even brought up much of userspace besides systemd?


The problematic commit reduced the amount of disk IO at bootup, which starved getrandom() of the necessary entropy. If disk IO is already considered a reliable source of entropy, then there is no reason not to actively exercise it when entropy is needed.


The Raspberry Pi has a HWRNG onboard.


Do you trust it?


You’re forced to trust the maker of your CPU, mainboard, etc. If you can’t trust your hardware, you’re toast. There’s nothing special about RNGs in this regard.


Something else I'd wonder is whether there are other sources of entropy that could be used here other than the disk -- as improving disk IO is what seems to have caused this particular issue in the first place.


Hmm, isn't the solution to do something proactively to increase entropy when getrandom() is waiting for it? (especially for the first bytes)

Like inserting arbitrary reads of any available SSD or hard disk that has already been spun up, or something better if possible.

And in newer userspace, just properly save and restore a seed.


What does "arbitrary" mean if you don't have any randomness? Response times of devices are getting increasingly predictable.


The timer entropy daemon has been around for a while. At minimum, one has access to a hardware random number generator with an output of 256 bits per second by waiting for interrupts from a timer.



