On Tue, Feb 25, 2020 at 11:58:56AM -0700, stan wrote:
On Tue, 25 Feb 2020 13:13:16 -0500
Neil Horman <nhorman(a)redhat.com> wrote:
> That's not my understanding. As I understand the changes, /dev/random
> has been converted so that it no longer blocks (which is why they
> removed read_wakeup_threshold, since there's never a case where
> /dev/random will block anymore). That doesn't prevent rngd from
> feeding new entropy into the kernel, though, via /dev/random's
> RNDADDTOENTCNT and RNDADDENTROPY ioctls (which is how we feed in more
> entropy).
If you are right, that is excellent. I've been hesitating to create
the patch because it is becoming a recreation of more and more of
random.c. I have to be really careful because the kernel expects the
new interface, so I have to leave it in place, while still adding the
obsolete interface back for my own use.
Doesn't the elimination of the shadow pool, and the removal of
push_to_pool, end the ability to push entropy? I'm going to have to
bite the bullet and take the code apart until I can understand the new
system.
It shouldn't; the ioctl writes directly to the input_pool.
> I'm fine with gplv3, IIRC rngd was initially licensed as gplv2 or
> later, so it should be good.
Great!
> Yeah, I just ordered an RTL2832U from amazon for a few bucks, seems
> like a good cheap entropy source to make available. I'll try look
> into bit-babbler as well, but at $100, that might not be as
> worthwhile.
Yeah, I was thinking of it for server farm and cloud provider usage.
> Usually network entropy is avoided because it's subject to
> manipulation from off system. You can hammer a target card enough
> that you can do enough prediction of interrupt timing to predict what
> the outcome will be.
I'm thinking of a dedicated server that does nothing but provide
entropy. A couple of different sources of entropy, and a private
network address. The servers sign up as clients, and the entropy server
sends them entropy updates on a periodic basis, which wakes the client
and reseeds the crng.
That works pretty well in a secure environment (in fact, once I get it fixed,
you can use the nist beacon server code and the nist beacon source to transport
that entropy), but I don't think cloud providers like the idea of shipping
entropy from a central source to other nodes where the possibility exists for
snooping. They all rely on localized entropy. Shared entropy pools accross
systems are better used for things like distributed testing, where
multiple systems might need to use the same random bits.
> As for radio sources, I'm not sure. $10 is actually a huge cost on a
> BOM when you're building 1000's of systems, and crngs are cheaper,
> especially when an OS adds them anyway to handle the 'no hardware
> source' use case.
Sure, an individual entropy source for each server is overkill and too
expensive. But if the cost is spread across thousands of servers via a
dedicated entropy server, it's just a few cents a server, or less.