Title: #5542: nss client: make innetgr() thread safe
I was thinking of making the fd
`thread_local` and getting rid of the locks completely quite some time ago.
> My main concern so far was that since we do not close the sockets, an application
> with many threads might end up with many open sockets.

That is a fair point.
Still, I think if we go down that route it should be "all or nothing", i.e.
an individual connection for every thread.
IMO, having this for netgr requests as an exception unjustifiably complicates the code. In
this sense I prefer #5541 (as it makes the code simpler).
And by the way, https://man7.org/linux/man-pages/man3/setnetgrent.3.html
says "innetgr() ... is thread-safe", but also:
> if any of the functions setnetgrent(), getnetgrent_r(), **innetgr()**,
> getnetgrent(), or endnetgrent() are used in parallel in different
> threads of a program, then data races could occur.
> Do those calls use the "global" `__netgrent` context (with the exception of `innetgr()`)?
Yes, only when called from `innetgr()` do the `*netgrent*()` calls receive a request-specific
`__netgrent`; in all other cases it is the global one. So the calls are already not
thread-safe at the glibc level.
It seems your "fd per state" solution is more robust. IIUC, it handles this case:
your solution will create its own fd, and thus its own sssd_nss context, for `innetgr()`. A
"single `thread_local` fd" solution would fail here.
But again, I don't have a clear understanding of whether support for this case is expected.
At least with libnss_files this would work, since the first `setnetgrent()` loads the
file content into the global `__netgrent`, and `innetgr()` does not overwrite it, so it is
still available when `getnetgrent_r()` is called. However, I would read the sentence
> In the above table, netgrent in race:netgrent signifies that if any of the functions
> setnetgrent(), getnetgrent_r(), innetgr(), getnetgrent(), or endnetgrent() are used in
> parallel in different threads of a program, then data races could occur.
from the man page in the sense that data races can occur in a single thread as well, if
some of those functions are intermixed in an unexpected way. But there might be different
expectations or interpretations.
See the full comment at https://github.com/SSSD/sssd/pull/5542#issuecomment-818632629