C10K is a scalability problem that a server can face when dealing with events from thousands of connections (i.e. clients) at the same time. Events can be new connections, new operations on established connections, or the closure of a connection (from the client or the server side).
For 389-ds, the C10K problem was addressed with a new framework, Nunc-Stans. Nunc-Stans was first enabled in RHDS 7.4 and improved/fixed in 7.5. Two robustness issues, among them connection leaks (see the reference below), were reported in 7.5, and it was decided to disable Nunc-Stans. It is not known whether those issues also exist in 7.4.
William posted a PR to fix those two issues. Nunc-Stans is a complex framework with its own dynamics. Reviewing this PR is not easy, and even a careful review cannot guarantee that it fixes the two issues without introducing other unexpected side effects.
From there we discussed two options (but there may be others):
As the PR is not intended as a performance improvement, the outcome of step 2.1 will drive its priority according to the measured performance benefits.
Comments are welcome.
Regarding the 2.1 plan, we made the following notes for the test plan:
The benefit of Nunc-Stans can only be measured with a large number of connections (i.e. clients), above 1000. That means a set of clients (sometimes all of them) should keep their connections open. Clients should run on several hosts so that the clients themselves are not the bottleneck.
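Note that holding thousands of open connections from a single client host will hit the default per-process file-descriptor limit on most Linux systems. A minimal sketch of raising it from a Python test client (the value 20000 is an arbitrary assumption, and the hard limit itself may need administrator configuration):

    import resource

    # Raise the soft open-file limit so one client process can hold
    # thousands of LDAP connections; 20000 is an example value and must
    # not exceed the hard limit configured on the host.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    target = 20000 if hard == resource.RLIM_INFINITY else min(20000, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    print(f"open-file limit: {soft} -> {target}")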
For the two types of events (new connections and new operations), the measurements could be:
- Event: New connections
  - Start all clients in parallel to establish connections (keeping them open); measure the time to reach 1000, 2000, ..., 10000 connections and check whether any connections are dropped (see the first sketch after this list).
  - Establish 1000 connections, then measure the duration to open 100 more; do the same starting with 2000, ..., 10000.
  - Clients should not run any operations during the monitoring.
- Event: New operations
  - Start all clients and, once 1000 connections are established, launch simple operations (e.g. a base search: -s base -b "" objectclass) and monitor how many of them can be handled; do the same with 2000, ..., 10000 (see the second sketch after this list).
  - Response time and work-queue length could be monitored to make sure the workers are not the bottleneck.
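A minimal single-host sketch of the first connection measurement, assuming the ldap3 Python library and a placeholder server name (neither is mandated by the plan above); the real test would shard this loop across several client hosts:

    import time
    from ldap3 import Server, Connection

    server = Server("ldap.example.com", port=389)  # placeholder host

    conns = []
    dropped = 0
    start = time.monotonic()
    for milestone in range(1000, 10001, 1000):
        while len(conns) < milestone:
            try:
                # Anonymous bind; the connection object is kept (open) in conns.
                conns.append(Connection(server, auto_bind=True))
            except Exception:
                dropped += 1  # refused/dropped; a real test would bound retries
        print(f"{milestone} connections after {time.monotonic() - start:.1f}s, "
              f"{dropped} drops so far")
    # conns stays alive, idle, for the "open 100 more" monitoring phase.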
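A corresponding sketch for the operation measurement, under the same assumptions (ldap3, placeholder host). It issues one base search per established connection, serially for simplicity; the real test would fire operations concurrently from many client hosts:

    import time
    from ldap3 import Server, Connection, BASE

    server = Server("ldap.example.com", port=389)  # placeholder host
    # 1000 pre-established, kept-open connections (2000, ..., 10000 likewise).
    conns = [Connection(server, auto_bind=True) for _ in range(1000)]

    latencies = []
    start = time.monotonic()
    for conn in conns:
        t0 = time.monotonic()
        conn.search("", "(objectclass=*)", search_scope=BASE)  # root DSE search
        latencies.append(time.monotonic() - t0)
    elapsed = time.monotonic() - start

    print(f"{len(conns)} searches in {elapsed:.2f}s "
          f"({len(conns) / elapsed:.0f} op/s), "
          f"max response time {max(latencies) * 1000:.1f} ms")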
Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1605554 (connection leaks)