This past weekend I finally replaced the last of our DS 1.3 instances (1.3.6.12 to be exact). We did a lot of testing but what happened in production with a full prod workload was quite surprising.
This 1.3 instance has been in operation for over a decade and has never had memory issues (it has 16GB total). When we moved the 2.5 instance into production, memory usage quickly rose past 16GB, causing AWS ECS to kill the task. I tried upping the instance type to r6i.xlarge (32GB), and that quickly ran out of memory too; r6i.2xlarge (64GB) also failed with excessive memory consumption. It wasn't until I switched to r6i.4xlarge (128GB) that the instance finally more or less stabilized, at approximately 60GB of memory use.
Other info:
- This is using bdb, not mdb.
- We tried to keep the cn=config values as close as possible to the original instance.
- Over a year ago, we did the same thing with a different production system (moved it to 2.5 bdb), and it typically consumes about 32GB in spite of being a significantly larger database.
- We used the same Docker image from Docker Hub for both of these systems (2.5.0 B2024.017.0000).
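For reference, these are the cache-related attributes we tried to match between the old and new instances; they are the ones that usually dominate bdb memory use. The values shown below are illustrative only, not our actual settings:

```
# Global bdb database cache (illustrative values only)
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
nsslapd-dbcachesize: 536870912
nsslapd-cache-autosize: 0

# Per-backend entry cache (illustrative values only)
dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config
nsslapd-cachememsize: 2147483648
```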
For the system we moved this past weekend, here are some daily stats:
Date                          SRCH Events  BIND Events  MOD Events  SRCH/BIND
2026-01-01T00:00:00.000-0700        71422        18013        1136       397%
2026-01-02T00:00:00.000-0700        88233        26273        1958       336%
2026-01-03T00:00:00.000-0700        71724        20275        1512       354%
2026-01-04T00:00:00.000-0700        90487        26763        2271       338%
2026-01-05T00:00:00.000-0700       232190        69743        5602       333%
2026-01-06T00:00:00.000-0700       270592        65322        5752       414%
2026-01-07T00:00:00.000-0700       288077        73869        6021       390%
2026-01-08T00:00:00.000-0700       276662        69352        6309       399%
2026-01-09T00:00:00.000-0700       265886        62109        4992       428%
2026-01-10T00:00:00.000-0700       201912        33331        2528       606%
2026-01-11T00:00:00.000-0700       229512        44090        2956       521%
2026-01-12T00:00:00.000-0700       333711        97047        6494       344%
2026-01-13T00:00:00.000-0700       384455       121332        7049       317%
2026-01-14T00:00:00.000-0700       544805       202567       10667       269%
2026-01-15T00:00:00.000-0700       523023       180011       38875       291%
2026-01-16T00:00:00.000-0700       393932       121466       27357       324%
2026-01-17T00:00:00.000-0700       235071        47104       10557       499%
2026-01-18T00:00:00.000-0700       269432        64199        9329       420%
2026-01-19T00:00:00.000-0700       299010        76743        9937       390%
2026-01-20T00:00:00.000-0700       501148       176427       16488       284%
2026-01-21T00:00:00.000-0700       466164       164206       13574       284%
2026-01-22T00:00:00.000-0700       422490       141143        9041       299%
2026-01-23T00:00:00.000-0700       360027       109641        8832       328%
2026-01-24T00:00:00.000-0700       230624        48358        5385       477%
2026-01-25T00:00:00.000-0700       292855        75711        7428       387%
2026-01-26T00:00:00.000-0700       449129       151908       11426       296%
2026-01-27T00:00:00.000-0700       433417       146937        9902       295%
2026-01-28T00:00:00.000-0700       425190       142981        9832       297%
2026-01-29T00:00:00.000-0700       401043       132635        8418       302%
2026-01-30T00:00:00.000-0700       350886       102997        6616       341%
2026-01-31T00:00:00.000-0700       235397        49611        4578       474%
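In case it's not obvious, the SRCH/BIND column is just searches divided by binds, expressed as a percentage. A quick sanity check in plain Python, using the first two rows above:

```python
# Verify the SRCH/BIND percentages for the first two days of stats.
rows = [
    ("2026-01-01", 71422, 18013, 397),
    ("2026-01-02", 88233, 26273, 336),
]
for day, srch, bind, expected in rows:
    ratio = round(srch / bind * 100)  # searches per bind, as a percent
    assert ratio == expected, (day, ratio, expected)
    print(day, f"{ratio}%")
```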
The system we moved earlier has 10 times as much search traffic as this one, but this one gets up to 4 times as many binds.
Any thoughts on what might be going on here?
Tim
Could this big increase in memory usage be due to more aggressive caching? If so, let me know if there's anything I can set to bring that down.
I cannot definitively explain why "testing" versus "prod" gives you different results, assuming the config/database is exactly the same, but we are always fixing memory leaks. What is the rpm version of 389-ds-base? I wonder if it's related to a known normalized DN cache leak, but we need to know the exact version you are on to know whether that's a candidate. You could also disable it and see how the server behaves in the meantime.
Regards,
Mark
On 2/26/26 5:44 PM, tdarby--- via 389-users wrote:
Could this big increase in memory usage be due to more aggressive caching? If so, let me know if there's anything I can set to bring that down.
How do I get the "rpm version" from an image I downloaded from Docker Hub? The image appears to no longer be available there.
Also, how do I turn off caching?
Thanks, Tim
Maybe this is what you want?
# zypper info 389-ds
Loading repository data...
Reading installed packages...

Information for package 389-ds:
-------------------------------
Repository     : @System
Name           : 389-ds
Version        : 2.4.0~git126.5936946-173.2
Arch           : x86_64
Vendor         : obs://build.opensuse.org/network:ldap
Installed Size : 13.5 MiB
Installed      : Yes
Status         : up-to-date
Source package : 389-ds-2.4.0~git126.5936946-173.2.src
Upstream URL   : https://pagure.io/389-ds-base
Summary        : 389 Directory Server
Description    : 389 Directory Server is a full-featured LDAPv3 compliant
                 server. In addition to the standard LDAPv3 operations, it
                 supports multi-master replication, fully online configuration
                 and administration, chaining, virtual attributes, access
                 control directives in the data, Virtual List View, server-side
                 sorting, SASL, TLS/SSL, and many other features. (The server
                 started out as Netscape Directory Server.)
Hi Tim,
On 2/27/26 3:04 PM, tdarby--- via 389-users wrote:
How do I get the "rpm version" from an image I downloaded from the docker hub that appears to be no longer available on the docker hub?
Good point, and I'm not sure in this case.
Also, how do I turn off caching?
$ sudo dsconf slapd-localhost config replace nsslapd-ndn-cache-enabled=off
$ sudo dsctl slapd-localhost restart
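If you want to confirm the change took effect, and to see what the NDN cache was doing before and after, something like the following should work. The cache statistics live in the ldbm database monitor entry; the exact attribute names can vary a bit by version, so treat this as a sketch:

```
# Confirm the setting
$ sudo dsconf slapd-localhost config get nsslapd-ndn-cache-enabled

# NDN cache statistics (hit ratio, current size, etc.) are reported
# under the ldbm database monitor entry
$ ldapsearch -D "cn=Directory Manager" -W \
    -b "cn=monitor,cn=ldbm database,cn=plugins,cn=config" -s base
```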
Mark
Thanks, Tim