[389-users] bypassing limits for persistent search and specific user

Petr Spacek pspacek at redhat.com
Tue Mar 13 23:09:38 UTC 2012


Hello list,

I'm looking for a way to bypass nsslapd-sizelimit and 
nsslapd-timelimit for a persistent search made by a specific user (or 
for anything made by that user).

Please, can you point me to the right place in the documentation about 
persistent search / user-specific settings in 389? I googled for a 
while, but I can't find an exact way to accomplish this.

I found the attributes nsSizeLimit and nsTimeLimit in 
http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/9.0/html-single/Schema_Reference/index.html#nsPagedSizeLimit 
, but I'm not sure how to deploy them.
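(If they behave the way per-user resource limits usually do in Directory Server, they are operational attributes set directly on the entry the client binds as, with -1 meaning unlimited. A minimal sketch, assuming a hypothetical bind DN -- the DN and file name here are placeholders, not from any real deployment:

```
# bypass-limits.ldif -- per-user resource limits on the bind entry
dn: uid=bind-dyndb,cn=users,cn=accounts,dc=example,dc=com
changetype: modify
replace: nsSizeLimit
nsSizeLimit: -1
-
replace: nsTimeLimit
nsTimeLimit: -1
```

applied with something like `ldapmodify -x -D "cn=Directory Manager" -W -f bypass-limits.ldif`. I'd appreciate confirmation that this is the intended mechanism.)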


If bypassing is not possible in 389:
Is there any way to enumerate all records from a given subtree 
part by part? (My guess: VLV or something similar.)
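(As I understand it, part-by-part retrieval is more commonly done with the simple paged results control, RFC 2696, than with VLV: the server returns a cookie with each page and the client repeats the search with that cookie until it comes back empty. A minimal sketch of that client loop, with a plain Python function standing in for the LDAP server -- the names here are illustrative, not a real LDAP API:

```python
# Sketch of an RFC 2696-style paging loop: repeat the search, passing
# back the cookie from the previous response, until the server returns
# an empty cookie.
def paged_search(search_page, page_size=500):
    """search_page(cookie, page_size) -> (entries, next_cookie)."""
    cookie = b""              # empty cookie starts a new paged search
    while True:
        entries, cookie = search_page(cookie, page_size)
        for entry in entries:
            yield entry
        if not cookie:        # empty cookie: no more pages
            break

# Stand-in for the server side: 12 fake DNS records, paged 5 at a time.
RECORDS = ["idnsName=rec%d" % i for i in range(12)]

def fake_server(cookie, page_size):
    start = int(cookie or b"0")
    page = RECORDS[start:start + page_size]
    nxt = start + page_size
    next_cookie = str(nxt).encode() if nxt < len(RECORDS) else b""
    return page, next_cookie

print(len(list(paged_search(fake_server, page_size=5))))  # -> 12
```

With a real server the `search_page` callback would issue the search with the paged-results control attached and extract the cookie from the response control.)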

I know only the basics of persistent search and next to nothing about 
VLV, so sorry if I'm completely wrong.


--- Background / why I needed this / long story ---
The FreeIPA project has an LDAP plugin for BIND. The plugin pulls DNS 
records from an LDAP database and populates BIND's internal memory with 
them. 
(Homepage: https://fedorahosted.org/bind-dyndb-ldap/)

The plugin can use persistent search, which makes changes in LDAP 
visible in BIND immediately.

At the moment, the plugin starts by doing a persistent search for all 
DNS records. This single query can return tens of thousands of records 
- and of course it fails, because nsslapd-sizelimit stops it.

Another problem arises even with databases smaller than the size limit: 
the query is terminated after nsslapd-timelimit and has to be 
re-established, which leads to periodically re-downloading the whole 
DNS DB.

The question is:
  is it possible to bypass the limits for this connection/user,
OR
  is the plugin completely broken by design?


Thanks for your time.

Petr^2 Spacek  @  Red Hat  @  Brno office
