I am seeing the following error when trying to delete a group from a standalone DS. There is no replication configuration on this server, so what is the explanation of this error?
(The memberOf plugin is enabled on this DS.)
ldapdelete -D "cn=directory manager" -W -x 'ou=XXXXXt' 'cn=XXXX'
ldap_delete: Operation not allowed on non-leaf (66)
additional info: Entry has replication conflicts as children
We are running the following 389-DS version:
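Error 66 with "replication conflicts as children" usually means the entry has hidden child entries carrying the `nsds5ReplConflict` attribute (or tombstone children); these can be left over from an earlier replicated setup even when no replication is currently configured. As a sketch, assuming the group DN is built from the placeholders above, the hidden children can be listed with:

```shell
# List replication-conflict child entries under the group (DN is a placeholder)
ldapsearch -D "cn=directory manager" -W -x -b "cn=XXXX,ou=XXXXXt" \
  "(&(objectClass=ldapSubEntry)(nsds5ReplConflict=*))" '*' nsds5ReplConflict

# Tombstone children can also block a delete; list them the same way
ldapsearch -D "cn=directory manager" -W -x -b "cn=XXXX,ou=XXXXXt" \
  "(objectClass=nsTombstone)"
```

These searches require a live server and the real DNs, so they are only an illustration of where to look.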
I need to remove a referral entry from dse.ldif. There are two servers configured in master-to-master replication, with two slaves per master.
I tried removing the referrals on one of the masters, but after an ldap server restart the referral was re-added, and the same happened for the entries in the replication agreement.
Do I need to disable the replication plugin while the DS is online to be able to remove the referrals, and also the old entries from the replication agreement?
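Referrals on a replica are regenerated from the replica and agreement entries under `cn=mapping tree,cn=config` at startup, so editing dse.ldif directly tends to be undone. A sketch of removing them over LDAP instead (the agreement name and suffix below are placeholders, and `nsDS5ReplicaReferral` is the replica attribute that holds explicitly configured referrals):

```shell
# Delete the stale replication agreement so its referral is not regenerated
ldapmodify -D "cn=directory manager" -W -x <<EOF
dn: cn=AGREEMENT_NAME,cn=replica,cn="dc=example,dc=net",cn=mapping tree,cn=config
changetype: delete
EOF

# Remove any explicitly configured referral from the replica entry
ldapmodify -D "cn=directory manager" -W -x <<EOF
dn: cn=replica,cn="dc=example,dc=net",cn=mapping tree,cn=config
changetype: modify
delete: nsDS5ReplicaReferral
EOF
```

This is only an outline under those naming assumptions; the actual DNs have to be taken from your own `cn=config`.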
We are running an old ldap version and having issues running the fixup-memberof.pl script in a master-slave configuration with fractional replication (memberOf excluded); we cannot create the index locally. I tried using different protocols when invoking the script (LDAPS, STARTTLS) but got the same error. Any idea what the cause of this issue could be?
Here are the details:
/usr/lib64/dirsrv/slapd-sldapX/fixup-memberof.pl -D "cn=Directory Manager" -w - -b "dc=XXX,dc=net"
ldap_start_tls: Can't contact LDAP server (-1)
Failed to add task entry "cn=memberOf_fixup_2023_7_20_11_59_27, cn=memberOf task, cn=tasks, cn=config" error :ldap-ds
/usr/lib64/dirsrv/slapd-sldapX/fixup-memberof.pl -D "cn=Directory Manager" -w - -b "dc =XXX,dc=net" -Z host_name -P LDAPS
ldap_start_tls: Can't contact LDAP server (-1)
Failed to add task entry "cn=memberOf_fixup_2023_7_20_11_52_44, cn=memberOf task, cn=tasks, cn=config" error (1)
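Since the Perl wrapper only adds a task entry, one way to bypass its connection handling (which is what seems to be failing here) is to add the fixup task entry directly over a connection that is known to work. A sketch, assuming the suffix placeholder from the commands above; `basedn` is the documented attribute of the memberOf fixup task:

```shell
ldapmodify -D "cn=Directory Manager" -W -x -H ldaps://host_name:636 <<EOF
dn: cn=memberof_fixup_manual,cn=memberOf task,cn=tasks,cn=config
changetype: add
objectClass: top
objectClass: extensibleObject
cn: memberof_fixup_manual
basedn: dc=XXX,dc=net
EOF
```

If this also fails to connect, the problem is the TLS/host configuration rather than the task itself.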
We have a multi-master 389-DS environment with several suppliers and consumers, version 389-ds-base-220.127.116.11-10. Lately we have been having problems with replication to the supplier that has the memberOf plugin enabled (we use it to replicate group membership into the users' membership attribute; in our case, uniqueMember to isMemberof).
The issue is that we sometimes see replication errors on the other suppliers (which have the memberOf plugin disabled) when they start a replication operation to the supplier with the plugin enabled. All of those errors are related to timeouts in the replication operation to that supplier (which acts as a consumer in this case):
Error (-5) Unable to receive the response for a startReplication extended operation to consumer. Will retry later. - LDAP error: Timed out (connection error)
ERR - NSMMReplicationPlugin - repl5_inc_waitfor_async_results - Timed out waiting for responses: 147 180
ERR - NSMMReplicationPlugin - release_replica - agmt="cn=supplier1" (supplier1:636): Attempting to release replica, but unable to receive endReplication extended operation response from the replica. Error -5 (Timed out)
ERR - NSMMReplicationPlugin - perform_operation - agmt="cn=supplier1" (supplier1:636) - Connection is not available (10)
WARN - NSMMReplicationPlugin - send_updates - agmt="cn=supplier1" (supplier1:636): Timed out sending update operation to consumer (uniqueid xxxx, CSN xxx): Timed out.
This can delay the operation for several hours, until the replication session finally ends:
INFO - NSMMReplicationPlugin - bind_and_check_pwp - agmt="cn=germano1" (germano1:636): Replication bind with SIMPLE auth resumed
On the destination node that has the memberOf plugin enabled we can't see any error logged about this problem; maybe we would have to enable debug logging in the errors log.
We have not seen high CPU load, memory, or network problems on either the destination or source replica nodes. But we suspect it is related to the memberOf plugin being busy: we have daily scheduled tasks that run operations on groups with several thousand users, which can affect the other replica nodes that want to replicate changes to the node with the memberOf plugin.
Could you help me with this issue? Is it possible to enable the memberOf plugin on another supplier node? I think it has to be enabled on only one supplier, but I have not found anything about this in the 389 documentation. Any other suggestions would be appreciated.
Thanks in advance & kind regards.
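If the consumer side is genuinely just slow while memberOf updates run, one mitigation worth testing is raising the per-agreement response timeout so the suppliers wait longer instead of aborting and retrying. A sketch only, with placeholder DNs and value; `nsDS5ReplicaTimeout` is the agreement attribute controlling how long the supplier waits for a response:

```shell
ldapmodify -D "cn=Directory Manager" -W -x <<EOF
dn: cn=supplier1,cn=replica,cn="dc=example,dc=net",cn=mapping tree,cn=config
changetype: modify
replace: nsDS5ReplicaTimeout
nsDS5ReplicaTimeout: 600
EOF
```

This does not fix the underlying slowness, but it can distinguish "consumer busy but making progress" from a real hang.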
My organisation is using a replicated 389-dirsrv. Lately, it has been crashing each time after compacting.
It is reproducible on our instances by lowering the compactdb-interval to trigger the compaction:
dsconf -D "cn=Directory Manager" ldap://127.0.0.1 -w 'PASSWORD_HERE' backend config set --compactdb-interval 300
This is the log:
[03/Aug/2022:16:06:38.552781605 +0200] - NOTICE - checkpoint_threadmain - Compacting DB start: userRoot
[03/Aug/2022:16:06:38.752592692 +0200] - NOTICE - bdb_db_compact_one_db - compactdb: compact userRoot - 8 pages freed
[03/Aug/2022:16:06:44.172233009 +0200] - NOTICE - bdb_db_compact_one_db - compactdb: compact userRoot - 888 pages freed
[03/Aug/2022:16:06:44.179315345 +0200] - NOTICE - checkpoint_threadmain - Compacting DB start: changelog
[03/Aug/2022:16:13:18.020881527 +0200] - NOTICE - bdb_db_compact_one_db - compactdb: compact changelog - 458 pages freed
dirsrv@auth-alpha.service: Main process exited, code=killed, status=11/SEGV
dirsrv@auth-alpha.service: Failed with result 'signal'.
dirsrv@auth-alpha.service: Consumed 2d 6h 22min 1.122s CPU time.
The first steps are done very quickly, but the step before the 458 pages of the
retro-changelog are freed takes several minutes. In this time dirsrv writes
more than 10 G and reads more than 7 G (according to iotop).
After this line is printed, dirsrv crashes within seconds.
What I also noticed is that, even though it said it freed a lot of pages, the
retro-changelog does not seem to change in size: the file
`/var/lib/dirsrv/slapd-auth-alpha/db/changelog/id2entry.db` is 7.2 G
before and after the compacting.
389-ds-base/stable,now 18.104.22.168-2 amd64
Does someone have an idea how to debug / fix this?
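A stack trace from the SIGSEGV would narrow this down considerably. On a systemd host with coredump capture enabled, a sketch (assuming debuginfo packages for 389-ds-base and the BDB library are installed so symbols resolve):

```shell
# List recorded crashes for this instance
coredumpctl list dirsrv@auth-alpha.service

# Open the most recent core in gdb, then run: thread apply all bt full
coredumpctl gdb dirsrv@auth-alpha.service
```

The resulting backtrace is usually what upstream asks for first when filing a 389-ds-base crash report.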