On 11/03/2017 12:28 PM, Sergei Gerasenko wrote:
> To look at the replication changelog you need to use the cli tool
> "cl-dump.pl"
>
> https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=we...
Ok, thank you
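For readers following along, a typical invocation looks roughly like the sketch below. Every value (host, port, bind DN, password, output path) is a placeholder, and the exact flags can vary by version, so run the script without arguments first to see its usage output:

```shell
# Sketch only: dump the replication changelog to a file for offline
# inspection. All values are placeholders; check cl-dump.pl's usage
# output for the options in your installed version.
cl-dump.pl -h localhost -p 389 -D "cn=Directory Manager" -w secret \
  -o /tmp/changelog-dump.ldif
```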
>
>> 2. How do I see the setting for the max life of a CSN?
> There is no "max life" of a csn.
Ok, what brought this up is that about every week, one of the machines
in our environment breaks the replication with messages like this:

[01/Nov/2017:17:12:52.815891904 +0000] agmt="cn=meToXXXX" - Can't
locate CSN 59f9d98a000000760000 in the changelog (DB rc=-30988). If
replication stops, the consumer may need to be reinitialized.
[01/Nov/2017:17:12:52.820619690 +0000] NSMMReplicationPlugin -
changelog program - agmt="cn=meXXXX": CSN 59f9d98a000000760000 not
found, we aren't as up to date, or we purged
[01/Nov/2017:17:12:52.828626595 +0000] NSMMReplicationPlugin -
agmt="cn=meToXXXX": Data required to update replica has been purged
from the changelog. The replica must be reinitialized.

So it made me think that perhaps the CSN record is removed too early?
The '76' in the CSN is the machine having the problem. What do you
think could cause problems of this kind?

Ahh yes, this is the default replication purge interval (7 days):

https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/8....

Look for nsDS5ReplicaPurgeDelay. It could also be changelog trimming:

http://www.port389.org/docs/389ds/FAQ/changelog-trimming.html

So what this is telling me is that one of your replication agreements
was over a week behind the other replicas (not good). Was that
agreement disabled for a while, and then re-enabled, for some reason?
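For reference, both retention knobs mentioned in this thread are ordinary config attributes. A rough LDIF-style sketch follows; the suffix is an example, the RDN quoting convention varies between versions, and the values shown (604800 seconds, i.e. the 7-day default purge delay, and a 7-day trim age) are illustrative:

```ldif
# Replica purge delay in seconds; 604800 = 7 days (the default).
dn: cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
nsDS5ReplicaPurgeDelay: 604800

# Changelog trimming by record age; "7d" = seven days.
dn: cn=changelog5,cn=config
nsslapd-changelogmaxage: 7d
```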
> There is replication purging and changelog trimming that uses csns in
> RUV's to determine what can be removed. The admin guide talks about
> these in more detail.
>> 3. How do I view a particular CSN (i.e. its contents)?
> csn:
>
> 59f9e547000200010000
>
> Breaks down like this:
>
> 59f9e547 0002 0001 0000
Yep, found that info previously, but thank you still!
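To spell that breakdown out: the four fixed-width fields are, in order, a hex timestamp (seconds since the epoch), a sequence number, the replica ID, and a subsequence number. A minimal Python sketch that splits a CSN this way (the function name is mine):

```python
from datetime import datetime, timezone

def parse_csn(csn: str) -> dict:
    """Split a 20-hex-digit CSN into its four fixed-width fields:
    timestamp (8 digits), sequence number (4), replica ID (4),
    subsequence number (4)."""
    if len(csn) != 20:
        raise ValueError("expected a 20-hex-digit CSN")
    return {
        "time": datetime.fromtimestamp(int(csn[0:8], 16), tz=timezone.utc),
        "seqnum": int(csn[8:12], 16),
        "replica_id": int(csn[12:16], 16),
        "subseqnum": int(csn[16:20], 16),
    }

# The CSN from the error messages above: replica ID field is 0x0076,
# i.e. 118 decimal -- the machine Sergei identified as '76'.
print(parse_csn("59f9d98a000000760000"))
```

For the example CSN 59f9e547000200010000 quoted above, this yields sequence number 2 and replica ID 1, matching the 59f9e547 / 0002 / 0001 / 0000 split.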