Problem browsing LDAP with Outlook
by Chris Bryant
When configuring Microsoft Outlook (not Outlook Express) to access an LDAP directory, there is an option to 'Enable Browsing (requires server support)'. If this option is chosen and the directory server supports it, then you should be able to open the LDAP address book and page up and down through the results. I have been unable to get this working properly with 389 DS.
When I try to browse from Outlook against the 389 DS directory, I can see the first page of results perfectly. However, if I move to the next page, only the first object returned has any attributes; the rest of the objects in the page have none. I have a test Perl script that reproduces this behavior as well.
I can get this working properly with an older version of Netscape Directory Server, and with OpenDS. Since 389 DS advertises support for the controls required for this to work, just like the other two servers, I would expect it to work there as well.
Has anyone out there gotten this to work with 389 DS? If so, can you share if there was anything special that you needed to do to get this to work? I'm trying to determine if this is a bug in the server, or if I'm just missing something in the configuration.
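For anyone hitting the same wall: Outlook's "Enable Browsing" option relies on the server advertising the Virtual List View and Server-Side Sort controls in its rootDSE. A minimal sketch of that check follows; the sample rootDSE values are hard-coded illustrations, not output from a live 389 DS.

```python
# Check whether an LDAP server advertises the controls that Outlook's
# "Enable Browsing (requires server support)" option relies on.
# Against a real server you would read the supportedControl attribute
# from the rootDSE; here it is a hard-coded illustration.

VLV_OID = "2.16.840.1.113730.3.4.9"   # Virtual List View request control
SSS_OID = "1.2.840.113556.1.4.473"    # Server-Side Sorting request control

def browsing_supported(supported_controls):
    """True if both controls needed for address-book browsing are advertised."""
    controls = set(supported_controls)
    return VLV_OID in controls and SSS_OID in controls

# Illustrative rootDSE excerpt:
sample_root_dse = [
    "2.16.840.1.113730.3.4.9",    # VLV
    "1.2.840.113556.1.4.473",     # server-side sort
    "1.2.840.113556.1.4.319",     # simple paged results
]

print(browsing_supported(sample_root_dse))  # True
```

Advertising the controls is not the whole story, though: for VLV searches 389 DS may also need a VLV index matching the search base and sort order the client uses, which could be worth checking when later pages come back degraded.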
Thanks,
Chris
USA.NET
You Run Your Business. We'll Run Your Email.
2 years, 9 months
changelog
by Denise Cosso
Hi,
How do I modify the attribute nsslapd-encryptionalgorithm on CentOS?
Thanks,
Denise
Stop the master servers and set nsslapd-encryptionalgorithm. The allowed values are AES and 3DES.
dn: cn=changelog5,cn=config
[...]
nsslapd-encryptionalgorithm: AES
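For completeness, the whole changelog entry in dse.ldif looks roughly like this while the servers are stopped. Only the dn and nsslapd-encryptionalgorithm come from the reply above; the objectClass lines and the changelog directory path are assumptions for illustration.

```ldif
dn: cn=changelog5,cn=config
objectClass: top
objectClass: extensibleObject
cn: changelog5
nsslapd-changelogdir: /var/lib/dirsrv/slapd-EXAMPLE/changelogdb
nsslapd-encryptionalgorithm: AES
```

Edit the entry while the instance is down, then restart the directory server.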
--- On Tue, 6/4/13, Rich Megginson <rmeggins(a)redhat.com> wrote:
From: Rich Megginson <rmeggins(a)redhat.com>
Subject: Re: [389-users] changelog
To: "Denise Cosso" <guanaes51(a)yahoo.com.br>
Date: Tuesday, June 4, 2013, 16:34
On 06/04/2013 01:26 PM, Denise Cosso wrote:
Hi, Rich
CentOS release 6.3 (Final)
389-ds-base-libs-1.2.10.2-20.el6_3.x86_64
389-ds-1.2.2-1.el6.noarch
389-dsgw-1.1.10-1.el6.x86_64
389-ds-console-1.2.6-1.el6.noarch
389-ds-console-doc-1.2.6-1.el6.noarch
389-ds-base-1.2.10.2-20.el6_3.x86_64
As far as replication goes - you will need to use a security layer
(SSL, TLS, or GSSAPI) to protect the clear text password on the wire
As far as encrypting it in the changelog - not sure
Denise
--- On Tue, 6/4/13, Rich Megginson <rmeggins(a)redhat.com> wrote:
From: Rich Megginson <rmeggins(a)redhat.com>
Subject: Re: [389-users] changelog
To: "General discussion list for the 389 Directory server project." <389-users(a)lists.fedoraproject.org>
Cc: "Denise Cosso" <guanaes51(a)yahoo.com.br>
Date: Tuesday, June 4, 2013, 16:11
On 06/04/2013 12:39 PM, Denise Cosso wrote:
Hi,
Description of problem:
When a userPassword is changed in a server with changelog, the hashed password
is logged and also a cleartext pseudo-attribute version. It looks like this:
change::
replace: userPassword
userPassword: {SHA256}vqtiN2LHdrEUOJUKu+IBVqAVFsAlvFw+11kD/Q==
-
replace: unhashed#user#password
unhashed#user#password: secret12
This unhashed version is used in winsync where the cleartext version of the
password must be written to the AD.
Now if the DS is involved in replication with another DS, the change will be
replayed exactly as it is logged to the other DS replicas, including the
cleartext pseudo-attribute password.
What platform? What version of 389-ds-base are you
using?
thanks,
Denise
--
389 users mailing list
389-users(a)lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/389-users
8 years, 1 month
389 Master - Master Replication
by Santos Ramirez
Good Morning,
We have a master - master replication agreement. When we initialize the replication it works perfectly: we can see changes to a test user propagate back and forth between the two servers. However, at some point replication stops and we cannot get it started again. The only way to restart it is to recreate the replication agreement, and then it fails again. Can anyone please point us in a direction? I am relatively new to 389, so any help would be greatly appreciated.
Santos U. Ramirez
Linux Systems Administrator
National DCP, LLC
150 Depot Street
Bellingham, Ma. 02019
Phone: 508-422-3089
Fax: 508-422-3866
Santos.Ramirez(a)natdcp.com<mailto:Santos.Ramirez@natdcp.com>
8 years, 8 months
389 directory server crash
by Mitja Mihelič
Hi!
We are having problems with some of our 389-DS instances. They crash after
receiving an update from the provider.
The crash happened twice after about a week of running without problems.
The crashes happened on two consumer servers but not at the same time.
The servers are running CentOS 6.x with the following 389 DS packages
installed:
389-ds-console-doc-1.2.6-1.el6.noarch
389-console-1.1.7-1.el6.noarch
389-adminutil-1.1.15-1.el6.x86_64
389-dsgw-1.1.10-1.el6.x86_64
389-ds-base-debuginfo-1.2.11.15-14.el6_4.x86_64
389-admin-1.1.29-1.el6.x86_64
389-ds-console-1.2.6-1.el6.noarch
389-admin-console-doc-1.1.8-1.el6.noarch
389-ds-1.2.2-1.el6.noarch
389-ds-base-1.2.11.15-14.el6_4.x86_64
389-ds-base-libs-1.2.11.15-14.el6_4.x86_64
389-admin-console-1.1.8-1.el6.noarch
We are in the process of replacing the CentOS 5.x based consumer+provider
setup with a CentOS 6.x based one. For the time being, the CentOS 6
machines are acting as consumers for the old server. They run for a
while and then the replicated instances crash, though not at the same time.
One of the servers did not want to start after the crash, so I ran
db2index on its database. It has been running for four days and still
has not finished. All I get from db2index now is output like this:
[09/Jul/2013:13:29:11 +0200] - reindex db: Processed 65095 entries (pass
1104) -- average rate 53686277.5/sec, recent rate 0.0/sec, hit ratio 0%
The other instance did start up, but replication no longer worked. I
disabled replication to this host and set it up again. I chose
"Initialize consumer now" and the consumer crashed every time. I have
enabled full error logging and could find nothing.
I have read a few threads (not all, I admit) on this list and
http://directory.fedoraproject.org/wiki/FAQ#Debugging_Crashes and tried
to troubleshoot.
The crash produced the attached core dump, and I could use your help
understanding it, as well as any help with the crash itself. If more
info is needed I will gladly provide it.
Regards, Mitja
9 years, 6 months
Consumer Initialization Failure
by Wick, Samson
Running 389-ds version 1.2.2-1 (according to the rpm)
In attempting to stand up a new consumer in our environment, the process of allowing the supplier to initialize the consumer directly would corrupt the consumer irrevocably.
I have ruled out firewalls, SSL issues etc.
When attempting to initialize via an ldif, I get errors on three user accounts more or less identical to this:
WARNING: skipping entry "uid=<etc.....>" ending line 296901 of file "<path to my ldif file>"
REASON: entry too large (15503712 bytes) for the buffer size (8388608 bytes)
When I examine the ldif file that the supplier created, the three user objects it's complaining about each have roughly 100,000 values like this:
userPassword;vucsn-520b35cb000000010000;deleted: {SSHA256}5WJ9hosO3JO9VLa32nqxmGjn3XoShD1c1g+abekZDCFTX1MM187Bjg==
Each line has a different hash, but most of the other user objects have only a couple of these lines.
Clearly 100k+ retained password changes is a little excessive, and it's something I'll need to look into. But in the meantime, can anyone help me figure out what caused all of these to remain in the directory, and what I can do to clean them up?
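A quick way to find which entries carry the bulk of these retained values before attempting a cleanup. This is a pure-stdlib sketch: the DNs and hashes in the sample are made up, and real LDIF line folding (continuation lines starting with a space) is ignored for brevity.

```python
# Count retained "userPassword;...;deleted" values per entry in an LDIF
# export, to spot the entries that blow past the import buffer size.

from collections import Counter

def deleted_password_counts(ldif_lines):
    """Map entry DN -> number of deleted userPassword values kept for it."""
    counts = Counter()
    current_dn = None
    for line in ldif_lines:
        if line.startswith("dn: "):
            current_dn = line[4:].strip()
        elif line.startswith("userPassword;") and ";deleted:" in line:
            counts[current_dn] += 1
    return counts

# Made-up sample mirroring the attribute shape quoted above:
sample = [
    "dn: uid=alice,ou=People,dc=example,dc=com",
    "userPassword;vucsn-520b35cb000000010000;deleted: {SSHA256}aaaa==",
    "userPassword;vucsn-520b35cc000000010000;deleted: {SSHA256}bbbb==",
    "",
    "dn: uid=bob,ou=People,dc=example,dc=com",
    "userPassword: {SSHA256}cccc==",
]

print(deleted_password_counts(sample))
# Counter({'uid=alice,ou=People,dc=example,dc=com': 2})
```

The script only identifies the offenders; how to purge the values themselves is a separate question, since they are replication state rather than ordinary attribute values.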
Thanks,
Samson
9 years, 9 months
Membership of Roles
by Andy Spooner
Hello
I am testing integration of 389-ds with a blogging system. I plan to use
roles instead of groups to automatically give users rights to services on the
blog system. However, I am having problems with the system identifying
members of roles. I need help defining the correct search parameters to
identify which roles a uid or cn is a member of.
From within the blog system I'm using LDAPGroupFilter
(objectclass=ldapSubEntry) to list the roles. The roles list correctly as
groups within the blog system.
From within 389, the members of roles are configured as filtered roles, and I can
see the configured members using the Directory Server GUI.
The blog system is not identifying members of roles when it searches
against 389. Note that users can log into the blog system using the accounts
created on 389. I don't think I am applying the correct search criteria to
identify group membership, and I need advice on constructing the correct
search criteria for membership of roles/groups.
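One likely cause: 389 DS roles are computed, so role entries (ldapSubEntry) carry no member attribute, which is why a (member=...) filter against a role returns nothing (the nentries=0 at op=6 in the log below). Membership is exposed through the nsRole virtual attribute on the user entries instead. A sketch of the two searches that should work; all DNs are illustrative.

```python
# Two searches that identify role membership in 389 DS. Role entries have
# no "member" attribute (roles are computed), so membership is read via
# the nsRole virtual attribute on user entries. DNs are illustrative.

def roles_of_user(user_dn):
    """Search parameters that list the roles a given user belongs to."""
    return {
        "base": user_dn,
        "scope": "base",
        "filter": "(objectClass=*)",
        "attrs": ["nsRole"],   # computed attribute; must be requested by name
    }

def members_of_role(base, role_dn):
    """Search parameters that list the users belonging to a given role."""
    return {
        "base": base,
        "scope": "sub",
        "filter": "(nsRole=%s)" % role_dn,
        "attrs": ["uid", "cn"],
    }

params = members_of_role("ou=Customers,dc=example,dc=com",
                         "cn=commenter,ou=Customers,dc=example,dc=com")
print(params["filter"])
# (nsRole=cn=commenter,ou=Customers,dc=example,dc=com)
```

If the blog system can only consume member-style group filters, the roles would have to be mirrored into static groups; pointing its membership filter at nsRole as above is the smaller change.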
Sample from the access log:
[31/Aug/2013:11:09:39 +0100] conn=265 op=0 BIND dn="cn=Directory Manager"
method=128 version=3
[31/Aug/2013:11:09:39 +0100] conn=265 op=0 RESULT err=0 tag=97 nentries=0
etime=0 dn="cn=directory manager"
[31/Aug/2013:11:09:39 +0100] conn=265 op=1 SRCH base="dc=xxxx,dc=com"
scope=2 filter="(&(mail=testuser16(a)xxxx.com)(objectClass=*))"
attrs="distinguishedName"
[31/Aug/2013:11:09:39 +0100] conn=265 op=1 RESULT err=0 tag=101 nentries=1
etime=0
[31/Aug/2013:11:09:39 +0100] conn=265 op=2 BIND
dn="uid=1000016,ou=Customers,dc=xxxx,dc=com" method=128 version=3
[31/Aug/2013:11:09:39 +0100] conn=265 op=2 RESULT err=0 tag=97 nentries=0
etime=0 dn="uid=1000016,ou=customers,dc=xxxx,dc=com"
[31/Aug/2013:11:09:39 +0100] conn=265 op=3 BIND dn="cn=Directory Manager"
method=128 version=3
[31/Aug/2013:11:09:39 +0100] conn=265 op=3 RESULT err=0 tag=97 nentries=0
etime=0 dn="cn=directory manager"
[31/Aug/2013:11:09:39 +0100] conn=265 op=4 SRCH base="dc=xxxx,dc=com"
scope=2 filter="(&(mail=testuser16(a)xxxx.com)(objectClass=*))" attrs="uid
mail cn mail distinguishedName"
[31/Aug/2013:11:09:39 +0100] conn=265 op=4 RESULT err=0 tag=101 nentries=1
etime=0
[31/Aug/2013:11:09:39 +0100] conn=265 op=5 SRCH base="dc=xxxx,dc=com"
scope=2 filter="(|(uid=1000016))" attrs="nsRole"
[31/Aug/2013:11:09:39 +0100] conn=265 op=5 RESULT err=0 tag=101 nentries=1
etime=0
[31/Aug/2013:11:09:39 +0100] conn=265 op=6 SRCH
base="ou=customers,dc=xxxx,dc=com" scope=2
filter="(&(|(member=cn=xxxxrolecommentertest,ou=customers,dc=xxxx,dc=com))(o
bjectClass=ldapSubEntry))" attrs="cn cn member nsUniqueId"
[31/Aug/2013:11:09:39 +0100] conn=265 op=6 RESULT err=0 tag=101 nentries=0
etime=0
[31/Aug/2013:11:09:39 +0100] conn=265 op=7 UNBIND
[31/Aug/2013:11:09:39 +0100] conn=265 op=7 fd=68 closed - U1
9 years, 9 months
Re: [389-users] FW: fresh replica reports "reloading ruv failed " just after successfull initialization
by Carsten Grzemba
On 29.08.13, Rich Megginson <rmeggins(a)redhat.com> wrote:
>
>
>
>
>
> On 08/28/2013 08:45 AM, Jovan.VUKOTIC(a)sungard.com wrote:
>
>
> > Hi Rich,
> >
> >
> >
> > It has been a while since we discussed the bug that turned out to be SPARC specific.
> >
> > Meanwhile, I got access to the OpenCSW build environment, so I can build the source and test the fix you mentioned: the fix on atomic operations (please see the email thread below for the bug details).
> >
> >
> >
> > For your reference, we use 389 DS, version 1.2.11.15 on Solaris SPARC. The bug cannot be reproduced on Solaris x86 nor on Red Hat Linux x86.
> >
> > Furthermore, the only way 389 DS 1.2.11.15 on Solaris SPARC works fine in a multi-master replication topology is when all the other servers are on Solaris x86 platforms and the SPARC one is used to initialize all the others.
> >
> >
> >
> > Can you, please, direct me as to where to apply the fix in the source code?
> >
> >
>
> If it is sparc related, it is probably in slapi_counter.c and/or slapi_counter_sunos_sparcv9.S
>
>
>
>
As I already mentioned, the old Netscape/iPlanet assembler code is still used on Solaris SPARC, whereas Solaris x86 uses the simple, architecture-independent mutex-based locking.
Perhaps it would help to use the generic code for SPARC as well.
>
>
>
>
>
>
> >
> >
> >
> >
> >
> > Thank you,
> >
> > Jovan
> >
> >
> >
> > Jovan Vukotić • Senior Software Engineer • Ambit Treasury Management • SunGard • Banking • Bulevar Milutina Milankovića 136b, Belgrade, Serbia • tel: +381.11.6555-66-1 • jovan.vukotic(a)sungard.com
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > From: Rich Megginson [mailto:rmeggins@redhat.com]
> > Sent: Friday, June 28, 2013 4:17 PM
> > To: Vukotic, Jovan
> > Cc: 389-users(a)lists.fedoraproject.org; Mehta, Cyrus
> > Subject: Re: [389-users] FW: fresh replica reports "reloading ruv failed " just after successfull initialization
> >
> >
> >
> >
> >
> > On 06/28/2013 03:30 AM, Jovan.VUKOTIC(a)sungard.com <Jovan.VUKOTIC(a)sungard.com> wrote:
> >
> >
> > >
> > > Rich,
> > >
> > >
> > >
> > > No, I do not build the code myself.
> > >
> >
> >
> > ok - looks like CSW packages.
> >
> > I'm not sure if things are going to work correctly until we get the atomic op bug fixed. Unfortunately we don't have the means to build and test on Sparc. Is there someone who can help us build and test some fixes?
> >
> >
> >
> >
> >
> >
> > At the moment, with the error log level set to 40960 (32768+8192), I get a few more error messages, but they are not indicative of anything to me:
> >
> >
> >
> > [28/Jun/2013:05:06:03 -0400] - cache_add_tentative concurrency detected
> >
> > [28/Jun/2013:05:06:09 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com); LDAP error - 68
> >
> > [28/Jun/2013:05:06:39 -0400] - cache_add_tentative concurrency detected
> >
> > [28/Jun/2013:05:06:39 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com); LDAP error - 68
> >
> > [28/Jun/2013:05:07:00 -0400] - Changelog purge skipped anchor csn 51c5ec28000000020000
> >
> > [28/Jun/2013:05:07:09 -0400] - cache_add_tentative concurrency detected
> >
> > [28/Jun/2013:05:07:09 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com); LDAP error - 68
> >
> > [28/Jun/2013:05:07:39 -0400] - cache_add_tentative concurrency detected
> >
> > [28/Jun/2013:05:07:39 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com); LDAP error - 68
> >
> > [28/Jun/2013:05:08:09 -0400] - cache_add_tentative concurrency detected
> >
> > [28/Jun/2013:05:08:09 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com); LDAP error - 68
> >
> > [28/Jun/2013:05:08:39 -0400] - cache_add_tentative concurrency detected
> >
> > [28/Jun/2013:05:08:39 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com); LDAP error - 68
> >
> > [28/Jun/2013:05:09:04 -0400] NSMMReplicationPlugin - changelog program - _cl5GetDBFile: found DB object 13f5c40 for database /var/opt/csw/lib/dirsrv/slapd-inst-dr02/changelogdb/686eae02-1dd211b2-b3b3aede-af5e4e28_51c5c8ae000000020000.db4
> >
> > [28/Jun/2013:05:09:04 -0400] NSMMReplicationPlugin - changelog program - cl5GetOperationCount: found DB object 13f5c40
> >
> > [28/Jun/2013:05:09:09 -0400] - cache_add_tentative concurrency detected
> >
> > [28/Jun/2013:05:09:09 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com); LDAP error - 68
> >
> > [28/Jun/2013:05:09:39 -0400] - cache_add_tentative concurrency detected
> >
> > [28/Jun/2013:05:09:39 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com); LDAP error - 68
> >
> > [28/Jun/2013:05:10:09 -0400] - cache_add_tentative concurrency detected
> >
> > [28/Jun/2013:05:10:09 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com); LDAP error - 68
> >
> >
> >
> >
> >
> > Thanks,
> >
> >
> >
> > Jovan Vukotić • Senior Software Engineer • Ambit Treasury Management • SunGard • Banking • Bulevar Milutina Milankovića 136b, Belgrade, Serbia • tel: +381.11.6555-66-1 • jovan.vukotic(a)sungard.com
> >
> > Join the online conversation with SunGard's customers, partners and Industry experts and find an event near you at: www.sungard.com/ten.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > From: Rich Megginson [mailto:rmeggins@redhat.com]
> > Sent: Thursday, June 27, 2013 6:20 PM
> > To: Vukotic, Jovan
> > Cc: 389-users(a)lists.fedoraproject.org; Mehta, Cyrus
> > Subject: Re: [389-users] FW: fresh replica reports "reloading ruv failed " just after successfull initialization
> >
> >
> >
> >
> >
> > On 06/27/2013 09:14 AM, Jovan.VUKOTIC(a)sungard.com <Jovan.VUKOTIC(a)sungard.com> wrote:
> >
> >
> > >
> > > Rich,
> > >
> > >
> > >
> > > On Linux x86_64 and Solaris x86_64 the error cannot be reproduced, only on Solaris SPARC.
> > >
> > >
> > >
> > > On the other hand, Solaris SPARC works fine only if it is the first master replica in the multi-master array, that is, the one that initializes other replicas.
> > >
> > >
> > >
> > > Do you, perhaps, have any suggestion as to how to tune Solaris SPARC platform?
> > >
> >
> >
> > I think there is a bug in the way we handle atomic operations on SPARC. We don't develop or test on SPARC, so it's not surprising we have a bug in this area. Do you build the code yourself?
> >
> >
> >
> >
> >
> > I am going to add a more detailed logging to the errors file.
> >
> >
> >
> > Thanks,
> > Jovan
> >
> >
> >
> > Jovan Vukotić • Senior Software Engineer • Ambit Treasury Management • SunGard • Banking • Bulevar Milutina Milankovića 136b, Belgrade, Serbia • tel: +381.11.6555-66-1 • jovan.vukotic(a)sungard.com
> >
> > Join the online conversation with SunGard’s customers, partners and Industry experts and find an event near you at: www.sungard.com/ten.
> >
> >
> >
> >
> >
> >
> >
> > From: Rich Megginson [mailto:rmeggins@redhat.com]
> > Sent: Monday, June 24, 2013 10:45 PM
> > To: General discussion list for the 389 Directory server project.
> > Cc: Vukotic, Jovan; Mehta, Cyrus
> > Subject: Re: [389-users] FW: fresh replica reports "reloading ruv failed " just after successfull initialization
> >
> >
> >
> >
> >
> > On 06/24/2013 09:34 AM, Jovan.VUKOTIC(a)sungard.com <Jovan.VUKOTIC(a)sungard.com> wrote:
> >
> >
> > >
> > > Hi,
> > >
> > >
> > >
> > > I would like to link the issue I reported on Saturday with the bug 723937 filed some two years ago.
> > >
> > > There, just as in my case, leftover dn/entry cache entries were reported prior to the initialization of the master replica.
> > >
> > >
> > >
> > > I repeated the replication configuration today, where the multi-master replica was initialized by another replica, having only one entry (the root object) in the userRoot database prior to the initialization.
> > >
> > > First, two entries were found, then 5… and then 918 (matches the number of entries from the master database)
> > >
> > >
> > >
> > > [24/Jun/2013:08:16:03 -0400] - entrycache_clear_int: there are still 2 entries in the entry cache.
> > >
> > > [24/Jun/2013:08:16:03 -0400] - dncache_clear_int: there are still 2 dn's in the dn cache. :/
> > >
> > > [24/Jun/2013:08:16:03 -0400] - WARNING: Import is running with nsslapd-db-private-import-mem on; No other process is allowed to access the database
> > >
> > > [24/Jun/2013:08:16:07 -0400] - import userRoot: Workers finished; cleaning up...
> > >
> > > [24/Jun/2013:08:16:07 -0400] - import userRoot: Workers cleaned up.
> > >
> > > [24/Jun/2013:08:16:07 -0400] - import userRoot: Indexing complete. Post-processing...
> > >
> > > [24/Jun/2013:08:16:07 -0400] - import userRoot: Generating numSubordinates complete.
> > >
> > > [24/Jun/2013:08:16:07 -0400] - import userRoot: Flushing caches...
> > >
> > > [24/Jun/2013:08:16:07 -0400] - import userRoot: Closing files...
> > >
> > > [24/Jun/2013:08:16:07 -0400] - entrycache_clear_int: there are still 5 entries in the entry cache.
> > >
> > > [24/Jun/2013:08:16:07 -0400] - dncache_clear_int: there are still 918 dn's in the dn cache. :/
> > >
> > > [24/Jun/2013:08:16:07 -0400] - import userRoot: Import complete. Processed 918 entries in 4 seconds. (229.50 entries/sec)
> > >
> > > [24/Jun/2013:08:16:07 -0400] NSMMReplicationPlugin - multimaster_be_state_change: replica dc=xxxxxx,dc=com is coming online: enabling replication
> > >
> > > [24/Jun/2013:08:16:07 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (dc=xxxxxx,dc=com): LDAP error - 68
> > >
> > >
> > >
> > > I would like to add that all replicas that could not be configured due to the reported errors were installed on Solaris 10 on Sparc processors, whereas the only replica that was initialized successfully was installed on Solaris 10 on i386 processors.
> > >
> >
> >
> > Any chance you could try to reproduce this on a Linux x86_64 system?
> >
> >
> >
> >
> >
> >
> >
> >
> > Thanks,
> > Jovan
> >
> > Jovan Vukotić • Senior Software Engineer • Ambit Treasury Management • SunGard • Banking • Bulevar Milutina Milankovića 136b, Belgrade, Serbia • tel: +381.11.6555-66-1 • jovan.vukotic(a)sungard.com
> >
> >
> >
> >
> >
> > Join the online conversation with SunGard’s customers, partners and Industry experts and find an event near you at: www.sungard.com/ten.
> >
> >
> >
> >
> >
> >
> >
> > From: Vukotic, Jovan
> > Sent: Saturday, June 22, 2013 11:59 PM
> > To: '389-users(a)lists.fedoraproject.org'
> > Subject: fresh replica reports "reloading ruv failed " just after successfull initialization.
> >
> >
> >
> >
> >
> > Hi,
> >
> >
> >
> > We have four 389 DS, version 1.2.11 that we are organizing in multi-master replication topology.
> >
> >
> >
> > After I enabled all four multi-master replicas and initialized them from the reference replica M1, and incremental replication started, it turned out that only two of them were included in replication: the reference M1 and M2 (replication working in both directions).
> >
> > I tried to fix M3 and M4 in the following way:
> >
> > M3 example:
> >
> > removed the replication agreement M1-M3 (M2-M3 did not exist, M4 was switched off)
> >
> > After several database restores of the pre-replication state and reconfiguration of that replica, I removed the 389 DS instance M3 completely and reinstalled it: remove-ds-admin.pl + setup-ds-admin.pl. I configured TLS/SSL (as before), restarted the DS and enabled the replica from 389 Console.
> >
> > Then I returned to M1, recreated the agreement and initialized M3. It was successful again, in the sense that M3 imported all the data, but immediately afterwards some strange errors were reported:
> >
> > What confuses me is that LDAP error 68 means that an entry already exists… even on a brand-new replica. Why a tombstone?
> >
> >
> >
> > Or, to make a long story short: is the only remedy to reinstall all four replicas again?
> >
> >
> >
> > [22/Jun/2013:16:30:50 -0400] - All database threads now stopped // this is from a backup done before replication configuration
> >
> > [22/Jun/2013:16:43:25 -0400] NSMMReplicationPlugin - multimaster_be_state_change: replica xxxxxxxxxx is going offline; disabling replication
> >
> > [22/Jun/2013:16:43:25 -0400] - entrycache_clear_int: there are still 20 entries in the entry cache.
> >
> > [22/Jun/2013:16:43:25 -0400] - dncache_clear_int: there are still 20 dn's in the dn cache. :/
> >
> > [22/Jun/2013:16:43:25 -0400] - WARNING: Import is running with nsslapd-db-private-import-mem on; No other process is allowed to access the database
> >
> > [22/Jun/2013:16:43:30 -0400] - import userRoot: Workers finished; cleaning up...
> >
> > [22/Jun/2013:16:43:30 -0400] - import userRoot: Workers cleaned up.
> >
> > [22/Jun/2013:16:43:30 -0400] - import userRoot: Indexing complete. Post-processing...
> >
> > [22/Jun/2013:16:43:30 -0400] - import userRoot: Generating numSubordinates complete.
> >
> > [22/Jun/2013:16:43:30 -0400] - import userRoot: Flushing caches...
> >
> > [22/Jun/2013:16:43:30 -0400] - import userRoot: Closing files...
> >
> > [22/Jun/2013:16:43:30 -0400] - entrycache_clear_int: there are still 20 entries in the entry cache.
> >
> > [22/Jun/2013:16:43:30 -0400] - dncache_clear_int: there are still 917 dn's in the dn cache. :/
> >
> > [22/Jun/2013:16:43:30 -0400] - import userRoot: Import complete. Processed 917 entries in 4 seconds. (229.25 entries/sec)
> >
> > [22/Jun/2013:16:43:30 -0400] NSMMReplicationPlugin - multimaster_be_state_change: replica xxxxxxxxxxx is coming online; enabling replication
> >
> > [22/Jun/2013:16:43:30 -0400] NSMMReplicationPlugin - replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxxx); LDAP error - 68
> >
> > [22/Jun/2013:16:43:30 -0400] NSMMReplicationPlugin - replica_enable_replication: reloading ruv failed
> >
> > [22/Jun/2013:16:43:32 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
> >
> > [22/Jun/2013:16:44:02 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxxx); LDAP error - 68
> >
> > [22/Jun/2013:16:44:32 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
> >
> > [22/Jun/2013:16:45:02 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxx); LDAP error - 68
> >
> > [22/Jun/2013:16:45:32 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
> >
> > [22/Jun/2013:16:46:02 -0400] NSMMReplicationPlugin - _replica_configure_ruv: failed to create replica ruv tombstone entry (xxxxxxxxx); LDAP error - 68
> >
> >
> >
> > Any help will be appreciated.
> >
> > Thank you.
> >
> >
> >
> >
> >
> > Jovan Vukotić • Senior Software Engineer • Ambit Treasury Management • SunGard • Banking • Bulevar Milutina Milankovića 136b, Belgrade, Serbia • tel: +381.11.6555-66-1 • jovan.vukotic(a)sungard.com
> >
> >
> >
> >
> >
> > Join the online conversation with SunGard’s customers, partners and Industry experts and find an event near you at: www.sungard.com/ten.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
>
>
>
>
>
9 years, 9 months
Password policy applied to a group
by Juan Carlos Camargo
389ds'ers,
I'm struggling to find the best way to apply a password policy only to members of a group, with everyone else getting either the global or the user/local policy. I have a number of users whose passwords should never expire, but those users live in different OUs and don't even share a parent branch. Do you think a CoS might help? Which do you think would be the best way to implement this?
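One way the CoS idea could be shaped, sketched as LDIF (all DNs and names here are illustrative assumptions, not a tested recipe): a role that users anywhere in the tree can be assigned to, a fine-grained password policy with expiration off, and a classic CoS keyed on nsRole that points members' pwdpolicysubentry at that policy. Fine-grained policies also need nsslapd-pwpolicy-local enabled on cn=config.

```ldif
# 1. A managed role; users in any OU can be given nsRoleDN pointing here:
dn: cn=NoExpiryRole,dc=example,dc=com
objectClass: top
objectClass: ldapSubEntry
objectClass: nsRoleDefinition
objectClass: nsSimpleRoleDefinition
objectClass: nsManagedRoleDefinition
cn: NoExpiryRole

# 2. A password policy subentry with expiration disabled:
dn: cn=NoExpiryPolicy,dc=example,dc=com
objectClass: top
objectClass: ldapSubEntry
objectClass: passwordPolicy
cn: NoExpiryPolicy
passwordExp: off

# 3. A CoS template named after the role DN, tying role to policy:
dn: cn="cn=NoExpiryRole,dc=example,dc=com",cn=cosTemplates,dc=example,dc=com
objectClass: top
objectClass: cosTemplate
pwdpolicysubentry: cn=NoExpiryPolicy,dc=example,dc=com

# 4. A classic CoS definition keyed on the nsRole virtual attribute:
dn: cn=PwPolicyCoS,dc=example,dc=com
objectClass: top
objectClass: cosSuperDefinition
objectClass: cosClassicDefinition
cosTemplateDn: cn=cosTemplates,dc=example,dc=com
cosSpecifier: nsRole
cosAttribute: pwdpolicysubentry operational
```

The quoted template DN (cn="<role DN>") is the classic-CoS convention for matching the cosSpecifier value, which is what lets users scattered across different OUs pick up the same policy.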
Thanks!
--
Juan Carlos Camargo Carrillo.
@jcarloscamargo
957-211157 , 650932877
9 years, 9 months