No that's not it.

If RHEV-M (the manager) is using 389 server in "RHDS" mode for authentication for its web portal, that's where the issue pops up.
When I get back to the office in the morning I'll send a link to a Bugzilla ticket about it, filed against oVirt 3.5, which I discovered earlier tonight also applies to RHEV (oVirt) 3.3 and 3.4.

-- Sent from my HP Pre3

On Nov 10, 2014 7:58 PM, Rich Megginson <> wrote:

On 11/10/2014 05:44 PM, Paul Robert Marino wrote:
When did this start?
The reason I ask is I've noticed a lot of problems with RHEV since the recent updates to nss and openssl to deal with the POODLE vulnerability.

Like what?

The workaround for a lot of them is to ensure minssf is set to a value higher than 0.
I'm wondering if this might be something similar. In the past I had never set that option because my LDAP database contained no passwords and Kerberos was its own database, so the risk was nominal. Now I find that, at least for RHEV (oVirt), I'm suddenly forced to set it.
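For anyone hitting the same thing: minssf lives on the cn=config entry. A minimal LDIF sketch (the value 1 here is just an illustration; pick whatever SSF your deployment actually requires):

```ldif
dn: cn=config
changetype: modify
replace: nsslapd-minssf
nsslapd-minssf: 1
```

Apply it with ldapmodify as Directory Manager and restart the instance to be safe, e.g. `ldapmodify -x -D "cn=directory manager" -W -f minssf.ldif`.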

So if you run 389 in RHEV, you have to set minssf, and if you run the same version of 389 on bare metal, you don't have to set minssf?
If this is not an accurate description of your problem, can you please elaborate?

-- Sent from my HP Pre3

On Nov 10, 2014 3:58 PM, Orion Poplawski <> wrote:

On 11/10/2014 12:07 PM, Rich Megginson wrote:
> On 11/10/2014 11:59 AM, Orion Poplawski wrote:
>> On 11/06/2014 10:35 AM, Orion Poplawski wrote:
>>> On 11/06/2014 03:14 AM, Rich Megginson wrote:
>>>> Try to reproduce the problem while using gdb to capture stack traces every
>>>> few
>>>> seconds as in
>>>> Ideally, we can get some stack traces of the server during the time between
>>>> the BIND and the ABANDON
>>> Thanks, I'll give it a shot. The gdb command line is a little incorrect
>>> though, I think you want:
>>> gdb -ex 'set confirm off' -ex 'set pagination off' -ex 'thread apply all bt
>>> full' -ex 'quit' /usr/sbin/ns-slapd `pidof ns-slapd` > stacktrace.`date
>>> +%s`.txt 2>&1
>>> - added % in date format, drop trailing ``
>> gdb ended up aborting while trying to do the stack trace when the problem
>> occurred, so I haven't had any luck there.
> What platform are you using? Can you provide an example of the gdb output?

Scientific Linux 6.5

gdb output is in the bug report, but basically:
../../gdb/linux-nat.c:1411: internal-error: linux_nat_post_attach_wait:
Assertion `pid == new_pid' failed.

Hmm - never seen this before.

>> It seems to be a problem with one of my servers only. I've shut it down and
>> the user can authenticate fine against our backup server. I tried restoring
>> from backup with bak2db but that didn't appear to help. Is there a more
>> restore from scratch procedure I should try next to see if it some kind of
>> corruption?
> I don't know. I'm not sure how db corruption could be causing this issue.
> The best way to restore is to completely rebuild the database e.g. db2ldif
> then ldif2db - then reinit all of your replicas.
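A rough sketch of that export/reimport cycle on an EL6-era 389 DS install, using a hypothetical instance name "example" and the userRoot backend (adjust paths, instance, and backend names for your setup; this is a recipe, not something to paste blindly):

```shell
# stop the instance so the offline export/import tools can run
service dirsrv stop example

# dump the backend to LDIF, then rebuild the database from it
/usr/lib64/dirsrv/slapd-example/db2ldif -n userRoot -a /tmp/userRoot.ldif
/usr/lib64/dirsrv/slapd-example/ldif2db -n userRoot -i /tmp/userRoot.ldif

service dirsrv start example
```

After the supplier is rebuilt this way, reinitialize each consumer from it as Rich describes.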

So the "reinit all of your replicas" part sounds scary to me. Any
documentation for this process?

Why is it scary?  It's just the regular replica initialization process.  There's no trick, nothing fancy, no extra documentation.  The thing to realize is that a replica reinit does a database reinit, from scratch.
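Concretely, a reinit is kicked off by writing to the replication agreement on the supplier. A sketch, with a hypothetical agreement DN (yours will match whatever agreement name and suffix you created):

```ldif
dn: cn=ExampleAgreement,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
replace: nsds5BeginReplicaRefresh
nsds5BeginReplicaRefresh: start
```

The supplier then pushes a full copy of the database to that consumer, replacing whatever the consumer had; repeat per agreement.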

Orion Poplawski
Technical Manager 303-415-9701 x222
NWRA, Boulder/CoRA Office FAX: 303-415-9702
3380 Mitchell Lane
Boulder, CO 80301
389 users mailing list
