When did this start?
The reason I ask is that I've noticed a lot of problems with RHEV since the recent updates to nss and openssl that deal with the POODLE vulnerability.

The workaround for a lot of them is to ensure minssf is set to a value higher than 0.
I'm wondering if this might be something similar. In the past I never set that option because my LDAP database contained no passwords and Kerberos was its own database, so the risk was nominal. Now I find that, at least for RHEV (oVirt), I'm suddenly forced to set it.
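For reference, the minssf workaround amounts to raising nsslapd-minssf in cn=config so that clients must use a TLS- or SASL-protected connection to bind. A minimal sketch, assuming a local server and Directory Manager credentials (host and bind DN are illustrative):

```shell
# Require a minimum security strength factor of 1 (i.e. some
# encryption) for all connections; 0 allows fully clear-text binds.
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://localhost <<'EOF'
dn: cn=config
changetype: modify
replace: nsslapd-minssf
nsslapd-minssf: 1
EOF
```

Depending on the version, a restart of the dirsrv instance may be needed for the new value to take effect.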

-- Sent from my HP Pre3

On Nov 10, 2014 3:58 PM, Orion Poplawski <orion@cora.nwra.com> wrote:

On 11/10/2014 12:07 PM, Rich Megginson wrote:
> On 11/10/2014 11:59 AM, Orion Poplawski wrote:
>> On 11/06/2014 10:35 AM, Orion Poplawski wrote:
>>> On 11/06/2014 03:14 AM, Rich Megginson wrote:
>>>> Try to reproduce the problem while using gdb to capture stack traces every
>>>> few
>>>> seconds as in http://www.port389.org/docs/389ds/FAQ/faq.html#debugging-hangs
>>>> Ideally, we can get some stack traces of the server during the time between
>>>> the BIND and the ABANDON
>>> Thanks, I'll give it a shot. The gdb command line is a little incorrect
>>> though; I think you want:
>>> gdb -ex 'set confirm off' -ex 'set pagination off' \
>>>     -ex 'thread apply all bt full' -ex 'quit' \
>>>     /usr/sbin/ns-slapd `pidof ns-slapd` > stacktrace.`date +%s`.txt 2>&1
>>> (added the % in the date format, dropped the trailing ``)
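To capture traces every few seconds as the FAQ suggests, the corrected one-liner can be wrapped in a loop; a sketch (the interval and output location are assumptions, adjust as needed):

```shell
# Dump a full backtrace of every ns-slapd thread every 5 seconds.
# Each dump goes to a timestamped file so dumps can be correlated
# with the BIND/ABANDON timestamps in the access log.
while true; do
    gdb -ex 'set confirm off' -ex 'set pagination off' \
        -ex 'thread apply all bt full' -ex 'quit' \
        /usr/sbin/ns-slapd "$(pidof ns-slapd)" \
        > "stacktrace.$(date +%s).txt" 2>&1
    sleep 5
done
```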
>> gdb ended up aborting while trying to do the stack trace when the problem
>> occurred (https://bugzilla.redhat.com/show_bug.cgi?id=1162264) so I haven't
>> had any luck there.
> What platform are you using? Can you provide an example of the gdb output?

Scientific Linux 6.5

gdb output is in the bug report, but basically:
../../gdb/linux-nat.c:1411: internal-error: linux_nat_post_attach_wait:
Assertion `pid == new_pid' failed.

>> It seems to be a problem with one of my servers only. I've shut it down and
>> the user can authenticate fine against our backup server. I tried restoring
>> from backup with bak2db, but that didn't appear to help. Is there a more
>> thorough restore-from-scratch procedure I should try next to see if it is
>> some kind of corruption?
> I don't know. I'm not sure how db corruption could be causing this issue.
> The best way to restore is to completely rebuild the database, e.g. db2ldif
> then ldif2db, then reinit all of your replicas.
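The rebuild described above can be sketched roughly as follows, assuming the default userRoot backend (the backend name, export path, and agreement DN are all illustrative; db2ldif/ldif2db are the offline tools, so the instance is stopped first):

```shell
# Export the backend to LDIF, then rebuild it from that export.
service dirsrv stop
db2ldif -n userRoot -a /tmp/userRoot.ldif
ldif2db -n userRoot -i /tmp/userRoot.ldif
service dirsrv start

# Re-initialize each consumer from this supplier by writing to the
# corresponding replication agreement entry (DN is illustrative):
ldapmodify -x -D "cn=Directory Manager" -W <<'EOF'
dn: cn=exampleAgreement,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsds5BeginReplicaRefresh
nsds5BeginReplicaRefresh: start
EOF
```

Note that re-initialization wipes the consumer's copy of that suffix and replaces it with the supplier's, so clients should bind elsewhere while it runs.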

So the "reinit all of your replicas" part sounds scary to me. Is there any
documentation for this process?

Orion Poplawski
Technical Manager                     303-415-9701 x222
NWRA, Boulder/CoRA Office             FAX: 303-415-9702
3380 Mitchell Lane                    orion@nwra.com
Boulder, CO 80301                     http://www.nwra.com
389 users mailing list