Password policy applicable to bind?
by Grant Byers
Hi,
I'm looking at applying local password policy to our existing users.
Reading the Red Hat Directory Server Deployment Guide, I see the
paragraphs below[1].
If I'm reading this correctly, it would appear that users will no
longer be able to bind to the directory server after the policy is
applied if their existing password doesn't meet it. Is this true? If
so, is there a way to prevent this and apply the policy only on
password-modify operations?
Thanks.
Grant
[1] "When a user attempts to bind to the directory, Directory Server
determines whether a local policy has been defined and enabled for the
user's entry.
To determine whether the fine-grained password policy is enabled,
the server checks the value (on or off) assigned to the
nsslapd-pwpolicy-local attribute of the cn=config entry. If the value
is off, the server ignores the policies defined at the subtree and
user levels and enforces the global password policy.
To determine whether a local policy is defined for a subtree or
user, the server checks for the pwdPolicysubentry attribute in the
corresponding user entry. If the attribute is present, the server
enforces the local password policy configured for the user. If the
attribute is absent, the server logs an error message and enforces the
global password policy.
The server then compares the user-supplied password with the value
specified in the user's directory entry to make sure they match. The
server also uses the rules defined by the password policy to ensure
that the password is valid before allowing the user to bind to the
directory. "
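For reference, the switch the guide refers to is the nsslapd-pwpolicy-local attribute on cn=config; a minimal sketch of enabling fine-grained (local) policies via ldapmodify would look like this (only the attribute name is taken from the quoted text; whether syntax rules then affect existing passwords at bind time is exactly the open question above):

```ldif
dn: cn=config
changetype: modify
replace: nsslapd-pwpolicy-local
nsslapd-pwpolicy-local: on
```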
OS err 12 - Cannot allocate memory
by Jan Kowalsky
Hi all,
suddenly one of our LDAP servers crashed and won't restart.
When restarting dirsrv we find in logs:
libdb: BDB2034 unable to allocate memory for mutex; resize mutex region
mmap in opening database environment failed trying to allocate 500000
bytes. (OS err 12 - Cannot allocate memory)
We get the same error when we run dbverify.
We are running version 1.3.5.17 of 389-ds on Debian stretch:
389-ds 1.3.5.17-2
RAM doesn't seem to be the problem; only 200 MB of 4 GB is in use.
The server is part of a replicated cluster. Other servers (running same
software version - more or less on the same virtualisation hardware) are
not affected.
We have occasionally seen similar errors in the past, but restarting
the service always worked.
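Not a diagnosis, but since the BDB error is about sizing the database environment, it may be worth comparing the ldbm cache and lock settings between the affected server and the healthy replicas. A sketch of the relevant entry (the attribute names are real 389-ds configuration attributes; the values shown are purely illustrative):

```ldif
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
nsslapd-dbcachesize: 536870912
nsslapd-db-locks: 10000
```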
Any ideas?
Thanks and kind regards
Jan
Re: Table of duration and acceptable distance between two synchronized directories.
by William Brown
Glad to have helped. If you have more specific questions or details, we are happy to help at any time. Thanks!
> On 8 Oct 2020, at 17:29, Vincent Lemière <vincent(a)lemiere.org> wrote:
>
> Hi,
> Thanks for your reply,
> I agree with you.
> Link duration and quality are the better criteria.
> The TTL value is better suited to validating a transaction.
> I needed to know whether some other solution could be envisaged.
> Your answer confirms my view.
> Thanks a lot
>
> -----Message d'origine-----
> De : William Brown <wbrown(a)suse.de>
> Envoyé : mercredi 7 octobre 2020 00:29
> À : vincent(a)lemiere.org; 389-users(a)lists.fedoraproject.org
> Objet : Re: [389-users] Table of duration and acceptable distance between two synchronized directories.
>
> Hi there,
>
> (removing 389-devel list from reply, this is probably better on the 389-users list :) )
>
> We don't have tables per se, because there are many factors involved that you need to consider when designing this kind of topology.
>
> For example, what rate of changes do you expect to be incoming? How much latency and bandwidth exists between the replicas? How many replicas do you want to deploy? How much latency are you willing to tolerate between the nodes becoming consistent?
>
> We are aware of a number of deployments with high rates of change (thousands of writes per second) that replicate between continents (e.g. US/EU). However, they also have high-bandwidth, low-latency links between these locations to assist this.
>
> So I think there are more questions to answer here about your potential deployment and the specific concerns you have.
>
> Thanks,
>
>> On 7 Oct 2020, at 07:24, 389ds(a)lemiere.org wrote:
>>
>> Hello,
>> do you have tables estimating the reasonable distance to synchronize two 389 directories between two sites. Are there tables of recommendations depending on the distance?
>> What software tools or scripts allow evaluating it.
>>
>> Cordially,
>> Vincent Lemiere
>>
>
> —
> Sincerely,
>
> William Brown
>
> Senior Software Engineer, 389 Directory Server SUSE Labs, Australia
>
>
—
Sincerely,
William Brown
Senior Software Engineer, 389 Directory Server
SUSE Labs, Australia
Getting started
by Hendrik Steiner
Hi guys!
Being an absolute noob, mailing this group for the first time, I hope
you're patient with my lack of knowledge ...
I have an issue with the following documentation (is this the right
place for issues?):
https://directory.fedoraproject.org/docs/389ds/howto/quickstart.html
My company works with RHDS11 (RHEL8) so I tried to recreate some stuff
(Fedora 32):
groupadd -g 389 ds
useradd -c "RedHat Directory Server" -u 389 -g 389 -s /sbin/nologin slapd
(The group must exist before useradd can reference it.) This needs to
be done before the installation of "389-ds-base", or the user and group
will be created automatically (dirsrv.dirsrv).
Adapting basic configuration as described (instance.inf):
[general]
group = ds
user = slapd
leads to
ERR - dse_read_one_file - The configuration file
/etc/dirsrv/slapd-example/schema//usr/share/dirsrv/schema/60trust.ldif
could not be accessed, error -1
after a copy:
cp /usr/share/dirsrv/schema/60trust.ldif /etc/dirsrv/schema/
instance creation works like a charm ...
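For comparison, here is a minimal instance.inf of the kind dscreate accepts, with the user/group override from above included (the instance name, suffix, and password are placeholders, not values from the quickstart):

```ini
[general]
config_version = 2
user = slapd
group = ds

[slapd]
instance_name = example
root_password = ChangeMe_123

[backend-userroot]
suffix = dc=example,dc=com
```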
Sorry again for being annoying, but ...
where am I going wrong?
also having some SELinux related questions,
is this the right place for such kind of issues?
Best regards,
Hendrik Steiner
389 DS on CentOS 8
by Paul Whitney
Hi guys,
I am just now looking into our 389-ds migration strategy from CentOS 7 to 8. I successfully created my first master 389 instance on 8; it took some getting used to doing it in the cockpit plugin. What I am missing is a single view from which I can manage all of my directory servers. I recall you saying that there will no longer be a Java console; since that is deprecated, is there an alternative solution, such as an application, that will let me view all of my systems in one console?
Thanks,
Paul M. Whitney
paul.whitney(a)mac.com
Sent from my Mac Book Pro
Complex MMR scenarios
by Eugen Lamers
Hi,
we want to set up a multi-master replication topology that represents a scenario with several mobile environments which need to replicate with an immobile server from time to time. Is it possible - and reasonable - to group the servers of a mobile environment together into a kind of sub-level MMR which replicates with the higher-level MMR of the immobile environment? This replication between the different "levels" would be triggered externally, because there would not always be a (sufficient) connection between them.
This would be a combination of MMR and cascading replication. Does anyone have experience with this kind of scenario?
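One building block for externally triggered replication is pausing and resuming a replication agreement on demand. As a sketch (the agreement name and suffix are hypothetical; nsds5ReplicaEnabled is a real agreement attribute), an agreement could be disabled while the link is down and re-enabled when the mobile environment reconnects:

```ldif
dn: cn=mobile-agmt,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsds5ReplicaEnabled
nsds5ReplicaEnabled: off
```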
Thanks,
Eugen