Import Schema into FreeIPA
by Dirk Streubel
Hello List,
Maybe this is off topic, I don't know.
I have two IPA servers here; they are in replication mode and everything is working fine. Now I
want to import a new schema into the IPA servers.
So I put the new schema under /etc/dirsrv/slapd.../schema on one of my IPA servers. I also ran
chmod/chown and "restorecon".
After that I reloaded the schema files in the Cockpit 389 web GUI, with no error, and restarted the
IPA service. Maybe this is not even necessary?
But I can't see the new schema on my second IPA server. I have also tried "ipa-replica-manage
re-initialize --from" ...
So, do I have to install the new schema manually on the second IPA server and reload the schema
files there, or what is the best way to import a new schema into FreeIPA and replicate it to the
second IPA server?
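For reference, schema changes made over LDAP against cn=schema are written to 99user.ldif by 389-ds and are replicated to the other masters, unlike files copied into the schema directory. A minimal sketch; the OID and attribute name below are placeholders, not part of the original question:

```shell
# Add the new attribute over LDAP so that 389-ds replicates it.
# The OID and attribute name are placeholders - substitute your own.
ldapmodify -x -D "cn=Directory Manager" -W <<'EOF'
dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 2.16.840.1.113730.3.8.999.1 NAME 'myCustomAttr' DESC 'example attribute' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
EOF
```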
Regards
Dirk
4 years, 4 months
Strange krb5 issue
by Petar Kozić
Hi,
I'm running my IPA server in Docker. I have been using that server for ssh
login for more than 8-9 months.
Everything worked well until a few hours ago.
Now I can't log in via ssh and I get this strange error:
[sssd[ldap_child[2171]]][2171]: Failed to initialize credentials using
keytab [MEMORY:/etc/krb5.keytab]: Preauthentication failed. Unable to
create GSSAPI-encrypted LDAP connection.
Does anyone have an idea how to solve this?
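A first step to narrow this down is usually to test the keytab by hand; a sketch, where the principal name is a placeholder (use one actually listed by klist):

```shell
# List the principals and key version numbers (KVNO) in the keytab
klist -k /etc/krb5.keytab
# Try to obtain a ticket using the keytab; "Preauthentication failed"
# here usually means the key no longer matches what the KDC has
# (e.g. a KVNO mismatch after the host key was re-generated).
# The principal below is a placeholder.
kinit -kt /etc/krb5.keytab host/ipa.example.com@EXAMPLE.COM
```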
Thank you.
--
Petar Kozić
4 years, 4 months
Windows Client Integration
by Alexander Becker
Hi all,
I am trying to configure Windows authentication against FreeIPA using this guide:
https://www.freeipa.org/page/Windows_authentication_against_FreeIPA
Everything worked so far. I added the local user to the "Remote Desktop Users" group, but it doesn't work with RDP. The message says that the user is not in the Remote Desktop Users group.
What I could find out is that the following command was run on the IPA server after the initial installation of FreeIPA: ipa-adtrust-install --add-sids --netbios-name=EXAMPLE -a
So if I understand correctly, "--add-sids" assigns a SID to all groups in FreeIPA. That's why RDP is not working: the user is not mapped to the local Windows user.
How can I make it work?
1. Is it possible to add the FreeIPA user/group to the "Remote Desktop Users" group so that RDP works? Do I have to configure something in Samba or FreeIPA?
2. Can I somehow undo the configuration that was done with "ipa-adtrust-install --add-sids"? Can I remove the assigned SIDs?
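To see what --add-sids actually did, one can inspect the SID attribute on the entries; a sketch with placeholder user and group names:

```shell
# Show whether a user or group already carries a SID
# (ipaNTSecurityIdentifier). "someuser"/"somegroup" are placeholders.
ipa user-show someuser --all --raw | grep -i ipantsecurityidentifier
ipa group-show somegroup --all --raw | grep -i ipantsecurityidentifier
```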
Any help will be much appreciated.
with regards,
Alexander Becker
4 years, 4 months
Slow login (CentOS 7.5, IPA 4.5.4)
by Detlev Habicht
Hello,
we are using hosts with CentOS 7.5, physical and virtual,
and we have three IPA servers.
We are using Cadence CAD/EDA applications and we have a lot
of simulations running.
After a while we have two problems:
- login is very slow
- access to NFS-mounted data and applications is slow
This happens normally after two or more weeks.
After a reboot everything is OK again.
These problems do not occur on all hosts at the same time, so
the other hosts are OK while some hosts have the problems.
Here I will ask about my login problem.
Here comes a debug output from sssd.
I am using "ssh hostname".
#… (deleted)
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [sysdb_set_entry_attr] (0x0200): Entry [name=habicht(a)imsmx.intern,cn=users,cn=imsmx.intern,cn=sysdb] has set [ts_cache] attrs.
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): commit ldb transaction (nesting: 0)
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [check_wait_queue] (0x1000): Wait queue for user [habicht(a)imsmx.intern] is empty.
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [krb5_auth_queue_done] (0x1000): krb5_auth_queue request [0x55d411a08150] done.
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [dp_req_done] (0x0400): DP Request [PAM Preauth #28]: Request handler finished [0]: Success
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [_dp_req_recv] (0x0400): DP Request [PAM Preauth #28]: Receiving request data.
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [dp_req_destructor] (0x0400): DP Request [PAM Preauth #28]: Request removed.
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [dp_req_destructor] (0x0400): Number of active DP request: 0
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [dp_method_enabled] (0x0400): Target selinux is not configured
(Wed Jan 15 13:30:02 2020) [sssd[be[imsmx.intern]]] [dp_pam_reply] (0x1000): DP Request [PAM Preauth #28]: Sending result [0][imsmx.intern]
###
### Pressing Return after entering the password - first debug output at 30:37
###
(Wed Jan 15 13:30:37 2020) [sssd[be[imsmx.intern]]] [sbus_dispatch] (0x4000): dbus conn: 0x55d4119e04e0
(Wed Jan 15 13:30:37 2020) [sssd[be[imsmx.intern]]] [sbus_dispatch] (0x4000): Dispatching.
(Wed Jan 15 13:30:37 2020) [sssd[be[imsmx.intern]]] [sbus_message_handler] (0x2000): Received SBUS method org.freedesktop.sssd.dataprovider.getAccountInfo on path /org/freedesktop/sssd/dataprovider
(Wed Jan 15 13:30:37 2020) [sssd[be[imsmx.intern]]] [sbus_get_sender_id_send] (0x2000): Not a sysbus message, quit
(Wed Jan 15 13:30:37 2020) [sssd[be[imsmx.intern]]] [dp_get_account_info_handler] (0x0200): Got request for [0x3][BE_REQ_INITGROUPS][name=habicht(a)imsmx.intern]
(Wed Jan 15 13:30:37 2020) [sssd[be[imsmx.intern]]] [sss_domain_get_state] (0x1000): Domain imsmx.intern is Active
(Wed Jan 15 13:30:37 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55d4119fedf0
(Wed Jan 15 13:30:37 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55d4119f85b0
(Wed Jan 15 13:30:37 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Running timer event 0x55d4119fedf0 "ltdb_callback"
#… (deleted)
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [hbac_evaluate] (0x0100): ALLOWED by rule [allow_all].
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [hbac_evaluate] (0x0100): hbac_evaluate() >]
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [ipa_hbac_evaluate_rules] (0x0080): Access granted by HBAC rule [allow_all]
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [dp_req_done] (0x0400): DP Request [PAM Account #31]: Request handler finished [0]: Success
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [_dp_req_recv] (0x0400): DP Request [PAM Account #31]: Receiving request data.
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [dp_req_destructor] (0x0400): DP Request [PAM Account #31]: Request removed.
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [dp_req_destructor] (0x0400): Number of active DP request: 0
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [dp_method_enabled] (0x0400): Target selinux is not configured
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [dp_pam_reply] (0x1000): DP Request [PAM Account #31]: Sending result [0][imsmx.intern]
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [sdap_process_result] (0x2000): Trace: sh[0x55d411a00330], connected[1], ops[(nil)], ldap[0x55d4119e4300]
(Wed Jan 15 13:30:38 2020) [sssd[be[imsmx.intern]]] [sdap_process_result] (0x2000): Trace: end of ldap_result list
###
### Between last 30:38 and first 31:03 no debug output appears
###
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [sbus_dispatch] (0x4000): dbus conn: 0x55d4119e04e0
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [sbus_dispatch] (0x4000): Dispatching.
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [sbus_message_handler] (0x2000): Received SBUS method org.freedesktop.sssd.dataprovider.getAccountInfo on path /org/freedesktop/sssd/dataprovider
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [sbus_get_sender_id_send] (0x2000): Not a sysbus message, quit
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [dp_get_account_info_handler] (0x0200): Got request for [0x3][BE_REQ_INITGROUPS][name=habicht(a)imsmx.intern]
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [sss_domain_get_state] (0x1000): Domain imsmx.intern is Active
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55d411a054c0
#… (deleted)
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Running timer event 0x55d4119fedf0 "ltdb_callback"
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Destroying timer event 0x55d411a0bee0 "ltdb_timeout"
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Ending timer event 0x55d4119fedf0 "ltdb_callback"
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x55d4119f85b0
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x55d4119fedf0
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Running timer event 0x55d4119f85b0 "ltdb_callback"
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Destroying timer event 0x55d4119fedf0 "ltdb_timeout"
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): Ending timer event 0x55d4119f85b0 "ltdb_callback"
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): start ldb transaction (nesting: 0)
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [ldb] (0x4000): commit ldb transaction (nesting: 0)
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [dp_req_done] (0x0400): DP Request [Account #35]: Request handler finished [0]: Success
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [_dp_req_recv] (0x0400): DP Request [Account #35]: Receiving request data.
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [dp_req_reply_list_success] (0x0400): DP Request [Account #35]: Finished. Success.
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [dp_req_reply_std] (0x1000): DP Request [Account #35]: Returning [Success]: 0,0,Success
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [dp_table_value_destructor] (0x0400): Removing [0:1:0x0001:1::imsmx.intern:name=nnn@imsmx.intern] from reply table
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [dp_req_destructor] (0x0400): DP Request [Account #35]: Request removed.
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [dp_req_destructor] (0x0400): Number of active DP request: 0
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [sdap_process_result] (0x2000): Trace: sh[0x55d411a00330], connected[1], ops[(nil)], ldap[0x55d4119e4300]
(Wed Jan 15 13:31:03 2020) [sssd[be[imsmx.intern]]] [sdap_process_result] (0x2000): Trace: end of ldap_result list
###
### Prompt appears
###
Here I have to wait 59 seconds for a prompt. Sometimes it is longer.
A look with tcpdump shows only traffic on the LDAP ports.
With ipa commands or ldapsearch I have no problems.
Any idea why this happens?
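If the ~25-second silent gaps in the log are LDAP timeouts against one unresponsive server, shortening SSSD's network timeouts can make failover faster. A sketch for /etc/sssd/sssd.conf; the values are illustrative examples, not tuned recommendations:

```ini
[domain/imsmx.intern]
# raise logging to see where the time is actually spent
debug_level = 9
# fail over to the next IPA server faster than the defaults
ldap_network_timeout = 3
ldap_opt_timeout = 6
```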
Thank you for any help.
Detlev
--
Detlev | Institut fuer Mikroelektronische Systeme
Habicht | D-30167 Hannover +49 511 76219662 habicht(a)ims.uni-hannover.de
--------+-------- Handy +49 172 5415752 ---------------------------
4 years, 4 months
freeipa failing to start after update
by Andrew Meyer
I am running CentOS 8.x and have updated to the latest versions of IPA and CentOS 8. I rebooted after updating and am now getting the following:
Jan 20 12:55:29 freeipa01 server[7889]: arguments used: stop
Jan 20 12:55:30 freeipa01 systemd[1]: Stopping 389 Directory Server ZONE1-EXAMPLE-NET....
Jan 20 12:55:30 freeipa01 ns-slapd[7385]: [20/Jan/2020:12:55:30.169315691 -0600] - INFO - op_thread_cleanup - slapd shutting down - signaling operation threads - op stack size 2 max work q size 2 max work q stack size 2
Jan 20 12:55:30 freeipa01 ns-slapd[7385]: [20/Jan/2020:12:55:30.396008349 -0600] - INFO - slapd_daemon - slapd shutting down - closing down internal subsystems and plugins
Jan 20 12:55:30 freeipa01 ns-slapd[7385]: [20/Jan/2020:12:55:30.456826998 -0600] - INFO - dblayer_pre_close - Waiting for 4 database threads to stop
Jan 20 12:55:30 freeipa01 server[7889]: SEVERE: Could not contact [localhost:[8005]]. Tomcat may not be running.
Jan 20 12:55:30 freeipa01 server[7889]: SEVERE: Catalina.stop:
Jan 20 12:55:30 freeipa01 server[7889]: java.net.ConnectException: Connection refused (Connection refused)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.net.PlainSocketImpl.socketConnect(Native Method)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.net.Socket.connect(Socket.java:607)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.net.Socket.connect(Socket.java:556)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.net.Socket.<init>(Socket.java:452)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.net.Socket.<init>(Socket.java:229)
Jan 20 12:55:30 freeipa01 server[7889]: #011at org.apache.catalina.startup.Catalina.stopServer(Catalina.java:498)
Jan 20 12:55:30 freeipa01 server[7889]: #011at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Jan 20 12:55:30 freeipa01 server[7889]: #011at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Jan 20 12:55:30 freeipa01 server[7889]: #011at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Jan 20 12:55:30 freeipa01 server[7889]: #011at java.lang.reflect.Method.invoke(Method.java:498)
Jan 20 12:55:30 freeipa01 server[7889]: #011at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:403)
Jan 20 12:55:30 freeipa01 server[7889]: #011at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:494)
Jan 20 12:55:30 freeipa01 systemd[1]: pki-tomcatd(a)pki-tomcat.service: Control process exited, code=exited status=1
Jan 20 12:55:31 freeipa01 systemd[1]: pki-tomcatd(a)pki-tomcat.service: Failed with result 'exit-code'.
Jan 20 12:55:31 freeipa01 systemd[1]: Stopped PKI Tomcat Server pki-tomcat.
Jan 20 12:55:31 freeipa01 ns-slapd[7385]: [20/Jan/2020:12:55:31.401012956 -0600] - INFO - dblayer_pre_close - All database threads now stopped
Jan 20 12:55:31 freeipa01 ns-slapd[7385]: [20/Jan/2020:12:55:31.477064258 -0600] - INFO - ldbm_back_instance_set_destructor - Set of instances destroyed
Jan 20 12:55:31 freeipa01 ns-slapd[7385]: [20/Jan/2020:12:55:31.485527687 -0600] - INFO - connection_post_shutdown_cleanup - slapd shutting down - freed 2 work q stack objects - freed 2 op stack objects
Jan 20 12:55:31 freeipa01 ns-slapd[7385]: [20/Jan/2020:12:55:31.491338592 -0600] - INFO - main - slapd stopped.
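The Tomcat stack trace above is only the stop path complaining that pki-tomcat was already down; the interesting question is why pki-tomcatd does not come up. A diagnostic sketch (log paths can differ between versions):

```shell
# Check which IPA component actually fails
ipactl status
systemctl status pki-tomcatd@pki-tomcat.service
journalctl -u pki-tomcatd@pki-tomcat.service -b --no-pager
# The CA debug log often contains the real error
tail -n 50 /var/log/pki/pki-tomcat/ca/debug*
```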
4 years, 4 months
Option to allow single-label domains
by Ronald Wimmer
Is there a way to make ipa-server-install accept a single-label
domain? I would like to use IPA at home and will definitely never
connect it to AD.
Cheers,
Ronald
4 years, 4 months
Re: sudo rule doesn't work
by Florence Blanc-Renaud
On 1/18/20 11:37 AM, Elhamsadat Azarian wrote:
> Hi dear Florence,
> thanks for your reply.
> I wasn't at the office; today I checked the parameters, but I can't find
> them in sssd.conf!
> How can I check or set their values?
Hi,
(adding back freeipa-users mailing list)
All the parameters are described in the man page for sssd.conf or
sssd-ldap. If they are not set in /etc/sssd/sssd.conf, then the default
value applies.
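For reference, a sketch of how these could be set explicitly in /etc/sssd/sssd.conf; the domain name and values are illustrative, not recommendations:

```ini
[domain/example.com]
entry_cache_sudo_timeout = 300
ldap_sudo_smart_refresh_interval = 900
ldap_sudo_full_refresh_interval = 3600
```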
flo
>
> On Mon, 13 Jan 2020, 12:21 Florence Blanc-Renaud, <flo(a)redhat.com> wrote:
>
> On 1/13/20 9:38 AM, Elhamsadat Azarian wrote:
> > I did it but it doesn't work.
> > I think my sudo rule is not applied on my hosts!
> >
> Hi,
>
> the sudorules can be cached on the host. Please check the following
> SSSD
> parameters:
> - entry_cache_sudo_timeout -- How many seconds should sudo consider
> rules valid before asking the backend again
> - ldap_sudo_smart_refresh_interval -- How many seconds SSSD has to wait
> before executing a smart refresh of sudo rules (which downloads all
> rules that have USN higher than the highest USN of cached rules).
> - ldap_sudo_full_refresh_interval -- How many seconds SSSD will wait
> between executing a full refresh of sudo rules (which downloads all
> rules that are stored on the server).
>
> HTH,
> flo
>
> > On Mon, 13 Jan 2020, 11:57 Florence Blanc-Renaud, <flo(a)redhat.com> wrote:
> >
> > On 1/13/20 8:57 AM, Elhamsadat Azarian wrote:
> > > Hi Florence
> > > Thanks, I replaced it but it doesn't work!
> > >
> > Hi,
> > can you also replace the "RunAs group category: all" attr with "RunAs
> > User category: all"?
> > flo
> >
> > > On Mon, 13 Jan 2020, 11:18 Florence Blanc-Renaud, <flo(a)redhat.com> wrote:
> > >
> > > On 1/12/20 12:26 PM, Elhamsadat Azarian via FreeIPA-users wrote:
> > > > Hi friends
> > > > I defined a sudo rule with these properties:
> > > >
> > > > rulename : rsyslog_rule
> > > > Enabled : true
> > > > RunAs group Category : All
> > > > users :user-test
> > > > hosts: ipacli-irvlt01.mydomain.com
> > > > sudo Deny Commands : sudo /usr/bin/systemctl restart rsyslog
> > > >
> > > > Now I log in with "user-test" on the "ipacli-irvlt01" server and I
> > > > try to run "sudo /usr/bin/systemctl restart rsyslog". I expected it
> > > > not to be allowed, but no action happened and I could run it!
> > > >
> > > > Why doesn't my sudo rule work?
> > > Hi,
> > >
> > > can you try to replace the "sudo deny commands": "sudo
> > > /usr/bin/systemctl restart rsyslog" with
> "/usr/bin/systemctl
> > restart
> > > rsyslog" ?
> > >
> > > thanks,
> > > flo
> > >
> > > >
> > > >
> ----------------------------------------------------------
> > > > this is less /var/log/sssd/sssd_domain.log:
> > > > (Sun Jan 12 13:59:01 2020) [sssd[be[lshs.dc]]]
> > [orderly_shutdown]
> > > (0x0010): SIGTERM: killing children
> > > >
> ----------------------------------------------------------
> > > > this is /var/log/sssd/sssd_sudo.log
> > > > (Sun Jan 12 13:59:01 2020) [sssd[sudo]]
> [orderly_shutdown]
> > > (0x0010): SIGTERM: killing children
> > > >
> > > >
> ----------------------------------------------------------
> > > > this is less /var/log/sudo_debug
> > > > Jan 12 14:19:27 sudo[17370] /etc/sudoers:53
> CMNDALIAS ALIAS =
> > > COMMAND , COMMAND ARG , COMMAND ARG
> > > > Jan 12 14:19:27 sudo[17370] -> alias_add @
> ./alias.c:120
> > > > Jan 12 14:19:27 sudo[17370] -> rcstr_addref @
> ./rcstr.c:81
> > > > Jan 12 14:19:27 sudo[17370] <- rcstr_addref @
> ./rcstr.c:88 :=
> > > 0x55f2968e7714
> > > > Jan 12 14:19:27 sudo[17370] -> rbinsert @
> ./redblack.c:177
> > > > Jan 12 14:19:27 sudo[17370] -> alias_compare @
> ./alias.c:54
> > > > Jan 12 14:19:27 sudo[17370] <- alias_compare @
> > ./alias.c:62 := -13
> > > > Jan 12 14:19:27 sudo[17370] -> alias_compare @
> ./alias.c:54
> > > > Jan 12 14:19:27 sudo[17370] <- alias_compare @
> > ./alias.c:62 := -6
> > > > Jan 12 14:19:27 sudo[17370] -> alias_compare @
> ./alias.c:54
> > > > Jan 12 14:19:27 sudo[17370] <- alias_compare @
> > ./alias.c:62 := -6
> > > > Jan 12 14:19:27 sudo[17370] -> rotate_right @
> ./redblack.c:147
> > > > Jan 12 14:19:27 sudo[17370] <- rotate_right @
> ./redblack.c:163
> > > > Jan 12 14:19:27 sudo[17370] <- rbinsert @
> ./redblack.c:265
> > := 0
> > > > Jan 12 14:19:27 sudo[17370] <- alias_add @
> ./alias.c:143
> > := (null)
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_txt @
> ./toke_util.c:52
> > > > Jan 12 14:19:27 sudo[17370] <- fill_txt @
> ./toke_util.c:80
> > := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_cmnd @
> ./toke_util.c:103
> > > > Jan 12 14:19:27 sudo[17370] <- fill_cmnd @
> > ./toke_util.c:124 := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_args @
> ./toke_util.c:132
> > > > Jan 12 14:19:27 sudo[17370] <- fill_args @
> > ./toke_util.c:162 := true
> > > > Jan 12 14:19:27 sudo[17370] -> new_member @ gram.y:956
> > > > Jan 12 14:19:27 sudo[17370] <- new_member @
> gram.y:968 :=
> > > 0x55f2968ff550
> > > > [... repeated sudo_lbuf_append/fill_cmnd/fill_args/new_member trace lines trimmed ...]
> > > > Jan 12 14:19:27 sudo[17370] /etc/sudoers:54
> CMNDALIAS ALIAS =
> > > COMMAND ARG , COMMAND ARG , COMMAND ARG , COMMAND ARG ,
> > COMMAND ARG
> > > , COMMAND ARG , COMMAND ARG , COMMAND ARG
> > > > Jan 12 14:19:27 sudo[17370] -> alias_add @
> ./alias.c:120
> > > > Jan 12 14:19:27 sudo[17370] -> rcstr_addref @
> ./rcstr.c:81
> > > > Jan 12 14:19:27 sudo[17370] <- rcstr_addref @
> ./rcstr.c:88 :=
> > > 0x55f2968e7714
> > > > Jan 12 14:19:27 sudo[17370] -> rbinsert @
> ./redblack.c:177
> > > > Jan 12 14:19:27 sudo[17370] -> alias_compare @
> ./alias.c:54
> > > > Jan 12 14:19:27 sudo[17370] <- alias_compare @
> > ./alias.c:62 := 7
> > > > Jan 12 14:19:27 sudo[17370] -> alias_compare @
> ./alias.c:54
> > > > Jan 12 14:19:27 sudo[17370] <- alias_compare @
> > ./alias.c:62 := -3
> > > > Jan 12 14:19:27 sudo[17370] -> alias_compare @
> ./alias.c:54
> > > > Jan 12 14:19:27 sudo[17370] <- alias_compare @
> > ./alias.c:62 := -3
> > > > Jan 12 14:19:27 sudo[17370] <- rbinsert @
> ./redblack.c:265
> > := 0
> > > > Jan 12 14:19:27 sudo[17370] <- alias_add @
> ./alias.c:143
> > := (null)
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_txt @
> ./toke_util.c:52
> > > > Jan 12 14:19:27 sudo[17370] <- fill_txt @
> ./toke_util.c:80
> > := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_cmnd @
> ./toke_util.c:103
> > > > Jan 12 14:19:27 sudo[17370] <- fill_cmnd @
> > ./toke_util.c:124 := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_args @
> ./toke_util.c:132
> > > > Jan 12 14:19:27 sudo[17370] <- fill_args @
> > ./toke_util.c:162 := true
> > > > Jan 12 14:19:27 sudo[17370] -> new_member @ gram.y:956
> > > > Jan 12 14:19:27 sudo[17370] <- new_member @
> gram.y:968 :=
> > > 0x55f2968ffdd0
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_cmnd @
> ./toke_util.c:103
> > > > Jan 12 14:19:27 sudo[17370] <- fill_cmnd @
> > ./toke_util.c:124 := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_args @
> ./toke_util.c:132
> > > > Jan 12 14:19:27 sudo[17370] <- fill_args @
> > ./toke_util.c:162 := true
> > > > Jan 12 14:19:27 sudo[17370] -> new_member @ gram.y:956
> > > > Jan 12 14:19:27 sudo[17370] <- new_member @
> gram.y:968 :=
> > > 0x55f2968ffed0
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_cmnd @
> ./toke_util.c:103
> > > > Jan 12 14:19:27 sudo[17370] <- fill_cmnd @
> > ./toke_util.c:124 := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_args @
> ./toke_util.c:132
> > > > Jan 12 14:19:27 sudo[17370] <- fill_args @
> > ./toke_util.c:162 := true
> > > > Jan 12 14:19:27 sudo[17370] -> new_member @ gram.y:956
> > > > Jan 12 14:19:27 sudo[17370] <- new_member @
> gram.y:968 :=
> > > 0x55f2968fffd0
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_cmnd @
> ./toke_util.c:103
> > > > Jan 12 14:19:27 sudo[17370] <- fill_cmnd @
> > ./toke_util.c:124 := true
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] -> fill_args @
> ./toke_util.c:132
> > > > Jan 12 14:19:27 sudo[17370] <- fill_args @
> > ./toke_util.c:162 := true
> > > > Jan 12 14:19:27 sudo[17370] -> new_member @ gram.y:956
> > > > Jan 12 14:19:27 sudo[17370] <- new_member @
> gram.y:968 :=
> > > 0x55f2969000d0
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_append_v1 @
> > ./lbuf.c:159
> > > > Jan 12 14:19:27 sudo[17370] -> sudo_lbuf_expand @
> ./lbuf.c:69
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_expand @
> > ./lbuf.c:87 := true
> > > > Jan 12 14:19:27 sudo[17370] <- sudo_lbuf_append_v1 @
> > ./lbuf.c:190
> > > := true
> > > > Jan 12 14:19:27 sudo[17370] /etc/sudoers:55
> CMNDALIAS ALIAS =
> > > COMMAND ARG , COMMAND ARG , COMMAND ARG , COMMAND ARG
> > > > Jan 12 14:19:27 sudo[17370] -> alias_add @
> ./alias.c:120
> > > > Jan 12 14:19:27 sudo[17370] -> rcstr_addref @
> ./rcstr.c:81
> > > > Jan 12 14:19:27 sudo[17370] <- rcstr_addref @
> ./rcstr.c:88 :=
> > > 0x55f2968e7714
> > > > Jan 12 14:19:27 sudo[17370] -> rbinsert @
> ./redblack.c:177
> > > > Jan 12 14:19:27 sudo[17370] -> alias_compare @
> ./alias.c:54
> > > > Jan 12 14:19:27 sudo[17370] <- alias_compare @
> > ./alias.c:62 := -10
> > > > Jan 12 14:19:27 sudo[17370] -> alias_compare @
> ./alias.c:54
> > > > Jan 12 14:19:27 sudo[17370] <- alias_compare @
> > ./alias.c:62 := -4
> > > > _______________________________________________
> > > > FreeIPA-users mailing list -- freeipa-users(a)lists.fedorahosted.org
> > > > To unsubscribe send an email to freeipa-users-leave(a)lists.fedorahosted.org
> > > > Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> > > > List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> > > > List Archives: https://lists.fedorahosted.org/archives/list/freeipa-users@lists.fedoraho...
WARNING Could not update DNS SSHFP records.
by Ron Blom
Hi,
We have problems with clients registering DNS records at enrollment. Most of the time everything works OK, but about 10% of the machines don't create the A records or the SSHFP records. Sometimes they create neither. In ipaclient-install.log we see the following on machines that don't create the records. In this example the creation of the A records succeeded, but the creation of the SSHFP records failed with the following error:
2019-12-20T13:19:51Z INFO Adding SSH public key from /etc/ssh/ssh_host_rsa_key.pub
2019-12-20T13:19:51Z INFO Adding SSH public key from /etc/ssh/ssh_host_ecdsa_key.pub
2019-12-20T13:19:51Z INFO Adding SSH public key from /etc/ssh/ssh_host_ed25519_key.pub
2019-12-20T13:19:51Z INFO [try 1]: Forwarding 'host_mod' to json server 'https://freeipa-002.ipa.cloud/ipa/session/json'
2019-12-20T13:19:51Z DEBUG HTTP connection keep-alive (freeipa-002.ipa.cloud)
2019-12-20T13:19:51Z DEBUG received Set-Cookie (<type 'list'>)'['ipa_session=MagBearerToken=tR1VkWrpjmoNh7aZDYiPzXSwFlkhsp1ENg%2b5y8orMo9P7EkiLQXey11TH9wIgc2xJjJ2xdly2hFyi6v58o2HhzEeQBi%2fcR%2flZ7nwFv8VX3WxCSwS%2beDVSu7%2f%2fjsSB%2b1NzyVHTNe5jkJK9pGXL1nR7QMtNrV2gFY7RyFrJns50dEC%2fi5C%2fEn0BgZAE4aLAiThG4SW3iGc0bfOGy%2bDpAGE17XzB8G978uKpqqHGC9aFDmMmXVFCfpwHoIWoBtJctgy7y6Q97rJnpkjbe2heYMwLQFbDkrTRlrjSDfla0XXCNvd7in6zEu0MZloOXqyXHiu;path=/ipa;httponly;secure;']'
2019-12-20T13:19:51Z DEBUG storing cookie 'ipa_session=MagBearerToken=tR1VkWrpjmoNh7aZDYiPzXSwFlkhsp1ENg%2b5y8orMo9P7EkiLQXey11TH9wIgc2xJjJ2xdly2hFyi6v58o2HhzEeQBi%2fcR%2flZ7nwFv8VX3WxCSwS%2beDVSu7%2f%2fjsSB%2b1NzyVHTNe5jkJK9pGXL1nR7QMtNrV2gFY7RyFrJns50dEC%2fi5C%2fEn0BgZAE4aLAiThG4SW3iGc0bfOGy%2bDpAGE17XzB8G978uKpqqHGC9aFDmMmXVFCfpwHoIWoBtJctgy7y6Q97rJnpkjbe2heYMwLQFbDkrTRlrjSDfla0XXCNvd7in6zEu0MZloOXqyXHiu;' for principal host/adm-sdrn6419-2062.aal.ipa.cloud(a)RINIS.CLOUD
2019-12-20T13:19:51Z DEBUG Writing nsupdate commands to /etc/ipa/.dns_update.txt:
2019-12-20T13:19:51Z DEBUG debug
update delete adm-sdrn6419-2062.aal.ipa.cloud. IN SSHFP
show
send
update add adm-sdrn6419-2062.aal.ipa.cloud. 1200 IN SSHFP 1 1 6134C7CDE12FDDFA33A068A273941697928FBCD7
update add adm-sdrn6419-2062.aal.ipa.cloud. 1200 IN SSHFP 1 2 2F41772E6CAD9C328730BFCED0E27350A6C20DE8499E60158635ED8419BF2022
update add adm-sdrn6419-2062.aal.ipa.cloud. 1200 IN SSHFP 3 1 FFE99F20A5C32D857535D13425A7F85F3A63E198
update add adm-sdrn6419-2062.aal.ipa.cloud. 1200 IN SSHFP 3 2 D2C7FC741E834D4E1FE51B7867AFA2D34D0685C769D9019D98093E01C8312118
update add adm-sdrn6419-2062.aal.ipa.cloud. 1200 IN SSHFP 4 1 ED5416B39F419E4F631AB6C9A9CFC0139907232E
update add adm-sdrn6419-2062.aal.ipa.cloud. 1200 IN SSHFP 4 2 7794DBAA391B2939476EDD3A0173162F9CD3BBE1E16B52754BB8C6B56DA26435
show
send
2019-12-20T13:19:51Z DEBUG Starting external process
2019-12-20T13:19:51Z DEBUG args=/usr/bin/nsupdate -g /etc/ipa/.dns_update.txt
2019-12-20T13:19:51Z DEBUG Process finished, return code=1
2019-12-20T13:19:51Z DEBUG stdout=Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
;; flags:; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;; UPDATE SECTION:
adm-sdrn6419-2062.aal.ipa.cloud. 0 ANY SSHFP
Outgoing update query:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22636
;; flags:; QUESTION: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; QUESTION SECTION:
;3648384014.sig-freeipa-001.ipa.cloud. ANY TKEY
;; ADDITIONAL SECTION:
3648384014.sig-freeipa-001.ipa.cloud. 0 ANY TKEY gss-tsig. 1576847991 1576847991 3 NOERROR 677 YIICoQYJKoZIhvcSAQICAQBuggKQMIICjKADAgEFoQMCAQ6iBwMFACAA AACjggGCYYIBfjCCAXqgAwIBBaENGwtSSU5JUy5DTE9VRKIpMCegAwIB AaEgMB4bA0ROUxsXYWRtLWFhYS0wMDEucmluaXMuY2xvdWSjggE3MIIB M6ADAgESoQMCAQKiggElBIIBIWJzJaNElw4aQs2ZFHDopnUdH6vqowdG ojmiCBIpmgFjPsHEl98zY+UX6OqfF3ovB/uMAuCF1eq3spIRtPjb7hUO +lva9UtuvUJSV0pT9WI1B0ROZxzspkBQmZEYLRUCACxjW3Kw1F123ryy Ga4JJ4cROOFf1GtTdEW3CmIJLlyKqWXDFSQzgnqvP/acb0mQIr0Wid6P DJFaxYmm+uRHw5KBTg7hjeAQPFwgZxNdardv9hUvfhzElxtOK0Kj3ZDy 9lFdpemEtO+osfnwrwyX28xWGLZds/Gfpy0kfdihkUxT082eTWNftaE7 dX0LOb46j9sbMAFDbgHESCkXq5VFRBmtotnf3SRru/eBQFdbYq0/o/oY PCmaTJ4HSymhjbkrVVqkgfAwge2gAwIBEqKB5QSB4tPwDLt7qpKesLJg lGFXpoNqHOsGlFheQslzzkcWzjgoJDDRSJtjoaLgLFv0cITj+rr4dXcu tdMNESwRObXQofsbO9E0HYfZWijSDEIVJlXETm+x8ca4Qf938u3RHV/U +ZXmepZIBnMR4d70Vo+vz6CuXt0+HI0Dh6ot2whzX5g0MWHI0SfJElhO pgWN59uMUC4E8HtLzNEoWljX25acK3mi8ZBgq8iFihfObfEP0Xmx11NE Gru9QOiwMoxRUblws44U3sNOFRUgF9Ua3kKWXEfJ4wpPC3GwdMUajMkr V3wCXBc= 0
2019-12-20T13:19:51Z DEBUG stderr=Reply from SOA query:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 13244
;; flags: qr aa rd ra; QUESTION: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;adm-sdrn6419-2062.aal.ipa.cloud. IN SOA
;; AUTHORITY SECTION:
aal.ipa.cloud. 0 IN SOA freeipa-001.ipa.cloud. hostmaster.aal.ipa.cloud. 1576848002 3600 60 1209600 60
Found zone name: aal.ipa.cloud
The master is: freeipa-001.ipa.cloud
start_gssrequest
Found realm from ticket: RINIS.CLOUD
send_gssrequest
recvmsg reply from GSS-TSIG query
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22636
;; flags: qr ra; QUESTION: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;3648384014.sig-freeipa-001.ipa.cloud. ANY TKEY
;; ANSWER SECTION:
3648384014.sig-freeipa-001.ipa.cloud. 0 ANY TKEY gss-tsig. 0 0 3 BADNAME 0 0
dns_tkey_gssnegotiate: TKEY is unacceptable
2019-12-20T13:19:51Z DEBUG nsupdate failed: Command '/usr/bin/nsupdate -g /etc/ipa/.dns_update.txt' returned non-zero exit status 1
2019-12-20T13:19:51Z WARNING Could not update DNS SSHFP records.
When I run the nsupdate command manually after enrollment, it succeeds and adds the missing records.
Any ideas?
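Since the failure looks transient (the same TKEY negotiation works when retried by hand), one workaround until the root cause is found is to wrap nsupdate in a small retry loop in the enrollment automation. This is only a sketch: the `retry_nsupdate` function, and the `NSUPDATE`/`RETRY_DELAY` variables, are my own invention; only the `/usr/bin/nsupdate -g` invocation and the update-file path come from the log above.

```shell
#!/bin/sh
# Retry nsupdate a few times; the GSS-TSIG failure above appears transient.
# Assumes valid host credentials, as during ipa-client-install.
NSUPDATE=${NSUPDATE:-/usr/bin/nsupdate}
RETRY_DELAY=${RETRY_DELAY:-5}

retry_nsupdate() {
    updfile=$1
    tries=${2:-5}
    i=1
    while [ "$i" -le "$tries" ]; do
        if "$NSUPDATE" -g "$updfile"; then
            echo "nsupdate succeeded on attempt $i"
            return 0
        fi
        echo "nsupdate attempt $i failed, retrying in ${RETRY_DELAY}s" >&2
        sleep "$RETRY_DELAY"
        i=$((i + 1))
    done
    return 1
}
```

Run as e.g. `retry_nsupdate /etc/ipa/.dns_update.txt 5` right after enrollment.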
Re: FreeIPA ipa-replica-install hangs on "No status yet" during the first replication
by Florence Blanc-Renaud
On 1/17/20 4:32 PM, Damien Bras via FreeIPA-users wrote:
> Hi,
>
> During the installation of one of our FreeIPA replica (with
> ipa-replica-install), the process hangs on "No status yet".
>
> Our domain is in domain level 1.
>
> It seems that the script is waiting for an attribute
> nsds5ReplicaLastInitStatus.
>
> The master server is up & running and we want to have a multimaster
> environment.
>
> We don't find any error related to the replication process in the log.
>
> The version installed: 4.6.5-11.0.1.el7_7.3
>
> First, the IPA client is correctly installed on the server. Then we use
> the command ipa-replica-install to promote it to an IPA server with:
>
> ipa-replica-install -U --principal admin --admin-password
> $admin_password --domain domain.com --server server2.domain.com
> --setup-ca --setup-dns --no-forwarders --forward-policy=first
> --no-dnssec-validation --allow-zone-overlap
> --reverse-zone=xx.xx.in-addr.arpa --mkhomedir --force-join
>
> In the ipareplica-install.log we just have this:
>
> …
>
> 2020-01-17T10:25:46Z DEBUG [28/41]: setting up initial replication
>
> 2020-01-17T10:25:46Z DEBUG retrieving schema for SchemaCache
> url=ldapi://%2fvar%2frun%2fslapd-DOMAIN-COM.socket
> conn=<ldap.ldapobject.SimpleLDAPObject instance at 0x7f2c94db6248>
>
> 2020-01-17T10:25:47Z DEBUG Destroyed connection
> context.ldap2_139829518113296
>
> 2020-01-17T10:25:47Z DEBUG Starting external process
>
> 2020-01-17T10:25:47Z DEBUG args=/bin/systemctl --system daemon-reload
>
> 2020-01-17T10:25:47Z DEBUG Process finished, return code=0
>
> 2020-01-17T10:25:47Z DEBUG stdout=
>
> 2020-01-17T10:25:47Z DEBUG stderr=
>
> 2020-01-17T10:25:47Z DEBUG Starting external process
>
> 2020-01-17T10:25:47Z DEBUG args=/bin/systemctl restart
> dirsrv(a)DOMAIN-COM.service
>
> 2020-01-17T10:25:53Z DEBUG Process finished, return code=0
>
> 2020-01-17T10:25:53Z DEBUG stdout=
>
> 2020-01-17T10:25:53Z DEBUG stderr=
>
> 2020-01-17T10:25:53Z DEBUG Restart of
> dirsrv(a)HS2-VDC-CORP-HOMESEND-COM.service complete
>
> 2020-01-17T10:25:53Z DEBUG Created connection context.ldap2_139829518113296
>
> 2020-01-17T10:25:53Z DEBUG Fetching nsDS5ReplicaId from master [attempt 1/5]
>
> 2020-01-17T10:25:53Z DEBUG retrieving schema for SchemaCache
> url=ldap://server2.domain.com:389 conn=<ldap.ldapobject.SimpleLDAPObject
> instance at 0x7f2c95da8320>
>
> 2020-01-17T10:25:54Z DEBUG Successfully updated nsDS5ReplicaId.
>
> 2020-01-17T10:25:54Z DEBUG Add or update replica config
> cn=replica,cn=dc\=domain\,dc\=com,cn=mapping tree,cn=config
>
> 2020-01-17T10:25:54Z DEBUG Added replica config
> cn=replica,cn=dc\=domain\,dc\=com,cn=mapping tree,cn=config
>
> 2020-01-17T10:25:54Z DEBUG Add or update replica config
> cn=replica,cn=dc\=domain\,dc\=com,cn=mapping tree,cn=config
>
> 2020-01-17T10:25:54Z DEBUG No update to
> cn=replica,cn=dc\=domain\,dc\=com,cn=mapping tree,cn=config necessary
>
> 2020-01-17T10:25:54Z DEBUG Waiting for replication
> (ldapi://%2fvar%2frun%2fslapd-DOMAIN-COM.socket)
> cn=meToserver2.domain.com,cn=replica,cn=dc\=domain\,dc\=com,cn=mapping
> tree,cn=config (objectclass=*)
>
> 2020-01-17T10:25:54Z DEBUG Entry found
> [LDAPEntry(ipapython.dn.DN('cn=meToserver2.domain.com,cn=replica,cn=dc\=domain\,dc\=com,cn=mapping
> tree,cn=config'), {u'nsds5replicaLastInitStart': ['19700101000000Z'],
> u'nsds5replicaUpdateInProgress': ['FALSE'], u'cn':
> ['meToserver2.domain.com'], u'objectClass':
> ['nsds5replicationagreement', 'top'], u'nsds5replicaLastUpdateEnd':
> ['19700101000000Z'], u'nsDS5ReplicaRoot': ['dc=domain,dc=com'],
> u'nsDS5ReplicaHost': ['server2.domain.com'],
> u'nsds5replicaLastUpdateStatus': ['Error (0) No replication sessions
> started since server startup'], u'nsDS5ReplicaBindMethod':
> ['SASL/GSSAPI'], u'nsds5ReplicaStripAttrs': ['modifiersName
> modifyTimestamp internalModifiersName internalModifyTimestamp'],
> u'nsds5replicaLastUpdateStart': ['19700101000000Z'],
> u'nsDS5ReplicaPort': ['389'], u'nsDS5ReplicaTransportInfo': ['LDAP'],
> u'description': ['me to server2.domain.com'], u'nsds5replicareapactive':
> ['0'], u'nsds5replicaChangesSentSinceStartup': [''],
> u'nsds5replicaTimeout': ['120'], u'nsDS5ReplicatedAttributeList':
> ['(objectclass=*) $ EXCLUDE memberof idnssoaserial entryusn
> krblastsuccessfulauth krblastfailedauth krbloginfailedcount'],
> u'nsds5replicaLastInitEnd': ['19700101000000Z'],
> u'nsDS5ReplicatedAttributeListTotal': ['(objectclass=*) $ EXCLUDE
> entryusn krblastsuccessfulauth krblastfailedauth krbloginfailedcount']})]
>
Hi,
can you also paste the lines that contain the install error?
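In the meantime, you can read the status attributes the installer is polling directly from the new replica's directory server. A sketch, using the socket and agreement DN that appear in your log (adjust both to your deployment):

```shell
# Query the replication agreement for its init/update status attributes.
ldapsearch -Y EXTERNAL -H ldapi://%2fvar%2frun%2fslapd-DOMAIN-COM.socket \
    -b 'cn=meToserver2.domain.com,cn=replica,cn=dc\=domain\,dc\=com,cn=mapping tree,cn=config' \
    -s base \
    nsds5replicaLastInitStatus nsds5replicaLastInitStart \
    nsds5replicaLastInitEnd nsds5replicaLastUpdateStatus
```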
> On the live master, there is a strange behavior also:
>
> It seems LDAP is in read-only mode. For example, if I reset the
> password of an account, I don't get any error but nothing happens.
> I also see these errors on this server:
> I have also those errors on this server:
>
> Jan 17 16:27:57 hs2-man-idm-02 ns-slapd: [17/Jan/2020:16:27:57.102642397
> +0100] - ERR - csngen_adjust_time - Adjustment limit exceeded; value -
> 2711289715, limit - 86400
Are your servers synchronized (either with ntpd or chronyd)? Maybe the
time is different and prevents correct replication.
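The csngen_adjust_time error above points the same way: the CSN generator refuses to adjust its clock by more than 86400 seconds. As an illustrative check (the `within_csn_limit` helper is mine; only the 86400 limit comes from the log), compare `date +%s` taken from each server:

```shell
#!/bin/sh
# csngen_adjust_time rejects adjustments over 86400s; check two epoch
# timestamps (e.g. `date +%s` run on each server) against that limit.
within_csn_limit() {
    t1=$1; t2=$2; limit=${3:-86400}
    diff=$((t1 - t2))
    if [ "$diff" -lt 0 ]; then diff=$((-diff)); fi
    if [ "$diff" -le "$limit" ]; then
        echo "offset ${diff}s: OK"
        return 0
    fi
    echo "offset ${diff}s exceeds limit ${limit}s" >&2
    return 1
}
```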
flo
>
> Jan 17 16:27:57 hs2-man-idm-02 ns-slapd: [17/Jan/2020:16:27:57.110464100
> +0100] - WARN - NSMMReplicationPlugin - replica_generate_next_csn -
> opcsn=5e21d27e000000050000 <= basecsn=ffbcd1f1522600040000, adjusted
> opcsn=5e21d27e522700050000
>
> But we don't have any replication agreements because there are no other servers:
>
> # ipa-replica-manage list
>
> server2.domain.com: master
>
> # ipa-replica-manage list-ruv
>
> Directory Manager password:
>
> Replica Update Vectors:
>
> server2.domain.com:389: 5
>
> Certificate Server Replica Update Vectors:
>
> server2.domain.com:389: 6
>
> # ipa topologysuffix-find
>
> ---------------------------
>
> 2 topology suffixes matched
>
> ---------------------------
>
> Suffix name: ca
>
> Managed LDAP suffix DN: o=ipaca
>
> Suffix name: domain
>
> Managed LDAP suffix DN: dc=domain,dc=com
>
> ----------------------------
>
> Number of entries returned 2
>
> ----------------------------
>
> # ipa topologysegment-find
>
> Suffix name: domain
>
> ------------------
>
> 0 segments matched
>
> ------------------
>
> ----------------------------
>
> Number of entries returned 0
>
> ----------------------------
>
> I really don't know what happened here. Could you help us with that?
>
> Best regards,
>
> Damien
>
>
kinit: Pre-authentication failed: Invalid argument while getting initial credentials
by John Louis
Hi, on CentOS 7 I installed FreeIPA using "yum install ipa-server". Everything, including the client, is on the same machine. All went well; I can now log in to the web UI as "admin", create user accounts, etc. And "kinit admin", "kinit list", etc. all worked as expected right after installation.
But a couple of days later, even though I can still log in to the web UI as "admin", in an SSH session on the server I get the following (I have replaced the realm name with "REALM" here):
# kinit
kinit: Client 'root@REALM' not found in Kerberos database while getting initial credentials
# kinit admin
kinit: Pre-authentication failed: Invalid argument while getting initial credentials
# kinit list
kinit: Client 'list@REALM' not found in Kerberos database while getting initial credentials
# env KRB5_TRACE=/dev/stdout kinit admin 2>&1
[11612] 1578511115.54729: Getting initial credentials for admin@REALM
[11612] 1578511115.54731: Sending unauthenticated request
[11612] 1578511115.54732: Sending request (167 bytes) to REALM
[11612] 1578511115.54733: Initiating TCP connection to stream 127.0.0.1:88
[11612] 1578511115.54734: Sending TCP request to stream 127.0.0.1:88
[11612] 1578511115.54735: Received answer (240 bytes) from stream 127.0.0.1:88
[11612] 1578511115.54736: Terminating TCP connection to stream 127.0.0.1:88
[11612] 1578511115.54737: Response was from master KDC
[11612] 1578511115.54738: Received error from KDC: -1765328359/Additional pre-authentication required
[11612] 1578511115.54741: Preauthenticating using KDC method data
[11612] 1578511115.54742: Processing preauth types: PA-PK-AS-REQ (16), PA-PK-AS-REP_OLD (15), PA-PK-AS-REQ_OLD (14), PA-FX-FAST (136), PA-PKINIT-KX (147), PA-FX-COOKIE (133)
[11612] 1578511115.54743: Received cookie: MIT
[11612] 1578511115.54744: PKINIT client has no configured identity; giving up
[11612] 1578511115.54745: Preauth module pkinit (147) (info) returned: 0/Success
[11612] 1578511115.54746: PKINIT client has no configured identity; giving up
[11612] 1578511115.54747: Preauth module pkinit (16) (real) returned: 22/Invalid argument
[11612] 1578511115.54748: PKINIT client has no configured identity; giving up
[11612] 1578511115.54749: Preauth module pkinit (14) (real) returned: 22/Invalid argument
kinit: Pre-authentication failed: Invalid argument while getting initial credentials
# klist -ek
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------
2 host/ipa.host.name@REALM (aes256-cts-hmac-sha1-96)
2 host/ipa.host.name@REALM (aes128-cts-hmac-sha1-96)
2 host/ipa.host.name@REALM (des3-cbc-sha1)
2 host/ipa.host.name@REALM (arcfour-hmac)
2 host/ipa.host.name@REALM (camellia128-cts-cmac)
2 host/ipa.host.name@REALM (camellia256-cts-cmac)
So it looks like I lost "admin" in Kerberos?
The only thing I think I did is change the server's time and hwclock time, by 9 minutes.
Thanks!
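That time change may well matter: MIT Kerberos tolerates a clock skew of only 300 seconds by default (the `clockskew` setting in krb5.conf), and 9 minutes is well outside that bound if the KDC's view of time and previously issued credentials diverged. A sketch of the bound, with made-up epoch timestamps standing in for the two clocks (the `skew_ok` helper is illustrative, not a real krb5 tool):

```shell
#!/bin/sh
# Default Kerberos clockskew is 300s; anything beyond it breaks preauth.
skew_ok() {
    t_client=$1
    t_kdc=$2
    max=${3:-300}
    d=$((t_client - t_kdc))
    if [ "$d" -lt 0 ]; then d=$((-d)); fi
    [ "$d" -le "$max" ]
}

skew_ok 1000 1100 && echo "within skew"      # 100s offset: accepted
skew_ok 1000 1540 || echo "skew too large"   # 540s (9 min): rejected
```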