kinit not working for some accounts
by Tiemen Ruiten
Hello,
I've just noticed that kinit is not working for several, but not all, accounts
in our FreeIPA domain (4.4.0-14.el7.centos.7). I get the following error on the
client:
[root@caesium tiemen]# KRB5_TRACE=/dev/stdout kinit *dba*
[7827] 1498729905.996951: Resolving unique ccache of type KEYRING
[7827] 1498729905.997071: Getting initial credentials for dba(a)I.RDMEDIA.COM
[7827] 1498729905.997811: Sending request (167 bytes) to I.RDMEDIA.COM
[7827] 1498729905.998340: Initiating TCP connection to stream
10.100.110.36:88
[7827] 1498729906.2356: Sending TCP request to stream 10.100.110.36:88
[7827] 1498729906.9304: Received answer (204 bytes) from stream
10.100.110.36:88
[7827] 1498729906.9334: Terminating TCP connection to stream
10.100.110.36:88
[7827] 1498729906.9621: Response was from master KDC
[7827] 1498729906.9683: Received error from KDC: -1765328359/Additional
pre-authentication required
*[7827] 1498729906.9780: Processing preauth types: 136, 133*
*[7827] 1498729906.9795: Received cookie: MIT*
*kinit: Generic preauthentication failure while getting initial credentials*
whereas
[root@caesium tiemen]# KRB5_TRACE=/dev/stdout kinit *admin*
[7869] 1498730079.918191: Resolving unique ccache of type KEYRING
[7869] 1498730079.918290: Getting initial credentials for
admin(a)I.RDMEDIA.COM
[7869] 1498730079.918896: Sending request (169 bytes) to I.RDMEDIA.COM
[7869] 1498730079.919370: Initiating TCP connection to stream
10.100.110.36:88
[7869] 1498730079.922958: Sending TCP request to stream 10.100.110.36:88
[7869] 1498730079.930832: Received answer (258 bytes) from stream
10.100.110.36:88
[7869] 1498730079.930857: Terminating TCP connection to stream
10.100.110.36:88
[7869] 1498730079.930977: Response was from master KDC
[7869] 1498730079.931039: Received error from KDC: -1765328359/Additional
pre-authentication required
*[7869] 1498730079.931106: Processing preauth types: 136, 19, 2, 133*
*[7869] 1498730079.931129: Selected etype info: etype aes256-cts, salt
"REDACTED", params ""*
*[7869] 1498730079.931139: Received cookie: MIT*
*Password for ter(a)I.RDMEDIA.COM:*
What could explain this difference? Where can I look to debug this?
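For what it's worth, the visible difference is that the failing account is only
offered preauth types 136 (FX-FAST) and 133 (FX-COOKIE), with no encrypted
timestamp (2) or etype-info (19), which as far as I understand can happen when
the KDC has no long-term key for the principal, or the account is limited to
some other preauth mechanism. Is that a reasonable reading? This is the check I
was planning to run next (as an admin, against the affected account):

kinit admin
# does IPA still think this account has Kerberos keys and a usable password?
ipa user-show dba --all | grep -iE 'kerberos keys|password|auth'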
--
Tiemen Ruiten
Systems Engineer
R&D Media
6 years, 3 months
(no subject)
by Sean Hogan
Hi All,
We are having an issue with SSSD when performing a RHEL 6.6 to 6.7 upgrade. The
systems are already enrolled and working against IPA 3.0.0-50 with the 6.6
client. When we yum update, sssd gives this:
Non-fatal POSTTRANS scriptlet failure in rpm package
sssd-1.12.4-47.el6_7.8.ppc64
warning: %posttrans(sssd-1.12.4-47.el6_7.8.ppc64) scriptlet
failed, exit status 1
It seems to install, but sssd will no longer start. I can run LDAP searches
against the IPA server and kinit without issue.
I have un-enrolled and re-enrolled the client to no avail... once enrollment
gets to starting sssd, it says the sssd restart failed and then continues to
enroll. I have reinstalled sssd, ipa-client and c-ares, and I have removed the
sssd cache DB.
The really strange part is that if we wait approximately 24 hours, sssd starts
working again, which we have reproduced on the 2 servers we are testing with...
are we missing some sort of lease or cache? We are using this to remove the
sssd DB:
rm -rf /var/lib/sss/db/*
Here is a piece of the gdb output from the core dump:
Core was generated by `/usr/libexec/sssd/sssd_pac --uid 0 --gid 0 -d 0x37f0
'.
Program terminated with signal 11, Segmentation fault.
#0 0x00000fff83f2bc64 in ._dl_vdso_vsym () from /lib64/libc.so.6
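If a fuller backtrace would help, I can pull one from that core, roughly like
this (assuming the matching debuginfo packages are installable on this box; the
core path below is a placeholder):

debuginfo-install -y sssd glibc   # needs yum-utils; versions must match the installed packages
gdb /usr/libexec/sssd/sssd_pac /path/to/core
(gdb) bt full
(gdb) thread apply all bt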
Part of sssd.log with debug level 9:
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_dispatch] (0x4000): dbus conn:
0x10034ca0c10
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_dispatch] (0x4000): Dispatching.
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_message_handler] (0x4000): Received
SBUS method [RegisterService]
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_get_sender_id_send] (0x2000): Not a
sysbus message, quit
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_handler_got_caller_id] (0x4000):
Received SBUS method [RegisterService]
(Tue Jun 27 21:13:14 2017) [sssd] [client_registration] (0x0100): Received
ID registration: (sudo,1)
(Tue Jun 27 21:13:14 2017) [sssd] [mark_service_as_started] (0x0200):
Marking sudo as started.
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_dispatch] (0x4000): dbus conn:
0x10034ca4590
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_dispatch] (0x4000): Dispatching.
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_message_handler] (0x4000): Received
SBUS method [RegisterService]
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_get_sender_id_send] (0x2000): Not a
sysbus message, quit
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_handler_got_caller_id] (0x4000):
Received SBUS method [RegisterService]
(Tue Jun 27 21:13:14 2017) [sssd] [client_registration] (0x0100): Received
ID registration: (ssh,1)
(Tue Jun 27 21:13:14 2017) [sssd] [mark_service_as_started] (0x0200):
Marking ssh as started.
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_dispatch] (0x4000): dbus conn:
0x10034ca31b0
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_dispatch] (0x4000): Dispatching.
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_message_handler] (0x4000): Received
SBUS method [RegisterService]
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_get_sender_id_send] (0x2000): Not a
sysbus message, quit
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_handler_got_caller_id] (0x4000):
Received SBUS method [RegisterService]
(Tue Jun 27 21:13:14 2017) [sssd] [client_registration] (0x0100): Received
ID registration: (pam,1)
(Tue Jun 27 21:13:14 2017) [sssd] [mark_service_as_started] (0x0200):
Marking pam as started.
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_dispatch] (0x4000): dbus conn:
0x10034c9fc70
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_dispatch] (0x4000): Dispatching.
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_message_handler] (0x4000): Received
SBUS method [RegisterService]
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_get_sender_id_send] (0x2000): Not a
sysbus message, quit
(Tue Jun 27 21:13:14 2017) [sssd] [sbus_handler_got_caller_id] (0x4000):
Received SBUS method [RegisterService]
(Tue Jun 27 21:13:14 2017) [sssd] [client_registration] (0x0100): Received
ID registration: (nss,1)
(Tue Jun 27 21:13:14 2017) [sssd] [mark_service_as_started] (0x0200):
Marking nss as started.
(Tue Jun 27 21:13:14 2017) [sssd] [mt_svc_exit_handler] (0x0040): Child
[pac] terminated with signal [11]
(Tue Jun 27 21:13:14 2017) [sssd] [mt_svc_restart] (0x0400): Scheduling
service pac for restart 1
(Tue Jun 27 21:13:14 2017) [sssd] [get_ping_config] (0x0100): Time between
service pings for [pac]: [10]
(Tue Jun 27 21:13:14 2017) [sssd] [get_ping_config] (0x0100): Time between
SIGTERM and SIGKILL for [pac]: [60]
(Tue Jun 27 21:13:14 2017) [sssd] [start_service] (0x0100): Queueing
service pac for startup
(Tue Jun 27 21:13:14 2017) [sssd] [mt_svc_exit_handler] (0x0040): Child
[pac] terminated with signal [11]
(Tue Jun 27 21:13:16 2017) [sssd] [mt_svc_restart] (0x0400): Scheduling
service pac for restart 2
(Tue Jun 27 21:13:16 2017) [sssd] [get_ping_config] (0x0100): Time between
service pings for [pac]: [10]
(Tue Jun 27 21:13:16 2017) [sssd] [get_ping_config] (0x0100): Time between
SIGTERM and SIGKILL for [pac]: [60]
(Tue Jun 27 21:13:16 2017) [sssd] [start_service] (0x0100): Queueing
service pac for startup
(Tue Jun 27 21:13:16 2017) [sssd] [mt_svc_exit_handler] (0x0040): Child
[pac] terminated with signal [11]
(Tue Jun 27 21:13:19 2017) [sssd] [services_startup_timeout] (0x0400):
Handling timeout
(Tue Jun 27 21:13:20 2017) [sssd] [mt_svc_restart] (0x0400): Scheduling
service pac for restart 3
(Tue Jun 27 21:13:20 2017) [sssd] [get_ping_config] (0x0100): Time between
service pings for [pac]: [10]
(Tue Jun 27 21:13:20 2017) [sssd] [get_ping_config] (0x0100): Time between
SIGTERM and SIGKILL for [pac]: [60]
(Tue Jun 27 21:13:20 2017) [sssd] [start_service] (0x0100): Queueing
service pac for startup
(Tue Jun 27 21:13:20 2017) [sssd] [mt_svc_exit_handler] (0x0040): Child
[pac] terminated with signal [11]
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_restart_service] (0x0010):
Process [pac], definitely stopped!
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0040): Returned with: 1
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Terminating
[ssh][14092]
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Child [ssh]
exited gracefully
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Terminating
[pam][14091]
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Child [pam]
exited gracefully
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Terminating
[sudo][14090]
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Child [sudo]
exited gracefully
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Terminating
[nss][14089]
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Child [nss]
exited gracefully
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Terminating
[example.local][14088]
(Tue Jun 27 21:13:20 2017) [sssd] [monitor_quit] (0x0020): Child
[example.local] exited gracefully
(Tue Jun 27 21:13:20 2017) [sssd] [sbus_remove_watch] (0x2000):
0x10034c9b8d0/0x10034c895a0
server.example.local:/var/log/sssd# sssd -d9
(Tue Jun 27 21:46:38:159841 2017) [sssd] [check_file] (0x0400): lstat for
[/var/run/nscd/socket] failed: [2][No such file or directory].
(Tue Jun 27 21:46:38:162382 2017) [sssd] [ldb] (0x0400): server_sort:Unable
to register control with rootdse!
(Tue Jun 27 21:46:38:163358 2017) [sssd] [confdb_get_domain_internal]
(0x0400): No enumeration for [example.local]!
(Tue Jun 27 21:46:38:163410 2017) [sssd] [confdb_get_domain_internal]
(0x1000): pwd_expiration_warning is -1
(Tue Jun 27 21:46:38:163570 2017) [sssd] [become_user] (0x0200): Trying to
become user [0][0].
(Tue Jun 27 21:46:38:163597 2017) [sssd] [become_user] (0x0200): Already
user [0].
(Tue Jun 27 21:46:38:163675 2017) [sssd] [server_setup] (0x0040): Becoming
a daemon.
This pauses for a few seconds and drops me back to the command line with sssd
not started.
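Since it is only the pac responder that keeps segfaulting, one thing I am
tempted to try (purely a guess on my part, assuming pac is listed on the
services line in /etc/sssd/sssd.conf) is dropping it temporarily to see whether
the rest of sssd comes up:

# /etc/sssd/sssd.conf, [sssd] section: remove pac from the services list, e.g.
services = nss, sudo, pam, ssh
# then restart
service sssd restart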
Client
sssd-client-1.12.4-47.el6_7.8.ppc64
sssd-ldap-1.12.4-47.el6_7.8.ppc64
sssd-krb5-common-1.12.4-47.el6_7.8.ppc64
sssd-common-1.12.4-47.el6_7.8.ppc64
sssd-proxy-1.12.4-47.el6_7.8.ppc64
sssd-common-pac-1.12.4-47.el6_7.8.ppc64
sssd-krb5-1.12.4-47.el6_7.8.ppc64
python-sssdconfig-1.12.4-47.el6_7.8.noarch
sssd-ipa-1.12.4-47.el6_7.8.ppc64
sssd-1.12.4-47.el6_7.8.ppc64
sssd-ad-1.12.4-47.el6_7.8.ppc64
ipa-client-3.0.0-47.el6_7.2.ppc64
IPA Server
ipa-server-3.0.0-50.el6.1.x86_64
Any help is appreciated, as I have spent a good 15 hours reading the logs and
bug reports and am not any closer to a resolution.
Sean Hogan
6 years, 3 months
Re: SSSD not starting
by Sean Hogan
Apologies, my subject line was "SSSD not starting" but I posted it to the old
address. I made a new email with the info, and the copy-paste must have dropped
the subject.
Sean Hogan
From: Sean Hogan via FreeIPA-users
<freeipa-users(a)lists.fedorahosted.org>
To: freeipa-users(a)lists.fedorahosted.org
Cc: Sean Hogan <schogan(a)us.ibm.com>
Date: 06/28/2017 07:29 AM
Subject: [Freeipa-users] (no subject)
6 years, 3 months
Sync Issues
by Devin Acosta
I am running the latest CentOS 7.3 / FreeIPA release, and it appears that my
replication is broken.
[27/Jun/2017:17:28:58.705411461 +0000] NSMMReplicationPlugin -
agmt="cn=meTolasdc-lmfpa-002.lxi.m451.tech" (lasdc-lmfpa-002:389): Data
required to update replica has been purged from the changelog. The replica
must be reinitialized.
[27/Jun/2017:17:29:02.257550913 +0000]
agmt="cn=meTolasdc-lmfpa-002.lxi.m451.tech
" (lasdc-lmfpa-002:389) - Can't locate CSN 595283d600b400140000 in the
changelog (DB rc=-30988). If replication stops, the consumer may need to be
reinitialized.
When I try to delete the agreement and re-create it, I get this error:
Removal of IPA replication agreement is deprecated with managed IPA
replication topology. Please use `ipa topologysegment-*` commands to manage
the topology.
However, when I try to delete the segment and recreate it, I also get an
error.
[root@lasdc-lmfpa-002 ~]# ipa topologysegment-del
Suffix name: domain
Segment name: las01-003-010.lxi.m451.tech-to-lasdc-lmfpa-002.lxi.m451.tech
ipa: ERROR: Server is unwilling to perform: Removal of Segment disconnects
topology.Deletion not allowed.
Any ideas how I can resolve this issue? I basically have 2 FreeIPA servers in
each DC; one DC is happy with the sync, but I have lost all replication to the
other, so passwords aren't syncing across DCs.
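Given that the changelog entries it needs have been purged, would the right
recovery here be to re-initialize the stale replica from the healthy one rather
than deleting the segment? Something like this, run on the out-of-date server
(hostnames taken from the segment name above):

# on lasdc-lmfpa-002, pull a fresh copy of the domain suffix from the healthy master
ipa-replica-manage re-initialize --from las01-003-010.lxi.m451.tech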
6 years, 3 months
Can't install client on scientific linux 7.3
by Niels Walet
I seem to have some serious issues with IPA on Scientific Linux 7.3: when installing on a client, the install runs through fine until it bombs out on the following issue:
<long trace of ipa-client-install --password=' --domain --realm --server --hostname=xxxx --debug>
[...]
Configured /etc/krb5.conf for IPA realm YYY
Starting external process
args=keyctl search @s user ipa_session_cookie:host/xxx@YYY
Process finished, return code=1
stdout=
stderr=keyctl_search: Required key not available
Starting external process
args=/usr/bin/certutil -d /tmp/tmpl7C_lX -N -f /tmp/tmpL9Jnj9
Process finished, return code=0
stdout=
stderr=
Starting external process
args=/usr/bin/certutil -d /tmp/tmpl7C_lX -A -n CA certificate 1 -t C,,
Process finished, return code=0
stdout=
stderr=
Starting external process
args=keyctl search @s user ipa_session_cookie:host/xxx@YYY
Process finished, return code=1
stdout=
stderr=keyctl_search: Required key not available
failed to find session_cookie in persistent storage for principal 'host/xxx@YYY'
trying https://theoipa.ph.man.ac.uk/ipa/json
Created connection context.rpcclient_47349328
Forwarding 'schema' to json server 'https://ipa.xxxx/ipa/json'
Destroyed connection context.rpcclient_47349328
Traceback (most recent call last):
<long traceback>
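I can post the full traceback if useful; it should all be in the client install
log, and I can also do a raw check against the JSON endpoint it was talking to:

less /var/log/ipaclient-install.log   # full ipa-client-install debug output, including the traceback
curl -k -v https://theoipa.ph.man.ac.uk/ipa/json   # rough reachability/TLS check from the client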
---
Prof. Niels R. Walet Phone: +44(0)1613063693
School of Physics and Astronomy Mobile: +44(0)7516622121
The University of Manchester Room 7.7, Schuster Building
Manchester, M13 9PL, UK
email: Niels.Walet(a)manchester.ac.uk twitter: @nwalet
6 years, 3 months
empty netgroups = all users could access all machines?
by Thomas Lau
Folks,
After migrating from FreeIPA 3.3.0 to 4.4.0, all of the user-group to
host-group mappings are gone. 4.4.0 seems to introduce a feature called
"Netgroups", which is currently empty. I haven't heard any user complain, but
does that mean that if "Netgroups" is empty, all users can access all machines
enrolled in FreeIPA?
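From what I have read, host access in current FreeIPA is decided by HBAC rules
rather than netgroups, and a default allow_all rule lets every user on to every
host unless it has been disabled; can someone confirm? This is how I was
planning to verify what is actually in effect (the user and host below are
placeholders):

ipa hbacrule-show allow_all   # is the default allow-all rule still enabled?
ipa hbacrule-find             # list all HBAC rules
ipa hbactest --user someuser --host client1.example.com --service sshd   # simulate an access decision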
6 years, 3 months
certmonger CA settings
by Ian Pilcher
As part of my debugging efforts (see the "Expired certificates" thread), I
modified the settings for the dogtag-ipa-renew-agent and
dogtag-ipa-ca-renew-agent CAs. Unfortunately, I forgot to make a note
of the original settings.
Are these correct for IPA 4.4 (on CentOS 7)?
CA 'SelfSign':
is-default: no
ca-type: INTERNAL:SELF
next-serial-number: 01
CA 'IPA':
is-default: no
ca-type: EXTERNAL
helper-location: /usr/libexec/certmonger/ipa-server-guard
/usr/libexec/certmonger/ipa-submit
CA 'certmaster':
is-default: no
ca-type: EXTERNAL
helper-location: /usr/libexec/certmonger/certmaster-submit
CA 'dogtag-ipa-renew-agent':
is-default: no
ca-type: EXTERNAL
helper-location: /usr/libexec/certmonger/ipa-server-guard
/usr/libexec/certmonger/dogtag-ipa-renew-agent-submit
CA 'local':
is-default: no
ca-type: EXTERNAL
helper-location: /usr/libexec/certmonger/local-submit
CA 'dogtag-ipa-ca-renew-agent':
is-default: no
ca-type: EXTERNAL
helper-location: /usr/libexec/certmonger/ipa-server-guard
/usr/libexec/certmonger/dogtag-ipa-ca-renew-agent-submit
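For reference, if I read the output format right, the listing above is what
getcert list-cas prints. If someone with an untouched IPA 4.4 / CentOS 7 box
could run the same and paste their dogtag-ipa-renew-agent and
dogtag-ipa-ca-renew-agent entries for comparison, that would settle it:

getcert list-cas   # dumps every CA certmonger knows about: type, helper location, defaults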
--
========================================================================
Ian Pilcher arequipeno(a)gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================
6 years, 3 months
Users not imported with Active Directory Synchronization
by laurent2.perrin@orange.com
Hi,
I'm trying to set up FreeIPA and Active Directory synchronisation, following the Red Hat documentation (https://access.redhat.com/documentation/en-US/Red_Hat_Enter...).
The ipa-replica-manage command returns success, but no users are imported into FreeIPA:
ipa-replica-manage connect --winsync --binddn='cn=ipasync,cn=Users,dc=ipa,dc=local' --bindpw='####' --passsync #### --cacert ipa-a-v
Directory Manager password:
Added CA certificate ipa-ad.cloud.620nm.net.cer to certificate database for ipa.cloud.620nm.net
ipa: INFO: AD Suffix is: DC=ipa,DC=local
The user for the Windows PassSync service is uid=passsync,cn=sysaccounts,cn=etc,dc=ipa,dc=cloud,dc=620nm,dc=net
Windows PassSync system account exists, not resetting password
ipa: INFO: Added new sync agreement, waiting for it to become ready . . .
ipa: INFO: Replication Update in progress: FALSE: status: Error (0) Replica acquired successfully: Incremental update started: start: 0: end: 0
ipa: INFO: Agreement is ready, starting replication . . .
Starting replication, please wait until this has completed.
Update in progress, 2 seconds elapsed
Update succeeded
The ipasync user has been created with the rights as described in the documentation.
In the FreeIPA logs, I didn't find any error message that could explain why users are not imported.
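Unless someone has a better idea, what I plan to check next is the winsync
activity in the 389-ds errors log on the IPA server, and forcing a full
resynchronization of the agreement (my understanding is that for winsync
agreements, re-initialize against the AD DC triggers a full resync):

grep -i windows /var/log/dirsrv/slapd-*/errors | tail -n 50   # winsync messages; the instance directory name is site-specific
ipa-replica-manage re-initialize --from ipa-ad.cloud.620nm.net   # full resync from the AD DC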
Regards,
Laurent PERRIN
Service Infra aux Projets
Orange Applications for Business
SCE/OAB/DPO/DT/SF/CLOUDS
tel. +33 4 37 24 62 85
Mob : 07 84 12 78 79
laurent2.perrin(a)orange.com
139 rue Vendôme 69006 Lyon
www.orange-business.com
6 years, 3 months