[Bug 1857104] New: Using FreeIPA breaks IPv4/IPv6 flags for SSH
by bugzilla@redhat.com
https://bugzilla.redhat.com/show_bug.cgi?id=1857104
Bug ID: 1857104
Summary: Using FreeIPA breaks IPv4/IPv6 flags for SSH
Product: Fedora
Version: 32
Status: NEW
Component: sssd
Assignee: sssd-maintainers@lists.fedoraproject.org
Reporter: ossman@cendio.se
QA Contact: extras-qa@fedoraproject.org
CC: abokovoy@redhat.com, atikhono@redhat.com,
jhrozek@redhat.com, lslebodn@redhat.com,
mzidek@redhat.com, pbrezina@redhat.com,
rharwood@redhat.com, sbose@redhat.com,
ssorce@redhat.com,
sssd-maintainers@lists.fedoraproject.org
Target Milestone: ---
Classification: Fedora
Description of problem:
If a client is configured using ipa-client-install, the -4 and -6 flags
stop working for ssh.
Version-Release number of selected component (if applicable):
Doesn't matter. Seen on RHEL 6 through 8, and on current Fedora.
How reproducible:
100%
Steps to Reproduce:
1. ipa-client-install
2. ssh -4 host.example.com
Actual results:
Connected via IPv6
Expected results:
Connected via IPv4
Additional info:
The bug is that ipa-client-install configures sss_ssh_knownhostsproxy as a
ProxyCommand on the client, and that command doesn't respect the flags given
to ssh. Because ssh delegates the actual TCP connection to the ProxyCommand,
the -4/-6 address-family flags never reach the code that opens the socket;
sss_ssh_knownhostsproxy resolves the host and connects on its own.
The issue affects all hosts, not just those that are part of the same FreeIPA
domain.
A practical effect is that connections get rejected or misbehave because of
IP-based rules that apply to the connection.
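For reference, the client-side configuration involved looks roughly like the
following (a sketch; the exact file, /etc/ssh/ssh_config.d/04-ipa.conf on
current releases per bug 2185785 below, and option spelling vary by release):

# ssh hands the TCP connection to the proxy, so -4/-6 never take effect
ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts

A per-connection workaround is to disable the proxy so that ssh opens the
socket itself:

ssh -4 -o ProxyCommand=none host.example.com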
[Bug 2185785] New: sss_ssh_knownhostsproxy does not exit after disconnect from libssh, leaks memory
by bugzilla@redhat.com
https://bugzilla.redhat.com/show_bug.cgi?id=2185785
Bug ID: 2185785
Summary: sss_ssh_knownhostsproxy does not exit after disconnect
from libssh, leaks memory
Product: Fedora
Version: 37
Status: NEW
Component: sssd
Assignee: sssd-maintainers@lists.fedoraproject.org
Reporter: mpitt@redhat.com
QA Contact: extras-qa@fedoraproject.org
CC: abokovoy@redhat.com, atikhono@redhat.com,
jhrozek@redhat.com, lslebodn@redhat.com,
luk.claes@gmail.com, mzidek@redhat.com,
pbrezina@redhat.com, sbose@redhat.com,
ssorce@redhat.com,
sssd-maintainers@lists.fedoraproject.org
Target Milestone: ---
Classification: Fedora
Description of problem:
In https://github.com/cockpit-project/cockpit/issues/18310 we got a report of
leaked sss_ssh_knownhostsproxy processes which eat up quite a lot of RAM and
keep SSH connections open to target hosts even after the parent ssh client has
gone away.
The user logs in to cockpit locally, then starts a remote cockpit session
through SSH (cockpit-ssh in particular, which uses libssh), then logs out.
Logging out SIGTERMs the cockpit-ssh process. That process then goes away, but
the sss_ssh_knownhostsproxy child doesn't exit; it gets reparented to pid 1
and keeps the SSH connection open.
Version-Release number of selected component (if applicable):
sssd-common-2.8.2-1.fc37.x86_64
libssh-0.10.4-2.fc37.x86_64
cockpit-bridge-289-1.fc37.x86_64
How reproducible: Always
Steps to Reproduce:
1. Join a machine to a FreeIPA domain, and log in as an IPA user. This should
create /etc/ssh/ssh_config.d/04-ipa.conf with a ProxyCommand for
sss_ssh_knownhostsproxy
2. Set up an SSH key and add it to ~/.ssh/authorized_keys; you should be able
to do "ssh `hostname`" *without* an "unknown host key" prompt (thanks to
sss_ssh_knownhostsproxy) and *without* a password prompt (due to using key
login). See the sketch after these steps.
3. dnf install cockpit-bridge
4. Run an SSH session through libssh, and kill it:
(printf '\n\n\n\n\n\n'; sleep 20) | /usr/libexec/cockpit-ssh `hostname` &
sleep 1 && pkill -e cockpit-ssh
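A minimal sketch for step 2, assuming a throwaway test key (the key type and
the empty passphrase are illustrative choices, not requirements):

ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
cat ~/.ssh/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Must succeed with neither a host-key nor a password prompt:
ssh `hostname` true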
Actual results:
The SSH logind session hangs on shutdown, stuck in the 'closing' state:
Since: Tue 2023-04-11 05:22:06 UTC; 1min 36s ago
Leader: 2935
TTY: web console
Remote: ::ffff:172.27.0.2
Service: cockpit; type web; class user
State: closing
Unit: session-11.scope
└─3025 /usr/bin/sss_ssh_knownhostsproxy -p 22 x0.cockpit.lan
The cockpit-ssh process is gone, but there are three leaked processes:
admin@c+ 5572 0.0 0.8 16624 5632 pts/1 S 07:40 0:00
/usr/bin/sss_ssh_knownhostsproxy -p 22 x0.cockpit.lan
root 5573 0.0 2.0 47060 13184 ? Ss 07:40 0:00 sshd:
admin@cockpit.lan [priv]
admin@c+ 5594 0.0 1.1 47060 7320 ? S 07:40 0:00 sshd:
admin@cockpit.lan@notty
strace -p 5572 says
restart_syscall(<... resuming interrupted read ...>
but it's not clear what it is trying to read from.
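One generic way to find out which descriptor the proxy is blocked on (pid 5572
from the listing above; plain procfs/strace tooling, nothing sssd-specific):

# List the stuck proxy's open file descriptors
ls -l /proc/5572/fd
# Trace its read/poll calls (and any children) to see which fd blocks
strace -f -e trace=read,poll,ppoll -p 5572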
This does *not* reproduce with "ssh `hostname` sleep 20" and killing that ssh
process. So this is some condition that only libssh triggers.
I know this isn't an ideal reproducer for you. Do you have some idea how to
debug this further? Enable some debug logging or so? (It's a user process, so
it can't log to /var/log/sssd/.)
Thanks!
[Bug 2107824] New: User logins don't use the right Kerberos tickets for cifs.upcall
by bugzilla@redhat.com
https://bugzilla.redhat.com/show_bug.cgi?id=2107824
Bug ID: 2107824
Summary: User logins don't use the right Kerberos tickets for
cifs.upcall
Product: Fedora
Version: 36
Hardware: x86_64
OS: Linux
Status: NEW
Component: sssd
Severity: low
Assignee: sssd-maintainers@lists.fedoraproject.org
Reporter: kamarasu@aol.in
QA Contact: extras-qa@fedoraproject.org
CC: abokovoy@redhat.com, atikhono@redhat.com,
jhrozek@redhat.com, lslebodn@redhat.com,
luk.claes@gmail.com, mzidek@redhat.com,
pbrezina@redhat.com, sbose@redhat.com,
ssorce@redhat.com,
sssd-maintainers@lists.fedoraproject.org
Target Milestone: ---
Classification: Fedora
Created attachment 1897647
--> https://bugzilla.redhat.com/attachment.cgi?id=1897647&action=edit
ssd_gdm_cifs_autofs
Description of problem:
User logins don't use the right Kerberos ticket for cifs.upcall on the first
attempt. I noticed this issue while logging in through GDM; I think the same
happens with ssh as well.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Set up a multiuser cifs automount map served from a NAS
2. Install Fedora 36 and perform a realm join to Samba (AD role); see the
sketch after these steps
3. Update /etc/dconf/profile/user with service-db:keyfile/user
4. Log in through GDM
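A minimal sketch for step 2, assuming the int.lan domain from the logs below
and an account permitted to join (package set and flags are illustrative):

dnf install realmd sssd adcli
realm discover int.lan
realm join --user=Administrator int.lan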
Actual results:
Jul 16 12:35:48 bullseye.int.lan kernel: FS-Cache: Loaded
Jul 16 12:35:48 bullseye.int.lan kernel: Key type dns_resolver registered
Jul 16 12:35:48 bullseye.int.lan kernel: Key type cifs.spnego registered
Jul 16 12:35:48 bullseye.int.lan kernel: Key type cifs.idmap registered
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: No dialect specified on mount.
Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3.1.1),
from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers
which do not support SMB3.1.1 (or even SMB3 or SMB2.1) specify vers=1.0 on
mount.
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: Attempting to mount
\\nas.int.lan\home
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: key description:
cifs.spnego;0;0;39010000;ver=0x2;host=nas.int.lan;ip4=192.168.1.10;sec=krb5;uid=0x0;creduid=0x2a;user=gdm;pid=0x636
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: ver=2
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: host=nas.int.lan
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: ip=192.168.1.10
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: sec=1
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: uid=0
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: creduid=42
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: user=gdm
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: pid=1590
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]:
get_cachename_from_process_env: pathname=/proc/1590/environ
Jul 16 12:35:48 bullseye.int.lan systemd[1]: Starting sssd-kcm.service - SSSD
Kerberos Cache Manager...
Jul 16 12:35:48 bullseye.int.lan systemd[1]: Started sssd-kcm.service - SSSD
Kerberos Cache Manager.
Jul 16 12:35:48 bullseye.int.lan audit[1]: SERVICE_START pid=1 uid=0
auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0
msg='unit=sssd-kcm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=?
addr=? terminal=? res=success'
Jul 16 12:35:48 bullseye.int.lan sssd_kcm[1606]: Starting up
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: get_existing_cc: default
ccache is KCM:42
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: get_tgt_time: unable to get
principal
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: krb5_get_init_creds_keytab:
-1765328378
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: Exit status 1
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: VFS: Verify user has a krb5
ticket and keyutils is installed
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: VFS: \\nas.int.lan Send error in
SessSetup = -126
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: VFS: cifs_mount failed w/return
code = -126
Expected results:
The cifs.spnego user is supposed to be the one specified at the login prompt;
it should not be user=gdm.
Additional info:
A few seconds later the mount retries and cifs.upcall succeeds, as shown
below. Note the creduid difference: the failing upcall above has creduid=0x2a
(uid 42, the gdm user), while the successful one below has creduid=0x48d02750
(uid 1221601104, the kamarasu user), so the first attempt looked for a ticket
in gdm's credential cache, which has none.
Jul 16 12:36:55 bullseye.int.lan kernel: CIFS: Attempting to mount
\\nas.int.lan\home
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]: key description:
cifs.spnego;0;0;39010000;ver=0x2;host=nas.int.lan;ip4=192.168.1.10;sec=krb5;uid=0x0;creduid=0x48d02750;user=kamarasu;pid=0xb48
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: ver=2
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: host=nas.int.lan
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: ip=192.168.1.10
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: sec=1
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: uid=0
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: creduid=1221601104
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: user=kamarasu
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: pid=2888
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]:
get_cachename_from_process_env: pathname=/proc/2888/environ
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]: get_existing_cc: default
ccache is KCM:1221601104:18284
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]: handle_krb5_mech: getting
service ticket for nas.int.lan
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]: handle_krb5_mech: ob
Please see the attachment ssd_gdm_cifs_autofs
[root@bullseye cloud-user]# automount -m
Mount point: /home/int.lan
source(s):
instance type(s): sss
map: auto.home
* | -fstype=cifs -rw -sec=krb5i -multiuser -user=$USER -cruid=$UID -cifsacl
://nas.int.lan/home
[root@bullseye cloud-user]# cat /etc/sssd/sssd.conf
[sssd]
domains = int.lan
config_file_version = 2
services = nss, pam, autofs
[domain/int.lan]
default_shell = /bin/bash
krb5_store_password_if_offline = True
cache_credentials = True
krb5_realm = INT.LAN
realmd_tags = manages-system joined-with-adcli
id_provider = ad
fallback_homedir = /home/%d/%u
ad_domain = int.lan
use_fully_qualified_names = False
ldap_id_mapping = True
#access_provider = ad
autofs_provider = ad
[root@bullseye cloud-user]# mount |grep nas
//nas.int.lan/home on /home/int.lan/kamarasu type cifs
(rw,relatime,vers=3.1.1,sec=krb5i,cruid=1221601104,cache=strict,multiuser,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.10,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,cifsacl,noperm,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,user=kamarasu)
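To see which credential cache a given creduid resolves to, the KCM caches can
be inspected directly (a sketch; it assumes default_ccache_name is KCM:, as
sssd-kcm sets up, with the uids taken from the creduid values above):

# Run as the user in question; KCM enforces per-uid cache ownership
KRB5CCNAME=KCM:1221601104 klist   # kamarasu's cache, per the second upcall
KRB5CCNAME=KCM:42 klist           # gdm's cache, which the first upcall hit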
[Bug 2111582] virtqemud deadlocking
by bugzilla@redhat.com
https://bugzilla.redhat.com/show_bug.cgi?id=2111582
--- Comment #18 from Ben Cotton <bcotton@redhat.com> ---
This message is a reminder that Fedora Linux 36 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora Linux 36 on
2023-05-16.
It is Fedora's policy to close all bug reports from releases that are no
longer maintained. At that time this bug will be closed as EOL if it remains
open with a 'version' of '36'.
Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, change the 'version'
to a later Fedora Linux version. Note that the version field may be hidden.
Click the "Show advanced fields" button if you do not see it.
Thank you for reporting this issue; we are sorry that we were not able to fix
it before Fedora Linux 36 reached end of life. If you would still like to see
this bug fixed and are able to reproduce it against a later version of Fedora
Linux, you are encouraged to change the 'version' to a later version before
this bug is closed.