Hi,
are there any plans (or maybe ongoing work already) to let FreeIPA run in a K8s environment?
Cheers, Ronald
On Fri, May 14, 2021 at 2:35 AM Ronald Wimmer via FreeIPA-users <freeipa-users@lists.fedorahosted.org> wrote:
Hi,
are there any plans (or maybe ongoing work already) to let FreeIPA run in a K8s environment?
There's already a container [0] for it. It is maintained by the community [1].
I've run it in K8s, but only to test some LDAP functionality (not the full implementation of FreeIPA), so YMMV.
[0] https://hub.docker.com/r/freeipa/freeipa-server/
[1] https://github.com/freeipa/freeipa-container
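If you want a concrete starting point on K8s, a rough sketch of standing that image up with the official Kubernetes Python client is below. The image tag, the ipa-server-install arguments, the IPA_SERVER_HOSTNAME variable and the /data mount are from my memory of the freeipa-container docs [1], and the PVC name is made up, so double-check everything against the upstream examples before relying on it.

# Hypothetical sketch: single-replica StatefulSet running the community
# freeipa-server image. Install args, env var and /data mount are assumptions
# taken from the freeipa-container docs [1]; verify them there.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

container = client.V1Container(
    name="freeipa-server",
    image="freeipa/freeipa-server:fedora-rawhide",  # tag is an assumption
    args=["ipa-server-install", "-U", "-r", "EXAMPLE.TEST"],  # unattended install
    env=[client.V1EnvVar(name="IPA_SERVER_HOSTNAME", value="ipa.example.test")],
    volume_mounts=[client.V1VolumeMount(name="ipa-data", mount_path="/data")],
)

pod_spec = client.V1PodSpec(
    containers=[container],
    volumes=[client.V1Volume(
        name="ipa-data",
        persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
            claim_name="ipa-data"),  # hypothetical pre-created PVC
    )],
)

sts = client.V1StatefulSet(
    api_version="apps/v1",
    kind="StatefulSet",
    metadata=client.V1ObjectMeta(name="freeipa"),
    spec=client.V1StatefulSetSpec(
        service_name="freeipa",
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "freeipa"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "freeipa"}),
            spec=pod_spec,
        ),
    ),
)

client.AppsV1Api().create_namespaced_stateful_set(namespace="ipa", body=sts)

Persistent storage for /data is the important part; the rest is tweakable.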
Cheers, Ronald
On 5/14/21 9:42 PM, Christian Hernandez via FreeIPA-users wrote:
On Fri, May 14, 2021 at 2:35 AM Ronald Wimmer via FreeIPA-users <freeipa-users@lists.fedorahosted.org> wrote:
Hi, are there any plans (or maybe ongoing work already) to let FreeIPA run in a K8s environment?
There's already a container [0] for it. It is maintained by the community [1].
Too bad it's not more complete and available in Helm format; a K8s install guide like this would be useful for people.
I've run it in K8s, but only to test some LDAP functionality (not the full implementation of FreeIPA), so YMMV.
[0] https://hub.docker.com/r/freeipa/freeipa-server/
[1] https://github.com/freeipa/freeipa-container
Cheers, Ronald
On 14.05.21 11:26, Ronald Wimmer via FreeIPA-users wrote:
Hi,
are there any plans (or maybe ongoing work already) to let FreeIPA run in a K8s environment?
What about tearing all the tightly coupled parts (389DS, DNS, PKI, HTTPD, KDC, Samba, ...) apart, running them in K8s, and doing the coupling there?
Could that work if somebody took on the effort (with support from the IPA devs, I would be willing to), or are there real showstoppers preventing such an adventure?
Cheers, Ronald
Ronald Wimmer via FreeIPA-users wrote:
On 14.05.21 11:26, Ronald Wimmer via FreeIPA-users wrote:
Hi,
are there any plans (or maybe ongoing work already) to let FreeIPA run in a K8s environment?
What about tearing all the tightly coupled parts (389DS, DNS, PKI, HTTPD, KDC, Samba, ...) apart, let them run in K8s and do the coupling there?
Could that work if somebody took the effort (with support from the IPA devs I would be willing to) or are there real showstoppers preventing such an adventure?
It could require a re-architecture of IPA. Some services rely on ldapi bind to connect to 389. You'd need to switch from that socket to a TCP socket and pass the requisite bind credentials (DM). Services rely on files in various places which if done systematically might not be too bad, but might require creative bind mounting and/or duplicating files. Installing it might require a pretty massive rewrite as it assumes a monolith. Upgrades would be another challenge.
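To make the ldapi point concrete, compare the two bind styles in the sketch below (python-ldap only, with a made-up socket path, hostname and password): the first is the local ldapi/EXTERNAL bind that services effectively get for free today, the second is what a split-out consumer would have to do, which means provisioning the DM credential to every such consumer and rotating it.

# Illustrative sketch only (not IPA code): contrast the local ldapi bind with
# a TCP bind that needs Directory Manager credentials shipped to the client.
import ldap
import ldap.sasl

# Today: ldapi over a UNIX socket; identity comes from the connecting process,
# no password is stored or transmitted. Socket path is a placeholder for a
# 389-ds instance.
local = ldap.initialize("ldapi://%2Frun%2Fslapd-EXAMPLE-TEST.socket")
local.sasl_interactive_bind_s("", ldap.sasl.external())

# Split-out: TCP/TLS bind from another container, which needs the Directory
# Manager password provisioned (e.g. as a Secret) to every consumer.
remote = ldap.initialize("ldaps://ds.ipa.example.test:636")
remote.simple_bind_s("cn=Directory Manager", "made-up-password")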
I don't know enough about K8S to know how naming would work to tie a bunch of different nodes into a single "service" with a common name.
I don't know how well scaling would work either, if that's a goal.
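For what it's worth, the usual Kubernetes answer to the naming question is a headless Service in front of a StatefulSet: the service name resolves to all pod IPs, and each replica also gets a stable DNS name of the form freeipa-0.freeipa.<namespace>.svc.cluster.local. A hedged sketch with the Kubernetes Python client (all names are placeholders):

from kubernetes import client, config

config.load_kube_config()

# Headless Service (clusterIP: None): DNS for "freeipa" returns the pod IPs,
# and StatefulSet pods using it as serviceName get stable per-replica names.
svc = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="freeipa"),
    spec=client.V1ServiceSpec(
        cluster_ip="None",
        selector={"app": "freeipa"},
        ports=[
            client.V1ServicePort(name="ldaps", port=636),
            client.V1ServicePort(name="kerberos", port=88),
        ],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="ipa", body=svc)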
rob
On Fri, 17 Mar 2023, Rob Crittenden via FreeIPA-users wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 14.05.21 11:26, Ronald Wimmer via FreeIPA-users wrote:
Hi,
are there any plans (or maybe ongoing work already) to let FreeIPA run in a K8s environment?
What about tearing all the tightly coupled parts (389DS, DNS, PKI, HTTPD, KDC, Samba, ...) apart, let them run in K8s and do the coupling there?
Could that work if somebody took the effort (with support from the IPA devs I would be willing to) or are there real showstoppers preventing such an adventure?
It could require a re-architecture of IPA. Some services rely on ldapi bind to connect to 389. You'd need to switch from that socket to a TCP socket and pass the requisite bind credentials (DM). Services rely on files in various places which if done systematically might not be too bad, but might require creative bind mounting and/or duplicating files. Installing it might require a pretty massive rewrite as it assumes a monolith. Upgrades would be another challenge.
I don't know enough about K8S to know how naming would work to tie a bunch of different nodes into a single "service" with a common name.
I don't know how well scaling would work either, if that's a goal.
It will not work well.
Performance differences between TCP/IP and UNIX domain sockets are huge.
There is roughly a 60% difference in latency and a 9x difference in throughput on a bare-metal system. See https://github.com/rigtorp/ipc-bench for the test code.
On virtual machines in a datacenter using KVM I am reliably getting a roughly 2x slowdown in both throughput and latency.
That is a starting point. I would not even go into the technical details that require tight collaboration between the multiple DC components we have right now.
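If you want a rough feel for the gap on your own systems without building the benchmark above, even a crude Python ping-pong like the sketch below shows it. To be clear, this is not the ipc-bench code, just an illustration; the absolute numbers will vary a lot with kernel, CPU pinning and virtualization.

# Crude round-trip latency comparison between a UNIX domain socket and TCP on
# loopback. Illustration only; real measurements need proper methodology.
import socket
import threading
import time

N = 50_000            # round trips per measurement
PAYLOAD = b"x" * 64   # small message, so the test is latency-bound

def echo(conn):
    # Echo everything back until the peer closes the connection.
    while True:
        data = conn.recv(64)
        if not data:
            return
        conn.sendall(data)

def pingpong(conn):
    # Average round-trip time in microseconds (crude: recv is not looped).
    start = time.perf_counter()
    for _ in range(N):
        conn.sendall(PAYLOAD)
        conn.recv(64)
    return (time.perf_counter() - start) / N * 1e6

def bench_unix():
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    threading.Thread(target=echo, args=(b,), daemon=True).start()
    return pingpong(a)

def bench_tcp():
    srv = socket.create_server(("127.0.0.1", 0))
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        echo(conn)

    threading.Thread(target=serve, daemon=True).start()
    cli = socket.create_connection(("127.0.0.1", port))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return pingpong(cli)

if __name__ == "__main__":
    print(f"UNIX socket : {bench_unix():.1f} us per round trip")
    print(f"TCP loopback: {bench_tcp():.1f} us per round trip")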
On 17.03.23 15:32, Alexander Bokovoy wrote:
On Fri, 17 Mar 2023, Rob Crittenden via FreeIPA-users wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 14.05.21 11:26, Ronald Wimmer via FreeIPA-users wrote:
Hi,
are there any plans (or maybe ongoing work already) to let FreeIPA run in a K8s environment?
What about tearing all the tightly coupled parts (389DS, DNS, PKI, HTTPD, KDC, Samba, ...) apart, let them run in K8s and do the coupling there?
Could that work if somebody took the effort (with support from the IPA devs I would be willing to) or are there real showstoppers preventing such an adventure?
It could require a re-architecture of IPA. Some services rely on ldapi bind to connect to 389. You'd need to switch from that socket to a TCP socket and pass the requisite bind credentials (DM). Services rely on files in various places which if done systematically might not be too bad, but might require creative bind mounting and/or duplicating files. Installing it might require a pretty massive rewrite as it assumes a monolith. Upgrades would be another challenge.
I don't know enough about K8S to know how naming would work to tie a bunch of different nodes into a single "service" with a common name.
I don't know how well scaling would work either, if that's a goal.
It will not work well.
Performance differences between TCP/IP and UNIX domain sockets are huge.
There is roughly 60% of latency difference. There is 9x throughput difference on a bare metal system. See https://github.com/rigtorp/ipc-bench for the test code.
On virtual machines in a datacenter using KVM I am reliably getting roughly 2x slowdown in both throughput and latency.
That is a starting point. I would not even go into technical details requiring a tight collaboration between multiple DC components we have right now.
OK, I got it. So maybe deploying several containerized FreeIPA server instances would work. I'll give that a try.
As always, thanks a lot for your input!
Cheers, Ronald
On Fri, Mar 17, 2023 at 04:32:44PM +0200, Alexander Bokovoy via FreeIPA-users wrote:
On Fri, 17 Mar 2023, Rob Crittenden via FreeIPA-users wrote:
Ronald Wimmer via FreeIPA-users wrote:
On 14.05.21 11:26, Ronald Wimmer via FreeIPA-users wrote:
Hi,
are there any plans (or maybe ongoing work already) to let FreeIPA run in a K8s environment?
What about tearing all the tightly coupled parts (389DS, DNS, PKI, HTTPD, KDC, Samba, ...) apart, let them run in K8s and do the coupling there?
Could that work if somebody took the effort (with support from the IPA devs I would be willing to) or are there real showstoppers preventing such an adventure?
It could require a re-architecture of IPA. Some services rely on ldapi bind to connect to 389. You'd need to switch from that socket to a TCP socket and pass the requisite bind credentials (DM). Services rely on files in various places which if done systematically might not be too bad, but might require creative bind mounting and/or duplicating files. Installing it might require a pretty massive rewrite as it assumes a monolith. Upgrades would be another challenge.
I don't know enough about K8S to know how naming would work to tie a bunch of different nodes into a single "service" with a common name.
I don't know how well scaling would work either, if that's a goal.
It will not work well.
Performance differences between TCP/IP and UNIX domain sockets are huge.
A small clarification: in k8s and OpenShift you can use Unix sockets to communicate between different containers in the same *Pod*. So you can avoid the TCP/IP latency in that way.
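As a purely illustrative sketch of that, two containers in one Pod can share an emptyDir volume mounted where the ldapi socket would live. The images and the mount path below are placeholders, not actual IPA components:

from kubernetes import client, config

config.load_kube_config()

# Both containers mount the same emptyDir, so the first can create a UNIX
# (ldapi) socket under /run/slapd and the second can connect to it without
# any TCP hop. Images and paths are placeholders.
socket_volume = client.V1Volume(name="ldapi-socket",
                                empty_dir=client.V1EmptyDirVolumeSource())
socket_mount = client.V1VolumeMount(name="ldapi-socket", mount_path="/run/slapd")

pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="ipa-split-demo"),
    spec=client.V1PodSpec(
        volumes=[socket_volume],
        containers=[
            client.V1Container(name="dirsrv",
                               image="example/dirsrv:latest",
                               volume_mounts=[socket_mount]),
            client.V1Container(name="ipa-framework",
                               image="example/ipa-framework:latest",
                               volume_mounts=[socket_mount]),
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ipa", body=pod)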
There is roughly 60% of latency difference. There is 9x throughput difference on a bare metal system. See https://github.com/rigtorp/ipc-bench for the test code.
On virtual machines in a datacenter using KVM I am reliably getting roughly 2x slowdown in both throughput and latency.
That is a starting point. I would not even go into technical details requiring a tight collaboration between multiple DC components we have right now.
--
Alexander Bokovoy
Sr. Principal Software Engineer
Security / Identity Management Engineering
Red Hat Limited, Finland
On Fri, Mar 17, 2023 at 11:37:54AM +0100, Ronald Wimmer via FreeIPA-users wrote:
On 14.05.21 11:26, Ronald Wimmer via FreeIPA-users wrote:
Hi,
are there any plans (or maybe ongoing work already) to let FreeIPA run in a K8s environment?
What about tearing all the tightly coupled parts (389DS, DNS, PKI, HTTPD, KDC, Samba, ...) apart, let them run in K8s and do the coupling there?
Could that work if somebody took the effort (with support from the IPA devs I would be willing to) or are there real showstoppers preventing such an adventure?
We had an effort to get IPA running in OpenShift (with accompanying operator), but we shelved it. One of the main goals was that the solution should support multi-tenancy (e.g. to operate it as a managed service for different customers). The lack of support for user namespaces in k8s/OpenShift became a show-stopper to the "lift and shift" approach (run whole IPA system as a single container). The approach of breaking IPA up and running all the bits in separate containers was technically viable, but it was considered too costly both in up-front engineering effort and ongoing maintenance (as we would essentially be maintaining two distinct architectures of FreeIPA for a long time).
We redeployed the team working on that to another project. In the future perhaps it will be revisited, but it is not in the current plans. If you are keen to contribute, we can discuss further and share all that we have learned. But regardless of the approach, it would be a huge effort.
Cheers, Fraser