Long-time freeipa users have faced a certain 'fragility' that freeipa has inherited, mostly as a result of freeipa being the 'band director' over a number of distinct subsystems maintained by various groups across the world.
This or that 'little upgrade' in a seemingly small sub-part of freeipa 'suddenly breaks' major things, like not being able to install a replica; there's quite a list, and it has been going on for a few years at least, to my knowledge. Usually one expects newer features to have bugs, but not bugs that disrupt core prior functionality.
I wonder whether a solution would be for freeipa to take a look at how a similar-feeling multi-host, multi-subsystem architecture appears to have solved this puzzle: ceph's containers-plus-orchestrator concept, cephadm / 'ceph orch'.
For some time ceph, like freeipa, relied on packages and 'dependency hell management', running as native packages across hosts connected on an internal network. Then came a very effective shift: they treated the contents of a container as one thing owned entirely by, and released by, ceph, and tested it as one thing, with each container housing known-good versions of dependent and third-party modules as well as their own code, to the point of providing their own tool to download images and manage upgrades in the proper sequence across the hosts providing this or that functionality.
You might imagine a freeipa orchestrator upgrading masters and replicas in the correct order, with freeipa devs knowing for certain that no 'dnf upgrade' on the host will disrupt the setup that passed QA in the container, and that nothing will 'corrupt a database' owing to a sync with content one version understood but another did not, and so on.
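Very roughly, and purely as a sketch of the idea (freeipa ships no such tool today; the names, hosts, image and service unit below are invented, the ordering rule is only an example, and ceph's real equivalent is 'ceph orch upgrade start --image ...'), it could be as simple as:

    #!/usr/bin/env python3
    # Sketch only: freeipa has no such orchestrator; Server, plan_upgrade,
    # the image name and the "freeipa-server-container" unit are all made up.
    import subprocess
    from dataclasses import dataclass

    @dataclass
    class Server:
        hostname: str
        is_renewal_master: bool = False  # example rule: renewal master first

    def plan_upgrade(servers):
        # One server at a time, renewal master first, so every step moves
        # between container images that were QA'd together as a unit.
        return sorted(servers, key=lambda s: not s.is_renewal_master)

    def upgrade(server, image):
        # Pull the pinned, pre-tested image on the target host and restart
        # the (hypothetical) container unit; the image tag, not the host's
        # package set, decides what actually runs.
        subprocess.run(["ssh", server.hostname, "podman", "pull", image],
                       check=True)
        subprocess.run(["ssh", server.hostname, "systemctl", "restart",
                        "freeipa-server-container"], check=True)

    if __name__ == "__main__":
        topology = [Server("replica1.example.test"),
                    Server("ipa.example.test", is_renewal_master=True)]
        for s in plan_upgrade(topology):
            upgrade(s, "registry.example.test/freeipa-server:tested-together")

The point isn't the code; it's that the image tag, not whatever dnf last did on the host, decides what runs.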
Over these many months, while freeipa has struggled to provide consistent service and value, ceph has been working nearly flawlessly across many upgrade cycles. I think that is because ceph controls the versions of the subsystems in the containers, which improves QA and dramatically limits the surprise breakages that lead to the feeling of 'always catching up' under time pressure from down services, from this or that distro's 100 package maintainers deciding when or whether to include this or that patch, when to publish which new version, which updates are 'security updates', which are 'bug fixes', and so on. If the freeipa server came in a container that was tested and QA'd as a container and deployed as a container, perhaps the 'fragility factor' would improve tenfold.
My $0.02
On Wed, Jun 02, 2021 at 01:55:36PM -0500, Harry G. Coin via FreeIPA-users wrote:
Hi Harry,
There is a current effort (still in early stages) to implement something like what you describe: FreeIPA in OpenShift managed via an OpenShift Operator.
Can't say much else because we still have a lot of technical and policy challenges to solve. Certainly can't give a timeframe. But rest assured we are aware of the potential benefits of container orchestration. We are also aware that it is not a panacea and that the engineering costs are orders of magnitude greater than 2c ^_^
Cheers, Fraser
On Thu, 03 Jun 2021, Fraser Tweedale via FreeIPA-users wrote:
I would also add that such a container is unlikely to be anything better than what we already have in the freeipa/freeipa-container repository, because it is built and tested against a moving target, the distribution, anyway. Sure, you are making a fixed image for a certain period of time, but any CVE fix in any of the components that are part of that image forces you to rebuild against that moving target that a distribution is.
On 6/3/21 1:56 AM, Alexander Bokovoy wrote:
Hi Alexander
The most important aspect of the value is the added, pre-tested QA process of orchestrating upgrades from known prior content to known later content. All the containers need to worry about is the underlying kernel and the known versions of the subsystems previously deployed. It's the difference between moving sort-of-similar piles of sand and moving mostly similar piles of rocks: you can count rocks and manage each one, but not the 'sand' of who-knows-which composite backported whatnot. For example, who at the freeipa level of concern is going to keep track of which version of a smart-card reader plug-in this or that distro has backported, and whether it will crash bind9/named or silently stop dnssec from updating and adding security when it is enabled in freeipa?
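For what it's worth, the 'counting the rocks' part is nearly free once everything ships in one image; for example (a sketch again, and the image name is made up):

    import subprocess

    # Dump the exact package set baked into a pinned image, so QA knows
    # precisely which versions were tested together.
    out = subprocess.run(
        ["podman", "run", "--rm",
         "registry.example.test/freeipa-server:pinned", "rpm", "-qa"],
        check=True, capture_output=True, text=True)
    print("\n".join(sorted(out.stdout.splitlines())))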
On Thu, 03 Jun 2021, Harry G. Coin wrote:
What you described is what RHEL QE does already.
As for your specific issue, anything needs to be done upstream first. If your changes are not yet upstream for that library, they most likely will not be in a distribution either. QE does not handle backports at all; distribution developers do. In the case of RHEL or Fedora, that is my and my colleagues' task, but we need help here: when you get your changes upstream, please point us to them.