Hi All,
I've been re-roaming through the SSG and this is probably the first part of a many-part thread regarding different checks.
TL;DR: The potential risk caused by enabling 'repo_gpgcheck' outweighs any potential benefit if TLS is enabled.
In my opinion, the following check should *only* be enabled if all of your repositories are internally managed: xccdf_org.ssgproject.content_rule_ensure_gpgcheck_repo_metadata.
The reason for this is that YUM presently does not (to my knowledge) have any way to differentiate between package signing GPG keys and repo signing GPG keys.
This means that if, for instance, I host my packages via some shared Nexus, then I have to add the Nexus GPG key to my trust list for the repo.
I fundamentally do *not* want to do this! I shouldn't be allowing my Nexus maintainer to potentially install software on my system without my explicit knowledge.
You should use TLS, and the repo should present a trusted certificate; that should be sufficient for the metadata until RPM/YUM can tell the difference between these two kinds of keys.
Please let me know if I've missed something, but I don't remember seeing options to split out the two sets of certs.
Additionally, this is marked as 'high' severity and that seems to be massive overkill considering that 1) the packages are still signed and validated and 2) TLS is required.
I've run into the same problem. I go with setting the global yum.conf as DISA says and then overriding the setting in a repos.d file for the repos that really need repo_gpgcheck to be off.
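For illustration, a minimal sketch of that layout (the repo id, URL, and key path below are made up; your real files will differ):

    # /etc/yum.conf (global setting, per the DISA/SSG rule)
    [main]
    gpgcheck=1
    repo_gpgcheck=1

    # /etc/yum.repos.d/vendor-example.repo (per-repo override)
    [vendor-example]
    name=Example third-party repo without signed metadata
    baseurl=https://repo.example.com/el7/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-vendor-example
    repo_gpgcheck=0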
I think this was a back-ported requirement from DISA, not something originating from SSG.
On Monday, November 13, 2017 12:46:53 PM EST Trevor Vaughan wrote:
Hi All,
I've been re-roaming through the SSG and this is probably the first of a many part thread regarding different checks.
TL;DR; The potential risk caused by enabling 'repo_gpgcheck' outweighs any potential benefit if TLS is enabled.
I'm not sure that I agree with this assessment. I am going to look into this in more detail over the next couple of days because the current rev of OSPP calls out for auditing of software update integrity checks. It calls out for integrity checks and for them to be enabled. It calls out for the vendor to supply SCAP content for the evaluated configuration. So that means we shouldn't be turning it off. If there is anything amiss, I'd be inclined to file a BZ to get it fixed. Using TLS defends against a different attack vector than the one repo_gpgcheck and gpgcheck were designed for.
So...I'll get back to everyone with what I find out.
-Steve
On 11/13/2017 06:59 PM, Steve Grubb wrote:
...the current rev of OSPP calls out for auditing of software update integrity checks. It calls out for integrity checks and for them to be enabled. It calls out for the vendor to supply SCAP content for the evaluated configuration. So that means we shouldn't be turning it off.
What are we gaining by enabling repo_gpgcheck in addition to gpgcheck?
On Tuesday, November 14, 2017 9:37:18 AM EST Arnold, Paul C CTR USARMY PEO STRI (US) wrote:
On 11/13/2017 06:59 PM, Steve Grubb wrote:
...the current rev of OSPP calls out for auditing of software update integrity checks. It calls out for integrity checks and for them to be enabled. It calls out for the vendor to supply SCAP content for the evaluated configuration. So that means we shouldn't be turning it off.
What are we gaining by enabling repo_gpgcheck in addition to gpgcheck?
It's for checking that the metadata hasn't been tampered with since signing. For example, suppose you need some packages out of EPEL. EPEL has a distributed mirror network to which volunteers contribute bandwidth for everyone's benefit. However, what if one of those mirrors became compromised and an attacker removed the entry for a critical package update for a network-facing daemon? The intent would be to keep people from patching, allowing further compromises.
This setting verifies the metadata signature to ensure the metadata has not been tampered with. TLS protects against modifying an in-transit package or metadata, but it doesn't tell you that your package resolution is using trustworthy data.
-Steve
Steve,
I get the bigger value of the GPG validation check but I think that the current implementation is severely flawed.
If there were a separate setting for GPG keys used for repo validation, such as repo_gpgkey, I would be more than happy to use it and flip it on. However, currently, these are the two potential threat avenues:
1. Accept GPG keys for repos
   - Allows the *repository maintainer* (Nexus, PackageCloud, random directory on a webserver) to transparently add or replace arbitrary vendor packages with those of their choosing, targeted at thousands of systems, without the downstream users' knowledge.
   - Mitigation: Manually validate all package signatures on your system after installing them. (Horrible.)
   - Mitigation: Require internal YUM mirrors of all upstream packages to a trusted repository via the SSG. (I'm kind of OK with this one, but how do you check it? Also, the fundamental issue still holds unless you're re-signing the repodata and, if this is automated, a system compromise is just as bad, if not worse, than the TLS case since arbitrary things could be signed.)
2. Trust TLS
   - In the event of a repository system compromise that bypasses SELinux restrictions and DAC permissions (kernel-level exploit?), someone can remove a flawed package and regenerate the metadata.
   - Mitigation: Run the vendor OVAL content to check for insecure packages (which we're all doing anyway, right?!). A sketch of that check follows this list.
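For reference, a rough sketch of that OVAL check (the filenames below are placeholders; the vendor OVAL definitions have to be downloaded separately):

    # Evaluate the vendor OVAL definitions against the installed packages
    oscap oval eval --results /tmp/oval-results.xml rhel-7-rhsa-oval.xml

    # Optionally render the results as an HTML report
    oscap oval generate report /tmp/oval-results.xml > /tmp/oval-report.html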
Unless I'm missing something, I'm much more comfortable with the latter, as it is better for the user and easy to mitigate using currently mandated best practice.
Again, once something like repo_gpgkey exists and is fully integrated, I'd be more than comfortable with this.
Thanks,
Trevor
Hello Trevor,
On Tuesday, November 14, 2017 10:24:35 AM EST Trevor Vaughan wrote:
I get the bigger value of the GPG validation check but I think that the current implementation is severely flawed.
If there were a separate setting for GPG keys used for repo validation, such as repo_gpgkey, I would be more than happy to use it and flip it on.
The ability to modify the repo metadata is the same capability as creating the package in the first place. That is why it uses the same key. For example, someone having direct access to repo metadata can possibly modify dependency information to force a new but vulnerable package onto your system. This is the same as directly modifying the rpm to require a new dependency. So, how do you detect this threat without repo gpg signature checking?
(All packages resolved and downloaded still have to pass gpg key verification. So, it's not like they are forcing some random, unsigned package onto your system.)
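For example, a quick by-hand spot check (the package name here is just a stand-in):

    # Verify the signature/digests on a downloaded rpm
    rpm -K /tmp/httpd-2.4.6-90.el7.x86_64.rpm

    # Show which key signed an already-installed package
    rpm -qi httpd | grep Signature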
The reason that yum developed several of the defenses to protect the integrity of the system came from this study way back in 2008:
https://www2.cs.arizona.edu/stork/packagemanagersecurity/otherattacks.html#extradep
However, currently, these are the two potential threat avenues:
1. Accept GPG keys for repos
   - Allows the *repository maintainer* (Nexus, PackageCloud, random directory on a webserver) to transparently add or replace arbitrary vendor packages with those of their choosing, targeted at thousands of systems, without the downstream users' knowledge.
This is also true in the trust-TLS option. They can add dependencies which install new software. I'm not sure yet about replacing an arbitrary vendor-supplied package. You can fix this with a yum plugin; see my last comment below.
   - Mitigation: Manually validate all package signatures on your system after installing them. (Horrible.)
   - Mitigation: Require internal YUM mirrors of all upstream packages to a trusted repository via the SSG. (I'm kind of OK with this one, but how do you check it? Also, the fundamental issue still holds unless you're re-signing the repodata and, if this is automated, a system compromise is just as bad, if not worse, than the TLS case since arbitrary things could be signed.)
The way it starts out is by finding the time stamp of the latest signing. It then uses this to check the time stamps of the mirrors. If those pass, it proceeds to download and check the gpg signature. So, the way that it works _is_ trustworthy, with no need for mitigation except to enable repo_gpgcheck.
However, if you wanted to take this on yourself, then information can be found here:
https://blog.packagecloud.io/eng/2015/07/20/yum-repository-internals/
They show how to verify things by hand. This can of course all be scripted.
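For example, a minimal by-hand sketch (the mirror URL and key file are made up; a repo with signed metadata publishes a detached signature next to repomd.xml):

    # Fetch the metadata and its detached signature
    curl -O https://mirror.example.com/el7/repodata/repomd.xml
    curl -O https://mirror.example.com/el7/repodata/repomd.xml.asc

    # Import the repo's public key and verify the signature
    gpg --import RPM-GPG-KEY-example
    gpg --verify repomd.xml.asc repomd.xml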
2. Trust TLS - In the event of a repository system compromise, bypassing SELinux restrictions and DAC permissions (kernel-level exploit?), someone can remove a flawed package and regenerate the metadata.
But how do you even detect that the repo has been compromised to know it needs regenerating?
- Mitigation: Run the vendor OVAL for checking insecure packages (which we're all doing anyway, right?!)
It is true that every async RHSA errata gets an entry in the OVAL content. But not every reason to do an update/install is correlated to a security advisory. Perhaps a functionality upgrade now pulls in some new packages?
Unless I'm missing something, I'm much more comfortable with the latter, as it is better for the user and easy to mitigate using currently mandated best practice.
Again, once something like repo_gpgkey exists and is fully integrated, I'd be more than comfortable with this.
The only issue I see is how yum handles collisions on packages between repos. I think the answer may be to use yum-plugin-priorities. Using that, you can assign the repo you trust least a higher number. That would make it use the Red Hat repos first, and then work down to the one you are suspicious of.
Description: This plugin allows repositories to have different priorities. Packages in a repository with a lower priority can't be overridden by packages from a repository with a higher priority, even if the repo has a later version.
An example of its use is here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Configuring_Software_Repositories.html
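A minimal sketch of that setup (the repo ids are illustrative; with this plugin a *lower* number means higher priority):

    # yum install yum-plugin-priorities
    # /etc/yum/pluginconf.d/priorities.conf
    [main]
    enabled=1

    # In the Red Hat repo definition:
    [rhel-7-server-rpms]
    priority=1

    # In the less-trusted third-party repo definition:
    [thirdparty-example]
    priority=99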
-Steve
Hi Steve,
Your statement "The ability to modify the repo metadata is the same capability as creating the package in the first place." is actually the core of the issue that I have.
I push up packages with a *package signing key* that should not be the same as the *repo signing key*. They should be different keys since one is the repo provider's domain and one is the package provider's domain.
Again, I'm not saying that it shouldn't be done, I'm just saying that having them be the same key is just too dangerous since the repo key will be trusted to install packages and should not be.
Thanks,
Trevor
On Tuesday, November 14, 2017 4:01:50 PM EST Trevor Vaughan wrote:
Hi Steve,
Your statement "The ability to modify the repo metadata is the same capability as creating the package in the first place." is actually the core of the issue that I have.
And there simply is no getting around it. The metadata is a proxy for the rpm information that allows yum to decide what is in scope to download. Trusting TLS gives less assurance that the package resolution occurred using trustworthy data.
I push up packages with a *package signing key* that should not be the same as the *repo signing key*.
Why? You can do more harm with the rpm info than the repo metadata.
They should be different keys since one is the repo provider's domain and one is the package provider's domain.
They should be considered one and the same since the repo is a proxy for the rpm info. If you let a repo provider sign metadata, then the package provider has no way to let the end user know they received the right dependencies based on the package signed by the package provider. The repo provider in the yum model is untrusted. It's like the two generals problem: a message has to get from one general to the other, but the message has to go through hostile territory.
Maybe there is a mismatch in the trust model where you want the repo provider to be ultimately trusted and the packager a second class citizen? If that is the case, then you can sign the repo metadata with your own key, but you'll have to distribute the public portion to end users. But I don't know what's been gained since the package developer has to assume nothing nefarious is happening in the repo provider's scripting. And the end user now has to worry that the repo provider won't try adding some packages to the repo with his/her key.
But even so, using yum-plugin-priorities you can be certain this 3rd party repo is last so that they cannot provide packages shipped by other repos.
Again, I'm not saying that it shouldn't be done, I'm just saying that having them be the same key is just too dangerous since the repo key will be trusted to install packages and should not be.
Hmm. The repo creation and package creation should be done in a secure system, possibly with the key in an HSM so that it's not possible to access it. At the time a package is merged with a repo, it's signed and updated metadata is generated. As the developer, all is good. You are certain of your package and the metadata. There is only one key and you can reason about it getting to the end user correctly. This gets pushed out to mirrors that stage your content. The idea is to ensure every possible angle is covered to directly update the end user.
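As a rough sketch of that merge step (package name, paths, and tooling are assumptions, not a prescribed workflow), the package and the regenerated repodata end up signed by the same key:

    # Sign the package (rpmsign reads %_gpg_name from ~/.rpmmacros)
    rpmsign --addsign myapp-1.0-1.el7.x86_64.rpm

    # Regenerate the repo metadata, then sign it with the same key
    createrepo /srv/repo/el7/
    gpg --detach-sign --armor /srv/repo/el7/repodata/repomd.xml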
There should be no key lying around for a rogue developer to use. The build system should be tightly controlled. There should be git logs collected and all sources tightly controlled so that any commit requires a Kerberos ticket, GPG key, two-factor auth, whatever makes you comfortable. I think this is the core of the issue.
After this, it's all transmission to end user with no injections possible.
-Steve
Ok! Now we're getting somewhere good, and thanks for continuing the discussion, I'm finding it to be very valuable.
On Tue, Nov 14, 2017 at 4:52 PM, Steve Grubb sgrubb@redhat.com wrote:
On Tuesday, November 14, 2017 4:01:50 PM EST Trevor Vaughan wrote:
Hi Steve,
Your statement "The ability to modify the repo metadata is the same capability as creating the package in the first place." is actually the
core
of the issue that I have.
And there simply is no getting around it. The metadata is a proxy for the rpm information that allows yum to decide what is in scope to download. Trusting TLS gives less assurance that the package resolution occurred using trustworthy data.
But, does that matter if the packages themselves will not install if not signed by a trustworthy key? This is what I'm getting at.
I push up packages with a *package signing key* that should not be the same as the *repo signing key*.
Why? You can do more harm with the rpm info than the repo metadata.
Yes! I want *less* trust on the repo metadata. Heck, I don't want to have to trust the repo metadata at all because I've restricted the GPG keys for my packages to only be the specific vendor keys that I want. Not being able to split the trust between these two doesn't let me isolate my trust effectively.
They should be different keys since one is the repo provider's domain and one is the package provider's domain.
They should be considered one in the same since the repo is a proxy for the rpm info. If you let a repo provider sign metadata, then the package provider has no way to let the end user know they received the right dependencies based on the package signed by the package provider. The repo provider in the yum model is untrusted. Its like the two generals problem. A message has to get from one general to the other but the message has to go through hostile territory.
Maybe there is a mismatch in the trust model where you want the repo provider to be ultimately trusted and the packager a second class citizen? If that is the case, then you can sign the repo metadata with your own key, but you'll have to distribute the public portion to end users. But I don't know what's been gained since the package developer has to assume nothing nefarious is happening in the repo provider's scripting. And the end user now has to worry that the repo provider won't try adding some packages to the repo with his/her key.
This is the issue, but in reverse! I trust the *packager* because they are the ones with ultimate power on my system. I may, or may not, fully trust the repo provider. This means that I can keep my package GPG key list as restricted as plausible and I want to only trust the repo provider for the repo metadata.
But even so, using yum-plugin-priorities you can be certain this 3rd party repo is last so that they cannot provide packages shipped by other repos.
Or, with the suggested feature, I can validate that the repo is OK, and still validate that the packages come from a vendor that I trust to install on my system.
Again, I'm not saying that it shouldn't be done, I'm just saying that having them be the same key is just too dangerous since the repo key will be trusted to install packages and should not be.
Hmm. The repo creation and package creation should be done in a secure system. Possibly with the key in a HSM so that its not possible to access it. At the time a package is merged with a repo, its signed and updated metadata generated. As the developer, all is good. You are certain of your package and the metadata. There is only one key and you can reason about it getting to the end user correctly. This gets pushed out to mirrors that stage your content. The idea is to ensure every possible angle is covered to directly update the end user.
There should be no key laying around for a rogue developer to use. The build system should be tightly controlled. There should be git logs collected and all sources tightly controlled so that any commit requires a kerberos ticket, gpg key, two factor auth, whatever makes you comfortable. I think this is the core of the issue.
My main issue is that I trust packagers and possibly not repo providers. For a concrete example, let's assume that Varnish has their own package signing keys. Now, let's go download Varnish from https://packagecloud.io/varnishcache/varnish5/install#manual-rpm. If you follow their instructions, you're allowing PackageCloud to be *fully trusted* as an installation provider. I would rather just trust the Varnish keys and not trust PackageCloud to be able to install random other RPMs. Does this make more sense?
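To make that concrete, a rough sketch of the kind of repo file I'd rather end up with (the baseurl and key path are illustrative, not PackageCloud's actual instructions):

    # /etc/yum.repos.d/varnishcache_varnish5.repo (sketch)
    [varnishcache_varnish5]
    name=varnishcache_varnish5
    baseurl=https://packagecloud.io/varnishcache/varnish5/el/7/$basearch
    enabled=1
    sslverify=1
    gpgcheck=1
    # Trust only the (assumed) Varnish package signing key for installs
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-varnish
    # Don't grant the repo host's key install-level trust
    repo_gpgcheck=0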
I completely agree that these steps should be followed, and it seems like that would be another great security guide that should be written and tied to this requirement. This option should only be enabled for repositories that follow the procedure that you have described *and no others*. Which means that it should not be enabled by default, and vendors should provide a statement regarding their package and repo signing practices in some way that the remote system could process in an automated fashion for validation.
After this, it's all transmission to end user with no injections possible.
From: Brent Kimberley Sent: Thursday, November 16, 2017 4:52 PM To: SCAP Security Guide scap-security-guide@lists.fedorahosted.org; Steve Grubb sgrubb@redhat.com Subject: RE: [Non-DoD Source] Re: Issue with Repo GPG Checking
Where is the controlled repository schema?
?xml version="1.0"? !DOCTYPE lolz [ !ENTITY lol "lol" !ELEMENT lolz (#PCDATA) !ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;" !ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;" !ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;" !ENTITY lol4 "&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;" !ENTITY lol5 "&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;" !ENTITY lol6 "&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;" !ENTITY lol7 "&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;" !ENTITY lol8 "&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;" !ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;" ] lolz&lol9;/lolz
To elaborate: other than the schema and the repo certificate, what other inline input validation schemes do you recommend?
From: Brent Kimberley Sent: Thursday, November 16, 2017 4:52 PM To: SCAP Security Guide scap-security-guide@lists.fedorahosted.org; Steve Grubb sgrubb@redhat.com Subject: RE: [Non-DoD Source] Re: Issue with Repo GPG Checking
Where is the controlled repository schema?
?xml version="1.0"? !DOCTYPE lolz [ !ENTITY lol "lol" !ELEMENT lolz (#PCDATA) !ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;" !ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;" !ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;" !ENTITY lol4 "&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;" !ENTITY lol5 "&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;" !ENTITY lol6 "&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;" !ENTITY lol7 "&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;" !ENTITY lol8 "&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;" !ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;" ] lolz&lol9;/lolz
From: Trevor Vaughan [mailto:tvaughan@onyxpoint.com] Sent: Tuesday, November 14, 2017 5:16 PM To: Steve Grubb <sgrubb@redhat.commailto:sgrubb@redhat.com> Cc: SCAP Security Guide <scap-security-guide@lists.fedorahosted.orgmailto:scap-security-guide@lists.fedorahosted.org> Subject: Re: [Non-DoD Source] Re: Issue with Repo GPG Checking
Ok! Now we're getting somewhere good, and thanks for continuing the discussion, I'm finding it to be very valuable.
On Tue, Nov 14, 2017 at 4:52 PM, Steve Grubb <sgrubb@redhat.commailto:sgrubb@redhat.com> wrote: On Tuesday, November 14, 2017 4:01:50 PM EST Trevor Vaughan wrote:
Hi Steve,
Your statement "The ability to modify the repo metadata is the same capability as creating the package in the first place." is actually the core of the issue that I have.
And there simply is no getting around it. The metadata is a proxy for the rpm information that allows yum to decide what is in scope to download. Trusting TLS gives less assurance that the package resolution occurred using trustworthy data.
But, does that matter if the packages themselves will not install if not signed by a trustworthy key? This is what I'm getting at.
I push up packages with a *package signing key* that should not be the same as the *repo signing key*.
Why? You can do more harm with the rpm info than the repo metadata.
Yes! I want *less* trust on the repo metadata. Heck, I don't want to have to trust the repo metadata at all because I've restricted the GPG keys for my packages to only be the specific vendor keys that I want. Not being able to split the trust between these two doesn't let me isolate my trust effectively.
They should be different keys since one is the repo provider's domain and one is the package provider's domain.
They should be considered one in the same since the repo is a proxy for the rpm info. If you let a repo provider sign metadata, then the package provider has no way to let the end user know they received the right dependencies based on the package signed by the package provider. The repo provider in the yum model is untrusted. Its like the two generals problem. A message has to get from one general to the other but the message has to go through hostile territory.
Maybe there is a mismatch in the trust model where you want the repo provider to be ultimately trusted and the packager a second class citizen? If that is the case, then you can sign the repo metadata with your own key, but you'll have to distribute the public portion to end users. But I don't know what's been gained since the package developer has to assume nothing nefarious is happening in the repo provider's scripting. And the end user now has to worry that the repo provider won't try adding some packages to the repo with his/her key.
This is the issue, but in reverse! I trust the *packager* because they are the ones with ultimate power on my system. I may, or may not, fully trust the repo provider. This means that I can keep my package GPG key list as restricted as plausible and I want to only trust the repo provider for the repo metadata.
But even so, using yum-plugin-priorities you can be certain this 3rd party repo is last so that they cannot provide packages shipped by other repos.
Or, with the suggested feature, I can validate that the repo is OK, and still validate that the packages come from a vendor that I trust to install on my system.
Again, I'm not saying that it shouldn't be done, I'm just saying that having them be the same key is just too dangerous since the repo key will be trusted to install packages and should not be.
Hmm. The repo creation and package creation should be done in a secure system, possibly with the key in an HSM so that it's not possible to access it. At the time a package is merged with a repo, it is signed and updated metadata is generated. As the developer, all is good. You are certain of your package and the metadata. There is only one key and you can reason about it getting to the end user correctly. This gets pushed out to mirrors that stage your content. The idea is to ensure every possible angle is covered to directly update the end user.
There should be no key lying around for a rogue developer to use. The build system should be tightly controlled. There should be git logs collected and all sources tightly controlled so that any commit requires a Kerberos ticket, GPG key, two-factor auth, whatever makes you comfortable. I think this is the core of the issue.
My main issue is that I trust packagers and possibly not repo providers. For a concrete example, let's assume that Varnish has their own package signing keys. Now, let's go download Varnish from https://packagecloud.io/varnishcache/varnish5/install#manual-rpm. If you follow their instructions, you're allowing PackageCloud to be *fully trusted* as an installation provider. I would rather just trust the Varnish keys and not trust PackageCloud to be able to install random other RPMs. Does this make more sense?

I completely agree that these steps should be followed, and it seems like that would be another great security guide that should be written and tied to this requirement. This option should only be enabled for repositories that follow the procedure that you have described *and no others*. Which means that it should not be enabled by default, and vendors should provide a statement regarding their package and repo signing practices in some way that the remote system could process in an automated fashion for validation.
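As an aside, it is easy to see exactly which signing keys a host currently trusts for package installation, since they all end up in the rpm database:

    # List every GPG public key imported into the rpm database
    rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'

If a repo host's key has been imported, it shows up right alongside the distribution and vendor keys, which is exactly the over-trust I am worried about.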
After this, it's all transmission to end user with no injections possible.
-Steve
On Tue, Nov 14, 2017 at 2:48 PM, Steve Grubb <sgrubb@redhat.com> wrote:
Hello Trevor,
On Tuesday, November 14, 2017 10:24:35 AM EST Trevor Vaughan wrote:
I get the bigger value of the GPG validation check but I think that the current implementation is severely flawed.
If there were a separate setting for GPG keys used for repo validation, such as repo_gpgkey, I would be more than happy to use it and flip it on.
The ability to modify the repo metadata is the same capability as creating the package in the first place. That is why it uses the same key. For example, someone having direct access to repo metadata can possibly modify dependency information to force a new but vulnerable package onto your system. This is the same as directly modifying the rpm to require a new dependency. So, how do you detect this threat without repo gpg signature checking?
(All packages resolved and downloaded still have to pass GPG key verification. So, it's not like they are forcing some random, unsigned package onto your system.)
The reason that yum developed several of the defenses to protect the integrity of the system came from this study way back in 2008:
https://www2.cs.arizona.edu/stork/packagemanagersecurity/otherattacks.html#extradep
However, currently, these are the two potential threat avenues:
1. Accept GPG Keys for Repos
   - Allows the *repository maintainer* (Nexus, PackageCloud, a random directory on a webserver) to transparently add or replace arbitrary vendor packages with those of their choosing, targeted at thousands of systems, without the downstream user's knowledge
This is also true in the trust-TLS option. They can add dependencies which install new software. I'm not sure about replacing an arbitrary vendor-supplied package yet. You can fix this with a yum plugin. See my last comment below.
   - Mitigation: Manually validate all package signatures on your system after installing them (Horrible)
   - Mitigation: Require internal YUM mirrors of all upstream packages to a trusted repository via the SSG (I'm kind of OK with this one, but how do you check it? Also, the fundamental issue still holds unless you're re-signing the repodata and, if this is automated, a system compromise is just as bad, if not worse, than the TLS case since arbitrary things could be signed)
The way it starts out is by finding a timestamp of the latest signing. It then uses this to check the timestamp of mirrors. If they pass, then it proceeds to download and checks the GPG signature. So, the way that it works _is_ trustworthy, with no need of mitigation except to enable repo_gpgcheck.
However, if you wanted to take this on yourself, then information can be found here:
https://blog.packagecloud.io/eng/2015/07/20/yum-repository-internals/
They show how to verify things by hand. This can of course all be scripted.
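For anyone who wants to try it, the hand check boils down to verifying the detached signature on repomd.xml. The URLs and key file below are placeholders:

    # Fetch the repo metadata and its detached signature (paths illustrative)
    curl -O https://repo.example.com/el7/repodata/repomd.xml
    curl -O https://repo.example.com/el7/repodata/repomd.xml.asc

    # Import the repo's signing key into a scratch keyring and verify
    gpg --no-default-keyring --keyring ./repo-verify.gpg --import RPM-GPG-KEY-example
    gpg --no-default-keyring --keyring ./repo-verify.gpg --verify repomd.xml.asc repomd.xml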
2. Trust TLS
   - In the event of a repository system compromise, bypassing SELinux restrictions and DAC permissions (kernel-level exploit?), someone can remove a flawed package and regenerate the metadata
But how do you even detect the repo has been compromised to know it needs regenerating?
   - Mitigation: Run the vendor OVAL for checking insecure packages (which we're all doing anyway, right?!)
It is true that every async RHSA errata gets an entry in the OVAL content. But not every reason to do an update/install is correlated to a security advisory. Perhaps a functionality upgrade now pulls in some new packages?
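That said, for anyone following along, running the vendor OVAL check against the installed package set is straightforward with OpenSCAP. The feed URL and file name below are illustrative and vary by release:

    # Fetch the vendor OVAL definitions and evaluate the installed packages against them
    curl -O https://www.redhat.com/security/data/oval/com.redhat.rhsa-RHEL7.xml
    oscap oval eval --results oval-results.xml --report oval-report.html com.redhat.rhsa-RHEL7.xml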
Unless I'm missing something, I know which one I'm much more comfortable with: the latter, as something that is better for the user and easy to mitigate using currently mandated best practice.
Again, once something like repo_gpgkey exists and is fully integrated, I'd be more than comfortable with this.
The only issue I see is how yum handles collisions on packages between repos. I think the answer may be to use yum-plugin-priorities. Using that, you can assign the repo you trust least a higher number. That would make it use the Red Hat repos first, and then go down to the one you are suspicious of.
Description: This plugin allows repositories to have different priorities. Packages in a repository with a lower priority can't be overridden by packages from a repository with a higher priority even if the repo has a later version.
An example of its use is here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Configuring_Software_Repositories.html
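For anyone who has not used it, wiring that up looks roughly like the following. Repo ids and priority numbers are just examples; a lower number means a higher priority:

    # Install the plugin and make sure it is enabled
    yum install yum-plugin-priorities
    cat /etc/yum/pluginconf.d/priorities.conf
    #   [main]
    #   enabled = 1

    # In /etc/yum.repos.d/*.repo, give the vendor repos precedence
    #   [rhel-7-server-rpms]
    #   priority=1
    #
    #   [example-third-party]
    #   priority=99   # trusted least, consulted last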
-Steve
On Tue, Nov 14, 2017 at 9:47 AM, Steve Grubb <sgrubb@redhat.com> wrote:
On Tuesday, November 14, 2017 9:37:18 AM EST Arnold, Paul C CTR USARMY PEO STRI (US) wrote:
On 11/13/2017 06:59 PM, Steve Grubb wrote:
...the current rev of OSPP calls out for auditing of software update integrity checks. It calls out for integrity checks and for them to be enabled. It calls out for the vendor to supply SCAP content for the evaluated configuration. So that means we shouldn't be turning it off.
What are we gaining by enabling repo_gpgcheck in addition to gpgcheck?
It's for checking that the metadata hasn't been tampered with since signing. For example, suppose you need some packages out of EPEL. EPEL has a distributed mirror list to which volunteers contribute bandwidth for everyone's benefit. However, what if their server became compromised and an attacker removed the entry for a critical package update for a network-facing daemon? The intent being to keep people from patching to allow more compromises. This setting would check the metadata to ensure that the signature verification shows the metadata is untampered with. TLS protects against modifying an in-transit package or metadata. But it doesn't tell you that your package resolution is using trustworthy data.
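Concretely, the two settings being discussed are just these lines, set globally in /etc/yum.conf or per repo in /etc/yum.repos.d/*.repo:

    [main]
    gpgcheck=1        # verify the GPG signature on every package before it is installed
    repo_gpgcheck=1   # additionally verify the signature on the repo's repomd.xml metadata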
-Steve
-- Trevor Vaughan Vice President, Onyx Point, Inc (410) 541-6699 x788
Hello Trevor,
On Tuesday, November 14, 2017 5:15:59 PM EST Trevor Vaughan wrote:
Ok! Now we're getting somewhere good, and thanks for continuing the discussion, I'm finding it to be very valuable.
The subject matter experts I am contacting off list seem to be swamped with some planning work for Fedora 28. It might be a couple days before I have an answer on this for you. But I will answer it.
-Steve
Seeing Trevor bring up interesting points like these, his would be an interesting company to work for. If nothing else, a new way of thinking.
Necromancing this thread!
Any updates on this Steve?
Thanks,
Trevor
On Tuesday, October 16, 2018 3:58:01 PM EDT Trevor Vaughan wrote:
Necromancing this thread!
Any updates on this Steve?
The answer I was given is like this:
"The keys for checking repo. metadata are only used for those repos. (so key for repo X can't verify metadata for repo. Y). There are also CA keys, so you can cycle keys etc. The keys for rpm checking are imported into the rpm DB and thus. global, but that's an rpm thing."
So, I don't think rpm/yum were intended to solve the security problem you outlined, because it's not how software distribution normally works. And if two repos have the same package, I think you will notice some kind of error/warning. Feel free to open some kind of request. I also think the dnf developers may have things a little better security-wise.
-Steve
Who should I open the request with?
I haven't really seen any differences in DNF from that point of view in Fedora yet.
Thanks,
Trevor
I'm also very confused on this. Wasn't this part of the Red Hat recommended security settings?
As far as I can tell, DNF does nothing different for repo metadata.
Andrew
On Friday, October 19, 2018 4:32:08 PM EDT Andrew Gilmore wrote:
I'm also very confused on this. Wasn't this part of the Red Hat recommended security settings?
The issue Trevor is talking about is a very unusual situation. The recommended setting is fine with respect to normal use.
Upstream says that a repo key is assigned to a specific repo. A metadata key for a shady repo cannot be used to verify metadata for an official Red Hat repo.
-Steve
The case has been open for nearly two years. I just got another response from Red Hat on Friday. See also: https://access.redhat.com/solutions/2850911
https://access.redhat.com/support/cases/#/case/01752320
Case Title: Repo metadata not being published (repo_gpgcheck fails)
Case Number: 01752320
Case Open Date: 2016-12-05 10:24:14
Severity: 3 (Normal)
Problem Type: Defect / Bug
Most recent comment, on 2018-10-19 02:23:39, from Janorkar, Anuja: "Hello, Unfortunately, we have not received the update on this. We will get back to you as soon as we get an update. We appreciate your patience. Best Regards, Anuja J. Global Support Services, Red Hat"
On 11/14/2017 11:14 AM, Steve Grubb wrote:
It's for checking that the metadata hasn't been tampered with since signing. For example, suppose you need some packages out of EPEL. EPEL has a distributed mirror list that volunteers contribute bandwidth for everyone's benefit. However, what if their server became compromised and an attacker removed the entry for a critical package update for a network facing daemon? The intent being to keep people from patching to allow more compromises.
Ultimately, I see this example as an impact to availability and not integrity. Assuming the EPEL package signing key was not compromised (I certainly hope EPEL mirrors are not given the package signing key), a modification to the repo metadata will not prevent my systems from verifying the integrity of the software in each package. If a package or patch is missing in the repo, other controls are in place to monitor this (even on FIPS low systems).
I am completely behind "gpgcheck" being a CAT I, but I am not convinced "repo_gpgcheck" should be any higher than CAT II.