Hi Steve,

Your statement "The ability to modify the repo metadata is the same capability as creating the
package in the first place." is actually the core of the issue that I have.

I push up packages with a *package signing key* that should not be the same as the *repo signing key*. They should be different keys since one is the repo provider's domain and one is the package provider's domain.

Again, I'm not saying that it shouldn't be done; I'm just saying that having them be the same key is too dangerous, since the repo key would then be trusted to install packages, which it should not be.
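
For concreteness, a rough sketch of the two signing operations I mean (the
package name and key name below are just placeholders):

    # Package signing: done by the package provider with the package key
    # (assumes %_gpg_name in ~/.rpmmacros points at that key)
    rpm --addsign mypackage-1.0-1.el7.noarch.rpm

    # Repo metadata signing: done by the repo provider with a *different* key
    gpg --detach-sign --armor -u 'Example Repo Key' repodata/repomd.xml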

Thanks,

Trevor

On Tue, Nov 14, 2017 at 2:48 PM, Steve Grubb <sgrubb@redhat.com> wrote:
Hello Trevor,

On Tuesday, November 14, 2017 10:24:35 AM EST Trevor Vaughan wrote:
> I get the bigger value of the GPG validation check but I think that the
> current implementation is severely flawed.
>
> If there were a separate setting for GPG keys used for repo validation,
> such as repo_gpgkey, I would be more than happy to use it and flip it on.

The ability to modify the repo metadata is the same capability as creating the
package in the first place. That is why it uses the same key. For example,
someone having direct access to repo metadata can possibly modify dependency
information to force a new but vulnerable package onto your system. This is
the same as directly modifying the rpm to require a new dependency. So, how do
you detect this threat without repo gpg signature checking?

(All packages that are resolved and downloaded still have to pass GPG key
verification, so it's not like they are forcing some random, unsigned package
onto your system.)
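
For reference, that per-package check is the same one you can run by hand on
any downloaded rpm (the key path and package name are just examples):

    # import the vendor's package signing key, then verify a downloaded rpm
    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    rpm --checksig example-1.0-1.el7.x86_64.rpm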

Several of the defenses that yum developed to protect the integrity of the
system came out of this study way back in 2008:

https://www2.cs.arizona.edu/stork/packagemanagersecurity/otherattacks.html#extradep


> However, currently, these are the two potential threat avenues:
>
>    1. Accept GPG Keys for Repos
>       - Allows the *repository maintainer* (Nexus, PackageCloud, a random
>         directory on a webserver) to transparently add or replace arbitrary
>         vendor packages with those of their choosing, targeted at thousands
>         of systems, without the downstream user's knowledge

This is also true of the "Trust TLS" option. They can add dependencies which
install new software. I'm not sure yet about replacing an arbitrary
vendor-supplied package. You can fix this with a yum plugin. See my last
comment below.

>          - Mitigation: Manually validate all package signatures on your
>            system after installing them (Horrible)
>          - Mitigation: Require internal YUM mirrors of all upstream packages
>            to a trusted repository via the SSG (I'm kind of OK with this one,
>            but how do you check it? Also, the fundamental issue still holds
>            unless you're re-signing the repodata and, if that is automated, a
>            system compromise is just as bad, if not worse, than the TLS case,
>            since arbitrary things could be signed)

The way it starts out is by finding the time stamp of the latest signing. It
then uses this to check the time stamps of the mirrors. If they pass, it
proceeds to download and check the GPG signature. So the way that it works
_is_ trustworthy, with no mitigation needed beyond enabling repo_gpgcheck.
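
Turning it on is just a per-repo setting; a minimal sketch (the repo id, URL,
and key path are examples):

    # /etc/yum.repos.d/example.repo
    [example]
    name=Example repository
    baseurl=https://repo.example.com/el7/x86_64/
    enabled=1
    # verify the signature on each package
    gpgcheck=1
    # also verify the detached signature on the repo metadata (repomd.xml)
    repo_gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-example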

However, if you wanted to take this on yourself, then information can be found
here:

https://blog.packagecloud.io/eng/2015/07/20/yum-repository-internals/

They show how to verify things by hand. This can of course all be scripted.
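
Boiled down, the by-hand check is roughly the following (the URL is an
example):

    # fetch the repo metadata and its detached signature
    curl -O https://repo.example.com/el7/x86_64/repodata/repomd.xml
    curl -O https://repo.example.com/el7/x86_64/repodata/repomd.xml.asc

    # verify against the publisher's (already imported) repo signing key;
    # repomd.xml itself carries the checksums of the other metadata files,
    # which in turn carry the checksums of the individual packages
    gpg --verify repomd.xml.asc repomd.xml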


>    2. Trust TLS
>       - In the event of a repository system compromise that bypasses SELinux
>         restrictions and DAC permissions (kernel-level exploit?), someone
>         can remove a flawed package and regenerate the metadata

But how do you even detect that the repo has been compromised, to know that it
needs regenerating?


>          - Mitigation: Run the vendor OVAL for checking insecure packages
>            (which we're all doing anyway, right?!)

It is true that every async RHSA erratum gets an entry in the OVAL content.
But not every reason to do an update/install is correlated to a security
advisory. Perhaps a functionality upgrade now pulls in some new packages?
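
For what it's worth, that vendor OVAL check is usually something along these
lines; the exact download URL and file name vary:

    # fetch Red Hat's OVAL definitions and evaluate the local system
    curl -O https://www.redhat.com/security/data/oval/com.redhat.rhsa-RHEL7.xml
    oscap oval eval --results oval-results.xml com.redhat.rhsa-RHEL7.xml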


> Unless I'm missing something, I know which one I'm much more comfortable
> with: the latter, as it is better for the user and easy to mitigate using
> currently mandated best practice.
>
> Again, once something like repo_gpgkey exists and is fully integrated, I'd
> be more than comfortable with this.

The only issue I see is how yum handles collisions on packages between repos.
I think the answer may be to use yum-plugin-priorities. Using that, you can
assign the repo you trust least a higher number. That would make it use the
Red Hat repos first, and then work down to the one you are suspicious of.

Description : This plugin allows repositories to have different priorities.
              Packages in a repository with a lower priority can't be
              overridden by packages from a repository with a higher priority,
              even if the repo has a later version.

An example of its use is here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/sect-Configuring_Software_Repositories.html
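
A minimal sketch of that setup (the repo ids and priority numbers are only
examples):

    # install the plugin
    yum install yum-plugin-priorities

    # then add a priority to each repo section in /etc/yum.repos.d/*.repo;
    # a lower number wins, so give the repo you trust least a high number
    [rhel-7-server-rpms]
    priority=1

    [thirdparty]
    priority=99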

-Steve


> On Tue, Nov 14, 2017 at 9:47 AM, Steve Grubb <sgrubb@redhat.com> wrote:
> > On Tuesday, November 14, 2017 9:37:18 AM EST Arnold, Paul C CTR USARMY PEO STRI (US) wrote:
> > > On 11/13/2017 06:59 PM, Steve Grubb wrote:
> > > > ...the current rev of OSPP calls out for auditing of software update
> > > > integrity checks. It calls out for integrity checks and for them to be
> > > > enabled. It calls out for the vendor to supply SCAP content for the
> > > > evaluated configuration. So that means we shouldn't be turning it off.
> > >
> > > What are we gaining by enabling repo_gpgcheck in addition to gpgcheck?
> >
> > It's for checking that the metadata hasn't been tampered with since
> > signing. For example, suppose you need some packages out of EPEL. EPEL has
> > a distributed mirror list that volunteers contribute bandwidth for
> > everyone's benefit. However, what if their server became compromised and
> > an attacker removed the entry for a critical package update for a network
> > facing daemon? The intent being to keep people from patching to allow more
> > compromises.
> >
> > This setting would check the metadata to ensure that the signature
> > verification shows the metadata is untampered with. TLS protects against
> > modifying an in-transit package or metadata. But it doesn't tell you that
> > your package resolution is using trustworthy data.
> >
> > -Steve





--
Trevor Vaughan
Vice President, Onyx Point, Inc
(410) 541-6699 x788

-- This account not approved for unencrypted proprietary information --