Good question :) I'd say RHEL8 in an OCP cluster should be a derivative (an extension, in CaC terms) of the main content, as every difference should be clearly visible and the number of differences kept to a minimum.
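In CaC terms such a derivation is usually a derived profile that extends a base one; a minimal sketch of what the RHEL8-in-OCP delta could look like (the base profile ID and both rule IDs below are hypothetical, just to show the shape):

    documentation_complete: true
    title: 'RHEL 8 node in an OCP 4 cluster'
    description: 'Extends the plain RHEL 8 profile; only the deltas are listed.'
    extends: standard                      # assumed base profile ID
    selections:
      - '!service_named_enabled'           # drop a rule that does not apply on cluster nodes
      - kubelet_anonymous_auth_disabled    # add an OCP-specific rule

That way the OS content stays authoritative, and the cluster flavor only records what differs.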

On Tue, Mar 17, 2020 at 1:15 PM Jakub Hrozek <jhrozek@redhat.com> wrote:
On Tue, Mar 17, 2020 at 12:25:11PM +0100, Jakub Hrozek wrote:
> On Tue, Mar 17, 2020 at 12:03:16PM +0100, Marek Haicman wrote:
> > Hello Jakub,
> > thank you for the question - if I understand correctly, here are the
> > scenarios that you anticipate:
> > * RHCOS (standard checks - there is currently no other checking of RHCOS
> > outside OCP, AFAIK)
> > * kubelet on RHCOS
>     ^^^^^^
>
> btw the kubelet check was just something I used as an example of a check
> that is specific to OCP/k8s, will be implemented with the YAML probe so
> it's sort of outside the usual OS-level checks. It's not the only one,
> but an example.
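> As an aside, in ComplianceAsCode such a rule would roughly be generated from
> the yamlfile_value template, which produces the yamlfilecontent-based OVAL;
> the path, yamlpath and value below are illustrative, not the real rule:
>
>     title: 'Ensure the kubelet disables anonymous authentication'
>     severity: medium
>     template:
>         name: yamlfile_value
>         vars:
>             path: /etc/kubernetes/kubelet.conf
>             yamlpath: .authentication.anonymous.enabled
>             value: 'false'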
>
> > * RHEL8 standard checks
> > * kubelet checks on RHEL8
> > * RHEL7 standard checks
> > * kubelet checks on RHEL7
> >
> > Now what are the options:
> > 1. have everything in OCP4
> > 2. RHCOS in OCP4, and rest in respective products, with profiles containing
> > both FedRAMP Moderate and OCP specific checks
>
> FedRAMP Moderate is a standard expressed as a profile here, right? Did
> you mean OS and OCP specific checks?
>
> > 3. standard checks in respective products, and kubelet checks in separate
> > application "product"
> >
> > 1. is the most self-contained: only one data stream as a result, and a
> > simpler way to trigger a scan - you just scan, and the content knows what is
> > applicable and what's not. At the same time, the content itself will be
> > complex, and will live independently of the OS product development.
>
> The advantage I see here is that the maintenance complexity decreases.
> But I agree we might be just shifting the maintenance cost from profiles
> to rules.
>
> > 2. the operator needs to know what is being scanned in order to apply the
> > correct data stream (as there are three of them).
>
> It needs to know what is being scanned anyway. You provide the content
> profile in a scan definition. And the scan is per pool, i.e. per set of
> machines, so you'd have something like this:
>
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>     name: example-compliancesuite
> spec:
>     scans:
>     - name: rhel-workers-scan
>       profile: foo-bar-baz
>       contentImage: bar-baz-foo
>       nodeSelector:
>         node-role.kubernetes.io/rhel-worker: ""
>     - name: rhcos-workers-scan
>       profile: foo-bar-baz
>       contentImage: bar-baz-foo
>       nodeSelector:
>         node-role.kubernetes.io/rhcos-worker: ""
>
> Unless we go for option 1, the profile and contentImage would differ
> either way for each scan.
>
> > Results are still complete per node.
> > Complexity is not so high, but every product contains parts that depend on
> > the kubelet checks - at least three places to record every change
> > in the profile.
> > 3. the operator scans each node twice - once for the standard OS checks, and a
> > second time for the kubelet checks. This creates more complex result
> > aggregation, but all (four) pieces are developed just once.
>
> So we're going to have two scans either way: One that checks the
> node-level rules and another one that checks the cluster-level rules
> such as "Is log forwarding enabled for the cluster?". Do I read it
> correctly that you propose to split the node-level scans further into
> "Linux-specific" checks and "kubernetes client-specific" checks?
>
> To be honest, I'm not sure whether the versions of the agents or other
> kubernetes-specific stuff on the nodes would be the same across the cluster
> here, IOW whether we can guarantee that the checks would be the same on the
> cluster. I suspect they will, though.
>
> But the thing that I personally dislike here is that you would either have
> to define two scans per machine pool, or the scan would have to be able
> to consume two contents and launch the scans internally. Or the operator
> would have to be able to deduce the kubernetes-level content from the
> os-level content.
>
> This approach would be palatable if the k8s-level checks were the
> same across the cluster.
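> To make that concrete, option 3 would mean two scans per pool along these
> lines (the profile and contentImage names are placeholders, following the
> ComplianceSuite example above):
>
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>     name: example-compliancesuite
> spec:
>     scans:
>     - name: rhel-workers-os-scan
>       profile: rhel8-os-profile        # placeholder
>       contentImage: rhel8-content      # placeholder
>       nodeSelector:
>         node-role.kubernetes.io/rhel-worker: ""
>     - name: rhel-workers-kubelet-scan
>       profile: kubelet-profile         # placeholder
>       contentImage: kubelet-content    # placeholder
>       nodeSelector:
>         node-role.kubernetes.io/rhel-worker: ""
>
> So each pool carries twice the scan definitions, and the results for one node
> end up split across two scans.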

btw how does reusing OS content deal with differences in how we want to
use the OS in the OCP context? See the earlier example about bind on
RHEL vs. CoreDNS on OCP nodes?
_______________________________________________
scap-security-guide mailing list -- scap-security-guide@lists.fedorahosted.org