On Mo, 08.11.21 13:54, David Cantrell (dcantrell(a)redhat.com) wrote:
> > One of the reasons we are sticking to JSON here is so that we can
> > use battle-tested parsers we already use for other stuff. You want a
> > parser that is already used, verified, tested elsewhere, and JSON
> > makes that easy. A homegrown parser of an entirely new special purpose
> > format is a lot more problematic security-wise.
>
> In particular, the implementation in systemd is undergoing continuous
> fuzzing in oss-fuzz, so we hope any simple issues have been already
> caught.
I wasn't really concerned with the reliability of the parser, but
rather the necessity of JSON in the first place. The more layers
anything has, the higher the risk.
JSON is truly universal. We use it *everywhere* already, including in
trusted, security-sensitive areas.
Most prominently, LUKS2 uses JSON to encode most of its metadata in
the LUKS2 superblock. It doesn't get much more security-sensitive than
that.
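To give a sense of how much structure already lives in that JSON: recent versions of cryptsetup can dump the LUKS2 metadata with `cryptsetup luksDump --dump-json-metadata <device>`. A rough, abbreviated sketch of its shape (the field values here are illustrative, not taken from a real device):

```json
{
  "keyslots": {
    "0": {
      "type": "luks2",
      "key_size": 64,
      "kdf": { "type": "argon2id", "time": 4, "memory": 1048576, "cpus": 4 },
      "area": { "type": "raw", "encryption": "aes-xts-plain64",
                "offset": "32768", "size": "258048" }
    }
  },
  "segments": {
    "0": { "type": "crypt", "offset": "16777216", "size": "dynamic",
           "encryption": "aes-xts-plain64", "sector_size": 512 }
  },
  "tokens": {},
  "digests": {},
  "config": { "json_size": "12288", "keyslots_size": "16744448" }
}
```

Note that offsets and sizes are encoded as JSON strings rather than numbers, since large integers are not reliably round-tripped by all JSON implementations.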
And I think that's a *good* thing: JSON might not be perfect (nothing
is), but it's certainly one of the better-designed generic data
formats around, and its complexity is absolutely manageable. I am
pretty sure *more* subsystems should use it to store their data, and
that we'd unify *more* on it, rather than *less*. Homegrown, manual,
application-specific formats and parsers are a *bad* thing, and not
something to strive for.
Sharing data formats and unifying behaviour and parsers is a way to
*minimize* components, both at the conceptual level and in code. Thus
one shouldn't see the reuse of JSON here as "yet another layer", but
rather as "reuse of existing infra" that *reduces* the number of
layers.
So yes, the fact that the JSON parser used here is a layer that is
already widely deployed (albeit in other contexts) and has been
thoroughly fuzzed is a *good* thing, and putting together a new parser
here for a new format wouldn't be.
Lennart
--
Lennart Poettering, Berlin