On Fri, 13 Jan 2017 13:58:25 +0100
Josef Skladanka <jskladan(a)redhat.com> wrote:
> On Thu, Jan 12, 2017 at 7:42 AM, Tim Flink <tflink(a)redhat.com> wrote:
> > The idea was to start with static site generation because it doesn't
> > require an application server, is easy to host and likely easier to
> > develop, at least initially.
> >
> I don't really have a strong preference either way, just wanted to say
> that "initial development" time is the same for a web app and for
> statically generated pages: both do the same thing, they take an input
> plus an output template and produce output. You can't really get around
> that from what I'm seeing here. A statically generated page equals cached
> data in the app, and for starters we can go on using just the
> stupidest of caches provided in Flask (even though it might well be
> cool and interesting to use some document store later on, but that's
> premature optimization now).
Honestly, I don't care a whole lot about how the dashboards are
implemented so long as they get done and they get done relatively
quickly.
> > > After brief discussion with jskladan, I understand that
> > > resultsDB would be able to handle requests from a dynamic page.
> >
> > Sure, but then someone would have to write and maintain it. The
> > things that drove me towards static site generation are:
> >
> Write and maintain what? I'm being sarcastic here, but this sounds
> like the code for statically generated pages will not have to be written
> and maintained... And once again, the actual code that does the
> actual thing will be the same, regardless of whether the output is a
> web page or an HTTP response.
I was thinking that a static site generator would work around the need
for auth and interface code to create new dashboards. We could just
have a git repo of yaml files; if someone wanted a new dashboard,
they could submit a PR with the new yaml file.
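As a strawman for what a merge-time check on such a PR could look like (the required keys here are hypothetical, and I'm assuming the yaml file has already been parsed into a dict with something like PyYAML):

```python
# Sketch of validating a dashboard definition submitted via PR.
# The spec is assumed to already be parsed (e.g. yaml.safe_load on the
# new file); the required keys are made up, not a settled format.
REQUIRED_KEYS = {"url", "overview"}

def validate_dashboard(spec):
    """Return a list of problems; an empty list means the PR looks sane."""
    if not isinstance(spec, dict):
        return ["top level must be a mapping"]
    missing = REQUIRED_KEYS - spec.keys()
    return [f"missing key: {key}" for key in sorted(missing)]
```

Run over every file in the repo, that would give us basic CI for dashboard PRs without any auth or interface code.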
> >
> > > * I'm not sure what exactly is meant by 'item tag' in the examples
> > >   section.
> > >
> > > * Would the YAML configuration look something like this:
> > >
> > >     url: link.to.resultsdbapi.org
> > >     overview:
> > >       - testplan:
> > >         - name: LAMP
> > >         - items:
> > >           - mariadb
> > >           - httpd
> > >         - tasks:
> > >           - and:
> > >             - rpmlint
> > >             - depcheck
> > >           - or:
> > >             - foo
> > >             - bar
> >
> > I was thinking more of the example yaml that's in the git repo at
> > taskdash/mockups/yamlspec.yml [1] but I'm not really tied to it
> > strongly - so long as it works and the format is easy enough to
> > understand.
> >
> I guess I know where you were going with that example, but it is a bit
> lacking. For one, all it really allows for is a "hard and" relationship
> between the testcases in the testplan (dashboard, call it whatever you
> like), which might be enough, but with what was said here it will
> start being insufficient pretty fast. The other thing is that we
> really want to be able to do the "item selection" in some way. We
> sure could say "take all results for all these four testcases, and
> produce a line-per-item", but that is so broad that it IMO stops
> making sense anywhere beyond the "global" (read: applicable to all the
> items in ResultsDB) testplans.
This is meant as an initial direction, not a final resting place. I
fully expect that the functionality will continue to evolve if we adopt
the project.
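For what it's worth, evaluating nested and/or groups like the ones in the quoted YAML is cheap even for a demo; here's a hypothetical sketch, with outcomes simplified to a testcase-to-pass/fail mapping:

```python
# Evaluate a nested and/or requirement tree against per-testcase outcomes.
# A node is either a testcase name, or a one-key dict like
# {"and": [...]} / {"or": [...]} whose children are nodes themselves.
def evaluate(node, outcomes):
    if isinstance(node, str):
        return outcomes.get(node, False)  # a missing result counts as a fail
    (op, children), = node.items()
    results = (evaluate(child, outcomes) for child in children)
    return all(results) if op == "and" else any(results)
```

So `evaluate({"and": ["rpmlint", "depcheck", {"or": ["foo", "bar"]}]}, ...)` expresses the example testplan without hardcoding a flat "everything must pass" rule.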
> > > Is there going to be any additional grouping (for example,
> > > based on arch) or some kind of more precise outcome aggregation
> > > (only warn if part of a testplan is failing, etc.)?
> >
> > Maybe, but I think those features can be added later. Are you of the
> > mind that we need to take those things into account now?
> >
> I don't really think that they can. Take a simple "gating" dashboard
> for example. There is a pretty huge difference between "package
> passes if rpmlint, depcheck and abicheck pass on it" and "package
> passes if rpmlint, depcheck and abicheck pass for all the required
> arches", and I'm certain we want to be able to do the latter. It is
> not really a "pass" when rpmlint passed on ARM, depcheck on x86_64
> and abicheck on i386, but all the other combinations failed.
Why can't all that be hardcoded for now, at least? The required checks
and arches don't change very often.
> It might seem like unnecessarily overcomplicating things, but I don't
> think that the dashboard-generating tool should make assumptions (like
> that grouping by arch is what you want to do) - it should be spelled
> out in the input format, so there is as much black box removed as
> possible. Will it take more time to write the input? Sure. Is it
> worth it? Absolutely.
Again, this wasn't intended as a final spec but as a starting point. If
at all possible, I want to have something which can be shown off as at
least a demo before devconf. With that in mind, I'd like to keep
everything as simple as possible for now.
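If we did hardcode it for the demo, the per-arch gating you describe fits in a few lines; the check and arch lists below are made-up stand-ins, not a settled policy:

```python
# A package "gates" only if every required check passed on every required
# arch; rpmlint-on-ARM plus depcheck-on-x86_64 alone is not a pass.
REQUIRED_CHECKS = ("rpmlint", "depcheck", "abicheck")  # hypothetical hardcoded list
REQUIRED_ARCHES = ("armhfp", "i386", "x86_64")         # hypothetical hardcoded list

def gates(passed):
    """passed: set of (check, arch) pairs that have a PASSED result."""
    return all((check, arch) in passed
               for check in REQUIRED_CHECKS
               for arch in REQUIRED_ARCHES)
```

Moving those two tuples into the input format later would give exactly the spelled-out behaviour you're asking for, without blocking the demo now.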
> > > only, or/and some kind of summary over a given period in
> > > history?
> >
> > For now, the latest results. In my mind, we'd be running the
> > dashboard creation on a cron job or in response to fedmsgs. At that
> > point, we'd date the generated dashboards and keep a record of
> > those without needing a lot more complexity.
> >
> The question here is "what is latest results"? Do we just take
> now minus a month for the first run, and then "update" on top of that? I
> would not necessarily have a problem with that, it's just that we
> most definitely would want to capture _some_ timespan, and I think
> this is more about "what timespan it is".
Yeah, I hadn't really thought about that and the details probably need
more working out.
> If we decide to go with "take the old state, apply updates on top of
> that", then we will (I think) pretty quickly arrive at a point where we
> mirror the data from ResultsDB, just in a different format, stored in
> a document store instead of a relational database. Not saying it's a
> bad or wrong thing to do. I actually think it's a pretty good
> solution - better than querying increasingly more data from ResultsDB
> anyway.
Again, this was meant to be a starting point, not a final product. I
want to have something which is demo-able by next week and assume that
the demo stuff will need more work before we can rely on it.
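For the record, the "apply updates on top of old state" approach boils down to keeping the newest result per (item, testcase) key; a minimal sketch (field names are assumptions on my part, not the actual ResultsDB schema):

```python
# Mirror only the latest result per (item, testcase), updating incrementally
# as new results arrive (e.g. from a cron job or a fedmsg consumer).
def apply_updates(state, new_results):
    for result in new_results:
        key = (result["item"], result["testcase"])
        # keep whichever result carries the newer timestamp
        if key not in state or result["time"] > state[key]["time"]:
            state[key] = result
    return state
```

That keeps the mirror bounded by the number of distinct (item, testcase) pairs rather than by the full result history, which is the "better than querying increasingly more data" property you mention.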
From the stuff that you've already found, it sounds like you guys are
making great progress. Thanks for working on this.
Tim