areas where we can invest in automation?

Robyn Bergeron rbergero at redhat.com
Tue May 28 14:58:23 UTC 2013



----- Original Message -----
> From: "Ralph Bean" <rbean at redhat.com>
> To: "Fedora Infrastructure" <infrastructure at lists.fedoraproject.org>
> Sent: Tuesday, May 28, 2013 6:45:43 AM
> Subject: Re: areas where we can invest in automation?
> 
> On Tue, May 28, 2013 at 08:33:22AM -0400, Matthew Miller wrote:
> > I was asked (with my Red Hat hat on) to put together a little report on
> > areas in Fedora which could be improved with an investment in better
> > automation. From what I'm working on myself, I'm aware of the gigantic need
> > in the cloud images production process, and I've been keeping an eye on Tim
> > Flink's autoqa revamp ideas. I expect there are others, because I know from
> > my previous jobs that there's always a balance between building
> > condiment-passing machines and just _passing the salt_. [1] Are there other
> > things which could be made better if only someone came up with the spare
> > time and resources to do the work?
> 
> Hi!
> 
> Some observations/ideas:
> 
> 1) The packager workflow is pretty tedious.  There has been some
>    improvement to it, but more can be done.  Things like
>    fedora-review and fedora-create-review (and bodhi!) are a huge
>    help.  But there are plenty of inefficient "blocking" points in the
>    process.
> 
>    For instance, once a new package is approved, only then does the
>    submitter declare what branches they want with an scm admin
>    request.  They then wait for an scm admin to declare that they
>    have created their branches, and then wait for a cronjob to run
>    that gives them permission to push on those branches (manually).
>    They then wait for their koji builds to finish to (manually) submit
>    bodhi updates.
> 
>    It would be nice if we could automate that whole process -- once a
>    package is approved, if there were a "make-it-so" button that
>    required no further intervention from the packager (but still
>    required the keen eye of an scm admin).
> 
>    There are further sequences down the pipeline like requesting that
>    packages in testing be pushed to stable, but there are good
>    arguments against automating that.
> 
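(Just to picture what a "make-it-so" button might do behind the scenes - here is a totally hypothetical Python sketch. Every helper in it is a made-up placeholder for one of the manual steps above, not an existing API, and the scm admin's approval could stay as the single human "go" in front of the whole thing.)

    # Hypothetical "make-it-so" pipeline for a newly approved package.
    # Every helper below is a made-up placeholder for a step packagers
    # currently do by hand; none of them exist today.
    import time

    def wait_until(predicate, interval=60):
        # Poll until an asynchronous step (scm admin, ACL cronjob, koji) finishes.
        while not predicate():
            time.sleep(interval)

    def make_it_so(pkg, branches, review_bug):
        file_scm_request(pkg, branches, review_bug)        # replaces the manual scm request
        wait_until(lambda: branches_exist(pkg, branches))  # scm admin + ACL cronjob
        for branch in branches:
            task = start_koji_build(pkg, branch)
            wait_until(lambda: build_finished(task))
            submit_bodhi_update(pkg, branch)               # file the update automatically
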
> 2) Continuous deployment for infrastructure.  It has been tossed
>    around in IRC, possibly at FUDCon as well.  If application
>    developers could "git push" on the develop branch and have those
>    changes automatically roll out to our staging infrastructure --
>    that would save a lot of time.  Packaging our apps, building rpms,
>    signing them, copying them to our infra yum repos, rebuilding those
>    repos, clearing the cache on the target machines, performing a yum
>    update <-- that process is cumbersome.
> 
>    I suspect that the "release only when we have accumulated enough
>    changes to warrant enduring the burdensome release process" mode
>    of deployment (as opposed to "release early, release often") also
>    poses somewhat of a barrier to new contributors.  They contribute
>    a patch.. nice!  When does it go live?  When one of our
>    overstretched sysadmin-mains can get around to it (it is required
>    that one of them sign the package).

This would be super. And yes - I think the "instant gratification", combined with having a lot less process to learn, would likely yield additional contributions.

Do we have tiers for apps according to their "must stay up" level - things that are critical vs. NTH? It seems like we could pick a non-critical app or two, work out what the process would be, and take that process for an in-production test run.
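For the sake of argument, here is roughly what I picture the "git push to staging" path condensing down to. It assumes the rpm has already been built and signed, and the repo path and hostnames are placeholders, not anything real:

    # Rough sketch of today's manual deploy steps rolled into one script.
    # Assumes the rpm is already built and signed, and that we have ssh/sudo
    # access to the staging hosts; paths and hostnames are placeholders.
    import subprocess

    INFRA_REPO = "/srv/infra/repo/stg/x86_64"      # placeholder path
    STAGING_HOSTS = ["app01.stg", "app02.stg"]     # placeholder hosts

    def deploy_to_staging(rpm_path, package):
        subprocess.check_call(["cp", rpm_path, INFRA_REPO])
        subprocess.check_call(["createrepo", INFRA_REPO])    # rebuild repo metadata
        for host in STAGING_HOSTS:
            subprocess.check_call(["ssh", host, "sudo yum clean metadata"])
            subprocess.check_call(["ssh", host, "sudo yum -y update " + package])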



> 
>    Caveat #2.1:
>    There are some ways around this.  Individuals can get around the
>    requirement of having a sysadmin-main touch their test release by
>    installing their rpm directly on the target machine.  They still
>    have to jump through some hoops to make it happen.
> 
>    Caveat #2.2:
>    This is one of the reasons we put so much work into our private
>    cloud (dev nodes).  There is no barrier there for teams to set up
>    their own continuous deployment mechanism.  This meets most needs,
>    but we don't have a way to iterate rapidly on some of the more
>    important pieces of our infrastructure.  Apps/services that
>    interact with each other don't quite work out on isolated cloud
>    nodes.  The bodhi masher?  Koji?  fedmsg?  mirror manager?  We
>    can't necessarily test those on dev nodes (and some we can't test
>    in staging -- resolving this down the road would save some
>    headaches).

So I'm definitely treading into "woefully technically unedumacated" territory - but could we not just take snapshots of the associated apps/services/databases, test whatever changes are incoming against those snapshots, and, if they pass, push the changes to production and dump the tested nodes? (I realize this is simplifying things - we might have changes that require coordinated changes in more than one place, etc. - but just wondering if I'm totally off-base here.) Or is this just a matter of "the infra things aren't running in the same place"?
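Something along these lines is what I have in my head (all the function names are invented - I'm only sketching the shape of it):

    # Hand-wavy sketch of the snapshot -> test -> promote idea; every helper
    # here is an invented placeholder, not something we actually have.
    def test_then_promote(change, services):
        nodes = [clone_from_snapshot(svc) for svc in services]  # throwaway copies
        try:
            apply_change(change, nodes)
            if run_test_suite(nodes):
                push_to_production(change)
        finally:
            for node in nodes:
                destroy(node)                                    # dump the tested nodes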

-robyn


