#6064: extend srpm-excluded-arch.py so it can read srpms from multiple dirs
by Fedora Release Engineering
#6064: extend srpm-excluded-arch.py so it can read srpms from multiple dirs
-----------------------------+------------------------
Reporter: sharkcz | Owner: rel-eng@…
Type: task | Status: new
Milestone: Fedora 21 Final | Component: other
Keywords: | Blocked By:
Blocking: |
-----------------------------+------------------------
Currently srpm-excluded-arch.py reads the srpms used to generate the
"excludelist" for koji-shadow from a single directory. This limits its use
to the cases where buildrawhide/buildbranched are run (rawhide and the
branched release). If the script is extended to read the srpms from
multiple directories, it will also be able to generate the excludelist for
released Fedoras by reading both the GA repo and the updates repo, as
sketched below.
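A minimal sketch of what such an extension could look like, for illustration
only; this is not the actual srpm-excluded-arch.py, and the function names
and repo paths are made up:

import glob
import os
import rpm

def iter_headers(srpm_dirs):
    """Yield an rpm header for every .src.rpm in any of the given dirs."""
    ts = rpm.TransactionSet()
    # Skip signature verification; not every signing key may be imported.
    ts.setVSFlags(rpm._RPMVSF_NOSIGNATURES)
    for directory in srpm_dirs:
        for path in glob.glob(os.path.join(directory, '*.src.rpm')):
            fd = os.open(path, os.O_RDONLY)
            try:
                yield ts.hdrFromFdno(fd)
            finally:
                os.close(fd)

def build_excludelist(srpm_dirs, arch):
    """Collect names of packages whose spec excludes the given arch."""
    excluded = set()
    for hdr in iter_headers(srpm_dirs):
        # Basic idea only; real arch matching is more nuanced than a plain
        # string comparison (noarch, arch aliases, ...).
        if (arch in hdr[rpm.RPMTAG_EXCLUDEARCH] or
                (hdr[rpm.RPMTAG_EXCLUSIVEARCH] and
                 arch not in hdr[rpm.RPMTAG_EXCLUSIVEARCH])):
            excluded.add(hdr[rpm.RPMTAG_NAME])
    return sorted(excluded)

# For a released Fedora, point it at both the GA and updates repos:
# build_excludelist(['/pub/fedora/linux/releases/20/Everything/source/SRPMS',
#                    '/pub/fedora/linux/updates/20/SRPMS'], 'ppc64')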
--
Ticket URL: <https://fedorahosted.org/rel-eng/ticket/6064>
Fedora Release Engineering <http://fedorahosted.org/rel-eng>
Release Engineering for the Fedora Project
Re: Random thoughts/crazy idea: Drop SSL certs
by Pierre-Yves Chibon
On Mon, Apr 27, 2015 at 04:32:17PM +0200, Till Maas wrote:
> On Mon, Apr 27, 2015 at 03:45:00PM +0200, Pierre-Yves Chibon wrote:
> > Good morning everyone,
> >
> > This weekend I had a random thought, which I quickly discussed with Dennis on
> > IRC on Sunday, but that I thought might be interesting to discuss with a wider
> > audience.
> >
> > The initial thought came from a text that Dennis wrote:
> > """
> > Releng tracks this data in 2 systems, 1 of which we own: Koji and Bodhi. Koji
> > uses ssl certs tied to FAS and bodhi uses FAS for authentication to provide a
> > strong relationship between a user and the content
> > """
> > Source: https://fedoraproject.org/wiki/ReleaseEngineering/Philosophy#Auditable
> >
> > This has led me to the question: is this all that SSL certs bring us?
> >
> > The following only works under the assumption that it is.
> > So SSL certs are basically a certain type of API token: everyone has one,
> > it is specific to koji and the lookaside cache, it is time limited, and it
> > gives us a way of doing authentication and authorization server side.
> >
> > So on this front they behave just like any other API token, but using SSL
> > certs has some pros and cons:
> >
> > pros:
> > - Easy to find out when the token expires
> > - In place and working
> > - Known to the current process maintainers
> >
> > cons:
> > - One per client
> > - Hard to invalidate iiuc (ie: if someone's machine is compromised/lost it
> > is hard to make this user's certificate invalid)
> > - Relies on the SSL pile:
> >   - master certificate
> >   - self-signed vs signed by an authority
> >   - complex tooling
> > - Makes us maintain a whole infrastructure stack around this (cf the dogtag
> > discussion)
> > - Relies on the master certificate, which expires every X years, making
> > everyone generate a new client certificate
> >
> > On the other side, we have recently been feeling more and more the need for
> > a centralized API authentication service, something along the lines of a
> > personalized OAuth. This also has pros and cons.
> >
> > pros
> > - API token per user and per application
> > - Could support multiple tokens per application
> > - Central place to manage API tokens (ie: a central place to revoke
> > someone's access if a machine gets compromised/lost)
> > - Simpler than dealing with the SSL stack
> > - Can be re-used by multiple applications
> >
> > cons:
> > - It's an idea and it needs work :)
> > - Impacts:
> >   - dist-git
> >   - koji
> >   - ?
> >
> > I do realize that this would be a pretty big task to undertake, but we are
> > currently at a stage where we are planning for the future, including the
> > next generation of koji, but also FAS3, dogtag, our master cert expiring in
> > a couple of years...
> >
> > So I thought I would leave this here as food for thoughts and I'm happy to
> > discuss pros and cons of this idea.
> >
> - not transferred in the clear, unlike a token
What is? :)
All our apps are running behind https, and if API tokens are something not to
do, I'd really like to know the recommended way of doing CLI authentication
that can last longer than a web-cookie/browser time-out.
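For illustration, here is roughly what the two schemes look like from a CLI
client's point of view; the endpoint URL, file path, and header name are made
up for the example:

import requests

URL = 'https://koji.example.org/api/tasks/1234'  # made-up endpoint

# Client-certificate auth, as koji does today: the cert/key pair is
# presented during the TLS handshake and must be reissued when it expires.
resp = requests.get(URL, cert='/home/user/.fedora.cert')

# Token auth, as proposed: a per-user, per-application secret sent over the
# existing https connection, revocable server-side at any time.
resp = requests.get(URL, headers={'Authorization': 'token 0123456789abcdef'})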
Pierre
Mash multilib optimization
by Toshio Kuratomi
Hey guys, I wasn't able to get mash to run locally, so I never got profiling
of it done this vacation. But I did see several obvious ways to optimize
the multilib code, so here's a patch that will probably help make that
piece of the puzzle faster.
There are a few conceptual changes:
1) Only create the lists of packages and files once, when we create the
class, rather than every time we instantiate it (or worse, every time we
run the select() method, which is what it was doing before).
2) Use frozensets instead of lists wherever we're doing a containment test
('string' in set_of_packages). Sets are a hashed lookup, whereas lists
have to be searched linearly.
3) Reduce the calls to fnmatch. fnmatch uses a regex under the hood, so it's
not the cheapest operation in the world (although the regex compilation
is cached, so it's not the worst thing either). I was able to change
a few fnmatches into containment tests, and others I was able to put
behind a cheaper conditional that skips those tests altogether when it
is satisfied (see the sketch below).
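Not the patch itself, but a minimal sketch of the kind of changes described
above; the class, attribute, and package names here are hypothetical:

import fnmatch

class MultilibMethod(object):
    # Built once, at class-definition time, not per instance and not per
    # select() call.
    PREFER_64 = frozenset(('gdb', 'frysk', 'systemtap'))
    DEVEL_PATTERNS = ('*-devel', '*-static')

    def select(self, po):
        # frozenset containment is a hashed O(1) lookup; a list would be
        # scanned linearly.
        if po.name in self.PREFER_64:
            return True
        # Keep the comparatively expensive fnmatch calls behind a cheap
        # containment test so most packages skip them entirely.
        if '-' in po.name:
            return any(fnmatch.fnmatch(po.name, pat)
                       for pat in self.DEVEL_PATTERNS)
        return False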
Without profiling a mash run I don't know how much this will speed up
mashing, but talking to people it seemed like the multilib portion was taking
30-45 minutes to complete, and since this was all low-hanging fruit it seemed
like a good place to start.
-Toshio
How do get Koji Staging up and running again?
by Adam Miller
Hello all,
As the new person in the group, I'm just full of ideas and energy
and I want to do all the things! As such, I'm spamming your inboxes
again today, but this time it's to talk about Koji Staging.
I was hoping that we could go ahead and write down a list of everything
that needs doing, so that we could maybe size that list out into
consumable tasks. From there, these tasks would be something that
different people could take point on, or that someone newer to the
team could use as learning opportunities that aren't too
daunting. (Alright, I'm mostly thinking of myself on that one, but if
there are fellow Fedora Rel-Eng newbies on the list who'd like to join
in, I'd be happy to share in the learning experiences! :) )
Without further ado:
What needs to be done to make koji staging functional again?
(Also, should we maintain this in the wiki somewhere? Or possibly some
other collaborative document thing that everyone likes?)
Thanks all!
-AdamM
shared koji shadow setup
by Dan Horák
Hi all,
the reasons why I started to think about a shared koji-shadow setup
were:
- allow multiple people to work together on maintaining koji shadow on
secondary arches
- track all config changes in git, so it is clear who made what change
(and why)
- store the configs and logs in a default and visible place
Currently it is set up by
http://ppc.koji.fedoraproject.org/shadow/tmp/setup-shared-shadow
which means:
- everything happens under /home/shadow
- people need to be members of a group
- configs are in a shared dir, but certs are taken from people's home dirs
The idea is that there is one person who works on branched (and rawhide)
and another who is responsible for updates in released Fedoras (looking
at failed builds, updating the "exclude" list, etc.).
The group membership should have 2 levels: the first is at the OS level, for
people to be able to log in and modify the configs on the hub; the second
could limit the shadowers from the full "admin" permission in Koji to
"buildfromsrpm" only.
There is also a wrapper script for koji-shadow that
- does log rotation, using screen to capture the logs
- selects the right config, which is derived from the tag
- allows building either everything from a tag or individual builds
(a rough sketch of this logic follows below)
See http://s390.koji.fedoraproject.org/shadow/bin/run-shadow.sh.dev for the
latest version.
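For illustration only, the core of such a wrapper might look like the
following; the config layout and the koji-shadow -c option are assumptions,
the screen-based log capture of the real script is replaced here with a plain
log redirect, and run-shadow.sh.dev above is the authoritative version:

import subprocess
import time

def run_shadow(tag, builds=None):
    # Select the config derived from the tag, e.g. 'f21-updates' picks
    # /home/shadow/etc/koji-shadow-f21-updates.conf (assumed layout).
    config = '/home/shadow/etc/koji-shadow-%s.conf' % tag
    # A dated logfile per run gives simple log rotation.
    logfile = '/home/shadow/logs/%s-%s.log' % (tag, time.strftime('%Y%m%d-%H%M'))
    # Assumed invocation: a config file plus either individual NVRs,
    # or no extra arguments to shadow everything in the tag.
    cmd = ['koji-shadow', '-c', config] + (builds or [])
    with open(logfile, 'ab') as log:
        subprocess.check_call(cmd, stdout=log, stderr=subprocess.STDOUT)

# run_shadow('f21-updates')                      # everything from a tag
# run_shadow('f21-updates', ['foo-1.0-1.fc21'])  # an individual build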
This scheme has worked fine for months on the s390 koji, and to some degree
also on the ppc koji. In my opinion, having standardised procedures can only
help, even when there are plans to rework the secondary-arch workflow in
koji 2.0.
Dan
[PATCH] use new vagrant koji format options and add Cloud Vagrant box
by Ian McLeod
use new vagrant koji format options and add Cloud Vagrant box
---
scripts/build-cloud-images | 26 +++++++++++++++++++++-----
1 file changed, 21 insertions(+), 5 deletions(-)
diff --git a/scripts/build-cloud-images b/scripts/build-cloud-images
index c84d994..1b7184d 100755
--- a/scripts/build-cloud-images
+++ b/scripts/build-cloud-images
@@ -49,6 +49,25 @@ do
koji image-build Fedora-Cloud-$spin $RELEASE --distro Fedora-20 $TARGET --kickstart=fedora-cloud-$lspin-$GITHASH.ks $url x86_64 i386 --format=qcow2 --format=raw-xz --release=$BUILD --scratch $REPOS --nowait --disk-size=3
done
+for spin in Base-Vagrant
+do
+ declare -l lspin
+ lspin=$spin
+ kickstart=fedora-cloud-$lspin-$GITHASH.ks
+ ksflatten -c fedora-cloud-$lspin.ks -o $kickstart
+ echo "url --url=$url"|sed -e 's|$arch|$basearch|g' >> $kickstart
+ koji image-build Fedora-Cloud-$spin $RELEASE $TARGET $url x86_64 \
+ $REPOS \
+ --release=$BUILD \
+ --distro Fedora-20 \
+ --kickstart=fedora-cloud-$lspin-$GITHASH.ks \
+ --format=vagrant-libvirt \
+ --format=vagrant-virtualbox \
+ --scratch \
+ --nowait \
+ --disk-size=40
+done
+
for spin in Atomic
do
declare -l lspin
@@ -76,11 +95,8 @@ do
--release=$BUILD \
--distro Fedora-20 \
--kickstart=fedora-cloud-$lspin-$GITHASH.ks \
- --format=qcow2 --format=raw-xz \
- --format=vsphere-ova \
- --format=rhevm-ova \
- --ova-option vsphere_ova_format=vagrant-virtualbox \
- --ova-option rhevm_ova_format=vagrant-libvirt \
+ --format=vagrant-virtualbox \
+ --format=vagrant-libvirt \
--ova-option vagrant_sync_directory=/home/vagrant/sync \
--scratch \
--nowait \
--
2.1.0