Fedora 31 Beta freeze now in effect
by Kevin Fenzi
Greetings.
We are now in the infrastructure freeze leading up to the Fedora 31
Beta release. This is a pre-release freeze.
We do this to ensure that our infrastructure is stable and ready to
release the Fedora 31 Beta when it's available.
You can see a list of hosts that do not freeze by checking out the
ansible repo and running the freezelist script:
git clone https://infrastructure.fedoraproject.org/infra/ansible.git
ansible/scripts/freezelist -i inventory
Any host listed as freezes is frozen until 2019-09-17 (or later if the
release slips or uses the secondary target). Frozen hosts should have no
changes made to them without a sign-off on the change from at least two
sysadmin-main or rel-eng members, along with (in most cases) a patch of
the exact change to be made, sent to this list.
Thanks,
kevin
repospanner and our Ansible repo
by Randy Barlow
Greetings!
Kevin asked me last week whether we are ready to move our
infrastructure Ansible repository into repospanner. The benefit of
moving it into repospanner is that it would give us a way to accept
pull requests on the repository, which I think would be nice.
repospanner seems to work correctly as a git server, but its
performance needs improvement, so I offered to do a little benchmarking
with our Ansible repo to see what kind of performance we could expect.
I deployed a 3-node repospanner cluster today on fairly high-performance
hardware (SSD storage): three VMs on the same physical machine. Note that
due to my test setup, network latency was about as good as it could get,
and so were storage IOPS. I believe the performance bottlenecks will
depend heavily on storage IOPS, so this hardware is not a great predictor
of the performance we would see if we deployed into our infra, but it was
easy for me to set up and gives a "best case" benchmark. I am willing to
replicate this test on more realistic hardware in our infra if we want
more realistic data for our own use case.
I pushed the Ansible repository into it. This took a very long time:
298m2.157s! If we were to deploy nodes in different geos and use NAS
storage, I believe this would take longer. The good thing is that we'd
only need to do this operation once, if we were to decide to proceed.
The next test was to see how long it takes to clone our repo. I did
this on another machine on the same LAN (so again, ideal network
latency) and it took 2m27.433s. That's a pretty long time too, I'd say,
but maybe livable? This would affect every contributor who wanted to
clone the repo, so I'll let the list debate whether that is acceptable.
Next, I made a small commit (just added/deleted some lines) and pushed
it into the cluster. This went reasonably quickly at 0.366s, which I
think we would be OK with.
The last test I performed was to see how quickly another checkout could
pull that commit, and at 4.931s this again struck me as a bit slow,
especially considering it was a single small commit. I would expect this
to be somewhat proportional to the amount of change that has happened
since the user last fetched, and this repo does see a lot of activity.
So I might expect git pull to take tens of seconds for contributors who
are fairly active and pull once every few days or so, and maybe longer
for users who pull less frequently.
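For anyone who wants to reproduce this, something like the following
would measure the same operations (a sketch only; the repospanner URL
and remote name below are placeholders, not what I actually used):

# one-time import: add the cluster as a remote and time the initial push
git remote add spanner https://repospanner.example.org/repo/ansible.git
time git push spanner --all

# clone from another machine on the same LAN
time git clone https://repospanner.example.org/repo/ansible.git

# small commit, pushed to the cluster and then pulled elsewhere
git commit -am "small test change"
time git push spanner master
time git pull    # run in the other, already-cloned checkout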
The repo copy I tested with has 199717 objects and 132918 deltas in it.
repospanner performance seems to scale roughly with these numbers: the
bodhi repo pushed into it in about an hour and has 50k-ish objects, iirc
(I didn't write it down, so that's from memory).
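If you want to compare against your own checkout, git itself can report
the object count (the delta figure is what git prints while resolving
deltas during a clone or repack):
git count-objects -v    # "count" plus "in-pack" gives the total objects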
I personally am on the fence about whether we should proceed at this
time. I am certain that people will notice the speed issues, and I also
expect that it will be slower than the numbers I listed above since my
tests were done on consumer hardware. But it would also be pretty sweet
if we had pull requests on the repo.
Improving repospanner's performance is a goal I am focusing on, so if
we deployed it now I would hopefully be able to get it into better
shape soon. Alternatively, if we'd rather wait for the performance fixes
before proceeding, hopefully that wait wouldn't be too long. I could see
either decision being reasonable.
To reiterate, I'd be willing to replicate the tests above on infra
hardware if we are on the fence about the numbers I've reported here and
want more realistic data before making a final decision; the tests here
were run in a much more ideal situation, performance-wise.
What do others think?
the-new-hotness 0.12.0 deployed on staging
by Michal Konecny
Hi everybody,
today the-new-hotness 0.12.0 was deployed on staging.
This release contains a few bug fixes, a major feature, and a few
development changes. See the changelog for more information [0].
A few highlighted changes:
- Retrieve the monitoring status from dist-git instead of
fedora-scm-requests (more about this in a follow-up e-mail on devel-announce)
- Fix crash when python-bugzilla throws a Fault (this fixes the issue
where Bugzilla was out of date with dist-git and the-new-hotness stopped
reporting because it was crashing on the same message over and over)
- Add diff-cover to tox (when new code is added to the-new-hotness, one
test will check whether that code is covered by tests; this should help
us catch issues that would otherwise be found too late on staging or, in
the worst case, in production; a rough example invocation follows this
list)
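If you haven't used diff-cover before, it compares a coverage report
against the diff with a base branch and fails when the changed lines are
not covered. A minimal invocation looks roughly like this (paths and
branch name are illustrative, not the exact tox configuration used by
the-new-hotness):
# produce a coverage report, then check only the lines changed on this branch
coverage run -m pytest
coverage xml
diff-cover coverage.xml --compare-branch=origin/master --fail-under=100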
Feel free to test it.
On behalf of mages from release-monitoring.org,
Michal
IRC: mkonecny
FAS/GitHub: zlopez
[0] - https://github.com/fedora-infra/the-new-hotness/releases/tag/0.12.0
[Fedocal, Nuancier] looking for new maintainers
by Michal Konecny
Hi everybody,
we are currently looking for community members who would be willing to
take ownership of Fedocal and Nuancier. To see our reasons for this,
take a look at the Fedora Community Blog article [0].
These two applications are part of the Friday with Infra initiative [1],
where you can see what needs to be done for each of them. We are happy
to help with those tasks; just let us know how we can help.
What ownership means:
- You will be responsible for the codebase (looking after the app
lifecycle, fixing bugs, implementing features)
- You will be the admin of the Communishift instance (managing OpenShift
playbooks, maintaining running pods, deploying new versions)
What rewards do you get:
- Learning useful and marketable programming skills (Ansible, Python,
PostgreSQL)
- Learning how to write, deploy, and manage applications in OpenShift!
- Making significant contributions to the Fedora Project community (and
often others)
- A good feeling from helping the Fedora community and the open source
world
- A warm glow of accomplishment
On behalf of CPE Team,
Michal
IRC: mkonecny
FAS: zlopez
[0] - https://communityblog.fedoraproject.org/application-service-categories-an...
[1] - https://fedoraproject.org/wiki/Infrastructure_2020/Friday_with_Infra
FBR: MBS Upgrade
by Matt Prahl
Hello,
On September 6th, the platform:f28 module was retired [1] because the issue
in MBS described in #1243 [2] was fixed, but due to a miscommunication, MBS
was not updated in Fedora's infrastructure to include the fix. Because of
this, any modules that buildrequire a module that was built on a retired
platform stream but can be installed on any Fedora version, can no longer
build. In practice, this occurs with modules that buildrequire the
"javapackages-tools" module. To see more of the backstory, you can read
#1243 [2].
The purpose of this email is to request a freeze exception to upgrade MBS
from v2.25.0 to v2.27.0.
If this request is denied, I'd like permission to temporarily unretire the
platform:f28 stream in the MBS database as an alternative solution.
However, this alternative solution will cause any modulemd file that
buildrequires all platform streams to trigger a Fedora 28 build which will
fail.
[1] - https://pagure.io/fedora-infrastructure/issue/7862
[2] - https://pagure.io/fm-orchestrator/issue/1243
Thank you for your help.
Sincerely,
Matt