I wrote to devel some time ago regarding the deprecation of the apps.fp.o
index and the plan to move its content to the main docs. Kevin mentioned that
it could end up in the infrastructure docs and that the whole thing should be
moved to docs.fp.o at some point. I will take a look at both, since I have
wanted to play with the new documentation pipeline for a while. I am not the
best person to meddle with the infrastructure docs, but I might as well do
something useful while playing with Antora. Tell me if this is not useful or
if I missed something.
I might have something to show you at Flock if I have trouble sleeping in the
See you in Budapest,
On Mon, Apr 29, 2019 at 4:47 PM Kamil Paral <kparal(a)redhat.com> wrote:
> On Mon, Apr 29, 2019 at 11:39 AM Sinny Kumari <ksinny(a)gmail.com> wrote:
>> On Wed, Apr 24, 2019 at 12:19 AM Kevin Fenzi <kevin(a)scrye.com> wrote:
>>> Or could we move f29+ all to whatever is replacing it? (taskotron?)
>> It would be nice, but I am not aware of any other system in place which
>> replaces the checks performed by autocloud.
>> (CC'ed tflink and kparal)
>> Does Taskotron provide the capability to perform tests on Fedora Cloud
>> images, like booting images and other basic checks?
> Theoretically it is possible using nested virt. However, Taskotron is
> going away as well. The replacement is Fedora CI:
Thanks, Kamil! Yeah, it doesn't make sense to move to Taskotron if it is
going to be deprecated as well.
> I recommend to ask in the CI list:
> It should be possible for them to provide the infrastructure you need.
Hmm, I am not very sure if we should spend time investigating and setting up
a replacement for autocloud unless we have use cases for the long run. Fedora
Atomic Host Two Week releases end with F29 EOL.
The packages app is running on Fedora 30, and its dependencies are not
available in Fedora 31+ as I understand it.
This means it has about 7 months before we need to do something about
it, or shut it off.
Do we know if it can run on RHEL 7?
Fedora's complete MirrorManager setup is still running on Python 2. The
code was ported to Python 3 probably more than two years ago, but we have
not switched yet. One of the reasons is that the backend is running on
RHEL 7, which means we are not in a hurry to deploy the Python 3 version.
The mirrorlist server, which answers the actual dnf/yum queries for
a mirrorlist/metalink, is, however, running in a Fedora 29 container.
This container also still uses Python 2 and it actually cannot use the
One of MirrorManager's design points is that the mirrorlist servers,
which answer around 27,000,000 requests per day, do not directly
access the database. The backend creates a snapshot of the relevant
data (113MB) and the mirrorlist servers use this snapshot to
answer client requests.
This data exchange is based on Python's pickle format, and that does not
seem to work with Python 3 if the data was generated using Python 2.
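The exchange described above can be sketched roughly like this. The snapshot
layout and key names here are invented for illustration; the real
MirrorManager cache is far more involved:

```python
import os
import pickle
import tempfile

# Hypothetical, simplified snapshot: a repo-query -> mirror-URL mapping.
snapshot = {
    "fedora-31&arch=x86_64": [
        "https://mirror.example.org/fedora/linux/releases/31/",
    ],
}

# Backend side: dump the snapshot to a file. Under Python 2 this produced
# pickles that Python 3 often fails to load cleanly (str/bytes mismatches).
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(snapshot, f, protocol=2)
    path = f.name

# Mirrorlist-server side: load the snapshot to answer client requests.
# When the file comes from Python 2, pickle.load() in Python 3 usually needs
# encoding="latin-1" to decode the old str objects, and even that does not
# always round-trip the data correctly.
with open(path, "rb") as f:
    cache = pickle.load(f, encoding="latin-1")

print(cache["fedora-31&arch=x86_64"][0])
os.remove(path)
```

The `encoding` argument is accepted harmlessly for Python 3 generated
pickles, which is why it is a common stopgap rather than a real fix.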
Having used protobuf before, I added code to also export the data for the
mirrorlist servers based on protobuf.
The good news with protobuf is that the resulting file is only 66MB
instead of 113MB. The bad news is that loading it from Python requires
3.5 times the amount of memory at runtime (3.5GB instead of 1GB).
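For reference, a protobuf schema for such a snapshot might look roughly like
the following; the message and field names here are invented, and the actual
schema shipped with the mirrorlist-server may differ:

```protobuf
// Hypothetical, simplified schema for the mirror snapshot.
syntax = "proto2";

message MirrorListEntry {
  // Repository a client asks for, e.g. "fedora-31".
  required string repository = 1;
  // Candidate mirror base URLs for that repository.
  repeated string urls = 2;
}

message MirrorListCache {
  repeated MirrorListEntry entries = 1;
}
```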
In addition to the data exchange problems between the backend and the
mirrorlist servers, the architecture of the mirrorlist server does not
really make sense today. Twelve years ago it made a lot of sense, as it
could be easily integrated into httpd and easily reloaded without
stopping the service. Today the mirrorlist server and httpd are all part
of a container which sits behind haproxy, so there is a lot of
infrastructure in the container which is not really useful.
To get rid of the pickle format and to have a simpler architecture, I
reimplemented the mirrorlist-server in Rust. This was brought up some
time ago in a ticket, and with the protobuf problems I was seeing in
Python it made sense to try it out.
My code can currently be found at https://github.com/adrianreber/mirrorlist-server
and so far the results from the new mirrorlist server are the same as
those from the Python-based mirrorlist server.
It requires less than 700MB instead of the 1GB in Python with production
data and seems really fast.
I have set up a test instance with the mirror data from Sunday at:
The instance is based on the container I pushed to quay.io:
$ podman run quay.io/adrianreber/mirrorlist-server:latest -h
With this change the mirrorlist server would also finally switch to
geoip2. The currently running mirrorlist server still uses the legacy
GeoIP databases.
After the Fedora 31 freeze I would like to introduce this new mirrorlist
server implementation on the proxies. I have already verified that I can
run this mirrorlist container rootless. The new container can be a drop-in
replacement for the current one, and no infrastructure around it
needs to be changed.
The main changes to get it into production are to change mirrorlist1.service
and mirrorlist2.service to include a line "User=mirrormanager" and to
replace the current container name with the new one.
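As a sketch, the relevant part of the unit file could look like this; the
ExecStart line is illustrative only, and the real unit carries more options:

```ini
# Hypothetical excerpt of mirrorlist1.service; only the relevant lines shown.
[Service]
# Run the container rootless as the mirrormanager user.
User=mirrormanager
# Point at the new container image instead of the old Python-based one.
ExecStart=/usr/bin/podman run --rm quay.io/adrianreber/mirrorlist-server:latest
```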
As you can read here, the Fedora Join SIG is experimenting with a new way
to help people become part of the community.
This workflow also includes a temporary FAS group, if required or requested
by the newcomer. Since this overlaps with the current "wikiedit"
system, we were wondering if it would make sense to retire it and send
newcomers to the Fedora Join channels instead, where they can speak to
community members and take their time learning about and exploring the
community and its projects.
It'll be one less responsibility for the infra team, and it'll help us
channel folks to the new workflow.
Please let us know if this sounds OK and we'll edit the wiki page here
We'll send out more posts announcing the new workflow to the community
this week. If you have the cycles, please hang out in the Fedora Join channels.
Today jlanda, austinpowered and mizdebsk discussed ticket
https://pagure.io/fedora-infrastructure/issue/8157 in #fedora-admin and
came up with a few questions on how to implement that solution, which I think
would be nice to share with the wider group.
There are basically two possibilities:
1 - We run ansible-report as a pre-commit hook
This means that ansible-report will be run locally before a contributor
commits a change. This is not ideal, since our contributors are running all
kinds of systems (RHEL, Fedora, Windows?), so having something that works
well for everyone will not be simple. Also, this forces our contributors to
install ansible-report locally.
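Option 1 could look roughly like the following hook sketch. It assumes
ansible-report accepts file arguments (an invented detail) and is installed
on the contributor's machine:

```python
#!/usr/bin/env python3
"""Hypothetical .git/hooks/pre-commit sketch for option 1.

Runs ansible-report only on the YAML files staged for commit; the exact
ansible-report command line is an assumption."""
import subprocess


def select_yaml(paths):
    # Only playbooks/roles are worth linting; skip everything else.
    return [p for p in paths if p.endswith((".yml", ".yaml"))]


def main():
    # Ask git for the files that are staged (added, copied, or modified).
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    files = select_yaml(staged)
    if not files:
        return 0
    # A non-zero exit status aborts the commit.
    return subprocess.run(["ansible-report", *files]).returncode

# In the actual hook, finish with: sys.exit(main())
```

Keeping the check limited to staged files keeps the hook fast, but it is
exactly the per-platform install burden the paragraph above describes.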
2 - We run ansible-report as a pre-receive hook
This means that ansible-report is run on batcave01, but we cannot run
ansible-report on just a commit; we need to run the tool against the full
repository every time. That involves making a clone of the repo, applying the
changes from the incoming commit, then running ansible-report on that
repository. This also has a few disadvantages: first, we need to clear all
the errors reported by ansible-report in our repo before we enable the hook,
otherwise all commits will be rejected. It will also slow down every
push (time to clone, apply the patch, run the tool).
Do people have other ideas? Is this change worth the trouble?
We are now in the infrastructure freeze leading up to the Fedora 31
Final release. This is a final release freeze.
We do this to ensure that our infrastructure is stable and ready to
release Fedora 31 when it's available.
You can see a list of hosts that do not freeze by checking out the
ansible repo and running the freezelist script:
ansible/scripts/freezelist -i inventory
Any hosts listed as frozen are frozen until 2019-10-22 (or later if the
release slips). Frozen hosts should have no changes made to them without
a sign-off on the change from at least two sysadmin-main or rel-eng
members, along with (in most cases) a patch of the exact change to be
made, sent to this list.