Tomorrow I am planning to work on the following ticket (
https://pagure.io/fedora-infrastructure/issue/8157 ). If there are people
interested in working on this with me, or who just want to tag along, I
will start working on it in #fedora-admin at 12:00 UTC.
My name is Alvaro Castillo, I'm 25 years old. I'm from the Canary
Islands, but I'm living in Mallorca (Spain).
I have a blog, https://echemosunbitstazo.es, where I write howtos,
articles and more about the operating systems world and web development
(PHP, Python, JS...), as well as shell scripting in Bash or Python.
I've worked with Ansible, IDM (Red Hat Identity Management), KVM, VMware
Virtual Center, Nagios, Red Hat Enterprise Linux, Commvault and much more.
Currently, I work at RIU Hotels & Resorts as a system administrator.
I hope to collaborate with this fantastic community. My username is
You are kindly invited to the meeting:
Fedora Infrastructure on 2019-10-24 from 15:00:00 to 16:00:00 UTC
The meeting will be about:
Weekly Fedora Infrastructure meeting. See the infrastructure list for the agenda a day before.
On Sat, Oct 19, 2019 at 07:38:04PM -0400, Neal Gompa wrote:
> On Sat, Oct 19, 2019 at 7:37 PM Kevin Fenzi <kevin(a)scrye.com> wrote:
> > Greetings communishift group (and infrastructure list).
> > I was working on the communishift cluster trying to fix its failing
> > upgrades as well as some cert issues, and managed to munge up the
> > cluster but good. ;( It's a tribute to the resilience of OpenShift
> > that it's still up and serving applications. :)
> > In any event, I think the easiest way to clean things up and get back to
> > normal is for us to just reinstall it. With that in mind, I am planning
> > to do so starting at 21:00 UTC on 2019-10-21 (Monday).
> > If everyone could oc export any config or data they wish to save before
> > then that would be great.
> > Sorry for the trouble, but hopefully we will be back on track after
> > that.
> Out of curiosity, is there some documentation somewhere of how this
> process is being handled?
Which process? The re-install?
Then after the install we run a few things (set up our idp, storage,
certs and users).
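For anyone saving project state before the reinstall, something like the
following might work (shown as a dry run; the project name and pod name
are placeholders, and `oc export` is the OpenShift 3.x-era syntax):

```shell
# Dry-run sketch: print the commands for backing up a project before the
# reinstall. Drop the "echo" to actually run them. PROJECT and the pod
# name are placeholders; adjust them to your own namespace.
PROJECT=myproject
EXPORT_CMD="oc export all -n $PROJECT --as-template=$PROJECT-backup"
# Persistent volume data is not covered by oc export; copy it out, e.g.:
RSYNC_CMD="oc rsync some-pod:/data ./data-backup"
echo "$EXPORT_CMD"
echo "$RSYNC_CMD"
```

Redirect the export output to a YAML file somewhere off-cluster so it
can be re-applied after the reinstall.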
As you may know, database backups on db-koji01 are currently causing
very heavy load, disrupting our users' builds, so they have been
disabled for now.
However, not having current backups is not a good thing, IMHO.
So, I am considering the idea of adding a db-koji02 vm (also RHEL 7,
running the same postgres version as db-koji01), enabling streaming
replication from db-koji01 -> db-koji02, and then, once that's working,
running the database backups on db-koji02.
It turns out this doesn't require that many changes on db-koji01:
* adding a replication user
* Setting 3 new lines of postgresql config and restarting:
wal_level = 'hot_standby'
max_wal_senders = 10
wal_keep_segments = 100
(May need to adjust senders and keep segments)
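As a sanity check on wal_keep_segments sizing: each WAL segment is 16 MB
by default, so the retained WAL works out as follows.

```shell
# Rough arithmetic for the wal_keep_segments value above: this much WAL
# stays on db-koji01 for the standby to catch up from. Bump it if
# db-koji02 falls behind during heavy write activity.
SEGMENTS=100
SEGMENT_MB=16        # default WAL segment size
RETAINED_MB=$((SEGMENTS * SEGMENT_MB))
echo "${RETAINED_MB} MB of WAL retained"   # 1600 MB
```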
All the other changes are on db-koji02:
* create/setup the vm
* run pg_basebackup to pull all the current data from 01
* setup postgresql.conf and recovery.conf files
* start server and confirm it keeps up with 01
* run pg_dump and confirm it keeps up with 01
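As a rough sketch of the db-koji02 side (the replication user name and
paths are assumptions; on RHEL 7-era PostgreSQL the standby settings
live in recovery.conf in the data directory):

```shell
# Sketch of the db-koji02 setup steps above. "replicator" and the data
# directory path are placeholders; adjust to the real setup.

# 1. Pull the current data from 01 (run as the postgres user, into an
#    empty data directory):
#      pg_basebackup -h db-koji01 -U replicator \
#          -D /var/lib/pgsql/data -X stream -P

# 2. Minimal recovery.conf to start streaming replication from 01:
cat > recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=db-koji01 user=replicator'
EOF

# 3. Once the standby is started, check that it keeps up with 01:
#      psql -c "SELECT now() - pg_last_xact_replay_timestamp() AS lag;"
```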
This is, of course, a really big change to a critical service during a
freeze, so I'd like to get thoughts from others about it.
Should we wait until after the freeze and do without backups until then?
(Note that we have never had to restore this db from backups in the
past, although we have dumped/restored it to move to newer postgres
versions.)
Is there something else we can do that's easier, to mitigate the issues?
Thoughts? ideas? Rotten fruit?
inventory/group_vars/bastion | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/inventory/group_vars/bastion b/inventory/group_vars/bastion
index aeacc87e4..8e63d1a11 100644
@@ -23,7 +23,7 @@ custom_rules: [
# TODO - remove modularity-wg membership here once it is not longer needed:
# This is a postfix gateway. This will pick up gateway postfix config in base
This is a freeze break request to enable the new mirrorlist server on
proxy14 as discussed on the mailing list.
I hope my conditionals are correct for the Ansible and Jinja2 files.
If this freeze break request gets accepted someone needs to run the
playbook against proxy14.
Before running the playbook, proxy14 should be removed from DNS, to make
sure that the old mirrorlist containers are correctly stopped and
deleted and that the new mirrorlist containers are correctly running.
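The rollout order could then look roughly like this (shown as a dry run;
the playbook path and the rbac-playbook wrapper are assumptions based on
the usual batcave workflow, not taken from this request):

```shell
# Dry-run sketch of the proxy14 rollout order described above. The
# playbook path is a placeholder; substitute the real one.
HOST=proxy14.fedoraproject.org
# 1. Remove $HOST from DNS and wait for the TTL to expire.
# 2. Run the playbook limited to that host:
PLAYBOOK_CMD="sudo rbac-playbook groups/proxies.yml -l $HOST"
echo "$PLAYBOOK_CMD"
# 3. Verify the new mirrorlist containers are up, then re-add to DNS.
```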
Adrian Reber (1):
Enable new mirrorlist server on proxy14
roles/mirrormanager/backend/files/backend.cron | 8 +++++---
.../backend/templates/sync_pkl_to_mirrorlists.sh | 2 +-
roles/mirrormanager/mirrorlist_proxy/tasks/main.yml | 2 +-
.../mirrorlist_proxy/templates/mirrorlist.service.j2 | 4 ++--
4 files changed, 9 insertions(+), 7 deletions(-)
One of the main goals for Fedora 31 Silverblue was to have core
applications actually pre-installed when you install Silverblue. These
are the applications that were removed from the fixed Silverblue image a
few releases ago because they could be installed as Flatpaks. The
anaconda feature landed
this cycle, and we have all the applications available as
Fedora-infrastructure built Flatpaks, so the last missing piece was
getting the ostree-installer config updated appropriately.
We tried to squeeze this in before the final freeze, but there were
some bugs in the configuration and templates that didn't quite work.
Those are all fixed and tested in rawhide, and we'd like to backport
them.
There are three things that we need:
* Backport fixes to the lorax template:
* Backport fixes to the Pungi config:
* Update Pungi on the F31 compose VM to a version that includes the
patch from https://pagure.io/pungi/pull-request/1278 - this was
already done on the VM that runs rawhide composes.
Note that Silverblue is *not* a release-blocking deliverable for
Fedora 31. I think the risk to the overall compose is pretty small,
given that everything here is a direct backport from Rawhide, and
caused no problems there.