About JS framework
by Pierre-Yves Chibon
Good Morning Everyone,
Our infrastructure is mostly a python shop, meaning almost all our apps are
written in python and most use wsgi.
However, in python we are using a number of frameworks:
* flask for most
* pyramid for some of the biggest (bodhi, FAS3)
* Django (askbot, Hyperkitty)
* TurboGears2 (fedora-packages)
* aiohttp (python3, async app: mdapi)
While this sometimes makes things difficult, these are fairly standard frameworks
and most of our developers are able to help on all of them.
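For context, most of these python apps boil down to a small WSGI application;
a minimal flask sketch (the app and route names here are made up, this is not
one of our actual apps):

    # minimal_app.py -- hypothetical example for illustration only
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/status")
    def status():
        # any WSGI server (mod_wsgi, gunicorn, ...) can serve "app" directly
        return jsonify(status="ok")

    if __name__ == "__main__":
        app.run()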
However, as I see us starting to look at JS for some of our apps (fedora-hubs,
wartaa...), I wonder if we could start the discussion early about the different
frameworks and eventually see if we can unify around one.
This would also allow those of us not familiar with any JS framework to look at
the recommended one instead of picking one up semi-randomly.
So, does anyone have experience with one or more JS frameworks? Is there one
you would recommend? Why?
Thanks for your inputs,
Pierre
[PATCH 0/1] Add a flatpak-indexer openshift service
by Owen Taylor
From: "Owen W. Taylor" <otaylor(a)fishsoup.net>
This is an initial attempt to create a configuration for flatpak-indexer to replace
regindexer and add an image delta capability. The config here is derived from
a working openshift configuration, but is untested in this form.
See: https://pagure.io/fedora-infrastructure/issue/9272
Open questions:
How to propagate content to the registry.fedoraproject.org reverse proxy
========================================================================
Currently the regindexer-generated content is rsync'ed from sundries to
fedora-web/registry. How should this be done with flatpak-indexer running as
an openshift app? Some possibilities that come to mind:
- Run a rsyncd within the openshift app (either as a separate deploymentconfig
or as a sidecar to the indexer) and expose a route to it internally in
Fedora infrastructure.
- Run a web server within the openshift app, expose a route to it internally
in Fedora infrastructure, and reverse proxy the content on fedora-web/registry
instead of rsync'ing it (a rough sketch of this follows the list).
- Write the content onto a netapp volume, and mount that volume RO either
on a host running rsyncd or directly on fedora-web/registry.
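To make the second option concrete, the indexer pod could serve its output
directory with any small web server behind the internal route and let
fedora-web/registry proxy it. A rough Python sketch; the output path and port
are assumptions, not what this patch actually configures:

    # serve_index.py -- hypothetical sidecar, not part of this patch
    import functools
    from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

    # serve the generated index read-only over plain HTTP; the reverse proxy
    # on fedora-web/registry would point at the route exposing this port
    handler = functools.partial(SimpleHTTPRequestHandler,
                                directory="/var/lib/flatpak-indexer/out")
    ThreadingHTTPServer(("0.0.0.0", 8080), handler).serve_forever()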
What to use for a redis image
=============================
Redis is used for caching and communication between the components. What redis
image should be used?
- registry.redhat.io/rhel8/redis-5: needs configuration of a subscription
- docker.io/library/redis:5 or centos/redis-5-centos7: we don't rely on such
  images currently
- a custom Dockerfile image built from fedora:32: how would rebuilds be
  triggered?
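Whichever image is picked, the components only need a stock redis endpoint for
the caching and communication mentioned above. A rough Python sketch of that
pattern; the service name, key and channel below are made up for illustration:

    # cache_example.py -- illustrative only, not code from flatpak-indexer
    import redis

    r = redis.Redis(host="flatpak-indexer-redis", port=6379)

    # cache a fetched value with an expiry ...
    r.set("image-delta:some-digest", b"...", ex=3600)

    # ... and notify another component that new work is available
    r.publish("indexer-events", "delta-generated")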
For the two other images needed here, I used ubi8 images - which aren't currently
used elsewhere, but are presumably ok.
How to handle identifying versions to build for staging/production
==================================================================
I see that most openshift applications simply use 'staging'/'production' tags
in the upstream repo, while a few take the approach of having specific hashes
checked into the infrastructure ansible repository.
Is the upstream tag approach considered sufficiently secure? (Tricking the
service into writing a malicious index could be used to make users upgrade to
arbitrary application binaries.)
Owen W. Taylor (1):
Add a flatpak-indexer openshift service
playbooks/openshift-apps/flatpak-indexer.yml | 56 +++++
.../reversepassproxy.registry-generic.conf | 34 ++-
.../flatpak-indexer/files/imagestream.yml | 52 +++++
.../flatpak-indexer/files/service.yml | 16 ++
.../flatpak-indexer/files/storage.yml | 24 ++
.../flatpak-indexer/templates/buildconfig.yml | 84 +++++++
.../flatpak-indexer/templates/configmap.yml | 98 ++++++++
.../templates/deploymentconfig.yml | 221 ++++++++++++++++++
.../flatpak-indexer/templates/secret.yml | 11 +
roles/regindexer/build/tasks/main.yml | 21 --
roles/regindexer/build/templates/config.yaml | 74 ------
11 files changed, 584 insertions(+), 107 deletions(-)
create mode 100644 playbooks/openshift-apps/flatpak-indexer.yml
create mode 100644 roles/openshift-apps/flatpak-indexer/files/imagestream.yml
create mode 100644 roles/openshift-apps/flatpak-indexer/files/service.yml
create mode 100644 roles/openshift-apps/flatpak-indexer/files/storage.yml
create mode 100644 roles/openshift-apps/flatpak-indexer/templates/buildconfig.yml
create mode 100644 roles/openshift-apps/flatpak-indexer/templates/configmap.yml
create mode 100644 roles/openshift-apps/flatpak-indexer/templates/deploymentconfig.yml
create mode 100644 roles/openshift-apps/flatpak-indexer/templates/secret.yml
delete mode 100644 roles/regindexer/build/tasks/main.yml
delete mode 100644 roles/regindexer/build/templates/config.yaml
--
2.28.0
Another Rust MirrorManager experiment
by Adrian Reber
Our MirrorManager setup exports the current state of all mirrors every
hour at :30 to a protobuf based file which is then used by the
mirrorlist servers to answer the requests from yum and dnf.
The Python script requires up to 10GB of memory and takes between 35 and
50 minutes. The script does a lot of SQL queries and also some really
big SQL queries joining up to 6 large MirrorManager tables.
I have rewritten this Python script in Rust and now it only needs around
1 minute instead of 35 to 50 minutes and only 600MB instead of 10GB.
I think the biggest difference is that I do almost no joins in my SQL
requests. I download all the tables once and then do a lot of loops over the
downloaded tables, and this seems to be massively faster.
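The actual rewrite is in Rust, but the approach is easy to sketch in Python:
fetch each table once with plain SELECTs and do the joins in memory with
dictionaries instead of asking the database to do them. The connection details
and table/column names below are simplified illustrations, not the real
MirrorManager schema:

    # illustrative sketch only -- not the real MirrorManager schema
    import psycopg2
    import psycopg2.extras

    conn = psycopg2.connect("dbname=mirrormanager")  # details are assumptions

    def fetch_all(query):
        # one plain SELECT per table, no JOINs
        with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
            cur.execute(query)
            return cur.fetchall()

    hosts = fetch_all("SELECT id, site_id, name FROM host")
    sites = fetch_all("SELECT id, name FROM site")
    urls  = fetch_all("SELECT host_id, url FROM host_category_url")

    # index the smaller tables once ...
    sites_by_id = {s["id"]: s for s in sites}
    urls_by_host = {}
    for u in urls:
        urls_by_host.setdefault(u["host_id"], []).append(u["url"])

    # ... then "join" with dictionary lookups while looping over the big table
    for h in hosts:
        site = sites_by_id[h["site_id"]]
        mirror_urls = urls_by_host.get(h["id"], [])
        # build the protobuf entry for this mirror from site, h and mirror_urls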
As the mirrorlist-server in Rust has proven to be extremely stable over the
last months we have been using it, I would also like to replace the
mirrorlist protobuf input generation with my new Rust based code.
I am planning to try out the new protobuf file in staging in the next few
days and would then try to get my new protobuf generation program into
Fedora. Once it is packaged I would discuss here how and if we want to
deploy it in Fedora's infrastructure.
Being able to generate the mirrorlist input data in about a
minute would significantly reduce the load on the database server and
enable us to react much faster if broken protobuf data has been synced
to the mirrorlist servers on the proxies.
Adrian
Fedora 33 Beta Freeze now in effect
by kevin
Greetings.
We are now in the infrastructure freeze leading up to the Fedora 33
Beta release. This is a pre-release freeze.
We do this to ensure that our infrastructure is stable and ready to
release the Fedora 33 Beta when it's available.
You can see a list of hosts that do not freeze by checking out the
ansible repo and running the freezelist script:
git clone https://infrastructure.fedoraproject.org/infra/ansible.git
ansible/scripts/freezelist -i inventory
Any host listed as frozen is frozen until 2020-09-15 (or later if the
release slips). Frozen hosts should have no changes made to them without
a sign-off on the change from at least 2 sysadmin-main or rel-eng
members, along with (in most cases) a patch sent to this list showing the
exact change to be made.
Thanks,
Kevin
FBR: put new proxies in action
by Mark O'Brien
Hi All,
Attached is a DNS patch to put some new proxy servers in Europe, Asia
Pacific and Africa into service. The servers are up and running and passing
all nagios checks. This patch would start directing the regional traffic
toward these servers.
Just a note, the do-domains script would also need to be run on this. I
left it out so as not to clutter the patch.
Any +1's or comments are appreciated.
Thanks,
Mark
the state of staging
by Kevin Fenzi
greetings everyone.
I thought I would send out an email about where we are with staging and
get more input on some things related to that. :)
Account system / noggin:
I will let the noggin team report here. I know they have been working on
deployment and fixing issues they hit. I don't think other things are
blocking them. Please do chime in here, noggin team!
Buildsystem side:
* koji hub is up
* koji db is up, but the prod->stg sync script failed with a db error.
Still need to debug what is wrong there. Likely the koji db schema
changed and we need to adjust the sync script.
* The x86_64 and s390x buildvm's are up.
* The aarch64/armv7 and ppc64le builders are not yet up. I need to talk
with smooge next week when he is back and we need to adjust vlans and
such to get them online.
* I haven't deployed the aarch64 osbs yet, but it should be doable.
* I am currently doing a load of the db01 prod server db into db01.stg.
This will then need adjustment for staging users/passwords on an app by
app basis. This is a good time/place to change staging applications' db
passwords.
Things I have not deployed and why:
* basset - unclear if we are wanting this again to interface with noggin
or not.
* datagrepper/datanommer - We did not save any of the staging databases
sadly, so we would need to start with a copy of prod, but since we plan
to redo this application soon, I thought why not just wait until then
and we can deploy the replacement in stg.
* notifs - This is also going to get replaced, but also it would be
difficult to deploy in stg since it's on an EOL fedora.
* badges - do we want to deploy this in stg? Or was it going to move to
openshift?
* fedimg - we don't compose images in stg yet and we were going to
replace this anyhow.
* mailman - Also wanting to be redeployed, so figured there wasn't much
point in deploying the old thing again now.
So, basically everything is installed; we now need to go and fix the
applications that aren't working. Perhaps this could be a good use
of a monitoring solution test. :)
One thing that needs doing soon is getting sssd working correctly in
staging, which will allow non-root logins and sudo and hopefully groups
(some of which are used in playbooks). I hope to look more into that
next week.
Thoughts? questions?
kevin
Freeze break request: pagure.fedoraproject.org
by Kevin Fenzi
Greetings.
We have been wanting to upgrade pagure01 from rhel7 to rhel8 for a
while. Additionally, the disk we have defined for it is getting too
small, so we would need to add space.
To deal with both issues, I have created a pagure02 rhel8 instance with
a larger disk on the same virthost.
I'd like to modify the sshd_config on pagure01 (temporarily) to allow
agent forwarding. This will allow us to use the ansible agent on
batcave01 to rsync data over ssh from pagure01 to pagure02. I intend to
only have this active while copying.
I plan to then copy:
/srv
/var/lib/pgsql
/var/www/
over, and then pingou will take over and get things set up and running.
We can then test things until beta freeze is over and then schedule an
outage late next week to do the cutover.
Can I get +1s for the sshd change and plan in general?
Thanks,
kevin