Holiday Reminder
by Kevin Fenzi
Just a gentle reminder that the holiday season is coming up.
Many contributors have more time to work on things,
while others are spending time away with friends and family.
When you push changes during the holidays, be extra aware
of anything that might cause outages or breakage and
pull someone who was enjoying time away back to fix things.
Happy holidays everyone.
kevin
orphaned packages with infra-sig listed as co-maintainer
by Mattia Verga
Looking at the most recent list of orphaned packages, I see there are a
few with infra-sig listed as co-maintainer:
- python-cornice-sphinx: it was once required by Bodhi to generate docs
for its REST APIs, but it no longer is, since it's broken with Sphinx 4
and upstream is almost dead. I think we can let it go:
$ sudo dnf repoquery --whatrequires python3-cornice-sphinx
<EMPTY_RESULT>
- python-oauthlib: I can't find any reference in ansible that points to
it; it may be a chained dependency of something else:
$ sudo dnf repoquery --whatrequires python3-oauthlib
cloud-init-0:22.2-4.fc37.noarch
python3-keystoneclient-tests-1:4.4.0-3.fc37.noarch
python3-lazr-restfulclient-0:0.14.4-4.fc37.noarch
python3-oauthlib+rsa-0:3.2.1-1.fc37.noarch
python3-oauthlib+signals-0:3.2.1-1.fc37.noarch
python3-oauthlib+signedtoken-0:3.2.1-1.fc37.noarch
python3-requests-oauthlib-0:1.3.1-3.fc37.noarch
python3-ring-doorbell-0:0.7.1-5.fc37.noarch
python3-smart-gardena-0:0.7.10-7.fc37.noarch
python3-social-auth-core-0:4.3.0-3.fc37.noarch
python3-tweepy-0:4.7.0-4.fc37.noarch
- python-requests-oauthlib: I can't find any reference in ansible that
points to it; it may be a chained dependency of something else:
$ sudo dnf repoquery --whatrequires python3-request-oauthlib
<EMPTY_RESULT>
- python-venusian: I can't find any reference in ansible that points to
it, but it is a dependency of cornice and pyramid, so we can't let it go:
$ sudo dnf repoquery --whatrequires python3-venusian
python3-cornice-0:6.0.1-3.fc37.noarch
python3-pyramid-0:1.10.5-7.fc37.noarch
I have taken python-venusian.
There are some other packages which are chained dependencies of other tools:
infra-sig: python-requests-oauthlib, python-cornice-sphinx,
golang-github-fvbommel-sortorder, golang-github-mitchellh-cli,
python-multilib,
python-venusian, python-oauthlib, golang-github-hanwen-fuse,
golang-github-tonistiigi-rosetta, ibus-table-others, python-requests-mock,
python-argon2-cffi, golang-github-containerd-stargz-snapshotter
The golang-* stuff is required by reg (I suppose). Not sure about the
others.
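Before releasing the remaining ones, the same check can be run over all the questionable packages in one loop; a small sketch (the package list and helper name here are just illustrative, and `dnf repoquery` doesn't actually need root):

```shell
# check_revdeps: list the packages that require the given binary package.
check_revdeps() {
  dnf repoquery --whatrequires "$1"
}

# Only run the loop when dnf is actually available on this machine.
if command -v dnf >/dev/null 2>&1; then
  for pkg in python3-cornice-sphinx python3-oauthlib \
             python3-requests-oauthlib python3-venusian; do
    printf '== %s ==\n' "$pkg"
    check_revdeps "$pkg"
  done
fi
```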
Mattia
Rethinking fedora websites deployment
by Francois Andrieu
Hi everyone!
The Websites & Apps team is currently working on rewriting all major Fedora websites (such as getfedora, spins & alt) and I believe this is a good opportunity to revisit the current deployment workflow and try to make it simpler.
Currently, websites are being built in Openshift, with a cronjob running every hour that fetches code from git, builds it, then saves it to an NFS share.
That same NFS share is also mounted on sundries01, which exposes it through rsync for proxies to sync it, then serve it to the world.
I have a few solutions in mind to replace that, and I would like your input on them.
A) Full Openshift
We can build and deploy the websites directly in Openshift and serve them from there, just like silverblue.fp-o.
While this is probably the most straightforward solution, it has one major downside: if Openshift is unavailable for any reason, our major websites also become unavailable.
I believe this is why we are still using our proxies to host them, as such a scenario is unlikely to happen on every single one of them at the same time.
B) Same as before, with a twist
We build on Openshift, but instead of going through NFS and sundries with rsync, we store the websites on S3 storage provided by Openshift, then we sync the proxies using `s3cmd sync`.
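On the proxy side, that sync step could be as small as a cron-driven `s3cmd sync`; a minimal sketch, where the `fedora-websites` bucket name and the docroot path are hypothetical placeholders:

```shell
# Mirror one site from the S3 bucket to the local docroot.
# --delete-removed prunes files that were deleted upstream,
# so the proxy stays an exact mirror of the latest build.
sync_site() {
  local site=$1 docroot=$2
  s3cmd sync --delete-removed "s3://fedora-websites/${site}/" "${docroot}/"
}

# Example invocation (hypothetical paths):
# sync_site getfedora /srv/web/getfedora.org
```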
C) Same as B, but with an external builder
We already build the new websites on GitLab CI, and since the S3 gateway is accessible from the outside, we could push the build artifacts to S3 directly from GitLab CI, then sync the proxies from it.
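The GitLab side could then be a single deploy job appended to the existing build pipeline; a sketch only, where the bucket name, the `public/` artifact directory, and the CI variables for the gateway credentials are all placeholders:

```yaml
# Hypothetical deploy job: pushes the built site to the S3 gateway.
deploy:
  stage: deploy
  image: registry.fedoraproject.org/fedora:latest
  script:
    - dnf install -y s3cmd
    # S3_ENDPOINT / S3_ACCESS_KEY / S3_SECRET_KEY would be CI variables
    - s3cmd --host="$S3_ENDPOINT"
        --access_key="$S3_ACCESS_KEY" --secret_key="$S3_SECRET_KEY"
        sync --delete-removed public/ s3://fedora-websites/getfedora/
  only:
    - main
```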
D) Keep using the Openshift->Sundries->proxies workflow
E) Your solution here.
We could also improve B and C by adding fedora-messaging to the mix to trigger a proxy resync as soon as a new build is available instead of doing so every hour.
What do you all think?
-darknao