planet
by Kevin Fenzi
Hey folks. I thought I would open a discussion about fedoraplanet and
possibly some plans for it.
Right now:
fedoraplanet.org runs on people02.fedoraproject.org (aka fedorapeople).
To add a blog/rss feed you have to log in there and edit your .planet
file; then scripting pulls all those .planet files, tries to fetch
all the feeds, and serves them up at http://fedoraplanet.org.
It uses an app called 'venus' to do this. venus is written in very old
python2 and is very, very dead upstream.
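For what it's worth, those .planet files are roughly ini-shaped (a section per feed URL, with a few options under it). That exact format is an assumption on my part, but a sketch of more forgiving parsing with Python's configparser might look like this:

```python
# Sketch: parsing a venus-style .planet file with configparser.
# The exact .planet layout (ini sections named by feed URL) is an
# assumption here, not a verified description of venus internals.
import configparser

SAMPLE = """\
[https://example.org/blog/feed.xml]
name = Example Contributor
face = example.png
"""

def feed_urls(text: str) -> list:
    """Return the feed URLs (section names) from a .planet-style file."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return list(parser.sections())

print(feed_urls(SAMPLE))  # → ['https://example.org/blog/feed.xml']
```

Something like this would at least fail with a clear parse error instead of silently breaking the whole run.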
We run into the following problems with it:
* Sometimes it gets stuck and just stops processing until it's killed.
* It serves the site over plain http, which causes people to ask us to
make it https, but that would just change the errors, because many of
the feeds it pulls are still http, since they were added back before
letsencrypt existed.
* We have a handy 'website' field in our new account system, but aren't
using it at all.
* The .planet parsing is poor; any number of things can cause it to
break.
We have two open tickets on it:
- https://pagure.io/fedora-infrastructure/issue/10383 (upgrade to pluto,
a ruby based, but maintained thing)
- https://pagure.io/fedora-infrastructure/issue/10490
(planet not served via ssl), which I am just going to close now.
So, I can think of a number of options and would love everyone who has
thoughts on it to chime in:
1. Do nothing. Venus "works" and .planet files are cool and retro.
2. Switch to pluto and use account system 'website' fields of
contributors. We could likely shove it in openshift and serve it
directly from there to avoid fedorapeople entirely.
(This would likely break anyone who has multiple feeds in there)
3. Switch to something better/bigger. I would think (although I don't
know) that there might be something that would not only aggregate rss
feeds for contributors, but perhaps mastodon/twitter/whatever also.
4. Planets are old and tired, just drop the entire thing. People can
maintain their own rss lists.
5. Planets are old and tired, just drop the entire thing.
But also, get our social media people to maintain contributor /
interesting lists. ie, the fedoraproject twitter account could maintain
a list of 'fedora contributors' and 'fedora packagers' or whatever.
6. Switch to pluto as in 2, but also setup some curators. Have a
'firehose' of all feeds, but the main fedora planet would be just
curated things that are known to be related to fedora and not off topic
or unrelated.
7. Get someone (not it!) to take in all the
twitter/facebook/mastodon/blog posts/rss feeds and post some kind of
curated round-up every week or something.
8. Your brilliant idea here!
So, thoughts? This is not at all urgent, but we should end up doing
something with it sometime. :)
kevin
Rethinking fedora websites deployment
by Francois Andrieu
Hi everyone!
The Websites & Apps team is currently working on rewriting all major Fedora websites (such as getfedora, spins & alt) and I believe this is a good opportunity to revisit the current deployment workflow and try to make it simpler.
Currently, websites are being built in Openshift, with a cronjob running every hour that fetches code from git, builds it, then saves it to an NFS share.
That same NFS share is also mounted on sundries01, which exposes it through rsync for proxies to sync it, then serve it to the world.
I have a few solutions in mind to replace that, and I would like your input on them.
A) Full Openshift
We can build, and deploy the websites directly in Openshift, and serve it from here. Just like silverblue.fp-o.
While this is probably the most straightforward solution, it has one major downside: if Openshift is unavailable for any reason, our major websites also become unavailable.
I believe this is why we are still using our proxies to host them, as such a scenario is unlikely to happen on every single one of them at the same time.
B) Same as before, with a twist
We build on Openshift, but instead of going through NFS and sundries with rsync, we store the websites on S3 storage provided by Openshift, then we sync the proxies using `s3cmd sync`.
C) Same as B, but with an external builder
We already build the new websites on Gitlab CI, and since the S3 gateway is accessible from the outside, we could just push the build artifacts to s3 directly from GitLab CI. Then sync the proxies from it.
D) Keep using the Openshift->Sundries->proxies workflow
E) Your solution here.
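To make option C a bit more concrete, a GitLab CI job pushing artifacts to the S3 gateway could look roughly like the fragment below. The job name, bucket, variable names, and build-job dependency are all placeholders for illustration, not our real pipeline config:

```yaml
# Hypothetical GitLab CI deploy job for option C.
# Bucket, credential variables, and the "build" job are placeholders.
deploy-s3:
  stage: deploy
  image: registry.fedoraproject.org/fedora:latest
  script:
    - dnf -y install s3cmd
    - s3cmd --access_key "$S3_ACCESS_KEY" --secret_key "$S3_SECRET_KEY" sync public/ s3://fedora-websites/
  needs: ["build"]
```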
We could also improve B and C by adding fedora-messaging to the mix, to trigger a proxy resync as soon as a new build is available instead of doing so every hour.
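The fedora-messaging trigger could be a very small consumer; here is a rough sketch. The topic name, bucket, and target path below are made-up placeholders, not real deployment config:

```python
# Sketch of a fedora-messaging-triggered resync (options B/C).
# BUILD_TOPIC, the bucket, and the local path are hypothetical.
import subprocess

BUILD_TOPIC = "org.fedoraproject.prod.websites.build.complete"  # placeholder topic

def sync_command(topic):
    """Return the proxy resync command for a matching message, else None."""
    if topic != BUILD_TOPIC:
        return None
    # bucket and destination are placeholders for illustration
    return ["s3cmd", "sync", "s3://fedora-websites/", "/srv/web/"]

def on_message(message):
    """Callback shape used with fedora_messaging.api.consume(on_message)."""
    cmd = sync_command(message.topic)
    if cmd is not None:
        subprocess.run(cmd, check=True)
```

The point is just that the proxies would react within seconds of a build instead of waiting for the next hourly cron run.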
What do you all think?
-darknao
Shared Redis instance
by Aurelien Bompard
Hey folks!
The new version of FMN will run in OpenShift and will use Redis as a
cache backend (we chose it over memcached because it can do native
"is-this-string-in-this-set" operations).
I can deploy redis inside my openshift project easily enough, but I
was wondering if it would be worthwhile to have a shared Redis
instance, like we have a shared PostgreSQL instance.
It's not just for ease of use, but I expect to store quite a bit of
data in our Redis instance, and since we don't attach persistent
storage to OpenShift that means that it will live in the pod's memory.
So I'm being conscious of the memory hog it can become.
Unless I'm mistaken there can be several databases in the same Redis
instance, so we could share it between projects without stepping on
each other's toes.
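To illustrate the two Redis features being discussed (native set membership and numbered logical databases), here is a tiny sketch. Since no Redis server is assumed available here, it uses an in-memory stand-in class rather than redis-py; the method names mirror the Redis commands SELECT, SADD, and SISMEMBER:

```python
# Illustration only: an in-memory stand-in mimicking two Redis behaviours:
# native set membership (SADD/SISMEMBER) and numbered logical databases
# (SELECT n) that keep each project's keys separate.
class FakeRedis:
    def __init__(self):
        self._dbs = {}   # db number -> {key: set of members}
        self._db = 0

    def select(self, n):
        """Switch logical database, like the Redis SELECT command."""
        self._db = n

    def _data(self):
        return self._dbs.setdefault(self._db, {})

    def sadd(self, key, member):
        self._data().setdefault(key, set()).add(member)

    def sismember(self, key, member):
        return member in self._data().get(key, set())

r = FakeRedis()
r.select(0)                       # FMN's database
r.sadd("tracked-users", "kevin")
r.select(1)                       # another project's database
print(r.sismember("tracked-users", "kevin"))  # → False: databases don't share keys
```

With a real shared instance, each OpenShift project would just be handed its own database number, so keys never collide.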
What do you think?
Re: Rethinking fedora websites deployment
by darknao
On 2022-11-24 18:58, Jan K wrote:
> What is likelyhood of Openshift going down? A would be best
> solution if stable enough.
>
> copperi
>
Openshift is quite stable. But everything around it may not. Having a
network outage, internal VPN issue, datacenter incident, or just an
openshift migration that goes wrong can still happen.
What is Catalog's UV process?
by Fanny Zeng
As a must-see field in the catalogue printing market, album printing has shown good development prospects for many years. It can be used not only as an art display but also as a means of corporate publicity. Compared with a single pattern or text alone, the combination of these two elements has an incomparable advantage.
We often hear that a catalog's cover needs partial UV. So, what is the UV process? UV stands for ultraviolet; in the printing industry, it is a special ink treatment, also referred to as the UV glazing process. A catalog cover that has undergone partial UV treatment will have its highlighted parts stand out, such as the logo: after partial UV treatment and polishing, the logo has a raised feel and a special brightness. Text and images are more textured, more three-dimensional, and more artistic after UV processing, adding the finishing touch.
Depending on the UV area, the process is divided into full-page UV and partial UV. The UV effect on an entire page is usually not very evident; it is like a super polish on the surface, only lighter. It gives the printed material some surface protection, such as preventing scratches. The smell, however, is quite unpleasant, with an odor like plastic dough.
There are also many kinds of UV inks, which tend to have an unusual shine and texture: mirror ink, frosted ink, foam ink, wrinkle ink, and hammer ink, as well as colored sand ink, snow ink, ice ink, pearl ink, crystal ink, laser ink, and so on. The most common is spectacular UV.
The equipment used for UV is called a UV light curing machine; its principle is to cure UV ink by ultraviolet light irradiation. Partial UV can be applied either after lamination or directly on the printed matter, but to highlight the effect of partial UV, it is usually carried out after lamination; this accounts for about 80% of partial UV products. Printed matter that has undergone UV is difficult to recycle.
chinaprinting4u is a China printing company with 20 years of printing experience. If you need more professional catalog printing services, please contact us.
Article source: https://www.chinaprinting4u.com/news326.htm
Fedora 37 Final freeze now in effect!
by Kevin Fenzi
Greetings.
We are now in the infrastructure freeze leading up to the Fedora 37
Final release. This is a final release freeze.
We do this to ensure that our infrastructure is stable and ready to
release Fedora 37 when it's available.
You can see a list of hosts that do not freeze by checking out the
ansible repo and running the freezelist script:
git clone
https://infrastructure.fedoraproject.org/infra/ansible.git
ansible/scripts/freezelist -i inventory
Any host listed as frozen is frozen until 2022-10-18 (or later if the
release slips). Frozen hosts should have no changes made to them without
a sign-off on the change from at least 2 sysadmin-main or rel-eng
members, along with (in most cases) a patch of the exact change to be
made sent to this list.
Thanks,
kevin
I've problem with gdm on wayland
by Setve
Hello. I executed the update command "sudo dnf update --refresh", and immediately after that, without rebooting, I executed the upgrade command "sudo dnf system-upgrade download --releasever=37". Booting in normal mode, the boot process proceeds until "Started gdm.service" is reached, and there it freezes. I cannot use ctrl-alt-f1 through f7. Using a live USB, I uncommented "WaylandEnable=false" in /etc/gdm/custom.conf; after that it worked normally and I logged in with X11. I have an Intel GPU.
I used fpaste; the outputs are here: https://paste.centos.org/view/2f797796
I have also attached the output of journalctl -b -1 -r.
Thank you.
Freeze break request: update/reboot armv7 builders
by Kevin Fenzi
Hey folks.
I'd like to update/reboot our buildvm-a32* builders.
I'm hoping that the 6.0.x kernel will make them happier building python
packages. See https://pagure.io/releng/issue/11095
So, this means applying updates on them and rebooting them all.
I don't see how this could affect the release tomorrow.
f37 doesn't actually use armv7 builders for anything, and even if it
did, it's already done building anyhow. :)
+1s ?
kevin
fate/plans for fedmsg-irc
by Kevin Fenzi
Greetings everyone.
I thought I would open a discussion about fedmsg-irc and what we want to
do with it moving forward.
First some background on what it is. ;)
fedmsg-irc is part of fedmsg (our old zmq message bus). It's a small irc
client that listens to the fedmsg bus and sends messages to an irc
channel based on its configuration. It used to send to a bunch of
channels, but we dropped most of them when we set up matrix bridges.
The only two it has left are:
1. #fedora-fedmsg - This channel gets a constant stream of (most)
everything from the message bus. (and #fedora-fedmsg-stg for staging)
2. #fedora-releng - This channel gets reports about releng related
events (failed composes, compose starts, syncs, tickets, etc).
Personally I have found 1 useful because my irc client logs it and I can
use handy things like grep to look for messages. Also, if something is
wrong and messages aren't flowing by there, I can notice and fix things.
It's also sometimes useful because you can use an irc highlight and see
messages of interest go by. All that said, there are probably better
ways to do all those things: I could make a fedora-messaging listener to
just log everything, improve datagrepper so I didn't need to use grep,
or add alerts for a lack of messages, etc.
2 is there because releng folks liked the notices about things and it
allowed releng to react faster to things like failed composes, etc.
So, we likely don't want to keep running the fedmsg-irc instances
forever, but of course we can keep them going for a while.
Options then would be:
A) Just stop the service and let people figure out their own
alternatives.
B) Add something to do this to the FMN re-write (ie, have a way to
configure a channel/room instead of a user)? (Ideally a matrix thing,
not an irc thing)
C) Retire the service, but write something based on fedora-messaging
that interested folks could run themselves to do the same thing.
(Also ideally a matrix thing)
D) Your idea here.
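As a sense of scale for option C, the self-service relay could be quite small. Here is a rough sketch of the core of such a consumer; the topic suffix and the message body fields are examples I'm assuming for illustration (the real schemas live in the fedora-messaging schema packages):

```python
# Rough sketch of option C: a small fedora-messaging consumer that
# turns bus messages into one-line notices for a Matrix room.
# The topic suffix and body fields below are assumed examples.
def format_notice(topic, body):
    """Turn a message into a one-line notice, or None to stay quiet."""
    if topic.endswith("pungi.compose.status.change"):
        return "compose {}: {}".format(
            body.get("compose_id", "?"), body.get("status", "?"))
    return None  # not a message this relay cares about

def on_message(message):
    """Callback shape expected by fedora_messaging.api.consume()."""
    line = format_notice(message.topic, message.body)
    if line is not None:
        # A real relay would send `line` to a Matrix room here
        # (e.g. with matrix-nio); printing stands in for that.
        print(line)
```

Interested teams could then carry their own topic-to-notice rules without us running a central service.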
Thoughts everyone?
kevin