About JS framework
by Pierre-Yves Chibon
Good Morning Everyone,
Our infrastructure is mostly a Python shop, meaning almost all our apps are
written in Python, most of them using WSGI.
However, within Python we are using a number of frameworks:
* flask for most
* pyramid for some of the biggest (bodhi, FAS3)
* Django (askbot, Hyperkitty)
* TurboGears2 (fedora-packages)
* aiohttp (python3, async app: mdapi)
While this sometimes makes things difficult, these are fairly standard
frameworks and most of our developers are able to help on all of them.
However, as I see us starting to look at JS for some of our apps (fedora-hubs,
wartaa...), I wonder if we could start the discussion early about the different
frameworks and eventually see if we can unify around one.
This would also allow those of us not familiar with any JS framework to look at
the recommended one instead of picking one up semi-randomly.
So, does anyone have experience with one or more JS frameworks? Is there one
you would recommend? Why?
Thanks for your inputs,
Pierre
Future of fedora-packages
by Clement Verna
Hi all,
The fedora-packages [0] code base is showing its age. The code base and
the technology stack (the TurboGears2 [1] web framework and the Moksha
[2] middleware) are currently not ready for Python 3, and I am not
planning to do the work required to make them Python 3 compatible, so the
application will stop working when Fedora 29 is EOL.
In order to keep the service running, I have started a Proof of
Concept (fedora-search [3]) to replace the backend of the application.
fedora-search would be a REST service offering a full text search
API. Such a service would then be available for other applications to
use, and fedora-packages would become a frontend-only application
consuming the service provided by fedora-search.
While the POC shows that this is a viable solution, I don't think we
should proceed that way, for the simple reason that it adds yet
another code base to maintain. Instead, I think we should use this
opportunity to consider Elasticsearch rather than maintaining our
own "search engine".
I think that Elasticsearch offers quite a few advantages (a short sketch
of the Python bindings follows the list):
- Powerful Query language
- Python bindings
- Javascript bindings
- Can be deployed in our infrastructure or used as a service
- Can be useful for other applications (docs.fp.o, pagure, ??)
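For illustration, using the Python bindings could look roughly like
this (a minimal sketch assuming the elasticsearch package; the index
name and document fields are made up, not an actual fedora-packages
schema):

    from elasticsearch import Elasticsearch

    # Connect to a node; in our infra this would point at the cluster.
    es = Elasticsearch(["http://localhost:9200"])

    # Index a package description...
    es.index(index="packages", id="kernel",
             body={"name": "kernel", "summary": "The Linux kernel"})
    es.indices.refresh(index="packages")

    # ...then run a full text query against it.
    result = es.search(index="packages",
                       body={"query": {"match": {"summary": "linux"}}})
    for hit in result["hits"]["hits"]:
        print(hit["_id"], hit["_score"])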
So what is the general feeling about using Elasticsearch in our
infrastructure? Should we look at deploying a cluster in our infra, or
should we approach the Council to see if we can get funding to have
this service hosted by Elastic?
Thanks
Clément
[0] - https://apps.fedoraproject.org/packages/
[1] - http://www.turbogears.org/
[2] - https://mokshaproject.github.io/mokshaproject.net/
[3] - https://github.com/fedora-infra/fedora-search
What are we going to do about sigul?
by Neal Gompa
Hey all,
So, we've got a bit of a problem. The sigul package is not installable
in Fedora 29, and pygpgme is half-broken in Fedora 28 and was retired
during Fedora 29 development due to constant breakage.
This means that sigul is in danger of being retired in Fedora.
Unfortunately, sigul is the only supported signer system for Koji at
the moment.
What do we want to do here? It's well-known that sigul does not work
with GnuPG 2, though I vaguely recall that some work was done to try
to fix this.
Do we want to port sigul to python3-gpg, switching Sigul to Python 3
and the official gpgme bindings so that it works with GnuPG 2?
Or do we want to adapt the bridge to work with obs-signd (which is
already used by Copr)?
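For what it's worth, detached signing through the official bindings is
only a few lines. A rough sketch with the gpg module from python3-gpg
(this assumes a default secret key is available; sigul's actual code
paths are of course more involved):

    import gpg

    # Produce an ASCII-armored detached signature via gpgme, which
    # works with GnuPG 2.
    ctx = gpg.Context(armor=True)
    signed_data, result = ctx.sign(
        b"some payload to sign",
        mode=gpg.constants.sig.mode.DETACH,
    )
    print(signed_data.decode("ascii"))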
--
真実はいつも一つ!/ Always, there's only one truth!
mirrorlist server with Python3
by Adrian Reber
The latest release of MirrorManager2 is still in updates-testing. I did
not want to push this version to updates-released because this is the
first Python 3 based release.
Could someone test the version from updates-testing? Update the
mirrorlist container and test it on one of the proxies?
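In case it helps, a quick smoke test could be as simple as the sketch
below (the staging URL and repo/arch parameters are just assumptions
for illustration; adjust to the proxy actually being tested):

    import urllib.request

    # Fetch a metalink from the mirrorlist and sanity-check the answer.
    url = ("https://mirrors.stg.fedoraproject.org/metalink"
           "?repo=fedora-29&arch=x86_64")
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
        assert resp.status == 200
        assert "<metalink" in body
    print("mirrorlist responded OK")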
Adrian
RFR: Message-Tagging-Service
by Chenxiong Qi
Hi all,
This mail is about a new micro-service called Message-Tagging-Service (aka
MTS). It serves to tag module builds triggered by specific MBS events.
More detailed information is provided in the RFR ticket [1].
MTS works through a series of predefined rules to decide whether a module
build should be tagged with one or more tags. There is a requirement from
module maintainers to ensure a module build is tagged into the correct
platforms to fulfill the dependencies declared in the module metadata.
Comment [2] describes a specific use case.
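To illustrate the idea only (a hypothetical sketch, not MTS's actual
rule format; the real schema lives in the MTS repository):

    # Each rule pairs match criteria against module build metadata and
    # yields a tag; a build can match several rules.
    RULES = [
        {"match": {"name": "javapackages-tools"},
         "tag": "f30-modular-updates-candidate"},
        {"match": {"platform": "f29"},
         "tag": "f29-modular-updates-candidate"},
    ]

    def tags_for(build):
        """Yield the tags whose criteria all match the build metadata."""
        for rule in RULES:
            if all(build.get(k) == v for k, v in rule["match"].items()):
                yield rule["tag"]

    print(list(tags_for({"name": "javapackages-tools", "platform": "f29"})))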
So far, MTS has been containerized and deployed internally. The image
is available from quay.io [3]. We would love to run MTS in Fedora as well,
in order to make it easier for module maintainers and rel-eng to manage
module build tags.
If anything is missing from this mail, please point it out. Questions
welcome! Thanks for your time.
[1] https://pagure.io/fedora-infrastructure/issue/7563
[2] https://pagure.io/fedora-infrastructure/issue/7563#comment-553774
[3] https://quay.io/repository/factory2/message-tagging-service
--
Regards,
Chenxiong Qi
External access to the AMQP broker
by Aurelien Bompard
Hey y'all,
Fedora Messaging, the replacement for fedmsg, is using AMQP and thus a
message broker. The current clusters we have deployed in staging and
prod are only accessible from inside our infrastructure.
There are two needs for an externally accessible broker:
- the CentOS folks, who are outside of our infrastructure, would like
to send messages
- people from the community would like to subscribe to messages and do
things based on them
We have several options to make that happen.
1. Use our existing cluster and expose it to the world
The advantage is that we don't maintain another cluster, but the downside
is that in the case of a DoS attack we're directly affected. With RabbitMQ
3.7 there are some limits [0] you can set on vhosts (max connections
and max queues), but we're not yet on 3.7.
[0] https://www.rabbitmq.com/vhosts.html#limits
2. Use a separate cluster and copy messages over
We could deploy a separate cluster that would get a copy of all
messages, and would be more limited in resources. It truly isolates
infrastructure, so it's better protected against DoS, but it's more
work for sysadmins.
In both cases, there are several paths we can take with regard to authentication.
A: make a single read-only account for everybody in the community to
use, and a few read-write accounts (with X509 certs) for people who
need to publish, i.e. CentOS CI. If we choose a separate broker, we can
copy those messages back to the main cluster.
The issue here is that everybody in the community will be using the
same account, so it's harder to shut down bad actors. It would also be
theoretically possible for someone to consume from somebody else's
queue (unless people make sure they use UUIDs in their queue names; I
think we can enforce that, but it may have side effects).
However, it enables the same kind of usage that fedmsg provided before.
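To make option A concrete, anonymous read-only consumption would look
roughly like this with pika (the host, credentials, exchange, and topic
below are assumptions for illustration, not a real endpoint):

    import uuid
    import pika

    params = pika.ConnectionParameters(
        host="rabbitmq.fedoraproject.org",
        credentials=pika.PlainCredentials("fedora", "fedora"),
    )
    channel = pika.BlockingConnection(params).channel()

    # A UUID-named queue avoids consuming from (or colliding with)
    # somebody else's queue, the risk noted above with a shared account.
    queue = str(uuid.uuid4())
    channel.queue_declare(queue=queue, exclusive=True)
    channel.queue_bind(queue=queue, exchange="amq.topic",
                       routing_key="org.fedoraproject.prod.#")

    def on_message(ch, method, properties, body):
        print(method.routing_key, body[:80])
        ch.basic_ack(method.delivery_tag)

    channel.basic_consume(queue=queue, on_message_callback=on_message)
    channel.start_consuming()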
B: require authentication with username & password, but make it easy to
get accounts. People could request accounts via tickets, for example.
It will make it much harder to abuse the service, and we could easily
shut down bad actors. However, it's an obviously heavier load on the
people who will handle the tickets and create the accounts.
My personal preference would be option 2A, i.e. a separate broker with
an anonymous read-only account, but each combination of options
inflicts a different load on the sysadmins (at deployment time and in
the longer term), so I think it's really up to them.
What do you guys think?
Thanks
Aurélien
cloud retirement
by Kevin Fenzi
Hey everyone.
As you know, we currently have an ancient RHOSP5 cloud. After a bunch of
work last year, we got a RHOSP13 cloud up and mostly working, but it was
a ton of work. After hearing from the Fedora Council and our various
management chains, we determined that it wouldn't really be a good use of
our time going forward to keep maintaining an OpenStack cloud.
We have not yet determined what we want to do with the hardware that we
had allocated to this, but we are weighing our options. We may want to
set up OpenShift on bare-metal nodes so we can do KubeVirt, or we may
want to just set up a normal virthost configuration managed by Ansible.
For the items currently in our cloud, we will be looking at options for
them; we are definitely not shutting things off until we have plans in
place.
Happy to answer any questions and will make sure everything is properly
migrated.
kevin
Future of sse2fedmsg
by Michal Konecny
Hi everybody,
I want to retire the sse2fedmsg [0] application. It is currently
deployed only on staging OpenShift as librariesio2fedmsg, and it looks
like the only application listening to it is Anitya.
Because I have implemented an SSE consumer directly in Anitya [1], and
it's really easy to do, I want to retire sse2fedmsg.
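For the record, an SSE consumer really is tiny. A minimal sketch
assuming the sseclient package (the firehose URL is from memory, see
the sse2fedmsg README for the one it actually used):

    import json
    from sseclient import SSEClient

    # Iterate over server-sent events from the libraries.io firehose.
    for event in SSEClient("https://firehose.libraries.io/events"):
        if event.data:
            payload = json.loads(event.data)
            print(payload.get("platform"), payload.get("name"))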
But before I do this, I want to ask: is anybody here actually using it,
or planning to use it for anything?
Regards,
mkonecny
[0] https://github.com/fedora-infra/sse2fedmsg
[1] https://github.com/release-monitoring/anitya/pull/746
Planned Outage - Fedora Build Services 2019-03-01 20:00 UTC
by Stephen John Smoogen
Planned Outage - Fedora Build Services 2019-03-01 20:00 UTC
There will be an outage starting at 2019-03-01 20:00 UTC,
which will last approximately 4-6 hours.
To convert UTC to your local time, take a look at
http://fedoraproject.org/wiki/Infrastructure/UTCHowto
or run:
date -d '2019-03-01 20:00UTC'
Reason for outage:
Fedora Infrastructure would like to update and reboot all QA and build
systems and services. This will update kernels, glibc, and systemd
plus many other services on the affected systems.
Affected Services:
All build and QA systems under fedoraproject.org will be affected.
Friday Hosts to Update/Reboot:
buildhw-01.phx2 -
buildhw-02.phx2 -
buildhw-03.phx2 -
buildhw-04.phx2 -
buildhw-05.phx2 -
buildhw-06.phx2 -
buildhw-07.phx2 -
buildhw-08.phx2 -
buildhw-09.phx2 -
buildhw-10.phx2 -
buildvmhost-01.phx2 -
buildvmhost-02.phx2 -
buildvmhost-03.phx2 -
buildvmhost-04.phx2 -
bvirthost01.phx2 -
bvirthost04.phx2 -
bvirthost05.phx2 -
bvirthost08.phx2 -
bvirthost12.phx2 -
bvirthost13.phx2 -
bvirthost14.phx2 -
bvirthost15.phx2 -
ppc8-01.ppc -
ppc8-02.ppc -
ppc8-03.ppc -
ppc8-04.ppc -
aarch64-c01n1.arm -
aarch64-c02n1.arm -
aarch64-c03n1.arm -
aarch64-c04n1.arm -
aarch64-c05n1.arm -
aarch64-c06n1.arm -
aarch64-c07n1.arm -
aarch64-c08n1.arm -
aarch64-c09n1.arm -
aarch64-c10n1.arm -
aarch64-c11n1.arm -
aarch64-c12n1.arm -
aarch64-c13n1.arm -
aarch64-c14n1.arm -
aarch64-c15n1.arm -
aarch64-c16n1.arm -
aarch64-c17n1.arm -
aarch64-c18n1.arm -
aarch64-c19n1.arm -
aarch64-c20n1.arm -
aarch64-c21n1.arm -
aarch64-c22n1.arm -
aarch64-c23n1.arm -
aarch64-c24n1.arm -
aarch64-c25n1.arm -
buildhw-aarch64-01.arm -
buildhw-aarch64-02.arm -
buildhw-aarch64-03.arm -
buildhw-aarch64-04.arm -
buildhw-aarch64-05.arm -
buildhw-aarch64-06.arm -
buildhw-aarch64-07.arm -
buildhw-aarch64-08.arm -
buildhw-aarch64-10.arm -
buildvm-s390x-01.s390 -
buildvm-s390x-01.stg.s390 -
buildvm-s390x-02.s390 -
buildvm-s390x-03.s390 -
buildvm-s390x-04.s390 -
buildvm-s390x-05.s390 -
buildvm-s390x-06.s390 -
buildvm-s390x-07.s390 -
buildvm-s390x-08.s390 -
buildvm-s390x-09.s390 -
buildvm-s390x-10.s390 -
buildvm-s390x-11.s390 -
buildvm-s390x-12.s390 -
buildvm-s390x-13.s390 -
buildvm-s390x-14.s390 -
sign-vault03.phx2 -
sign-vault04.phx2 -
sign-vault05.phx2 -
sign-vault06.phx2 -
bkernel01.phx2 -
bkernel02.phx2 -
bkernel03.phx2 -
bkernel04.phx2 -
qa05.qa -
qa07.qa -
qa09.qa -
qa10.qa -
qa11.qa -
qa12.qa -
qa13.qa -
qa14.qa -
virthost-comm01.qa -
virthost-comm03.qa -
virthost-comm04.qa -
aarch64-c26n1-oqa.arm -
aarch64-c27n1-oqa.arm -
aarch64-c28n1-oqa.arm -
aarch64-c29n1-oqa.arm -
aarch64-c30n1-oqa.arm -
kernel01.qa -
kernel02.qa -
retrace01.qa -
retrace02.qa -
Ticket Link:
https://pagure.io/fedora-infrastructure/issue/7602
Please join #fedora-admin or #fedora-noc on irc.freenode.net
or add comments to the ticket for this outage above.
--
Stephen J Smoogen.