Introduction
by Onalenna Junior Makhura
Hi,
I have been a member of this list for a few years now, but due to some
constraints I have not been able to participate actively. I would now
like to start being active in Fedora development. I have been using Fedora
as my first-choice OS since Fedora Core 6.
I would be very glad to have someone who can show me the ropes.
Regards,
Onalenna Junior Makhura
wiki testing in staging required (was Re: very old MediaWiki 1.16.5 used at https://fedoraproject.org/ site)
by Kevin Fenzi
On Thu, 26 Jul 2012 15:06:24 -0600
Kevin Fenzi <kevin(a)scrye.com> wrote:
> On Thu, 26 Jul 2012 12:45:50 +0300
> Gena Makhomed <gmm(a)csdoc.com> wrote:
>
> >
> > very old and buggy MediaWiki 1.16.5
> > still used at https://fedoraproject.org/ site
> >
> > now version 1.19.1 already available:
> >
> > https://www.mediawiki.org/wiki/MediaWiki
> >
> > Security MediaWiki 1.19.1 security release is now available.
>
> Yeah, we are aware. ;)
>
> We have been working on setting up 1.19 rpms... and they are now
> finally at a point where we can do some testing. Hopefully they will
> go out to our staging area soon and we can get them rolled into
> production before too long.
OK, I have mediawiki 1.19.1 set up in staging now.
Please test:
http://stg.fedoraproject.org/wiki/
The database conversion took about 45 minutes, but seems to have
gone fine. We ran into a theme issue, but athmane managed to fix
that right up.
I've tested:
* Login works.
* Editing pages works.
* Lockdown works: it won't let me edit Legal namespace pages.
* Pages load in general and look OK.
* Performance seems OK.
Please do check and make sure everything works as you expect.
If things look good, we can hopefully push this into production before
the next freeze.
thanks,
kevin
Re: Planet Fedora: OPML with missing info
by Kévin Raymond
On Thu, Jul 12, 2012 at 2:19 PM, Pedro Francisco
<pedrogfrancisco(a)gmail.com> wrote:
> Some Planets provide an opml.xml file that people can use to subscribe to
> everyone who publishes to the planet (for example, using Liferea,
> which periodically checks if the OPML has been updated and fetches it
> if so).
>
> The file exists on Planet Fedora but has missing URL info (
> http://planet.fedoraproject.org/opml.xml ).
>
> Does anyone know if this is on purpose?
>
>
> Thanks,
> --
> Pedro
Hi Pedro, the planet is infra-related.
Forwarding to them so they can see what's going on.
@infra, where is it in our arch? I might have access; not sure…
--
Kévin Raymond
(shaiton)
GPG-Key: A5BCB3A2
Meeting Agenda Item: Introduction Patrick Uiterwijk
by Patrick マルタインアンドレアス Uiterwijk
Hello everyone,
This is me saying "hello", as I would like to help with the infrastructure team.
It was my intention to say hi during the meeting today, but I noticed
the issue there, so here is an email instead.
I have some experience with managing servers, and quite a bit of
knowledge of developing tools and scripts.
Currently, I am packaging for Fedora and helping with the Freemedia program.
Hopefully I can help, and I would love to learn while doing so.
With kind regards,
Patrick Uiterwijk
Web application frameworks and the future
by Toshio Kuratomi
"""
In the beginning there was cgi. And everything was slow but simple. And
lo, one day we began to crave faster speeds, MVC, and other features that
plain cgi did not provide. And thus we entered the age of web
frameworks....
"""
At last week's infrastructure meeting, I brought up the fact that we seem to
have a proliferation of web application frameworks for the new apps that we
are creating. In some ways this is good as it lets us experiment with new
technologies as a group and lets us fit the needs of a specific application
or programmer's style with the framework. However, it has downsides as
well; mostly in the realm of ongoing maintenance of the apps. We need to
take a moment to figure out where we want to go with this.
== Some issues ==
* Retaining group knowledge of many different application frameworks
even when the original author stops being an active contributor
* Maintaining the packages in EPEL and Infrastructure for these
* Maintaining some knowledge of the frameworks' code and involvement with
their upstreams to fix bugs in the frameworks themselves.
* Deployment of multiple frameworks that may have conflicting deps.
* Deployment of multiple frameworks taking up more memory on the servers.
We think to some extent we currently have ways to manage the deployment
problems:
* Separate app servers for individual apps. As long as we have an inflow of
hardware resources we can continue to separate out applications onto
different machines instead of running them all on app*, as our first
generation of apps did. This would be an ongoing expense. We should
continue to allocate at least two servers to each application so that we
can do things like reboots and updates transparently to the users.
* Openshift. Hosting applications on a cloud service like openshift allows
us to separate out applications and parcel out memory as a resource
differently than if we're managing multiple apps on a single host.
While these factors do change the game as far as hardware allocation is
concerned, they don't help with our manpower. As we spin up more hosts
for each web application, we need sysadmin time to spin those hosts up. As
we deploy to openshift we need to figure out how we're going to integrate
configuration and deployment to those hosts into our existing puppet
configurations (I don't think that any of our current openshift deployed
services are puppet managed) and how we're going to manage load balancing
and failover.
== Where are we now? ==
.. note:: I would like this section to be an inventory of everything that
we're deploying and writing but I don't have a complete picture. If you
have more things, feel free to update this on the wiki page:
https://fedoraproject.org/wiki/Infrastructure_Services_Survey
TG1 => Turbogears1, SQLAlchemy and genshi/mako
Old TG1 => TurboGears1, SQLObject and kid
TG2 => TurboGears2
Pyramid => Current successor to TG2 but a break from the current TG1 style;
may have a new layer built on top of it at a later date that is
more TG-ish.
Flask => Easy to get started with and wrap your head around. Great for small
projects. Not a huge stack of deps.
Application       Host        Framework   Notes
-----------       ----        ---------   -----
bodhi             app*        old TG1     has a pyramid branch
bodhi             releng*     old TG1     has a pyramid branch
busmon            ?           TG2/moksha  Not yet deployed
copr(2)           ?           flask       Not yet deployed. Loosely,
                                          "buildsys for fedorapeople repos"
datagrepper       ?           flask?      Not yet deployed
dataviewer        ?           flask?      Not yet deployed
dpsearch          ?           perl/C      Not yet deployed; testing on
                                          search01-dev
elections         app*        TG1         has a TG2 branch and ianweller
                                          trying a flask branch for
                                          comparison
fas               fas*        TG1
fedorabadges      ?           pyramid     Not yet deployed
fedoracommunity   app07?      TG2/moksha  Only runs on RHEL5. We're
                                          retiring this pending on
                                          datanommer being deployed or we
                                          get tired of keeping app07. (Is
                                          the version of moksha here old
                                          as well?)
fedorahosted-reg  openshift?  flask       Not yet deployed
freemedia         app*        php         In Puppet. Looks like it would
                                          be very simple to port to
                                          something lightweight like
                                          Flask if we wanted to get away
                                          from PHP.
fudcon-reg        openshift   flask       Registration application for
                                          FUDCon. Not currently
                                          configured in puppet, load
                                          balanced, etc.
koji              koji*       custom      was mod_python; plans to move
                                          to mod_wsgi. (Current status?)
mirrorlist-server app*        custom      lightweight mod_wsgi process;
                                          no real framework
mirrormanager     app*        old TG1     has an older TG2 branch
packagedb         app*        TG1
packages          packages*   TG2
pager             app*, noc*  CGI
raffle            app*        TG2         Disposable -- no promises to
                                          keep maintaining have been made
smolt             value*      TG1         We're planning to get rid of
                                          this in favor of census on
                                          openshift. (Are we still
                                          running the process on app*
                                          even though it isn't actively
                                          serving pages?)
tagger            packages*   TG2
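For context on the "custom" rows above (mirrorlist-server, and koji's planned target): those are bare WSGI callables served by mod_wsgi rather than apps built on a framework. A generic sketch of what "no real framework" means in practice (illustrative only, not our actual code):

```python
# A bare WSGI application: no framework, just the callable that
# mod_wsgi (or any WSGI server) invokes once per request.
def application(environ, start_response):
    body = b"Hello from a frameworkless WSGI app\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Under mod_wsgi you point WSGIScriptAlias at a file containing a callable named `application` and that's the whole deployment story, which is part of why these apps are so cheap to run.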
We deploy but do not code for:
Application       Host        Framework   Notes
-----------       ----        ---------   -----
askbot            ask*        django      Uses openid login
darkserver        darkserver  django
insight           insight*    drupal/php  I'm not sure of the level of
                                          coding that we do on this.
gitweb(-caching)  pkgs*,      cgi?        thinking of replacing with cgit
                  hosted*
hg?               hosted*     cgi?
loggerhead        hosted*     mod_wsgi
mailman webui     hosted*,    python cgi  mailman web frontend for
                  collab*                 lists.fp.o and lists.fh.o
mediawiki         app*        php
reviewboard       hosted*     django      we've talked about moving this
                                          to openshift and/or app servers
trac              hosted*     mod_wsgi    genshi templates
Deployed but only for our sysadmins: collectd, nagios, awstats
== Some analysis ==
Right now we're deploying against the following frameworks for applications
in our critical path:
* TG1
* mod_wsgi/mod_python
We also have a few additional applications that are not currently critical
to creating Fedora but are value adds that we've worked hard on. These
applications are written against
* TG1
* TG2
* flask
The new applications that we're writing seem to be written against:
* TG2
* flask
* pyramid
== Some thoughts ==
=== Openshift ===
Although openshift is attractive from a hardware-provisioning perspective,
we haven't figured out how to manage configs for it for any of our currently
deployed services. So, for instance, if there was evidence that one of our
openshift instances had been compromised we wouldn't have the benefit of
configs checked into puppet to refer to and to help us reconstruct that
instance. We probably also don't have these hosts as part of our backups
(don't know if openshift manages backups for us). We should figure out
disaster recovery for these hosts before we go too much further here.
We also don't currently have any openshift hosts working in a load balanced
fashion so, for instance, doing an update of an app could require user
visible downtime.
If we're going to use openshift for deploying production apps, we should
come up with answers for these tasks.
=== Getting rid of TG1 ===
At some point I want to get rid of the TG1 stack. Upstream is in
maintenance-only mode for it. And increasingly, they are moving to the
somewhat incompatible TG-1.5.x stack for their maintenance while
simultaneously pushing people to write their apps for TG2 or pyramid. While
TG1.1 "just works" for us right now, we're eventually going to run up against
things that upstream isn't handling (whether bugfixes in the TG-1.1.x
branch, security fixes, or porting of the stack to new versions of dependent
libraries). While the maintenance burden of the TG1.1 stack is low at this
time, it's just going to get higher over time.
In order to port away from the TG1 stack, I want to figure out what we
should be porting to. Last year we thought that should be TG2 because
moksha was intrinsically linked to TG2 and we were deploying on
fedoracommunity which needed moksha. Now, neither of those is true.
(moksha can now run on other frameworks besides TG2. fedoracommunity is
going away in the future.) However, there's no clear successor.
=== Plethora of frameworks ===
We're writing and deploying apps written against an ever expanding number of
frameworks. I am a bit afraid of this. While it is nice to know that we
have exactly the right tool for the job among the many choices of framework,
I think that maintaining apps written in a variety of frameworks is going to
cause us pain as frameworks die off or change radically and current
contributors move on to other things. With that in mind I think we should
commit to using only a few frameworks in our coding for infrastructure and
those frameworks will serve to be where we concentrate on gathering our
experience, what we write new apps against, what we design our
infrastructure to support, and what we port our apps to as time goes on.
From browsing the list of frameworks we're currently deploying:
Django has a good track record of making new releases with clear porting
guides for updating your old code to run on the new versions.
However, it is conceptually something of an application server (like JBoss),
not a pure framework like TurboGears. At the least, this would require some
thought on our part on how to deploy and code for it.
Flask seems to be lighter weight in terms of its deps and in terms of its
learning curve. It's pretty easy to run a flask app in openshift. If we
were to choose just two frameworks, it might make sense to choose flask as
an entry level framework for smaller applications and one other framework
with lots of bells and whistles for things that need those features.
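For a sense of that learning curve: a complete flask application really is only a handful of lines. This is a generic hello-world sketch, not one of our apps:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Flask maps the URL rule in the decorator to this view function.
    return "Hello from a minimal flask app"

if __name__ == "__main__":
    # Development server only; under mod_wsgi or openshift you expose
    # `app` as the WSGI callable instead of calling run().
    app.run()
```

Compare that with the project scaffolding a full TG1/TG2 quickstart generates, and it's easy to see why flask keeps coming up for the smaller apps.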
TurboGears2 is still developed upstream. Some of the main developers have
moved on to work on pyramid but others are continuing to work on TG2.
Upstream has committed to doing the necessary work to port TG2 to python3
but much of the TG2 underlying stack is in maintenance mode so the TG2 devs
have had to do some of that work themselves.
Pyramid is a merging of certain segments of the zope community and the
pylons community. If pylons has a successor, this is it. Since TG2 was
built on pylons, pyramid might be the next logical step (or a web framework
built on top of pyramid).
== Final thoughts ==
My primary goal is to decide what framework to port our old TG1 code to so
that we can stop maintaining the TG1 stack before upstream stops working on
it at all. My secondary concern is that we stop growing the other stacks
that we're maintaining and concentrate on one or two, which will make
maintenance easier. Can we choose two frameworks right now that will suit
our needs? It seems that flask can serve a niche and maybe should be one of
them. What should our bells and whistles framework be? TG2 or pyramid or
something else entirely?
-Toshio
Plan for tomorrow's Fedora Infrastructure meeting (2012-07-26)
by Kevin Fenzi
The infrastructure team will be having its weekly meeting tomorrow,
2012-07-26 at 18:00 UTC in #fedora-meeting on the freenode network.
Note that I will be gone for this meeting and the one following.
Smooge will run the meeting.
Suggested topics:
#topic New folks introductions and Apprentice tasks.
If any new folks want to give a quick one line bio or any apprentices
would like to ask general questions, they can do so here.
#topic Applications status / discussion
Check in on status of our applications: pkgdb, fas, bodhi, koji,
community, voting, tagger, packager, dpsearch, etc.
If there's new releases, bugs we need to work around or things to note.
#topic Sysadmin status / discussion
Can note or talk about sysadmin related tasks or items that happened in
the past week or are going to happen.
#topic Upcoming Tasks/Items
#info 2012-07-30 to 2012-08-03 PHX2 trip for smooge
#info 2012-07-31 21UTC to 01UTC outage window
#info 2012-08-01 nag fi-apprentices
#info 2012-08-01 gitweb to cgit migration
#info 2012-08-03 hosted03 -> hosted01/02 migration (tentative)
#info 2012-08-07 to 2012-08-21 F18 Alpha Freeze
#info 2012-08-08 drop inactive apprentices.
#info 2012-08-21 F18 Alpha release.
#info 2012-08-31 end of 2nd quarter
#info 2012-09-11 to 2012-09-25 F18 Beta Freeze
#info 2012-09-25 F18 Beta release
#topic Open Floor
Submit your agenda items as tickets in the trac instance and send a
note replying to this thread.
More info here:
https://fedoraproject.org/wiki/Infrastructure/Meetings#Meetings
Thanks
kevin
Python hash seed randomization enabled in staging
by Luke Macken
I just pushed out a hotfix to enable Python hash seed randomization on
all of our mod_wsgi applications in staging. The implementation details
of the change can be found in this ticket:
https://fedorahosted.org/fedora-infrastructure/ticket/3169
I am hopeful that this won't break anything, and we'll be able to push
it to production ASAP (ideally before the alpha freeze in 3 weeks).
However, on the off chance that we have any apps that rely on dict
ordering, it may cause problems.
I'll do my best to poke at our various apps in staging to ensure they
function as expected, but if you have some spare cycles and want to help
test, it would be much appreciated.
I just confirmed that it is enabled for bodhi on app01.stg, but if you
would like to verify that your application has hash seed randomization
enabled, you can simply 'import sys' and make sure
sys.flags.hash_randomization is 1.
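For anyone helping to test, the check looks like this (run it with the same interpreter and environment the app uses, since the flag is set per process via PYTHONHASHSEED or the -R option):

```python
import sys

# sys.flags.hash_randomization is 1 when the interpreter was started
# with hash seed randomization enabled, 0 otherwise.
if sys.flags.hash_randomization:
    print("hash seed randomization is enabled")
else:
    print("hash seed randomization is DISABLED")
```

Checking from inside the running wsgi app (e.g. via a debug page or log line) is more reliable than checking from a shell, since the environment may differ.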
luke
builders of the future!!!!!
by Seth Vidal
The discussion on devel list about ARM and my work last week on
reinstalling builders quickly and commonly has raised a number of
issues with how we manage our builders and how we should manage them in
the future.
It is apparent that if we add arm builders, they will be lots of
physical systems (probably in a very small space) but physical
nonetheless. So we need a sensible way to manage and reinstall these
hosts commonly and quickly.
Additionally, we need to consider what the introduction of a largish
number of arm builders (and other arm infrastructure) would do to our
existing puppet setup: specifically, it would overload puppet pretty
badly and make it not very manageable.
I'm making certain assumptions here and I'd like to be clear about what
those are:
1. the builders need to be kept pristine
2. that currently our builders are not freshly installed frequently
enough.
3. that the builders are relatively static in their
configuration and most changes are done with pkg additions
4. that builder setups require at least two manual-ish steps of a koji
admin who can disable/enable/register the builder with the kojihub.
5. that the builders are fairly different networking and setup-wise to
the rest of our systems.
So I am proposing that we consider the following as a general process
for maintaining our builders:
1. disable the builder in koji
2. make sure all jobs are finished
3. add installer entries into grub (or run the undefine, reinstall
process if the builder is virt-based)
4. reinstall the system
5. monitor for ssh to return
6. connect in and force our post-install configuration: identification,
network, mount-point setup, ssl certs/keys for koji, etc
7. reboot
8. re-enable host in koji
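To make the automation concrete, the eight steps could be driven by something like the sketch below. The koji disable-host/enable-host subcommands are real; the other commands (wait-for-idle, reinstall-builder, wait-for-ssh, post-configure) are hypothetical placeholders for tooling we would still have to write:

```python
def recycle_plan(host):
    """Return the ordered shell commands to recycle one builder.

    Only the koji disable-host/enable-host subcommands are real; the
    rest are hypothetical placeholders (grub installer entries or virsh
    undefine/reinstall, ssh polling, post-install configuration).
    """
    return [
        ["koji", "disable-host", host],   # 1. disable the builder in koji
        ["wait-for-idle", host],          # 2. let running jobs finish
        ["reinstall-builder", host],      # 3-4. grub entry or virt reinstall
        ["wait-for-ssh", host],           # 5. monitor for ssh to return
        ["post-configure", host],         # 6. identity, network, mounts,
                                          #    koji ssl certs/keys
        ["ssh", host, "reboot"],          # 7. reboot
        ["koji", "enable-host", host],    # 8. re-enable the host in koji
    ]

def rolling_schedule(hosts, fraction=10):
    """Yield batches so roughly 1/fraction of the builders recycle at once."""
    batch = max(1, len(hosts) // fraction)
    for i in range(0, len(hosts), batch):
        yield hosts[i:i + batch]
```

A small driver that walks `recycle_plan()` for each batch from `rolling_schedule()` would give us the "1/10th of the boxes at any given moment" rotation without any puppet involvement.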
We would do this with frequency and regularity, perhaps even having
some percentage of our builders doing this at all times. I.e., with
1/10th of the boxes reinstalling at any given moment, all of them get
reinstalled within ten such windows.
Additionally, this would mean these systems would NOT have a puppet
management piece at all. Package updates would still be handled
by pushes as we do now, if things were security critical, but barring
the need for significant changes we could rely on the boxes simply being
refreshed frequently enough that it wouldn't need to be pushed.
What do folks think about this idea? It would dramatically reduce the
node entries in our puppet config, it would drop the number of hosts
connecting to puppet, too. It will mean more systems being reinstalled
and more often. It will also require some work to make the steps I
mention above be automated. I think I can achieve that without too much
difficulty, actually. I think, in general, it will increase our ability
to scale up to more and more builders.
I'd like input, constructive, please.
Thanks,
-sv
Meeting agenda item: Introduction Unai Ruiz Tens (again)
by Unai Ruiz Tens
Hi everybody,
Last year I joined the Fedora Infrastructure team, but I couldn't
contribute as much as I wanted due to my job. Now things have changed,
and hopefully I will have more time to help as much as possible.
My name is Unai Ruiz (tensov at IRC channels). I've been a Linux user
for about 15 years and been working professionally with it for 8
years. Currently I work for my region's public administration,
designing highly available, high-performance infrastructure for web
applications, mainly J2EE and PHP applications deployed on Red Hat
systems. Over the years I have gained experience with Red Hat (among
other distros), MySQL, Apache httpd, Tomcat, Nagios ... I can write
bash and python scripts. I think it's time to give back to the
community, and I thought nothing would be better than the Fedora
Project :) Anyway, I'm sure I will be getting more from this than I
will be giving. I have little experience with Puppet or with systems
that are spread around the globe, so I'm sure this will be a thrilling
experience!
I have read the "Getting started" guide and I will introduce myself in
the next IRC meeting so I can join the apprentice group and begin with
some soft stuff.
Thank you all!
Plan for tomorrow's Fedora Infrastructure meeting (2012-07-18)
by Kevin Fenzi
The infrastructure team will be having its weekly meeting tomorrow,
2012-07-18 at 18:00 UTC in #fedora-meeting on the freenode network.
Note that I will be gone for this meeting and the one following.
Smooge will run the meeting.
Suggested topics:
#topic New folks introductions and Apprentice tasks.
If any new folks want to give a quick one line bio or any apprentices
would like to ask general questions, they can do so here.
#topic Applications status / discussion
Check in on status of our applications: pkgdb, fas, bodhi, koji,
community, voting, tagger, packager, dpsearch, etc.
If there's new releases, bugs we need to work around or things to note.
#topic Sysadmin status / discussion
Can note or talk about sysadmin related tasks or items that happened in
the past week or are going to happen.
#topic FAD ?
https://fedoraproject.org/wiki/FAD_Infrastructure_Security_2012
#topic cgit and gitweb-caching retirement
#topic Upcoming Tasks/Items
#info 2012-07-19 migration of last redhat.com lists (smooge)
#info 2012-07-19 migration of lists.fedorahosted.org (smooge)
#info 2012-07-30 to 2012-08-03 PHX2 trip for smooge (tentative)
#info 2012-08-01 nag fi-apprentices
#info 2012-08-07 to 2012-08-21 F18 Alpha Freeze
#info 2012-08-08 drop inactive apprentices.
#info 2012-08-21 F18 Alpha release.
#info 2012-08-31 end of 2nd quarter
#info 2012-09-11 to 2012-09-25 F18 Beta Freeze
#info 2012-09-25 F18 Beta release
#topic Open Floor
Submit your agenda items as tickets in the trac instance and send a
note replying to this thread.
More info here:
https://fedoraproject.org/wiki/Infrastructure/Meetings#Meetings
Thanks
kevin