Change request: add dist-f10-build as a static repo
by Jesse Keating
When we created the dist-f10 build targets I forgot to create a
corresponding static repo for it. This diff will create one and allow
people who are working on F10 to populate buildroots from the F10
content and match koji.
RCS file: /cvs/puppet/configs/build/update-static-repos.py,v
retrieving revision 1.4
diff -u -r1.4 update-static-repos.py
--- build/update-static-repos.py 14 Mar 2008 03:17:42 -0000 1.4
+++ build/update-static-repos.py 12 May 2008 14:20:38 -0000
@@ -4,7 +4,7 @@
 import sys
 import koji

-TAGS = ('dist-olpc2-build', 'dist-fc7-build', 'dist-f8-build', 'dist-f9-build', 'dist-rawhide', 'olpc2-trial3', 'olpc2-update1', 'olpc2-ship2')
+TAGS = ('dist-olpc2-build', 'dist-fc7-build', 'dist-f8-build', 'dist-f9-build', 'dist-f10-build', 'dist-rawhide', 'olpc2-trial3', 'olpc2-update1', 'olpc2-ship2')

 STATICPATH = '/mnt/koji/static-repos'
 SUFFIX = '-current'
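For the curious, here is a rough sketch of what update-static-repos.py
presumably does with these constants. This is a hypothetical
reconstruction, not the real script; the hub URL, koji topdir, and
symlink layout are my assumptions:

    import os
    import koji

    TAGS = ('dist-olpc2-build', 'dist-fc7-build', 'dist-f8-build',
            'dist-f9-build', 'dist-f10-build', 'dist-rawhide',
            'olpc2-trial3', 'olpc2-update1', 'olpc2-ship2')
    STATICPATH = '/mnt/koji/static-repos'
    SUFFIX = '-current'

    session = koji.ClientSession('http://koji.fedoraproject.org/kojihub')
    pathinfo = koji.PathInfo('/mnt/koji')

    for tag in TAGS:
        repo = session.getRepo(tag)  # newest usable repo for this tag
        if not repo:
            continue
        target = pathinfo.repo(repo['id'], tag)
        link = os.path.join(STATICPATH, tag + SUFFIX)
        # repoint the <tag>-current symlink without a window where
        # it is missing
        tmp = link + '.new'
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(target, tmp)
        os.rename(tmp, link)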
--
Jesse Keating
Fedora -- Freedom² is a feature!
[Fwd: Cron <postgres@db2> /var/lib/pgsql/vacstat.py check]
by Toshio Kuratomi
I'd like to take care of this as soon after the change freeze as possible.
It's the same as last time:
Disable the pieces of the cron scripts which vacuum koji's dbs in puppet
cvs commit; make install; make push HOSTS=db2
ssh db2
screen
time sudo -u postgres vacuumdb -vd koji
[wait a few hours for this to finish, during which koji is somewhat more
sluggish, but there's no actual outage]
Re-enable the pieces of the cron scripts which vacuum koji; commit,
install, push.
Done.
Anyone object to my doing it during the day on Wednesday, May 14th?
-Toshio
-------- Original Message --------
Subject: Cron <postgres@db2> /var/lib/pgsql/vacstat.py check
Date: Fri, 9 May 2008 13:20:07 GMT
From: root@db2.fedora.phx.redhat.com (Cron Daemon)
To: postgres@db2.fedora.phx.redhat.com
Traceback (most recent call last):
File "/var/lib/pgsql/vacstat.py", line 650, in ?
Commands[command](opts)
File "/var/lib/pgsql/vacstat.py", line 150, in test_all
test_transactions(opts)
File "/var/lib/pgsql/vacstat.py", line 147, in test_transactions
raise XIDOverflowWarning, '\n'.join(overflows)
__main__.XIDOverflowWarning: Used over half the transaction ids for
koji. Please schedule a vacuum of that entire database soon:
sudo -u postgres vacuumdb -zvd koji
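For reference, the check vacstat.py performs boils down to PostgreSQL's
age(datfrozenxid), which counts the transaction ids consumed since a
database was last frozen; wraparound hits at roughly 2**31, hence the
warning at the halfway mark. A minimal sketch of that kind of check,
using psycopg2 as an assumed driver (the real script may differ):

    import psycopg2

    HALF_XID_SPACE = 2 ** 30  # half of the ~2**31 usable transaction ids

    conn = psycopg2.connect('dbname=postgres user=postgres')
    cur = conn.cursor()
    cur.execute('SELECT datname, age(datfrozenxid) FROM pg_database')
    overflows = []
    for dbname, xid_age in cur.fetchall():
        if xid_age > HALF_XID_SPACE:
            overflows.append('Used over half the transaction ids for %s. '
                             'Please schedule a vacuum of that entire '
                             'database soon:\n'
                             '  sudo -u postgres vacuumdb -zvd %s'
                             % (dbname, dbname))
    if overflows:
        raise RuntimeError('\n'.join(overflows))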
mod_wsgi vs cherrypy
by Mike McGrath
As many of you have seen, we've started getting timeouts with fasClient
against the accounts system. After some probing I decided to look at
alternate deployment methods. Bottom line... python threading blows.
So here's the scoop. Right now we use cherrypy + supervisor to deploy
each turbogears app. Each app gets its own port.
The WSGI setup still relies on cherrypy but works in a different way:
mod_wsgi is an apache plugin, and instead of every app getting its own
port, each app gets its own apache namespace (like /accounts/).
mod_wsgi can be set up to deploy more than one process at a time and
spread requests across them. Straight tg + cherrypy cannot do this unless
you have two instances listening on different ports and a load balancer
in front sending to both ports.
So I ran some tests (attached and at
http://mmcgrath.fedorapeople.org/wsgivscherrypy1.png). After proper tuning
for the machine, mod_wsgi was not only a little faster (20 seconds faster
at the extreme end), it was also considerably more reliable and scaled
predictably. Straight cherrypy would reliably die on me at around 40
concurrent requests. Those requests would complete, but would sometimes
time out or take longer than the client was willing to wait. I think this
is contributing to what's going on in our environment. The test in
question was /accounts/group/list (it's not a quick/small request).
There are other code changes on the way, but I think mod_wsgi is a win
for us in this instance. After the freeze I'd like to deploy it on the
fas boxes and see how things go. We should then talk about our other
deployments. I like supervisord, but it may be better for us in the long
run to use apache straight up. We also get some other niceties, like
being able to more easily serve static content, and all the
mod_headers/mod_rewrite stuff that comes with apache.
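For the curious, the .wsgi glue file is tiny. Here's a minimal sketch of
what a TurboGears 1.x app looks like under mod_wsgi -- the paths and
module names below are assumptions, not our deployed file -- and on the
apache side it gets wired up with something like "WSGIDaemonProcess fas
processes=2" plus "WSGIScriptAlias /accounts /path/to/fas.wsgi":

    # fas.wsgi -- hypothetical sketch, not the deployed file
    import sys
    sys.stdout = sys.stderr  # mod_wsgi disallows writing to stdout

    import pkg_resources
    pkg_resources.require('TurboGears')

    import cherrypy
    cherrypy.lowercase_api = True
    import cherrypy._cpwsgi
    import turbogears

    # point at the deployed config rather than the dev config
    turbogears.update_config(configfile='/etc/fas.cfg',
                             modulename='fas.config')
    turbogears.config.update({'global': {
        'server.environment': 'production',
        'autoreload.on': False}})

    from fas.controllers import Root
    cherrypy.root = Root()

    # initialize cherrypy without letting it open its own socket;
    # apache owns the listeners
    cherrypy.server.start(init_only=True, server_class=None)

    # mod_wsgi looks for a callable named "application"
    application = cherrypy._cpwsgi.wsgiApp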
Thoughts, questions, comments?
-Mike
Cron <root@ppc1> /bin/sleep $(($RANDOM/90)); /usr/bin/fasClient -i (fwd)
by Mike McGrath
Ok, we know what's wrong, and we even know how to fix it. It's not worth
the risk right now... mind if I > /dev/null this cron job in the
meantime?
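(Concretely, the crontab entry would become something like:
/bin/sleep $(($RANDOM/90)); /usr/bin/fasClient -i > /dev/null 2>&1
so the 502s stop mailing root until the real fix lands.)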
+1?
-Mike
---------- Forwarded message ----------
Date: Fri, 9 May 2008 00:33:08 GMT
From: Cron Daemon <root@ppc1.fedora.phx.redhat.com>
To: root@ppc1.fedora.phx.redhat.com
Subject: Cron <root@ppc1> /bin/sleep $(($RANDOM/90)); /usr/bin/fasClient -i
HTTP Error 502: Proxy Error
Traceback (most recent call last):
File "/usr/bin/fasClient", line 561, in ?
fas.make_group_db()
File "/usr/bin/fasClient", line 396, in make_group_db
self.groups_text()
File "/usr/bin/fasClient", line 333, in groups_text
self.people_list()
File "/usr/bin/fasClient", line 388, in people_list
self.people = self.send_request('user/list', auth=True, input=params)['people']
File "/usr/lib/python2.4/site-packages/fedora/tg/client.py", line 211, in send_request
raise ServerError, str(e)
fedora.tg.client.ServerError: HTTP Error 502: Proxy Error
Meeting Log - 2008-05-08
by Ricky Zhou
16:01 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Who's here?
16:01 * lmacken
16:01 -!- TheorEPhysicist [n=jfghlynx@213.37.199.3.dyn.user.ono.com] has joined #fedora-meeting
16:01 < skvidal> I still miss the days when warren and jeremy would fall off
16:01 < TheorEPhysicist> Hello, I have a question on Fedora, could someone help me?
16:01 * skvidal is here
16:01 < lmacken> skvidal: me too :)
16:01 * jeremy kicks skvidal
16:01 -!- mccann [n=jmccann@nat/redhat-us/x-5e0d390b6441be74] has joined #fedora-meeting
16:01 < skvidal> TheorEPhysicist: ask in #fedora
16:02 < TheorEPhysicist> skvidal?a theoretical physicist?
16:02 < warren> I can still fall off for you.
16:02 < skvidal> TheorEPhysicist: you have a fedora question for a theoretical physicist?
16:02 * skvidal looks at spoleeba
16:02 < mmcgrath> Alllrighty, lets get this party started.
16:03 < spoleeba> skvidal, im not a theorist
16:03 < TheorEPhysicist> no, so, I should move to annother channel?
16:03 < spoleeba> skvidal, in theory....im an experimentalist
16:03 < skvidal> TheorEPhysicist: yah
16:03 < skvidal> TheorEPhysicist: this channel is for meetings
16:03 < skvidal> not for general questions
16:03 < TheorEPhysicist> OK, thanks!!!!
16:03 < skvidal> TheorEPhysicist: try #fedora and/or ask for spoleeba - he's a physicist of sorts
16:04 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- F9 Release.
16:04 < mmcgrath> .tiny https://fedorahosted.org/fedora-infrastructure/query?status=new&status=as...
16:04 < zodbot> mmcgrath: http://tinyurl.com/25vzyu
16:04 < mmcgrath> so..
16:04 < mmcgrath> .ticket 421
16:04 < zodbot> mmcgrath: #421 (Fedora Mirror Space) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/421
16:04 < mmcgrath> skvidal: how's archive look?
16:04 < skvidal> running fine
16:04 -!- notting [n=notting@redhat/notting] has joined #fedora-meeting
16:04 < mmcgrath> how much is copied over there?
16:04 < skvidal> fc1 is on there
16:04 < skvidal> nothing else
16:04 < skvidal> b/c I can't get to the bits :)
16:05 < skvidal> f13 asked to wait until f9 is out b/c of bandwidth limits
16:05 < skvidal> only problem with archive right now is that it appears bu is blocking 873 at the router
16:05 < mmcgrath> huh? really?
16:05 < mmcgrath> ah
16:05 < skvidal> but it is up
16:05 < skvidal> it is archive.fp.o and archives.fp.o
16:05 < skvidal> it just needs more bits
16:05 < mmcgrath> we'll want to get rid of one of those and decide which one is canonical.
16:06 -!- TheorEPhysicist [n=jfghlynx@213.37.199.3.dyn.user.ono.com] has left #fedora-meeting []
16:06 < skvidal> we do?
16:06 < mmcgrath> yeah.
16:06 < skvidal> why not just leave both - they're just an alias
16:06 < mmcgrath> its confusing.
16:06 < skvidal> to whom?
16:06 < mmcgrath> and I've regreted stuff like that in the past.
16:06 < skvidal> okay
16:06 < mmcgrath> less is more :)
16:06 < skvidal> let me know which one you pick, I seriously have no preference
16:07 < mmcgrath> I'd say archive. since its not downloads.fedoraproject.org
16:07 < mmcgrath> So right now there's 161G free on /pub
16:07 < skvidal> umm
16:08 < skvidal> what?
16:08 < skvidal> on pub of archive?
16:08 < skvidal> or pub of d.fp.o?
16:08 < mmcgrath> pub of the actual primary mirror.
16:08 < skvidal> ah
16:08 < skvidal> sorry
16:08 < skvidal> okay
16:08 -!- Gaaruto [n=Gaaruto@atm91-2-82-241-141-128.fbx.proxad.net] has quit "++"
16:08 < skvidal> mmcgrath: should I wait until post-f9 to drop the other name or just do it now?
16:09 < mmcgrath> It can wait
16:09 < mmcgrath> afaik
16:09 < mmcgrath> .421 is fine for space.
16:09 < mmcgrath> f13: would you mind closing that ticket?
16:09 < mmcgrath> .ticket 421
16:09 < zodbot> mmcgrath: #421 (Fedora Mirror Space) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/421
16:09 < f13> oh yeah, I was going to get you a final bit count
16:09 < mmcgrath> if we need to we still have plenty of time to free up space on that box.
16:09 < f13> 114G /pub/fedora/linux/releases/9/
16:10 < mmcgrath> <nod>
16:10 < mmcgrath> Ok, we'll move on to the next bit
16:10 < mmcgrath> .ticket 526
16:10 < zodbot> mmcgrath: #526 (Torrent Prep) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/526
16:11 < mmcgrath> f13: is this something you're going to do or just someone in rel-eng? (or should someone from infra do it?)
16:11 < f13> debuginfo /huge/
16:11 < mmcgrath> its still group assigned.
16:11 < f13> mmcgrath: seth has done it in the past, as has skvidal
16:11 < skvidal> haha
16:11 < mmcgrath> skvidal: you up for that again this release?
16:11 < f13> leave it group assigned, I imagine we'll start uploading it tomorrow.
16:11 < f13> er whoops
16:11 < skvidal> the torrent?
16:11 < mmcgrath> k
16:11 < skvidal> sure
16:11 < f13> seth as has jeremy
16:11 < skvidal> f13: I liked the idea that I have done it as has me
16:11 < f13> I don't mind doing it, I should have clear decks tomorrow
16:11 < mmcgrath> I'm fine with either, I just wanted to make sure someone knows to do it if not I will.
16:12 < skvidal> umm
16:12 < f13> and could use an easy target.
16:12 < skvidal> when would that likely happen?
16:12 < skvidal> b/c I'm going to be visiting my mom this weekend
16:12 < skvidal> and I may or may not be close to a computer
16:12 < f13> skvidal: I was going to start around 10am EST
16:12 < f13> with the scps
16:12 < skvidal> which day?
16:12 < skvidal> tomorrow?
16:12 < skvidal> f13: oh can I ask another question?
16:12 < skvidal> f13: export bits?
16:13 < skvidal> do we need to set that up?
16:13 -!- couf [n=bart@fedora/couf] has quit "leaving"
16:13 -!- fugolini [n=francesc@87.13.178.128] has left #fedora-meeting []
16:13 < f13> skvidal: yes, tomorrow.
16:13 < skvidal> okie doke
16:13 < dgilmore> crap im here
16:13 < f13> RE Export bits, we can give them the projected final url for filing
16:13 < skvidal> okie doke
16:13 < f13> I'll need to do that tomorrow, is there a ticket for this?
16:13 < f13> (and is it assigned to me?)
16:14 < mmcgrath> f13: export bits? No, is that just the "send the URl to legal for export compliance" thing?
16:15 < f13> mmcgrath: yes.
16:15 < mmcgrath> f13: I'll create a ticket right after the meeting if you haven't already and make sure it gets added to the SOP.
16:15 -!- rdieter is now known as rdieter_away
16:16 < f13> thanks
16:16 < mmcgrath> f13: so do you want me to leave
16:16 < mmcgrath> .ticket 526
16:16 < zodbot> mmcgrath: #526 (Torrent Prep) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/526
16:16 < mmcgrath> assigned to rel-eng?
16:16 -!- tibbs [n=tibbs@fedora/tibbs] has quit "Konversation terminated!"
16:17 < f13> mmcgrath: please.
16:17 < mmcgrath> k
16:17 < mmcgrath> .ticket 528
16:17 < zodbot> mmcgrath: #528 (Release Day Links) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/528
16:17 < mmcgrath> We had a meeting about this yesterday, we'll make sure they're all on a wiki page somewhere and I'm going to spend some time on a script that actually creates static copies of this.
16:17 < mmcgrath> .ticket 389
16:17 < zodbot> mmcgrath: #389 (Monitor primary mirror) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/389
16:18 < mmcgrath> I don't think 389 is actually going to get done for F9, its not a blocker though so not a big deal.
16:18 < mmcgrath> .54 got moved until just after the change freeze for stability concerns.
16:18 < mmcgrath> .280
16:18 < mmcgrath> .ticket 280
16:18 < zodbot> mmcgrath: #280 (DHCP Server off of lockbox) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/280
16:18 < mmcgrath> is done, I'll close that now.
16:18 < mmcgrath> .ticket 333
16:18 < zodbot> mmcgrath: #333 (Add spam headers to bastion (smtp)) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/333
16:19 < mmcgrath> is blocking on 54 and is also a just after F9 launches thing.
16:19 < mmcgrath> .ticket 411
16:19 < zodbot> mmcgrath: #411 (New Website Fedora - 9) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/411
16:19 < mmcgrath> This is in good shape, we won't close it until its actually live though.
16:19 < mmcgrath> .ticket 416
16:19 < zodbot> mmcgrath: #416 (Infrastructure Change Freeze) - Fedora Infrastructure - Trac - https://fedorahosted.org/projects/fedora-infrastructure/ticket/416
16:19 < mmcgrath> This has been fine.
16:20 < mmcgrath> f13: I saw some murmurings in #fedora-devel this morning about cd's, what was the final outcome of that?
16:20 < notting> we are respinning cd images as we speak
16:21 < mmcgrath> notting: so long story short, thats not as big of a deal as it first seemed?
16:21 < f13> mmcgrath: x86_64 and ppc split CDs are respinning, and will be re-uploaded
16:21 -!- GeroldKa [n=GeroldKa@fedora/geroldka] has joined #fedora-meeting
16:21 < f13> mmcgrath: it's sitll a pretty big deal, and will cause a 0-day anaconda update to match what we hacked together here.
16:21 < mmcgrath> but as of right now we're still on schedule to release on the 13th?
16:22 < f13> yes
16:22 < mmcgrath> solid.
16:22 < mmcgrath> So thats really all I had related to the release.
16:22 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Open Floor (release stuff only please)
16:22 < mmcgrath> does anyone have anything else they'd like to discuss related to the release?
16:23 < mmcgrath> I'll take that as a no.
16:23 < mmcgrath> so next bit
16:23 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- mod_wsgi vs straight cherrypy
16:24 < mmcgrath> lmacken abadger1999 ricky: any thoughts on that?
16:24 * dgilmore is all for reliability
16:24 < abadger1999> I want mod_wsgi
16:24 < mmcgrath> I saw no tangible downside to doing this.
16:24 < lmacken> WSGI is the way to go, and mod_wsgi looks to have great benefits
16:24 < lmacken> it'll definitely put us in the right direction for adopting TG2 as well
16:24 < abadger1999> How do we restart pps under mod_wsgi?
16:24 < abadger1999> restart apache?
16:24 < abadger1999> s/pps/apps/
16:25 < mmcgrath> interestingly I found that lots of other of our applications could use the WSGI interface as well so we could at least have common configuration components that way.
16:25 < mmcgrath> abadger1999: yeah, just restart apache.
16:25 < abadger1999> Very nice.
16:25 * mmcgrath wonders if a graceful would work.
16:25 -!- wolfy [n=lonewolf@fedora/wolfy] has left #fedora-meeting ["I fought the lawn, and the - lawn won!"]
16:25 < abadger1999> Okay... so the only drawback of that is that we'd have to restart all of the apps on a machine if we update one.
16:25 < mmcgrath> abadger1999: the other thing thats nice is we can control memory bloat through other means like limiting the number of requests a process will be allowed to run before restarting, etc.
16:26 < abadger1999> mmcgrath: :-) That makes very happy
16:26 < mmcgrath> abadger1999: yes, with the exception of a graceful which, if I understand how it _should_ work... is something like this.
16:26 < abadger1999> <nod>
16:26 < mmcgrath> any http processes that are currently in use will be told to restart when they are done.
16:26 < mmcgrath> any process not in use is told to restart and it does so.
16:26 < abadger1999> There was also a startup cost, though?
16:27 < mmcgrath> so there should be no actual downtime. The only thing is after httpd is restarted, the initial request to start the processes usually takes an additional 3-5 seconds because its actually starting turbogears.
16:27 < abadger1999> Ah. 3-5 seconds isn't bad.
16:27 < mmcgrath> the startup cost is pretty high but still not horrible.
16:27 < abadger1999> I'm all for this.
16:27 * lmacken too
16:28 < mmcgrath> especially since, in my tests, its only worth it to have the number of processes == the number of cpus you have so its not like a whole lot of them will be starting at once.
16:28 < mmcgrath> alrighty then, we'll deploy this right after F9 ships.
16:28 < mmcgrath> I'll stick what I have for fas.wsgi into the git repo with directions on how to get it going.
16:28 < mmcgrath> abadger1999: interested in helping me test it?
16:28 -!- petreu| [n=peter@p3EE3E652.dip.t-dialin.net] has quit Read error: 113 (No route to host)
16:28 < lmacken> what TG changes did you have to make to get it under mod_wsgi ?
16:28 < abadger1999> Sure. Can we deploy it on a publictest box?
16:29 < mmcgrath> lmacken: none, I just had to create a fas.wsgi file that had the proper paths in it.
16:29 < lmacken> awesome
16:29 < mmcgrath> I didn't even have to alter the fas.cfg which is handy when testing the difference between the two.
16:29 < mmcgrath> you can overwride any option in your prod.cfg
16:30 < mmcgrath> makes it easy to deploy cherrypy vs wsgi without any changes... you can even run them both at the same time on the same box.
16:30 < mmcgrath> abadger1999: <nod>
16:30 < mmcgrath> alrighty, if no one has anything else there I'll move on to the next bit.
16:31 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- My Trip to PHX.
16:31 < mmcgrath> so I'm going to be going to PHX at some point right after F9 ships.
16:31 < mmcgrath> I'll also be deployin gthe new wiki sometime right after PHX ships.
16:31 < dgilmore> :)
16:31 < dgilmore> so we will have how many new builders?
16:31 < mmcgrath> its two major projects that will happen in a short time so if you see me being un responsive or sending you guys stuff to do... thats why.
16:31 < mmcgrath> dgilmore: not sure, however many are in that blade center.
16:31 < abadger1999> <nod>
16:32 < mmcgrath> I've also got a bunch of new servers coming in, at least 5 that I can think of right now (thats 5 not including whats in the blade center)
16:32 < dgilmore> mmcgrath: excellent. i look forward to bringing them online
16:32 < notting> are we going to have a moin-is-dead release party?
16:32 < mmcgrath> notting: I sure hope so.
16:32 < dgilmore> notting: you get the pieces
16:32 < f13> more likea moin wake
16:32 < mmcgrath> I'm going to be replacing lockbox soon, as well as db1.
16:32 < dgilmore> :)
16:32 < mmcgrath> db3 is going to get brought up as a dedicated koji instance.
16:32 < mmcgrath> also we'll get 2 more application servers.
16:33 < mmcgrath> So... lots of stuff going on there.
16:33 < dgilmore> can we look at retiring hammer2, xenbuilder1 and ppc1
16:33 < mmcgrath> dgilmore: yeah, I was going to let those crazy old boxes die on their own but once the new blade centers are online we may decide to just get rid of them.
16:33 < mmcgrath> honestly as cheap as hammer2 is of a box, I can't believe its still going.
16:34 < dgilmore> mmcgrath: xenbuilder1 is the same hardware
16:34 < dgilmore> :)
16:34 < mmcgrath> yeah
16:34 < mmcgrath> they're both dirt old and somehow still kicking.
16:34 < mmcgrath> So does anyone have any concerns or comments or things they want me to do while I'm in PHX?
16:34 < dgilmore> ppc1 is only a few months younger
16:34 < mmcgrath> I'm not sure if it will be this trip or next but I'd like to move all of the buildsystem hardware into the same rack.
16:34 * dgilmore just wants the blades :)
16:35 < dgilmore> that would be nice
16:35 < dgilmore> though the bladecenter im assuming has a built in switch we will be using
16:35 < mmcgrath> Just so, if we decide to move it somewhere, we can easily say "yes, U4-19 of this rack is all the blade center gear, move it to the new $DATA_CENTER"
16:35 < mmcgrath> dgilmore: correct.
16:36 < mmcgrath> But after the new db3 is up and after we get xen2 dedicated to releng and build stuff we will have a complete separation of services of the build system from the rest of our services.
16:36 < dgilmore> mmcgrath: maybe look at it for the next trip
16:36 < mmcgrath> and thats a good thing.
16:36 < mmcgrath> alllrighty.
16:36 < dgilmore> move the ppc boxes and xenbuilder4 to the same place as the blade center
16:36 < mmcgrath> <nod>
16:36 < mmcgrath> So thats really all I had for this meeting
16:36 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Open Floor
16:36 < mmcgrath> anyone have anything they'd like to discuss?
16:37 < mmcgrath> any noobs joining the team that want to say hi?
16:37 * dgilmore has been working on getting F-9 on the XO
16:37 < mmcgrath> dgilmore: how's that going?
16:37 < dgilmore> mmcgrath: slowly
16:37 < f13> I've got aline on some hardware donations from Pogo, that I'm getting more information on
16:37 < dgilmore> we got a nasty hack to mack upstart work
16:37 < wfp> I guess I'd be the noob joining the team, hi!
16:37 < dgilmore> im going to try make it better before making notting cry
16:38 < mmcgrath> wfp: hello!
16:38 < dgilmore> hi wfp
16:38 < mmcgrath> wfp: want to say a little about yourself and what you're interested in doing?
16:38 < notting> dgilmore: why would i cry?
16:38 < wfp> I sent an into to the mailing list. I've been doing SysAdmin and development for about 20 years now.
16:39 < wfp> (and why do I always mistype intro) Anyhow, not sure what/where to help. Just need to see what's needed.
16:39 < dgilmore> notting: it turns security off
16:40 < mmcgrath> wfp: well welcome. hang out in #fedora-admin and get to know everyone and whats going on. We're in a change freeze right now so not much actual admining is happening :)
16:40 < abadger1999> wfp: Welcome aboard!
16:40 < mmcgrath> Ok, well does anyone have anything else to discuss? If not we'll close the meeting in 30
16:40 < mmcgrath> 20
16:40 < mmcgrath> q0
16:40 < mmcgrath> 10
16:40 < mmcgrath> 5
16:40 * skvidal likes q0
16:41 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Meeting DOOM!
16:41 < mmcgrath> I mean, meeting end.
16:41 < mmcgrath> Alrighty everyone. thanks for coming
16:41 < caillon> i'm gonna sing the doom song now!
16:41 < nirik> is that the new diet nyQuel with 0 calories?
16:41 < dgilmore> thanks mmcgrath
xen5 change request
by Dennis Gilmore
Prior to the network maintenance outage, we had puppet disabled on xen5
and iptables stopped. As an interim measure I would like to return to
that state so that backups will run again. Longer term we need to adjust
iptables, but for right now I just want to do the minimum to have backups
functioning again. Nothing else runs on xen5, so it's low impact.
Dennis
Using consistent URLs
by Paul W. Frields
At today's "release readiness" meeting -- made up of reps from many
subprojects in Fedora -- the issue of using consistent URLs came up. If
we can settle on URLs that persist across releases, and use them
consistently in any public communication, it's easier for us to (1)
provide a good user experience through superior Web server
administration, (2) track metrics on user visits, and (3) create unified
marketing materials.
In our press releases, stories for Digg, Slashdot, and anywhere else
people might see them, let's make sure we are using these URLs:
To get Fedora:
--> http://get.fedoraproject.org/
To join Fedora:
--> http://join.fedoraproject.org/
To read Release Notes:
--> http://docs.fedoraproject.org/release-notes
--> http://docs.fedoraproject.org/ (can be shortened if needed)
--
Paul W. Frields http://paul.frields.org/
gpg fingerprint: 3DA6 A0AC 6D58 FEC4 0233 5906 ACDB C937 BD11 3717
http://redhat.com/ - - - - http://pfrields.fedorapeople.org/
irc.freenode.net: stickster @ #fedora-docs, #fedora-devel, #fredlug
change request (CROND)
by Mike McGrath
During this weekend's outage we disabled crond on a bunch of the boxes.
We forgot to re-enable it on a few, and I'd like to enable it again.
Risk: Moderate
I can't think of anything that'd break, per se. It's just a couple of
boxes, and none of them are in Fedora's critical path except for our
torrent server.
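(Mechanically this should just be a "service crond start" on each
affected box, plus a "chkconfig crond on" if any were disabled at that
level.)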
anyone want to +1?
-Mike