Meeting Log - 2008-10-23
ricky at fedoraproject.org
Thu Oct 23 20:55:35 UTC 2008
20:00 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Who's here?
20:00 * ricky
20:00 < mmcgrath> heh 3rd time's a charm
20:00 < pvangundy> ping
20:01 < G> moo!
20:01 < SmootherFrOgZ> hello guys
20:01 -!- giallu [n=giallu at fedora/giallu] has joined #fedora-meeting
20:02 < mmcgrath> lets get started then
20:02 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Tickets
20:02 < mmcgrath> .tiny https://fedorahosted.org/fedora-infrastructure/query?status=new&status=assigned&status=reopened&group=milestone&keywords=%7EMeeting&order=priority
20:02 < zodbot> mmcgrath: http://tinyurl.com/2hyyz6
20:02 < mmcgrath> .ticket 395
20:02 < ricky> Heheh
20:02 < zodbot> mmcgrath: #395 (Audio Streaming of Fedora Board Conference Calls) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/395
20:02 -!- fchiulli [i=824c4012 at gateway/web/ajax/mibbit.com/x-0a06cef606c98088] has joined #fedora-meeting
20:02 < ricky> I got OperationalError: database is locked
20:02 < mmcgrath> jcollie: I forget, did you want me to take the meeting tag off of this?
20:02 < ricky> Works on a refresh, though
20:02 < mmcgrath> ricky: thats fun
20:02 * mdomsch ducks out for a different meeting
20:03 < G> wow, only two tickets with meeting tag, lets discuss a third...
20:03 * mmcgrath skips 395 for now
20:03 * f13 here
20:03 < mmcgrath> jcollie: if you want to take the meeting keyword off have at it.
20:03 < mmcgrath> .ticket 740
20:04 < zodbot> mmcgrath: #740 (Loaning out system time to OLPC participants) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/740
20:04 < mmcgrath> dgilmore: ^^^ anything new there?
20:04 -!- fozzmoo [n=fozz at 126.96.36.199] has left #fedora-meeting 
20:05 * mmcgrath skips it too
20:05 < G> .ticket 576
20:05 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Pre-release freeze
20:05 < zodbot> G: #576 (Infrastructure Contact Information) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/576
20:05 < G> mmcgrath: oh, I was just going to ask what the status of that was?
20:06 < mmcgrath> G: that part's actually done ish.
20:06 < mmcgrath> there's an emergency response card that was handed out to some of the sysadmin-main guys. We need to come up with some sort of list that can be made public though.
20:07 < mmcgrath> I'll make some comments on that ticket though about some permanent place to put stuff. Like our inventory.
20:07 -!- themayor [n=jack at net2.senecac.on.ca] has quit
20:07 < pvangundy> when you say public, like freely searchable or behind some login/pwd page?
20:07 < mmcgrath> pvangundy: not sure yet.
20:08 < mmcgrath> It gets tricky, I don't want to accidentally give out the home phone number of someone who didn't want it given out.
20:08 < G> mmcgrath: well iirc the original intention was to store "if you can't get hold of me on IRC and you really need something try..."
20:08 < mmcgrath> but at the same time my information is generally available.
20:08 < mmcgrath> and it seems weird to keep two lists.
20:08 < mmcgrath> G: yeah
20:09 < mmcgrath> my info is all in the nagios configs right now anyway, and via the pager page
20:09 < mmcgrath> https://admin.fedoraproject.org/pager
20:09 < pvangundy> so what information would be given? Just phone #, additional email account to try and reach someone?
20:09 -!- mdomsch_ [n=Matt_Dom at cpe-70-124-62-55.austin.res.rr.com] has joined #fedora-meeting
20:09 < mmcgrath> pretty much, multiple phone numbers if they're available, pager email address.
20:09 < mmcgrath> stuff like that.
20:10 < pvangundy> do we have a tool for SMS?
20:10 < mmcgrath> pvangundy: https://admin.fedoraproject.org/pager
20:10 < mmcgrath> I think the big pusher here was that ricky wanted people's contact information once he was put in sysadmin-main, and I think he has that now
20:10 < pvangundy> *sees now*
20:11 < mmcgrath> ricky: is that correct?
20:11 * ricky thinks for a moment
20:11 < G> mmcgrath: well it was one of the things I actually suggested iirc
20:12 < mmcgrath> G: not according to the ticket :-P
20:12 < ricky> I guess we could just document the two pager sites better
20:12 < mmcgrath> but either way, I'll get some docs together and have that ticket closed by the end of the week.
20:12 < ricky> (noc2 version: http://noc2.fedoraproject.org/pager and noc1 version: https://admin.fedoraproject.org/pager)
20:12 -!- ubertibbs [n=tibbs at fedora/tibbs] has quit "Konversation terminated!"
20:12 < pvangundy> well, do you have a core group of individuals that you would want to get this information one? Surely you wouldn't want everyone in sysadmin-* contact info
20:12 < G> mmcgrath: mainly if the shit hit the fan when I was around (which is when you guys aren't) and no one responded I'd be able to get it another way
20:13 < pvangundy> one = on
20:13 < mmcgrath> <nod>
20:13 < G> mmcgrath: ricky skimped on the paste iirc :) - can't be sure though
20:13 < mmcgrath> pvangundy: yeah but the problem is making sure the non-main members can get ahold of the main members.
20:13 < pvangundy> gotcha
20:14 < mmcgrath> Anywho, I've got an idea. I'll get it done, documented and in the ticket soon.
20:14 < G> mmcgrath: and really the ones that would want to would be -web, -noc & -build really :)
20:14 < mmcgrath> anyone have anything else on that? If not we'll move on?
20:14 < G> move on, sorry for sidetracking :)
20:14 < mmcgrath> no worries, we don't have much on the docket today
20:14 < mmcgrath> So we're in another pre-release freeze.
20:15 < mmcgrath> Should be pretty straightforward, we're getting better at these releases.
20:15 < mmcgrath> I'm going to be spending some time on docs.
20:15 < mmcgrath> I'd encourage everyone to spend time looking through logs and writing down things that should be fixed, or alternatively working on testing F10.
20:16 < mmcgrath> get bugs knocked down, or report new ones. Hopefully more of the former :)
20:16 < wwoods> yes please!
20:16 < mmcgrath> Anyone have any questions about the pre-release freeze?
20:16 < mmcgrath> wwoods: :-P
20:17 < mmcgrath> k
20:17 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- func problems
20:17 < G> ohhh fun :)
20:17 < mmcgrath> So func did a really odd thing recently and I think we're mostly fixed.
20:17 * lmacken checks to see if his func scripts work
20:17 < ricky> I saw you guys discussing it - what did it turn out to be?
20:18 < mmcgrath> So in a previous version of func there was a bug that caused services that func restarted to bind to the func port.
20:18 < lmacken> I think some issue where func wasn't closing file descriptors, or something ?
20:18 < mmcgrath> and they'd wait in line.
20:18 < ricky> Haha
20:18 < lmacken> gah, still getting a traceback :(
20:18 < lmacken> File "/usr/lib/python2.4/site-packages/certmaster/utils.py", line 63, in nice_exception
20:18 < G> func doesn't seem to be running on app6, etc
20:18 < ricky> That is weird stuff.
20:18 < lmacken> lefti = etype.index("'") + 1
20:18 < lmacken> ValueError: substring not found
20:18 < mmcgrath> lmacken: could be, I didn't look that close.
20:18 < mmcgrath> lmacken: there's a couple of hosts where func communication wasn't working right.
20:18 < mmcgrath> you shouldn't see freezing like you were
20:18 < lmacken> we may need to restart func everywhere
20:18 < mmcgrath> but some tracebacks still happen.
20:18 < mmcgrath> like on app6
20:18 < G> mmcgrath: yeah, app6, collab2 for instance
20:18 < lmacken> you can do `func "*" ping` to see the same traceback
20:19 < mmcgrath> lmacken: I did that everywhere today. sometimes func just wasn't binding.
20:19 < mmcgrath> just more of the joys of early adoption.
20:20 < lmacken> Hmm, odd. I'll open a ticket
20:20 -!- mdomsch [n=Matt_Dom at cpe-70-124-62-55.austin.res.rr.com] has quit Remote closed the connection
20:20 < mmcgrath> I looked briefly at what was going on with func but never totally figured it out. I suspect it's a communication / network issue. Could be wrong though.
20:20 < mmcgrath> skvidal would be good to tap. It might be something simple.
20:20 * skvidal looks up
20:20 < mmcgrath> Needless to say.. what a weird thing to actually witness.
20:20 -!- DemonJester [n=DemonJes at fedora/DemonJester] has quit "leaving"
20:20 < skvidal> the value error thing I have an idea about, yes
20:20 < mmcgrath> skvidal: func isn't restarting properly on some hosts like app6 or tummy1
20:21 < G> mmcgrath: on the same note... proxy1's puppetd seems to like crashing
20:21 < skvidal> mmcgrath: which trace back is it giving?
20:21 < mmcgrath> skvidal: the funcmaster is giving a traceback, func on the minion isn't doing much of anything
20:21 < skvidal> nm I'll look
20:21 < mmcgrath> skvidal: thanks
20:21 < mmcgrath> G: I noticed that too. haven't figured out why yet
20:21 < skvidal> okay
20:21 < skvidal> in the case of app6
20:21 < mmcgrath> I'd think it'd be happening on proxy2 as well but just isn't.
20:22 < skvidal> it is b/c it has never been signed
20:22 < mmcgrath> skvidal: try tummy1
20:22 < ricky> defunct func :-)
20:22 < ricky> (process, that is)
20:22 < skvidal> ricky: hardly
20:22 * lmacken just opened https://fedorahosted.org/func/ticket/60
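The ValueError lmacken pasted comes from `str.index`, which raises when the searched substring is absent — here, an exception type string with no `'` in it. A minimal sketch of the failure mode and one hedged fix (the `nice_exception` internals below are a guess for illustration, not certmaster's actual code):

```python
# Hypothetical reconstruction of certmaster's nice_exception helper.
def nice_exception_buggy(etype):
    # str.index raises ValueError when "'" is missing, e.g. for an
    # exception repr like "socket.error" that isn't wrapped in quotes.
    lefti = etype.index("'") + 1
    righti = etype.rindex("'")
    return etype[lefti:righti]

def nice_exception_safe(etype):
    # str.find returns -1 instead of raising, so we can fall back to
    # the raw string when no quotes are present.
    lefti = etype.find("'")
    if lefti == -1:
        return etype
    righti = etype.rfind("'")
    return etype[lefti + 1:righti]

print(nice_exception_safe("<class 'ValueError'>"))  # prints: ValueError
print(nice_exception_safe("socket.error"))          # prints: socket.error
```

The buggy variant reproduces the traceback above on any input without quotes; the safe variant degrades to passing the string through unchanged.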
20:22 < mmcgrath> anywho, looks like skvidal is looking into that.
20:22 < skvidal> I don't think app6 can reach puppet1 actually
20:22 -!- mmcgrath changed the topic of #fedora-meeting to: /mnt/koji
20:22 < skvidal> ah. wait
20:23 < ricky> Yuh-oh :-(
20:23 < G> skvidal: but the other day, it was proxy5/app5 that was going nuts, app6 was fine
20:23 < skvidal> has app6 been reinstalled recently?
20:23 < mmcgrath> f13: did you have a chance to look at my email earlier?
20:23 < mmcgrath> dgilmore: I saw you responded.
20:23 < mmcgrath> this is something I'd like to have figured out sometime soon
20:23 < ricky> skvidal: Yeah, Oct 5th
20:23 < skvidal> ok
20:23 < skvidal> one sec
20:24 < f13> mmcgrath: I forwarded it to my manager
20:24 < f13> mmcgrath: aka the purse holder.
20:24 -!- rahul_b [n=rbhalera at 188.8.131.52] has joined #fedora-meeting
20:24 < mmcgrath> f13: k.
20:25 < f13> mmcgrath: but I agree with dgilmore, continued garbage collection + continued growth is the plan.
20:25 < G> whats the backstory here?
20:25 < mmcgrath> Depending on time frames and stuff we may have more options for storage though I suspect they'll also be more expensive.
20:25 < mmcgrath> G: at our current rate of growth we'll run out of room on /mnt/koji in about 13 months.
20:25 < f13> mmcgrath: I think we can gain back some storage by manually garbage collecting everything with the old gpg sigs
20:25 < f13> at least everything that we've resigned with new sigs
20:25 < mmcgrath> which is still a year away, but budgets for that time are due soon so we're just trying to get a good grasp on it and not let it sneak up on us.
20:25 < G> f13: +1
20:26 < G> f13: except, wouldn't we want to keep everything that was on the original CDs/DVDs
20:26 < mmcgrath> f13: k
20:26 < mmcgrath> G: why's that?
20:26 < mmcgrath> My take on it is if there's not a legal reason to keep that stuff, lets get rid of it. I'm not sure what releng's take is on it though
20:27 < G> mmcgrath: I'm not sure, I'm just thinking along the lines of we are still distributing that content....
20:27 < mmcgrath> f13: are we aware of any legal issues there?
20:28 * mmcgrath isn't sure what the law requires wrt binaries
20:28 -!- rdieter is now known as rdieter_away
20:28 < mmcgrath> well, either way it's not a pressing need. Something we can look at later.
20:28 < mmcgrath> we've got time right now but it always takes months to get these things priced out and installed and such.
20:29 < G> mmcgrath: so how much extra storage do you reckon we need?
20:29 < mmcgrath> f13: one thing to keep in mind when talking to your manager is when the new one gets purchased we'll have a 10T tray that still has a good year and a half of support on it if you guys need it for anything.
20:29 < mmcgrath> we can always ship it.
20:29 < mmcgrath> G: that's a good question. It's just a matter of $$ really.
20:29 < mmcgrath> at this point I'm pretty confident we'll fill up whatever we purchase.
20:29 < G> heh
20:29 * dgilmore is here
20:30 < mmcgrath> dgilmore: f13: really though do we want to try to target a sustainable solution or are we going to stick with "grow forever"?
20:30 < mmcgrath> well. I know what I want :) but what do you guys think we're actually going to do?
20:30 -!- greenlion [n=greenlio at fedora/greenlion] has quit "Leaving"
20:31 < dgilmore> mmcgrath: i think we should look at purging old releases
20:31 -!- rdieter_away is now known as rdieter
20:31 < mmcgrath> at this point thats only FC6 and F7 right?
20:31 * mmcgrath can't remember if FC6 got on there or not.
20:31 < dgilmore> mmcgrath: so things that only shipped with F-9 could be removed
20:31 < skvidal> on archive?
20:31 < G> mmcgrath: or if we wait a couple of months, add F8 :)
20:32 < mmcgrath> skvidal: /mnt/koji/
20:32 < skvidal> oh
20:32 < dgilmore> mmcgrath: rawhide as of when we started with koji was there
20:32 < mmcgrath> perhaps we need a rule for /mnt/koji like we have with our releases
20:33 < mmcgrath> releases are n+1+1 month.
20:33 < pvangundy> i think this is something that needs to be nailed down because it's hard to move on with other projects when we're still deciding what we will hold on to and what we don't need anymore. A policy needs to be in place.
20:33 < mmcgrath> maybe /mnt/koji will be n+2 or something
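An n+2 retention rule like the one mmcgrath floats would keep the current release plus the two before it; a toy illustration (the helper name and policy encoding are assumptions for illustration, not an actual koji hook):

```python
def keep_in_koji(current_release, release, extra=2):
    # n+2: keep the current release and the `extra` releases before it.
    return 0 <= current_release - release <= extra

# At the Fedora 10 timeframe this keeps F8..F10 and drops F7 and older.
kept = [r for r in range(6, 11) if keep_in_koji(10, r)]
print(kept)  # prints: [8, 9, 10]
```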
20:33 < dgilmore> mmcgrath: we still ship some packages that were imported at the start
20:34 < mmcgrath> pvangundy: "it's hard to move on with other projects when we're still deciding what we will hold on to and what we don't need anymore." sorry I didn't follow how this is blocking other projects.
20:34 -!- rharrison [n=rusharri at nat/cisco/x-14e340d28892a016] has left #fedora-meeting ["Leaving"]
20:34 < G> neither
20:34 -!- rdieter is now known as rdieter_away
20:34 < pvangundy> planning would be the better word. ie, purchasing
20:34 < G> this is dedicated storage for koji
20:34 < mmcgrath> pvangundy: ah
20:34 < pvangundy> sorry
20:35 < mmcgrath> no worries, yeah.
20:35 < mmcgrath> f13 seems to be distracted. we can go back to this.
20:35 < mmcgrath> 13 months is still a ways away. but getting a budget estimate is important.
20:35 < f13> mmcgrath: we'll keep them on the master mirror until we shuffle them off to archive, but they don't need to live in koji itself.
20:36 < f13> mmcgrath: g: the signed header will be there and as long as the unsigned rpm is still there we can always re-create the signed version
20:36 < mmcgrath> f13: k
20:36 < G> f13: good point
20:36 < f13> mmcgrath: I think we'll pick a certain age of Fedora releases to no longer keep in Koji
20:36 < G> do we garbage collect old buildlogs?
20:36 < pvangundy> sorry guys, I have to head out early. I know it will be hard to run the meeting without me but $DAYJOB calls. ;)
20:37 -!- pvangundy [n=pvangund at host-216-153-209-2.man.choiceone.net] has quit "Leaving"
20:37 < mmcgrath> G: I'm not sure
20:37 < G> some of them are quite big iirc
20:38 * ricky remembers the infinite looping fun :-)
20:38 < mmcgrath> dgilmore: f13: can one of you give a rundown of exactly what the gc does?
20:38 -!- mbacovsk_ [n=mbacovsk at okr2fw.topnet.cz] has joined #fedora-meeting
20:38 < dgilmore> mmcgrath: it untags packages when there are more than 3 builds for the tag
20:39 < dgilmore> it then goes through and moves them to a temporary tag
20:39 -!- cassmodiah [n=cass at fedora/cassmodiah] has quit Remote closed the connection
20:39 < mmcgrath> what does a "package" consist of?
20:39 < dgilmore> once they're in the temporary tag for 3 weeks it unlinks them
20:39 < mmcgrath> just the rpm? or the logs too
20:39 < dgilmore> mmcgrath: a build
20:39 < dgilmore> i think its all
20:39 < f13> the logs too IIRC
20:39 < f13> the only thing that should be left is info in the db itself
20:40 < mmcgrath> <nod>
20:40 < dgilmore> some of the db info is pruned
20:40 < mmcgrath> we should be fine on db space for a while
20:40 < dgilmore> but enough remains so that the same nvr is not built again
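The garbage-collection flow dgilmore outlines (untag everything past the newest three builds per tag, park it in a temporary "trashcan" tag, unlink files after three weeks, but keep enough db metadata to block rebuilding the same NVR) can be sketched roughly as follows. Names and data shapes here are illustrative, not koji's actual schema or API:

```python
from datetime import timedelta

KEEP_PER_TAG = 3            # newest builds kept tagged
GRACE = timedelta(weeks=3)  # time in the trashcan tag before unlinking

def gc_pass(tags, trashcan, seen_nvrs, now):
    """tags: {tag: [(nvr, build_time), ...]}, trashcan: {nvr: untag_time}.
    Returns the list of builds whose files (rpms and logs) would be unlinked."""
    # Step 1: untag everything beyond the newest KEEP_PER_TAG builds per tag,
    # moving it into the trashcan with a timestamp.
    for builds in tags.values():
        builds.sort(key=lambda b: b[1], reverse=True)
        for nvr, _ in builds[KEEP_PER_TAG:]:
            trashcan.setdefault(nvr, now)
        del builds[KEEP_PER_TAG:]
    # Step 2: unlink builds that have sat in the trashcan past GRACE,
    # remembering the NVR so the same version can't be built again.
    doomed = [nvr for nvr, t in trashcan.items() if now - t >= GRACE]
    for nvr in doomed:
        del trashcan[nvr]
        seen_nvrs.add(nvr)
    return doomed
```

Running `gc_pass` twice three weeks apart shows the two-phase behavior: the first pass only untags, the second actually unlinks.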
20:40 < mmcgrath> 394G 75G 300G 20% /var/lib/pgsql
20:41 < mmcgrath> well, anyone have anything to discuss on that right now? If not we can move on. We'll likely be talking about it quite a bit over the coming weeks.
20:42 < G> sounds good with me
20:42 -!- wwoods [n=wwoods at nat/redhat/x-ebb237103a893721] has quit "new kernel! clean cup! move down!"
20:42 < mmcgrath> k
20:42 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Open Floor
20:42 < mmcgrath> anyone have anything they'd like to discuss?
20:42 < G> err yeah
20:44 < G> Zabbix is finally starting to work, so I must insist that people should start filing tickets about what needs to be monitored, going to start adding users again tonight
20:44 < mmcgrath> G: excellent.
20:44 < mmcgrath> did we lower how often its checking some things?
20:44 < G> mmcgrath: yep
20:44 < mmcgrath> excellent.
20:44 < mmcgrath> did we get the mirror and fedoraproject.org/wiki/ hit monitoring back in there?
20:45 < G> down ~20 checks/sec iirc
20:45 < mmcgrath> excellent.
20:45 < G> mmcgrath: not sure, they were kinda placed wrong
20:45 < mmcgrath> where should they have been placed?
20:46 < mmcgrath> well, we still need training for all that
20:46 < G> mmcgrath: not in a template applied to every machine :)
20:46 < G> mmcgrath: agreed, I'm planning on doing that soon
20:46 < mmcgrath> it was in the proxy template applied only to proxy servers :)
20:46 < mmcgrath> even when we added proxy5 it automatically picked it up :)
20:46 < G> mmcgrath: I thought it was applied to the apache template
20:46 -!- wwoods [n=wwoods at nat/redhat/x-7d4a2607235c28e3] has joined #fedora-meeting
20:47 < G> The other thing, is I won't be able to make many meetings for the next few weeks
20:47 < mmcgrath> <nod> thanks for the heads up
20:47 < G> I'll be arriving in Brisbane on Thursday
20:47 < mmcgrath> well, anyone have anything else to discuss? If not we'll close in 30 min
20:47 < mmcgrath> sweet
20:47 < G> 30 min?
20:47 < SmootherFrOgZ> mmcgrath: any news on xen6 from your side ?
20:48 < mmcgrath> SmootherFrOgZ: I haven't touched it in 2 weeks or so. Have you played around on it at all?
20:48 -!- lfoppiano_ [n=lfoppian at host92-165-dynamic.8-87-r.retail.telecomitalia.it] has joined #fedora-meeting
20:48 < mmcgrath> SmootherFrOgZ: if you were looking for stuff to do, getting some of the guests that are on there up and running would be useful.
20:48 < SmootherFrOgZ> yep alreqdy did
20:49 -!- ianweller is now known as ianweller_afk
20:49 < SmootherFrOgZ> already
20:49 < mmcgrath> excellent
20:49 < SmootherFrOgZ> as i said ovirt really depends on its web-interface :(
20:49 < mmcgrath> yeah
20:50 < mmcgrath> SmootherFrOgZ: we could get that exposed better. Right now I've been using ssh forwarding.
20:50 < SmootherFrOgZ> lynx is not powerful enough to play with it
20:50 < mmcgrath> yeah
20:50 < mmcgrath> do we know when they'll be un-apping it yet?
20:50 < mmcgrath> Right now it still feels very demoish.
20:50 < SmootherFrOgZ> yeah
20:51 < SmootherFrOgZ> did you have a look at enomaly?
20:51 < mmcgrath> I didn't
20:51 < SmootherFrOgZ> that sounds pretty good
20:51 < mmcgrath> <nod>
20:51 < SmootherFrOgZ> python based and turboGears powered
20:51 < mmcgrath> hopefully I'll have some more time and resources to dedicate to this in the near future
20:52 < mmcgrath> SmootherFrOgZ: have you used it yet?
20:52 * mmcgrath will take a look
20:52 < SmootherFrOgZ> yep, it depends on python-2.4 because of elementtree
20:53 < SmootherFrOgZ> for now
20:53 < mmcgrath> heh, its always something :)
20:53 < mmcgrath> well we're nearly out of time for our meeting, anyone have anything else to discuss? if not we'll close in 30
20:53 < SmootherFrOgZ> yeah
20:53 < mmcgrath> 15
20:53 < SmootherFrOgZ> i tried to bind it to xml-etree but i still have some freezes
20:53 < mmcgrath> <nod>
20:53 < mmcgrath> 5
20:54 -!- mmcgrath changed the topic of #fedora-meeting to: Infrastructure -- Meeting End