20:00 < mmcgrath> #startmeeting Infrastructure
20:00 < zodbot> Meeting started Thu Feb 11 20:00:47 2010 UTC. The chair is mmcgrath. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00 < skvidal> oh so much
20:00 < zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
20:00 -!- zodbot changed the topic of #fedora-meeting to: (Meeting topic: Infrastructure)
20:00 -!- sijis [~sijis@fedora/sijis] has joined #fedora-meeting
20:01 < mmcgrath> #topic who's here?
20:01 -!- zodbot changed the topic of #fedora-meeting to: who's here? (Meeting topic: Infrastructure)
20:01 * mmcgrath is
20:01 * lmacken
20:01 * a-k is
20:01 * heffer is too, but just by chance
20:01 * sijis
20:02 * hiemanshu
20:02 * ricky
20:02 * skvidal is
20:02 < mmcgrath> I've got 3 main things I want to talk about. The first two should be short; the third one is about updates and will likely be longer
20:02 < mmcgrath> So I'll just get started
20:02 < mmcgrath> actually 4 things, 3 are short
20:03 < mmcgrath> #topic VPN issues
20:03 -!- zodbot changed the topic of #fedora-meeting to: VPN issues (Meeting topic: Infrastructure)
20:03 < mmcgrath> We've been seeing strange vpn issues. we saw a cluster of like 5 outages over the span of an hour this morning.
20:03 < mmcgrath> I poked around a bit, did a couple of restarts and have generally been keeping an eye on things.
20:03 < mmcgrath> I thought they were fixed except that we had another one about 5 minutes ago.
20:04 < mmcgrath> There's lots of things this could be, but the biggest vpn change we've made was yesterday we were running on bastion2, which was xen. Now we're running on bastion1 which is kvm.
20:04 < mmcgrath> I can't say for sure that's what is going on, but we've seen performance issues before with misconfigured vms
20:04 < mmcgrath> anyone have any questions or concerns on that?
20:04 < sijis> could it be network itself?
20:04 -!- yawns1 [~yawn(a)c-75-73-228-194.hsd1.mn.comcast.net] has joined #fedora-meeting
20:04 < mmcgrath> sijis: it could be
20:05 < mmcgrath> the outages are short lived and unpredictable
20:05 < mmcgrath> so it's been difficult to troubleshoot
20:05 < mmcgrath> Ok, next topic
20:05 < mmcgrath> #topic Equallogic
20:05 -!- zodbot changed the topic of #fedora-meeting to: Equallogic (Meeting topic: Infrastructure)
20:05 < mmcgrath> It's in, it's powered up and Dgilmore has even logged into it so he can be imprinted as its father.
20:05 < abadger1999> :-)
20:05 < mmcgrath> but we don't think the network ports are actually configured.
20:06 < mmcgrath> so, like I said, short topic on that.
20:06 < mmcgrath> we'll keep working on it and see how it goes.
20:06 * dgilmore is here
20:06 < mmcgrath> any questions or comments on that?
20:06 < dgilmore> please give me multiple gig ports
20:06 < dgilmore> pretty please
20:06 -!- jaxjaxmob [~jaxjaxmob(a)220.127.116.11] has joined #fedora-meeting
20:06 < mmcgrath> dgilmore: well, you should have 8 of them there.
20:06 < mmcgrath> and we can do whatever bonding we desire.
20:07 < Oxf13> WANT
20:07 < mmcgrath> Ok, nothing else on that?
20:07 < dgilmore> nothing
20:07 -!- gholms|mbp [~gholms(a)x-160-94-88-123.uofm-secure.wireless.umn.edu] has joined #fedora-meeting
20:08 < mmcgrath> buhhh
20:08 < mmcgrath> I forgot what the third thing was so we'll go right on to the 4th
20:08 < mmcgrath> #topic Updates
20:08 -!- zodbot changed the topic of #fedora-meeting to: Updates (Meeting topic: Infrastructure)
20:08 < mmcgrath> So we did a group of updates yesterday and, needless to say, things didn't go well.
20:08 < mmcgrath> There's a number of complicated issues here.
20:08 -!- jcollie [~jcollie@fedora/jcollie] has quit Ping timeout: 252 seconds
20:08 < mmcgrath> 1) We have latest versions of things in our repos that aren't to be updated
20:08 < mmcgrath> 2) actually getting a list of things that are to be updated
20:09 < mmcgrath> 3) actually doing the updates.
20:09 < skvidal> okay
20:09 < skvidal> can I jump in here?
20:09 < mmcgrath> Unfortunately system updates scale horribly. Restarting httpd on one server isn't that different from restarting it on 100 servers. But doing updates and restarts... completely different story.
20:09 < mmcgrath> skvidal: absolutely, have at it
20:09 < skvidal> okay
20:09 < skvidal> so something we originally wrote func for was this case
20:09 < skvidal> being able to get a lot of info and act on it
20:10 < skvidal> but we never implemented this
20:10 < skvidal> b/c we got off on other things
20:10 < skvidal> so I decided to work on it this week and I have a really simple script
20:10 < mmcgrath> skvidal: you're talking specifically about 3) or also 2?
20:10 < skvidal> 2 and 2
20:10 < skvidal> err
20:10 < skvidal> 2 and 3
20:10 < mmcgrath> <nod>
20:10 < skvidal> so here's the gist
20:10 < skvidal> get all updates via yumcmd.check_update via func
20:10 < skvidal> • store timestamp of check and list of updates in a dir/db with name of host
20:10 < skvidal> • store complete list of installed pkgs for each host
20:10 < skvidal> • cmd should
20:10 < skvidal> ∘ list hosts needing updates
20:10 < skvidal> ∘ list hosts needing a certain pkg updated
20:10 < skvidal> • apply updates - glob or all
20:10 < skvidal> ∘ report results of this
20:11 < skvidal> right now I'm storing things really simply so we can search it trivially
20:11 < Oxf13> what's with the unicode bullets?
20:11 < skvidal> /some/path/$hostname/[installed|updates|updated-$TIMESTAMP|orphans]
20:11 < skvidal> Oxf13: from my gnote notes - sorry
20:11 < Oxf13> s'ok
20:11 < skvidal> Oxf13: I use it to brainstorm then paste it in places
20:12 < Oxf13> skvidal: ditoo
20:12 < Oxf13> -o+t
20:12 < skvidal> the idea would be to have the script run using func, async, at regular intervals (maybe only once a day is enough)
20:12 < mmcgrath> skvidal: so lets flash forward to where all this work is done and is in place. What would we do come update day?
20:12 < skvidal> to know what's on the boxes and their status
20:12 -!- pravins [~psatpute(a)18.104.22.168] has quit Quit: Leaving
20:12 < skvidal> func-yum -h hostname --pkg pkgname --update
20:13 < skvidal> or
20:13 < skvidal> func-yum --update
20:13 < skvidal> which hits all the hosts
20:13 < skvidal> or func-yum -h hostglob --pkg pkgglob --update
20:13 < mmcgrath> will we get any output or feedback from that?
20:13 < skvidal> then the results of those runs will be stored in /some/path/$hostname/updated-YYYY-MM-DD-HH:MM:SS
20:14 < skvidal> mmcgrath: so you can see what the results are explicitly
20:14 < skvidal> w/o having to chase all over the place
20:14 < skvidal> does that make sense?
20:14 < mmcgrath> <nod> yeah. I like that, pssh does something similar for ssh commands.
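The per-host store skvidal outlines above can be sketched in a few lines of Python. This is a hypothetical illustration, not func-yum's actual code: the helper names (`record_host`, `record_update_run`, `hosts_needing_update`) are made up, and the update lists that would feed it from func's `yumcmd.check_update` are passed in as plain lists here.

```python
import os
import time

# Sketch of the /some/path/$hostname/[installed|updates|updated-$TIMESTAMP]
# layout described above. All function names here are hypothetical.

def record_host(base, hostname, installed, updates):
    """Store the installed-package and pending-update lists for one host."""
    hostdir = os.path.join(base, hostname)
    os.makedirs(hostdir, exist_ok=True)
    for name, pkgs in (("installed", installed), ("updates", updates)):
        with open(os.path.join(hostdir, name), "w") as f:
            f.write("\n".join(sorted(pkgs)) + "\n")

def record_update_run(base, hostname, results):
    """Append one update run's output under a timestamped name, so the
    results of every run stay greppable per host."""
    stamp = time.strftime("%Y-%m-%d-%H:%M:%S")
    path = os.path.join(base, hostname, "updated-" + stamp)
    with open(path, "w") as f:
        f.write(results)
    return path

def hosts_needing_update(base, pkg=None):
    """List hosts with pending updates, optionally only those where a
    package matching the given name prefix needs updating."""
    matched = []
    for hostname in sorted(os.listdir(base)):
        upath = os.path.join(base, hostname, "updates")
        if not os.path.isfile(upath):
            continue
        with open(upath) as f:
            pending = [line.strip() for line in f if line.strip()]
        if pending and (pkg is None or any(p.startswith(pkg) for p in pending)):
            matched.append(hostname)
    return matched
```

Because everything is flat text files named by host, the "list hosts needing a certain pkg updated" query is just a directory walk plus grep, which matches the "store things really simply so we can search it trivially" goal.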
20:14 < skvidal> so I've got the storing info
20:14 < skvidal> and updates part working
20:15 < skvidal> I need to update func and certmaster for our hosts
20:15 < skvidal> b/c we're running an old one
20:15 < skvidal> which doesn't support the --timeout option :)
20:15 < skvidal> which is important here
20:15 < skvidal> and then one more thing I'm working on is
20:15 < skvidal> func-yum --status
20:15 < skvidal> which spits out the status of the hosts as it last knew it
20:15 < skvidal> so things like:
20:15 < skvidal> Last Checked: timestamp
20:15 < skvidal> Last Updated: timestamp
20:15 < skvidal> updates available: #of pkgs
20:16 < skvidal> installed pkgs: #of pkgs
20:16 < skvidal> orphans: #of pkgs
20:16 < skvidal> which seems like a reasonable set of things to list out
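Given that store, the `func-yum --status` report above reduces to counting lines and finding the newest `updated-*` file. Again a sketch with a hypothetical function name, assuming the same flat-file layout:

```python
import glob
import os
import time

def host_status(hostdir):
    """Render the status block described above for one host, reading the
    /some/path/$hostname/[installed|updates|updated-*|orphans] store.
    host_status is a hypothetical name, not func-yum's actual API."""
    def count(name):
        path = os.path.join(hostdir, name)
        if not os.path.isfile(path):
            return 0
        with open(path) as f:
            return sum(1 for line in f if line.strip())

    # Newest timestamped run file tells us when updates were last applied.
    runs = sorted(glob.glob(os.path.join(hostdir, "updated-*")))
    last_updated = runs[-1].rsplit("updated-", 1)[1] if runs else "never"

    # The mtime of the "updates" file is when we last checked.
    upath = os.path.join(hostdir, "updates")
    if os.path.isfile(upath):
        stamp = time.localtime(os.path.getmtime(upath))
        last_checked = time.strftime("%Y-%m-%d-%H:%M:%S", stamp)
    else:
        last_checked = "never"

    return "\n".join([
        "Last Checked: %s" % last_checked,
        "Last Updated: %s" % last_updated,
        "updates available: %d" % count("updates"),
        "installed pkgs: %d" % count("installed"),
        "orphans: %d" % count("orphans"),
    ])
```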
20:16 < mmcgrath> skvidal: do you need any help with that?
20:16 < skvidal> sure - it's just a single script
20:16 < mmcgrath> smooge: you around? we haven't heard from you yet? :)
20:16 < skvidal> I'm hoping to post a draft of it this afternoon
20:16 < mmcgrath> skvidal: excellent.
20:16 < smooge> yes
20:16 < smooge> sorry
20:16 < skvidal> one place where I do need help
20:16 < smooge> I have this meeting an hour from now
20:16 < skvidal> smooge: :)
20:17 < smooge> changing
20:17 -!- JSchmitt [~s4504kr@fedora/JSchmitt] has quit Remote host closed the connection
20:17 < skvidal> is the error reporting/catching
20:17 < skvidal> there are lots of things that get in the way here
20:17 < mmcgrath> skvidal: yeah, and we've had some bad luck with conflicts in the past.
20:17 < skvidal> and I want to make sure I catch and report all the errors sanely
20:17 < skvidal> mmcgrath: mmm conflicts
20:17 < skvidal> mmcgrath: so, something we should consider doing
20:17 < skvidal> even though it is a pain in the arse
20:17 < skvidal> is running yum transactions for updates with tsflags=test
20:18 < skvidal> which does EVERYTHING but nothing actually gets written out
20:18 < smooge> ok catching up.. the big issue that I had was that about 1/3 of systems required manual flag changes to yum to work
20:18 < skvidal> and no scriptlets are actually run
20:18 < skvidal> smooge: manual flag changes like what?
20:18 < smooge> --exclude --disablerepo
20:18 < skvidal> hmm, disablerepo?
20:18 < skvidal> I sortof get 'exclude'
20:18 < mmcgrath> skvidal: would that do a full download of the package? because I was thinking about doing that as part of a pre-update thing so we don't pound puppet1 with updates and so when the actual time comes it takes less time.
20:19 < mmcgrath> if what you want does download the package, we could kill two birds with one stone.
20:19 < skvidal> mmcgrath: yes - it does everything including run the transaction but it runs it in rpm's test mode which does nothing
20:19 < smooge> skvidal, there are a couple of boxes that have outside repositories and updates will come up squirrely unless I turn off the repos. Thankfully disable repo only occurs on .stg and publictest boxes normally
20:19 < skvidal> mmcgrath: for a good time set tsflags=test in yum.conf under [main] and forget about it
20:19 < skvidal> mmcgrath: it's great fun trying to figure out why you ALWAYS have new updates
20:20 < mmcgrath> heheheh
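The tsflags trick above comes in two forms. The persistent one below is the foot-gun skvidal jokes about; the one-off form in the note after it keeps the flag scoped to a single run.

```ini
# Persistent form, in /etc/yum.conf under [main]: every transaction now runs
# in rpm's test mode -- packages are downloaded and the full transaction is
# exercised, but nothing is written out and no scriptlets run. Easy to set
# and forget, which is why the same updates keep showing up as "available".
[main]
tsflags=test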
20:20 -!- adrianr [~adrian(a)rhlx01.hs-esslingen.de] has joined #fedora-meeting
20:20 < skvidal> smooge: if we know the set of updates we mandate we could only explicitly enable those
20:20 < mmcgrath> smooge: so what were some of the biggest issues you ran into with this last round of updates?
20:20 < smooge> ok slowness of updates.
20:20 < skvidal> smooge: taking too long to download or too long to install?
20:21 < Oxf13> (or too long between update sessions)
20:21 < mmcgrath> smooge: the actual 'yum -y update' part?
20:21 < smooge> 1) slowness of updates. some boxes sit for 2-3 minutes on installation of rpm glibc and such..
20:21 * nirik notes doing them more regularly would help with that.
20:21 < skvidal> smooge: yah - that's rpm fingerprinting - and there's nothing we can do until rhel6
20:21 < smooge> 2) slowness of updates. slow network to outside. ibiblio was slower than telia1
20:21 < mmcgrath> nirik: so would downloading the packages earlier. We already do them monthly.
20:21 < ricky> Do we ever not want an update available from the RHEL updates?
20:22 < smooge> 3) errors in updates. various packages would spew scriplet %post errors I wanted to make sure they were ok
20:22 < nirik> well, that would help with the download part, but not the applying part.
20:22 < ricky> If not, could that just be automated so we just need to think about rebooting?
20:22 < smooge> 4) conflicting packages.
20:22 < smooge> 5) systems not coming back due to rawhide+xen
20:22 < mmcgrath> yeah rawhide + xen is an absolute bitch
20:23 < mmcgrath> I wonder if we moved our rawhide boxes to KVM if we'd have a better go at them.
20:23 < smooge> 6) updating 8 boxes at once on a xen box cause slowness.
20:23 < mmcgrath> nirik: how often do you think is good to do updates?
20:23 * mmcgrath thinks this is a good discussion to have
20:24 < sijis> we currently do them monthly?
20:24 < smooge> nirik the locality of a 'proxy' for the remote boxes would make some of the delays easier to know. I can deal with 10 minute wait on install.. but watching a package stop downloading for that long gets me wondering
20:24 < mmcgrath> sijis: yeah, unless there's security updates.
20:24 < Oxf13> mmcgrath: we'd have a much better go with rawhide on kvm
20:24 < nirik> well, for our customers we do them daily if they are not requiring a reboot. ;) If they are, we schedule a day and/or time to do them and do reboots.
20:24 < Oxf13> mmcgrath: but any rawhide host has a inherent risk of not coming back after a change
20:24 < nirik> most rhel updates are security updates.
20:24 < mmcgrath> nirik: how are you doing them?
20:25 -!- Sonar_Guy [~Who@fedora/sonarguy] has quit Quit: Leaving
20:25 < Oxf13> nirik: sadly, there has been more and more of non-security updates in the EL channels as of late
20:25 < mmcgrath> Oxf13: and we're still averaging 1 kernel update / month.
20:25 < mmcgrath> which has also been a PITA.
20:25 < mmcgrath> We may want to be more careful about the kernel updates and determine if we really need to reboot.
20:26 < nirik> I typically use 'mussh'... run a check-update over a group (different host lists/groups) and make sure they are all things we know what they are, then use mussh with 'yum -y' and apply them. Then go back and restart anything that needs restarting.
20:26 < nirik> yeah, kernel updates have gone way up in frequency it seems like. ;(
20:26 < mmcgrath> I don't know wtf that's about but it is very annoying
20:26 < mmcgrath> smooge: ok, so back to the issues you saw
20:26 -!- mdomsch [~mdomsch@2001:1938:16a::2] has quit Quit: Leaving
20:26 < mmcgrath> those are all generally things I see when I do updates
20:27 < mmcgrath> and I think with some work much of it can be automated.
20:27 < nirik> some of the kernel updates however we have applied and not rebooted for.
20:27 < smooge> and while it can be parallelized I didn't get to the part where I wasn't dealing with potential races until way after the window for updates should have finished
20:27 < mmcgrath> nirik: yeah
20:28 < smooge> so we are about 1/2 updated
20:28 < smooge> we still have most remote locations to do
20:28 < skvidal> okay so test transacting would help find systems which are more likely to die
20:28 < mmcgrath> skvidal: just curious, how long do you think it'll be before you're ready to actually test?
20:28 < mmcgrath> because it sounds like smooge still has some to do, but we freeze next week for the alpha.
20:29 < skvidal> I need func updated on some boxes - so I could test on the ones I update
20:29 < smooge> mmcgrath, I am wanting to postmortem yesterday since I felt I was just shit-canning our infrastructure
20:29 < skvidal> I was going to start by testing people1
20:29 < smooge> I haven't updated that box at all
20:29 < smooge> skvidal, so it should be good for a test
20:29 < mmcgrath> smooge: naw, you did fine, the only bad ones were that xen4-mgmt's RSA-II decided to stop working (which made the shutdown -h a problem)
20:29 < smooge> the next issue I ran into was that things like transifex should not have been updated ..
20:30 < mmcgrath> and the other one was just waiting for db3 to come back online, lvm + large shares is annoying.
20:30 < mmcgrath> smooge: yea, and that's the last thing I want to talk about
20:30 < smooge> I think xen4 is having real issues
20:30 < skvidal> brb
20:30 < mmcgrath> Basically we need to have a test repo
20:30 < mmcgrath> and not enable it anywhere.
20:30 < mmcgrath> ricky: you're working on transifex now right?
20:30 < nirik> is epel-testing enabled everywhere?
20:30 < smooge> nirik yes
20:30 < ricky> Yeah, I wasn't aware there was a new package in EPEL
20:30 < mmcgrath> nirik: at the moment it is and we have very few problems with it
20:31 < mmcgrath> smooge: whats the puppet epel-test thing you ran into?
20:31 < smooge> puppet is the usual one
20:31 < mmcgrath> ricky: oh the new transifex is in epel?
20:31 < ricky> Did you guys get issues with puppet? I've been testing the latest version without any pain
20:31 < nirik> yeah, just another source of package updates... if you could reduce the need for that it would help make updates easier.
20:31 < mmcgrath> ricky: I didn't think so but I've heard people complaining about it so I must have missed it.
20:31 < smooge> a couple of php packages on some box a while back.
20:31 < ricky> Er, I'm not sure, maybe it came from the infra repo
20:31 < mmcgrath> smooge: did we have a puppet update go bad recently?
20:31 < ricky> Always make sure to update the puppetmaster first on puppet updates
20:31 < smooge> and one time a bad-scriplet that left me two packages on the box
20:31 < mmcgrath> ricky: can you check real quick?
20:31 < smooge> mmcgrath, 3x last month
20:32 < ricky> It's from infra, my mistake
20:32 < mmcgrath> smooge: we had 3 puppet updates? or we had 3 of them go bad?
20:32 < mmcgrath> what happened?
20:32 < ricky> Maybe we need an infrastructure-test for this special staging stuff :-)
20:32 < smooge> mmcgrath, I did the updates in sections last month
20:32 < mmcgrath> ricky: yeah that's what I'm proposing
20:32 < ricky> Otherwise, if we decide to rebuild app1, we need to special case a bunch of stuff
20:32 < mmcgrath> smooge: but what happened?
20:32 < ricky> **appX
20:32 < smooge> mmcgrath, so there were 2-3 pushes of puppet packages and each time I seemed to get some boxes updated to the new stuff
20:32 < smooge> which broke puppet1 so I had to then update it and the boxes I had done before
20:33 < mmcgrath> what broke though?
20:33 < mmcgrath> like what were the errors?
20:33 < smooge> puppet couldn't talk to them.
20:33 < smooge> I didn't find the error.. ricky let me know 2-3 days after I had done the updates when he caught it
20:33 < mmcgrath> the new versions of puppet couldn't talk to the old puppetmaster or the other way around?
20:33 < mmcgrath> ricky: do you remember what happened there?
20:33 < ricky> The server is generally backwards compatible
20:33 < smooge> I think it was the clients weren't getting updates
20:33 < ricky> So if you accidentally update a client, update the server and check if stuff works - no need to rush on updating clients
20:34 < smooge> so various boxes were in lala land for a couple of days.
20:34 < ricky> I don't remember what happened :-/
20:34 < mmcgrath> yeah
20:34 < ricky> The only thing that should cause pain is a client update without the corresponding server one though
20:34 < ricky> So it must have been that if anything, I guess.
20:34 < mmcgrath> ricky: are you still getting errors sent to you?
20:34 < smooge> but I am trying to piece from xchatlogs
20:35 < ricky> I'm still getting a ton of errors, but most are an unrelated SELinux thing (and lack of mount ACLs in staging)
20:35 < ricky> I think we can reenable puppet email to everybody once that SELinux thing gets fixed
20:35 < mmcgrath> ricky: k
20:36 < mmcgrath> Ok, so I'll create a new testing repo, put it on all the servers but make it so you have to explicitly enable it to use it.
20:36 < smooge> mmcgrath, I am working on a short blurb for what I have done in the past and what we could see if it works for us
20:36 -!- ayoung [~ayoung(a)cumm111-0b01-dhcp172.bu.edu] has joined #fedora-meeting
20:36 < smooge> its longer than IRC level so will send to infrastructure list later today
20:37 < mmcgrath> smooge: k, is it vastly different from what we've generally agreed upon here?
20:37 < smooge> ricky can I get them right now even with the selinux stuff
20:37 < mmcgrath> OH! that reminds me, another thing we didn't do this time around...
20:37 < ricky> So any thoughts about automating updates that come from RHEL as opposed to EPEL/Infra repo?
20:37 < smooge> I am not sure.. it could be :)
20:37 < mmcgrath> we didn't update in staging first.
20:37 < ricky> smooge: Really? As in emails in the form of "Puppet Report for XXX" ?
20:37 < mmcgrath> or if we did staging didn't function well for us.
20:37 < smooge> ricky please
20:38 < smooge> ricky best way for me to learn
20:38 < smooge> mmcgrath, the issues I had with updating staging was a couple
20:38 < ricky> Oh, sorry - I thought you said you were getting them, not that you wanted to get them
20:38 < ricky> Sure thing
20:38 < smooge> 1) stuff wasn't exactly the same as in production
20:38 < mmcgrath> ricky: share the pain :)
20:38 < smooge> 2) boxes are spread out over many xen servers which needed to be rebooted due to xen changes
20:38 < smooge> 3) which affected boxes that weren't staging
20:39 < mmcgrath> I'm more specifically wondering how we missed the transifex and fedoracommunity updates, because neither of those rpms are capable of working in our environment at the moment.
20:39 < mmcgrath> I mean, once we have the testing repo in place, that might be fixed, but it'd still be good to have a way to catch it
20:40 < smooge> I can go check the logs, but I do not think they had been updated on those boxes til I got to them
20:40 < mmcgrath> smooge: thats what I mean, once you updated them did you check to see they were still working?
20:40 < smooge> so yes they had not been properly tested
20:40 < smooge> I get it slowly
20:41 < mmcgrath> smooge: one thing I had started working on but need to get back to is this:
20:41 < mmcgrath> http://git.fedorahosted.org/git/fedora-infrastructure.git/?p=fedora-infra...
20:41 < mmcgrath> .tiny http://git.fedorahosted.org/git/fedora-infrastructure.git/?p=fedora-infra...
20:41 < smooge> mmcgrath, no I had not.. to be honest I didn't grok that it was breaking things.
20:41 < zodbot> mmcgrath: http://tinyurl.com/yeocsvz
20:41 < mmcgrath> sorry
20:41 < mmcgrath> ah yeah.
20:41 < mmcgrath> one thing I usually try to do is update staging first and make sure they're all still working before moving on
20:41 < mmcgrath> that's a good step to add to our SOP
20:41 < sijis> is stg done a day or so prior to prod?
20:42 < mmcgrath> smooge: but that link has some scripts I was working on to basically go out and hit our environment, doing tests for 200's, things like that.
20:42 < smooge> I thought changes to transifex would have been tested before I got to them... I am quite guilty of the Somebody Else's Problem field
20:42 < smooge> sijis, it will be
20:42 < sijis> ah ok. good
20:42 < mmcgrath> smooge: well, there's multiple types of tests involved, but it's always up to us to verify things are working when we're the ones making the change.
20:42 < smooge> sijis, I will add that to my self-flagellation email I am writing
20:43 -!- spoleeba [~one@fedora/Jef] has joined #fedora-meeting
20:43 < smooge> mmcgrath, yes. I agree I got caught up in trying to get everything done by window and didn't do my job properly.
20:43 < mmcgrath> We don't exactly make it easy :)
20:44 < smooge> admitting you screwed up is the first step in screw-a-holics anonymous
20:44 < mmcgrath> hopefully after skvidal's work is done updates won't be such a big deal.
20:44 < skvidal> we'll see
20:45 < mmcgrath> smooge: but yeah, take a look at those fedora-infrastructure.git/scripts/site-tests/ scripts
20:45 < mmcgrath> they're nifty :)
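The site-test scripts in that repo aren't reproduced here, but the core idea mmcgrath describes (hit each service, flag anything that isn't a 200) can be sketched with the network fetch split out from a pure classification step. The URL list and function names are illustrative, not the repo's actual contents:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def classify(results):
    """Split (url, status) pairs into passing and failing checks;
    anything that is not an HTTP 200 counts as a failure."""
    ok = [url for url, status in results if status == 200]
    bad = [(url, status) for url, status in results if status != 200]
    return ok, bad

def fetch_status(url, timeout=10):
    """Return one URL's HTTP status code, or 0 if the connection fails."""
    try:
        return urlopen(url, timeout=timeout).getcode()
    except HTTPError as e:
        return e.code  # the server answered, but not with a success code
    except URLError:
        return 0       # DNS, connect, or timeout failure

# Usage sketch: ok, bad = classify([(u, fetch_status(u)) for u in site_urls])
```

Run right after an update window, a loop like this catches the "transifex got updated and silently broke" case: the service answers with a 500 (or nothing at all) instead of a 200, and shows up in `bad` before users notice.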
20:45 < mmcgrath> ok, anyone have anything else on this topic before we move on?
20:46 < smooge> another repo I need to check out. is that ok for my office box or should it stay inside the colo?
20:46 < smooge> I am done
20:46 < ricky> It's public
20:46 < mmcgrath> smooge: that one's ok to do whatever with, it's on fedorahosted.org
20:46 < ricky> (As in, git://git.fedorahosted.org/git/fedora-infrastructure.git)
20:47 < smooge> ok
20:47 < abadger1999> smooge, mmcgrath: Staging is a hybrid environment though.... I think fedoracomunity and transifex are both updated beyond production in staging.
20:47 < mmcgrath> they're both in some weird state for sure.
20:48 < smooge> abadger1999, my writeup covers a possible fix. BY ADDING MORE BUREAUCRACY. No not really.. wanted to see if skvidal was awake yet
20:48 < mmcgrath> Ok, well lets all think on this some more and re-group next week.
20:48 < skvidal> smooge: thanks, you're a prince
20:48 < mmcgrath> #topic search engine
20:48 -!- zodbot changed the topic of #fedora-meeting to: search engine (Meeting topic: Infrastructure)
20:48 < mmcgrath> a-k: any update on the search engine?
20:48 < abadger1999> The new repo will go a long ways.
20:48 * mmcgrath is trying to speed things up since we've only got 10 minutes or so left
20:48 < a-k> Really fast update... No progress to report this week
20:48 < smooge> skvidal, you are welcome. I see you get enough ribbing as it is so I owe you a lunch at a cafe next time I am in NC
20:49 < mmcgrath> a-k: no worries
20:49 < mmcgrath> #topic Freeze
20:49 -!- zodbot changed the topic of #fedora-meeting to: Freeze (Meeting topic: Infrastructure)
20:49 * ricky shivers
20:49 < mmcgrath> Just a reminder, we freeze for two weeks starting next tuesday
20:49 < skvidal> smooge: remember, I'm one of your followers :)
20:49 < sijis> ricky: funny (not) :)
20:49 < smooge> YOU ARE AN INDIVIDUAL
20:49 < abadger1999> skvidal: One thing I'm anticipating -- new pkgdb won't go into production in time for this freeze. There's just too many outstanding issues.
20:50 < smooge> ok freeze tag
20:50 < skvidal> abadger1999: :(
20:50 < ricky> Just a heads up, we may try to get a change request in for transifex 0.7
20:50 < abadger1999> That means, tags from the pkgdb and critpath won't be there until after we unfreeze.
20:50 < smooge> ok when are we freezing exactly
20:50 < skvidal> abadger1999: fooey
20:50 < ricky> Docs needs this badly for their translations
20:50 -!- djf_jeff [~jeff(a)modemcable026.33-70-69.static.videotron.ca] has quit Quit: I quit
20:50 < mmcgrath> smooge: the 16th
20:50 < G> brrr, it's cold in here :P
20:50 -!- mether [~Rahul(a)22.214.171.124] has quit Ping timeout: 252 seconds
20:50 < smooge> abadger1999, can we go for a change request for the change?
20:51 < abadger1999> Welll...
20:51 < mmcgrath> ricky: no way to get it in before the freeze?
20:51 < abadger1999> Oxf13: Under the new no frozen rawhide, when are we doing mass branching?
20:51 < Oxf13> abadger1999: alpha freeze
20:51 < Oxf13> so... tuesday
20:52 < ricky> That might happen as well - I'll try to get some test repos setup and tested by this weekend
20:52 < abadger1999> Okay... smooge, If mass branching is done, I might do it via change request.
20:52 < abadger1999> But I'm very hesitant.
20:52 -!- jaxjaxmob [~jaxjaxmob(a)126.96.36.199] has quit Ping timeout: 256 seconds
20:52 < mmcgrath> abadger1999: whats the worry?
20:52 -!- gholms|mbp is now known as gholms
20:52 < mmcgrath> technically if the mass branch is part of the release, it's not actually frozen.
20:52 < ricky> I'm not sure if we need specific testing for docs' use case though, since they're apparently the big consumers for this update
20:53 < abadger1999> mmcgrath: Lots of changes, lots of bugs I noticed and squashed, sync script is slow, db is huge.
20:53 < mmcgrath> abadger1999: oh, this is all related to the work you're doing with pkgdb?
20:53 < smooge> abadger1999, ok thanks
20:54 < abadger1999> mmcgrath: Yep. And a little part of it is just that I didn't do the majority of the code this time so my gut doesn't trust all of the changes that went in yet.
20:54 < mmcgrath> abadger1999: <nod> well as that comes let me know how I can help
20:54 < abadger1999> Some time in staging will let me know what to expect.
20:55 -!- cwickert [~chris@fedora/cwickert] has joined #fedora-meeting
20:55 < abadger1999> ricky, mmcgrath, skvidal: So here's a question -- is tx update more important than new pkgdb?
20:55 < abadger1999> new pkgdb gets us tags and critpath which we need.
20:55 < abadger1999> But it sounds like the tx update needs some love and is important as well.
20:55 < ricky> The tx update is a blocker for docs, so it's pretty important
20:56 < ricky> Right now, we have it running in staging - we need test repos (ideally test repos that test docs workflow) and also some config file cleanup.
20:56 < smooge> ricky, it's just documentation.. i mean next we will be worrying about quality assurance :)
20:56 < abadger1999> Do you guys want me to switch over to working on tx instead of pkgdb since I already am sure pkgdb is going to slip?
20:56 < ricky> (This is why puppet is currently disabled on app01.stg, sorry for hogging it :-))
20:57 -!- J5 [~quinticen(a)ool-44c7526f.dyn.optonline.net] has quit Ping timeout: 272 seconds
20:57 < mmcgrath> abadger1999: I don't really know I have the knowledge to answer that.
20:57 < mmcgrath> I don't know what tx not making it in would mean
20:57 < G> mmcgrath: no French/German/etc translations?
20:57 < ricky> .ticket 1455
20:57 < zodbot> ricky: #1455 (transifex upgrade) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/1455
20:57 < mmcgrath> G: for what? we had german and french translations for F12
20:58 < mmcgrath> that's my confusion
20:58 < ricky> My info is what sparks said on the second-to-last comment
20:58 < ricky> Apparently docs translations need certain features from tx 0.7
20:58 < abadger1999> "This will adversely affect the Release Notes and all other Docs Guides if not completed by Mar 11."
20:58 < mmcgrath> huh? why
20:58 < ricky> Looking at that comment again though, the date is past the freeze, so not as much of a rush as I thought
20:59 < mmcgrath> ricky: k
20:59 < mmcgrath> well since we're about done I'm going to open the floor real quick
21:00 < mmcgrath> #topic open floor
21:00 -!- zodbot changed the topic of #fedora-meeting to: open floor (Meeting topic: Infrastructure)
21:00 < mmcgrath> anyone have anything they'd like to quickly discuss?
21:00 < smooge> i had something.. sneezed and forgot it
21:00 < smooge> don't turn 40.. its the new 80
21:00 < mmcgrath> hahaha
21:00 < mmcgrath> Ok, and with that
21:00 < abadger1999> :-)
21:00 < mmcgrath> #endmeeting
21:00 -!- zodbot changed the topic of #fedora-meeting to: Channel is used by various Fedora groups and committees for their regular meetings | Note that meetings often get logged | For questions about using Fedora please ask in #fedora | See http://fedoraproject.org/wiki/Meeting_channel for meeting schedule
21:00 < zodbot> Meeting ended Thu Feb 11 21:00:37 2010 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot .
21:00 < zodbot> Minutes: http://meetbot.fedoraproject.org/fedora-meeting/2010-02-11/fedora-meeting...
21:00 < mmcgrath> thanks for coming everyone!
21:00 < zodbot> Minutes (text): http://meetbot.fedoraproject.org/fedora-meeting/2010-02-11/fedora-meeting...
21:00 < zodbot> Log: http://meetbot.fedoraproject.org/fedora-meeting/2010-02-11/fedora-meeting...
Can one of the mailman folks look into this?
-------- Forwarded Message --------
From: Jukka Lahtinen <walker(a)netsonic.fi>
Subject: List archive grouping
Date: Wed, 10 Feb 2010 11:50:54 +0200 (EET)
I have been reading the Fedora package-announce mailing list through the
archive for quite a while, ordered by date.
Before the archive was moved to another address in December, the monthly
archive listing was grouped by date, thus it was easy to follow and see
where entries for each day began.
But ever since it was moved to
when I click the "Date" link for the month, I see just one long list with
no daily headings and no grouping whatsoever.
I wish the list were grouped by day again, like it used to be.
And another thing that I miss, some time ago there used to be a list of
packages affected on each notification posting.
Now there is just something like
"This update can be installed with the "yum" update program. Use
su -c 'yum update binutils' at the command line."
as an example from the binutils-188.8.131.52.14-36.fc12 announcement.
However, "yum update binutils" didn't update the binutils-devel package
that I suppose was also affected by the same changes, as it showed in
the list of packages having updates available.
It isn't ALWAYS as obvious as it was in this case, to a user like me,
to guess which updates are related, when there isn't a separate announcement
for every individual updateable package.
So I'd like to also get back the affected packages listing that there once
was.
Fedora -- Freedom² is a feature!
The alpha... is upon us!
Man, time goes quick. The alpha release is scheduled for March 2nd. Two
weeks prior (the 16th) we will have a partial freeze of all
Infrastructure. I'll announce it again when the time comes, please don't
push a bunch of changes out on the 15th though :)
My name is Larry and I have used Fedora off and on since it was
initially released and Linux in general since around 1998. I started
with Red Hat 5.2. I am currently employed as a Linux System Admin for a
large hosting provider and have worked as an administrator since 2004.
While I am proficient with bash and can generally script what I need, I
am by no means a developer so figured joining the infrastructure team
would be a good way to help out with Fedora. I normally recommend Fedora
to people who ask which OS should be used for desktops. I'll be glad to
answer any questions anyone has :)
Hello to all the list,
even though I subscribed to the list and joined the Fedora Project some time
ago, first as a translator and later as part of the infrastructure group,
due to some problems (motorbike accident, relocation and some other RL
issues) I was not able to start participating in the list/group
activities or attend meetings and such...
Now that everything is back to normal I want even more to get
involved in the project at the next level. Of course I have to take some
time to read all the Infrastructure docs/wikis, but I'll do that.
Just a few notes about me: I'm a network engineer and trainer by trade,
often working with open source technologies and Linux (Fedora, RHEL and
CentOS distributions) as a sysadmin while always trying to learn something new.
I'm working toward my RHCE (I should have sat the exam last January but
could not due to the above happenings), and I think nothing can benefit me
more than applying what I study.
If anyone has any suggestions about docs or books that could come in
handy please let me know; well, it's time to go read the infrastructure wikis.
Re-sending my email because the first one got held in moderation
(non-member to member-only list). Please disregard the duplicate.
I'm about to make a presentation at Geek Camp 3.0 with the following abstract:
Trac: Issue Tracking and Project Management
Trac is an enhanced wiki and issue tracking system for software
development projects. Trac strives to impose as little as possible on
a team's established development process and policies. For this reason
and more, it is being used by the Fedora Project for its fedorahosted.org service.
This talk will cover installation of trac and a sample trac workflow
for a software development project.
I have used trac for FreeMedia, FAmSCo and Ambassador Membership and have some
experience installing the software.
It has been suggested to me that it would be great to incorporate more
Fedora experience into the presentation. Can anyone indulge me with
the details surrounding using trac for fedorahosted.org? Champions/How
the present service is set up/Quotable quotes/ - those sort of things
that are great to throw at the audience for emphasis. :)
There will be an outage starting at 2010-02-04 23:00 UTC, which will last
approximately 1 hour.
To convert UTC to your local time, take a look at
http://fedoraproject.org/wiki/Infrastructure/UTCHowto or run:
date -d '2010-02-04 23:00 UTC'
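The same conversion can be wrapped in a tiny helper, e.g. (a sketch only; the function name is made up and GNU date is assumed):

```shell
# Minimal sketch (GNU date assumed; the function name is made up):
# print a given UTC timestamp in the caller's local timezone.
utc_to_local() {
    date -d "$1 UTC" '+%Y-%m-%d %H:%M %Z'
}

# Example: TZ=America/New_York utc_to_local '2010-02-04 23:00'
# prints the outage start in US Eastern time (18:00 EST).
```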
CVS / Source Control
Fedora Account System
Fedora Package Database
Reason for Outage:
Network team is working on stuff in PHX2. This is a "there may be an
outage" type deal so it's quite possible this will have no impact on us.
Please join #fedora-admin in irc.freenode.net or respond to this email to
track the status of this outage.
19:59 < mmcgrath> #startmeeting Infrastructure
19:59 < zodbot> Meeting started Thu Feb 4 20:01:09 2010 UTC. The chair is mmcgrath. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:59 -!- JSchmitt [~s4504kr(a)p4FDD0050.dip0.t-ipconnect.de] has joined #fedora-meeting
19:59 -!- JSchmitt [~s4504kr(a)p4FDD0050.dip0.t-ipconnect.de] has quit Changing host
19:59 -!- JSchmitt [~s4504kr@fedora/JSchmitt] has joined #fedora-meeting
19:59 < zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:59 < mmcgrath> who's here?
19:59 -!- zodbot changed the topic of #fedora-meeting to: (Meeting topic: Infrastructure)
19:59 * lmacken
19:59 < Oxf13> I'm here, but distracted with lunch and another meeting
19:59 * jaxjax is here
19:59 < yawns1> here
19:59 < wzzrd> here
20:00 -!- sheid [U2FsdGVkX1(a)weizentrinker.com] has joined #fedora-meeting
20:00 * dgilmore is present
20:00 * hiemanshu
20:00 * skvidal is here
20:00 * nirik is around in the cheap seats.
20:00 -!- a-k [~akistler@2002:638e:1d25:3:20d:56ff:fe10:bb8d] has joined #fedora-meeting
20:00 < sheid> is here
20:00 * a-k is here
20:00 < mmcgrath> no one creates meeting tickets anymore so we can skip that :)
20:00 * abadger1999 here
20:00 < mmcgrath> #topic /mnt/koji
20:00 -!- zodbot changed the topic of #fedora-meeting to: /mnt/koji (Meeting topic: Infrastructure)
20:00 < mmcgrath> So I'm just making everyone aware what's going on here.
20:01 < mmcgrath> 1) we have a try'n buy from Dell on an equallogic
20:01 < mmcgrath> 2) /mnt/koji is 91% full
20:01 < mmcgrath> so we need to get cracking.
20:01 -!- mizmo [~duffy@fedora/mizmo] has joined #fedora-meeting
20:01 < mmcgrath> the equallogic is in a box in the colo so hopefully it won't take long.
20:01 < lmacken> I still have a script running from last night that is cleaning /mnt/koji/mash/updates
20:01 < mmcgrath> lmacken: oh that's good to know, any estimate on how much it'll clean up?
20:02 < lmacken> mmcgrath: I'm not quite sure...
20:02 < mmcgrath> <nod> no worries.
20:02 < lmacken> there are a *ton* of mashes to clean up
20:02 < Oxf13> hard to say with the hardlinks
20:02 < dgilmore> i expect little
20:02 * ricky is around
20:02 < dgilmore> since it should be mostly hardlinks
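The hardlink point above is why the savings are hard to estimate: deleting a file whose inode has other names elsewhere frees nothing. A rough sketch of separating the two cases (GNU find/awk assumed; the function names and paths are illustrative, not the real cleanup script):

```shell
# Sketch only (GNU find/awk assumed; names and paths are made up).
# A file with link count 1 is uniquely owned by the tree, so deleting
# it frees space; link count >1 means another name for the same inode
# may survive elsewhere (e.g. in the main /mnt/koji store).

freeable_bytes() {
    # Sum sizes of files with exactly one link under directory $1.
    find "$1" -type f -links 1 -printf '%s\n' | awk '{s+=$1} END {print s+0}'
}

shared_bytes() {
    # Sum sizes of files with more than one link under directory $1.
    find "$1" -type f -links +1 -printf '%s\n' | awk '{s+=$1} END {print s+0}'
}

# e.g. freeable_bytes /mnt/koji/mash/updates
```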
20:02 < mmcgrath> I'm *hoping* to have that thing installed and pingable by early next week.
20:02 < mmcgrath> the plan is going to be this.
20:02 < mmcgrath> once it's up and running I'm going to drop everything I'm doing and try to get it up and going.
20:02 < mmcgrath> dgilmore has promised a significant portion of his time as well.
20:02 * dgilmore will be focusing on it also
20:03 < mmcgrath> we're going to be focusing on testing, speed, what works, virtualized, unvirtualized, etc.
20:03 < mmcgrath> the equallogic will be exporting an iscsi interface, it's our job to figure out what to do with it.
20:03 < Oxf13> I've also promised some of my time to help generate traffic for testing
20:03 < mmcgrath> Oxf13: excellent
20:03 < mmcgrath> mdomsch: I know you enjoy the equallogics, do you have any interest in being involved in this?
20:04 < mdomsch> mmcgrath, I probably shouldn't, just so you feel it's a fair eval
20:04 < mdomsch> but if you have questions, hit me and I'll try to help
20:04 < mmcgrath> mdomsch: fair enough :)
20:04 < mmcgrath> Ok, so that's really all I have on that for the moment, any other questions?
20:05 * mdomsch wants to bias the decision, but :-)
20:05 < mmcgrath> mdomsch: you just want to make it a 'n buy? :)
20:05 -!- giarc [~cwt(a)184.108.40.206] has joined #fedora-meeting
20:05 < mmcgrath> Ok, moving on.
20:05 < mmcgrath> #topic PHX2 network issues
20:05 -!- zodbot changed the topic of #fedora-meeting to: PHX2 network issues (Meeting topic: Infrastructure)
20:06 < mmcgrath> so there's been just a lot of strange things at the network layer in PHX2.
20:06 < mmcgrath> our data layer traffic has been fine so that's good.
20:06 < skvidal> mmcgrath: scooby doo sees odd things - phx2 is downright haunted
20:06 < mmcgrath> It seems much of that has been fixed at least as of right now.
20:06 < mmcgrath> skvidal: :)
20:06 < mmcgrath> Still, the way things are is just too bad for releng and QA to do their work
20:06 < smooge> here..
20:06 < mmcgrath> so I'm working on setting up alternate sites to grab their snapshots and test info.
20:07 < mmcgrath> one example is on sb1: http://serverbeach1.fedoraproject.org/pub/alt/stage/
20:07 * mmcgrath thanks the websites-team and likely ricky for getting that all properly branded.
20:07 * ricky passes the thanks onto sijis :-)
20:07 < mmcgrath> The good news is so far this setup hasn't required a change in releng's workflow.
20:08 < mmcgrath> the 'eh' news is that we haven't fully tested it yet, but over time as people use it, if it's working we'll keep it and/or add additional sites.
20:08 < mmcgrath> Any questions on that?
20:08 -!- iarlyy [~iarlyy(a)220.127.116.11] has joined #fedora-meeting
20:08 < mmcgrath> alllrighty
20:08 < mmcgrath> #topic Fedora Search Engine
20:08 -!- zodbot changed the topic of #fedora-meeting to: Fedora Search Engine (Meeting topic: Infrastructure)
20:08 < mmcgrath> a-k: whats the latest?
20:09 < a-k> The news this week is that I've made Nutch available at
20:09 < a-k> #link http://publictest3.fedoraproject.org/nutch
20:09 < a-k> I intend to see how much both Xapian and Nutch can crawl before they break
20:09 < a-k> With Nutch, I expect the time it takes will just become unacceptable eventually
20:09 < a-k> Nutch takes longer than Xapian to crawl
20:09 < a-k> I still intend to keep looking for/at other candidates, too
20:09 < nirik> a-k: what content are you pointing it at right now?
20:09 < mmcgrath> Does Nutch make any smart decisions about crawling?
20:10 < a-k> I point both at just http://fedoraproject.org
20:10 < mmcgrath> a-k: FWIW, one of the test things I've been doing is searching for "UTC" I've found it's a good way to determine a good engine from a bad one on the wiki
20:10 < mmcgrath> for example:
20:10 < mmcgrath> https://fedoraproject.org/wiki/Special:Search?search=UTC&go=Go
20:10 -!- iarlyy [~iarlyy(a)18.104.22.168] has left #fedora-meeting ["Leaving"]
20:10 < mmcgrath> CRAP
20:10 < a-k> mmcgrath: what do you mean by smart?
20:10 < mmcgrath> http://publictest3.fedoraproject.org/nutch/search.jsp?lang=en&query=UTC
20:10 < mmcgrath> not bad
20:10 < mmcgrath> well, nutch found the UTCHowto
20:11 < mmcgrath> instead of all the ones below it.
20:11 * mmcgrath just sayin.
20:11 < skvidal> cool
20:11 < a-k> It's important not to confuse searching with indexing
20:11 < mmcgrath> a-k: how long are we talking about for crawling with nutch?
20:11 < nirik> a-k: you might also try meetbot.fedoraproject.org and see how it does with irc logs.
20:12 < a-k> Nutch crawled in about 16 hours what Xapian crawled in 8
20:12 < a-k> Neither crawl covers the complete site yet
20:12 < mmcgrath> are there tunables? is this as simple as 'add more processes' ?
20:13 < a-k> Nothing is especially tunable. It might be limited by bandwidth.
20:13 < mmcgrath> yeah
20:13 < smooge> crawler needs more systems badly...
20:13 < nirik> don't shoot the url! ;)
20:13 < mmcgrath> 16 hours is a lot but might be acceptable.
20:14 < a-k> Although part of Nutch's problem could be an inherent inefficiency in its Java code
20:14 < a-k> Xapian is compiled C
20:14 < mmcgrath> a-k: what did we get with that 16 hours exactly?
20:14 < a-k> About 44k documents indexed
20:15 < mmcgrath> and Xapian crawled the same thing?
20:15 < a-k> Nutch and Xapian crawl differently
20:15 < mmcgrath> a-k: where was the Xapian url again?
20:16 < a-k> #link http://publictest3.fedoraproject.org/cgi-bin/omega
20:16 < a-k> As always, I keep notes on the wiki page
20:16 < a-k> #link http://fedoraproject.org/wiki/Infrastructure/Search
20:16 < abadger1999> a-k: You also had the unicode thing you posted in #fedora-admin
20:16 < abadger1999> Were you able to find a fix for that?
20:17 < a-k> No fixes. Non-Latin characters haven't really been something for which there's a requirement yet.
20:17 < a-k> I thought Nutch was a little funky with non-Latin characters, e.g., переводу, compared to Xapian
20:17 < a-k> But I've found Xapian examples that handle non-Latin just as bizarrely
20:17 < a-k> Neither Xapian nor Nutch claim to handle non-Latin characters
20:18 < a-k> We briefly mentioned non-Latin (non-UTF8) in a previous meeting
20:18 < abadger1999> <nod>
20:18 < a-k> Should there be a requirement around it?
20:19 < abadger1999> mmcgrath: What do you think?
20:19 < a-k> I suspect any requirement would eliminate ALL candidates
20:19 -!- jokajak [~jokajak(a)r83h51.res.gatech.edu] has joined #fedora-meeting
20:19 < abadger1999> We have a lot of non-native English users.
20:19 < mmcgrath> it probably should be a requirement.
20:20 < jaxjax> ññ
20:20 < mmcgrath> a-k: I'd think most engines have support for it, if not we should contact them and find out why
20:20 < a-k> A requirement as opposed to something we take into consideration when choosing finally?
20:20 < smooge> actually it's really really really slow to do non-ascii at times
20:21 < dgilmore> a-k: handling all languages should be a requirement
20:21 < a-k> Both seem to handle searching by expanding DBCS into hex
20:21 < a-k> Most of the time it seems to work
20:21 < a-k> Some of the time the results look screwed up
20:22 < a-k> Anyway I don't think I've got much more to add right now
20:23 < mmcgrath> a-k: thanks
20:23 < mmcgrath> We'll move on for now
20:23 < mmcgrath> a-k: try to find out what the language deal is
20:23 < smooge> a-k I remember old search engines had problems where language formats got combined on the same page.
20:23 -!- fbijlsma_ [~fbijlsma(a)p54B2C984.dip.t-dialin.net] has quit Quit: Leaving
20:23 < a-k> mmcgrath: ok
20:23 < mmcgrath> Anyone have anything else on that?
20:24 < mmcgrath> k
20:24 < mmcgrath> #topic Our 'cloud'
20:24 -!- zodbot changed the topic of #fedora-meeting to: Our 'cloud' (Meeting topic: Infrastructure)
20:24 < mmcgrath> so I'm trying to get our cloud hardware back in order.
20:24 < mmcgrath> I've been rebuilding the environment and getting it prepared for virt_web
20:24 < smooge> yeah
20:24 < mmcgrath> which should be at or near usable at this point.
20:24 < smooge> what can I do to help
20:24 < smooge> oh you already did it
20:24 < mmcgrath> smooge: not sure yet, we have a new volunteer working with me, sheid
20:24 < mmcgrath> and I'm sure SmootherFrOgZ as well.
20:24 < smooge> cool
20:24 < mmcgrath> setting things up initially won't take long
20:25 < mmcgrath> it's getting them working and coming up with a solid maintenance plan that will be the tricky part.
20:25 < dgilmore> mmcgrath: what base are we using?
20:25 < mmcgrath> dgilmore: RHEL
20:25 < mmcgrath> and xen at first
20:25 < dgilmore> mmcgrath: ok
20:25 < mmcgrath> though the conversion to kvm should be quick
20:26 < dgilmore> did we sort out the libvirt-qpid memory leaks?
20:26 < mmcgrath> dgilmore: nope, I've got a ticket submitted upstream
20:26 < dgilmore> mmcgrath: any reason not to start with kvm?
20:26 * mmcgrath is hoping to find some C coders to submit patches for me.
20:26 < mmcgrath> dgilmore: not really
20:26 * dgilmore set up new box in colo with centos 5.4 and kvm
20:26 < dgilmore> its working great
20:27 < jokajak> i use kvm with my rhel 5.4 box and it works much better than xen ever did
20:27 < mmcgrath> the memory leak *might* be limited only to libvirt-qpid installs that can't contact the broker.
20:27 < mmcgrath> jokajak: that's weird, we've had generally the opposite experience. Performance has either been terrible or as good as xen but never better.
20:27 < dgilmore> i never got the deps sorted out to get libvirt-qpid running on my new box
20:27 < jokajak> i had stability problems with xen
20:28 < Oxf13> mmcgrath: I think it depends on what you install into the vm, and whether or not virtio is used
20:28 < mmcgrath> dgilmore: yeah, I need to come up with a long term plan for that too.
20:28 < Oxf13> without virtio, kvm is going to be slower than paravirt xen
20:28 < mmcgrath> Oxf13: yeah for us most issues were cleared with different drivers
20:28 < mmcgrath> I think we have most of it figured out now, our app7 is kvm
20:28 < nirik> kvm works great here, but I am using fedora hosts. ;)
20:28 < mmcgrath> nirik: yeah
20:28 < mmcgrath> so anyone have any questions on this for now?
20:29 < dgilmore> the most recent rhel kernel fixed some clock issues i was having
20:29 < mmcgrath> k
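The virtio point Oxf13 raises above usually comes down to the guest's libvirt device definitions; a minimal sketch of what the virtio variants look like (the image path, device names, and bridge are made up for illustration):

```xml
<!-- Hypothetical guest fragment: bus='virtio' / model='virtio' select the
     paravirtual drivers; without them qemu emulates slower IDE/e1000
     hardware, which matches the "kvm slower than paravirt xen" experience. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/app7.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```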
20:29 < mmcgrath> #topic Hosted automation
20:29 -!- zodbot changed the topic of #fedora-meeting to: Hosted automation (Meeting topic: Infrastructure)
20:29 < mmcgrath> jaxjax: you want to talk about this?
20:29 < jaxjax> yep
20:29 -!- jpwdsm [~jason(a)desm-45-237.dsl.netins.net] has joined #fedora-meeting
20:30 < jaxjax> I'm currently in the process of installing a full environment on a kvm virtual machine
20:30 < jaxjax> testing on my desktop was a bit crap and I expect to have it ready by end of this week so I can test properly
20:31 < jaxjax> some questions about fas integration
20:31 < mmcgrath> <nod>
20:31 < mmcgrath> sure, whats up?
20:32 < jaxjax> Can I work in the automatic creation of groups when required?
20:32 < jaxjax> or we would have to do it manually?
20:32 < mmcgrath> jaxjax: yeah, and it'll be required almost every time.
20:32 < mmcgrath> ricky: you still around?
20:32 < ricky> Yup
20:32 < mmcgrath> ricky: would you be interested in writing a CLI based fas client that creates groups?
20:32 < ricky> I don't think we have write methods exposed in FAS yet, so that will require FAS extra
20:32 < ricky> **extra FAS support
20:33 < mmcgrath> once you're logged in couldn't you just post?
20:33 < ricky> Well... I guess you can use the normal form and skip past having a JSON function for it
20:33 < ricky> You will probably just have hacky error handling in that case.
20:34 < jaxjax> Ricky: Do you mind if I contact you 2morrow or Sat for this?
20:34 < mmcgrath> ricky: well, should we focus on getting SA0.5 out the door so we can continue working on stuff like that?
20:34 < ricky> Yes
20:35 < ricky> jaxjax: Sure, either of those is fine
20:35 < jaxjax> thx, will do.
20:35 < mmcgrath> k
20:35 < mmcgrath> we'll have to meet up and figure out exactly what is still busted
20:35 < ricky> There's currently a privacy branch in the git repo
20:36 < ricky> (privacy filtering is the current main broken thing)
20:36 -!- sijis [~sijis@fedora/sijis] has joined #fedora-meeting
20:36 < ricky> There's basically one design decision I'd like to make before we can refactor all privacy stuff :-)
20:37 < mmcgrath> ricky: is that something you can work on in the coming week?
20:37 < ricky> Yeah, I'll get started on that this weekend
20:37 < mmcgrath> ricky: excellent, happy to hear it
20:37 < mmcgrath> anyone have anything else on this topic?
20:38 < mmcgrath> k, we'll move on
20:38 < mmcgrath> jaxjax: thanks
20:38 < mmcgrath> #topic Patch Wed.
20:38 -!- zodbot changed the topic of #fedora-meeting to: Patch Wed. (Meeting topic: Infrastructure)
20:38 < mmcgrath> smooge: want to take this one?
20:38 < ricky> Haha
20:38 * sijis is here late. sorry
20:38 < smooge> yes
20:39 < smooge> Ok I would like to make every second Wednesday of the month patch day
20:39 -!- fbijlsma [~fbijlsma(a)p54B2C984.dip.t-dialin.net] has joined #fedora-meeting
20:39 < smooge> we would run yum update on the systems and reboot as needed
20:39 < smooge> which lately has been, we will be rebooting every 2nd wednesday of the month
20:40 < mmcgrath> smooge: do you want to alter when our yum nag mail gets sent to us?
20:40 < mmcgrath> right now I think it's on the first day of the month
20:40 < smooge> yes. I will change it to the first weekend of the month
20:40 < smooge> close enough for government work
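Since cron ORs its day-of-month and day-of-week fields, "first weekend of the month" needs a guard; a sketch of the usual trick (the function and job names are made up, GNU date assumed):

```shell
# Sketch only (names are made up; GNU date assumed). A crontab line like
#   0 6 * * 6  is_first_saturday && send-yum-nag-mail
# fires every Saturday, and the guard skips all but the month's first one,
# because cron's day-of-month and day-of-week fields are ORed, not ANDed.
is_first_saturday() {
    # $1 (optional, for testing): any string date -d understands; defaults to now.
    d=${1:-now}
    [ "$(date -d "$d" +%u)" -eq 6 ] && [ "$(date -d "$d" +%-d)" -le 7 ]
}
```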
20:40 < smooge> in the case of emergency security items, we will patch as needed
20:41 < mmcgrath> yeah
20:41 * mmcgrath is fine with that
20:41 < mmcgrath> anyone have any issues there?
20:41 < smooge> usually systems will need to be rebooted per xen/kvm server
20:41 < mmcgrath> smooge: It'd be good to get this in an SOP
20:41 < mmcgrath> now that we're getting some actual structure around it.
20:41 < smooge> yes
20:42 < smooge> I have two in mind
20:42 < smooge> update strategy, server layout strategy
20:42 < ricky> Just curious, is this roughly the way big companies, etc. do updates?
20:42 < smooge> making sure we have services on different boxes so we don't screw up things too much
20:42 < smooge> it depends
20:43 < smooge> some big companies will do them at something like 2am every saturday morning
20:43 < smooge> some big companies will do them once a month
20:43 < smooge> and some will rely on their sub-parts to do it appropriately (eg never)
20:43 < ricky> But nothing like "reboot the db server automatically once a month," right?
20:43 < jaxjax> nop
20:43 < smooge> depends on the db server
20:43 < smooge> if it has a memory leak then yes
20:44 < mmcgrath> heh
20:44 < jaxjax> you dont do the updates for all servers at the same time
20:44 < ricky> Hahaa
20:44 < jokajak> why not use something like spacewalk to better manage updates?
20:44 < mmcgrath> jaxjax: because then we'd be using spacewalk?
20:44 < mmcgrath> doesn't that still require oracle anyway?
20:44 < smooge> we might when its postgres support is ready
20:44 < wzzrd> yes it does
20:44 -!- JSchmitt [~s4504kr@fedora/JSchmitt] has quit Remote host closed the connection
20:45 < smooge> jokajak, it is a good idea. we are just having to wait for things we have little knowledge of to help with
20:45 < mmcgrath> smooge: got anything else on that?
20:45 < skvidal> how does spacewalk help?
20:45 < smooge> jaxjax, yeah you usually schedule the servers into classes and do them per 'class' so that services stay up
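The per-'class' scheduling smooge describes can be sketched as a small loop, so one member of each redundant service stays up while its peer is patched (host names and the per-host action are purely illustrative, not real Fedora hosts):

```shell
# Sketch only: hosts grouped into classes; finish (and verify) one class
# before starting the next so redundant services never all go down at once.
# All names here are made up for illustration.
class_a="proxy01 app01"
class_b="proxy02 app02"

update_class() {
    # $1: space-separated hosts; $2: command invoked once per host.
    for host in $1; do
        "$2" "$host" || echo "FAILED: $host"
    done
}

patch_host() {
    # Illustrative per-host action for patch Wednesday.
    ssh "root@$1" 'yum -y update && shutdown -r now'
}

# update_class "$class_a" patch_host   # then verify, then do class_b
```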
20:45 < jaxjax> sorry was at phone
20:46 < smooge> skvidal, knowledge of what boxes are in what state.
20:46 < mmcgrath> skvidal: it makes it easy to track what servers need updates, send the 'do the update' requirement and see how it went afterward.
20:46 < skvidal> smooge: and massive infrastructure to do that
20:46 < jaxjax> yes, normally what you want is to avoid downtime, because some patches can leave the system not working properly
20:46 < ricky> Is it necessary to reboot the xen machines as often as the other ones?
20:46 < mmcgrath> I have to say that aspect of satellite did appeal
20:46 < mmcgrath> skvidal: yeah it does have a cost
20:46 < skvidal> mmcgrath: a huge cost
20:46 < mmcgrath> ricky: they keep releasing kernel updates.
20:46 < ricky> They don't seem to touch as much user data, so it's nice to avoid rebooting them if we can :-)
20:46 < smooge> I wouldn't call it a huge cost
20:46 < skvidal> mmcgrath: and for more or less 'yum list updates' that's a lot of crap to sift
20:47 < smooge> its pretty minimal compared to some of the beasts I have had to deal with
20:47 < ricky> Ah, I was thinking about the value of security updates on those vs. on proxies, etc.
20:47 < skvidal> smooge: you have to run an entire infrastructure and communications mechanism
20:47 * nirik notes some of the kernel updates lately don't pertain to all machines.
20:47 < mmcgrath> skvidal: updating all of our hosts monthly has become expensive though too.
20:47 < nirik> ie, driver fixes where the machine doesn't use that driver at all.
20:47 < skvidal> mmcgrath: how would spacewalk help that, then?
20:47 < mmcgrath> it's just a couple of clicks and it'll go do the rest.
20:47 < skvidal> mmcgrath: I'm not arguing against patch wednesday
20:47 < skvidal> I'm arguing against spacewalk being the answer
20:48 < mmcgrath> yeah I'm not so sold on spacewalk either
20:48 < mmcgrath> but the way we do updates now is pretty expensive.
20:48 < smooge> skvidal, I didn't say it was the answer. I said it "might" be the answer
20:48 < skvidal> smooge: let's talk about other solutions
20:48 < smooge> when the time comes it will be evaluated against what other frankenstein we can come up with to do it better
20:48 < mmcgrath> skvidal: FWIW, no one's actually said "we should use spacewalk"
20:49 < mmcgrath> jaxjax just asked why we don't and we told him :)
20:49 < smooge> I am not against frankensteins.. Its the Unix way
20:49 < skvidal> smooge: I'm not talking about frankensteins, either
20:49 < smooge> oh I am.
20:49 < jaxjax> I see.
20:49 < jokajak> s/jaxjax/jokajak ;-)
20:49 < skvidal> I'm talking about using the tools we have
20:49 < mmcgrath> oh jokajak
20:49 < mmcgrath> jokajak: jaxjax: wait, you two aren't the same person?
20:49 * mmcgrath only just realized that
20:49 < skvidal> smooge: do you have a rough set of requirements?
20:49 < mmcgrath> I kept thinking jaxjax was changing his nick to jokajak :)
20:49 < jaxjax> :D
20:50 < jaxjax> not at all
20:50 < smooge> skvidal, yes.. and when you assemble them together they become a frankenstein of parts. talk off channel after meeting
20:50 < skvidal> smooge: ok
20:50 < mmcgrath> Ok, anyone have anything else on that? if not we'll open the floor
20:51 < mmcgrath> alrighty
20:51 < mmcgrath> #topic Open Floor
20:51 -!- zodbot changed the topic of #fedora-meeting to: Open Floor (Meeting topic: Infrastructure)
20:51 < mmcgrath> anyone have anything else they'd like to discuss?
20:51 < mmcgrath> any new people around that want to say hello?
20:51 < jpwdsm> I think OpenID might (finally) be ready for some testing
20:51 < sheid> hello, i'm new ;)
20:51 < mmcgrath> jpwdsm: oh that's excellent news.
20:52 < mmcgrath> jpwdsm: how far away from it being packaged and whatnot
20:52 < jpwdsm> I can log into StackOverflow and LiveJournal with it, but that's all I've done
20:52 < mmcgrath> jpwdsm: is it directly tied to FAS or is it its own product?
20:52 < mmcgrath> jpwdsm: test opensource.com
20:52 < jpwdsm> mmcgrath: own product
20:52 < ricky> Nice :-) What publictest are you on again?
20:52 < jpwdsm> mmcgrath: will do
20:52 < jpwdsm> ricky: pt6.fp.o/id
20:52 < ricky> For what it's worth, I haven't had luck with opensource.com and google or livejournal's openid :-(
20:52 < mmcgrath> sheid: welcome
20:53 < ricky> Good to hear - I look forward to dropping openid out of FAS :-)
20:53 < jpwdsm> mmcgrath: I haven't done much packaging, so I'll probably need some help with that
20:53 < jpwdsm> ricky: It uses FasProxyClient, but that's it :)
20:53 < ricky> abadger1999 is our python/packaging guru, and we're all around if you have any questions on it
20:54 < ricky> We'll also want to ask abadger1999 and lmacken about using the FAS identity provider (and if the TG2 one works with pylons)
20:54 < mmcgrath> <nod>
20:55 < ricky> (disclaimer if you're not aware - this is written in pylons, which is kind of a subset of TG2 I guess)
20:55 < mmcgrath> yeah
20:55 < mmcgrath> Ok, anyone have anything else they'd like to discuss? If not we can close the meeting.
20:56 < Oxf13> I'd like to point out something for no frozen rawhide
20:56 < dgilmore> Oxf13: have at it
20:56 < G> oh, Infra meeting?
20:56 < Oxf13> my initial tests of doing two composes on two machines at once was favorable. there was not a significant increase in the amount of time necessary to compose
20:56 < mmcgrath> Oxf13: sure
20:57 < mmcgrath> G: hey
20:57 < G> damn, I was awake for it too
20:57 < mmcgrath> heheh
20:57 < dgilmore> Oxf13: :) nice
20:57 < Oxf13> this combined with lmacken's testing of bodhi means I think we can move forward with no frozen rawhide
20:57 < ricky> Have koji01.stg and releng01.stg been good for you and lmacken's testing?
20:57 < Oxf13> which means we will be stressing things more in the near future
20:57 < Oxf13> and it's going to cause a lot of confusion amongst the masses
20:57 < ricky> (and cvs01.stg)
20:57 < mmcgrath> Oxf13: that should be fine and we should have more hardware for you
20:57 < G> The good news guys, when I'm back in NZ I'll be able to attend them more often
20:58 < Oxf13> ricky: it was for luke. I wasn't using .stg for my testing
20:58 < mmcgrath> G: excellent :)
20:59 < Oxf13> ricky: I will be using .stg for dist-git testing soon, but that will require modifications to koji.stg
20:59 < mmcgrath> Oxf13: Ok, well I'm glad that's working out for you
20:59 < ricky> Cool
20:59 -!- Ac-town [~dymockd(a)shell.onid.oregonstate.edu] has quit Changing host
20:59 -!- Ac-town [~dymockd@fedora/Actown] has joined #fedora-meeting
21:00 < mmcgrath> ok, if that's it I'll close the meeting
21:00 < mmcgrath> #endmeeting
21:00 -!- zodbot changed the topic of #fedora-meeting to: Channel is used by various Fedora groups and committees for their regular meetings | Note that meetings often get logged | For questions about using Fedora please ask in #fedora | See http://fedoraproject.org/wiki/Meeting_channel for meeting schedule
21:00 < zodbot> Meeting ended Thu Feb 4 21:02:02 2010 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot .
21:00 < zodbot> Minutes: http://meetbot.fedoraproject.org/fedora-meeting/2010-02-04/fedora-meeting...
21:00 < zodbot> Minutes (text): http://meetbot.fedoraproject.org/fedora-meeting/2010-02-04/fedora-meeting...
21:00 < zodbot> Log: http://meetbot.fedoraproject.org/fedora-meeting/2010-02-04/fedora-meeting...
There will be an outage starting at 2010-02-10 17:00 UTC, which will
last approximately X hours. Outages will be small but noticeable for
small segments as systems are updated and rebooted.
To convert UTC to your local time, take a look at
http://fedoraproject.org/wiki/Infrastructure/UTCHowto or run:
date -d '2010-02-10 17:00 UTC'
All systems will be rebooted, but services should only be impacted in
small increments as we take down things in a loop.
Reason for Outage:
Monthly updates and security updates.
Please join #fedora-admin in irc.freenode.net or respond to this email to track
the status of this outage. Note that the Fedora Infrastructure team does not
run bugzilla.redhat.com, though.
Stephen J Smoogen.
Ah, but a man's reach should exceed his grasp. Or what's a heaven for?
-- Robert Browning