Developers and sysadmins: Config settings in wsgi scripts
by Toshio Kuratomi
Due to some bad examples of how to port code to wsgi, pretty much all of
our apps put configuration information into the wsgi scripts that start
up our apps. This is bad practice and we should stop now. The reasons
this is bad:
* There should be only one place that a config is set.
* Config should live in files that rpm marks as %config. Otherwise,
when a TG1 application's rpm is updated, the application will be using
its own version of the configuration until the next puppet run.
Case study:
Today I made some changes to the config files of all of our TG1
applications via puppet. The config changes had the desired effect
everywhere except FAS. After several hours of debugging, I found that a
configuration variable wasn't set to what it should be according to the
config file. Intuition hit and I found that the wsgi script was
overwriting that particular variable.
To fix this, I looked in modules/fas/files/fas.wsgi and found all of the
lines like::
turbogears.config.update({'global': {'server.environment': 'production'}})
I made sure that those lines were reflected in
modules/fas/templates/fas-prod.cfg.erb::
server.environment = 'production'
Then I committed those changes. I also made a patch for the upstream
code and applied it (you could open a bug report instead if you don't
have commit access) so that other people using FAS know that this is
bad practice and that it shouldn't continue.
Wash, rinse, and repeat for pkgdb, bodhi, elections, and mirrormanager.
Just letting you all know: if you think you have to check a wsgi
script into puppet, chances are the developer or packager should
instead be changing the code so that the setting goes into the config
file.
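To see why this bites, here's a minimal stand-in (plain dicts instead of
TurboGears' real config object; the names and values are illustrative)
showing how a late update() in the wsgi script silently shadows whatever
puppet writes to the %config file:

```python
# Stand-in for TurboGears' config: a plain dict, populated first from the
# config file and then clobbered by the wsgi script.
config = {}

def load_config_file():
    # Pretend this parsed the rendered fas-prod.cfg.erb, where puppet
    # just changed server.environment to 'staging'.
    config.update({'server.environment': 'staging'})

def wsgi_startup():
    load_config_file()
    # The bad practice: the wsgi script re-sets the value afterwards,
    # so edits to the config file never take effect.
    config.update({'server.environment': 'production'})

wsgi_startup()
assert config['server.environment'] == 'production'  # file said 'staging'
```

Whoever updates the value last wins, and the wsgi script always runs
last, which is why the config-file change had no visible effect.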
-Toshio
Re: [puppet: 1/2] Adding memcached selinux policy"
by Darren VanBuren
Why is maxconn an array? Seems like a waste to me.
Darren VanBuren
-------------------------
Sent from my iPod
Try Fedora 10 today. Fire it up. http://fedoraproject.org/
On Sep 22, 2009, at 7:38, Mike McGrath <mmcgrath(a)fedoraproject.org>
wrote:
> commit 5b943443066594955fb0194d65524dff4e5ad468
> Author: Mike McGrath <mmcgrath(a)redhat.com>
> Date: Tue Sep 22 09:38:19 2009 -0500
>
> Adding memcached selinux policy"
>
> modules/memcached/manifests/init.pp | 1 +
> 1 files changed, 1 insertions(+), 0 deletions(-)
> ---
> diff --git a/modules/memcached/manifests/init.pp b/modules/memcached/
> manifests/init.pp
> index bea2842..cdf5911 100644
> --- a/modules/memcached/manifests/init.pp
> +++ b/modules/memcached/manifests/init.pp
> @@ -1,5 +1,6 @@
> class memcached {
> package { memcached: ensure => present }
> + package { memcached-selinux: ensure => present }
>
> $maxconn = $memcached_maxconn ? {
> "" => "1024",
CVS1 and selinux
by Mike McGrath
Selinux on cvs1 is now in enforcing mode. Please keep an eye out for any
oddities or broken services and let us know.
-Mike
TG1/Cherrypy config change to make redirects more robust
by Toshio Kuratomi
mirrormanager uses the TurboGears raise redirect('/new/url') idiom
heavily. Today we found that whenever such a redirect occurred in
staging, the user's browser would end up at the production mirrormanager
site instead of staging. mmcgrath traced this to cherrypy creating URLs
like this:
http://admin.stg.fedoraproject.org/mirrormanager/ instead of like this:
https://admin.stg.fedoraproject.org/mirrormanager/
When the http:// URL goes back to the server, the server rewrites it as
an https:// URL. Due to the way staging works, that ended up being
https://admin.fedoraproject.org instead of admin.stg.
This problem also affects production -- it's just that it isn't as
apparent there. In production we end up doing two requests instead of
one -- the first one requests the http:// URL. Then apache tells the
client to redirect to https:// and the second request is made. This
also has the potential to return information to the server over http://
instead of https://. Although we haven't yet found a case where this
would reveal sensitive information (it would have to be a specific
controller method passing sensitive data through a redirect() call),
we want to close this potential for unpleasant surprises.
Luckily, there's a quick config change that makes this problem go away:
base_url_filter.on = True
base_url_filter.base_url = "https://admin.fedoraproject.org/APPNAME"
base_url_filter.use_x_forwarded_host = False
(substitute "admin.stg" for "admin" if you're deploying to staging.)
.on Turns on the base_url filter in cherrypy. Because we're deploying
on one domain anyhow, this is on for almost all of our configs.
.base_url manually specifies the base_url to use with the app. This
gets substituted into redirects as the scheme, host, and initial path.
.use_x_forwarded_host is the unexpected one. This was set to True on
almost all of our apps before. When True, it tells cherrypy to
construct the redirect URL from the X-Forwarded-Host header sent by the
apache proxy instead of using the manually specified base_url. The
X-Forwarded-Host header contains the host that the client requested
from the proxy. It's combined with the scheme (http or https) that
cherrypy itself is serving. Since we're serving http from the app
servers (https is on the proxies only), the constructed URLs use http.
The algorithm behind .use_x_forwarded_host simply makes assumptions
that aren't true in our environment, so we have to set it to False.
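A rough sketch of the URL construction described above (not cherrypy's
actual code; the function and argument names are made up for
illustration):

```python
def redirect_base(scheme_served, base_url, use_x_forwarded_host,
                  x_forwarded_host):
    """Sketch of how the base_url filter picks the base for redirects.

    scheme_served is what the app server itself speaks ('http' for us,
    since https terminates at the proxies).
    """
    if use_x_forwarded_host and x_forwarded_host:
        # Host comes from the proxy's header, scheme from the app server:
        # this is where the unwanted http:// URLs come from.
        return f"{scheme_served}://{x_forwarded_host}"
    # Otherwise trust the manually configured base_url verbatim.
    return base_url

# With the old settings we got plain-http redirect URLs:
assert redirect_base('http', 'https://admin.fedoraproject.org/mirrormanager',
                     True, 'admin.stg.fedoraproject.org') \
    == 'http://admin.stg.fedoraproject.org'
# With use_x_forwarded_host = False, the configured https base wins:
assert redirect_base('http', 'https://admin.fedoraproject.org/mirrormanager',
                     False, 'admin.stg.fedoraproject.org') \
    == 'https://admin.fedoraproject.org/mirrormanager'
```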
I've just deployed a config update to elections, bodhi, mirrormanager,
pkgdb, and fas that makes these changes. If I've missed any apps let me
know or update the config in puppet yourself.
Thanks,
Toshio
Introduction
by Mike Santangelo
Hi all,
I'm following along with the wiki and have reached the point where
I need to send an introduction e-mail to the list. :)
My name is Mike Santangelo and I'm a linux consultant within Red Hat. I
decided I wanted to get more involved and volunteer in the Fedora side
of the house as my time permits, so the infrastructure team looked like
a good place for me and my skills.
I've been working with various *nix's for around 17 years now, starting
with AT&T Unix way back in '92. Solaris was probably the OS I worked
with the most leading up to when I started working on Red Hat products
full time. I have worked with Red Hat products professionally off and on
since 2002 or so, and been experimenting with various Linux flavors
since the late 90's.
I'm an RHCE with a Certificate of Expertise in Enterprise Storage
Management (GFS and Clustering), and hope to get my certificate in
Satellite soon. I have done a lot of engagements centered around
Satellite and Clustering in the last year and a half so I am fairly
strong on those technologies. I'm also a very good troubleshooter, which
I think is my strongest skill. That, and I'm not afraid to go ask for
help if I don't know something.
I hope to be able to make some of the meetings soon. Since I'm a
consultant I am on the road a lot so schedules can be tough, but I will
definitely try. I'm not involved in anything else with Fedora yet, as
infrastructure seemed like the natural first place for me as I looked
around.
Thanks and look forward to working with everyone,
-Mike
Hello, My intro
by Chris Johnson
Hi all,
I've been lurking on the mailing list for a while and I finally
registered for my fedora account today (username: chrisj)
I'm interested in helping out as time permits. I got on irc once
(lurking again) and haven't really logged in since. I'll try to make a
few meetings after the holidays.
I'm planning to get my personal test systems set up soon. I just moved
and am still getting things straight at home. Bought a 750GB drive last
night and will be installing F10 over the weekend. I had been running
the U... distro and it's time to get back to the fedora/RH rpm way of
doing things :-)
I've used RedHat since before Fedora existed (I think 6 was the first
one). I started as a hobbyist for 2 years, then got a job as an admin and
have been doing Linux admin and Cisco networks for the last 5 years.
My current employer is a Win shop so I just get to run the DNS,
email, and network, but the network is 50 remote offices and 3
different data centers in the midwest. I don't mind the Windows too
much and can find my way around them; it's also kinda fun to get the
Linux and MS products to play nice together. I've worked with a lot of
different linux and OSS software products including: postfix,
openldap, apache, bind, samba, mailman, pam, built some custom rpm's,
etc. I use RHEL mostly at work and some fedora and Cent for testing
(some suse, deb, and slackware in the past). I used to do lots of
security firewall appliances with various linux distros (I was a big
fan of LRP when it would fit on a floppy), most of this is now done
with Cisco in my world. I can shell script pretty well and I've
written several perl scripts in the last few years (dabbled in php but
not enough to know it well). I've always been interested in python but
don't have much if any exp with it. I also don't have much experience
with SQL/DB or source control.
I was looking at the FIGs and would be interested in the base sysadmin
and sysadmin-noc for now while I figure out where everything is and
what it does. I'm also interested in more info on the sysadmin-tools
and sysadmin-web FIG.
So, next just apply for the FIGs, keep lurking, ask some questions,
show up for IRC meetings?
Thanks all,
--
Chris Johnson
++++++++++
j.chris.johnson(a)gmail.com
++++++++++++++++++++
Meeting Log - 2009-09-17
by Ricky Zhou
19:59 < mmcgrath> #startmeeting Infrastructure
19:59 < zodbot> Meeting started Thu Sep 17 19:59:45 2009 UTC. The chair is mmcgrath. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:59 < zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:59 -!- zodbot changed the topic of #fedora-meeting to: (Meeting topic: Infrastructure)
19:59 < mmcgrath> #topic Who's here?
19:59 -!- zodbot changed the topic of #fedora-meeting to: Who's here? (Meeting topic: Infrastructure)
19:59 * ricky
19:59 * nirik is in the back.
19:59 * SmootherFrOgZ is
20:00 * skvidal nods
20:00 -!- notting [n=notting@redhat/notting] has joined #fedora-meeting
20:00 * collier_s here
20:00 * a-k is here
20:00 -!- sseiersen|Laptop [n=Scott(a)96.244.48.139] has joined #fedora-meeting
20:01 < mmcgrath> Ok, doesn't look like we have any meeting tickets so we'll just get right into it
20:01 < mmcgrath> #topic FAS accounts and capcha
20:01 -!- zodbot changed the topic of #fedora-meeting to: FAS accounts and capcha (Meeting topic: Infrastructure)
20:02 < mmcgrath> One thing I wanted to make known is we recently implemented a capcha in FAS.
20:02 < mmcgrath> by we I mean abadger1999
20:02 < mmcgrath> the problem was mostly caused from spammers we think.
20:02 -!- RodrigoPadula [n=RodrigoP(a)189.106.73.153] has quit "Saindo"
20:02 < mmcgrath> STill more research to do but as many as 2 in every 3 account we have in FAS was created by a spammer.
20:02 < mmcgrath> or at least someone who didn't verify their email.
20:02 < mmcgrath> But! no more :)
20:03 < ricky> :-)
20:03 < smooge> here
20:03 < mmcgrath> we re-consider freeing up those account names as well. Something worth discussing.
20:03 < mmcgrath> they'd not have been in any groups or anything so it might be safe.
20:03 < mmcgrath> might not be though, more research required.
20:03 * ricky is all for freeing up accounts that have never been verified (ie never logged on)
20:03 < mdomsch> yo
20:03 < mmcgrath> mdomsch: hey
20:03 * ianweller rolls in for da lulz
20:04 < mmcgrath> anyone have any questions or comments on that?
20:04 < mdomsch> wow, that's crazy
20:04 < mmcgrath> mdomsch: indeed :)
20:04 < notting> so, does this screw up all our account/registration growth stats?
20:05 < mmcgrath> notting: it does as far as what we have done in the past but we can get new stats I think.
20:05 < abadger1999> mmcgrath: It'll be interesting if our initial registration drops
20:05 < mmcgrath> abadger1999: I'm almost positive it will.
20:05 < ricky> Did we ever really assume that # accounts = # of active people?
20:05 < mmcgrath> ricky: depends on 'we'
20:05 < abadger1999> It'll only drop significantly if the accounts are being created by a machine.
20:05 < mmcgrath> I really don't consider a contributor a contributor unless they have a fedorapeople account.
20:05 < ricky> I guess the question is if those specific numbers are usually quoted to the press or anything
20:06 < ricky> Same here - I get my counts from ls /home/fedora | wc -l on fedorapeople.org :-)
20:06 < mmcgrath> ricky: some are. *but* generally we encourage people to use 'contributor' when siting those numbers.
20:06 < abadger1999> People that sign up but never verify email would still get through the captcha.
20:06 < mmcgrath> and we define a contributor as cla_done + one group
20:06 < mmcgrath> which is the fedorapeople count.
20:06 < mmcgrath> and that count will stay the same
20:07 < mmcgrath> So that's really all there is on that.
20:07 < mmcgrath> One thing I wanted to talk about was with affix
20:08 < mmcgrath> but I don't see him around right now so we'll wait.
20:08 < mmcgrath> he's looking for search engines for us.
20:08 < mmcgrath> Ok, so next topic
20:08 < mmcgrath> #topic PHX1 to PHX2 move
20:08 -!- zodbot changed the topic of #fedora-meeting to: PHX1 to PHX2 move (Meeting topic: Infrastructure)
20:08 < mmcgrath> So smooge and I have been working to get the various network maps, inventory and other related directions to RH's IT department.
20:09 < mmcgrath> Much of that stuff is available in gobby.
20:09 < smooge> makes gobby gravy
20:09 < mmcgrath> smooge also put some network diagrams up http://smooge.fedorapeople.org/ideas/
20:09 -!- sseiersen|Laptop [n=Scott(a)96.244.48.139] has quit Remote closed the connection
20:09 < mmcgrath> It looks like we're going to be moving into 5 racks.
20:09 < mmcgrath> and spread properly so power isn't an issue.
20:09 < mmcgrath> we might expand into the 6th rack but I'm not counting on it, at least not for November.
20:10 < smooge> and look at smaller pdu's as we expand
20:10 * Oxf13
20:10 < Oxf13> here
20:10 < mmcgrath> smooge: yeah, are you a fan of those 1U horizontally mounted PDUs?
20:10 < smooge> mmcgrath, more that I am a fan of PDU's that aren't treated like checkbooks :)
20:11 < smooge> most PDU's have too many plugs for what you can use with modern equipment
20:11 < mmcgrath> maybe we should start getting servers with 4PS's in them :)
20:11 < mmcgrath> then double the PDU's
20:11 < mmcgrath> oh wait, that's the same as just doubling the pdus
20:12 < mmcgrath> anywho :)
20:12 < mmcgrath> So I still have some outstanding questions, I'm sure smooge does too
20:12 < mmcgrath> but it looks like this is going to be about a 48 hour outage.
20:12 -!- sseiersen|Laptop [n=Scott(a)96.244.48.139] has joined #fedora-meeting
20:12 < mdomsch> mmcgrath, when?
20:12 < smooge> no we should look at DC racks
20:12 < mmcgrath> mdomsch: after F12 ships, before the end of November.
20:12 < smooge> mdomsch, depends on how much F12 slide there is
20:13 < mdomsch> smooge, DC still doesn't buy much
20:13 < mmcgrath> Oxf13: whats the latest "what are the odds of F12 slipping"?
20:13 < Oxf13> no change
20:13 < mmcgrath> k
20:13 < Oxf13> still looking good
20:13 < mmcgrath> So does anyone have any questions or concerns about this move?
20:13 < mdomsch> mmcgrath, can we migrate bapp1 to another datacenter before then?
20:13 < mmcgrath> it's going to be a lot of planning, then a week or two of hell.
20:13 < mmcgrath> mdomsch: yeah we can add that to our list.
20:14 < mmcgrath> I've moved stuff I had budgeted for the last quarter of this FY into a more major purchse just thi sweek.
20:14 * ricky will probably be in his own hell week of at that point :-(
20:14 < mmcgrath> we'll have some servers in PHX2 hopefully in a couple of weeks that we can use for that.
20:14 < mmcgrath> ricky: sucker, at least we don't get graded :)
20:14 < mmcgrath> mdomsch: so right nwo the list of things we want to 'pre-move' is db1, db2, bastion3 (new) app1 and bapp1.
20:14 < mdomsch> mmcgrath, sure we do
20:15 < mmcgrath> mdomsch: I always thought this was more pass fail :)
20:15 < smooge> ricky its just midterms... you should skip them and come on out
20:15 < mdomsch> failure is not an option
20:15 < smooge> failure is always an option.. in fact most of my academic career is based on that.
20:16 < smooge> but then again one should not take Zonker Harris as a role model
20:16 < notting> if we slip, does that mean we relocate fudcon to mesa and do the move then?
20:16 < ricky> Haha
20:17 < mmcgrath> notting: actually not the worst idea I've ever heard :)
20:17 < Oxf13> hey, it'll be warmer than Toronto
20:17 -!- Sonar_Gal [n=Andrea@fedora/SonarGal] has quit "Leaving"
20:17 < skvidal> smooge: zipper, then?
20:18 < mmcgrath> One other thing we wanted to make people aware of is in PHX2 we're going to start seperating our network segments.
20:18 < mmcgrath> We're basically going to move to a 3 network system
20:18 < smooge> nah zipper is a real loser
20:18 < mmcgrath> 1) Buildsystem network
20:18 < mmcgrath> 2) Combined nfs / storage type network
20:18 < mmcgrath> 3) public network.
20:18 < mmcgrath> For those familiar with the environment, it's not too hard to figure out what will go where.
20:19 < ricky> Where does something not public like bapp1 go?
20:19 -!- thomasj [n=thomasj@fedora/thomasj] has quit Nick collision from services.
20:19 -!- thomasj_ [n=thomasj(a)e180136046.adsl.alicedsl.de] has joined #fedora-meeting
20:19 < mmcgrath> ricky: on the public network
20:19 < ricky> Oh, OK :-)
20:19 -!- thomasj_ is now known as thomasj
20:19 < abadger1999> smooge: You're just too old to appreciate him.
20:19 -!- XulLunch is now known as XulChris
20:19 < mmcgrath> the public network isn't so much "public IP space" as it is just for all general network stuff like the 834 network is now in PHX.
20:19 < ricky> Ah, OK
20:19 < mmcgrath> Ok, anyone have any questions on this? If not we'll move on.
20:19 < smooge> actually general network sounds better
20:19 -!- Pikachu_2014 [n=Pikachu_(a)85-171-18-56.rev.numericable.fr] has joined #fedora-meeting
20:20 < Oxf13> smooge: and sargent router?
20:20 < mmcgrath> Ok
20:20 < mdomsch> ricky, http://fpaste.org/oPMp/
20:20 < mmcgrath> #topic Favicon.ico
20:20 -!- zodbot changed the topic of #fedora-meeting to: Favicon.ico (Meeting topic: Infrastructure)
20:20 < mmcgrath> a-k: whats the poop on this?
20:20 -!- Sonar_Gal [n=Andrea@fedora/SonarGal] has joined #fedora-meeting
20:20 < a-k> There are 4 favicons in puppet, but only 2 get sourced, and none of them are actually used by existing html, as far as I can tell so far
20:21 < mmcgrath> a-k: what ticket number was that? I forget?
20:21 * a-k checks
20:21 -!- Pikachu_2014 [n=Pikachu_(a)85-171-18-56.rev.numericable.fr] has quit Read error: 60 (Operation timed out)
20:21 < a-k> .ticket 1669
20:21 < zodbot> a-k: #1669 (the old favicon must die.) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/1669
20:22 < mmcgrath> danke.
20:22 < mmcgrath> a-k: and you saw the note about smolts.org?
20:22 < a-k> Yes. Did you want to have no favicon at all on smolts?
20:22 < mmcgrath> We don't have one so just leave it null.
20:22 < a-k> OK
20:22 < mmcgrath> a-k: anything else on that?
20:23 -!- lmr [n=lmr(a)201.82.124.191] has quit "Ex-Chat"
20:23 < mdomsch> a-k: favicon doesn't have to get referenced in our html
20:23 < a-k> I had a couple suggestions in the ticket and Ricky liked one of them. If anybody has other ideas, let me know.
20:23 < mdomsch> it's automatically looked for by browsers
20:23 < a-k> mdomsch: yes. In document root as favicon.ico.
20:24 < mmcgrath> a-k: alrighty, thanks.
20:24 < a-k> That was one of my suggestions.
20:24 < mmcgrath> #topic mod_evasive
20:24 -!- zodbot changed the topic of #fedora-meeting to: mod_evasive (Meeting topic: Infrastructure)
20:24 < mmcgrath> So, some of you have probably seen make the mod_evasive module a bit mo' betta
20:24 < mmcgrath> This is largely because cvs1 has had some load issues recently
20:25 < mmcgrath> RH's internal search engine has been very agressive.
20:25 < mmcgrath> and google's crawler doesn't honor Crawl-Delay
20:25 < dgilmore> stupid bots
20:25 < mmcgrath> you can force google's crawler to crawl more slowly but it only lasts for 90 days.
20:25 < mmcgrath> So
20:25 < mmcgrath> I've decided to setup mod_evasive to be a bit more agressive about banning people.
20:26 < ricky> Does it have whitelists like denyhosts? ;-)
20:26 < mmcgrath> It's actually pretty hard to predict how or when someone will get banned because the various apache children don't talk to eachother.
20:26 < mmcgrath> ricky: it does.
20:26 < ricky> Ah, cool
20:26 < mmcgrath> I still want the content on cvs.fedoraproject.org to be searchable or I'd have banned it altogether.
20:26 < mmcgrath> it might be worth looking at alternatives to viewvc though.
20:27 < mmcgrath> or at least figuring out why it's so freaking slow.
20:27 < ricky> s/viewvc/cvs/ :-D
20:27 < notting> migrate all projects away from cvs?
20:27 < ianweller> cough get rid of cvs cough
20:27 < smooge> not our problem
20:27 < ricky> One side effect of viewvc is that it creates a bunch of junk rcs* files in /tmp
20:27 < ricky> I wasn't able to figure out why though
20:27 < smooge> ouch
20:28 -!- llaumgui [n=llaumgui(a)lns-bzn-51f-81-56-136-221.adsl.proxad.net] has joined #fedora-meeting
20:28 < mmcgrath> It's probably time for someone with time and courage to try starting the SCM Sig back up again.
20:29 < mmcgrath> Anywho, any questions about that?
20:29 -!- llaumgui [n=llaumgui(a)lns-bzn-51f-81-56-136-221.adsl.proxad.net] has quit Client Quit
20:29 -!- llaumgui [n=llaumgui(a)lns-bzn-51f-81-56-136-221.adsl.proxad.net] has joined #fedora-meeting
20:29 < abadger1999> svn!
20:29 < abadger1999> Or should that be !svn ?
20:29 < abadger1999> :-)
20:30 < mmcgrath> moving on :)
20:30 < mmcgrath> #topic pgpool
20:30 -!- zodbot changed the topic of #fedora-meeting to: pgpool (Meeting topic: Infrastructure)
20:30 < smooge> bitkeeper
20:30 < mmcgrath> welp, we tested pgpool in staging and it's been working on db2 for a while with no issues (and no connections)
20:30 < mmcgrath> mdomsch: after the meeting do you want to enable pgpool in production with mirrormanager?
20:30 < smooge> no connections?
20:30 < ricky> It could be good to start off with making the crawlers use pgpool
20:30 < mmcgrath> smooge: it's been deployed but the firewall's been active.
20:30 < ricky> Then the the mirrormanager and transifex apps
20:31 < mdomsch> mmcgrath, if you wish
20:31 < mdomsch> mmcgrath, I'll have to go to another meeting, so can't be there if it breaks
20:31 < mmcgrath> mdomsch: you won't have to do anything for that but it'd be good to have you around incase the sky falls.
20:31 < mmcgrath> the revert is simple enough and all.
20:31 < mdomsch> and I've got a minor MM update to push, to fix the can't-create-a-netblock bug
20:31 < mmcgrath> ah, well goodie.
20:31 < mdomsch> trying to to test that on stg
20:32 < mdomsch> but that's separate; so do your thing
20:32 < mmcgrath> mdomsch: will do.
20:32 < mdomsch> and remember it won't take effect for all the crawlers for a couple hours
20:32 < mmcgrath> <nod>
20:32 < mmcgrath> FWIW, I saw a measurable (not major) performance increase in my tests.
20:32 < mmcgrath> I can't explain that
20:32 < mmcgrath> but it was there.
20:33 < mmcgrath> perhaps the logging in phase of psql is slower then it could be :)
20:33 < mmcgrath> Ok, anyway. With that
20:33 < mmcgrath> #topic Open Floor
20:33 -!- zodbot changed the topic of #fedora-meeting to: Open Floor (Meeting topic: Infrastructure)
20:33 < mmcgrath> Anyone have anything they'd like to discuss?
20:33 < ricky> Can we get a quick overview of how the ipv6 stuff went?
20:33 < mmcgrath> Sure.
20:33 < ricky> Are all the major problems ironed out?
20:33 < mmcgrath> I'll do the quick version
20:34 < mmcgrath> we started, some people couldn't reliably use TCP traffic (and probably other types of traffic)
20:34 < mmcgrath> the iptables rule on the list won't work for us because RHEL5 doesn't support it yet.
20:34 < mmcgrath> but by specifying a lower MTU, I've not heard a single complaint since.
20:34 < mmcgrath> and we're *STILL* waiting on the glue record AFAIK.
20:34 < mmcgrath> I'm not sure why that is though
20:34 < ricky> Cool - that's something that happened on the person's side and not our side?
20:34 < mdomsch> mmcgrath, right, no glue yet
20:35 < mmcgrath> ricky: well it could be done in either place actually.
20:35 < mmcgrath> but by doing it on our server, others don't have to do it.
20:35 -!- lmr [n=lmr@nat/redhat/x-zqdnruhqxvutbeee] has joined #fedora-meeting
20:35 < ricky> Ah, good
20:35 < mmcgrath> oh and mdomsch did some neat stuff as he mentioned on the list wrt MM and geo ip
20:36 < mmcgrath> mdomsch: I saw you prodding warthog9 again about geoip dns. We starting to think that over again?
20:36 < mdomsch> mmcgrath, not really; i was just looking for how to implement a better backend for bind
20:36 < mdomsch> the zonefile for doing BGP lookups takes 1GB RAM
20:36 < mmcgrath> ah
20:36 < mdomsch> so I didn't do that; I custom-coded it in MM, takes 7MB
20:36 < smooge> wow thats big..
20:37 < mmcgrath> mdomsch: when you figured out how to do it in 7M did you do a dance?
20:37 < smooge> and then its small
20:37 * mmcgrath would have.
20:37 < mdomsch> oh yeah
20:37 < mdomsch> celeste was scared
20:37 < ricky> Heheh
20:37 < mmcgrath> hahah, it's like you've invented fire
20:37 < mmcgrath> well good work on taht too
20:37 < smooge> removed all the 0's?
20:37 < mdomsch> that's not in production yet
20:37 < mdomsch> but coming along
20:37 < mmcgrath> related to dns and geoip, I'd still like to see that as a TODO sometime.
20:37 < mmcgrath> but not urgent.
20:37 < mdomsch> ok, on that note
20:38 < mdomsch> we have an offer for hosting from China Unicom
20:38 < mmcgrath> we're probably getting to the point where we need to re-think our content distribution network.
20:38 < smooge> cool.
20:38 < mdomsch> they're looking for size estimates for servers
20:38 < smooge> can I go to do the buildout?
20:38 < mdomsch> what do we want to put there, and what server resources do we need?
20:38 < ricky> mmcgrath: wikipedia apparently uses powerdns for geodns
20:39 < mmcgrath> mdomsch: do you think they were thinking about providing servers as well?
20:39 < mdomsch> smooge, catch Ivory on IRC
20:39 < ricky> Maybe something to take a look at (and it has a bind-style zone file backend)
20:39 < mdomsch> mmcgrath, yes, I believe so. 2 asks:
20:39 < mmcgrath> ricky: yeah I think those are the big winners right now in that market. pdns or bind + a patch
20:39 < mdomsch> 1) they set up and run a mirror
20:39 -!- petreu| [n=peter(a)213.20.156.34] has quit "Don't worry; it's been deprecated. The new one is worse."
20:39 < mdomsch> 2) they give us dedicated hosting and Xen guests
20:39 < mmcgrath> So we wouldn't have to worry about the hardware at all with the xen guests?
20:40 < mmcgrath> I'd totally go for that
20:40 < mdomsch> that's the ask. We'll see.
20:40 < mmcgrath> it's worked out well for us in BU
20:40 < smooge> I think we would like xen dom0 if possible . domU's is nice
20:40 < mmcgrath> yeah
20:40 < mdomsch> so if anyone sees Ivory on IRC, be nice
20:40 < mmcgrath> heheh
20:40 < smooge> np.
20:40 < mdomsch> and if anyone speaks Chinese, that would be a big plus!
20:40 < mdomsch> as I don't
20:40 < smooge> crap. neither do i
20:40 < mmcgrath> yeah, do we have any native chinese speakers in the house?
20:40 < Oxf13> I can barely order it off a menu
20:41 * ricky wishes he could read/write, but nope :-(
20:41 < mdomsch> his english is pretty good, but he's concerned about it.
20:41 < mmcgrath> mdomsch: I'd know how to ask Ivory to order spicy tofu.
20:41 < mmcgrath> and then tell him it was good :)
20:41 < smooge> his english is probably better than mine
20:41 < ricky> Something like "la dou fu" :-)
20:41 < mmcgrath> mdomsch: I'll get some specs to you about what would be good to have over there.
20:41 < mdomsch> on another unrelated note
20:41 < mdomsch> I'll be at LinuxCon all next week
20:42 < smooge> mdomsch, cool
20:42 < Oxf13> ditto
20:42 < smooge> where is that this year?
20:42 < Oxf13> Portland
20:43 < smooge> cool. and wet
20:43 < smooge> blackberries are really good right now I have been told
20:44 < smooge> on my note, xen13 is now RHEL-5.4 and should not reboot as often
20:44 < mmcgrath> smooge: cheers!
20:44 < smooge> bastion1 should also really really think its bastion1
20:44 < ricky> Nice :-)
20:45 < ricky> Did it go pretty smoothly, or were there any bumps along the way?
20:45 < ricky> rhel 5.4 domU went pretty smoothly
20:45 < smooge> xen13 needed /etc/grub hand edited.
20:45 < ricky> Ah, yow
20:45 < smooge> so I had to reboot twice.. as I forgot the first time
20:45 -!- lmr [n=lmr@nat/redhat/x-zqdnruhqxvutbeee] has quit "Ex-Chat"
20:45 < smooge> and fas1 took 3 xen creates to start up
20:46 < mmcgrath> smooge: what needed to be altered in grub?
20:46 < smooge> and I didn't see why in the logs
20:46 < smooge> it was booting by default to an old kernel.
20:46 < ricky> Strange
20:46 < ricky> Ah, then I take my "yow" back :-)
20:46 < smooge> so when I updated it moved from booting from 1 to 2
20:46 < smooge> so I moved it to 0
20:46 -!- RadicalRo [n=radical(a)193.254.32.144] has quit "Leaving."
20:46 < mmcgrath> ahh
20:47 < a-k> smooge: /etc/sysconfig/kernel ?
20:47 < ricky> It's set to yes on xen13
20:48 < mmcgrath> ricky: what about DEFAULTKERNEL
20:48 < ricky> Not sure why it wouldn't have gotten updated automatically. Maybe we had manually edited things before?
20:48 < mmcgrath> is it kernel or kernel-xen
20:48 < ricky> That's set to kernel
20:48 < mmcgrath> that should probably get set to kernel-xen
20:48 < ricky> Ahh
20:48 < mmcgrath> Ok, anyone have anything else to discuss?
20:48 -!- rdieter is now known as rdieter_away
20:48 < mmcgrath> If not we'll close the meeting in 30
20:48 < smooge> ricky, I think it was manually edited in the past when trying to figure out which kernel worked
20:48 < smooge> nothing else from me.
20:49 < ricky> One more thing
20:49 -!- lmr [n=lmr@nat/redhat/x-yicihbutbatmwhdy] has joined #fedora-meeting
20:49 < ricky> Should we email mirror-list-d about the i2 mirror soon?
20:49 < mdomsch> what i2 mirror?
20:49 < mmcgrath> ricky: good question
20:49 < ricky> The networking issues there seem to be resolved for the most part
20:49 < mdomsch> at rdu?
20:49 < mmcgrath> mdomsch: the RDU i2 mirror seems to be up and running well
20:49 < ricky> Yup
20:49 < mdomsch> ah, good to know
20:49 < abadger1999> mmcgrath: how's kvm testing going?
20:49 < ricky> (And same question for sync1/sync2)
20:49 < mmcgrath> ricky: what was the last speed test?
20:49 -!- cdelpino [n=cdelpino(a)pool-70-111-139-93.nwrk.east.verizon.net] has quit "Leaving"
20:50 < mmcgrath> abadger1999: oh, good question. I'll get to it in just a sec.
20:50 < mmcgrath> ricky: sync1/2 isn't ready for public consumption yet.
20:50 < mdomsch> so we don't need static routes for RDU i2 anymore?
20:50 < mmcgrath> mdomsch: that's my impression
20:50 < mmcgrath> ricky: what IP was it at?
20:50 < ricky> download-i2.fedora.redhat.com?
20:51 < ricky> I'm getting excellent speeds from osuosl1 right now
20:51 < ricky> (it showed >140 MB/s in the beginning and eventually returned to >7 MB/s which is still good)
20:51 < mmcgrath> mdomsch: ^^
20:51 < mmcgrath> Oh, and I wanted to talk about cloud stuff too
20:52 < mmcgrath> Ok, real quick.
20:52 < mmcgrath> abadger1999: the kvm stuff is going ok. We were having horrible IO performance issues on app7
20:52 < ricky> Although starting a download from ibiblio1 at the same time caused the osuosl1 one to drop down to <500 KB/s :-(
20:52 < mmcgrath> after changing some drivers around it's better but still not as fast as some of the app servers.
20:52 < mmcgrath> await has been a big problem.
20:52 < ricky> So it's not very balanced it seems
20:52 < mmcgrath> abadger1999: but research continues :)
20:52 < mmcgrath> the good news is we're in no rush to switch to it so we can make it behave exactly as we want to.
20:53 < mmcgrath> SmootherFrOgZ: you around?
20:53 < mmcgrath> The cloud stuff has been going well, SmootherFrOgZ has been hard at work.
20:53 < SmootherFrOgZ> mmcgrath: yep
20:53 < mmcgrath> we keep running into lots of little issues.
20:53 < mmcgrath> a million paper cuts causing us to be unable to use cumulus.fedoraproject.org
20:53 < mmcgrath> SmootherFrOgZ: any luck with the nics?
20:54 < SmootherFrOgZ> we're currently debugging them to get them working properly
20:54 < SmootherFrOgZ> actually the nic shows up with the wrong interface name
20:54 < mmcgrath> SmootherFrOgZ: ok, well good work on it so far, I know this issue in particular has been quite irksome
20:54 < mmcgrath> Ok, anyone have anything else they'd like to discuss?
20:55 < mmcgrath> if not we'll close the meeting in 30.
20:56 < mmcgrath> Allright
20:56 < mmcgrath> #endmeeting
20:56 -!- zodbot changed the topic of #fedora-meeting to: Channel is used by various Fedora groups and committees for their regular meetings | Note that meetings often get logged | For questions about using Fedora please ask in #fedora | See http://fedoraproject.org/wiki/Communicate/FedoraMeetingChannel for meeting schedule
20:56 < zodbot> Meeting ended Thu Sep 17 20:56:42 2009 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot .
20:56 < zodbot> Minutes: http://meetbot.fedoraproject.org/fedora-meeting/2009-09-17/fedora-meeting...
20:56 < zodbot> Minutes (text): http://meetbot.fedoraproject.org/fedora-meeting/2009-09-17/fedora-meeting...
20:56 < zodbot> Log: http://meetbot.fedoraproject.org/fedora-meeting/2009-09-17/fedora-meeting...
14 years, 6 months
Approval?
by Eric Meng
Hey guys,
Well, I haven't been approved yet to join the sysadmin base group, so I don't think I can work on anything yet, can I? I am currently in school and I'm not sure if there is anything I can do that fits my skill level. Any suggestions? When can I get approved?
Thanks,
Eric
14 years, 6 months
Merging from staging to master
by Mike McGrath
This has bugged me forever and I think I've got it figured out now (thanks
mdomsch for pointing me in the right direction)
Let's say you've been working for weeks on a module in staging and you want
to cherry-pick those commits. How do you do it? It becomes even trickier
if someone else has been working on other modules in staging. This seems
to do the trick:
git log --name-status master..staging module/path
or to see the diff
git diff master staging module/path
The only thing to be careful of, as far as I can tell, is when one of the
commits listed in git log also touches something in another path outside
of your module.
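A minimal end-to-end sketch of this workflow (the throwaway repo, the
module/path layout, and the commit messages below are all hypothetical,
purely for illustration):

```shell
# Build a tiny demo repo with a master and a staging branch.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email "demo@example.com"
git config user.name "Demo"

mkdir -p module/path other
echo base > module/path/file
git add -A
git commit -qm "base"

# On staging: one commit touching our module, one touching another path.
git checkout -qb staging
echo update > module/path/file
git commit -qam "staging: update module"
echo other > other/file
git add other/file
git commit -qm "staging: unrelated change"

# Back on master: list what staging has that master lacks, limited
# to the module, then cherry-pick exactly those commits, oldest first.
git checkout -q master
git log --name-status master..staging module/path
for c in $(git rev-list --reverse master..staging -- module/path); do
    git cherry-pick -x "$c" >/dev/null
done
```

The `--` before the path in `git rev-list` guards against path/revision
ambiguity, and `--reverse` feeds commits to cherry-pick in the order they
were made; `-x` records the original commit hash in each new message.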
Anyway, if you can think of a better way to do this, let me know; so far,
though, this has worked well for me.
-Mike
14 years, 6 months
Introduction
by Eric Meng
Hi,
My name is Eric Meng. I'm a sophomore in high school with substantial experience with Linux. I have no previous work experience, but I hope to gain new knowledge and skills by volunteering for Fedora.
Credentials:
- Certified Red Hat Technician (my number is 605009710126274)
- Self taught
- Self-motivated; strong passion for technology
- Knowledge in Perl & Shell scripting
- Analytical thinker; able to troubleshoot problems that arise
- Strong administration skills in all areas listed under the RHCT requirements
- Exceptional communicator
- 4.0 GPA student
Reasons for joining:
- Gain experience/ learn
- Help Linux community
- Improve Fedora
- Get a sneak peek at new technology & software
- See how Fedora works behind the scenes
- Have Fun!!
14 years, 6 months