On Fri, Apr 24, 2015 at 07:50:02PM -0400, Chuck Anderson wrote:
> Should this go to the mirror-list rather than just infrastructure?
It was actually sent to one of the three mirror lists. But you are
right; I will forward it to the other two mirror lists.
> What units is the "Bandwidth" parameter in?
This was already part of the old MirrorManager and is in Mbit/s; it is
used to sort the mirrorlist. The higher your bandwidth, the higher the
likelihood your mirror will be near the top of the list.
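For illustration, a minimal sketch of what such a bandwidth weighting
could look like. The field names and the weighted-shuffle approach here
are my assumptions for the example, not MirrorManager's actual code:

```python
import random

# Hypothetical mirror records; "bandwidth" is in Mbit/s as described above.
mirrors = [
    {"host": "mirror-a.example.org", "bandwidth": 100},
    {"host": "mirror-b.example.org", "bandwidth": 1000},
    {"host": "mirror-c.example.org", "bandwidth": 10},
]

def weighted_order(mirrors, rng=random):
    """Return mirrors in a random order where a higher bandwidth makes
    an earlier position more likely (a weighted shuffle)."""
    remaining = list(mirrors)
    ordered = []
    while remaining:
        total = sum(m["bandwidth"] for m in remaining)
        pick = rng.uniform(0, total)
        for i, m in enumerate(remaining):
            pick -= m["bandwidth"]
            if pick <= 0:
                ordered.append(remaining.pop(i))
                break
    return ordered

print([m["host"] for m in weighted_order(mirrors)])
```

With these numbers, mirror-b would land in the first slot most of the
time, but lower-bandwidth mirrors still get some traffic.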
> How is "Max connections" used?
In the old MirrorManager those fields had descriptions, which seem to
have been lost. This is from MM1:
Maximum parallel download connections per client, suggested via metalinks.
I will try to get those descriptions into the new MM2 as well, so that
it is clear what all the fields mean.
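In other words, the value ends up as a hint in the generated metalink.
A fragment along these lines shows the idea; the exact element layout is
my approximation of the Metalink 3.0 format, and the path is made up:

```xml
<!-- Illustrative fragment only; "maxconnections" caps how many
     parallel connections a client should open to this mirror. -->
<metalink version="3.0" xmlns="http://www.metalinker.org/">
  <files>
    <file name="repomd.xml">
      <resources maxconnections="2">
        <url protocol="https" type="https" preference="100">
          https://mirror.example.org/pub/fedora/repodata/repomd.xml
        </url>
      </resources>
    </file>
  </files>
</metalink>
```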
> My mirror isn't crawled because it is a private mirror, but I do run
> the check-in script after every sync. Is there a way to test
> checking-in to the staging instance? It says last crawled in 2013,
> and last checked in 2014-12-05.
I was successfully using: https://admin.stg.fedoraproject.org/mirrormanager2/xmlrpc
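Concretely, that just means pointing the check-in script's config at
the staging URL. Something like the following, assuming the MM1-style
report_mirror.conf key names (treat them as an assumption and check
against your existing config):

```ini
[global]
enabled=1
server=https://admin.stg.fedoraproject.org/mirrormanager2/xmlrpc
```

Then run report_mirror as usual and the check-in should show up in the
staging instance rather than production.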
On Fri, Apr 24, 2015 at 11:32:21AM -0600, Kevin Fenzi wrote:
> As you may know we have been working on a new version of mirrormanager,
> and we are finally ready to roll it out in production (barring any
> show stoppers).
> mirrormanager 2 is re-written in flask and has a number of improvements
> over mirrormanager 1.
> We have a set of staging instances setup:
> Which is the new flask frontend. The data for mirror admins should
> be pretty much the same. You should be able to login and check your
> mirrors settings (which have been copied from the production instance).
> (Note that any changes made here will not be reflected in production.)
> Which is a mirrorlist server using the data from the staging database.
> You should be able to use this in place of 'mirrors.fedoraproject.org'
> in yum config and the like.
> There is also an internal crawler instance, checking mirrors and
> removing out of date ones or readding up to date ones.
> Source is of course available at:
> and bugs or issues can be reported at:
> Please do let us know if you see any problems or issues.
> We will likely be scheduling a short outage next week to roll the new
> version out to production.