Looks like we are not syncing out the f23 updates/updates-testing repos
diff --git a/roles/bodhi2/backend/files/fedora-updates-push b/roles/bodhi2/backend/files/fedora-updates-push
index a5c6ee5..9af4739 100755
@@ -49,7 +49,7 @@ for rel in 21 22 23; do
-for rel in 21 22; do
+for rel in 21 22 23; do
OUTPUT1=$(rsync $OPTIONS --ignore-existing \
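For context, the loop being patched looks roughly like this; this is a minimal sketch with placeholder paths and rsync options (the real fedora-updates-push script differs), and it only echoes the commands instead of touching a live mirror:

```shell
# Hypothetical sketch of the fedora-updates-push release loop.
# SOURCE, DEST and OPTIONS are placeholders, not the real values.
SOURCE=/mnt/koji/mash
DEST=updates-sync@mirrors.example.org::fedora-updates
OPTIONS="-rlptDvHh --delay-updates --ignore-existing"

CMDS=()
for rel in 21 22 23; do
    for repo in updates updates-testing; do
        # Collect the commands rather than running rsync for real
        CMDS+=("rsync $OPTIONS $SOURCE/$repo/$rel/ $DEST/$rel/$repo/")
    done
done
printf '%s\n' "${CMDS[@]}"
```

The bug was simply that `23` was missing from the `for rel in ...` list, so the f23 repos never entered the loop.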
We ran into an issue today with the developer.fedoraproject.org site.
Every once in a while it would show up with some really old blog content.
Turns out we pull the site from github at :25 after the hour, and then
only update the rss feeds at :45 after the hour. This results in 20min
of having the old content from the repo and general confusion.
This change removes the separate cron job for this and just adds the
update to the cron that pulls the site so they are always in sync.
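Concretely, the change collapses the two jobs into one. Sketched as crontab entries, using the :25/:45 times from above (the syncDeveloper.sh path here is an assumption):

```
# Before: two independent jobs, 20 minutes apart
25 * * * * apache /usr/local/bin/syncDeveloper.sh
45 * * * * apache /usr/local/bin/rss.py /srv/web/developer.fedoraproject.org/index.html

# After: the rss update runs at the end of syncDeveloper.sh itself
25 * * * * apache /usr/local/bin/syncDeveloper.sh
```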
diff --git a/roles/developer/build/files/developer-rss-update.cron b/roles/developer/build/files/developer-rss-update.cron
deleted file mode 100644
@@ -1,2 +0,0 @@
-45 * * * * apache /usr/local/bin/rss.py /srv/web/developer.fedoraproject.org/index.html
diff --git a/roles/developer/build/files/syncDeveloper.sh b/roles/developer/build/files/syncDeveloper.sh
index f8e7d48..91adb80 100644
@@ -11,3 +11,6 @@ cd /srv/web/developer.fedoraproject.org
/usr/bin/git reset -q --hard || exit 1
/usr/bin/git checkout -q master || exit 1
/usr/bin/git pull -q --ff-only || exit 1
+# Now we update the blog content
diff --git a/roles/developer/build/tasks/main.yml b/roles/developer/build/tasks/main.yml
index 8b99578..eb46bbf 100644
@@ -29,13 +29,12 @@
-- name: Install the syncDeveloper and rss feed update jobs
+- name: Install the syncDeveloper cron job
owner=root group=root mode=0644
- - developer-rss-update
Good Morning everyone,
Over the last few days I have been working on a small app: mdapi.
It is aimed at serving the metadata from our repos simply and *fast*, offering
information from koji, rawhide, all our active branches and epel (you'll have to
specify which you want), and for each it will return the first hit it finds
in the testing, updates or release repo (it says which in the returned json).
I deployed it for testing in our cloud, at: http://18.104.22.168/
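The "first hit wins" lookup described above can be sketched roughly like this; this is a toy model with made-up package data, not mdapi's actual code or schema:

```python
# Toy sketch of the lookup order described above; repo names follow
# the post, but the data and function are invented for illustration.
REPO_ORDER = ["testing", "updates", "release"]

# branch -> repo -> {package: version}
PACKAGES = {
    "f23": {
        "testing": {"guake": "0.8.1"},
        "updates": {"guake": "0.8.0", "kernel": "4.2.5"},
        "release": {"guake": "0.7.2", "kernel": "4.2.3"},
    },
}

def get_pkg(branch, name):
    """Return the first hit across testing/updates/release,
    saying which repo it came from (as the json does)."""
    for repo in REPO_ORDER:
        pkgs = PACKAGES.get(branch, {}).get(repo, {})
        if name in pkgs:
            return {"pkg": name, "version": pkgs[name], "repo": repo}
    return None

# kernel is not in testing, so the updates hit wins:
print(get_pkg("f23", "kernel"))
# -> {'pkg': 'kernel', 'version': '4.2.5', 'repo': 'updates'}
```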
This morning I took a look at what it would take to package mdapi for epel7.
mdapi has 5 dependencies:
- aiohttp -> Has an epel7 branch but was never built; opened a ticket on
bugzilla asking for it
- requests -> Exists in RHEL channels but not as py3, so will need a new package
- simplejson -> Asked the current PoC if it could be built with the py3
sub-package in epel7
- sqlalchemy -> Looked at it (I'm the PoC for epel7), but it requires:
  - python3-setuptools (same situation as requests), which in turn
    requires: python3-pip, python3-wheel, python3-mock,
    python3-nose (same situation as requests and setuptools)
- werkzeug -> same situation as requests, setuptools...
The dependency list, as well as the current state of the guidelines for
packaging python3 apps in epel, makes me propose that we deploy mdapi in
Fedora (at least for now).
The second point I would like to raise is how we deploy this application. This
isn't a standard wsgi application (since it's async), so it cannot run with
a regular wsgi server.
So far the other async applications we have (in pagure) have been deployed simply
as a systemd service.
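For what it's worth, the systemd approach is just a small unit file. A sketch, with made-up paths, user and service names:

```
# Hypothetical mdapi.service; the User, ExecStart path and
# description are placeholders, not the deployed configuration.
[Unit]
Description=mdapi metadata service
After=network.target

[Service]
User=mdapi
ExecStart=/usr/bin/python3 /usr/share/mdapi/mdapi-run
Restart=on-failure

[Install]
WantedBy=multi-user.target
```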
I wonder if we want to use the same approach here or if we should investigate
things like gunicorn/nginx or so.
Does someone have experience in this field? Any advice/feedback?
Thanks for your attention,
Skills and what I would like to learn:
I'd like to learn how to administer Linux systems as a whole, from end to end: from building machines to storage management, and naturally the administration of the pieces that ride on the servers. I've moved from being primarily a Windows admin at work to the primary Unix admin on my team (I've been the primary contact for level 2 Unix for about six months), and while I've learned a great deal in a short time, the knowledge is split between Linux (specifically Red Hat) and AIX. Additionally, I work three long days, so the extra exposure to the Linux environment and other administrators is appealing and beneficial. From reading the tickets it appears that Docker is used, which is something that I also find interesting and would like to learn more about: how it is actually used, set up and administered. I understand shell scripting, specifically bash and loops, and I'd like to get better with those. I've also done some work with python scripts via Code Academy and would like to learn
more about how that applies to administration. As far as skills go, I am told by my co-workers that I am good at troubleshooting and tracking down what is causing an issue, and that when I ask questions they are usually well thought out and relevant to solving the problem at hand. I am also told that I am a quick learner.
What you would like to work on:
I've seen some scripting tickets in the queue that would be of interest to me, as well as a Docker-related ticket. Additionally, I am interested in the apprentice group, and will likely also join the documentation group after a bit, as it seems to me that a person working on a system and at some stage making changes, or close to those doing so, would be a prime candidate to document it.
Can I get retroactive +1s for the following patch?
Given the imminent Fedora 23 release, we want up-to-date mirrorlists,
so I will push this patch ahead of +1s to avoid having to depend on
people over the weekend.
This tunes the number of threads for the mm crawler down to 23.
This is probably lower than we can handle, but I'm setting it this low to
make sure that the crawls finish for the F23 release.
After release we should re-evaluate the number of threads used.
Author: Patrick Uiterwijk <puiterwijk(a)redhat.com>
Date: Sun Nov 1 01:04:35 2015 +0000
Decrease the number of mm_crawler threads
It seems like an update caused the crawler to use slightly
more memory than before, meaning the previous tuning of
27 threads no longer fits in the server's memory.
This patch brings it down to 23, which is for now known-good.
We should look again at what values to use after freeze.
Signed-off-by: Patrick Uiterwijk <puiterwijk(a)redhat.com>
diff --git a/roles/mirrormanager/crawler/files/crawler.cron b/roles/mirrormanager/crawler/files/crawler.cron
index 8ace23d..a3691d7 100644
@@ -5,4 +5,4 @@
# [ "`hostname -s`" == "mm-crawler02" ] && sleep 2h is used to start the crawl
# later on the second crawler to reduce the number of parallel accesses to
# the database
-0 */12 * * * mirrormanager [ "`hostname -s`" == "mm-crawler02" ] && sleep 2h; /usr/bin/mm2_crawler --timeout-minutes 180 --threads 27 `/usr/local/bin/run_crawler.sh 2` > /dev/null 2>&1
+0 */12 * * * mirrormanager [ "`hostname -s`" == "mm-crawler02" ] && sleep 2h; /usr/bin/mm2_crawler --timeout-minutes 180 --threads 23 `/usr/local/bin/run_crawler.sh 2` > /dev/null 2>&1
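The sizing logic behind going from 27 to 23 threads is simple arithmetic; the memory figures below are invented for illustration (the real numbers come from observing the crawler), but they show how a modest per-thread growth eats several threads' worth of headroom:

```python
def max_threads(total_mem_mb, reserved_mb, per_thread_mb):
    """How many crawler threads fit in memory, leaving some headroom
    reserved for the OS and other services."""
    return (total_mem_mb - reserved_mb) // per_thread_mb

# Invented example figures: if an update grew per-thread usage from
# ~520 MB to ~600 MB on a 16 GB box with 2 GB reserved, the safe
# thread count drops from 27 to 23.
print(max_threads(16384, 2048, 520))  # 27
print(max_threads(16384, 2048, 600))  # 23
```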
With kind regards,