Plan for tomorrow's Fedora Infrastructure meeting - 2016-04-07
by Kevin Fenzi
The infrastructure team will be having its weekly meeting tomorrow,
2016-04-07 at 18:00 UTC in #fedora-meeting on the freenode network.
We have a gobby document
(see: https://fedoraproject.org/wiki/Gobby )
fedora-infrastructure-meeting-next is the document.
Please review and edit that document before the meeting; we will use it
as our agenda of things to discuss. A copy as of today is included in
this email.
If you have something to discuss, add the topic to the discussion area
with your name. If you would like to teach other folks about some
application or setup in our infrastructure, please add that topic and
your name to the learn about section.
kevin
--
= Introduction =
This shared document is for the next fedora infrastructure meeting.
We will use it over the week before the meeting to gather status, info,
discussion items, and so forth, then use it in the irc meeting to transfer
information to the meetbot logs.
= Meeting start stuff =
#startmeeting Infrastructure (2016-04-07)
#meetingname infrastructure
#topic aloha
#chair smooge relrod nirik abadger1999 lmacken dgilmore threebean pingou puiterwijk pbrobinson
#topic New folks introductions / Apprentice feedback
= Status / information / Trivia / Announcements =
(We put things here we want others on the team to know, but don't need to discuss)
(Please use #info <the thing> - your name)
#topic announcements and information
#info Production to staging FAS has been synced - kevin
#info retrace and faf staging hosts are live - kevin
#info postgresql tuning on non dedicated db hosts done - kevin
#info bunch of ansible cleanup patches from misc. Thanks misc!
#info kojira/koji db issues continuing, needs more investigation - kevin
#info epel5 repodata issue, hopefully solved now - kevin
#info Outages next week likely tue/wed for updates/reboot cycle - kevin
#info outage friday for openstack cloud - patrick
= Things we should discuss =
We use this section to bring up discussion topics. Things we want to talk about
as a group and come up with some consensus or decision or just brainstorm a
problem or issue. If there are none of these we skip this section.
(Use #topic your discussion topic - your username)
#topic new rsync setup soon - smooge / kevin
#topic <yourtopic> - yourname
= Learn about some application or setup in infrastructure =
(This section, each week we get 1 person to talk about an application or setup
that we have. Just going over what it is, how to contribute, ideas for improvement,
etc. Whoever would like to do this, just add the info in this section. In the
event we don't find someone to teach about something, we skip this section
and just move on to open floor.)
Schedule:
unknown. Sign up now?
#topic Learn about:
= Meeting end stuff =
#topic Open Floor
#endmeeting
[PATCH] Refactor the code deploying zodbot/ursabot
by Michael Scherer
From: Michael Scherer <misc(a)zarb.org>
Since the code is the same between staging and production except
for the bot name, using a variable avoids the duplication.
---
roles/supybot/tasks/main.yml | 36 ++++++++----------------------------
roles/supybot/vars/main.yml | 3 +++
2 files changed, 11 insertions(+), 28 deletions(-)
create mode 100644 roles/supybot/vars/main.yml
diff --git a/roles/supybot/tasks/main.yml b/roles/supybot/tasks/main.yml
index 59cc5ea..36bfce9 100644
--- a/roles/supybot/tasks/main.yml
+++ b/roles/supybot/tasks/main.yml
@@ -10,34 +10,23 @@
- packagedb-cli
tags: supybot
+- set_fact: botname="{{ botnames[env] }}"
+
- name: creating zodbot log dir
file: path={{ item }} state=directory owner=daemon
with_items:
- - /var/lib/zodbot
- - /var/lib/zodbot/conf
- - /var/lib/zodbot/data
- - /var/lib/zodbot/logs
+ - /var/lib/{{ botname }}
+ - /var/lib/{{ botname }}/conf
+ - /var/lib/{{ botname }}/data
+ - /var/lib/{{ botname }}/logs
- /srv/web
- /srv/web/meetbot
- when: env != "staging"
tags: supybot
- name: create teams directory
file: path=/srv/web/meetbot/teams state=directory owner=apache group=apache mode=0755
tags: supybot
-- name: creating usrabot log dir
- file: path={{ item }} state=directory owner=daemon
- with_items:
- - /var/lib/ursabot
- - /var/lib/ursabot/conf
- - /var/lib/ursabot/data
- - /var/lib/ursabot/logs
- - /srv/web
- - /srv/web/meetbot
- when: env == "staging"
- tags: supybot
-
- name: setup meetings_by_team script
copy: src=meetings_by_team.sh dest=/usr/local/bin/meetings_by_team.sh mode=755
tags: supybot
@@ -70,18 +59,9 @@
- meetbot
- supybot
-- name: setup cron job to start zodbot/ursabot on boot
- cron: name=zodbot special_time=reboot job='cd /srv/web/meetbot; supybot -d /var/lib/zodbot/conf/zodbot.conf' user=daemon
- tags:
- - config
- - meetbot
- - supybot
- when: env != "staging"
-
-- name: setup cron job to start zodbot/ursabot on boot
- cron: name=ursabot special_time=reboot job='cd /srv/web/meetbot; supybot -d /var/lib/ursabot/conf/ursabot.conf' user=daemon
+- name: setup cron job to start {{ botname }}/ursabot on boot
+ cron: name={{ botname }} special_time=reboot job='cd /srv/web/meetbot; supybot -d /var/lib/{{ botname }}/conf/{{ botname }}.conf' user=daemon
tags:
- config
- meetbot
- supybot
- when: env == "staging"
diff --git a/roles/supybot/vars/main.yml b/roles/supybot/vars/main.yml
new file mode 100644
index 0000000..6e3df99
--- /dev/null
+++ b/roles/supybot/vars/main.yml
@@ -0,0 +1,3 @@
+botnames:
+ staging: ursabot
+ production: zodbot
--
1.8.3.1
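The pattern this patch introduces — selecting the bot name from a vars dictionary keyed on the environment — can be sketched as a minimal standalone excerpt (file names match the patch; the `env` variable is assumed to be set by the inventory, as elsewhere in this repo):

```yaml
# roles/supybot/vars/main.yml
botnames:
  staging: ursabot
  production: zodbot
```

```yaml
# roles/supybot/tasks/main.yml (excerpt)
# Pick the per-environment bot name once, then reuse it everywhere.
- set_fact:
    botname: "{{ botnames[env] }}"

- name: creating bot directories
  file: path={{ item }} state=directory owner=daemon
  with_items:
    - /var/lib/{{ botname }}
    - /var/lib/{{ botname }}/conf
```

With this in place, the `when: env == "staging"` / `when: env != "staging"` task pairs collapse into single tasks.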
[PATCH 1/2] Move service start to the end (after we did set it up)
by Michael Scherer
From: Michael Scherer <misc(a)zarb.org>
Also remove the notify on the service task: since that task starts
the service itself, there is no need to notify a handler to restart
it.
---
roles/fedmsg/gateway/tasks/main.yml | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/roles/fedmsg/gateway/tasks/main.yml b/roles/fedmsg/gateway/tasks/main.yml
index 86bece4..cd34b26 100644
--- a/roles/fedmsg/gateway/tasks/main.yml
+++ b/roles/fedmsg/gateway/tasks/main.yml
@@ -15,14 +15,6 @@
tags:
- fedmsgmonitor
-- name: enable on boot and start fedmsg-gateway
- service: name=fedmsg-gateway state=started enabled=true
- tags:
- - services
- - fedmsg/gateway
- notify:
- - restart fedmsg-gateway
-
- name: setup fedmsg-gateway config file
copy: src=gateway.py dest=/etc/fedmsg.d/gateway.py
tags:
@@ -40,3 +32,9 @@
- fedmsg/gateway
notify:
- restart fedmsg-gateway
+
+- name: enable on boot and start fedmsg-gateway
+ service: name=fedmsg-gateway state=started enabled=true
+ tags:
+ - services
+ - fedmsg/gateway
--
1.8.3.1
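The reordering follows a common Ansible idiom: lay down all configuration first (with `notify` so later config changes restart the running service), and only enable and start the service once everything it needs is in place. A minimal sketch of that ordering (task bodies abbreviated from the patch):

```yaml
# roles/fedmsg/gateway/tasks/main.yml (excerpt, post-patch order)
- name: setup fedmsg-gateway config file
  copy: src=gateway.py dest=/etc/fedmsg.d/gateway.py
  notify:
    - restart fedmsg-gateway   # restarts only on later config changes

# Last: by now the config exists, so the first start is clean.
- name: enable on boot and start fedmsg-gateway
  service: name=fedmsg-gateway state=started enabled=true
```

Starting the service before its config is written would either fail or start with defaults, which is what the move avoids.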
[PATCH] Move a few handlers to the only role using them
by Michael Scherer
From: Michael Scherer <misc(a)zarb.org>
---
handlers/restart_services.yml | 15 ---------------
roles/copr/backend/handlers/main.yml | 3 +++
roles/haproxy/handlers/main.yml | 3 +++
roles/koji_builder/handlers/main.yml | 3 +++
roles/kojipkgs/handlers/main.yml | 3 +++
roles/mariadb_server/handlers/main.yml | 3 +++
6 files changed, 15 insertions(+), 15 deletions(-)
create mode 100644 roles/haproxy/handlers/main.yml
create mode 100644 roles/koji_builder/handlers/main.yml
create mode 100644 roles/kojipkgs/handlers/main.yml
create mode 100644 roles/mariadb_server/handlers/main.yml
diff --git a/handlers/restart_services.yml b/handlers/restart_services.yml
index a2e9669..75a805b 100644
--- a/handlers/restart_services.yml
+++ b/handlers/restart_services.yml
@@ -45,9 +45,6 @@
- name: restart jenkins
action: service name=jenkins state=restarted
-- name: restart kojid
- action: service name=kojid state=restarted
-
- name: restart koschei-polling
action: service name=koschei-polling state=restarted
@@ -63,9 +60,6 @@
- name: restart libvirtd
action: service name=libvirtd state=restarted
-- name: restart lighttpd
- action: service name=lighttpd state=restarted
-
- name: restart mailman
action: service name=mailman state=restarted
@@ -155,15 +149,6 @@
ignore_errors: true
when: ansible_virtualization_role == 'host'
-- name: restart haproxy
- service: name=haproxy state=restarted
-
-- name: restart mariadb
- service: name=mariadb state=restarted
-
-- name: restart squid
- service: name=squid state=restarted
-
- name: "update ca-trust"
command: /usr/bin/update-ca-trust
diff --git a/roles/copr/backend/handlers/main.yml b/roles/copr/backend/handlers/main.yml
index 2994015..afbcf7c 100644
--- a/roles/copr/backend/handlers/main.yml
+++ b/roles/copr/backend/handlers/main.yml
@@ -9,3 +9,6 @@
- name: systemctl daemon-reload
command: /usr/bin/systemctl daemon-reload
+
+- name: restart lighttpd
+ action: service name=lighttpd state=restarted
diff --git a/roles/haproxy/handlers/main.yml b/roles/haproxy/handlers/main.yml
new file mode 100644
index 0000000..2de15f4
--- /dev/null
+++ b/roles/haproxy/handlers/main.yml
@@ -0,0 +1,3 @@
+---
+- name: restart haproxy
+ service: name=haproxy state=restarted
diff --git a/roles/koji_builder/handlers/main.yml b/roles/koji_builder/handlers/main.yml
new file mode 100644
index 0000000..407cf29
--- /dev/null
+++ b/roles/koji_builder/handlers/main.yml
@@ -0,0 +1,3 @@
+---
+- name: restart kojid
+ action: service name=kojid state=restarted
diff --git a/roles/kojipkgs/handlers/main.yml b/roles/kojipkgs/handlers/main.yml
new file mode 100644
index 0000000..54e5791
--- /dev/null
+++ b/roles/kojipkgs/handlers/main.yml
@@ -0,0 +1,3 @@
+---
+- name: restart squid
+ service: name=squid state=restarted
diff --git a/roles/mariadb_server/handlers/main.yml b/roles/mariadb_server/handlers/main.yml
new file mode 100644
index 0000000..6f737d9
--- /dev/null
+++ b/roles/mariadb_server/handlers/main.yml
@@ -0,0 +1,3 @@
+---
+- name: restart mariadb
+ service: name=mariadb state=restarted
--
1.8.3.1
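Moving each handler into the one role that notifies it keeps the role self-contained: the handler lives in `roles/<role>/handlers/main.yml` and is referenced by name from the role's own tasks, with no entry needed in the global `handlers/restart_services.yml`. A minimal sketch using the haproxy role from this patch (the config-file task is illustrative, not from the patch):

```yaml
# roles/haproxy/handlers/main.yml
---
- name: restart haproxy
  service: name=haproxy state=restarted
```

```yaml
# roles/haproxy/tasks/main.yml (hypothetical excerpt)
- name: install haproxy config
  copy: src=haproxy.cfg dest=/etc/haproxy/haproxy.cfg
  notify:
    - restart haproxy   # resolved from the role's own handlers file
```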
[PATCH] Fix edafebe7dca01, varnish file was missing
by Michael Scherer
From: Michael Scherer <misc(a)zarb.org>
---
roles/varnish/handlers/main.yml | 2 ++
1 file changed, 2 insertions(+)
create mode 100644 roles/varnish/handlers/main.yml
diff --git a/roles/varnish/handlers/main.yml b/roles/varnish/handlers/main.yml
new file mode 100644
index 0000000..ce6018b
--- /dev/null
+++ b/roles/varnish/handlers/main.yml
@@ -0,0 +1,2 @@
+- name: restart varnish
+ service: name=varnish state=restarted
--
1.8.3.1
[PATCH 1/3] Split watchdog related setup in a separate file
by Michael Scherer
From: Michael Scherer <misc(a)zarb.org>
---
roles/base/tasks/main.yml | 46 ++-----------------------------------------
roles/base/tasks/watchdog.yml | 44 +++++++++++++++++++++++++++++++++++++++++
2 files changed, 46 insertions(+), 44 deletions(-)
create mode 100644 roles/base/tasks/watchdog.yml
diff --git a/roles/base/tasks/main.yml b/roles/base/tasks/main.yml
index ddc751d..8139bb0 100644
--- a/roles/base/tasks/main.yml
+++ b/roles/base/tasks/main.yml
@@ -408,47 +408,5 @@
#
# Watchdog stuff
#
-- name: See if theres a watchdog device
- stat: path=/dev/watchdog
- when: ansible_virtualization_role == 'guest'
- register: watchdog_dev
-
-- name: install watchdog
- yum: pkg={{ item }} state=present
- with_items:
- - watchdog
- tags:
- - packages
- - watchdog
- - base
- when: ansible_distribution_major_version|int < 22 and ansible_virtualization_role == 'guest' and watchdog_dev.stat.exists
-
-- name: install watchdog
- dnf: pkg={{ item }} state=present
- with_items:
- - watchdog
- tags:
- - packages
- - watchdog
- - base
- when: ansible_distribution_major_version|int > 21 and ansible_virtualization_role == 'guest' and watchdog_dev.stat.exists
-
-- name: watchdog device configuration
- copy: src=watchdog.conf dest=/etc/watchdog.conf owner=root group=root mode=644
- when: ansible_virtualization_role == 'guest' and watchdog_dev.stat.exists
- tags:
- - config
- - watchdog
- - base
- notify: restart watchdog
-
-- name: Set watchdog to run on boot
- service: name=watchdog enabled=yes
- when: ansible_virtualization_role == 'guest' and watchdog_dev.stat.exists
- ignore_errors: true
- notify:
- - restart watchdog
- tags:
- - service
- - watchdog
- - base
+- name: Set up watchdog
+ include: watchdog.yml
diff --git a/roles/base/tasks/watchdog.yml b/roles/base/tasks/watchdog.yml
new file mode 100644
index 0000000..6ae0d54
--- /dev/null
+++ b/roles/base/tasks/watchdog.yml
@@ -0,0 +1,44 @@
+- name: See if theres a watchdog device
+ stat: path=/dev/watchdog
+ when: ansible_virtualization_role == 'guest'
+ register: watchdog_dev
+
+- name: install watchdog
+ yum: pkg={{ item }} state=present
+ with_items:
+ - watchdog
+ tags:
+ - packages
+ - watchdog
+ - base
+ when: ansible_distribution_major_version|int < 22 and ansible_virtualization_role == 'guest' and watchdog_dev.stat.exists
+
+- name: install watchdog
+ dnf: pkg={{ item }} state=present
+ with_items:
+ - watchdog
+ tags:
+ - packages
+ - watchdog
+ - base
+ when: ansible_distribution_major_version|int > 21 and ansible_virtualization_role == 'guest' and watchdog_dev.stat.exists
+
+- name: watchdog device configuration
+ copy: src=watchdog.conf dest=/etc/watchdog.conf owner=root group=root mode=644
+ when: ansible_virtualization_role == 'guest' and watchdog_dev.stat.exists
+ tags:
+ - config
+ - watchdog
+ - base
+ notify: restart watchdog
+
+- name: Set watchdog to run on boot
+ service: name=watchdog enabled=yes
+ when: ansible_virtualization_role == 'guest' and watchdog_dev.stat.exists
+ ignore_errors: true
+ notify:
+ - restart watchdog
+ tags:
+ - service
+ - watchdog
+ - base
--
1.8.3.1
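The split uses the plain task `include:` statement, which splices the tasks from the included file into the role at that point; each included task keeps its own `when`, `tags`, and `notify`, so behavior is unchanged. The resulting structure, in outline:

```yaml
# roles/base/tasks/main.yml (excerpt)
#
# Watchdog stuff
#
- name: Set up watchdog
  include: watchdog.yml   # tasks moved verbatim; conditions still apply per-task
```

This keeps `main.yml` readable and makes the watchdog setup easy to find and edit on its own.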
[PATCH] Fix typo in doc
by Michael Scherer
From: Michael Scherer <misc(a)zarb.org>
---
CONVENTIONS | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/CONVENTIONS b/CONVENTIONS
index f6c2eec..f6b37fe 100644
--- a/CONVENTIONS
+++ b/CONVENTIONS
@@ -9,7 +9,7 @@ Playbook naming
===============
The top level playbooks directory should contain:
-* Playbooks that are generic and used by serveral groups/hosts playbooks
+* Playbooks that are generic and used by several groups/hosts playbooks
* Playbooks used for utility purposes from command line
* Groups and Hosts subdirs.
@@ -95,7 +95,7 @@ We would like to get ansible running over hosts in an automated way.
A git hook could do this.
* On commit:
- If we have a way to detemine exactly what hosts are affected by a
+ If we have a way to determine exactly what hosts are affected by a
change we could simply run only on those hosts.
We might want a short delay (10m) to allow someone to see a problem
--
1.8.3.1