Freeze Break Request: Change oz.cfg on power builders to just use 1 cpu for now
by Kevin Fenzi
From f5934de9f70bc1b31348091bd69279f143f7ef78 Mon Sep 17 00:00:00 2001
From: Kevin Fenzi <kevin(a)scrye.com>
Date: Thu, 22 Mar 2018 19:13:23 +0000
Subject: [PATCH] Per https://pagure.io/releng/issue/7326 move the power
builders oz config to use just 1 cpu for now. There is a bug in nested virt
with more than 1 cpu that is causing all the images to fail to build.
Note that all these images are non release blocking and are currently
failing in f28 and rawhide, so there's not much downside.
Signed-off-by: Kevin Fenzi <kevin(a)scrye.com>
---
roles/koji_builder/files/oz.cfg | 22 ----------------------
roles/koji_builder/tasks/main.yml | 2 +-
roles/koji_builder/templates/oz.cfg.j2 | 26 ++++++++++++++++++++++++++
3 files changed, 27 insertions(+), 23 deletions(-)
delete mode 100644 roles/koji_builder/files/oz.cfg
create mode 100644 roles/koji_builder/templates/oz.cfg.j2
diff --git a/roles/koji_builder/files/oz.cfg b/roles/koji_builder/files/oz.cfg
deleted file mode 100644
index 3d045d2..0000000
--- a/roles/koji_builder/files/oz.cfg
+++ /dev/null
@@ -1,22 +0,0 @@
-[paths]
-output_dir = /var/lib/libvirt/images
-data_dir = /var/lib/oz
-screenshot_dir = /var/lib/oz/screenshots
-# sshprivkey = /etc/oz/id_rsa-icicle-gen
-
-[libvirt]
-uri = qemu:///system
-image_type = raw
-# type = kvm
-# bridge_name = virbr0
-cpus = 2
-memory = 3096
-
-[cache]
-original_media = yes
-modified_media = no
-jeos = no
-
-[icicle]
-safe_generation = no
-
diff --git a/roles/koji_builder/tasks/main.yml b/roles/koji_builder/tasks/main.yml
index 75c427c..c573739 100644
--- a/roles/koji_builder/tasks/main.yml
+++ b/roles/koji_builder/tasks/main.yml
@@ -154,7 +154,7 @@
# oz.cfg upstream ram and cpu definitions are not enough
- name: oz.cfg
- copy: src=oz.cfg dest=/etc/oz/oz.cfg
+ template: src=oz.cfg dest=/etc/oz/oz.cfg
tags:
- koji_builder
diff --git a/roles/koji_builder/templates/oz.cfg.j2 b/roles/koji_builder/templates/oz.cfg.j2
new file mode 100644
index 0000000..b3dacc8
--- /dev/null
+++ b/roles/koji_builder/templates/oz.cfg.j2
@@ -0,0 +1,26 @@
+[paths]
+output_dir = /var/lib/libvirt/images
+data_dir = /var/lib/oz
+screenshot_dir = /var/lib/oz/screenshots
+# sshprivkey = /etc/oz/id_rsa-icicle-gen
+
+[libvirt]
+uri = qemu:///system
+image_type = raw
+# type = kvm
+# bridge_name = virbr0
+{% if ansible_architecture == 'ppc64' or ansible_architecture == 'ppc64le' %}
+cpus = 1
+{% else %}
+cpus = 2
+{% endif %}
+memory = 3096
+
+[cache]
+original_media = yes
+modified_media = no
+jeos = no
+
+[icicle]
+safe_generation = no
+
--
1.8.3.1
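As a sanity check, the arch conditional in oz.cfg.j2 can be mirrored outside Ansible; a minimal Python sketch, assuming ansible_architecture takes standard Ansible fact values like 'ppc64', 'ppc64le', 'x86_64':

```python
# Mirrors the cpus selection the template encodes; architecture strings
# are assumptions based on standard Ansible facts.
def oz_cpus(ansible_architecture):
    if ansible_architecture in ('ppc64', 'ppc64le'):
        return 1   # work around the nested-virt bug on Power
    return 2       # value used everywhere else

for arch in ('ppc64', 'ppc64le', 'x86_64', 'aarch64'):
    print(arch, oz_cpus(arch))
```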
FBR: F28 onwards fedimg needs to parse AtomicHost and Cloud variants separately
by Sayan Chowdhury
Hi,
From F28 onwards, fedimg needs to parse the AtomicHost and Cloud
variants separately, so I have created a hotfix for this. The same
patch will be ported to fedimg for issue [1].
From c6dbcbcc2d105d0ce6976cbce676c311fe59991a Mon Sep 17 00:00:00 2001
From: Sayan Chowdhury <sayan.chowdhury2012(a)gmail.com>
Date: Wed, 21 Mar 2018 15:00:59 +0530
Subject: [PATCH 1/2] fedimg: Put hotfix for add Atomic & Cloud variant
Signed-off-by: Sayan Chowdhury <sayan.chowdhury2012(a)gmail.com>
---
files/hotfix/fedimg/consumers.py | 88 ++++++++++++++++++++++++++++++++++++++++
roles/fedimg/tasks/main.yml | 8 ++++
2 files changed, 96 insertions(+)
create mode 100644 files/hotfix/fedimg/consumers.py
diff --git a/files/hotfix/fedimg/consumers.py b/files/hotfix/fedimg/consumers.py
new file mode 100644
index 0000000..cad1495
--- /dev/null
+++ b/files/hotfix/fedimg/consumers.py
@@ -0,0 +1,88 @@
+# This file is part of fedimg.
+# Copyright (C) 2014 Red Hat, Inc.
+#
+# fedimg is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as
+# published by the Free Software Foundation, either version 3 of the
+# License, or (at your option) any later version.
+#
+# fedimg is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public
+# License along with fedimg; if not, see http://www.gnu.org/licenses,
+# or write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+#
+# Authors: David Gay <dgay(a)redhat.com>
+#
+
+import logging
+log = logging.getLogger("fedmsg")
+
+import multiprocessing.pool
+
+import fedmsg.consumers
+import fedmsg.encoding
+import fedfind.release
+
+import fedimg.uploader
+from fedimg.util import get_rawxz_urls, safeget
+
+
+class FedimgConsumer(fedmsg.consumers.FedmsgConsumer):
+ """ Listens for image Koji task completion and sends image files
+ produced by the child createImage tasks to the uploader. """
+
+ # It used to be that all *image* builds appeared as scratch builds on the
+ # task.state.change topic. However, with the switch to pungi4, some of
+ # them (and all of them in the future) appear as full builds under the
+ # build.state.change topic. That means we have to handle both cases like
+ # this, at least for now.
+ topic = [
+ 'org.fedoraproject.prod.pungi.compose.status.change',
+ ]
+
+ config_key = 'fedimgconsumer'
+
+ def __init__(self, *args, **kwargs):
+ super(FedimgConsumer, self).__init__(*args, **kwargs)
+
+ # threadpool for upload jobs
+ self.upload_pool = multiprocessing.pool.ThreadPool(processes=4)
+
+ log.info("Super happy fedimg ready and reporting for duty.")
+
+ def consume(self, msg):
+ """ This is called when we receive a message matching our topics. """
+
+ log.info('Received %r %r' % (msg['topic'], msg['body']['msg_id']))
+
+ STATUS_F = ('FINISHED_INCOMPLETE', 'FINISHED',)
+
+ msg_info = msg['body']['msg']
+ if msg_info['status'] not in STATUS_F:
+ return
+
+ location = msg_info['location']
+ compose_id = msg_info['compose_id']
+ cmetadata = fedfind.release.get_release_cid(compose_id).metadata
+
+ images_meta = safeget(cmetadata, 'images', 'payload', 'images',
+ 'CloudImages', 'x86_64')
+
+ if images_meta is None:
+ return
+
+ self.upload_urls = get_rawxz_urls(location, images_meta)
+ compose_meta = {
+ 'compose_id': compose_id,
+ }
+
+ if len(self.upload_urls) > 0:
+ log.info("Processing compose id: %s" % compose_id)
+ fedimg.uploader.upload(self.upload_pool,
+ self.upload_urls,
+ compose_meta)
diff --git a/roles/fedimg/tasks/main.yml b/roles/fedimg/tasks/main.yml
index 6074903..f36ec72 100644
--- a/roles/fedimg/tasks/main.yml
+++ b/roles/fedimg/tasks/main.yml
@@ -136,3 +136,11 @@
tags:
- cron
- fedimg
+
+- name: hotfix - copy the consumers.py over to the site-packages
+ copy: src="{{ files }}/hotfix/fedimg/consumers.py" dest=/usr/lib/python2.7/site-packages/fedimg/consumers.py
+ notify:
+ - restart fedmsg-hub
+ tags:
+ - fedimg
+ - hotfix
--
2.9.4
From 833da82dae0da1988e4345a1134120c981c84ba7 Mon Sep 17 00:00:00 2001
From: Sayan Chowdhury <sayan.chowdhury2012(a)gmail.com>
Date: Wed, 21 Mar 2018 15:14:31 +0530
Subject: [PATCH 2/2] fedimg: Add the hotfix patch to parse Cloud and Atomic
Image variant
Signed-off-by: Sayan Chowdhury <sayan.chowdhury2012(a)gmail.com>
---
files/hotfix/fedimg/consumers.py | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/files/hotfix/fedimg/consumers.py b/files/hotfix/fedimg/consumers.py
index cad1495..3ebd4cb 100644
--- a/files/hotfix/fedimg/consumers.py
+++ b/files/hotfix/fedimg/consumers.py
@@ -70,8 +70,19 @@ class FedimgConsumer(fedmsg.consumers.FedmsgConsumer):
compose_id = msg_info['compose_id']
cmetadata = fedfind.release.get_release_cid(compose_id).metadata
- images_meta = safeget(cmetadata, 'images', 'payload', 'images',
- 'CloudImages', 'x86_64')
+ # Till F27, both cloud-base and atomic images were available
+ # under variant CloudImages. With F28 and onward releases,
+ # cloud-base image compose moved to cloud variant and atomic images
+ # moved under atomic variant.
+ prev_rel = ['26', '27']
+ if msg_info['release_version'] in prev_rel:
+ images_meta = safeget(cmetadata, 'images', 'payload', 'images',
+ 'CloudImages', 'x86_64')
+ else:
+ images_meta = safeget(cmetadata, 'images', 'payload', 'images',
+ 'Cloud', 'x86_64')
+ images_meta.extend(safeget(cmetadata, 'images', 'payload',
+ 'images', 'AtomicHost', 'x86_64'))
if images_meta is None:
return
--
2.9.4
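For reference, a defensive sketch of the variant lookup in patch 2/2, with a minimal stand-in for fedimg.util.safeget; note that the patch calls extend() before the None check, which would raise if either variant were missing from the compose metadata, so this version treats a missing variant as an empty list:

```python
def safeget(dct, *keys):
    """Minimal stand-in for fedimg.util.safeget: nested dict lookup."""
    for key in keys:
        if not isinstance(dct, dict) or key not in dct:
            return None
        dct = dct[key]
    return dct

def collect_images(cmetadata, release_version):
    """Defensive variant of the hotfix logic: a missing variant yields []."""
    if release_version in ('26', '27'):
        return safeget(cmetadata, 'images', 'payload', 'images',
                       'CloudImages', 'x86_64') or []
    cloud = safeget(cmetadata, 'images', 'payload', 'images',
                    'Cloud', 'x86_64') or []
    atomic = safeget(cmetadata, 'images', 'payload', 'images',
                     'AtomicHost', 'x86_64') or []
    return cloud + atomic

meta = {'images': {'payload': {'images': {
    'Cloud': {'x86_64': ['cloud-base.raw.xz']},
    'AtomicHost': {'x86_64': ['atomic.raw.xz']}}}}}
print(collect_images(meta, '28'))  # ['cloud-base.raw.xz', 'atomic.raw.xz']
```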
+1s?
[1] https://github.com/fedora-infra/fedimg/issues/71
Re: Proposed zchunk file format - V3
by Jonathan Dieter
CC'ing fedora-infrastructure, as I think they got lost somewhere along
the way.
On Tue, 2018-03-20 at 17:04 +0100, Michal Domonkos wrote:
<snip>
> Yeah, the level doesn't really matter much. My point was, as long as
> we chunk, some of the data that we will be downloading we will already
> have locally. Typically (according to mdhist), it seems that package
> updates are more common than new additions, so we won't be reusing the
> unchanged parts of package tags. But that's inevitable if we're
> chunking.
Ok, I see your point, and you're absolutely right.
<snip>
> > The beauty of the zchunk format (or zsync, or any other chunked format)
> > is that we don't have to download different files based on what we
> > have, but rather, we download either fewer or more parts of the same
> > file based on what we have. From the server side, we don't have to
> > worry about the deltas, and the clients just get what they need.
>
> +1
>
> Simplicity is key, I think. Even at the cost of not having the
> perfectly efficient solution. The whole packaging stack is already
> complicated enough.
+1000 on that last!
<snip>
> While I'm not completely sure about application-specific boundaries
> being superior to buzhash (used by casync) in terms of data savings,
> it's clear that using http range requests and concatenating the
> objects together in a smart way (as you suggested previously) to
> reduce the number of HTTP requests is a good move in the right
> direction.
Just to be clear, zchunk *could* use buzhash. There's no rule about
where the boundaries need to be, only that the application creating the
zchunk file is consistent. I'd actually like to make the command-line
utility use buzhash, but I'm trying to keep the code BSD 2-clause, so I
can't just lift casync's buzhash code, and I haven't had time to write
that part myself.
Currently zck.c has a really ugly if statement that chooses a division
based on string matching if it's true and a really naive inefficient
rolling hash if it's false. If you wanted to contribute buzhash, I'd
happily take it!
> BTW, in the original thread, you mentioned a reduction of 30-40% when
> using casync. I'm wondering, how did you measure it? I saw chunk
> reuse ranging from 80% to 90% per metadata update, which seemed quite
> optimistic. What I did was:
>
> $ casync make snap1.caidx /path/to/repodata/snap1
> $ casync make --verbose snap2.caidx /path/to/repodata/snap2
> <snip>
> Reused chunks: X (Y%)
> <snip>
IIRC, I went into the web server logs and measured the number of bytes
that casync actually downloaded as compared to the gzip size of the
data.
Thanks so much for your interest!
Jonathan
Initial pre-alpha version of zchunk available for testing and comments
by Jonathan Dieter
I've got a working zchunk library, complete with some utilities at
https://github.com/jdieter/zchunk, but I wanted to get some feedback
before I went much further. Its only dependencies are libcurl and
(optionally, but very heavily recommended) libzstd.
There are test files in https://www.jdieter.net/downloads/zchunk-test,
and the dictionary I used is in https://www.jdieter.net/downloads.
What works:
* Creating zchunk files (using zck)
* Reading zchunk files (using unzck)
* Downloading zchunk files (using zckdl)
What doesn't:
* Resuming zchunk downloads
* Using any of the tools to overwrite a file
* Automatic maximum ranges in request detection
* Streaming chunking in the library
The main thing I want to ask for advice on is the last item on that
last list. Currently, every piece of data sent to zck_compress() is
treated as a new chunk.
I'd prefer to have zck_compress() just keep streaming data and have a
zck_end_chunk() function that ends the current chunk, but zstd doesn't
support streamed compression with a dict in its dynamic library. You
have to use zstd's static library to get that function (because it's
not seen as stable yet).
Any suggestions on how to deal with this? Should I require the static
library, write my own wrapper that buffers the streamed data until
zck_end_chunk() is called, or just require each chunk to be sent in its
entirety?
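Of the three options, the buffering wrapper can be sketched in a few lines; here in Python, with zlib standing in for zstd's one-shot dict-based compression and function names mirroring the zck API from this mail:

```python
import zlib  # stand-in only: zchunk itself would use zstd with a dict

class BufferingChunker:
    """Buffer streamed data; compress a whole chunk at zck_end_chunk()."""
    def __init__(self, compress_chunk=zlib.compress):
        self._buf = bytearray()
        self._compress = compress_chunk
        self.chunks = []  # compressed chunks, in order

    def zck_compress(self, data):
        # the streaming API is preserved; nothing is compressed yet
        self._buf.extend(data)

    def zck_end_chunk(self):
        # only now does the one-shot (dict-capable) compressor run
        if self._buf:
            self.chunks.append(self._compress(bytes(self._buf)))
            self._buf.clear()

c = BufferingChunker()
c.zck_compress(b"hello ")
c.zck_compress(b"world")
c.zck_end_chunk()
print(len(c.chunks))  # 1
```

The cost is one extra copy of each chunk in memory, but it avoids both the static-library requirement and forcing callers to hand over each chunk in its entirety.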
Jonathan
Fedora Infrastructure Meeting 2018-03-22 1800 UTC
by Stephen John Smoogen
This shared document is for the next fedora infrastructure meeting.
= Preamble =
The infrastructure team will be having its weekly meeting tomorrow,
2018-03-22 at 18:00 UTC in #fedora-meeting on the freenode network.
We have a gobby document
(see: https://fedoraproject.org/wiki/Gobby )
fedora-infrastructure-meeting-next is the document.
Please try and review and edit that document before the meeting and we
will use it to have our agenda of things to discuss. A copy as of today
is included in this email.
If you have something to discuss, add the topic to the discussion area
with your name. If you would like to teach other folks about some
application or setup in our infrastructure, please add that topic and
your name to the learn about section.
= Introduction =
We will use it over the week before the meeting to gather status and info and
discussion items and so forth, then use it in the irc meeting to transfer
information to the meetbot logs.
= Meeting start stuff =
#startmeeting Infrastructure (2018-03-22)
#meetingname infrastructure
#topic aloha
#chair smooge relrod nirik pingou puiterwijk tflink
= Let new people say hello =
#topic New folks introductions
#info This is a place where people who are interested in Fedora
Infrastructure can introduce themselves
= Status / Information / Trivia / Announcements =
(We put things here we want others on the team to know, but don't need
to discuss)
(Please use #info <the thing> - your name)
#topic announcements and information
#info Infrastructure hackfest in april:
https://fedoraproject.org/wiki/Infrastructure_Hackathon_2018
#info We are in F28 Beta Freeze! All changes to frozen hosts must get
+2 on list before being applied - kevin
#info Bodhi 3.5.1 update pushed out. This time with more feeeling.
#info
= Things we should discuss =
We use this section to bring up discussion topics. Things we want to talk about
as a group and come up with some consensus or decision or just brainstorm a
problem or issue. If there are none of these we skip this section.
(Use #topic your discussion topic - your username)
#topic Ticket cleanup
#info none this week.
#topic Moving bots to dedicated channels
#info fm-apps and other reporting agents might be better in dedicated channels
#info does zodbot need to be in every channel?
= Apprentice office hours =
#topic Apprentice Open office minutes
#info A time where apprentices may ask for help or look at problems.
Here we will discuss any apprentice questions, try and match up people looking
for things to do with things to do, progress, testing anything like that.
= Learn about some application or setup in infrastructure =
(This section, each week we get 1 person to talk about an application or setup
that we have. Just going over what it is, how to contribute, ideas for
improvement,
etc. Whoever would like to do this, just add the info in this section. In the
event we don't find someone to teach about something, we skip this section
and just move on to open floor.)
#topic Learn about:
#info none this week
= Meeting end stuff =
#topic Open Floor
#endmeeting
--
Stephen J Smoogen.
Meeting Agenda Item: Introduction Peter Szabo
by Peter Szabo
Hi Infra-team,
I'm a new contributor candidate.
My name is Peter Szabo, IRC nick "sapo". Originally I'm Hungarian, but
currently I live in Brooklyn, New York, USA. My time zone is UTC - 05:00.
For 18 years I worked as an IT Engineer. I planned, installed and supported
complex contact center solutions. A single system was built upon mixed
Windows and Red Hat Linux (I have RHCSA) servers, 10-20 server entities per
system, depending on the customer requirements (number of end users,
required features, high availability needs, etc...) I also developed
(not-too-complex) scripts to meet customer needs and add functionality
that the system lacked out of the box (for example, special reporting needs,
or a more advanced backup solution than the one the system offered). These scripts
were written in a system specific language, BASH, Python or PowerShell -
whichever was the most useful in the given scenario or was specifically
requested by the customer.
I'd like to join this team and community for three main reasons:
1. I always preferred open source and community-driven products and
activities. I was an area manager in the early years of Waze, the
community-based GPS navigation program, and did a great deal of mapping of my
neighborhood. I'm also a regular volunteer of New York Road Runners. I was
thinking about joining the Fedora project for a long time; I just didn't
have the confidence to do it until now.
2. I'd like to improve my Linux administration and troubleshooting
skills, as well as step up to a higher level in scripting. I'm trying to
break out of the Voice world and move towards a less product-specific
career that involves some kind of system operations and maybe some code
writing as well.
3. Since I moved to the US I've been struggling to find a job, so I have a
lot of time that I want to use well in two ways: improving my skills and
doing something useful for others.
What I'm looking to do is hard to tell at this point, since I don't know
the project's infrastructure in detail. The getting started guide suggests
picking something from the outstanding issues. As a new contributor maybe I
could pick something from the easyfix category like #5290 - Generate
infrastructure map <https://pagure.io/fedora-infrastructure/issue/5290>
My experience lies in deploying servers, keeping them alive, and
improving them by writing automated jobs. I'm willing to learn any
technology, but I'd like to stay as close as possible to the basic Linux
services, since those are what I'd like to learn in more detail.
Who I am: I'm a guy who likes to keep the systems tidy and in good
condition. Someone for whom up-time is more important than new features.
Over 18 years of IT support I learned to think first and act second. I'm
a perfectionist in the wrong sense of the word: I'm rarely satisfied with
my job even if others are.
Who I'm not: Though I've worked with a large number of Linux-based systems, I'm
absolutely not a Linux Ninja. I'm also not a scripting expert (yet). I
solved several problems by writing scripts but my work never needed
outstanding scripting experience. Actually these are some of the reasons
I'm here, to gain deeper knowledge in these things.
Right now I can offer up to 12-16 hours of work per week. I hope
you'll find me useful and valuable, and I can find a mentor who could
introduce me to the infrastructure of the fedora project, and slowly I can
get a role here.
Peter
Freeze Break Request: Make pkgs redirect more general
by Kevin Fenzi
Greetings.
I'd like to apply the following ansible patch and run the pkgs playbook.
This will make our pkgs redirect handle redirecting more than / and keep
links in bugs alive pointing to the new location.
See ticket: https://pagure.io/fedora-infrastructure/issue/6785 for more
details.
+1s?
kevin
--
diff --git a/roles/distgit/templates/lookaside-upload.conf b/roles/distgit/templates/lookaside-upload.conf
index dc2b882..716a166 100644
--- a/roles/distgit/templates/lookaside-upload.conf
+++ b/roles/distgit/templates/lookaside-upload.conf
@@ -39,7 +39,7 @@ Alias /robots.txt /var/www/robots-src.txt
</Location>
RewriteEngine on
- RewriteRule "^/$" "https://src{{ env_suffix }}.fedoraproject.org/"
+ RewriteRule "^/(.*)$" "https://src{{ env_suffix }}.fedoraproject.org/$1"
RewriteRule "^/login/$" "https://src{{ env_suffix }}.fedoraproject.org/login/"
</VirtualHost>
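Using Python's re module as a stand-in for mod_rewrite (and an empty env_suffix, as in production), the behavior change can be illustrated like this; note mod_rewrite matches only the URL path, so query strings aren't part of the pattern:

```python
import re

# (pattern, substitution) pairs mirroring the old and new RewriteRule
OLD = (r'^/$', 'https://src.fedoraproject.org/')
NEW = (r'^/(.*)$', r'https://src.fedoraproject.org/\1')

def rewrite(path, rule):
    pattern, target = rule
    # like mod_rewrite, leave the path alone when the pattern doesn't match
    return re.sub(pattern, target, path)

print(rewrite('/', OLD))             # https://src.fedoraproject.org/
print(rewrite('/rpms/kernel', OLD))  # /rpms/kernel -- the old rule misses it
print(rewrite('/rpms/kernel', NEW))  # https://src.fedoraproject.org/rpms/kernel
```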
Freeze Break Request: update koji and oz_timeout on buildvm-s390x-02
by Kevin Fenzi
Greetings.
image composes using oz take a long time on our s390x image builder,
(due to them being in another datacenter). They take so long, they timeout.
See: https://pagure.io/koji/pull-request/837 from sinny and
https://pagure.io/koji/pull-request/841 which makes this value
configurable in kojid.conf
I would like to:
1. Update koji on buildvm-s390x-02 to the version I just built in
rawhide/f28 that has a backport of the patch that makes this value
configurable.
2. Apply the following ansible patch to set the timeout only on the
buildvm-s390x-02 builder:
diff --git a/roles/koji_builder/templates/kojid.conf b/roles/koji_builder/templates/kojid.conf
index 832d80c..9f10d9c 100644
--- a/roles/koji_builder/templates/kojid.conf
+++ b/roles/koji_builder/templates/kojid.conf
@@ -20,7 +20,12 @@ maxjobs=25
keepalive=False
rpmbuild_timeout=172800
-
+{% if ansible_hostname.startswith('buildvm-s390x-02') %}
+; Set oz timeout higher on the s390x image builder to allow it to finish.
+; Install timeout(seconds) for image build
+; if it's unset, use the number in /etc/oz/oz.cfg, supported since oz-0.16.0
+oz_install_timeout=14400
+{% endif %}
use_createrepo_c=True
{% if host in groups['buildvm-s390x'] %}
3. Run the playbook on it to update the config and restart kojid.
+1s?
kevin