Freeze Break Request: More debugging on armv7 composer vm
by Kevin Fenzi
Greetings.
For a long while we were plagued by an issue where our
buildvm-armv7-01/02/03 VMs would get 'stuck' on nightly rawhide and
branched composes.
Laura found a workaround for this issue: with an obscure sysctl set,
they don't hang anymore. However, it would be nice if they didn't hang
in cases like this for everyone else out of the box, so upstream kernel
developers would like us to do some more debugging of the issue and
send them the results, to try and get the default case working right.
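For context, the workaround in the ansible diff below boils down to one kernel knob. A minimal sketch of the equivalent (requires root, and the knob only exists on 32-bit kernels with highmem, so this is illustrative rather than something to run on any box):

```python
from pathlib import Path

# vm.highmem_is_dirtyable corresponds to this procfs file; writing "1"
# allows highmem pages to count against the dirty-page limits, which is
# the setting that keeps the armv7 builders from hanging.
knob = Path("/proc/sys/vm/highmem_is_dirtyable")
if knob.exists():  # only present on 32-bit highmem kernels
    knob.write_text("1")
```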
So, what I would like to do:
* Take buildvm-armv7-03 and install
https://koji.fedoraproject.org/koji/taskinfo?taskID=25509848 on it and
reboot it into that kernel.
* Disable the sysctl we set in ansible for it:
diff --git a/roles/koji_builder/tasks/main.yml b/roles/koji_builder/tasks/main.yml
index 6b6ecf0..75c427c 100644
--- a/roles/koji_builder/tasks/main.yml
+++ b/roles/koji_builder/tasks/main.yml
@@ -276,4 +276,4 @@
   sysctl: name=vm.highmem_is_dirtyable value=1 state=present
           sysctl_set=yes reload=yes
   tags:
   - koji_builder
-  when: inventory_hostname.startswith(('buildvm-armv7-01.arm', 'buildvm-armv7-02', 'buildvm-armv7-03'))
+  when: inventory_hostname.startswith(('buildvm-armv7-01.arm', 'buildvm-armv7-02'))
* Wait for a compose to hang on it, gather the information needed, and
then put everything back the way it was before.
Note that this causes no compose failures, just a delay while the
compose waits for that job to finish; rebooting the box causes the job
to restart and complete fine.
We could hold off and do this after freeze, but I have a feeling freeze
is going to be long and I'd prefer to get the info to upstream while
they are still interested in looking into it.
Thoughts? +1s? -1s? rotten fruit?
kevin
5 years, 6 months
Freeze Break Request: fix prerelease redirects on getfedora.org
by Kevin Fenzi
From 85c547ceae99c648375f69e67722b6c152ca432f Mon Sep 17 00:00:00 2001
From: Kevin Fenzi <kevin(a)scrye.com>
Date: Mon, 12 Mar 2018 18:52:46 +0000
Subject: [PATCH] Tweak the prerelease redirects for
server/workstation/atomic.
If using the default lang, there will not be a /NN/ code in the URL,
resulting in the redirect failing to work. So we drop the / on the end,
and it works for both /en/workstation/prerelease and
/workstation/prerelease. We need to fix this so users aren't directed
to a Fedora 28 prerelease page even before Beta.
Signed-off-by: Kevin Fenzi <kevin(a)scrye.com>
---
playbooks/include/proxies-redirects.yml | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/playbooks/include/proxies-redirects.yml b/playbooks/include/proxies-redirects.yml
index 068e402..78f8ff6 100644
--- a/playbooks/include/proxies-redirects.yml
+++ b/playbooks/include/proxies-redirects.yml
@@ -350,22 +350,22 @@
    - role: httpd/redirectmatch
      name: prerelease-to-final-gfo-ws
      website: getfedora.org
-     regex: /(.*)/workstation/prerelease.*$
-     target: https://getfedora.org/$1
+     regex: /(.*)workstation/prerelease.*$
+     target: https://getfedora.org/$1/workstation
      when: env != 'staging'
    - role: httpd/redirectmatch
      name: prerelease-to-final-gfo-srv
      website: getfedora.org
-     regex: /(.*)/server/prerelease.*$
-     target: https://getfedora.org/$1
+     regex: /(.*)server/prerelease.*$
+     target: https://getfedora.org/$1/server
      when: env != 'staging'
    - role: httpd/redirectmatch
      name: prerelease-to-final-gfo-atomic
      website: getfedora.org
-     regex: /(.*)/atomic/prerelease.*$
-     target: https://getfedora.org/$1
+     regex: /(.*)atomic/prerelease.*$
+     target: https://getfedora.org/$1/atomic
      when: env != 'staging'
    - role: httpd/redirectmatch
--
1.8.3.1
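The regex change can be sanity-checked with a quick sketch (plain Python re, not part of the playbook; the old pattern required a "/" before "workstation", which default-language URLs without a /en/ segment never have):

```python
import re

# Old pattern demanded a "/" before "workstation"; the fixed one drops it.
old = re.compile(r'/(.*)/workstation/prerelease.*$')
new = re.compile(r'/(.*)workstation/prerelease.*$')

# With a language code, both patterns match:
assert old.match('/en/workstation/prerelease')
assert new.match('/en/workstation/prerelease')
# Without one, only the new pattern matches, so the redirect now fires:
assert old.match('/workstation/prerelease') is None
assert new.match('/workstation/prerelease')
```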
FBR: Fix the broken fedimg hub config key
by Sayan Chowdhury
Hi,
After the last change updating fedimg on stg, the config key in prod
also broke. To fix this, I would like to apply the following patch so
that it can listen to pungi messages.
---
roles/fedimg/templates/fedmsg.d/fedimg.py | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/roles/fedimg/templates/fedmsg.d/fedimg.py b/roles/fedimg/templates/fedmsg.d/fedimg.py
index 25401cd..f3aa4da 100644
--- a/roles/fedimg/templates/fedmsg.d/fedimg.py
+++ b/roles/fedimg/templates/fedmsg.d/fedimg.py
@@ -29,8 +29,6 @@ config = {
 }
 {% else %}
 config = {
-    'fedimgconsumer.dev.enabled': False,
-    'fedimgconsumer.prod.enabled': True,
-    'fedimgconsumer.stg.enabled': False,
+    'fedimgconsumer': True,
 }
 {% endif %}
--
+1s?
--
Sayan Chowdhury <https://sayanchowdhury.dgplug.org/>
Senior Software Engineer, Fedora Engineering - Emerging Platform
GPG Fingerprint : 0F16 E841 E517 225C 7D13 AB3C B023 9931 9CD0 5C8B
Proud to work at The Open Organization!
FBR: fix hooks for new packages on src.fedoraproject.org
by Kevin Fenzi
Greetings.
It was reported to me that a number of new packages were not showing up
in grokmirror on src.fedoraproject.org. On looking I noted that new
packages did not have the chained post-receive hook we normally put in
place. The cause seems to be a typo in the command.
I'd like to:
* Apply the attached git patch
* run the pkgs ansible playbook.
* Manually run the check-git processes to fix all existing packages.
* Manually update grokmirror for all existing packages.
Note that without this we are not backing up any of the new packages, as
we are using grokmirror to back them up now.
+1s?
kevin
--
Subject: [PATCH] fix invocation of git-check-perms. This is needed to set up
hooks on new packages as they are added
Signed-off-by: Kevin Fenzi <kevin(a)scrye.com>
---
roles/distgit/tasks/main.yml | 2 +-
roles/gitolite/check_fedmsg_hooks/tasks/main.yml | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/roles/distgit/tasks/main.yml b/roles/distgit/tasks/main.yml
index 7d1f280..40c8a84 100644
--- a/roles/distgit/tasks/main.yml
+++ b/roles/distgit/tasks/main.yml
@@ -150,7 +150,7 @@
name="check-update-hooks" cron_file="ansible-check-update-hooks"
minute=0 hour=0 weekday=3
user=nobody
-  job="MAILTO=root PATH=/usr/bin:/usr/local/bin/git-check-perms --check=update-hook /srv/git/repositories"
+  job="/usr/local/bin/git-check-perms --check=update-hook /srv/git/repositories"
tags:
- distgit
diff --git a/roles/gitolite/check_fedmsg_hooks/tasks/main.yml b/roles/gitolite/check_fedmsg_hooks/tasks/main.yml
index 8766c8e..a22018e 100644
--- a/roles/gitolite/check_fedmsg_hooks/tasks/main.yml
+++ b/roles/gitolite/check_fedmsg_hooks/tasks/main.yml
@@ -8,7 +8,7 @@
minute=10
hour="0, 12"
user=nobody
-  job="MAILTO=root PATH=/usr/bin:/usr/local/bin/git-check-perms /srv/git/repositories --check=fedmsg-hook -f"
+  job="MAILTO=root /usr/local/bin/git-check-perms /srv/git/repositories --check=fedmsg-hook -f"
tags:
- git
- gitolite
--
1.8.3.1
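To see why the old cron job never ran: the shell parses both MAILTO=root and the PATH=... token as environment assignments, then tries to execute the option string itself as a command. A quick illustration of that shell behavior (just a sketch, run against /bin/sh):

```python
import subprocess

# The broken job string: everything up to "--check" is parsed as
# environment assignments, so the shell tries to run the option string
# itself as a command and fails with "command not found".
broken = ("MAILTO=root PATH=/usr/bin:/usr/local/bin/git-check-perms "
          "--check=update-hook /srv/git/repositories")
rc = subprocess.run(broken, shell=True,
                    stderr=subprocess.DEVNULL).returncode
print(rc)  # 127 = command not found; the hook check never executed
```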
Re: A proof-of-concept for delta'ing repodata
by Michal Domonkos
Hi Jonathan,
To me, the zchunk idea looks good.
Incidentally, for the last couple of months, I have been trying to
rethink the way we cache metadata on the clients, as part of the
libdnf (re)design efforts. My goal was to de-duplicate the data
between similar repos in the cache as well as decrease the size that
needs to be downloaded every time (inevitably leading to this topic).
I came up with two different strategies:
1) Chunking
At first, I realized that our repodata resembles the git data model (a
content-addressable file system).
Git has objects. They can either be blobs or trees. A tree is an index
of objects referred to by their hashes. In our domain, we have
repomd.xml (a tree) that refers to primary.xml and other files
(trees), which in turn refer (well, semantically at least) to
<package> snippets (blobs) and rpm files. What's different from git is
that our trees are xml files and we compress/combine some of them in a
single file (such as primary.xml). On the abstract level, though, the
concept is the same.
With this, you already get a pretty efficient way to distribute a
recursive data structure such as the repodata, if you can break it
down into objects wisely. It might not be super efficient, but it's
many times better than what we have now.
That made me think that either using git (libgit2) directly or doing a
small, lightweight implementation of the core concepts might be the
way to go. I even played with the latter a bit (I didn't get to
breaking down primary.xml, though):
https://github.com/dmnks/rhs-proto
In the context of this thread, this is basically what you do with
zchunk (just much better) :)
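The core of the git-like model sketched above fits in a few lines (illustrative only; the names are made up and come from neither rhs-proto nor zchunk):

```python
import hashlib

store = {}  # object store: sha256 digest -> blob

def put(blob: bytes) -> str:
    digest = hashlib.sha256(blob).hexdigest()
    store[digest] = blob  # identical blobs land on the same key
    return digest

# Two repos whose primary metadata shares a <package> snippet:
repo_a = [b"<package>foo 1.0</package>", b"<package>bar 2.0</package>"]
repo_b = [b"<package>foo 1.0</package>", b"<package>baz 3.0</package>"]

# A "tree" is just a list of object hashes, like repomd.xml -> files:
tree_a = [put(p) for p in repo_a]
tree_b = [put(p) for p in repo_b]

# The shared snippet is stored -- and would be fetched -- only once:
assert len(store) == 3
assert tree_a[0] == tree_b[0]
```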
2) Deltas
Later, during this year's devconf, I had a few "brainstorming"
sessions with Florian Festi who pointed out that the differences in
metadata updates might often be on the sub-package level (e.g. NEVRA
in the version tag) so chunking on the package boundaries might not
give us the best results possible. Instead perhaps, we could generate
deltas on the binary level.
Git does implement object deltas (see packfiles). However, they
require the webserver to be "smart" while all we can afford in the
Fedora infrastructure are pure HTTP GET requests, so that's already a
no-go.
An alternative would be to pre-generate (compressed) binary deltas for
the last N versions and let clients download an index file that will
tell them what deltas they're missing and should download. This is
basically what debian's pdiff format does. One downside to this
approach is that it doesn't give us the de-duplication on clients
consuming multiple repos with similar content (probably quite common
with RHEL subscriptions at least).
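A pdiff-style index can be sketched like this (hypothetical naming; debian's actual on-disk format differs):

```python
# The server pre-generates deltas between consecutive metadata versions
# and publishes an index; a client at version V fetches only the deltas
# it lacks to reach the latest version.
published = {"100->101": b"...", "101->102": b"...", "102->103": b"..."}

def deltas_needed(client_version: int, latest: int) -> list:
    wanted = ["%d->%d" % (v, v + 1) for v in range(client_version, latest)]
    return [d for d in wanted if d in published]

# A client at 101, with 103 current, needs just the last two deltas:
assert deltas_needed(101, 103) == ["101->102", "102->103"]
```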
Then I stumbled upon casync which combines the benefits of both
strategies; it chunks based on the shape of the data (arguably giving
better results than chunking on the package boundaries), and it
doesn't require a smart protocol. However, it involves a lot of HTTP
requests as you already mentioned.
Despite that, I'm still leaning towards chunking as the better
solution of the two. The question is how much granularity we want.
You made a good point: the repodata format is fixed (be it xml or
solv), so we might as well take advantage of it to detect boundaries
for chunking, rather than using a rolling hash (but I have no data to
back it up). I'm not sure how to approach the many-GET-requests (or
the lack of range support) problem, though.
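Exploiting the fixed format for chunk boundaries, rather than a rolling hash, could look like this (a toy sketch on an xml fragment):

```python
import hashlib
import re

# Split primary.xml on <package> boundaries and hash each chunk, so an
# update touching one package invalidates only that package's chunk.
xml = ("<metadata><package>foo 1.0</package>"
       "<package>bar 2.0</package></metadata>")
chunks = re.split(r"(?=<package>)", xml)
hashes = [hashlib.sha256(c.encode()).hexdigest() for c in chunks]

assert len(chunks) == 3  # prologue + two package-aligned chunks
```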
As part of my efforts, I created this "small" git repo that contains
metadata snapshots since ~February, which can be useful for seeing what
typical metadata updates look like. Feel free to use it (e.g. for
testing out zchunk):
https://pagure.io/mdhist
Thanks,
Michal
FBR: Update Bodhi tree filenames for atomic-workstation
by Patrick マルタインアンドレアス Uiterwijk
Hi all,
The filenames for atomic workstation treefiles changed from -ostree- to -atomic-.
Can I get +1s?
Patrick
diff --git a/roles/bodhi2/backend/templates/pungi.rpm.conf.j2 b/roles/bodhi2/backend/templates/pungi.rpm.conf.j2
index ff31d400e..1561360d6 100644
--- a/roles/bodhi2/backend/templates/pungi.rpm.conf.j2
+++ b/roles/bodhi2/backend/templates/pungi.rpm.conf.j2
@@ -151,7 +151,11 @@ ostree = {
[% if release.version_int >= 28 %]
"version": "!OSTREE_VERSION_FROM_LABEL_DATE_TYPE_RESPIN",
[% endif %]
- "treefile": "fedora-ostree-workstation-updates-[[ request.name ]].json",
+ [% if release.version_int >= 28 %]
+ "treefile": "fedora-atomic-workstation-updates-[[ request.name ]].json",
+ [% else %]
+ "treefile": "fedora-ostree-workstation-updates-[[ request.name ]].json",
+ [% endif %]
"config_url": "https://pagure.io/workstation-ostree-config.git",
"config_branch": "f[[ release.version ]]",
"repo": [
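The template logic reduces to a version switch; a plain-python sketch of the selection (the [% %] delimiters and release/request objects belong to the real pungi template, not this sketch):

```python
def treefile(version_int: int, name: str) -> str:
    # Mirrors the [% if release.version_int >= 28 %] branch in the
    # patch: F28+ composes use -atomic-, older ones keep -ostree-.
    flavor = "atomic" if version_int >= 28 else "ostree"
    return "fedora-%s-workstation-updates-%s.json" % (flavor, name)

assert treefile(28, "f28") == "fedora-atomic-workstation-updates-f28.json"
assert treefile(27, "f27") == "fedora-ostree-workstation-updates-f27.json"
```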
FBR: add exclude to mirror crawler
by Adrian Reber
The rsync based crawler has two excludes hardcoded:
cmd = "rsync --temp-dir=/tmp -r --exclude=.snapshot --exclude='*.~tmp~'"
I am just in discussions with a new mirror, and the rsync based crawler
fails there with:
rsync: opendir "/lost+found" (in epel) failed: Permission denied (13)
To avoid this, both in this case and on other mirrors, I would like to
apply the following patch:
diff --git a/roles/mirrormanager/frontend2/templates/mirrormanager2.cfg b/roles/mirrormanager/frontend2/templates/mirrormanager2.cfg
index 2bdd60273..69c930b23 100644
--- a/roles/mirrormanager/frontend2/templates/mirrormanager2.cfg
+++ b/roles/mirrormanager/frontend2/templates/mirrormanager2.cfg
@@ -161,7 +161,7 @@ CHECK_SESSION_IP = True
# Specify additional rsync parameters for the crawler
#
# --timeout 14400: abort rsync crawl after 4 hours
-CRAWLER_RSYNC_PARAMETERS = '--no-motd --timeout 14400'
+CRAWLER_RSYNC_PARAMETERS = '--no-motd --timeout 14400 --exclude=lost+found'
###
# Configuration options used by the crons
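An approximation of how the exclude applies (rsync's filter matching is more involved; this is only an fnmatch sketch):

```python
import fnmatch

# The crawler's hardcoded excludes plus the new lost+found entry:
excludes = [".snapshot", "*.~tmp~", "lost+found"]

def excluded(path: str) -> bool:
    # A path is skipped if any of its components matches an exclude.
    return any(fnmatch.fnmatch(part, pat)
               for part in path.split("/") for pat in excludes)

assert excluded("epel/lost+found")        # the dir that broke the crawl
assert not excluded("epel/Packages/foo.rpm")
```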
This excludes 'lost+found' from being scanned on all hosts. Can I get
two +1s to update the crawler's configuration?
Adrian
Unsubscribe from broken Jenkins build
by Justin W. Flory
Hi all,
For the past week, I've been receiving a few hundred emails about
failed Jenkins builds for the Elections application.
https://pagure.io/elections
I'm not sure why I'm subscribed to the Jenkins build failures. Could
someone help fix the broken builds (email output below) or unsubscribe
me from these notifications?
Thanks!
-------- Forwarded Message --------
Subject: Build failed in Jenkins: elections #570
Date: Mon, 12 Mar 2018 01:24:06 +0000 (UTC)
From: jenkins(a)fedoraproject.org
To: pingou(a)pingoured.fr, rlerch(a)redhat.com, git(a)jwf.io
See <https://jenkins.fedorainfracloud.org/job/elections/570/>
------------------------------------------
Started by an SCM change
Building remotely on EL7 (el7 EL el rhel RHEL rhel7 RHEL7) in workspace <https://jenkins.fedorainfracloud.org/job/elections/ws/>
Cloning the remote Git repository
Cloning repository https://github.com/fedora-infra/elections.git
 > git init <https://jenkins.fedorainfracloud.org/job/elections/ws/> # timeout=10
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Could not init <https://jenkins.fedorainfracloud.org/job/elections/ws/>
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$4.execute(CliGitAPIImpl.java:585)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:442)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
    at hudson.remoting.UserRequest.perform(UserRequest.java:153)
    at hudson.remoting.UserRequest.perform(UserRequest.java:50)
    at hudson.remoting.Request$2.run(Request.java:332)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    at ......remote call to EL7(Native Method)
    at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1433)
    at hudson.remoting.UserResponse.retrieve(UserRequest.java:253)
    at hudson.remoting.Channel.call(Channel.java:797)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
    at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
    at com.sun.proxy.$Proxy42.execute(Unknown Source)
    at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1003)
    at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
    at hudson.scm.SCM.checkout(SCM.java:485)
    at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
    at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
    at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
    at hudson.model.Run.execute(Run.java:1738)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:98)
    at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Error performing git command
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1611)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1576)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1572)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1233)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$4.execute(CliGitAPIImpl.java:583)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:442)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
    at hudson.remoting.UserRequest.perform(UserRequest.java:153)
    at hudson.remoting.UserRequest.perform(UserRequest.java:50)
    at hudson.remoting.Request$2.run(Request.java:332)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:717)
    at hudson.Proc$LocalProc.<init>(Proc.java:264)
    at hudson.Proc$LocalProc.<init>(Proc.java:216)
    at hudson.Launcher$LocalLauncher.launch(Launcher.java:819)
    at hudson.Launcher$ProcStarter.start(Launcher.java:381)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1596)
    ... 15 more
ERROR: null
Skipping Cobertura coverage report as build was not UNSTABLE or better ...