https://bugzilla.redhat.com/show_bug.cgi?id=1202517
Bug ID: 1202517
Summary: docker fd leak
Product: Red Hat Enterprise Linux 7
Version: 7.2
Component: docker
Assignee: dwalsh(a)redhat.com
Reporter: dwalsh(a)redhat.com
QA Contact: virt-bugs(a)redhat.com
CC: adimania(a)gmail.com, admiller(a)redhat.com,
arozansk(a)redhat.com, b(a)wtnb.mydns.jp,
dwalsh(a)redhat.com, extras-qa(a)fedoraproject.org,
golang(a)lists.fedoraproject.org, hushan.jia(a)gmail.com,
jchaloup(a)redhat.com, jeder(a)redhat.com,
jmario(a)redhat.com, jperrin(a)centos.org,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, miminar(a)redhat.com, s(a)shk.io,
srao(a)redhat.com, thrcka(a)redhat.com,
tstclair(a)redhat.com, vbatts(a)redhat.com,
vgoyal(a)redhat.com
Depends On: 1189028
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1189028
[Bug 1189028] docker fd leak
--
You are receiving this mail because:
You are on the CC list for the bug.
https://bugzilla.redhat.com/show_bug.cgi?id=1214958
Bug ID: 1214958
Summary: etcd-v2.1.0-alpha.0 is available
Product: Fedora
Version: rawhide
Component: etcd
Keywords: FutureFeature, Triaged
Assignee: lacypret(a)gmail.com
Reporter: upstream-release-monitoring(a)fedoraproject.org
QA Contact: extras-qa(a)fedoraproject.org
CC: avagarwa(a)redhat.com, eparis(a)redhat.com,
golang(a)lists.fedoraproject.org, jchaloup(a)redhat.com,
lacypret(a)gmail.com, lemenkov(a)gmail.com,
lsm5(a)redhat.com, walters(a)redhat.com
Latest upstream release: v2.1.0-alpha.0
Current version/release in rawhide: 2.0.10-1.fc23
URL: https://github.com/coreos/etcd
Please consult the package updates policy before you issue an update to a
stable branch: https://fedoraproject.org/wiki/Updates_Policy
More information about the service that created this bug can be found at:
https://fedoraproject.org/wiki/Upstream_release_monitoring
Please keep in mind that with any upstream change, there may also be packaging
changes that need to be made. Specifically, please remember that it is your
responsibility to review the new version to ensure that the licensing is still
correct and that no non-free or legally problematic items have been added
upstream.
https://bugzilla.redhat.com/show_bug.cgi?id=1298116
Bug ID: 1298116
Summary: kubernetes: Improper admission check control
Product: Security Response
Component: vulnerability
Keywords: Security
Severity: medium
Priority: medium
Assignee: security-response-team(a)redhat.com
Reporter: amaris(a)redhat.com
CC: eparis(a)redhat.com, golang(a)lists.fedoraproject.org,
jcajka(a)redhat.com, jchaloup(a)redhat.com,
nhorman(a)redhat.com, vbatts(a)redhat.com
It was found that patch will check admission control with an empty object and
if it passes, then will proceed to update the object with the patch. Admission
control plugins don't get a chance to see/validate what is actually going to be
updated.
CVE request:
http://seclists.org/oss-sec/2016/q1/76
Upstream patch:
https://github.com/deads2k/kubernetes/commit/d1e258afcf837cf70522c2950bb0ae…
https://bugzilla.redhat.com/show_bug.cgi?id=1298117
Bug ID: 1298117
Summary: kubernetes: Improper admission check control
[fedora-all]
Product: Fedora
Version: 23
Component: kubernetes
Keywords: Security, SecurityTracking
Severity: medium
Priority: medium
Assignee: jchaloup(a)redhat.com
Reporter: amaris(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: eparis(a)redhat.com, golang(a)lists.fedoraproject.org,
jcajka(a)redhat.com, jchaloup(a)redhat.com,
nhorman(a)redhat.com, vbatts(a)redhat.com
Blocks: 1298116
This is an automatically created tracking bug! It was created to ensure
that one or more security vulnerabilities are fixed in affected versions
of Fedora.
For comments that are specific to the vulnerability please use bugs filed
against the "Security Response" product referenced in the "Blocks" field.
For more information see:
http://fedoraproject.org/wiki/Security/TrackingBugs
When submitting as an update, use the fedpkg template provided in the next
comment(s). This will include the bug IDs of this tracking bug as well as
the relevant top-level CVE bugs.
Please also mention the CVE IDs being fixed in the RPM changelog and the
fedpkg commit message.
NOTE: this issue affects multiple supported versions of Fedora. While only
one tracking bug has been filed, please correct all affected versions at
the same time. If you need to fix the versions independent of each other,
you may clone this bug as appropriate.
[bug automatically created by: add-tracking-bugs]
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1298116
[Bug 1298116] kubernetes: Improper admission check control
https://bugzilla.redhat.com/show_bug.cgi?id=1313783
Bug ID: 1313783
Summary: Regression 'kubectl config view' no longer has
--output=json flag
Product: Fedora
Version: 23
Component: kubernetes
Assignee: jchaloup(a)redhat.com
Reporter: stefw(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: eparis(a)redhat.com, golang(a)lists.fedoraproject.org,
jcajka(a)redhat.com, jchaloup(a)redhat.com,
nhorman(a)redhat.com, vbatts(a)redhat.com
Description of problem:
The 'kubectl config view' command no longer accepts the --output=json flag.
Version-Release number of selected component (if applicable):
kubernetes-client-1.2.0-0.12.alpha6.gitf0cd09a.fc23.x86_64
How reproducible:
Every time
Steps to Reproduce:
1. kubectl config view --output=json
Actual results:
unknown flag: --output
Expected results:
JSON output for kubernetes config as seen with previous kubernetes-client
versions.
$ kubectl config view --output=json
{
"kind": "Config",
"apiVersion": "v1",
"preferences": {},
"clusters": [
...
https://bugzilla.redhat.com/show_bug.cgi?id=1311861
Bug ID: 1311861
Summary: kubectl exec is broken in kubernetes-1.2.0-0.4.alpha1
Product: Fedora
Version: 23
Component: kubernetes
Assignee: jchaloup(a)redhat.com
Reporter: stefw(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: eparis(a)redhat.com, golang(a)lists.fedoraproject.org,
jcajka(a)redhat.com, jchaloup(a)redhat.com,
nhorman(a)redhat.com, vbatts(a)redhat.com
Description of problem:
The 'kubectl exec' command has regressed, and produces this failure:
error: error executing remote command: Error executing command in container:
Unrecognized input header
Version-Release number of selected component (if applicable):
# rpm -q kubernetes
kubernetes-1.2.0-0.4.alpha1.git4c8e6f4.fc23.x86_64
How reproducible:
Every time
Steps to Reproduce:
1. Run some pods.
2. Run the code below.
Actual results:
# rpm -q kubernetes
kubernetes-1.2.0-0.4.alpha1.git4c8e6f4.fc23.x86_64
# rpm -q docker
docker-1.10.2-1.git86e59a5.fc23.x86_64
# docker exec k8s_mock-container.88f1f36_mock-aogwd_default_d53a657a-db98-11e5-8811-9e005dd50001_39e4050e date
Thu Feb 25 08:39:58 UTC 2016
# kubectl get pods
NAME READY STATUS RESTARTS AGE
mock-aogwd 1/1 Running 0 2m
mock-cxirx 1/1 Running 0 2m
# kubectl exec mock-aogwd -- date
error: error executing remote command: Error executing command in container:
Unrecognized input header
Expected results:
# rpm -q kubernetes
kubernetes-1.1.0-0.17.git388061f.fc23.x86_64
# rpm -q docker
docker-1.9.1-6.git6ec29ef.fc23.x86_64
# kubectl get pods
NAME READY STATUS RESTARTS AGE
mock-on1bw 1/1 Running 0 46s
mock-ra9ad 1/1 Running 0 36s
# kubectl exec mock-on1bw -- date
Thu Feb 25 08:36:13 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1259148
Bug ID: 1259148
Summary: Regression: kubernetes 1.1.0 breaks watching for
resources
Product: Fedora
Version: 22
Component: kubernetes
Assignee: jchaloup(a)redhat.com
Reporter: stefw(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: eparis(a)redhat.com, golang(a)lists.fedoraproject.org,
jcajka(a)redhat.com, jchaloup(a)redhat.com,
lsm5(a)redhat.com, nhorman(a)redhat.com, vbatts(a)redhat.com
Description of problem:
When cockpit tries to watch resources against kubernetes 1.1.0 it gets the
following complaints:
> watching kubernetes nodes failed: too old resource version: 0 (13)
> watching kubernetes pods failed: too old resource version: 0 (1)
> watching kubernetes endpoints failed: too old resource version: 0 (7)
(the three lines above repeat continuously)
Version-Release number of selected component (if applicable):
kubernetes x86_64 1.1.0-0.14.gitb6f18c7.fc22 updates-testing 33 k
kubernetes-client x86_64 1.1.0-0.14.gitb6f18c7.fc22 updates-testing 2.8 M
kubernetes-master x86_64 1.1.0-0.14.gitb6f18c7.fc22 updates-testing 13 M
kubernetes-node x86_64 1.1.0-0.14.gitb6f18c7.fc22 updates-testing 9.1 M
How reproducible:
Every time.
Steps to Reproduce:
1. Run Cockpit on your node, with cockpit-kubernetes
2. Navigate to cluster tab
3. Look at javascript console output
https://bugzilla.redhat.com/show_bug.cgi?id=1315472
Bug ID: 1315472
Summary: kube-controller-manager ignores --master argument
Product: Fedora
Version: 23
Component: kubernetes
Assignee: jchaloup(a)redhat.com
Reporter: dustymabe(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: eparis(a)redhat.com, golang(a)lists.fedoraproject.org,
jcajka(a)redhat.com, jchaloup(a)redhat.com,
nhorman(a)redhat.com, vbatts(a)redhat.com
Description of problem:
kube-controller-manager ignores --master argument
Version-Release number of selected component (if applicable):
[root@f23 kubernetes]# rpm -qa | grep kubernetes | sort
kubernetes-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-client-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-master-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-node-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
How reproducible:
Always
Steps to Reproduce:
After setting up the system as you normally would (creating certs, etc.), observe
that the kube-controller-manager service ignores the '--master' arg on the
command line.
You can repro with this command:
```
/usr/bin/kube-controller-manager --logtostderr=true --v=0
--master=http://127.0.0.1:8080
--service_account_private_key_file=/etc/pki/kube-apiserver/serviceaccount.key
```
I set up my systems using the lines in this ansible playbook:
https://github.com/dustymabe/vagrantdirs/blob/master/f23/playbook.yml#L62
Actual results:
The following is the message you receive. Note the "nor --master was specified"
log message, which indicates it didn't recognize the
--master=http://127.0.0.1:8080 argument we provided:
```
[root@f23 kubernetes]# /usr/bin/kube-controller-manager --logtostderr=true
--v=0 --master=http://127.0.0.1:8080
--service_account_private_key_file=/etc/pki/kube-apiserver/serviceaccount.key
W0307 20:07:44.007893 13698 client_config.go:352] Neither --kubeconfig nor
--master was specified. Using the inClusterConfig. This might not work.
W0307 20:07:44.008076 13698 client_config.go:357] error creating
inClusterConfig, falling back to default config: %vunable to load in-cluster
configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be
defined
I0307 20:07:44.008618 13698 plugins.go:71] No cloud provider specified.
I0307 20:07:44.010789 13698 replication_controller.go:185] Starting RC
Manager
I0307 20:07:44.012329 13698 nodecontroller.go:134] Sending events to api
server.
E0307 20:07:44.012490 13698 controllermanager.go:212] Failed to start service
controller: ServiceController should not be run without a cloudprovider.
I0307 20:07:44.012519 13698 controllermanager.go:225] allocate-node-cidrs set
to false, node controller not creating routes
I0307 20:07:44.023177 13698 controllermanager.go:258] Starting
extensions/v1beta1 apis
I0307 20:07:44.023323 13698 controllermanager.go:260] Starting horizontal pod
controller.
I0307 20:07:44.023500 13698 controllermanager.go:274] Starting daemon set
controller
I0307 20:07:44.023730 13698 controllermanager.go:280] Starting job controller
I0307 20:07:44.023833 13698 controller.go:180] Starting Daemon Sets
controller manager
```
As a result we see this when trying to create a pod:
```
[root@f23 ~]# kubectl create -f /tmp/busybox.yaml
Error from server: error when creating "/tmp/busybox.yaml": pods "busybox" is
forbidden: no API token found for service account default/default, retry after
the token is automatically created and added to the service account
```
https://bugzilla.redhat.com/show_bug.cgi?id=1313787
Bug ID: 1313787
Summary: Regression 'kubectl create -f path' no longer works
Product: Fedora
Version: 23
Component: kubernetes
Assignee: jchaloup(a)redhat.com
Reporter: stefw(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: eparis(a)redhat.com, golang(a)lists.fedoraproject.org,
jcajka(a)redhat.com, jchaloup(a)redhat.com,
nhorman(a)redhat.com, vbatts(a)redhat.com
Description of problem:
The syntax 'kubectl create -f path.json' is often used to create objects in
kubernetes. It is no longer accepted.
Version-Release number of selected component (if applicable):
kubernetes-client-1.2.0-0.12.alpha6.gitf0cd09a.fc23.x86_64
How reproducible:
Every time.
Steps to Reproduce:
1. kubectl create -f test.json
Actual results:
unknown shorthand flag: 'f' in -f
Expected results:
Create the objects. Or tell us that test.json doesn't exist (if it doesn't).