[Bug 1313783] New: Regression 'kubectl config view' no longer has
--output=json flag
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1313783
Bug ID: 1313783
Summary: Regression 'kubectl config view' no longer has
--output=json flag
Product: Fedora
Version: 23
Component: kubernetes
Assignee: jchaloup@redhat.com
Reporter: stefw@redhat.com
QA Contact: extras-qa@fedoraproject.org
CC: eparis@redhat.com, golang@lists.fedoraproject.org,
jcajka@redhat.com, jchaloup@redhat.com,
nhorman@redhat.com, vbatts@redhat.com
Description of problem:
The 'kubectl config view' command no longer has the --output=json flag.
Version-Release number of selected component (if applicable):
kubernetes-client-1.2.0-0.12.alpha6.gitf0cd09a.fc23.x86_64
How reproducible:
Every time
Steps to Reproduce:
1. kubectl config view --output=json
Actual results:
unknown flag: --output
[root@localhost ~]#
Expected results:
JSON output for the Kubernetes config, as produced by previous kubernetes-client
versions.
$ kubectl config view --output=json
{
"kind": "Config",
"apiVersion": "v1",
"preferences": {},
"clusters": [
...
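If the flag was renamed rather than dropped in this alpha build, the client's
help output will show it. A quick hedged check (the short form -o is an
assumption; it may have been removed along with the long form):
```
# See which output-related flags the installed client still accepts
kubectl config view --help | grep -i output

# In many kubectl versions -o is the short form of --output; worth
# trying as a workaround if it survived the refactor
kubectl config view -o json
```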
--
You are receiving this mail because:
You are on the CC list for the bug.
[Bug 1311861] New: kubectl exec is broken in
kubernetes-1.2.0-0.4.alpha1
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1311861
Bug ID: 1311861
Summary: kubectl exec is broken in kubernetes-1.2.0-0.4.alpha1
Product: Fedora
Version: 23
Component: kubernetes
Assignee: jchaloup@redhat.com
Reporter: stefw@redhat.com
QA Contact: extras-qa@fedoraproject.org
CC: eparis@redhat.com, golang@lists.fedoraproject.org,
jcajka@redhat.com, jchaloup@redhat.com,
nhorman@redhat.com, vbatts@redhat.com
Description of problem:
The 'kubectl exec' command has regressed and produces this failure:
error: error executing remote command: Error executing command in container:
Unrecognized input header
Version-Release number of selected component (if applicable):
# rpm -q kubernetes
kubernetes-1.2.0-0.4.alpha1.git4c8e6f4.fc23.x86_64
How reproducible:
Every time
Steps to Reproduce:
1. Run some pods.
2. Run the commands below.
Actual results:
# rpm -q kubernetes
kubernetes-1.2.0-0.4.alpha1.git4c8e6f4.fc23.x86_64
# rpm -q docker
docker-1.10.2-1.git86e59a5.fc23.x86_64
# docker exec
k8s_mock-container.88f1f36_mock-aogwd_default_d53a657a-db98-11e5-8811-9e005dd50001_39e4050e
date
Thu Feb 25 08:39:58 UTC 2016
# kubectl get pods
NAME READY STATUS RESTARTS AGE
mock-aogwd 1/1 Running 0 2m
mock-cxirx 1/1 Running 0 2m
# kubectl exec mock-aogwd -- date
error: error executing remote command: Error executing command in container:
Unrecognized input header
Expected results:
# rpm -q kubernetes
kubernetes-1.1.0-0.17.git388061f.fc23.x86_64
# rpm -q docker
docker-1.9.1-6.git6ec29ef.fc23.x86_64
# kubectl get pods
NAME READY STATUS RESTARTS AGE
mock-on1bw 1/1 Running 0 46s
mock-ra9ad 1/1 Running 0 36s
# kubectl exec mock-on1bw -- date
Thu Feb 25 08:36:13 UTC 2016
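The working and failing transcripts differ in the docker version as well as the
kubernetes version, so the mismatch may sit between the kubelet and docker
1.10's stream format rather than in kubectl itself. Two hedged checks that
could narrow it down (the kubelet unit name follows the Fedora packaging):
```
# Confirm the exact client/server version skew in play
kubectl version

# The "Unrecognized input header" text likely originates on the node
# while demultiplexing the container's exec stream; check the kubelet log
journalctl -u kubelet | grep -i 'input header'
```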
--
You are receiving this mail because:
You are on the CC list for the bug.
[Bug 1259148] New: Regression: kubernetes 1.1.0 breaks watching for resources
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1259148
Bug ID: 1259148
Summary: Regression: kubernetes 1.1.0 breaks watching for
resources
Product: Fedora
Version: 22
Component: kubernetes
Assignee: jchaloup@redhat.com
Reporter: stefw@redhat.com
QA Contact: extras-qa@fedoraproject.org
CC: eparis@redhat.com, golang@lists.fedoraproject.org,
jcajka@redhat.com, jchaloup@redhat.com,
lsm5@redhat.com, nhorman@redhat.com, vbatts@redhat.com
Description of problem:
When Cockpit tries to watch resources against Kubernetes 1.1.0, it gets the
following complaints:
> watching kubernetes nodes failed: too old resource version: 0 (13)
> watching kubernetes pods failed: too old resource version: 0 (1)
> watching kubernetes endpoints failed: too old resource version: 0 (7)
[the three messages above repeat continuously]
Version-Release number of selected component (if applicable):
kubernetes x86_64 1.1.0-0.14.gitb6f18c7.fc22 updates-testing 33 k
kubernetes-client x86_64 1.1.0-0.14.gitb6f18c7.fc22 updates-testing 2.8 M
kubernetes-master x86_64 1.1.0-0.14.gitb6f18c7.fc22 updates-testing 13 M
kubernetes-node x86_64 1.1.0-0.14.gitb6f18c7.fc22 updates-testing 9.1 M
How reproducible:
Every time.
Steps to Reproduce:
1. Run Cockpit on your node, with cockpit-kubernetes
2. Navigate to cluster tab
3. Look at the JavaScript console output
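For background, a Kubernetes watch is meant to resume from a resourceVersion
returned by a prior list; a client that always starts from resourceVersion=0
depends on behavior that evidently changed in 1.1.0. A minimal sketch of the
list-then-watch pattern (the insecure 127.0.0.1:8080 apiserver address is an
assumption):
```
# List first and note the resourceVersion in the list metadata
curl -s http://127.0.0.1:8080/api/v1/pods | grep resourceVersion

# Then watch from that version (substitute the value printed above for <N>)
curl -s 'http://127.0.0.1:8080/api/v1/pods?watch=true&resourceVersion=<N>'
```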
--
You are receiving this mail because:
You are on the CC list for the bug.
[Bug 1315472] New: kube-controller-manager ignores --master argument
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1315472
Bug ID: 1315472
Summary: kube-controller-manager ignores --master argument
Product: Fedora
Version: 23
Component: kubernetes
Assignee: jchaloup@redhat.com
Reporter: dustymabe@redhat.com
QA Contact: extras-qa@fedoraproject.org
CC: eparis@redhat.com, golang@lists.fedoraproject.org,
jcajka@redhat.com, jchaloup@redhat.com,
nhorman@redhat.com, vbatts@redhat.com
Description of problem:
kube-controller-manager ignores --master argument
Version-Release number of selected component (if applicable):
[root@f23 kubernetes]# rpm -qa | grep kubernetes | sort
kubernetes-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-client-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-master-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
kubernetes-node-1.2.0-0.13.alpha6.gitf0cd09a.fc23.x86_64
How reproducible:
Always
Steps to Reproduce:
After setting up the system as you normally would (create certs, etc.), observe
that the kube-controller-manager service ignores the '--master' argument on the
command line.
You can reproduce with this command:
```
/usr/bin/kube-controller-manager --logtostderr=true --v=0
--master=http://127.0.0.1:8080
--service_account_private_key_file=/etc/pki/kube-apiserver/serviceaccount.key
```
I set up my systems using the lines in this ansible playbook:
https://github.com/dustymabe/vagrantdirs/blob/master/f23/playbook.yml#L62
Actual results:
The following is the message you receive. Note the "nor --master was specified"
log message, which indicates it didn't recognize the
--master=http://127.0.0.1:8080 argument we provided:
```
[root@f23 kubernetes]# /usr/bin/kube-controller-manager --logtostderr=true
--v=0 --master=http://127.0.0.1:8080
--service_account_private_key_file=/etc/pki/kube-apiserver/serviceaccount.key
W0307 20:07:44.007893 13698 client_config.go:352] Neither --kubeconfig nor
--master was specified. Using the inClusterConfig. This might not work.
W0307 20:07:44.008076 13698 client_config.go:357] error creating
inClusterConfig, falling back to default config: %vunable to load in-cluster
configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be
defined
I0307 20:07:44.008618 13698 plugins.go:71] No cloud provider specified.
I0307 20:07:44.010789 13698 replication_controller.go:185] Starting RC
Manager
I0307 20:07:44.012329 13698 nodecontroller.go:134] Sending events to api
server.
E0307 20:07:44.012490 13698 controllermanager.go:212] Failed to start service
controller: ServiceController should not be run without a cloudprovider.
I0307 20:07:44.012519 13698 controllermanager.go:225] allocate-node-cidrs set
to false, node controller not creating routes
I0307 20:07:44.023177 13698 controllermanager.go:258] Starting
extensions/v1beta1 apis
I0307 20:07:44.023323 13698 controllermanager.go:260] Starting horizontal pod
controller.
I0307 20:07:44.023500 13698 controllermanager.go:274] Starting daemon set
controller
I0307 20:07:44.023730 13698 controllermanager.go:280] Starting job controller
I0307 20:07:44.023833 13698 controller.go:180] Starting Daemon Sets
controller manager
```
As a result we see this when trying to create a pod:
```
[root@f23 ~]# kubectl create -f /tmp/busybox.yaml
Error from server: error when creating "/tmp/busybox.yaml": pods "busybox" is
forbidden: no API token found for service account default/default, retry after
the token is automatically created and added to the service account
```
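Since the warning shows that --kubeconfig is still read, a possible workaround
(untested here; the kubeconfig path and the 'local' cluster/context names are
placeholders) is to hand the controller manager the master address through a
minimal kubeconfig:
```
# Hypothetical workaround: supply the master via --kubeconfig instead
cat > /etc/kubernetes/controller-manager.kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
EOF

/usr/bin/kube-controller-manager --logtostderr=true --v=0 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
  --service_account_private_key_file=/etc/pki/kube-apiserver/serviceaccount.key
```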
--
You are receiving this mail because:
You are on the CC list for the bug.
[Bug 1313787] New: Regression 'kubectl create -f path' no longer
works
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1313787
Bug ID: 1313787
Summary: Regression 'kubectl create -f path' no longer works
Product: Fedora
Version: 23
Component: kubernetes
Assignee: jchaloup@redhat.com
Reporter: stefw@redhat.com
QA Contact: extras-qa@fedoraproject.org
CC: eparis@redhat.com, golang@lists.fedoraproject.org,
jcajka@redhat.com, jchaloup@redhat.com,
nhorman@redhat.com, vbatts@redhat.com
Description of problem:
The syntax 'kubectl create -f path.json' is commonly used to create objects in
Kubernetes. It is no longer accepted.
Version-Release number of selected component (if applicable):
kubernetes-client-1.2.0-0.12.alpha6.gitf0cd09a.fc23.x86_64
How reproducible:
Every time.
Steps to Reproduce:
1. kubectl create -f test.json
Actual results:
unknown shorthand flag: 'f' in -f
[root@localhost ~]# rpm -q kubernetes-client
Expected results:
Create the objects. Or tell us that test.json doesn't exist (if it doesn't).
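If only the shorthand table is broken, the long form of the flag may still be
registered; two hedged workarounds to try (neither verified against this
build):
```
# --filename is the long form of -f in kubectl
kubectl create --filename=test.json

# Equivalent, reading the object from stdin
kubectl create --filename=- < test.json
```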
--
You are receiving this mail because:
You are on the CC list for the bug.