[Bug 1096286] New: "docker top all" doesn't show processes when --tty=false
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1096286
Bug ID: 1096286
Summary: "docker top all" doesn't show processes when
--tty=false
Product: Red Hat Enterprise Linux 7
Version: 7.1
Component: docker
Assignee: lsm5(a)redhat.com
Reporter: ldoktor(a)redhat.com
QA Contact: virt-bugs(a)redhat.com
CC: admiller(a)redhat.com, golang(a)lists.fedoraproject.org,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, skottler(a)redhat.com,
vbatts(a)redhat.com
Depends On: 1088259
+++ This bug was initially created as a clone of Bug #1088259 +++
After using the same magic required for the upstream docker 0.10, this behaves
the same on RHEL with docker-0.10.0-8.el7.x86_64.
Description of problem:
Hi guys, when I run docker with --tty=false, `docker top $NAME all` doesn't
show any processes (just the headers). When using only `docker top $NAME`, it
shows them.
Likewise, the `ps` command inside the container works, but `ps all` does not.
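A likely contributing factor (an assumption on my part, not confirmed by the report): BSD-style `ps a` / `ps all` selects only processes attached to a terminal, and under --tty=false the container's processes have no controlling tty. The effect can be sketched outside docker with `setsid`, which runs a command in a new session with no controlling terminal:

```shell
#!/bin/sh
# Sketch: a process with no controlling terminal (as in a --tty=false
# container) is skipped by BSD-style `ps a`, which selects only
# tty-attached processes. `setsid` detaches the command from any tty:
setsid sh -c 'ps -o tty= -p $$' </dev/null
# prints "?", i.e. no controlling terminal
```

If this is the cause, `docker top $NAME all` printing only headers under --tty=false would be `ps` behaving as documented rather than a docker defect.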
Version-Release number of selected component (if applicable):
docker-0.10.0-8.el7.x86_64
docker-io-0.9.0-3.fc20.x86_64
How reproducible:
always
Steps to Reproduce:
1. docker run -i --tty=false fedora bash
2. docker logs $NAME
Actual results:
F   UID    PID   PPID PRI  NI   VSZ  RSS WCHAN STAT TTY      TIME COMMAND
Expected results:
F   UID    PID   PPID PRI  NI   VSZ  RSS WCHAN STAT TTY      TIME COMMAND
1     0  24006  23954  20   0 11732  540 -     R    pts/14   0:00 bash
--- Additional comment from Lukas Doktor on 2014-05-05 03:41:35 EDT ---
Can't reproduce with the upstream Docker version 0.10.0, build dc9c28f/0.10.0:
there, top doesn't work at all (due to
https://bugzilla.redhat.com/show_bug.cgi?id=1088125 )
--- Additional comment from Lukas Doktor on 2014-05-05 03:46:07 EDT ---
OK, I moved it to the old cgroup location and the results are the same. So
both fedora docker-io-0.9.0-3.fc20.x86_64 and upstream dc9c28f/0.10.0 are
unable to list processes using the `all` argument in non-tty mode.
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1088259
[Bug 1088259] "docker top all" doesn't show processes when --tty=false
--
You are receiving this mail because:
You are on the CC list for the bug.
[Bug 1088259] New: "docker top all" doesn't show processes when --tty=false
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1088259
Bug ID: 1088259
Summary: "docker top all" doesn't show processes when
--tty=false
Product: Fedora
Version: 20
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: ldoktor(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: admiller(a)redhat.com, golang(a)lists.fedoraproject.org,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, skottler(a)redhat.com,
vbatts(a)redhat.com
Description of problem:
Hi guys, when I run docker with --tty=false, `docker top $NAME all` doesn't
show any processes (just the headers). When using only `docker top $NAME`, it
shows them.
Likewise, the `ps` command inside the container works, but `ps all` does not.
Version-Release number of selected component (if applicable):
docker-io-0.9.0-3.fc20.x86_64
How reproducible:
always
Steps to Reproduce:
1. docker run -i --tty=false fedora bash
2. docker logs $NAME
Actual results:
F   UID    PID   PPID PRI  NI   VSZ  RSS WCHAN STAT TTY      TIME COMMAND
Expected results:
F   UID    PID   PPID PRI  NI   VSZ  RSS WCHAN STAT TTY      TIME COMMAND
1     0  24006  23954  20   0 11732  540 -     R    pts/14   0:00 bash
[Bug 1111189] New: yumBackend started with each `docker start`
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1111189
Bug ID: 1111189
Summary: yumBackend started with each `docker start`
Product: Fedora
Version: 20
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: ldoktor(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: admiller(a)redhat.com, golang(a)lists.fedoraproject.org,
hushan.jia(a)gmail.com, lsm5(a)redhat.com,
mattdm(a)redhat.com, mgoldman(a)redhat.com, s(a)shk.io,
vbatts(a)redhat.com
Description of problem:
Hello guys, today I noticed 100% CPU utilization following each `docker run`
or `docker start`. It wasn't caused by docker itself, but with each instance,
yumBackend is executed. This never happened with older versions.
4806 ?        RN     0:00 /usr/bin/python
/usr/share/PackageKit/helpers/yum/yumBackend.py get-updates none
Version-Release number of selected component (if applicable):
docker-io-1.0.0-1.fc20.x86_64
How reproducible:
always
Steps to Reproduce:
1. docker run -t -i fedora bash
2. top
3. ps ax |grep yum
Actual results:
yumBackend is started and spends a couple of seconds at 100% utilization:
4806 ?        RN     0:00 /usr/bin/python
/usr/share/PackageKit/helpers/yum/yumBackend.py get-updates none
Expected results:
Apart from the container, no other processes should be started. (I can run
`docker run -t -i fedora bash -c exit`, which finishes immediately, and the
only process using CPU is the yumBackend process.)
Additional info:
I tried executing `busybox sh`, `bash -c exit`, .... I even rebooted the
machine, but the results are the same.
[Bug 1102751] New: Got "Error running removeDevice" after kill -9 docker
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1102751
Bug ID: 1102751
Summary: Got "Error running removeDevice" after kill -9 docker
Product: Fedora
Version: 20
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: mfojtik(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: admiller(a)redhat.com, golang(a)lists.fedoraproject.org,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, s(a)shk.io, vbatts(a)redhat.com
Description of problem:
While I was playing with Docker today, I found an interesting behavior:
When you 'kill -9' the 'docker -d' process, this happens:
1. The 'docker -d' process gets killed ;-)
2. All containers get killed
3. The 'docker -d' process restarts, because of Restart=on-failure
Now, while all this is expected and OK, I get the following when I try to
'restart' the container that was killed in step 2:
May 29 13:24:56 ip-10-146-193-102 systemd[1]: Starting Container origin-db-1...
May 29 13:24:56 ip-10-146-193-102 sh[19731]: Reusing
b80104263de02f39bbc6d742c977ddadecb0660f5c50386eaf1cf645f3515b9c
May 29 13:25:07 ip-10-146-193-102 docker[19739]: Error: Cannot destroy
container origin-db-1: Driver devicemapper failed to remove root filesystem
e536cf328e0083e2414c750434deb1127517d00399fddac84903825d6f003787: Error running
removeDevice
May 29 13:25:07 ip-10-146-193-102 docker[19739]: 2014/05/29 13:25:07 Error:
failed to remove one or more containers
May 29 13:25:07 ip-10-146-193-102 gear[21193]: user: unknown user
ctr-origin-db-1
May 29 13:25:09 ip-10-146-193-102 docker[21192]: 2014/05/29 13:25:09 Error:
Cannot start container
bd1a88a35e84ef6d71fa3e1df0b4e2998318dfed6d60fd3ae381567fff2ac2a3: Cannot find
child for /origin-db-1
May 29 13:25:09 ip-10-146-193-102 systemd[1]: ctr-origin-db-1.service: main
process exited, code=exited, status=1/FAILURE
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Start up some containers
2. kill -9 (the 'docker -d' process pid)
3. Try to start the dead container again
4. Get the error above.
Actual results:
Expected results:
Containers should be successfully started back.
Additional info:
[Bug 1087700] New: lost signals when sending lots of signals using --sig-proxy to docker
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1087700
Bug ID: 1087700
Summary: lost signals when sending lots of signals using
--sig-proxy to docker
Product: Fedora
Version: 20
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: ldoktor(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: admiller(a)redhat.com, golang(a)lists.fedoraproject.org,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, skottler(a)redhat.com,
vbatts(a)redhat.com
Description of problem:
When I send lots of signals to a running docker with --sig-proxy (actual kill
signals, not `docker kill`), most of them get lost.
Version-Release number of selected component (if applicable):
docker-io-0.9.1-1.fc21.x86_64
How reproducible:
always
Steps to Reproduce:
1. /usr/bin/docker -D run --tty=false --rm -i --name test_eoly
localhost:5000/ldoktor/fedora:latest bash -c 'for NUM in `seq 1 64`; do trap
"echo Received $NUM, ignoring..." $NUM; done; while :; do sleep 1; done'
2. ps ax |grep docker
3. for AAA in `seq 1 32`; do [ $AAA -ne 9 ] && [ $AAA -ne 20 ] && [ $AAA -ne 19
] && kill -s $AAA $PID; done
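One plausible mechanism for the losses (an assumption; the report does not identify a root cause) is that standard, non-realtime POSIX signals are not queued: if another instance of the same signal arrives while one is already pending, the two coalesce into a single delivery. The effect can be reproduced without docker at all:

```shell
#!/bin/sh
# Docker-free sketch: standard (non-realtime) signals are not queued,
# so identical signals arriving while one is already pending coalesce
# into a single delivery. Depending on timing, the handler below may
# run anywhere from 1 to 10 times.
count=0
trap 'count=$((count + 1))' USR1

# A background subshell bombards this shell with SIGUSR1 as fast as
# possible ($$ still expands to the parent shell's pid in a subshell).
(
  i=0
  while [ $i -lt 10 ]; do
    kill -USR1 $$
    i=$((i + 1))
  done
) &
wait          # interrupted by each delivered signal; the trap runs
sleep 1       # allow any still-pending delivery to land
echo "handled $count of 10 signals"
```

If --sig-proxy forwards signals through a similar non-queued path, rapid-fire distinct signals could also race each other in the proxying process; that would match the observation that only the first couple of traps fire.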
Actual results:
Output of the docker is:
Received 1, ignoring...
Received 2, ignoring...
Expected results:
Messages for all of the `Received $NUM, ignoring...` printed (order doesn't
matter)
Additional info:
Skipping 9, 19, 20 as they are a bit too special.
[Bug 1093000] New: Unable to save an image to a tar archive
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1093000
Bug ID: 1093000
Summary: Unable to save an image to a tar archive
Product: Fedora
Version: 20
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: lslebodn(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: admiller(a)redhat.com, golang(a)lists.fedoraproject.org,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, s(a)shk.io, vbatts(a)redhat.com
Description of problem:
I tried to dump the content of a docker image to a tarball with the command
"docker save". It failed with the error message "no space left on device".
Version-Release number of selected component (if applicable):
I was able to reproduce the problem with docker-io from the stable repository
and with docker-io from updates-testing:
[root@vm-169 docker]# rpm -q docker-io
docker-io-0.9.1-1.fc20.x86_64
or
docker-io-0.10.0-2.fc20.x86_64
How reproducible:
Always
Steps to Reproduce:
1. Create a big docker image (e.g. 1.8 GiB)
[root@vm-169 docker-freeipa]# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
big_image latest b366efe500de 57 minutes ago 1.778 GB
fedora 20 b7de3133ff98 5 days ago 372.7 MB
2. save docker image to tarball
[root@vm-169 docker]# docker save big_image > big_image.tar
Actual results:
[root@vm-169 docker]# docker save big_image > big_image.tar
2014/04/30 12:39:38 Error: write
/docker-export-085323451/934d868afd0a79629df2cad704cbc1ed9344654625569263a630933d2785de57/layer.tar:
no space left on device
[root@vm-169 docker]# file big_image.tar
big_image.tar: empty
[root@vm-169 docker]# ls -l big_image.tar
-rw-r--r--. 1 root root 0 Apr 30 13:41 big_image.tar
Expected results:
The file big_image.tar should not be empty and should contain the data from
the docker image named big_image.
Additional info:
The /tmp directory should only be used for creating small temporary files,
because it is mounted on tmpfs. I think it would be better to use /var/tmp in
this case.
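A possible interim workaround, under two assumptions not confirmed by the report: that `docker save` creates its scratch directory via Go's temp-dir routines (which honor TMPDIR), and that it is the daemon, not the client, doing the writing, so the variable must be set in the daemon's environment. The drop-in path and unit name below are assumptions; the mktemp line just demonstrates that TMPDIR is honored the same way:

```shell
#!/bin/sh
# Hypothetical systemd drop-in for the docker daemon (path and unit
# name are assumptions, not taken from the report):
cat <<'EOF'
# /etc/systemd/system/docker.service.d/tmpdir.conf
[Service]
Environment=TMPDIR=/var/tmp
EOF

# TMPDIR is honored by mktemp just as by Go's ioutil.TempDir:
scratch=$(TMPDIR=/var/tmp mktemp -d)
echo "scratch dir: $scratch"
rmdir "$scratch"
```

After installing such a drop-in one would run `systemctl daemon-reload && systemctl restart docker` so the daemon picks up the new environment.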
[Bug 1210336] New: cAdvisor fails to start because -samples
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1210336
Bug ID: 1210336
Summary: cAdvisor fails to start because -samples
Product: Fedora
Version: 22
Component: cadvisor
Assignee: jchaloup(a)redhat.com
Reporter: stefw(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: eparis(a)redhat.com, golang(a)lists.fedoraproject.org,
jchaloup(a)redhat.com, lsm5(a)redhat.com,
vbatts(a)redhat.com
Description of problem:
cAdvisor fails to start because of the --samples argument:
Apr 09 15:17:51 falcon.thewalter.lan cadvisor[5117]: flag provided but not
defined: -samples
Apr 09 15:17:51 falcon.thewalter.lan cadvisor[5117]: Usage of
/usr/bin/cadvisor:
...
Version-Release number of selected component (if applicable):
cadvisor-0.10.1-0.1.gitef7dddf.fc22.x86_64
How reproducible:
Every time
Steps to Reproduce:
1. systemctl start cadvisor
[Bug 1216096] New: Hard coded /tmp size of 64M causes problems
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1216096
Bug ID: 1216096
Summary: Hard coded /tmp size of 64M causes problems
Product: Fedora
Version: 21
Component: docker-io
Keywords: Extras
Assignee: ichavero(a)redhat.com
Reporter: agoldste(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: adimania(a)gmail.com, admiller(a)redhat.com,
agoldste(a)redhat.com, eparis(a)redhat.com,
golang(a)lists.fedoraproject.org, hushan.jia(a)gmail.com,
ichavero(a)redhat.com, jchaloup(a)redhat.com,
jperrin(a)centos.org, lsm5(a)redhat.com, lsu(a)redhat.com,
mattdm(a)redhat.com, mgoldman(a)redhat.com,
miminar(a)redhat.com, s(a)shk.io, thrcka(a)redhat.com,
vbatts(a)redhat.com
Depends On: 1215768
+++ This bug was initially created as a clone of Bug #1215768 +++
I'm looking at b482aff8fbb5dc59d25335b67353465071d6bd45 in rhatdan/docker.
It seems that you are hard-coding all tmpfs mounts to 64M. Docker previously
mounted tmpfs at /tmp, and this change added /run.
Golang uses TMPDIR to do its builds. When doing a rather large build (like
openshift) that is larger than 64M, it obviously cannot work!
I think this size needs to be somehow configurable... Deciding that X is
enough /tmp space for everyone just doesn't seem workable...
--- Additional comment from Daniel Walsh on 2015-04-28 08:13:12 EDT ---
Eric, any chance of changing TMPDIR -> /var/tmp?
--- Additional comment from Daniel Walsh on 2015-04-28 08:15:46 EDT ---
This should also not be happening in a docker build?
--- Additional comment from Daniel Walsh on 2015-04-28 08:32:57 EDT ---
Removed the size restrictions on /tmp and /run.
Lokesh, we need a new build of docker-1.6.
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1215768
[Bug 1215768] Hard coded /tmp size of 64M causes problems
[Bug 1033604] New: Unable to start systemd service in Docker container
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1033604
Bug ID: 1033604
Summary: Unable to start systemd service in Docker container
Product: Fedora
Version: 20
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: mfojtik(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: golang(a)lists.fedoraproject.org, lsm5(a)redhat.com,
mattdm(a)redhat.com, mgoldman(a)redhat.com,
vbatts(a)redhat.com
Description of problem:
Using Dockerfile like this:
FROM mattdm/fedora:latest
RUN yum update -y
RUN yum install -y redis
RUN systemctl enable redis.service
RUN systemctl start redis.service
EXPOSE 6379
ENTRYPOINT ["/usr/bin/redis-cli"]
The step 'systemctl start redis.service' failed with this error message:
Failed to get D-Bus connection: No connection to service manager.
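This is expected behavior rather than a packaging bug: each RUN step executes in a plain container where systemd is not running as PID 1, so `systemctl` has no service manager to talk to. The idiomatic fix is to drop the `systemctl` lines and start the daemon at run time via CMD or ENTRYPOINT. A sketch (the redis-server path is an assumption; the original ENTRYPOINT pointed at redis-cli, the client):

```dockerfile
FROM mattdm/fedora:latest
RUN yum update -y && yum install -y redis
EXPOSE 6379
# Start the daemon directly when the container runs; no init system needed.
CMD ["/usr/bin/redis-server"]
```

With this, `docker build -t test/redis .` completes, and `docker run -d -p 6379:6379 test/redis` starts redis without systemd.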
Version-Release number of selected component (if applicable):
Name : docker-io
Arch : x86_64
Version : 0.7
Release : 0.17.rc6.fc20
Steps to Reproduce:
1. Save the above Dockerfile
2. Run: docker build -t test/redis .
3. The build will fail on systemctl start.
Actual results:
The service failed to start due to a D-Bus connection error.
Expected results:
The service should start successfully.
Additional info: