[Bug 1086430] New: Update to latest version 0.10.0
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1086430
Bug ID: 1086430
Summary: Update to latest version 0.10.0
Product: Fedora
Version: rawhide
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: bobby(a)laptop.org
QA Contact: extras-qa(a)fedoraproject.org
CC: admiller(a)redhat.com, golang(a)lists.fedoraproject.org,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, skottler(a)redhat.com,
vbatts(a)redhat.com
Description of problem:
A new version of docker is available
Version-Release number of selected component (if applicable):
0.10.0
--
You are receiving this mail because:
You are on the CC list for the bug.
[Bug 1096280] New: docker top stopped working, cgroup path changed
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1096280
Bug ID: 1096280
Summary: docker top stopped working, cgroup path changed
Product: Red Hat Enterprise Linux 7
Version: 7.1
Component: docker
Assignee: lsm5(a)redhat.com
Reporter: ldoktor(a)redhat.com
QA Contact: virt-bugs(a)redhat.com
CC: admiller(a)redhat.com, golang(a)lists.fedoraproject.org,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, skottler(a)redhat.com,
vbatts(a)redhat.com
Depends On: 1088125
+++ This bug was initially created as a clone of Bug #1088125 +++
Description of problem:
The `docker top` command stopped working because the path to the cgroup tasks file changed.
Version-Release number of selected component (if applicable):
docker-0.10.0-8.el7.x86_64
docker-io-0.10.0-2.fc20.x86_64
How reproducible:
always
Steps to Reproduce:
1. docker run -i fedora bash
2. docker top $NAME
Actual results:
[root@t530 ~]# docker top determined_jones
2014/04/16 07:47:52 Error: open
/sys/fs/cgroup/devices/system.slice/docker/93dfbd4d375cf026df215d87b5d93de8b0a9bd55094e33c2ca3322cd6afb53f1/tasks:
no such file or directory
Expected results:
list of processes...
Additional info:
[root@t530 ~]# ls /sys/fs/cgroup/devices/system.slice/
cgroup.clone_children
cgroup.event_control
cgroup.procs
devices.allow
devices.deny
devices.list
docker-93dfbd4d375cf026df215d87b5d93de8b0a9bd55094e33c2ca3322cd6afb53f1.scope
notify_on_release
tasks
--- Additional comment from Lukas Doktor on 2014-05-05 03:46:55 EDT ---
The same bug is present in upstream Docker version 0.10.0, build dc9c28f/0.10.0.
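The path change behind this failure can be sketched as follows. This is a hedged illustration only (not Docker's actual code; the helper names are hypothetical), using the container ID from this report:

```python
# Illustrative sketch of the cgroup layout change described in this report.
# old_tasks_path: the layout docker 0.10.0 still expects.
# scope_tasks_path: the systemd "-.scope" layout that actually exists now.

def old_tasks_path(container_id):
    # .../system.slice/docker/<id>/tasks -- the path `docker top` fails to open
    return ("/sys/fs/cgroup/devices/system.slice/docker/"
            f"{container_id}/tasks")

def scope_tasks_path(container_id):
    # .../system.slice/docker-<id>.scope/tasks -- the directory shown by `ls`
    return ("/sys/fs/cgroup/devices/system.slice/"
            f"docker-{container_id}.scope/tasks")

cid = "93dfbd4d375cf026df215d87b5d93de8b0a9bd55094e33c2ca3322cd6afb53f1"
print(old_tasks_path(cid))    # path from the error message
print(scope_tasks_path(cid))  # path present on the reporter's system
```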
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1088125
[Bug 1088125] docker top stopped working, cgroup path changed
[Bug 1102911] New: can't run iscsid in a docker container
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1102911
Bug ID: 1102911
Summary: can't run iscsid in a docker container
Product: Fedora
Version: 20
Component: iscsi-initiator-utils
Assignee: cleech(a)redhat.com
Reporter: jslagle(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: admiller(a)redhat.com, agrover(a)redhat.com,
cleech(a)redhat.com, dwalsh(a)redhat.com,
golang(a)lists.fedoraproject.org, hdegoede(a)redhat.com,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, s(a)shk.io, vbatts(a)redhat.com
Depends On: 1100000
+++ This bug was initially created as a clone of Bug #1100000 +++
Description of problem:
Can't start iscsid in a docker container
Version-Release number of selected component (if applicable):
# rpm -q docker-io
docker-io-0.11.1-3.fc20.x86_64
How reproducible:
always
Steps to Reproduce:
1. use the published fedora image, docker pull fedora
2. start the container, docker run -t -i fedora /bin/bash
3. install iscsi-initiator-utils
4. try to start iscsid:
bash-4.2# iscsid -f
iscsid: can not bind NETLINK_ISCSI socket
strace also attached
--- Additional comment from James Slagle on 2014-05-21 14:45:57 EDT ---
To give a little more context into what I'm doing: I'm trying to run OpenStack
nova compute, configured to use the nova-baremetal driver, inside a container.
When nova-baremetal provisions a machine, it acts as an iSCSI initiator and logs
into a target that has been created on the machine being provisioned.
It then dd's the requested image onto the disk.
Therefore, as I understand it, iscsid must be running inside the container where you are also
running iscsiadm.
This same thing has also been tried in lxc, with what I expect is the same
issue:
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
--- Additional comment from Daniel Walsh on 2014-05-28 12:29:35 EDT ---
Was SELinux involved? If you put the machine into permissive mode, does it
work? Or try this as a privileged image. It might be something that we are doing
to lock the system down.
--- Additional comment from Daniel Walsh on 2014-05-28 12:33:18 EDT ---
What are the permissions on /opt/hello
ls -ld /opt
ls -ld /opt/hello
--- Additional comment from James Slagle on 2014-05-28 15:01:53 EDT ---
SELinux is already in permissive mode on the Docker host.
I did try in a privileged container, and I get something slightly different.
iscsid -f just hangs forever on the command line.
An strace shows (attached) it polling forever on a fd, i had to ctrl-c it in
both cases.
--- Additional comment from James Slagle on 2014-05-28 15:02:26 EDT ---
--- Additional comment from James Slagle on 2014-05-28 15:03:16 EDT ---
I think comment 3 was for another bug, maybe? Anyway, if not:
bash-4.2# ls -ld /opt
drwxr-xr-x. 2 root root 4096 Aug 7 2013 /opt
bash-4.2# ls -ld /opt/hello
ls: cannot access /opt/hello: No such file or directory
--- Additional comment from James Slagle on 2014-05-28 15:08:58 EDT ---
Ah, actually, I suspect iscsid running forever in the foreground may indicate
it *is* working. Sorry, I wasn't thinking of the fact that -f tells it to run in the
foreground.
I will see if I can actually connect to a target from the privileged container
and report back.
--- Additional comment from James Slagle on 2014-05-29 09:34:32 EDT ---
I'm using a privileged container running sshd as the process (so that I can
log in with a couple of different shells).
I had to add this to my Dockerfile for the container, otherwise iscsid won't
start:
VOLUME ["/var/lock/iscsi"]
I start the container with:
docker run --privileged -ti --name initiator -p 8022:22 -d iscsi-initiator
Then I ssh in and start iscsid:
iscsid -d 8 -f
That appears to start fine.
Then, from another ssh session, I can discover the target (the target is actually
on the container host):
[root@bcf697cb8673 ~]# iscsiadm -m discovery -t st -p 192.168.122.1
192.168.122.1:3260,1 iqn.2013-07.com.example.storage.ssd1
But when I try to log in to the target, iscsid exits (or crashes; it's hard to tell):
[root@bcf697cb8673 ~]# iscsiadm -m node --targetname
iqn.2013-07.com.example.storage.ssd1 --portal 192.168.122.1 --login
Logging in to [iface: default, target: iqn.2013-07.com.example.storage.ssd1,
portal: 192.168.122.1,3260] (multiple)
iscsiadm: got read error (0/0), daemon died?
iscsiadm: Could not login to [iface: default, target:
iqn.2013-07.com.example.storage.ssd1, portal: 192.168.122.1,3260].
iscsiadm: initiator reported error (18 - could not communicate to iscsid)
iscsiadm: Could not log into all portals
From the ssh session where iscsid is running, I see:
iscsid: mgmt_ipc_write_rsp: rsp to fd 5
iscsid: poll result 1
iscsid: mgmt_ipc_write_rsp: rsp to fd 5
iscsid: poll result 1
iscsid: in read_transports
iscsid: Adding new transport tcp
iscsid: Matched transport tcp
iscsid: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'handle'
iscsid: sysfs_attr_get_value: new uncached attribute
'/sys/class/iscsi_transport/tcp/handle'
iscsid: sysfs_attr_get_value: add to cache
'/sys/class/iscsi_transport/tcp/handle'
iscsid: sysfs_attr_get_value: cache '/sys/class/iscsi_transport/tcp/handle'
with attribute value '18446744072107593760'
iscsid: sysfs_attr_get_value: open '/class/iscsi_transport/tcp'/'caps'
iscsid: sysfs_attr_get_value: new uncached attribute
'/sys/class/iscsi_transport/tcp/caps'
iscsid: sysfs_attr_get_value: add to cache
'/sys/class/iscsi_transport/tcp/caps'
iscsid: sysfs_attr_get_value: cache '/sys/class/iscsi_transport/tcp/caps' with
attribute value '0x39'
iscsid: Allocted session 0x7f38ceb4f9b0
iscsid: no authentication configured...
iscsid: resolved 192.168.122.1 to 192.168.122.1
iscsid: setting iface default, dev , set ip , hw , transport tcp.
iscsid: get ev context 0x7f38ceb5c470
iscsid: set TCP recv window size to 524288, actually got 425984
iscsid: set TCP send window size to 524288, actually got 425984
iscsid: connecting to 192.168.122.1:3260
iscsid: sched conn context 0x7f38ceb5c470 event 2, tmo 0
iscsid: thread 0x7f38ceb5c470 schedule: delay 0 state 3
iscsid: Setting login timer 0x7f38ceb578e0 timeout 15
iscsid: thread 0x7f38ceb578e0 schedule: delay 60 state 3
iscsid: exec thread 7f38ceb5c470 callback
iscsid: put ev context 0x7f38ceb5c470
iscsid: connected local port 37259 to 192.168.122.1:3260
iscsid: in kcreate_session
iscsid: in __kipc_call
iscsid: in kwritev
iscsid: sendmsg: bug? ctrl_fd 4
Maybe the lines with sysfs_attr_get_value are indicative of something that's
still needed from /sys?
These exact same discovery and login commands work fine running from a libvirt
vm connecting to the same target.
On my container host, I do have the correct iscsi kernel modules loaded, and I also
see them in the container:
on the host:
[root@teletran-1 docker]# lsmod | grep iscsi
iscsi_tcp 18333 0
libiscsi_tcp 24176 1 iscsi_tcp
libiscsi 54750 2 libiscsi_tcp,iscsi_tcp
scsi_transport_iscsi 97405 4 iscsi_tcp,libiscsi
on the container:
[root@bcf697cb8673 ~]# lsmod | grep iscsi
iscsi_tcp 18333 0
libiscsi_tcp 24176 1 iscsi_tcp
libiscsi 54750 2 libiscsi_tcp,iscsi_tcp
scsi_transport_iscsi 97405 4 iscsi_tcp,libiscsi
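One quick sanity check the comparison above suggests is parsing `lsmod` to confirm the iSCSI transport modules are visible from both host and container. A minimal sketch (the helper name is hypothetical; the sample text is the listing from this report):

```python
# Check that the required iSCSI kernel modules appear in `lsmod` output.
# Sample text taken verbatim from the report's host/container listings.

LSMOD_OUTPUT = """\
iscsi_tcp 18333 0
libiscsi_tcp 24176 1 iscsi_tcp
libiscsi 54750 2 libiscsi_tcp,iscsi_tcp
scsi_transport_iscsi 97405 4 iscsi_tcp,libiscsi
"""

def loaded_modules(lsmod_text):
    # The first whitespace-separated field of each line is the module name.
    return {line.split()[0] for line in lsmod_text.splitlines() if line.strip()}

required = {"iscsi_tcp", "libiscsi_tcp", "libiscsi", "scsi_transport_iscsi"}
missing = required - loaded_modules(LSMOD_OUTPUT)
print("missing modules:", sorted(missing))  # prints: missing modules: []
```

In this report the module lists match on host and container, which is why the failure points at namespacing rather than missing drivers.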
I'll attach an strace of the iscsid process that's exiting, if that helps.
I can also attach an strace of an iscsid process that shows it working from a
libvirt VM, if you think that would be helpful to compare.
--- Additional comment from James Slagle on 2014-05-29 09:37:12 EDT ---
strace of iscsid generated with:
strace -f -o iscsid.strace iscsid -d 12 -f
The iscsid process exits when you try to log in to an iSCSI target from the
container.
--- Additional comment from James Slagle on 2014-05-29 10:08:33 EDT ---
Note that the output I'm now seeing seems to match very closely what was
reported in https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855 when
the same thing was tried with lxc-tools instead of Docker.
--- Additional comment from Daniel Walsh on 2014-05-29 15:32:31 EDT ---
So this looks to be more specific to iscsid and namespacing than to Docker. I
think you should open the bug with them and see if they can help.
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1100000
[Bug 1100000] can't run iscsid in a docker container
[Bug 1094372] New: Images fedora:rawhide and fedora:20 are the same
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1094372
Bug ID: 1094372
Summary: Images fedora:rawhide and fedora:20 are the same
Product: Fedora
Version: 20
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: jpazdziora(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: admiller(a)redhat.com, golang(a)lists.fedoraproject.org,
lsm5(a)redhat.com, mattdm(a)redhat.com,
mgoldman(a)redhat.com, s(a)shk.io, vbatts(a)redhat.com
Description of problem:
It seems something is wrong with the Docker index
(https://index.docker.io/_/fedora/?) because fedora:rawhide seems to have
fedora:20 content.
Version-Release number of selected component (if applicable):
# rpm -q docker-io
docker-io-0.9.1-1.fc20.x86_64
and the state of whatever is used as the default Docker index
How reproducible:
Deterministic.
Steps to Reproduce:
1. # docker run fedora:rawhide cat /etc/fedora-release
Actual results:
# docker run fedora:rawhide cat /etc/fedora-release
Unable to find image 'fedora:rawhide' locally
Pulling repository fedora
b7de3133ff98: Download complete
511136ea3c5a: Download complete
ef52fb1fe610: Download complete
Fedora release 20 (Heisenbug)
Expected results:
A different image ID than b7de3133ff98, and something different from Fedora release
20 (Heisenbug).
Additional info:
This is what I get about fedora:20:
# docker run fedora:20 cat /etc/fedora-release
Unable to find image 'fedora:20' locally
Pulling repository fedora
b7de3133ff98: Download complete
511136ea3c5a: Download complete
ef52fb1fe610: Download complete
Fedora release 20 (Heisenbug)
# docker images
REPOSITORY TAG IMAGE ID CREATED
VIRTUAL SIZE
fedora 20 b7de3133ff98 10 days ago
372.7 MB
fedora rawhide b7de3133ff98 10 days ago
372.7 MB
#
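The duplication above can be detected mechanically by grouping tags by image ID in `docker images` output. A hedged sketch (the helper name is hypothetical; rows are condensed from the wrapped listing in this report):

```python
# Group repository:tag pairs by image ID to spot tags pointing at the
# same image, as fedora:20 and fedora:rawhide do in this report.
from collections import defaultdict

DOCKER_IMAGES = """\
fedora 20 b7de3133ff98 10 days ago 372.7 MB
fedora rawhide b7de3133ff98 10 days ago 372.7 MB
"""

def tags_by_image_id(listing):
    groups = defaultdict(list)
    for line in listing.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        repo, tag, image_id = fields[0], fields[1], fields[2]
        groups[image_id].append(f"{repo}:{tag}")
    return dict(groups)

for image_id, tags in tags_by_image_id(DOCKER_IMAGES).items():
    if len(tags) > 1:
        print(image_id, "is shared by", tags)
```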
[Bug 1035304] New: Error when starting a container: cannot unmarshal bool into Go value of type string
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1035304
Bug ID: 1035304
Summary: Error when starting a container: cannot unmarshal bool
into Go value of type string
Product: Fedora EPEL
Version: el6
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: mgoldman(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: golang(a)lists.fedoraproject.org, lsm5(a)redhat.com,
mattdm(a)redhat.com, vbatts(a)redhat.com
Description of problem:
accept4(7, 0xc2001be540, [112], SOCK_CLOEXEC|SOCK_NONBLOCK) = -1 EAGAIN
(Resource temporarily unavailable)
read(9, "POST /v1.7/containers/create HTT"..., 4096) = 550
clock_gettime(CLOCK_REALTIME, {1385557957, 704016349}) = 0
clock_gettime(CLOCK_REALTIME, {1385557957, 704076878}) = 0
clock_gettime(CLOCK_REALTIME, {1385557957, 704218620}) = 0
write(2, "[debug] api.go:1008 Calling POST"..., 52[debug] api.go:1008 Calling
POST /containers/create
) = 52
futex(0xd0bef8, FUTEX_WAIT, 0, NULL2013/11/27 14:12:37 POST
/v1.7/containers/create
[/var/lib/docker|237066b1] +job create()
[/var/lib/docker|237066b1] -job create() = ERR (ExportEnv json: cannot
unmarshal bool into Go value of type string)
[error] api.go:1034 Error: create: ExportEnv json: cannot unmarshal bool into
Go value of type string
[error] api.go:82 HTTP Error: statusCode=500 create: ExportEnv json: cannot
unmarshal bool into Go value of type string
[debug] api.go:1008 Calling GET /images/json
2013/11/27 14:28:41 GET /v1.7/images/json
This issue exists when running docker-io-0.7.0-2.el6.x86_64 (unreleased) as
the daemon with the following flag: -b none.
[root@centos64 ~]# docker -v
Docker version 0.7.0, build 0d078b6/0.7.0
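The "cannot unmarshal bool into Go value of type string" error means the JSON body of the create request carried a boolean in a field the daemon decodes into a Go string. This is a hedged illustration, not the daemon's actual code: the checker and the example payload are hypothetical, mirroring Go's strict type check in Python:

```python
# Minimal mirror of Go's json.Unmarshal type check: a JSON bool arriving
# in a field declared as string fails with a type error.
import json

def check_string_fields(payload_json, string_fields):
    """Return a Go-style error message for the first declared-string field
    whose JSON value is a bool, else None."""
    payload = json.loads(payload_json)
    for field in string_fields:
        if isinstance(payload.get(field), bool):
            return "json: cannot unmarshal bool into Go value of type string"
    return None

# Hypothetical payloads: a client sending `true` where a string is expected.
bad = '{"Hostname": true}'
good = '{"Hostname": "centos64"}'
print(check_string_fields(bad, ["Hostname"]))   # the Go-style error
print(check_string_fields(good, ["Hostname"]))  # None
```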
[Bug 1037634] New: The devicemapper volumes are not removed after container is removed
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1037634
Bug ID: 1037634
Summary: The devicemapper volumes are not removed after
container is removed
Product: Fedora
Version: 20
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: mfojtik(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: golang(a)lists.fedoraproject.org, lsm5(a)redhat.com,
mattdm(a)redhat.com, mgoldman(a)redhat.com,
vbatts(a)redhat.com
Description of problem:
After removing all containers in Docker, some of the devicemapper volumes are
not removed and are still shown in 'df -h'.
Version-Release number of selected component (if applicable):
[root@localhost postgresql]# docker -v
Docker version 0.7.0, build 0ff9bc1/0.7.0
How reproducible:
Create some containers and then remove them all with 'docker rm CONTAINER_ID'.
When the list of all containers (docker ps -a) is empty,
run 'df -h'. Some devicemapper volumes from the removed containers are still
mounted in the system.
After restarting the docker service, they seem to be properly removed.
Steps to Reproduce:
1. docker run ... (create some containers)
2. docker rm ALL_CONTAINERS
3. df -h (-> will show several devicemapper volumes mounted)
4. systemctl restart docker
5. df -h (-> all devicemapper volumes are erased)
Actual results:
[root@localhost postgresql]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
[root@localhost postgresql]#
[root@localhost postgresql]# df -h | grep mapper
/dev/mapper/fedora-root
9.2G 5.0G 3.7G 58% /
/dev/dm-3
9.8G 390M 8.9G 5%
/var/lib/docker/devicemapper/mnt/64e0619c473e974ac710568673fa0907fe9bc78f6998888ff41913cf6a69af0b
/dev/dm-4
9.8G 388M 8.9G 5%
/var/lib/docker/devicemapper/mnt/1bdd590de956281c7a5b1394cc0801ee879bc220be52e399eb61a9c3f2810c0d
/dev/dm-5
9.8G 235M 9.0G 3%
/var/lib/docker/devicemapper/mnt/97fc5bf7f8d42606fa896e1d391a0b882f78322ce0ff77c03fbd7f8e3b7a73ed
/dev/dm-6
9.8G 388M 8.9G 5%
/var/lib/docker/devicemapper/mnt/a567e6f3d26a5736d00a5e9be9a609b44ab13f0e43fc466f45beaa7ea9054766
/dev/mapper/docker-253:1-15108-7ee96a4f5167932745b68de0d026d750d12afa911a0e2591edd48efebdc8eb2f-init
9.8G 390M 8.9G 5%
/var/lib/docker/devicemapper/mnt/7ee96a4f5167932745b68de0d026d750d12afa911a0e2591edd48efebdc8eb2f-init
/dev/mapper/docker-253:1-15108-c8c302f735e607c55ac4ae542fda9081959f128abbe3b126fa9afa541c84d52d-init
9.8G 518M 8.7G 6%
/var/lib/docker/devicemapper/mnt/c8c302f735e607c55ac4ae542fda9081959f128abbe3b126fa9afa541c84d52d-init
/dev/mapper/docker-253:1-15108-5d0694e3a73324e395adfd03d7524c76f126c7449a4f5b73bac64b7bf26f51ef-init
9.8G 556M 8.7G 6%
/var/lib/docker/devicemapper/mnt/5d0694e3a73324e395adfd03d7524c76f126c7449a4f5b73bac64b7bf26f51ef-init
/dev/mapper/docker-253:1-15108-f8a7e54667e09c1e9f3edf104124279d2e34fa1bade1ccbd8772a85369132cb6-init
9.8G 591M 8.7G 7%
/var/lib/docker/devicemapper/mnt/f8a7e54667e09c1e9f3edf104124279d2e34fa1bade1ccbd8772a85369132cb6-init
/dev/mapper/docker-253:1-15108-fcf2d918ff47b9f24b1a9dcd6ea2829d4b20a5736f06d0a6d987c5cce74d4a55-init
9.8G 591M 8.7G 7%
/var/lib/docker/devicemapper/mnt/fcf2d918ff47b9f24b1a9dcd6ea2829d4b20a5736f06d0a6d987c5cce74d4a55-init
/dev/mapper/docker-253:1-15108-004e09e25c19d765db7c75a767fffead530cac52aefd82d6acec5a9e765d4149-init
9.8G 591M 8.7G 7%
/var/lib/docker/devicemapper/mnt/004e09e25c19d765db7c75a767fffead530cac52aefd82d6acec5a9e765d4149-init
/dev/mapper/docker-253:1-15108-5f32ced8736080d768cf72f210cda8ae06ef23d7018e5f07819c4c59607ae488-init
9.8G 591M 8.7G 7%
/var/lib/docker/devicemapper/mnt/5f32ced8736080d768cf72f210cda8ae06ef23d7018e5f07819c4c59607ae488-init
[Bug 1037830] New: Error pulling image, Authentication is required / Server error: 404
by Red Hat Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1037830
Bug ID: 1037830
Summary: Error pulling image, Authentication is required /
Server error: 404
Product: Fedora EPEL
Version: el6
Component: docker-io
Assignee: lsm5(a)redhat.com
Reporter: jeckersb(a)redhat.com
QA Contact: extras-qa(a)fedoraproject.org
CC: golang(a)lists.fedoraproject.org, lsm5(a)redhat.com,
mattdm(a)redhat.com, mgoldman(a)redhat.com,
vbatts(a)redhat.com
Description of problem:
Version-Release number of selected component (if applicable):
docker-io-0.7.0-14.el6.x86_64
Red Hat Enterprise Linux Server release 6.5 (Santiago)
How reproducible:
Always
Steps to Reproduce:
1. Run `docker -d -b none -D`
2. In another terminal, run `docker pull mattdm/fedora`
Actual results:
# docker pull mattdm/fedora
Pulling repository mattdm/fedora
1bdd590de956: Error pulling image (f20) from mattdm/fedora, Authentication is
required.
64e0619c473e: Error pulling image (latest) from mattdm/fedora, Authentication
is required.
97fc5bf7f8d4: Error pulling image (f20rc3.small) from mattdm/fedora,
Authentication is required.
a567e6f3d26a: Error pulling image (f19) from mattdm/fedora, Authentication is
required.
2013/12/03 21:15:36 Server error: 404 trying to fetch remote history for
mattdm/fedora
Expected results:
(Taken from my F20 system which works perfectly)
# docker pull mattdm/fedora
Pulling repository mattdm/fedora
1bdd590de956: Download complete
a567e6f3d26a: Download complete
64e0619c473e: Download complete
97fc5bf7f8d4: Download complete
Additional info:
There doesn't seem to be anything useful in the daemon output by default. I've
added debug statements in various places to try and understand what's going on.
I'll elaborate in future comments when I can better organize what I'm seeing.
Just wanted to get this out there in case it's something obvious :)
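The contrast between the failing EL6 run and the working F20 run can be checked mechanically by scanning the `docker pull` output for per-layer errors. A hedged sketch (the helper name is hypothetical; sample lines are abridged from this report):

```python
# Collect per-layer failures from `docker pull` output: lines of the form
# "<layer>: Error ..." indicate a failed layer; "Download complete" does not.

FAILING = """\
1bdd590de956: Error pulling image (f20) from mattdm/fedora, Authentication is required.
64e0619c473e: Error pulling image (latest) from mattdm/fedora, Authentication is required.
"""

WORKING = """\
1bdd590de956: Download complete
a567e6f3d26a: Download complete
"""

def layer_errors(pull_output):
    errors = {}
    for line in pull_output.splitlines():
        if ": Error" in line:
            layer, _, message = line.partition(": ")
            errors[layer] = message
    return errors

print(sorted(layer_errors(FAILING)))  # layer IDs that failed on EL6
print(layer_errors(WORKING))          # {} when every layer downloads
```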