Hi,

Is there a wish list of things to consider adding to a future ISO? These are things not on the current ISO that are often on other baremetal installs like the Workstation and Server products. I don't have enough familiarity to say which of these should just be included in the base installation and which could be gathered into one or more "util" or "extras" type Fedora docker images. So far I'm running into:
/lib/firmware/ is read-only so I can't add this:

[ 14.599501] iwlwifi 0000:02:00.0: request for firmware file 'iwlwifi-7265D-13.ucode' failed.
I don't know whether bind mounting /var/lib/firmware onto /lib/firmware can be done early enough that it'll be picked up by the kernel.
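For a quick manual test (untested; the ucode file is just the one from the log above), something like this might show whether a late bind mount is good enough, since the firmware request only gets retried when the driver probes again:

# mkdir -p /var/lib/firmware
# cp iwlwifi-7265D-13.ucode /var/lib/firmware/
# mount --bind /var/lib/firmware /lib/firmware    # caveat: this hides whatever already ships under /lib/firmware
# modprobe -r iwlwifi && modprobe iwlwifi         # re-probe so the firmware is requested again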
pciutils (which contains lspci)
hdparm
smartmontools
iotop
These I have running in a fedora container. lspci mostly works, but getting full -vvnn detail requires --privileged=true, and the other three require it outright. iotop additionally needs --net=host. I'd be OK with them just being available in a container, but it might make more sense to include them in the atomic ISO installation, maybe even borrowing a list from the Server product?
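Concretely, something along these lines (the container hostname here is just whatever docker assigned; the dnf step is only to pull the tools in):

# docker run -it --privileged=true --net=host fedora bash
[root@container /]# dnf -y install pciutils hdparm smartmontools iotop
[root@container /]# lspci -vvnn
[root@container /]# iotop -o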
On 12/11/2015 02:23 PM, Chris Murphy wrote:
These I have running in a fedora container. lspci mostly works, but getting full -vvnn detail requires --privileged=true, and the other three require it outright. iotop additionally needs --net=host. I'd be OK with them just being available in a container, but it might make more sense to include them in the atomic ISO installation, maybe even borrowing a list from the Server product?
We want, as much as possible, to keep the image small and run all the things in containers where possible.
If there's something where that just won't work, or is ludicrously difficult, we should discuss including it.
I would be super-interested in having "util" or "extras" docker images that we can run as Super Privileged Containers (SPCs) [1] to add functionality where it's Good To Have(TM) for some percentage of the audience but not necessary for the majority.
Best,
jzb
[1] http://developerblog.redhat.com/2014/11/06/introducing-a-super-privileged-co...
On Fri, Dec 11, 2015 at 12:33 PM, Joe Brockmeier jzb@redhat.com wrote:
On 12/11/2015 02:23 PM, Chris Murphy wrote:
These I have running in a fedora container. lspci mostly works, but getting full -vvnn detail requires --privileged=true, and the other three require it outright. iotop additionally needs --net=host. I'd be OK with them just being available in a container, but it might make more sense to include them in the atomic ISO installation, maybe even borrowing a list from the Server product?
We want, as much as possible, to keep the image small and run all the things in containers where possible.
If there's something where that just won't work, or is ludicrously difficult, we should discuss including it.
I think these may be needed in the ISO:
cryptsetup - needed to boot encrypted devices
rng-tools - this includes rngd; seems useful for all containers, esp. in a cloud context. Even with --privileged=true I get:
# systemctl start rngd
Failed to get D-Bus connection: Operation not permitted
# systemctl status rngd
Failed to get D-Bus connection: Operation not permitted
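Since there's no systemd running as PID 1 inside the container, it probably makes more sense to skip systemctl and run the daemon in the foreground, e.g. something like:

# rngd -f    # stay in the foreground instead of daemonizing; still needs access to the host's /dev/hwrng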
Also, a way to separate kernels from the rest of the current tree. Right now I'm on atomic 23.29; the previous tree I have installed is way back at 23 (because it's an ISO installation), but I'm encountering a kernel regression. It's very suboptimal to have to roll back everything to 23 rather than just the kernel. Stepping the kernel forward independently of the cloud atomic host tree might even be better in some instances than rolling back.
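For reference, as far as I know the only knob today is rolling back the whole deployment, i.e. something like:

# rpm-ostree rollback    # swaps the entire previous tree back in, kernel and userspace together
# systemctl reboot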
OK at this moment I'm thinking hdparm and smartmontools just need to go on the ISO, along with iotop.
While both hdparm and smartmontools appear to work OK in a container with --privileged=true, any hardware changes are not reflected in that container in a way these two programs can see.
[root@3d2386bbd250 /]# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk
|-sda1     8:1    0   200M  0 part
|-sda2     8:2    0   500G  0 part /etc/hosts
|-sda3     8:3    0   500M  0 part
|-sda4     8:4    0 426.5G  0 part
`-sda5     8:5    0   4.3G  0 part [SWAP]
*** plug in some drives ***
[root@3d2386bbd250 /]# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 931.5G  0 disk
|-sda1     8:1    0   200M  0 part
|-sda2     8:2    0   500G  0 part /etc/hosts
|-sda3     8:3    0   500M  0 part
|-sda4     8:4    0 426.5G  0 part
`-sda5     8:5    0   4.3G  0 part [SWAP]
sdb        8:16   0 698.7G  0 disk
sdc        8:32   0 465.8G  0 disk
sdd        8:48   0 698.7G  0 disk
sde        8:64   0 465.8G  0 disk
[root@3d2386bbd250 /]# hdparm -I /dev/sdb
/dev/sdb: No such file or directory
[root@3d2386bbd250 /]# hdparm -I /dev/sdc
/dev/sdc: No such file or directory
[root@3d2386bbd250 /]# hdparm -I /dev/sdd
/dev/sdd: No such file or directory
[root@3d2386bbd250 /]# hdparm -I /dev/sde
/dev/sde: No such file or directory
[root@3d2386bbd250 /]# smartctl -a /dev/sde
smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.2.6-301.fc23.x86_64] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

Smartctl open device: /dev/sde [SAT] failed: No such device
Maybe lsblk, by virtue of libblkid, gets some state update for free, I don't know. Clearly that's not the case for hdparm and smartctl, so I have to restart the container or start a new one for the change to be visible to these tools. If I replace or add drives, will I need to restart the container running smartd? If yes, that'd kinda be a regression. Maybe I'm doing something wrong, but at the moment I'm not grokking the advantage of running these tools in a container.
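One thing that might help (I haven't verified this): bind mounting the host's /dev into the container, so device nodes created after the container starts are visible, rather than relying on whatever --privileged populated when the container was created:

# docker run -it --privileged=true --net=host -v /dev:/dev fedora bash    # host devtmpfs is what the container sees, so hotplugged sdX nodes should appear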
Chris Murphy
On 12/13/2015 07:11 PM, Chris Murphy wrote:
OK at this moment I'm thinking hdparm and smartmontools just need to go on the ISO, along with iotop.
What's the usage scenario you're picturing here? This feels to me like a "pet" usage scenario where you're caring a whole lot about a single server install.
On Sun, Dec 13, 2015 at 7:07 PM, Joe Brockmeier jzb@redhat.com wrote:
On 12/13/2015 07:11 PM, Chris Murphy wrote:
OK at this moment I'm thinking hdparm and smartmontools just need to go on the ISO, along with iotop.
What's the usage scenario you're picturing here? This feels to me like a "pet" usage scenario where you're caring a whole lot about a single server install.
Any server with any number of drives.
Best practice is to have smartd monitor drive health and report failures by email or text, rather than via a service disruption or irate human. While smartd could be running in a container, if the container doesn't get state updates when drives are swapped or added, then that requires a workaround: periodically restarting that container. So what's the advantage of running this utility in a container?
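On a conventional install that's a line in /etc/smartd.conf, e.g. (the address is obviously a placeholder):

# monitor all devices, run all checks, mail on trouble, and send a daily reminder while the problem persists
DEVICESCAN -a -m admin@example.com -M daily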
It's also best practice to disable the write cache on all drives used in any kind of RAID. That's not a persistent setting, so it has to happen every boot. Instead of a boot script or service that does this, a container needs to start up shortly after each boot and do this. What's the benefit of that workflow change? I don't understand it. Another use of hdparm is ATA secure erase before drives are decommissioned.
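For the write cache part, on a normal host that's a one-liner per drive in a boot script or unit, something like (device names are just examples):

# hdparm -W 0 /dev/sdb    # turn off the drive's volatile write cache; not persistent across power cycles
# hdparm -W 0 /dev/sdc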
If the container not being fully aware of state changes is a bug, then that's fine. In that case a persistently running super-privileged container with sshd can be used to do all these things. But I still don't know what the advantage is of having to remote into that container for some tasks and into the host itself for others. Don't you think there should be some considerable advantage, commensurate with the workflow change caused by relocating simple tools commonly available on servers into containers only? I do.
On Sun, Dec 13, 2015 at 8:29 PM, Chris Murphy lists@colorremedies.com wrote:
It's also best practice to disable the write cache on all drives used in any kind of RAID.
More importantly, all drives in a RAID need SCT ERC set on each drive, which is also not persistent on non-enterprise drives. That requires smartctl -l scterc,70,70 <dev>; otherwise read failures don't always get fixed correctly, fester, and can needlessly result in the RAID degrading or failing.
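So every boot, for each member device, something like this has to run (timeouts are in tenths of a second, device names are examples):

# for d in /dev/sd[b-e]; do smartctl -l scterc,70,70 $d; done
# smartctl -l scterc /dev/sdb    # verify: should report 7.0 seconds for both read and write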
And now I see mdadm is not installed on the ISO either.
I filed this bug to get dosfstools included, mainly for UEFI systems. https://bugzilla.redhat.com/show_bug.cgi?id=1290575
Should I just file bugs like that for each one of these other missing components, and set them to block a tracker bug for things to include on the ISO? In my view a baremetal installation is first a server, so it should have basic server tools.
On Mon, Dec 14, 2015 at 03:11:04PM -0700, Chris Murphy wrote:
Should I just file bugs like that for each one of these other missing components, and set them to block a tracker bug for things to include on the ISO? In my view a baremetal installation is first a server, so it should have basic server tools.
The hardware enablement and configuration stuff needs to be available, I agree. Unfortunately, after all of these years, hardware is still terrible.* But I'm not sure stuffing every possible tool into Atomic is the way to go. Options I can see are:
A. Separate hardware/virt trees; have the installer ISO point at the hardware one by default (but also have the option of virt)
B. Finishing Atomic overlay support; making hardware enablement an overlay
C. Getting all this stuff to work properly in SPCs
D. Something else?
* so is software. *sigh*
B. Finishing Atomic overlay support; making hardware enablement an overlay
This could be highly useful for supporting things that need to be added to the base daemon set of an Atomic host, e.g. the gluster client and fuse for GlusterFS persistent storage support. But I think we'd need to clone Colin to make this happen faster... There's also the idea of a containerized version of k8s that ships all the packages needed for the k8s-specific support.
Alternatively, if we think having glusterfs / ceph / etc. available as a "native" storage method is important, then I guess that would be a feature add.
C. Getting all this stuff to work properly in SPCs
There is a "tools" (fedora/tools) SPC that starts down this road. I'm sure that could use some more eyes and work on what needs to be added and how things work. I'm not sure what tools would need to be taught in order to use the /host mount or to change how the underlying host gets represented up. Maybe we can get a thread going around updates to the tools SPC?
- Matt M
On Tue, Dec 15, 2015 at 9:08 AM, Matthew Miller mattdm@fedoraproject.org wrote:
On Mon, Dec 14, 2015 at 03:11:04PM -0700, Chris Murphy wrote:
Should I just file bugs like that for each one of these other missing components, and set them to block a tracker bug for things to include on the ISO? In my view a baremetal installation is first a server, so it should have basic server tools.
The hardware enablement and configuration stuff needs to be available, I agree. Unfortunately, after all of these years, hardware is still terrible.* But I'm not sure stuffing every possible tool into Atomic is the way to go. Options I can see are:
A. Separate hardware/virt trees; have the installer ISO point at the hardware one by default (but also have the option of virt)
B. Finishing Atomic overlay support; making hardware enablement an overlay
C. Getting all this stuff to work properly in SPCs
D. Something else?
* so is software. *sigh*
--
Matthew Miller
mattdm@fedoraproject.org
Fedora Project Leader
Matthew,
On 2015-12-16 01:08, Matthew Miller wrote:
On Mon, Dec 14, 2015 at 03:11:04PM -0700, Chris Murphy wrote:
Should I just file bugs like that for each one of these other missing components, and set them to block a tracker bug for things to include on the ISO? In my view a baremetal installation is first a server, so it should have basic server tools.
The hardware enablement and configuration stuff needs to be available, I agree. Unfortunately, after all of these years, hardware is still terrible.* But I'm not sure stuffing every possible tool into Atomic is the way to go. Options I can see are:
A. Separate hardware/virt trees; have the installer ISO point at the hardware one by default (but also have the option of virt)
B. Finishing Atomic overlay support; making hardware enablement an overlay
C. Getting all this stuff to work properly in SPCs
D. Something else?
* so is software. *sigh*
Does this mean there would be different hardware trees on the iso or that a basic iso would be pulling the appropriate tree via the network?
What are SPCs?
Thanks,
Phil.
On Wed, Dec 16, 2015 at 10:52:43AM +1100, Philip Rhoades wrote:
A. Separate hardware/virt trees; have the installer ISO point at the hardware one by default (but also have the option of virt)
B. Finishing Atomic overlay support; making hardware enablement an overlay
C. Getting all this stuff to work properly in SPCs
D. Something else?
* so is software. *sigh*
Does this mean there would be different hardware trees on the iso or that a basic iso would be pulling the appropriate tree via the network?
Well, for "A", I was thinking one for hardware, one for virt/cloud — not going down the path of different trees for different types of hardware, because that's definitely the road to madness.
For "B" (which is only theoretical, and as someone mentioned, may require cloning Colin), there could be different overlays depending on needs.
What are SPCs?
Super-privileged containers. Basically, containers that are meant to manage the host OS. See https://www.youtube.com/watch?v=eJIeGnHtIYg from DevConf.cz last year.
On Tue, Dec 15, 2015 at 5:03 PM, Matthew Miller mattdm@fedoraproject.org wrote:
On Wed, Dec 16, 2015 at 10:52:43AM +1100, Philip Rhoades wrote:
A. Separate hardware/virt trees; have the installer ISO point at the hardware one by default (but also have the option of virt)
B. Finishing Atomic overlay support; making hardware enablement an overlay
C. Getting all this stuff to work properly in SPCs
D. Something else?
* so is software. *sigh*
Does this mean there would be different hardware trees on the iso or that a basic iso would be pulling the appropriate tree via the network?
Well, for "A", I was thinking one for hardware, one for virt/cloud — not going down the path of different trees for different types of hardware, because that's definitely the road to madness.
For "B" (which is only theoretical, and as someone mentioned, may require cloning Colin), there could be different overlays depending on needs.
What are SPCs?
Super-privileged containers. Basically, containers that are meant to manage the host OS. See https://www.youtube.com/watch?v=eJIeGnHtIYg from DevConf.cz last year.
Between ostree, SPCs, fs options, and overlays, I think this is a lot to chew on, and a lot of change in a short amount of time. That goes for the people doing the work, those who will have to document the differences compared to conventional install+setup+management, and the users who will have to learn all this.
A persistent SPC to log in to for managing the host is problematic for a significant minority of use cases where the storage hardware changes and the container (currently) isn't aware of it for some tools. So I think that needs more investigation and fixes so that we're not having to document exceptions.
It seems to me the easier thing to do is tolerate baking more stuff into the images and ISO. Growing that list now, and shrinking it later, is a better understood process, can be done faster, and requires fewer resources. And by later, I mean once SPCs and the overlay stuff are a) more mature, b) better understood, and c) the people doing that work have time to do it.
The hardware-specific utils could go in a metapackage that's installed by default only from the ISO, and not in the images.
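E.g. the ISO's kickstart could pull in a comps group that the cloud images simply never reference; something like this, with a hypothetical group name:

%packages
@core
# hypothetical group carrying hdparm, smartmontools, pciutils, iotop, mdadm, dosfstools, ...
@hardware-support
%end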
On Tue, Dec 15, 2015, 5:49 PM Chris Murphy lists@colorremedies.com wrote:
It seems to me the easier thing to do is tolerate baking more stuff into the images and ISO.
OK I just rewound this in my head, and said WTS out loud. There's one cloud atomic tree, right? So adding a bunch of hardware stuff affects that whole tree, and everything that uses it.
OK, instead: be clearer on the downloads page about the limitations of the atomic ISO on baremetal, and instead offer the Server ISO or netinstall media (non-atomic install), picking the Cloud Server option in the installer, for a more complete and flexible install on hardware.
I do still wonder about decoupling kernel from the tree. Kernel regressions happen.
Chris Murphy
Growing that list now, and shrinking it later, is a better understood process, can be done faster, and requires fewer resources. And by later, I mean once SPCs and the overlay stuff are a) more mature, b) better understood, and c) the people doing that work have time to do it.
The hardware-specific utils could go in a metapackage that's installed by default only from the ISO, and not in the images.
-- Chris Murphy
On Wed, Dec 16, 2015 at 07:51:05AM +0000, Chris Murphy wrote:
It seems to me the easier thing to do is tolerate baking more stuff into the images and ISO.
OK I just rewound this in my head, and said WTS out loud. There's one cloud atomic tree, right? So adding a bunch of hardware stuff affects that whole tree, and everything that uses it.
LOL. Yeah.
OK, instead: be clearer on the downloads page about the limitations of the atomic ISO on baremetal, and instead offer the Server ISO or netinstall media (non-atomic install), picking the Cloud Server option in the installer, for a more complete and flexible install on hardware.
Yeah, I think this might be the way to go, especially as we elevate Atomic more to the top level.
On 2015-12-11 20:23, Chris Murphy wrote:
Hi, Is there a wish list of things to consider adding to a future ISO? These are things not on the current ISO that are often on other baremetal installs like the Workstation and Server products. I don't have enough familiarity to say which of these should just be included in the base installation and which could be gathered into one or more "util" or "extras" type Fedora docker images. So far I'm running into:
/lib/firmware/ is read-only so I can't add this:

[ 14.599501] iwlwifi 0000:02:00.0: request for firmware file 'iwlwifi-7265D-13.ucode' failed.
I don't know whether bind mounting /var/lib/firmware onto /lib/firmware can be done early enough that it'll be picked up by the kernel.
pciutils (which contains lspci)
hdparm
smartmontools
iotop
These I have running in a fedora container. lspci mostly works, but getting full -vvnn detail requires --privileged=true, and the other three require it outright. iotop additionally needs --net=host. I'd be OK with them just being available in a container, but it might make more sense to include them in the atomic ISO installation, maybe even borrowing a list from the Server product?
Hi,
I recently started experimenting with Kubernetes on Atomic Hosts, and one aspect I find important is persistent storage. K8s offers quite a few options like NFS, Gluster, Ceph, etc. I initially wanted to use Gluster because it kind of fits the use case, but unfortunately the Gluster packages were not available in the Atomic image, so I used NFS for now. Would it be possible to add the needed packages for the different types of shared storage?
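For example, even the k8s GlusterFS volume plugin needs the Gluster client bits (mount.glusterfs / FUSE) on the host itself, since the node ends up doing the equivalent of (server and volume names made up):

# mount -t glusterfs gluster1.example.com:/kubevol /mnt/kubevol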
Vincent
On Fri, Dec 11, 2015 at 08:55:29PM +0100, Vincent Van der Kussen wrote:
I initially wanted to use Gluster because it kind of fits the use case, but unfortunately the Gluster packages were not available in the Atomic image, so I used NFS for now. Would it be possible to add the needed packages for the different types of shared storage?
Hi Vincent. I think the plan here is to have Gluster as a super-privileged container, rather than being on the host. I saw some hacking around that a little while ago, but I'm not sure of the current state. I'll check and see what I can find.
Hi Matthew,
Thanks for replying. I also saw the hack with privileged containers too. I was not sure how "production ready" this was and if it would become somehow a general practice.
Vincent
On 2015-12-15 13:36, Matthew Miller wrote:
On Fri, Dec 11, 2015 at 08:55:29PM +0100, Vincent Van der Kussen wrote:
I initially wanted to use Gluster because it kind of fits the use case, but unfortunately the Gluster packages were not available in the Atomic image, so I used NFS for now. Would it be possible to add the needed packages for the different types of shared storage?
Hi Vincent. I think the plan here is to have Gluster as a super-privileged container, rather than being on the host. I saw some hacking around that a little while ago, but I'm not sure of the current state. I'll check and see what I can find.