-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
So, the Workstation group has done a truly fantastic job these last couple of weeks designing their Technical Specification[1]. At today's Server WG meeting, the topic came up about the "Core Services and Features" section of this document. I think that much of it is reusable for the Server Technical Specification, so I'm going to go through it and make comments and recommendations. Please add your own thoughts and I'll get them up on the wiki tomorrow.
=== File system ===
The default file system type for workstation installs should be btrfs.
The default file system is definitely up for some debate, but I'd make an argument for using XFS atop LVM[2] for the default filesystem in the Fedora Server, at least in part because Red Hat's storage experts have done the research for us already and determined that XFS is the recommended fit for Red Hat Enterprise Linux 7.
Btrfs still makes me somewhat nervous, given that its upstream doesn't consider it stable[3].
=== Service management ===
Systemd provides ways to control and monitor the activity and status of system services, resources they require, etc. System services are expected to provide systemd units. See the systemd [http://0pointer.de/public/systemd-man/systemd.unit.html documentation].
I think we want to go along with this, but make a stronger statement about systemd units.
"System services *must* provide systemd units to be included in the Fedora Server standard installation."
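For concreteness, this is roughly all the mandate asks of a packager (the service name and binary path below are invented for illustration):

```ini
# Hypothetical /usr/lib/systemd/system/exampled.service
[Unit]
Description=Example Server Role daemon
After=network.target

[Service]
Type=simple
ExecStart=/usr/sbin/exampled

[Install]
WantedBy=multi-user.target
```

That's a low bar, and it buys us consistent start/stop/status handling across every service in the standard installation.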
=== Logging ===
The systemd journal will be used as the local storage backend for system logs. For 'managed' scenarios (e.g. the 'developer in a large organization' use case of the PRD), it will be possible to collect the logs in a centralized location, off the local machine.
Applications and services can either use the syslog API or the journal APIs for their logging. See the journal API [http://0pointer.de/public/systemd-man/sd-journal.html documentation].
I agree with this as well. We should focus on the use of journald as the preferred log aggregator and make sure that it is fully capable of aggregating those logs centrally. The advantages provided by journald's structured logging will make processing of those logs much easier.
That said, for the immediate future I think we also need to mandate that the system MUST continue to be capable of exporting traditional syslog messages to existing log aggregation systems.
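For what it's worth, journald can already hand messages to a classic syslog daemon with a one-line drop-in. A sketch (I write the file to a temp dir here purely for illustration; the real path would be /etc/systemd/journald.conf.d/forward.conf, and whether we prefer this or a full rsyslog forwarder is an open question):

```shell
# Turn on journald's forwarding to a traditional syslog daemon
dir=$(mktemp -d)
cat > "$dir/forward.conf" <<'EOF'
[Journal]
ForwardToSyslog=yes
EOF
grep -q '^ForwardToSyslog=yes' "$dir/forward.conf" && echo "forwarding configured"
```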
=== Networking ===
Network devices and connections will be controlled by NetworkManager. This includes support for VPN, which is relevant for 'corporate' scenarios. Applications are advised to use higher-level APIs (such as [https://developer.gnome.org/gio/stable/GNetworkMonitor.html GNetworkMonitor] in GIO) to monitor online status.
This is going to be the contentious point, I expect. We've already seen some chatter on this list around systemd-networkd and of course there's a long history of wariness around NetworkManager as a replacement for traditional network scripts.
This will need to come to a vote, but here are my feelings on the matter:
== Network Scripts ==
+ Powerful, stable and widely deployed.
- Configuration requires modification of a large number of plaintext files. Central management of this configuration is difficult and generally requires tools like Puppet and Chef to be deployed to do so reliably.
- Complex configuration is highly prone to accidents necessitating a visit from the "crash cart" in the data center.

== Network Manager ==
+ Powerful and (now) stable with support for enterprise features like bridging and bonding.
+ Consistent API allows for simplified configuration with fewer opportunities for producing an unusable system.
+ Default networking stack for RHEL 7 means it will get considerable bugfixing resources allocated to it.
- Requires a running daemon on the system (though work is under way to have the daemon shut down except when needed)
- Bad PR history means that some administrators view it unfavorably

== systemd-networkd ==
+ Very low overhead, tight integration with low-level plumbing
- Immature with few features. Only saw its first release this month.
- Not currently available on Fedora[4]
My personal view is that NetworkManager is the best option *today* (understanding full well that there will need to be a fair amount of marketing effort to educate people on how far it has come in the last couple years), while systemd-networkd may become a highly interesting option in the future. Given our focus in Fedora Server on making configuration more approachable, I can't really see us recommending network scripts as the *default* offering.
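To illustrate the "fewer opportunities for producing an unusable system" point, this is roughly what a bonded interface looks like via nmcli (interface names are placeholders, and this is a sketch rather than a tested recipe):

```shell
# Create an active-backup bond and enslave two NICs to it,
# instead of hand-editing several ifcfg files
nmcli con add type bond ifname bond0 mode active-backup
nmcli con add type bond-slave ifname em1 master bond0
nmcli con add type bond-slave ifname em2 master bond0
```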
=== Firewall ===
A firewall in its default configuration may not interfere with the normal operation of programs installed by default.
I would extend this statement to include that the deployment of Server Roles should also adjust the firewall operation in a manner consistent with user expectation.
We should detect when the system is on a public or untrusted network and prevent the user from unwanted sharing of e.g. music or other media in this situation. A firewall (and network zones as currently implemented by firewalld) may or may not be part of a solution to this.
The concept of network zones should probably be basically ignored for Fedora Server, as we should generally default to closing all ports except for those made available for installed Roles. (Also, the Role configuration should optionally be able to specify on which interfaces it wishes to operate, so we can restrict internal vs. external operation in a multi-homed environment).
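A Role deployment under that model might do something like the following at install time (a sketch only; the port and interface names are hypothetical, and whether we stick with firewalld for this is exactly the open question above):

```shell
# Bind the Role's traffic to the internal NIC and open just its port
firewall-cmd --permanent --zone=internal --add-interface=em2
firewall-cmd --permanent --zone=internal --add-port=389/tcp
firewall-cmd --reload
```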
=== SELinux ===
SELinux will be enabled in enforcing mode, using the targeted policy.
+1000
The base install and all approved Server Roles must operate in targeted enforcing mode.
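A Role's test suite could assert that requirement mechanically; something along these lines (sketch):

```shell
# Fail unless the system is enforcing with the targeted policy
test "$(getenforce)" = "Enforcing"
grep -q '^SELINUXTYPE=targeted' /etc/selinux/config
```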
=== Problem reporting ===
Problems and error conditions (e.g. kernel oopses, SELinux AVCs, application crashes, OOM, disk errors) should all be reported in the systemd journal.
Ack
Sending this information to a central place (like abrt does for crashes today) should be possible, but not mandatory. Depending on the use case, it may be turned off, enabled manually on a case-by-case basis, or entirely automatic without user intervention.
In the case of the Fedora Server, I think that reporting information to a central location must be mandatory. Most servers in real-world deployment are headless in a datacenter somewhere. Administrators will need to be able to see all issues from a standard console.
Also, we need to keep in mind that the majority of servers will *not* have visibility to the internet, so transmitting ABRT results directly to Bugzilla is often impossible. We will need to be able to aggregate the issues on a network-local management server. (Note: IMHO this is not a blocker requirement on F21)
=== Session tracking ===
Logind will be used as the session tracking facility.
Applications that need to interact with sessions can use the logind [http://www.freedesktop.org/software/systemd/man/sd_session_is_active.html library API], the [http://www.freedesktop.org/wiki/Software/systemd/logind/ D-Bus API], or a higher-level API.
+1
=== Account handling ===
SSSD is providing the backing storage for identity management. For 'managed' scenarios (e.g. the 'developer in a large organization' use case of the PRD), it will be possible to configure it to rely on a directory service for this information. The accountsservice is providing a D-Bus interface for user account information; this may be integrated into SSSD at some point.
Depending on their needs, application and services can either use the POSIX APIs (getpwent(), etc) or the accountsservice D-Bus interface to obtain user information.
As the Fedora Server is more likely than Workstation to require central management, I think we need to adopt this wholeheartedly. Also, realmd should be considered a core piece of our story, as it enables automatic configuration of SSSD with either FreeIPA (our Domain Controller Role) or Active Directory (Microsoft Windows Domain Controller).
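The appeal of realmd is that enrollment collapses to essentially one command either way (domain names below are placeholders):

```shell
# Discover what kind of domain we are pointed at, then enroll;
# realmd configures SSSD appropriately for FreeIPA or AD
realm discover example.com
realm join example.com
```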
=== Software updates ===
gnome-software will use PackageKit with the hawkey backend to obtain and install software updates for packaged applications and the OS itself. The recommendation for applications is to use the PackageKit APIs to interact with the underlying packaging system.
Software updates on a server system should be designed in such a way that they can be enforced centrally. With Fedora Server, this probably means picking one of the common config management systems such as Puppet, Chef, Red Hat Satellite or else relying on OpenLMI for performing central software upgrades.
For single-server manipulation, I think we should focus on supporting yum/dnf.
=== Miscellaneous system information ===
System locale, timezone, hostname, etc. will be managed through the services provided by systemd for this purpose. See developer documentation for [http://www.freedesktop.org/wiki/Software/systemd/localed/ localed], [http://www.freedesktop.org/wiki/Software/systemd/timedated/ timedated] and [http://www.freedesktop.org/wiki/Software/systemd/hostnamed/ hostnamed]
I also think we should stick with the systemd-offered mechanisms for this functionality (and I know that Cockpit is already interfacing with much of it).
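For reference, the CLI front-ends to those services are already there and are what Cockpit drives under the hood:

```shell
# hostnamed, timedated and localed respectively
hostnamectl set-hostname server1.example.com
timedatectl set-timezone UTC
localectl set-locale LANG=en_US.UTF-8
```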
=== Virtualization ===
libvirt-daemon will be used to manage virtualization capabilities.
We probably want to use libvirt-daemon for virtualization and focus on systemd-nspawn for containerization.
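The nspawn workflow is already pleasantly minimal (the container path below is a placeholder for any directory holding an OS tree):

```shell
# Boot a full OS tree as a container, with its own init
systemd-nspawn -D /srv/mycontainer -b
```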
=== Display manager ===
gdm will be used as the display manager. It is responsible for showing a login screen on each seat. It will be able to launch both X-based sessions and Wayland sessions.
Desktop environments are expected to make themselves known as an available session option on the login screen by dropping a .desktop file into /usr/share/xsessions (or its wayland equivalent).
Other facilities provided by the display manager include screen unlock authentication and user switching.
Display manager is irrelevant to the Server product.
=== Accessibility ===
The accessibility support in the workstation includes a screen reader, a high-contrast theme and a zoom capability, amongst others. The screen reading is provided through orca, which runs as a session service and requires the at-spi infrastructure. Applications are expected to provide suitable information to the screen reader via the toolkit's accessibility support. Applications are also expected to work acceptably in the high-contrast theme. The zoom is implemented in the desktop shell and does not need any application support.
Accessibility on the server is a topic I'm fairly comfortable with deferring to the management tools such as Cockpit and Katello/Foreman. On the pure command-line, I think the most we can do is assert that any interactive operation we enable should have a configurable timeout to deal with potentially slow typists.
=== Input Methods ===
The input method framework on the workstation is provided by ibus. Input methods and keyboard layouts can be configured in the control-center, and selected in shell keyboard menu. The supported application toolkits all support ibus.
I'm not sure this is a situation we need to get ourselves involved in. Most interaction will be in the shell, so hopefully the LOCALE setting will be sufficient.
=== Graphics ===
The workstation session will switch to using a Wayland compositor as soon as feasible. Until then, it will be based on X11. Even after the switch, an X server will be included, so applications can either connect to Wayland natively, or run as an X client.
Not applicable
=== Media support ===
Sound hardware and audio streams will be managed by pulseaudio. Applications are recommended to use the [http://gstreamer.freedesktop.org/documentation/ gstreamer] framework for media playback.
Not applicable
=== Appearance ===
The workstation will ship with a single theme, which will have support for the included toolkits: gtk3, qt and gtk2. Applications are expected to work well with this theme, as well as with the high-contrast theme that is used for accessibility. The theme will include a dark variant that applications can opt into using (this is most suitable for certain content-focused applications). The theme also includes an icon theme that provides named icons according to the icon-naming spec, plus symbolic variants.
We will be using the Adwaita theme, with a yet-to-be-written qt variant.
As for "appearance", my view is that Cockpit should be the official "face" of the Fedora Server. Opinions welcome :)
=== Application Integration ===
Installed applications are expected to install a desktop file in /usr/share/applications and an application icon in the hicolor icon theme.
Packaged applications are also expected to provide [http://people.freedesktop.org/~hughsient/appdata/ appdata] for use in the application installer.
Not applicable
=== System Installer ===
The desired installation experience for the workstation product is to limit the pre-installation user interaction to the minimum. The storage configuration UI should be focused on the classes of hardware that are expected in workstation-class machines. Package selection is not necessary: the installer will install the workstation product as defined. Tweaks, customizations and software additions should be performed after the installation.
One aspect of storage configuration that will be needed is support for dual-boot setups (preserving preexisting Windows or OS X installations), since e.g. students may be required to run software on those platforms for their coursework.
gnome-initial-setup already provides support for post-install user creation, language selection, timezone configuration, etc. If necessary, it should be extended to cover all required setup tasks.
I'm not even sure where to begin here. The system installer discussion probably needs to have its own thread.
[1] https://fedoraproject.org/wiki/Workstation/Technical_Specification
[2] I haven't got an opinion on traditional LVM vs. thinly-provisioned LVM at this point.
[3] https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable.3F
[4] http://lists.freedesktop.org/archives/systemd-devel/2014-February/017146.htm...
Stephen Gallagher (sgallagh@redhat.com) said:
=== Display manager ===
gdm will be used as the display manager. It is responsible for showing a login screen on each seat. It will be able to launch both X-based sessions and Wayland sessions.
Desktop environments are expected to make themselves known as an available session option on the login screen by dropping a .desktop file into /usr/share/xsessions (or its wayland equivalent).
Other facilities provided by the display manager include screen unlock authentication and user switching.
Display manager is irrelevant to the Server product.
Admittedly, this is rehashing a bit of an old discussion, but we have (RHEL) stats that somewhere around 20% of server installs install a desktop.
If Fedora Server wants to explicitly not serve those users, it's worth noting specifically. If it wants to serve them by offering some sort of environment that boots into just cockpit + terminal + web browser, it's worth specifying the requirements there.
Bill
On 02/25/2014 03:58 PM, Bill Nottingham wrote:
Stephen Gallagher (sgallagh@redhat.com) said:
=== Display manager ===
gdm will be used as the display manager. It is responsible for showing a login screen on each seat. It will be able to launch both X-based sessions and Wayland sessions.
Desktop environments are expected to make themselves known as an available session option on the login screen by dropping a .desktop file into /usr/share/xsessions (or its wayland equivalent).
Other facilities provided by the display manager include screen unlock authentication and user switching.
Display manager is irrelevant to the Server product.
Admittedly, this is rehashing a bit of an old discussion, but we have (RHEL) stats that somewhere around 20% of server installs install a desktop.
If Fedora Server wants to explicitly not serve those users, it's worth noting specifically. If it wants to serve them by offering some sort of environment that boots into just cockpit + terminal + web browser, it's worth specifying the requirements there.
"7. The user must be able to install and manage Fedora Server in a headless mode where the display framework is inactive." [1]
That being said, Cockpit itself has a terminal emulator built-in as well. So ideally there's no reason to install a graphical environment on the system. You should be able to just connect to it from any browser and operate from there.
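Assuming Cockpit keeps its current socket-activated design, the headless story is essentially two commands (9090 is Cockpit's default port):

```shell
# Enable and start Cockpit; then browse to https://server:9090
systemctl enable cockpit.socket
systemctl start cockpit.socket
```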
Also, I want to make it relatively easy for a Fedora Server install to essentially just "push a button" and become a fully-functional Workstation as well. That should be a non-default case.
[1] https://fedoraproject.org/wiki/Server/Product_Requirements_Document#Use_Case...
On Tue, 2014-02-25 at 15:42 -0500, Stephen Gallagher wrote:
So, the Workstation group has done a truly fantastic job these last couple of weeks designing their Technical Specification[1]. At today's Server WG meeting, the topic came up about the "Core Services and Features" section of this document. I think that much of it is reusable for the Server Technical Specification, so I'm going to go through it and make comments and recommendations. Please add your own thoughts and I'll get them up on the wiki tomorrow.
=== File system ===
The default file system type for workstation installs should be btrfs.
UGH! Certainly not on my hardware; hopefully this will be easy to change.
The default file system is definitely up for some debate, but I'd make an argument for using XFS atop LVM[2] for the default filesystem in the Fedora Server, at least in part because Red Hat's storage experts have done the research for us already and determined that XFS is the recommended fit for Red Hat Enterprise Linux 7.
Even if it weren't the default on RHEL 7, I would second using XFS; it's definitely the best server file system we have available right now when you weigh size/features/reliability.
Btrfs still makes me somewhat nervous, given that its upstream doesn't consider it stable[3].
It is not stable yet; I do not think it makes sense for us to go btrfs for a server. A desktop may get away with it (although the one laptop where I have it for testing sure would like something that is not that slow).
=== Service management ===
Systemd provides ways to control and monitor the activity and status of system services, resources they require, etc. System services are expected to provide systemd units. See the systemd [http://0pointer.de/public/systemd-man/systemd.unit.html documentation].
I think we want to go along with this, but make a stronger statement about systemd units.
"System services *must* provide systemd units to be included in the Fedora Server standard installation."
I think this is a given, but if we feel the need to spell it out let's do it.
=== Logging ===
The systemd journal will be used as the local storage backend for system logs. For 'managed' scenarios (e.g. the 'developer in a large organization' use case of the PRD), it will be possible to collect the logs in a centralized location, off the local machine.
Applications and services can either use the syslog API or the journal APIs for their logging. See the journal API [http://0pointer.de/public/systemd-man/sd-journal.html documentation].
I agree with this as well. We should focus on the use of journald as the preferred log aggregator and make sure that it is fully capable of aggregating those logs centrally. The advantages provided by journald's structured logging will make processing of those logs much easier.
That said, for the immediate future I think we also need to mandate that the system MUST continue to be capable of exporting traditional syslog messages to existing log aggregation systems.
I agree. I think we MUST support full log aggregation via rsyslog, and possibly make it easy to activate.
=== Networking ===
Network devices and connections will be controlled by NetworkManager. This includes support for VPN, which is relevant for 'corporate' scenarios. Applications are advised to use higher-level APIs (such as [https://developer.gnome.org/gio/stable/GNetworkMonitor.html GNetworkMonitor] in GIO) to monitor online status.
This is going to be the contentious point, I expect. We've already seen some chatter on this list around systemd-networkd and of course there's a long history of wariness around NetworkManager as a replacement for traditional network scripts.
This will need to come to a vote, but here are my feelings on the matter:
== Network Scripts ==
+ Powerful, stable and widely deployed.
- Configuration requires modification of a large number of plaintext files. Central management of this configuration is difficult and generally requires tools like Puppet and Chef to be deployed to do so reliably.
- Complex configuration is highly prone to accidents necessitating a visit from the "crash cart" in the data center.
I think we should stop with scripts in Fedora Server; they are brittle.
== Network Manager ==
+ Powerful and (now) stable with support for enterprise features like bridging and bonding.
+ Consistent API allows for simplified configuration with fewer opportunities for producing an unusable system.
+ Default networking stack for RHEL 7 means it will get considerable bugfixing resources allocated to it.
- Requires a running daemon on the system (though work is under way to have the daemon shut down except when needed)
- Bad PR history means that some administrators view it unfavorably
+1
== systemd-networkd ==
+ Very low overhead, tight integration with low-level plumbing
- Immature with few features. Only saw its first release this month.
- Not currently available on Fedora[4]
-1 on immature grounds in the short term
My personal view is that NetworkManager is the best option *today* (understanding full well that there will need to be a fair amount of marketing effort to educate people on how far it has come in the last couple years), while systemd-networkd may become a highly interesting option in the future. Given our focus in Fedora Server on making configuration more approachable, I can't really see us recommending network scripts as the *default* offering.
ack
=== Firewall ===
A firewall in its default configuration may not interfere with the normal operation of programs installed by default.
I do not understand what "normal operation" means here. What is normal is in part determined by what the user of the system wants to accomplish.
I would extend this statement to include that the deployment of Server Roles should also adjust the firewall operation in a manner consistent with user expectation.
Are we going to use something like firewalld, or something else? Should we have the firewall configured by default, or not?
We should detect when the system is on a public or untrusted network and prevent the user from unwanted sharing of e.g. music or other media in this situation. A firewall (and network zones as currently implemented by firewalld) may or may not be part of a solution to this.
The concept of network zones should probably be basically ignored for Fedora Server, as we should generally default to closing all ports except for those made available for installed Roles. (Also, the Role configuration should optionally be able to specify on which interfaces it wishes to operate, so we can restrict internal vs. external operation in a multi-homed environment).
What do we gain from a firewall that any application can poke holes in? Can someone state the benefits, or a situation where the default configuration would be safer with a firewall?
=== SELinux ===
SELinux will be enabled in enforcing mode, using the targeted policy.
+1000
The base install and all approved Server Roles must operate in targeted enforcing mode.
+1
=== Problem reporting ===
Problems and error conditions (e.g. kernel oopses, SELinux AVCs, application crashes, OOM, disk errors) should all be reported in the systemd journal.
Ack
Is there another place they may be reported to?
Sending this information to a central place (like abrt does for crashes today) should be possible, but not mandatory. Depending on the use case, it may be turned off, enabled manually on a case-by-case basis, or entirely automatic without user intervention.
In the case of the Fedora Server, I think that reporting information to a central location must be mandatory. Most servers in real-world deployment are headless in a datacenter somewhere. Administrators will need to be able to see all issues from a standard console.
There is a problem with sending information from things like abrtd to a central location by default, in that sensitive information may get disclosed unless the log server is reached only through authenticated and encrypted connections. Ideally we would have a way to signal the trustworthiness of the log and change behavior accordingly and automatically. Whether we can do this in the short/medium term I do not know.
But technically rsyslog can secure connections and even do mutual authentication as it supports both TLS and GSSAPI. I also discussed at DevConf.cz with the rsyslog maintainer some secure store-and-forward techniques to use with ephemeral encryption keys and such so it is an option.
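For the record, a TLS-forwarding rsyslog fragment looks roughly like this (legacy directive syntax; the CA path and hostname are placeholders):

```
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/pki/rsyslog/ca.pem
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
*.* @@loghost.example.com:6514
```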
Also, we need to keep in mind that the majority of servers will *not* have visibility to the internet, so transmitting ABRT results directly to Bugzilla is often impossible. We will need to be able to aggregate the issues on a network-local management server. (Note: IMHO this is not a blocker requirement on F21)
Whether it is "possible" or not, automatic transmission is almost always inappropriate IMO. Too much potentially sensitive info can be transmitted with these kinds of reports; they have to be validated and approved for transmission by an admin.
I think this should be an actual requirement for the Server platform.
=== Session tracking ===
Logind will be used as the session tracking facility.
Applications that need to interact with sessions can use the logind [http://www.freedesktop.org/software/systemd/man/sd_session_is_active.html library API], the [http://www.freedesktop.org/wiki/Software/systemd/logind/ D-Bus API], or a higher-level API.
+1
=== Account handling ===
SSSD is providing the backing storage for identity management. For 'managed' scenarios (e.g. the 'developer in a large organization' use case of the PRD), it will be possible to configure it to rely on a directory service for this information. The accountsservice is providing a D-Bus interface for user account information; this may be integrated into SSSD at some point.
Depending on their needs, application and services can either use the POSIX APIs (getpwent(), etc) or the accountsservice D-Bus interface to obtain user information.
As the Fedora Server is more likely than Workstation to require central management, I think we need to adopt this wholeheartedly. Also, realmd should be considered a core piece of our story, as it enables automatic configuration of SSSD with either FreeIPA (our Domain Controller Role) or Active Directory (Microsoft Windows Domain Controller).
+1 (though I have a conflict of interest here :-)
=== Software updates ===
gnome-software will use PackageKit with the hawkey backend to obtain and install software updates for packaged applications and the OS itself. The recommendation for applications is to use the PackageKit APIs to interact with the underlying packaging system.
Software updates on a server system should be designed in such a way that they can be enforced centrally. With Fedora Server, this probably means picking one of the common config management systems such as Puppet, Chef, Red Hat Satellite or else relying on OpenLMI for performing central software upgrades.
I think you forgot spacewalk here.
For single-server manipulation, I think we should focus on supporting yum/dnf.
I think yum (or its successor CLI tool) should be the default here, indeed.
=== Miscellaneous system information ===
System locale, timezone, hostname, etc. will be managed through the services provided by systemd for this purpose. See developer documentation for [http://www.freedesktop.org/wiki/Software/systemd/localed/ localed], [http://www.freedesktop.org/wiki/Software/systemd/timedated/ timedated] and [http://www.freedesktop.org/wiki/Software/systemd/hostnamed/ hostnamed]
I also think we should stick with the systemd-offered mechanisms for this functionality (and I know that Cockpit is already interfacing with much of it).
To be honest I find hostnamed quite inadequate for the server case, as it introduces confusion in naming and by default will mangle perfectly valid FQDNs that the admin wants to assign to the machine.
I think we should carefully evaluate these mechanisms.
For example, messing up with the hostname often has annoying consequences when a server is enrolled into a central identity management system.
=== Virtualization ===
libvirt-daemon will be used to manage virtualization capabilities.
We probably want to use libvirt-daemon for virtualization and focus on systemd-nspawn for containerization.
What about libvirt-lxc/docker?
=== Display manager ===
gdm will be used as the display manager. It is responsible for showing a login screen on each seat. It will be able to launch both X-based sessions and Wayland sessions.
Desktop environments are expected to make themselves known as an available session option on the login screen by dropping a .desktop file into /usr/share/xsessions (or its wayland equivalent).
Other facilities provided by the display manager include screen unlock authentication and user switching.
Display manager is irrelevant to the Server product.
We already discussed that in some cases we will need one, as some server software unfortunately needs a graphical session for installation/configuration purposes.
So we should have at least a recommendation of how to start a graphical session if required, even if it is just a manual startx or if the recommendation is to use Xvnc and a vnc client or other similar options.
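The Xvnc route, for example, could be as simple as this (a sketch, assuming tigervnc-server is installed; the display number is arbitrary):

```shell
# Start a throwaway graphical session, use it, tear it down
vncserver :1
# ... run the vendor's graphical installer via a VNC client ...
vncserver -kill :1
```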
=== Accessibility ===
The accessibility support in the workstation includes a screen reader, a high-contrast theme and a zoom capability, amongst others. The screen reading is provided through orca, which runs as a session service and requires the at-spi infrastructure. Applications are expected to provide suitable information to the screen reader via the toolkit's accessibility support. Applications are also expected to work acceptably in the high-contrast theme. The zoom is implemented in the desktop shell and does not need any application support.
Accessibility on the server is a topic I'm fairly comfortable with deferring to the management tools such as Cockpit and Katello/Foreman. On the pure command-line, I think the most we can do is assert that any interactive operation we enable should have a configurable timeout to deal with potentially slow typists.
We should at least support braille devices out of the box for console interaction IMO.
=== Input Methods ===
The input method framework on the workstation is provided by ibus. Input methods and keyboard layouts can be configured in the control-center, and selected in shell keyboard menu. The supported application toolkits all support ibus.
I'm not sure this is a situation we need to get ourselves involved in. Most interaction will be in the shell, so hopefully the LOCALE setting will be sufficient.
=== Graphics ===
The workstation session will switch to using a Wayland compositor as soon as feasible. Until then, it will be based on X11. Even after the switch, an X server will be included, so applications can either connect to Wayland natively, or run as an X client.
Not applicable
See above.
=== Media support ===
Sound hardware and audio streams will be managed by pulseaudio. Applications are recommended to use the [http://gstreamer.freedesktop.org/documentation/ gstreamer] framework for media playback.
Not applicable
There are server-side media streamers, DLNA, etc. We can defer taking any action, but it is incorrect to say that a server OS has nothing to do with media support.
=== Appearance ===
The workstation will ship with a single theme, which will have support for the included toolkits: gtk3, qt and gtk2. Applications are expected to work well with this theme, as well as with the high-contrast theme that is used for accessibility. The theme will include a dark variant that applications can opt into using (this is most suitable for certain content-focused applications). The theme also includes an icon theme that provides named icons according to the icon-naming spec, plus symbolic variants.
We will be using the Adwaita theme, with a yet-to-be-written qt variant.
As for "appearance", my view is that Cockpit should be the official "face" of the Fedora Server. Opinions welcome :)
Should we say something about how the shell is configured, defaults, bash-completion, vim-enhanced/emacs/other plugins ?
=== Application Integration ===
Installed applications are expected to install a desktop file in /usr/share/applications and an application icon in the hicolor icon theme.
Packaged applications are also expected to provide [http://people.freedesktop.org/~hughsient/appdata/ appdata] for use in the application installer.
Not applicable
=== System Installer ===
The desired installation experience for the workstation product is to limit the pre-installation user interaction to the minimum. The storage configuration UI should be focused on the classes of hardware that are expected in workstation-class machines. Package selection is not necessary: the installer will install the workstation product as defined. Tweaks, customizations and software additions should be performed after the installation.
One aspect of storage configuration that will be needed is support for dual-boot setups (preserving preexisting Windows or OS X installations), since e.g. students may be required to run software on those platforms for their coursework.
gnome-initial-setup already provides support for post-install user creation, language selection, timezone configuration, etc. If necessary, it should be extended to cover all required setup tasks.
I'm not even sure where to begin here. The system installer discussion probably needs to have its own thread.
+1
Simo.
Replies inline; I cut out the places where we were in agreement.
On 02/25/2014 04:47 PM, Simo Sorce wrote:
On Tue, 2014-02-25 at 15:42 -0500, Stephen Gallagher wrote:
=== Firewall ===
A firewall in its default configuration may not interfere with the normal operation of programs installed by default.
I do not understand what "normal operation" means here. What is normal is in part determined by what the user of the system wants to accomplish.
I would extend this statement to include that the deployment of Server Roles should also adjust the firewall operation in a manner consistent with user expectation.
Are we going to use something like firewalld or something else ? Should we have the firewall configured by default, or not ?
I think that we should have a firewall configured by default, yes.
We should detect when the system is on a public or untrusted network and prevent the user from unwanted sharing of e.g. music or other media in this situation. A firewall (and network zones as currently implemented by firewalld) may or may not be part of a solution to this.
The concept of network zones should probably be basically ignored for Fedora Server, as we should generally default to closing all ports except for those made available for installed Roles. (Also, the Role configuration should optionally be able to specify on which interfaces it wishes to operate, so we can restrict internal vs. external operation in a multi-homed environment).
What do we gain from a firewall that any application can poke holes at ? Can someone state the benefits, or a situation where the default configuration would be safer with a firewall ?
This is not "any application". This is Server Roles. If a Server Role can't be seen through the firewall, it's broken. As I noted above, I do think that part of Role configuration needs to be whether it's visible on certain interfaces.
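As a configuration sketch of how that could look with firewalld (the interface and service names here are illustrative stand-ins for whatever a role actually uses, not anything decided):

```shell
# Make a role's service reachable only on an internal interface by putting
# that interface in the "internal" zone and opening the service there.
firewall-cmd --zone=internal --change-interface=eth1
firewall-cmd --zone=internal --add-service=ldap
# Ports in the default (external-facing) zone stay closed; repeat with
# --permanent to persist the configuration across reloads.
```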
=== Problem reporting ===
Problems and error conditions (e.g. kernel oopses, Selinux AVCs, application crashes, OOM, disk errors) should all be reported in the systemd journal.
Ack
Is there another place they may be reported to ?
They each have individual places where they might end up if not drawn together in the journal. This is basically just formalizing the current plan of record with journald.
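For illustration, once everything lands in the journal, these classes of problems can be pulled out with journalctl; the field values below are examples, and exactly what each component logs depends on its configuration:

```shell
journalctl -p err -b                        # priority "err" and worse, this boot
journalctl -k -p warning                    # kernel messages: oopses, disk errors
journalctl SYSLOG_IDENTIFIER=setroubleshoot # SELinux AVC analysis, if installed
journalctl _COMM=abrtd                      # abrt activity
```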
Sending this information to a central place (like abrt does for crashes today) should be possible, but not mandatory. Depending on the use case, it may be turned off, enabled manually on a case-by-case basis, or entirely automatic without user intervention.
In the case of the Fedora Server, I think that reporting information to a central location must be mandatory. Most servers in real-world deployment are headless in a datacenter somewhere. Administrators will need to be able to see all issues from a standard console.
There is a problem with sending information from things like abrtd to a central location by default, in that sensitive information may get disclosed unless the log server is reached only through authenticated and encrypted connections. Ideally we would have a way to signal the trustworthiness of the log and change behavior accordingly and automatically. Whether we can do this in the short/medium term I do not know.
But technically rsyslog can secure connections and even do mutual authentication as it supports both TLS and GSSAPI. I also discussed at DevConf.cz with the rsyslog maintainer some secure store-and-forward techniques to use with ephemeral encryption keys and such so it is an option.
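As a sketch of the client side of that (rsyslog's gtls netstream driver, shown in the legacy directive syntax; the CA path, peer name, and port are placeholders):

```shell
# Configuration fragment: forward all logs over mutually authenticated TLS.
cat > /etc/rsyslog.d/tls-forward.conf <<'EOF'
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/pki/tls/certs/ca.pem
$ActionSendStreamDriverMode 1              # require TLS
$ActionSendStreamDriverAuthMode x509/name  # authenticate the peer by cert name
$ActionSendStreamDriverPermittedPeer logs.example.com
*.* @@logs.example.com:6514                # @@ = plain TCP transport
EOF
systemctl restart rsyslog.service
```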
Also, we need to keep in mind that the majority of servers will *not* have visibility to the internet, so transmitting ABRT results directly to Bugzilla is often impossible. We will need to be able to aggregate the issues on a network-local management server. (Note: IMHO this is not a blocker requirement on F21)
Whether it is "possible" or not, automatic transmission is almost always inappropriate IMO. Too much potentially sensitive info can be transmitted with these kinds of reports; they have to be validated and approved for transmission by an admin.
I think this should be an actual requirement for the Server platform.
You make excellent points. This is a piece we're going to have to spend some time thinking about. I hope that Miloslav will chime in here, as he has a lot of first-hand knowledge on this front.
=== Account handling ===
SSSD is providing the backing storage for identity management. For 'managed' scenarios (e.g. the 'developer in a large organization' use case of the PRD), it will be possible to configure it to rely on a directory service for this information. The accountsservice is providing a D-Bus interface for user account information; this may be integrated into SSSD at some point.
Depending on their needs, application and services can either use the POSIX APIs (getpwent(), etc) or the accountsservice D-Bus interface to obtain user information.
As the Fedora Server is more likely than Workstation to require central management, I think we need to adopt this wholeheartedly. Also, realmd should be considered a core piece of our story, as it enables automatic configuration of SSSD with either FreeIPA (our Domain Controller Role) or Active Directory (Microsoft Windows Domain Controller).
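For reference, the realmd flow is roughly the following (the domain and account names are placeholders); realmd writes the SSSD configuration itself:

```shell
realm discover example.com            # what kind of domain was found?
realm join --user=admin example.com   # enroll; works for FreeIPA and AD
getent passwd someuser@example.com    # domain users via the normal POSIX APIs
```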
+1 (though I have a conflict of interest here :-)
I am, of course, equally guilty of this conflict of interest. :)
=== Software updates ===
gnome-software will use PackageKit with the hawkey backend to obtain and install software updates for packaged applications and the OS itself. The recommendation for applications is to use the PackageKit APIs to interact with the underlying packaging system.
Software updates on a server system should be designed in such a way that they can be enforced centrally. With Fedora Server, this probably means picking one of the common config management systems such as Puppet, Chef, or Red Hat Satellite, or else relying on OpenLMI for performing central software upgrades.
I think you forgot spacewalk here.
It wasn't meant to be an exhaustive list.
For single-server manipulation, I think we should focus on supporting yum/dnf.
I think yum/successor CLI tool should be the default here indeed.
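For the single-server case, the sketch is just the familiar workflow (dnf accepts the same verbs):

```shell
yum check-update        # exit status 100 when updates are available
yum -y update           # apply everything
yum history list        # review past transactions
yum history undo last   # roll back the most recent transaction if needed
```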
=== Miscellaneous system information ===
System locale, timezone, hostname, etc. will be managed through the services provided by systemd for this purpose. See developer documentation for [http://www.freedesktop.org/wiki/Software/systemd/localed/ localed], [http://www.freedesktop.org/wiki/Software/systemd/timedated/ timedated] and [http://www.freedesktop.org/wiki/Software/systemd/hostnamed/ hostnamed]
I also think we should stick with the systemd-offered mechanisms for this functionality (and I know that Cockpit is already interfacing with much of it).
To be honest, I find hostnamed quite inadequate for the server case, as it introduces confusion in naming and by default will mangle perfectly valid FQDNs that the admin wants to assign to the machine.
I think we should carefully evaluate these mechanisms.
For example, messing up with the hostname often has annoying consequences when a server is enrolled into a central identity management system.
True, we probably want to work with them to disallow hostname changes while a machine is enrolled in a domain. I think that the interface here is what we want; we can work on improving the execution.
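For reference, the systemd-provided mechanisms under discussion are driven like this from the CLI (the values are illustrative):

```shell
hostnamectl set-hostname server1.example.com
hostnamectl status
timedatectl set-timezone America/New_York
localectl set-locale LANG=en_US.UTF-8
```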
=== Virtualization ===
libvirt-daemon will be used to manage virtualization capabilities.
We probably want to use libvirt-daemon for virtualization and focus on systemd-nspawn for containerization.
what about libvirt-lxc/docker ?
Docker is moving away from libvirt-lxc towards systemd-nspawn (and I believe the latest releases have experimental support for this already). I think we probably want to meet them there.
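Roughly, that split would look like the following (the container path is illustrative):

```shell
# Full virtual machines through libvirt:
systemctl start libvirtd.service
virsh list --all                              # enumerate defined guests

# Containers through systemd-nspawn, booting a previously installed OS tree:
systemd-nspawn -D /srv/containers/fedora -b
```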
=== Display manager ===
gdm will be used as the display manager. It is responsible for showing a login screen on each seat. It will be able to launch both X-based sessions and Wayland sessions.
Desktop environments are expected to make themselves known as an available session option on the login screen by dropping a .desktop file into /usr/share/xsessions (or its wayland equivalent).
Other facilities provided by the display manager include screen unlock authentication and user switching.
Display manager is irrelevant to the Server product.
We already discussed that in some cases we will need one, as some server software unfortunately needs a graphical session for installation/configuration purposes.
So we should have at least a recommendation of how to start a graphical session if required, even if it is just a manual startx or if the recommendation is to use Xvnc and a vnc client or other similar options.
Valid points. We should start by deciding if we want to bite this off right now or defer it for a future release, though. Getting this right might be a challenge.
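If we do bite it off, a minimal recommendation might look like this (the package choice and display numbers are illustrative, not a decision):

```shell
# a) one-off graphical session on the local console:
startx

# b) remote session with TigerVNC:
vncserver :1 -geometry 1280x800      # on the server
vncviewer server.example.com:1       # from the admin's workstation
```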
=== Accessibility ===
The accessibility support in the workstation includes a screen reader, a high-contrast theme and a zoom capability, amongst others. The screen reading is provided through orca, which runs as a session service and requires the at-spi infrastructure. Applications are expected to provide suitable information to the screen reader via the toolkit's accessibility support. Applications are also expected to work acceptably in the high-contrast theme. The zoom is implemented in the desktop shell and does not need any application support.
Accessibility on the server is a topic I'm fairly comfortable with deferring to the management tools such as Cockpit and Katello/Foreman. On the pure command-line, I think the most we can do is assert that any interactive operation we enable should have a configurable timeout to deal with potentially slow typists.
We should at least support braille devices out of the box for console interaction IMO.
I hadn't thought of that. Thanks for bringing it up.
=== Graphics ===
The workstation session will switch to using a Wayland compositor as soon as feasible. Until then, it will be based on X11. Even after the switch, an X server will be included, so applications can either connect to Wayland natively, or run as an X client.
Not applicable
See above.
Ack
=== Media support ===
Sound hardware and audio streams will be managed by pulseaudio. Applications are recommended to use the [http://gstreamer.freedesktop.org/documentation/ gstreamer] framework for media playback.
Not applicable
There are server-side media streamers, DLNA, etc. We can defer taking any action, but it is incorrect to say that a server OS has nothing to do with media support.
Well, I'm not sure that fits in terms of our Fedora Server platform at least. We might want to explicitly state it as a non-goal though.
=== Appearance ===
The workstation will ship with a single theme, which will have support for the included toolkits: gtk3, qt and gtk2. Applications are expected to work well with this theme, as well as with the high-contrast theme that is used for accessibility. The theme will include a dark variant that applications can opt into using (this is most suitable for certain content-focused applications). The theme also includes an icon theme that provides named icons according to the icon-naming spec, plus symbolic variants.
We will be using the Adwaita theme, with a yet-to-be-written qt variant.
As for "appearance", my view is that Cockpit should be the official "face" of the Fedora Server. Opinions welcome :)
Should we say something about how the shell is configured, defaults, bash-completion, vim-enhanced/emacs/other plugins ?
Perhaps... I'm not sure if we need to go to that particular level of detail, but if you want to write it up, I'm sure no one will argue :)
On Tue, 2014-02-25 at 16:47 -0500, Simo Sorce wrote:
On Tue, 2014-02-25 at 15:42 -0500, Stephen Gallagher wrote:
I would extend this statement to include that the deployment of Server Roles should also adjust the firewall operation in a manner consistent with user expectation.
Are we going to use something like firewalld or something else ?
Just want to ask this question again, with an additional one. What does firewalld give us that iptables doesn't in a server environment? Should we default to iptables instead? Are there other alternatives we should consider?
Jonathan
On 02/26/2014 03:03 AM, Jonathan Dieter wrote:
On Tue, 2014-02-25 at 16:47 -0500, Simo Sorce wrote:
On Tue, 2014-02-25 at 15:42 -0500, Stephen Gallagher wrote:
I would extend this statement to include that the deployment of Server Roles should also adjust the firewall operation in a manner consistent with user expectation.
Are we going to use something like firewalld or something else ?
Just want to ask this question again, with an additional one. What does firewalld give us that iptables doesn't in a server environment? Should we default to iptables instead? Are there other alternatives we should consider?
About two years ago now, some members of product management did a customer tour to see how people are actually using RHEL and Fedora in production environments.
The overwhelming majority of real-world deployments disable the Linux kernel firewall (iptables) entirely and rely exclusively on perimeter security. The reason they cited was that iptables is nearly impossible to manage centrally, primarily because the only interfaces for manipulating iptables are highly complex command-line tools that have to be executed in a shell.
The main advantage we get from firewalld is that it provides a public D-Bus interface that central management tools (such as Puppet) can use to apply a complete set of rules in one go, as opposed to the necessarily procedural approach we currently face: reading the current state, parsing it, determining which changes need to be made, and then performing the diff, all manually and racily.
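To make the D-Bus point concrete, the interface can be poked directly from the shell; the bus name, object path, and method below are those shipped by firewalld:

```shell
# Query firewalld's default zone over the system bus.
dbus-send --system --print-reply \
    --dest=org.fedoraproject.FirewallD1 \
    /org/fedoraproject/FirewallD1 \
    org.fedoraproject.FirewallD1.getDefaultZone
```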
On Wed, 2014-02-26 at 08:24 -0500, Stephen Gallagher wrote:
On 02/26/2014 03:03 AM, Jonathan Dieter wrote:
On Tue, 2014-02-25 at 16:47 -0500, Simo Sorce wrote:
On Tue, 2014-02-25 at 15:42 -0500, Stephen Gallagher wrote:
I would extend this statement to include that the deployment of Server Roles should also adjust the firewall operation in a manner consistent with user expectation.
Are we going to use something like firewalld or something else ?
Just want to ask this question again, with an additional one. What does firewalld give us that iptables doesn't in a server environment? Should we default to iptables instead? Are there other alternatives we should consider?
About two years ago now, some members of product management did a customer tour to see how people are actually using RHEL and Fedora in production environments.
The overwhelming majority of real-world deployments disable the Linux kernel firewall (iptables) entirely and rely exclusively on perimeter security. The reason they cited was that iptables is nearly impossible to manage centrally, primarily because the only interfaces for manipulating iptables are highly complex command-line tools that have to be executed in a shell.
The main advantage we get from firewalld is that it provides a public D-Bus interface that central management tools (such as Puppet) can use to apply a complete set of rules in one go, as opposed to the necessarily procedural approach we currently face: reading the current state, parsing it, determining which changes need to be made, and then performing the diff, all manually and racily.
Ok, that makes sense.
Thanks, Jonathan
On 02/26/2014 03:03 AM, Jonathan Dieter wrote:
On Tue, 2014-02-25 at 16:47 -0500, Simo Sorce wrote:
On Tue, 2014-02-25 at 15:42 -0500, Stephen Gallagher wrote:
I would extend this statement to include that the deployment of Server Roles should also adjust the firewall operation in a manner consistent with user expectation.
Are we going to use something like firewalld or something else ?
The function by which a port is opened on the firewall should be a consistent API regardless of the firewall back end, and it should stay consistent for a long period. Mike
On Wed, Feb 26, 2014 at 08:24:18 -0500, Stephen Gallagher sgallagh@redhat.com wrote:
The main advantage that we get from firewalld is that it is providing a public D-BUS interface that we can use to connect central management tools (such as puppet) to apply a complete set of rules in one go (as opposed to the necessarily procedural approach we are currently faced with, which is reading the current state, parsing it, determining which changes need to be made and then performing the diff... all manually and racy)
You can also update iptables rules in a non-racy way if you are willing to drop packets (which can happen in networks anyway, so most things should be able to cope).
nftables provides atomic updates: http://wiki.nftables.org/wiki-nftables/index.php/Atomic_rule_replacement So that might be another option.
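For illustration, atomic replacement with nftables looks like this (the ruleset contents and file path are an example): the whole file is applied in one transaction, so the old and new rules are swapped without any intermediate state.

```shell
cat > ruleset.nft <<'EOF'
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif lo accept
        tcp dport { 22, 443 } accept
    }
}
EOF
nft -f ruleset.nft   # atomic: either the full new ruleset or the old one
```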
On Wed, Feb 26, 2014 at 5:24 AM, Stephen Gallagher sgallagh@redhat.com wrote:
The main advantage that we get from firewalld is that it is providing a public D-BUS interface that we can use to connect central management tools (such as puppet) to apply a complete set of rules in one go (as opposed to the necessarily procedural approach we are currently faced with, which is reading the current state, parsing it, determining which changes need to be made and then performing the diff... all
We manage iptables by using an iptables.d directory, dropping rules into it, and then using the summation of those rules on firewall configuration reloads.
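A minimal sketch of that approach (the directory name and file suffix are whatever convention you pick; shown here as an assumption):

```shell
# Assemble a full ruleset from drop-in fragments and load it in one shot
# with iptables-restore, which replaces each table in a single operation.
RULES_DIR=${RULES_DIR:-/etc/iptables.d}

assemble_rules() {
    # Concatenate in lexical order, so numeric prefixes
    # (10-base.rules, 50-httpd.rules, ...) control the ordering.
    cat "$1"/*.rules
}

# Only apply when the drop-in directory and the tool actually exist.
if [ -d "$RULES_DIR" ] && command -v iptables-restore >/dev/null 2>&1; then
    assemble_rules "$RULES_DIR" | iptables-restore
fi
```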
On Wed, 2014-02-26 at 10:03 +0200, Jonathan Dieter wrote:
On Tue, 2014-02-25 at 16:47 -0500, Simo Sorce wrote:
On Tue, 2014-02-25 at 15:42 -0500, Stephen Gallagher wrote:
I would extend this statement to include that the deployment of Server Roles should also adjust the firewall operation in a manner consistent with user expectation.
Are we going to use something like firewalld or something else ?
Just want to ask this question again, with an additional one. What does firewalld give us that iptables doesn't in a server environment? Should we default to iptables instead? Are there other alternatives we should consider?
To be honest my question is more about: what is the point of doing this ?
Do we have applications that we do not trust and open unwanted ports ? If we do not trust them why do we install them ? If we trust them why do we firewall them ?
Considering that the default policy on Fedora is to not start daemons automatically, I am trying to understand why having a firewall configured by default is a good idea.
Note that I am not saying it is not, but it seems like one of those security dogmas that has gone on without much formalizing of the actual reasons why having a local firewall installed makes sense.
Keep in mind that I make an absolute distinction between local firewall and perimeter firewall, the latter is about not trusting all machines in a network to be configured correctly or according to an organization policy which is a completely different use case from a local firewall.
Simo.
Am 26.02.2014 18:39, schrieb Simo Sorce:
Do we have applications that we do not trust and open unwanted ports? If we do not trust them why do we install them?
"we" as Fedora are not in the game of deciding what a user wants to be reachable, because we do not know the exact environment
If we trust them why do we firewall them ?
because when in doubt you *have* open ports and do not know which network the end user is connected to
Considering that the default policy on Fedora is to not start daemons automatically, I am trying to understand why having a firewall configured by default is a good idea.
to prevent what happened in the yum upgrade to F19
* samba pulls cups-libs -> cups-libs pulls avahi-libs -> avahi-libs pulls the avahi daemon
* the avahi daemon is enabled at install, and it was not installed before the upgrade
* voila, you have a listening service pulled in by careless packaging
Note that I am not saying it is not, but it seem one of those Security Dogma that has gone on w/o much formalizing the actual reasons why it makes sense to have a local firewall installed.
Nothing dogmatic has gone on here: in the case above, without iptables, avahi would have been accessible from the WAN on 4 machines. Frankly, you can have samba installed but not enabled for a lot of reasons, and then you are in exactly the situation described above.
Keep in mind that I make an absolute distinction between local firewall and perimeter firewall, the latter is about not trusting all machines in a network to be configured correctly or according to an organization policy which is a completely different use case from a local firewall
* there are also servers with public WAN connections
* security is *always* defense in depth; nothing will ever change that
In the case of a sane network you have as many barriers as possible, because every one of them could fail by mistake - if it was the only barrier, you have a problem; if you have defense in depth, you have a time window in which to realize a mistake on whatever layer.
On Wed, Feb 26, 2014 at 12:39:31PM -0500, Simo Sorce wrote:
Considering that the default policy on Fedora is to not start daemons automatically, I am trying to understand why having a firewall configured by default is a good idea.
It is required by network policy at at least the two large universities where I worked. Now, whether it provides defense-in-depth or just a checkbox item is another issue, but it's nice to have Fedora default to being compliant with typical requirements.
On 26 February 2014 12:31, Matthew Miller mattdm@fedoraproject.org wrote:
On Wed, Feb 26, 2014 at 12:39:31PM -0500, Simo Sorce wrote:
Considering that the default policy on Fedora is to not start daemons automatically, I am trying to understand why having a firewall configured by default is a good idea.
It is required by network policy at at least the two large universities where I worked. Now, whether it provides defense-in-depth or just a checkbox item is another issue, but it's nice to have Fedora default to being compliant with typical requirements.
And pretty much every .gov and .mil site I know of, and quite a few .com sites. Firewalls-by-default are so deeply embedded in various configuration management requirements that you could spend years trying to undo them, and you would still come out needing a firewall, and antivirus as well.
As Reindl pointed out, we can't guarantee we have no services on by default. All it takes is the law of unintended consequences before or after RC1, and you have something no one notices until after a release (or it is considered by the release group not to be a problem, because we have a firewall already).
On Wed, 26 Feb 2014, Simo Sorce wrote:
If we trust them why do we firewall them ?
The lure of the Sirens, and parallel construction in language, leads one astray here. A security model may specify running proxies, or firewalling the trusted interior from the untrusted exterior network, because we do not and cannot know about unknown vulnerabilities.
-- Russ herrold
2014-02-26 18:39 GMT+01:00 Simo Sorce simo@redhat.com:
To be honest my question is more about: what is the point of doing this ?
Do we have applications that we do not trust and open unwanted ports ? If we do not trust them why do we install them ? If we trust them why do we firewall them ?
Considering that the default policy on Fedora is to not start daemons automatically, I am trying to understand why having a firewall configured by default is a good idea.
AFAICS there are basically these possible ways to answer the question, each valid in some situations:
0) The computer is a router, and applying policy on traffic is its specific job. (This is clearly a special situation that doesn't affect the question of default setup.)
1) The computer is assumed to be competently administered[1] on a homogenous network. This implies that any service running with an open port is intended to run and have that port open, so there is no point in restricting it with a firewall. There is obviously no point in restricting closed ports with a firewall. With this assumption, the firewall should be either completely absent or permitting almost all traffic (or perhaps enforcing some kind of minimal policy, filtering out clearly bogus packets) by default.
2) The computer is assumed to be administered by people who make mistakes from time to time; in such a situation having a firewall by default serves as an extra step that "nudges" the administrators to revisit their assumptions and intentions: "Now that your httpd/database is running, did you really want it accessible by the net, or only by localhost?" With this assumption, a firewall should be present, and blocking incoming connections by default; it makes sense to make it fairly easy to enable access to a service after setting it up.
3) The computer is assumed to be running on a non-homogenous network, e.g. providing some services to an internal network and fewer services to a public-facing network. It is unsafe, or at least risky, to expose internal-only services on the public network. With this assumption, a firewall should be present and blocking incoming connections by default; the system shouldn't enable access to a service after setting it up, and should leave this to manual administrator action (unless the system understands precisely how the network is non-homogenous).
4) The computer is assumed to be already compromised, or highly likely to be compromised. In that case, a firewall blocking incoming connections by default would stop *some* communication, but it makes *almost no difference*: Instead of opening a port and waiting for an incoming connection, the attacker can make an outgoing connection, which is not restricted by the usual firewall setup. Because most home routers doing NAT implicitly act as firewalls that block incoming connections, it would be almost *unexpected* if the attacker tried to open a listening port nowadays. Overall, I don't think the firewall is effective in this scenario in most cases, so this scenario doesn't matter in the discussion.[2]
Note that apart from 4), the assumptions are about the *computer*, not the *individual services*, and are mutually exclusive; it doesn't make sense to treat one set of ports differently from another. This is the reason for my objections to the suggestion that the default setup of the firewall should differ between roles. Mirek
[1] ... and competently designed; let's assume that's true for Fedora Server :)
[2] One thing to consider, and other OSes have been moving in that direction, is to have a firewall that doesn't block *ports* but blocks *executables*. This makes the attacker's work somewhat harder, in that they couldn't just use a standardized shell code to download a new binary and execute it; they would have to continue manipulating an existing process to make the connections on their behalf. In a sense, we already have this capability with SELinux, and it's unclear how much difference it makes - would this only cause the attacker to add a socket() call to the shell code, and leave the rest of the activity to a subprocess, for example?
On Mon, 2014-03-03 at 19:08 +0100, Miloslav Trmač wrote:
- The computer is assumed to be competently administered[1] on a
homogenous network. This implies that any service running with an open port is intended to run and have that port open, so there is no point in restricting it with a firewall. There is obviously no point in restricting closed ports with a firewall. With this assumption, the firewall should be either completely absent or permitting almost all traffic (or perhaps enforcing some kind of minimal policy, filtering out clearly bogus packets) by default.
I think that you badly characterize this case (and perhaps 2 too).
What I think you fail to address is the case where the administrator is competent but the *users* of the system may not be.
In this case services configured and run by the administrator should poke holes, but in general other ports should be firewalled because users may inadvertently run services that open ports w/o realizing it.
This is the case where a firewall makes sense as a default installation, even though roles are allowed to automatically poke holes at configuration time.
Simo.
2014-03-03 23:13 GMT+01:00 Simo Sorce simo@redhat.com:
On Mon, 2014-03-03 at 19:08 +0100, Miloslav Trmač wrote:
- The computer is assumed to be competently administered[1] on a
homogenous network. This implies that any service running with an open port is intended to run and have that port open, so there is no point in restricting it with a firewall. There is obviously no point in restricting closed ports with a firewall. With this assumption, the firewall should be either completely absent or permitting almost all traffic (or perhaps enforcing some kind of minimal policy, filtering out clearly bogus packets) by default.
I think that you badly characterize this case (and perhaps 2 too).
What I think you fail to address is the case where the administrator is competent but the *users* of the system may not be.
In this case services configured and run by the administrator should poke holes, but in general other ports should be firewalled because users may inadvertently run services that open ports w/o realizing it.
This is the case where a firewall makes sense as a default installation, even though roles are allowed to automatically poke holes at configuration time.
Even in such a case it would not make sense for the *role* to decide whether to poke holes for itself: either the system roles are assumed to be competently administered or not, and in both cases all roles on the system should be treated the same.
And I don't think the case you describe is frequent in the first place. It requires:
- A multi-user UNIX system (in itself becoming rare, everyone has a powerful personal computer, and remoting the GUI loses features; you would have a git server or a file server or a VPN server, but not so much a general shell server)
- A multi-user UNIX system that *also* provides other roles at the same time (tightly coupling things that probably shouldn't be coupled, using separate VMs would result in more flexibility)
- A system that is reachable from an environment that is more hostile than the users (e.g. with a public IP address); again note that the firewall setups we are talking about don't affect outgoing connections, only incoming connections matter.
So, a multi-user server with a public IP address: yes, there is one specific and frequent kind of them - web hosting servers; but those are also not something that we can really support as a default setup, and longer-term I'd expect them to go the OpenShift way, giving each user a separate container with a separate network namespace. Other than web hosting, are public multi-user servers with the ability for users to run arbitrary code really that frequent? Mirek
On 04.03.2014 13:40, Miloslav Trmač wrote:
Even in such a case it would not make sense for the /role/ to decide whether to poke holes for itself: either the system roles are assumed to be competently administered or not, and in both cases all roles on the system should be treated the same.
And I don't think the case you describe is frequent in the first place. It requires:
- A multi-user UNIX system (in itself becoming rare, everyone has a powerful personal computer, and remoting the GUI loses features; you would have a git server or a file server or a VPN server, but not so much a general shell server)
- A multi-user UNIX system that /also/ provides other roles at the same time (tightly coupling things that probably shouldn't be coupled, using separate VMs would result in more flexibility)
- A system that is reachable from an environment that is more hostile than the users (e.g. with a public IP address); again note that the firewall setups we are talking about don't affect outgoing connections, only incoming connections matter.
So, a multi-user server with a public IP address: yes, there is one specific and frequent kind of them - web hosting servers; but those are also not something that we can really support as a default setup, and longer-term I'd expect them to go the OpenShift way, giving each user a separate container with a separate network namespace. Other than web hosting, are public multi-user servers with the ability for users to run arbitrary code really that frequent? Mirek
first: shipping any OS with shields down and no packet filter blocking incoming traffic is a flawed design and unacceptable (no matter if it is a server or a workstation)
second: you never know in which way a port gets opened, or whether it is because of packaging bugs, as I showed with avahi being pulled in for no reason - nobody can guarantee that similar things will not happen at some future point in time
third: even if you install a service or web application and enable it, that does not mean it should be reachable from the web at the same time - in most cases the opposite is true, because after the first start you typically configure and test whatever you have installed on localhost; otherwise the admin should not be doing an IT job
fourth: "to run arbitrary code really that frequent" - define that; any scripting language with commands like exec(), system() and so on is in danger of running code if it is not perfectly secured, which a default installation cannot guarantee
so there is still a difference between a badly written script executing code and opening an unprivileged port that is immediately reachable from the WAN, and the same port being blocked by a packet filter, giving the admin a time window to realize something is going wrong before more damage happens
and so finally: now, in the past, and in any future, you have to block any incoming connection in whatever operating system by default, or nobody with security knowledge will install that "product", because he is aware of the wrong security attitude of its creators and knows it is not worth the time for a second look
2014-03-04 13:51 GMT+01:00 Reindl Harald h.reindl@thelounge.net:
second:
third:
(lots snipped).
Those are cases I have also numbered 2) and 3) earlier in the thread :) I agree that a firewall by default, for all services, makes sense in these situations.
The email you are quoting is my objections to having the default *differ between roles*. I think that our cases 2) and 3) either apply across the whole computer, or not at all; so having the firewall allow access to some roles by default and not to others doesn't make sense.
fourth:
"to run arbitrary code really that frequent" - define that; any scripting language with commands like exec(), system() and so on is in danger of running code if it is not perfectly secured, which a default installation cannot guarantee
I think that's addressed by my case 4): a firewall that only blocks incoming connections is not all that useful in this situation.
and so finally: now, in the past, and in any future, you have to block any incoming connection in whatever operating system by default, or nobody with security knowledge will install that "product", because he is aware of the wrong security attitude of its creators and knows it is not worth the time for a second look
I'm not at all insisting on having no firewall by default, but my interest in appeasing cargo-culting requests like "you must run a firewall and antivirus and antispyware" is really limited. Not exactly zero, but fairly close to zero. *Why* do we need to block incoming connections? If we have a reason, are we actually deploying the firewall in a way that does handle that reason?
I see having a firewall running by default, but punching holes in it by default, without explicit user involvement, as such a case: the underlying reason to have a firewall seems to be defeated by the way the firewall is being used. Mirek
On Tue, 2014-03-04 at 14:07 +0100, Miloslav Trmač wrote:
I see having a firewall running by default, but punching holes in it by default, without explicit user involvement, as such a case: the underlying reason to have a firewall seems to be defeated by the way the firewall is being used.
Here lies the error of your reasoning.
Roles do not do anything without *explicit user involvement*. You actually have to install *and* set up a role on your system to poke any hole. And not poking holes for some roles makes no sense, because the role can only be used (in the common case) if it is reachable from the network, and if it is unreachable it does not work.
One of the assumptions for roles is that we want to have them working as intended once the setup is complete.
Roles that are not clear-cut cases, as said in the last meeting, will offer a way for the admin to say what to do; however, their default will depend on what we think is the best default.
For example I think the best default for the domain controller role will be to open the firewall, while the best default for the database role will be to keep it closed.
The point is: roles should provide firewall rules and apply the appropriate default, however admins should be able to override the default at setup time.
Simo.
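Simo's default-with-override model might be sketched roughly like this; the role names, port lists, and function signature are hypothetical illustrations only, not the actual Fedora Server role API:

```python
# Hypothetical sketch: per-role firewall defaults that an admin can
# override at deploy time. Role names and port lists here are
# illustrative assumptions, not the real role definitions.

ROLE_FIREWALL_DEFAULTS = {
    # role name -> (ports the role needs, open them by default?)
    "domain-controller": (["88/tcp", "389/tcp", "464/tcp", "636/tcp"], True),
    "database": (["5432/tcp"], False),
}

def firewall_plan(role, override=None):
    """Return (ports, open_now) for a role.

    override=None  -> use the role's own default
    override=True  -> admin explicitly asked to open the ports
    override=False -> admin explicitly asked to keep them closed
    """
    ports, default_open = ROLE_FIREWALL_DEFAULTS[role]
    open_now = default_open if override is None else override
    return ports, open_now
```

So `firewall_plan("database")` keeps the port closed (the role default), while `firewall_plan("database", override=True)` reflects an explicit admin decision at setup time.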
2014-03-04 21:31 GMT+01:00 Simo Sorce simo@redhat.com:
On Tue, 2014-03-04 at 14:07 +0100, Miloslav Trmač wrote:
I see having a firewall running by default, but punching holes in it by default, without explicit user involvement, as such a case: the underlying reason to have a firewall seems to be defeated by the way the firewall is being used.
Here lies the error of your reasoning.
Roles do not do anything without *explicit user involvement*. You actually have to install *and* setup a role on your system to poke any hole.
OK, so let's clarify how explicit the involvement is. When the user runs (fedora-role-deploy $rolename), will this
- Always punch a hole in the firewall, because running "fedora-role-deploy" was an explicit action?
- Ask the user and act on the answer, which was an explicit action?
- Not ask the user, but do what the role thinks is appropriate, because deploying a role was an explicit action?
My primary objection is to the last option: if the user can't predict the effect of their command, I don't think the command was an "explicit" user action. It's just unpredictable unless you read all the documentation, which many people don't.
And not poking holes for some roles makes no sense, because the role can
only be used (in the common case) if it is reachable from the network, and if it is unreachable it does not work.
Yes.
One of the assumptions for roles is that we want to have them working as
intended once the setup is complete.
Yes.
For example I think the best default for the domain controller role will
be to open the firewall, while the best default for the database role will be to keep it closed.
That may be *individually* true, but the user gets differing behavior without having clearly acknowledged or caused such a difference; put together into a single product, that doesn't give the user sufficient visibility. Mirek
On Wed, 2014-03-05 at 15:35 +0100, Miloslav Trmač wrote:
2014-03-04 21:31 GMT+01:00 Simo Sorce simo@redhat.com:
On Tue, 2014-03-04 at 14:07 +0100, Miloslav Trmač wrote:
I see having a firewall running by default, but punching holes in it by default, without explicit user involvement, as such a case: the underlying reason to have a firewall seems to be defeated by the way the firewall is being used.
Here lies the error of your reasoning.
Roles do not do anything without *explicit user involvement*. You actually have to install *and* setup a role on your system to poke any hole.
OK, so let's clarify how explicit the involvement is. When the user runs (fedora-role-deploy $rolename), will this
- Always punch a hole in the firewall, because running "fedora-role-deploy" was an explicit action?
- Ask the user and act on the answer, which was an explicit action?
- Not ask the user, but do what the role thinks is appropriate, because deploying a role was an explicit action?
One of the three, depending on the role; they are all on the table, because different roles have different properties and defaults.
My primary objection is to the last option: if the user can't predict the effect of their command, I don't think the command was an "explicit" user action. It's just unpredictable unless you read all the documentation, which many people don't.
I do not understand what you mean. If you deploy a domain controller, do you expect it to be in full working condition when you are done configuring it through the role deploy command? I do very much expect so, and to fulfill that expectation the role deployment script needs to open the appropriate KRB/LDAP/etc. ports, or the role is simply non-functional.
Why wouldn't the admin expect that ?
Note that we currently do not do that in the ipa-server-install script, because we didn't have a simple API to call in the past, and it is one of the rough edges - so much so that when the script ends we have to print a long blurb to tell the admin he has to open all these ports. That sucks, and very often admins simply turn off the firewall instead of poking only the necessary holes as a result.
And not poking holes for some roles makes no sense, because the role can
only be used (in the common case) if it is reachable from the network, and if it is unreachable it does not work.
Yes.
One of the assumptions for roles is that we want to have them working as
intended once the setup is complete.
Yes.
For example I think the best default for the domain controller role will
be to open the firewall, while the best default for the database role will be to keep it closed.
That may be *individually* true, but the user gets differing behavior without having clearly acknowledged or caused such a difference; put together into a single product, that doesn't give the user sufficient visibility.
I do not understand your point here. Different roles have different characteristics. Roles that do not open ports by default can warn the user that they did not do so and tell them which role command to use to open the default ports for the role.
Simo.
2014-03-05 20:09 GMT+01:00 Simo Sorce simo@redhat.com:
On Wed, 2014-03-05 at 15:35 +0100, Miloslav Trmač wrote:
I do not understand what you mean. If you deploy a domain controller do you expect it to be in full working condition when you are done configuring it through the role deploy command ?
If, say, one week ago I used exactly the same command to deploy a database, and I still remember that I had to manually modify the firewall, why wouldn't I expect the same command to do the same thing for the domain controller? I'd like the OS commands to behave consistently so that I don't have to remember too much.
For example I think the best default for the domain controller role will
be to open the firewall, while the best default for the database role will be to keep it closed.
That may be *individually* true, but the user gets differing behavior without having clearly acknowledged or caused such a difference, put together into a single product doesn't give the user sufficient
visibility.
I do not understand your point here. Different roles have different characteristics. Roles that do not open ports by default can warn the user that they did not do so and tell them what is the role-command to use to open the default ports for the role.
OK, informing the user[1] would resolve my concern. I'd still prefer the behavior to be consistent, but this would work. Mirek
[1] It seems to me that the user needs to be informed *both* when the service needs the user to modify the firewall, and when it has modified the firewall without asking, but that's really a UI design question that I shouldn't be bikeshedding, I suppose.
On Thu, 2014-03-06 at 21:39 +0100, Miloslav Trmač wrote:
2014-03-05 20:09 GMT+01:00 Simo Sorce simo@redhat.com:
On Wed, 2014-03-05 at 15:35 +0100, Miloslav Trmač wrote:
I do not understand what you mean. If you deploy a domain controller do you expect it to be in full working condition when you are done configuring it through the role deploy command ?
If, say, one week ago I have used exactly the same command to deploy a database, and I still remember that I had to manually modify the firewall, why wouldn't I expect the same command to do the same thing about the domain controller? I'd like the OS commands to behave consistently so that I don't have to remember too much.
Sorry I do not understand what you are saying here.
What the role tells you is which ports it cares about; the command I suspect will always be the same. Something like: role <rolename> open-firewall-ports
For example I think the best default for the domain controller role will
be to open the firewall, while the best default for the database role will be to keep it closed.
That may be *individually* true, but the user gets differing behavior without having clearly acknowledged or caused such a difference, put together into a single product doesn't give the user sufficient
visibility.
I do not understand your point here. Different roles have different characteristics. Roles that do not open ports by default can warn the user that they did not do so and tell them what is the role-command to use to open the default ports for the role.
OK, informing the user[1] would resolve my concern. I'd still prefer the behavior to be consistent, but this would work. Mirek
[1] It seems to me that the user needs to be informed *both* when the service needs the user to modify the firewall, and when it has modified the firewall without asking, but that's really a UI design question that I shouldn't be bikeshedding, I suppose.
Let's not bikeshed then, can we assume that you agree that it is ok for roles to have different defaults (your caveats applied) ?
Simo.
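The per-role command Simo sketches above ("role <rolename> open-firewall-ports") could look something like the following; the port tables are assumptions for illustration, and a real implementation would more likely talk to firewalld's D-Bus API than shell out to firewall-cmd:

```python
# Hypothetical sketch of a "role <name> open-firewall-ports" subcommand.
# The per-role port lists are assumptions; the firewall-cmd invocations
# are standard firewalld usage (--permanent so the holes survive a
# reload, then --reload to apply).
import subprocess

ROLE_PORTS = {
    "freeipa": ["88/tcp", "389/tcp", "464/tcp", "636/tcp"],
    "postgresql": ["5432/tcp"],
}

def open_firewall_ports(role, dry_run=False):
    """Build (and optionally run) the firewall-cmd calls for a role."""
    cmds = [["firewall-cmd", "--permanent", f"--add-port={port}"]
            for port in ROLE_PORTS[role]]
    cmds.append(["firewall-cmd", "--reload"])
    if not dry_run:
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds
```

With `dry_run=True` the function only reports what it would do, which is roughly the "tell the admin which ports the role cares about" behavior discussed in the thread.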
2014-03-06 22:03 GMT+01:00 Simo Sorce simo@redhat.com:
On Thu, 2014-03-06 at 21:39 +0100, Miloslav Trmač wrote:
2014-03-05 20:09 GMT+01:00 Simo Sorce simo@redhat.com:
On Wed, 2014-03-05 at 15:35 +0100, Miloslav Trmač wrote:
I do not understand what you mean. If you deploy a domain controller do you expect it to be in full working condition when you are done configuring it through the role deploy command ?
If, say, one week ago I have used exactly the same command to deploy a database, and I still remember that I had to manually modify the
firewall,
why wouldn't I expect the same command to do the same thing about the domain controller? I'd like the OS commands to behave consistently so
that
I don't have to remember too much.
Sorry I do not understand what you are saying here.
$ fedora-role-deploy postgresql
# Huh, it is refusing connections?
# Ah, firewall...
$ fedora-role-deploy --open-firewall-ports postgresql
# That's how it is done in Fedora, then. Good to know.
# Time passes...
$ fedora-role-deploy freeipa
# Huh, this is already accessible?
OK, informing the user[1] would resolve my concern. I'd still prefer the behavior to be consistent, but this would work.
<snip>
Let's not bikeshed then, can we assume that you agree that it is ok for roles to have different defaults (your caveats applied) ?
Yes. Mirek
On 06.03.2014 22:13, Miloslav Trmač wrote:
2014-03-06 22:03 GMT+01:00 Simo Sorce <simo@redhat.com>: Sorry I do not understand what you are saying here.
$ fedora-role-deploy postgresql
# Huh, it is refusing connections?
# Ah, firewall...
$ fedora-role-deploy --open-firewall-ports postgresql
# That's how it is done in Fedora, then. Good to know.
right direction
# Time passes...
$ fedora-role-deploy freeipa
# Huh, this is already accessible?
that must not happen
* not from usability point of view
* not from security point of view - *no* open ports *never ever* as default
On 03/06/2014 04:28 PM, Reindl Harald wrote:
On 06.03.2014 22:13, Miloslav Trmač wrote:
2014-03-06 22:03 GMT+01:00 Simo Sorce <simo@redhat.com>: Sorry I do not understand what you are saying here.
$ fedora-role-deploy postgresql
# Huh, it is refusing connections?
# Ah, firewall...
$ fedora-role-deploy --open-firewall-ports postgresql
# That's how it is done in Fedora, then. Good to know.
right direction
# Time passes...
$ fedora-role-deploy freeipa
# Huh, this is already accessible?
that must not happen
* not from usability point of view
* not from security point of view - *no* open ports *never ever* as default
The debate here is where you draw the line as to "what is default". Deploying a role is *NOT* the same as just installing a package. For package installs, I absolutely agree that we should never be poking holes in the firewall.
I can see Simo's point that when you tell a machine "You're a domain controller now!" it's awkward for it not to be immediately available, while the same is not necessarily true of a database (which might only want local access).
So I have no problems at all with Miloslav's suggestion that we just require an additional argument (which will have to be translated to the API layer in a sensible way) as part of the configuration.
It probably does hit that fine line between usable and secure reasonably well.
Of course, the question becomes one of granularity: I doubt that --open-firewall-ports is necessarily sufficient. In the case of multi-homed servers, you still may want to have the service visible only on a subset of interfaces. I'd suggest --open-firewall-ports[=iface1,...] as a reasonable compromise (and again translated acceptably into the Role config API).
And finally, the config API must also be capable of changing the set of open interfaces (such as when local testing has passed and the admin now wants to expose the services publicly).
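Stephen's proposed --open-firewall-ports[=iface1,...] syntax could be parsed along these lines; this is a minimal sketch, assuming (as the proposal implies) that the bare flag means "all interfaces" while a value restricts the holes to the listed interfaces:

```python
# Hypothetical parser for --open-firewall-ports[=iface1,...].
# The option name comes from the thread; everything else here
# (return values, behavior of the bare flag) is an assumption.

def parse_open_firewall_ports(argv):
    """Return None (flag absent), "all", or a list of interface names."""
    for arg in argv:
        if arg == "--open-firewall-ports":
            return "all"          # bare flag: open on every interface
        if arg.startswith("--open-firewall-ports="):
            return arg.split("=", 1)[1].split(",")
    return None                   # flag absent: leave the firewall alone
```

The three return shapes map directly onto the three outcomes debated here: don't touch the firewall, open everywhere, or open only on chosen interfaces.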
On 06.03.2014 22:43, Stephen Gallagher wrote:
On 03/06/2014 04:28 PM, Reindl Harald wrote:
On 06.03.2014 22:13, Miloslav Trmač wrote:
2014-03-06 22:03 GMT+01:00 Simo Sorce <simo@redhat.com>: Sorry I do not understand what you are saying here.
$ fedora-role-deploy postgresql
# Huh, it is refusing connections?
# Ah, firewall...
$ fedora-role-deploy --open-firewall-ports postgresql
# That's how it is done in Fedora, then. Good to know.
right direction
# Time passes...
$ fedora-role-deploy freeipa
# Huh, this is already accessible?
that must not happen
* not from usability point of view
* not from security point of view - *no* open ports *never ever* as default
The debate here is where you draw the line as to "what is default". Deploying a role is *NOT* the same as just installing a package. For package installs, I absolutely agree that we should never be poking holes in the firewall.
I draw the line *strictly*
if I deploy whatever role, nobody but me is responsible for opening firewall ports, because nobody but me can know whether it is sane to do so, or what I have planned after the deployment before going into production
frankly, nobody but me knows for what usage the role is intended - inside the LAN, for specific IPs in the LAN, or even the whole world - and since nobody but me can know that, nobody but me has to open ports
opening firewall ports is always the last step before going into production
there should be no ifs and buts, because that is what Windows does and that's why I am using Linux
recently seen on a Win2008R2 acting as vCenter server:
* install VMware packages -> ports in the firewall are opened
* well, I closed them *all* except for two single LAN IPs
* months later -> update of whatever package
* followed by the monthly security scan inside the LAN
* one check is whether the complete vCenter server is *unreachable*
* voila, a few ports opened again
no, I do not want such misbehavior on any system I would call sane
On 6 March 2014 14:54, Reindl Harald h.reindl@thelounge.net wrote:
On 06.03.2014 22:43, Stephen Gallagher wrote:
On 03/06/2014 04:28 PM, Reindl Harald wrote:
On 06.03.2014 22:13, Miloslav Trmač wrote:
2014-03-06 22:03 GMT+01:00 Simo Sorce <simo@redhat.com>: Sorry I do not understand what you are saying here.
$ fedora-role-deploy postgresql
# Huh, it is refusing connections?
# Ah, firewall...
$ fedora-role-deploy --open-firewall-ports postgresql
# That's how it is done in Fedora, then. Good to know.
right direction
# Time passes...
$ fedora-role-deploy freeipa
# Huh, this is already accessible?
that must not happen
* not from usability point of view
* not from security point of view - *no* open ports *never ever* as default
The debate here is where you draw the line as to "what is default". Deploying a role is *NOT* the same as just installing a package. For package installs, I absolutely agree that we should never be poking holes in the firewall.
I draw the line *strictly*
if I deploy whatever role, nobody but me is responsible for opening firewall ports, because nobody but me can know whether it is sane to do so, or what I have planned after the deployment before going into production
Then in this case, you wouldn't want to use Roles in any form, as they aren't going to help you any. You aren't the target audience for them... trying to make you the target audience would only work in your environment and no one else's.
On 03/06/2014 05:06 PM, Stephen John Smoogen wrote:
On 6 March 2014 14:54, Reindl Harald <h.reindl@thelounge.net> wrote:
On 06.03.2014 22:43, Stephen Gallagher wrote:
On 03/06/2014 04:28 PM, Reindl Harald wrote:
On 06.03.2014 22:13, Miloslav Trmač wrote:
2014-03-06 22:03 GMT+01:00 Simo Sorce <simo@redhat.com>: Sorry I do not understand what you are saying here.
$ fedora-role-deploy postgresql
# Huh, it is refusing connections?
# Ah, firewall...
$ fedora-role-deploy --open-firewall-ports postgresql
# That's how it is done in Fedora, then. Good to know.
right direction
# Time passes...
$ fedora-role-deploy freeipa
# Huh, this is already accessible?
that must not happen
* not from usability point of view
* not from security point of view - *no* open ports *never ever* as default
The debate here is where you draw the line as to "what is default". Deploying a role is *NOT* the same as just installing a package. For package installs, I absolutely agree that we should never be poking holes in the firewall.
I draw the line *strictly*
if I deploy whatever role, nobody but me is responsible for opening firewall ports, because nobody but me can know whether it is sane to do so, or what I have planned after the deployment before going into production
Then in this case, you wouldn't want to use Roles in any form, as they aren't going to help you any. You aren't the target audience for them... trying to make you the target audience would only work in your environment and no one else's.
I don't think that's necessarily a fair statement. We fully intend for the firewall control on these Roles to be easy to turn off and on at will. Upgrades should never change that state[1]. I don't see any reason why, under those conditions, Roles couldn't work for Mr. Reindl.
[1] I think I can reasonably assert this without controversy.
On 6 March 2014 15:12, Stephen Gallagher sgallagh@redhat.com wrote:
On 03/06/2014 05:06 PM, Stephen John Smoogen wrote:
On 6 March 2014 14:54, Reindl Harald <h.reindl@thelounge.net> wrote:
On 06.03.2014 22:43, Stephen Gallagher wrote:
On 03/06/2014 04:28 PM, Reindl Harald wrote:
On 06.03.2014 22:13, Miloslav Trmač wrote:
2014-03-06 22:03 GMT+01:00 Simo Sorce <simo@redhat.com>: Sorry I do not understand what you are saying here.
$ fedora-role-deploy postgresql
# Huh, it is refusing connections?
# Ah, firewall...
$ fedora-role-deploy --open-firewall-ports postgresql
# That's how it is done in Fedora, then. Good to know.
right direction
# Time passes...
$ fedora-role-deploy freeipa
# Huh, this is already accessible?
that must not happen
* not from usability point of view
* not from security point of view - *no* open ports *never ever* as default
The debate here is where you draw the line as to "what is default". Deploying a role is *NOT* the same as just installing a package. For package installs, I absolutely agree that we should never be poking holes in the firewall.
I draw the line *strictly*
if I deploy whatever role, nobody but me is responsible for opening firewall ports, because nobody but me can know whether it is sane to do so, or what I have planned after the deployment before going into production
Then in this case, you wouldn't want to use Roles in any form, as they aren't going to help you any. You aren't the target audience for them... trying to make you the target audience would only work in your environment and no one else's.
I don't think that's necessarily a fair statement. We fully intend for the firewall control on these Roles to be easy to turn off and on at will. Upgrades should never change that state[1]. I don't see any reason why, under those conditions, Roles couldn't work for Mr. Reindl.
I didn't say that roles couldn't work, just that he isn't the target audience. From what I have read through the years, Harald has a very strict setup which he knows very well and which works well for what he needs done. However, doing any sort of configuration management outside of what he has in place is going to cause problems. They are ones that can be worked around, but you would need to make sure that the default of every role command is a no-op. Only after he had configured, edited, and audited the tasks would he want them to be anything else.
Note this isn't meant to be derogatory to H. Reindl, and if it comes across that way I am sorry. I have a lot of respect for people who work in such environments and realize that there is a LOT of need for it. I also know that if you are designing a product to meet those types of environments you need to know from the start that 1) nothing happens without express commands and 2) nothing is to be hard coded but configurable before a role is deployed. It usually means that where you could come up with a 'generic' 60% solution in 20 lines of code, you now need 4,000 lines of code to deal with all the alternatives and options that will come up.
On 06.03.2014 23:30, Stephen John Smoogen wrote:
I didn't say that roles couldn't work, just that he isn't the target audience. From what I have read through the years, Harald has a very strict setup which he knows very well and works well for what he needs done
but you do not realize why I care at all!
the intention is the others, who do not have that strict setup and are still learning how to deal with their OS, without dangerous defaults they may not realize soon enough
what I consider is "how should a Linux system work for me after the first setup, with the knowledge I had 15 years ago"
On 6 March 2014 15:36, Reindl Harald h.reindl@thelounge.net wrote:
On 06.03.2014 23:30, Stephen John Smoogen wrote:
I didn't say that roles couldn't work, just that he isn't the target audience. From what I have read through the years, Harald has a very strict setup which he knows very well and which works well for what he needs done
but you do not realize why I care at all!
the intention is the others, who do not have that strict setup and are still learning how to deal with their OS, without dangerous defaults they may not realize soon enough
what I consider is "how should a Linux system work for me after the first setup, with the knowledge I had 15 years ago"
My understanding was that the roles commands were items that the system administrator ran to set up a system to do a certain task and was set up to be done for the 60% of the environments which aren't going to play with defaults in any case. So these were my assumptions:
1) The systems administrator is running these commands.
2) The system administrator level being aimed for is more where they have a task to do and just want it to work without knowing all these things (e.g. the people who will install cpanel, webadmin, etc. without a thought). We just want that when they run those commands they get a working, secure default.
3) The goal is to get these systems up without the admin following the usual howto of
disable iptables
disable selinux
install package x
install tar-ball from http://reallygoodsite.com/
run cpanel
because they aren't reading anything deeper than that, since the problem they want to solve has nothing to do with all the packages they are currently installing. All they want is a web calendar, and it needs all this other stuff before they can get it running.
Since these assumptions seem to be wrong, I will bow out of this conversation.
On Thu, 2014-03-06 at 15:49 -0700, Stephen John Smoogen wrote:
My understanding was that the roles commands were items that the system administrator ran to set up a system to do a certain task and was set up to be done for the 60% of the environments which aren't going to play with defaults in any case.
Exactly, the idea of a role is to have a standard way to deploy some well defined services we classify as 'roles'. The aim is to have the roles fully functional once configured. The definition of 'fully functional' is role-specific of course.
So these were my assumptions:
1) The systems administrator is running these commands.
2) The system administrator level being aimed for is more where they have a task to do and just want it to work without knowing all these things (e.g. the people who will install cpanel, webadmin, etc. without a thought). We just want that when they run those commands they get a working, secure default.
3) The goal is to get these systems up without the admin following the usual howto of
[snip]
Yes, this is correct, moreover if the admin is expert and has taken the time to read the role documentation (or has experimented previously) I expect he will be able to find the additional command line switches of the 'configure-role' command to change defaults for specific high level configuration items if he needs/wants to.
So in the firewall case I see a more expert admin passing in at invocation time the policy he wants to enforce when it comes to opening firewall ports. If he doesn't, the role-default will be used instead.
Simo.
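As a concrete illustration of "role default unless the admin passes a switch", here is a minimal sketch. Note that the command name, option names, and role table below are all hypothetical, since the real 'configure-role' interface was still being designed at this point; this only shows the fallback behavior Simo describes, using Python's argparse.

```python
import argparse

# Hypothetical sketch only: 'configure-role', its --firewall option, and
# the role defaults below are invented for illustration.
ROLE_DEFAULT_FIREWALL = {
    "postgresql": "open-service-ports",
    "freeipa": "open-service-ports",
}

def parse_role_args(argv):
    parser = argparse.ArgumentParser(prog="configure-role")
    parser.add_argument("role", choices=sorted(ROLE_DEFAULT_FIREWALL))
    parser.add_argument(
        "--firewall",
        choices=["open-service-ports", "closed"],
        help="firewall policy; if omitted, the role's default is used",
    )
    args = parser.parse_args(argv)
    if args.firewall is None:  # admin gave no explicit policy
        args.firewall = ROLE_DEFAULT_FIREWALL[args.role]
    return args
```

An expert admin would invoke something like `configure-role freeipa --firewall=closed`; everyone else gets the role's working, secure default.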
On Fri, 2014-03-07 at 10:52 -0500, Simo Sorce wrote:
[snip]
To jump in here, the majority of customers (based on primary and secondary research) turn off the Linux firewall.
They do this for two reasons:
* The Linux firewall interferes with applications.
* The Linux firewall can't be centrally managed.
Firewalld is a starting point for enabling centralized management of Linux firewalls. Server Roles are a step toward having the firewall not interfere with the application, by configuring the firewall as part of the application installation.
Server Roles should improve security by encouraging more people to leave the Linux Firewall turned on.
Russ
On Thu, 2014-03-06 at 17:12 -0500, Stephen Gallagher wrote:
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1
On 03/06/2014 05:06 PM, Stephen John Smoogen wrote:
On 6 March 2014 14:54, Reindl Harald <h.reindl@thelounge.net> wrote:
On 06.03.2014 22:43, Stephen Gallagher wrote:
On 03/06/2014 04:28 PM, Reindl Harald wrote:
On 06.03.2014 22:13, Miloslav Trmač wrote:
2014-03-06 22:03 GMT+01:00 Simo Sorce <simo@redhat.com>: Sorry, I do not understand what you are saying here.
$ fedora-role-deploy postgresql
# Huh, it is refusing connections?
# Ah, firewall...
$ fedora-role-deploy --open-firewall-ports postgresql
# That's how it is done in Fedora, then. Good to know.
right direction
# Time passes...
$ fedora-role-deploy freeipa # Huh, this is already accessible?
that must not happen
* not from usability point of view
* not from security point of view
* *no* open ports, *never ever*, as default
The debate here is where you draw the line as to "what is default". Deploying a role is *NOT* the same as just installing a package. For package installs, I absolutely agree that we should never be poking holes in the firewall.
I draw the line *strictly*: if I deploy whatever role, nobody but me is responsible for opening firewall ports, because nobody but me can know whether it is sane to do so, or what I have planned after the deployment before it goes into production.
Then in this case, you wouldn't want to use Roles in any form, as they aren't going to help you any. You aren't the target audience for them... trying to make you the target audience would only work in your environment and no one else's.
I don't think that's necessarily a fair statement. We fully intend for the firewall control on these Roles to be easy to turn off and on at will. Upgrades should never change that state[1]. I don't see any reason why, under those conditions, Roles couldn't work for Mr. Reindl.
[1] I think I can reasonably assert this without controversy.
weeeelll, we had some ports change in freeipa: we used to open 8443 and then changed to proxy everything via 443, so technically we would have liked to 'close' a port on update back then :-)
Simo.
On 06.03.2014 23:36, Simo Sorce wrote:
On Thu, 2014-03-06 at 17:12 -0500, Stephen Gallagher wrote:
I don't think that's necessarily a fair statement. We fully intend for the firewall control on these Roles to be easy to turn off and on at will. Upgrades should never change that state[1]. I don't see any reason why, under those conditions, Roles couldn't work for Mr. Reindl.
[1] I think I can reasonably assert this without controversy.
weeeelll, we had some ports change in freeipa, we used to open 8443 and then we changed to proxy everything via 443, so technically we would like to 'close' a port on update if we were back then :-)
no - you would do that only if you were changing my server's configuration to listen on a different port, which would be a no-go - and if you can now find an argument for why doing so is right, that is the best argument against defaults someone may later regret once it is too late
On 06.03.2014 23:06, Stephen John Smoogen wrote:
On 6 March 2014 14:54, Reindl Harald <h.reindl@thelounge.net> wrote:
[snip]
Then in this case, you wouldn't want to use Roles in any form as they aren't going to help you any
even if I weren't, I would be the target, as a third party, of machines of people using roles who are not aware that they need to keep the network cable unplugged before the first boot and secure the setup before connecting it to the network
anybody now saying "this is not Windows, you can connect a Linux box to the internet without getting infected" is being lofty and may regret that attitude sooner or later
You aren't the target audience for them... trying to make you the target audience would only work in your environment and no one else's.
what are you talking about?
honestly, I find it simply bizarre that in the year 2014 *anybody* considers opening any port without the *explicit confirmation* of the sysadmin installing the system, or even installing an OS without a packet filter
2014-03-06 22:43 GMT+01:00 Stephen Gallagher sgallagh@redhat.com:
On 03/06/2014 04:28 PM, Reindl Harald wrote:
On 06.03.2014 22:13, Miloslav Trmač wrote:
2014-03-06 22:03 GMT+01:00 Simo Sorce <simo@redhat.com>: Sorry, I do not understand what you are saying here.
$ fedora-role-deploy postgresql
# Huh, it is refusing connections?
# Ah, firewall...
$ fedora-role-deploy --open-firewall-ports postgresql
# That's how it is done in Fedora, then. Good to know.
<snip>
So I have no problems at all with Miloslav's suggestion that we just require an additional argument (which will have to be translated to the API layer in a sensible way) as part of the configuration.
So the above was confusing, that's not what I wanted to suggest. The --open-firewall-ports was to be basically "firewall-cmd --permanent --add-service=postgresql", i.e. change the firewall, not a re-deploy of the role. (Though it could have actually been a re-deploy, given our earlier conversation about cattle-like deployment.)
Of course, the question becomes one of granularity: I doubt that --open-firewall-ports is necessarily sufficient. In the case of multi-homed servers, you still may want to have the service visible only on a subset of interfaces. I'd suggest --open-firewall-ports[=iface1,...] as a reasonable compromise (and again translated acceptably into the Role config API).
Wouldn't it be simplest to just use (firewall-cmd --permanent --zone=$my_zone ...) directly? We could of course build a "fedora-role-firewall" facade over it if necessary, but firewalld already has all the necessary functionality AFAICS.
And finally, the config API must also be capable of changing the set
of open interfaces (such as when local testing has passed and the admin now wants to expose the services publicly).
That's a firewalld command away, as well. Mirek
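For reference, the firewalld commands Mirek alludes to look roughly like this (a sketch only: the zone names are examples from a typical setup, and "postgresql" must exist as a firewalld service definition on the system):

```shell
# Open the service only on interfaces assigned to the "internal" zone
firewall-cmd --permanent --zone=internal --add-service=postgresql
firewall-cmd --reload

# Later, once local testing has passed, expose it publicly as well
firewall-cmd --permanent --zone=public --add-service=postgresql
firewall-cmd --reload
```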
On 03/07/2014 09:56 AM, Miloslav Trmač wrote:
[snip]
I think we're agreeing completely here. That's how I would expect we'd do it "under the hood" too. I was just talking about things from another layer of abstraction. --open-firewall-ports would be a wrapper around "ask the role what ports it wants to use, then tell firewalld to open them on the appropriate interfaces".
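A rough sketch of that wrapper logic (the role metadata and function name here are invented for illustration; a real implementation would query the role over its API and then execute the resulting argument vectors against firewalld):

```python
# Sketch of an --open-firewall-ports wrapper: ask the (hypothetical) role
# metadata which firewalld services it needs, then build one firewall-cmd
# invocation per requested zone, plus a final reload.
ROLE_FIREWALL_SERVICES = {
    "postgresql": ["postgresql"],
    "freeipa": ["freeipa-ldap", "freeipa-ldaps", "dns"],
}

def firewall_commands(role, zones=("public",)):
    """Return the firewall-cmd argv lists a role deployment would run."""
    commands = []
    for zone in zones:
        for service in ROLE_FIREWALL_SERVICES[role]:
            commands.append([
                "firewall-cmd", "--permanent",
                "--zone", zone, "--add-service", service,
            ])
    commands.append(["firewall-cmd", "--reload"])
    return commands
```

Building the argv lists separately from executing them keeps the "ask the role, then tell firewalld" split Stephen describes, and makes the policy easy to inspect before it is applied.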
adding desktop@ since they are also looking at file system options
On Feb 25, 2014, at 1:42 PM, Stephen Gallagher sgallagh@redhat.com wrote:
=== File system ===
The default file system type for workstation installs should be btrfs.
The default file system is definitely up for some debate, but I'd make an argument for using XFS atop LVM[1] for the default filesystem in the Fedora Server, at least in part because Red Hat's storage experts have done the research for us already and determined that XFS is the recommended fit for Red Hat Enterprise Linux 7.
XFS is a really good idea for Server.
Follow-up questions:
- Can the Server and Workstation WGs choose different defaults for their products' installers?
- Other than the lack of shrink support in XFS, I'd say XFS is suitable for Workstation as well. Would the Workstation WG have concerns about the lack of fs shrink support in the default file system? [1]
Btrfs still makes me somewhat nervous, given that its upstream doesn't consider it stable[3].
That wiki entry appears old. The stable aspect was about disk format, which is now stable. And also the experimental description was removed in kernel 3.13. [2]
My main two concerns with Btrfs: 1. With even minor problems, users sometimes go straight to the big-hammer approach of "btrfsck/btrfs check --repair" rather than the recommended approaches. Remounting with -o recovery, and even using a newer kernel, are recommended on linux-btrfs@ significantly more often than btrfs check --repair.
It's a fair question how fail-safe the offline repair utility currently is, and should be. In my monitoring of linux-btrfs@, even when btrfs check --repair made things worse, the offline btrfs restore utility enabled files to be extracted.
2. Supporting multiple-device volumes in Anaconda. Although multiple-device Btrfs volumes work very well when all devices are working, there's no device-failure notification yet. When a device fails, the volume becomes read-only, and the volume isn't mountable (or bootable) without the "degraded" mount option. While this can be passed as a boot parameter, it's a non-obvious thing for the typical user. I think it's valid for either WG to consider reducing exposure by dropping Anaconda support for it, or placarding the feature somehow.
[1] LVM Thin Provisioning, if ready for production prime time as a default, is a workaround. It's actually better than fs resize anyway.
[2] http://www.mail-archive.com/linux-btrfs%40vger.kernel.org/msg29945.html
Chris Murphy
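For reference, the recovery paths Chris describes look roughly like this (a sketch, not a procedure: the device paths are examples, and the options reflect the btrfs tooling of the time):

```shell
# Recommended first steps on linux-btrfs@ for a damaged volume:
mount -o recovery /dev/sdb1 /mnt     # try the built-in recovery code
# For a multi-device volume with a failed member:
mount -o degraded /dev/sdb1 /mnt     # or boot with rootflags=degraded
# Big-hammer, last-resort repair:
btrfs check --repair /dev/sdb1
# If that made things worse, files can often still be extracted offline:
btrfs restore /dev/sdb1 /srv/rescue
```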
On Tue, 2014-02-25 at 15:53 -0700, Chris Murphy wrote:
adding desktop@ since they are also looking at file system options
On Feb 25, 2014, at 1:42 PM, Stephen Gallagher sgallagh@redhat.com wrote:
[snip]
XFS is a really good idea for Server.
Follow-up questions:
- Can Server and Workstation WG's choose different defaults for their
product's installers?
Given my understanding of anaconda's architecture I don't believe this would *technically* present a significant problem. anaconda already has the concept of being used to install different products, and using different defaults for various things depending on what product it's being used to install: this is how RHEL can have different defaults from Fedora.
It would be best to ask the anaconda devs, though. Maybe they think it's a horrible hack and don't want to extend it any further than their paychecks require. CCing bcl and dcantrell.
In terms of *policy*, it'd be up to FESCo, I guess. It seems like a perfectly reasonable point of variance between products to me.
- Other than lack of shrink support in XFS, I'd say XFS is suitable
for Workstation as well. Would the Workstation WG have concerns about the lack of fs shrink support in the default file system? [1]
Btrfs still makes me somewhat nervous, given that its upstream doesn't consider it stable[3].
That wiki entry appears old. The stable aspect was about disk format, which is now stable. And also the experimental description was removed in kernel 3.13. [2]
<snip>
In addition to Chris' points, we discussed btrfs at this week's QA meeting. We agreed that even though it's really not QA's 'job', it seems sensible to check whether the Desktop WG has talked to the devs who have, up until now, been deciding when btrfs is 'ready for primetime', and whether a plan has been developed. Is the btrfs-by-default part of the current tech spec more of a long-term aspiration, or is it on the table for F21? Have the concerns about its readiness been evaluated and checked with the domain experts?
Thanks!
On Tue, Feb 25, 2014 at 03:04:30PM -0800, Adam Williamson wrote:
On Tue, 2014-02-25 at 15:53 -0700, Chris Murphy wrote:
[snip]
Follow-up questions:
- Can Server and Workstation WG's choose different defaults for their
product's installers?
Given my understanding of anaconda's architecture I don't believe this would *technically* present a significant problem. anaconda already has the concept of being used to install different products, and using different defaults for various things depending on what product it's being used to install: this is how RHEL can have different defaults from Fedora.
If the Server and Workstation trees are separate composes, you can have a different default filesystem.
It would be best to ask the anaconda devs, though. Maybe they think it's a horrible hack and don't want to extend it any further than their paychecks require. CCing bcl and dcantrell.
It's not as horrible of a hack as it was in the past.
In terms of *policy*, it'd be up to FESCo, I guess. It seems like a perfectly reasonable point of variance between products to me.
I say the default Fedora filesystem should be up for vote, much like FESCo members or the code name for a release.
Thanks,
On Tue, Feb 25, 2014 at 18:11:35 -0500, David Cantrell dcantrell@redhat.com wrote:
I say the default Fedora filesystem should be up for vote, much like FESCo members or the code name for a release.
I don't think that is a good idea. I'd rather see a small group of subject experts make the decision for the default, than have a large group of Fedora contributors vote, who may not be in a good position to judge the relative merits of file systems.
On 02/26/2014 08:58 AM, Bruno Wolff III wrote:
On Tue, Feb 25, 2014 at 18:11:35 -0500, David Cantrell dcantrell@redhat.com wrote:
I say the default Fedora filesystem should be up for vote, much like FESCo members or the code name for a release.
I don't think that is a good idea. I'd rather see a small group of subject experts make the decision for the default, than have a large group of Fedora contributors vote, who may not be in a good position to judge the relative merits of file systems.
Yeah, for the Products, I think this is well within the purview of the Working Group to decide (hopefully by seeking expert opinion first).
On Wed, Feb 26, 2014 at 8:58 AM, Bruno Wolff III bruno@wolff.to wrote:
On Tue, Feb 25, 2014 at 18:11:35 -0500, David Cantrell dcantrell@redhat.com wrote:
I say the default Fedora filesystem should be up for vote, much like FESCo members or the code name for a release.
I don't think that is a good idea. I'd rather see a small group of subject experts make the decision for the default, than have a large group of Fedora contributors vote, who may not be in a good position to judge the relative merits of file systems.
Yeah, agreed here. Everyone wants the latest shiniest thing, even if that thing isn't ready. I really don't want to wade through tons of bug reports for btrfs just because it has a lot of hype.
josh
On Wed, Feb 26, 2014 at 9:25 AM, Josh Boyer jwboyer@fedoraproject.org wrote:
Yeah, agreed here. Everyone wants the latest shiniest thing, even if that thing isn't ready. I really don't want to wade through tons of bug reports for btrfs just because it has a lot of hype.
Also, right now cloud is plain old ext4. Let's see if we can ship *all* of the filesystems! It'll be fun!
----- Original Message -----
On Wed, Feb 26, 2014 at 9:25 AM, Josh Boyer jwboyer@fedoraproject.org wrote:
Yeah, agreed here. Everyone wants the latest shiniest thing, even if that thing isn't ready. I really don't want to wade through tons of bug reports for btrfs just because it has a lot of hype.
Also, right now cloud is plain old ext4. Let's see if we can ship *all* of the filesystems! It'll be fun!
Yep, a lot of fun: three different file systems for three different products. And we are back to the question of how much these products can differ with the limited resources we have right now, at least short term. Who can answer it? The filesystem/kernel folks, if they are able and willing to support all the potential filesystems. As David stated, it's possible in Anaconda, but again the same question: would the team be able to maintain support for more filesystems with a high bar in terms of quality (even with, for example, btrfs limited to a bare minimum), QA... And it could be pretty confusing for users, but that's up to us/marketing to explain that the products aim at specific goals and it's for the good (if we are able to support it, then it's for the good; if not...).
Adding devel list to CC - I expect another topic Base should be involved too.
And no, no elections for file system. It's really up to WGs and coordination with the rest teams.
Jaroslav
On Wed, Feb 26, 2014 at 10:18:12AM -0500, Jaroslav Reznik wrote:
[snip]
I think filesystem variance across different Fedoras really impacts QA more than us. We already support a lot of filesystems, but the real hit is the QA test matrix.
Adding devel list to CC - I expect another topic Base should be involved too.
And no, no elections for file system. It's really up to WGs and coordination with the rest teams.
On Wed, 2014-02-26 at 10:24 -0500, David Cantrell wrote:
[snip]
I think filesystem variance across different Fedoras really impacts QA more than us. We already support a lot of filesystems, but the real hit is the QA test matrix.
Well, there is an indirect impact on devel.
As I said, it's about what we decide is 'release blocking'. The historic approach there, and the one that's really simplest and makes most sense, is 'guided path is release blocking, custom is not'. The more choice there is on the guided path, the more release blocking codepaths we have, and the more release blocker bugs you folks get to treadmill.
If, for instance, we ditched LVM thinp and btrfs from the guided path dropdown, we cut the number of potentially release blocking paths by 50% immediately. It won't necessarily cut the number of release blocking bugs by precisely 50%, but it's pretty hard to believe it wouldn't cut the number at *all*. And I know you folks get tired of the release blocker treadmill.
Remember, what QA release validation testing ultimately *results in* is blocker bugs :)
On Wed, Feb 26, 2014 at 10:18 AM, Jaroslav Reznik jreznik@redhat.com wrote:
[snip]
Yep, a lot of fun - three different file systems for three different products. And we are back to the question how much these products could differ - with limited resources we have right now - at least short term. Who can answer it
- filesystem/kernel guys, if they are able and willing to support all
I'm a kernel guy. We already ship all of these filesystems. People already do installs with all of them. Really, it's more about what we consider _sane_ as the default for most users that don't know the difference, and not about shipping them in general.
potential filesystem, as David stated, it's possible in Anaconda but again the same question if the team would be able to maintain more filesystems support with high bar in terms of quality (even for example btrfs limited to bare minimum), QA... And it could be pretty confusing for users but that's
I don't think the support aspect is going to change much either way. The only thing I see possibly happening is more focus on btrfs, but I know my team isn't in a position to spend any significant amount of time on that right now.
josh
On Wed, Feb 26, 2014 at 02:59:00PM +0000, Colin Walters wrote:
Yeah, agreed here. Everyone wants the latest shiniest thing, even if that thing isn't ready. I really don't want to wade through tons of bug reports for btrfs just because it has a lot of hype.
Also, right now cloud is plain old ext4. Let's see if we can ship *all* of the filesystems! It'll be fun!
Cloud could switch to XFS along with server. The main problem is that it'd make us revisit booting -- either 1) some work into lightening up grub2, 2) testing and possibly enhancing syslinux's xfs support, or 3) a separate /boot with a different filesystem. I don't really love any of those options.
On 02/26/2014 11:14 AM, Matthew Miller wrote:
On Wed, Feb 26, 2014 at 02:59:00PM +0000, Colin Walters wrote:
Yeah, agreed here. Everyone wants the latest shiniest thing, even if that thing isn't ready. I really don't want to wade through tons of bug reports for btrfs just because it has a lot of hype.
Also, right now cloud is plain old ext4. Let's see if we can ship *all* of the filesystems! It'll be fun!
Cloud could switch to XFS along with server. The main problem is that it'd make us revisit booting -- either 1) some work into lightening up grub2, 2) testing and possibly enhancing syslinux's xfs support, or 3) a separate /boot with a different filesystem. I don't really love any of those options.
Could you go into more detail about those various issues?
My off-the-cuff guess is that you don't currently plan to use GRUB2 because it's heavyweight for your use-case? So you'd either need to shrink it down or else find a way to allow syslinux to work (either by supporting XFS in it or using a /boot with ext4)?
I'm trying to deconstruct that from the proposed solutions, so please correct me if I'm way off base.
On Wed, Feb 26, 2014 at 11:30:47AM -0500, Stephen Gallagher wrote:
Cloud could switch to XFS along with server. The main problem is that it'd make us revisit booting -- either 1) some work into lightening up grub2, 2) testing and possibly enhancing syslinux's xfs support, or 3) a separate /boot with a different filesystem. I don't really love any of those options.
Could you go into more detail about those various issues?
My off-the-cuff guess is that you don't currently plan to use GRUB2 because it's heavyweight for your use-case? So you'd either need to shrink it down or else find a way to allow syslinux to work (either by supporting XFS in it or using a /boot with ext4)?
That's pretty much it. :)
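Option 3 is also easy to express at install time. As a sketch, a kickstart fragment along these lines would keep /boot on ext4 for syslinux while the root filesystem stays XFS (the sizes are illustrative, not a recommendation):

```
part /boot --fstype="ext4" --size=500
part / --fstype="xfs" --grow --size=1
```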
On Wed, 2014-02-26 at 11:14 -0500, Matthew Miller wrote:
On Wed, Feb 26, 2014 at 02:59:00PM +0000, Colin Walters wrote:
Yeah, agreed here. Everyone wants the latest shiniest thing, even if that thing isn't ready. I really don't want to wade through tons of bug reports for btrfs just because it has a lot of hype.
Also, right now cloud is plain old ext4. Let's see if we can ship *all* of the filesystems! It'll be fun!
Cloud could switch to XFS along with server. The main problem is that it'd make us revisit booting -- either 1) some work into lightening up grub2, 2) testing and possibly enhancing syslinux's xfs support, or 3) a separate /boot with a different filesystem. I don't really love any of those options.
I'm always dubious of 'there shall be only one' decrees - be it installers or desktop environments or file systems.
Also, as has already been pointed out: there are Fedora systems out there using ext4, xfs, btrfs and probably a few other file systems today. If we now suddenly change track and consider btrfs not 'safe enough', wasn't it pretty irresponsible of us to let people use it for their installations?
For the workstation, I think the options are
- switch to btrfs soon to give it the exposure it needs to get ready (while being careful to limit the supported features, as suse does)
- stick with ext4 until we have some user-visible features (time slider...) that make a switch to btrfs very attractive
On 02/26/2014 11:42 AM, Matthias Clasen wrote:
On Wed, 2014-02-26 at 11:14 -0500, Matthew Miller wrote:
On Wed, Feb 26, 2014 at 02:59:00PM +0000, Colin Walters wrote:
Yeah, agreed here. Everyone wants the latest shiniest thing, even if that thing isn't ready. I really don't want to wade through tons of bug reports for btrfs just because it has a lot of hype.
Also, right now cloud is plain old ext4. Let's see if we can ship *all* of the filesystems! It'll be fun!
Cloud could switch to XFS along with server. The main problem is that it'd make us revisit booting -- either 1) some work into lightening up grub2, 2) testing and possibly enhancing syslinux's xfs support, or 3) a separate /boot with a different filesystem. I don't really love any of those options.
I'm always dubious of 'there shall be only one' decrees - be it installers or desktop environments or file systems.
I have no problems (personally) about allowing the different products to select different default filesystems. The reason people choose different filesystems is to serve different workloads, so I think this is just an extension of that.
Also, as has already been pointed out: there are Fedora systems out there using ext4, xfs, btrfs and probably a few other file systems today. If we now suddenly change track and consider btrfs not 'safe enough', wasn't it pretty irresponsible of us to let people use it for their installations?
I think we're saying "it's not stable enough for the *default*". That's a different statement from "it's not stable enough for use".
For the workstation, I think the options are
- switch to btrfs soon to give it the exposure it needs to get ready (while being careful to limit the supported features, as suse does)
I'm slightly in favor of this for the Workstation, personally. Without wide adoption, btrfs will never get any better.
- stick with ext4 until we have some user-visible features (time slider...) that make a switch to btrfs very attractive
Certainly an acceptable answer as well. I don't really see any compelling arguments for XFS in this workload.
On Wed, Feb 26, 2014 at 12:44 PM, Stephen Gallagher sgallagh@redhat.com wrote:
On 02/26/2014 11:42 AM, Matthias Clasen wrote:
On Wed, 2014-02-26 at 11:14 -0500, Matthew Miller wrote:
On Wed, Feb 26, 2014 at 02:59:00PM +0000, Colin Walters wrote:
Yeah, agreed here. Everyone wants the latest shiniest thing, even if that thing isn't ready. I really don't want to wade through tons of bug reports for btrfs just because it has a lot of hype.
Also, right now cloud is plain old ext4. Let's see if we can ship *all* of the filesystems! It'll be fun!
Cloud could switch to XFS along with server. The main problem is that it'd make us revisit booting -- either 1) some work into lightening up grub2, 2) testing and possibly enhancing syslinux's xfs support, or 3) a separate /boot with a different filesystem. I don't really love any of those options.
I'm always dubious of 'there shall be only one' decrees - be it installers or desktop environments or file systems.
I have no problems (personally) about allowing the different products to select different default filesystems. The reason people choose different filesystems is to serve different workloads, so I think this is just an extension of that.
Also, as has already been pointed out: there are Fedora systems out there using ext4, xfs, btrfs and probably a few other file systems today. If we now suddenly change track and consider btrfs not 'safe enough', wasn't it pretty irresponsible of us to let people use it for their installations?
I think we're saying "it's not stable enough for the *default*". That's a different statement from "it's not stable enough for use".
For the workstation, I think the options are
- switch to btrfs soon to give it the exposure it needs to get ready (while being careful to limit the supported features, as suse does)
I'm slightly in favor of this for the Workstation, personally. Without wide adoption, btrfs will never get any better.
No, that isn't true. Without wide adoption you may not have any impetus for btrfs to get better. However, it getting better is dependent upon wider development, maintenance, and testing. I'm not sure we are in a position to actually do that, and that is the bulk of my hesitation. Throwing something upon Fedora users as a default with the hopes that it will improve is pretty horrible in my opinion, particularly if we aren't able to actually fix things they find.
josh
Hi
On Wed, Feb 26, 2014 at 1:00 PM, Josh Boyer wrote:
No, that isn't true. Without wide adoption you may not have any impetus for btrfs to get better. However, it getting better is dependent upon wider development, maintenance, and testing. I'm not sure we are in a position to actually do that, and that is the bulk of my hesitation. Throwing something upon Fedora users as a default with the hopes that it will improve is pretty horrible in my opinion, particularly if we aren't able to actually fix things they find.
Does Fedora, or more specifically Red Hat, have anyone working on Btrfs upstream who can help guide the path forward? It can't possibly be the right decision to let Btrfs be stuck in its current position for too long.
Rahul
Does Red Hat provide support for Fedora? If not, then in my opinion btrfs would be a great use case for Fedora to push upstream to RHEL. With XFS defaulting in RHEL 7, that's cool, but I think we should be ahead of the curve, not an Ubuntu competitor or a glorified RHEL release. Like I said, in my opinion. On Feb 26, 2014 1:08 PM, "Rahul Sundaram" metherid@gmail.com wrote:
Hi
On Wed, Feb 26, 2014 at 1:00 PM, Josh Boyer wrote:
No, that isn't true. Without wide adoption you may not have any impetus for btrfs to get better. However, it getting better is dependent upon wider development, maintenance, and testing. I'm not sure we are in a position to actually do that, and that is the bulk of my hesitation. Throwing something upon Fedora users as a default with the hopes that it will improve is pretty horrible in my opinion, particularly if we aren't able to actually fix things they find.
Does Fedora, or more specifically Red Hat, have anyone working on Btrfs upstream who can help guide the path forward? It can't possibly be the right decision to let Btrfs be stuck in its current position for too long.
Rahul
cloud mailing list cloud@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/cloud Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct
On Wed, Feb 26, 2014 at 1:08 PM, Rahul Sundaram metherid@gmail.com wrote:
Hi
On Wed, Feb 26, 2014 at 1:00 PM, Josh Boyer wrote:
No, that isn't true. Without wide adoption you may not have any impetus for btrfs to get better. However, it getting better is dependent upon wider development, maintenance, and testing. I'm not sure we are in a position to actually do that, and that is the bulk of my hesitation. Throwing something upon Fedora users as a default with the hopes that it will improve is pretty horrible in my opinion, particularly if we aren't able to actually fix things they find.
Does Fedora, or more specifically Red Hat, have anyone working on Btrfs upstream who can help guide the path forward? It can't possibly be the right decision to let Btrfs be stuck in its current position for too long.
Fedora is harder to quantify because of the community aspect. I can say that there is nobody on the Fedora Engineering Team (which the Fedora kernel team is a part of) that is working on btrfs upstream. We do have Fedora contributors like Chris Murphy and others who have been doing a lot of testing and bug reporting around btrfs for a while though.
I have less insight as to broader Red Hat involvement. Btrfs is a tech preview in the RHEL7 Beta, so some level of participation is to be expected. How much that translates to upstream development is unclear.
josh
On Wed, 2014-02-26 at 14:05 -0500, Josh Boyer wrote:
I have less insight as to broader Red Hat involvement. Btrfs is a
IIRC RH employed one or two of the primary btrfs devs for a while, but we don't any more. I want to say one of them is working at Facebook now, but I'm not 100% sure.
On Wed, Feb 26, 2014 at 11:37:58AM -0800, Adam Williamson wrote:
On Wed, 2014-02-26 at 14:05 -0500, Josh Boyer wrote:
I have less insight as to broader Red Hat involvement. Btrfs is a
IIRC RH employed one or two of the primary btrfs devs for a while, but we don't any more. I want to say one of them is working at Facebook now, but I'm not 100% sure.
Both are at FB (Chris and Josef).
On 26 February 2014 11:08, Rahul Sundaram metherid@gmail.com wrote:
Hi
On Wed, Feb 26, 2014 at 1:00 PM, Josh Boyer wrote:
No, that isn't true. Without wide adoption you may not have any impetus for btrfs to get better. However, it getting better is dependent upon wider development, maintenance, and testing. I'm not sure we are in a position to actually do that, and that is the bulk of my hesitation. Throwing something upon Fedora users as a default with the hopes that it will improve is pretty horrible in my opinion, particularly if we aren't able to actually fix things they find.
Does Fedora, or more specifically Red Hat, have anyone working on Btrfs upstream who can help guide the path forward? It can't possibly be the right decision to let Btrfs be stuck in its current position for too long.
It isn't stuck. It is just moving slowly. Remember when btrfs first came out and the older filesystem guys said it could take up to 10 years to get it ready? And a lot of people said it wouldn't take that long... Well, it turns out that filesystems are very, very hard to get right, and the corner cases brick machines rather than just crash and reboot. Those corner cases get found by people bricking their systems, which means you have to be ready to say "Hey, I don't need this enterprise box and its data... let me throw my workload at it," and that is a small subset of users out there. That means progress is slow and painful.
server mailing list server@lists.fedoraproject.org https://admin.fedoraproject.org/mailman/listinfo/server
On Tue, Feb 25, 2014 at 5:53 PM, Chris Murphy lists@colorremedies.com wrote:
adding desktop@ since they are also looking at file system options
On Feb 25, 2014, at 1:42 PM, Stephen Gallagher sgallagh@redhat.com wrote:
=== File system ===
The default file system type for workstation installs should be btrfs.
The default file system is definitely up for some debate, but I'd make an argument for using XFS atop LVM[1] for the default filesystem in the Fedora Server, at least in part because Red Hat's storage experts have done the research for us already and determined that XFS is the recommended fit for Red Hat Enterprise Linux 7.
XFS is a really good idea for Server.
I've yet to actually advocate against this majorly, but I'm pretty against using btrfs as the default for any product. At least in the F21 timeframe. It's simply not ready.
Follow-up questions:
Can the Server and Workstation WGs choose different defaults for their products' installers?
Other than lack of shrink support in XFS, I'd say XFS is suitable for Workstation as well. Would the Workstation WG have concerns about the lack of fs shrink support in the default file system? [1]
I don't think shrink support is a factor at all for Workstation.
Btrfs still makes me somewhat nervous, given that its upstream doesn't consider it stable[3].
That wiki entry appears old. The stability caveat was about the disk format, which is now stable. Also, the "experimental" description was removed in kernel 3.13. [2]
My main two concerns with Btrfs:
- With even minor problems users sometimes go straight to the big hammer approach with "btrfsck/btrfs check --repair" rather than the recommended approaches. Remounting with -o recovery, and even using a newer kernel are recommended on linux-btrfs@ significantly more often than btrfs check --repair.
That's a pretty poor user experience either way. The filesystem should be the last thing the user has to worry about, and forcing them to upgrade to get their FS fixed is indicative of btrfs not being ready.
- Supporting multiple device volumes in Anaconda. Although multiple device Btrfs volumes work very well when devices are working, there's no device failure notification yet. When a device fails, the volume becomes read-only; and the volume isn't mountable (or bootable) without the use of "degraded" mount option. While this can be done as a boot parameter, it's sort of a non-obvious thing for the typical user. I think it's valid for either WG to consider reducing exposure by dropping Anaconda support for it, or placarding the feature somehow.
If, and it's a very big if, Workstation were to go with btrfs, I would really push for a reduced functionality mode similar to what OpenSUSE is doing. No RAID, no multi-device, etc.
josh
On Feb 26, 2014, at 7:24 AM, Josh Boyer jwboyer@fedoraproject.org wrote:
On Tue, Feb 25, 2014 at 5:53 PM, Chris Murphy lists@colorremedies.com wrote:
XFS is a really good idea for Server.
I've yet to actually advocate against this majorly, but I'm pretty against using btrfs as the default for any product. At least in the F21 timeframe. It's simply not ready.
Jolla's Sailfish OS uses it on mobile phones; openSUSE also considers it ready for their next release, in approximately the same time frame as Fedora 21. Btrfs has been offered as a guided partition path option since Fedora 18. It's been visible in the UI since Fedora 14 or 15. It was first proposed as a default for Fedora 16.
I think the WGs need to have some metric by which to make a largely objective decision, and should get their questions/concerns addressed directly by Btrfs developers if they're considering it as a default.
A certain subjectivity is reasonable too, for example whether Fedora Workstation and Server should be biased more toward production/stability, or development/testing than prior Fedoras. I think answering the bias question makes the file system decision more easily fall into place.
Cloud, I think they probably want to stick it out with plain partition ext4 due to booting simplicity.
My main two concerns with Btrfs:
- With even minor problems users sometimes go straight to the big hammer approach with "btrfsck/btrfs check --repair" rather than the recommended approaches. Remounting with -o recovery, and even using a newer kernel are recommended on linux-btrfs@ significantly more often than btrfs check --repair.
That's a pretty poor user experience either way. The filesystem should be the last thing the user has to worry about, and forcing them to upgrade to get their FS fixed is indicative of btrfs not being ready.
The same recommendation happens on the XFS list too when people having file system problems have old kernels and repair tools. Btrfs is young, and an "old" kernel is maybe only 6-9 months old. Fedora kernels are kept exceptionally current, as are the btrfs user space tools which is likely why fewer Fedora users have Btrfs problems compared to distributions that use much older kernels.
Somewhat less often than users immediately trying btrfsck --repair, users on the XFS list report having used xfs_repair -L right after a crash instead of first mounting the file system. That's rather damaging too. Running the offline check/repair utility as the first step after a crash has been obsolete for 10 years, yet people still do such things.
The reality is that the repair tool fixes edge cases, because the file system is designed to not really need one. The common problems either don't happen in the first place, are fixed on a normal mount, or are fixed with the recovery mount option.
My suggestion for the Workstation WG is to find out if btrfsck --repair is too often causing worse problems. I don't know the answer to that, but I think it needs to be put directly to Btrfs developers. Any other source is just an anecdote.
- Supporting multiple device volumes in Anaconda. Although multiple device Btrfs volumes work very well when devices are working, there's no device failure notification yet. When a device fails, the volume becomes read-only; and the volume isn't mountable (or bootable) without the use of "degraded" mount option. While this can be done as a boot parameter, it's sort of a non-obvious thing for the typical user. I think it's valid for either WG to consider reducing exposure by dropping Anaconda support for it, or placarding the feature somehow.
If, and it's a very big if, Workstation were to go with btrfs, I would really push for a reduced functionality mode similar to what OpenSUSE is doing. No RAID, no multi-device, etc.
I agree, but to some degree this is up to the WGs and anaconda folks to work out. Today, if a user chooses 2+ disks and the default/Automatic/guided installation path with the Partition Scheme set to Btrfs, those drives get data profile raid0 and metadata profile raid1. It's been this way for a while. So what you're suggesting is a change at least to the automatic/easy path for Btrfs, and possibly also a change for Manual/custom, and it reads like a distinct shift in the bias of Fedora.next toward something more production/stability oriented than past Fedoras.
Chris Murphy
On Wed, Feb 26, 2014 at 3:32 PM, Chris Murphy lists@colorremedies.com wrote:
On Feb 26, 2014, at 7:24 AM, Josh Boyer jwboyer@fedoraproject.org wrote:
On Tue, Feb 25, 2014 at 5:53 PM, Chris Murphy lists@colorremedies.com wrote:
XFS is a really good idea for Server.
I've yet to actually advocate against this majorly, but I'm pretty against using btrfs as the default for any product. At least in the F21 timeframe. It's simply not ready.
Jolla's Sailfish OS uses it on mobile phones; openSUSE also considers it ready for their next release, in approximately the same time frame as Fedora 21. Btrfs has been offered as a guided partition path option since Fedora 18. It's been visible in the UI since Fedora 14 or 15. It was first proposed as a default for Fedora 16.
All true. (Though for clarity, OpenSUSE is offering a reduced feature mode by default.)
I think the WGs need to have some metric by which to make a largely objective decision, and should get their questions/concerns addressed directly by Btrfs developers if they're considering it as a default.
I've discussed some of this with the FS team within Red Hat. I'm hoping to draw them out of hiding, and I hope they come with data.
A certain subjectivity is reasonable too, for example whether Fedora Workstation and Server should be biased more toward production/stability, or development/testing than prior Fedoras. I think answering the bias question makes the file system decision more easily fall into place.
Yes, seems fair.
Cloud, I think they probably want to stick it out with plain partition ext4 due to booting simplicity.
My main two concerns with Btrfs:
- With even minor problems users sometimes go straight to the big hammer approach with "btrfsck/btrfs check --repair" rather than the recommended approaches. Remounting with -o recovery, and even using a newer kernel are recommended on linux-btrfs@ significantly more often than btrfs check --repair.
That's a pretty poor user experience either way. The filesystem should be the last thing the user has to worry about, and forcing them to upgrade to get their FS fixed is indicative of btrfs not being ready.
The same recommendation happens on the XFS list too when people having file system problems have old kernels and repair tools. Btrfs is young, and an "old" kernel is maybe only 6-9 months old.
"Young." 7 years is not young when it comes to having repair and recovery tools, or surviving things like ENOSPC. I realize _those_ things in btrfs-land actually are young, but that is kind of providing some of the hesitation on my side. It took them this long to have those basic features available? How long will it take them to have it so that the corner cases don't break?
Fedora kernels are kept exceptionally current, as are the btrfs user space tools which is likely why fewer Fedora users have Btrfs problems compared to distributions that use much older kernels.
mmm... I have no way to compare against other distributions. I'm glad Fedora is perceived as having better Btrfs support, but I can assure you it isn't because of any concerted effort on our part. However, when it comes to bug reports for filesystems in Fedora kernels, btrfs is by far the leading FS over ext4 or XFS. Some of this is due to age, sure. Most of it is due to ext4 and XFS having sustained effort by a large community of people on upstream development, and a reduced feature set compared to btrfs. In short, btrfs has a lot of catching up to do, and it's trying to do that while also leapfrogging everything else.
Somewhat less often than users immediately trying btrfsck --repair, users on the XFS list report having used xfs_repair -L right after a crash instead of first mounting the file system. That's rather damaging too. Running the offline check/repair utility as the first step after a crash has been obsolete for 10 years, yet people still do such things.
People do stupid things after an error. Yes. That isn't what I'm worried about. I'm worried about hitting those errors to begin with.
The reality is that the repair tool fixes edge cases, because the file system is designed to not really need one. The common problems either don't happen in the first place, are fixed on a normal mount, or are fixed with the recovery mount option.
My suggestion for the Workstation WG is to find out if btrfsck --repair is too often causing worse problems. I don't know the answer to that, but I think it needs to be put directly to Btrfs developers. Any other source is just an anecdote.
I think I'd like to see if anything gets discussed at LFS in a month or so. One of the Fedora kernel team members is going to be around, so hopefully we can get some more information from multiple sources involved there.
Also, if btrfsck --repair is a major source of problems, and you have to ask a developer if you'll lose data if you run it, then is the tool _really_ ready? I disagree with the assertion that it isn't needed. The filesystem might be designed to not need one, but either the design or implementation is lacking enough that it is needed and now we're left in a fairly _unstable_ state. That is not what I want as a default FS.
Please don't mistake my hesitation on using it as the default as being against Btrfs. I'm not. I would love to have it work, work well, and be used as the default. It allows much more ease of use and more interesting features than anything else. I simply do not think it is in a state where that is a safe choice to make. I do not, as a member of a WG or a kernel maintainer, want to continually apologize for people losing data or having unbootable machines because we chose poorly. If we get data and comparisons that show otherwise, as you suggested, then I will be very interested in looking at it as the default.
- Supporting multiple device volumes in Anaconda. Although multiple device Btrfs volumes work very well when devices are working, there's no device failure notification yet. When a device fails, the volume becomes read-only; and the volume isn't mountable (or bootable) without the use of "degraded" mount option. While this can be done as a boot parameter, it's sort of a non-obvious thing for the typical user. I think it's valid for either WG to consider reducing exposure by dropping Anaconda support for it, or placarding the feature somehow.
If, and it's a very big if, Workstation were to go with btrfs, I would really push for a reduced functionality mode similar to what OpenSUSE is doing. No RAID, no multi-device, etc.
I agree, but to some degree this is up to the WGs and anaconda folks to work out. Today, if a user chooses 2+ disks and the default/Automatic/guided installation path with the Partition Scheme set to Btrfs, those drives get data profile raid0 and metadata profile raid1. It's been this way for a while. So what you're suggesting is a change at least to the automatic/easy path for Btrfs, and possibly also a change for Manual/custom, and it reads like a distinct shift in the bias of Fedora.next toward something more production/stability oriented than past Fedoras.
I agree with what you said here entirely.
I also happen to think that one of the major focuses of Workstation is to produce a _product_ that is more stable than past Fedoras. So looking at it from that perspective, I don't see a reduced feature set as being that out of line.
josh