On 11/07/2013 02:29 AM, Dridi Boukelmoune wrote:
> On Wed, Nov 6, 2013 at 6:58 PM, Ivan Afonichev <ivan.afonichev@gmail.com> wrote:
>> So what is the decision of the community?
> Hi,
> I've taken a look at the hadoop spec and built the httpfs sub-package. It is packaged as a classic all-in-one-dir "catalina base". I believe this goes against the guidelines [1], which state that "Fedora packages must follow the FHS".
The packaging does follow the FHS guidelines: all files are installed into the correct locations, and symlinks tie the layout together. The tomcat package does this as well (look at /usr/share/tomcat).
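For anyone who hasn't read the spec, the idea is roughly the %install fragment below. The paths are illustrative, not the exact ones from the hadoop spec: the real files land in FHS locations, and the directory under /usr/share only holds the symlinks that recreate the "catalina base" layout.

    # Real files go to FHS locations (config, logs, cache)
    install -d %{buildroot}%{_sysconfdir}/hadoop/httpfs
    install -d %{buildroot}%{_localstatedir}/log/hadoop-httpfs
    install -d %{buildroot}%{_localstatedir}/cache/hadoop-httpfs/temp
    install -d %{buildroot}%{_datadir}/hadoop/httpfs
    # The "catalina base" entries are just symlinks into those locations
    ln -s %{_sysconfdir}/hadoop/httpfs %{buildroot}%{_datadir}/hadoop/httpfs/conf
    ln -s %{_localstatedir}/log/hadoop-httpfs %{buildroot}%{_datadir}/hadoop/httpfs/logs
    ln -s %{_localstatedir}/cache/hadoop-httpfs/temp %{buildroot}%{_datadir}/hadoop/httpfs/temp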
>> Is it good to have some /usr/share/*/bin/*.sh files? The tomcat package itself doesn't need them, but software that isn't as systemd'ed, like hadoop's httpfs, may be happy to use them.
> As I understand it, we cannot follow hadoop upstream's packaging, just as the tomcat package doesn't follow its upstream's. Also, a "standard" java WAR (unpacked here) contains its JARs in its WEB-INF/lib directory, which also seems to go against the java packaging guidelines [2]. They state that "All architecture-independent JAR files MUST go into %{_javadir} or [...] %{_javadir}-*".
The unpacked WAR has its JARs replaced with symlinks, and tomcat is told to follow symlinks for that service.
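Concretely it's the usual jpackage-style trick. A sketch, with hypothetical jar and webapp names (the real spec will differ):

    # Drop the bundled copies, then symlink the system JARs in
    # (jar and webapp names here are made up for the example)
    rm %{buildroot}%{_datadir}/hadoop/httpfs/webapps/webhdfs/WEB-INF/lib/*.jar
    build-jar-repository -s \
      %{buildroot}%{_datadir}/hadoop/httpfs/webapps/webhdfs/WEB-INF/lib \
      commons-logging json_simple
    # Tomcat 7 won't serve resources behind symlinks unless the
    # context opts in, hence the "told to follow symlinks" part
    install -d %{buildroot}%{_sysconfdir}/hadoop/httpfs/Catalina/localhost
    cat > %{buildroot}%{_sysconfdir}/hadoop/httpfs/Catalina/localhost/webhdfs.xml <<EOF
    <Context allowLinking="true"/>
    EOF

Without allowLinking, Tomcat 7 refuses to serve anything that resolves through a symlink, so the per-context XML is the piece that makes the symlinked WEB-INF/lib work.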
>> Should we package the original upstream catalina.sh, or should we create some "service tomcat $@" emulation of it?
> There actually is a systemd service [3] in the tomcat package, and hadoop has a similar one [4]. The difference is that tomcat doesn't stick to upstream's scripts, because the guidelines don't allow them to work the way they do.
The systemd unit written for httpfs seems to work just fine and, afaik, follows Fedora's guidelines.
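For reference, a minimal sketch of what such a unit can look like; the unit name, user, and ExecStart path are illustrative, not copied from the actual package:

    [Unit]
    Description=Hadoop HTTPFS server
    After=network.target

    [Service]
    Type=simple
    EnvironmentFile=-/etc/sysconfig/hadoop-httpfs
    User=httpfs
    # httpfs.sh wraps catalina.sh; "run" keeps Tomcat in the foreground
    ExecStart=/usr/share/hadoop/httpfs/sbin/httpfs.sh run
    # JVMs exit 143 on SIGTERM, so count that as a clean stop
    SuccessExitStatus=143

    [Install]
    WantedBy=multi-user.target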
Rob