README.md | 171 +++++++++++++++++++++++++++-------------------------
fedora-ize | 4 -
pkg/cloudfs.spec.in | 2
pkg/configure.ac | 2
scripts/cloudfs | 16 +++-
5 files changed, 104 insertions(+), 91 deletions(-)
New commits:
commit a0893f212e0b68a1c49a18c9677cbc1370d31070
Author: Jeff Darcy <jdarcy@redhat.com>
Date: Tue Feb 22 12:21:19 2011 -0500
More doc updates.
diff --git a/README.md b/README.md
index c570777..2e12ed7 100644
--- a/README.md
+++ b/README.md
@@ -59,8 +59,8 @@ If that disappears, the -2 RPMs for el6 (which should be equivalent) are at:
 http://jdarcy.fedorapeople.org/el6_rpms/
-To build CloudFS, you need to install the -devel RPMs. Once you've done that,
-you can go into your git tree and do the following:
+To build CloudFS, you need to install the glusterfs-devel RPMs. Once you've
+done that, you can go into your cloudfs git tree[3] and do the following:
./fedora-ize
rsync -aptv SOURCES/ ~/rpmbuild/SOURCES/
@@ -75,8 +75,10 @@ quick. Install the resulting RPM and you're ready for configuration.
## Configuration ##
-You use the "cloudfs" script to set up CloudFS-specific features. There are
-two main cloudfs commands.
+CloudFS operates on volumes that are modified from those created by GlusterFS,
+as described in the GlusterFS documentation[4]. Once you have created the
+volume in GlusterFS, do *not* start it. Use the "cloudfs" script to set up
+CloudFS-specific features. There are two main cloudfs commands.
* cloudfs init VOLUME USERS
This initializes the "multi-tenant" (namespace isolation) features of CloudFS,
@@ -94,7 +96,9 @@ two main cloudfs commands.
These commands must be run *on every server*, and re-run any time you use the
"gluster volume set" command to change volume parameters (which will re-write
-the originals that CloudFS has copied and modified).
+the originals that CloudFS has copied and modified). Also, unlike changes made
+with the "gluster" command, these changes will not take effect until the next
+time the volume is started.
In addition to rewriting volfiles, you must create subdirectories - again on
each server - for each user plus one for the "junk" pseudo-user. Thus, if you
@@ -102,6 +106,13 @@ have a brick belonging to a volume at server1:/exports/glu and you add a user
 "fred" you will need to create /exports/glu/fred on server1 yourself . . . and
likewise for every other brick in the volume.
+Finally, you're ready to mount. Since the GlusterFS volfile-fetching
+infrastructure can't handle per-tenant volfiles, you'll have to do this the
+"old fashioned" (i.e. pre-3.1) way, by specifying the actual file instead
+of a server.
+
+ glusterfs --volfile my.vol.file /my/mount/point
+
NOTE: this process is recognized to be cumbersome, and will become less so
shortly. In the next version, there will still be a "cloudfs" command/script
to perform actions across the entire storage pool from a single
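The per-brick subdirectory step above lends itself to a small helper. Here is a minimal sketch (Python; the brick paths, user names, and function name are hypothetical, and this is not part of the cloudfs script) that creates one directory per user plus the "junk" pseudo-user under each brick:

```python
import os

def make_tenant_dirs(bricks, users):
    """Create <brick>/<user> for each brick/user pair, plus the "junk"
    pseudo-user. Both argument names are hypothetical; this would need
    to be run on every server that holds a brick of the volume."""
    created = []
    for brick in bricks:
        for user in list(users) + ["junk"]:
            path = os.path.join(brick, user)
            os.makedirs(path, exist_ok=True)
            created.append(path)
    return created
```

For example, `make_tenant_dirs(["/exports/glu"], ["fred"])` would create /exports/glu/fred and /exports/glu/junk on the local server.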
@@ -139,5 +150,6 @@ the work-list items below as well.
 [2] https://fedoraproject.org/wiki/Features/CloudFS
+[3] http://git.fedorahosted.org/git/?p=CloudFS.git
-
+[4] http://gluster.com/community/documentation/index.php/Gluster_3.1_Filesyst...
commit 527b046b6bd6a22769bfedcba8c8dff89fd18f6b
Author: Jeff Darcy <jdarcy@redhat.com>
Date: Mon Feb 21 17:00:02 2011 -0500
New doc for building and configuration.
Also changed Markdown header styles.
diff --git a/README.md b/README.md
index 6af15bd..c570777 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,8 @@
-CloudFS
-=======
+# CloudFS #
-Introduction
-------------
+## Introduction ##
-CloudFS is a set of enhancements to GlusterFS[1], allowing a cloud provider to
+CloudFS is a set of enhancements to GlusterFS[1] allowing a cloud provider to
set up a permanent, shared filesystem for their users. This mostly involves
protecting users from each other in various ways, but includes other features
as well:
@@ -27,11 +25,10 @@ as well:
In the future, CloudFS will also include an improved distribution ("DHT")
translator, and multi-site replication. These features are not part of the
-current release. The first-release functionality is more fully described
-in the Fedora 15 feature page[2].
+current release. The first-release functionality is more fully described in
+the Fedora 15 feature page[2].
-Code Structure
---------------
+## Code Structure ##
Most GlusterFS functionality is contained in "translators" which translate a
higher-level operation (e.g. a write) into one or more lower-level operations
@@ -50,80 +47,80 @@ implementation):
* auth (client, TBD): auxiliary/helper code for authentication
-Building
---------
+## Building ##
-To avoid distributing an entire GlusterFS tree without permission, CloudFS is
-currently distributed as a set of overlays (for new files) and patches (for
-existing files) to the official GlusterFS tree. To create a complete CloudFS
-tree, follow these steps:
+CloudFS depends on a specific version of GlusterFS, which is currently only
+packaged for Fedora 15. This version has (as of 2011/02/21) not hit the yum
+repositories yet, but the latest build can be downloaded from:
- git clone git://git.gluster.com/glusterfs.git cloudfs
- cd cloudfs
- rsync -apt $CLOUDFS_DIR/xlators/ xlators/
- for i in $CLOUDFS_DIR/patches/*; do patch -p1 < $i; done
+
+http://koji.fedoraproject.org/koji/buildinfo?buildID=223571
-At this point you can follow the usual GlusterFS build process. If you're
-familiar with building RPMs, you can do something like this:
+If that disappears, the -2 RPMs for el6 (which should be equivalent) are at:
- ./autogen.sh
- ./configure --enable-fusermount
- make dist-gzip
- cp glusterfs-3.1.0git.tar.gz ~/rpmbuild/SOURCES
- cp glusterfs.spec ~/rpmbuild/SPECS
- cd ~/rpmbuild
- rpmbuild -bb SPECS/glusterfs.spec
-
-Work is currently under way to allow building translators "out of tree" much
-as can be done for kernel modules. Once that work is complete, a simpler
-CloudFS build procedure will be implemented, and a separate RPM specfile will
-be created embodying that process.
-
-Configuration
--------------
-
-The work to integrate configuration of the new translators with the current
-gluster CLI is, unfortunately, still TBD. The only method available to
-configure and use the new translators is to edit the "volfiles" by hand, as
-you would have done in 3.0 but with an extra twist. If you have created a
-volume named "fubar" then your volfiles will be in /etc/glusterd/vols/fubar on
-the servers. There will be one fubar-fuse.vol for the clients, and one
-fubar.${HOST}.${PATH}.vol for each "brick" making up the filesystem. To make
-a change globally, you'll need to do the following:
-
-1. Edit one of the brick volfiles, e.g. on host "gnarly"
-
-2. Propagate the changes to the other volfiles on the same host. If you only
- have one brick per server, and all bricks use the same path, you can simply
- copy the edited volfile.
-
-3. On every *other* server, do "volume sync gnarly all" to fetch the edited
- volfiles.
-
-See the CONFIG.txt in each translator's directory for instructions specific
-to that translator.
+
+http://jdarcy.fedorapeople.org/el6_rpms/
-The .../scripts directory contains some Python scripts that can help automate
-the process of modifying volfiles. Specifically:
+To build CloudFS, you need to install the -devel RPMs. Once you've done that,
+you can go into your git tree and do the following:
-* filt-log-io.py: inserts a debug/log-io translator between a protocol/server
- volume and each of its subvolumes
-
-* filt-crypto.py: inserts an encryption/crypto translator on top of each
- protocol/client volume. Also disables performance/quick-read, which is
- incompatible with encryption/crypto.
-
-* filt-cloud.py: replaces a simple translator "stack" (from storage/posix up
- to whatever is below protocol/server) with one such stack per named tenant,
- plus a cluster/cloud translator to tie them together.
-
-Running these scripts after each gluster "volume create" or "volume set"
-command should generate a new volfile with the desired enhancements. They use
-a common volfile parsing/modification library that might be useful for other
-tasks as well.
-
-Work List
----------
+ ./fedora-ize
+ rsync -aptv SOURCES/ ~/rpmbuild/SOURCES/
+ rsync -aptv SPECS/ ~/rpmbuild/SPECS/
+ cd ~/rpmbuild
+ rpmbuild -bb SPECS/cloudfs.spec
+
+The rest is standard rpmbuild stuff. For debugging, you'll probably want to
+prepend CFLAGS=-g to your rpmbuild command line. Since this process only
+builds the CloudFS-specific translators and not all of GlusterFS, it's pretty
+quick. Install the resulting RPM and you're ready for configuration.
+
+## Configuration ##
+
+You use the "cloudfs" script to set up CloudFS-specific features. There are
+two main cloudfs commands.
+
+* cloudfs init VOLUME USERS
+ This initializes the "multi-tenant" (namespace isolation) features of CloudFS,
+ by rewriting the server volfiles to include the "cloud" translator and
+ generating per-user client volfiles which include the "login" translator.
+ The USERS file is simply a list of name/password pairs, one pair per line,
+ separated by spaces. There is no provision currently for extra indentation,
+ comments, etc. Note that the per-user client volfiles are placed in
+ /var/lib/glusterd/vols/VOLUME/VOLUME-fuse.vol.USER and you will need to get
+ them to the client(s) yourself.
+
+* cloudfs initc VOLFILE KEY
+ This initializes the encryption feature of CloudFS, by rewriting a client
+ volfile (not a volume name) to include the "crypt" translator.
+
+These commands must be run *on every server*, and re-run any time you use the
+"gluster volume set" command to change volume parameters (which will re-write
+the originals that CloudFS has copied and modified).
+
+In addition to rewriting volfiles, you must create subdirectories - again on
+each server - for each user plus one for the "junk" pseudo-user. Thus, if you
+have a brick belonging to a volume at server1:/exports/glu and you add a user
+"fred" you will need to create /exports/glu/fred on server1 yourself . . . and
+likewise for every other brick in the volume.
+
+NOTE: this process is recognized to be cumbersome, and will become less so
+shortly. In the next version, there will still be a "cloudfs" command/script
+to perform actions across the entire storage pool from a single
+command line, but it will be improved in the following ways:
+
+* Instead of specifying users via a file, the list of users and corresponding
+ passwords (or other credentials - see work list) will be maintained by
+ cloudfs itself. The interface will include add-user and del-user commands,
+ which can even be issued dynamically.
+
+* Adding (or removing) users will automatically create (or delete) the
+ per-brick subdirectories. The "junk" pseudo-user will go away.
+
+As a side effect of the way these features are implemented, there will be
+separate cloudfsd and mount.cloudfs commands corresponding to glusterd and
+mount.glusterfs respectively. There will be other changes corresponding to
+the work-list items below as well.
+
+## Work List ##
* crypt translator: stronger encryption, keys in files
@@ -131,17 +128,15 @@ Work List
* auth translator: create
-* build system: out-of-tree build process, specfile
-
* config: CLI integration, other tools for UID/GID mapping, billing, cert/key
management
* doc: pull together per-translator options, CLI extensions, other tools
-Notes
------
+## Notes ##
 [1] http://www.gluster.org or http://www.gluster.com
+
 [2] https://fedoraproject.org/wiki/Features/CloudFS
commit 3c13ff84acc4f2ba7621b1ed545051ca4307940d
Author: Jeff Darcy <jdarcy@redhat.com>
Date: Thu Feb 3 21:58:46 2011 -0500
Even more packaging changes.
diff --git a/fedora-ize b/fedora-ize
index a5ac4b9..682812d 100755
--- a/fedora-ize
+++ b/fedora-ize
@@ -20,8 +20,8 @@ cp -r xlators/encryption/crypt/src $work/crypt/
cp -r xlators/features/oplock/src $work/oplock/
cp -r xlators/cluster/login/src $work/login/
cp pkg/* $work/
-cp scripts/cloudfs $work/
cp scripts/volfilter.py $work/
+cp scripts/cloudfs $work/
# Configure just enough to get a decent specfile.
cd $work
@@ -32,8 +32,6 @@ cd -
# Create and populate the SOURCES directory.
mkdir -p SOURCES
(cd $mytmp; tar cvfz ../SOURCES/cloudfs-0.5.tgz cloudfs-0.5)
-cp scripts/volfilter.py SOURCES/
-cp scripts/cloudfs SOURCES/
# Create and populate the SPECS directory.
mkdir -p SPECS
diff --git a/pkg/cloudfs.spec.in b/pkg/cloudfs.spec.in
index 30c8a3f..1397438 100644
--- a/pkg/cloudfs.spec.in
+++ b/pkg/cloudfs.spec.in
@@ -17,7 +17,7 @@ URL: http://cloudfs.org
 Source0: http://cloudfs.org/dist/0.5/cloudfs-0.5.tgz
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root
-Requires: glusterfs >= 3.1.1
+Requires: glusterfs = 3.1.2
Requires: openssl
Requires: python
BuildRequires: glusterfs-devel >= 3.1.1
diff --git a/pkg/configure.ac b/pkg/configure.ac
index bc64915..8d3e2c0 100644
--- a/pkg/configure.ac
+++ b/pkg/configure.ac
@@ -65,7 +65,7 @@ if test "x${have_spinlock}" = "xyes"; then
AC_DEFINE(HAVE_SPINLOCK, 1, [define if found spinlock])
fi
-GLUSTER_VERSION=3.1.1
+GLUSTER_VERSION=3.1.2
GF_HOST_OS=""
GF_LDFLAGS="-rdynamic"
GF_HOST_OS="GF_LINUX_HOST_OS"
commit c03e1fec8bb8975b31d620cf1798df86d0ce5108
Author: Jeff Darcy <jdarcy@redhat.com>
Date: Thu Feb 3 21:58:28 2011 -0500
Be even more conservative about performance translators.
diff --git a/scripts/cloudfs b/scripts/cloudfs
index 4b6615c..ed521f1 100755
--- a/scripts/cloudfs
+++ b/scripts/cloudfs
@@ -50,6 +50,14 @@ glusterd_dirs = [
"/etc/glusterd" # Gluster
]
+# These are incompatible with crypt in various ways.
+bad_translators = [
+ "performance/quick-read",
+ "performance/read-ahead",
+ "performance/write-behind",
+ "performance/io-cache"
+]
+
def copy_stack (old_xl,suffix,recursive=False):
if recursive:
new_name = old_xl.name + "-" + suffix
@@ -179,17 +187,17 @@ def do_init_crypt ():
graph, last = volfilter.load(vfname+".save")
opts = { "key": sys.argv[3] }
to_do = [xl for xl in graph.itervalues()
- if xl.type == "performance/quick-read"]
+ if xl.type in bad_translators]
for td in to_do:
volfilter.delete(graph,td)
to_do = [xl for xl in graph.itervalues()
- if xl.type == "performance/write-behind"]
+ if xl.type == "cluster/dht"]
if to_do:
+ # Nice to push as close to dht as we can.
for td in to_do:
- # Nice to push below io-stats etc. if we can
volfilter.push_filter(graph,td,"encryption/crypt",opts)
else:
- # Might as well push it on top.
+ # Push on top if all else fails.
volfilter.push_filter(graph,last,"encryption/crypt",opts)
volfilter.generate(graph,last,file(vfname,"w"))
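The selection logic in the patched do_init_crypt above can be illustrated standalone. This sketch uses a plain dict mapping translator name to type as a stand-in for the real volfilter graph; the translator names are hypothetical:

```python
# Translator types incompatible with encryption/crypt, as listed in the patch.
bad_translators = [
    "performance/quick-read",
    "performance/read-ahead",
    "performance/write-behind",
    "performance/io-cache",
]

# Stand-in for the parsed volfile graph: translator name -> type.
graph = {
    "vol-quick-read": "performance/quick-read",
    "vol-dht": "cluster/dht",
    "vol-client-0": "protocol/client",
}

# First pass: drop every translator whose type is incompatible with crypt.
for name in [n for n, t in graph.items() if t in bad_translators]:
    del graph[name]

# Second pass: find cluster/dht nodes, the preferred spot to push crypt;
# if none exist, the script falls back to pushing it on top of the graph.
dht_nodes = [n for n, t in graph.items() if t == "cluster/dht"]

print(sorted(graph))  # ['vol-client-0', 'vol-dht']
print(dht_nodes)      # ['vol-dht']
```

The real script then calls volfilter.push_filter on each matching node rather than printing, but the two list comprehensions mirror the patched logic.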