[repo.or.cz] iwhd.git branch master updated: v0.0-331-g8c22e73
by Jim Meyering
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project iwhd.git.
The branch, master has been updated
via 8c22e73ad9f889f528a0acb7d761bf15d3d91c95 (commit)
from fa248d5fdb71e0c52b0257c59e2c28a9dd82e451 (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------
http://repo.or.cz/w/iwhd.git/commit/8c22e73ad9f889f528a0acb7d761bf15d3d91c95
commit 8c22e73ad9f889f528a0acb7d761bf15d3d91c95
Author: Jim Meyering <meyering@redhat.com>
Date: Thu Feb 10 14:45:48 2011 +0100
doc: add to NEWS
* NEWS: Update
diff --git a/NEWS b/NEWS
index e69de29..c72f7d9 100644
--- a/NEWS
+++ b/NEWS
@@ -0,0 +1,25 @@
+iwhd NEWS -*- outline -*-
+
+* Noteworthy changes in release ?.? (????-??-??) [?]
+
+** Bug fixes
+
+ not itemized, this time
+
+** New features
+
+ new option: --autostart (-a) to automatically start back-end services
+
+** New APIs
+
+ Change the primary provider to P (an existing provider name):
+ curl -X PUT http://host:$port/_providers/P/_set_primary
+
+ Get primary provider name:
+ http://host:$port/_providers/_primary
+
+** Infrastructure
+
+ use gnulib
+
+ use libgc for garbage collection
-----------------------------------------------------------------------
Summary of changes:
NEWS | 25 +++++++++++++++++++++++++
1 files changed, 25 insertions(+), 0 deletions(-)
repo.or.cz automatic notification. Contact project admin jim@meyering.net
if you want to unsubscribe, or site admin admin@repo.or.cz if you receive
no reply.
--
iwhd.git ("image warehouse daemon")
[repo.or.cz] iwhd.git branch master updated: v0.0-330-gfa248d5
by Jim Meyering
The branch, master has been updated
via fa248d5fdb71e0c52b0257c59e2c28a9dd82e451 (commit)
via e41b7a771670ffdd08bcfe0550810632c6a466de (commit)
via f47eb186d1fb2d2fc6b7698427295f437a39c83b (commit)
via e598b6675690fb79fc9c74ffb11be97f8f258421 (commit)
via 80a978acd30fc420ce4d4139caa955d43ec04ac7 (commit)
via e73b7221f26dc46b07575a18e207836eaed6d7e7 (commit)
via 1f415f5dfaa403b3ef86945806fb93f24335bc20 (commit)
via 9b8036c2cdae166c532a6c510017f2de485e489c (commit)
via 0a5ec6af13b82ad1dd4b1314f98f209660820d06 (commit)
via facb15d171c3d99112291511137d92e300698950 (commit)
via 0b0c8f73f794cf6c9b5427013aacda31e6ed4a54 (commit)
via 16dd48f463abbb2905afe5b74f4905932eb85ca4 (commit)
via fff9a6a6445c089b2a8f557176d89819aac2a3ec (commit)
via ba44b876ad572f2861114a0ba01124c6611a9da8 (commit)
via 33dea68259d6ea08253163373f4c36400f6535ca (commit)
via 8eccf807a16213425c463630b9bdae0c6734ee52 (commit)
via f08ff1ceab77a8e2e06d1170b4a46fb47920d29b (commit)
via 549b989ddd8b189cb3b798d602ee30502aea473a (commit)
via 9414f266a267f5e8a1860ef87ad2a59a3022ff24 (commit)
via ac3e583f5cf58a38f261d12abba558e40b1e7873 (commit)
via b247691b9ea2f7dacbcf1edbedef28206eb2a722 (commit)
via d4de957c8a10b97952b18c705369f305393b6b16 (commit)
via a2eee0a0daabc99ff2b2db49948adfdead278c63 (commit)
via eb90a1c8aedc31c4c6587a9c9b79fc3ba6b4fc54 (commit)
via f941f9f0fac64fcadb87dec827964e0ca3f28316 (commit)
via ef8b8713dd64dce7c32a9b3a3eca2a2c1e1ff7ef (commit)
via d2ab5e735f780571599111bac2cf726c2f9ae62b (commit)
via f5febefdfefc5ebff68388e4899d690d55d83a72 (commit)
via eb66edbe487f746d8bb6c244973bd86bb782a34a (commit)
via a7de76399efe41f1040259ade74837d2eb8736c3 (commit)
via d1aa322fb7750dd52d893f73d499f4f0e27f0c06 (commit)
via f0877f6496e4c7f1df2834bb01681161f379a4a0 (commit)
via 7ec84dde8be9610f33c46b3fbd9dc5607f13510b (commit)
via b98c47711c707bdd25f3d8d08f725321e25924f2 (commit)
via cee9beecc22c373814a7a8e56b2860eacd3a8acd (commit)
via 9e059c3626429c46b6990bc1892605fc7f3a5740 (commit)
via 44ee9894f3b04e3d8baa65b2579d5ca938af6e7e (commit)
via 2e9196c2628eade109cc2bcffb695f3575c7c067 (commit)
via a04b76ca821f27765659fa52a96be620fb8fc431 (commit)
via 8187979f677652a5c617dd435cb6644ad360a9e5 (commit)
via d209a1391ecf2737693f061b8401513599806515 (commit)
via 65df8ac6ad67633bbf42171dd40bfc132368459f (commit)
via e2ecc0a75108543bcde4e3509202a5ac511ef1fe (commit)
via 530db87554e7c801813dce2f4cbb0108334c21de (commit)
via 5685b973f295453871778372a6d1bad9f1b75750 (commit)
from 186b8eca47f80c8cb97c429bcdb7594028880fa2 (commit)
- Log -----------------------------------------------------------------
http://repo.or.cz/w/iwhd.git/commit/fa248d5fdb71e0c52b0257c59e2c28a9dd82e451
commit fa248d5fdb71e0c52b0257c59e2c28a9dd82e451
Author: Jim Meyering <meyering@redhat.com>
Date: Tue Feb 8 16:00:37 2011 +0100
maint: speed up configure
* configure.ac (gl_ASSERT_NO_GNULIB_POSIXCHECK): Speed up normal
configure runs.
diff --git a/configure.ac b/configure.ac
index 3fe734d..1d59e3f 100644
--- a/configure.ac
+++ b/configure.ac
@@ -31,6 +31,12 @@ AC_PROG_CXX
AC_PROG_CC
AM_PROG_CC_C_O
gl_EARLY
+
+# Maintainer note - comment this line out if you plan to rerun
+# GNULIB_POSIXCHECK testing to see if M4 should be using more modules.
+# Leave it uncommented for normal releases, for faster ./configure.
+gl_ASSERT_NO_GNULIB_POSIXCHECK
+
AC_PROG_RANLIB
AC_TYPE_UINT64_T
http://repo.or.cz/w/iwhd.git/commit/e41b7a771670ffdd08bcfe0550810632c6a466de
commit e41b7a771670ffdd08bcfe0550810632c6a466de
Author: Jim Meyering <meyering@redhat.com>
Date: Tue Feb 8 15:56:11 2011 +0100
maint: build via make CFLAGS='-DGNULIB_POSIXCHECK=1'; address warnings
* bootstrap.conf: Add most of the recommended modules:
calloc-posix close dup2 mkstemp pipe-posix strstr strtok_r unlink
diff --git a/bootstrap.conf b/bootstrap.conf
index 0e03de7..eb7ab94 100644
--- a/bootstrap.conf
+++ b/bootstrap.conf
@@ -19,9 +19,12 @@
gnulib_modules='
announce-gen
c-ctype
+calloc-posix
+close
closeout
dirname
do-release-commit-and-tag
+dup2
error
getopt-gnu
gettext-h
@@ -34,6 +37,8 @@ hash-pjw
malloc-gnu
maintainer-makefile
manywarnings
+mkstemp
+pipe-posix
progname
quotearg
realloc-gnu
@@ -43,11 +48,14 @@ stdlib
stpcpy
strerror
string
+strstr
+strtok_r
strtol
strtoul
strtoull
strtoumax
unistd
+unlink
unlocked-io
update-copyright
useless-if-before-free
diff --git a/gnulib-tests/.gitignore b/gnulib-tests/.gitignore
index 9c862bc..259f316 100644
--- a/gnulib-tests/.gitignore
+++ b/gnulib-tests/.gitignore
@@ -1,24 +1,41 @@
/alloca.h
/alloca.in.h
+/anytostr.c
+/asnprintf.c
/binary-io.h
/dup2.c
/fcntl.h
/fcntl.in.h
+/float+.h
+/float.h
+/float.in.h
/getpagesize.c
/gnulib.mk
/ignore-value.h
+/imaxtostr.c
/init.sh
+/inttostr.c
+/inttostr.h
/lstat.c
/macros.h
/malloca.c
/malloca.h
/malloca.valgrind
+/offtostr.c
/open.c
/pathmax.h
+/printf-args.c
+/printf-args.h
+/printf-parse.c
+/printf-parse.h
+/priv-set.c
+/priv-set.h
/putenv.c
/same-inode.h
/setenv.c
/signature.h
+/size_max.h
+/snprintf.c
/stat.c
/stdio-write.c
/stdio.h
@@ -30,6 +47,7 @@
/test-alloca-opt.c
/test-binary-io.c
/test-binary-io.sh
+/test-bitrotate.c
/test-c-ctype.c
/test-dirname.c
/test-dup2.c
@@ -41,7 +59,10 @@
/test-getopt.c
/test-getopt.h
/test-getopt_long.h
+/test-gettimeofday.c
+/test-hash.c
/test-ignore-value.c
+/test-inttostr.c
/test-inttypes.c
/test-lstat.c
/test-lstat.h
@@ -57,10 +78,13 @@
/test-memchr.c
/test-open.c
/test-open.h
+/test-pipe.c
+/test-priv-set.c
/test-quotearg-simple.c
/test-quotearg.h
/test-realloc-gnu.c
/test-setenv.c
+/test-snprintf.c
/test-stat.c
/test-stat.h
/test-stdbool.c
@@ -71,14 +95,19 @@
/test-strerror.c
/test-string.c
/test-strnlen.c
+/test-strstr.c
/test-symlink.c
/test-symlink.h
/test-sys_stat.c
+/test-sys_time.c
/test-sys_wait.h
/test-time.c
/test-unistd.c
+/test-unlink.c
+/test-unlink.h
/test-unsetenv.c
/test-update-copyright.sh
+/test-vasnprintf.c
/test-vc-list-files-cvs.sh
/test-vc-list-files-git.sh
/test-verify.c
@@ -86,6 +115,7 @@
/test-version-etc.c
/test-version-etc.sh
/test-wchar.c
+/test-wctype-h.c
/test-wctype.c
/test-xalloc-die.c
/test-xalloc-die.sh
@@ -96,8 +126,15 @@
/test-xstrtoumax.sh
/time.h
/time.in.h
+/uinttostr.c
+/umaxtostr.c
+/unlinkdir.c
+/unlinkdir.h
/unsetenv.c
+/vasnprintf.c
+/vasnprintf.h
/wctob.c
+/xsize.h
/zerosize-ptr.h
alloca.h
alloca.in.h
diff --git a/lib/.gitignore b/lib/.gitignore
index dc5f382..af6d24d 100644
--- a/lib/.gitignore
+++ b/lib/.gitignore
@@ -5,9 +5,13 @@
/c++defs.h
/c-ctype.c
/c-ctype.h
+/calloc.c
/charset.alias
+/close-hook.c
+/close-hook.h
/close-stream.c
/close-stream.h
+/close.c
/closeout.c
/closeout.h
/config.charset
@@ -15,12 +19,14 @@
/dirname-lgpl.c
/dirname.c
/dirname.h
+/dup2.c
/errno.h
/errno.in.h
/error.c
/error.h
/exitfail.c
/exitfail.h
+/fclose.c
/fpending.c
/fpending.h
/getopt.c
@@ -29,6 +35,7 @@
/getopt1.c
/getopt_int.h
/gettext.h
+/gettimeofday.c
/gnulib.mk
/hash-pjw.c
/hash-pjw.h
@@ -41,11 +48,14 @@
/libiwhd.a
/localcharset.c
/localcharset.h
+/lstat.c
/malloc.c
/mbrtowc.c
/mbsinit.c
/memchr.c
/memchr.valgrind
+/mkstemp.c
+/pipe.c
/progname.c
/progname.h
/quotearg.c
@@ -55,6 +65,7 @@
/ref-add.sin
/ref-del.sed
/ref-del.sin
+/stat.c
/stdarg.h
/stdarg.in.h
/stdbool.h
@@ -63,9 +74,13 @@
/stddef.in.h
/stdint.h
/stdint.in.h
+/stdio-write.c
+/stdio.h
+/stdio.in.h
/stdlib.h
/stdlib.in.h
/stpcpy.c
+/str-two-way.h
/streq.h
/strerror.c
/string.h
@@ -73,16 +88,27 @@
/stripslash.c
/strndup.c
/strnlen.c
+/strstr.c
/strtoimax.c
+/strtok_r.c
/strtol.c
/strtoll.c
/strtoul.c
/strtoull.c
/strtoumax.c
/sys
+/sys_stat.h
+/sys_stat.in.h
+/sys_time.h
+/sys_time.in.h
/sys_wait.in.h
+/tempname.c
+/tempname.h
+/time.h
+/time.in.h
/unistd.h
/unistd.in.h
+/unlink.c
/unlocked-io.h
/verify.h
/version-etc-fsf.c
http://repo.or.cz/w/iwhd.git/commit/f47eb186d1fb2d2fc6b7698427295f437a39c83b
commit f47eb186d1fb2d2fc6b7698427295f437a39c83b
Author: Jim Meyering <meyering@redhat.com>
Date: Tue Jan 25 11:12:05 2011 +0100
tests: reenable excluded gnulib test; run gnulib-tests first
* bootstrap.conf: Don't disable malloca-test. It has been fixed
so it is no longer so slow.
* gnulib: Update to latest.
* Makefile.am (SUBDIRS): Run gnulib-tests before ours,
so the results of ours aren't displaced as gnulib's scroll by.
diff --git a/bootstrap.conf b/bootstrap.conf
index 8194f49..0e03de7 100644
--- a/bootstrap.conf
+++ b/bootstrap.conf
@@ -15,12 +15,6 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
-# The malloca test is fine, but inordinately slow.
-# The hash one would need linking help to get -lgc.
-avoided_gnulib_modules='
- --avoid=malloca-tests
-'
-
# gnulib modules used by this package.
gnulib_modules='
announce-gen
diff --git a/gnulib b/gnulib
index 6f0680e..d9f5da6 160000
--- a/gnulib
+++ b/gnulib
@@ -1 +1 @@
-Subproject commit 6f0680eb29a1737d704a1df26aafc00490cd34d8
+Subproject commit d9f5da66f7c95f84b6b28b17cfa4c5248ad2b591
http://repo.or.cz/w/iwhd.git/commit/e598b6675690fb79fc9c74ffb11be97f8f258421
commit e598b6675690fb79fc9c74ffb11be97f8f258421
Author: Jim Meyering <meyering@redhat.com>
Date: Fri Jan 21 16:05:22 2011 +0100
don't pass NULL buffer to formatter in provider list generation
* rest.c (prov_list_generator): Pre-allocate a reasonably-large
buffer, rather than starting with a 0-length buffer and relying on
the ~doubling/realloc loop to make the buffer large enough.
diff --git a/rest.c b/rest.c
index 22d0d25..1fb1aa8 100644
--- a/rest.c
+++ b/rest.c
@@ -1637,6 +1637,14 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
qsort (plist.buf, plist.n_used, sizeof *(plist.buf),
prov_name_compare);
+ // Use a size that is large enough to accommodate the
+ // result of formatting a few providers.
+ // Not important, as long as it's larger than 0.
+ ms->buf_n_alloc = 1024;
+ ms->buf = malloc (ms->buf_n_alloc);
+ if (ms->buf == NULL)
+ return -1;
+
// Emit all provider-related output into memory.
size_t i;
for (i = 0; i < plist.n_used; i++) {
http://repo.or.cz/w/iwhd.git/commit/80a978acd30fc420ce4d4139caa955d43ec04ac7
commit 80a978acd30fc420ce4d4139caa955d43ec04ac7
Author: Jim Meyering <meyering@redhat.com>
Date: Thu Jan 20 14:30:31 2011 +0100
protect remaining uses of prov_hash against concurrent access
* setup.c (hash_get_prov_list): New function.
* setup.h: Declare it.
* replica.c (replicate, replicate_namespace_action): Use
hash_get_prov_list to get all provider pointers at once, and which
locks the hash table before accessing it.
With hash_get_first_prov and hash_get_next_prov that was not possible.
* setup.c (hash_get_first_prov, hash_get_next_prov): Remove functions.
* setup.h: Remove declarations.
diff --git a/replica.c b/replica.c
index 3efe65c..a813d82 100644
--- a/replica.c
+++ b/replica.c
@@ -299,9 +299,16 @@ replicate (const char *url, size_t size, const char *policy, my_state *ms)
sget.func = repl_sget;
sget.ctx = &qctx;
- provider_t *prov;
- for (prov = hash_get_first_prov (); prov;
- prov = hash_get_next_prov (prov)) {
+ size_t n_prov;
+ provider_t **prov_list = hash_get_prov_list (&n_prov);
+ if (prov_list == NULL) {
+ DPRINTF("failed to allocate space for provider list\n");
+ return;
+ }
+
+ size_t i;
+ for (i = 0; i < n_prov; i++) {
+ provider_t *prov = prov_list[i];
if (!strcmp(prov->name, me)) {
continue;
}
@@ -352,9 +359,16 @@ replicate (const char *url, size_t size, const char *policy, my_state *ms)
static void
replicate_namespace_action (const char *name, repl_t action, my_state *ms)
{
- provider_t *prov;
- for (prov = hash_get_first_prov (); prov;
- prov = hash_get_next_prov (prov)) {
+ size_t n_prov;
+ provider_t **prov_list = hash_get_prov_list (&n_prov);
+ if (prov_list == NULL) {
+ DPRINTF("failed to allocate space for provider list\n");
+ return;
+ }
+
+ size_t i;
+ for (i = 0; i < n_prov; i++) {
+ provider_t *prov = prov_list[i];
if (!strcmp(prov->name, me)) {
continue;
}
diff --git a/setup.c b/setup.c
index 6f6d4cf..fc83c0c 100644
--- a/setup.c
+++ b/setup.c
@@ -640,18 +640,6 @@ get_provider_value (const provider_t *prov, const char *fname)
return p ? p->val : NULL;
}
-provider_t *
-hash_get_first_prov (void)
-{
- return hash_get_first (prov_hash);
-}
-
-provider_t *
-hash_get_next_prov (void *p)
-{
- return hash_get_next (prov_hash, p);
-}
-
/* Apply function FN to each provider.
If FN returns 0, stop early and return -1.
Otherwise, return 0 after processing the last provider. */
@@ -673,6 +661,24 @@ prov_do_for_each (prov_iterator_fn fn, void *client_data)
return err;
}
+/* Allocate an array, P, just large enough to hold all provider pointers
+ and fill it in. Set *N to the number of providers and return P.
+ Upon allocation failure return NULL. */
+provider_t **
+hash_get_prov_list (size_t *n)
+{
+ pthread_mutex_lock (&provider_hash_table_lock);
+ *n = hash_get_n_entries (prov_hash);
+ provider_t **p = (xalloc_oversized (*n, sizeof *p)
+ ? NULL : malloc (*n * sizeof *p));
+ if (p) {
+ size_t n_actual = hash_get_entries (prov_hash, (void **) p, *n);
+ assert (n_actual == *n);
+ }
+ pthread_mutex_unlock (&provider_hash_table_lock);
+ return p;
+}
+
void
update_provider (const char *provname, const char *username,
const char *password)
diff --git a/setup.h b/setup.h
index c8273e2..b0b6882 100644
--- a/setup.h
+++ b/setup.h
@@ -55,11 +55,9 @@ int add_provider (Hash_table *h);
provider_t *get_main_provider (void);
void set_main_provider (provider_t *prov);
-provider_t *hash_get_first_prov (void);
-provider_t *hash_get_next_prov (void *p);
-
typedef int (*prov_iterator_fn) (provider_t *, void *);
int prov_do_for_each (prov_iterator_fn fn, void *client_data);
+provider_t **hash_get_prov_list (size_t *n);
struct kv_pair
{
http://repo.or.cz/w/iwhd.git/commit/e73b7221f26dc46b07575a18e207836eaed6d7e7
commit e73b7221f26dc46b07575a18e207836eaed6d7e7
Author: Jim Meyering <meyering@redhat.com>
Date: Thu Jan 20 14:29:04 2011 +0100
remove dead code
* rest.c (prov_list_generator): Don't store into unused member,
ms->prov_iter.
* state_defs.h (_my_state) [prov_iter]: Remove now-unused member.
diff --git a/rest.c b/rest.c
index 68d3371..22d0d25 100644
--- a/rest.c
+++ b/rest.c
@@ -1614,7 +1614,6 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
if (!ms->gen_ctx) {
return -1;
}
- ms->prov_iter = hash_get_first_prov ();
size_t len = tmpl_prov_header(ms->gen_ctx);
if (!len) {
return -1;
diff --git a/state_defs.h b/state_defs.h
index 2ed0116..21ecbcd 100644
--- a/state_defs.h
+++ b/state_defs.h
@@ -72,7 +72,6 @@ typedef struct _my_state {
pthread_t cache_th;
/* for bucket/object/provider list generators */
tmpl_ctx_t *gen_ctx;
- void *prov_iter;
char *buf;
size_t buf_n_alloc;
size_t buf_n_used;
http://repo.or.cz/w/iwhd.git/commit/1f415f5dfaa403b3ef86945806fb93f24335bc20
commit 1f415f5dfaa403b3ef86945806fb93f24335bc20
Author: Jim Meyering <meyering@redhat.com>
Date: Wed Jan 19 18:16:19 2011 +0100
use symbolic names in place of more hard-coded constants
* rest.c (POST_BUF_SIZE, CB_BLOCK_SIZE): Define constants.
(proxy_get_data): Use them in place of hard-coded constants.
(proxy_query, proxy_list_objs, proxy_api_root): Likewise.
(control_api_root, proxy_bucket_post, show_parts): Likewise.
(proxy_object_post, proxy_list_provs, proxy_add_prov): Likewise.
Reported by Jeff Darcy.
diff --git a/rest.c b/rest.c
index 2d32623..68d3371 100644
--- a/rest.c
+++ b/rest.c
@@ -58,6 +58,13 @@
#define MY_MHD_FLAGS MHD_USE_THREAD_PER_CONNECTION
#endif
+/* Buffer size for MHD_create_post_processor, used to buffer and parse keys. */
+enum { POST_BUF_SIZE = 4096 };
+
+/* Upper bound on the block size used when microhttpd queries
+ the callback function (i.e., I/O buffer size). */
+enum { CB_BLOCK_SIZE = 64 * 1024 };
+
#define gc_register_thread() { struct GC_stack_base gc_stack_base;
@@ -326,8 +333,8 @@ proxy_get_data (void *cctx, struct MHD_Connection *conn, const char *url,
rc = pipe_cons_wait_init(&ms->pipe);
ms->rc = (rc == 0) ? MHD_HTTP_OK : MHD_HTTP_INTERNAL_SERVER_ERROR;
- resp = MHD_create_response_from_callback(
- MHD_SIZE_UNKNOWN, 65536, proxy_get_cons, pp, child_closer);
+ resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
+ CB_BLOCK_SIZE, proxy_get_cons, pp, child_closer);
if (!resp) {
fprintf(stderr,"MHD_crfc failed\n");
if (pp2) {
@@ -780,7 +787,7 @@ proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
- ms->post = MHD_create_post_processor(conn,4096,
+ ms->post = MHD_create_post_processor(conn, POST_BUF_SIZE,
query_iterator,ms);
if (!ms->post)
return MHD_NO;
@@ -814,7 +821,7 @@ proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
}
ms->query = meta_query_new(ms->bucket,NULL,ms->pipe.data_ptr);
resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
- 65536, proxy_query_func, ms, simple_closer);
+ CB_BLOCK_SIZE, proxy_query_func, ms, simple_closer);
if (!resp) {
fprintf(stderr,"MHD_crfc failed\n");
simple_closer(ms);
@@ -845,7 +852,7 @@ proxy_list_objs (void *cctx, struct MHD_Connection *conn, const char *url,
ms->query = meta_query_new((char *)ms->bucket,NULL,NULL);
resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
- 65536, proxy_query_func, ms, simple_closer);
+ CB_BLOCK_SIZE, proxy_query_func, ms, simple_closer);
if (!resp) {
fprintf(stderr,"MHD_crfc failed\n");
simple_closer(ms);
@@ -1012,7 +1019,7 @@ proxy_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
return MHD_NO;
}
resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
- 65536, root_blob_generator, ms, simple_closer);
+ CB_BLOCK_SIZE, root_blob_generator, ms, simple_closer);
if (!resp) {
return MHD_NO;
}
@@ -1159,7 +1166,7 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
- ms->post = MHD_create_post_processor(conn,4096,
+ ms->post = MHD_create_post_processor(conn, POST_BUF_SIZE,
post_iterator,ms->dict);
if (!ms->post)
return MHD_NO;
@@ -1229,7 +1236,7 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
- ms->post = MHD_create_post_processor(conn,4096,
+ ms->post = MHD_create_post_processor(conn, POST_BUF_SIZE,
post_iterator,ms->dict);
if (!ms->post)
return MHD_NO;
@@ -1401,7 +1408,7 @@ show_parts (struct MHD_Connection *conn, my_state *ms)
}
resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
- 65536, parts_callback, ms, simple_closer);
+ CB_BLOCK_SIZE, parts_callback, ms, simple_closer);
if (!resp) {
fprintf(stderr,"MHD_crfc failed\n");
simple_closer(ms);
@@ -1435,7 +1442,7 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
- ms->post = MHD_create_post_processor(conn,4096,
+ ms->post = MHD_create_post_processor(conn, POST_BUF_SIZE,
post_iterator,ms->dict);
if (!ms->post)
return MHD_NO;
@@ -1677,7 +1684,7 @@ proxy_list_provs (void *cctx, struct MHD_Connection *conn, const char *url,
(void)data_size;
resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
- 65536, prov_list_generator, ms, simple_closer);
+ CB_BLOCK_SIZE, prov_list_generator, ms, simple_closer);
if (!resp) {
fprintf(stderr,"MHD_crfd failed\n");
simple_closer(ms);
@@ -1869,7 +1876,7 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
- ms->post = MHD_create_post_processor(conn,4096,
+ ms->post = MHD_create_post_processor(conn, POST_BUF_SIZE,
prov_iterator,ms->dict);
if (!ms->post)
return MHD_NO;
http://repo.or.cz/w/iwhd.git/commit/9b8036c2cdae166c532a6c510017f2de485e489c
commit 9b8036c2cdae166c532a6c510017f2de485e489c
Author: Jim Meyering <meyering@redhat.com>
Date: Wed Jan 19 17:41:18 2011 +0100
use SMALL_PRIME in place of literal 13 (initial hash table size)
* setup.h (SMALL_PRIME): Define.
* setup.c: s/13/SMALL_PRIME/
* rest.c: Likewise.
Reported by Jeff Darcy.
diff --git a/rest.c b/rest.c
index 06c4338..2d32623 100644
--- a/rest.c
+++ b/rest.c
@@ -1155,7 +1155,8 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
+ ms->dict = hash_initialize(SMALL_PRIME, NULL, kv_hash,
+ kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
@@ -1224,7 +1225,8 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
+ ms->dict = hash_initialize(SMALL_PRIME, NULL, kv_hash,
+ kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
@@ -1429,7 +1431,8 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
+ ms->dict = hash_initialize(SMALL_PRIME, NULL, kv_hash,
+ kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
@@ -1862,7 +1865,8 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
+ ms->dict = hash_initialize(SMALL_PRIME, NULL, kv_hash,
+ kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
diff --git a/setup.c b/setup.c
index 3f47fca..6f6d4cf 100644
--- a/setup.c
+++ b/setup.c
@@ -330,7 +330,8 @@ convert_provider (int i, provider_t *out)
out->func_tbl = &bad_func_tbl;
}
- out->attrs = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
+ out->attrs = hash_initialize(SMALL_PRIME, NULL, kv_hash,
+ kv_compare, NULL);
iter = json_object_iter(server);
while (iter) {
key = json_object_iter_key(iter);
@@ -413,7 +414,7 @@ add_provider (Hash_table *h)
else
prov->func_tbl = &bad_func_tbl;
- prov->attrs = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
+ prov->attrs = hash_initialize(SMALL_PRIME, NULL, kv_hash, kv_compare, NULL);
if (prov->attrs == NULL) {
goto fail;
}
@@ -474,7 +475,7 @@ parse_config_inner (void)
/* Everything looks OK. */
printf("%u replication servers defined\n",nservers-1);
pthread_mutex_init(&provider_hash_table_lock, NULL);
- prov_hash = hash_initialize (13, NULL, hash_provider,
+ prov_hash = hash_initialize (SMALL_PRIME, NULL, hash_provider,
compare_providers, NULL);
if (!prov_hash) {
error(0,0,"could not allocate provider hash");
diff --git a/setup.h b/setup.h
index fcc983c..c8273e2 100644
--- a/setup.h
+++ b/setup.h
@@ -69,6 +69,8 @@ struct kv_pair
#define STREQ(a, b) (strcmp (a, b) == 0)
+enum { SMALL_PRIME = 13 };
+
static inline size_t
kv_hash (void const *x, size_t table_size)
{
http://repo.or.cz/w/iwhd.git/commit/0a5ec6af13b82ad1dd4b1314f98f209660820d06
commit 0a5ec6af13b82ad1dd4b1314f98f209660820d06
Author: Jim Meyering <meyering@redhat.com>
Date: Wed Jan 19 17:15:38 2011 +0100
build: make configure fail if gc-devel (aka libgc-dev) is not installed
* configure.ac: Check for <gc.h>.
Reported by Jeff Darcy.
diff --git a/configure.ac b/configure.ac
index 63f6f8f..3fe734d 100644
--- a/configure.ac
+++ b/configure.ac
@@ -74,6 +74,9 @@ AC_CHECK_LIB([xml2], [xmlInitParser],
[AC_MSG_ERROR([Missing required XML2 lib])])
AC_SUBST([XML2_LIB])
+AC_CHECK_HEADER([gc.h], ,
+ [AC_MSG_ERROR([Missing GC development library: gc-devel or libgc-dev])])
+
# from http://www.gnu.org/software/autoconf-archive/
AX_BOOST_BASE
AX_BOOST_SYSTEM
http://repo.or.cz/w/iwhd.git/commit/facb15d171c3d99112291511137d92e300698950
commit facb15d171c3d99112291511137d92e300698950
Author: Jim Meyering <meyering@redhat.com>
Date: Wed Jan 19 17:08:42 2011 +0100
plug a potential leak
This tells GC that when finalizing an "ma" pointer, it must
also release a few more members of that structure.
* rest.c (destroy_state_postprocessor): Also free ms->query
and ms->aquery.
(gc_register_finalizer_ms): Update comment to reflect reality.
diff --git a/rest.c b/rest.c
index 072e1e6..06c4338 100644
--- a/rest.c
+++ b/rest.c
@@ -746,12 +746,17 @@ destroy_state_postprocessor (void *ms_v, void *client_data)
MHD_destroy_post_processor (ms->post);
if (ms->dict)
hash_free (ms->dict);
+ if (ms->query)
+ meta_query_stop (ms->query);
+ if (ms->aquery)
+ meta_query_stop (ms->aquery);
}
/* Tell the garbage collector that when freeing MS, it must invoke
destroy_state_postprocessor(MS). This is required for each ms->post
since they're allocated via MHD_create_post_processor, which is
- in a separate library into which the GC has no view. */
+ in a separate library into which the GC has no view.
+ Likewise for ms->dict, ms->query and ms->aquery. */
static void
gc_register_finalizer_ms(void *ms)
{
http://repo.or.cz/w/iwhd.git/commit/0b0c8f73f794cf6c9b5427013aacda31e6ed4a54
commit 0b0c8f73f794cf6c9b5427013aacda31e6ed4a54
Author: Jim Meyering <meyering@redhat.com>
Date: Mon Jan 17 21:58:38 2011 +0100
remove gnulib hash.c diff hack
Rather than compiling hash.c differently, treat it more like
part of the library that it is, and instead arrange to free
things via our GC finalize handler.
Remove kv_free and all uses.
* setup.h (kv_free): Remove definition. Now unused.
* gl/lib/hash.c.diff: Remove file. Not needed.
* rest.c (destroy_state_postprocessor): Also call hash_free.
* setup.c: Remove uses of kv_free. No need.
diff --git a/bootstrap.conf b/bootstrap.conf
index 8a5972d..8194f49 100644
--- a/bootstrap.conf
+++ b/bootstrap.conf
@@ -18,7 +18,7 @@
# The malloca test is fine, but inordinately slow.
# The hash one would need linking help to get -lgc.
avoided_gnulib_modules='
- --avoid=hash-tests
+ --avoid=malloca-tests
'
# gnulib modules used by this package.
diff --git a/gl/lib/hash.c.diff b/gl/lib/hash.c.diff
deleted file mode 100644
index 517d4e0..0000000
--- a/gl/lib/hash.c.diff
+++ /dev/null
@@ -1,13 +0,0 @@
-diff --git a/lib/hash.c b/lib/hash.c
-index f3de2aa..27f080e 100644
---- a/lib/hash.c
-+++ b/lib/hash.c
-@@ -43,6 +43,8 @@
- # endif
- #endif
-
-+#include "../gc-wrap.h"
-+
- struct hash_entry
- {
- void *data;
diff --git a/meta.cpp b/meta.cpp
index 404d0d5..110a7c6 100644
--- a/meta.cpp
+++ b/meta.cpp
@@ -116,7 +116,7 @@ public:
getter_t getter;
};
-RepoMeta *it;
+static RepoMeta *it;
RepoMeta::RepoMeta ()
{
diff --git a/rest.c b/rest.c
index 3f9daa0..072e1e6 100644
--- a/rest.c
+++ b/rest.c
@@ -744,6 +744,8 @@ destroy_state_postprocessor (void *ms_v, void *client_data)
my_state *ms = ms_v;
if (ms->post)
MHD_destroy_post_processor (ms->post);
+ if (ms->dict)
+ hash_free (ms->dict);
}
/* Tell the garbage collector that when freeing MS, it must invoke
@@ -1148,8 +1150,7 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = hash_initialize(13, NULL,
- kv_hash, kv_compare, kv_free);
+ ms->dict = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
@@ -1218,8 +1219,7 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = hash_initialize(13, NULL,
- kv_hash, kv_compare, kv_free);
+ ms->dict = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
@@ -1424,8 +1424,7 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = hash_initialize(13, NULL,
- kv_hash, kv_compare, kv_free);
+ ms->dict = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
@@ -1858,8 +1857,7 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = hash_initialize(13, NULL,
- kv_hash, kv_compare, kv_free);
+ ms->dict = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
if (!ms->dict)
return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
diff --git a/setup.c b/setup.c
index e2addb0..3f47fca 100644
--- a/setup.c
+++ b/setup.c
@@ -330,7 +330,7 @@ convert_provider (int i, provider_t *out)
out->func_tbl = &bad_func_tbl;
}
- out->attrs = hash_initialize(13, NULL, kv_hash, kv_compare, kv_free);
+ out->attrs = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
iter = json_object_iter(server);
while (iter) {
key = json_object_iter_key(iter);
@@ -413,7 +413,7 @@ add_provider (Hash_table *h)
else
prov->func_tbl = &bad_func_tbl;
- prov->attrs = hash_initialize(13, NULL, kv_hash, kv_compare, kv_free);
+ prov->attrs = hash_initialize(13, NULL, kv_hash, kv_compare, NULL);
if (prov->attrs == NULL) {
goto fail;
}
diff --git a/setup.h b/setup.h
index 68440a0..fcc983c 100644
--- a/setup.h
+++ b/setup.h
@@ -84,15 +84,6 @@ kv_compare (void const *x, void const *y)
return STREQ (u->key, v->key) ? true : false;
}
-static inline void
-kv_free (void *x)
-{
- struct kv_pair *p = x;
- free (p->key);
- free (p->val);
- free (p);
-}
-
static inline int
kv_hash_insert_new (Hash_table *ht, char *k, char *v)
{
http://repo.or.cz/w/iwhd.git/commit/16dd48f463abbb2905afe5b74f4905932eb85ca4
commit 16dd48f463abbb2905afe5b74f4905932eb85ca4
Author: Jim Meyering <meyering@redhat.com>
Date: Tue Jan 18 15:13:29 2011 +0100
tests: also check JSON provider lists
* t/init.cfg (emit_provider): Add parameter, is-last, so that
we know whether to print the final comma for JSON output.
* t/basic: Update emit_trivial_provider_list use.
* t/provider: Check both XML and JSON formats.
diff --git a/t/basic b/t/basic
index 27bbd61..808d62a 100644
--- a/t/basic
+++ b/t/basic
@@ -122,7 +122,7 @@ p_name=primary
# Verify that default provider's information is as expected.
curl http://localhost:$port/_providers > p || fail=1
-emit_trivial_provider_list xml "$p_name" fs '' 0 '' '' > p.exp || fail=1
+emit_trivial_provider_list xml "$p_name" fs '' 0 '' '' is-last > p.exp || fail=1
compare p.exp p || fail=1
for i in xml json; do
diff --git a/t/init.cfg b/t/init.cfg
index 4777954..af9356e 100644
--- a/t/init.cfg
+++ b/t/init.cfg
@@ -65,7 +65,7 @@ emit_provider()
shift 1
case $xml_or_json in xml|json);;
*) echo "invalid xml_or_json $xml_or_json" 1>&2; exit 1;; esac
- case $# in 6);; *) echo "emit_provider: wrong # args" 1>&2; exit 1;; esac
+ case $# in 7);; *) echo "emit_provider: wrong # args" 1>&2; exit 1;; esac
if test $xml_or_json = xml; then
printf \
@@ -76,8 +76,10 @@ emit_provider()
\t\t<username>%s</username>
\t\t<password>%s</password>
\t</provider>
-' "$@"
+' "$1" "$2" "$3" "$4" "$5" "$6"
else
+ comma=,
+ test "$7" = is-last && comma=
printf \
'\t{
\t\t"name": "%s",
@@ -86,8 +88,8 @@ emit_provider()
\t\t"port": %s,
\t\t"username": "%s",
\t\t"password": "%s"
-\t}
-' "$@"
+\t}%s
+' "$1" "$2" "$3" "$4" "$5" "$6" $comma
fi
}
diff --git a/t/provider b/t/provider
index 4f7ce25..82b8e5f 100644
--- a/t/provider
+++ b/t/provider
@@ -47,20 +47,25 @@ for i in $(seq $n); do
sleep 99d; }
done
-# List providers.
-curl http://localhost:$port/_providers > p-list || fail=1
+for z in xml json; do
+ curl_H() { curl -H "Accept: */$z" "$@"; }
+
+ # List providers.
+ curl_H http://localhost:$port/_providers > p-list-$z || fail=1
+
+ # Ensure that each was added:
+ {
+ emit_provider_list_prefix $z
+ for i in $(seq $n); do
+ i=$(expr 1000 + $i)
+ emit_provider $z p-$i s3 localhost 80 u p not-last || fail=1
+ done
+ emit_provider $z primary fs '' 0 '' '' is-last || fail=1
+ emit_provider_list_suffix $z
+ } > p-exp-$z
+ compare p-list-$z p-exp-$z || fail=1
-# Ensure that each was added:
-{
- emit_provider_list_prefix xml
- for i in $(seq $n); do
- i=$(expr 1000 + $i)
- emit_provider xml p-$i s3 localhost 80 u p || fail=1
- done
- emit_provider xml primary fs '' 0 '' '' || fail=1
- emit_provider_list_suffix xml
-} > p-exp
-compare p-list p-exp || fail=1
+done
# ===================================
for i in $(seq $n); do
http://repo.or.cz/w/iwhd.git/commit/fff9a6a6445c089b2a8f557176d89819aac2a3ec
commit fff9a6a6445c089b2a8f557176d89819aac2a3ec
Author: Jim Meyering <meyering(a)redhat.com>
Date: Tue Jan 18 19:16:16 2011 +0100
list providers: avoid syntax error in JSON output
Without this change, we would print "[,\n" at the beginning
of the list of JSON-formatted providers.
* template.c (tmpl_prov_entry): Don't emit the leading ","
on the first entry.
* template.h: Adjust prototype.
* rest.c: Update sole caller.
Spotted by Jeff Darcy.
diff --git a/rest.c b/rest.c
index 486547a..3f9daa0 100644
--- a/rest.c
+++ b/rest.c
@@ -1535,7 +1535,7 @@ prov_fmt (provider_t *prov, void *ms_v)
size_t n_remaining = ms->buf_n_alloc - ms->buf_n_used;
int len = tmpl_prov_entry (ms->buf + ms->buf_n_used,
n_remaining,
- ms->gen_ctx->format->prov_entry,
+ ms->gen_ctx,
prov->name, prov->type,
prov->host, prov->port,
prov->username, prov->password);
diff --git a/template.c b/template.c
index 593bd45..bb0a42a 100644
--- a/template.c
+++ b/template.c
@@ -267,7 +267,7 @@ tmpl_prov_header (tmpl_ctx_t *ctx)
int
tmpl_prov_entry (char *buf, size_t buf_len,
- const char *fmt,
+ tmpl_ctx_t *ctx,
const char *name, const char *type,
const char *host, int port,
const char *user, const char *pass)
@@ -278,8 +278,13 @@ tmpl_prov_entry (char *buf, size_t buf_len,
if (!user) user = "";
if (!pass) pass = "";
+ const char *fmt = ctx->format->prov_entry;
+ if (ctx->index == 0)
+ fmt += ctx->format->z_offset;
int size = snprintf(buf, buf_len, fmt,
name, type, host, port, user, pass);
+ if (0 < size && size < buf_len)
+ ctx->index++;
return size;
}
diff --git a/template.h b/template.h
index e2bb300..b45eb95 100644
--- a/template.h
+++ b/template.h
@@ -53,7 +53,7 @@ size_t tmpl_root_footer (tmpl_ctx_t *ctx);
size_t tmpl_prov_header (tmpl_ctx_t *ctx);
int tmpl_prov_entry (char *buf, size_t buf_len,
- const char *fmt,
+ tmpl_ctx_t *ctx,
const char *name, const char *type,
const char *host, int port,
const char *user, const char *pass);
http://repo.or.cz/w/iwhd.git/commit/ba44b876ad572f2861114a0ba01124c6611a9da8
commit ba44b876ad572f2861114a0ba01124c6611a9da8
Author: Jim Meyering <meyering(a)redhat.com>
Date: Tue Jan 18 13:05:20 2011 +0100
sort provider list on "name"
* rest.c: Sort provider list output on name:
(struct plist): Define.
(prov_get, prov_name_compare): New functions.
(prov_list_generator): Rather than emitting provider XML/JSON
in arbitrary (hash-traversal) order, first gather an array of
provider_t* pointers, sort them, and *then* emit listing.
* t/init.cfg (emit_provider_list_prefix): New helper.
(emit_provider_list_suffix): Likewise.
(emit_trivial_provider_list): Renamed from emit_provider.
(emit_provider): Emit output for a single provider.
I.e., do not emit prefix and suffix.
* t/basic: s/emit_provider/emit_trivial_provider_list/
* t/provider: Compare full output with expected output,
rather than just grepping for a summary.
This is feasible, now that the provider list is sorted.
diff --git a/rest.c b/rest.c
index 290dfd1..486547a 100644
--- a/rest.c
+++ b/rest.c
@@ -1553,6 +1553,38 @@ prov_fmt (provider_t *prov, void *ms_v)
}
}
+// Aux structure solely to accumulate and sort providers on name.
+struct plist_t
+{
+ provider_t **buf;
+ size_t n_used;
+ size_t n_allocated;
+};
+
+// Accumulate a list of provider pointers.
+static int
+prov_get (provider_t *prov, void *plist_v)
+{
+ struct plist_t *p = plist_v;
+ if (p->n_used == p->n_allocated) {
+ void *v = a2nrealloc (p->buf, &p->n_allocated, sizeof *(p->buf));
+ if (v == NULL)
+ return 0; // tell caller we've failed
+ p->buf = v;
+ }
+ p->buf[p->n_used++] = prov;
+ return 1;
+}
+
+// Compare two providers based on their names.
+static int
+prov_name_compare (const void *av, const void *bv)
+{
+ const provider_t *const *a = av;
+ const provider_t *const *b = bv;
+ return strcmp ((*a)->name, (*b)->name);
+}
+
static ssize_t
prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
{
@@ -1581,12 +1613,24 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
if (ms->gen_ctx == TMPL_CTX_DONE) {
return -1;
}
-
if (ms->buf == NULL) {
- // generate/alloc all provider-related output into memory
- if (prov_do_for_each (prov_fmt, ms) < 0)
+ struct plist_t plist = {NULL, 0, 0};
+
+ // Create list of provider_t pointers.
+ if (prov_do_for_each (prov_get, &plist) < 0)
return -1;
+ // Sort that list on provider names.
+ qsort (plist.buf, plist.n_used, sizeof *(plist.buf),
+ prov_name_compare);
+
+ // Emit all provider-related output into memory.
+ size_t i;
+ for (i = 0; i < plist.n_used; i++) {
+ if (prov_fmt (plist.buf[i], ms) == 0)
+ return -1; // failed
+ }
+
// Abuse the ms->buf_n_alloc member to indicate current offset.
# define buf_offset buf_n_alloc
ms->buf_offset = 0;
diff --git a/t/basic b/t/basic
index 9405cb1..27bbd61 100644
--- a/t/basic
+++ b/t/basic
@@ -122,7 +122,7 @@ p_name=primary
# Verify that default provider's information is as expected.
curl http://localhost:$port/_providers > p || fail=1
-emit_provider xml "$p_name" fs '' 0 '' '' > p.exp || fail=1
+emit_trivial_provider_list xml "$p_name" fs '' 0 '' '' > p.exp || fail=1
compare p.exp p || fail=1
for i in xml json; do
diff --git a/t/init.cfg b/t/init.cfg
index fd0a838..4777954 100644
--- a/t/init.cfg
+++ b/t/init.cfg
@@ -39,6 +39,26 @@ wait_for()
done
}
+emit_provider_list_prefix()
+{
+ test $1 = xml && p='<providers>' || p='['
+ printf '%s\n' "$p"
+}
+
+emit_provider_list_suffix()
+{
+ test $1 = xml && p='</providers>' || p=']'
+ printf '%s\n' "$p"
+}
+
+emit_trivial_provider_list()
+{
+ local xml_or_json=$1
+ emit_provider_list_prefix $xml_or_json
+ emit_provider "$@"
+ emit_provider_list_suffix $xml_or_json
+}
+
emit_provider()
{
local xml_or_json=$1
@@ -49,20 +69,17 @@ emit_provider()
if test $xml_or_json = xml; then
printf \
-'<providers>
-\t<provider name="%s">
+'\t<provider name="%s">
\t\t<type>%s</type>
\t\t<host>%s</host>
\t\t<port>%s</port>
\t\t<username>%s</username>
\t\t<password>%s</password>
\t</provider>
-</providers>
' "$@"
else
printf \
-'[
-\t{
+'\t{
\t\t"name": "%s",
\t\t"type": "%s",
\t\t"host": "%s",
@@ -70,7 +87,6 @@ emit_provider()
\t\t"username": "%s",
\t\t"password": "%s"
\t}
-]
' "$@"
fi
}
diff --git a/t/provider b/t/provider
index da372b2..4f7ce25 100644
--- a/t/provider
+++ b/t/provider
@@ -30,11 +30,15 @@ wait_for .1 50 "curl -s http://localhost:$port" || { echo iwhd failed to listen; Exit 1; }
fail=0
+
+# Use at least n=300 here, since some processes typically kick in
+# around n=280.
n=400
# ===================================
for i in $(seq $n); do
# Add provider
- p=http://localhost:$port/_providers/p-$i
+ lexico_sortable_i=$(expr 1000 + $i)
+ p=http://localhost:$port/_providers/p-$lexico_sortable_i
curl -d type=s3 -dhost=localhost -dport=80 -dkey=u -dsecret=p $p || fail=1
curl http://localhost:$port/_providers > p-list || fail=1
# Ensure that there is the correct number of p-DDD entries.
@@ -47,9 +51,16 @@ done
curl http://localhost:$port/_providers > p-list || fail=1
# Ensure that each was added:
-for i in $(seq $n); do
- grep 'name="'p-$i'"' p-list || fail=1
-done
+{
+ emit_provider_list_prefix xml
+ for i in $(seq $n); do
+ i=$(expr 1000 + $i)
+ emit_provider xml p-$i s3 localhost 80 u p || fail=1
+ done
+ emit_provider xml primary fs '' 0 '' '' || fail=1
+ emit_provider_list_suffix xml
+} > p-exp
+compare p-list p-exp || fail=1
# ===================================
for i in $(seq $n); do
http://repo.or.cz/w/iwhd.git/commit/33dea68259d6ea08253163373f4c36400f6535ca
commit 33dea68259d6ea08253163373f4c36400f6535ca
Author: Jim Meyering <meyering(a)redhat.com>
Date: Tue Jan 18 10:28:55 2011 +0100
tests: prepare for improved provider checks in t/provider
* t/basic (emit_provider, emit_bucket_list): Move helper functions...
* t/init.cfg: ...to here.
diff --git a/t/basic b/t/basic
index 49e8ab2..9405cb1 100644
--- a/t/basic
+++ b/t/basic
@@ -56,80 +56,6 @@ cat <<EOF > root.json || framework_failure_
}
EOF
-emit_provider()
-{
- local xml_or_json=$1
- shift 1
- case $xml_or_json in xml|json);;
- *) echo "invalid xml_or_json $xml_or_json" 1>&2; exit 1;; esac
- case $# in 6);; *) echo "emit_provider: wrong # args" 1>&2; exit 1;; esac
-
- if test $xml_or_json = xml; then
- printf \
-'<providers>
-\t<provider name="%s">
-\t\t<type>%s</type>
-\t\t<host>%s</host>
-\t\t<port>%s</port>
-\t\t<username>%s</username>
-\t\t<password>%s</password>
-\t</provider>
-</providers>
-' "$@"
- else
- printf \
-'[
-\t{
-\t\t"name": "%s",
-\t\t"type": "%s",
-\t\t"host": "%s",
-\t\t"port": %s,
-\t\t"username": "%s",
-\t\t"password": "%s"
-\t}
-]
-' "$@"
- fi
-}
-
-emit_bucket_list()
-{
- local xml_or_json=$1
- case $xml_or_json in xml|json);;
- *) echo "invalid xml_or_json $xml_or_json" 1>&2; exit 1;; esac
- shift
-
- local i b k
- if test $xml_or_json = xml; then
- printf '<objects>\n'
- for i in "$@"; do
- b=$(echo "$i"|sed 's/:.*//')
- k=$(echo "$i"|sed 's/.*://')
- printf \
-'\t<object>
-\t\t<bucket>%s</bucket>
-\t\t<key>%s</key>
-\t</object>
-' $b $k
- done
- printf '</objects>\n'
- else
- printf '[\n'
- for i in "$@"; do
- b=$(echo "$i"|sed 's/:.*//')
- k=$(echo "$i"|sed 's/.*://')
- printf \
-'\t{
-\t\t"bucket": "%s",
-\t\t"key": "%s"
-\t}
-' $b $k
- done
- printf ']\n'
- fi
-
-}
-
printf '[{"path": "FS", "type": "fs", "name": "primary"}]\n' > iwhd.cfg || fail=1
@@ -194,7 +120,7 @@ grep ' 403$' role.err || fail=1
p_name=primary
-# Verify that default providers information is as expected.
+# Verify that default provider's information is as expected.
curl http://localhost:$port/_providers > p || fail=1
emit_provider xml "$p_name" fs '' 0 '' '' > p.exp || fail=1
compare p.exp p || fail=1
diff --git a/t/init.cfg b/t/init.cfg
index 0b92496..fd0a838 100644
--- a/t/init.cfg
+++ b/t/init.cfg
@@ -38,3 +38,76 @@ wait_for()
&& { warn_ "EXPIRED: $i x ${sleep_seconds}s: '$cmd'"; return 1; }
done
}
+
+emit_provider()
+{
+ local xml_or_json=$1
+ shift 1
+ case $xml_or_json in xml|json);;
+ *) echo "invalid xml_or_json $xml_or_json" 1>&2; exit 1;; esac
+ case $# in 6);; *) echo "emit_provider: wrong # args" 1>&2; exit 1;; esac
+
+ if test $xml_or_json = xml; then
+ printf \
+'<providers>
+\t<provider name="%s">
+\t\t<type>%s</type>
+\t\t<host>%s</host>
+\t\t<port>%s</port>
+\t\t<username>%s</username>
+\t\t<password>%s</password>
+\t</provider>
+</providers>
+' "$@"
+ else
+ printf \
+'[
+\t{
+\t\t"name": "%s",
+\t\t"type": "%s",
+\t\t"host": "%s",
+\t\t"port": %s,
+\t\t"username": "%s",
+\t\t"password": "%s"
+\t}
+]
+' "$@"
+ fi
+}
+
+emit_bucket_list()
+{
+ local xml_or_json=$1
+ case $xml_or_json in xml|json);;
+ *) echo "invalid xml_or_json $xml_or_json" 1>&2; exit 1;; esac
+ shift
+
+ local i b k
+ if test $xml_or_json = xml; then
+ printf '<objects>\n'
+ for i in "$@"; do
+ b=$(echo "$i"|sed 's/:.*//')
+ k=$(echo "$i"|sed 's/.*://')
+ printf \
+'\t<object>
+\t\t<bucket>%s</bucket>
+\t\t<key>%s</key>
+\t</object>
+' $b $k
+ done
+ printf '</objects>\n'
+ else
+ printf '[\n'
+ for i in "$@"; do
+ b=$(echo "$i"|sed 's/:.*//')
+ k=$(echo "$i"|sed 's/.*://')
+ printf \
+'\t{
+\t\t"bucket": "%s",
+\t\t"key": "%s"
+\t}
+' $b $k
+ done
+ printf ']\n'
+ fi
+}
http://repo.or.cz/w/iwhd.git/commit/8eccf807a16213425c463630b9bdae0c6734ee52
commit 8eccf807a16213425c463630b9bdae0c6734ee52
Author: Jim Meyering <meyering(a)redhat.com>
Date: Tue Jan 18 08:51:59 2011 +0100
microhttpd may also spawn threads to call prov_list_generator; tell GC
Just as done for access_handler (registered via MHD_start_daemon),
we have to tell the garbage collector about the thread that runs
MHD_connection_handle_idle and calls prov_list_generator.
The symptom is an abort when the MHD_connection_handle_idle runs.
To diagnose, invoke gdb on the resulting core file, then type
"thread apply all bt" and note the functions at the base of the
stack on the losing thread. That's the one that hasn't yet been
registered for GC.
* rest.c (gc_register_thread): New macro, factored out of...
(access_handler): ...here.
(prov_list_generator): Use it here, too.
diff --git a/rest.c b/rest.c
index ff87c65..290dfd1 100644
--- a/rest.c
+++ b/rest.c
@@ -58,6 +58,14 @@
#define MY_MHD_FLAGS MHD_USE_THREAD_PER_CONNECTION
#endif
+#define gc_register_thread() \
+ { \
+ struct GC_stack_base gc_stack_base; \
+ int st = GC_get_stack_base (&gc_stack_base); \
+ assert (st == GC_SUCCESS); \
+ GC_register_my_thread (&gc_stack_base); \
+ }
+
typedef enum {
URL_ROOT=0, URL_BUCKET, URL_OBJECT, URL_ATTR, URL_INVAL,
URL_QUERY, URL_PROVLIST, URL_PROVIDER, URL_PROVIDER_SET_PRIMARY
@@ -1548,6 +1556,7 @@ prov_fmt (provider_t *prov, void *ms_v)
static ssize_t
prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
{
+ gc_register_thread();
my_state *ms = ctx;
(void)pos;
@@ -2011,10 +2020,7 @@ access_handler (void *cctx, struct MHD_Connection *conn, const char *url,
struct MHD_Response *resp;
my_state *ms = *rctx;
- struct GC_stack_base gc_stack_base;
- int st = GC_get_stack_base (&gc_stack_base);
- assert (st == GC_SUCCESS);
- GC_register_my_thread (&gc_stack_base);
+ gc_register_thread();
if (ms) {
return ms->handler(cctx,conn,url,method,version,
http://repo.or.cz/w/iwhd.git/commit/f08ff1ceab77a8e2e06d1170b4a46fb47920d29b
commit f08ff1ceab77a8e2e06d1170b4a46fb47920d29b
Author: Jim Meyering <meyering(a)redhat.com>
Date: Fri Jan 14 22:13:27 2011 +0100
also mutex-protect the provider-iterator used in listing
* setup.c (prov_do_for_each): Guard with a mutex.
diff --git a/setup.c b/setup.c
index baabef6..e2addb0 100644
--- a/setup.c
+++ b/setup.c
@@ -658,12 +658,18 @@ int
prov_do_for_each (prov_iterator_fn fn, void *client_data)
{
provider_t *p;
+ int err = 0;
+ pthread_mutex_lock (&provider_hash_table_lock);
for (p = hash_get_first (prov_hash); p;
p = hash_get_next (prov_hash, p)) {
- if (!fn (p, client_data))
- return -1;
+ if (!fn (p, client_data)) {
+ err = -1;
+ break;
+ }
}
- return 0;
+
+ pthread_mutex_unlock (&provider_hash_table_lock);
+ return err;
}
void
http://repo.or.cz/w/iwhd.git/commit/549b989ddd8b189cb3b798d602ee30502aea473a
commit 549b989ddd8b189cb3b798d602ee30502aea473a
Author: Jim Meyering <meyering(a)redhat.com>
Date: Fri Jan 14 21:49:09 2011 +0100
rewrite provider-listing code so we can protect it with a mutex:
The above is the primary goal, but this change also avoids
printing invalid output for pathologically-long provider records.
When iterating through the provider hash, we must prevent insertion.
Rather than getting/formatting a new provider for each callback
(which would mean holding a mutex for way too long -- and hard to
know if/when to release it), iterate over all providers the first
time and save all output in an allocated buffer. Then serve up
bite-sized pieces of that buffer until it's all output.
* rest.c (a2nrealloc): New function, derived from gnulib's x2nrealloc.
(prov_fmt): New function.
(prov_list_generator): Rewrite, prov_do_for_each and the above.
Assert that header fits in our buffer, rather than silently
truncating it and thus producing invalid output.
Do the same for the footer.
* template.c (tmpl_prov_entry): Rewrite not to use a fixed-size buffer.
Now, this is just a thin layer around snprintf.
* template.h (tmpl_prov_entry): New prototype.
* state_defs.h [struct _my_state] (buf, buf_n_alloc, size_t buf_n_used):
New members.
* setup.c (prov_do_for_each): New function.
* setup.h: Declare it.
diff --git a/rest.c b/rest.c
index 78ac7e0..ff87c65 100644
--- a/rest.c
+++ b/rest.c
@@ -1479,13 +1479,76 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
}
+/* Derived from gnulib's x2nrealloc. */
+static void *
+a2nrealloc (void *p, size_t *pn, size_t s)
+{
+ size_t n = *pn;
+
+ if (! p)
+ {
+ if (! n)
+ {
+ /* The approximate size to use for initial small allocation
+ requests, when the invoking code specifies an old size of
+ zero. 64 bytes is the largest "small" request for the
+ GNU C library malloc. */
+ enum { DEFAULT_MXFAST = 64 };
+
+ n = DEFAULT_MXFAST / s;
+ n += !n;
+ }
+ }
+ else
+ {
+ /* Set N = ceil (1.5 * N) so that progress is made if N == 1.
+ Check for overflow, so that N * S stays in size_t range.
+ The check is slightly conservative, but an exact check isn't
+ worth the trouble. */
+ if ((size_t) -1 / 3 * 2 / s <= n)
+ return NULL;
+ n += (n + 1) / 2;
+ }
+
+ *pn = n;
+ return realloc (p, n * s);
+}
+
+/* Format each provider into malloc'd/realloc'd MS->buf,
+ setting MS->buf_n_alloc and MS->buf_n_used as required. */
+static int
+prov_fmt (provider_t *prov, void *ms_v)
+{
+ my_state *ms = ms_v;
+ if (prov->deleted)
+ return 1;
+
+ while (true) {
+ size_t n_remaining = ms->buf_n_alloc - ms->buf_n_used;
+ int len = tmpl_prov_entry (ms->buf + ms->buf_n_used,
+ n_remaining,
+ ms->gen_ctx->format->prov_entry,
+ prov->name, prov->type,
+ prov->host, prov->port,
+ prov->username, prov->password);
+ if (len < 0)
+ return 0; // tell iterator we've failed
+
+ if (len < n_remaining) {
+ ms->buf_n_used += len;
+ return 1;
+ }
+
+ ms->buf = a2nrealloc (ms->buf, &ms->buf_n_alloc, 1);
+ if (ms->buf == NULL)
+ return 0;
+ }
+}
static ssize_t
prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
{
- my_state *ms = ctx;
- size_t len;
-
+ my_state *ms = ctx;
(void)pos;
if (!ms->gen_ctx) {
@@ -1497,15 +1560,11 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
return -1;
}
ms->prov_iter = hash_get_first_prov ();
- len = tmpl_prov_header(ms->gen_ctx);
+ size_t len = tmpl_prov_header(ms->gen_ctx);
if (!len) {
return -1;
}
- if (len > max) {
- /* FIXME: don't truncate. Doing that would
- result in syntactically invalid output. */
- len = max;
- }
+ assert (len <= max);
memcpy(buf,ms->gen_ctx->buf,len);
return len;
}
@@ -1514,33 +1573,32 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
return -1;
}
- const provider_t *prov = ms->prov_iter;
- if (prov) {
- ms->prov_iter = hash_get_next_prov (ms->prov_iter);
- if (prov->deleted)
- return 0;
- len = tmpl_prov_entry(ms->gen_ctx,prov->name,prov->type,
- prov->host, prov->port, prov->username, prov->password);
- if (!len) {
+ if (ms->buf == NULL) {
+ // generate/alloc all provider-related output into memory
+ if (prov_do_for_each (prov_fmt, ms) < 0)
return -1;
- }
- if (len > max) {
- len = max;
- }
+
+ // Abuse the ms->buf_n_alloc member to indicate current offset.
+# define buf_offset buf_n_alloc
+ ms->buf_offset = 0;
+ }
+
+ if (ms->buf_offset < ms->buf_n_used) {
+ size_t n = MIN (max, ms->buf_n_used - ms->buf_offset);
+ memcpy (buf, ms->buf + ms->buf_offset, n);
+ ms->buf_offset += n;
+ return n;
+ } else {
+ free (ms->buf);
+ ms->buf = NULL;
+ size_t len = tmpl_prov_footer(ms->gen_ctx);
+ if (!len)
+ return -1;
+ assert (len <= max);
memcpy(buf,ms->gen_ctx->buf,len);
+ ms->gen_ctx = TMPL_CTX_DONE;
return len;
}
-
- len = tmpl_prov_footer(ms->gen_ctx);
- if (!len) {
- return -1;
- }
- if (len > max) {
- len = max;
- }
- memcpy(buf,ms->gen_ctx->buf,len);
- ms->gen_ctx = TMPL_CTX_DONE;
- return len;
}
static int
diff --git a/setup.c b/setup.c
index 2945c39..baabef6 100644
--- a/setup.c
+++ b/setup.c
@@ -651,11 +651,26 @@ hash_get_next_prov (void *p)
return hash_get_next (prov_hash, p);
}
+/* Apply function FN to each provider.
+ If FN returns 0, stop early and return -1.
+ Otherwise, return 0 after processing the last provider. */
+int
+prov_do_for_each (prov_iterator_fn fn, void *client_data)
+{
+ provider_t *p;
+ for (p = hash_get_first (prov_hash); p;
+ p = hash_get_next (prov_hash, p)) {
+ if (!fn (p, client_data))
+ return -1;
+ }
+ return 0;
+}
+
void
update_provider (const char *provname, const char *username,
const char *password)
{
- provider_t *prov;
+ provider_t *prov;
DPRINTF("updating %s username=%s password=%s\n",
provname, username, password);
diff --git a/setup.h b/setup.h
index e14df27..68440a0 100644
--- a/setup.h
+++ b/setup.h
@@ -58,6 +58,9 @@ void set_main_provider (provider_t *prov);
provider_t *hash_get_first_prov (void);
provider_t *hash_get_next_prov (void *p);
+typedef int (*prov_iterator_fn) (provider_t *, void *);
+int prov_do_for_each (prov_iterator_fn fn, void *client_data);
+
struct kv_pair
{
char *key;
diff --git a/state_defs.h b/state_defs.h
index dcc9cf5..2ed0116 100644
--- a/state_defs.h
+++ b/state_defs.h
@@ -73,6 +73,9 @@ typedef struct _my_state {
/* for bucket/object/provider list generators */
tmpl_ctx_t *gen_ctx;
void *prov_iter;
+ char *buf;
+ size_t buf_n_alloc;
+ size_t buf_n_used;
/* for back-end functions */
backend_thunk_t thunk;
int be_flags;
diff --git a/template.c b/template.c
index 512f49f..593bd45 100644
--- a/template.c
+++ b/template.c
@@ -265,34 +265,22 @@ tmpl_prov_header (tmpl_ctx_t *ctx)
return strlen(ctx->buf);
}
-size_t
-tmpl_prov_entry (tmpl_ctx_t *ctx,
+int
+tmpl_prov_entry (char *buf, size_t buf_len,
+ const char *fmt,
const char *name, const char *type,
const char *host, int port,
const char *user, const char *pass)
{
- int size;
- const tmpl_format_t *fmt = ctx->format;
-
if (!name) name = "";
if (!type) type = "";
if (!host) host = "";
if (!user) user = "";
if (!pass) pass = "";
- size = snprintf(ctx->raw_buf,TMPL_BUF_SIZE,fmt->prov_entry,
- name, type, host, port, user, pass);
- if (size >= TMPL_BUF_SIZE || size < 0) {
- return 0;
- }
- ctx->buf = ctx->raw_buf;
-
- if (size && (ctx->index == 0)) {
- ctx->buf += fmt->z_offset;
- size -= fmt->z_offset;
- }
+ int size = snprintf(buf, buf_len, fmt,
+ name, type, host, port, user, pass);
- ++(ctx->index);
return size;
}
diff --git a/template.h b/template.h
index 423a57b..e2bb300 100644
--- a/template.h
+++ b/template.h
@@ -51,10 +51,13 @@ size_t tmpl_root_entry (tmpl_ctx_t *ctx,
const char *rel, const char *link);
size_t tmpl_root_footer (tmpl_ctx_t *ctx);
size_t tmpl_prov_header (tmpl_ctx_t *ctx);
-size_t tmpl_prov_entry (tmpl_ctx_t *ctx,
- const char *name, const char *type,
- const char *host, int port,
- const char *user, const char *pass);
+
+int tmpl_prov_entry (char *buf, size_t buf_len,
+ const char *fmt,
+ const char *name, const char *type,
+ const char *host, int port,
+ const char *user, const char *pass);
+
size_t tmpl_prov_footer (tmpl_ctx_t *ctx);
size_t tmpl_list_header (tmpl_ctx_t *ctx);
size_t tmpl_list_entry (tmpl_ctx_t *ctx,
http://repo.or.cz/w/iwhd.git/commit/9414f266a267f5e8a1860ef87ad2a59a3022ff24
commit 9414f266a267f5e8a1860ef87ad2a59a3022ff24
Author: Jim Meyering <meyering(a)redhat.com>
Date: Fri Jan 14 17:47:42 2011 +0100
avoid unnecessary MHD_lookup_connection_value calls
Move "accept_hdr" decl and definition into sole block where it's used
This avoids the unnecessary call to MHD_lookup_connection_value on
all but the first call to each of these event-handling functions.
* rest.c (proxy_query_func, root_blob_generator, parts_callback):
(prov_list_generator): As above.
diff --git a/rest.c b/rest.c
index e499ffe..78ac7e0 100644
--- a/rest.c
+++ b/rest.c
@@ -671,16 +671,15 @@ proxy_query_func (void *ctx, uint64_t pos, char *buf, size_t max)
{
my_state *ms = ctx;
size_t len;
- const char *accept_hdr;
char *bucket;
char *key;
(void)pos;
- accept_hdr = MHD_lookup_connection_value(ms->conn,MHD_HEADER_KIND,
- "Accept");
-
if (!ms->gen_ctx) {
+ const char *accept_hdr
+ = MHD_lookup_connection_value(ms->conn, MHD_HEADER_KIND,
+ "Accept");
ms->gen_ctx = tmpl_get_ctx(accept_hdr);
if (!ms->gen_ctx) {
return -1;
@@ -908,18 +907,18 @@ root_blob_generator (void *ctx, uint64_t pos, char *buf, size_t max)
my_state *ms = ctx;
const fake_bucket_t *fb;
size_t len;
- const char *accept_hdr;
const char *host;
char *bucket;
char *key;
(void)pos;
- accept_hdr = MHD_lookup_connection_value(ms->conn,MHD_HEADER_KIND,
- "Accept");
host = MHD_lookup_connection_value(ms->conn,MHD_HEADER_KIND,"Host");
if (!ms->gen_ctx) {
+ const char *accept_hdr
+ = MHD_lookup_connection_value(ms->conn, MHD_HEADER_KIND,
+ "Accept");
ms->gen_ctx = tmpl_get_ctx(accept_hdr);
if (!ms->gen_ctx) {
return -1;
@@ -1312,18 +1311,18 @@ parts_callback (void *ctx, uint64_t pos, char *buf, size_t max)
{
my_state *ms = ctx;
size_t len;
- const char *accept_hdr;
const char *name;
const char *value;
const char *host;
(void)pos;
- accept_hdr = MHD_lookup_connection_value(ms->conn,MHD_HEADER_KIND,
- "Accept");
host = MHD_lookup_connection_value(ms->conn,MHD_HEADER_KIND,"Host");
if (!ms->gen_ctx) {
+ const char *accept_hdr
+ = MHD_lookup_connection_value(ms->conn, MHD_HEADER_KIND,
+ "Accept");
ms->gen_ctx = tmpl_get_ctx(accept_hdr);
if (!ms->gen_ctx) {
return -1;
@@ -1486,14 +1485,13 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
{
my_state *ms = ctx;
size_t len;
- const char *accept_hdr;
(void)pos;
- accept_hdr = MHD_lookup_connection_value(ms->conn,MHD_HEADER_KIND,
- "Accept");
-
if (!ms->gen_ctx) {
+ const char *accept_hdr
+ = MHD_lookup_connection_value(ms->conn, MHD_HEADER_KIND,
+ "Accept");
ms->gen_ctx = tmpl_get_ctx(accept_hdr);
if (!ms->gen_ctx) {
return -1;
http://repo.or.cz/w/iwhd.git/commit/ac3e583f5cf58a38f261d12abba558e40b1e7873
commit ac3e583f5cf58a38f261d12abba558e40b1e7873
Author: Jim Meyering <meyering(a)redhat.com>
Date: Fri Jan 14 17:11:10 2011 +0100
handle hash_initialize and MHD_create_post_processor failure
* rest.c (proxy_query, control_api_root, proxy_bucket_post):
(proxy_object_post, proxy_add_prov): Return MHD_NO, rather than
ignoring the failures.
diff --git a/rest.c b/rest.c
index 6388221..e499ffe 100644
--- a/rest.c
+++ b/rest.c
@@ -768,6 +768,8 @@ proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
ms->state = MS_NORMAL;
ms->post = MHD_create_post_processor(conn,4096,
query_iterator,ms);
+ if (!ms->post)
+ return MHD_NO;
gc_register_finalizer_ms(ms);
}
else if (*data_size) {
@@ -1141,8 +1143,12 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
ms->url = (char *)url;
ms->dict = hash_initialize(13, NULL,
kv_hash, kv_compare, kv_free);
+ if (!ms->dict)
+ return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
+ if (!ms->post)
+ return MHD_NO;
gc_register_finalizer_ms(ms);
return MHD_YES;
}
@@ -1207,8 +1213,12 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
ms->url = (char *)url;
ms->dict = hash_initialize(13, NULL,
kv_hash, kv_compare, kv_free);
+ if (!ms->dict)
+ return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
+ if (!ms->post)
+ return MHD_NO;
gc_register_finalizer_ms(ms);
}
else if (*data_size) {
@@ -1409,8 +1419,12 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
ms->url = (char *)url;
ms->dict = hash_initialize(13, NULL,
kv_hash, kv_compare, kv_free);
+ if (!ms->dict)
+ return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
+ if (!ms->post)
+ return MHD_NO;
gc_register_finalizer_ms(ms);
}
else if (*data_size) {
@@ -1737,8 +1751,12 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
ms->url = (char *)url;
ms->dict = hash_initialize(13, NULL,
kv_hash, kv_compare, kv_free);
+ if (!ms->dict)
+ return MHD_NO;
ms->post = MHD_create_post_processor(conn,4096,
prov_iterator,ms->dict);
+ if (!ms->post)
+ return MHD_NO;
gc_register_finalizer_ms(ms);
}
else if (*data_size) {
http://repo.or.cz/w/iwhd.git/commit/b247691b9ea2f7dacbcf1edbedef28206eb2a722
commit b247691b9ea2f7dacbcf1edbedef28206eb2a722
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Jan 13 22:17:00 2011 +0100
avoid a leak via ms->post = MHD_create_post_processor(...
* rest.c (gc_register_finalizer_ms): New function.
(proxy_query, control_api_root, proxy_bucket_post): Use it.
(proxy_object_post, proxy_add_prov): Likewise.
(destroy_state_postprocessor): New function.
diff --git a/rest.c b/rest.c
index bff9c95..6388221 100644
--- a/rest.c
+++ b/rest.c
@@ -730,6 +730,26 @@ proxy_query_func (void *ctx, uint64_t pos, char *buf, size_t max)
return len;
}
+/* Helper used by gc_register_finalizer_ms. */
+static void
+destroy_state_postprocessor (void *ms_v, void *client_data)
+{
+ my_state *ms = ms_v;
+ if (ms->post)
+ MHD_destroy_post_processor (ms->post);
+}
+
+/* Tell the garbage collector that when freeing MS, it must invoke
+ destroy_state_postprocessor(MS). This is required for each ms->post
+ since they're allocated via MHD_create_post_processor, which is
+ in a separate library into which the GC has no view. */
+static void
+gc_register_finalizer_ms(void *ms)
+{
+ if (ms)
+ GC_register_finalizer(ms, destroy_state_postprocessor, 0, 0, 0);
+}
+
static int
proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
const char *method, const char *version, const char *data,
@@ -748,6 +768,7 @@ proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
ms->state = MS_NORMAL;
ms->post = MHD_create_post_processor(conn,4096,
query_iterator,ms);
+ gc_register_finalizer_ms(ms);
}
else if (*data_size) {
MHD_post_process(ms->post,data,*data_size);
@@ -1122,6 +1143,7 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
kv_hash, kv_compare, kv_free);
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
+ gc_register_finalizer_ms(ms);
return MHD_YES;
}
@@ -1187,6 +1209,7 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
kv_hash, kv_compare, kv_free);
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
+ gc_register_finalizer_ms(ms);
}
else if (*data_size) {
MHD_post_process(ms->post,data,*data_size);
@@ -1388,6 +1411,7 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
kv_hash, kv_compare, kv_free);
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
+ gc_register_finalizer_ms(ms);
}
else if (*data_size) {
MHD_post_process(ms->post,data,*data_size);
@@ -1715,6 +1739,7 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
kv_hash, kv_compare, kv_free);
ms->post = MHD_create_post_processor(conn,4096,
prov_iterator,ms->dict);
+ gc_register_finalizer_ms(ms);
}
else if (*data_size) {
MHD_post_process(ms->post,data,*data_size);
http://repo.or.cz/w/iwhd.git/commit/d4de957c8a10b97952b18c705369f305393b6b16
commit d4de957c8a10b97952b18c705369f305393b6b16
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Jan 13 18:26:25 2011 +0100
t/provider: warn-then-sleep on failure -- eases debugging
diff --git a/t/provider b/t/provider
index 7895c69..da372b2 100644
--- a/t/provider
+++ b/t/provider
@@ -39,7 +39,8 @@ for i in $(seq $n); do
curl http://localhost:$port/_providers > p-list || fail=1
# Ensure that there is the correct number of p-DDD entries.
test $(grep -c '<provider name="p-[0-9]*">' p-list) = $i || fail=1
- test $fail = 1 && sleep 99d
+ test $fail = 1 && { warn_ "$test_dir_: add $i failed; sleeping forever..."
+ sleep 99d; }
done
# List providers.
http://repo.or.cz/w/iwhd.git/commit/a2eee0a0daabc99ff2b2db49948adfdead278c63
commit a2eee0a0daabc99ff2b2db49948adfdead278c63
Author: Jim Meyering <meyering(a)redhat.com>
Date: Wed Jan 12 18:36:27 2011 +0100
tell GC about the thread spawned by MHD_start_daemon
rest.c's main program calls MHD_start_daemon to register
access_handler as a function that it will call from the thread
it creates. Normally, the garbage collector learns of pthread_create
calls because they're cpp-wrapped. However, when pthread_create is
called from a third-party library, as in this case, we cannot recompile
that library, so we have to use a different approach:
* rest.c (access_handler): Call GC_register_my_thread to inform the
garbage collector of this new thread.
diff --git a/rest.c b/rest.c
index d9c01bd..bff9c95 100644
--- a/rest.c
+++ b/rest.c
@@ -1912,6 +1912,11 @@ access_handler (void *cctx, struct MHD_Connection *conn, const char *url,
struct MHD_Response *resp;
my_state *ms = *rctx;
+ struct GC_stack_base gc_stack_base;
+ int st = GC_get_stack_base (&gc_stack_base);
+ assert (st == GC_SUCCESS);
+ GC_register_my_thread (&gc_stack_base);
+
if (ms) {
return ms->handler(cctx,conn,url,method,version,
data,data_size,rctx);
http://repo.or.cz/w/iwhd.git/commit/eb90a1c8aedc31c4c6587a9c9b79fc3ba6b4fc54
commit eb90a1c8aedc31c4c6587a9c9b79fc3ba6b4fc54
Author: Jim Meyering <meyering(a)redhat.com>
Date: Wed Jan 12 18:29:17 2011 +0100
insinuate GC into gnulib's hash-related code
* bootstrap.conf: Disable the hash-test, which would otherwise
get link failures due to unresolved GC_malloc, etc.
* gl/lib/hash.c.diff: New file. This patch is automatically
applied to gnulib's hash.c at bootstrap time.
diff --git a/bootstrap.conf b/bootstrap.conf
index 0e03de7..8a5972d 100644
--- a/bootstrap.conf
+++ b/bootstrap.conf
@@ -15,6 +15,12 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
+# The malloca test is fine, but inordinately slow.
+# The hash one would need linking help to get -lgc.
+avoided_gnulib_modules='
+ --avoid=hash-tests
+'
+
# gnulib modules used by this package.
gnulib_modules='
announce-gen
diff --git a/gl/lib/hash.c.diff b/gl/lib/hash.c.diff
new file mode 100644
index 0000000..517d4e0
--- /dev/null
+++ b/gl/lib/hash.c.diff
@@ -0,0 +1,13 @@
+diff --git a/lib/hash.c b/lib/hash.c
+index f3de2aa..27f080e 100644
+--- a/lib/hash.c
++++ b/lib/hash.c
+@@ -43,6 +43,8 @@
+ # endif
+ #endif
+
++#include "../gc-wrap.h"
++
+ struct hash_entry
+ {
+ void *data;
http://repo.or.cz/w/iwhd.git/commit/f941f9f0fac64fcadb87dec827964e0ca3f28316
commit f941f9f0fac64fcadb87dec827964e0ca3f28316
Author: Jim Meyering <meyering(a)redhat.com>
Date: Mon Jan 10 20:00:45 2011 +0100
convert all remaining uses of g_hash_* functions
g_hash_table_insert
g_hash_table_remove
g_hash_table_foreach
g_hash_table_find
g_hash_table_iter_init
g_hash_table_iter_next
diff --git a/rest.c b/rest.c
index cab640f..d9c01bd 100644
--- a/rest.c
+++ b/rest.c
@@ -1031,34 +1031,36 @@ post_iterator (void *ctx, enum MHD_ValueKind kind, const char *key,
return MHD_NO;
}
- g_hash_table_insert(ctx,k,new_val);
+ kv_hash_insert_new (ctx, k, new_val);
return MHD_YES;
}
/* Returns TRUE if we found an *invalid* key. */
-static gboolean
-post_find (gpointer key, gpointer value, gpointer ctx)
+static bool
+post_find (void *kvv, void *ctx_v)
{
- (void)value;
- (void)ctx;
-
- if (!is_reserved(key,reserved_attr)) {
- return FALSE;
+ struct kv_pair *kv = kvv;
+ if (!is_reserved(kv->key,reserved_attr)) {
+ return true;
}
- DPRINTF("bad attr %s\n", (char *)key);
- return TRUE;
+ DPRINTF("bad attr %s\n", kv->key);
+ void **ctx = ctx_v;
+ *ctx = kv;
+ return false;
}
-static void
-post_foreach (gpointer key, gpointer value, gpointer ctx)
+static bool
+post_foreach (void *kvv, void *ms_v)
{
- my_state *ms = ctx;
+ struct kv_pair *kv = kvv;
+ my_state *ms = ms_v;
- DPRINTF("setting %s = %s for %s/%s\n",(char *)key, (char *)value,
- ms->bucket,ms->key);
- meta_set_value(ms->bucket,ms->key,key,value);
+ DPRINTF("setting %s = %s for %s/%s\n", kv->key, kv->val,
+ ms->bucket, ms->key);
+ meta_set_value(ms->bucket, ms->key, kv->key, kv->val);
+ return true;
}
static int
@@ -1195,9 +1197,9 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
key = kv_hash_lookup(ms->dict,"_key");
if (key) {
strncpy(ms->key,key,MAX_FIELD_LEN-1);
- g_hash_table_remove_FIXME(ms->dict,"_key");
- if (!g_hash_table_find(ms->dict,post_find,ms)) {
- g_hash_table_foreach(ms->dict,post_foreach,ms);
+ kv_hash_delete(ms->dict,"_key");
+ if (!kv_find_val(ms->dict,post_find,NULL)) {
+ hash_do_for_each (ms->dict,post_foreach,ms);
DPRINTF("rereplicate (bucket POST)\n");
recheck_replication(ms,NULL);
rc = MHD_HTTP_OK;
@@ -1393,7 +1395,7 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
}
else {
rc = MHD_HTTP_BAD_REQUEST;
- if (!g_hash_table_find(ms->dict,post_find,ms)) {
+ if (!kv_find_val(ms->dict,post_find,NULL)) {
op = kv_hash_lookup(ms->dict,"op");
if (op) {
if (!strcmp(op,"push")) {
@@ -1545,7 +1547,7 @@ prov_iterator (void *ctx, enum MHD_ValueKind kind, const char *key,
(void)transfer_encoding;
(void)off;
- g_hash_table_insert(ctx,strdup(key),strndup(data,size));
+ kv_hash_insert_new (ctx,strdup(key),strndup(data,size));
/* TBD: check return value for strdups (none avail for insert) */
return MHD_YES;
}
@@ -1709,8 +1711,8 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,NULL,NULL);
+ ms->dict = hash_initialize(13, NULL,
+ kv_hash, kv_compare, kv_free);
ms->post = MHD_create_post_processor(conn,4096,
prov_iterator,ms->dict);
}
@@ -1741,7 +1743,7 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
}
// FIXME: unchecked strdup
- g_hash_table_insert(ms->dict,strdup("name"),prov_name);
+ kv_hash_insert_new (ms->dict,strdup("name"),prov_name);
if (validate_provider (ms->dict)) {
if (!add_provider (ms->dict)) {
diff --git a/setup.c b/setup.c
index 6c23c18..2945c39 100644
--- a/setup.c
+++ b/setup.c
@@ -418,20 +418,17 @@ add_provider (Hash_table *h)
goto fail;
}
- GHashTableIter iter;
- g_hash_table_iter_init (&iter, h);
- while (1) {
- gpointer key;
- gpointer val;
- if (!g_hash_table_iter_next (&iter, &key, &val))
- break;
+ struct kv_pair *kv;
+ for (kv = hash_get_first (h); kv; kv = hash_get_next (h, kv)) {
+ char const *key = kv->key;
+ char const *val = kv->val;
if (!is_reserved_attr(key)) {
if (val) {
- error(0,0,"no value for %s", (char *)key);
+ error(0,0,"no value for %s", key);
continue;
}
- DPRINTF("%p.%s = %s\n",prov, (char *)key, (char *)val);
+ DPRINTF("%p.%s = %s\n",prov, key, val);
if (!kv_hash_insert_new (prov->attrs, xstrdup(key), xstrdup(val)))
error (0, 0, "exhausted virtual memory");
goto fail;
diff --git a/setup.h b/setup.h
index 3544c89..e14df27 100644
--- a/setup.h
+++ b/setup.h
@@ -114,4 +114,27 @@ kv_hash_lookup (Hash_table const *ht, char const *k)
return p ? p->val : NULL;
}
+static void
+kv_hash_delete (Hash_table *ht, char const *k)
+{
+ struct kv_pair kv;
+ kv.key = (char *) k;
+ struct kv_pair *p = hash_delete (ht, &kv);
+ if (p) {
+ free (p->key);
+ free (p->val);
+ free (p);
+ }
+}
+
+/* Determine whether a key/value pair exists for which PRED_FN returns true.
+ If so, return a pointer to that kv_pair. Otherwise, return NULL. */
+static struct kv_pair *
+kv_find_val (Hash_table *ht, Hash_processor pred_fn, void *ctx)
+{
+ void *found_kv = NULL;
+ hash_do_for_each (ht, pred_fn, &found_kv);
+ return found_kv;
+}
+
#endif
http://repo.or.cz/w/iwhd.git/commit/ef8b8713dd64dce7c32a9b3a3eca2a2c1e1ff7ef
commit ef8b8713dd64dce7c32a9b3a3eca2a2c1e1ff7ef
Author: Jim Meyering <meyering(a)redhat.com>
Date: Mon Jan 10 15:42:54 2011 +0100
convert remaining g_hash_table_lookup functions to kv_hash_lookup
diff --git a/backend.c b/backend.c
index 2024f72..bd78ad5 100644
--- a/backend.c
+++ b/backend.c
@@ -147,7 +147,7 @@ bad_bcreate (const provider_t *prov, const char *bucket)
static int
bad_register (my_state *ms, const provider_t *prov, const char *next,
- GHashTable *args)
+ Hash_table *args)
{
(void)ms;
(void)prov;
@@ -360,10 +360,10 @@ s3_init_tmpfile (const char *value)
static int
s3_register (my_state *ms, const provider_t *prov, const char *next,
- GHashTable *args)
+ Hash_table *args)
{
- char *kernel = g_hash_table_lookup(args,"kernel");
- char *ramdisk = g_hash_table_lookup(args,"ramdisk");
+ char *kernel = kv_hash_lookup(args,"kernel");
+ char *ramdisk = kv_hash_lookup(args,"ramdisk");
char *api_key;
char *api_secret;
const char *ami_cert;
@@ -400,7 +400,7 @@ s3_register (my_state *ms, const provider_t *prov, const char *next,
DPRINTF(" (using ramdisk %s)\n",ramdisk);
}
- api_key = g_hash_table_lookup(args,"api-key");
+ api_key = kv_hash_lookup(args,"api-key");
if (!api_key) {
api_key = (char *)prov->username;
if (!api_key) {
@@ -409,7 +409,7 @@ s3_register (my_state *ms, const provider_t *prov, const char *next,
}
}
- api_secret = g_hash_table_lookup(args,"api-secret");
+ api_secret = kv_hash_lookup(args,"api-secret");
if (!api_secret) {
api_secret = (char *)prov->password;
if (!prov->password) {
@@ -418,7 +418,7 @@ s3_register (my_state *ms, const provider_t *prov, const char *next,
}
}
- cval = g_hash_table_lookup(args,"ami-cert");
+ cval = kv_hash_lookup(args,"ami-cert");
if (cval) {
ami_cert = s3_init_tmpfile(cval);
if (!ami_cert) {
@@ -433,7 +433,7 @@ s3_register (my_state *ms, const provider_t *prov, const char *next,
}
}
- kval = g_hash_table_lookup(args,"ami-key");
+ kval = kv_hash_lookup(args,"ami-key");
if (kval) {
ami_key = s3_init_tmpfile(kval);
if (!ami_cert) {
@@ -448,7 +448,7 @@ s3_register (my_state *ms, const provider_t *prov, const char *next,
}
}
- ami_uid = g_hash_table_lookup(args,"ami-uid");
+ ami_uid = kv_hash_lookup(args,"ami-uid");
if (!ami_uid) {
ami_uid = get_provider_value(prov,"ami-uid");
if (!ami_uid) {
@@ -457,7 +457,7 @@ s3_register (my_state *ms, const provider_t *prov, const char *next,
}
}
- ami_bkt = g_hash_table_lookup(args,"ami-bkt");
+ ami_bkt = kv_hash_lookup(args,"ami-bkt");
if (!ami_bkt) {
ami_bkt = ms->bucket;
}
@@ -796,14 +796,14 @@ curl_bcreate (const provider_t *prov, const char *bucket)
static int
curl_register (my_state *ms, const provider_t *prov, const char *next,
- GHashTable *args)
+ Hash_table *args)
{
char fixed[ADDR_SIZE];
CURL *curl;
struct curl_httppost *first = NULL;
struct curl_httppost *last = NULL;
- char *kernel = g_hash_table_lookup(args,"kernel");
- char *ramdisk = g_hash_table_lookup(args,"ramdisk");
+ char *kernel = kv_hash_lookup(args,"kernel");
+ char *ramdisk = kv_hash_lookup(args,"ramdisk");
int chars;
if (!next) {
diff --git a/backend.h b/backend.h
index 276eefb..7edae08 100644
--- a/backend.h
+++ b/backend.h
@@ -17,6 +17,7 @@
#define _BACKEND_H
#include "state_defs.h"
+#include "hash.h"
typedef void init_func_t (struct _provider *prov);
/* Get provider from passed backend_thunk. */
@@ -32,7 +33,7 @@ typedef int bcreate_func_t (const struct _provider *prov,
const char *bucket);
typedef int register_func_t (my_state *ms,
const struct _provider *prov,
- const char *next, GHashTable *args);
+ const char *next, Hash_table *args);
typedef struct {
const char *name;
diff --git a/rest.c b/rest.c
index 08ba698..cab640f 100644
--- a/rest.c
+++ b/rest.c
@@ -347,7 +347,7 @@ recheck_replication (my_state * ms, char *policy)
if (!policy && ms->dict) {
DPRINTF("using new policy for %s/%s\n",ms->bucket,ms->key);
- policy = g_hash_table_lookup(ms->dict,"_policy");
+ policy = kv_hash_lookup (ms->dict, "_policy");
}
if (!policy) {
@@ -1005,7 +1005,7 @@ post_iterator (void *ctx, enum MHD_ValueKind kind, const char *key,
printf("adding %s, size=%zu\n",key,size);
// TBD: don't assume that values are null-terminated strings
- old_val = g_hash_table_lookup(ctx,key);
+ old_val = kv_hash_lookup(ctx,key);
if (old_val) {
old_len = strlen(old_val);
new_val = malloc(old_len+size+1);
@@ -1116,8 +1116,8 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,NULL,NULL);
+ ms->dict = hash_initialize(13, NULL,
+ kv_hash, kv_compare, kv_free);
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
return MHD_YES;
@@ -1131,7 +1131,7 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
rc = MHD_HTTP_BAD_REQUEST;
- op = g_hash_table_lookup(ms->dict,"op");
+ op = kv_hash_lookup(ms->dict,"op");
if (op) {
if (!strcmp(op,"rep_status")) {
len = snprintf(buf,sizeof(buf),"%d requests\n",
@@ -1181,8 +1181,8 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,NULL,NULL);
+ ms->dict = hash_initialize(13, NULL,
+ kv_hash, kv_compare, kv_free);
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
}
@@ -1192,10 +1192,10 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
}
else {
rc = MHD_HTTP_BAD_REQUEST;
- key = g_hash_table_lookup(ms->dict,"_key");
+ key = kv_hash_lookup(ms->dict,"_key");
if (key) {
strncpy(ms->key,key,MAX_FIELD_LEN-1);
- g_hash_table_remove(ms->dict,"_key");
+ g_hash_table_remove_FIXME(ms->dict,"_key");
if (!g_hash_table_find(ms->dict,post_find,ms)) {
g_hash_table_foreach(ms->dict,post_foreach,ms);
DPRINTF("rereplicate (bucket POST)\n");
@@ -1204,7 +1204,7 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
}
}
else if (!strcmp(ms->bucket,"_new")) {
- key = g_hash_table_lookup(ms->dict,"name");
+ key = kv_hash_lookup(ms->dict,"name");
if (key != NULL) {
rc = create_bucket(key,ms);
}
@@ -1227,7 +1227,7 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
static int
check_location (my_state *ms)
{
- char *loc = g_hash_table_lookup(ms->dict,"depot");
+ char *loc = kv_hash_lookup(ms->dict,"depot");
if (!loc) {
DPRINTF("missing loc on check for %s/%sn",ms->bucket,ms->key);
@@ -1251,7 +1251,7 @@ register_image (my_state *ms)
const provider_t *prov;
char *next;
- site = g_hash_table_lookup(ms->dict,"site");
+ site = kv_hash_lookup(ms->dict,"site");
if (!site) {
printf("site MISSING\n");
return MHD_HTTP_BAD_REQUEST;
@@ -1382,8 +1382,8 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
ms->url = (char *)url;
- ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,NULL,NULL);
+ ms->dict = hash_initialize(13, NULL,
+ kv_hash, kv_compare, kv_free);
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
}
@@ -1394,7 +1394,7 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
else {
rc = MHD_HTTP_BAD_REQUEST;
if (!g_hash_table_find(ms->dict,post_find,ms)) {
- op = g_hash_table_lookup(ms->dict,"op");
+ op = kv_hash_lookup(ms->dict,"op");
if (op) {
if (!strcmp(op,"push")) {
DPRINTF("rereplicate (obj POST)\n");
@@ -1723,7 +1723,7 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
char *prov_name = url_to_provider_name (url);
/* We're about to insert "name -> $prov_name".
Ensure there is no "name" key already there. */
- const char *name = g_hash_table_lookup (ms->dict, "name");
+ const char *name = kv_hash_lookup (ms->dict, "name");
if (name) {
fprintf(stderr,
"add_provider: do not specify name: name=%s\n",
diff --git a/setup.c b/setup.c
index 3d42e07..6c23c18 100644
--- a/setup.c
+++ b/setup.c
@@ -108,11 +108,11 @@ compare_providers (void const *x, void const *y)
}
int
-validate_provider (GHashTable *h)
+validate_provider (Hash_table *h)
{
- const char *name = g_hash_table_lookup (h, "name");
+ const char *name = kv_hash_lookup (h, "name");
assert (name);
- const char *type = g_hash_table_lookup (h, "type");
+ const char *type = kv_hash_lookup (h, "type");
if (type == NULL) {
error (0, 0, "provider %s has no type", name);
return 0;
@@ -132,12 +132,12 @@ validate_provider (GHashTable *h)
int ok = 1;
if (needs & NEED_SERVER) {
- const char *host = g_hash_table_lookup (h, "host");
+ const char *host = kv_hash_lookup (h, "host");
if (!host) {
error (0, 0, "%s: %s-provider requires a host", name, type);
ok = 0;
}
- const char *port = g_hash_table_lookup (h, "port");
+ const char *port = kv_hash_lookup (h, "port");
if (!port) {
error (0, 0, "%s: %s-provider requires a port", name, type);
ok = 0;
@@ -150,12 +150,12 @@ validate_provider (GHashTable *h)
}
if (needs & NEED_CREDS) {
- const char *key = g_hash_table_lookup (h, "key");
+ const char *key = kv_hash_lookup (h, "key");
if (!key) {
error (0, 0, "%s: %s-provider requires a key", name, type);
ok = 0;
}
- const char *secret = g_hash_table_lookup (h, "secret");
+ const char *secret = kv_hash_lookup (h, "secret");
if (!secret) {
error (0, 0, "%s: %s-provider requires a secret", name, type);
ok = 0;
@@ -163,7 +163,7 @@ validate_provider (GHashTable *h)
}
if (needs & NEED_PATH) {
- const char *path = g_hash_table_lookup (h, "path");
+ const char *path = kv_hash_lookup (h, "path");
if (!path) {
error (0, 0, "%s: %s-provider requires a path", name, type);
ok = 0;
@@ -363,9 +363,9 @@ convert_provider (int i, provider_t *out)
}
int
-add_provider (GHashTable *h)
+add_provider (Hash_table *h)
{
- char *name = g_hash_table_lookup (h, "name");
+ char *name = kv_hash_lookup (h, "name");
assert (name);
provider_t *prov = calloc (1, sizeof *prov);
@@ -378,19 +378,19 @@ add_provider (GHashTable *h)
if (prov->name == NULL)
goto fail;
- prov->type = g_hash_table_lookup(h,"type");
+ prov->type = kv_hash_lookup(h,"type");
if (prov->type == NULL)
goto fail;
prov->type = strdup (prov->type);
if (prov->type == NULL)
goto fail;
- prov->host = g_hash_table_lookup(h,"host");
- prov->port = atoi(g_hash_table_lookup(h,"port"));
+ prov->host = kv_hash_lookup(h,"host");
+ prov->port = atoi(kv_hash_lookup(h,"port"));
/* TBD: change key/secret field names to username/password */
- prov->username = g_hash_table_lookup(h,"key");
- prov->password = g_hash_table_lookup(h,"secret");
- prov->path = g_hash_table_lookup(h,"path");
+ prov->username = kv_hash_lookup(h,"key");
+ prov->password = kv_hash_lookup(h,"secret");
+ prov->path = kv_hash_lookup(h,"path");
if (prov->host)
prov->host = strdup (prov->host);
diff --git a/setup.h b/setup.h
index 6bd0779..3544c89 100644
--- a/setup.h
+++ b/setup.h
@@ -49,9 +49,9 @@ const char *get_provider_value (const provider_t *prov,
const char *fname);
const char *auto_config (void);
-int validate_provider (GHashTable *h);
+int validate_provider (Hash_table *h);
provider_t *find_provider (const char *name);
-int add_provider (GHashTable *h);
+int add_provider (Hash_table *h);
provider_t *get_main_provider (void);
void set_main_provider (provider_t *prov);
@@ -103,4 +103,15 @@ kv_hash_insert_new (Hash_table *ht, char *k, char *v)
return 1;
}
+/* Given a hash table and key K, return the value
+ corresponding to K. The caller must not free K. */
+static char *
+kv_hash_lookup (Hash_table const *ht, char const *k)
+{
+ struct kv_pair kv;
+ kv.key = (char *) k;
+ struct kv_pair *p = hash_lookup (ht, &kv);
+ return p ? p->val : NULL;
+}
+
#endif
diff --git a/state_defs.h b/state_defs.h
index 269158c..dcc9cf5 100644
--- a/state_defs.h
+++ b/state_defs.h
@@ -18,6 +18,7 @@
#include <glib.h>
#include <microhttpd.h>
+#include "hash.h"
#include "mpipe.h"
#include "template.h"
@@ -63,7 +64,7 @@ typedef struct _my_state {
void *query; /* object query */
void *aquery; /* attribute query */
/* for bucket-level puts */
- GHashTable *dict;
+ Hash_table *dict;
/* for new producer/consumer model */
pipe_shared pipe;
int from_master;
http://repo.or.cz/w/iwhd.git/commit/d2ab5e735f780571599111bac2cf726c2f9ae62b
commit d2ab5e735f780571599111bac2cf726c2f9ae62b
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Jan 6 22:49:53 2011 +0100
begin converting hash tables from glib to gnulib
diff --git a/bootstrap.conf b/bootstrap.conf
index 625efb4..0e03de7 100644
--- a/bootstrap.conf
+++ b/bootstrap.conf
@@ -29,6 +29,8 @@ git-version-gen
gitlog-to-changelog
gnu-web-doc-update
gnupload
+hash
+hash-pjw
malloc-gnu
maintainer-makefile
manywarnings
diff --git a/gnulib-tests/.gitignore b/gnulib-tests/.gitignore
index 4b4d539..9c862bc 100644
--- a/gnulib-tests/.gitignore
+++ b/gnulib-tests/.gitignore
@@ -101,26 +101,46 @@
/zerosize-ptr.h
alloca.h
alloca.in.h
+anytostr.c
+asnprintf.c
binary-io.h
dup2.c
fcntl.h
fcntl.in.h
+float+.h
+float.h
+float.in.h
getpagesize.c
gnulib.mk
+hash-pjw.c
+hash-pjw.h
ignore-value.h
+imaxtostr.c
init.sh
+inttostr.c
+inttostr.h
lstat.c
macros.h
malloca.c
malloca.h
malloca.valgrind
+offtostr.c
open.c
pathmax.h
+printf-args.c
+printf-args.h
+printf-parse.c
+printf-parse.h
putenv.c
same-inode.h
setenv.c
signature.h
+size_max.h
+snprintf.c
stat.c
+stdio-write.c
+stdio.h
+stdio.in.h
symlink.c
sys
sys_stat.h
@@ -128,6 +148,7 @@ sys_stat.in.h
test-alloca-opt.c
test-binary-io.c
test-binary-io.sh
+test-bitrotate.c
test-c-ctype.c
test-dirname.c
test-dup2.c
@@ -139,7 +160,9 @@ test-fpending.sh
test-getopt.c
test-getopt.h
test-getopt_long.h
+test-hash.c
test-ignore-value.c
+test-inttostr.c
test-inttypes.c
test-lstat.c
test-lstat.h
@@ -159,11 +182,13 @@ test-quotearg-simple.c
test-quotearg.h
test-realloc-gnu.c
test-setenv.c
+test-snprintf.c
test-stat.c
test-stat.h
test-stdbool.c
test-stddef.c
test-stdint.c
+test-stdio.c
test-stdlib.c
test-strerror.c
test-string.c
@@ -177,6 +202,7 @@ test-time.c
test-unistd.c
test-unsetenv.c
test-update-copyright.sh
+test-vasnprintf.c
test-vc-list-files-cvs.sh
test-vc-list-files-git.sh
test-verify.c
@@ -194,6 +220,11 @@ test-xstrtoumax.c
test-xstrtoumax.sh
time.h
time.in.h
+uinttostr.c
+umaxtostr.c
unsetenv.c
+vasnprintf.c
+vasnprintf.h
wctob.c
+xsize.h
zerosize-ptr.h
diff --git a/replica.c b/replica.c
index 6fb396a..3efe65c 100644
--- a/replica.c
+++ b/replica.c
@@ -115,7 +115,6 @@ repl_worker_del (const repl_item *item)
key = strchr(bucket,'/');
if (!key) {
error(0,0,"invalid path replicating delete for %s",item->path);
- free(bucket);
return;
}
++key;
@@ -126,7 +125,6 @@ repl_worker_del (const repl_item *item)
error(0,0,"got status %d replicating delete for %s",
rc, item->path);
}
- free(bucket);
DPRINTF("finished replicating delete for %s, rc = %d\n",item->path,rc);
}
@@ -205,8 +203,6 @@ repl_worker (void *notused ATTRIBUTE_UNUSED)
error(0,0,"bad repl type %d (url=%s) skipped",
item->type, item->path);
}
- free(item->path);
- free(item);
/* No atomic dec without test? Lame. */
(void)g_atomic_int_dec_and_test(&rep_count);
}
@@ -258,7 +254,11 @@ repl_sget (void *ctx, const char *id)
return prov->path;
}
- return g_hash_table_lookup(prov->attrs,id);
+ struct kv_pair kv;
+ kv.key = (char *) id;
+ struct kv_pair *p = hash_lookup (prov->attrs, &kv);
+
+ return p ? p->val : NULL;
}
void
@@ -272,10 +272,6 @@ replicate (const char *url, size_t size, const char *policy, my_state *ms)
query_ctx_t qctx;
getter_t oget;
getter_t sget;
- GHashTableIter iter;
- gpointer key;
- gpointer value;
- provider_t *prov;
url2 = strdup(url);
if (!url2) {
@@ -303,12 +299,12 @@ replicate (const char *url, size_t size, const char *policy, my_state *ms)
sget.func = repl_sget;
sget.ctx = &qctx;
- init_prov_iter(&iter);
- while (g_hash_table_iter_next(&iter,&key,&value)) {
- if (!strcmp(key,me)) {
+ provider_t *prov;
+ for (prov = hash_get_first_prov (); prov;
+ prov = hash_get_next_prov (prov)) {
+ if (!strcmp(prov->name, me)) {
continue;
}
- prov = (provider_t *)value;
if (expr) {
qctx.cur_server = prov;
res = eval(expr,&oget,&sget);
@@ -351,28 +347,21 @@ replicate (const char *url, size_t size, const char *policy, my_state *ms)
g_atomic_int_inc(&rep_count);
sem_post(&queue_sema);
}
-
- free(url2);
}
static void
replicate_namespace_action (const char *name, repl_t action, my_state *ms)
{
- repl_item *item;
- GHashTableIter iter;
- gpointer key;
- gpointer value;
-
- init_prov_iter(&iter);
- while (g_hash_table_iter_next(&iter,&key,&value)) {
- if (!strcmp(key,me)) {
+ provider_t *prov;
+ for (prov = hash_get_first_prov (); prov;
+ prov = hash_get_next_prov (prov)) {
+ if (!strcmp(prov->name, me)) {
continue;
}
DPRINTF("replicating %s(%s) on %s\n",
(action == REPL_ODELETE ? "delete" : "create"),
- name,
- ((provider_t *)value)->name);
- item = malloc(sizeof(*item));
+ name, prov->name);
+ repl_item *item = malloc(sizeof(*item));
if (!item) {
error(0,errno,"could not create repl_item for %s",
name);
@@ -381,10 +370,9 @@ replicate_namespace_action (const char *name, repl_t action, my_state *ms)
item->type = action;
item->path = strdup(name);
if (!item->path) {
- free(item);
return;
}
- item->server = (provider_t *)value;
+ item->server = prov;
item->ms = ms;
pthread_mutex_lock(&queue_lock);
if (queue_tail) {
diff --git a/rest.c b/rest.c
index d55ebc5..08ba698 100644
--- a/rest.c
+++ b/rest.c
@@ -38,6 +38,7 @@
#include "dirname.h"
#include "iwh.h"
#include "closeout.h"
+#include "hash.h"
#include "progname.h"
#include "meta.h"
#include "backend.h"
@@ -1445,8 +1446,6 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
{
my_state *ms = ctx;
size_t len;
- gpointer key;
- const provider_t *prov;
const char *accept_hdr;
(void)pos;
@@ -1459,12 +1458,14 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
if (!ms->gen_ctx) {
return -1;
}
- init_prov_iter(&ms->prov_iter);
+ ms->prov_iter = hash_get_first_prov ();
len = tmpl_prov_header(ms->gen_ctx);
if (!len) {
return -1;
}
if (len > max) {
+ /* FIXME: don't truncate. Doing that would
+ result in syntactically invalid output. */
len = max;
}
memcpy(buf,ms->gen_ctx->buf,len);
@@ -1475,7 +1476,9 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
return -1;
}
- if (g_hash_table_iter_next(&ms->prov_iter,&key,(gpointer *)&prov)) {
+ const provider_t *prov = ms->prov_iter;
+ if (prov) {
+ ms->prov_iter = hash_get_next_prov (ms->prov_iter);
if (prov->deleted)
return 0;
len = tmpl_prov_entry(ms->gen_ctx,prov->name,prov->type,
diff --git a/setup.c b/setup.c
index 6b9454c..3d42e07 100644
--- a/setup.c
+++ b/setup.c
@@ -27,7 +27,6 @@
#include <string.h>
#include <strings.h>
#include <unistd.h>
-#include <assert.h>
#include <jansson.h>
@@ -35,6 +34,7 @@
#include "setup.h"
#include "query.h"
#include "meta.h"
+#include "xalloc.h"
/*
* A config consists of a JSON array of objects, where each object includes:
@@ -67,7 +67,7 @@ extern backend_func_tbl cf_func_tbl;
extern backend_func_tbl fs_func_tbl;
static json_t *config = NULL;
-static GHashTable *prov_hash = NULL;
+static Hash_table *prov_hash = NULL;
static pthread_mutex_t provider_hash_table_lock;
static provider_t *g_main_prov = NULL;
@@ -85,6 +85,28 @@ set_main_provider (provider_t *prov)
g_main_prov = prov;
}
+static void
+hash_insert_new (Hash_table *ht, const void *ent)
+{
+ void *e = hash_insert (ht, ent);
+ assert (e == ent);
+}
+
+static size_t
+hash_provider (void const *x, size_t table_size)
+{
+ provider_t const *p = x;
+ return hash_pjw (p->name, table_size);
+}
+
+static bool
+compare_providers (void const *x, void const *y)
+{
+ provider_t const *u = x;
+ provider_t const *v = y;
+ return STREQ (u->name, v->name) ? true : false;
+}
+
int
validate_provider (GHashTable *h)
{
@@ -308,7 +330,7 @@ convert_provider (int i, provider_t *out)
out->func_tbl = &bad_func_tbl;
}
- out->attrs = g_hash_table_new_full(g_str_hash,g_str_equal,NULL,NULL);
+ out->attrs = hash_initialize(13, NULL, kv_hash, kv_compare, kv_free);
iter = json_object_iter(server);
while (iter) {
key = json_object_iter_key(iter);
@@ -316,12 +338,16 @@ convert_provider (int i, provider_t *out)
if (!is_reserved_attr(key)) {
value = json_string_value(json_object_iter_value(iter));
if (value) {
- value = strdup(value);
+ value = xstrdup(value);
}
if (value) {
DPRINTF("%p.%s = %s\n",out,key,value);
- g_hash_table_insert(out->attrs,
- strdup((char *)key), (char *)value);
+ if (!kv_hash_insert_new (out->attrs,
+ xstrdup((char *)key),
+ (char *)value)) {
+ error (0, 0, "exhausted virtual memory");
+ return 0;
+ }
}
else {
error(0,0,"could not extract %u.%s",i,key);
@@ -387,8 +413,10 @@ add_provider (GHashTable *h)
else
prov->func_tbl = &bad_func_tbl;
- prov->attrs = g_hash_table_new_full(g_str_hash,g_str_equal,NULL,NULL);
- // FIXME: can't the above fail?
+ prov->attrs = hash_initialize(13, NULL, kv_hash, kv_compare, kv_free);
+ if (prov->attrs == NULL) {
+ goto fail;
+ }
GHashTableIter iter;
g_hash_table_iter_init (&iter, h);
@@ -404,15 +432,13 @@ add_provider (GHashTable *h)
continue;
}
DPRINTF("%p.%s = %s\n",prov, (char *)key, (char *)val);
- g_hash_table_insert(prov->attrs, strdup(key), strdup(val));
- // FIXME: check strdup for failure
+ if (!kv_hash_insert_new (prov->attrs, xstrdup(key), xstrdup(val)))
+ error (0, 0, "exhausted virtual memory");
+ goto fail;
}
}
- /* Note that we must strdup "name", since here it's a key, but above it's a
- value. Not using strdup here would lead to a use-after-free bug. */
- // FIXME: check strdup for failure
- g_hash_table_insert(prov_hash,strdup(name),prov);
+ hash_insert_new (prov_hash, prov);
pthread_mutex_unlock (&provider_hash_table_lock);
return 1;
@@ -451,7 +477,8 @@ parse_config_inner (void)
/* Everything looks OK. */
printf("%u replication servers defined\n",nservers-1);
pthread_mutex_init(&provider_hash_table_lock, NULL);
- prov_hash = g_hash_table_new_full(g_str_hash,g_str_equal,NULL,NULL);
+ prov_hash = hash_initialize (13, NULL, hash_provider,
+ compare_providers, NULL);
if (!prov_hash) {
error(0,0,"could not allocate provider hash");
goto err;
@@ -470,15 +497,13 @@ parse_config_inner (void)
new_prov = (provider_t *)malloc(sizeof(*new_prov));
if (!new_prov) {
error(0,errno,"could not allocate provider %u",i);
- free((char *)new_key);
goto err_free_hash;
}
if (!convert_provider(i,new_prov)) {
error(0,0,"could not add provider %u",i);
- free((char *)new_key);
- free(new_prov);
}
- g_hash_table_insert(prov_hash,(char *)new_key,new_prov);
+ assert (STREQ (new_key, new_prov->name));
+ hash_insert_new(prov_hash, new_prov);
if (!i) {
g_main_prov = new_prov;
primary = new_prov->name;
@@ -488,7 +513,7 @@ parse_config_inner (void)
return primary;
err_free_hash:
- g_hash_table_destroy(prov_hash);
+ hash_free (prov_hash);
prov_hash = NULL;
err:
return 0;
@@ -580,7 +605,9 @@ get_provider (const char *name)
}
pthread_mutex_lock (&provider_hash_table_lock);
- provider_t *p = g_hash_table_lookup(prov_hash,name);
+ provider_t key;
+ key.name = name;
+ provider_t *p = hash_lookup (prov_hash, &key);
pthread_mutex_unlock (&provider_hash_table_lock);
return p;
@@ -594,7 +621,9 @@ find_provider (const char *name)
}
pthread_mutex_lock (&provider_hash_table_lock);
- provider_t *p = g_hash_table_lookup(prov_hash, name);
+ provider_t key;
+ key.name = name;
+ provider_t *p = hash_lookup (prov_hash, &key);
pthread_mutex_unlock (&provider_hash_table_lock);
return p;
@@ -606,13 +635,23 @@ get_provider_value (const provider_t *prov, const char *fname)
if (!prov || !fname || (*fname == '\0')) {
return NULL;
}
- return g_hash_table_lookup(prov->attrs,fname);
+ struct kv_pair kv;
+ kv.key = (char *) fname;
+ struct kv_pair *p = hash_lookup (prov->attrs, &kv);
+
+ return p ? p->val : NULL;
}
-void
-init_prov_iter (GHashTableIter *iter)
+provider_t *
+hash_get_first_prov (void)
+{
+ return hash_get_first (prov_hash);
+}
+
+provider_t *
+hash_get_next_prov (void *p)
{
- g_hash_table_iter_init(iter,prov_hash);
+ return hash_get_next (prov_hash, p);
}
void
diff --git a/setup.h b/setup.h
index a1eb8f8..6bd0779 100644
--- a/setup.h
+++ b/setup.h
@@ -19,7 +19,10 @@
#include <glib.h>
#include <curl/curl.h> /* needed by stuff in state_defs.h (from backend.h) */
#include <microhttpd.h> /* ditto */
+#include <assert.h>
#include "backend.h"
+#include "hash.h"
+#include "hash-pjw.h"
typedef struct _provider {
const char *name;
@@ -31,7 +34,7 @@ typedef struct _provider {
const char *password;
const char *path;
backend_func_tbl *func_tbl;
- GHashTable *attrs;
+ Hash_table *attrs;
char *token;
} provider_t;
@@ -44,7 +47,6 @@ void update_provider (const char *provname,
const char *password);
const char *get_provider_value (const provider_t *prov,
const char *fname);
-void init_prov_iter (GHashTableIter *iter);
const char *auto_config (void);
int validate_provider (GHashTable *h);
@@ -53,4 +55,52 @@ int add_provider (GHashTable *h);
provider_t *get_main_provider (void);
void set_main_provider (provider_t *prov);
+provider_t *hash_get_first_prov (void);
+provider_t *hash_get_next_prov (void *p);
+
+struct kv_pair
+{
+ char *key;
+ char *val;
+};
+
+#define STREQ(a, b) (strcmp (a, b) == 0)
+
+static inline size_t
+kv_hash (void const *x, size_t table_size)
+{
+ struct kv_pair const *p = x;
+ return hash_pjw (p->key, table_size);
+}
+
+static inline bool
+kv_compare (void const *x, void const *y)
+{
+ struct kv_pair const *u = x;
+ struct kv_pair const *v = y;
+ return STREQ (u->key, v->key) ? true : false;
+}
+
+static inline void
+kv_free (void *x)
+{
+ struct kv_pair *p = x;
+ free (p->key);
+ free (p->val);
+ free (p);
+}
+
+static inline int
+kv_hash_insert_new (Hash_table *ht, char *k, char *v)
+{
+ struct kv_pair *kv = malloc (sizeof *kv);
+ if (!kv)
+ return 0;
+ kv->key = k;
+ kv->val = v;
+ void *e = hash_insert (ht, kv);
+ assert (e == kv);
+ return 1;
+}
+
#endif
diff --git a/state_defs.h b/state_defs.h
index d16f1ef..269158c 100644
--- a/state_defs.h
+++ b/state_defs.h
@@ -71,7 +71,7 @@ typedef struct _my_state {
pthread_t cache_th;
/* for bucket/object/provider list generators */
tmpl_ctx_t *gen_ctx;
- GHashTableIter prov_iter;
+ void *prov_iter;
/* for back-end functions */
backend_thunk_t thunk;
int be_flags;
diff --git a/t/basic b/t/basic
index e6125d4..49e8ab2 100644
--- a/t/basic
+++ b/t/basic
@@ -346,7 +346,7 @@ grep _primary p && { warn_ add-provider/reserved-name not rejected; fail=1; }
curl -X PUT $p1_url/_primary > p || fail=1
test -s p && fail=1
new_primary=$(curl http://localhost:$port/_providers/_primary) || fail=1
-test $new_primary = PROVIDER-1 || fail=1
+test "$new_primary" = PROVIDER-1 || fail=1
# Restore the primary attribute to the original.
# FIXME: if I don't restore, the following headless test makes iwhd segfault.
diff --git a/t/provider b/t/provider
index 354918c..7895c69 100644
--- a/t/provider
+++ b/t/provider
@@ -37,6 +37,7 @@ for i in $(seq $n); do
p=http://localhost:$port/_providers/p-$i
curl -d type=s3 -dhost=localhost -dport=80 -dkey=u -dsecret=p $p || fail=1
curl http://localhost:$port/_providers > p-list || fail=1
+ # Ensure that there is the correct number of p-DDD entries.
test $(grep -c '<provider name="p-[0-9]*">' p-list) = $i || fail=1
test $fail = 1 && sleep 99d
done
http://repo.or.cz/w/iwhd.git/commit/f5febefdfefc5ebff68388e4899d690d55d83a72
commit f5febefdfefc5ebff68388e4899d690d55d83a72
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Jan 6 23:13:38 2011 +0100
remove more tests of in-place provider changing
diff --git a/t/basic b/t/basic
index b07375a..e6125d4 100644
--- a/t/basic
+++ b/t/basic
@@ -203,21 +203,6 @@ for i in xml json; do
curl_H() { curl -H "Accept: */$i" "$@"; }
- # Change username and password; ensure that is reflected in providers output.
- curl_H -d provider="$p_name" -d username=u -d password=p - http://localhost:$port/_providers || fail=1
- curl_H http://localhost:$port/_providers > p || fail=1
- emit_provider $i "$p_name" fs '' 0 u p > p.xml || fail=1
- compare p.xml p || fail=1
-
- # Attempt to modify some other attribute; should fail (but curl won't)
- curl_H -d provider="$p_name" -d other_attr=foo - http://localhost:$port/_providers || fail=1
-
- # Attempt to modify nonexistent provider; should fail
- curl_H -d provider=no_such -d username=v -d password=q - http://localhost:$port/_providers || fail=1
-
# List an empty bucket.
curl_H $bucket > b || fail=1
emit_bucket_list $i > b.exp || fail=1
http://repo.or.cz/w/iwhd.git/commit/eb66edbe487f746d8bb6c244973bd86bb782a34a
commit eb66edbe487f746d8bb6c244973bd86bb782a34a
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Jan 6 23:03:25 2011 +0100
do not allow "updating" a provider in place -- now, you must remove and then re-add
diff --git a/rest.c b/rest.c
index 02ecff4..d55ebc5 100644
--- a/rest.c
+++ b/rest.c
@@ -1547,59 +1547,6 @@ prov_iterator (void *ctx, enum MHD_ValueKind kind, const char *key,
return MHD_YES;
}
-
-static int
-proxy_update_prov (void *cctx, struct MHD_Connection *conn, const char *url,
- const char *method, const char *version, const char *data,
- size_t *data_size, void **rctx)
-{
- struct MHD_Response *resp;
- my_state *ms = *rctx;
- int rc;
- char *provider;
- char *username;
- char *password;
-
- (void)cctx;
- (void)method;
- (void)version;
-
- if (ms->state == MS_NEW) {
- ms->state = MS_NORMAL;
- ms->url = (char *)url;
- ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,NULL,NULL);
- ms->post = MHD_create_post_processor(conn,4096,
- prov_iterator,ms->dict);
- }
- else if (*data_size) {
- MHD_post_process(ms->post,data,*data_size);
- *data_size = 0;
- }
- else {
- rc = MHD_HTTP_BAD_REQUEST;
- provider = g_hash_table_lookup(ms->dict,"provider");
- username = g_hash_table_lookup(ms->dict,"username");
- password = g_hash_table_lookup(ms->dict,"password");
- if (provider && username && password) {
- update_provider(provider,username,password);
- rc = MHD_HTTP_OK;
- }
- else {
- DPRINTF("provider/username/password MISSING\n");
- }
- resp = MHD_create_response_from_data(0,NULL,MHD_NO,MHD_NO);
- if (!resp) {
- fprintf(stderr,"MHD_crfd failed\n");
- return MHD_NO;
- }
- MHD_queue_response(conn,rc,resp);
- MHD_destroy_response(resp);
- }
-
- return MHD_YES;
-}
-
static char *
url_to_provider_name (const char *url)
{
@@ -1876,8 +1823,6 @@ static const rule my_rules[] = {
"DELETE", URL_ATTR, NULL },
{ /* get provider list */
"GET", URL_PROVLIST, proxy_list_provs },
- { /* update a provider */
- "POST", URL_PROVLIST, proxy_update_prov },
{ /* get the primary provider */
"GET", URL_PROVIDER, proxy_primary_prov },
{ /* create a provider */
diff --git a/t/basic b/t/basic
index fa5cdaf..b07375a 100644
--- a/t/basic
+++ b/t/basic
@@ -218,27 +218,6 @@ for i in xml json; do
curl_H -d provider=no_such -d username=v -d password=q http://localhost:$port/_providers || fail=1
- # Try to change just one of username and password, not specifying the other.
- # Ensure that the attempt fails.
- curl_H -d provider="$p_name" -d username=u - http://localhost:$port/_providers || fail=1
- curl_H http://localhost:$port/_providers > p || fail=1
- emit_provider $i "$p_name" fs '' 0 u p > p.exp || fail=1
- compare p.exp p || fail=1
-
- curl_H -d provider="$p_name" -d password=p - http://localhost:$port/_providers || fail=1
- curl_H http://localhost:$port/_providers > p || fail=1
- emit_provider $i "$p_name" fs '' 0 u p > p.exp || fail=1
- compare p.exp p || fail=1
-
- # Try to change both username and password. Now, it must succeed.
- curl_H -d provider="$p_name" -d username=v -d password=q - http://localhost:$port/_providers || fail=1
- curl_H http://localhost:$port/_providers > p || fail=1
- emit_provider $i "$p_name" fs '' 0 v q > p.exp || fail=1
- compare p.exp p || fail=1
-
# List an empty bucket.
curl_H $bucket > b || fail=1
emit_bucket_list $i > b.exp || fail=1
http://repo.or.cz/w/iwhd.git/commit/a7de76399efe41f1040259ade74837d2eb8736c3
commit a7de76399efe41f1040259ade74837d2eb8736c3
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Jan 6 20:16:24 2011 +0100
guard provider-addition with a mutex; tighten provider test
* rest.c: Remove some small, now-unnecessary free calls.
* setup.c (add_provider): Guard with new file-global mutex.
* t/provider: Remove ulimit on core size, so that if one is
dumped, we get something that's usable.
Sleep upon failure.
diff --git a/rest.c b/rest.c
index c7b5272..02ecff4 100644
--- a/rest.c
+++ b/rest.c
@@ -173,8 +173,6 @@ child_closer (void * ctx)
pipe_private *pp = ctx;
DPRINTF("in %s\n",__func__);
-
- free(pp);
}
/* Invoked from MHD. */
@@ -268,7 +266,6 @@ proxy_get_data (void *cctx, struct MHD_Connection *conn, const char *url,
conn, MHD_HEADER_KIND, "If-None-Match");
if (user_etag && !strcmp(user_etag,my_etag)) {
DPRINTF("ETag match!\n");
- free(my_etag);
resp = MHD_create_response_from_data(0,NULL,
MHD_NO,MHD_NO);
MHD_queue_response(conn,MHD_HTTP_NOT_MODIFIED,resp);
@@ -326,7 +323,6 @@ proxy_get_data (void *cctx, struct MHD_Connection *conn, const char *url,
fprintf(stderr,"MHD_crfc failed\n");
if (pp2) {
/* TBD: terminate thread */
- free(pp2);
}
child_closer(pp);
return MHD_NO;
@@ -497,7 +493,6 @@ proxy_put_data (void *cctx, struct MHD_Connection *conn, const char *url,
}
resp = MHD_create_response_from_data(0,NULL,MHD_NO,MHD_NO);
if (!resp) {
- free(etag);
return MHD_NO;
}
if (etag) {
@@ -730,7 +725,6 @@ proxy_query_func (void *ctx, uint64_t pos, char *buf, size_t max)
len = max;
}
memcpy(buf,ms->gen_ctx->buf,len);
- free(ms->gen_ctx);
ms->gen_ctx = TMPL_CTX_DONE;
return len;
}
@@ -857,7 +851,6 @@ proxy_delete (void *cctx, struct MHD_Connection *conn, const char *url,
bucket = strtok_r(copied_url,"/",&stctx);
key = strtok_r(NULL,"/",&stctx);
meta_delete(bucket,key);
- free(copied_url);
replicate_delete(url,ms);
}
@@ -956,7 +949,6 @@ root_blob_generator (void *ctx, uint64_t pos, char *buf, size_t max)
len = max;
}
memcpy(buf,ms->gen_ctx->buf,len);
- free(ms->gen_ctx);
ms->gen_ctx = TMPL_CTX_DONE;
return len;
}
@@ -1344,7 +1336,6 @@ parts_callback (void *ctx, uint64_t pos, char *buf, size_t max)
len = max;
}
memcpy(buf,ms->gen_ctx->buf,len);
- free(ms->gen_ctx);
ms->gen_ctx = TMPL_CTX_DONE;
return len;
}
@@ -1507,7 +1498,6 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
len = max;
}
memcpy(buf,ms->gen_ctx->buf,len);
- free(ms->gen_ctx);
ms->gen_ctx = TMPL_CTX_DONE;
return len;
}
@@ -1621,7 +1611,6 @@ url_to_provider_name (const char *url)
strip_trailing_slashes (p);
char *prov_name = strdup (last_component (p));
- free (p);
return prov_name;
}
@@ -1701,8 +1690,7 @@ proxy_set_primary (void *cctx, struct MHD_Connection *conn, const char *url,
set_main_provider (prov);
}
- bad_set:
- free (name);
+ bad_set:;
struct MHD_Response *resp;
resp = MHD_create_response_from_data(0, NULL, MHD_NO, MHD_NO);
@@ -1749,7 +1737,6 @@ proxy_delete_prov (void *cctx, struct MHD_Connection *conn, const char *url,
prov->deleted = 1;
}
- free (prov_name);
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
diff --git a/setup.c b/setup.c
index 6d1536f..6b9454c 100644
--- a/setup.c
+++ b/setup.c
@@ -68,6 +68,7 @@ extern backend_func_tbl fs_func_tbl;
static json_t *config = NULL;
static GHashTable *prov_hash = NULL;
+static pthread_mutex_t provider_hash_table_lock;
static provider_t *g_main_prov = NULL;
provider_t *g_master_prov = NULL;
@@ -345,6 +346,8 @@ add_provider (GHashTable *h)
if (prov == NULL)
return 0;
+ pthread_mutex_lock (&provider_hash_table_lock);
+
prov->name = strdup (name);
if (prov->name == NULL)
goto fail;
@@ -411,16 +414,11 @@ add_provider (GHashTable *h)
// FIXME: check strdup for failure
g_hash_table_insert(prov_hash,strdup(name),prov);
+ pthread_mutex_unlock (&provider_hash_table_lock);
return 1;
fail:
- free ((char *) prov->name);
- free ((char *) prov->type);
- free ((char *) prov->host);
- free ((char *) prov->username);
- free ((char *) prov->password);
- free ((char *) prov->path);
- free (prov);
+ pthread_mutex_unlock (&provider_hash_table_lock);
return 0;
}
@@ -452,6 +450,7 @@ parse_config_inner (void)
/* Everything looks OK. */
printf("%u replication servers defined\n",nservers-1);
+ pthread_mutex_init(&provider_hash_table_lock, NULL);
prov_hash = g_hash_table_new_full(g_str_hash,g_str_equal,NULL,NULL);
if (!prov_hash) {
error(0,0,"could not allocate provider hash");
@@ -579,7 +578,12 @@ get_provider (const char *name)
if (!prov_hash || !name || (*name == '\0')) {
return NULL;
}
- return g_hash_table_lookup(prov_hash,name);
+
+ pthread_mutex_lock (&provider_hash_table_lock);
+ provider_t *p = g_hash_table_lookup(prov_hash,name);
+ pthread_mutex_unlock (&provider_hash_table_lock);
+
+ return p;
}
provider_t *
@@ -589,7 +593,9 @@ find_provider (const char *name)
return NULL;
}
+ pthread_mutex_lock (&provider_hash_table_lock);
provider_t *p = g_hash_table_lookup(prov_hash, name);
+ pthread_mutex_unlock (&provider_hash_table_lock);
return p;
}
diff --git a/t/provider b/t/provider
index eb8715c..354918c 100644
--- a/t/provider
+++ b/t/provider
@@ -16,6 +16,7 @@ wait_for .1 50 'mongo localhost:$m_port < /dev/null' || framework_failure_ mongod failed to start
port=9095
+ulimit -c unlimited
printf '[{"path": "FS", "type": "fs", "name": "primary"}]\n' > iwhd.cfg || fail=1
@@ -28,12 +29,16 @@ cleanup_() { kill -9 $mongo_pid; kill $iwhd_pid; }
wait_for .1 50 "curl -s http://localhost:$port" || { echo iwhd failed to listen; Exit 1; }
+fail=0
n=400
# ===================================
for i in $(seq $n); do
# Add provider
p=http://localhost:$port/_providers/p-$i
curl -d type=s3 -dhost=localhost -dport=80 -dkey=u -dsecret=p $p || fail=1
+ curl http://localhost:$port/_providers > p-list || fail=1
+ test $(grep -c '<provider name="p-[0-9]*">' p-list) = $i || fail=1
+ test $fail = 1 && sleep 99d
done
# List providers.
http://repo.or.cz/w/iwhd.git/commit/d1aa322fb7750dd52d893f73d499f4f0e27f0c06
commit d1aa322fb7750dd52d893f73d499f4f0e27f0c06
Author: Jim Meyering <meyering(a)redhat.com>
Date: Tue Jan 4 12:59:23 2011 +0100
tests: add dynamic-provider test
* t/provider: New file.
diff --git a/t/Makefile.am b/t/Makefile.am
index e56860c..263ec73 100644
--- a/t/Makefile.am
+++ b/t/Makefile.am
@@ -17,6 +17,7 @@
TESTS = parser-test basic \
+  provider \
  replication auto
diff --git a/t/provider b/t/provider
new file mode 100644
index 0000000..eb8715c
--- /dev/null
+++ b/t/provider
@@ -0,0 +1,65 @@
+#!/bin/sh
+# add and remove providers
+
+. "${srcdir=.}/init.sh"; path_prepend_ ..
+
+mkdir FS mongod iwhd || framework_failure_ mkdir failed
+
+m_port=$(expr $mongo_base_port + 2)
+
+mongod --port $m_port --pidfilepath mongod/pid --dbpath mongod > mongod.log 2>&1 &
+mongo_pid=$!
+cleanup_() { kill -9 $mongo_pid; }
+
+# Wait for up to 5 seconds for mongod to begin listening.
+wait_for .1 50 'mongo localhost:$m_port < /dev/null' \
+  || framework_failure_ mongod failed to start
+
+port=9095
+
+printf '[{"path": "FS", "type": "fs", "name": "primary"}]\n' \
+  > iwhd.cfg || fail=1
+
+iwhd -v -p $port -c iwhd.cfg -d localhost:$m_port &
+iwhd_pid=$!
+cleanup_() { kill -9 $mongo_pid; kill $iwhd_pid; }
+
+# Wait for up to 5 seconds for iwhd to begin listening on $port.
+wait_for .1 50 "curl -s http://localhost:$port" \
+  || { echo iwhd failed to listen; Exit 1; }
+
+n=400
+# ===================================
+for i in $(seq $n); do
+ # Add provider
+ p=http://localhost:$port/_providers/p-$i
+ curl -d type=s3 -dhost=localhost -dport=80 -dkey=u -dsecret=p $p || fail=1
+done
+
+# List providers.
+curl http://localhost:$port/_providers > p-list || fail=1
+
+# Ensure that each was added:
+for i in $(seq $n); do
+ grep 'name="'p-$i'"' p-list || fail=1
+done
+
+# ===================================
+for i in $(seq $n); do
+ # Remove provider
+ p=http://localhost:$port/_providers/p-$i
+ curl -f -X DELETE $p
+done
+
+# List providers.
+curl http://localhost:$port/_providers > p-list || fail=1
+
+# Ensure that the primary one is still there:
+grep 'name="primary"' p-list || fail=1
+
+# Ensure that each has been removed:
+for i in $(seq $n); do
+ grep 'name="'p-$i'"' p-list && fail=1
+done
+
+Exit $fail
http://repo.or.cz/w/iwhd.git/commit/f0877f6496e4c7f1df2834bb01681161f379a4a0
commit f0877f6496e4c7f1df2834bb01681161f379a4a0
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Dec 23 22:04:41 2010 +0100
remove functions and struct members that are no longer needed
* rest.c (free_ms): Remove function and all uses.
* setup.c (delete_provider): Remove function and sole use.
* setup.h (refcnt): Remove struct member and all uses.
* state_defs.h (cleanup, refcnt): Remove struct members.
Remove all uses.
* replica.c: Remove uses of the above.
diff --git a/replica.c b/replica.c
index b17148d..6fb396a 100644
--- a/replica.c
+++ b/replica.c
@@ -205,7 +205,6 @@ repl_worker (void *notused ATTRIBUTE_UNUSED)
error(0,0,"bad repl type %d (url=%s) skipped",
item->type, item->path);
}
- free_ms(item->ms);
free(item->path);
free(item);
/* No atomic dec without test? Lame. */
@@ -338,7 +337,6 @@ replicate (const char *url, size_t size, const char *policy, my_state *ms)
item->server = prov;
item->size = size;
item->ms = ms;
- g_atomic_int_inc(&ms->refcnt);
pthread_mutex_lock(&queue_lock);
if (queue_tail) {
item->next = queue_tail->next;
@@ -388,7 +386,6 @@ replicate_namespace_action (const char *name, repl_t action, my_state *ms)
}
item->server = (provider_t *)value;
item->ms = ms;
- g_atomic_int_inc(&ms->refcnt);
pthread_mutex_lock(&queue_lock);
if (queue_tail) {
item->next = queue_tail->next;
diff --git a/rest.c b/rest.c
index 3d59747..c7b5272 100644
--- a/rest.c
+++ b/rest.c
@@ -75,44 +75,6 @@ static const char *const (reserved_name[]) = {"_default", "_new", "_policy", "_q
static const char *const (reserved_attr[]) = {"_bucket", "_date", "_etag", "_key", "_loc", "_size", NULL};
static const char *const (reserved_bucket_name[]) = {"_new", "_providers", NULL};
-void
-free_ms (my_state *ms)
-{
- if (!g_atomic_int_dec_and_test(&ms->refcnt)) {
- return;
- }
-
- if (ms->cleanup & CLEANUP_BUF_PTR) {
- free(ms->pipe.data_ptr);
- }
-
- if (ms->cleanup & CLEANUP_POST) {
- MHD_destroy_post_processor(ms->post);
- }
-
- if (ms->cleanup & CLEANUP_DICT) {
- g_hash_table_destroy(ms->dict);
- }
-
- if (ms->cleanup & CLEANUP_QUERY) {
- meta_query_stop(ms->query);
- }
-
- if (ms->cleanup & CLEANUP_TMPL) {
- free(ms->gen_ctx);
- }
-
- if (ms->cleanup & CLEANUP_URL) {
- free(ms->url);
- }
-
- if (ms->cleanup & CLEANUP_AQUERY) {
- meta_attr_stop(ms->aquery);
- }
-
- free(ms);
-}
-
static int
validate_put (struct MHD_Connection *conn)
{
@@ -203,7 +165,6 @@ simple_closer (void *ctx)
my_state *ms = ctx;
DPRINTF("%s: cleaning up\n",__func__);
- free_ms(ms);
}
static void
@@ -266,7 +227,6 @@ proxy_get_cons (void *ctx, uint64_t pos, char *buf, size_t max)
pthread_join(ms->cache_th,NULL);
/* TBD: do something about cache failure? */
}
- free_ms(ms);
}
return done;
@@ -297,7 +257,6 @@ proxy_get_data (void *cctx, struct MHD_Connection *conn, const char *url,
if (!ms->url) {
return MHD_NO;
}
- ms->cleanup |= CLEANUP_URL;
my_etag = meta_has_copy(ms->bucket,ms->key,me);
if (!my_etag) {
@@ -326,7 +285,6 @@ proxy_get_data (void *cctx, struct MHD_Connection *conn, const char *url,
MHD_NO,MHD_NO);
MHD_queue_response(conn,MHD_HTTP_NOT_FOUND,resp);
MHD_destroy_response(resp);
- free_ms(ms);
return MHD_YES;
}
DPRINTF(" will fetch from %s:%u\n", master_host,master_port);
@@ -448,7 +406,6 @@ proxy_put_data (void *cctx, struct MHD_Connection *conn, const char *url,
}
MHD_queue_response(conn,MHD_HTTP_FORBIDDEN,resp);
MHD_destroy_response(resp);
- free_ms(ms);
return MHD_YES;
}
ms->state = MS_NORMAL;
@@ -456,7 +413,6 @@ proxy_put_data (void *cctx, struct MHD_Connection *conn, const char *url,
if (!ms->url) {
return MHD_NO;
}
- ms->cleanup |= CLEANUP_URL;
ms->size = 0;
pipe_init_shared(&ms->pipe,ms,1);
pp = pipe_init_private(&ms->pipe);
@@ -539,7 +495,6 @@ proxy_put_data (void *cctx, struct MHD_Connection *conn, const char *url,
recheck_replication(ms,NULL);
rc = MHD_HTTP_OK;
}
- free_ms(ms);
resp = MHD_create_response_from_data(0,NULL,MHD_NO,MHD_NO);
if (!resp) {
free(etag);
@@ -587,7 +542,6 @@ proxy_get_attr (void *cctx, struct MHD_Connection *conn, const char *url,
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
- free_ms(ms);
return MHD_YES;
}
@@ -613,7 +567,6 @@ proxy_put_attr (void *cctx, struct MHD_Connection *conn, const char *url,
if (!ms->url) {
return MHD_NO;
}
- ms->cleanup |= CLEANUP_URL;
attrval = MHD_lookup_connection_value(conn,MHD_HEADER_KIND,
"X-redhat-value");
if (attrval) {
@@ -638,7 +591,6 @@ proxy_put_attr (void *cctx, struct MHD_Connection *conn, const char *url,
return MHD_NO;
}
((char *)ms->pipe.data_ptr)[0] = '\0';
- ms->cleanup |= CLEANUP_BUF_PTR;
}
(void)strncat(ms->pipe.data_ptr,data,*data_size);
/* TBD: check return value */
@@ -657,22 +609,15 @@ proxy_put_attr (void *cctx, struct MHD_Connection *conn, const char *url,
MHD_queue_response(conn,MHD_HTTP_BAD_REQUEST,
resp);
MHD_destroy_response(resp);
- free_ms(ms);
return MHD_YES;
}
meta_set_value(ms->bucket,ms->key,ms->attr,ms->pipe.data_ptr);
- /* This might get stomped by replication. */
- if (ms->cleanup & CLEANUP_BUF_PTR) {
- free(ms->pipe.data_ptr);
- ms->cleanup &= ~CLEANUP_BUF_PTR;
- }
/*
* We should always re-replicate, because the replication
* policy might refer to this attr.
*/
DPRINTF("rereplicate (attr PUT)\n");
recheck_replication(ms,NULL);
- free_ms(ms);
send_resp = 1;
}
@@ -744,7 +689,6 @@ proxy_query_func (void *ctx, uint64_t pos, char *buf, size_t max)
if (!ms->gen_ctx) {
return -1;
}
- ms->cleanup |= CLEANUP_TMPL;
len = tmpl_list_header(ms->gen_ctx);
if (!len) {
return -1;
@@ -787,7 +731,6 @@ proxy_query_func (void *ctx, uint64_t pos, char *buf, size_t max)
}
memcpy(buf,ms->gen_ctx->buf,len);
free(ms->gen_ctx);
- ms->cleanup &= ~CLEANUP_TMPL;
ms->gen_ctx = TMPL_CTX_DONE;
return len;
}
@@ -810,7 +753,6 @@ proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
ms->state = MS_NORMAL;
ms->post = MHD_create_post_processor(conn,4096,
query_iterator,ms);
- ms->cleanup |= CLEANUP_POST;
}
else if (*data_size) {
MHD_post_process(ms->post,data,*data_size);
@@ -829,7 +771,6 @@ proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
return MHD_NO;
}
((char *)ms->pipe.data_ptr)[0] = '\0';
- ms->cleanup |= CLEANUP_BUF_PTR;
}
(void)strncat(ms->pipe.data_ptr,data,*data_size);
/* TBD: check return value */
@@ -840,7 +781,6 @@ proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
return MHD_NO;
}
ms->query = meta_query_new(ms->bucket,NULL,ms->pipe.data_ptr);
- ms->cleanup |= CLEANUP_QUERY;
resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
65536, proxy_query_func, ms, simple_closer);
if (!resp) {
@@ -850,7 +790,6 @@ proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
}
MHD_queue_response(conn,MHD_HTTP_OK,resp);
MHD_destroy_response(resp);
- //free_ms(ms);
}
return MHD_YES;
@@ -872,7 +811,6 @@ proxy_list_objs (void *cctx, struct MHD_Connection *conn, const char *url,
(void)data_size;
ms->query = meta_query_new((char *)ms->bucket,NULL,NULL);
- ms->cleanup |= CLEANUP_QUERY;
resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
65536, proxy_query_func, ms, simple_closer);
@@ -925,14 +863,12 @@ proxy_delete (void *cctx, struct MHD_Connection *conn, const char *url,
resp = MHD_create_response_from_data(0,NULL,MHD_NO,MHD_NO);
if (!resp) {
- free_ms(ms);
return MHD_NO;
}
error (0, 0, "DELETE BUCKET: rc=%d", rc);
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
- free_ms(ms);
return MHD_YES;
}
@@ -971,7 +907,6 @@ root_blob_generator (void *ctx, uint64_t pos, char *buf, size_t max)
if (!ms->gen_ctx) {
return -1;
}
- ms->cleanup |= CLEANUP_TMPL;
ms->gen_ctx->base = host;
len = tmpl_root_header(ms->gen_ctx,"image_warehouse",VERSION);
if (!len) {
@@ -1022,7 +957,6 @@ root_blob_generator (void *ctx, uint64_t pos, char *buf, size_t max)
}
memcpy(buf,ms->gen_ctx->buf,len);
free(ms->gen_ctx);
- ms->cleanup &= ~CLEANUP_TMPL;
ms->gen_ctx = TMPL_CTX_DONE;
return len;
}
@@ -1045,11 +979,8 @@ proxy_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
ms->query = meta_query_new(NULL,"_default",NULL);
if (!ms->query) {
- free_ms(ms);
return MHD_NO;
}
- ms->cleanup |= CLEANUP_QUERY;
-
resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
65536, root_blob_generator, ms, simple_closer);
if (!resp) {
@@ -1194,10 +1125,8 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
g_str_hash,g_str_equal,NULL,NULL);
- ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
- ms->cleanup |= CLEANUP_POST;
return MHD_YES;
}
@@ -1237,7 +1166,6 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
- free_ms(ms);
return MHD_YES;
}
@@ -1262,10 +1190,8 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
g_str_hash,g_str_equal,NULL,NULL);
- ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
- ms->cleanup |= CLEANUP_POST;
}
else if (*data_size) {
MHD_post_process(ms->post,data,*data_size);
@@ -1300,7 +1226,6 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
}
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
- free_ms(ms);
}
return MHD_YES;
@@ -1375,7 +1300,6 @@ parts_callback (void *ctx, uint64_t pos, char *buf, size_t max)
if (!ms->gen_ctx) {
return -1;
}
- ms->cleanup |= CLEANUP_TMPL;
ms->gen_ctx->base = host;
len = tmpl_obj_header(ms->gen_ctx,ms->bucket,ms->key);
if (!len) {
@@ -1421,7 +1345,6 @@ parts_callback (void *ctx, uint64_t pos, char *buf, size_t max)
}
memcpy(buf,ms->gen_ctx->buf,len);
free(ms->gen_ctx);
- ms->cleanup &= ~CLEANUP_TMPL;
ms->gen_ctx = TMPL_CTX_DONE;
return len;
}
@@ -1435,7 +1358,6 @@ show_parts (struct MHD_Connection *conn, my_state *ms)
if (!ms->aquery) {
return MHD_HTTP_NOT_FOUND;
}
- ms->cleanup |= CLEANUP_AQUERY;
resp = MHD_create_response_from_callback(MHD_SIZE_UNKNOWN,
65536, parts_callback, ms, simple_closer);
@@ -1470,10 +1392,8 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
g_str_hash,g_str_equal,NULL,NULL);
- ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
- ms->cleanup |= CLEANUP_POST;
}
else if (*data_size) {
MHD_post_process(ms->post,data,*data_size);
@@ -1517,12 +1437,10 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
MHD_NO,MHD_NO);
if (!resp) {
fprintf(stderr,"MHD_crfd failed\n");
- free_ms(ms);
return MHD_NO;
}
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
- free_ms(ms);
}
}
@@ -1550,7 +1468,6 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
if (!ms->gen_ctx) {
return -1;
}
- ms->cleanup |= CLEANUP_TMPL;
init_prov_iter(&ms->prov_iter);
len = tmpl_prov_header(ms->gen_ctx);
if (!len) {
@@ -1591,7 +1508,6 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
}
memcpy(buf,ms->gen_ctx->buf,len);
free(ms->gen_ctx);
- ms->cleanup &= ~CLEANUP_TMPL;
ms->gen_ctx = TMPL_CTX_DONE;
return len;
}
@@ -1663,10 +1579,8 @@ proxy_update_prov (void *cctx, struct MHD_Connection *conn, const char *url,
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
g_str_hash,g_str_equal,NULL,NULL);
- ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
prov_iterator,ms->dict);
- ms->cleanup |= CLEANUP_POST;
}
else if (*data_size) {
MHD_post_process(ms->post,data,*data_size);
@@ -1687,12 +1601,10 @@ proxy_update_prov (void *cctx, struct MHD_Connection *conn, const char *url,
resp = MHD_create_response_from_data(0,NULL,MHD_NO,MHD_NO);
if (!resp) {
fprintf(stderr,"MHD_crfd failed\n");
- free_ms(ms);
return MHD_NO;
}
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
- free_ms(ms);
}
return MHD_YES;
@@ -1819,7 +1731,6 @@ proxy_delete_prov (void *cctx, struct MHD_Connection *conn, const char *url,
struct MHD_Response *resp
= MHD_create_response_from_data(0,NULL,MHD_NO,MHD_NO);
if (!resp) {
- free_ms(ms);
return MHD_NO;
}
@@ -1835,19 +1746,13 @@ proxy_delete_prov (void *cctx, struct MHD_Connection *conn, const char *url,
DPRINTF("PROXY DELETE PROVIDER prov=%s rc=%d\n", prov_name, rc);
if (prov) {
- /* Delete for real if no one is using it.
- Otherwise, just mark it as deleted. */
- if (g_atomic_int_get (&prov->refcnt) == 0)
- delete_provider (prov);
- else
- prov->deleted = 1;
+ prov->deleted = 1;
}
free (prov_name);
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
- free_ms(ms);
return MHD_YES;
}
@@ -1869,10 +1774,8 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
g_str_hash,g_str_equal,NULL,NULL);
- ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
prov_iterator,ms->dict);
- ms->cleanup |= CLEANUP_POST;
}
else if (*data_size) {
MHD_post_process(ms->post,data,*data_size);
@@ -1922,7 +1825,6 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
}
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
- free_ms(ms);
}
return MHD_YES;
@@ -1955,7 +1857,6 @@ proxy_create_bucket (void *cctx, struct MHD_Connection *conn, const char *url,
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
- free_ms(ms);
return MHD_YES;
}
@@ -2083,7 +1984,6 @@ access_handler (void *cctx, struct MHD_Connection *conn, const char *url,
if (!ms) {
return MHD_NO;
}
- ms->refcnt = 1;
utype = parse_url(url,ms);
@@ -2107,9 +2007,6 @@ access_handler (void *cctx, struct MHD_Connection *conn, const char *url,
data,data_size,rctx);
}
- /* Don't need this after all. Free before the next check. */
- free_ms(ms);
-
if (!strcmp(method,"QUIT")) {
(void)sem_post((sem_t *)cctx);
return MHD_NO;
diff --git a/setup.c b/setup.c
index 4b3627e..6d1536f 100644
--- a/setup.c
+++ b/setup.c
@@ -331,7 +331,6 @@ convert_provider (int i, provider_t *out)
out->token = NULL;
out->deleted = 0;
- out->refcnt = 0;
return 1;
}
@@ -595,19 +594,6 @@ find_provider (const char *name)
return p;
}
-void
-delete_provider (provider_t *prov)
-{
- // Remove from prov_hash, then free
-
- g_hash_table_destroy(prov->attrs);
- // FIXME: free other members, too?
-
- // FIXME: unchecked strdup
- char *name = strdup (prov->name);
- g_hash_table_remove(prov_hash, prov->name);
-}
-
const char *
get_provider_value (const provider_t *prov, const char *fname)
{
diff --git a/setup.h b/setup.h
index 76fef20..a1eb8f8 100644
--- a/setup.h
+++ b/setup.h
@@ -27,7 +27,6 @@ typedef struct _provider {
const char *host;
int port;
int deleted;
- gint refcnt;
const char *username;
const char *password;
const char *path;
@@ -51,7 +50,6 @@ const char *auto_config (void);
int validate_provider (GHashTable *h);
provider_t *find_provider (const char *name);
int add_provider (GHashTable *h);
-void delete_provider (provider_t *prov);
provider_t *get_main_provider (void);
void set_main_provider (provider_t *prov);
diff --git a/state_defs.h b/state_defs.h
index 898152f..d16f1ef 100644
--- a/state_defs.h
+++ b/state_defs.h
@@ -44,8 +44,6 @@ typedef struct {
} backend_thunk_t;
typedef struct _my_state {
- volatile gint refcnt;
- int cleanup;
/* for everyone */
MHD_AccessHandlerCallback handler;
ms_state state;
@@ -90,6 +88,4 @@ typedef struct _my_state {
#define BACKEND_GET_SIZE 0x01 /* used in put_child_func */
-void free_ms (my_state *ms);
-
#endif
http://repo.or.cz/w/iwhd.git/commit/7ec84dde8be9610f33c46b3fbd9dc5607f13510b
commit 7ec84dde8be9610f33c46b3fbd9dc5607f13510b
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Dec 23 18:36:38 2010 +0100
garbage-collection fix-up
Do not instruct libraries to free things that they do not allocate,
since they would use the system "free" function.
* setup.c, rest.c: Do not tell g_hash_table_new_full to free anything.
* rest.c (proxy_get_attr): Do not tell MHD to free anything.
diff --git a/rest.c b/rest.c
index fafeb9e..3d59747 100644
--- a/rest.c
+++ b/rest.c
@@ -316,7 +316,6 @@ proxy_get_data (void *cctx, struct MHD_Connection *conn, const char *url,
MHD_destroy_response(resp);
return MHD_YES;
}
- free(my_etag);
ms->from_master = 0;
}
else {
@@ -384,7 +383,6 @@ static void
recheck_replication (my_state * ms, char *policy)
{
int rc;
- int free_it = FALSE;
char fixed[MAX_FIELD_LEN];
if (is_reserved(ms->key,reserved_name)) {
@@ -398,8 +396,6 @@ recheck_replication (my_state * ms, char *policy)
}
if (!policy) {
- /* If we get a policy here or below, we have to free it. */
- free_it = TRUE;
DPRINTF("fetching policy for %s/%s\n",ms->bucket,ms->key);
rc = meta_get_value(ms->bucket,ms->key, "_policy", &policy);
}
@@ -418,9 +414,6 @@ recheck_replication (my_state * ms, char *policy)
*/
snprintf(fixed,sizeof(fixed),"%s/%s",ms->bucket,ms->key);
replicate(fixed,0,policy,ms);
- if (free_it) {
- free(policy);
- }
}
else {
DPRINTF(" could not find a policy anywhere!\n");
@@ -554,7 +547,6 @@ proxy_put_data (void *cctx, struct MHD_Connection *conn, const char *url,
}
if (etag) {
MHD_add_response_header(resp,"ETag",etag);
- free(etag);
}
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
@@ -583,7 +575,7 @@ proxy_get_attr (void *cctx, struct MHD_Connection *conn, const char *url,
if (meta_get_value(ms->bucket,ms->key,ms->attr,&fixed) == 0) {
resp = MHD_create_response_from_data(strlen(fixed),fixed,
- MHD_YES,MHD_NO);
+ MHD_NO,MHD_NO);
rc = MHD_HTTP_OK;
}
else {
@@ -1201,7 +1193,7 @@ control_api_root (void *cctx, struct MHD_Connection *conn, const char *url,
ms->state = MS_NORMAL;
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,free,free);
+ g_str_hash,g_str_equal,NULL,NULL);
ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
@@ -1269,7 +1261,7 @@ proxy_bucket_post (void *cctx, struct MHD_Connection *conn, const char *url,
ms->state = MS_NORMAL;
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,free,free);
+ g_str_hash,g_str_equal,NULL,NULL);
ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
@@ -1477,7 +1469,7 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
ms->state = MS_NORMAL;
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,free,free);
+ g_str_hash,g_str_equal,NULL,NULL);
ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
post_iterator,ms->dict);
@@ -1670,7 +1662,7 @@ proxy_update_prov (void *cctx, struct MHD_Connection *conn, const char *url,
ms->state = MS_NORMAL;
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,free,free);
+ g_str_hash,g_str_equal,NULL,NULL);
ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
prov_iterator,ms->dict);
@@ -1876,7 +1868,7 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
ms->state = MS_NORMAL;
ms->url = (char *)url;
ms->dict = g_hash_table_new_full(
- g_str_hash,g_str_equal,free,free);
+ g_str_hash,g_str_equal,NULL,NULL);
ms->cleanup |= CLEANUP_DICT;
ms->post = MHD_create_post_processor(conn,4096,
prov_iterator,ms->dict);
diff --git a/setup.c b/setup.c
index 5821c74..4b3627e 100644
--- a/setup.c
+++ b/setup.c
@@ -307,7 +307,7 @@ convert_provider (int i, provider_t *out)
out->func_tbl = &bad_func_tbl;
}
- out->attrs = g_hash_table_new_full(g_str_hash,g_str_equal,free,free);
+ out->attrs = g_hash_table_new_full(g_str_hash,g_str_equal,NULL,NULL);
iter = json_object_iter(server);
while (iter) {
key = json_object_iter_key(iter);
@@ -385,7 +385,7 @@ add_provider (GHashTable *h)
else
prov->func_tbl = &bad_func_tbl;
- prov->attrs = g_hash_table_new_full(g_str_hash,g_str_equal,free,free);
+ prov->attrs = g_hash_table_new_full(g_str_hash,g_str_equal,NULL,NULL);
// FIXME: can't the above fail?
GHashTableIter iter;
@@ -453,7 +453,7 @@ parse_config_inner (void)
/* Everything looks OK. */
printf("%u replication servers defined\n",nservers-1);
- prov_hash = g_hash_table_new_full(g_str_hash,g_str_equal,free,free);
+ prov_hash = g_hash_table_new_full(g_str_hash,g_str_equal,NULL,NULL);
if (!prov_hash) {
error(0,0,"could not allocate provider hash");
goto err;
http://repo.or.cz/w/iwhd.git/commit/b98c47711c707bdd25f3d8d08f725321e25924f2
commit b98c47711c707bdd25f3d8d08f725321e25924f2
Author: Jim Meyering <meyering(a)redhat.com>
Date: Wed Dec 22 17:25:19 2010 +0100
use garbage collection
Add -lgc when linking.
* gc-wrap.h: New file, to map malloc, realloc, free,
etc. to GC'd equivalents.
* iwh.h: Include it.
* template.c: Include it.
* Makefile.am (iwhd_LDADD): Add -lgc and -lpthread.
* t/Makefile.am (parser_LDADD): Likewise.
* Makefile.am (iwhd_SOURCES): Add gc-wrap.h.
(TESTS): Move the simpler parser-test to precede all others.
* iwhd.spec.in (BuildRequires): Require gc-devel.
* qparser.y (free_value): Remove function.
* meta.cpp, replica.c: Remove all uses.
* query.h: Remove declaration.
* rest.c (main): Call GC_INIT.
* qparser.y (main) [PARSER_UNIT_TEST]: Likewise.
* mpipe.c: Include unistd.h here, ...
* mpipe.h: ...not here.
Don't include the following, either: fcntl.h, stdlib.h, string.h,
strings.h, sys/stat.h. They were not used, and got in the way
of gc-wrap's redefinitions.
diff --git a/Makefile.am b/Makefile.am
index ddafa71..fc767f0 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -40,6 +40,7 @@ iwhd_SOURCES = \
auto.c \
backend.c \
backend.h \
+ gc-wrap.h \
iwh.h \
meta.cpp meta.h \
@@ -94,6 +95,7 @@ rpm: dist iwhd.spec
iwhd_CPPFLAGS = $(HAIL_CFLAGS) -I$(top_srcdir)/lib
iwhd_LDADD = \
lib/libiwhd.a \
+ -lgc -lpthread \
-lmongoclient \
$(BOOST_SYSTEM_LIB) \
$(BOOST_THREAD_LIB)
diff --git a/gc-wrap.h b/gc-wrap.h
new file mode 100644
index 0000000..70c4b9d
--- /dev/null
+++ b/gc-wrap.h
@@ -0,0 +1,35 @@
+#include <string.h>
+#define GC_THREADS
+#include "gc.h"
+
+#ifndef __cplusplus
+# define malloc(n) GC_MALLOC(n)
+# define calloc(m,n) GC_MALLOC((m)*(n))
+# define free(p) GC_FREE(p)
+# define realloc(p,n) GC_REALLOC((p),(n))
+#endif
+
+static inline char *
+my_strdup (char const *s)
+{
+ size_t len = strlen (s);
+ void *t = GC_MALLOC (len + 1);
+ if (t == NULL)
+ return NULL;
+ return (char *) memcpy (t, s, len + 1);
+}
+# undef strdup
+# define strdup(s) my_strdup(s)
+
+static inline char *
+my_strndup (char const *s, size_t n)
+{
+ size_t len = strnlen (s, n);
+ char *t = (char *) GC_MALLOC (len + 1);
+ if (t == NULL)
+ return NULL;
+ t[len] = '\0';
+ return (char *) memcpy (t, s, len);
+}
+# undef strndup
+# define strndup(s, n) my_strndup(s, n)
diff --git a/iwh.h b/iwh.h
index 87b923f..e8cf710 100644
--- a/iwh.h
+++ b/iwh.h
@@ -69,3 +69,5 @@ GLOBAL(const char *, me, "here");
#define AUTO_MONGOD_PORT 27018
int auto_start (int dbport);
+
+#include "gc-wrap.h"
diff --git a/iwhd.spec.in b/iwhd.spec.in
index 31e0f23..4b26dce 100644
--- a/iwhd.spec.in
+++ b/iwhd.spec.in
@@ -17,6 +17,7 @@ BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
BuildRequires: boost-devel
BuildRequires: boost-filesystem
+BuildRequires: gc-devel
BuildRequires: glib2-devel
BuildRequires: hail-devel
BuildRequires: jansson-devel
diff --git a/meta.cpp b/meta.cpp
index 7f6f8bf..404d0d5 100644
--- a/meta.cpp
+++ b/meta.cpp
@@ -440,9 +440,6 @@ RepoQuery::RepoQuery (const char *bucket, const char *key, const char *qstr,
RepoQuery::~RepoQuery ()
{
cout << "in " << __func__ << endl;
- if (expr) {
- free_value(expr);
- }
delete curs;
}
diff --git a/mpipe.c b/mpipe.c
index a66cde3..d4ddb7c 100644
--- a/mpipe.c
+++ b/mpipe.c
@@ -15,6 +15,7 @@
#include <config.h>
#include <assert.h>
+#include <unistd.h>
#include "iwh.h"
#include "mpipe.h"
diff --git a/mpipe.h b/mpipe.h
index 093a8ab..62edbaa 100644
--- a/mpipe.h
+++ b/mpipe.h
@@ -16,17 +16,11 @@
#if !defined(_MPIPE_H)
#define _MPIPE_H
-#include <fcntl.h>
#include <poll.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdint.h>
#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-#include <strings.h>
-#include <unistd.h>
-#include <sys/stat.h>
/*
* This is an in-memory "pipe" construct with a twist: it lets you have
diff --git a/qparser.y b/qparser.y
index d70334a..c186890 100644
--- a/qparser.y
+++ b/qparser.y
@@ -387,41 +387,6 @@ print_value (const value_t *v)
_print_value(v,0);
}
-void
-free_value (value_t *v)
-{
- if (v == NULL || v == &invalid) {
- return;
- }
-
- //free((void *)v->resolved);
-
- switch (v->type) {
- case T_STRING:
- case T_OFIELD:
- case T_SFIELD:
- case T_ID:
- free(v->as_str);
- free(v);
- break;
- case T_LINK:
- free_value(v->as_tree.left);
- free(v->as_tree.right);
- free(v);
- break;
- case T_COMP:
- case T_AND:
- case T_OR:
- free_value(v->as_tree.right);
- /* Fall through. */
- case T_NOT:
- free_value(v->as_tree.left);
- /* Fall through. */
- default:
- free(v);
- }
-}
-
#include "qlexer.c"
value_t *
@@ -433,8 +398,6 @@ parse (const char *text)
YY_BUFFER_STATE buf = yy_scan_string (text, scanner);
value_t *result = NULL;
value_t *r = yyparse (scanner, &result) == 0 ? result : NULL;
- if (r == NULL)
- free_value (result);
yy_delete_buffer (buf, scanner);
yylex_destroy (scanner);
return r;
@@ -647,6 +610,7 @@ main (int argc, char **argv)
{
int fail = 0;
unsigned int i;
+ GC_INIT ();
for (i = 1; i < argc; ++i)
{
value_t *expr = parse (argv[i]);
@@ -654,7 +618,7 @@ main (int argc, char **argv)
{
printf ("could not parse '%s'\n", argv[i]);
fail = 1;
- goto next;
+ continue;
}
print_value (expr);
@@ -663,12 +627,9 @@ main (int argc, char **argv)
if (str)
{
printf ("s= %s\n", str);
- goto next;
+ continue;
}
printf ("d= %d\n", eval (expr, &unit_oget, &unit_sget));
-
- next:
- free_value (expr);
}
return fail;
diff --git a/query.h b/query.h
index 43ac83c..ef6729e 100644
--- a/query.h
+++ b/query.h
@@ -69,9 +69,9 @@ typedef struct {
#define CALL_GETTER(g,x) g->func(g->ctx,x)
/*
- * In the normal case a caller would invoke parse once, eval multiple times,
- * and free_value once. print_value is just for debugging/testing.
- * TBD: make parse reentrant (eval already is, free_value doesn't need to be.
+ * In the normal case a caller would invoke parse once and eval multiple times.
+ * print_value is just for debugging/testing.
+ * TBD: make parse reentrant (eval already is).
* Unfortunately, a quick scan of generated code and information on the web
* seems to indicate that even a "reentrant" bison parser only encapsulates
* user state and still relies quite a bit on internal globals. That might
@@ -79,7 +79,6 @@ typedef struct {
*/
int eval (const value_t *expr,
const getter_t *oget, const getter_t *sget);
-void free_value (value_t *);
void print_value (const value_t *);
value_t *parse (const char *text);
diff --git a/replica.c b/replica.c
index 2030009..b17148d 100644
--- a/replica.c
+++ b/replica.c
@@ -354,9 +354,6 @@ replicate (const char *url, size_t size, const char *policy, my_state *ms)
sem_post(&queue_sema);
}
- if (expr) {
- free_value(expr);
- }
free(url2);
}
diff --git a/rest.c b/rest.c
index 802c698..fafeb9e 100644
--- a/rest.c
+++ b/rest.c
@@ -2202,6 +2202,7 @@ main (int argc, char **argv)
set_program_name (argv[0]);
atexit (close_stdout);
+ GC_INIT ();
for (;;) switch (getopt_long(argc,argv,"ac:d:m:p:v",my_options,NULL)) {
case 'a':
diff --git a/t/Makefile.am b/t/Makefile.am
index 8787f42..e56860c 100644
--- a/t/Makefile.am
+++ b/t/Makefile.am
@@ -15,9 +15,9 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
TESTS = \
+ parser-test \
basic \
replication \
- parser-test \
auto
EXTRA_DIST = \
@@ -31,7 +31,7 @@ BUILT_SOURCES =
BUILT_SOURCES += parser.c
MAINTAINERCLEANFILES = $(BUILT_SOURCES)
parser_CPPFLAGS = -I$(top_srcdir) -I$(top_srcdir)/lib
-parser_LDADD = -L../lib -liwhd
+parser_LDADD = -L../lib -liwhd -lgc -lpthread
parser.c: Makefile.am
rm -f $@-t $@
diff --git a/template.c b/template.c
index d343d4c..512f49f 100644
--- a/template.c
+++ b/template.c
@@ -19,6 +19,7 @@
#include <stdlib.h>
#include <string.h>
#include "template.h"
+#include "gc-wrap.h"
static const char xml_root_header[] = "<api service=\"%s\" version=\"%s\">";
http://repo.or.cz/w/iwhd.git/commit/cee9beecc22c373814a7a8e56b2860eacd3a8acd
commit cee9beecc22c373814a7a8e56b2860eacd3a8acd
Author: Jim Meyering <meyering(a)redhat.com>
Date: Sat Dec 18 11:08:25 2010 +0100
maint: rename file-scoped global s/main_prov/g_main_prov/, and...
* setup.c (g_main_prov): Rename from main_prov, to avoid confusion
between this file-scoped global and the locals of the same name
in other compilation units.
(g_master_prov): Rename from master_prov, for consistency.
Though this one is truly global...
* setup.h (g_master_prov): Update here,...
* rest.c (g_master_prov): ...and here.
diff --git a/rest.c b/rest.c
index 24097b5..802c698 100644
--- a/rest.c
+++ b/rest.c
@@ -341,7 +341,7 @@ proxy_get_data (void *cctx, struct MHD_Connection *conn, const char *url,
}
provider_t *main_prov = get_main_provider();
ms->thunk.parent = ms;
- ms->thunk.prov = ms->from_master ? master_prov : main_prov;
+ ms->thunk.prov = ms->from_master ? g_master_prov : main_prov;
pthread_create(&ms->backend_th,NULL,
ms->thunk.prov->func_tbl->get_child_func,&ms->thunk);
/* TBD: check return value */
diff --git a/setup.c b/setup.c
index 907078d..5821c74 100644
--- a/setup.c
+++ b/setup.c
@@ -69,19 +69,19 @@ extern backend_func_tbl fs_func_tbl;
static json_t *config = NULL;
static GHashTable *prov_hash = NULL;
-static provider_t *main_prov = NULL;
-provider_t *master_prov = NULL;
+static provider_t *g_main_prov = NULL;
+provider_t *g_master_prov = NULL;
provider_t *
get_main_provider (void)
{
- return main_prov;
+ return g_main_prov;
}
void
set_main_provider (provider_t *prov)
{
- main_prov = prov;
+ g_main_prov = prov;
}
int
@@ -482,7 +482,7 @@ parse_config_inner (void)
}
g_hash_table_insert(prov_hash,(char *)new_key,new_prov);
if (!i) {
- main_prov = new_prov;
+ g_main_prov = new_prov;
primary = new_prov->name;
}
new_prov->func_tbl->init_func(new_prov);
@@ -511,10 +511,10 @@ parse_config (char *cfg_file)
* TBD: initialize this in a separate module-init function, passing
* in master_host and master_port instead of using globals.
*/
- if (!master_prov) {
- master_prov = malloc(sizeof(*master_prov));
- if (master_prov) {
- master_prov->func_tbl = &curl_func_tbl;
+ if (!g_master_prov) {
+ g_master_prov = malloc(sizeof(*g_master_prov));
+ if (g_master_prov) {
+ g_master_prov->func_tbl = &curl_func_tbl;
}
}
diff --git a/setup.h b/setup.h
index 06d73b7..76fef20 100644
--- a/setup.h
+++ b/setup.h
@@ -36,7 +36,7 @@ typedef struct _provider {
char *token;
} provider_t;
-provider_t *master_prov;
+provider_t *g_master_prov;
const char *parse_config (char *);
provider_t *get_provider (const char *name);
http://repo.or.cz/w/iwhd.git/commit/9e059c3626429c46b6990bc1892605fc7f3a5740
commit 9e059c3626429c46b6990bc1892605fc7f3a5740
Author: Jim Meyering <meyering(a)redhat.com>
Date: Sat Dec 18 10:47:58 2010 +0100
rename s/_set_primary/_primary/: more RESTful
diff --git a/rest.c b/rest.c
index f8e77d7..24097b5 100644
--- a/rest.c
+++ b/rest.c
@@ -1771,7 +1771,7 @@ proxy_set_primary (void *cctx, struct MHD_Connection *conn, const char *url,
char *name = NULL;
unsigned int rc = MHD_HTTP_BAD_REQUEST;
- /* URL is guaranteed to be of the form "/_providers/NAME/_set_primary"
+ /* URL is guaranteed to be of the form "/_providers/NAME/_primary"
Extract NAME: */
bool valid = memcmp (url, "/_providers/", strlen("/_providers/")) == 0;
if (!valid) {
@@ -2064,7 +2064,7 @@ parse_url (const char *url, my_state *ms)
eindex = URL_PROVIDER;
else if (eindex == URL_ATTR
&& !strcmp (parts[URL_BUCKET], "_providers")
- && !strcmp (parts[URL_ATTR], "_set_primary"))
+ && !strcmp (parts[URL_ATTR], "_primary"))
eindex = URL_PROVIDER_SET_PRIMARY;
DPRINTF("parse_url: %d: %s %s %s", eindex, parts[URL_BUCKET],
diff --git a/t/basic b/t/basic
index 3646333..fa5cdaf 100644
--- a/t/basic
+++ b/t/basic
@@ -379,7 +379,7 @@ curl http://localhost:$port/_primary > p || fail=1
grep _primary p && { warn_ add-provider/reserved-name not rejected; fail=1; }
# Move the "primary" attribute to a different provider.
-curl -X PUT $p1_url/_set_primary > p || fail=1
+curl -X PUT $p1_url/_primary > p || fail=1
test -s p && fail=1
new_primary=$(curl http://localhost:$port/_providers/_primary) || fail=1
test $new_primary = PROVIDER-1 || fail=1
@@ -388,7 +388,7 @@ test $new_primary = PROVIDER-1 || fail=1
# FIXME: if I don't restore, the following headless test makes iwhd segfault.
# Investigate that.
p1_url=http://localhost:$port/_providers/primary
-curl -X PUT $p1_url/_set_primary > p || fail=1
+curl -X PUT $p1_url/_primary > p || fail=1
http://repo.or.cz/w/iwhd.git/commit/44ee9894f3b04e3d8baa65b2579d5ca938af6e7e
commit 44ee9894f3b04e3d8baa65b2579d5ca938af6e7e
Author: Jim Meyering <meyering(a)redhat.com>
Date: Fri Dec 17 17:07:47 2010 +0100
new interface: curl -X PUT http://_providers/PROVIDER/_set_primary
* rest.c: Include <errno.h>.
(proxy_set_primary): New function.
(parse_url): Handle new type: URL_PROVIDER_SET_PRIMARY.
* setup.c (set_main_provider): New function.
* setup.h (set_main_provider): Declare it.
* t/basic: Exercise the new functionality.
diff --git a/rest.c b/rest.c
index 253c2c8..f8e77d7 100644
--- a/rest.c
+++ b/rest.c
@@ -28,6 +28,7 @@
#include <unistd.h>
#include <sys/stat.h>
#include <assert.h>
+#include <errno.h>
#include <microhttpd.h>
#include <hstor.h> /* only for ARRAY_SIZE at this point */
@@ -58,7 +59,7 @@
typedef enum {
URL_ROOT=0, URL_BUCKET, URL_OBJECT, URL_ATTR, URL_INVAL,
- URL_QUERY, URL_PROVLIST, URL_PROVIDER
+ URL_QUERY, URL_PROVLIST, URL_PROVIDER, URL_PROVIDER_SET_PRIMARY
} url_type;
typedef struct {
@@ -1755,6 +1756,62 @@ proxy_primary_prov (void *cctx, struct MHD_Connection *conn, const char *url,
}
static int
+proxy_set_primary (void *cctx, struct MHD_Connection *conn, const char *url,
+ const char *method, const char *version, const char *data,
+ size_t *data_size, void **rctx)
+{
+ (void)cctx;
+ (void)method;
+ (void)version;
+ (void)data;
+
+ DPRINTF("PROXY SET PRIMARY PROVIDER (%s)\n", url);
+
+ my_state *ms = *rctx;
+ char *name = NULL;
+ unsigned int rc = MHD_HTTP_BAD_REQUEST;
+
+ /* URL is guaranteed to be of the form "/_providers/NAME/_set_primary"
+ Extract NAME: */
+ bool valid = memcmp (url, "/_providers/", strlen("/_providers/")) == 0;
+ if (!valid) {
+ error (0, 0, "invalid request: %s", url);
+ goto bad_set;
+ }
+ const char *start = url + strlen("/_providers/");
+ const char *slash = strchr (start, '/');
+ if (slash == NULL) {
+ error (0, 0, "invalid request: %s", url);
+ goto bad_set;
+ }
+ name = strndup (start, slash - start);
+ if (name == NULL) {
+ error (0, errno, "failed to extract provider name: %s", url);
+ goto bad_set;
+ }
+
+ /* If it's not a provider name, you lose. */
+ provider_t *prov = find_provider (name);
+ if (prov) {
+ rc = MHD_HTTP_OK;
+ set_main_provider (prov);
+ }
+
+ bad_set:
+ free (name);
+
+ struct MHD_Response *resp;
+ resp = MHD_create_response_from_data(0, NULL, MHD_NO, MHD_NO);
+ if (!resp) {
+ return MHD_NO;
+ }
+ MHD_queue_response(conn,rc,resp);
+ MHD_destroy_response(resp);
+
+ return MHD_YES;
+}
+
+static int
proxy_delete_prov (void *cctx, struct MHD_Connection *conn, const char *url,
const char *method, const char *version, const char *data,
size_t *data_size, void **rctx)
@@ -1947,6 +2004,8 @@ static const rule my_rules[] = {
"POST", URL_PROVIDER, proxy_add_prov },
{ /* delete a provider */
"DELETE", URL_PROVIDER, proxy_delete_prov },
+ { /* set the primary provider */
+ "PUT", URL_PROVIDER_SET_PRIMARY, proxy_set_primary },
{ NULL, 0, NULL }
};
@@ -2003,6 +2062,10 @@ parse_url (const char *url, my_state *ms)
if (eindex == URL_OBJECT
&& !strcmp (parts[URL_BUCKET], "_providers"))
eindex = URL_PROVIDER;
+ else if (eindex == URL_ATTR
+ && !strcmp (parts[URL_BUCKET], "_providers")
+ && !strcmp (parts[URL_ATTR], "_set_primary"))
+ eindex = URL_PROVIDER_SET_PRIMARY;
DPRINTF("parse_url: %d: %s %s %s", eindex, parts[URL_BUCKET],
parts[URL_OBJECT], parts[URL_ATTR]);
diff --git a/setup.c b/setup.c
index 45ff653..907078d 100644
--- a/setup.c
+++ b/setup.c
@@ -78,6 +78,12 @@ get_main_provider (void)
return main_prov;
}
+void
+set_main_provider (provider_t *prov)
+{
+ main_prov = prov;
+}
+
int
validate_provider (GHashTable *h)
{
diff --git a/setup.h b/setup.h
index 482a9c7..06d73b7 100644
--- a/setup.h
+++ b/setup.h
@@ -53,5 +53,6 @@ provider_t *find_provider (const char *name);
int add_provider (GHashTable *h);
void delete_provider (provider_t *prov);
provider_t *get_main_provider (void);
+void set_main_provider (provider_t *prov);
#endif
diff --git a/t/basic b/t/basic
index 879647d..3646333 100644
--- a/t/basic
+++ b/t/basic
@@ -378,6 +378,18 @@ curl -dtype=http -dhost=localhost -dport=9091 $p_reserved_url || fail=1
curl http://localhost:$port/_primary > p || fail=1
grep _primary p && { warn_ add-provider/reserved-name not rejected; fail=1; }
+# Move the "primary" attribute to a different provider.
+curl -X PUT $p1_url/_set_primary > p || fail=1
+test -s p && fail=1
+new_primary=$(curl http://localhost:$port/_providers/_primary) || fail=1
+test $new_primary = PROVIDER-1 || fail=1
+
+# Restore the primary attribute to the original.
+# FIXME: if I don't restore, the following headless test makes iwhd segfault.
+# Investigate that.
+p1_url=http://localhost:$port/_providers/primary
+curl -X PUT $p1_url/_set_primary > p || fail=1
+
# Test "headless" operation (no access to metadata DB).
http://repo.or.cz/w/iwhd.git/commit/2e9196c2628eade109cc2bcffb695f3575c7c067
commit 2e9196c2628eade109cc2bcffb695f3575c7c067
Author: Jim Meyering <meyering(a)redhat.com>
Date: Fri Dec 17 15:37:14 2010 +0100
get primary provider name via http://host:$port/_providers/_primary
Get a URL of the form http://.../_providers/_primary
to obtain the name of the primary provider.
* rest.c (proxy_primary_prov): Implement it.
(proxy_add_prov): Prohibit addition of a provider
with the reserved name, "_primary".
* t/basic: Exercise the new functionality.
diff --git a/rest.c b/rest.c
index 49d9cce..253c2c8 100644
--- a/rest.c
+++ b/rest.c
@@ -1721,6 +1721,40 @@ url_to_provider_name (const char *url)
}
static int
+proxy_primary_prov (void *cctx, struct MHD_Connection *conn, const char *url,
+ const char *method, const char *version, const char *data,
+ size_t *data_size, void **rctx)
+{
+ (void)cctx;
+ (void)method;
+ (void)version;
+ (void)data;
+
+ DPRINTF("PROXY GET PRIMARY PROVIDER (%s)\n", url);
+
+ my_state *ms = *rctx;
+
+ // "/_providers/_primary" is the only one we accept for now.
+ bool valid = strcmp (url, "/_providers/_primary") == 0;
+ unsigned int rc = (valid ? MHD_HTTP_OK : MHD_HTTP_BAD_REQUEST);
+ if (!valid)
+ error (0, 0, "invalid request: %s", url);
+
+ const char *name = get_main_provider()->name;
+ struct MHD_Response *resp;
+ resp = MHD_create_response_from_data(valid ? strlen (name) : 0,
+ valid ? (void *) name : NULL,
+ MHD_NO, MHD_NO);
+ if (!resp) {
+ return MHD_NO;
+ }
+ MHD_queue_response(conn,rc,resp);
+ MHD_destroy_response(resp);
+
+ return MHD_YES;
+}
+
+static int
proxy_delete_prov (void *cctx, struct MHD_Connection *conn, const char *url,
const char *method, const char *version, const char *data,
size_t *data_size, void **rctx)
@@ -1807,6 +1841,16 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
name);
goto add_fail;
}
+
+ // another reserved word: provider name
+ // FIXME: don't hard-code it here
+ if (strcmp (prov_name, "_primary") == 0) {
+ fprintf(stderr,
+ "add_provider: %s is a reserved name\n",
+ prov_name);
+ goto add_fail;
+ }
+
// FIXME: unchecked strdup
g_hash_table_insert(ms->dict,strdup("name"),prov_name);
@@ -1897,6 +1941,8 @@ static const rule my_rules[] = {
"GET", URL_PROVLIST, proxy_list_provs },
{ /* update a provider */
"POST", URL_PROVLIST, proxy_update_prov },
+ { /* get the primary provider */
+ "GET", URL_PROVIDER, proxy_primary_prov },
{ /* create a provider */
"POST", URL_PROVIDER, proxy_add_prov },
{ /* delete a provider */
diff --git a/t/basic b/t/basic
index f8f8455..879647d 100644
--- a/t/basic
+++ b/t/basic
@@ -363,6 +363,22 @@ curl -f -X DELETE http://localhost:$port/_providers/no-such 2> p
test $? = 22 || fail=1
grep ' 404$' p || fail=1
+# Get the name of the current primary provider.
+curl http://localhost:$port/_providers/_primary > p || fail=1
+test "$(cat p)" = primary || fail=1
+
+# Trying to GET with anything other than "_primary" returns the empty string.
+curl http://localhost:$port/_providers/anything > p 2>/dev/null || fail=1
+test -s p && fail=1
+
+# Try to add a provider with the reserved name. It must fail.
+p_reserved_url=http://localhost:$port/_providers/_primary
+curl -dtype=http -dhost=localhost -dport=9091 $p_reserved_url || fail=1
+# Ensure it was not added.
+curl http://localhost:$port/_primary > p || fail=1
+grep _primary p && { warn_ add-provider/reserved-name not rejected; fail=1; }
+
+
# Test "headless" operation (no access to metadata DB).
kill -9 $mongo_pid
http://repo.or.cz/w/iwhd.git/commit/a04b76ca821f27765659fa52a96be620fb8fc431
commit a04b76ca821f27765659fa52a96be620fb8fc431
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Dec 16 18:27:10 2010 +0100
tests: clean up provider-deletion test
diff --git a/t/basic b/t/basic
index eaaf6e9..f8f8455 100644
--- a/t/basic
+++ b/t/basic
@@ -358,13 +358,11 @@ curl http://localhost:$port/_providers > p || fail=1
grep PROVIDER-2 p && { warn_ $ME_: provider deletion failed; fail=1; }
# Delete a non-existent provider.
-curl -f -X DELETE http://localhost:$port/_providers/no-such
+curl -f -X DELETE http://localhost:$port/_providers/no-such 2> p
+# ensure it fails; expect exit-22 and http: 404
test $? = 22 || fail=1
-# FIXME ensure it fails
-cat p
+grep ' 404$' p || fail=1
-curl http://localhost:$port/_providers > p || fail=1
-cat p
# Test "headless" operation (no access to metadata DB).
kill -9 $mongo_pid
http://repo.or.cz/w/iwhd.git/commit/8187979f677652a5c617dd435cb6644ad360a9e5
commit 8187979f677652a5c617dd435cb6644ad360a9e5
Author: Jim Meyering <meyering(a)redhat.com>
Date: Wed Dec 15 12:45:28 2010 +0100
use new function, get_main_provider, rather than global "main_prov"
* setup.c (get_main_provider): New function.
(main_prov): Declare static.
* setup.h (main_prov): Remove global decl.
(get_main_provider): Declare.
* rest.c, replica.c: Update all uses of "main_prov".
diff --git a/replica.c b/replica.c
index f0c130e..2030009 100644
--- a/replica.c
+++ b/replica.c
@@ -73,7 +73,7 @@ proxy_repl_prod (void *ctx)
void *result;
thunk.parent = item->ms;
- thunk.prov = main_prov;
+ thunk.prov = get_main_provider();
result = thunk.prov->func_tbl->get_child_func(&thunk);
return result;
diff --git a/rest.c b/rest.c
index 1dff3a9..49d9cce 100644
--- a/rest.c
+++ b/rest.c
@@ -338,6 +338,7 @@ proxy_get_data (void *cctx, struct MHD_Connection *conn, const char *url,
if (!pp) {
return MHD_NO;
}
+ provider_t *main_prov = get_main_provider();
ms->thunk.parent = ms;
ms->thunk.prov = ms->from_master ? master_prov : main_prov;
pthread_create(&ms->backend_th,NULL,
@@ -468,6 +469,7 @@ proxy_put_data (void *cctx, struct MHD_Connection *conn, const char *url,
if (!pp) {
return MHD_NO;
}
+ provider_t *main_prov = get_main_provider();
pp->prov = main_prov;
ms->be_flags = BACKEND_GET_SIZE;
pthread_create(&ms->backend_th,NULL,
@@ -913,6 +915,7 @@ proxy_delete (void *cctx, struct MHD_Connection *conn, const char *url,
DPRINTF("PROXY DELETE %s\n",url);
+ provider_t *main_prov = get_main_provider();
ms->thunk.parent = ms;
ms->thunk.prov = main_prov;
rc = ms->thunk.prov->func_tbl->delete_func(main_prov,
@@ -1151,6 +1154,7 @@ create_bucket (char *name, my_state *ms)
return MHD_HTTP_BAD_REQUEST;
}
+ provider_t *main_prov = get_main_provider();
rc = main_prov->func_tbl->bcreate_func(main_prov,name);
if (rc == MHD_HTTP_OK) {
if (meta_set_value(name,"_default", "_policy","0") != 0) {
@@ -1739,8 +1743,8 @@ proxy_delete_prov (void *cctx, struct MHD_Connection *conn, const char *url,
char *prov_name = url_to_provider_name (url);
provider_t *prov = find_provider (prov_name);
- // don't allow removal of current main_prov.
- if (prov == main_prov)
+ // don't allow removal of current main provider.
+ if (prov == get_main_provider())
prov = NULL;
int rc = prov ? MHD_HTTP_OK : MHD_HTTP_NOT_FOUND;
@@ -2165,6 +2169,7 @@ args_done:
sem_init(&the_sem,0,0);
if (verbose) {
+ provider_t *main_prov = get_main_provider();
printf("primary store type is %s\n",main_prov->type);
if (master_host) {
printf("operating as slave to %s:%u\n",
diff --git a/setup.c b/setup.c
index bddc446..45ff653 100644
--- a/setup.c
+++ b/setup.c
@@ -69,9 +69,15 @@ extern backend_func_tbl fs_func_tbl;
static json_t *config = NULL;
static GHashTable *prov_hash = NULL;
-provider_t *main_prov = NULL;
+static provider_t *main_prov = NULL;
provider_t *master_prov = NULL;
+provider_t *
+get_main_provider (void)
+{
+ return main_prov;
+}
+
int
validate_provider (GHashTable *h)
{
diff --git a/setup.h b/setup.h
index bf44732..482a9c7 100644
--- a/setup.h
+++ b/setup.h
@@ -36,7 +36,6 @@ typedef struct _provider {
char *token;
} provider_t;
-provider_t *main_prov;
provider_t *master_prov;
const char *parse_config (char *);
@@ -53,5 +52,6 @@ int validate_provider (GHashTable *h);
provider_t *find_provider (const char *name);
int add_provider (GHashTable *h);
void delete_provider (provider_t *prov);
+provider_t *get_main_provider (void);
#endif
http://repo.or.cz/w/iwhd.git/commit/d209a1391ecf2737693f061b8401513599806515
commit d209a1391ecf2737693f061b8401513599806515
Author: Jim Meyering <meyering(a)redhat.com>
Date: Sat Dec 11 15:58:18 2010 +0100
reject an attempt to add a provider with "name" parameter
The "name" is specified as part of the URL, not via a parameter.
* rest.c (proxy_add_prov): Handle undesired "name" parameter properly.
* t/basic: Exercise the above.
diff --git a/rest.c b/rest.c
index 76cf95e..1dff3a9 100644
--- a/rest.c
+++ b/rest.c
@@ -1801,8 +1801,7 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
fprintf(stderr,
"add_provider: do not specify name: name=%s\n",
name);
- return MHD_NO;
- // FIXME: be careful that this does not leak "ms"
+ goto add_fail;
}
// FIXME: unchecked strdup
g_hash_table_insert(ms->dict,strdup("name"),prov_name);
@@ -1817,6 +1816,8 @@ proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
else {
DPRINTF("invalid provider\n");
}
+
+ add_fail:
resp = MHD_create_response_from_data(0,NULL,MHD_NO,MHD_NO);
if (!resp) {
fprintf(stderr,"MHD_crfd failedn");
diff --git a/t/basic b/t/basic
index 09a0311..eaaf6e9 100644
--- a/t/basic
+++ b/t/basic
@@ -344,6 +344,13 @@ curl -dtype=http -dhost=localhost -dport=9091 $p2_url || fail=1
curl http://localhost:$port/_providers > p || fail=1
grep PROVIDER-2 p || fail=1
+# Add provider using a "name" parameter (not permitted):
+p3_url=http://localhost:$port/_providers/PROVIDER-3
+curl -dtype=http -dname=X -dhost=localhost -dport=9091 $p3_url || fail=1
+# Ensure it was not added.
+curl http://localhost:$port/_providers > p || fail=1
+grep PROVIDER-3 p && { warn_ add-provider-w/name-param not rejected; fail=1; }
+
# Delete a provider.
curl -f -X DELETE $p2_url 2> p || fail=1
# Ensure it was deleted.
http://repo.or.cz/w/iwhd.git/commit/65df8ac6ad67633bbf42171dd40bfc132368459f
commit 65df8ac6ad67633bbf42171dd40bfc132368459f
Author: Jim Meyering <meyering(a)redhat.com>
Date: Fri Dec 10 21:11:27 2010 +0100
add provider ref-counting; FIXME: partial impl. (i.e., no incr)
delete_provider: New function.
diff --git a/rest.c b/rest.c
index da4dedb..76cf95e 100644
--- a/rest.c
+++ b/rest.c
@@ -1737,7 +1737,6 @@ proxy_delete_prov (void *cctx, struct MHD_Connection *conn, const char *url,
}
char *prov_name = url_to_provider_name (url);
- DPRINTF("PROXY DELETE PROVIDER prov=%sn", prov_name);
provider_t *prov = find_provider (prov_name);
// don't allow removal of current main_prov.
@@ -1745,10 +1744,17 @@ proxy_delete_prov (void *cctx, struct MHD_Connection *conn, const char *url,
prov = NULL;
int rc = prov ? MHD_HTTP_OK : MHD_HTTP_NOT_FOUND;
- if (prov)
- prov->deleted = 1;
- error (0, 0, "DELETE PROV: rc=%d", rc);
+ DPRINTF("PROXY DELETE PROVIDER prov=%s rc=%dn", prov_name, rc);
+
+ if (prov) {
+ /* Delete for real if no one is using it.
+ Otherwise, just mark it as deleted. */
+ if (g_atomic_int_get (&prov->refcnt) == 0)
+ delete_provider (prov);
+ else
+ prov->deleted = 1;
+ }
free (prov_name);
MHD_queue_response(conn,rc,resp);
diff --git a/setup.c b/setup.c
index cb4a85b..bddc446 100644
--- a/setup.c
+++ b/setup.c
@@ -319,6 +319,7 @@ convert_provider (int i, provider_t *out)
out->token = NULL;
out->deleted = 0;
+ out->refcnt = 0;
return 1;
}
@@ -582,6 +583,19 @@ find_provider (const char *name)
return p;
}
+void
+delete_provider (provider_t *prov)
+{
+ // Remove from prov_hash, then free
+
+ g_hash_table_destroy(prov->attrs);
+ // FIXME: free other members, too?
+
+ // FIXME: unchecked strdup
+ char *name = strdup (prov->name);
+ g_hash_table_remove(prov_hash, prov->name);
+}
+
const char *
get_provider_value (const provider_t *prov, const char *fname)
{
diff --git a/setup.h b/setup.h
index c3bda74..bf44732 100644
--- a/setup.h
+++ b/setup.h
@@ -26,7 +26,8 @@ typedef struct _provider {
const char *type;
const char *host;
int port;
- int deleted;
+ int deleted;
+ gint refcnt;
const char *username;
const char *password;
const char *path;
@@ -51,5 +52,6 @@ const char *auto_config (void);
int validate_provider (GHashTable *h);
provider_t *find_provider (const char *name);
int add_provider (GHashTable *h);
+void delete_provider (provider_t *prov);
#endif
http://repo.or.cz/w/iwhd.git/commit/e2ecc0a75108543bcde4e3509202a5ac511ef1fe
commit e2ecc0a75108543bcde4e3509202a5ac511ef1fe
Author: Jim Meyering <meyering(a)redhat.com>
Date: Thu Dec 23 18:17:23 2010 +0100
don't use xstrndup via base_name
* rest.c (url_to_provider_name): Rewrite not to use base_name, since
that function uses xstrndup, which exits on OOM.
diff --git a/rest.c b/rest.c
index 5c5fce8..da4dedb 100644
--- a/rest.c
+++ b/rest.c
@@ -1704,12 +1704,15 @@ proxy_update_prov (void *cctx, struct MHD_Connection *conn, const char *url,
static char *
url_to_provider_name (const char *url)
{
- char *prov_name = base_name (url);
- if (prov_name)
- strip_trailing_slashes (prov_name);
+ char *p = strdup (url);
+ if (p == NULL)
+ return NULL;
+
/* Ensure we handle trailing slashes (i.e., remove them). */
- assert (prov_name == NULL
- || (*prov_name && prov_name[strlen(prov_name)-1] != '/'));
+ strip_trailing_slashes (p);
+
+ char *prov_name = strdup (last_component (p));
+ free (p);
return prov_name;
}
http://repo.or.cz/w/iwhd.git/commit/530db87554e7c801813dce2f4cbb0108334c21de
commit 530db87554e7c801813dce2f4cbb0108334c21de
Author: Jim Meyering <meyering(a)redhat.com>
Date: Mon Nov 15 17:50:53 2010 +0100
allow dynamic addition/deletion of providers
* setup.h (struct _provider) [deleted]: New member.
* setup.c (validate_provider): New function.
(json_validate_server): Renamed from validate_server.
(convert_provider): Initialize new "deleted" member.
(add_provider, find_provider): New functions.
Declare new functions.
* rest.c (prov_list_generator): Don't list a provider that
is marked as deleted.
(url_to_provider_name): New function.
(proxy_delete_prov, proxy_add_prov): New functions.
(my_rules): Add corresponding entries in this table.
* t/basic: Add minimal tests of new functionality.
diff --git a/rest.c b/rest.c
index 29bd936..5c5fce8 100644
--- a/rest.c
+++ b/rest.c
@@ -34,6 +34,7 @@
#include <curl/curl.h>
#include <glib.h>
+#include "dirname.h"
#include "iwh.h"
#include "closeout.h"
#include "progname.h"
@@ -57,7 +58,7 @@
typedef enum {
URL_ROOT=0, URL_BUCKET, URL_OBJECT, URL_ATTR, URL_INVAL,
- URL_QUERY, URL_PROVLIST
+ URL_QUERY, URL_PROVLIST, URL_PROVIDER
} url_type;
typedef struct {
@@ -931,6 +932,7 @@ proxy_delete (void *cctx, struct MHD_Connection *conn, const char *url,
free_ms(ms);
return MHD_NO;
}
+ error (0, 0, "DELETE BUCKET: rc=%d", rc);
MHD_queue_response(conn,rc,resp);
MHD_destroy_response(resp);
@@ -1464,7 +1466,7 @@ proxy_object_post (void *cctx, struct MHD_Connection *conn, const char *url,
(void)method;
(void)version;
- DPRINTF("PROXY POST (%s, %zu)n",url,*data_size);
+ DPRINTF("PROXY POST obj (%s, %zu)n",url,*data_size);
if (ms->state == MS_NEW) {
ms->state = MS_NORMAL;
@@ -1569,6 +1571,8 @@ prov_list_generator (void *ctx, uint64_t pos, char *buf, size_t max)
}
if (g_hash_table_iter_next(&ms->prov_iter,&key,(gpointer *)&prov)) {
+ if (prov->deleted)
+ return 0;
len = tmpl_prov_entry(ms->gen_ctx,prov->name,prov->type,
prov->host, prov->port, prov->username, prov->password);
if (!len) {
@@ -1697,6 +1701,126 @@ proxy_update_prov (void *cctx, struct MHD_Connection *conn, const char *url,
return MHD_YES;
}
+static char *
+url_to_provider_name (const char *url)
+{
+ char *prov_name = base_name (url);
+ if (prov_name)
+ strip_trailing_slashes (prov_name);
+ /* Ensure we handle trailing slashes (i.e., remove them). */
+ assert (prov_name == NULL
+ || (*prov_name && prov_name[strlen(prov_name)-1] != '/'));
+ return prov_name;
+}
+
+static int
+proxy_delete_prov (void *cctx, struct MHD_Connection *conn, const char *url,
+ const char *method, const char *version, const char *data,
+ size_t *data_size, void **rctx)
+{
+ DPRINTF("PROXY DELETE PROVIDER %sn",url);
+ (void)cctx;
+ (void)method;
+ (void)version;
+ (void)data;
+ (void)data_size;
+
+ my_state *ms = *rctx;
+ struct MHD_Response *resp
+ = MHD_create_response_from_data(0,NULL,MHD_NO,MHD_NO);
+ if (!resp) {
+ free_ms(ms);
+ return MHD_NO;
+ }
+
+ char *prov_name = url_to_provider_name (url);
+ DPRINTF("PROXY DELETE PROVIDER prov=%sn", prov_name);
+ provider_t *prov = find_provider (prov_name);
+
+ // don't allow removal of current main_prov.
+ if (prov == main_prov)
+ prov = NULL;
+
+ int rc = prov ? MHD_HTTP_OK : MHD_HTTP_NOT_FOUND;
+ if (prov)
+ prov->deleted = 1;
+
+ error (0, 0, "DELETE PROV: rc=%d", rc);
+
+ free (prov_name);
+ MHD_queue_response(conn,rc,resp);
+ MHD_destroy_response(resp);
+
+ free_ms(ms);
+ return MHD_YES;
+}
+
+static int
+proxy_add_prov (void *cctx, struct MHD_Connection *conn, const char *url,
+ const char *method, const char *version, const char *data,
+ size_t *data_size, void **rctx)
+{
+ DPRINTF("PROXY ADD PROVIDER %sn",url);
+ struct MHD_Response *resp;
+ my_state *ms = *rctx;
+
+ (void)cctx;
+ (void)method;
+ (void)version;
+
+ if (ms->state == MS_NEW) {
+ ms->state = MS_NORMAL;
+ ms->url = (char *)url;
+ ms->dict = g_hash_table_new_full(
+ g_str_hash,g_str_equal,free,free);
+ ms->cleanup |= CLEANUP_DICT;
+ ms->post = MHD_create_post_processor(conn,4096,
+ prov_iterator,ms->dict);
+ ms->cleanup |= CLEANUP_POST;
+ }
+ else if (*data_size) {
+ MHD_post_process(ms->post,data,*data_size);
+ *data_size = 0;
+ }
+ else {
+ int rc = MHD_HTTP_BAD_REQUEST;
+ char *prov_name = url_to_provider_name (url);
+ /* We're about to insert "name -> $prov_name".
+ Ensure there is no "name" key already there. */
+ const char *name = g_hash_table_lookup (ms->dict, "name");
+ if (name) {
+ fprintf(stderr,
+ "add_provider: do not specify name: name=%sn",
+ name);
+ return MHD_NO;
+ // FIXME: be careful that this does not leak "ms"
+ }
+ // FIXME: unchecked strdup
+ g_hash_table_insert(ms->dict,strdup("name"),prov_name);
+
+ if (validate_provider (ms->dict)) {
+ if (!add_provider (ms->dict)) {
+ DPRINTF("add provider failedn");
+ } else {
+ rc = MHD_HTTP_OK;
+ }
+ }
+ else {
+ DPRINTF("invalid providern");
+ }
+ resp = MHD_create_response_from_data(0,NULL,MHD_NO,MHD_NO);
+ if (!resp) {
+ fprintf(stderr,"MHD_crfd failedn");
+ return MHD_NO;
+ }
+ MHD_queue_response(conn,rc,resp);
+ MHD_destroy_response(resp);
+ free_ms(ms);
+ }
+
+ return MHD_YES;
+}
+
static int
proxy_create_bucket (void *cctx, struct MHD_Connection *conn, const char *url,
const char *method, const char *version, const char *data,
@@ -1759,6 +1883,10 @@ static const rule my_rules[] = {
"GET", URL_PROVLIST, proxy_list_provs },
{ /* update a provider */
"POST", URL_PROVLIST, proxy_update_prov },
+ { /* create a provider */
+ "POST", URL_PROVIDER, proxy_add_prov },
+ { /* delete a provider */
+ "DELETE", URL_PROVIDER, proxy_delete_prov },
{ NULL, 0, NULL }
};
@@ -1812,6 +1940,12 @@ parse_url (const char *url, my_state *ms)
}
}
+ if (eindex == URL_OBJECT
+ && !strcmp (parts[URL_BUCKET], "_providers"))
+ eindex = URL_PROVIDER;
+
+ DPRINTF("parse_url: %d: %s %s %s", eindex, parts[URL_BUCKET],
+ parts[URL_OBJECT], parts[URL_ATTR]);
return eindex;
}
diff --git a/setup.c b/setup.c
index 91d9f64..cb4a85b 100644
--- a/setup.c
+++ b/setup.c
@@ -27,6 +27,7 @@
#include <string.h>
#include <strings.h>
#include <unistd.h>
+#include <assert.h>
#include <jansson.h>
@@ -71,8 +72,74 @@ static GHashTable *prov_hash = NULL;
provider_t *main_prov = NULL;
provider_t *master_prov = NULL;
+int
+validate_provider (GHashTable *h)
+{
+ const char *name = g_hash_table_lookup (h, "name");
+ assert (name);
+ const char *type = g_hash_table_lookup (h, "type");
+ if (type == NULL) {
+ error (0, 0, "provider %s has no type", name);
+ return 0;
+ }
+
+ unsigned int needs;
+ if (!strcasecmp(type,"s3") || !strcasecmp(type,"cf")) {
+ needs = NEED_SERVER | NEED_CREDS;
+ } else if (!strcasecmp(type,"http")) {
+ needs = NEED_SERVER;
+ } else if (!strcasecmp(type,"fs")) {
+ needs = NEED_PATH;
+ } else {
+ error (0, 0, "provider %s has invalid type: %s", name, type);
+ return 0;
+ }
+
+ int ok = 1;
+ if (needs & NEED_SERVER) {
+ const char *host = g_hash_table_lookup (h, "host");
+ if (!host) {
+ error (0, 0, "%s: %s-provider requires a host", name, type);
+ ok = 0;
+ }
+ const char *port = g_hash_table_lookup (h, "port");
+ if (!port) {
+ error (0, 0, "%s: %s-provider requires a port", name, type);
+ ok = 0;
+ }
+ // ensure port is a positive integer with 5 or fewer digits
+ if (5 < strlen (port) || strcspn (port, "0123456789")) {
+ error (0, 0, "%s: %s-provider: invalid port: %s", name, type, port);
+ ok = 0;
+ }
+ }
+
+ if (needs & NEED_CREDS) {
+ const char *key = g_hash_table_lookup (h, "key");
+ if (!key) {
+ error (0, 0, "%s: %s-provider requires a key", name, type);
+ ok = 0;
+ }
+ const char *secret = g_hash_table_lookup (h, "secret");
+ if (!secret) {
+ error (0, 0, "%s: %s-provider requires a secret", name, type);
+ ok = 0;
+ }
+ }
+
+ if (needs & NEED_PATH) {
+ const char *path = g_hash_table_lookup (h, "path");
+ if (!path) {
+ error (0, 0, "%s: %s-provider requires a path", name, type);
+ ok = 0;
+ }
+ }
+
+ return ok;
+}
+
static int
-validate_server (unsigned int i)
+json_validate_server (unsigned int i)
{
json_t *server;
json_t *elem;
@@ -209,6 +276,7 @@ convert_provider (int i, provider_t *out)
out->username = dup_json_string(server,"key");
out->password = dup_json_string(server,"secret");
out->path = dup_json_string(server,"path");
+ /* FIXME: detect failed "dup_*" calls. */
/* TBD: do this a cleaner way. */
if (!strcasecmp(out->type,"s3")) {
@@ -231,6 +299,7 @@ convert_provider (int i, provider_t *out)
iter = json_object_iter(server);
while (iter) {
key = json_object_iter_key(iter);
+ error(0,0,"convert-provider: ITER key: %s",key);
if (!is_reserved_attr(key)) {
value = json_string_value(json_object_iter_value(iter));
if (value) {
@@ -249,10 +318,100 @@ convert_provider (int i, provider_t *out)
}
out->token = NULL;
+ out->deleted = 0;
return 1;
}
+int
+add_provider (GHashTable *h)
+{
+ char *name = g_hash_table_lookup (h, "name");
+ assert (name);
+
+ provider_t *prov = calloc (1, sizeof *prov);
+ if (prov == NULL)
+ return 0;
+
+ prov->name = strdup (name);
+ if (prov->name == NULL)
+ goto fail;
+
+ prov->type = g_hash_table_lookup(h,"type");
+ if (prov->type == NULL)
+ goto fail;
+ prov->type = strdup (prov->type);
+ if (prov->type == NULL)
+ goto fail;
+
+ prov->host = g_hash_table_lookup(h,"host");
+ prov->port = atoi(g_hash_table_lookup(h,"port"));
+ /* TBD: change key/secret field names to username/password */
+ prov->username = g_hash_table_lookup(h,"key");
+ prov->password = g_hash_table_lookup(h,"secret");
+ prov->path = g_hash_table_lookup(h,"path");
+
+ if (prov->host)
+ prov->host = strdup (prov->host);
+ if (prov->username)
+ prov->username = strdup (prov->username);
+ if (prov->password)
+ prov->password = strdup (prov->password);
+ if (prov->path)
+ prov->path = strdup (prov->path);
+
+ /* TBD: do this a cleaner way. */
+ if (!strcasecmp(prov->type,"s3"))
+ prov->func_tbl = &s3_func_tbl;
+ else if (!strcasecmp(prov->type,"http"))
+ prov->func_tbl = &curl_func_tbl;
+ else if (!strcasecmp(prov->type,"cf"))
+ prov->func_tbl = &cf_func_tbl;
+ else if (!strcasecmp(prov->type,"fs"))
+ prov->func_tbl = &fs_func_tbl;
+ else
+ prov->func_tbl = &bad_func_tbl;
+
+ prov->attrs = g_hash_table_new_full(g_str_hash,g_str_equal,free,free);
+ // FIXME: can't the above fail?
+
+ GHashTableIter iter;
+ g_hash_table_iter_init (&iter, h);
+ while (1) {
+ gpointer key;
+ gpointer val;
+ if (!g_hash_table_iter_next (&iter, &key, &val))
+ break;
+
+ if (!is_reserved_attr(key)) {
+ if (val) {
+ error(0,0,"no value for %s", (char *)key);
+ continue;
+ }
+ DPRINTF("%p.%s = %sn",prov, (char *)key, (char *)val);
+ g_hash_table_insert(prov->attrs, strdup(key), strdup(val));
+ // FIXME: check strdup for failure
+ }
+ }
+
+ /* Note that we must strdup "name", since here it's a key, but above it's a
+ value. Not using strdup here would lead to a use-after-free bug. */
+ // FIXME: check strdup for failure
+ g_hash_table_insert(prov_hash,strdup(name),prov);
+
+ return 1;
+
+ fail:
+ free ((char *) prov->name);
+ free ((char *) prov->type);
+ free ((char *) prov->host);
+ free ((char *) prov->username);
+ free ((char *) prov->password);
+ free ((char *) prov->path);
+ free (prov);
+ return 0;
+}
+
static const char *
parse_config_inner (void)
{
@@ -274,7 +433,7 @@ parse_config_inner (void)
}
for (i = 0; i < nservers; ++i) {
- if (!validate_server(i)) {
+ if (!json_validate_server(i)) {
goto err;
}
}
@@ -402,7 +561,7 @@ auto_config(void)
return primary;
}
-const provider_t *
+provider_t *
get_provider (const char *name)
{
if (!prov_hash || !name || (*name == '\0')) {
@@ -411,6 +570,18 @@ get_provider (const char *name)
return g_hash_table_lookup(prov_hash,name);
}
+provider_t *
+find_provider (const char *name)
+{
+ if (!prov_hash || !name || (*name == '\0')) {
+ return NULL;
+ }
+
+ provider_t *p = g_hash_table_lookup(prov_hash, name);
+
+ return p;
+}
+
const char *
get_provider_value (const provider_t *prov, const char *fname)
{
diff --git a/setup.h b/setup.h
index 9a0c533..c3bda74 100644
--- a/setup.h
+++ b/setup.h
@@ -26,6 +26,7 @@ typedef struct _provider {
const char *type;
const char *host;
int port;
+ int deleted;
const char *username;
const char *password;
const char *path;
@@ -38,14 +39,17 @@ provider_t *main_prov;
provider_t *master_prov;
const char *parse_config (char *);
-const provider_t *get_provider (const char *name);
-void update_provider (const char *provname,
+provider_t *get_provider (const char *name);
+void update_provider (const char *provname,
const char *username,
const char *password);
const char *get_provider_value (const provider_t *prov,
const char *fname);
void init_prov_iter (GHashTableIter *iter);
-const char *auto_config (void);
+const char *auto_config (void);
+int validate_provider (GHashTable *h);
+provider_t *find_provider (const char *name);
+int add_provider (GHashTable *h);
#endif
diff --git a/t/basic b/t/basic
index b3eba10..09a0311 100644
--- a/t/basic
+++ b/t/basic
@@ -324,14 +324,45 @@ echo bye | curl -f -T - $bucket/trunc_test || fail=1
cat FS/b1/trunc_test
test "$(cat FS/b1/trunc_test)" = bye || fail=1
+# TBD: add attribute-delete tests when that functionality is implemented
+
+# TBD: add white-box tests for attributes in mongo
+
+# Add a provider:
+p1_url=http://localhost:$port/_providers/PROVIDER-1
+curl -d type=s3 -dhost=localhost -dport=80 -dkey=u -dsecret=p \
+  $p1_url || fail=1
+
+# Ensure it was added
+curl http://localhost:$port/_providers > p || fail=1
+grep PROVIDER-1 p || fail=1
+
+# Add another provider:
+p2_url=http://localhost:$port/_providers/PROVIDER-2
+curl -dtype=http -dhost=localhost -dport=9091 $p2_url || fail=1
+# Ensure it was added.
+curl http://localhost:$port/_providers > p || fail=1
+grep PROVIDER-2 p || fail=1
+
+# Delete a provider.
+curl -f -X DELETE $p2_url 2> p || fail=1
+# Ensure it was deleted.
+curl http://localhost:$port/_providers > p || fail=1
+grep PROVIDER-2 p && { warn_ $ME_: provider deletion failed; fail=1; }
+
+# Delete a non-existent provider.
+curl -f -X DELETE http://localhost:$port/_providers/no-such
+test $? = 22 || fail=1
+# FIXME ensure it fails
+cat p
+
+curl http://localhost:$port/_providers > p || fail=1
+cat p
+
# Test "headless" operation (no access to metadata DB).
kill -9 $mongo_pid
cleanup_() { kill $iwhd_pid; }
curl $bucket/attr_put > f3copy
test "$(cat f3copy)" = "nothing" || fail=1
-# TBD: add attribute-delete tests when that functionality is implemented
-
-# TBD: add white-box tests for attributes in mongo
-
Exit $fail
http://repo.or.cz/w/iwhd.git/commit/5685b973f295453871778372a6d1bad9f1b75750
commit 5685b973f295453871778372a6d1bad9f1b75750
Author: Jim Meyering <meyering(a)redhat.com>
Date: Mon Jan 10 18:51:20 2011 +0100
fix an unchecked strdup
diff --git a/rest.c b/rest.c
index cb4e238..29bd936 100644
--- a/rest.c
+++ b/rest.c
@@ -1103,8 +1103,14 @@ post_iterator (void *ctx, enum MHD_ValueKind kind, const char *key,
new_val[size] = '\0';
}
- g_hash_table_insert(ctx,strdup(key),new_val);
- /* TBD: check return value for strdups (none avail for insert) */
+ char *k = strdup (key);
+ if (!k) {
+ free (new_val);
+ return MHD_NO;
+ }
+
+ g_hash_table_insert(ctx,k,new_val);
+
return MHD_YES;
}
-----------------------------------------------------------------------
Summary of changes:
Makefile.am | 2 +
backend.c | 26 +-
backend.h | 3 +-
bootstrap.conf | 10 +
configure.ac | 9 +
gc-wrap.h | 35 +++
gnulib | 2 +-
gnulib-tests/.gitignore | 68 +++++
iwh.h | 2 +
iwhd.spec.in | 1 +
lib/.gitignore | 26 ++
meta.cpp | 5 +-
mpipe.c | 1 +
mpipe.h | 6 -
qparser.y | 45 +---
query.h | 7 +-
replica.c | 64 ++---
rest.c | 710 ++++++++++++++++++++++++++++++++---------------
setup.c | 307 +++++++++++++++++++--
setup.h | 97 ++++++-
state_defs.h | 11 +-
t/Makefile.am | 5 +-
t/basic | 186 +++++--------
t/init.cfg | 91 ++++++
t/provider | 88 ++++++
template.c | 28 +--
template.h | 11 +-
27 files changed, 1340 insertions(+), 506 deletions(-)
create mode 100644 gc-wrap.h
create mode 100644 t/provider
--
iwhd.git ("image warehouse daemon")
[repo.or.cz] iwhd.git branch master updated: v0.0-285-g186b8ec
by Jim Meyering
The branch, master has been updated
via 186b8eca47f80c8cb97c429bcdb7594028880fa2 (commit)
via 114b3cb856c58f24af540c6e9153f42e833c67d2 (commit)
from f22452b530a85b36a50cc8f7dc9fb9ea8387389b (commit)
- Log -----------------------------------------------------------------
http://repo.or.cz/w/iwhd.git/commit/186b8eca47f80c8cb97c429bcdb7594028880fa2
commit 186b8eca47f80c8cb97c429bcdb7594028880fa2
Author: Jim Meyering <meyering(a)redhat.com>
Date: Mon Feb 7 19:34:57 2011 +0100
build: update gnulib submodule to latest
diff --git a/gnulib b/gnulib
index a036b76..6f0680e 160000
--- a/gnulib
+++ b/gnulib
@@ -1 +1 @@
-Subproject commit a036b7684f9671ee53999773785d1865603c3849
+Subproject commit 6f0680eb29a1737d704a1df26aafc00490cd34d8
diff --git a/gnulib-tests/.gitignore b/gnulib-tests/.gitignore
index 8c4281e..4b4d539 100644
--- a/gnulib-tests/.gitignore
+++ b/gnulib-tests/.gitignore
@@ -46,6 +46,7 @@
/test-lstat.c
/test-lstat.h
/test-malloc-gnu.c
+/test-malloca.c
/test-mbrtowc.c
/test-mbrtowc1.sh
/test-mbrtowc2.sh
http://repo.or.cz/w/iwhd.git/commit/114b3cb856c58f24af540c6e9153f42e833c67d2
commit 114b3cb856c58f24af540c6e9153f42e833c67d2
Author: Pete Zaitcev <zaitcev(a)redhat.com>
Date: Fri Feb 4 16:53:26 2011 -0700
avoid hang when creating an object in non-existing bucket
This hang occurs when doing something like the following, without
creating "templates" first:
echo hello | curl -T - http://lembas:9090/templates/my_file
This bug appears to have been introduced due to an incomplete
change during the cons_error/cons_init_error split.
* mpipe.c (pipe_cons_siginit): Use cons_init_done, not cons_init.
(pipe_prod_wait_init): Use cons_init_error, not cons_init.
* t/basic: add test for hang-no-parent bug
diff --git a/mpipe.c b/mpipe.c
index a528285..a66cde3 100644
--- a/mpipe.c
+++ b/mpipe.c
@@ -126,7 +126,7 @@ pipe_cons_siginit (pipe_shared *ps, int error)
}
pthread_cond_broadcast(&ps->prod_cond);
DPRINTF("consumer init signal (total %u done %u error %u)n",
- ps->cons_total,ps->cons_done,ps->cons_error);
+ ps->cons_total,ps->cons_init_done,ps->cons_init_error);
pthread_mutex_unlock(&ps->lock);
}
@@ -146,7 +146,7 @@ pipe_prod_wait_init (pipe_shared *ps)
ps->cons_total,ps->cons_init_done,ps->cons_init_error);
}
pthread_mutex_unlock(&ps->lock);
- return ps->cons_error;
+ return ps->cons_init_error;
}
void
diff --git a/t/basic b/t/basic
index d828ac2..b3eba10 100644
--- a/t/basic
+++ b/t/basic
tr -s '\t \n' ' ' < q.xml > k && mv k q.xml
printf '[ { "bucket": "b99", "key": "my_file" } ] ' > exp.xml
compare q.xml exp.xml || fail=1
+# Before 2011-02-07, this would cause iwhd to hang.
+printf hang | curl -T - http://localhost:$port/no-such/fff
+
# Add a single attribute to an object using the PUT method.
bucket=$bucket
printf nothing | curl -T - $bucket/attr_put || fail=1
-----------------------------------------------------------------------
Summary of changes:
gnulib | 2 +-
gnulib-tests/.gitignore | 1 +
mpipe.c | 4 ++--
t/basic | 3 +++
4 files changed, 7 insertions(+), 3 deletions(-)
--
iwhd.git ("image warehouse daemon")
[repo.or.cz] iwhd.git branch master updated: v0.0-283-gf22452b
by Jim Meyering
The branch, master has been updated
via f22452b530a85b36a50cc8f7dc9fb9ea8387389b (commit)
via 6d0ab2da3a3fb3c92dde97a4072a317a8a63bba3 (commit)
via ad4a71c5af22dd71c159181b37e63fc9ff16a31b (commit)
via 1bd814bab002ff719896c7ce6717f5758a73791d (commit)
from 923ba16951c01909de4ef456baf9bc410844f055 (commit)
- Log -----------------------------------------------------------------
http://repo.or.cz/w/iwhd.git/commit/f22452b530a85b36a50cc8f7dc9fb9ea8387389b
commit f22452b530a85b36a50cc8f7dc9fb9ea8387389b
Author: Jeff Darcy <jdarcy(a)redhat.com>
Date: Fri Feb 4 17:33:30 2011 +0100
don't segfault on a simple query
* qparser.y (free_value): Don't free v->resolved.
(string_value): Disable caching of v->resolved.
* rest.c (proxy_query): Don't free ms here. It's still in use.
* t/basic: Add test case to trigger crash reported by Steve Loranz.
diff --git a/qparser.y b/qparser.y
index 64f025a..d70334a 100644
--- a/qparser.y
+++ b/qparser.y
@@ -394,7 +394,7 @@ free_value (value_t *v)
return;
}
- free((void *)v->resolved);
+ //free((void *)v->resolved);
switch (v->type) {
case T_STRING:
@@ -450,6 +450,9 @@ string_value (value_t *v, const getter_t *oget, const getter_t *sget)
{
const char *left;
+ /* Disable this caching, which seems to be invalid. */
+ v->resolved = NULL;
+
switch (v->type) {
case T_STRING:
return v->as_str;
diff --git a/rest.c b/rest.c
index 21df05b..cb4e238 100644
--- a/rest.c
+++ b/rest.c
@@ -854,7 +854,7 @@ proxy_query (void *cctx, struct MHD_Connection *conn, const char *url,
}
MHD_queue_response(conn,MHD_HTTP_OK,resp);
MHD_destroy_response(resp);
- free_ms(ms);
+ //free_ms(ms);
}
return MHD_YES;
diff --git a/t/basic b/t/basic
index 6a30fcc..d828ac2 100644
--- a/t/basic
+++ b/t/basic
@@ -257,6 +257,16 @@ for i in xml json; do
done
+# Crash reported by Steve Loranz
+curl -f -X PUT http://localhost:$port/b99
+echo hello | curl -T - http://localhost:$port/b99/my_file
+printf mock | curl -T - http://localhost:$port/b99/my_file/target
+curl -H 'Accept: */json' -d '$target=="mock"' \
+  http://localhost:$port/b99/_query > q.xml
+tr -s '\t \n' ' ' < q.xml > k && mv k q.xml
+printf '[ { "bucket": "b99", "key": "my_file" } ] ' > exp.xml
+compare q.xml exp.xml || fail=1
+
# Add a single attribute to an object using the PUT method.
bucket=$bucket
printf nothing | curl -T - $bucket/attr_put || fail=1
http://repo.or.cz/w/iwhd.git/commit/6d0ab2da3a3fb3c92dde97a4072a317a8a63bba3
commit 6d0ab2da3a3fb3c92dde97a4072a317a8a63bba3
Author: Jim Meyering <meyering(a)redhat.com>
Date: Tue Jan 25 11:12:05 2011 +0100
tests: reenable excluded gnulib test; run gnulib-tests first
* bootstrap.conf: Don't disable malloca-test. It has been fixed
so it is no longer so slow.
* Makefile.am (SUBDIRS): Run gnulib-tests before ours,
so the results of ours aren't displaced as gnulib's scroll by.
diff --git a/Makefile.am b/Makefile.am
index ce65678..ddafa71 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -20,7 +20,7 @@ AM_CPPFLAGS = -I/usr/include/glib-2.0 -I/usr/lib64/glib-2.0/include
iwhd_YFLAGS = -d
-SUBDIRS = lib . t man gnulib-tests
+SUBDIRS = lib . gnulib-tests t man
ACLOCAL_AMFLAGS = -I m4
# iwhd is short for Image WareHouse Daemon.
diff --git a/bootstrap.conf b/bootstrap.conf
index 36685ff..625efb4 100644
--- a/bootstrap.conf
+++ b/bootstrap.conf
@@ -15,11 +15,6 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
-# This test is fine, but inordinately slow.
-avoided_gnulib_modules='
- --avoid=malloca-tests
-'
-
# gnulib modules used by this package.
gnulib_modules='
announce-gen
http://repo.or.cz/w/iwhd.git/commit/ad4a71c5af22dd71c159181b37e63fc9ff16a31b
commit ad4a71c5af22dd71c159181b37e63fc9ff16a31b
Author: Jim Meyering <meyering(a)redhat.com>
Date: Fri Feb 4 09:17:17 2011 +0100
build: update gnulib submodule to latest
diff --git a/.gitignore b/.gitignore
index 892be03..f95dca3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,7 +1,9 @@
*.o
+*/.deps
*~
.#*
.deps
+/GNUmakefile
/aclocal.m4
/autom4te.cache/
/build-aux
@@ -10,11 +12,14 @@
/config.log
/config.status
/configure
+/gnulib-tests
/iwhd
/iwhd-*.tar.gz
/iwhd-qparser.c
/iwhd-qparser.h
/iwhd.spec
+/m4/.gitignore
+/maint.mk
/man/iwhd.8
/qlexer.c
/stamp-h1
@@ -25,5 +30,3 @@ ChangeLog
Makefile
Makefile.in
#*#
-/GNUmakefile
-/gnulib-tests
diff --git a/doc/.gitignore b/doc/.gitignore
index df37308..e7a8672 100644
--- a/doc/.gitignore
+++ b/doc/.gitignore
@@ -1 +1,2 @@
+/gendocs_template
gendocs_template
diff --git a/gnulib b/gnulib
index 9040dc9..a036b76 160000
--- a/gnulib
+++ b/gnulib
@@ -1 +1 @@
-Subproject commit 9040dc9567e2b0c9268f5a1e5c84173c0db6d37a
+Subproject commit a036b7684f9671ee53999773785d1865603c3849
diff --git a/gnulib-tests/.gitignore b/gnulib-tests/.gitignore
index 43d1106..8c4281e 100644
--- a/gnulib-tests/.gitignore
+++ b/gnulib-tests/.gitignore
@@ -1,3 +1,103 @@
+/alloca.h
+/alloca.in.h
+/binary-io.h
+/dup2.c
+/fcntl.h
+/fcntl.in.h
+/getpagesize.c
+/gnulib.mk
+/ignore-value.h
+/init.sh
+/lstat.c
+/macros.h
+/malloca.c
+/malloca.h
+/malloca.valgrind
+/open.c
+/pathmax.h
+/putenv.c
+/same-inode.h
+/setenv.c
+/signature.h
+/stat.c
+/stdio-write.c
+/stdio.h
+/stdio.in.h
+/symlink.c
+/sys
+/sys_stat.h
+/sys_stat.in.h
+/test-alloca-opt.c
+/test-binary-io.c
+/test-binary-io.sh
+/test-c-ctype.c
+/test-dirname.c
+/test-dup2.c
+/test-environ.c
+/test-errno.c
+/test-fcntl-h.c
+/test-fpending.c
+/test-fpending.sh
+/test-getopt.c
+/test-getopt.h
+/test-getopt_long.h
+/test-ignore-value.c
+/test-inttypes.c
+/test-lstat.c
+/test-lstat.h
+/test-malloc-gnu.c
+/test-mbrtowc.c
+/test-mbrtowc1.sh
+/test-mbrtowc2.sh
+/test-mbrtowc3.sh
+/test-mbrtowc4.sh
+/test-mbsinit.c
+/test-mbsinit.sh
+/test-memchr.c
+/test-open.c
+/test-open.h
+/test-quotearg-simple.c
+/test-quotearg.h
+/test-realloc-gnu.c
+/test-setenv.c
+/test-stat.c
+/test-stat.h
+/test-stdbool.c
+/test-stddef.c
+/test-stdint.c
+/test-stdio.c
+/test-stdlib.c
+/test-strerror.c
+/test-string.c
+/test-strnlen.c
+/test-symlink.c
+/test-symlink.h
+/test-sys_stat.c
+/test-sys_wait.h
+/test-time.c
+/test-unistd.c
+/test-unsetenv.c
+/test-update-copyright.sh
+/test-vc-list-files-cvs.sh
+/test-vc-list-files-git.sh
+/test-verify.c
+/test-verify.sh
+/test-version-etc.c
+/test-version-etc.sh
+/test-wchar.c
+/test-wctype.c
+/test-xalloc-die.c
+/test-xalloc-die.sh
+/test-xstrtol.c
+/test-xstrtol.sh
+/test-xstrtoul.c
+/test-xstrtoumax.c
+/test-xstrtoumax.sh
+/time.h
+/time.in.h
+/unsetenv.c
+/wctob.c
+/zerosize-ptr.h
alloca.h
alloca.in.h
binary-io.h
diff --git a/lib/.gitignore b/lib/.gitignore
index 22f818d..dc5f382 100644
--- a/lib/.gitignore
+++ b/lib/.gitignore
@@ -1,95 +1,105 @@
-basename-lgpl.c
-basename.c
-c-ctype.c
-c-ctype.h
-close-stream.c
-close-stream.h
-closeout.c
-closeout.h
-config.charset
-dirname-lgpl.c
-dirname.c
-dirname.h
-errno.h
-errno.in.h
-error.c
-error.h
-exitfail.c
-exitfail.h
-fpending.c
-fpending.h
-getopt.c
-getopt.h
-getopt.in.h
-getopt1.c
-getopt_int.h
-gettext.h
-gnulib.mk
-intprops.h
-inttypes.h
-inttypes.in.h
-iswblank.c
-localcharset.c
-localcharset.h
-malloc.c
-mbrtowc.c
-mbsinit.c
-memchr.c
-memchr.valgrind
-progname.c
-progname.h
-quotearg.c
-quotearg.h
-realloc.c
-ref-add.sed
-ref-add.sin
-ref-del.sed
-ref-del.sin
-stdarg.h
-stdarg.in.h
-stdbool.h
-stdbool.in.h
-stddef.h
-stddef.in.h
-stdint.h
-stdint.in.h
-stdlib.h
-stdlib.in.h
-stpcpy.c
-streq.h
-strerror.c
-string.h
-string.in.h
-stripslash.c
-strndup.c
-strnlen.c
-strtoimax.c
-strtol.c
-strtoll.c
-strtoul.c
-strtoull.c
-strtoumax.c
-sys
-sys_wait.h
-sys_wait.in.h
-unistd.h
-unistd.in.h
-unlocked-io.h
-verify.h
-version-etc-fsf.c
-version-etc.c
-version-etc.h
-wchar.h
-wchar.in.h
-wctype.h
-wctype.in.h
-xalloc-die.c
-xalloc.h
-xmalloc.c
-xstrndup.c
-xstrndup.h
-xstrtol-error.c
-xstrtol.c
-xstrtol.h
-xstrtoul.c
-xstrtoumax.c
+/arg-nonnull.h
+/basename-lgpl.c
+/basename.c
+/bitrotate.h
+/c++defs.h
+/c-ctype.c
+/c-ctype.h
+/charset.alias
+/close-stream.c
+/close-stream.h
+/closeout.c
+/closeout.h
+/config.charset
+/configmake.h
+/dirname-lgpl.c
+/dirname.c
+/dirname.h
+/errno.h
+/errno.in.h
+/error.c
+/error.h
+/exitfail.c
+/exitfail.h
+/fpending.c
+/fpending.h
+/getopt.c
+/getopt.h
+/getopt.in.h
+/getopt1.c
+/getopt_int.h
+/gettext.h
+/gnulib.mk
+/hash-pjw.c
+/hash-pjw.h
+/hash.c
+/hash.h
+/intprops.h
+/inttypes.h
+/inttypes.in.h
+/iswblank.c
+/libiwhd.a
+/localcharset.c
+/localcharset.h
+/malloc.c
+/mbrtowc.c
+/mbsinit.c
+/memchr.c
+/memchr.valgrind
+/progname.c
+/progname.h
+/quotearg.c
+/quotearg.h
+/realloc.c
+/ref-add.sed
+/ref-add.sin
+/ref-del.sed
+/ref-del.sin
+/stdarg.h
+/stdarg.in.h
+/stdbool.h
+/stdbool.in.h
+/stddef.h
+/stddef.in.h
+/stdint.h
+/stdint.in.h
+/stdlib.h
+/stdlib.in.h
+/stpcpy.c
+/streq.h
+/strerror.c
+/string.h
+/string.in.h
+/stripslash.c
+/strndup.c
+/strnlen.c
+/strtoimax.c
+/strtol.c
+/strtoll.c
+/strtoul.c
+/strtoull.c
+/strtoumax.c
+/sys
+/sys_wait.in.h
+/unistd.h
+/unistd.in.h
+/unlocked-io.h
+/verify.h
+/version-etc-fsf.c
+/version-etc.c
+/version-etc.h
+/warn-on-use.h
+/wchar.h
+/wchar.in.h
+/wctype.h
+/wctype.in.h
+/xalloc-die.c
+/xalloc.h
+/xmalloc.c
+/xstrndup.c
+/xstrndup.h
+/xstrtol-error.c
+/xstrtol.c
+/xstrtol.h
+/xstrtoul.c
+/xstrtoumax.c
http://repo.or.cz/w/iwhd.git/commit/1bd814bab002ff719896c7ce6717f5758a73791d
commit 1bd814bab002ff719896c7ce6717f5758a73791d
Author: Jim Meyering <meyering(a)redhat.com>
Date: Fri Feb 4 09:17:01 2011 +0100
maint: update files copied from gnulib
* t/init.sh: Update from gnulib.
* bootstrap: Likewise.
diff --git a/bootstrap b/bootstrap
index f3e3be2..e9ec11e 100755
--- a/bootstrap
+++ b/bootstrap
@@ -1,6 +1,6 @@
#! /bin/sh
# Print a version string.
-scriptversion=2010-07-06.10; # UTC
+scriptversion=2011-01-21.16; # UTC
# Bootstrap this package from checked-out sources.
@@ -42,24 +42,32 @@ local_gl_dir=gl
bt='._bootmp'
bt_regex=`echo "$bt"| sed 's/\./[.]/g'`
bt2=${bt}2
+me=$0
usage() {
cat <<EOF
-Usage: $0 [OPTION]...
+Usage: $me [OPTION]...
Bootstrap this package from the checked-out sources.
Options:
- --gnulib-srcdir=DIRNAME Specify the local directory where gnulib
+ --gnulib-srcdir=DIRNAME specify the local directory where gnulib
sources reside. Use this if you already
have gnulib sources on your machine, and
do not want to waste your bandwidth downloading
- them again. Defaults to $GNULIB_SRCDIR.
- --copy Copy files instead of creating symbolic links.
- --force Attempt to bootstrap even if the sources seem
- not to have been checked out.
- --skip-po Do not download po files.
-
-If the file $0.conf exists in the same directory as this script, its
+ them again. Defaults to $GNULIB_SRCDIR
+ --bootstrap-sync if this bootstrap script is not identical to
+ the version in the local gnulib sources,
+ update this script, and then restart it with
+ /bin/sh or the shell $CONFIG_SHELL
+ --no-bootstrap-sync do not check whether bootstrap is out of sync
+ --copy copy files instead of creating symbolic links
+ --force attempt to bootstrap even if the sources seem
+ not to have been checked out
+ --no-git do not use git to update gnulib. Requires that
+ --gnulib-srcdir point to a correct gnulib snapshot
+ --skip-po do not download po files
+
+If the file $me.conf exists in the same directory as this script, its
contents are read as shell variables to configure the bootstrap.
For build prerequisites, environment variables like $AUTOCONF and $AMTAR
@@ -80,6 +88,10 @@ gnulib_modules=
# Any gnulib files needed that are not in modules.
gnulib_files=
+# A function to be called to edit gnulib.mk right after it's created.
+# Override it via your own definition in bootstrap.conf.
+gnulib_mk_hook() { :; }
+
# A function to be called after everything else in this script.
# Override it via your own definition in bootstrap.conf.
bootstrap_epilogue() { :; }
@@ -164,6 +176,13 @@ copy=false
# on which version control system (if any) is used in the source directory.
vc_ignore=auto
+# Set this to true in bootstrap.conf to enable --bootstrap-sync by
+# default.
+bootstrap_sync=false
+
+# Use git to update gnulib sources
+use_git=true
+
# find_tool ENVVAR NAMES...
# -------------------------
# Search for a required program. Use the value of ENVVAR, if set,
@@ -188,11 +207,11 @@ find_tool ()
find_tool_error_prefix="\$$find_tool_envvar: "
fi
if test x"$find_tool_res" = x; then
- echo >&2 "$0: one of these is required: $find_tool_names"
+ echo >&2 "$me: one of these is required: $find_tool_names"
exit 1
fi
($find_tool_res --version </dev/null) >/dev/null 2>&1 || {
- echo >&2 "$0: ${find_tool_error_prefix}cannot run $find_tool_res --version"
+ echo >&2 "$me: ${find_tool_error_prefix}cannot run $find_tool_res --version"
exit 1
}
eval "$find_tool_envvar=$find_tool_res"
@@ -235,12 +254,25 @@ do
checkout_only_file=;;
--copy)
copy=true;;
+ --bootstrap-sync)
+ bootstrap_sync=true;;
+ --no-bootstrap-sync)
+ bootstrap_sync=false;;
+ --no-git)
+ use_git=false;;
*)
echo >&2 "$0: $option: unknown option"
exit 1;;
esac
done
+if $use_git || test -d "$GNULIB_SRCDIR"; then
+ :
+else
+ echo "$0: Error: --no-git requires --gnulib-srcdir" >&2
+ exit 1
+fi
+
if test -n "$checkout_only_file" && test ! -r "$checkout_only_file"; then
echo "$0: Bootstrapping from a non-checked-out distribution is risky." >&2
exit 1
@@ -257,6 +289,21 @@ insert_sorted_if_absent() {
|| exit 1
}
+# Adjust $PATTERN for $VC_IGNORE_FILE and insert it with
+# insert_sorted_if_absent.
+insert_vc_ignore() {
+ vc_ignore_file="$1"
+ pattern="$2"
+ case $vc_ignore_file in
+ *.gitignore)
+ # A .gitignore entry that does not start with `/' applies
+ # recursively to subdirectories, so prepend `/' to every
+ # .gitignore entry.
+ pattern=`echo "$pattern" | sed s,^,/,`;;
+ esac
+ insert_sorted_if_absent "$vc_ignore_file" "$pattern"
+}
+
# Die if there is no AC_CONFIG_AUX_DIR($build_aux) line in configure.ac.
found_aux_dir=no
grep '^[ ]*AC_CONFIG_AUX_DIR(\['"$build_aux"'\])' configure.ac
@@ -275,7 +322,7 @@ if test ! -d $build_aux; then
mkdir $build_aux
for dot_ig in x $vc_ignore; do
test $dot_ig = x && continue
- insert_sorted_if_absent $dot_ig $build_aux
+ insert_vc_ignore $dot_ig $build_aux
done
fi
@@ -325,17 +372,18 @@ get_version() {
$app --version >/dev/null 2>&1 || return 1
$app --version 2>&1 |
- sed -n '# extract version within line
- s/.*[v ]\{1,\}\([0-9]\{1,\}\.[.a-z0-9-]*\).*/\1/
- t done
+ sed -n '# Move version to start of line.
+ s/.*[v ]\([0-9]\)/\1/
+
+ # Skip lines that do not start with version.
+ /^[0-9]/!d
- # extract version at start of line
- s/^\([0-9]\{1,\}\.[.a-z0-9-]*\).*/\1/
- t done
+ # Remove characters after the version.
+ s/[^.a-z0-9-].*//
- d
+ # The first component must be digits only.
+ s/^\([0-9]*\)[a-z-].*/\1/
- :done
#the following essentially does s/5.005/5.5/
s/\.0*\([1-9]\)/.\1/g
p
@@ -346,18 +394,26 @@ check_versions() {
ret=0
while read app req_ver; do
+ # We only need libtoolize from the libtool package.
+ if test "$app" = libtool; then
+ app=libtoolize
+ fi
+ # Exempt git if --no-git is in effect.
+ if test "$app" = git; then
+ $use_git || continue
+ fi
# Honor $APP variables ($TAR, $AUTOCONF, etc.)
- appvar=`echo $app | tr '[a-z]' '[A-Z]'`
+ appvar=`echo $app | tr '[a-z]-' '[A-Z]_'`
test "$appvar" = TAR && appvar=AMTAR
eval "app=\${$appvar-$app}"
inst_ver=$(get_version $app)
if [ ! "$inst_ver" ]; then
- echo "Error: '$app' not found" >&2
+ echo "$me: Error: '$app' not found" >&2
ret=1
elif [ ! "$req_ver" = "-" ]; then
latest_ver=$(sort_ver $req_ver $inst_ver | cut -d' ' -f2)
if [ ! "$latest_ver" = "$inst_ver" ]; then
- echo "Error: '$app' version == $inst_ver is too old" >&2
+ echo "$me: Error: '$app' version == $inst_ver is too old" >&2
echo " '$app' version >= $req_ver is required" >&2
ret=1
fi
@@ -370,16 +426,30 @@ check_versions() {
print_versions() {
echo "Program Min_version"
echo "----------------------"
- printf "$buildreq"
+ printf %s "$buildreq"
echo "----------------------"
# can't depend on column -t
}
+use_libtool=0
+# We'd like to use grep -E, to see if any of LT_INIT,
+# AC_PROG_LIBTOOL, AM_PROG_LIBTOOL is used in configure.ac,
+# but that's not portable enough (e.g., for Solaris).
+grep '^[ ]*A[CM]_PROG_LIBTOOL' configure.ac >/dev/null \
+  && use_libtool=1
+grep '^[ ]*LT_INIT' configure.ac >/dev/null \
+  && use_libtool=1
+if test $use_libtool = 1; then
+ find_tool LIBTOOLIZE glibtoolize libtoolize
+fi
+
if ! printf "$buildreq" | check_versions; then
- test -f README-prereq &&
- echo "See README-prereq for notes on obtaining these prerequisite programs:" >&2
- echo
- print_versions
+ echo >&2
+ if test -f README-prereq; then
+ echo "$0: See README-prereq for how to get the prerequisite programs" >&2
+ else
+ echo "$0: Please install the prerequisite programs" >&2
+ fi
exit 1
fi
@@ -390,11 +460,11 @@ if test -d .git && (git --version) >/dev/null 2>/dev/null ; then
if git config merge.merge-changelog.driver >/dev/null ; then
:
elif (git-merge-changelog --version) >/dev/null 2>/dev/null ; then
- echo "initializing git-merge-changelog driver"
+ echo "$0: initializing git-merge-changelog driver"
git config merge.merge-changelog.name 'GNU-style ChangeLog merge driver'
git config merge.merge-changelog.driver 'git-merge-changelog %O %A %B'
else
- echo "consider installing git-merge-changelog from gnulib"
+ echo "$0: consider installing git-merge-changelog from gnulib"
fi
fi
@@ -410,7 +480,7 @@ git_modules_config () {
}
gnulib_path=`git_modules_config submodule.gnulib.path`
-: ${gnulib_path=gnulib}
+test -z "$gnulib_path" && gnulib_path=gnulib
# Get gnulib files.
@@ -463,6 +533,16 @@ case ${GNULIB_SRCDIR--} in
;;
esac
+if $bootstrap_sync; then
+ cmp -s "$0" "$GNULIB_SRCDIR/build-aux/bootstrap" || {
+ echo "$0: updating bootstrap and restarting..."
+ exec sh -c \
+ 'cp "$1" "$2" && shift && exec "${CONFIG_SHELL-/bin/sh}" "$@"' \
+ -- "$GNULIB_SRCDIR/build-aux/bootstrap" \
+ "$0" "$@" --no-bootstrap-sync
+ }
+fi
+
gnulib_tool=$GNULIB_SRCDIR/gnulib-tool
<$gnulib_tool || exit
@@ -471,7 +551,7 @@ gnulib_tool=$GNULIB_SRCDIR/gnulib-tool
download_po_files() {
subdir=$1
domain=$2
- echo "$0: getting translations into $subdir for $domain..."
+ echo "$me: getting translations into $subdir for $domain..."
cmd=`printf "$po_download_command_format" "$domain" "$subdir"`
eval "$cmd"
}
@@ -505,7 +585,7 @@ update_po_files() {
! test -f "$po_dir/$po.po" ||
! $SHA1SUM -c --status "$cksum_file" < "$new_po" > /dev/null; then
- echo "updated $po_dir/$po.po..."
+ echo "$me: updated $po_dir/$po.po..."
cp "$new_po" "$po_dir/$po.po" && $SHA1SUM < "$new_po" > "$cksum_file"
fi
@@ -543,20 +623,20 @@ symlink_to_dir()
for dot_ig in x $vc_ignore; do
test $dot_ig = x && continue
ig=$parent/$dot_ig
- insert_sorted_if_absent $ig `echo "$dst_dir"|sed 's,.*/,,'`
+ insert_vc_ignore $ig `echo "$dst_dir"|sed 's,.*/,,'`
done
fi
if $copy; then
{
test ! -h "$dst" || {
- echo "$0: rm -f $dst" &&
+ echo "$me: rm -f $dst" &&
rm -f "$dst"
}
} &&
test -f "$dst" &&
cmp -s "$src" "$dst" || {
- echo "$0: cp -fp $src $dst" &&
+ echo "$me: cp -fp $src $dst" &&
cp -fp "$src" "$dst"
}
else
@@ -570,7 +650,7 @@ symlink_to_dir()
*)
case /$dst/ in
*//* | */../* | */./* | /*/*/*/*/*/)
- echo >&2 "$0: invalid symlink calculation: $src -> $dst"
+ echo >&2 "$me: invalid symlink calculation: $src -> $dst"
exit 1;;
/*/*/*/*/) dot_dots=../../../;;
/*/*/*/) dot_dots=../../;;
@@ -578,7 +658,7 @@ symlink_to_dir()
esac;;
esac
- echo "$0: ln -fs $dot_dots$src $dst" &&
+ echo "$me: ln -fs $dot_dots$src $dst" &&
ln -fs "$dot_dots$src" "$dst"
}
fi
@@ -611,7 +691,7 @@ cp_mark_as_generated()
cmp -s "$cp_src" "$cp_dst" || {
# Copy the file first to get proper permissions if it
# doesn't already exist. Then overwrite the copy.
- echo "$0: cp -f $cp_src $cp_dst" &&
+ echo "$me: cp -f $cp_src $cp_dst" &&
rm -f "$cp_dst" &&
cp "$cp_src" "$cp_dst-t" &&
sed "s!$bt_regex/!!g" "$cp_src" > "$cp_dst-t" &&
@@ -629,7 +709,7 @@ cp_mark_as_generated()
if cmp -s "$cp_dst-t" "$cp_dst"; then
rm -f "$cp_dst-t"
else
- echo "$0: cp $cp_src $cp_dst # with edits" &&
+ echo "$me: cp $cp_src $cp_dst # with edits" &&
mv -f "$cp_dst-t" "$cp_dst"
fi
fi
@@ -648,7 +728,7 @@ version_controlled_file() {
elif test -d .svn; then
svn log -r HEAD "$dir/$file" > /dev/null 2>&1 && found=yes
else
- echo "$0: no version control for $dir/$file?" >&2
+ echo "$me: no version control for $dir/$file?" >&2
fi
test $found = yes
}
@@ -660,7 +740,8 @@ slurp() {
for file in `ls -a $1/$dir`; do
case $file in
.|..) continue;;
- .*) continue;; # FIXME: should all file names starting with "." be ignored?
+ # FIXME: should all file names starting with "." be ignored?
+ .*) continue;;
esac
test -d $1/$dir/$file && continue
for excluded_file in $excluded_files; do
@@ -669,18 +750,20 @@ slurp() {
if test $file = Makefile.am && test "X$gnulib_mk" != XMakefile.am; then
copied=$copied${sep}$gnulib_mk; sep=$nl
remove_intl='/^[^#].*\/intl\//s/^/#/;'"s!$bt_regex/!!g"
- sed "$remove_intl" $1/$dir/$file | cmp - $dir/$gnulib_mk > /dev/null || {
- echo "$0: Copying $1/$dir/$file to $dir/$gnulib_mk ..." &&
+ sed "$remove_intl" $1/$dir/$file |
+ cmp - $dir/$gnulib_mk > /dev/null || {
+ echo "$me: Copying $1/$dir/$file to $dir/$gnulib_mk ..." &&
rm -f $dir/$gnulib_mk &&
- sed "$remove_intl" $1/$dir/$file >$dir/$gnulib_mk
+ sed "$remove_intl" $1/$dir/$file >$dir/$gnulib_mk &&
+ gnulib_mk_hook $dir/$gnulib_mk
}
elif { test "${2+set}" = set && test -r $2/$dir/$file; } ||
version_controlled_file $dir $file; then
- echo "$0: $dir/$file overrides $1/$dir/$file"
+ echo "$me: $dir/$file overrides $1/$dir/$file"
else
copied=$copied$sep$file; sep=$nl
if test $file = gettext.m4; then
- echo "$0: patching m4/gettext.m4 to remove need for intl/* ..."
+ echo "$me: patching m4/gettext.m4 to remove need for intl/* ..."
rm -f $dir/$file
sed '
/^AC_DEFUN(\[AM_INTL_SUBDIR\],/,/^]/c\
@@ -700,18 +783,25 @@ slurp() {
test $dot_ig = x && continue
ig=$dir/$dot_ig
if test -n "$copied"; then
- insert_sorted_if_absent $ig "$copied"
+ insert_vc_ignore $ig "$copied"
# If an ignored file name ends with .in.h, then also add
# the name with just ".h". Many gnulib headers are generated,
# e.g., stdint.in.h -> stdint.h, dirent.in.h ->..., etc.
# Likewise for .gperf -> .h, .y -> .c, and .sin -> .sed
- f=`echo "$copied"|sed 's/\.in\.h$/.h/;s/\.sin$/.sed/;s/\.y$/.c/;s/\.gperf$/.h/'`
- insert_sorted_if_absent $ig "$f"
+ f=`echo "$copied" |
+ sed '
+ s/\.in\.h$/.h/
+ s/\.sin$/.sed/
+ s/\.y$/.c/
+ s/\.gperf$/.h/
+ '
+ `
+ insert_vc_ignore $ig "$f"
# For files like sys_stat.in.h and sys_time.in.h, record as
# ignorable the directory we might eventually create: sys/.
f=`echo "$copied"|sed 's/sys_.*\.in\.h$/sys/'`
- insert_sorted_if_absent $ig "$f"
+ insert_vc_ignore $ig "$f"
fi
done
done
@@ -736,6 +826,12 @@ gnulib_tool_options=" --local-dir $local_gl_dir $gnulib_tool_option_extras "
+if test $use_libtool = 1; then
+ case "$gnulib_tool_options " in
+ *' --libtool '*) ;;
+ *) gnulib_tool_options="$gnulib_tool_options --libtool" ;;
+ esac
+fi
echo "$0: $gnulib_tool $gnulib_tool_options --import ..."
$gnulib_tool $gnulib_tool_options --import $gnulib_modules &&
slurp $bt || exit
@@ -778,20 +874,12 @@ grep -E '^[ ]*AC_CONFIG_HEADERS?\>' configure.ac >/dev/null ||
for command in \
  libtool \
- "${ACLOCAL-aclocal} --force -I m4" \
+ "${ACLOCAL-aclocal} --force -I m4 $ACLOCAL_FLAGS" \
  "${AUTOCONF-autoconf} --force" \
  "${AUTOHEADER-autoheader} --force" \
  "${AUTOMAKE-automake} --add-missing --copy --force-missing"
do
if test "$command" = libtool; then
- use_libtool=0
- # We'd like to use grep -E, to see if any of LT_INIT,
- # AC_PROG_LIBTOOL, AM_PROG_LIBTOOL is used in configure.ac,
- # but that's not portable enough (e.g., for Solaris).
- grep '^[ ]*A[CM]_PROG_LIBTOOL' configure.ac >/dev/null \
-   && use_libtool=1
- grep '^[ ]*LT_INIT' configure.ac >/dev/null \
-   && use_libtool=1
test $use_libtool = 0 && continue
command="${LIBTOOLIZE-libtoolize} -c -f"
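As an aside, the `.gitignore` anchoring rule implemented by the new insert_vc_ignore function in the bootstrap diff above can be tried standalone. This is a minimal sketch, not the project's code; the file name and pattern below are illustrative placeholders:

```shell
# Minimal sketch of the insert_vc_ignore anchoring rule shown above.
# A .gitignore entry that does not start with "/" applies recursively
# to subdirectories, so a "/" is prepended before the entry is
# inserted.  vc_ignore_file and pattern are illustrative placeholders.
vc_ignore_file='.gitignore'
pattern='getopt.c'
case $vc_ignore_file in
  *.gitignore)
    pattern=`echo "$pattern" | sed s,^,/,`;;
esac
echo "$pattern"    # prints "/getopt.c"
```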
diff --git a/t/init.sh b/t/init.sh
index 366701a..71c6516 100644
--- a/t/init.sh
+++ b/t/init.sh
@@ -74,10 +74,10 @@ Exit () { set +e; (exit $1); exit $1; }
# the reason for skip/failure to console, rather than to the .log files.
: ${stderr_fileno_=2}
-warn_() { echo "$@" 1>&$stderr_fileno_; }
-fail_() { warn_ "$ME_: failed test: $@"; Exit 1; }
-skip_() { warn_ "$ME_: skipped test: $@"; Exit 77; }
-framework_failure_() { warn_ "$ME_: set-up failure: $@"; Exit 99; }
+warn_ () { echo "$@" 1>&$stderr_fileno_; }
+fail_ () { warn_ "$ME_: failed test: $@"; Exit 1; }
+skip_ () { warn_ "$ME_: skipped test: $@"; Exit 77; }
+framework_failure_ () { warn_ "$ME_: set-up failure: $@"; Exit 99; }
# Sanitize this shell to POSIX mode, if possible.
DUALCASE=1; export DUALCASE
@@ -111,7 +111,7 @@ fi
# Eval this code in a subshell to determine a shell's suitability.
# 10 - passes all tests; ok to use
-# 9 - ok, but enabling "set -x" corrupts application stderr; prefer higher score
+# 9 - ok, but enabling "set -x" corrupts app stderr; prefer higher score
# ? - not ok
gl_shell_test_script_='
test $(echo y) = y || exit 1
@@ -193,7 +193,7 @@ fi
test -n "$EXEEXT" && shopt -s expand_aliases
# Enable glibc's malloc-perturbing option.
-# This is cheap and useful for exposing code that depends on the fact that
+# This is useful for exposing code that depends on the fact that
# malloc-related functions often return memory that is mostly zeroed.
# If you have the time and cycles, use valgrind to do an even better job.
: ${MALLOC_PERTURB_=87}
@@ -202,22 +202,22 @@ export MALLOC_PERTURB_
# This is a stub function that is run upon trap (upon regular exit and
# interrupt). Override it with a per-test function, e.g., to unmount
# a partition, or to undo any other global state changes.
-cleanup_() { :; }
+cleanup_ () { :; }
if ( diff --version < /dev/null 2>&1 | grep GNU ) > /dev/null 2>&1; then
- compare() { diff -u "$@"; }
+ compare () { diff -u "$@"; }
elif ( cmp --version < /dev/null 2>&1 | grep GNU ) > /dev/null 2>&1; then
- compare() { cmp -s "$@"; }
+ compare () { cmp -s "$@"; }
else
- compare() { cmp "$@"; }
+ compare () { cmp "$@"; }
fi
# An arbitrary prefix to help distinguish test directories.
-testdir_prefix_() { printf gt; }
+testdir_prefix_ () { printf gt; }
# Run the user-overridable cleanup_ function, remove the temporary
# directory and exit with the incoming value of $?.
-remove_tmp_()
+remove_tmp_ ()
{
__st=$?
cleanup_
@@ -233,7 +233,7 @@ remove_tmp_()
# contains only the specified bytes (see the case stmt below), then print
# a space-separated list of those names and return 0. Otherwise, don't
# print anything and return 1. Naming constraints apply also to DIR.
-find_exe_basenames_()
+find_exe_basenames_ ()
{
feb_dir_=$1
feb_fail_=0
@@ -245,6 +245,9 @@ find_exe_basenames_()
# below, just skip it.
test "x$feb_file_" = "x$feb_dir_/*.exe" && test ! -f "$feb_file_" && continue
+ # Exempt [.exe, since we can't create a function by that name, yet
+ # we can't invoke [ by PATH search anyways due to shell builtins.
+ test "x$feb_file_" = "x$feb_dir_/[.exe" && continue
case $feb_file_ in
*[!-a-zA-Z/0-9_.+]*) feb_fail_=1; break;;
*) # Remove leading file name components as well as the .exe suffix.
@@ -263,7 +266,7 @@ find_exe_basenames_()
# PROG that simply invokes PROG.exe, then return 0. If any selected
# file name or the directory name, $1, contains an unexpected character,
# define no alias and return 1.
-create_exe_shims_()
+create_exe_shims_ ()
{
case $EXEEXT in
'') return 0 ;;
@@ -272,7 +275,7 @@ create_exe_shims_()
esac
base_names_=`find_exe_basenames_ $1` \
- || { echo "$0 (exe_shim): skipping directory: $1" 1>&2; return 1; }
+ || { echo "$0 (exe_shim): skipping directory: $1" 1>&2; return 0; }
if test -n "$base_names_"; then
for base_ in $base_names_; do
@@ -285,7 +288,7 @@ create_exe_shims_()
# Use this function to prepend to PATH an absolute name for each
# specified, possibly-$initial_cwd_-relative, directory.
-path_prepend_()
+path_prepend_ ()
{
while test $# != 0; do
path_dir_=$1
@@ -308,7 +311,7 @@ path_prepend_()
export PATH
}
-setup_()
+setup_ ()
{
if test "$VERBOSE" = yes; then
# Test whether set -x may cause the selected shell to corrupt an
@@ -324,12 +327,19 @@ setup_()
fi
initial_cwd_=$PWD
+ fail=0
pfx_=`testdir_prefix_`
test_dir_=`mktempd_ "$initial_cwd_" "$pfx_-$ME_.XXXX"` \
|| fail_ "failed to create temporary directory in $initial_cwd_"
cd "$test_dir_"
+ # As autoconf-generated configure scripts do, ensure that IFS
+ # is defined initially, so that saving and restoring $IFS works.
+ gl_init_sh_nl_='
+'
+ IFS=" "" $gl_init_sh_nl_"
+
# This trap statement, along with a trap on 0 below, ensure that the
# temporary directory, $test_dir_, is removed upon exit as well as
# upon receipt of any of the listed signals.
@@ -354,7 +364,7 @@ setup_()
# - make only $MAX_TRIES_ attempts
# Helper function. Print $N pseudo-random bytes from a-zA-Z0-9.
-rand_bytes_()
+rand_bytes_ ()
{
n_=$1
@@ -386,7 +396,7 @@ rand_bytes_()
| LC_ALL=C tr -c $chars_ 01234567$chars_$chars_$chars_
}
-mktempd_()
+mktempd_ ()
{
case $# in
2);;
@@ -407,11 +417,10 @@ mktempd_()
case $template_ in
*XXXX) ;;
- *) fail_ "invalid template: $template_ (must have a suffix of at least 4 X's)";;
+ *) fail_ \
+ "invalid template: $template_ (must have a suffix of at least 4 X's)";;
esac
- fail=0
-
# First, try to use mktemp.
d=`unset TMPDIR; mktemp -d -t -p "$destdir_" "$template_" 2>/dev/null` || fail=1
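For context, the version-extraction idea from the get_version hunk earlier in this diff can be exercised as a standalone pipeline. The sed program below is an adaptation, not a verbatim copy of the script, and assumes the archive rendering stripped the original backslash escapes:

```shell
# Standalone sketch of the get_version extraction step: pull a bare
# version number out of a typical `tool --version` first line.
ver=`echo 'autoconf (GNU Autoconf) 2.68' |
  sed -n '# Move version to start of line.
    s/.*[v ]\([0-9]\)/\1/
    # Skip lines that do not start with version.
    /^[0-9]/!d
    # Remove characters after the version.
    s/[^.a-z0-9-].*//
    p
    q'`
echo "$ver"    # prints "2.68"
```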
-----------------------------------------------------------------------
Summary of changes:
.gitignore | 7 +-
Makefile.am | 2 +-
bootstrap | 210 +++++++++++++++++++++++++++++++++--------------
bootstrap.conf | 5 -
doc/.gitignore | 1 +
gnulib | 2 +-
gnulib-tests/.gitignore | 100 ++++++++++++++++++++++
lib/.gitignore | 200 +++++++++++++++++++++++---------------------
qparser.y | 5 +-
rest.c | 2 +-
t/basic | 10 ++
t/init.sh | 53 +++++++-----
12 files changed, 408 insertions(+), 189 deletions(-)
repo.or.cz automatic notification. Contact project admin jim(a)meyering.net
if you want to unsubscribe, or site admin admin(a)repo.or.cz if you receive
no reply.
--
iwhd.git ("image warehouse daemon")