Hi,
I have explored in detail why the responder_cache_req-tests test fails so often. I created a new VM with RHEL 6.7.
OBSERVATION:
As we know, the CI machines are under pressure, so I wrote a simple cpu_braker, see [1]. I ran the tests 50 times with cpu_braker running (average load > 2.60, only 1 CPU). Results:
[ RUN ] test_users_by_filter_multiple_domains_valid 0x2 != 0 src/tests/cmocka/test_responder_cache_req.c:1875: error: Failure!
[ RUN ] test_users_by_filter_multiple_domains_valid 0x1 != 0x2 src/tests/cmocka/test_responder_cache_req.c:1879: error: Failure!
[ RUN ] test_groups_by_filter_valid 0x1 != 0x2 src/tests/cmocka/test_responder_cache_req.c:1972: error: Failure!
[ RUN ] test_groups_by_filter_multiple_domains_valid 0x2 != 0 src/tests/cmocka/test_responder_cache_req.c:2051: error: Failure!
[ RUN ] test_groups_by_filter_multiple_domains_valid 0x1 != 0x2 src/tests/cmocka/test_responder_cache_req.c:2055: error: Failure!
These errors say the tests failed to retrieve data from the cache. The tests insert two test values into the cache at the beginning of their run and then try to pull them back. Sometimes, when the machine is under pressure, they fail.
For a more detailed explanation, I added some printf() calls and ran the test 25 times. The results:
[ RUN ] test_users_by_filter_valid
>>> ... sysdb_store_user at [1439384336] (src/db/sysdb_ops.c:1882)
>>> ... cache_req_input_create at [1439384337] (src/responder/common/responder_cache_req.c:122)
>>> ... recent_filter = [(lastUpdate>=1439384337)] (src/responder/common/responder_cache_req.c:44)
>>> ... sysdb_store_user at [1439384337] (src/db/sysdb_ops.c:1882)
>>> ... recent_filter = [(lastUpdate>=1439384337)] (src/responder/common/responder_cache_req.c:44)
0x1 != 0x2
src/tests/cmocka/test_responder_cache_req.c:1748: error: Failure!
[ RUN ] test_users_by_filter_multiple_domains_valid
>>> ... sysdb_store_user at [1439384174] (src/db/sysdb_ops.c:1882)
>>> ... sysdb_store_user at [1439384174] (src/db/sysdb_ops.c:1882)
>>> ... cache_req_input_create at [1439384175] (src/responder/common/responder_cache_req.c:122)
>>> ... recent_filter = [(lastUpdate>=1439384175)] (src/responder/common/responder_cache_req.c:44)
>>> ... recent_filter = [(lastUpdate>=1439384175)] (src/responder/common/responder_cache_req.c:44)
0x2 != 0
src/tests/cmocka/test_responder_cache_req.c:1874: error: Failure!
[ RUN ] test_groups_by_filter_valid
>>> ... sysdb_store_group at [1439385276] (src/db/sysdb_ops.c:2042)
>>> ... cache_req_input_create at [1439385277] (src/responder/common/responder_cache_req.c:122)
>>> ... recent_filter = [(lastUpdate>=1439385277)] (src/responder/common/responder_cache_req.c:67)
>>> ... sysdb_store_group at [1439385277] (src/db/sysdb_ops.c:2042)
>>> ... recent_filter = [(lastUpdate>=1439385277)] (src/responder/common/responder_cache_req.c:67)
0x1 != 0x2
src/tests/cmocka/test_responder_cache_req.c:1971: error: Failure!
[ RUN ] test_groups_by_filter_multiple_domains_valid
>>> ... sysdb_store_group at [1439385286] (src/db/sysdb_ops.c:2042)
>>> ... sysdb_store_group at [1439385287] (src/db/sysdb_ops.c:2042)
>>> ... cache_req_input_create at [1439385287] (src/responder/common/responder_cache_req.c:122)
>>> ... recent_filter = [(lastUpdate>=1439385287)] (src/responder/common/responder_cache_req.c:67)
>>> ... recent_filter = [(lastUpdate>=1439385287)] (src/responder/common/responder_cache_req.c:67)
0x1 != 0x2
src/tests/cmocka/test_responder_cache_req.c:2054: error: Failure!
As we can see, we have discovered a newly failing test, test_users_by_filter_valid.
REPRODUCER:
Use cpu_braker [1] and the observation patch [2] and try some iterations: # for i in {1..50} ; do ./responder_cache_req-tests ; done
SOLUTION?
The problem is caused by retrieving records from the cache with a time filter set. The time filter is set to the time the request was created. However, in our tests we create the request after inserting the records into the cache. Therefore the record timestamps and the time filter can differ.
So, the solution is either to create the request first and then insert the records, or to create the request and set: # req.req_start = req.req_start - 1
Please, can you help me? For example, see the function test_users_by_filter_multiple_domains_valid() in src/tests/cmocka/test_responder_cache_req.c:1834.
Regards
Petr
ATTACHMENTS:
[1] cpu_braker.c [2] 0001-TEST-Observation-patch.patch
On (12/08/15 17:57), Petr Cech wrote:
From b58608eaadca863b28b0cc80b0588fa536d508b8 Mon Sep 17 00:00:00 2001
From: Petr Cech <pcech@redhat.com>
Date: Wed, 12 Aug 2015 15:41:03 +0200
Subject: [PATCH] [TEST]: Observation patch

This patch is part of the reproducer, nothing more.

Resolves: https://fedorahosted.org/sssd/ticket/2730

 src/db/sysdb_ops.c                         |  6 ++++++
 src/responder/common/responder_cache_req.c | 11 +++++++++++
 2 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/src/db/sysdb_ops.c b/src/db/sysdb_ops.c
index d1d43ebe6c71611f3371b2f4ccf5f7911909c9de..341380c2ed5d30ea1f63b25bdf2a860a1296a979 100644
--- a/src/db/sysdb_ops.c
+++ b/src/db/sysdb_ops.c
@@ -1878,6 +1878,9 @@ int sysdb_store_user(struct sss_domain_info *domain,
     errno_t sret = EOK;
     bool in_transaction = false;

+    printf(">>> ... sysdb_store_user at [%lld] (%s:%i)\n",
+           (long long) now, __FILE__, __LINE__);
+
FYI: you can use debug macros as well for troubleshooting issues. Debugging in most of the tests can be enabled with the command line argument --debug-level=9, because the test contains the macro SSSD_DEBUG_OPTS in its struct poptOption array and debugging is initialized with another macro, "DEBUG_CLI_INIT(debug_level);".
I often use a trick to override the default debug_level and avoid using command line arguments: debug_level = 0xFFF0; DEBUG_CLI_INIT(debug_level);
HTH
LS
On 08/13/2015 07:49 AM, Lukas Slebodnik wrote:
Thanks.
sssd-devel mailing list sssd-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/sssd-devel
Hi, I wrote a patch. Regards, Petr
On (14/08/15 15:17), Petr Cech wrote:
From c871c97862997df4e724647f1a0ce7297f2f059b Mon Sep 17 00:00:00 2001
From: Petr Cech <pcech@redhat.com>
Date: Fri, 14 Aug 2015 13:17:22 +0200
Subject: [PATCH] TEST: Fix for responder_cache_req-tests
The tests that do not pass have a problem with time. The time of writing the records into the database differed from the time of creating the request, which is used for filtering the records internally.

The patch modifies the time of record creation (adds one second to now()), so the two times should not differ.
Resolves: https://fedorahosted.org/sssd/ticket/2730
 src/tests/cmocka/test_responder_cache_req.c | 18 ++++++++++++------
 1 files changed, 12 insertions(+), 6 deletions(-)
diff --git a/src/tests/cmocka/test_responder_cache_req.c b/src/tests/cmocka/test_responder_cache_req.c
index 032fe429ac88b8cc9113976329ea04837f287276..4f77fe767e016496652a97c7a73fc9e29ba7faf0 100644
--- a/src/tests/cmocka/test_responder_cache_req.c
+++ b/src/tests/cmocka/test_responder_cache_req.c
@@ -1721,9 +1721,10 @@ void test_users_by_filter_valid(void **state)
     test_ctx = talloc_get_type_abort(*state, struct cache_req_test_ctx);
     test_ctx->create_user = true;

+    /* set (time+1) to avoid failure request time filter */
     ret = sysdb_store_user(test_ctx->tctx->dom, TEST_USER_NAME2,
                            "pwd", 1001, 1001, NULL, NULL, NULL,
                            "cn="TEST_USER_NAME2",dc=test", NULL,
-                           NULL, 1000, time(NULL));
+                           NULL, 1000, time(NULL)+1);
     assert_int_equal(ret, EOK);
Although this patch fixes the intermittent failures, there are a few problems.
The prototype of the function sysdb_store_user is:

/* this function does not check that all user members are actually present */

/* if one of the basic attributes is empty ("") as opposed to NULL,
 * this will just remove it */
int sysdb_store_user(struct sss_domain_info *domain,
                     const char *name,
                     const char *pwd,
                     uid_t uid, gid_t gid,
                     const char *gecos,
                     const char *homedir,
                     const char *shell,
                     const char *orig_dn,
                     struct sysdb_attrs *attrs,
                     char **remove_attrs,
                     uint64_t cache_timeout,
                     time_t now);

and if now is 0 then we will get the current time:

1912     /* get transaction timestamp */
1913     if (!now) {
1914         now = time(NULL);
1915     }
I do not understand why we should set the current time (now) to the future, "time(NULL)+1". I didn't check it properly, but if now is used as a transaction timestamp (according to the comment), it should not be from the future.
LS
On 08/17/2015 08:52 AM, Lukas Slebodnik wrote:
The initial value was time(NULL), and it could simply be 0, I agree with that. (I've tried changing time(NULL) to 0, but unfortunately it was not enough. The problem is elsewhere.)
The problem is in reading the data. There is a filter from a certain time: the internally used time is set to the time the request for reading the data was created. But this request is created after inserting the data. Therefore the timestamp of the inserted data and the timestamp of the request creation can differ, especially if the machine is busy.
A completely correct (meaning clean) solution would be to create the request for reading the data at the beginning of the test, then insert the data, and then try to read it. I tried this, but I then ran into complications with the mocking.
Petr
On (17/08/15 10:17), Petr Cech wrote:
But a busy machine can also occur in production. Can this situation happen in a real deployment?

The unit test should be considered additional documentation. Your solution would mean we can/should use "time(NULL)+1" even in production, which does not seem to be a correct solution.
A completely correct (meaning clean) solution would be to create the request for reading the data at the beginning of the test, then insert the data, and then try to read it. I tried this, but I then ran into complications with the mocking.
If you have trouble with mock/cmocka then please at least try to bisect which patch broke it and ask its author for help.
BTW, I cannot see it in the coding guidelines, but it's better to add spaces around operators: "time(NULL)+1" -> "time(NULL) + 1".
LS
On 08/17/2015 10:37 AM, Lukas Slebodnik wrote:
OK, I understand. I will try to rewrite the test the right way.
Now we have:
1. insert data (timestamp A)
2. create request for reading (timestamp B)
3. read with condition time >= timestamp B

But it should be:
0. take a timestamp (timestamp 0)
1. insert data (timestamp A)
2. create request for reading (timestamp B)
3. read with condition time >= timestamp 0
Petr