Hi,
the attached patches implement https://fedorahosted.org/sssd/ticket/2553.
I'm sorry the patches took so long, but only during testing did I realize that deleting users not updated by the wildcard lookup is flawed. If there are more entries in the cache that match the wildcard than the limit on the wildcard search, we might be deleting legitimate entries that contain cached credentials.
What I did instead was never remove any entries during wildcard lookups, but only return from the cache_req the entries that were updated during the wildcard lookup. That way, stale entries are deleted by a direct lookup (maybe as a result of getpwuid when examining a file) or by the cleanup task.
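The idea can be sketched as follows. This is an illustration only, not SSSD's actual code -- the entry structure and the filter_updated helper are hypothetical stand-ins for the sysdb/cache_req logic:

```c
#include <stddef.h>
#include <time.h>

/* Hypothetical cache entry; the real sysdb entries look different. */
struct cache_entry {
    const char *name;
    time_t last_update;
};

/* Keep only entries refreshed at or after the wildcard request started,
 * i.e. the ones the back end actually saw on the server. Entries that
 * were not updated stay in the cache untouched -- they are simply not
 * returned. Compacts the array in place and returns the new count. */
size_t filter_updated(struct cache_entry *entries, size_t count,
                      time_t req_start)
{
    size_t kept = 0;

    for (size_t i = 0; i < count; i++) {
        if (entries[i].last_update >= req_start) {
            entries[kept++] = entries[i];
        }
    }

    return kept;
}
```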
I'm a bit nervous about the LDAP changes, please review ruthlessly :-) I would also like for the option defaults to be carefully checked during review.
One question about the by-name-and-domain lookups -- should we check that the domain we receive from the cache_req is the same as the one we requested?
btw I really like how the cache_req and the infopipe code are structured. It was quite easy to extend without any hacks.
On 06/18/2015 10:09 AM, Jakub Hrozek wrote:
I haven't tried those patches yet, just proofread them. I have just a few comments on the cache_req part:
I think you have to rebase it on top of Sumit's cert patches, since they modify cache_req.
Can you put updated_*_by_filter at the top of the file so you can keep the functions that control the logic of cache_req together? It may also be better to move them into the sysdb module?
If I read the code correctly, you always return only records that were updated. Is this sufficient? I think we need to return all records that match the provided filter.
if (state->result == NULL || state->result->count == 0
        || state->input->type == CACHE_REQ_USER_BY_FILTER
        || state->input->type == CACHE_REQ_GROUP_BY_FILTER) {
    ret = ENOENT;
} else {
I would like to avoid this type of code whenever possible. Can we replace it with a macro e.g. always_check_cache(state->input) or function?
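A sketch of the kind of helper being suggested here -- the enum values mirror the ones in the quoted condition, but the types and the function body are simplified assumptions, not SSSD's real code:

```c
#include <stdbool.h>

/* Simplified mirror of the cache_req input types from the quoted code. */
enum cache_req_type {
    CACHE_REQ_USER_BY_NAME,
    CACHE_REQ_USER_BY_FILTER,
    CACHE_REQ_GROUP_BY_FILTER,
};

struct cache_req_input {
    enum cache_req_type type;
};

/* Names the condition instead of open-coding it at each call site:
 * wildcard/filter requests always go back to the Data Provider. */
bool always_check_cache(struct cache_req_input *input)
{
    return input->type == CACHE_REQ_USER_BY_FILTER
            || input->type == CACHE_REQ_GROUP_BY_FILTER;
}
```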
Is it a valid operation to use a filter that needs to be parsed into a name and a domain (user*@domain)?
On Thu, Jun 18, 2015 at 07:24:46PM +0200, Pavel Březina wrote:
I haven't tried those patches yet, just proofread them. I have just a few comments on the cache_req part:
I think you have to rebase it on top of Sumit's cert patches, since they modify cache_req.
Done.
Can you put updated_*_by_filter at the top of the file so you can keep the functions that control the logic of cache_req together?
Done.
It may also be better to move them into the sysdb module?
I would do that if we anticipate more listing interfaces using this kind of filter.
If I read the code correctly, you always return only records that were updated. Is this sufficient? I think we need to return all records that match the provided filter.
Please check my logic here, but I think the way it's coded now is safer. On the back end side, we can't reasonably detect whether an entry has been removed if we only use the wildcard lookup, so the back end can't decide which entries have been removed from the server.
The reason is the limit. There might be more entries matching the filter on the server side than the limit allows, so what the back end sees is just a subset. And the entries outside the limit can be cached by direct lookups, so we simply shouldn't touch them.
What the code does now is return only the entries updated by the wildcard lookup. The entries that were not updated are not returned to the frontend -- they just sit in the cache until some application requests them with a direct lookup. The direct lookup can detect removal of the object on the server side and also remove the cached object.
Does that make sense?
if (state->result == NULL || state->result->count == 0
        || state->input->type == CACHE_REQ_USER_BY_FILTER
        || state->input->type == CACHE_REQ_GROUP_BY_FILTER) {
    ret = ENOENT;
} else {
I would like to avoid this type of code whenever possible. Can we replace it with a macro e.g. always_check_cache(state->input) or function?
OK, done. Please feel free to suggest a better function name.
Is it a valid operation to use a filter that needs to be parsed into a name and a domain (user*@domain)?
I don't think so, we have a specialized function for that.
On 07/09/2015 12:17 PM, Jakub Hrozek wrote:
It may also be better to move them into the sysdb module?
I would do that if we anticipate more listing interfaces using this kind of filter.
I disagree, since in my opinion it belongs in sysdb, but I will not insist.
If I read the code correctly, you always return only records that were updated. Is this sufficient? I think we need to return all records that match the provided filter.
Please check my logic here, but I think the way it's coded now is safer. On the back end side, we can't reasonably detect whether an entry has been removed if we only use the wildcard lookup, so the back end can't decide which entries have been removed from the server.
The reason is the limit. There might be more entries matching the filter on the server side than the limit allows, so what the back end sees is just a subset. And the entries outside the limit can be cached by direct lookups, so we simply shouldn't touch them.
What the code does now is return only the entries updated by the wildcard lookup. The entries that were not updated are not returned to the frontend -- they just sit in the cache until some application requests them with a direct lookup. The direct lookup can detect removal of the object on the server side and also remove the cached object.
Does that make sense?
Yes, thank you for the explanation. I think the logic is sound.
Is it a valid operation to use a filter that needs to be parsed into a name and a domain (user*@domain)?
I don't think so, we have a specialized function for that.
What function?
*Patch #01 tests: Move N_ELEMENTS definition to tests/common.h* ACK
*Patch #02 SYSDB: Add functions to look up multiple entries including name and custom filter*
+static char *enum_filter(TALLOC_CTX *mem_ctx,
const char *base_filter,
const char *name_filter,
const char *addtl_filter)
You are leaking memory here if any of the allocations fails. I know it will be freed in the caller, but it is not good practice.
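For illustration, the leak-free shape looks like this in plain C with a single cleanup path; build_filter is a made-up stand-in for enum_filter, not the real function:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Wrap a string in parentheses; tiny helper for the sketch. */
static char *wrap(const char *s)
{
    size_t n = strlen(s) + 3;           /* '(' + s + ')' + NUL */
    char *out = malloc(n);

    if (out != NULL) {
        snprintf(out, n, "(%s)", s);
    }
    return out;
}

/* Leak-free construction with a single cleanup path: intermediates are
 * freed on every exit, only the final string reaches the caller. With
 * talloc the same idea is a tmp_ctx -- allocate intermediates on it,
 * talloc_steal() the result out, talloc_free(tmp_ctx) unconditionally. */
char *build_filter(const char *base, const char *name, const char *addtl)
{
    char *name_part = NULL;
    char *addtl_part = NULL;
    char *result = NULL;
    size_t n;

    name_part = wrap(name);
    if (name_part == NULL) {
        goto done;
    }

    if (addtl != NULL) {
        addtl_part = wrap(addtl);
        if (addtl_part == NULL) {
            goto done;
        }
    }

    n = strlen(base) + strlen(name_part)
        + (addtl_part != NULL ? strlen(addtl_part) : 0) + 4;
    result = malloc(n);
    if (result != NULL) {
        snprintf(result, n, "(&%s%s%s)", base, name_part,
                 addtl_part != NULL ? addtl_part : "");
    }

done:
    free(name_part);    /* intermediates never outlive the function */
    free(addtl_part);
    return result;      /* NULL on failure; caller owns it on success */
}
```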
*Patch #03 DP: Add DP_WILDCARD and SSS_DP_WILDCARD_USER/SSS_DP_WILDCARD_GROUP*
} else if (info->type == SSS_DP_WILDCARD_USER ||
info->type == SSS_DP_WILDCARD_GROUP) {
if (info->extra) {
filter = talloc_asprintf(info, "%s=%s:%s", DP_WILDCARD,
info->opt_name, info->extra);
} else {
filter = talloc_asprintf(info, "%s=%s", DP_WILDCARD,
info->opt_name);
}
Do you think it is safe to use the same prefix for both user and group lookups?
*Patch #04 cache_req: Extend cache_req with wildcard lookups*
case CACHE_REQ_USER_BY_FILTER:
case CACHE_REQ_GROUP_BY_FILTER:
    /* Nothing to do, adding a wildcard request to ncache doesn't
     * make sense */
/* Return true if the request can be cached or false if the cache_req
 * code needs to contact the DP every time
 */
Can you finish the sentence with a period so it is consistent with the rest of the code?
static bool cache_req_cachable(struct cache_req_input *input)
How about cache_req_avoid_cache?
*Patch #05 UTIL: Add sss_filter_sanitize_ex*
- const char has_all_allow_asterisk_expected[] = "\5c\28user\29*name";
Can you add a comment with the unescaped characters so it is human-readable?
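For readers following along, the escape mapping is the standard LDAP filter one (RFC 4515): `\` becomes `\5c`, `(` becomes `\28`, `)` becomes `\29`, with `*` deliberately left alone so wildcards survive. A rough sketch of the idea -- this mimics the intent behind sss_filter_sanitize_ex, not its real signature:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Escape LDAP filter metacharacters but keep '*' unescaped, so that
 * wildcard patterns pass through. Returns a newly allocated string. */
char *sanitize_keep_asterisk(const char *input)
{
    /* worst case: every character expands to 3 bytes ("\XX") */
    char *out = malloc(strlen(input) * 3 + 1);
    char *p = out;

    if (out == NULL) {
        return NULL;
    }

    for (; *input != '\0'; input++) {
        switch (*input) {
        case '\\': p += sprintf(p, "\\5c"); break;
        case '(':  p += sprintf(p, "\\28"); break;
        case ')':  p += sprintf(p, "\\29"); break;
        /* '*' is intentionally NOT escaped here */
        default:   *p++ = *input; break;
        }
    }
    *p = '\0';

    return out;
}
```

With that mapping, the test string above reads: unescaped input `\(user)*name`, escaped output `\5c\28user\29*name` -- which is exactly the human-readable comment being asked for.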
*Patch #06 LDAP: Fetch users and groups using wildcards*
@@ -953,6 +991,14 @@ static void groups_get_done(struct tevent_req *subreq)
         * group we have nothing to do here. */
        break;
case BE_FILTER_WILDCARD:
/* We can't know if all users are up-to-date, especially in a large
* environment. Do not delete any records, let the responder fetch
* the entries they are requested in
*/
break;
Copy and paste error... you meant groups here.
*Patch #07 LDAP: Add sdap_get_and_parse_generic_send* ACK
*Patch #08 LDAP: Use sdap_get_and_parse_generic_/_recv* ACK
*Patch #09 LDAP: Add sdap_lookup_type enum* ACK
*Patch #10 LDAP: Add the wildcard_limit option*
+ldap_pwdlockout_dn = str, None, false
This probably belongs in a separate patch...
*Patch #11 IFP: Add wildcard requests*
size_t ifp_list_ctx_cap(struct ifp_list_ctx *list_ctx, size_t entries)
ifp_list_ctx_remaining_capacity? I had no idea what the function does before I read its code.
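To illustrate the suggested name, a helper of roughly this shape may be what is meant -- the struct here is a stripped-down, hypothetical stand-in for the real ifp_list_ctx:

```c
#include <stddef.h>

/* Stripped-down stand-in for ifp_list_ctx: a result list with a hard
 * limit and a count of entries already stored. */
struct ifp_list_ctx_sketch {
    size_t limit;   /* maximum number of entries the list may hold */
    size_t count;   /* entries stored so far */
};

/* How many of 'entries' new results still fit into the list. */
size_t ifp_list_ctx_remaining_capacity(struct ifp_list_ctx_sketch *ctx,
                                       size_t entries)
{
    size_t free_slots = ctx->limit - ctx->count;

    return entries < free_slots ? entries : free_slots;
}
```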
On Fri, Jul 10, 2015 at 01:30:07PM +0200, Pavel Březina wrote:
Is it a valid operation to use a filter that needs to be parsed into a name and a domain (user*@domain)?
I don't think so, we have a specialized function for that.
What function?
ListByDomainAndName
*Patch #01 tests: Move N_ELEMENTS definition to tests/common.h* ACK
*Patch #02 SYSDB: Add functions to look up multiple entries including name and custom filter*
+static char *enum_filter(TALLOC_CTX *mem_ctx,
const char *base_filter,
const char *name_filter,
const char *addtl_filter)
You are leaking memory here if any of the allocations fails. I know it will be freed in the caller, but it is not good practice.
As discussed on IRC, I added a context.
*Patch #03 DP: Add DP_WILDCARD and SSS_DP_WILDCARD_USER/SSS_DP_WILDCARD_GROUP*
} else if (info->type == SSS_DP_WILDCARD_USER ||
info->type == SSS_DP_WILDCARD_GROUP) {
if (info->extra) {
filter = talloc_asprintf(info, "%s=%s:%s", DP_WILDCARD,
info->opt_name, info->extra);
} else {
filter = talloc_asprintf(info, "%s=%s", DP_WILDCARD,
info->opt_name);
}
Do you think it is safe to use the same prefix for both user and group lookups?
Yes, because the target is defined by be_type.
*Patch #04 cache_req: Extend cache_req with wildcard lookups*
case CACHE_REQ_USER_BY_FILTER:
case CACHE_REQ_GROUP_BY_FILTER:
    /* Nothing to do, adding a wildcard request to ncache doesn't
     * make sense */
/* Return true if the request can be cached or false if the cache_req
 * code needs to contact the DP every time
 */
Can you finish the sentence with a period so it is consistent with the rest of the code?
Done.
static bool cache_req_cachable(struct cache_req_input *input)
How about cache_req_avoid_cache?
OK, renamed.
*Patch #05 UTIL: Add sss_filter_sanitize_ex*
- const char has_all_allow_asterisk_expected[] = "\5c\28user\29*name";
Can you add a comment with the unescaped characters so it is human-readable?
Done.
*Patch #06 LDAP: Fetch users and groups using wildcards*
@@ -953,6 +991,14 @@ static void groups_get_done(struct tevent_req *subreq)
         * group we have nothing to do here. */
        break;
case BE_FILTER_WILDCARD:
/* We can't know if all users are up-to-date, especially in a large
* environment. Do not delete any records, let the responder fetch
* the entries they are requested in
*/
break;
Copy and paste error... you meant groups here.
Yes, thanks. Fixed.
*Patch #07 LDAP: Add sdap_get_and_parse_generic_send* ACK
*Patch #08 LDAP: Use sdap_get_and_parse_generic_/_recv* ACK
*Patch #09 LDAP: Add sdap_lookup_type enum* ACK
*Patch #10 LDAP: Add the wildcard_limit option*
+ldap_pwdlockout_dn = str, None, false
This probably belongs in a separate patch...
Yeah, this was wrong.
*Patch #11 IFP: Add wildcard requests*
size_t ifp_list_ctx_cap(struct ifp_list_ctx *list_ctx, size_t entries)
ifp_list_ctx_remaining_capacity? I had no idea what the function does before I read its code.
OK, renamed.
Thanks for the review, new patches are attached.
On 07/13/2015 06:25 PM, Jakub Hrozek wrote:
Is it a valid operation to use a filter that needs to be parsed into a name and a domain (user*@domain)?
I don't think so, we have a specialized function for that.
ListByDomainAndName
Ah, ok.
*Patch #02 SYSDB: Add functions to look up multiple entries including name and custom filter*
+static char *enum_filter(TALLOC_CTX *mem_ctx,
const char *base_filter,
const char *name_filter,
const char *addtl_filter)
You are leaking memory here if any of the allocations fails. I know it will be freed in the caller, but it is not good practice.
As discussed on IRC, I added a context.
Nack. You added a tmp_ctx but you still use mem_ctx :-)
*Patch #03 DP: Add DP_WILDCARD and SSS_DP_WILDCARD_USER/SSS_DP_WILDCARD_GROUP*
} else if (info->type == SSS_DP_WILDCARD_USER ||
info->type == SSS_DP_WILDCARD_GROUP) {
if (info->extra) {
filter = talloc_asprintf(info, "%s=%s:%s", DP_WILDCARD,
info->opt_name, info->extra);
} else {
filter = talloc_asprintf(info, "%s=%s", DP_WILDCARD,
info->opt_name);
}
Do you think it is safe to use the same prefix for both user and group lookups?
Yes, because the target is defined by be_type.
Ok. Ack.
sssd-devel mailing list sssd-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/sssd-devel
On Tue, Jul 14, 2015 at 02:39:13PM +0200, Pavel Březina wrote:
*Patch #02 SYSDB: Add functions to look up multiple entries including name and custom filter*
As discussed on IRC, I added a context.
Nack. You added a tmp_ctx but you still use mem_ctx :-)
Let's try that again.
This is the only change in the set.
On 07/14/2015 03:18 PM, Jakub Hrozek wrote:
Let's try that again.
This is the only change in the set.
Ack.
On Wed, Jul 15, 2015 at 11:02:51AM +0200, Pavel Březina wrote:
Ack.
Thanks a lot for the review. I will file a ticket for the IPA support, but that should be a separate patch atop the responder support.
* bdf32fbb3c947dd1b2c54d1c21d8028a1ddc80e6
* b9e74a747b8f1012bba3575f3e4289ef4877d64a
* 1f2fc55ecf7b5e170b2c0752304d1a2ecebc5259
* 5b2ca5cc0e22dd184e3eba84af2c00d7065c59c7
* f4e643ed7df771f83e903a6309f7ff0917819d25
* 2922461ea5357f4035a5ca7bdd84013db8767376
* fa7921c8259539b750f7e9e7bcd82aa72020826a
* fd04b25eaa5cd105da4122854d8bc1e702760e60
* cdc44abdf944b0de541fe93ecd77df4d09c856b1
* 696c17580b49d6817f1dd33915e0e209dcfe4225
* 429f8454a40b939604e9a96d780661a94a38da2e
On (14/07/15 15:18), Jakub Hrozek wrote:
From 63574fbe77359ba1c84aac38dbbdf04053a69d92 Mon Sep 17 00:00:00 2001
From: Jakub Hrozek <jhrozek@redhat.com>
Date: Wed, 17 Jun 2015 16:13:51 +0200
Subject: [PATCH 10/11] LDAP: Add the wildcard_limit option
Related: https://fedorahosted.org/sssd/ticket/2553
Adds a new wildcard_limit option, set by default to 1000 (one page). The option limits the number of entries a wildcard search can return by default.
 src/config/SSSDConfig/__init__.py.in     |  1 +
 src/config/etc/sssd.api.d/sssd-ad.conf   |  1 +
 src/config/etc/sssd.api.d/sssd-ipa.conf  |  1 +
 src/config/etc/sssd.api.d/sssd-ldap.conf |  1 +
 src/man/sssd-ldap.5.xml                  | 17 +++++++++++++++++
 src/providers/ad/ad_opts.h               |  1 +
 src/providers/ipa/ipa_opts.h             |  1 +
 src/providers/ldap/ldap_opts.h           |  1 +
 src/providers/ldap/sdap.h                |  1 +
 src/providers/ldap/sdap_async_groups.c   |  8 +++++++-
 src/providers/ldap/sdap_async_users.c    |  8 +++++++-
 11 files changed, 39 insertions(+), 2 deletions(-)
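For reference, a hypothetical sssd.conf fragment showing where such an option would be set; the option name comes from the patch above, but the surrounding values are made up for illustration:

```ini
[domain/example.com]
id_provider = ldap
ldap_uri = ldap://ldap.example.com
; cap wildcard lookups at one page of results (the described default)
wildcard_limit = 1000
```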
I wanted to test my initgroups memory cache patches, but I had to spend time reviewing failed downstream tests.
git bisect claims that this patch caused a regression. Please take a look at ticket https://fedorahosted.org/sssd/ticket/2723 for more details.
LS
On Mon, Jul 20, 2015 at 06:15:47PM +0200, Lukas Slebodnik wrote:
git bisect claims that this patch caused a regression. Please take a look at ticket https://fedorahosted.org/sssd/ticket/2723 for more details.
OK, I assigned the ticket to myself since I broke it.
Thanks for finding it.