On Thu, Jun 18, 2015 at 07:24:46PM +0200, Pavel Březina wrote:
On 06/18/2015 10:09 AM, Jakub Hrozek wrote:
>Hi,
>
>the attached patches implement
>https://fedorahosted.org/sssd/ticket/2553.
>
>I'm sorry the patches took so long, but only during testing did I realize
>that deleting users not updated by the wildcard lookup was flawed. If there
>were more entries in the cache matching the wildcard than the limit on the
>wildcard search, we might have deleted legitimate entries containing cached
>credentials.
>
>What I did instead was to never remove any entries during wildcard lookups
>and only return, from the cache_req, the entries that were updated during
>the wildcard lookup. That way, stale entries would be deleted by a direct
>lookup (maybe as a result of getpwuid() when examining a file) or can be
>deleted by the cleanup task.
>
>I'm a bit nervous about the LDAP changes, so please review ruthlessly :-)
>I would also like the option defaults to be carefully checked during
>review.
>
>One question about the by-name-and-domain lookups -- should we check that
>the domain we receive from the cache_req is the same as the one we
>requested?
>
>btw I really like how the cache_req and the infopipe code are structured. It
>was quite easy to extend without any hacks.
I haven't tried those patches yet, just proofread them. I have just a few
comments on the cache_req part:
I think you have to rebase it on top of Sumit's cert patches, since they
modify cache_req.
Done.
Can you put updated_*_by_filter at the top of the file so you can keep
the functions that control the logic of cache_req together?
Done.
It may also be better
to move them into the sysdb module?
I would do that if we anticipated more listing interfaces using this
kind of filter.
If I read the code correctly, you always return only records that were
updated. Is this sufficient? I think we need to return all records that
match the provided filter.
Please check my logic here, but I think the way it's coded now is
safer. On the back end side, we can't reliably detect whether an entry has
been removed if we only use the wildcard lookup, so the back end can't
reasonably decide which entries have been removed from the server.
The reason is the limit: there might be more matching entries on the
server side than the limit allows, so what the back end sees is just a
subset. And the entries outside the limit can be cached by direct
lookups, so we simply shouldn't touch them.
What the code does now is return only the entries updated by the wildcard
lookup. The entries that were not updated are not returned to the
frontend -- they just sit in the cache until some application requests
them with a direct lookup. The direct lookup can detect removal of the
object on the server side and also remove the cached object.
Does that make sense?
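To make the "return only updated entries" idea concrete, here is a minimal
sketch of the filtering step. The struct and field names are hypothetical
(the real code works with sysdb messages and their timestamp attributes),
but the logic matches what is described above: entries refreshed at or
after the start of the wildcard request are returned, and stale entries
are simply left in the cache for a later direct lookup or the cleanup task.

```c
#include <stddef.h>
#include <time.h>

/* Hypothetical cache entry; the real code inspects sysdb message
 * attributes rather than a plain struct. */
struct cache_entry {
    const char *name;
    time_t last_update;
};

/* Compact the array in place, keeping only entries refreshed at or
 * after req_start, i.e. those the wildcard lookup actually touched.
 * Returns the new count.  Entries outside that set are not deleted
 * anywhere; they stay cached until a direct lookup revalidates them. */
size_t filter_updated(struct cache_entry *entries, size_t count,
                      time_t req_start)
{
    size_t out = 0;

    for (size_t i = 0; i < count; i++) {
        if (entries[i].last_update >= req_start) {
            entries[out++] = entries[i];
        }
    }

    return out;
}
```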
> if (state->result == NULL || state->result->count == 0 ||
> state->input->type == CACHE_REQ_USER_BY_FILTER ||
> state->input->type == CACHE_REQ_GROUP_BY_FILTER) {
> ret = ENOENT;
> } else {
I would like to avoid this type of code whenever possible. Can we replace
it with a macro, e.g. always_check_cache(state->input), or a function?
OK, done. Please feel free to suggest a better function name.
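A sketch of what such a helper could look like. The enum values are taken
from the quoted snippet; the enum itself is abbreviated here and the
function name is only a suggestion, not the name used in the patch:

```c
#include <stdbool.h>

/* Abbreviated; the real cache_req type enum has more values. */
enum cache_req_type {
    CACHE_REQ_USER_BY_NAME,
    CACHE_REQ_USER_BY_FILTER,
    CACHE_REQ_GROUP_BY_FILTER
};

struct cache_req_input {
    enum cache_req_type type;
};

/* One predicate instead of repeating the type checks inline: filter
 * (wildcard) requests must always go to the back end, so a cached
 * result alone is never treated as authoritative for them. */
bool cache_req_bypass_cache(struct cache_req_input *input)
{
    return input->type == CACHE_REQ_USER_BY_FILTER ||
           input->type == CACHE_REQ_GROUP_BY_FILTER;
}
```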
Is it a valid operation to use a filter that needs to be parsed into a name
and a domain (user*@domain)?
I don't think so; we have a specialized function for that.
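For illustration only, a trivial guard of the kind being discussed: reject
wildcard filters written in fully-qualified form. This is not the
specialized function mentioned above, just a hypothetical sketch of the
check:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical guard: a wildcard filter such as "user*@domain" would
 * need name/domain parsing, which the filter code path does not do,
 * so such input should be rejected up front. */
bool filter_is_fqname(const char *filter)
{
    return strchr(filter, '@') != NULL;
}
```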