On Tue, Jun 04, 2013 at 02:28:32PM +0200, Sumit Bose wrote:
> On Tue, Jun 04, 2013 at 02:18:35PM +0200, Jakub Hrozek wrote:
> > On Tue, Jun 04, 2013 at 12:05:12PM +0200, Sumit Bose wrote:
> > > On Fri, May 31, 2013 at 04:15:18PM +0200, Jakub Hrozek wrote:
> > > > On Fri, May 31, 2013 at 01:37:11PM +0200, Sumit Bose wrote:
> > > > > Hi,
> > > > >
> > > > > recently the patch "Allow flat name in the FQname format" was
> > > > > committed to master. The flat domain name is determined at runtime,
> > > > > but currently only when the responders receive a request with an
> > > > > unknown domain name.
> > > > >
> > > > > If now the flat domain name is used in the FQname and the nss
> > > > > responder receives e.g. a 'getent passwd DOM\username' request with
> > > > > the flat domain name after startup, everything is fine, because after
> > > > > startup the domain part of the given fully qualified user name is not
> > > > > known and a request will be sent to the backends to look it up. Once
> > > > > the request is done the flat domain name is known and can be used in
> > > > > the returned FQname.
> > > > >
> > > > > If, on the other hand, the nss responder receives a 'getent passwd
> > > > > username@domain.name' request with the domain name from sssd.conf,
> > > > > the domain part of the user name is known and there is no reason to
> > > > > send a get_domains request to the backend. Hence the flat domain name
> > > > > is not known when the FQname for the response is constructed and will
> > > > > be replaced by the full name.
> > > > >
> > > > > To avoid this, the following patch will always run a get_domains
> > > > > request at startup to get the needed domain data.
> > > > >
> > > > > Fixes https://fedorahosted.org/sssd/ticket/1951.
> > > > >
> > > > > bye,
> > > > > Sumit
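
[Editorial sketch, not part of the original thread: the flat name only shows
up in responder output when sssd.conf asks for it via full_name_format. A
minimal, hypothetical configuration, assuming the %3$s expansion for the flat
(NetBIOS) domain name that the quoted patch series enables; the domain name is
illustrative:]

```ini
[sssd]
domains = domain.name
# %1$s = user name, %2$s = domain name, %3$s = flat (NetBIOS) domain name
full_name_format = %3$s\%1$s

[domain/domain.name]
id_provider = ad
```

With this format, 'getent passwd username@domain.name' should print the user
as 'DOM\username' once the flat name has been discovered.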
> > > >
> > > > Works fine; after startup the subdomain is created and the flat name
> > > > discovered.
> > > >
> > > > But I wonder if it would be worth adding some kind of check in
> > > > be_get_subdomains() to avoid issuing another request if the previous
> > > > one came within some interval, and to just return success if it did.
> > > > Currently this patch triggers an LDAP search per responder.
> > > >
> > > > The downside I see is that there is already a similar check in the
> > > > responders themselves, so this would be a bit of duplication.
> > >
> > > Thank you for the review. I've added two patches which add a generic
> > > request queue to the data provider code and let the get_subdomains
> > > request use it. With this you should only see two requests going to the
> > > server, one triggered by the starting responders and the other is the
> > > online callback. I think it is not worth the time to try to optimize
> > > this out as well, because there are different requirements for the
> > > responders and the online callback.
> > >
> > > bye,
> > > Sumit
> >
> > Thank you, this is much better, with the additional check in the AD
> > subdomain code to shortcut back, I no longer see a search per responder.
> >
> > [PATCH 1/3] Lookup domains at startup
> > Ack
> >
> > [PATCH 2/3] Add be request queue
> > Just one small change below, I can change the part before pushing if you
> > agree:
> >
> > > From d3482d3a25f2e65749a3d988938f7e7183e45880 Mon Sep 17 00:00:00 2001
> > > From: Sumit Bose <sbose@redhat.com>
> > > Date: Fri, 31 May 2013 22:00:17 +0200
> > > Subject: [PATCH 2/3] Add be request queue
> > >
> > > For some backend targets it might not be desirable to run requests in
> > > parallel but to serialize them. To avoid that each provider has to
> > > implement a queue for this target, this patch implements a generic queue
> > > which collects incoming requests before they are sent to the target.
> > > ---
> > >  src/providers/data_provider_be.c | 119 ++++++++++++++++++++++++++++++++++++++
> > >  src/providers/dp_backend.h       |  11 ++++
> > >  2 files changed, 130 insertions(+), 0 deletions(-)
> > >
> > > diff --git a/src/providers/data_provider_be.c b/src/providers/data_provider_be.c
> > > index cd67156..49a7a89 100644
> > > --- a/src/providers/data_provider_be.c
> > > +++ b/src/providers/data_provider_be.c
> > > @@ -301,6 +301,125 @@ static errno_t be_file_request(TALLOC_CTX *mem_ctx,
> > >      return EOK;
> > >  }
> > >
> > > +static errno_t be_queue_request(TALLOC_CTX *queue_mem_ctx,
> > > +                                struct bet_queue_item **req_queue,
> > > +                                TALLOC_CTX *req_mem_ctx,
> > > +                                struct be_req *be_req,
> > > +                                be_req_fn_t fn)
> > > +{
> > > +    struct bet_queue_item *item;
> > > +    int ret;
> > > +
> > > +    if (*req_queue == NULL) {
> > > +        DEBUG(SSSDBG_TRACE_ALL, ("Queue is empty, " \
> > > +                                 "running request immediately.\n"));
> > > +        ret = be_file_request(req_mem_ctx, be_req, fn);
> > > +        if (ret != EOK) {
> > > +            DEBUG(SSSDBG_OP_FAILURE, ("be_file_request failed.\n"));
> > > +            return ret;
> > > +        }
> > > +    }
> > > +
> > > +    item = talloc_zero(queue_mem_ctx, struct bet_queue_item);
> > > +    if (item == NULL) {
> > > +        DEBUG(SSSDBG_OP_FAILURE, ("talloc_zero failed, cannot add item to " \
> > > +                                  "request queue.\n"));
> > > +    } else {
> > > +        DEBUG(SSSDBG_TRACE_ALL, ("Adding request to queue.\n"));
> > > +        item->mem_ctx = req_mem_ctx;
> > > +        item->be_req = be_req;
> > > +        item->fn = fn;
> > > +
> > > +        DLIST_ADD(*req_queue, item);
> >
> > DLIST_ADD adds to the front of the list and when talking about "queues",
> > typically the user adds to the tail of the list to keep the FIFO
> > behaviour. I think DLIST_ADD_END() would read better here.
>
> I agree, I picked DLIST_ADD because it might be a bit faster, but the
> queues should be empty most of the time and keeping the order might be a
> good idea as well.
>
> Thank you for review.
>
> bye,
> Sumit
Thanks, attached are patches I will be pushing.