I've been assigned ticket https://fedorahosted.org/sssd/ticket/1713: [RFE] Add a task to the SSSD to periodically refresh cached entries
I have recently created a ticket (#1891) to unify the API for managing periodic tasks. We already have quite a few periodic tasks (enumeration, sudo, dyndns, #1713), where each of them implements a custom API.
None of these are generic enough to be used for #1713, so I will have to create a new one. I'm not suggesting refactoring the old code now; that will be done when #1891 is scheduled.
But I think it is a good idea to create the generic one now instead of a new feature-specific one. It will be basically the same amount of work.
I wrote a design document: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
On Thu, Apr 25, 2013 at 11:18:28AM +0200, Pavel Březina wrote:
I've been assigned ticket https://fedorahosted.org/sssd/ticket/1713: [RFE] Add a task to the SSSD to periodically refresh cached entries
I have recently created a ticket (#1891) to unify the API for managing periodic tasks. We already have quite a few periodic tasks (enumeration, sudo, dyndns, #1713), where each of them implements a custom API.
None of these are generic enough to be used for #1713, so I will have to create a new one. I'm not suggesting refactoring the old code now; that will be done when #1891 is scheduled.
But I think it is a good idea to create the generic one now instead of a new feature-specific one. It will be basically the same amount of work.
I wrote a design document: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
I have a couple of questions:
- The create function has no memory context parameter. I was thinking that in some cases you might want to allocate the structure on something other than be_ctx and, more importantly, cancel the task when the context in question goes away.
- The design page says that "cancelling request if current tevent request bound to this task takes more than timeout seconds" is a job of be_periodic_task_create(). Does it also reschedule the job in that case? Is the rescheduling configurable?
- What about online/offline callbacks? Some tasks should be re-enabled when the BE goes online or cancelled when the BE is offline.
One grammar nitpick - " The periodic tasks will be held by back end." should probably say "owned by back end"
On 04/29/2013 10:00 AM, Jakub Hrozek wrote:
On Thu, Apr 25, 2013 at 11:18:28AM +0200, Pavel Březina wrote:
I've been assigned ticket https://fedorahosted.org/sssd/ticket/1713: [RFE] Add a task to the SSSD to periodically refresh cached entries
I have recently created a ticket (#1891) to unify the API for managing periodic tasks. We already have quite a few periodic tasks (enumeration, sudo, dyndns, #1713), where each of them implements a custom API.
None of these are generic enough to be used for #1713, so I will have to create a new one. I'm not suggesting refactoring the old code now; that will be done when #1891 is scheduled.
But I think it is a good idea to create the generic one now instead of a new feature-specific one. It will be basically the same amount of work.
I wrote a design document: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
Hi, I didn't expect any major issues with the design, so I already wrote the code on Friday. You can take a look at the nss-periodic branch in my repo.
I have a couple of questions:
- The create function has no memory context parameter. I was thinking that in some cases you might want to allocate the structure on something other than be_ctx and, more importantly, cancel the task when the context in question goes away
Do you have any particular use case? I didn't think of any, but I guess it won't hurt to add it. If anything, it will make it easier to ensure that it is freed when private data (like sdap_id_ctx) is freed.
- The design page says that "cancelling request if current tevent request bound to this task takes more than timeout seconds" is a job of the be_periodic_task_create(). Does it also reschedule the job in that case?
Yes.
Is the rescheduling configurable?
How would you like to configure it?
- What about online/offline callbacks? Some tasks should be re-enabled when the BE goes online or cancelled when the BE is offline
If this is needed then you should create the task in online callback and destroy it in offline callback.
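The "create in the online callback, destroy in the offline callback" pattern can be sketched in plain C; the struct and callback names below are hypothetical illustrations, not actual SSSD API:

```c
#include <stdlib.h>

/* Illustrative sketch only: a periodic task that exists while the back
 * end is online. The struct and callback names are hypothetical. */
struct periodic_task {
    int period;    /* seconds between executions */
};

static struct periodic_task *refresh_task = NULL;

/* Invoked when the back end goes online: create the task. */
static void on_backend_online(void)
{
    if (refresh_task == NULL) {
        refresh_task = calloc(1, sizeof(*refresh_task));
        if (refresh_task != NULL) {
            refresh_task->period = 60;
        }
    }
}

/* Invoked when the back end goes offline: destroy the task. */
static void on_backend_offline(void)
{
    free(refresh_task);
    refresh_task = NULL;
}
```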
One grammar nitpick - " The periodic tasks will be held by back end." should probably say "owned by back end"
On Mon, Apr 29, 2013 at 11:07:09AM +0200, Pavel Březina wrote:
On 04/29/2013 10:00 AM, Jakub Hrozek wrote:
On Thu, Apr 25, 2013 at 11:18:28AM +0200, Pavel Březina wrote:
I've been assigned ticket https://fedorahosted.org/sssd/ticket/1713: [RFE] Add a task to the SSSD to periodically refresh cached entries
I have recently created a ticket (#1891) to unify the API for managing periodic tasks. We already have quite a few periodic tasks (enumeration, sudo, dyndns, #1713), where each of them implements a custom API.
None of these are generic enough to be used for #1713, so I will have to create a new one. I'm not suggesting refactoring the old code now; that will be done when #1891 is scheduled.
But I think it is a good idea to create the generic one now instead of a new feature-specific one. It will be basically the same amount of work.
I wrote a design document: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
Hi, I didn't expect any major issues with the design, so I already wrote the code on Friday. You can take a look at the nss-periodic branch in my repo.
Yeah, sorry, my fault for not responding sooner.
I have a couple of questions:
- The create function has no memory context parameter. I was thinking that in some cases you might want to allocate the structure on something other than be_ctx and, more importantly, cancel the task when the context in question goes away
Do you have any particular use case? I didn't think of any, but I guess it won't hurt to add it. If anything, it will make it easier to ensure that it is freed when private data (like sdap_id_ctx) is freed.
No, that was just future extensibility.
- The design page says that "cancelling request if current tevent request bound to this task takes more than timeout seconds" is a job of the be_periodic_task_create(). Does it also reschedule the job in that case?
Yes.
Is the rescheduling configurable?
How would you like to configure it?
I was thinking that in some cases you might not want to reschedule at all. If the task fails, it's failed. But I guess this could be handled in the task itself by freeing the data structure?
- What about online/offline callbacks? Some tasks should be re-enabled when the BE goes online or cancelled when the BE is offline
If this is needed then you should create the task in online callback and destroy it in offline callback.
OK, can you add that bit to the design page?
One grammar nitpick - " The periodic tasks will be held by back end." should probably say "owned by back end"
sssd-devel mailing list sssd-devel@lists.fedorahosted.org https://lists.fedorahosted.org/mailman/listinfo/sssd-devel
On 04/29/2013 11:35 AM, Jakub Hrozek wrote:
On Mon, Apr 29, 2013 at 11:07:09AM +0200, Pavel Březina wrote:
On 04/29/2013 10:00 AM, Jakub Hrozek wrote:
On Thu, Apr 25, 2013 at 11:18:28AM +0200, Pavel Březina wrote:
I've been assigned ticket https://fedorahosted.org/sssd/ticket/1713: [RFE] Add a task to the SSSD to periodically refresh cached entries
I have recently created a ticket (#1891) to unify the API for managing periodic tasks. We already have quite a few periodic tasks (enumeration, sudo, dyndns, #1713), where each of them implements a custom API.
None of these are generic enough to be used for #1713, so I will have to create a new one. I'm not suggesting refactoring the old code now; that will be done when #1891 is scheduled.
But I think it is a good idea to create the generic one now instead of a new feature-specific one. It will be basically the same amount of work.
I wrote a design document: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
Hi, I didn't expect any major issues with the design, so I already wrote the code on Friday. You can take a look at the nss-periodic branch in my repo.
Yeah, sorry, my fault for not responding sooner.
I have a couple of questions:
- The create function has no memory context parameter. I was thinking that in some cases you might want to allocate the structure on something other than be_ctx and, more importantly, cancel the task when the context in question goes away
Do you have any particular use case? I didn't think of any, but I guess it won't hurt to add it. If anything, it will make it easier to ensure that it is freed when private data (like sdap_id_ctx) is freed.
No, that was just future extensibility.
- The design page says that "cancelling request if current tevent request bound to this task takes more than timeout seconds" is a job of the be_periodic_task_create(). Does it also reschedule the job in that case?
Yes.
Is the rescheduling configurable?
How would you like to configure it?
I was thinking that in some cases you might not want to reschedule at all. If the task fails, it's failed. But I guess this could be handled in the task itself by freeing the data structure?
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
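That scheduling rule can be sketched as a small decision function; EOK, ERR_STOP_PERIODIC_TASK, and the helper below are illustrative stand-ins, not the actual SSSD definitions:

```c
#include <stdbool.h>
#include <time.h>

/* Stand-ins for the real error codes (values are arbitrary). */
#define EOK                    0
#define ERR_STOP_PERIODIC_TASK 1

/* Decide what to do after one task execution returned 'ret'.
 * Returns false when the task should be destroyed; otherwise sets
 * *next_run to the time of the next execution. */
static bool ptask_schedule_next(int ret, time_t last_execution,
                                time_t now, time_t period,
                                time_t *next_run)
{
    if (ret == ERR_STOP_PERIODIC_TASK) {
        return false;                         /* destroy the task */
    }

    if (ret == EOK) {
        *next_run = last_execution + period;  /* keep the regular cadence */
    } else {
        *next_run = now + period;             /* transient error: back off */
    }
    return true;
}
```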
- What about online/offline callbacks? Some tasks should be re-enabled when the BE goes online or cancelled when the BE is offline
If this is needed then you should create the task in online callback and destroy it in offline callback.
OK, can you add that bit to the design page?
Will do.
One grammar nitpick - " The periodic tasks will be held by back end." should probably say "owned by back end"
On Mon, 2013-04-29 at 11:54 +0200, Pavel Březina wrote:
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
Wouldn't it be simpler to simply stop on any error, and reschedule only when EOK is returned? This way the task can bubble up the error all the way without having to mask it to ERR_STOP_PERIODIC_TASK at the very end.
This way the periodic task handler can also report exactly what error caused a periodic task to fail and report it at a high level.
Freeing the task memory context instead would free the task without reporting any error.
Simo.
On 04/29/2013 07:52 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 11:54 +0200, Pavel Březina wrote:
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
Wouldn't it be simpler to simply stop on any error, and reschedule only when EOK is returned? This way the task can bubble up the error all the way without having to mask it to ERR_STOP_PERIODIC_TASK at the very end.
No, you want to terminate the periodic task only on a fatal error from which you can't recover. E.g. you want to reschedule the task when you can't connect to LDAP, but terminate it if the LDAP server doesn't support some extension.
If you return ERR_STOP_PERIODIC_TASK, the task will not be rescheduled. It will be rescheduled on any other error code and logged as appropriate.
This way the periodic task handler can also report exactly what error caused a periodic task to fail and report it at a high level. Freeing the task memory context instead would free the task without reporting any error.
Simo.
On Mon, 2013-04-29 at 20:05 +0200, Pavel Březina wrote:
On 04/29/2013 07:52 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 11:54 +0200, Pavel Březina wrote:
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
Wouldn't it be simpler to simply stop on any error, and reschedule only when EOK is returned? This way the task can bubble up the error all the way without having to mask it to ERR_STOP_PERIODIC_TASK at the very end.
No, you want to terminate the periodic task only on a fatal error from which you can't recover. E.g. you want to reschedule the task when you can't connect to LDAP, but terminate it if the LDAP server doesn't support some extension.
If you return ERR_STOP_PERIODIC_TASK, the task will not be rescheduled. It will be rescheduled on any other error code and logged as appropriate.
I guess the point here is what you want to do by default. In either case you need to check and remap error codes before returning.
If you define ERR_STOP_PERIODIC_TASK, then you continue by default and stop only on whitelisted errors.
If there is a new 'fatal' error that you do not catch in whatever function normally converts to ERR_STOP_PERIODIC_TASK, then the task will be rescheduled.
The other point is that with ERR_STOP_PERIODIC_TASK you also have to log where you map the error, because it won't bubble up.
I can see the advantages as well though, so I do not have a strong preference at this point, carry on :)
Simo.
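The continue-by-default policy being discussed amounts to a whitelist mapping at the point where the task's request completes; every name and value below is a hypothetical stand-in, not SSSD code:

```c
/* Hypothetical error codes (values are arbitrary). */
#define EOK                        0
#define ERR_STOP_PERIODIC_TASK     1
#define ERR_OFFLINE                2   /* transient: cannot reach server */
#define ERR_UNSUPPORTED_EXTENSION  3   /* fatal: server lacks a feature */

/* Continue by default; stop only on whitelisted fatal errors. Logging
 * at the mapping point matters because the original code won't bubble up. */
static int map_task_error(int ret)
{
    switch (ret) {
    case ERR_UNSUPPORTED_EXTENSION:
        /* log the original error here, then stop the task */
        return ERR_STOP_PERIODIC_TASK;
    default:
        return ret;    /* anything else: task gets rescheduled */
    }
}
```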
On 04/29/2013 08:12 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 20:05 +0200, Pavel Březina wrote:
On 04/29/2013 07:52 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 11:54 +0200, Pavel Březina wrote:
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
Wouldn't it be simpler to simply stop on any error, and reschedule only when EOK is returned? This way the task can bubble up the error all the way without having to mask it to ERR_STOP_PERIODIC_TASK at the very end.
No, you want to terminate the periodic task only on a fatal error from which you can't recover. E.g. you want to reschedule the task when you can't connect to LDAP, but terminate it if the LDAP server doesn't support some extension.
If you return ERR_STOP_PERIODIC_TASK, the task will not be rescheduled. It will be rescheduled on any other error code and logged as appropriate.
I guess the point here is what you want to do by default.
AFAIK none of the current periodic tasks ends willingly, only if a tevent timer cannot be created. So I think it is OK to anticipate that the number of error codes on which you want to continue is far greater than the number on which you want to stop.
But then again, we can change it any time if it doesn't fit our future needs.
In either case you need to check and remap error codes before returning.
If you define ERR_STOP_PERIODIC_TASK, then you continue by default and stop only on whitelisted errors.
If there is a new 'fatal' error that you do not catch in whatever function normally converts to ERR_STOP_PERIODIC_TASK, then the task will be rescheduled.
The other point is that with ERR_STOP_PERIODIC_TASK you also have to log where you map the error, because it won't bubble up.
I can see the advantages as well though, so I do not have a strong preference at this point, carry on :)
Simo.
I have just amended the design page: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
On Mon, Apr 29, 2013 at 08:42:30PM +0200, Pavel Březina wrote:
On 04/29/2013 08:12 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 20:05 +0200, Pavel Březina wrote:
On 04/29/2013 07:52 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 11:54 +0200, Pavel Březina wrote:
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
Wouldn't it be simpler to simply stop on any error, and reschedule only when EOK is returned? This way the task can bubble up the error all the way without having to mask it to ERR_STOP_PERIODIC_TASK at the very end.
No, you want to terminate the periodic task only on a fatal error from which you can't recover. E.g. you want to reschedule the task when you can't connect to LDAP, but terminate it if the LDAP server doesn't support some extension.
If you return ERR_STOP_PERIODIC_TASK, the task will not be rescheduled. It will be rescheduled on any other error code and logged as appropriate.
I guess the point here is what you want to do by default.
AFAIK none of the current periodic tasks ends willingly, only if a tevent timer cannot be created. So I think it is OK to anticipate that the number of error codes on which you want to continue is far greater than the number on which you want to stop.
But then again, we can change it any time if it doesn't fit our future needs.
In either case you need to check and remap error codes before returning.
If you define ERR_STOP_PERIODIC_TASK, then you continue by default and stop only on whitelisted errors.
If there is a new 'fatal' error that you do not catch in whatever function normally converts to ERR_STOP_PERIODIC_TASK, then the task will be rescheduled.
The other point is that with ERR_STOP_PERIODIC_TASK you also have to log where you map the error, because it won't bubble up.
I can see the advantages as well though, so I do not have a strong preference at this point, carry on :)
Simo.
I have just amended the design page: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
Fine, let's go ahead with the current design. In 1.10 the only task using this framework would be the background refresh anyway, right? Then, when we convert the other tasks (note: file tickets for the conversion), we'll make sure that the behaviour in faulty cases stays the same so that we don't regress.
On Mon, 2013-04-29 at 20:55 +0200, Jakub Hrozek wrote:
On Mon, Apr 29, 2013 at 08:42:30PM +0200, Pavel Březina wrote:
On 04/29/2013 08:12 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 20:05 +0200, Pavel Březina wrote:
On 04/29/2013 07:52 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 11:54 +0200, Pavel Březina wrote:
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
Wouldn't it be simpler to simply stop on any error, and reschedule only when EOK is returned? This way the task can bubble up the error all the way without having to mask it to ERR_STOP_PERIODIC_TASK at the very end.
No, you want to terminate the periodic task only on a fatal error from which you can't recover. E.g. you want to reschedule the task when you can't connect to LDAP, but terminate it if the LDAP server doesn't support some extension.
If you return ERR_STOP_PERIODIC_TASK, the task will not be rescheduled. It will be rescheduled on any other error code and logged as appropriate.
I guess the point here is what you want to do by default.
AFAIK none of the current periodic tasks ends willingly, only if a tevent timer cannot be created. So I think it is OK to anticipate that the number of error codes on which you want to continue is far greater than the number on which you want to stop.
But then again, we can change it any time if it doesn't fit our future needs.
In either case you need to check and remap error codes before returning.
If you define ERR_STOP_PERIODIC_TASK, then you continue by default and stop only on whitelisted errors.
If there is a new 'fatal' error that you do not catch in whatever function normally converts to ERR_STOP_PERIODIC_TASK, then the task will be rescheduled.
The other point is that with ERR_STOP_PERIODIC_TASK you also have to log where you map the error, because it won't bubble up.
I can see the advantages as well though, so I do not have a strong preference at this point, carry on :)
Simo.
I have just amended the design page: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
Fine, let's go ahead with the current design. In 1.10 the only task using this framework would be the background refresh anyway, right? Then, when we convert the other tasks (note: file tickets for the conversion), we'll make sure that the behaviour in faulty cases stays the same so that we don't regress.
Sorry, just one more thing: is ERR_STOP_PERIODIC_TASK meant to be used to also 'gracefully' terminate a task? Or only in case of a fatal error? If only in case of a fatal error, I wonder if using a generic error code name that can be reused in other functions wouldn't be more sensible.
Simo.
On 04/29/2013 09:49 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 20:55 +0200, Jakub Hrozek wrote:
On Mon, Apr 29, 2013 at 08:42:30PM +0200, Pavel Březina wrote:
On 04/29/2013 08:12 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 20:05 +0200, Pavel Březina wrote:
On 04/29/2013 07:52 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 11:54 +0200, Pavel Březina wrote:
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
Wouldn't it be simpler to simply stop on any error, and reschedule only when EOK is returned? This way the task can bubble up the error all the way without having to mask it to ERR_STOP_PERIODIC_TASK at the very end.
No, you want to terminate the periodic task only on a fatal error from which you can't recover. E.g. you want to reschedule the task when you can't connect to LDAP, but terminate it if the LDAP server doesn't support some extension.
If you return ERR_STOP_PERIODIC_TASK, the task will not be rescheduled. It will be rescheduled on any other error code and logged as appropriate.
I guess the point here is what you want to do by default.
AFAIK none of the current periodic tasks ends willingly, only if a tevent timer cannot be created. So I think it is OK to anticipate that the number of error codes on which you want to continue is far greater than the number on which you want to stop.
But then again, we can change it any time if it doesn't fit our future needs.
In either case you need to check and remap error codes before returning.
If you define ERR_STOP_PERIODIC_TASK, then you continue by default and stop only on whitelisted errors.
If there is a new 'fatal' error that you do not catch in whatever function normally converts to ERR_STOP_PERIODIC_TASK, then the task will be rescheduled.
The other point is that with ERR_STOP_PERIODIC_TASK you also have to log where you map the error, because it won't bubble up.
I can see the advantages as well though, so I do not have a strong preference at this point, carry on :)
Simo.
I have just amended the design page: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
Fine, let's go ahead with the current design. In 1.10 the only task using this framework would be the background refresh anyway, right? Then, when we convert the other tasks (note: file tickets for the conversion), we'll make sure that the behaviour in faulty cases stays the same so that we don't regress.
Sorry, just one more thing: is ERR_STOP_PERIODIC_TASK meant to be used to also 'gracefully' terminate a task?
Yes.
Or only in case of a fatal error?
If only in case of a fatal error, I wonder if using a generic error code name that can be reused in other functions wouldn't be more sensible.
Maybe we can provide both?
ERR_STOP_PERIODIC_TASK for graceful termination, ERR_FATAL for unexpected termination?
Simo.
On 04/29/2013 02:55 PM, Jakub Hrozek wrote:
On Mon, Apr 29, 2013 at 08:42:30PM +0200, Pavel Březina wrote:
On 04/29/2013 08:12 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 20:05 +0200, Pavel Březina wrote:
On 04/29/2013 07:52 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 11:54 +0200, Pavel Březina wrote:
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
Wouldn't it be simpler to simply stop on any error, and reschedule only when EOK is returned? This way the task can bubble up the error all the way without having to mask it to ERR_STOP_PERIODIC_TASK at the very end.
No, you want to terminate the periodic task only on a fatal error from which you can't recover. E.g. you want to reschedule the task when you can't connect to LDAP, but terminate it if the LDAP server doesn't support some extension.
If you return ERR_STOP_PERIODIC_TASK, the task will not be rescheduled. It will be rescheduled on any other error code and logged as appropriate.
I guess the point here is what you want to do by default.
AFAIK none of the current periodic tasks ends willingly, only if a tevent timer cannot be created. So I think it is OK to anticipate that the number of error codes on which you want to continue is far greater than the number on which you want to stop.
But then again, we can change it any time if it doesn't fit our future needs.
In either case you need to check and remap error codes before returning.
If you define ERR_STOP_PERIODIC_TASK, then you continue by default and stop only on whitelisted errors.
If there is a new 'fatal' error that you do not catch in whatever function normally converts to ERR_STOP_PERIODIC_TASK, then the task will be rescheduled.
The other point is that with ERR_STOP_PERIODIC_TASK you also have to log where you map the error, because it won't bubble up.
I can see the advantages as well though, so I do not have a strong preference at this point, carry on :)
Simo.
I have just amended the design page: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
Fine, let's go ahead with the current design. In 1.10 the only task using this framework would be the background refresh anyway, right? Then, when we convert the other tasks (note: file tickets for the conversion), we'll make sure that the behaviour in faulty cases stays the same so that we don't regress.
This is the one that will fix a known netgroup issue, right?
On 04/29/2013 10:29 PM, Dmitri Pal wrote:
On 04/29/2013 02:55 PM, Jakub Hrozek wrote:
On Mon, Apr 29, 2013 at 08:42:30PM +0200, Pavel Březina wrote:
On 04/29/2013 08:12 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 20:05 +0200, Pavel Březina wrote:
On 04/29/2013 07:52 PM, Simo Sorce wrote:
On Mon, 2013-04-29 at 11:54 +0200, Pavel Březina wrote:
Ah, yes. If the request returns ERR_STOP_PERIODIC_TASK, the task will be completely destroyed; otherwise it will be rescheduled: when EOK, it will fire the request at last-execution-time + period, otherwise at now + period.
Wouldn't it be simpler to simply stop on any error, and reschedule only when EOK is returned? This way the task can bubble up the error all the way without having to mask it to ERR_STOP_PERIODIC_TASK at the very end.
No, you want to terminate the periodic task only on a fatal error from which you can't recover. E.g. you want to reschedule the task when you can't connect to LDAP, but terminate it if the LDAP server doesn't support some extension.
If you return ERR_STOP_PERIODIC_TASK, the task will not be rescheduled. It will be rescheduled on any other error code and logged as appropriate.
I guess the point here is what you want to do by default.
AFAIK none of the current periodic tasks ends willingly, only if a tevent timer cannot be created. So I think it is OK to anticipate that the number of error codes on which you want to continue is far greater than the number on which you want to stop.
But then again, we can change it any time if it doesn't fit our future needs.
In either case you need to check and remap error codes before returning.
If you define ERR_STOP_PERIODIC_TASK, then you continue by default and stop only on whitelisted errors.
If there is a new 'fatal' error that you do not catch in whatever function normally converts to ERR_STOP_PERIODIC_TASK, then the task will be rescheduled.
The other point is that with ERR_STOP_PERIODIC_TASK you also have to log where you map the error, because it won't bubble up.
I can see the advantages as well though, so I do not have a strong preference at this point, carry on :)
Simo.
I have just amended the design page: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
Fine, let's go ahead with the current design. In 1.10 the only task using this framework would be the background refresh anyway, right? Then, when we convert the other tasks (note: file tickets for the conversion), we'll make sure that the behaviour in faulty cases stays the same so that we don't regress.
This is the one that will fix a known netgroup issue, right?
Right.
On 04/29/2013 11:35 AM, Jakub Hrozek wrote:
On Mon, Apr 29, 2013 at 11:07:09AM +0200, Pavel Březina wrote:
On 04/29/2013 10:00 AM, Jakub Hrozek wrote:
On Thu, Apr 25, 2013 at 11:18:28AM +0200, Pavel Březina wrote:
I've been assigned ticket https://fedorahosted.org/sssd/ticket/1713: [RFE] Add a task to the SSSD to periodically refresh cached entries
I have recently created a ticket (#1891) to unify the API for managing periodic tasks. We already have quite a few periodic tasks (enumeration, sudo, dyndns, #1713), where each of them implements a custom API.
None of these are generic enough to be used for #1713, so I will have to create a new one. I'm not suggesting refactoring the old code now; that will be done when #1891 is scheduled.
But I think it is a good idea to create the generic one now instead of a new feature-specific one. It will be basically the same amount of work.
I wrote a design document: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
Hi, I didn't expect any major issues with the design, so I already wrote the code on Friday. You can take a look at the nss-periodic branch in my repo.
Yeah, sorry, my fault for not responding sooner.
I have a couple of questions:
- The create function has no memory context parameter. I was thinking that in some cases you might want to allocate the structure on something other than be_ctx and, more importantly, cancel the task when the context in question goes away
Do you have any particular use case? I didn't think of any, but I guess it won't hurt to add it. If anything, it will make it easier to ensure that it is freed when private data (like sdap_id_ctx) is freed.
No, that was just future extensibility.
- The design page says that "cancelling request if current tevent request bound to this task takes more than timeout seconds" is a job of the be_periodic_task_create(). Does it also reschedule the job in that case?
Yes.
Is the rescheduling configurable?
How would you like to configure it?
I was thinking that in some cases you might not want to reschedule at all. If the task fails, it's failed. But I guess this could be handled in the task itself by freeing the data structure?
- What about online/offline callbacks? Some tasks should be re-enabled when the BE goes online or cancelled when the BE is offline
If this is needed then you should create the task in online callback and destroy it in offline callback.
OK, can you add that bit to the design page?
I will amend my answer: if we need it, we will implement it as a part of be_periodic_task.
A task may successfully open a connection to the database, turning SSSD back online. AFAIK there is currently no use case for disabling the task when offline.
One grammar nitpick - " The periodic tasks will be held by back end." should probably say "owned by back end"
On Mon, Apr 29, 2013 at 08:47:00PM +0200, Pavel Březina wrote:
- What about online/offline callbacks? Some tasks should be re-enabled when the BE goes online or cancelled when the BE is offline
If this is needed then you should create the task in online callback and destroy it in offline callback.
OK, can you add that bit to the design page?
I will amend my answer: if we need it, we will implement it as a part of be_periodic_task.
A task may successfully open a connection to the database, turning SSSD back online. AFAIK there is currently no use case for disabling the task when offline.
What about the periodic DNS update task?
Another use case for this will be the future GPO work, which will need this periodic capability to retrieve and apply group policies at regular intervals.
Yassir.
----- Original Message -----
I've been assigned ticket https://fedorahosted.org/sssd/ticket/1713: [RFE] Add a task to the SSSD to periodically refresh cached entries
I have recently created a ticket (#1891) to unify the API for managing periodic tasks. We already have quite a few periodic tasks (enumeration, sudo, dyndns, #1713), where each of them implements a custom API.
None of these are generic enough to be used for #1713, so I will have to create a new one. I'm not suggesting refactoring the old code now; that will be done when #1891 is scheduled.
But I think it is a good idea to create the generic one now instead of a new feature-specific one. It will be basically the same amount of work.
I wrote a design document: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
On 04/25/2013 11:18 AM, Pavel Březina wrote:
I've been assigned ticket https://fedorahosted.org/sssd/ticket/1713: [RFE] Add a task to the SSSD to periodically refresh cached entries
I have recently created a ticket (#1891) to unify the API for managing periodic tasks. We already have quite a few periodic tasks (enumeration, sudo, dyndns, #1713), where each of them implements a custom API.
None of these are generic enough to be used for #1713, so I will have to create a new one. I'm not suggesting refactoring the old code now; that will be done when #1891 is scheduled.
But I think it is a good idea to create the generic one now instead of a new feature-specific one. It will be basically the same amount of work.
I wrote a design document: https://fedorahosted.org/sssd/wiki/DesignDocs/PeriodicTasks
I have updated the document with the following changes:
- change namespace from be_periodic_task to be_ptask
- add enum be_ptask_offline
- add be_ptask_enable
- add be_ptask_disable
- reworked the "When offline" section
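Based on the list above, the amended interface might look roughly like this; everything beyond the four names listed (be_ptask, enum be_ptask_offline, be_ptask_enable, be_ptask_disable) is an assumption for illustration:

```c
#include <stdbool.h>

/* What a be_ptask does while the back end is offline; the enumerator
 * members are assumptions, not the design page's actual values. */
enum be_ptask_offline {
    BE_PTASK_OFFLINE_SKIP,      /* skip executions while offline */
    BE_PTASK_OFFLINE_DISABLE,   /* disable the task until back online */
    BE_PTASK_OFFLINE_EXECUTE    /* execute even when offline */
};

/* Minimal mock of the task structure, for illustration only. */
struct be_ptask {
    bool enabled;
    enum be_ptask_offline offline;
};

static void be_ptask_enable(struct be_ptask *task)
{
    task->enabled = true;
}

static void be_ptask_disable(struct be_ptask *task)
{
    task->enabled = false;
}
```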