On 5 Mar 2019, at 08:30, Mark Reynolds <mreynolds(a)redhat.com> wrote:
On 2/22/19 11:46 AM, Mark Reynolds wrote:
> I want to start a brief discussion about a major problem we have with backend transaction
> plugins and the entry caches. I'm finding that when we get into a nested state of be
> txn plugins and one of the later plugins fails, then while we don't commit the disk
> changes (they are aborted/rolled back), we DO keep the entry cache changes.
> For example, a modrdn operation triggers the referential integrity plugin, which
> renames the member attribute value in some group and changes that group's entry cache
> entry, but then later on the memberOf plugin fails for some reason. The database
> transaction is aborted, but the entry cache changes the RI plugin made are still
> present :-( I have also found other entry cache issues with modrdn and BE TXN plugins,
> and we know of other currently non-reproducible entry cache crashes as well, related to
> mishandling of cache entries after failed operations.
> It's time to rework how we use the entry cache. We basically need a transaction-style
> caching mechanism - we should not commit any entry cache changes until the original
> operation is fully successful. Unfortunately, given the way the entry cache is currently
> designed and used, it will be a major effort to change it.
> William wrote up this doc:
> But this also does not currently cover the nested plugin scenario (not yet). I do not
> know how difficult it would be to implement William's proposal, or how difficult it
> would be to incorporate the txn-style caching into his design. What kind of time frame
> could this even be implemented in? William, what are your thoughts?
> If William's design is too big a change to implement safely in a reasonable time, then
> perhaps we need to look into revising the existing cache design so that we use
> "cache_add_tentative"-style functions and only apply the changes at the end of the op.
> This is also not a trivial change.
> And what impact would changing the entry cache have on Ludwig's pluggable backend work?
> Anyway, we need to start thinking about redesigning the entry cache - no matter what
> approach we want to take. If anyone has any ideas or comments please share them, but I
> think, due to the severity of this flaw, redesigning the entry cache should be one of
> our next major goals in DS (1.4.1?).
We are actually seeing more of these cases popping up now, so we need to do something
soon. I had proposed we could always just flush the entire cache when a backend txn op
fails, but Ludwig had a much better idea: we could implement a type of CSN in the
entry cache. Then, when a backend txn plugin fails, we flush the entry cache entries
whose CSN is >= the CSN at the start of the parent operation.
So until LMDB or a new caching mechanism is implemented this could be a viable/realistic solution.
Well, the cache currently works by having a version number (I think watermark?) in the
cache. That's how old content is removed (entries < watermark). So perhaps we could get
the watermark id at the start of an operation and, on rc != 0, flush anything where
watermark >= the op's starting watermark.
(disclaimer, this is from my memory and may not represent real cache behaviour, so could
be wildly wrong).
> 389-devel mailing list -- 389-devel(a)lists.fedoraproject.org
> To unsubscribe send an email to 389-devel-leave(a)lists.fedoraproject.org
> Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:
Software Engineer, 389 Directory Server