From [PATCH 0/0] A shared memory cache to perform better:
0/4: Actual memory cache implementation. This is the bulk of the work; these patches are still a bit rough around the edges. Grep for FIXMEs and TODOs and you'll see some missing plumbing (for example, configure options in sssd to set the expiration time and cache sizes are still hardcoded).
Simo.
On Tue, 2012-01-03 at 18:00 -0500, Simo Sorce wrote:
From [PATCH 0/0] A shared memory cache to perform better:
0/4: Actual memory cache implementation. This is the bulk of the work; these patches are still a bit rough around the edges. Grep for FIXMEs and TODOs and you'll see some missing plumbing (for example, configure options in sssd to set the expiration time and cache sizes are still hardcoded).
Patch 0008: Ack
Patch 0009: Nack
As the fixmes say, please parameterize the cache sizes and timeout period.
You have a time-of-check/time-of-use bug with the stat of mc_ctx->file. Please open it first and then use fstat() instead, to avoid a race-condition exploit.
Please use errno_t for the return code of functions that return errno values.
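For anyone unfamiliar with the convention: errno_t in sssd is just an int typedef, used so a function's declaration advertises that it returns errno-style codes rather than a -1/0 pair. A minimal sketch (the example function is hypothetical, not from the patches):

```c
#include <errno.h>

/* sssd-style convention: errno_t marks functions whose return value
 * is an errno code (0 on success), not a boolean or a -1 sentinel. */
typedef int errno_t;

/* Hypothetical example function for illustration only. */
errno_t validate_fd(int fd)
{
    if (fd < 0) {
        return EINVAL;
    }
    return 0; /* success */
}
```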
I think the check_header pieces are unnecessary. When the NSS provider is started, it should always remove an existing fast cache. This will make the behavior better match users' expectations.
Don't use ftruncate() to generate the cache file. It's not safe on all platforms or filesystems. Probably better to use fseek() (even though it's a bit slower; it's a rare operation).
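For comparison, the two ways of preallocating a fixed-size file being discussed are roughly as follows (paths, mode bits, and error handling are illustrative, not the sssd code):

```c
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Approach 1: extend the file to 'size' bytes in one call. */
int create_with_ftruncate(const char *path, off_t size)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd == -1) return -1;
    if (ftruncate(fd, size) == -1) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Approach 2: seek to the last byte and write it, forcing the
 * file to be extended without relying on ftruncate(). */
int create_with_seek(const char *path, off_t size)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd == -1) return -1;
    if (lseek(fd, size - 1, SEEK_SET) == -1 ||
        write(fd, "", 1) != 1) {
        close(fd);
        return -1;
    }
    return fd;
}
```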
I will review the other patches tomorrow.
On Mon, 2012-01-09 at 16:30 -0500, Stephen Gallagher wrote:
On Tue, 2012-01-03 at 18:00 -0500, Simo Sorce wrote:
From [PATCH 0/0] A shared memory cache to perform better:
0/4: Actual memory cache implementation. This is the bulk of the work; these patches are still a bit rough around the edges. Grep for FIXMEs and TODOs and you'll see some missing plumbing (for example, configure options in sssd to set the expiration time and cache sizes are still hardcoded).
Patch 0008: Ack
Patch 0009: Nack
As the fixmes say, please parameterize the cache sizes and timeout period.
You have a time-of-check/time-of-use bug with the stat of mc_ctx->file. Please open it first and then use fstat() instead, to avoid a race-condition exploit.
Not sure what race condition is there.
sssd_nss is the only thing that can create/manipulate that file; in what case do you see a race condition?
The problem with using fstat() is that it unnecessarily (IMO) complicates the code for this case.
Please use errno_t for the return code of functions that return errno values.
Ok.
I think the check_header pieces are unnecessary. When the NSS provider is started, it should always remove an existing fast cache. This will make the behavior better match users' expectations.
Not sure about this; it unnecessarily causes cache lookups to go through the pipe to re-populate the fast cache. Perhaps we need to discuss the pros and cons of either approach a bit more.
Don't use ftruncate() to generate the cache file. It's not safe on all platforms or filesystems. Probably better to use fseek() (even though it's a bit slower; it's a rare operation).
I asked our filesystem gurus and they think ftruncate() is the best way to go (and the kernel sources confirm it works with every local filesystem, even VFAT :-)
I will review the other patches tomorrow.
Thanks!
Simo.
On Mon, 2012-01-09 at 18:24 -0500, Simo Sorce wrote:
I will review the other patches tomorrow.
Thanks!
Oh, btw, I just found a bug in the client libs that would cause libnss_sss.so to reopen the fast cache file on every operation until it ran out of file descriptors, oops :-)
New patch attached.
Simo.
On Mon, 2012-01-09 at 18:24 -0500, Simo Sorce wrote:
On Mon, 2012-01-09 at 16:30 -0500, Stephen Gallagher wrote:
On Tue, 2012-01-03 at 18:00 -0500, Simo Sorce wrote:
From [PATCH 0/0] A shared memory cache to perform better:
0/4: Actual memory cache implementation. This is the bulk of the work; these patches are still a bit rough around the edges. Grep for FIXMEs and TODOs and you'll see some missing plumbing (for example, configure options in sssd to set the expiration time and cache sizes are still hardcoded).
Patch 0008: Ack
Patch 0009: Nack
As the fixmes say, please parameterize the cache sizes and timeout period.
You have a time-of-check/time-of-use bug with the stat of mc_ctx->file. Please open it first and then use fstat() instead, to avoid a race-condition exploit.
Not sure what race condition is there.
sssd_nss is the only thing that can create/manipulate that file; in what case do you see a race condition?
If filesystem protections are ever incorrect, then you have a situation where theoretically a user could do something like replace the mmap file with a symlink to /etc/passwd in the short gap between the stat() check and the file open. We would then fail to validate the file and promptly destroy it, hosing the system.
The problem with using fstat() is that it unnecessarily (IMO) complicates the code for this case.
I think it's a necessary complication. Please use it.
Please use errno_t for the return code of functions that return errno values.
Ok.
I think the check_header pieces are unnecessary. When the NSS provider is started, it should always remove an existing fast cache. This will make the behavior better match users' expectations.
Not sure about this; it unnecessarily causes cache lookups to go through the pipe to re-populate the fast cache. Perhaps we need to discuss the pros and cons of either approach a bit more.
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
Don't use ftruncate() to generate the cache file. It's not safe on all platforms or filesystems. Probably better to use fseek() (even though it's a bit slower; it's a rare operation).
I asked our filesystem gurus and they think ftruncate() is the best way to go (and the kernel sources confirm it works with every local filesystem, even VFAT :-)
Well, my concern was more with porting to other platforms, but I suppose we can burn those bridges when we come to them.
On Mon, 2012-01-09 at 18:24 -0500, Simo Sorce wrote:
On Mon, 2012-01-09 at 16:30 -0500, Stephen Gallagher wrote:
On Tue, 2012-01-03 at 18:00 -0500, Simo Sorce wrote:
From [PATCH 0/0] A shared memory cache to perform better:
0/4: Actual memory cache implementation. This is the bulk of the work; these patches are still a bit rough around the edges. Grep for FIXMEs and TODOs and you'll see some missing plumbing (for example, configure options in sssd to set the expiration time and cache sizes are still hardcoded).
Patch 0008: Ack
Patch 0009: Nack
As the fixmes say, please parameterize the cache sizes and timeout period.
You have a time-of-check/time-of-use bug with the stat of mc_ctx->file. Please open it first and then use fstat() instead, to avoid a race-condition exploit.
Not sure what race condition is there.
sssd_nss is the only thing that can create/manipulate that file; in what case do you see a race condition?
If filesystem protections are ever incorrect, then you have a situation where theoretically a user could do something like replace the mmap file with a symlink to /etc/passwd in the short gap between the stat() check and the file open. We would then fail to validate the file and promptly destroy it, hosing the system.
Although it's a very unlikely scenario, I agree that we should always choose not to leave anything to chance in scenarios like this. Is there any downside to moving the code?
The problem with using fstat() is that it unnecessarily (IMO) complicates the code for this case.
I think it's a necessary complication. Please use it.
Please use errno_t for the return code of functions that return errno values.
Ok.
I think the check_header pieces are unnecessary. When the NSS provider is started, it should always remove an existing fast cache. This will make the behavior better match users' expectations.
Not sure about this; it unnecessarily causes cache lookups to go through the pipe to re-populate the fast cache. Perhaps we need to discuss the pros and cons of either approach a bit more.
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
I agree
Don't use ftruncate() to generate the cache file. It's not safe on all platforms or filesystems. Probably better to use fseek() (even though it's a bit slower; it's a rare operation).
I asked our filesystem gurus and they think ftruncate() is the best way to go (and the kernel sources confirm it works with every local filesystem, even VFAT :-)
Well, my concern was more with porting to other platforms, but I suppose we can burn those bridges when we come to them.
I thought we've already done that in several parts of the code, haven't we?
I also have one cosmetic proposal for patch #0009:
1) On line 452 use MC_ALIGN64 instead of (n_elem + 7) & (~7)
Thanks Jan
On Tue, 2012-01-10 at 10:49 +0100, Jan Zelený wrote:
I also have one cosmetic proposal for patch #0009:
- On line 452 use MC_ALIGN64 instead of (n_elem + 7) & (~7)
The reason I did not use MC_ALIGN64 is that I thought it would be confusing as n_elem is not a pointer but a counter. I'll change the code and add more explanation on why using MC_ALIGN64 is ok there.
Simo.
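For illustration, both spellings compute the same round-up-to-a-multiple-of-8; a sketch assuming MC_ALIGN64 is defined along these lines (the actual definition lives in the patch):

```c
#include <stddef.h>

/* Assumed definition: round n up to the next multiple of 8, the same
 * arithmetic as the open-coded (n_elem + 7) & (~7) expression. It works
 * equally well on byte sizes and on element counters. */
#define MC_ALIGN64(n) (((n) + 7) & ~(size_t)7)
```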
On 01/10/2012 10:17 AM, Simo Sorce wrote:
On Tue, 2012-01-10 at 10:49 +0100, Jan Zelený wrote:
I also have one cosmetic proposal for patch #0009:
- On line 452 use MC_ALIGN64 instead of (n_elem + 7) & (~7)
The reason I did not use MC_ALIGN64 is that I thought it would be confusing as n_elem is not a pointer but a counter. I'll change the code and add more explanation on why using MC_ALIGN64 is ok there.
Simo.
Are there any SELinux implications with this feature?
On Tue, 2012-01-10 at 10:59 -0500, Dmitri Pal wrote:
Are there any SELinux implications with this feature?
I guess you mean the whole work, not the email you quoted. In that case the answer is yes: clients must be allowed to access the files we create, but that is similar to the requirement to allow clients to access the sssd_nss pipes, so adjusting the SELinux policies should be easy.
Simo.
On 01/10/2012 11:26 AM, Simo Sorce wrote:
On Tue, 2012-01-10 at 10:59 -0500, Dmitri Pal wrote:
Are there any SELinux implications with this feature?
I guess you mean the whole work, not the email you quoted.
Yes, sorry. It just occurred to me as I looked at that email, and I did not have time to find the right place for this comment.
In that case the answer is yes: clients must be allowed to access the files we create, but that is similar to the requirement to allow clients to access the sssd_nss pipes, so adjusting the SELinux policies should be easy.
So is there a special change to the SELinux policy on a per-cache-client basis? Should we file any SELinux bugs to make policy changes for this feature?
Simo.
On Tue, 2012-01-10 at 12:56 -0500, Dmitri Pal wrote:
On 01/10/2012 11:26 AM, Simo Sorce wrote:
On Tue, 2012-01-10 at 10:59 -0500, Dmitri Pal wrote:
Are there any SELinux implications with this feature?
I guess you mean the whole work, not the email you quoted.
Yes, sorry. It just occurred to me as I looked at that email, and I did not have time to find the right place for this comment.
In that case the answer is yes: clients must be allowed to access the files we create, but that is similar to the requirement to allow clients to access the sssd_nss pipes, so adjusting the SELinux policies should be easy.
So is there a special change to the SELinux policy on a per-cache-client basis? Should we file any SELinux bugs to make policy changes for this feature?
Yes, once the patches are accepted :-)
Simo.
On Mon, 2012-01-09 at 20:29 -0500, Stephen Gallagher wrote:
On Mon, 2012-01-09 at 18:24 -0500, Simo Sorce wrote:
Not sure what race condition is there.
sssd_nss is the only thing that can create/manipulate that file; in what case do you see a race condition?
If filesystem protections are ever incorrect, then you have a situation where theoretically a user could do something like replace the mmap file with a symlink to /etc/passwd in the short gap between the stat() check and the file open. We would then fail to validate the file and promptly destroy it, hosing the system.
If that happens we have bigger problems; remember, we do not store this file in /tmp, so the normal paranoia associated with user-writable directories need not apply.
The problem with using fstat() is that it unnecessarily (IMO) complicates the code for this case.
I think it's a necessary complication. Please use it.
No, I really think it is unnecessary, and the above explanation is close to cargo-cult security, to be honest.
If users can write to /var/lib/sss/mc they can simply create a passwd file where their user has uid 0, and then log in and become root. I.e. we have much bigger issues than a race condition if that dir is writable.
Please use errno_t for the return code of functions that return errno values.
Ok.
I think the check_header pieces are unnecessary. When the NSS provider is started, it should always remove an existing fast cache. This will make the behavior better match users' expectations.
Not sure about this; it unnecessarily causes cache lookups to go through the pipe to re-populate the fast cache. Perhaps we need to discuss the pros and cons of either approach a bit more.
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
Ok, I think I can understand this; it will also simplify the code, I guess, so I'll change the patch.
Don't use ftruncate() to generate the cache file. It's not safe on all platforms or filesystems. Probably better to use fseek() (even though it's a bit slower; it's a rare operation).
I asked our filesystem gurus and they think ftruncate() is the best way to go (and the kernel sources confirm it works with every local filesystem, even VFAT :-)
Well, my concern was more with porting to other platforms, but I suppose we can burn those bridges when we come to them.
My thinking.
Simo.
On Tue, 2012-01-10 at 10:15 -0500, Simo Sorce wrote:
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
Ok, I think I can understand this, it will also simplify the code I guess so I'll change the patch.
Attached new set of patches.
I changed the init code to always create a new cache file on restart, incidentally this also got rid of the stat() call, so that problem has been put to rest permanently :)
I also changed all functions to use errno_t as the return type in their declaration when they return an errno code.
I think all the suggestions seen in the thread so far have been implemented as requested at this point, with the exception of the part where I should implement and use new options as that required additional code instead of mere refactoring. I will add those features in a later rebase.
Simo.
On Tue, 2012-01-10 at 14:33 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 10:15 -0500, Simo Sorce wrote:
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
Ok, I think I can understand this, it will also simplify the code I guess so I'll change the patch.
Attached new set of patches.
I changed the init code to always create a new cache file on restart, incidentally this also got rid of the stat() call, so that problem has been put to rest permanently :)
I also changed all functions to use errno_t as the return type in their declaration when they return an errno code.
I think all the suggestions seen in the thread so far have been implemented as requested at this point, with the exception of the part where I should implement and use new options as that required additional code instead of mere refactoring. I will add those features in a later rebase.
Oops, had a little error converting to use errno; new patch attached to fix just that, all other patches should be fine.
Simo.
On Tue, 2012-01-10 at 14:33 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 10:15 -0500, Simo Sorce wrote:
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
Ok, I think I can understand this, it will also simplify the code I guess so I'll change the patch.
Attached new set of patches.
I changed the init code to always create a new cache file on restart, incidentally this also got rid of the stat() call, so that problem has been put to rest permanently :)
I also changed all functions to use errno_t as the return type in their declaration when they return an errno code.
I think all the suggestions seen in the thread so far have been implemented as requested at this point, with the exception of the part where I should implement and use new options as that required additional code instead of mere refactoring. I will add those features in a later rebase.
New revision.
I removed bootid from the cache file header; given that we recreate the file every time sssd restarts, it was useless.
I replaced it with a seed, and made the seed used in the hash table randomly regenerated every time the hash table is recreated.
This is used to avoid collision based attacks [1].
Simo.
[1] https://lwn.net/Articles/474912/
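To illustrate the idea (this is not the actual hash function used in the patches), a per-table seed folded into an FNV-style string hash would look like this; a fresh random seed each time the table is recreated means an attacker cannot precompute keys that all land in one bucket:

```c
#include <stdint.h>

/* FNV-1a variant with a per-table seed mixed into the initial state.
 * Illustrative sketch only; the sssd patches use their own hash. */
uint32_t seeded_hash(uint32_t seed, const char *key)
{
    uint32_t h = 2166136261u ^ seed;
    for (; *key; key++) {
        h ^= (unsigned char)*key;
        h *= 16777619u;
    }
    return h;
}
```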
On Thu, 2012-01-12 at 11:49 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 14:33 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 10:15 -0500, Simo Sorce wrote:
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
Ok, I think I can understand this, it will also simplify the code I guess so I'll change the patch.
Attached new set of patches.
I changed the init code to always create a new cache file on restart, incidentally this also got rid of the stat() call, so that problem has been put to rest permanently :)
I also changed all functions to use errno_t as the return type in their declaration when they return an errno code.
I think all the suggestions seen in the thread so far have been implemented as requested at this point, with the exception of the part where I should implement and use new options as that required additional code instead of mere refactoring. I will add those features in a later rebase.
New revision.
I removed bootid from the cache file header; given that we recreate the file every time sssd restarts, it was useless.
I replaced it with a seed, and made the seed used in the hash table randomly regenerated every time the hash table is recreated.
This is used to avoid collision based attacks [1].
Simo.
I started looking at these last night. I have yet to complete a full review, but I'm attaching a set of patches rebased atop the current master so several of us can start reviewing.
On Thu, 2012-01-12 at 11:49 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 14:33 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 10:15 -0500, Simo Sorce wrote:
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
Ok, I think I can understand this, it will also simplify the code I guess so I'll change the patch.
Attached new set of patches.
I changed the init code to always create a new cache file on restart, incidentally this also got rid of the stat() call, so that problem has been put to rest permanently :)
I also changed all functions to use errno_t as the return type in their declaration when they return an errno code.
I think all the suggestions seen in the thread so far have been implemented as requested at this point, with the exception of the part where I should implement and use new options as that required additional code instead of mere refactoring. I will add those features in a later rebase.
New revision.
I removed bootid from the cache file header; given that we recreate the file every time sssd restarts, it was useless.
I replaced it with a seed, and made the seed used in the hash table randomly regenerated every time the hash table is recreated.
This is used to avoid collision based attacks [1].
Simo.
I started looking at these last night. I have yet to complete a full review, but I'm attaching a set of patches rebased atop the current master so several of us can start reviewing.
Ok, I finished the review. I am fine with these patches in general, I have just a few comments and/or questions:
Why only one hash table for both name and uidkey? I think using two would make things a little bit simpler and faster (of course, a tradeoff would be slightly higher memory consumption).
responder/nss/nsssrv_mmap_cache.c:220 - this condition is unnecessary because of the condition on line 208
responder/nss/nsssrv_mmap_cache.c:242 - the assignment is redundant
responder/nss/nsssrv_mmap_cache.c:275 - why this condition? I think the following while will cover it
responder/nss/nsssrv_mmap_cache.c:516 - sizeof(uint32_t) -> sizeof(h.status)
responder/nss/nsssrv_mmap_cache.c:517 - sizeof(uint32_t) -> sizeof(h.status)
responder/nss/nsssrv_mmap_cache.c:561 - shouldn't the umask better be 0133?
close vs. munmap: just out of curiosity, does it matter which should be first? I found two places in the code where the order is different:
sss_client/nss_mc_common.c:146 responder/nss/nsssrv_mmap_cache.c:717
That's all from my side. As I've said, my comments are rather minor things and suggestions, so in general I give these patches a green light.
Thanks Jan
On Fri, 2012-03-16 at 14:25 +0100, Jan Zelený wrote:
On Thu, 2012-01-12 at 11:49 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 14:33 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 10:15 -0500, Simo Sorce wrote:
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
Ok, I think I can understand this, it will also simplify the code I guess so I'll change the patch.
Attached new set of patches.
I changed the init code to always create a new cache file on restart, incidentally this also got rid of the stat() call, so that problem has been put to rest permanently :)
I also changed all functions to use errno_t as the return type in their declaration when they return an errno code.
I think all the suggestions seen in the thread so far have been implemented as requested at this point, with the exception of the part where I should implement and use new options as that required additional code instead of mere refactoring. I will add those features in a later rebase.
New revision.
I removed bootid from the cache file header; given that we recreate the file every time sssd restarts, it was useless.
I replaced it with a seed, and made the seed used in the hash table randomly regenerated every time the hash table is recreated.
This is used to avoid collision based attacks [1].
Simo.
I started looking at these last night. I have yet to complete a full review, but I'm attaching a set of patches rebased atop the current master so several of us can start reviewing.
Ok, I finished the review. I am fine with these patches in general, I have just a few comments and/or questions:
Why only one hash table for both name and uidkey? I think using two would make things a little bit simpler and faster (of course, a tradeoff would be slightly higher memory consumption).
I wanted to limit memory consumption; it is especially important in 32-bit apps, which have much more limited virtual memory (remember that this is mmapped in any application).
responder/nss/nsssrv_mmap_cache.c:220
- this condition is unnecessary because of the condition on line 208
Can you post the actual code snippet? If I understand what you mean, I think I have line numbers off by one... but I also do not see the unnecessary condition. cur is advanced since the test on line 208/209, so it is not testing the same thing AFAICS.
responder/nss/nsssrv_mmap_cache.c:242
- the assignment is redundant
I guess it's t; yeah, that's a leftover.
responder/nss/nsssrv_mmap_cache.c:275
- why this condition? I think the following while will cover it
slot is used to dereference data from the table, if it accesses beyond the table it will cause a segfault.
responder/nss/nsssrv_mmap_cache.c:516
- sizeof(uint32_t) -> sizeof(h.status)
yes it is better to not assume.
responder/nss/nsssrv_mmap_cache.c:517
- sizeof(uint32_t) -> sizeof(h.status)
same as above, yes.
responder/nss/nsssrv_mmap_cache.c:561
- shouldn't the umask better be 0133?
I don't think we care about explicitly removing the executable bit; the default umask doesn't set it. What matters here is that it is not writable.
close vs. munmap: just out of curiosity, does it matter which should be first? I found two places in the code where the order is different:
no it shouldn't matter.
sss_client/nss_mc_common.c:146 responder/nss/nsssrv_mmap_cache.c:717
I changed the second one to do the munmap first
That's all from my side. As I've said, my comments are rather minor things and suggestions, so in general I give these patches a green light.
Attached new patches with some of the nitpicks fixed.
Simo.
On Fri, 2012-03-16 at 14:25 +0100, Jan Zelený wrote:
On Thu, 2012-01-12 at 11:49 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 14:33 -0500, Simo Sorce wrote:
On Tue, 2012-01-10 at 10:15 -0500, Simo Sorce wrote:
Sure, we can talk about it. I'm looking at it from the users' perspectives, who I think would generally expect (and be alright with) the fast cache being emptied on service restart. Since we still have the not-quite-as-fast persistent LDB cache, I think the gain isn't worth the user confusion.
Ok, I think I can understand this, it will also simplify the code I guess so I'll change the patch.
Attached new set of patches.
I changed the init code to always create a new cache file on restart, incidentally this also got rid of the stat() call, so that problem has been put to rest permanently :)
I also changed all functions to use errno_t as the return type in their declaration when they return an errno code.
I think all the suggestions seen in the thread so far have been implemented as requested at this point, with the exception of the part where I should implement and use new options as that required additional code instead of mere refactoring. I will add those features in a later rebase.
New revision.
I removed bootid from the cache file header; given that we recreate the file every time sssd restarts, it was useless.
I replaced it with a seed, and made the seed used in the hash table randomly regenerated every time the hash table is recreated.
This is used to avoid collision based attacks [1].
Simo.
I started looking at these last night. I have yet to complete a full review, but I'm attaching a set of patches rebased atop the current master so several of us can start reviewing.
Ok, I finished the review. I am fine with these patches in general, I have just a few comments and/or questions:
Why only one hash table for both name and uidkey? I think using two would make things a little bit simpler and faster (of course, a tradeoff would be slightly higher memory consumption).
I wanted to limit memory consumption; it is especially important in 32-bit apps, which have much more limited virtual memory (remember that this is mmapped in any application).
True, I hadn't considered the mmapping by every application.
responder/nss/nsssrv_mmap_cache.c:220
- this condition is unnecessary because of the condition on line 208
Can you post the actual code snippet? If I understand what you mean, I think I have line numbers off by one... but I also do not see the unnecessary condition. cur is advanced since the test on line 208/209, so it is not testing the same thing AFAICS.
I hope I read the code correctly, this is by far the most complex part of all your patches.
First you have this condition:
    if (mcc->free_table[t] == 0xff) {
        cur = ((cur + 8) & ~7);
        if (cur >= tot_slots) {
            cur = 0;
        }
        continue;
    }
Right after that you have the following cycle, with a condition right after it:

    for (t = ((cur + 8) & ~7); cur < t; cur++) {
        MC_PROBE_BIT(mcc->free_table, cur, used);
        if (!used) break;
    }
    if (cur == t) {
        if (cur >= tot_slots) {
            cur = 0;
        }
        continue;
    }
Considering the condition if (mcc->free_table[t] == 0xff), cur will never equal t. That would imply that there was no bit set to zero in mcc->free_table[t], which is already ruled out by the previous condition. Instead of this condition, I'd simply place a comment above the for cycle: "There is a zero bit in the byte, find it". I think it would improve the readability of the code.
responder/nss/nsssrv_mmap_cache.c:242
- the assignment is redundant
I guess it's t; yeah, that's a leftover.
responder/nss/nsssrv_mmap_cache.c:275
- why this condition? I think the following while will cover it
slot is used to dereference data from the table, if it accesses beyond the table it will cause a segfault.
I know that; my point was that, from my understanding, the slot variable can contain either a valid number or MC_INVALID_VAL, nothing else. Of course, if I missed anything or you are just being defensive just in case, I withdraw my comment.
responder/nss/nsssrv_mmap_cache.c:516
- sizeof(uint32_t) -> sizeof(h.status)
yes it is better to not assume.
responder/nss/nsssrv_mmap_cache.c:517
- sizeof(uint32_t) -> sizeof(h.status)
same as above, yes.
responder/nss/nsssrv_mmap_cache.c:561
- shouldn't the umask better be 0133?
I don't think we care about explicitly removing the executable bit; the default umask doesn't set it. What matters here is that it is not writable.
Ok
close vs. munmap: just out of curiosity, does it matter which should be first? I found two places in the code where the order is different:
no it shouldn't matter.
sss_client/nss_mc_common.c:146 responder/nss/nsssrv_mmap_cache.c:717
I changed the second one to do the munmap first
That's all from my side. As I've said, my comments are rather minor things and suggestions, so in general I give these patches a green light.
Attached new patches with some of the nitpicks fixed.
Thanks Jan
On Mon, 2012-03-19 at 10:21 +0100, Jan Zelený wrote:
responder/nss/nsssrv_mmap_cache.c:220
- this condition is unnecessary because of the condition on line 208
Can you post the actual code snippet? If I understand what you mean, I think I have line numbers off by one... but I also do not see the unnecessary condition. cur is advanced since the test on line 208/209, so it is not testing the same thing AFAICS.
I hope I read the code correctly, this is by far the most complex part of all your patches.
First you have this condition:
    if (mcc->free_table[t] == 0xff) {
        cur = ((cur + 8) & ~7);
        if (cur >= tot_slots) {
            cur = 0;
        }
        continue;
    }
Right after that you have the following loop, with a condition right after it:

    for (t = ((cur + 8) & ~7); cur < t; cur++) {
        MC_PROBE_BIT(mcc->free_table, cur, used);
        if (!used) break;
    }
    if (cur == t) {
        if (cur >= tot_slots) {
            cur = 0;
        }
        continue;
    }
Considering the condition if (mcc->free_table[t] == 0xff), cur will never equal t. cur == t would imply that there was no zero bit in mcc->free_table[t], which is already ruled out by the previous condition. Instead of this check, I'd simply place a comment above the for loop: "There is a zero bit in the byte, find it". I think it would improve the readability of the code.
Ah excellent analysis, you are totally correct. I attached a patch that removes the unnecessary check and adds comments to make it easier to read.
responder/nss/nsssrv_mmap_cache.c:242
- the assignment is redundant
I guess it's t; yeah, that's a leftover...
responder/nss/nsssrv_mmap_cache.c:275
- why this condition? I think the following while will cover it
slot is used to dereference data from the table; if it accesses beyond the table, it will cause a segfault.
I know that; my point was that, from my understanding, the slot variable can only contain a valid number or MC_INVALID_VAL, nothing else. Of course, if I missed anything or you are just being defensive just in case, I withdraw my comment.
Purely defensive yeah.
Simo.
Much more readable now, thank you
Ack to all patches. When pushing, please note that this one replaces the original 0003-nsssrv-Add-memory-cache-record-handling-utils.patch
Jan
On Mon, 2012-03-19 at 14:40 +0100, Jan Zelený wrote:
Much more readable now, thank you
Ack to all patches. When pushing, please note that this one replaces the original 0003-nsssrv-Add-memory-cache-record-handling-utils.patch
Pushed to master. Exciting!