I accidentally found this issue in ldap_id.c and then checked the code
to see if I could find other places as well.
There are still two places, in ipa_hbac_rule_info_send() and
sdap_id_op_connect_send(), where NULL is not returned immediately if
tevent_req_create() fails, but I think both are safe.
We have 19 patches in the 1.11 branch on top of the latest release (1.11.7).
I went through bugs filed against the 1.11 branch and picked out
crashes and the most important bugs.
Attached are cherry-picked patches from 1.12/master,
so we can release the latest 1.11 version. It will be just a bug-fix release,
so conservative distributions can use it; in other words,
we don't want to leave the 1.11 branch either unreleased or in an unclosed state :-)
BTW I tested the packages with the LDAP, KRB5 and AD tests. Of course there aren't any
regressions because it is a bug-fix-only release.
This ticket is a little bit related to #2855.
I searched around a little and here is a small summary of
using autofs + atomic (containers):
>We have a pull request in RUNC to eliminate our patch.
>A second feature of this pull request would be to allow us to pass in
>the MOUNT_SHARED flag
>This would allow us to modify the hosts mount table from inside of a
>container. With this feature
>we would be able to run a service like autofs inside of a container but
>have it modify the HOST
>file system and those of other containers.
>I think if we want to get autofs to work on "atomic host" we need to run
>it in a container.
The attached patch will reduce the dependency tree in such a container.
I created the patch with a separate package because "sss" is not in
nsswitch.conf for "automount" by default. This file could be part of sssd-client,
but on the other hand automount directly dlopens libsss_autofs.so,
so I'm not sure which solution would be better.
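For context, wiring "sss" into automount lookups means a line like the following in nsswitch.conf (the exact source ordering is up to the administrator; this is just an illustration):

```
automount: sss files
```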
I'm starting to implement tlog configuration interfaces and would like
to know what you'd like to use best in SSSD.
Among tlog parameters are:
* Path to the shell to start
* The text for the warning about the session being recorded
* Logging latency, in seconds - how long to cache recorded data before logging
* Maximum log message payload, in bytes
* Log target (file / syslog / perhaps journald later)
* Log target options
I guess out of these only a few would be controlled by SSSD.
I'd like to have three interfaces implemented:
* Command-line options
* Environment variable(s)
* Configuration file in /etc, in JSON (tlog needs it anyway)
Ideally, all the parameters should be controllable from any of them, but the
setting priority would be as above.
Our main use case for the start would require faking tlog as the shell in
nss_sss, passing the real shell in pam_sss via an environment variable and
letting the administrator configure the rest via the configuration file.
Command-line interface would be used to support "login" asking for login
shell, ssh doing the same and passing commands to execute, and testing.
Later we might want to add more parameters passed via pam_sss and the environment.
SSSD may also choose to write the tlog config file, but I think that it's
better to leave that for the administrators and only use environment
variable(s) from pam_sss instead.
Regarding that, I'm actually thinking about simply accepting the same data as
the configuration file provides, via an environment variable, i.e. in JSON. It
wouldn't need to be complete, and would be overlaid on top of what was read
from the configuration file. So for the start pam_sss would need to pass just the shell.
Later it might grow into something like this:
"warning": "WARNING! Your session is being recorded!\n",
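As a purely hypothetical illustration (the key names here are assumptions, not tlog's actual schema), the overlay passed through the environment variable might then look like:

```json
{
  "shell": "/bin/bash",
  "warning": "WARNING! Your session is being recorded!\n",
  "latency": 10
}
```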
The above would require implementing JSON string escaping, but it's not
difficult and pretty much the same as C string escaping everyone's familiar
with (see http://json.org).
The alternatives are:
* Supplying all the possible options via separate environment variables.
That would require documenting them separately.
* Having an environment variable containing command-line options instead.
However, the latter would require handling word-splitting and unquoting
the same way shell does, and that's non-trivial without asking an actual
shell to do it. Whereas tlog already has a JSON parser.
So would the above be suitable for SSSD? Would pam_sss be OK with passing more
parameters than just the shell to start? Do you have any other ideas or
objections? Please write!
I've been working on this for some time now. The current interface has
several disadvantages: mostly code duplication, a poor memory hierarchy,
and the difficulty of adding new DP methods. I also disliked the fact that
there is no module constructor, that we have to create id_context
everywhere, and that we work with D-Bus on the module side. All this and more
should be solved by these patches.
It is still work in progress and this is a showcase or proof of concept
if you will. In the first version I want to keep DP methods as is, just
use a new interface between DP and backend. This allows us to take
advantage of the new code without actually changing anything in
responder. I would also like to implement module's constructor so id
context may be shared in a better way.
Currently only sudo is implemented and I'd like to wait for your opinions.
In the second version I would like to change the DP methods to be more
granular and to have automatically parsed parameters and separate
output parameters, so we don't have to misuse return codes. This, however,
will require changes on the responder side (mostly to handle errors
correctly). I'm thinking about writing something similar to cache_req
for this purpose. There is commented code in the last patch that shows
how this change may simplify the handlers.
Here is a short list of benefits over current code:
* data provider is a black box completely separated from backend
* method handlers are just simple tevent requests on backend side
* no need for a spy on be_client
* simplified and error-proof adding of new responders
* simplified adding of new methods
* reply to D-Bus message is completely handled by DP code
* each target can have several methods defined
* properties can be added on objects
* each method can have output parameters
* modules now support constructor
* improved debugging
* clear memory hierarchy
The code compiles but it is not yet used.
The first patches just rename files so it is clear what is data provider
and what is backend. It is something I know we wanted to do a long time
ago but there was never a good enough reason to touch those files.
I hope you like it.