[389-users] Accessing TCP options data in 389ds

Michael Lang michael.lang at CTBTO.ORG
Sat Jul 13 05:36:59 UTC 2013


Hello,

On 07/13/2013 12:07 AM, Grzegorz Dwornicki wrote:
>
> Ok thanks for clarification. I thought you might do this in simpler way.
>


We are doing it in the following way (it doesn't matter which load 
balancer you pick, as long as it can preserve the real client IP in 
some manner; in IPVS this is called gateway mode):


the load balancer has the Service IP (e.g. 192.168.0.1)
   self IP (192.168.0.2)
   floating IP (192.168.0.4)
with two nodes configured:
     Node1 (e.g. 192.168.0.5)
     Node2 (e.g. 192.168.0.6)

Most likely all implementations will have the same-subnet restriction 
in order to preserve the IP (IPVS can also do something like tunnel 
mode).

Normally it works like this: you set the default gateway of both nodes 
to the floating IP of the LB (which floats between the LBs in case of 
failure). The LB is then able to "rewrite" the response, and your 
nodes will see the real client IP of the request.

[ client (192.168.1.1) ] -> [ LB (192.168.0.1) ] -> [ Node1 (192.168.0.5) sees 192.168.1.1 ] -> [ LB (rewrites) ] -> [ client ]
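The gateway change on the nodes boils down to something like this (a 
sketch using the example addresses above; the interface name eth0 is 
an assumption):

```shell
# On Node1 and Node2: route all outbound traffic via the LB's
# floating IP (192.168.0.4) instead of the normal subnet gateway.
ip route replace default via 192.168.0.4 dev eth0
```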

This works fine as long as you have enough network segments and size 
them for the minimum possible number of hosts (network + broadcast + 
3 IPs for the LB plus N nodes).

It's different within the same subnet: in the above example you need 
to send "everything" that is not in the same subnet (backups, 
replication, ...) through the load balancer, losing the ability to 
restrict the outgoing source of Node1/2 towards other services.

We've implemented it using source policy routing, which is more or 
less the same as above, but limits which packets are sent through the 
load balancer via a set of iptables MARK rules.

So you set up Node1/2 without changing the default gateway. You 
configure iptables to classify (LDAP is quite easy for that) all 
packets originating from source port 389 towards any client except 
the replication nodes, and mark them. With source policy routing you 
then define a new routing table for these marked packets, whose 
default route points to the floating IP of the LB.

Example (assume the same IPs as above):

Node setup:
$ iptables -t mangle -N LB
$ iptables -t mangle -A OUTPUT -p tcp --sport 389 -j LB
$ iptables -t mangle -A LB -d 192.168.0.5 -j RETURN  # exclude replication to the peer node
$ iptables -t mangle -A LB -d 192.168.0.2 -j RETURN  # exclude LB1's interface IP
$ iptables -t mangle -A LB -d 192.168.0.3 -j RETURN  # exclude LB2's interface IP
$ iptables -t mangle -A LB -j MARK --set-mark 0x1

These rules create a chain (LB) in the mangle table which does 
nothing for the replication node(s) and for the interface IPs of the 
LBs (which in most cases are used for health checks); anything else 
gets its packets marked with 0x1.

$ ip route add default via 192.168.0.4 table 101
$ ip rule add fwmark 0x1 lookup 101 pref 1

These two ip commands create another routing table, so that packets 
marked with 0x1 are sent through the floating IP of the load 
balancer (192.168.0.4).
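To verify that the marking and policy routing are in place, something 
like the following can be used (a sketch; table 101 and mark 0x1 
match the example above):

```shell
# Show the rule that diverts marked packets to table 101
ip rule show
# The alternate table should contain only the default route
# via the LB's floating IP
ip route show table 101
# Packet/byte counters confirm LDAP responses are actually marked
iptables -t mangle -L LB -v -n
```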

This also works pretty well within the same subnet. The only 
exception is if the two nodes need to communicate with the same 
service name/address (i.e. authenticating themselves against LDAP). 
In that case you can add a "local" dummy IP with the LB service 
address, so that those packets stay local.
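The "local" dummy IP workaround might look like this (a sketch; the 
dummy interface name and the /32 prefix are assumptions):

```shell
# Give the node a local copy of the LB's service address so that
# its own LDAP connections to 192.168.0.1 never leave the host.
ip link add ldapsvc type dummy
ip addr add 192.168.0.1/32 dev ldapsvc
ip link set ldapsvc up
```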

regards
mIke



> On 12 Jul 2013 23:57, "Justin Kinney" <jakinne+389-users at gmail.com> wrote:
>
>
>
>
>     On Fri, Jul 12, 2013 at 2:50 PM, Grzegorz Dwornicki
>     <gd1100 at gmail.com> wrote:
>
>         That is true, but the load balancer's iptables see incoming
>         requests as they are. I'm not sure that this is what you
>         need. What information do you wish to receive, besides the
>         real client IP?
>
>     At the moment, the search node behind the load balancer sees the
>     source of all requests as the egress interface of the load
>     balancer. Access to the load balancer is not possible, but it can
>     insert the data as described into the options field of the TCP
>     header, and so it may be possible to do something with iptables on
>     the search node. The goal here (if possible) is simply to log the
>     true client IP address in the access log of the search node.
>
>         On 12 Jul 2013 23:48, "Justin Kinney"
>         <jakinne+389-users at gmail.com> wrote:
>
>
>
>
>             On Fri, Jul 12, 2013 at 2:32 PM, Grzegorz Dwornicki
>             <gd1100 at gmail.com> wrote:
>
>                 Are you doing this on loadbalancer? You can use
>                 iptables with log target but if this is not
>                 sufficient, then some kind of sniffer like tcpdump
>                 might be helpful
>
>
>             The loadbalancer will add the client ip address to the TCP
>             options field of the client request prior to passing to
>             the servicing node behind the LB.
>
>                 On 12 Jul 2013 23:27, "Rich Megginson"
>                 <rmeggins at redhat.com> wrote:
>
>                     On 07/12/2013 03:25 PM, Justin Kinney wrote:
>>                     Hello,
>>
>>                     I'm investigating the possibility of logging
>>                     client IP address where 389ds is deployed behind
>>                     a load balancer. Today, we lose the true client
>>                     IP address as the source IP is replaced with the
>>                     load balancer's before the packet hits the 389
>>                     host. Has anybody solved this issue before?
>>
>>                     For HTTP based services, this problem is trivial
>>                     to overcome by grokking the X-Forwarded-For
>>                     header from the request, but obviously this
>>                     doesn't work with a service like LDAP deployed
>>                     behind a TCP based load balancing instance.
>>
>>                     One option is to use a direct server return (DSR)
>>                     configuration with our load balancer and host,
>>                     but that adds a lot of overhead to our
>>                     environment in terms of configuration complexity,
>>                     so I'd like to avoid that.
>>
>>                     Another option is using an interesting capability
>>                     of our load balancer (and I'm not sure how unique
>>                     this feature is - I'd be interested in hearing if
>>                     anyone else has run across it). It can insert the
>>                     client IP address into the TCP stream, as
>>                     arbitrary data in the options field of the TCP
>>                     header. Existence of an address is also indicated
>>                     by a magic number (which can uniquely identify
>>                     the VIP on the load balancer).
>>
>>                     What would it take to modify 389 to access the
>>                     raw TCP header, parse the options field to get
>>                     the true client IP, and then associate it with
>>                     the request? Ideally, the client IP would be
>>                     accessible in the access log.
>
>                     I don't know - what are the TCP/IP/socket API
>                     calls that are required to get this data?
>
>>
>>                     Thanks in advance,
>>                     Justin
>>
>>
>>                     --
>>                     389 users mailing list
>>                     389-users at lists.fedoraproject.org
>>                     https://admin.fedoraproject.org/mailman/listinfo/389-users
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>


