Hi listers,
I have a very curious problem here.
This morning I changed some CNAME entries in named for a specific domain.
When I now (from an internal workstation) do a dig @nameserver cname, I get a different answer depending on whether @nameserver points to the local address 192.168.... or to the public IP address 212.90..... of the nameserver.
In the first case, named returns the old (incorrect) address for the CNAME; in the second case, named returns the new (correct) address.
When I point @nameserver at the secondary nameserver, the difference does not show up: it always returns the correct address.
From the outside world, the new (correct) value is always returned.
What could the problem be, and how can I avoid it?
Thanks in advance,
suomi
On 06/29/2011 01:26 PM, fedora wrote:
From the outside world, the new (correct) value is always returned.
What could the problem be, and how can I avoid it?
You are viewing the contents of different caches. Your internal "view" has one cache and your "external" view has another (I assume you're using BIND views). So, for example, if you do:
dig @127.0.0.1 whatever.com
...you may not get the same result as:
dig @my-external-ip whatever.com
I don't know what your BIND setup looks like, but which cache you hit will depend on your configuration. If your views are separated based on destination IP (as opposed to source IP), then you could place your external IP in /etc/resolv.conf so that answers to queries made from the server itself match what external clients see. But then again, be careful about how you reference your server when using dig because, as I mentioned, you'll get different results.
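Just to illustrate what I mean (this is a made-up sketch, not your actual config), a views setup in named.conf often looks something like this:

    // Hypothetical named.conf fragment -- zone names and networks are invented.
    acl "internal-nets" { 192.168.0.0/16; 127.0.0.0/8; };

    view "internal" {
        match-clients { "internal-nets"; };     // view chosen by *source* address
        // match-destinations { ...; };         // alternative: choose by the
        //                                      // address the query arrived on
        recursion yes;
        zone "example.com" {
            type master;
            file "example.com.internal";        // internal copy of the zone
        };
    };

    view "external" {
        match-clients { any; };                 // everyone else
        recursion no;
        zone "example.com" {
            type master;
            file "example.com.external";        // external copy of the zone
        };
    };

If the selection is done by destination (match-destinations), then a dig at the 192.168.x.x address and a dig at the 212.90.x.x address from the same workstation land in different views, each with its own zone data and cache, which would explain the difference you're seeing.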
HTH, Jorge
On 06/29/2011 11:04 AM, Jorge Fábregas wrote:
You are viewing the contents of different caches. Your internal "view" has one cache and your "external" view has another (I assume you're using BIND views). [...]
You might also check to see if you have nscd running on the machine and if so, you might want to purge its cache: "nscd -i hosts".
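For example, assuming nscd is actually installed on the box, something along these lines:

    # Check whether nscd is running and, if so, flush its hosts cache.
    pgrep -l nscd        # any output means the daemon is up
    nscd -g | head       # optional: show its statistics/configuration
    nscd -i hosts        # invalidate the 'hosts' cache table (run as root)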
----------------------------------------------------------------------
- Rick Stevens, Systems Engineer, C2 Hosting           ricks@nerd.com -
- AIM/Skype: therps2        ICQ: 22643734        Yahoo: origrps2      -
-                                                                     -
-       Brain: The organ with which we think that we think.           -
----------------------------------------------------------------------
On Wed, 2011-06-29 at 19:26 +0200, fedora wrote:
I changed some CNAME entries in named for a specific domain this morning.
How did you make the change, and where? And how is the change supposed to propagate to the other name servers?
When I now (from an internal workstation) do a dig @nameserver cname, I get a different answer depending on whether @nameserver points to the local address 192.168.... or to the public IP address 212.90..... of the nameserver.
In the first case, named returns the old (incorrect) address for the CNAME; in the second case, named returns the new (correct) address.
When I point @nameserver at the secondary nameserver, the difference does not show up: it always returns the correct address.
From the outside world, the new (correct) value is always returned.
And how have you set up your domain records?
If they have a long time to live, then anything that has queried a record can cache the result for that length of time before bothering a master server to check for newer data. Likewise for any client that has queried any server.
Long expiry times reduce network traffic, but they can cause stale data to be served by slave servers for longer.
Master servers can be configured to notify their slaves to update their records when records change, but that's an option, not mandatory behaviour.
Also, when you changed your record data, did you (or some software) update the zone's serial number? If the serial number doesn't increment, slaves (and clients) may never bother checking for updated records.
There are several pieces of data in a zone's SOA record that control when anyone checks for updates to the records in it (see the example record after this list):
Serial number - must increment whenever any records are changed. If the serial hasn't changed, the records are considered the same as last time (and that affects everything mentioned below). If it has changed, slaves and clients should fetch new data now. Of course, they may not bother to check if they've cached records and were told, at caching time, to hang onto the data for a long time; they'll check the serial number and update records only after their caching period runs out.
Refresh period - slaves check the master for changes /this/ many seconds after the last successful check.
Retry period - wait /this/ many seconds before trying again if the master didn't respond to a refresh attempt.
Expiry period - if refreshing keeps failing, discard the zone data /this/ many seconds after the last successful refresh. i.e. the slave may dole out old information for this long, then expunge it.
Time to live period - other servers and clients may hold onto, and dole out, cached answers for /this/ amount of time before asking an authoritative server again.
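For reference, here is a rough sketch of where those values live in a zone file; the names and numbers below are invented, not taken from your zone:

    ; Hypothetical zone file fragment for example.com -- illustrative values only.
    $TTL 3600        ; default time-to-live handed out with each record: 1 hour
    @   IN  SOA  ns1.example.com. hostmaster.example.com. (
                2011063001   ; serial  - bump on EVERY change (here: YYYYMMDDnn)
                7200         ; refresh - slaves re-check the master every 2 hours
                900          ; retry   - retry after 15 minutes if that check fails
                1209600      ; expire  - slaves drop the zone after 2 weeks without contact
                300 )        ; negative-caching TTL for "no such record" answers
        IN  NS     ns1.example.com.
    www IN  CNAME  webhost.example.com.

If the serial doesn't change, the slaves won't transfer the new data at all; and with "notify yes;" in named.conf the master will prod the slaves as soon as you reload the zone (e.g. with "rndc reload").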
Hi Tim,
I think the problem comes from the different views the DNS server provides.
In my config I should be more specific about which zone belongs to which view. In the internal view (which by default I hit when accessing the DNS server from the internal network), a change to an external address may simply not be reflected. Because I am not so familiar with configuring views in named, as a workaround I told dhcpd to hand out the external address of the DNS server to all clients, so that I always get hold of the external view. And in the external view, the addresses are correct.
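Longer term, I suppose the cleaner fix would be to declare the changed zone in both views so that internal and external clients get the same answers; very roughly, and with made-up names:

    // Hypothetical named.conf sketch -- the same zone file is used in both
    // views, so both return identical data for it.
    view "internal" {
        match-clients { 192.168.0.0/16; localhost; };
        zone "yourdomain.example" {
            type master;
            file "yourdomain.example.zone";   // shared zone file
        };
        // ... internal-only zones go here ...
    };

    view "external" {
        match-clients { any; };
        zone "yourdomain.example" {
            type master;
            file "yourdomain.example.zone";   // same file as above
        };
    };

That way it shouldn't matter whether a workstation reaches named on the 192.168.x.x or the 212.90.x.x address.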
suomi