
AllGoodBits.org


A Pattern for DNS Architecture

This is a pattern for a mid-size organisation to provide internal DNS service: authoritatively serving zones we control, and providing cached resolution for zones we do not control.

It is not intended for scenarios with thousands of zones, massive record churn or other high-performance situations, at least not without modification.

First the pattern, then some descriptive material to explain the pattern.

Request Resolution

For clarity, there are basically three types of request:

  1. An internal request for a record in a zone that we control (desktop client -> "db1.example.com, please?"): Internal Resolution of Controlled Zones
  2. An internal request for a record in a zone that we do not control (desktop client -> "www.google.com, please?"): Internal Resolution of External Zones
  3. An external request for a record in a zone that we control (customer's desktop -> "www.example.com, please?"): External Resolution of Controlled Zones

The basic ideas of resolution are that when a client makes a DNS request:

  • it gets a correct, current answer
  • that answer is cached to reduce the cost of another client making the same query

Requests from Internal Clients

  • On every Linux host, /etc/resolv.conf contains a list of internal DNS servers, our caching resolvers:

    nameserver 10.41.0.10
    nameserver 10.8.3.15
    

    I will refer to the servers listed in these entries as cache/slave servers, or internal slaves.

Internal Resolution of Controlled Zones
  • Each cache/slave runs a caching resolver and a slave authoritative server for the zones we control.
  • For each controlled zone, named.conf(5) will contain a zone stanza specifying that the server is authoritative for that zone and will accept notifications from the master, which trigger zone transfers.
Internal Resolution of External Zones

From the perspective of the client, these requests are the same as queries for internally controlled zones. However, the cache/slave machines do not consider themselves to be authoritative for these zones, so they initially check their caches, then forward, then recurse until they can answer.

  • Each cache/slave first forwards requests that it cannot answer itself; if the destination(s) to which the request is forwarded cannot answer, resolution falls back to the traditional DNS recursion algorithm. Therefore, named.conf(5) will contain:

    forward first;
    forwarders { <ip of upstream server #1>; <ip of upstream server #2>; };
    

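Putting the pieces together, the resolver-facing part of a cache/slave configuration might look something like the sketch below. The acl name internal and the 10.0.0.0/8 network are placeholder assumptions; substitute your own client networks and upstream addresses.

    acl internal {
      10.0.0.0/8;                     // placeholder: our internal client networks
    };

    options {
      recursion yes;                  // answer recursive queries...
      allow-recursion { internal; };  // ...but only for internal clients
      forward first;                  // try the forwarders before recursing ourselves
      forwarders { <ip of upstream server #1>; <ip of upstream server #2>; };
    };
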
External Resolution of Controlled Zones

When a machine that is outside our network makes a request for a record in one of the controlled zones, it should be answered. The registrar has a list of publicly accessible authoritative nameservers for each zone, each of which is a slave receiving zone transfers from our Hidden Master.

Each slave should be listed in an ACL in named.conf on the Hidden Master, so that the master sends it zone transfers:

acl slaves {
  <ip address of slave #1>;
  <ip address of slave #2>;
};
zone "example.com" {
  type master;
  file "example.com.db";
  notify yes;
  allow-transfer {
    slaves;
  };
};

Zone Administration

Hidden Master

The Hidden Master is the machine where I, the zone admin, manage my zones. This makes for a single authoritative source of information and allows software to take care of propagating my changes.

  • Updates to zone files are made to a single machine only, which then notifies its slaves of changes so that they pull the updates via zone transfers.
  • The primary nameserver (MNAME) in the Start Of Authority (SOA) record should not be the name of the hidden master; that would mean it is not hidden. It should be the name of one of the slaves.
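
For example, the SOA record in the example.com zone file might name a public slave as its MNAME. Here ns1.example.com and hostmaster.example.com are placeholders for one of your public slaves and your admin mailbox:

    example.com.  IN  SOA  ns1.example.com. hostmaster.example.com. (
                      2024010101 ; serial
                      3600       ; refresh
                      900        ; retry
                      604800     ; expire
                      86400      ; negative-caching TTL
                  )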

Slaves

Slaves are nameservers that receive notifications of changes to zones and then update their copies of those zones via zone transfers.

Internal or private Slaves
  • Each cache/slave is configured as a slave authoritative server for zones we control. This means that it is effectively acting as a backup authoritative name server for the master. So stanzas will look similar to this:

    zone "example.com" {
      type slave;
      file "slaves/example.com.db";
      masters {
        <ip of Hidden Master>;
      };
    };
    
Publicly Accessible Slaves
  • Publicly accessible slaves should not answer queries for records in zones not under our control; in other words, they should not permit recursion:

    options {
      recursion no;
    };
    
  • As with the internal slaves, each publicly accessible slave is configured as a slave authoritative server for the zones we control, effectively acting as a backup authoritative name server for the master. Note that the IP address of the hidden master in this stanza is the address the slave will see the zone transfer connection come from; if your hidden master is behind NAT, that is the external IP of your LAN. So stanzas will look similar to this:

    zone "example.com" {
      type slave;
      file "slaves/example.com.db";
      masters {
        <ip of Hidden Master>;
      };
    };
    

Split DNS, as provided by Views

If you want to provide different answers depending on the source of the query, that is easily achievable with this setup. It is a little controversial whether this is a good idea; some suggest simply using a different zone internally. In the general case I take neither side in this argument; I advise a choice based on the specifics of the case.

For a single zone, example.com, we can have an internal view and an external view that fit into the above pattern as follows. Remember that the order of views is significant; a client request will be resolved in the context of the first view that it matches.
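
As a sketch, the named.conf on a cache/slave might contain the pair of views below. The internal acl and its 10.0.0.0/8 network are placeholder assumptions; adapt the match-clients networks to your own addressing.

    acl internal {
      10.0.0.0/8;                 // placeholder: our internal client networks
    };

    view "internal" {
      match-clients { internal; };    // matched first: internal clients land here
      recursion yes;
      zone "example.com" {
        type slave;
        file "slaves/internal/example.com.db";
        masters { <ip of Hidden Master>; };
      };
    };

    view "external" {
      match-clients { any; };         // everyone else falls through to here
      recursion no;
      zone "example.com" {
        type slave;
        file "slaves/external/example.com.db";
        masters { <ip of Hidden Master>; };
      };
    };

Note that for this to work the Hidden Master must itself serve different zone data to internal and external slaves, commonly distinguished by source address or TSIG keys on the zone transfers.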