17. Replication
Replicated directories are a fundamental requirement for delivering a resilient enterprise deployment.
OpenLDAP has various configuration options for creating a replicated directory. The following sections will discuss these.
17.1. Push Based
17.1.1. Replacing Slurpd
Slurpd replication has been deprecated in favor of Syncrepl replication and has been completely removed from OpenLDAP 2.4.
Why was it replaced?
The slurpd daemon was the original replication mechanism inherited from UMich's LDAP and operates in push mode: the master pushes changes to the slaves. It has been replaced for many reasons, in brief:
- It is not reliable
- It is extremely sensitive to the ordering of records in the replog
- It can easily go out of sync, at which point manual intervention is required to resync the slave database with the master directory
- It isn't very tolerant of unavailable servers. If a slave goes down for a long time, the replog may grow to a size that's too large for slurpd to process
What was it replaced with?
Syncrepl
Why is Syncrepl better?
- Syncrepl is self-synchronizing; you can start with a database in any state from totally empty to fully synced and it will automatically do the right thing to achieve and maintain synchronization
- Syncrepl can operate in either direction
- Data updates can be minimal or maximal
How do I implement a push-based replication system using Syncrepl?
The easiest way is to point an LDAP backend (see Backends and slapd-ldap(8)) at your slave directory and set up Syncrepl to point at your Master database.
Syncrepl pulls changes down from the Master server, and slapd-ldap(8) then pushes those changes out to your slave servers; this arrangement is commonly referred to as proxy mode. Working examples of this push/multi-proxy setup can be found in test045 and test048 of the OpenLDAP test suite.
Here's an example:
include ./schema/core.schema
include ./schema/cosine.schema
include ./schema/inetorgperson.schema
include ./schema/openldap.schema
include ./schema/nis.schema

pidfile  /home/ghenry/openldap/ldap/tests/testrun/slapd.3.pid
argsfile /home/ghenry/openldap/ldap/tests/testrun/slapd.3.args

modulepath ../servers/slapd/back-bdb/
moduleload back_bdb.la
modulepath ../servers/slapd/back-monitor/
moduleload back_monitor.la
modulepath ../servers/slapd/overlays/
moduleload syncprov.la
modulepath ../servers/slapd/back-ldap/
moduleload back_ldap.la

# We don't need any access to this DSA
restrict all

#######################################################################
# consumer proxy database definitions
#######################################################################
database        ldap
suffix          "dc=example,dc=com"
rootdn          "cn=Whoever"
uri             ldap://localhost:9012/
lastmod         on

# HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply
# without the need to write the UpdateDN before starting replication
acl-bind        bindmethod=simple binddn="cn=Monitor" credentials=monitor

# HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply
# without the need to write the UpdateDN before starting replication
syncrepl        rid=1
                provider=ldap://localhost:9011/
                binddn="cn=Manager,dc=example,dc=com"
                bindmethod=simple
                credentials=secret
                searchbase="dc=example,dc=com"
                filter="(objectClass=*)"
                attrs="*,structuralObjectClass,entryUUID,entryCSN,creatorsName,createTimestamp,modifiersName,modifyTimestamp"
                schemachecking=off
                scope=sub
                type=refreshAndPersist
                retry="5 5 300 5"

overlay         syncprov

database        monitor
In this configuration, the syncrepl directive pulls changes from the Master at ldap://localhost:9011/, binding as cn=Manager,dc=example,dc=com. Because the database holding the replicated content is an ldap backend pointed at ldap://localhost:9012/, every change the syncrepl engine applies is forwarded to that slave rather than stored locally, which is what turns a pull from the master into a push to the slave. The restrict all directive keeps ordinary clients from using this proxy DSA, and the syncprov overlay is stacked on top of the proxy database.
As you can see, Syncrepl and slapd-ldap(8) can be combined in many ways to tailor replication to your specific network topology.
17.2. Pull Based
17.2.1. LDAP Sync Replication
Syncrepl uses the LDAP Content Synchronization (or LDAP Sync for short) protocol as the replica synchronization protocol. It provides a stateful replication which supports both pull-based and push-based synchronization and does not mandate the use of a history store.
Syncrepl keeps track of the status of the replication content by maintaining and exchanging synchronization cookies. Because the syncrepl consumer and provider maintain their content status, the consumer can poll the provider content to perform incremental synchronization by asking for the entries required to make the consumer replica up-to-date with the provider content. Syncrepl also enables convenient management of replicas by maintaining replica status. The consumer replica can be constructed from a consumer-side or a provider-side backup at any synchronization status. Syncrepl can automatically resynchronize the consumer replica up-to-date with the current provider content.
Syncrepl supports both pull-based and push-based synchronization. In its basic refreshOnly synchronization mode, the provider uses pull-based synchronization where the consumer servers need not be tracked and no history information is maintained. The information required for the provider to process periodic polling requests is contained in the synchronization cookie of the request itself. To optimize the pull-based synchronization, syncrepl utilizes the present phase of the LDAP Sync protocol as well as its delete phase, instead of falling back on frequent full reloads. To further optimize the pull-based synchronization, the provider can maintain a per-scope session log as a history store. In its refreshAndPersist mode of synchronization, the provider uses a push-based synchronization. The provider keeps track of the consumer servers that have requested a persistent search and sends them necessary updates as the provider replication content gets modified.
With syncrepl, a consumer server can create a replica without changing the provider's configuration and without restarting the provider server, provided the consumer server has appropriate access privileges for the DIT fragment to be replicated. The consumer server can also stop replication without any provider-side changes or restart.
Syncrepl supports both partial and sparse replications. The shadow DIT fragment is defined by general search criteria consisting of base, scope, filter, and attribute list. The replica content is also subject to the access privileges of the bind identity of the syncrepl replication connection.
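For instance, a hypothetical consumer fragment (only the search specification is shown here; the rid, provider and bind parameters are illustrative, and the full set of required syncrepl parameters is covered in the Syncrepl configuration section later in this chapter) could restrict the replica to the people subtree and a handful of attributes:

syncrepl rid=010
        provider=ldap://provider.example.com
        searchbase="ou=people,dc=example,dc=com"
        scope=sub
        filter="(objectClass=inetOrgPerson)"
        attrs="cn,sn,mail"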
17.2.1.1. The LDAP Content Synchronization Protocol
The LDAP Sync protocol allows a client to maintain a synchronized copy of a DIT fragment. The LDAP Sync operation is defined as a set of controls and other protocol elements which extend the LDAP search operation. This section introduces the LDAP Content Sync protocol only briefly. For more information, refer to RFC4533.
The LDAP Sync protocol supports both polling and listening for changes by defining two respective synchronization operations: refreshOnly and refreshAndPersist. Polling is implemented by the refreshOnly operation. The client copy is synchronized to the server copy at the time of polling. The server finishes the search operation by returning SearchResultDone at the end of the search operation as in the normal search. The listening is implemented by the refreshAndPersist operation. Instead of finishing the search after returning all entries currently matching the search criteria, the synchronization search remains persistent in the server. Subsequent updates to the synchronization content in the server cause additional entry updates to be sent to the client.
The refreshOnly operation and the refresh stage of the refreshAndPersist operation can be performed with a present phase or a delete phase.
In the present phase, the server sends the client the entries updated within the search scope since the last synchronization. The server sends all requested attributes, whether changed or not, of the updated entries. For each unchanged entry which remains in the scope, the server sends a present message consisting only of the name of the entry and the synchronization control representing state present. The present message does not contain any attributes of the entry. After the client receives all update and present entries, it can reliably determine the new client copy by adding the entries added to the server, by replacing the entries modified at the server, and by deleting entries in the client copy which have not been updated nor specified as being present at the server.
The transmission of the updated entries in the delete phase is the same as in the present phase. The server sends all the requested attributes of the entries updated within the search scope since the last synchronization to the client. In the delete phase, however, the server sends a delete message for each entry deleted from the search scope, instead of sending present messages. The delete message consists only of the name of the entry and the synchronization control representing state delete. The new client copy can be determined by adding, modifying, and removing entries according to the synchronization control attached to the SearchResultEntry message.
In the case that the LDAP Sync server maintains a history store and can determine which entries are scoped out of the client copy since the last synchronization time, the server can use the delete phase. If the server does not maintain any history store, cannot determine the scoped-out entries from the history store, or the history store does not cover the outdated synchronization state of the client, the server should use the present phase. The use of the present phase is much more efficient than a full content reload in terms of the synchronization traffic. To reduce the synchronization traffic further, the LDAP Sync protocol also provides several optimizations such as the transmission of the normalized entryUUIDs and the transmission of multiple entryUUIDs in a single syncIdSet message.
At the end of the refreshOnly synchronization, the server sends a synchronization cookie to the client as a state indicator of the client copy after the synchronization is completed. The client will present the received cookie when it requests the next incremental synchronization to the server.
When refreshAndPersist synchronization is used, the server sends a synchronization cookie at the end of the refresh stage by sending a Sync Info message with TRUE refreshDone. It also sends a synchronization cookie by attaching it to SearchResultEntry generated in the persist stage of the synchronization search. During the persist stage, the server can also send a Sync Info message containing the synchronization cookie at any time the server wants to update the client-side state indicator. The server also updates a synchronization indicator of the client at the end of the persist stage.
In the LDAP Sync protocol, entries are uniquely identified by the entryUUID attribute value. It can function as a reliable identifier of the entry. The DN of the entry, on the other hand, can be changed over time and hence cannot be considered as the reliable identifier. The entryUUID is attached to each SearchResultEntry or SearchResultReference as a part of the synchronization control.
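To illustrate, entryUUID is an operational attribute, so it is only returned when requested explicitly (or via the "+" shorthand). A quick way to inspect it, assuming an illustrative provider host and a hypothetical user entry uid=jdoe, is:

ldapsearch -x -H ldap://provider.example.com -b "dc=example,dc=com" \
        "(uid=jdoe)" entryUUID entryCSN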
17.2.1.2. Syncrepl Details
The syncrepl engine utilizes both the refreshOnly and the refreshAndPersist operations of the LDAP Sync protocol. If a syncrepl specification is included in a database definition, slapd(8) launches a syncrepl engine as a slapd(8) thread and schedules its execution. If the refreshOnly operation is specified, the syncrepl engine will be rescheduled at the interval time after a synchronization operation is completed. If the refreshAndPersist operation is specified, the engine will remain active and process the persistent synchronization messages from the provider.
The syncrepl engine utilizes both the present phase and the delete phase of the refresh synchronization. It is possible to configure a per-scope session log in the provider server which stores the entryUUIDs of a finite number of entries deleted from a replication content. Multiple replicas of single provider content share the same per-scope session log. The syncrepl engine uses the delete phase if the session log is present and the state of the consumer server is recent enough that no session log entries are truncated after the last synchronization of the client. The syncrepl engine uses the present phase if no session log is configured for the replication content or if the consumer replica is too outdated to be covered by the session log. The current design of the session log store is memory based, so the information contained in the session log is not persistent over multiple provider invocations. It is not currently supported to access the session log store by using LDAP operations. It is also not currently supported to impose access control to the session log.
As a further optimization, even in the case the synchronization search is not associated with any session log, no entries will be transmitted to the consumer server when there has been no update in the replication context.
The syncrepl engine, which is a consumer-side replication engine, can work with any backend. The LDAP Sync provider can be configured as an overlay on any backend, but works best with the back-bdb or back-hdb backend.
The LDAP Sync provider maintains a contextCSN for each database as the current synchronization state indicator of the provider content. It is the largest entryCSN in the provider context such that no transactions for an entry having a smaller entryCSN value remain outstanding. The contextCSN could not just be set to the largest issued entryCSN because entryCSN is obtained before a transaction starts and transactions are not committed in the issue order.
The provider stores the contextCSN of a context in the contextCSN attribute of the context suffix entry. The attribute is not written to the database after every update operation though; instead it is maintained primarily in memory. At database start time the provider reads the last saved contextCSN into memory and uses the in-memory copy exclusively thereafter. By default, changes to the contextCSN as a result of database updates will not be written to the database until the server is cleanly shut down. A checkpoint facility exists to cause the contextCSN to be written out more frequently if desired.
Note that at startup time, if the provider is unable to read a contextCSN from the suffix entry, it will scan the entire database to determine the value, and this scan may take quite a long time on a large database. When a contextCSN value is read, the database will still be scanned for any entryCSN values greater than it, to make sure the contextCSN value truly reflects the greatest committed entryCSN in the database. On databases which support inequality indexing, setting an eq index on the entryCSN attribute and configuring contextCSN checkpoints will greatly speed up this scanning step.
If no contextCSN can be determined by reading and scanning the database, a new value will be generated. Also, if scanning the database yielded a greater entryCSN than was previously recorded in the suffix entry's contextCSN attribute, a checkpoint will be immediately written with the new value.
The consumer also stores its replica state, which is the provider's contextCSN received as a synchronization cookie, in the contextCSN attribute of the suffix entry. The replica state maintained by a consumer server is used as the synchronization state indicator when it performs subsequent incremental synchronization with the provider server. It is also used as a provider-side synchronization state indicator when it functions as a secondary provider server in a cascading replication configuration. Since the consumer and provider state information are maintained in the same location within their respective databases, any consumer can be promoted to a provider (and vice versa) without any special actions.
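Because both servers store this state in the contextCSN attribute of the suffix entry, a simple way to check whether a consumer has caught up with its provider is to read that attribute on both sides and compare the values (the hostnames below are illustrative):

ldapsearch -x -H ldap://provider.example.com -s base \
        -b "dc=example,dc=com" contextCSN
ldapsearch -x -H ldap://consumer.example.com -s base \
        -b "dc=example,dc=com" contextCSN

When the two values match, the replica is up-to-date with the provider.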
Because a general search filter can be used in the syncrepl specification, some entries in the context may be omitted from the synchronization content. The syncrepl engine creates a glue entry to fill in the holes in the replica context if any part of the replica content is subordinate to the holes. The glue entries will not be returned in the search result unless ManageDsaIT control is provided.
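For example, to see glue entries (and any other entries normally hidden without the ManageDsaIT control) on a consumer, the control can be added to an ordinary search with the -M flag of ldapsearch(1); the hostname below is illustrative and the attribute list "1.1" simply suppresses attribute values so only DNs are returned:

ldapsearch -x -M -H ldap://consumer.example.com \
        -b "dc=example,dc=com" "(objectClass=*)" 1.1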
Also as a consequence of the search filter used in the syncrepl specification, it is possible for a modification to remove an entry from the replication scope even though the entry has not been deleted on the provider. Logically the entry must be deleted on the consumer but in refreshOnly mode the provider cannot detect and propagate this change without the use of the session log.
For configuration, please see the Syncrepl section.
17.2.2. Delta-syncrepl replication
- Disadvantages of Syncrepl replication:
OpenLDAP's syncrepl replication is an object-based replication mechanism. When any attribute value in a replicated object is changed on the provider, each consumer fetches and processes the complete changed object (both changed and unchanged attribute values) during replication. This works well, but has drawbacks in some situations.
For example, suppose you have a database consisting of 100,000 objects of 1 KB each. Further, suppose you routinely run a batch job to change the value of a single two-byte attribute value that appears in each of the 100,000 objects on the master. Not counting LDAP and TCP/IP protocol overhead, each time you run this job each consumer will transfer and process roughly 100 MB of data in order to apply about 200 KB of actual changes!
Over 99.8% of the data that is transmitted and processed in a case like this is redundant, since it represents values that did not change. This is a waste of valuable transmission and processing bandwidth and can cause an unacceptable replication backlog to develop. While this situation is extreme, it serves to demonstrate a very real problem that is encountered in some LDAP deployments.
- Where Delta-syncrepl comes in:
Delta-syncrepl, a changelog-based variant of syncrepl, is designed to address situations like the one described above. Delta-syncrepl works by maintaining a changelog of a selectable depth on the provider. The replication consumer on each consumer checks the changelog for the changes it needs and, as long as the changelog contains the needed changes, the delta-syncrepl consumer fetches them from the changelog and applies them to its database. If, however, a replica is too far out of sync (or completely empty), conventional syncrepl is used to bring it up to date and replication then switches to the delta-syncrepl mode.
For configuration, please see the Delta-syncrepl section.
17.3. Mixture of both Pull and Push based
17.3.1. N-Way Multi-Master replication
Multi-Master replication is a replication technique using Syncrepl to replicate data to multiple Master Directory servers.
- Advantages of Multi-Master replication:
- If any master fails, other masters will continue to accept updates
- Avoids a single point of failure
- Masters can be located in several physical sites i.e. distributed across the network/globe.
- Good for Automatic failover/High Availability
- Disadvantages of Multi-Master replication:
- It has NOTHING to do with load balancing
- http://www.openldap.org/faq/data/cache/1240.html
- If connectivity with a master is lost because of a network partition, then "automatic failover" can just compound the problem
- Typically, a particular machine cannot distinguish between losing contact with a peer because that peer crashed, or because the network link has failed
- If a network is partitioned and multiple clients start writing to each of the "masters" then reconciliation will be a pain; it may be best to simply deny writes to the clients that are partitioned from the single master
- Masters must propagate writes to all the other servers, which means the network traffic and write load is constant and spreads across all of the servers
For configuration, please see the N-Way Multi-Master section below
17.3.2. MirrorMode replication
MirrorMode is a hybrid configuration that provides all of the consistency guarantees of single-master replication, while also providing the high availability of multi-master. In MirrorMode two masters are set up to replicate from each other (as a multi-master configuration) but an external frontend is employed to direct all writes to only one of the two servers. The second master will only be used for writes if the first master crashes, at which point the frontend will switch to directing all writes to the second master. When a crashed master is repaired and restarted it will automatically catch up to any changes on the running master and resync.
17.3.2.1. Arguments for MirrorMode
- Provides a high-availability (HA) solution for directory writes (replicas handle reads)
- As long as one Master is operational, writes can safely be accepted
- Master nodes replicate from each other, so they are always up to date and can be ready to take over (hot standby)
- Syncrepl also allows the master nodes to re-synchronize after any downtime
- Delta-Syncrepl can be used
17.3.2.2. Arguments against MirrorMode
- MirrorMode is not what is termed a Multi-Master solution. This is because writes have to go to just one of the mirror nodes at a time
- MirrorMode can be termed Active-Active Hot-Standby; therefore an external server (slapd in proxy mode) or device (hardware load balancer) is needed to manage which master is currently active
- While syncrepl can recover from a completely empty database, slapadd is much faster
- Does not provide faster or more scalable write performance (neither could any Multi-Master solution)
- Backups are managed slightly differently
- If backing up the Berkeley database itself and periodically backing up the transaction log files, then the same member of the mirror pair needs to be used to collect logfiles until the next database backup is taken
- To ensure that both databases are consistent, each database might have to be put in read-only mode while performing a slapcat.
- When using slapcat, the generated LDIF files can be rather large. This can happen with a non-MirrorMode deployment also.
For configuration, please see the MirrorMode section below
17.4. Configuring the different replication types
17.4.1. Syncrepl
17.4.1.1. Syncrepl configuration
Because syncrepl is a consumer-side replication engine, the syncrepl specification is defined in slapd.conf(5) of the consumer server, not in the provider server's configuration file. The initial loading of the replica content can be performed either by starting the syncrepl engine with no synchronization cookie or by populating the consumer replica by loading an LDIF file dumped as a backup at the provider.
When loading from a backup, it is not required to perform the initial loading from the up-to-date backup of the provider content. The syncrepl engine will automatically synchronize the initial consumer replica to the current provider content. As a result, it is not required to stop the provider server in order to avoid the replica inconsistency caused by the updates to the provider content during the content backup and loading process.
When replicating a large scale directory, especially in a bandwidth constrained environment, it is advised to load the consumer replica from a backup instead of performing a full initial load using syncrepl.
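For example (a sketch assuming slapd.conf is in its default location on both hosts), the provider content can be dumped with slapcat(8) and loaded on the consumer with slapadd(8) before the consumer slapd is started for the first time:

# on the provider
slapcat -l provider-backup.ldif

# on the consumer, while slapd is stopped
slapadd -l provider-backup.ldif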
17.4.1.2. Set up the provider slapd
The provider is implemented as an overlay, so the overlay itself must first be configured in slapd.conf(5) before it can be used. The provider has only two configuration directives, for setting checkpoints on the contextCSN and for configuring the session log. Because the LDAP Sync search is subject to access control, proper access control privileges should be set up for the replicated content.
The contextCSN checkpoint is configured by the
syncprov-checkpoint <ops> <minutes>
directive. Checkpoints are only tested after successful write operations. If <ops> operations or more than <minutes> time has passed since the last checkpoint, a new checkpoint is performed.
The session log is configured by the
syncprov-sessionlog <size>
directive, where <size> is the maximum number of session log entries the session log can record. When a session log is configured, it is automatically used for all LDAP Sync searches within the database.
Note that using the session log requires searching on the entryUUID attribute. Setting an eq index on this attribute will greatly benefit the performance of the session log on the provider.
A more complete example of the slapd.conf(5) content is thus:
database bdb
suffix dc=Example,dc=com
rootdn dc=Example,dc=com
directory /var/ldap/db
index objectclass,entryCSN,entryUUID eq

overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100
17.4.1.3. Set up the consumer slapd
The syncrepl replication is specified in the database section of slapd.conf(5) for the replica context. The syncrepl engine is backend independent and the directive can be defined with any database type.
database hdb
suffix dc=Example,dc=com
rootdn dc=Example,dc=com
directory /var/ldap/db
index objectclass,entryCSN,entryUUID eq

syncrepl rid=123
        provider=ldap://provider.example.com:389
        type=refreshOnly
        interval=01:00:00:00
        searchbase="dc=example,dc=com"
        filter="(objectClass=organizationalPerson)"
        scope=sub
        attrs="cn,sn,ou,telephoneNumber,title,l"
        schemachecking=off
        bindmethod=simple
        binddn="cn=syncuser,dc=example,dc=com"
        credentials=secret
In this example, the consumer will connect to the provider slapd(8) at port 389 of ldap://provider.example.com to perform a polling (refreshOnly) mode of synchronization once a day. It will bind as cn=syncuser,dc=example,dc=com using simple authentication with password "secret". Note that the access control privilege of cn=syncuser,dc=example,dc=com should be set appropriately in the provider to retrieve the desired replication content. Also the search limits must be high enough on the provider to allow the syncuser to retrieve a complete copy of the requested content. The consumer uses the rootdn to write to its database so it always has full permissions to write all content.
The synchronization search in the above example will search for the entries whose objectClass is organizationalPerson in the entire subtree rooted at dc=example,dc=com. The requested attributes are cn, sn, ou, telephoneNumber, title, and l. The schema checking is turned off, so that the consumer slapd(8) will not enforce entry schema checking when it processes updates from the provider slapd(8).
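As noted above, the provider must grant the replication identity sufficient read access and generous search limits. A sketch of the corresponding provider-side directives for this example (they mirror the pattern used in the Delta-syncrepl configuration later in this chapter, and should be adapted to your own ACL scheme) might look like:

# give the replication identity read access to the replicated content
access to *
        by dn.exact="cn=syncuser,dc=example,dc=com" read
        by * break

# lift the search limits for the replication identity
limits dn.exact="cn=syncuser,dc=example,dc=com"
        time.soft=unlimited time.hard=unlimited
        size.soft=unlimited size.hard=unlimited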
For more detailed information on the syncrepl directive, see the syncrepl section of The slapd Configuration File chapter of this admin guide.
17.4.1.4. Start the provider and the consumer slapd
The provider slapd(8) is not required to be restarted. contextCSN is automatically generated as needed: it might be originally contained in the LDIF file, generated by slapadd(8), generated upon changes in the context, or generated when the first LDAP Sync search arrives at the provider. If an LDIF file is being loaded which did not previously contain the contextCSN, the -w option of slapadd(8) should be used to cause it to be generated. This will allow the server to startup a little quicker the first time it runs.
When starting a consumer slapd(8), it is possible to provide a synchronization cookie as the -c cookie command line option in order to start the synchronization from a specific state. The cookie is a comma separated list of name=value pairs. Currently supported syncrepl cookie fields are csn=<csn> and rid=<rid>. <csn> represents the current synchronization state of the consumer replica. <rid> identifies a consumer replica locally within the consumer server. It is used to relate the cookie to the syncrepl definition in slapd.conf(5) which has the matching replica identifier. The <rid> must have no more than 3 decimal digits. The command line cookie overrides the synchronization cookie stored in the consumer replica database.
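For instance, with the consumer configuration shown above (and assuming the default configuration file path), the consumer could be started with a cookie naming the replica identifier; a csn=<csn> field can be appended in the same way if a specific synchronization state is desired:

slapd -f /usr/local/etc/openldap/slapd.conf -h ldap:/// -c "rid=123"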
17.4.2. Delta-syncrepl
17.4.2.1. Delta-syncrepl Master configuration
Setting up delta-syncrepl requires configuration changes on both the master and replica servers:
# Give the replica DN unlimited read access.  This ACL may need to be
# merged with other ACL statements.
access to *
        by dn.base="cn=replicator,dc=symas,dc=com" read
        by * break

# Set the module path location
modulepath /opt/symas/lib/openldap

# Load the hdb backend
moduleload back_hdb.la

# Load the accesslog overlay
moduleload accesslog.la

# Load the syncprov overlay
moduleload syncprov.la

# Accesslog database definitions
database hdb
suffix cn=accesslog
directory /db/accesslog
rootdn cn=accesslog
index default eq
index entryCSN,objectClass,reqEnd,reqResult,reqStart

overlay syncprov
syncprov-nopresent TRUE
syncprov-reloadhint TRUE

# Let the replica DN have limitless searches
limits dn.exact="cn=replicator,dc=symas,dc=com"
        time.soft=unlimited time.hard=unlimited
        size.soft=unlimited size.hard=unlimited

# Primary database definitions
database hdb
suffix "dc=symas,dc=com"
rootdn "cn=manager,dc=symas,dc=com"

## Whatever other configuration options are desired

# syncprov specific indexing
index entryCSN eq
index entryUUID eq

# syncrepl Provider for primary db
overlay syncprov
syncprov-checkpoint 1000 60

# accesslog overlay definitions for primary db
overlay accesslog
logdb cn=accesslog
logops writes
logsuccess TRUE
# scan the accesslog DB every day, and purge entries older than 7 days
logpurge 07+00:00 01+00:00

# Let the replica DN have limitless searches
limits dn.exact="cn=replicator,dc=symas,dc=com"
        time.soft=unlimited time.hard=unlimited
        size.soft=unlimited size.hard=unlimited
For more information, always consult the relevant man pages (slapo-accesslog(5) and slapd.conf(5)).
17.4.2.2. Delta-syncrepl Replica configuration
# Primary replica database configuration
database hdb
suffix "dc=symas,dc=com"
rootdn "cn=manager,dc=symas,dc=com"

## Whatever other configuration bits for the replica, like indexing
## that you want

# syncrepl specific indices
index entryUUID eq

# syncrepl directives
syncrepl rid=0
        provider=ldap://ldapmaster.symas.com:389
        bindmethod=simple
        binddn="cn=replicator,dc=symas,dc=com"
        credentials=secret
        searchbase="dc=symas,dc=com"
        logbase="cn=accesslog"
        logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
        schemachecking=on
        type=refreshAndPersist
        retry="60 +"
        syncdata=accesslog

# Refer updates to the master
updateref ldap://ldapmaster.symas.com
The above configuration assumes that you have a replicator identity defined in your database that can be used to bind to the master. In addition, all of the databases (primary master, primary replica, and the accesslog storage database) should also have properly tuned DB_CONFIG files that meet your needs.
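As an illustration only, such a replicator identity could be created with an LDIF entry along the following lines (the object classes and password shown are merely one common pattern; any entry that your ACLs and the syncrepl binddn agree on will do):

dn: cn=replicator,dc=symas,dc=com
objectClass: organizationalRole
objectClass: simpleSecurityObject
cn: replicator
description: Replication bind identity
userPassword: secret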
17.4.3. N-Way Multi-Master
For the following example we will be using 3 Master nodes. Keeping in line with test050-syncrepl-multimaster of the OpenLDAP test suite, we will be configuring slapd(8) via cn=config.
This sets up the config database:
dn: cn=config
objectClass: olcGlobal
cn: config
olcServerID: 1

dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcRootPW: secret
The second and third servers will have a different olcServerID, obviously:
dn: cn=config
objectClass: olcGlobal
cn: config
olcServerID: 2

dn: olcDatabase={0}config,cn=config
objectClass: olcDatabaseConfig
olcDatabase: {0}config
olcRootPW: secret
This sets up syncrepl as a provider (since these are all masters):
dn: cn=module,cn=config
objectClass: olcModuleList
cn: module
olcModulePath: /usr/local/libexec/openldap
olcModuleLoad: syncprov.la
Now we set up the first Master Node (replace $URI1, $URI2, $URI3, etc. with your actual LDAP URLs):
dn: cn=config
changetype: modify
replace: olcServerID
olcServerID: 1 $URI1
olcServerID: 2 $URI2
olcServerID: 3 $URI3

dn: olcOverlay=syncprov,olcDatabase={0}config,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov

dn: olcDatabase={0}config,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001 provider=$URI1 binddn="cn=config" bindmethod=simple
  credentials=secret searchbase="cn=config" type=refreshAndPersist
  retry="5 5 300 5" timeout=1
olcSyncRepl: rid=002 provider=$URI2 binddn="cn=config" bindmethod=simple
  credentials=secret searchbase="cn=config" type=refreshAndPersist
  retry="5 5 300 5" timeout=1
olcSyncRepl: rid=003 provider=$URI3 binddn="cn=config" bindmethod=simple
  credentials=secret searchbase="cn=config" type=refreshAndPersist
  retry="5 5 300 5" timeout=1
-
add: olcMirrorMode
olcMirrorMode: TRUE
Now start up the Master and a consumer or two, and also add the above LDIF to the first consumer, second consumer, etc. It will then replicate cn=config. You now have N-Way Multi-Master on the config database.
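To verify that the config database is being replicated, a quick check (a sketch, using the rootdn password from the example above and assuming the config database's default rootdn of cn=config) is to read the replication-related attributes back from one of the other servers:

ldapsearch -x -H $URI2 -D cn=config -w secret -b cn=config \
        olcServerID olcSyncRepl olcMirrorMode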
We still have to replicate the actual data, not just the config, so add the following to the master (all active and configured consumers/masters will pull down this config, as they are all syncing). Also, replace all ${} variables with whatever is applicable to your setup:
dn: olcDatabase={1}$BACKEND,cn=config
objectClass: olcDatabaseConfig
objectClass: olc${BACKEND}Config
olcDatabase: {1}$BACKEND
olcSuffix: $BASEDN
olcDbDirectory: ./db
olcRootDN: $MANAGERDN
olcRootPW: $PASSWD
olcSyncRepl: rid=004 provider=$URI1 binddn="$MANAGERDN" bindmethod=simple
  credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
  interval=00:00:00:10 retry="5 5 300 5" timeout=1
olcSyncRepl: rid=005 provider=$URI2 binddn="$MANAGERDN" bindmethod=simple
  credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
  interval=00:00:00:10 retry="5 5 300 5" timeout=1
olcSyncRepl: rid=006 provider=$URI3 binddn="$MANAGERDN" bindmethod=simple
  credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
  interval=00:00:00:10 retry="5 5 300 5" timeout=1
olcMirrorMode: TRUE

dn: olcOverlay=syncprov,olcDatabase={1}${BACKEND},cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcSyncProvConfig
olcOverlay: syncprov
Note: You must have all your servers set to the same time, e.g. by using NTP (http://www.ntp.org/).
17.4.4. MirrorMode
MirrorMode configuration is actually very easy. If you have ever set up a normal slapd syncrepl provider, then the only change is the following two directives:
mirrormode  on
serverID    1
Note: You need to make sure that the serverID of each mirror node is different and add it as a global configuration option.
17.4.4.1. Mirror Node Configuration
This is the same as the Set up the provider slapd section.
Note: Delta-syncrepl is not yet supported with MirrorMode.
Here's a specific cut-down example using LDAP Sync Replication in refreshAndPersist mode:
MirrorMode node 1:
# Global section
serverID    1

# database section

# syncrepl directives
syncrepl rid=001
        provider=ldap://ldap-rid1.example.com
        bindmethod=simple
        binddn="cn=mirrormode,dc=example,dc=com"
        credentials=mirrormode
        searchbase="dc=example,dc=com"
        schemachecking=on
        type=refreshAndPersist
        retry="60 +"

syncrepl rid=002
        provider=ldap://ldap-rid2.example.com
        bindmethod=simple
        binddn="cn=mirrormode,dc=example,dc=com"
        credentials=mirrormode
        searchbase="dc=example,dc=com"
        schemachecking=on
        type=refreshAndPersist
        retry="60 +"

mirrormode on
MirrorMode node 2:
# Global section
serverID    2

# database section

# syncrepl directives
syncrepl rid=001
        provider=ldap://ldap-rid1.example.com
        bindmethod=simple
        binddn="cn=mirrormode,dc=example,dc=com"
        credentials=mirrormode
        searchbase="dc=example,dc=com"
        schemachecking=on
        type=refreshAndPersist
        retry="60 +"

syncrepl rid=002
        provider=ldap://ldap-rid2.example.com
        bindmethod=simple
        binddn="cn=mirrormode,dc=example,dc=com"
        credentials=mirrormode
        searchbase="dc=example,dc=com"
        schemachecking=on
        type=refreshAndPersist
        retry="60 +"

mirrormode on
It's simple really; each MirrorMode node is set up exactly the same, except that the serverID is unique.
17.4.4.1.1. Failover Configuration
There are generally two choices for this: 1. hardware proxies/load balancers or dedicated proxy software, or 2. using a back-ldap proxy as a syncrepl provider. A sketch of the second approach follows below.
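A minimal sketch of the second approach, assuming the mirror pair from the example above, is a slapd-ldap(8) proxy whose uri directive lists both mirror nodes; the underlying client library uses the first server in the list that responds, so traffic normally flows to node 1 and falls back to node 2 only when node 1 is unreachable:

# illustrative proxy front end for the mirror pair
# (the back_ldap module must be loaded, as shown earlier in this chapter)
database        ldap
suffix          "dc=example,dc=com"
uri             "ldap://ldap-rid1.example.com/ ldap://ldap-rid2.example.com/"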
A typical enterprise example might be:

Figure X.Y: MirrorMode in a Dual Data Center Configuration
17.4.4.1.2. Normal Consumer Configuration
This is exactly the same as the Set up the consumer slapd section. It can either be set up in normal syncrepl replication mode, or in delta-syncrepl replication mode (though, as noted above, delta-syncrepl is not yet supported with MirrorMode).
17.4.4.2. MirrorMode Summary
Hopefully you will now have a directory architecture that provides all of the consistency guarantees of single-master replication, whilst also providing the high availability of multi-master replication.