
Bond Clustering

This repository contains scripts that allow for clustering multiple bonds to enable high availability. The Pacemaker clustering software is used to manage the service.

The cluster is made up of the nodes at the site, as well as the bondingadmin server. The nodes communicate status with each other over one or more physical connections and also communicate with the management server.

Services are moved by changing the configuration on the management server. A takeover will only occur if a node can communicate with the management server.

Requirements

The nodes must be running openSUSE Leap or Debian.

Initial setup on management server

Before configuring the nodes, we will first need to install corosync-qnetd on the management server, which will act as a tie-breaker to avoid a split-brain scenario on the bonders where each device independently believes it should be providing services:

apt install corosync-qnetd

Note that this service is used to handle any number of bonder clusters. It only needs to be set up once.
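
To confirm the tie-breaker service is running on the management server, you can check it with systemd; once bonder clusters register, corosync-qnetd-tool can also list the connected clients (an optional, illustrative check):

systemctl status corosync-qnetd
corosync-qnetd-tool -l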

Configuring the bond records on the management server

Configuring local communication

First, you will need to define at least one private IP range for communication over a local physical network connection shared by the bonders. It is highly recommended that a LAN interface is used for this.

If there are multiple LAN interfaces, it is recommended that a range be defined for each interface, to allow the cluster to make a better decision if only one LAN interface goes offline. Leg interfaces may also be used for redundant communication, but this is not required.

As an example, we are going to use the range 10.99.1.0/24 for a LAN interface.

On the first bond, add a connected IP with the following parameters:

  • Interface: The LAN interface
  • IP: 10.99.1.1/24
  • Aggregator routing: never

On the second bond, add a connected IP with its own IP address in the range:

  • Interface: The LAN interface
  • IP: 10.99.1.2/24
  • Aggregator routing: never
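
Once both connected IPs have been pushed to the bonders, you can optionally confirm that the private range works by pinging the second bond's address from the first (addresses follow the example above):

ping -c 3 10.99.1.2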

Configuring shared resources

Next, add any connected IPs, routes, CPE NAT IPs, and services needed on the first bond.

Then add the same resources to the second bond using identical configurations, but make sure that the connected IPs, routes, and CPE NAT IPs are disabled.

Add a user account for API access

Finally, we need to add a user account that has permission to modify the resources on the bonds. It is recommended to use a single account per pair and to limit the account's access to just what is needed.

Creating a group for this purpose is a good idea. The following permissions are required:

  • Bond: View
  • Connected IP: View, Change
  • CPE NAT IP: View, Change
  • Route: View, Change

Initial setup on the nodes

Installation

Execute the following on each bonder:

openSUSE

Install packages:

zypper install git make pacemaker corosync-qdevice crmsh

Install the bond clustering tool:

git clone https://git.multapplied.net/Partner/bond-clustering.git
cd bond-clustering
make install

Debian

Install packages:

apt install git make pacemaker corosync-qdevice crmsh

Install the bond clustering tool:

git clone https://git.multapplied.net/Partner/bond-clustering.git
cd bond-clustering
make install

Setup

On the first bonder, run the following script. It will ask for the API account details and which connected IP(s) should be used for bonder-to-bonder communication:

bond-cluster-setup initial

This process automatically detects any shared resources configured on the bonds.

Once complete, it will display a command to duplicate the setup on the peer bonder. Run that command to complete the cluster installation.

Update

To update, execute the following on each bonder:

cd bond-clustering
git pull
make install

Then on one of the bonders, run:

bond-cluster-setup initial

If the output indicates that a command should be run on the peer, run it there; otherwise, run the same bond-cluster-setup initial command on the peer.

Checking cluster status

At this point the base cluster communication should be present. Check with the following command:

corosync-quorumtool

The important parts are that the total votes equal the expected votes, and that both bonders are listed under the membership information. The Qdevice represents the management server connection and is used to break ties. It is also configured to take into account whether a bonder can ping its aggregator when making decisions.
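
As a rough illustration (hostnames, IDs, and exact formatting will differ), healthy output includes something like:

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1    A,V,NMW bond1 (local)
         2          1    A,V,NMW bond2
         0          1            Qdevice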

You can also check the higher-level cluster status, which includes the running services, using the following command on either bonder:

crm_mon

This will continually monitor the state of the bonders and the resources. Note that we have not yet set up the service to manage the routing objects.

Setting up the resources

Run the following command on either bonder:

bond-cluster-setup setup-resource

This will set up the bond-cluster-objects resource, which will start automatically on one of the bonders. If the routing objects are already enabled for a bond, it will detect this and leave them running on that bond.

Check the state on either bonder:

crm_mon
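
Both bonders should be listed as online, with bond-cluster-objects started on one of them. A rough sketch of the relevant part of the output (the exact layout and the resource agent shown depend on the Pacemaker version and the tool's configuration):

Node List:
  * Online: [ bond1 bond2 ]

Active Resources:
  * bond-cluster-objects       Started bond1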

Testing resource migration

The easiest way to check what happens when a bonder is down is to simply stop the pacemaker service on the bonder currently running the resource while running crm_mon on the other bonder:

systemctl stop pacemaker
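
On the surviving bonder, crm_mon should report the stopped node as offline and the resource started locally, roughly:

Node List:
  * Online: [ bond2 ]
  * OFFLINE: [ bond1 ]

Active Resources:
  * bond-cluster-objects       Started bond2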

To start it again:

systemctl start pacemaker

From there, you can reboot a bonder or pull Ethernet or power cables for further testing.

Changing the bond objects

The cluster configuration on the bonders will need to be updated if any of the following changes are made on the management server:

  • The IP addresses of the connected IPs used for underlying cluster communication are changed
  • Any shared connected IPs, routes, or CPE NAT IPs are added or removed (changes to existing resources are OK)

To do this, log in to either bonder and run the initial setup again:

bond-cluster-setup initial --reconfigure

Then run the displayed peer setup command on the other bonder.

Advanced features

Preferring one bonder over the other

Normally, it is recommended that the resources stay on whichever bonder is currently running them, since transitions cause a brief outage and unnecessary outages are best avoided.

However, there are certain situations where a single bonder may be preferred as long as it is online, such as when one bonder contains a more powerful CPU and can handle more traffic, or if only one bonder has a non-shared device such as a mobile broadband interface.

To enable this, we will need to add a location constraint. For example, to prefer bond2, run the following command:

crm configure location bco-preference bond-cluster-objects inf: bond2

Note that this will force a transition if the resource is not already running on that bonder.

You can check the configuration with this command:

crm configure show
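
The output should include the location constraint added above, for example:

location bco-preference bond-cluster-objects inf: bond2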

If you ever need to remove this preference, run the following:

crm configure delete bco-preference

Note that removing the preference will not cause a transition.

Manually forcing a transition

Stopping Pacemaker is one way to force the resource onto the other bonder, but it can also be done without taking a node out of the cluster.

Say we have the resource running on bond1 and we want it to move to bond2. We can run the following command on either bonder:

crm resource migrate bond-cluster-objects bond2

This places a temporary constraint to prefer bond2 for the service and the resource will move.
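
While the temporary constraint is in place, crm configure show should include a generated location constraint, typically named with a cli-prefer prefix (the exact name and form may vary by version), e.g.:

location cli-prefer-bond-cluster-objects bond-cluster-objects role=Started inf: bond2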

The constraint can be removed with the following command:

crm resource unmigrate bond-cluster-objects

Now that there is no preference, the resource will stay where it is.
