MySQL InnoDB Cluster – Quick Guide

Nowadays, modern DB engines like MongoDB provide a cluster solution out of the box. Although MySQL offers some mainline products for clustering, like MySQL HA and MySQL NDB Cluster, the most widely used MySQL storage engine, InnoDB, was lacking a similar solution. For a few years now, MySQL has been working on MySQL InnoDB Cluster, a solution that allows InnoDB users to easily build a cluster that can withstand node outages.

MySQL also provides tools like mysql-shell and mysql-router that ease cluster administration and usage.


Startup Tasks

We need three hosts to build a cluster. Depending on how the hosts were deployed, some initial tasks must be performed. To ease the reading, we are going to use three hosts named mysql-rep-1, mysql-rep-2, and mysql-rep-3 to set up the cluster.

I used three different virtual machines to achieve this setup. You can use containers instead, but that requires some extra steps that are beyond the scope of this blog post.

All the nodes we are going to use are deployed with a RHEL 7 operating system, so all the commands written after this point assume that operating system.

The tasks described in this section must be performed on every host that will be part of the cluster.

Verify the Hostnames

Each host must resolve every hostname to an IP address that does not correspond to the loopback one. It's recommended to edit the file /etc/hosts, remove every hostname entry that points to the loopback address, and then add an entry for each host involved in the cluster.

In this case, the contents of the file /etc/hosts were as shown in snippet 1.

IMPORTANT: This way of verifying and resolving hostnames is only for testing purposes. In a production environment, all of this is handled by DNS or a similar solution.

127.0.0.1     mysql-rep-1

127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4

::1           localhost localhost.localdomain localhost6 localhost6.localdomain6

Snippet 1: Initial content for file /etc/hosts.

After making the changes, the file looks like the one shown in Snippet 2.

127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4

::1          localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.11.122  mysql-rep-1

172.16.11.71   mysql-rep-2

172.16.11.204  mysql-rep-3

Snippet 2: Modified content for file /etc/hosts.
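To confirm that each hostname now resolves to the expected non-loopback address, you can query the local resolver from every host. A minimal check, assuming the entries from snippet 2, should print something like this:

# getent hosts mysql-rep-1

172.16.11.122  mysql-rep-1

If getent prints 127.0.0.1 instead, the loopback entry was not removed correctly.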

Disable SELinux

To avoid network issues, it's recommended to disable SELinux. We can do so because we are working in a controlled environment. In a production environment, the SELinux configuration must fulfill the security requirements.

First of all, the firewall is disabled with the command from snippet 3.

# systemctl disable firewalld

Snippet 3: Disable the firewall.
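Note that disabling the firewalld unit only prevents it from starting at the next boot. If the firewall is currently running, you can also stop it immediately:

# systemctl stop firewalld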

Then we need to disable SELinux by editing the file /etc/selinux/config and setting the option SELINUX to disabled. Finally, we have to reboot the host.
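As a minimal sketch, assuming the stock layout of /etc/selinux/config, the edit and reboot can be done like this:

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# reboot

After the reboot, the getenforce command should print Disabled.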

Installing Components

This section describes all the software components we need to install on each host.

Install MySQL YUM repository

All components are installed using the official YUM repository. We need to download the RPM and install it. See snippet 4 for command details.

# rpm -ivh mysql80-community-release-el7-1.noarch.rpm

Snippet 4: Command for official MySQL YUM repository installation.

You can download the RPM directly from the MySQL Downloads site. After installing the YUM repository, we can install the official packages for every component.
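For instance, the download can be done with wget; the URL below is an assumption based on the package name from snippet 4, so double-check the exact file name for your platform on the Downloads site:

$ wget https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm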

Install MySQL components

We can use the yum command to install the mysql-community-server and mysql-community-client packages; the version that is going to be installed is 8.0.x (8.0.13 at the time this blog post was written). We are also installing the mysql-shell package. Everything is achieved by executing the commands shown in snippet 5.

# yum update

# yum install mysql-community-server \

              mysql-community-client \

              mysql-shell

Snippet 5: Commands for MySQL packages installation.

Update MySQL Root Password

After installation, we must update the password for the root user in MySQL. It is set to a randomly generated string when the MySQL server runs for the first time.

First of all, we need to start the MySQL server; this is done using the service command as shown in snippet 6.

# service mysqld start

Snippet 6: Command for starting mysql-server.
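On RHEL 7, service is a compatibility wrapper around systemd, so an equivalent way to start the server, plus optionally enabling it so it starts at boot, is:

# systemctl start mysqld

# systemctl enable mysqld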

Once the MySQL server is running, we need to grep the log file searching for the generated password; this can be done with the command from snippet 7.

# grep 'temporary password' /var/log/mysqld.log

2018-12-03T11:54:42.653259Z 5 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: L?W,WrXTs0yl

Snippet 7: Searching the automatically generated password.

Start a root session on the MySQL server and change the password of the root user. We are going to use the string rootPassw0rd!. The first login is done with the randomly generated password obtained in the previous step. See snippet 8 for command details.

# mysql -u root -p

Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 20

Server version: 8.0.13 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPassw0rd!';

Snippet 8: Update procedure for root password.

By default, the MySQL server has a password-validation plugin enabled that verifies the strength of a password (you can inspect the active policy with the query shown after this list), so your password will need to include at least:

  • One lowercase character.
  • One uppercase character.
  • One number character.
  • One special character.
  • A minimum of eight characters.
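A quick way to inspect the password policy currently in force is to list the validation variables. Note that in MySQL 8.0 the validation logic is provided by the validate_password component, so the variable names may use a dot instead of an underscore:

mysql> SHOW VARIABLES LIKE 'validate_password%';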

Configuring the Nodes

Now that we have everything set up, we can start with the main course. To perform these steps, we use mysql-shell, a new MySQL tool that eases the management of MySQL nodes. It is a shell application that can interpret JavaScript, Python, and SQL statements.

To start this tool, we use the command mysqlsh.

To configure the nodes, we need to perform the following commands on each host: run the mysql-shell tool and execute the dba.configureLocalInstance() command. This command checks whether the node is ready to join a cluster. If it is not, mysql-shell will perform all the needed configuration changes and persist them.

IMPORTANT: We are working with mysql-server 8.0.x, so some changes and persistence tasks can be done automatically by mysql-shell. This is not true when using mysql-server < 5.7.21.

To manage the cluster, a new MySQL user must be created, and mysql-shell will do that for us. We are telling mysql-shell to create the user icadmin, identified by the password icadminPassw0rd!, for that purpose. See snippet 9 for details on how to start mysql-shell and snippet 10 for the commands executed inside it.

# mysqlsh

MySQL Shell 8.0.13

Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help' or '?' for help; 'quit' to exit.

mysh>

Snippet 9: Starting mysql-shell.

mysh> dba.configureLocalInstance('root@localhost:3306', {clusterAdmin: "'icadmin'@'%'", clusterAdminPassword: 'icadminPassw0rd!'});

Please provide the password for 'root@localhost:3306': *************

Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No):

Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as mysql-rep-1

Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Some configuration options need to be fixed:

+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Do you want to perform the required configuration changes? [y/n]: y

Do you want to restart the instance after configuring it? [y/n]: y

Cluster admin user 'icadmin'@'%' created.

Configuring instance...

The instance 'localhost:3306' was configured for cluster usage.

Restarting MySQL...

MySQL server at localhost:3306 was restarted.

mysh>

Snippet 10: Command for configuring the instance locally.

The line with the command dba.configureLocalInstance('root@localhost:3306', {clusterAdmin: "'icadmin'@'%'", clusterAdminPassword: 'icadminPassw0rd!'}); asks mysql-shell to connect to the local mysql instance as root@localhost:3306, to create the cluster administration user, and to configure the local instance.

We can check whether an instance is ready for clustering by executing the command dba.checkInstanceConfiguration(); here we use the cluster administration user. See snippet 11 for details.

mysh> dba.checkInstanceConfiguration('icadmin@localhost:3306');

Snippet 11: Command for checking the instance configuration.
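When the instance is correctly configured, the call reports success; on a properly prepared node, the returned value should look roughly like this (the exact output may vary between mysql-shell versions):

{

    "status": "ok"

}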

Creating the Cluster

Now that each node is configured, the cluster must be created. This is done by calling the dba.createCluster() function.

Create the Seed Instance

From MySQL documentation:

“Once you have prepared your instances, use the dba.createCluster() function to create the cluster. The machine which you are running MySQL Shell on is used as the seed instance for the cluster. The seed instance is replicated to the other instances which you add to the cluster, making them replicas of the seed instance.”

We must select one of the nodes as the seed node; in this case, we select the node mysql-rep-1. The configuration made on this node will be replicated to the other nodes once they are part of the cluster. To achieve this, we use mysql-shell and start a session to the selected seed instance; see snippet 12.

mysh> shell.connect('icadmin@mysql-rep-1:3306');

Creating a session to 'icadmin@mysql-rep-1:3306'

Please provide the password for 'icadmin@mysql-rep-1:3306': ****************

Save password for 'icadmin@mysql-rep-1:3306'? [Y]es/[N]o/Ne[v]er (default No): 

Fetching schema names for autocompletion... Press ^C to stop.

Your MySQL connection id is 50544

Server version: 8.0.13 MySQL Community Server - GPL

No default schema selected; type use <schema> to set one.

<ClassicSession:icadmin@mysql-rep-1:3306>

mysh mysql-rep-1:3306>

Snippet 12: Command for connecting to an instance.

Once connected to the seed instance, we must create the cluster by executing the command dba.createCluster(). This command receives the name of the cluster and returns an object representing the cluster, on which we can invoke methods. In snippet 13, you can see the cluster creation.

mysh mysql-rep-1:3306> var cluster = dba.createCluster('testCluster');

A new InnoDB cluster will be created on instance 'icadmin@mysql-rep-1:3306'.

Validating instance at mysql-rep-1:3306...

This instance reports its own address as mysql-rep-1

Instance configuration is suitable.

Creating InnoDB cluster 'testCluster' on 'icadmin@mysql-rep-1:3306'...

Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.

At least 3 instances are needed for the cluster to be able to withstand up to

one server failure.

mysh mysql-rep-1:3306>

Snippet 13: Commands for creating the cluster.

In snippet 14, you can see how to invoke the method status() on the cluster object to get the cluster status.

mysh mysql-rep-1:3306> cluster.status();

{

    "clusterName": "testCluster",

    "defaultReplicaSet": {

        "name": "default",

        "primary": "mysql-rep-1:3306",

        "ssl": "REQUIRED",

        "status": "OK_NO_TOLERANCE",

        "statusText": "Cluster is NOT tolerant to any failures.",

        "topology": {

            "mysql-rep-1:3306": {

                "address": "mysql-rep-1:3306",

                "mode": "R/W",

                "readReplicas": {},

                "role": "HA",

                "status": "ONLINE"

            }

        }

    },

    "groupInformationSourceMember": "mysql://icadmin@mysql-rep-1:3306"

}

mysh mysql-rep-1:3306>

Snippet 14: Command for obtaining the cluster status.

Add Instances to the Cluster

For the cluster to be useful, it must contain at least three instances. We must execute the command cluster.addInstance() on the seed node to add the other two instances to the cluster. (It is assumed that mysql-shell was not closed and, therefore, that it is still connected to the seed instance and the cluster variable is still available.) See snippet 15 for the commands.

mysh mysql-rep-1:3306> cluster.addInstance('icadmin@mysql-rep-2:3306');

A new instance will be added to the InnoDB cluster. Depending on the amount of

data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster ...

Please provide the password for 'icadmin@mysql-rep-2:3306': ****************

Save password for 'icadmin@mysql-rep-2:3306'? [Y]es/[N]o/Ne[v]er (default No):

Validating instance at mysql-rep-2:3306...

This instance reports its own address as mysql-rep-2

Instance configuration is suitable.

The instance 'icadmin@mysql-rep-2:3306' was successfully added to the cluster.

mysh mysql-rep-1:3306> cluster.addInstance('icadmin@mysql-rep-3:3306');

A new instance will be added to the InnoDB cluster. Depending on the amount of

data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster ...

Please provide the password for 'icadmin@mysql-rep-3:3306': ****************

Save password for 'icadmin@mysql-rep-3:3306'? [Y]es/[N]o/Ne[v]er (default No):

Validating instance at mysql-rep-3:3306...

This instance reports its own address as mysql-rep-3

Instance configuration is suitable.

The instance 'icadmin@mysql-rep-3:3306' was successfully added to the cluster.

Snippet 15: Command for adding a new instance to the cluster.

If you lose the cluster object, you must log in to the seed instance and execute the command dba.getCluster(); see snippet 16.

mysh> shell.connect('icadmin@mysql-rep-1:3306');

mysh mysql-rep-1:3306> var cluster = dba.getCluster();

Snippet 16: Command for retrieving the cluster object once the cluster was created.

Verify Cluster Status

To verify the status of the cluster, you can use the status() method on the cluster object. Its output will tell you the status of every node in the cluster. See snippet 17.

mysh mysql-rep-1:3306> cluster.status();

{

   "clusterName": "testCluster",

   "defaultReplicaSet": {

       "name": "default",

       "primary": "mysql-rep-1:3306",

       "ssl": "REQUIRED",

       "status": "OK",

       "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",

       "topology": {

           "mysql-rep-1:3306": {

               "address": "mysql-rep-1:3306",

               "mode": "R/W",

               "readReplicas": {},

               "role": "HA",

               "status": "ONLINE"

           },

           "mysql-rep-2:3306": {

               "address": "mysql-rep-2:3306",

               "mode": "R/O",

               "readReplicas": {},

               "role": "HA",

               "status": "ONLINE"

           },

           "mysql-rep-3:3306": {

               "address": "mysql-rep-3:3306",

               "mode": "R/O",

               "readReplicas": {},

               "role": "HA",

               "status": "ONLINE"

           }

       }

   },

   "groupInformationSourceMember": "mysql://icadmin@mysql-rep-1:3306"

}

mysh mysql-rep-1:3306>

Snippet 17: Command to get cluster status.

A Simple Working Example

Log in to each instance locally as the root user (because the icadmin user has no privileges to create databases), as shown in snippet 18.

mysql-rep-1

$ ssh mysql-rep-1

$ mysqlsh --uri='root@localhost:3306'

Creating a session to 'root@localhost:3306'

 MySQL  localhost:3306 ssl  JS >

mysql-rep-2

$ ssh mysql-rep-2

$ mysqlsh --uri='root@localhost:3306'

Creating a session to 'root@localhost:3306'

 MySQL  localhost:3306 ssl  JS >

mysql-rep-3

$ ssh mysql-rep-3

$ mysqlsh --uri='root@localhost:3306'

Creating a session to 'root@localhost:3306'

 MySQL  localhost:3306 ssl  JS >

Snippet 18: Commands to login to each instance.

Now ask mysql-shell to switch to the SQL interpreter and list the databases on each instance as shown in snippet 19.

mysql-rep-1

MySQL  localhost:3306 ssl  JS > \sql

Switching to SQL mode... Commands end with ;

 MySQL  localhost:3306 ssl  SQL > show databases;

+-------------------------------+

| Database                      |

+-------------------------------+

| information_schema            |

| mysql                         |

| mysql_innodb_cluster_metadata |

| performance_schema            |

| sys                           |

+-------------------------------+

5 rows in set (0.0024 sec)

 MySQL  localhost:3306 ssl  SQL >

mysql-rep-2

MySQL  localhost:3306 ssl  JS > \sql

Switching to SQL mode... Commands end with ;

 MySQL  localhost:3306 ssl  SQL > show databases;

+-------------------------------+

| Database                      |

+-------------------------------+

| information_schema            |

| mysql                         |

| mysql_innodb_cluster_metadata |

| performance_schema            |

| sys                           |

+-------------------------------+

5 rows in set (0.0033 sec)

 MySQL  localhost:3306 ssl  SQL >

mysql-rep-3

MySQL  localhost:3306 ssl  JS > \sql

Switching to SQL mode... Commands end with ;

 MySQL  localhost:3306 ssl  SQL > show databases;

+-------------------------------+

| Database                      |

+-------------------------------+

| information_schema            |

| mysql                         |

| mysql_innodb_cluster_metadata |

| performance_schema            |

| sys                           |

+-------------------------------+

5 rows in set (0.0013 sec)

 MySQL  localhost:3306 ssl  SQL >

Snippet 19: Command for listing databases on each instance.

Remember that we have one R/W node (mysql-rep-1) and two R/O nodes (mysql-rep-2 and mysql-rep-3). Now, on node mysql-rep-1, create a database called testdb and, inside that database, a table called test_table with only one integer column called c1; then insert a value into that table, as shown in snippet 20.

mysql-rep-1

 MySQL  localhost:3306 ssl  SQL > create database testdb;

Query OK, 1 row affected (0.1057 sec)

 MySQL  localhost:3306 ssl  SQL > use testdb;

Default schema set to `testdb`.

Fetching table and column names from `testdb` for auto-completion... Press ^C to stop.

 MySQL  localhost:3306 ssl  testdb  SQL > create table test_table (c1 integer primary key);

Query OK, 0 rows affected (0.0966 sec)

 MySQL  localhost:3306 ssl  testdb  SQL > insert into test_table (c1) values (1);

Query OK, 1 row affected (0.0431 sec)

 MySQL  localhost:3306 ssl  testdb  SQL >

Snippet 20: Command for creating DB objects in the R/W instance.

Now perform a SELECT operation on every node as shown in snippet 21.

mysql-rep-1

MySQL  localhost:3306 ssl  testdb SQL > select * from testdb.test_table;

+----+

| c1 |

+----+

|  1 |

+----+

1 row in set (0.0007 sec)

 MySQL  localhost:3306 ssl  testdb SQL >

mysql-rep-2

MySQL  localhost:3306 ssl  SQL > select * from testdb.test_table;

+----+

| c1 |

+----+

|  1 |

+----+

1 row in set (0.0023 sec)

 MySQL  localhost:3306 ssl  SQL >

mysql-rep-3

MySQL  localhost:3306 ssl  SQL > select * from testdb.test_table;

+----+

| c1 |

+----+

|  1 |

+----+

1 row in set (0.0012 sec)

 MySQL  localhost:3306 ssl  SQL >

Snippet 21: SELECT operation on each instance.

As you can see, all nodes have the same contents even though the INSERT operation was performed only on node mysql-rep-1.
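You can also verify that the R/O instances reject writes. A quick check, attempting an INSERT on mysql-rep-2, should fail with an error along these lines (the exact message may vary by version, since the secondaries run with super_read_only enabled):

mysql-rep-2

 MySQL  localhost:3306 ssl  SQL > insert into testdb.test_table (c1) values (2);

ERROR: 1290: The MySQL server is running with the --super-read-only option so it cannot execute this statement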

Conclusions

  • Now you have a running cluster, and you can use it like any standard MySQL instance. Just keep in mind that some instances allow R/W operations while others allow R/O operations only, so you have to choose which instance to connect to according to the operations you need to perform. Every INSERT operation made on the R/W instance is replicated to all the R/O instances.
  • You can add as many R/O instances as you wish. Just keep in mind that the size of your schema will be multiplied by n, where n is the number of nodes in the cluster; InnoDB Cluster does not allow arbiter nodes (as MongoDB does).
  • MySQL InnoDB Cluster allows you to configure a replication group with more than one R/W instance, but that is not covered here.
  • If the R/W instance fails, the cluster runs a consensus algorithm to elect a new R/W instance. If you are using the cluster as an ordinary instance, you will have to reconfigure your applications to use the new R/W instance. Fortunately, MySQL has a tool called mysql-router that hides the cluster from applications; it will be described in another blog post.
