
Wednesday, September 29, 2021

Step by Step - How To Set Up Passwordless SSH Between Nodes

We have to set up passwordless SSH between the primary and standby nodes for the postgres user.

a)      On Standby Server(192.168.204.134) -

 [postgres@Replica1 .ssh]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/postgres/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/postgres/.ssh/id_rsa.

Your public key has been saved in /home/postgres/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:hxSzxo6ESc+kzBNtQ4FGMnlC/N4Duax9XflSnWKwcAQ postgres@Replica1

The key's randomart image is:

+---[RSA 3072]----+

| o+o+++.E.       |

|  +*+O+. +.      |

|   =Bo+.=.       |

|    +o =..o      |

|   o +. So.+ . . |

|    + o  .+ + o  |

|   o   o . + .   |

|  . . . . . .    |

|     .     .     |

+----[SHA256]-----+

[postgres@Replica1 .ssh]$ ls -ltr

total 12

-rw-r--r--. 1 postgres postgres  531 Sep 29 06:57 known_hosts

-rw-r--r--. 1 postgres postgres  571 Sep 29 07:26 id_rsa.pub

-rw-------. 1 postgres postgres 2602 Sep 29 07:26 id_rsa

Create a file named authorized_keys under the /home/postgres/.ssh directory and append the content of id_rsa.pub to it:

[postgres@Replica1 .ssh]$ cat id_rsa.pub >> authorized_keys

 

b)      On Primary Server(192.168.204.133) –

[postgres@Master .ssh]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/postgres/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/postgres/.ssh/id_rsa.

Your public key has been saved in /home/postgres/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:wsvoDy4VhNi1r3KisHrivuhzePdEHTw31y8R5NUsmyk postgres@Master

The key's randomart image is:

+---[RSA 3072]----+

| o o.        .o.o|

|. o ..  .    .ooo|

|   ..    + o .o* |

|    .o  . + E +..|

|     .+.S.   .. .|

|    .+.o       . |

|. +.= o.         |

|+=oO o.          |

|XB=.+.o.         |

+----[SHA256]-----+

[postgres@Master .ssh]$ ls -trl

total 12

-rw-r--r--. 1 postgres postgres  531 Sep 18 15:34 known_hosts

-rw-r--r--. 1 postgres postgres  569 Sep 29 07:24 id_rsa.pub

-rw-------. 1 postgres postgres 2602 Sep 29 07:24 id_rsa

 

Create a file named authorized_keys under the /home/postgres/.ssh directory and append the content of id_rsa.pub to it:

 

[postgres@Master .ssh]$ cat id_rsa.pub >> authorized_keys

 

c)       Append the content of each server's public key (id_rsa.pub) to the authorized_keys file on the other server:

[postgres@Replica1 ~]$ cat .ssh/id_rsa.pub | ssh postgres@192.168.204.133 'cat >> .ssh/authorized_keys'

[postgres@Master ~]$ cat .ssh/id_rsa.pub | ssh postgres@192.168.204.134 'cat >> .ssh/authorized_keys'
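As an aside, ssh-copy-id (bundled with OpenSSH) performs the same append, and also fixes the remote permissions, in one step. A convenience alternative, not what was run above:

```shell
# One-step equivalent of the manual append above (run from the standby)
ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@192.168.204.133
```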


d)      Change the permissions of the .ssh directory and the authorized_keys file as below on both sides –

On Primary -

[postgres@Master ~]$ chmod 700 .ssh

[postgres@Master .ssh]$ chmod 640 authorized_keys

On Standby –

[postgres@Replica1 ~]$ chmod 700 .ssh

[postgres@Replica1 .ssh]$ chmod 640 authorized_keys
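These modes matter because sshd rejects key authentication when .ssh or authorized_keys is group- or world-writable. As a self-contained illustration of what the modes should end up as (a sketch against scratch paths, not the real home directory):

```shell
# Recreate the layout in a scratch directory and apply the same modes as above
rm -rf /tmp/ssh_perm_demo
mkdir -p /tmp/ssh_perm_demo/.ssh
touch /tmp/ssh_perm_demo/.ssh/authorized_keys
chmod 700 /tmp/ssh_perm_demo/.ssh
chmod 640 /tmp/ssh_perm_demo/.ssh/authorized_keys
# Prints the octal mode next to each path, e.g. "700 /tmp/ssh_perm_demo/.ssh"
stat -c '%a %n' /tmp/ssh_perm_demo/.ssh /tmp/ssh_perm_demo/.ssh/authorized_keys
```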

e)     Check passwordless SSH from both nodes. It should work without prompting for a password –
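The verification screenshots have not survived in this post; a check of roughly this shape (BatchMode makes ssh fail rather than prompt) confirms the setup from the standby, and the mirror command with the other IP does the same from the primary:

```shell
# Should print the remote hostname with no password prompt
ssh -o BatchMode=yes postgres@192.168.204.133 hostname
```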




Step By Step – How To Setup Replication Manager(Repmgr) in PostgreSQL

 Step 1 - Make sure the replication configuration is setup correctly.


Step 2 – Install repmgr on primary and standby servers.

Make sure to install the repmgr version that matches the PostgreSQL version. Since I am running PostgreSQL 13, I will install repmgr 13.

Note - We have to install it as the root user.

On Primary Server -

#yum -y install repmgr13*


On Standby Server -

#yum -y install repmgr13*


Step 3  – Configure postgresql.conf on Master and Standby nodes.

Since we have already configured replication, most of the parameters are already set. But double-check the parameters below and set whichever are missing.

shared_preload_libraries = 'repmgr'

max_wal_senders = 5

max_replication_slots = 10

wal_level = replica

hot_standby = on

archive_mode = on

archive_command = 'cp %p /home/postgres/arch_dir/%f'  - We can use a different archive location.

listen_addresses = '*'
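The %p and %f placeholders in archive_command are worth a quick illustration. This sketch (scratch paths and a made-up WAL segment name) mimics the substitution PostgreSQL performs before running the command:

```shell
# PostgreSQL replaces %p with the path of the completed WAL segment and
# %f with its bare file name before executing archive_command.
rm -rf /tmp/arch_demo
mkdir -p /tmp/arch_demo/pg_wal /tmp/arch_demo/arch_dir
touch /tmp/arch_demo/pg_wal/000000010000000000000001
p=/tmp/arch_demo/pg_wal/000000010000000000000001   # what %p would expand to
f=000000010000000000000001                         # what %f would expand to
cp "$p" "/tmp/arch_demo/arch_dir/$f"
ls /tmp/arch_demo/arch_dir                         # the archived segment
```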

Step 4 – Create a replication manager user as repmgr.

postgres=# CREATE USER repmgr;

postgres=# CREATE DATABASE repmgr OWNER repmgr;

postgres=# ALTER USER repmgr WITH SUPERUSER;
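Before moving on, it is worth confirming that the repmgr user can connect over TCP with the same conninfo values used later in repmgr.conf (a sketch; the host, port and password are the ones used elsewhere in this post):

```shell
psql 'host=192.168.204.133 port=5448 user=repmgr dbname=repmgr password=welcome' -c 'SELECT 1;'
```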



Step 5 – Create repmgr.conf on Primary node.
Now we have to create repmgr.conf on the Primary node. We can create it in any location, but I have created it under the data directory. My data directory is -


Create repmgr.conf with below parameters -

[postgres@Master master]$ cat repmgr.conf

cluster='failovertesting'

node_id=1

node_name=Master

conninfo='user=repmgr password=welcome host=192.168.204.133 dbname=repmgr port=5448 connect_timeout=2'

data_directory='/home/postgres/master/'

failover=automatic

promote_command='/usr/pgsql-13/bin/repmgr standby promote -f /home/postgres/master/repmgr.conf --log-to-file'

follow_command='/usr/pgsql-13/bin/repmgr standby follow -f /home/postgres/master/repmgr.conf --log-to-file --upstream-node-id=%n'


Parameter understanding

failover='automatic'

The failover parameter is one of the mandatory parameters for the repmgr daemon. This parameter tells the daemon if it should initiate an automatic failover when a failover situation is detected. It can have either of two values: “manual” or “automatic”. We will set this to automatic in each node.

promote_command

promote_command= '/usr/pgsql-13/bin/repmgr standby promote  -f /home/postgres/master/repmgr.conf  --log-to-file'

This is another mandatory parameter for the repmgr daemon. It tells the repmgr daemon what command it should run to promote a standby. The value will typically be the “repmgr standby promote” command, or the path to a shell script that calls it. For our use case, we set it as shown above in each node.

follow_command

follow_command= '/usr/pgsql-13/bin/repmgr standby follow  -f  /home/postgres/master/repmgr.conf   --log-to-file  --upstream-node-id=%n'

This is the third mandatory parameter for the repmgr daemon. This parameter tells a standby node to follow the new primary. The repmgr daemon replaces the %n placeholder with the node ID of the new primary at run time.

priority

The priority parameter adds weight to a node’s eligibility to become a primary. Setting this parameter to a higher value gives a node greater eligibility to become the primary node. Also, setting this value to zero for a node will ensure the node is never promoted as primary.

Suppose we have two standbys, replica1 and replica2, and we want replica1 to be promoted as the new primary when the master node goes offline, with replica2 then following replica1 as its new upstream. We set the parameter to the following values in the two standby nodes:

Node Name       Parameter Setting

replica1              priority=60

replica2              priority=40

monitor_interval_secs

This parameter tells the repmgr daemon how often (in seconds) it should check the availability of the upstream node. In our case, there is only one upstream node, i.e. the master node. The default value is 2 seconds, but we will set it explicitly in each node, like below:

monitor_interval_secs=2

connection_check_type

 This parameter dictates the protocol repmgr daemon will use to reach out to the upstream node. This parameter can take three values:

ping: repmgr uses the PQping() function.

connection: repmgr tries to create a new connection to the upstream node

query: repmgr tries to run a SQL query on the upstream node using the existing connection

Again, we will set this parameter to the default value of ping in each node:

connection_check_type='ping'

reconnect_attempts and reconnect_interval

When the primary becomes unavailable, the repmgr daemon in the standby nodes will try to reconnect to the primary for reconnect_attempts times. The default value for this parameter is 6. Between each reconnect attempt, it will wait for reconnect_interval seconds, which has a default value of 10.

For demonstration purposes, we will use a short interval and fewer reconnect attempts. We set this parameter in every node:

reconnect_attempts=4

reconnect_interval=8

primary_visibility_consensus

When the primary becomes unavailable in a multi-node cluster, the standbys can consult each other to build a quorum about a failover. This is done by asking each standby about the time it last saw the primary. If a node’s last communication was very recent and later than the time the local node saw the primary, the local node assumes the primary is still available, and does not go ahead with a failover decision.

To enable this consensus model, the primary_visibility_consensus parameter needs to be set to “true” in each node – including the witness:

primary_visibility_consensus=true

standby_disconnect_on_failover

When the standby_disconnect_on_failover parameter is set to “true” in a standby node, the repmgr daemon will ensure its WAL receiver is disconnected from the primary and not receiving any WAL segments. It will also wait for the WAL receivers of other standby nodes to stop before making a failover decision. This parameter should be set to the same value in each node. We are setting this to “true”.

standby_disconnect_on_failover=true

Setting this parameter to true means every standby node has stopped receiving data from the primary as the failover happens. The process will have a delay of 5 seconds plus the time it takes the WAL receiver to stop before a failover decision is made. By default, the repmgr daemon will wait for 30 seconds to confirm all sibling nodes have stopped receiving WAL segments before the failover happens.
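Taken together, the optional parameters discussed above would be appended to each node's repmgr.conf roughly like this (priority is shown with the replica1 value from the table above and must be adjusted per node):

```
priority=60                            # per-node weight; 0 means never promote
monitor_interval_secs=2                # how often repmgrd checks the upstream
connection_check_type='ping'           # PQping()-based availability check
reconnect_attempts=4
reconnect_interval=8
primary_visibility_consensus=true      # quorum on when the primary was last seen
standby_disconnect_on_failover=true    # stop WAL receivers before deciding
```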

 Step 6 – Register Primary database cluster in the repmgr configuration

Now that we have created the repmgr.conf file with the minimum required parameters, we have to register our primary database cluster with repmgr.

[postgres@Master master]$ /usr/pgsql-13/bin/repmgr  -f  /home/postgres/master/repmgr.conf  primary register


We can verify that the primary node was registered successfully using the below command –

[postgres@Master master]$ /usr/pgsql-13/bin/repmgr  -f  /home/postgres/master/repmgr.conf  cluster show


Step 7 – Create repmgr.conf file on standby node.

We can create it in any location, but I have created it in the data directory. My data directory is –


[postgres@Replica1 replica1]$ cat repmgr.conf

cluster='failovertesting'

node_id=2

node_name=Replica1

conninfo='host=192.168.204.134 user=repmgr password=welcome dbname=repmgr port=5448 connect_timeout=2'

data_directory='/home/postgres/replica1/'

failover=automatic

promote_command='/usr/pgsql-13/bin/repmgr  standby promote  -f /home/postgres/replica1/repmgr.conf --log-to-file'

follow_command='/usr/pgsql-13/bin/repmgr standby follow -f  /home/postgres/replica1/repmgr.conf --log-to-file  --upstream-node-id=%n'


Step 8 – Register Standby Database Cluster

Now we have to register our standby database cluster in the repmgr configuration.

[postgres@Replica1 replica1]$ /usr/pgsql-13/bin/repmgr  -f  /home/postgres/replica1/repmgr.conf  standby  register

I was getting the below error –


Then I had to force register it like below –

[postgres@Replica1 replica1]$ /usr/pgsql-13/bin/repmgr  -f  /home/postgres/replica1/repmgr.conf  standby  register --force


But when I checked the cluster status on the primary, it gave the below warning –

[postgres@Master ~]$ /usr/pgsql-13/bin/repmgr -f /home/postgres/master/repmgr.conf  cluster  show



So I had to run the below command on the standby database.

[postgres@Replica1 ~]$ /usr/pgsql-13/bin/repmgr -f /home/postgres/replica1/repmgr.conf  standby  follow


After that, when I checked the cluster status on the primary, it was running without any error –


I checked both the nodes and they are running fine –

/usr/pgsql-13/bin/repmgr -f /home/postgres/master/repmgr.conf  node  check



$ /usr/pgsql-13/bin/repmgr  -f  /home/postgres/master/repmgr.conf  cluster  crosscheck



 Step 9 – Start repmgrd process on master and standby

[postgres@Master ~]$ /usr/pgsql-13/bin/repmgrd  -f  /home/postgres/master/repmgr.conf


[postgres@Replica1 ~]$ /usr/pgsql-13/bin/repmgrd -f /home/postgres/replica1/repmgr.conf


We can see the series of events using the cluster event command –


When it starts monitoring, we will see the below details –



Repmgr Command Options


                          ====================The End================

Monday, September 27, 2021

Step By Step - How To Setup Physical Streaming Replication in Sync Mode in PostgreSQL

Step By Step – Physical Streaming Replication (Sync) in PostgreSQL

Here I am using two nodes and I have named them as Master and Replica respectively.

To set up physical streaming replication in sync mode, we first have to configure it in async mode and then convert it to sync mode.

Step 1 - Initialize a database cluster on the master node using the initdb utility. I have initialized a cluster named primary on the master node.



Step 2 – Change the port number in the postgresql.conf file. I have used port 5448.

 


Step 3 – Start the cluster using the pg_ctl utility.


 

Step 4 - Prepare the Master Server - Now we have to prepare the master server for physical replication. For this we need to change the below parameters in postgresql.conf of the primary db cluster.

1.     The primary db cluster must be in archive log mode, so the parameters should be set as below:

1.     wal_level = replica

2.     archive_mode = on

3.     archive_command = 'cp %p <arch_dir>/%f'  where arch_dir is the archive location directory which we have set.

4.     max_wal_senders = 3 or more

5.     listen_addresses = '*' to allow connections from any host.

Note – After changing the parameters we need to restart the cluster for them to take effect.

 

[postgres@Master ~]$ pg_ctl -D primary restart



After the restart, my parameter settings look like below –

 



Step 5 – Create a user with the replication privilege, or use a superuser
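The screenshot for this step is missing; the statement would look something like this (the repl user name and welcome password are the ones referenced later in primary_conninfo):

```sql
-- Dedicated replication user; name and password as used in primary_conninfo below
CREATE USER repl WITH REPLICATION PASSWORD 'welcome';
```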

        


Step 6 – Create a Physical replication slot

           postgres=# select * from pg_create_physical_replication_slot('standby1');
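We can confirm the slot exists by querying the pg_replication_slots view; it will show active = f until the standby attaches to it:

```sql
SELECT slot_name, slot_type, active FROM pg_replication_slots;
```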



Step 7 – Add entries into pg_hba.conf: Here you have to add entries in 2 sections -

1)# IPv4 local connections – In this section you need to add entries for your standby server(s).

2) # replication privilege – In this section you have to add entries for your replication user(s).

Example – I have added the entries which are highlighted in blue.
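Since the highlighted screenshot is missing, entries of roughly this shape would go into the two sections (the standby IP and repl user are the ones used in this post; the md5 auth method is an assumption):

```
# IPv4 local connections:
host    all             all             192.168.204.134/32      md5
# Allow replication connections:
host    replication     repl            192.168.204.134/32      md5
```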



Note – Restart the cluster after changing the pg_hba.conf file for the changes to take effect.

Step 8 – Take a pg_basebackup – We can take a pg_basebackup from the primary node or from the standby node. If we take the backup on the primary node, we have to transfer it to the standby node using SCP or any other file transfer method.

In my case I am taking pg_basebackup from standby node. My standby node name is Replica.

Note – Before taking the backup, make sure the same archive directory is also present on the standby node. On my Master node the archive directory is arch_dir, so I have created the same arch_dir directory on the standby node as well.

I have used the below command to take a pg_basebackup from the standby server -

[postgres@Replica ~]$ pg_basebackup -h 192.168.204.133 -U postgres -p 5448 -D /home/postgres/replica1 -P -Xs -R


Here I have used two notable options with pg_basebackup –

-R – This option is used to create the standby.signal file on the standby database.

-Xs – This option includes the required WAL files using the specified method. Here I have used s, which means the WAL is streamed while the backup is taken.

Once backup is successful we can see all the required directories and configuration files are created under /home/postgres/replica1 directory.



Step 9 – Prepare Standby Server :   Open postgresql.conf on the standby server and modify the below parameters –

1.     port – we need to change the port if we are running the primary and standby on the same node. In my case I have used different machines, so I am keeping the same port number, 5448.

2.     hot_standby = on.  By default it is off, and the standby will not allow read-only queries; it will just be used for high availability.

3.     primary_conninfo = 'user=repl password=welcome host=192.168.204.133 port=5448'. Here we specify the user through which we will connect to the primary host, along with the primary hostname and the port number we will be connecting to.

4.     primary_slot_name = 'standby1'

5.     recovery_target_timeline = 'latest'

Save the postgresql.conf and start the standby cluster.

[postgres@Replica ~]$ pg_ctl -D replica1 start


Once it is started we can see the walreceiver process on the standby server –


Also we can verify in the log file 


We can also verify from the primary server by querying the pg_stat_replication view –


Step 10 – Prepare Standby Server –

Add the application name to the primary_conninfo of the standby server:

primary_conninfo = 'user=repl password=welcome host=192.168.204.133 port=5448 application_name=replica1'

Step 11 – Prepare Primary Server  -

Add below parameters in the primary server :

1.       synchronous_commit = on

2.       synchronous_standby_names = 'replica1'

Note – Sometimes synchronous_standby_names simply does not pick up the application name value; in that case we can use –

             synchronous_standby_names = '*'                               

After making the changes, just restart the primary and standby servers.
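Once both servers are back up, the pg_stat_replication view on the primary shows whether the standby is now synchronous (sync_state should read sync for replica1):

```sql
SELECT application_name, state, sync_state FROM pg_stat_replication;
```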



                    ======================THE END=============================

Wednesday, September 22, 2021

Step By Step - How To Setup Physical Streaming Replication in Async Mode in PostgreSQL

 

Step By Step – Physical Streaming Replication (Async) in PostgreSQL

Here I am using two nodes and I have named them as Master and Replica respectively.

Step 1 - Initialize a database cluster on the master node using the initdb utility. I have initialized a cluster named primary on the master node.



Step 2 – Change the port number in the postgresql.conf file. I have used port 5448.

 


Step 3 – Start the cluster using the pg_ctl utility.


 

Step 4 - Prepare the Master Server - Now we have to prepare the master server for physical replication. For this we need to change the below parameters in postgresql.conf of the primary db cluster.

1.     The primary db cluster must be in archive log mode, so the parameters should be set as below:

1.     wal_level = replica

2.     archive_mode = on

3.     archive_command = 'cp %p <arch_dir>/%f'  where arch_dir is the archive location directory which we have set.

4.     max_wal_senders = 3 or more

5.     listen_addresses = '*' to allow connections from any host.

Note – After changing the parameters we need to restart the cluster for them to take effect.

 

[postgres@Master ~]$ pg_ctl -D primary restart



After the restart, my parameter settings look like below –

 



Step 5 – Create a user with the replication privilege, or use a superuser

        


Step 6 – Create a Physical replication slot

           postgres=# select * from pg_create_physical_replication_slot('standby1');



Step 7 – Add entries into pg_hba.conf: Here you have to add entries in 2 sections -

1)# IPv4 local connections – In this section you need to add entries for your standby server(s).

2) # replication privilege – In this section you have to add entries for your replication user(s).

Example – I have added the entries which are highlighted in blue.



Note – Restart the cluster after changing the pg_hba.conf file for the changes to take effect.

Step 8 – Take a pg_basebackup – We can take a pg_basebackup from the primary node or from the standby node. If we take the backup on the primary node, we have to transfer it to the standby node using SCP or any other file transfer method.

In my case I am taking pg_basebackup from standby node. My standby node name is Replica.

Note – Before taking the backup, make sure the same archive directory is also present on the standby node. On my Master node the archive directory is arch_dir, so I have created the same arch_dir directory on the standby node as well.

I have used the below command to take a pg_basebackup from the standby server -

[postgres@Replica ~]$ pg_basebackup -h 192.168.204.133 -U postgres -p 5448 -D /home/postgres/replica1 -P -Xs -R


Here I have used two notable options with pg_basebackup –

-R – This option is used to create the standby.signal file on the standby database.

-Xs – This option includes the required WAL files using the specified method. Here I have used s, which means the WAL is streamed while the backup is taken.

Once backup is successful we can see all the required directories and configuration files are created under /home/postgres/replica1 directory.



Step 9 – Prepare Standby Server :   Open postgresql.conf on the standby server and modify the below parameters –

1.     port – we need to change the port if we are running the primary and standby on the same node. In my case I have used different machines, so I am keeping the same port number, 5448.

2.     hot_standby = on.  By default it is off, and the standby will not allow read-only queries; it will just be used for high availability.

3.     primary_conninfo = 'user=repl password=welcome host=192.168.204.133 port=5448'. Here we specify the user through which we will connect to the primary host, along with the primary hostname and the port number we will be connecting to.

4.     primary_slot_name = 'standby1'

5.     recovery_target_timeline = 'latest'

Save the postgresql.conf and start the standby cluster.

[postgres@Replica ~]$ pg_ctl -D replica1 start


Once it is started we can see the walreceiver process on the standby server –


Also we can verify in the log file 


We can also verify from the primary server by querying the pg_stat_replication view –
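On the primary, a query of this shape shows the attached standby; state = 'streaming' and sync_state = 'async' are what we expect for this setup:

```sql
SELECT client_addr, state, sync_state FROM pg_stat_replication;
```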

                                 ======================The END=============================