Postgres Topic


PostgreSQL All Topics

PostgreSQL Enterprise Manager - PEM Monitoring Tools

PostgreSQL Drop Database

PostgreSQL Connect Database

PostgreSQL Database Creation

Connect Postgres Server

PostgreSQL -11 Installation (rpm & source code)

PostgreSQL 10 Installation

PostgreSQL Database startup / shutdown /restart

Configure the network & Disk Partition

PostgreSQL - Oracle Vs PostgreSQL

PostgreSQL Monitoring Tools

PostgreSQL Features

PostgreSQL Brief History

PostgreSQL Introduction

Script to take backup of Postgres DDL objects as individual files

Automatically overwrite PostgreSQL logfiles once they reach 7 days old

Postgres Streaming Replication Configuration

How to fix xlog / WAL disk full issue in the Postgres database?

PostgreSQL 11 Source code Installation

PostgreSQL 11 Installation - RPM

PostgreSQL SSL configuration

Setting postgreSQL logfile retention 7 days

PostgreSQL -11 Installation (rpm & source code)

PostgreSQL New version upgrade

Script to take backup of Postgres objects' DDL as individual files

Postgresql maximum size

PostgreSQL Sequence

Script for vacuum & analyzing selected postgresql tables

PostgreSQL autovacuum & Parameter configuration

Script to list PostgreSQL tables with dead tuples

How to Get Table Size, Database Size, Indexes Size, schema Size, Tablespace Size, column Size in
PostgreSQL Database

Postgresql Partitioned Tables

Postgresql Server monitoring shell script

What is the difference between streaming replication vs hot standby vs warm standby?

SSL setup in PostgreSQL

Postgresql Transaction isolation

Methods of installing PostgreSQL

Oracle to Postgresql migration

HOW TO SETUP/CONFIGURE THE POSTGRESQL STREAMING REPLICATION ON POSTGRESQL 10?

vacuumlo - removing large objects orphans from a database PostgreSQL

Oracle vs postgreSQL data types

How to verify whether a Postgres dump is correct or not?

Migrating From Oracle to PostgreSQL using ora2pg open source tools

How to Move particular Postgres Schema to other database ?

Steps of PostgreSQL (point in time recovery) PITR

PostgreSQL pg_stat_activity status

How to connect vmware virtual machine server using putty & pgadmin ?

HOW TO TAKE THE BACKUP FROM SLAVE SERVER - POSTGRESQL 10.3?

How to add one extra slave to an existing PostgreSQL cascade replication without downtime?

How to Configure the cascade replication On PostgreSQL 10.3 ?

PostgreSQL 9.6 idle_in_transaction_session_timeout parameter

PostgreSQL pg_stat_activity

How to upgrade PostgreSQL with minimal downtime without using pg_upgrade?

PostgreSQL Cross Database Queries using DbLink

Difference Between Database Administrator vs Database Architect

PostgreSQL Log Compressing and Moving Script

PostgreSQL Killing Long Running Query and Most Resource Taken Process Script

How To Configure pglogical | streaming replication for PostgreSQL

PostgreSQL Multiple Schema Backup & Restore: Backup Script, Restore Script, Backup & Restore Prerequisites & Post-checks

PostgreSQL Script for what Query is running more than x minutes with status

Script For Finding slow Running Query and Most CPU Utilization Query Using Top command PID

PostgreSQL interview questions and answers

PostgreSQL Monitoring Script

PostgreSQL DBA interview questions and answers

Script to kill ALL IDLE Connection In postgreSQL

How to write script to Get table and index sizes in PostgreSQL ?

VBOX installation full guide

HOW TO SETUP A LOGICAL REPLICATION WITHIN 5 MINUTES ON POSTGRESQL

HOW TO INSTALL POSTGRESQL 10

Daily monitoring of PostgreSQL master and slave servers, scheduling vacuum, and changing PostgreSQL parameters

PostgreSQL pg_stat_statements Extension

Insert script for all Countries

Script to find active sessions or connections in PostgreSQL

Script to find sessions that are blocking other sessions in PostgreSQL

Script to Find Table and Column without comment or description

Script to find which group roles are granted to the User

Important PostgreSQL DBA Commands -2

Important PostgreSQL DBA Commands

Script to find the unused and duplicate index

Script to find a Missing Indexes of the schema

Fast way to find the row count of a Table

Script to find Source and Destination of All Foreign Key Constraint

Kill all Running Connections and Sessions of a Database

How to create a Function to truncate all Tables created by Particular User

PostgreSQL Parameters to enable Log for all Queries

VACUUM FULL without Disk Space

How to Stop all Connections and Force to Drop the Database

Script to create a copy of the Existing Database

DROP FUNCTION script with the type of Parameters

measure the size of a Table Row and Data Page

PostgreSQL Script to Kill All IDLE Sessions Every 2 Minutes

PostgreSQL Script to kill 'idle', 'idle in transaction', 'idle in transaction (aborted)', 'disabled'
sessions of a Database

find all Default Values of the Columns

change the Database User Password

find TOP 10 Long Running Queries using pg_stat_statements

Monitor ALL SQL Query Execution Statistics using pg_stat_statements Extension

Copy Database to another Server in Windows (pg_dump – backup & restore)

check Table Fragmentation using pgstattuple module

find total Live Tuples and Dead Tuples

find Index Size and Index Usage Statistics

BRIN – Block Range Index with Performance Report: Features of PostgreSQL 9.5

How we can create Index on Expression?

find a Missing Indexes of the schema

find the unused and duplicate index

find Version and Release Information

search any Text from the Stored Function

copy table data from one server to another server

find Orphaned Sequence, not owned by any Column

Script to find the count of objects for each Database Schema

Find a list of active Temp tables with Size and User information

Improve the performance of pg_dump pg_restore

PostgreSQL: Script to Create a Read-Only Database User

Postgres IP Errors

postgreSQL Compress format backup

ERROR: 57014: cancelling statement due to statement timeout in postgreSQL

PostgreSQL BRIN index WITH pages_per_range

PostgreSQL BRIN index

PostgreSQL index With Explain plan

PostgreSQL Important Parameters for better Performance

PostgreSQL 9.4 FILTER CLAUSE

What is VACUUM, VACUUM FULL and ANALYZE in PostgreSQL

How to Improve the performance of PostgreSQL Query Sort operation

Force Autovacuum for running

How to Tuning Checkpoint Parameters In PostgreSQL

PostgreSQL Database Maintenance Operation

TABLESAMPLE, SQL STANDARD AND EXTENSIBLE postgreSQL 9.5

PICK A TASK TO WORK ON

Foreign Table Inheritance

PostgreSQL Row-Level Security Policies

PostgreSQL 9.5: IMPORT FOREIGN SCHEMA

FOREIGN DATA WRAPPER

IMPORT FOREIGN SCHEMA

PostgreSQL Commit timestamp tracking

Parallel VACUUMing

PostgreSQL SKIP LOCKED

PostgreSQL ALTER TABLE ... SET LOGGED / UNLOGGED

PostgreSQL pg_rewind

PostgreSQL Index

PostgreSQL Old Archive Move Script,

How to kill idle session and what is the shell script for killing idle connection ?

PostgreSQL Full backup and incremental backup script

Simple PostgreSQL vacuum script

PostgreSQL Vacuum and analyze script

Oracle And PostgreSQL DBA related Software

EDB Failover Manager Guide

The Connection Service File

PostgreSQL 9.6 ADD NEW SYSTEM VIEW, PG_CONFIG

POSTGRESQL USING RAM

TSVECTOR EDITING FUNCTIONS

PostgreSQL 9.6 Parallel query

Important postgres commands:

===============================

https://dbaclass.com/postgres-db-scripts/

https://dbaclass.com/postgres-database-articles/

https://dbaclass.com/article/how-to-open-postgres-standby-database-for-read-writesnapshot-standby/

POSTGRES DB SCRIPTS


DATABASE MANAGEMENT

Create a database in postgres

How to connect to postgres db

Drop database in postgres

Find the database details in postgres

Find postgres db sizes

How to find timezone info

How to find postgres version

How to enable archiving(wal) in postgres

How to rotate server log in postgres

Find query execution time using pg_stat_statement

How to find data directory location

Find current sessions in postgres

Kill a session in postgres

Cancel a session in postgres

Kill all session of a user in postgres

Find locks present in postgres

Find blocking sessions in postgres

Find location of postgres conf files

Find current data/time on postgres db

Find extension details

Find startup time and uptime postgres

Find archiver process status

Find postgres configuration values

Find the last pg config reload time

How to do wal switch manually

Monitor archiving progress

View/modify connection limit of database

Find wal file details and its size

Find temp file usage of databases

OBJECT MANAGEMENT

Create/drop table in postgres

Create/drop index commands in postgres

Find list of schemas in postgres

list of objects presents in a schema

Find schema wise size in postgres db

Find top 10 big tables in postgres

Find tables and its index sizes

List down index details in postgres

Find the size of a column

Find respective physical file of a table/index

Find list of views present

Manage sequences in postgres

Create Partial index in postgres

Find foreign key details in postgres

Find specific table/index size

Find list of partitioned table details

MAINTENANCE

Update statistics of a table using analyze

Reorg a table using VACUUM command

Reorg a table using VACUUM FULL command

Manage autovacuum process in postgres

Rebuild indexes using REINDEX

Monitor index creation or rebuild

Monitor vacuum operation

find and change statistics level of a column

Find vacuum settings of tables

Modify autovacuum setting of table/index

Find last vacuum/analyze details of a table

Find how much bloating a table has

USER MANAGEMENT

List users present in postgres

List roles present in postgres

create/drop user in postgres

Create/drop role in postgres

Alter a user in postgres

Convert a user to superuser

Set password to original one without knowing

GRANT privilege commands

REVOKE privilege commands

Create user profile - EDB Postgres

Create/Drop schema in postgres

Find search_path setting of users

Set search_path of a user in postgres

Find privileges granted to a user postgres

Find the roles granted to user/role


TABLESPACE MANAGEMENT

View tablespace info in postgres

create/drop/rename tablespace in postgres

find/change default tablespace

find/change default temp tablespace

How to change tablespace owner

Move table/index to different tablespace

Move database to new tablespace in postgres

BACKUP & RECOVERY

export table data to file using COPY

AUDITING & SECURITY

Find pg_hba.conf file content

Enable auditing for ddl/dml statement

Enable audit for log on/log off to postgres

NETWORK

Find foreign server details

Find list of foreign tables

List foreign data wrapper

Find user mapping for FDW

Create database link between postgres dbs

Create/Modify foreign server

REPLICATION

Check streaming recovery status

Check replication details on primary server

Get received /replayed WAL records on standby(replication)

How to stop/resume recovery in standby (replication)

Find lag in streaming replication

Manage replication slots

Find subscription details in logical replication

PERFORMANCE

Find foreign server details

GENERIC

Find autocommit setting in postgres

Run a query repeatedly automatically

tablespaces

------------------------------

CREATE TABLESPACE:

postgres=# create tablespace ts_postgres location '/Library/PostgreSQL/TEST/TS_POSTGRES';

CREATE TABLESPACE

RENAME TABLESPACE:

postgres=# alter tablespace ts_postgres rename to ts_dbaclass;

ALTER TABLESPACE

DROP TABLESPACE:

postgres=# drop tablespace ts_dbaclass;

DROP TABLESPACE

postgres=# select * from pg_tablespace;

(OR)

postgres=# \db+

(or)

-- For getting size of specific tablespace:

postgres=# select pg_size_pretty(pg_tablespace_size('ts_dbaclass'));

pg_size_pretty

----------------

 96 bytes

Pre-configured tablespaces (these are the default tablespaces):

pg_global -> PGDATA/global -> used for cluster-wide tables and the system catalog

pg_default -> PGDATA/base directory -> stores databases and relations

postgres=# show default_tablespace;

default_tablespace

--------------------

(1 row)

<< If the output is blank, it means the default is the pg_default tablespace >>
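The effective default can be resolved in one statement; a small sketch (the coalesce/nullif pattern here is our own, not from the source):

```sql
-- An empty default_tablespace means objects go to the database's
-- default tablespace (normally pg_default):
SELECT coalesce(nullif(current_setting('default_tablespace'), ''),
                'pg_default') AS effective_default;
```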

--To change the default tablespace at instance level (ALTER SYSTEM applies cluster-wide):

postgres=# alter system set default_tablespace=ts_postgres;

ALTER SYSTEM

postgres=# select pg_reload_conf();

pg_reload_conf

----------------

(1 row)

postgres=# show default_tablespace;

default_tablespace

--------------------

ts_postgres

(1 row)

postgres=# SELECT name, setting FROM pg_settings where name='default_tablespace';

name | setting

--------------------+-------------

default_tablespace | ts_postgres

(1 row)

Steps to change default tablespace at session level:

postgres=# set default_tablespace=ts_postgres;

SET

VIEW DEFAULT TEMP TABLESPACE:

dbaclass=# SELECT name, setting FROM pg_settings where name='temp_tablespaces';

name | setting

------------------+---------

temp_tablespaces |

(1 row)

dbaclass=# show temp_tablespaces;

temp_tablespaces

------------------

(1 row)

CHANGE DEFAULT TEMP TABLESPACE

postgres=# alter system set temp_tablespaces=TS_TEMP;

ALTER SYSTEM

postgres=# select pg_reload_conf();

pg_reload_conf

----------------

(1 row)

postgres=# show temp_tablespaces;

temp_tablespaces

------------------

ts_temp

(1 row)

postgres=# SELECT name, setting FROM pg_settings where name='temp_tablespaces';

name | setting

------------------+---------

temp_tablespaces | ts_temp

(1 row)

-- Change ownership of tablespace ts_postgres to user dev_admin

postgres# alter tablespace ts_postgres owner to dev_admin;

postgres# \db+

Move table/index to different tablespace

-------------------------------------

-- move table to different tablespace

prod_crm=# alter table TEST8 set tablespace pg_crm;

ALTER TABLE

-- Move index to different tablespace

prod_crm=# alter index TEST_ind set tablespace pg_crm;

ALTER INDEX

postgres=# alter database prod_crm set tablespace crm_tblspc;

Before running this, make sure there are no active connections in the database.

You can kill the existing sessions using the below query.

postgres# select pg_terminate_backend(pid) from pg_stat_activity where datname='DB_NAME';

==============================================================================
===================================

database management:

--------------------

-- Below commands can be used to create database

postgres=# create database DBATEST;

CREATE DATABASE

postgres=# create database DBATEST with tablespace ts_postgres;

CREATE DATABASE

postgres#CREATE DATABASE "DBATEST"

WITH TABLESPACE ts_postgres

OWNER "postgres"

ENCODING 'UTF8'

LC_COLLATE = 'en_US.UTF-8'

LC_CTYPE = 'en_US.UTF-8'

TEMPLATE template0;

-- View database information:

postgres=# \l

postgres# select * from pg_database;

<< Note - alternatively database can be created using pgadmin GUI tool also >>>
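A database can also be created from the shell with the createdb wrapper; a sketch assuming the same database and tablespace names as above:

```shell
# createdb is a thin command-line wrapper around CREATE DATABASE;
# -D picks the tablespace, -E the encoding.
createdb -U postgres -D ts_postgres -E UTF8 "DBATEST"
```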

how to connect to postgres db:

set PATH if not done:

postgres$ export PATH=/Library/PostgreSQL/10/bin:$PATH

postgres$ which psql

/Library/PostgreSQL/10/bin/psql

connect to db using syntax: psql -d <dbname> -U <username>

postgres$ psql -d edb -U postgres

Password for user postgres:

psql (10.13)

Type "help" for help.

postgres=#

Find current connection info:

postgres=# \conninfo

You are connected to database "edb" as user "postgres" via socket in "/tmp" at port "5432".

postgres=# select current_schema,current_user,session_user,current_database();

current_schema | current_user | session_user | current_database

----------------+--------------+--------------+------------------

public | postgres | postgres | edb

Drop database from psql

Note - while dropping a database, you need to connect to a database other than the db you are
trying to drop.
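On newer servers the connection restriction can be bypassed; a hedged sketch (WITH (FORCE) exists only on PostgreSQL 13 and later):

```sql
-- PostgreSQL 13+ terminates remaining sessions itself before dropping:
DROP DATABASE "DBACLASS" WITH (FORCE);
```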

postgres=# \conninfo

You are connected to database "postgres" as user "postgres" on host "localhost" at port "5432".

postgres#drop database "DBACLASS";

-- Drop database using dropdb os utility

postgres$ pwd

/Library/PostgreSQL/10/bin

postgres$ ./dropdb -e "DBACLASS"

Password:

SELECT pg_catalog.set_config('search_path', '', false)

DROP DATABASE "DBACLASS";

postgres=# \list+

List of databases

 Name      |  Owner   | Encoding | Collate | Ctype |   Access privileges   |  Size   | Tablespace |            Description
-----------+----------+----------+---------+-------+-----------------------+---------+------------+--------------------------------------------
 dbaclass  | postgres | UTF8     | C       | C     |                       | 2268 MB | pg_default |
 postgres  | postgres | UTF8     | C       | C     |                       | 4132 MB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | C       | C     | =c/postgres          +| 7601 kB | pg_default | unmodifiable empty database
           |          |          |         |       | postgres=CTc/postgres |         |            |
 template1 | postgres | UTF8     | C       | C     | =c/postgres          +| 7601 kB | pg_default | default template for new databases
           |          |          |         |       | postgres=CTc/postgres |         |            |
(4 rows)

postgres=# select datname from pg_database;

datname

-----------

postgres

template1

template0

dbaclass

(4 rows)

How to get postgres db size:

postgres=# SELECT pg_database.datname AS "database_name",
pg_size_pretty(pg_database_size(pg_database.datname)) AS size_in_mb
FROM pg_database ORDER BY pg_database_size(pg_database.datname) DESC;

database_name | size_in_mb

---------------+------------

DBACLASS | 7767 kB

postgres | 7735 kB

template1 | 7735 kB

template0 | 7601 kB

(4 rows)

(or)

postgres=# \l+

List of databases

 Name      |  Owner   | Encoding | Collate | Ctype |   Access privileges   |  Size   | Tablespace |            Description
-----------+----------+----------+---------+-------+-----------------------+---------+------------+--------------------------------------------
 DBACLASS  | postgres | UTF8     | C       | C     |                       | 7767 kB | pg_default | TESTING DB
 postgres  | postgres | UTF8     | C       | C     |                       | 7735 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | C       | C     | =c/postgres          +| 7601 kB | pg_default | unmodifiable empty database
           |          |          |         |       | postgres=CTc/postgres |         |            |
 template1 | postgres | UTF8     | C       | C     | =c/postgres          +| 7735 kB | pg_default | default template for new databases
           |          |          |         |       | postgres=CTc/postgres |         |            |

(4 rows)

Commands to find timezone information

dbaclass=# show timezone

TimeZone

--------------

Asia/Kolkata

(1 row)

dbaclass=# SELECT current_setting('TIMEZONE');

current_setting

-----------------

Asia/Kolkata

dbaclass=# select name,setting,short_desc,boot_val from pg_settings where name='TimeZone';

name | setting | short_desc | boot_val

----------+--------------+-----------------------------------------------------------------+----------

 TimeZone | Asia/Kolkata | Sets the time zone for displaying and interpreting time stamps. | GMT

(1 row)

STEPS FOR enabling archiving:

1. Create directory for archiving:

mkdir -p /Library/PostgreSQL/10/data/archive/

2. Update the postgresql.conf file with the below values

wal_level = replica

archive_mode = on

max_wal_senders=1

archive_command = 'test ! -f /Library/PostgreSQL/10/data/archive/%f && cp %p /Library/PostgreSQL/10/data/archive/%f'

3. Restart the postgres servers

export PGDATA=/Library/PostgreSQL/10/data

pg_ctl stop

pg_ctl start

4. Check archive status:

postgres=# select name,setting from pg_settings where name like 'archive%';

name | setting

-----------------+--------------------------------------------------------------------------------------------------

archive_command | test ! -f /Library/PostgreSQL/10/data/archive/%f && cp %p /Library/PostgreSQL/10/data/archive/%f

archive_mode | on

archive_timeout | 0
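To confirm archiving works end to end, force a segment switch and watch pg_stat_archiver advance; a small sketch:

```sql
-- Close the current WAL segment so archive_command fires for it
-- (on PostgreSQL 9.x the function is pg_switch_xlog()):
SELECT pg_switch_wal();

-- last_archived_wal should advance shortly afterwards:
SELECT last_archived_wal, last_failed_wal FROM pg_stat_archiver;
```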

-- Monitor query execution time

select substr(query,1,100) query, calls, min_time/1000 "min_time(in sec)",
max_time/1000 "max_time(in sec)", mean_time/1000 "avg_time(in sec)", rows
from pg_stat_statements order by mean_time desc;

--------------------------------------------------------------------------------------------------------------------------------
-----------------------

NOTE:

If pg_stat_statements is not available in your database, then activate it using the below:

-- Add below parameters in postgresql.conf file and restart the postgres cluster

shared_preload_libraries = 'pg_stat_statements'

pg_stat_statements.track = all

sudo service postgresql restart

-- Now create extension:

dbaclass=# create extension pg_stat_statements;

CREATE EXTENSION
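To confirm the extension is active, query pg_extension (a quick check, not from the source):

```sql
-- Lists the installed extension and its version:
SELECT extname, extversion FROM pg_extension
WHERE extname = 'pg_stat_statements';
```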

HOW TO FIND DATA DIRECTORY LOCATION

data_directory -> specifies the directory used for data storage.

dbaclass=# show data_directory;

data_directory

-----------------------------

/Library/PostgreSQL/10/data

(1 row)

dbaclass=# select setting from pg_settings where name = 'data_directory';

setting

-----------------------------

/Library/PostgreSQL/10/data

(1 row)

-- This will show location of important files in postgres

dbaclass=# SELECT name, setting FROM pg_settings WHERE category = 'File Locations';

name | setting

-------------------+---------------------------------------------

config_file | /Library/PostgreSQL/10/data/postgresql.conf

data_directory | /Library/PostgreSQL/10/data

external_pid_file |

hba_file | /Library/PostgreSQL/10/data/pg_hba.conf

ident_file | /Library/PostgreSQL/10/data/pg_ident.conf

(5 rows)

Find sessions in the postgres:

select pid as process_id,

usename as username,

datname as database_name,

client_addr as client_address,

application_name,

backend_start,

state,

state_change,query

from pg_stat_activity;

-- For specific database:

select pid as process_id,

usename as username,

datname as database_name,

client_addr as client_address,

application_name,

backend_start,

state,

state_change,query

from pg_stat_activity where datname='dbaclass';

 process_id | username | database_name | client_address | application_name |          backend_start           | state  |           state_change
------------+----------+---------------+----------------+------------------+----------------------------------+--------+----------------------------------
      18970 | postgres | dbaclass      |                | psql             | 2020-07-03 20:27:42.225987+05:30 | active | 2020-07-03 23:19:12.023416+05:30
(1 row)

Find locks present in postgres

select t.relname, l.locktype, page, virtualtransaction, pid, mode, granted
from pg_locks l, pg_stat_all_tables t
where l.relation = t.relid order by relation asc;

-- QUERY TO FIND BLOCKING SESSION DETAILS

dbaclass# select pid as blocked_pid, usename, pg_blocking_pids(pid) as "blocked_by(pid)",
query as blocked_query from pg_stat_activity where cardinality(pg_blocking_pids(pid)) > 0;

output:

blocked_pid | usename | blocked_by(pid) | blocked_query

-------------+----------+-------------+------------------------------

4206 | postgres | {3673} | alter table test drop query;
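Once the blocker's pid is known, it can be terminated with pg_terminate_backend; a sketch using the pid from the sample output above:

```sql
-- Terminate the blocking backend (3673 is the pid from the example output);
-- returns t on success:
SELECT pg_terminate_backend(3673);
```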

Find location of postgres related conf files

dbaclass=# SELECT name, setting FROM pg_settings WHERE category = 'File Locations';

name | setting

-------------------+---------------------------------------------

config_file | /Library/PostgreSQL/10/data/postgresql.conf

data_directory | /Library/PostgreSQL/10/data

external_pid_file |

hba_file | /Library/PostgreSQL/10/data/pg_hba.conf

ident_file | /Library/PostgreSQL/10/data/pg_ident.conf

(5 rows)

alternatively:

postgres=# show config_file;

config_file

--------------------------------------------

/Library/PostgreSQL/10/data/postgresql.conf

(1 row)

postgres=# show hba_file;

hba_file

-----------------------------------------

/Library/PostgreSQL/10/data/pg_hba.conf

Find archiver process status

postgres# select * from pg_stat_archiver;

-[ RECORD 1 ]------+---------------------------------

archived_count     | 0

last_archived_wal  |

last_archived_time |

failed_count       | 0

last_failed_wal    |

last_failed_time   |

stats_reset        | 26-SEP-20 11:13:08.540237 +03:00

1. Get config values from the psql prompt.

postgres=# select * from pg_settings;

\x

postgres=# select * from pg_settings where name='port';

2. Alternatively you can check postgresql.conf file

postgres=# show config_file;

config_file

---------------------------------

/pgdata/data/postgresql.conf

(1 row)

cat /pgdata/data/postgresql.conf

-- Last pg config reload time

postgres=# select pg_conf_load_time() ;

pg_conf_load_time

----------------------------------

2020-07-06 13:20:18.048689+05:30

(1 row)

-- Reload again and see whether reload time changed or not

postgres=# select pg_reload_conf();

pg_reload_conf

----------------

(1 row)

postgres=# select pg_conf_load_time() ;

pg_conf_load_time

----------------------------------

2020-07-06 17:46:59.958056+05:30

(1 row)

* Monitor archiving progress

postgres#select pg_walfile_name(pg_current_wal_lsn()),last_archived_wal,last_failed_wal,

('x'||substring(pg_walfile_name(pg_current_wal_lsn()),9,8))::bit(32)::int*256 +

('x'||substring(pg_walfile_name(pg_current_wal_lsn()),17))::bit(32)::int -

('x'||substring(last_archived_wal,9,8))::bit(32)::int*256 -

('x'||substring(last_archived_wal,17))::bit(32)::int

as diff from pg_stat_archiver;

-- View existing connection limit setting (datconnlimit):

postgres=# select datname, datallowconn, datconnlimit from pg_database where datname='test_dev';

-[ RECORD 1 ]--+------------

datname        | test_dev

datallowconn   | t

datconnlimit   | -1     -- -1 means unlimited connections allowed

-- To set a specific limit for connection

test_dev=# alter database test_dev connection limit 100;

ALTER DATABASE

-- To restrict all the connections to db

test_dev=# alter database test_dev connection limit 0;

ALTER DATABASE

NOTE -> Even if the connection limit is set to 0, the superuser will still be able to connect to the database.

-- List down all the wal files present in pg_wal

postgres=# select * from pg_ls_waldir();

name | size | modification

------------------------------------------+----------+---------------------------

0000000100000079000000D5 | 16777216 | 22-APR-22 20:51:26 +03:00

0000000100000079000000D8 | 16777216 | 22-APR-22 20:39:33 +03:00

0000000100000079000000D6 | 16777216 | 22-APR-22 20:07:40 +03:00

0000000100000079000000D9 | 16777216 | 22-APR-22 20:47:21 +03:00

0000000100000079000000D7 | 16777216 | 22-APR-22 20:21:45 +03:00

00000001000000790000005C.00005BC8.backup | 323 | 21-APR-22 10:14:40 +03:00

(6 rows)

-- Find total size of wal:

postgres=# select sum(size) from pg_ls_waldir();

sum

----------

83886403

(1 row)

-- Find current wal file lsn:

postgres=# select pg_current_wal_insert_lsn(),pg_current_wal_lsn();

pg_current_wal_insert_lsn | pg_current_wal_lsn

---------------------------+--------------------

79/D5980480 | 79/D5980480

(1 row)
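The byte distance between two LSNs can be computed with pg_wal_lsn_diff; a sketch reusing the LSN from the example output above:

```sql
-- Bytes of WAL generated since the reference LSN
-- ('79/D5980480' is taken from the example above):
SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), '79/D5980480') AS wal_bytes;
```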

postgres=# SELECT datname, temp_files, temp_bytes, stats_reset FROM pg_stat_database;

  datname  | temp_files | temp_bytes |           stats_reset
-----------+------------+------------+----------------------------------
           |          0 |          0 | 18-APR-22 18:23:33.09366 +03:00
 postgres  |          2 |   28000000 | 18-APR-22 18:23:33.093639 +03:00
 edb       |          0 |          0 | 18-APR-22 18:23:37.095023 +03:00
 template1 |          0 |          0 |
 template0 |          0 |          0 |
 b2cplmdev |          0 |          0 | 18-APR-22 18:23:35.093019 +03:00
 test_dev  |          0 |          0 | 19-APR-22 10:17:28.826261 +03:00

(7 rows)

-- Top 10 big tables in postgres

select schemaname as schema_owner,

relname as table_name,

pg_size_pretty(pg_total_relation_size(relid)) as total_size,

pg_size_pretty(pg_relation_size(relid)) as used_size,

pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid))

as free_space

from pg_catalog.pg_statio_user_tables

order by pg_total_relation_size(relid) desc,

pg_relation_size(relid) desc

limit 10;

(or)

SELECT

nspname as schema_name, relname as table_name, pg_size_pretty(pg_relation_size(c.oid)) as "table_size"

from pg_class c left join pg_namespace n on ( n.oid=c.relnamespace)

where nspname not in ('pg_catalog','information_schema')

order by pg_relation_size(c.oid) desc limit 10;

Below queries can be used to get schema wise size in postgres db

postgres=# select
schemaname,pg_size_pretty(sum(pg_relation_size(quote_ident(schemaname) || '.' ||
quote_ident(tablename)))::bigint) as schema_size FROM pg_tables group by schemaname;

schemaname | pg_size_pretty

--------------------+----------------

raj | 8192 bytes

public | 3651 MB

pg_catalog | 2936 kB

information_schema | 96 kB

(4 rows)

postgres=# SELECT schemaname,

pg_size_pretty(sum(table_size)::bigint) as schema_size,

(sum(table_size) / pg_database_size(current_database())) * 100 as percentage_of_total_db

FROM (

SELECT pg_catalog.pg_namespace.nspname as schemaname,

pg_relation_size(pg_catalog.pg_class.oid) as table_size

FROM pg_catalog.pg_class

JOIN pg_catalog.pg_namespace ON relnamespace = pg_catalog.pg_namespace.oid

)t

GROUP BY schemaname

ORDER BY schemaname;

schemaname | schema_size | percentage_of_total_db

--------------------+-------------+----------------------------

information_schema | 96 kB | 0.002561568956316216939600

pg_catalog | 6120 kB | 0.16330002096515883000

pg_toast | 648 kB | 0.01729059045513446400

public | 3651 MB | 99.76265110861169191100

raj | 8192 bytes | 0.000213464079693018078300

(5 rows)

-- Find table sizes and its respective index sizes

SELECT

table_name,

pg_size_pretty(table_size) AS table_size,

pg_size_pretty(indexes_size) AS indexes_size,

pg_size_pretty(total_size) AS total_size

FROM (

SELECT

table_name,

pg_table_size(table_name) AS table_size,

pg_indexes_size(table_name) AS indexes_size,

pg_total_relation_size(table_name) AS total_size

FROM (

SELECT ('"' || table_schema || '"."' || table_name || '"') AS table_name

FROM information_schema.tables

) AS all_tables

ORDER BY total_size DESC

) AS pretty_sizes limit 10;

-- It will find the indexes present on a table 'test'

postgres=# select * from pg_indexes where tablename='test';

 schemaname | tablename | indexname | tablespace  |                         indexdef
------------+-----------+-----------+-------------+-----------------------------------------------------------
 public     | test      | tes_idx1  | ts_postgres | CREATE INDEX tes_idx1 ON public.test USING btree (datid)

(1 row)

-- All indexes present in database:

postgres# select * from pg_indexes;

-- It will show all index details including size:

postgres=# \di+

List of relations

Schema | Name | Type | Owner | Table | Size | Description

--------+----------+-------+----------+--------+--------+-------------

public | tes_idx | index | postgres | test56 | 64 kB |

public | tes_idx1 | index | postgres | test | 472 MB |

(2 rows)

-- Find indexes with respective column name for table( here table name is test)

REFERENCE - https://stackoverflow.com/questions/2204058/list-columns-with-indexes-in-postgresql

select

t.relname as table_name,

i.relname as index_name,

array_to_string(array_agg(a.attname), ', ') as column_names

from

pg_class t,

pg_class i,

pg_index ix,

pg_attribute a

where

t.oid = ix.indrelid

and i.oid = ix.indexrelid

and a.attrelid = t.oid

and a.attnum = ANY(ix.indkey)

and t.relkind = 'r'

and t.relname ='test'

group by

t.relname,

i.relname

order by

t.relname,

i.relname;

* Find list of views present

postgres=# select * from pg_views where schemaname not in ('pg_catalog','information_schema','sys');

postgres#\dv

List of relations

Schema | Name | Type | Owner

--------+--------------------+------+--------------

public | pg_stat_statements | view | enterprisedb

(1 row)

-- Find the sequence details:

select * from pg_sequences;

(or)

\ds+

List of relations

Schema | Name | Type | Owner | Size | Description

--------+-----------+----------+--------------+------------+-------------

public | class_seq | sequence | enterprisedb | 8192 bytes |

(1 row)

-- Create sequences:

postgres# CREATE SEQUENCE class_seq INCREMENT 1 MINVALUE 1 MAXVALUE 1000 START 1;

CREATE SEQUENCE

-- Create sequence in descending:

postgres# CREATE SEQUENCE class_seq INCREMENT -1 MINVALUE 1 MAXVALUE 1000 START 1000;

CREATE SEQUENCE

-- Alter sequence to change maxvalue:

postgres=# alter sequence class_seq maxvalue 500;

ALTER SEQUENCE

-- Reset a sequence using alter command:

postgres=# alter sequence class_seq restart with 1;

ALTER SEQUENCE

-- Find next_val and currval of a sequence:

postgres=# select nextval('class_seq');

nextval

---------

(1 row)

postgres=# select currval('class_seq');

currval

---------

(1 row)

A partial index is an index built on a subset of a table's rows, defined by a WHERE predicate.

edbstore=> create index part_emp_idx on orders(tax) where tax > 400;

CREATE INDEX

edbstore=> \d part_emp_idx

Index "edbuser.part_emp_idx"

Column | Type | Key? | Definition

--------+---------------+------+------------

tax | numeric(12,2) | yes | tax

btree, for table "edbuser.orders", predicate (tax > 400::numeric)

-- List down all partitioned tables present in db

SELECT

nmsp_parent.nspname AS parent_schema,

parent.relname AS parent,

nmsp_child.nspname AS child_schema,

child.relname AS child

FROM pg_inherits

JOIN pg_class parent ON pg_inherits.inhparent = parent.oid

JOIN pg_class child ON pg_inherits.inhrelid = child.oid

JOIN pg_namespace nmsp_parent ON nmsp_parent.oid = parent.relnamespace

JOIN pg_namespace nmsp_child ON nmsp_child.oid = child.relnamespace;

-- List down all partitions of a single table:

SELECT

nmsp_parent.nspname AS parent_schema,

parent.relname AS parent,

nmsp_child.nspname AS child_schema,

child.relname AS child

FROM pg_inherits

JOIN pg_class parent ON pg_inherits.inhparent = parent.oid

JOIN pg_class child ON pg_inherits.inhrelid = child.oid

JOIN pg_namespace nmsp_parent ON nmsp_parent.oid = parent.relnamespace

JOIN pg_namespace nmsp_child ON nmsp_child.oid = child.relnamespace

WHERE parent.relname='parent_table_name';

Ref link -> https://dba.stackexchange.com/questions/40441/get-all-partition-names-for-a-table

-- List all foreign-key constraints with their source and target columns:

SELECT

o.conname AS constraint_name,

(SELECT nspname FROM pg_namespace WHERE oid=m.relnamespace) AS source_schema,

m.relname AS source_table,

(SELECT a.attname FROM pg_attribute a WHERE a.attrelid = m.oid AND a.attnum = o.conkey[1]
AND a.attisdropped = false) AS source_column,

(SELECT nspname FROM pg_namespace WHERE oid=f.relnamespace) AS target_schema,

f.relname AS target_table,

(SELECT a.attname FROM pg_attribute a WHERE a.attrelid = f.oid AND a.attnum = o.confkey[1]
AND a.attisdropped = false) AS target_column

FROM

pg_constraint o LEFT JOIN pg_class f ON f.oid = o.confrelid LEFT JOIN pg_class m ON m.oid =
o.conrelid

WHERE

o.contype = 'f' AND o.conrelid IN (SELECT oid FROM pg_class c WHERE c.relkind = 'r');

REFERENCE - https://stackoverflow.com/questions/1152260/postgres-sql-to-list-table-foreign-keys

-- Analyze stats for a table testanalyze(schema is public)

dbaclass=# analyze testanalyze;

ANALYZE

-- For analyzing selected columns for emptab table ( schema is dbatest)

dbaclass=# analyze dbatest.emptab (datname,datdba);

ANALYZE

dbaclass=# select relname,reltuples from pg_class where relname in ('testanalyze','emptab');

relname | reltuples

-------------+-----------

testanalyze | 4

emptab | 4

(2 rows)

dbaclass=# select schemaname,relname,analyze_count,last_analyze,last_autoanalyze from


pg_stat_user_tables where relname in ('testanalyze','emptab');

schemaname | relname | analyze_count | last_analyze | last_autoanalyze

------------+-------------+---------------+----------------------------------+------------------

public | testanalyze | 1 | 2020-07-21 17:00:49.687053+05:30 |

dbatest | emptab | 1 | 2020-07-21 17:10:01.111517+05:30 |

(2 rows)

---Analyze command with verbose command

dbaclass=# analyze verbose dbatest.emptab (datname,datdba);

INFO: analyzing "dbatest.emptab"

INFO: "emptab": scanned 1 of 1 pages, containing 4 live rows and 0 dead rows; 4 rows in
sample, 4 estimated total rows

ANALYZE

---Analyze tables in the current schema that the user has access to.

dbaclass=# analyze ;

ANALYZE

NOTE: ANALYZE requires only a read lock on the target table, so it can run in parallel with other
activity on the table.

VACUUM - > REMOVES DEAD ROWS AND MARKS THE SPACE FOR REUSE, BUT IT DOESN'T RETURN
THE SPACE TO THE OS. IT DOESN'T NEED AN EXCLUSIVE LOCK ON THE TABLE.

-------------------------------------------------------------------------

vacuum a table:

dbaclass=# vacuum dbatest.emptab;

VACUUM

both vacuum and analyze:

dbaclass=# vacuum analyze dbatest.emptab;

VACUUM

with verbose:

dbaclass# vacuum verbose analyze dbatest.emptab;

Monitor the vacuum process (if it runs for a long time):

dbaclass#select * from pg_stat_progress_vacuum;

Check vacuum related information for the table

dbaclass=# select schemaname,relname,last_vacuum,vacuum_count from pg_stat_user_tables
where relname='emptab';

schemaname | relname | last_vacuum | vacuum_count

------------+---------+----------------------------------+--------------

dbatest | emptab | 2020-07-21 18:35:34.801402+05:30 | 2

(1 row)

VACUUM FULL - > JUST LIKE THE MOVE COMMAND IN ORACLE. IT TAKES MORE TIME, BUT IT
RETURNS THE SPACE TO THE OS, because it rewrites the table into a new file. It also requires
additional disk space to hold the new copy of the table until the activity is completed. Also, it
locks the table exclusively, which blocks all operations on the table.

-- Command to run vacuum full command for table:

dbaclass=# VACUUM FULL dbatest.emptab;

VACUUM

DEMO TO CHECK HOW IT RECLAIMS SPACE:

-- Check existing space and delete some data:

dbaclass=# select pg_size_pretty(pg_relation_size('dbatest.emptab'));

pg_size_pretty

----------------

114 MB

(1 row)

dbaclass=# delete from dbatest.emptab where oid=13634;

DELETE 131072

-- We can observe size is still same:

dbaclass=# select pg_size_pretty(pg_relation_size('dbatest.emptab'));

pg_size_pretty

----------------

114 MB

(1 row)

-- Run vacuum full and observe the space usage:

dbaclass=# VACUUM FULL dbatest.emptab;

VACUUM

dbaclass=# select pg_size_pretty(pg_relation_size('dbatest.emptab'));

pg_size_pretty

----------------

39 MB ---- > from 114MB it came down to 39 MB.

(1 row)

Autovacuum automates the execution of the VACUUM, FREEZE and ANALYZE commands.

-- Find whether autovacuum is enabled or not:

dbaclass=# select name,setting,short_desc,boot_val,pending_restart from pg_settings where


name in ('autovacuum','track_counts');

name | setting | short_desc | boot_val | pending_restart

--------------+---------+-------------------------------------------+----------+-----------------

autovacuum | on | Starts the autovacuum subprocess. | on | f

track_counts | on | Collects statistics on database activity. | on | f

(2 rows)

-- Find other autovacuum related parameter settings

dbaclass=# select
name,setting,short_desc,min_val,max_val,enumvals,boot_val,pending_restart from pg_settings
where category like 'Autovacuum';

-- Change autovacuum settings (these need a restart):

dbaclass=# alter system set autovacuum_max_workers=10 ;

ALTER SYSTEM

Now restart :

pg_ctl stop

pg_ctl start

REINDEX rebuilds an index using the data stored in the index's table, replacing the old copy of
the index. There are several scenarios in which to use REINDEX:

- Rebuild particular index:

postgres=# REINDEX INDEX TEST_IDX2;

REINDEX

-- Rebuild all indexes on a table:

postgres=# REINDEX TABLE TEST;

REINDEX

-- Rebuild all indexes of tables in a schema:

postgres=# reindex schema public;

REINDEX

-- Rebuild all indexes in a database :

postgres=# reindex database dbaclass;

REINDEX

-- Reindex with verbose option:

postgres=# reindex (verbose) table test;

INFO: index "test_idx" was reindexed

DETAIL: CPU: user: 5.44 s, system: 2.72 s, elapsed: 11.96 s

INFO: index "test_idx2" was reindexed

DETAIL: CPU: user: 3.34 s, system: 1.01 s, elapsed: 5.49 s

INFO: index "pg_toast_17395_index" was reindexed

DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s

REINDEX

-- Rebuild an index without blocking writes on the table (using the CONCURRENTLY option):

postgres=# REINDEX ( verbose) table concurrently test;

INFO: index "public.test_idx" was reindexed

INFO: index "public.test_idx2" was reindexed

INFO: index "pg_toast.pg_toast_17395_index" was reindexed

INFO: table "public.test" was reindexed

DETAIL: CPU: user: 11.09 s, system: 6.23 s, elapsed: 24.63 s.

REINDEX

Monitor index creation or rebuild

dbaclass=# SELECT a.query,p.phase, p.blocks_total,p.blocks_done,p.tuples_total, p.tuples_done


FROM pg_stat_progress_create_index p JOIN pg_stat_activity a ON p.pid = a.pid;

-[ RECORD 1 ]+-------------------------------

query | reindex index test_idx;

phase | building index: scanning table

blocks_total | 61281

blocks_done | 15331

tuples_total | 0

tuples_done | 0

(or)

dbaclass=# select
pid,datname,command,phase,tuples_total,tuples_done,partitions_total,partitions_done from
pg_stat_progress_create_index;

-[ RECORD 1 ]----+-------------------------------

pid | 14944

datname | postgres

command | REINDEX

phase | building index: scanning table

tuples_total | 0

tuples_done | 0

partitions_total | 0

partitions_done | 0
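The blocks_total/blocks_done (or tuples_total/tuples_done) pairs from pg_stat_progress_create_index can be turned into a rough percent-complete figure for the current phase. A small helper sketch (my own, not a server function):

```python
def progress_pct(done, total):
    """Percent complete for the current phase; 0 when total is still unknown (0)."""
    if total == 0:
        return 0.0
    return round(100.0 * done / total, 1)

# Figures from the sample output above: 15331 of 61281 blocks scanned
print(progress_pct(15331, 61281))  # 25.0
```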

* Monitor vacuum process

postgres# select * from pg_stat_progress_vacuum;

-[ RECORD 1 ]------+--------------------

pid | 12540

datid | 21192

datname | b2cnsmst

relid | 22402

phase | cleaning up indexes

heap_blks_total | 624176

heap_blks_scanned | 624176

heap_blks_vacuumed | 624176

index_vacuum_count | 0

max_dead_tuples | 178956970

num_dead_tuples | 0

-- Finding statistics level of a column ( orders.orderdate)

-- statistics target range is 0-10000 (-1 means use default_statistics_target; higher values collect more detailed statistics)

edbstore=> SELECT attname as column_name , attstattarget as stats_level FROM pg_attribute
WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'orders') and
attname='orderdate';

column_name | stats_level

-------------+-------------

orderdate | 1000

(1 row)

-- To change statistics level of a column:

edbstore=> alter table orders alter column orderdate set statistics 1000;

ALTER TABLE

* Find vacuum settings of tables

postgres=# SELECT n.nspname, c.relname,

pg_catalog.array_to_string(c.reloptions || array(

select 'toast.' ||

x from pg_catalog.unnest(tc.reloptions) x),', ')

as relopts

FROM pg_catalog.pg_class c

LEFT JOIN

pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)

JOIN

pg_namespace n ON c.relnamespace = n.oid

WHERE c.relkind = 'r'

AND nspname NOT IN ('pg_catalog', 'information_schema');

nspname | relname | relopts

---------+-------------------------------------+------------------------

public | test |

public | city_id2 |

public | test2 | autovacuum_enabled=off

-- Disable autovacuum for a table:

postgres=# alter table test2 set( autovacuum_enabled = off);

-- Enable autovacuum for a table

postgres=# alter table test2 set( autovacuum_enabled = on);

-- Here the table_name is test

postgres=# select * from pg_stat_user_tables where relname='test';

-[ RECORD 1 ]-------+---------------------------------

relid | 914713

schemaname | public

relname | test

seq_scan | 40

seq_tup_read | 12778861

idx_scan |

idx_tup_fetch |

n_tup_ins | 4774377

n_tup_upd | 0

n_tup_del | 4774377

n_tup_hot_upd | 0

n_live_tup | 0

n_dead_tup | 0

n_mod_since_analyze | 0

last_vacuum |

last_autovacuum | 22-APR-22 21:27:10.863536 +03:00

last_analyze | 22-APR-22 21:05:05.874929 +03:00

last_autoanalyze | 22-APR-22 21:27:10.865308 +03:00

vacuum_count | 0

autovacuum_count | 6

analyze_count | 2

autoanalyze_count | 11

-- Create the pgstattuple extension:

postgres=# create extension pgstattuple;

CREATE EXTENSION

-- bloating percentage of a table (here ecomm.contact):

postgres=# SELECT pg_size_pretty(pg_relation_size('ecomm.contact')) as table_size, (pgstattuple('ecomm.contact')).dead_tuple_percent;

table_size | dead_tuple_percent

------------+--------------------

1408 kB | 0

(1 row)
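dead_tuple_percent can be combined with the relation size to get a back-of-envelope estimate of how much space dead tuples are holding (a rough sketch of my own, not what pgstattuple itself computes):

```python
def est_dead_bytes(table_bytes, dead_tuple_percent):
    """Rough estimate of bytes occupied by dead tuples in a table."""
    return int(table_bytes * dead_tuple_percent / 100)

# Hypothetical example: a 1408 kB table with 25% dead tuples
print(est_dead_bytes(1408 * 1024, 25.0))
```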

-- bloating percentage of index "test_x_idx":

select pg_relation_size('test_x_idx') as index_size,
100 - (pgstatindex('test_x_idx')).avg_leaf_density as bloat_ratio;

index_size | bloat_ratio

------------+-------------

1008 kB | 0

(1 row)

-- Find the free space in a table

------------------------------

CREATE EXTENSION pg_freespacemap;

CREATE EXTENSION

SELECT count(*) as "number of pages",

pg_size_pretty(cast(avg(avail) as bigint)) as "Av. freespace size",

round(100 * avg(avail)/8192 ,2) as "Av. freespace ratio"

FROM pg_freespace('contact');
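The pg_freespace query above just averages the per-page avail values against the 8192-byte default page size. The same arithmetic as a standalone sketch (my own helper, fed with hypothetical per-page values):

```python
PAGE_SIZE = 8192  # default Postgres block size

def freespace_summary(avail_per_page):
    """Return (page count, average free bytes, average free ratio %),
    mirroring the pg_freespace aggregation above."""
    pages = len(avail_per_page)
    avg = sum(avail_per_page) / pages
    return pages, avg, round(100 * avg / PAGE_SIZE, 2)

# Three pages: full, half free, completely free
print(freespace_summary([0, 4096, 8192]))
```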

-- This provides a summary of contents of client authentication config file pg_hba.conf

postgres=# select * from pg_hba_file_rules;

line_number | type | database | user_name | address | netmask | auth_method | options | error

-------------+-------+---------------+-----------+---------------+-----------------------------------------+-------------+---------+-------

80 | local | {all} | {all} | | | md5 | |

82 | host | {all} | {all} | 127.0.0.1 | 255.255.255.255 | ident | |

84 | host | {all} | {all} | ::1 | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff | ident | |

85 | host | {all} | {all} | 172.21.19.148 | 255.255.255.255 | trust | |

88 | local | {replication} | {all} | | | peer | |

89 | host | {replication} | {all} | 127.0.0.1 | 255.255.255.255 | ident | |

90 | host | {replication} | {all} | ::1 | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff | ident | |

(7 rows)
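pg_hba_file_rules is essentially a parsed view of pg_hba.conf. A much-simplified Python sketch of that parsing (my own illustration: it handles only single-value database/user fields and the CIDR address form, not netmask columns, quoting, or comma lists):

```python
def parse_hba_line(line):
    """Parse a simplified pg_hba.conf rule line; returns None for comments/blanks."""
    line = line.split("#", 1)[0].strip()  # strip trailing comments
    if not line:
        return None
    fields = line.split()
    if fields[0] == "local":
        # local lines have no address column
        return {"type": fields[0], "database": fields[1],
                "user_name": fields[2], "address": None, "auth_method": fields[3]}
    conn_type, database, user, address, method = fields[:5]
    return {"type": conn_type, "database": database,
            "user_name": user, "address": address, "auth_method": method}

print(parse_hba_line("host  all  all  127.0.0.1/32  ident"))
print(parse_hba_line("local all all md5"))
```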

-- Check auditing setting :

postgres=# show log_statement;

log_statement

---------------

none

-- For logging all ddl activites:

postgres=# alter system set log_statement=ddl;

ALTER SYSTEM

postgres=# select pg_reload_conf();

pg_reload_conf

----------------

-- For logging all DDL DML activities:

postgres=# alter system set log_statement=mod;

ALTER SYSTEM

postgres=# select pg_reload_conf();

pg_reload_conf

----------------

-- For logging all statements (i.e. DDL, DML and even SELECT statements):

postgres=# alter system set log_statement='all';

ALTER SYSTEM

postgres=# select pg_reload_conf();

pg_reload_conf

----------------

-- Enable auditing of connections and disconnections to Postgres:

postgres=# select name,setting from pg_settings where name in ('log_disconnections','log_connections');

name | setting

--------------------+---------

log_connections | off

log_disconnections | off

postgres=# alter system set log_disconnections=on;

ALTER SYSTEM

postgres=# alter system set log_connections=on;

ALTER SYSTEM

postgres=# select pg_reload_conf();

pg_reload_conf

----------------

Now all logons and logoffs will be logged in the log file.

<<<<<<<<< cd /Library/PostgreSQL/10/data/log/ >>>>>>>

2020-07-06 12:51:39.042 IST [10212] LOG: connection received: host=[local]

2020-07-06 12:51:53.416 IST [10215] LOG: connection received: host=[local]

2020-07-06 12:51:53.420 IST [10215] LOG: connection authorized: user=postgres


database=postgres

-- You can use watch command to run a particular query repeatedly until you cancel it.

-- \watch 3 means the previous query will be re-executed every 3 seconds

postgres=# select count(*) from test;

count

-------

4226

(1 row)

postgres=# \watch 3

Tue 19 Apr 2022 08:18:17 PM +03 (every 3s)

count

-------

4226

(1 row)

Tue 19 Apr 2022 08:18:20 PM +03 (every 3s)

count

-------

4226

(1 row)

-- Run this on hot standby server

postgres=# select pg_is_wal_replay_paused();

pg_is_wal_replay_paused

-------------------------

-- If the output is f, WAL replay (streaming recovery) is running; if t, replay is paused.

* Check replication details on primary server

-- Run on this primary server for outgoing replication details

postgres=# select * from pg_stat_replication;

-[ RECORD 1 ]----+---------------------------------

pid | 18556

usesysid | 10

usename | enterprisedb

application_name | walreceiver

client_addr | 10.20.76.12

client_hostname |

client_port | 44244

backend_start | 27-MAY-21 13:56:30.131681 +03:00

backend_xmin |

state | streaming

sent_lsn | 0/401F658

write_lsn | 0/401F658

flush_lsn | 0/401F658

replay_lsn | 0/401F658

write_lag |

flush_lag |

replay_lag |

sync_priority | 0

sync_state | async

* Get received /replayed WAL records on standby(replication)

-- Run on standby database

postgres=# select pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn(),


pg_last_xact_replay_timestamp();

-[ RECORD 1 ]-----------------+---------------------------------

pg_last_wal_receive_lsn | 0/401F658

pg_last_wal_replay_lsn | 0/401F658

pg_last_xact_replay_timestamp | 27-MAY-21 16:26:18.704299 +03:00

postgres=# select * from pg_stat_wal_receiver;

-[ RECORD 1 ]---------+------------------------------------------------------------------------

pid | 7933

status | streaming

receive_start_lsn | 0/4000000

receive_start_tli | 1

received_lsn | 0/401F658

received_tli | 1

last_msg_send_time | 27-MAY-21 20:29:39.599389 +03:00

last_msg_receipt_time | 27-MAY-21 20:29:39.599599 +03:00

latest_end_lsn | 0/401F658

latest_end_time | 27-MAY-21 16:31:20.815183 +03:00

slot_name |

sender_host | 10.20.30.40

sender_port | 5444

conninfo | user=enterprisedb passfile=/bgidata/enterprisedb/.pgpass dbname=replication host=10.20.30.40 port=5444 fallback_application_name=walreceiver sslmode=prefer sslcompression=0 krbsrvname=postgres target_session_attrs=any

-- To stop/pause recovery on replication server(standby)

postgres=# select pg_wal_replay_pause();

pg_wal_replay_pause

---------------------

(1 row)

postgres=# select pg_is_wal_replay_paused();

pg_is_wal_replay_paused

-------------------------

(1 row)

-- To Resume recovery on replication server(standby)

postgres=# select pg_wal_replay_resume();

pg_wal_replay_resume

----------------------

(1 row)

postgres=# select pg_is_wal_replay_paused();

pg_is_wal_replay_paused

-------------------------

(1 row)

-- Find lag in bytes( run on standby)

postgres# SELECT pg_wal_lsn_diff(sent_lsn, replay_lsn) from pg_stat_replication;
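An LSN like 0/401F658 is a 64-bit WAL position written as two hex halves, and pg_wal_lsn_diff simply subtracts the two positions. The same conversion as a standalone Python sketch (illustrative values from the outputs above):

```python
def lsn_to_int(lsn):
    """Convert an LSN of the form 'X/Y' (two hex halves) to a byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def lsn_diff(a, b):
    """Bytes between two LSNs, like pg_wal_lsn_diff(a, b)."""
    return lsn_to_int(a) - lsn_to_int(b)

# Sample: sent_lsn vs the receive_start_lsn shown earlier
print(lsn_diff("0/401F658", "0/4000000"))
```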

--- Find lag in seconds( run on standby)

postgres# SELECT CASE WHEN pg_last_wal_receive_lsn() =

pg_last_wal_replay_lsn()

THEN 0 ELSE

EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp()) END AS lag_seconds;
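The lag-in-seconds query above is just now() minus pg_last_xact_replay_timestamp(), guarded by the receive/replay LSN equality check. The same logic as a standalone sketch with hypothetical timestamps:

```python
from datetime import datetime, timedelta

def lag_seconds(receive_lsn, replay_lsn, now, last_replay):
    """0 when everything received has been replayed,
    else seconds since the last replayed transaction."""
    if receive_lsn == replay_lsn:
        return 0.0
    return (now - last_replay).total_seconds()

t0 = datetime(2021, 5, 27, 16, 26, 18)
print(lag_seconds("0/401F700", "0/401F658", t0 + timedelta(seconds=12), t0))  # 12.0
```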

-- Check existing replication slot details

postgres# SELECT redo_lsn, slot_name,restart_lsn, active,

round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_lag

FROM pg_control_checkpoint(), pg_replication_slots;

-- Create replication slots

postgres#SELECT pg_create_physical_replication_slot('slot_one');

-- Drop unused replication slots

postgres=# SELECT pg_drop_replication_slot('slot_one');

* Find subscription details in logic replication

select * from pg_stat_subscription;

-- export specific column data to text file:

copy EMPLOYEE( EMP_NAME,EMP_ID) to '/tmp/emp.txt';

-- export complete table data to text file:

copy EMPLOYEE to '/tmp/emp.txt';

-- export table data to csv file:

copy EMPLOYEE to '/tmp/emp.csv' with csv header;

-- export specific query output to csv file:

copy ( select ename,depname from emp where depname='HR') to '/tmp/emp.csv' with csv header;

-- Find search_path of users in a particular database ( replace your db_name(EDB))

SELECT r.rolname, d.datname, drs.setconfig

FROM pg_db_role_setting drs

LEFT JOIN pg_roles r ON r.oid = drs.setrole

LEFT JOIN pg_database d ON d.oid = drs.setdatabase

WHERE d.datname = 'EDB';

-- Find search_path of users in postgres db cluster( all database)

SELECT r.rolname, d.datname, drs.setconfig

FROM pg_db_role_setting drs

LEFT JOIN pg_roles r ON r.oid = drs.setrole

LEFT JOIN pg_database d ON d.oid = drs.setdatabase;

===============================================================================

Find below the script to create a schema in the Postgres server.

SET ROLE "ecomm";

\echo Creating 'dbadmin' SCHEMA...

CREATE SCHEMA dbadmin AUTHORIZATION ecomm;

\echo Granting USAGE on 'dbadmin' schema to ecomm roles...

GRANT USAGE ON SCHEMA dbadmin TO ecomm_editor, ecomm_operator, ecomm_viewer;

\echo Setting up ecomm_viewer privileges...

ALTER DEFAULT PRIVILEGES IN SCHEMA dbadmin

GRANT SELECT ON TABLES TO ecomm_viewer;

\echo Setting up ecomm_operator privileges...

ALTER DEFAULT PRIVILEGES IN SCHEMA dbadmin

GRANT SELECT, INSERT, UPDATE ON TABLES TO ecomm_operator;

\echo Setting up ecomm_editor privileges...

ALTER DEFAULT PRIVILEGES IN SCHEMA dbadmin

GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE ON TABLES TO ecomm_editor;

-- Allows EXECUTE on all existing and future PROCEDURES in the dbadmin schema

\echo Granting EXECUTE on all PROCEDURES to ecomm roles...

GRANT EXECUTE ON ALL PROCEDURES IN SCHEMA dbadmin TO ecomm_editor,


ecomm_operator;

-- Allows USAGE, SELECT on all existing and future SEQUENCES in the dbadmin schema

\echo Granting USAGE, SELECT on all SEQUENCES to ecomm roles...

GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA dbadmin TO ecomm_editor,


ecomm_operator;

ALTER DEFAULT PRIVILEGES IN SCHEMA dbadmin

GRANT USAGE, SELECT ON SEQUENCES TO ecomm_editor, ecomm_operator;

-- Allows creation of a trigger on all existing and future TABLES in the dbadmin schema

\echo Granting TRIGGER on all TABLES to ecomm roles...

GRANT TRIGGER ON ALL TABLES IN SCHEMA dbadmin TO ecomm_editor, ecomm_operator;

ALTER DEFAULT PRIVILEGES IN SCHEMA dbadmin

GRANT TRIGGER ON TABLES TO ecomm_editor, ecomm_operator;

-- Allows EXECUTE on all existing and future ROUTINES (functions, aggregate functions, window
functions, and procedures) in the dbadmin schema

\echo Granting EXECUTE on all ROUTINES to ecomm roles...

GRANT EXECUTE ON ALL ROUTINES IN SCHEMA dbadmin TO ecomm_editor, ecomm_operator;

ALTER DEFAULT PRIVILEGES IN SCHEMA dbadmin

GRANT EXECUTE ON ROUTINES TO ecomm_editor, ecomm_operator;

• Created schema dbadmin in dev server

ecomm=> \dn+

List of schemas

Name | Owner | Access privileges | Description

---------+----------+------------------------+-------------

dbadmin | ecomm | |

ecomm | ecomm | ecomm=UC/ecomm +|

| | ecomm_editor=U/ecomm +|

| | ecomm_operator=U/ecomm+|

| | ecomm_viewer=U/ecomm |

epro | ecomm | ecomm=UC/ecomm +|

| | ecomm_editor=U/ecomm +|

| | ecomm_operator=U/ecomm+|

| | ecomm_viewer=U/ecomm |

repack | postgres | |

(4 rows)

• Connected to schema dbadmin;

ecomm=> set search_path to dbadmin;

SET

• After providing necessary access privileges

ecomm=> \dn+

List of schemas

Name | Owner | Access privileges | Description

---------+----------+------------------------+-------------

dbadmin | ecomm | ecomm=UC/ecomm +|

| | ecomm_editor=U/ecomm +|

| | ecomm_operator=U/ecomm+|

| | ecomm_viewer=U/ecomm |

ecomm | ecomm | ecomm=UC/ecomm +|

| | ecomm_editor=U/ecomm +|

| | ecomm_operator=U/ecomm+|

| | ecomm_viewer=U/ecomm |

epro | ecomm | ecomm=UC/ecomm +|

| | ecomm_editor=U/ecomm +|

| | ecomm_operator=U/ecomm+|

| | ecomm_viewer=U/ecomm |

repack | postgres | |

• Created dummy tables in schema dbadmin for testing

ecomm=> set search_path to dbadmin;

SET

ecomm=> create table demo_contact as select * from ecomm.contact_ca_defaults;

SELECT 1649

ecomm=> create table dbadmin.demo_contact_copy as select * from ecomm.contact_ca_defaults;

SELECT 1649

ecomm=> \d+

List of relations

Schema | Name | Type | Owner | Size | Description

---------+-------------------+-------+----------+--------+-------------

dbadmin | demo_contact | table | thimmapa | 256 kB |

dbadmin | demo_contact_copy | table | thimmapa | 256 kB |

• Connected to schema ‘ecomm’ and ran SELECT on tables from schema ‘dbadmin’

ecomm=> set search_path to ecomm;

SET

ecomm=> select count(*) from dbadmin.demo_contact;

count

-------

1649

(1 row)

ecomm=> select count(*) from dbadmin.demo_contact_copy;

count

-------

1649

(1 row)

===============================================================================

-- In EDB Postgres Advanced Server we can create user profiles, similar to Oracle.

-- Create profile:

# create profile REPORTING_PROFILE limit FAILED_LOGIN_ATTEMPTS 3 PASSWORD_LIFE_TIME 90;
--- Alter profile:

# alter profile REPORTING_PROFILE limit FAILED_LOGIN_ATTEMPTS 1;

-- view profile details:

# select * from dba_profiles;

-- Granting the superuser attribute makes a user a superuser.

postgres=# select usename,usesuper from pg_user where usename='dbatest';

usename | usesuper

---------+----------

dbatest | f

(1 row)

postgres=#

postgres=# alter user dbatest with superuser;

ALTER ROLE

postgres=# select usename,usesuper from pg_user where usename='dbatest';

usename | usesuper

---------+----------

dbatest | t

-- How to revoke superuser:

postgres=# alter user dbatest with nosuperuser;

ALTER ROLE

postgres=#

postgres=# select usename,usesuper from pg_user where usename='dbatest';

usename | usesuper

---------+----------

dbatest | f

(1 row)

CREATE USER:

dbaclass=# create user TEST_DBACLASS with password 'test123';

CREATE ROLE

CREATE USER WITH VALID UNTIL:

dbaclass=# create user TEST_dbuser1 with password 'test123' valid until '2020-08-08';

CREATE ROLE

CREATE USER WITH SUPER USER PRIVILEGE

dbaclass=# create user test_dbuser3 with password 'test123' CREATEDB SUPERUSER;

CREATE ROLE

VIEW USERS:

dbaclass=# select usename,valuntil,usecreatedb from pg_shadow;

dbaclass=# select usename,usesuper,valuntil from pg_user;

dbaclass=# \du+

DROP USER:

drop user DB_user1;

- Create role :

dbaclass=# create role dev_admin;

CREATE ROLE

dbaclass=# create role dev_admin with valid until '10-oct-2020';

CREATE ROLE

-- Role with the CREATEDB and CREATEROLE privileges; the LOGIN keyword means it can log in to the database like a normal user

dbaclass=# create role dev_admin with createdb createrole login ;

CREATE ROLE

DROP ROLE:

dbaclass=# drop role dev_admin;

DROP ROLE

select rolname,rolcanlogin,rolvaliduntil from pg_roles;

List roles :

postgres=# select rolname,rolcanlogin,rolvaliduntil from pg_roles;

rolname | rolcanlogin | rolvaliduntil

----------------------+-------------+---------------------------

pg_monitor | f |

pg_read_all_settings | f |

pg_read_all_stats | f |

pg_stat_scan_tables | f |

pg_signal_backend | f |

postgres | t |

test_dbuser1 | f | 2020-08-08 00:00:00+05:30

rolcanlogin -> if true, it is a role that can also act as a user;

if false, it is only a role (it cannot log in).

NOTE -> In Postgres, users are by default roles, but roles are not by default users, i.e.

by default a user comes with the LOGIN privilege, whereas a role does not.

--- List users present in postgres:

postgres=# select usename,usesuper,valuntil from pg_user;

usename | usesuper | valuntil

---------------+----------+---------------------------

postgres | t |

test_dbuser1 | f | 2020-08-08 00:00:00+05:30

postgres#select usename,usesuper,valuntil from pg_shadow;

usename | usesuper | valuntil

---------------+----------+---------------------------

postgres | t |

test_dbuser1 | f | 2020-08-08 00:00:00+05:30


postgres=# \du

List of roles

Role name | Attributes | Member of

---------------+------------------------------------------------------------+-----------

postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

test_dbuser1 | Password valid until 2020-08-08 00:00:00+05:30 | {}

NOTE -> The \du command output includes both users and roles (custom-created roles only).

In Postgres, users are by default roles, but roles are not by default users.

Sequence issue

===============

Delving into the out-of-sync sequence problem in Postgres

Method 1: Single table solution

SELECT SETVAL('public."Users_id_seq"', COALESCE(MAX(id), 1)) FROM public."Users";

Method 2: Fixing all your sequences with one script

SELECT 'SELECT SETVAL(' ||

quote_literal(quote_ident(PGT.schemaname) || '.' || quote_ident(S.relname)) ||

', COALESCE(MAX(' ||quote_ident(C.attname)|| '), 1) ) FROM ' ||

quote_ident(PGT.schemaname)|| '.'||quote_ident(T.relname)|| ';'

FROM pg_class AS S,

pg_depend AS D,

pg_class AS T,

pg_attribute AS C,

pg_tables AS PGT

WHERE S.relkind = 'S'

AND S.oid = D.objid

AND D.refobjid = T.oid

AND D.refobjid = C.attrelid

AND D.refobjsubid = C.attnum

AND T.relname = PGT.tablename

ORDER BY S.relname;
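Method 2's generator query builds one SETVAL statement per sequence from catalog rows. The string assembly it performs can be sketched client-side (hypothetical input tuples; quote_ident behaviour simplified to always double-quoting):

```python
def setval_stmt(schema, seq, column, table):
    """Build the SETVAL statement the generator query above emits for one sequence."""
    q = lambda ident: '"%s"' % ident  # simplified quote_ident: always double-quote
    seq_literal = "'%s.%s'" % (q(schema), q(seq))
    return ("SELECT SETVAL(%s, COALESCE(MAX(%s), 1) ) FROM %s.%s;"
            % (seq_literal, q(column), q(schema), q(table)))

print(setval_stmt("public", "Users_id_seq", "id", "Users"))
```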

===============================================================================

Checkpoints

A checkpoint is a point in the transaction log sequence at which all data files have been updated to reflect the information in the log; all dirty data pages are flushed to disk. A checkpoint is forced in the following situations:

1. pg_start_backup,

2. CREATE DATABASE,

3. pg_ctl stop|restart,

4. pg_stop_backup,

5. When we issue checkpoint command manually.

For periodic checkpoints, the parameters below play an important role.

checkpoint_timeout = 5min

max_wal_size = 1GB

A checkpoint is begun every checkpoint_timeout seconds, or if max_wal_size is about to be exceeded, whichever comes first. The default settings are 5 minutes and 1 GB, respectively.
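Which trigger fires first depends on the WAL write rate. A back-of-envelope sketch (my own simplification with assumed inputs; the server's actual accounting of max_wal_size is more involved):

```python
def next_checkpoint_trigger(wal_rate_mb_per_s,
                            checkpoint_timeout_s=300,   # default 5 min
                            max_wal_size_mb=1024):      # default 1 GB
    """Estimate which checkpoint trigger fires first at a steady WAL write rate."""
    seconds_to_fill = max_wal_size_mb / wal_rate_mb_per_s
    if seconds_to_fill < checkpoint_timeout_s:
        return "max_wal_size (after %.0f s)" % seconds_to_fill
    return "checkpoint_timeout (after %d s)" % checkpoint_timeout_s

print(next_checkpoint_trigger(10.0))  # heavy write load: size limit hit in ~102 s
print(next_checkpoint_trigger(1.0))   # light load: the 300 s timeout fires first
```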

Postgres monitoring checklist:

Checkpoints, bgwriter

Disk writes (by processes)

Pages dirtied by queries

Checkpoints issued

Connections

# of connections by state + max_connections

Connections by client address (active + all)

Connections by app (active + all)

Connections by DB user

Connections by database name

Idle in transaction connections

Locks, waits

Total number of locks acquired

Locks by time

Queries blocked longer than X seconds

Deadlocks

Types of waits

Replication

Destination (follower)

Replication lag in bytes (for followers of the current node)

Replication lag in bytes (for the current node compared to the leader)

Replication lag in seconds (for the current node compared to the leader)

Origin (primary or replica with cascaded replication)

Unused replication slots / replication statuses

Amount of bytes for each replication slot

Replication lag phases (1 graph for each follower)

Tables

Table sizes (total, heap, TOAST, indexes)

Estimated rows

Top-N by seqscan

Top-N by blocks read

Top-N by INSERT

Top-N by UPDATE

Top-N by DELETE

Top-N by size (tuples, bytes)

Top-N by bloat (estimated!)

Top-N by n_dead_tup

HOT update

Indexes

Index sizes

Index usage

Not valid indexes

Unused indexes

Redundant indexes

Functions

Function usage (calls per second)

Average time of execution (total, self)

WAL

pg_xlog/pg_wal size

Archiver statuses (fail/success)

WAL write rates, B/s

WAL files count (total, unarchived)

WAL directory size

WAL files which are ready to be archived (count of files in pg_xlog/pg_wal/archive_status which end in ".ready")

Transactions

Transactions per second (TPS)

Long-running transactions / max transaction age

Query macro-analysis based on pg_stat_statements

Top-N by total_time

Top-N by mean_time

Top-N by calls

Top-N by CPU usage

Top-N by I/O timing

Top-N by I/O timing - writes

Top-N by block reads (page cache->buffer pool)

Top-N by blocks dirtied

Top-N by rows

Top-N by block hits (buffer pool)

Top-N by temporary files generated (bytes; blocks)

Top-N by block reads from disk

Top-N by block writes

--- ability to filter and/or aggregate by dbid (“no filter” also wanted)

--- ability to filter and/or aggregate by userid (“no filter” also wanted)

--- ability to filter and/or aggregate by “the first word” (SELECT/INSERT/…) (“no filter” also wanted)

--- ability to filter and/or aggregate by relations mentioned in query text (“no filter” also wanted)

For each query group from the top-N list -- personal graphs showing:

mean_time

total_time (?)

calls

block operations

rows

query stages: CPU, I/O read, I/O write

more info: pg_stat_kcache, pg_qualstats, pg_sortstats

Macro-analysis based on wait events

Query groups by wait event types

Query groups by wait events

Top-N by time spent in wait event (agg. by type)

Top-N by time spent in wait event

For each query

History of the query group: wait event types

History of the query group: wait events

For each wait event type:

History of query groups

For each wait event:

History of query groups

Time spent in each event within this type

Log analysis

Critical events: restarts, crashes

Autovacuum activity

Checkpointer activity

Locks (>deadlock_timeout)

Deadlocks

Query examples

Plan examples

Connections, disconnections

pgBouncer monitoring

From pgbouncer log:

Average query time

Average transaction time

Queries per second (QPS)

Transactions per second (TPS)

Traffic in and out, B/s

Number of connections by client addr

Connections between pgBouncer and Postgres, by state

Utilization for each pool

Waiting clients and waiting time

Backups

Time since last successful backup (graph)

Last backup size (graph)

Alerts

WIP

Critical

Disk space (% or in GiB)

Number of connections is close to max_connections (%)

Number of idle-in-transaction connections > N

Inactive replication slots

Replication slot size > X

Autovacuum workers = autovacuum_max_workers

Transaction ID wraparound risk

Archives failed X times in Y seconds

Long blocking session (e.g. >1min)

Time from last successful backup (e.g. >30hours)

WAL delay between nodes (risk of desynchronization)
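Several of these alerts reduce to simple catalog queries; a minimal sketch for the connection-saturation check (the combined idle-in-transaction count is illustrative, thresholds are up to you):

```sql
-- Percentage of max_connections in use, plus idle-in-transaction count.
-- FILTER requires PostgreSQL 9.4 or later.
SELECT count(*) * 100.0 / current_setting('max_connections')::int AS pct_connections_used,
       count(*) FILTER (WHERE state = 'idle in transaction') AS idle_in_transaction
FROM pg_stat_activity;
```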

================================================================================

change passwd in aix/linux

------------------------------

passwd db2inst1 ==> change password for user db2inst1

===================================================================

If crontab is not working in AIX/Linux

----------------------------------

As the root user, comment out the offending line (typically the deny-all entry - : ALL : ALL) in the file below:

vi /etc/security/access.conf

+ : corp.nsre.com\GP-SEC-USE1DNSRE04LT23_MAINFM : ALL

+ : root : ALL

- : ALL : ALL

================================================================================

When and How do you use pg_rewind() in PostgreSQL?

The pg_rewind utility in PostgreSQL resynchronizes a server with another copy of the same cluster after their timelines have diverged. The typical scenario is a failover: a standby has been promoted to new master, and the old master must become a standby of the new master without taking a full base backup.

pg_rewind works by rewinding the target data directory back to the last checkpoint before the timelines diverged and then applying the changed blocks from the source server.

It requires that the target cluster was started with wal_log_hints = on or was initialized with data checksums, and that the WAL covering the point of divergence is still available.

To use the pg_rewind utility, you need to perform the following steps:

1. On the server to be rewound, stop the PostgreSQL service:

sudo systemctl stop postgresql

2. Run the pg_rewind utility on the standby server, using the following command:

pg_rewind --source-server="host=new_master_server_hostname user=replication_user password=replication_password" --target-pgdata=path_to_data_directory

3. Once the pg_rewind utility has finished running, do not start PostgreSQL on the rewound server yet; complete the replication configuration in the following steps first, so that it starts as a standby rather than as a standalone server.

4. On the new master server, connect using a PostgreSQL client such as psql, and run the following query to create the replication slot that the rewound server will stream from:

SELECT pg_create_physical_replication_slot('slot_name');

Replace slot_name with the actual name of the replication slot that you want to create.

5. On the rewound server, edit the postgresql.conf configuration file and set the hot_standby parameter to on.

This allows the rewound server to serve read-only queries while it replicates from the new master.

6. In the recovery.conf file on the rewound server, add the following lines to point it at the new master and at the replication slot that you created:

standby_mode = 'on'

primary_conninfo = 'host=new_master_server_hostname port=5432 user=replication_user password=replication_password'

primary_slot_name = 'slot_name'

Replace new_master_server_hostname, replication_user, replication_password, and slot_name with the actual hostname of the new master, the replication user and password, and the name of the replication slot.

7. Start the rewound server for the changes to take effect. It should now connect to the new master and begin streaming replication as a standby.

Note that these are the basic steps for using the pg_rewind utility in PostgreSQL. There are many other factors to consider and configure, such as security, performance, and monitoring, to ensure that your replication setup is reliable and efficient.

================================================================================

PostgreSQL All Topics

PostgreSQL Enterprise Manager - PEM Monitoring Tools

PostgreSQL Drop Database

PostgreSQL Connect Database

PostgreSQL Database Creation

Connect Postgres Server

PostgreSQL -11 Installation (rpm & source code)

PostgreSQL 10 Installation

PostgreSQL Database startup / shutdown /restart

Configure the network & Disk Partition

PostgreSQL - Oracle Vs PostgreSQL

PostgreSQL Monitoring Tools

PostgreSQL Features

PostgreSQL Brief History

PostgreSQL Introduction

Script to taking postgres DDL objects with individual file name

Postgres Streaming Replication Configuration

How to fix xlog / wal log disk full issue in postgres database ?

https://www.postgresql.fastware.com/blog/how-to-solve-the-problem-if-pg-wal-is-full

PostgreSQL 11 Source code Installation

PostgreSQL 11 Installation - RPM

PostgreSQL SSL configuration

===============================================

a. Setting PostgreSQL logfile retention to 7 days, or b. Automatically overwriting the PostgreSQL logfile once it reaches 7 days.

To keep 7 days of logs (one log file per day named postgresql-Mon.log, postgresql-Tue.log, and so on) and automatically overwrite last week's log with this week's, set:

logging_collector=on

log_filename = 'postgresql-%a.log'

log_directory = 'log'

log_rotation_age = 1d

log_truncate_on_rotation=on
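The weekly overwrite works because %a in log_filename expands to the abbreviated weekday name, so only seven distinct filenames ever exist. A quick illustration in the shell (the date utility uses the same %a conversion):

```shell
# %a yields the abbreviated weekday name, so the log filename cycles
# through seven values; each one is truncated when its day comes around
# again, because log_truncate_on_rotation = on.
date +postgresql-%a.log
```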

===============================================

PostgreSQL -11 Installation (rpm & source code)

PostgreSQL New version upgrade

Script to taking Backup of postgres object's DDL with individual files

Postgresql maximum size

PostgreSQL Sequence

Script for vacuum & analyzing selected postgresql tables

PostgreSQL autovacuum & Parameter configuration

Script To List Postgresql dead tuple tables

How to Get Table Size, Database Size, Indexes Size, schema Size, Tablespace Size, column Size in
PostgreSQL Database

Postgresql Partitioned Tables

https://hevodata.com/learn/postgresql-partitions/

Postgresql Server monitoring shell script

What is the difference between streaming replication vs hot standby vs warm standby?

https://www.tutorialdba.com/2018/06/whats-is-difference-between-streaming.html

SSL setup in PostgreSQL

Postgresql Transaction isolation

Methods of installing PostgreSQL

Oracle to Postgresql migration

HOW TO SETUP/CONFIGURE THE POSTGRESQL STREAMING REPLICATION ON POSTGRESQL 10?

vacuumlo - removing large objects orphans from a database PostgreSQL

b=# with l_o as (select o,'l_o' tname from l_o union all select p,'lo' from lo)

select distinct loid, o, tname

from pg_largeobject left outer join l_o on l_o.o = loid;

How to check if a postgresql backup is finished or not?

SELECT pg_is_in_backup();

Migrating From Oracle to PostgreSQL using ora2pg open source tools

How to Move particular Postgres Schema to other database ?

$ pg_dump --format custom --file "my_backup" --schema "public" "db2"

$ pg_restore --dbname "db1" "my_backup"

Steps of PostgreSQL (point in time recovery) PITR

PostgreSQL pg_stat_activity status

How to connect vmware virtual machine server using putty & pgadmin ?

HOW TO TAKE THE BACKUP FROM SLAVE SERVER - POSTGRESQL 10.3?

How to add extra one slave an existing PostgreSQL cascade replication without down time ?

How to Configure the cascade replication On PostgreSQL 10.3 ?

PostgreSQL 9.6 idle_in_transaction_session_timeout parameter

PostgreSQL pg_stat_activity

How to upgrade Postgresql with minimal downtime Without using pg_upgrade?

PostgreSQL Cross Database Queries using DbLink

Difference Between Database Administrator vs Database Architect

PostgreSQL Log Compressing and Moving Script

PostgreSQL Killing Long Running Query and Most Resource Taken Process Script

How To Configure pglogical | streaming replication for PostgreSQL

PostgreSQL Multiple Schema Backup & Restore,Backup Script,Restoring Script,Backup & Restore
Prerequest & PostRequest

PostgreSQL Script for what Query is running more than x minutes with status

Script For Finding slow Running Query and Most CPU Utilization Query Using Top command PID

Postgresql interview Question and answer

PostgreSQL Monitoring Script

PostgresqlDBA interview questions and answers

Script to kill ALL IDLE Connection In postgreSQL

How to write script to Get table and index sizes in PostgreSQL ?

VBOX installation full guide

HOW TO SETUP A LOGICAL REPLICATION WITHIN 5 MINUTES ON POSTGRESQL

HOW TO INSTALL POSTGRESQL 10

Daily monitoring PostgreSQL master and slave server and schedule vacuum and change PostgreSQL parameters

PostgreSQL pg_stat_statements Extension

Insert script for all Countries

Script to find active sessions or connections in PostgreSQL

Script to find sessions that are blocking other sessions in PostgreSQL

Script to Find Table and Column without comment or description

Script to find which group roles are granted to the User

Important PostgreSQL DBA Commands -2

Important PostgreSQL DBA Commands

Script to find the unused and duplicate index

Script to find a Missing Indexes of the schema

Fast way to find the row count of a Table

Script to find Source and Destination of All Foreign Key Constraint

Kill all Running Connections and Sessions of a Database

How to create a Function to truncate all Tables created by Particular User

PostgreSQL Parameters to enable Log for all Queries

VACUUM FULL without Disk Space

How to Stop all Connections and Force to Drop the Database

Script to create a copy of the Existing Database

DROP FUNCTION script with the type of Parameters

measure the size of a Table Row and Data Page

PostgreSQL Killing All IDLE Session Script Every 2 Minutes

PostgreSQL Script to kill 'idle', 'idle in transaction', 'idle in transaction (aborted)', 'disabled'
sessions of a Database

find all Default Values of the Columns

change the Database User Password

find TOP 10 Long Running Queries using pg_stat_statements

Monitor ALL SQL Query Execution Statistics using pg_stat_statements Extension

Copy Database to another Server in Windows (pg_dump – backup & restore)

check Table Fragmentation using pgstattuple module

find total Live Tuples and Dead Tuples

find Index Size and Index Usage Statistics

BRIN – Block Range Index with Performance Report Futures of PostgreSQL 9.5

How we can create Index on Expression?

find a Missing Indexes of the schema

find the unused and duplicate index

find Version and Release Information

search any Text from the Stored Function

copy table data from one server to another server

1. Export via pg_dump into a file:

pg_dump --host "ecomm_qa" --port 5432 --username "username" --no-password --verbose --file "filename" --table "source schema.tablename" "source db name"

This will create a file called "filename" in the directory where you ran the above command, containing the schema and data for the source table. You can give any absolute path too.

2. Import via psql:

psql --host "target hostname" --port 5432 --username "username" --password --verbose --file "file name" "target db name"

find Orphaned Sequence, not owned by any Column

SELECT

ns.nspname AS SchemaName

,c.relname AS SequenceName

FROM pg_class AS c

JOIN pg_namespace AS ns

ON c.relnamespace=ns.oid

WHERE c.relkind = 'S'

AND NOT EXISTS (SELECT * FROM pg_depend WHERE objid=c.oid AND deptype='a')

ORDER BY c.relname;

Script to find the count of objects for each Database Schema

Find a list of active Temp tables with Size and User information

Improve the performance of pg_dump pg_restore

PostgreSQL: Script to Create a Read-Only Database User

Postgres IP Errors

postgreSQL Compress format backup

ERROR: 57014: cancelling statement due to statement timeout in postgreSQL

PostgreSQL BRIN index WITH pages_per_range

PostgreSQL BRIN index

PostgreSQL index With Explain plan

PostgreSQL Important Parameters for better Performance

PostgreSQL 9.4 FILTER CLAUSE

What is VACUUM, VACUUM FULL and ANALYZE in PostgreSQL

How to Improve the performance of PostgreSQL Query Sort operation

1. change work_mem to 1 GB

show work_mem;

set work_mem to '1 GB';

2. create btree index

Create a btree index on the table column used in the ORDER BY clause.
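A minimal sketch of the two steps above (the orders table and created_at column are illustrative names, not from the original):

```sql
-- With a btree index on the ORDER BY column, the planner can return
-- rows in index order instead of performing an explicit sort.
CREATE INDEX orders_created_at_idx ON orders (created_at);

SELECT * FROM orders ORDER BY created_at LIMIT 100;
```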

Force Autovacuum for running

Below are the required changes to the autovacuum parameters to make it run more frequently.

a. First, enable logging for the autovacuum process:

log_autovacuum_min_duration = 0

b. Increase the number of workers and check tables more often:

autovacuum_max_workers = 6

autovacuum_naptime = 15s

c. Decrease the vacuum and analyze thresholds so they trigger sooner:

autovacuum_vacuum_threshold = 25

autovacuum_vacuum_scale_factor = 0.1

autovacuum_analyze_threshold = 10

autovacuum_analyze_scale_factor = 0.05

d. Make autovacuum less interruptible:

autovacuum_vacuum_cost_delay = 10ms

autovacuum_vacuum_cost_limit = 1000
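To verify that the settings above actually make autovacuum keep up, the per-table statistics can be checked; a minimal sketch:

```sql
-- Tables with the most dead tuples and their last autovacuum run time.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```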

How to Tune Checkpoint Parameters in PostgreSQL

Example of parameters for PostgreSQL optimization:

shared_buffers = 512MB (default: 32MB)

effective_cache_size = 1024MB (default: 128MB)

checkpoint_segments = 32 (default: 3; removed in 9.5 in favour of max_wal_size)

checkpoint_completion_target = 0.9 (default: 0.5)

default_statistics_target = 1000 (default: 100)

work_mem = 100MB (default: 1MB)

maintenance_work_mem = 256MB (default: 16MB)
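Whether checkpoints are triggered by time (good) or by WAL volume (a sign the checkpoint_segments/max_wal_size limit is too small) can be checked from pg_stat_bgwriter; a minimal sketch (this view layout applies up to PostgreSQL 16 — version 17 moves these counters to pg_stat_checkpointer):

```sql
-- A high checkpoints_req relative to checkpoints_timed suggests the
-- WAL-size-based checkpoint limit is being hit too often.
SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint, buffers_backend
FROM pg_stat_bgwriter;
```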

PostgreSQL Database Maintenance Operation

TABLESAMPLE, SQL STANDARD AND EXTENSIBLE postgreSQL 9.5

PICK A TASK TO WORK ON

Foreign Table Inheritance

PostgreSQL Row-Level Security Policies

CREATE TABLE accounts (manager text, company text, contact_email text);

ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
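Enabling row-level security alone denies all access to non-owners until a policy is defined; a minimal sketch of a policy for the accounts table above (the policy name and rule are illustrative):

```sql
-- Each manager sees, and may only insert/update, rows where they are
-- listed as the manager.
CREATE POLICY account_managers ON accounts
    USING (manager = current_user)
    WITH CHECK (manager = current_user);
```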

PostgreSQL 9.5: IMPORT FOREIGN SCHEMA

IMPORT FOREIGN SCHEMA public

FROM SERVER dest_server INTO remote;

You can also filter out any tables you don't wish:

IMPORT FOREIGN SCHEMA public

EXCEPT (reports, audit)

FROM SERVER dest_server INTO remote;

Or limit it to just a specific set of tables:

IMPORT FOREIGN SCHEMA public

LIMIT TO (customers, purchases)

FROM SERVER dest_server INTO remote;

FOREIGN DATA WRAPPER

PostgreSQL Commit timestamp tracking

Parallel VACUUMing

vacuumdb -j4 productiondb (the vacuumdb utility supports parallel jobs; this is specified with the -j option)

This would vacuum the database named "productiondb" by spawning 4 vacuum jobs to run
simultaneously.

For specific tables:

vacuumdb -e -t test_zeecie -t test_yiegah -t test_wainie

PostgreSQL SKIP LOCKED

PostgreSQL ALTER TABLE ... SET LOGGED / UNLOGGED

PostgreSQL pg_rewind

PostgreSQL Index

PostgreSQL Old Archive Move Script,

How to kill idle session and what is the shell script for killing idle connection ?

PostgreSQL Full backup and incremental backup script

Simple PostgreSQL vacuum script

PostgreSQL Vacuum and analyze script

Oracle And PostgreSQL DBA related Software

EDB Failover Manager Guide

The Connection Service File

PostgreSQL 9.6 ADD NEW SYSTEM VIEW, PG_CONFIG

POSTGRESQL USING RAM

TSVECTOR EDITING FUNCTIONS

PostgreSQL 9.6 Parallel query

================================================================================

7 Steps to configure BDR replication in postgresql

How to find the server is whether standby (slave) or primary(master) in Postgresql replication ?

vacuumlo - removing large objects orphans from a database PostgreSQL

================================================================================

Functions In Postgresql

Server Signaling Functions :-

-----------------------------------

Use of these functions is usually restricted to superusers.

pg_cancel_backend():- cancels the running query

pg_terminate_backend():- terminates the entire backend process and thus the database connection. A connection which is idle or idle in transaction does not have a current query to cancel, but it has a backend process which can be terminated.

pg_reload_conf():- Cause server processes to reload their configuration files

pg_rotate_logfile():- Rotate server's log file
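A typical use combines these functions with pg_stat_activity (the 5-minute threshold here is illustrative):

```sql
-- Cancel the current query of every backend running longer than
-- 5 minutes; use pg_terminate_backend(pid) instead to drop the
-- connection entirely.
SELECT pid, pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
  AND pid <> pg_backend_pid();
```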

Backup Control Functions:-

----------------------------------

These functions cannot be executed during recovery (except pg_is_in_backup, pg_backup_start_time and pg_xlog_location_diff).

pg_create_restore_point(name text):- Create a named point for performing restore (restricted to superusers)

pg_current_xlog_insert_location():- Get current transaction log insert location

pg_current_xlog_location():- Get current transaction log write location

pg_start_backup(label text [, fast boolean ]):- Prepare for performing on-line backup (restricted to superusers or replication roles)

pg_is_in_backup():- True if an on-line exclusive backup is still in progress

pg_backup_start_time():- Get start time of an on-line exclusive backup in progress

pg_stop_backup():- Finish performing on-line backup (restricted to superusers or replication roles)

pg_switch_xlog():- Force switch to a new transaction log file (restricted to superusers)

pg_xlogfile_name(location text):- Convert transaction log location string to file name

pg_xlogfile_name_offset(location text):- Convert transaction log location string to file name and
decimal byte offset within file

pg_xlog_location_diff(location text, location text):- Calculate the difference between two transaction log locations

Recovery Control Functions:-

------------------------------

These functions provide information about the current status of the standby. They may be executed both during recovery and in normal running.

pg_is_xlog_replay_paused():- True if recovery is paused.

pg_xlog_replay_pause():- Pauses recovery immediately.

pg_xlog_replay_resume():- Restarts recovery if it was paused.

Database Object Size Functions:-

--------------------------------

These functions calculate the disk space usage of database objects.

pg_column_size(any):- Number of bytes used to store a particular value (possibly compressed)

pg_database_size(oid):- Disk space used by the database with the specified OID

pg_database_size(name):- Disk space used by the database with the specified name

pg_indexes_size(regclass):- Total disk space used by indexes attached to the specified table

pg_relation_size(relation regclass, fork text):- Disk space used by the specified fork ('main',
'fsm', 'vm', or 'init') of the specified table or index

pg_relation_size(relation regclass):- Shorthand for pg_relation_size(..., 'main')

pg_size_pretty(bigint):- Converts a size in bytes expressed as a 64-bit integer into a human-readable format with size units

pg_size_pretty(numeric):- Converts a size in bytes expressed as a numeric value into a human-readable format with size units

pg_table_size(regclass):- Disk space used by the specified table, excluding indexes (but including
TOAST, free space map, and visibility map)

pg_tablespace_size(oid):- Disk space used by the tablespace with the specified OID

pg_tablespace_size(name):- Disk space used by the tablespace with the specified name

pg_total_relation_size(regclass):- Total disk space used by the specified table, including all indexes and TOAST data

Database Object Location Functions

--------------------------------------

pg_relation_filenode(relation regclass):- Filenode number of the specified relation

pg_relation_filepath(relation regclass):- File path name of the specified relation

================================================================================

TOAST:

=======

* TOAST is a storage technique used in PostgreSQL to handle large data objects such as images,
videos, and audio files.

* The TOAST technique allows for the efficient storage of large data objects by breaking them into smaller chunks and storing them separately from the main table.

* This can improve the performance of queries and indexing and reduce the amount of disk
space required to store the data.

* TOAST tables are created automatically by PostgreSQL when a table contains a column of a TOAST-able (variable-length) data type such as bytea, text, varchar, or jsonb.

* The TOAST table is then used to store the large data objects, while the main table stores a
reference to the TOAST table.

Create a table with a large data column:

CREATE TABLE images ( id SERIAL PRIMARY KEY, data BYTEA );

Query the table to see that the large data object is stored in a TOAST table:

SELECT relname, relkind FROM pg_class WHERE relname LIKE 'pg_toast%';

* When a large image is inserted into the table, PostgreSQL automatically creates a TOAST table to store the image data separately from the main table.

* The pg_class system catalog table is then queried to show that a TOAST table has been
created.
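The TOAST table backing a given relation can also be located directly via pg_class; a minimal sketch using the images table above:

```sql
-- reltoastrelid is zero when the table has no TOAST table.
SELECT relname, reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'images';
```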

In PostgreSQL, you can use the different TOAST storage strategies by setting the “storage”
attribute on a column.

CREATE TABLE mytable ( id serial primary key, large_column bytea );

postgres=# \d+ mytable

Table "public.mytable"

Column | Type | Collation | Nullable | Default | Storage

-------------+---------+-----------+----------+-------------------------------------+----------

id | integer | | not null | nextval('mytable_id_seq'::regclass) | plain

large_column | bytea | | | | extended

Indexes:

"mytable_pkey" PRIMARY KEY, btree (id)

Access method: heap

postgres=# ALTER TABLE mytable ALTER COLUMN large_column SET STORAGE PLAIN;

ALTER TABLE

postgres=# \d+ mytable

Table "public.mytable"

Column | Type | Collation | Nullable | Default | Storage

-------------+---------+-----------+----------+-------------------------------------+----------

id | integer | | not null | nextval('mytable_id_seq'::regclass) | plain

large_column | bytea | | | | plain

Indexes:

"mytable_pkey" PRIMARY KEY, btree (id)

Access method: heap

Limited data types: a TOAST table is created only for columns of variable-length (TOAST-able) data types such as bytea, text, and varchar. Fixed-length types such as integer cannot be TOASTed, no matter how many rows the table holds.

SELECT nspname || '.' || relname AS "relation"

,pg_size_pretty(pg_relation_size(C.oid)) AS "size"

FROM pg_class C

LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)

WHERE nspname NOT IN ('pg_catalog', 'information_schema')

ORDER BY pg_relation_size(C.oid) DESC

LIMIT 20;

================================================================================

Postgres major version upgrade using replication server

./pg_basebackup -D ../pgsql_backup -v -F p --tablespace-mapping=/home/postgres/pgbench_tab_1=/home/postgres/pgbench_tab_3 --tablespace-mapping=/home/postgres/pgbench_tab_2=/home/postgres/pgbench_tab_4

