
PostgreSQL monitoring integration

The New Relic PostgreSQL on-host integration collects and sends inventory and metric data from your PostgreSQL instance to the New Relic platform, where you can aggregate and visualize key performance metrics. Data from instances, databases, and clusters helps you find the source of problems.

Read on to install the integration, and to see what data we collect. If you don't have one already, create a New Relic account. It's free, forever.

Compatibility and requirements

Our integration is compatible with PostgreSQL 9.0 or higher.

If PostgreSQL is not running on Kubernetes or Amazon ECS, you can install the infrastructure agent on a Linux or Windows host where PostgreSQL is installed, or on a host that can remotely access the PostgreSQL instance. If PostgreSQL is running on Kubernetes or Amazon ECS, follow the installation instructions for those environments.

Quick start

Instrument your PostgreSQL instance quickly and send your telemetry data with guided install. Our guided install creates a customized CLI command for your environment that downloads and installs the New Relic CLI and the infrastructure agent.


Ready to get started? Click one of these buttons to try it out.

Guided install

Our guided install uses the infrastructure agent to set up the PostgreSQL integration. Not only that, it discovers other applications and log sources running in your environment and then recommends which ones you should instrument.

The guided install works with most setups. But if it doesn't suit your needs, you can find other methods below to get started monitoring your PostgreSQL instance.

Install and activate

To install the PostgreSQL integration, follow the instructions for your environment:


PostgreSQL users and permissions

Create a user with SELECT permissions on:

  • pg_stat_database
  • pg_stat_database_conflicts
  • pg_stat_bgwriter

You can complete this step before or after you configure the postgresql-config.yml file. To create the user for the PostgreSQL integration:

CREATE USER new_relic WITH PASSWORD 'PASSWORD';
GRANT SELECT ON pg_stat_database TO new_relic;
GRANT SELECT ON pg_stat_database_conflicts TO new_relic;
GRANT SELECT ON pg_stat_bgwriter TO new_relic;

This will allow the integration to gather global metrics related to the PostgreSQL instance.

If you also want to obtain table and index-related metrics (for example, table size and index size), the PostgreSQL role used by the integration (new_relic) also needs SELECT permissions on the tables from which it will gather metrics. For example, to allow the integration to collect metrics from all the tables and indexes present in the database (in the public schema), use the following:

GRANT SELECT ON ALL TABLES IN SCHEMA public TO new_relic;

Configure the integration

An integration's YAML-format configuration is where you place required login credentials and configure how data is collected. Which options you change depends on your setup and preferences.

There are several ways to configure the integration, depending on how it was installed:

Config options are below. For an example configuration, see the example config file.

The configuration file has common settings applicable to all integrations, such as interval, timeout, and inventory_source. To read all about these common settings, refer to our Configuration Format document.

Important

If you are still using our legacy configuration and definition files, refer to this document for help.

Specific settings related to PostgreSQL are defined using the env section of the configuration file. These settings control the connection to your PostgreSQL instance as well as other security settings and features. The list of valid settings is described in the next section of this document.
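
For orientation, here is a minimal sketch of where the env section sits in an integration configuration file. It assumes the integration is registered under the name nri-postgresql and uses placeholder connection values only:

integrations:
  - name: nri-postgresql
    env:
      # PostgreSQL-specific settings described in the next section go here.
      HOSTNAME: localhost
      PORT: 5432
    # Common settings such as interval and inventory_source also apply:
    interval: 15s
    inventory_source: config/postgresql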

PostgreSQL instance settings

The PostgreSQL integration collects both Metrics (M) and Inventory (I) information. The Applies to note on each setting below indicates whether it can be used for metrics collection, inventory collection, or both:

  • HOSTNAME: The hostname for the PostgreSQL connection. Default: localhost. Applies to: M/I.
  • PORT: The port where PostgreSQL is running. Default: 5432. Applies to: M/I.
  • USERNAME: The user name for the PostgreSQL connection. Required. Default: N/A. Applies to: M/I.
  • PASSWORD: The password for the PostgreSQL connection. Required. Default: N/A. Applies to: M/I.
  • COLLECTION_LIST: A JSON array, a JSON object, or the string literal ALL that specifies the entities to be collected. The PostgreSQL user can only collect table and index metrics from tables it has SELECT permissions for. Required except for PgBouncer. See the sketch after this list for examples. Default: N/A. Applies to: M.
  • COLLECTION_IGNORE_DATABASE_LIST: A JSON array of database names to ignore for metrics collection. Typically useful when COLLECTION_LIST is set to ALL and some databases need to be ignored. Default: '[]'. Applies to: M.
  • PGBOUNCER: Collect PgBouncer metrics. Default: false. Applies to: M.
  • ENABLE_SSL: Determines whether SSL is enabled. If true, ssl_cert_location and ssl_key_location are required. Default: false. Applies to: M/I.
  • TRUST_SERVER_CERTIFICATE: If true, the server certificate is not verified for SSL. If false, the server certificate identified in ssl_root_cert_location is verified. Default: false. Applies to: M/I.
  • SSL_ROOT_CERT_LOCATION: Absolute path to the PEM-encoded root certificate file. Required if trust_server_certificate is false. Default: N/A. Applies to: M/I.
  • SSL_CERT_LOCATION: Absolute path to the PEM-encoded client certificate file. Required if enable_ssl is true. Default: N/A. Applies to: M/I.
  • SSL_KEY_LOCATION: Absolute path to the PEM-encoded client key file. Required if enable_ssl is true. Default: N/A. Applies to: M/I.
  • TIMEOUT: Maximum wait for the connection, in seconds. Set to 0 for no timeout. Default: 10. Applies to: M/I.
  • DATABASE: The PostgreSQL database to connect to. Default: postgres. Applies to: M/I.
  • CUSTOM_METRICS_QUERY: A SQL query that requires the columns metric_name, metric_type, and metric_value. metric_type can be gauge, rate, delta, or attribute. Additional columns collected by the query are added to the metric set as attributes. Default: N/A. Applies to: M.
  • CUSTOM_METRICS_CONFIG: A path to a YAML file with a list of custom queries, along with their metric type, database, and sample name overrides. See the example for details. Default: N/A. Applies to: M.
  • COLLECT_DB_LOCK_METRICS: Enable collection of database lock metrics, which can be performance intensive. Default: false. Applies to: M.
  • COLLECT_BLOAT_METRICS: Enable tablespace bloat metrics, which can be performance intensive. Default: true. Applies to: M.
  • METRICS: Set to true to enable metrics-only collection. Default: false.
  • INVENTORY: Set to true to enable inventory-only collection. Default: false.
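
As referenced in the COLLECTION_LIST entry above, here is a hedged sketch of the ALL literal and the JSON object form of COLLECTION_LIST, plus a CUSTOM_METRICS_QUERY. It assumes the nri-postgresql integration name and a JSON object nested as database > schema > table > indexes; all entity names are hypothetical placeholders:

integrations:
  - name: nri-postgresql
    env:
      # Connection settings (HOSTNAME, PORT, USERNAME, PASSWORD) omitted here.
      # Collect everything the integration user can read:
      COLLECTION_LIST: 'ALL'
      # Or restrict collection with a JSON object (names below are examples only):
      # COLLECTION_LIST: '{"postgres":{"public":{"example_table":["example_index"]}}}'
      # Optional custom query; it must return metric_name, metric_type, and
      # metric_value columns, and any extra columns become attributes:
      CUSTOM_METRICS_QUERY: >-
        SELECT 'rows_inserted' AS metric_name,
               'rate' AS metric_type,
               tup_inserted AS metric_value
        FROM pg_stat_database
        WHERE datname = current_database();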

The values for these settings can be defined in several ways:

  • Adding the value directly in the config file. This is the most common way.
  • Replacing the values with environment variables using the {{}} notation. This requires infrastructure agent v1.14.0 or higher. Read more here, and see the sketch after this list.
  • Using secrets management. Use this to protect sensitive information, such as passwords, from being exposed in plain text in the configuration file. For more information, see Secrets management.
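
For instance, here is a hedged sketch of the {{}} notation mentioned above, assuming an environment variable named POSTGRES_PASSWORD is available to the infrastructure agent (v1.14.0 or higher); quoting is used here to keep the file valid YAML:

integrations:
  - name: nri-postgresql
    env:
      USERNAME: new_relic
      # Read from the POSTGRES_PASSWORD environment variable at runtime
      # instead of being hard-coded in this file:
      PASSWORD: "{{ POSTGRES_PASSWORD }}"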

Labels and custom attributes

Environment variables can be used to control config settings, such as your license key, and are then passed through to the infrastructure agent. For instructions on how to use this feature, see Configure the infrastructure agent. You can further decorate your metrics using labels. Labels let you add key/value pair attributes to your metrics, which you can then use to query, filter, or group your metrics.
Our default sample config file includes examples of labels, but since they are not mandatory, you can remove, modify, or add new ones of your choice.

labels:
  env: production
  role: postgresql

Example configuration

Example postgresql-config.yml file configuration:
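
The exact sample file may differ, but a minimal sketch along these lines, assuming the nri-postgresql integration name and placeholder hostnames, credentials, certificate paths, and collection entities, looks like:

integrations:
  - name: nri-postgresql
    env:
      HOSTNAME: localhost
      PORT: 5432
      USERNAME: new_relic
      PASSWORD: 'example-password'
      DATABASE: postgres
      COLLECTION_LIST: '{"postgres":{"public":{"example_table":["example_index"]}}}'
      COLLECT_DB_LOCK_METRICS: false
      ENABLE_SSL: true
      TRUST_SERVER_CERTIFICATE: false
      SSL_ROOT_CERT_LOCATION: /etc/newrelic-infra/root_cert.crt
      SSL_CERT_LOCATION: /etc/newrelic-infra/postgresql.crt
      SSL_KEY_LOCATION: /etc/newrelic-infra/postgresql.key
      TIMEOUT: 10
    interval: 15s
    labels:
      env: production
      role: postgresql
    inventory_source: config/postgresql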

For more about the general structure of on-host integration configuration, see Configuration.

Find and use data

Data from this service is reported to an integration dashboard.

Metrics are attached to these event types: PostgresqlDatabaseSample, PostgresqlIndexSample, PostgresqlInstanceSample, PostgresqlTableSample, and PgBouncerSample.

You can query this data for troubleshooting purposes or to create custom charts and dashboards.

For more on how to find and use your data, see Understand integration data.

Metric data

The PostgreSQL integration collects the following database metric attributes. Some attributes apply to all PostgreSQL event types. Some metric names are prefixed with a category indicator and a period, such as db. or index.

PostgresqlDatabaseSample metrics

These attributes are attached to the PostgresqlDatabaseSample event type:

  • db.connections: Number of backends currently connected to this database.
  • db.maxconnections: The maximum number of concurrent connections to the database server.
  • db.commitsPerSecond: Committed transactions per second.
  • db.rollbacksPerSecond: Transactions rolled back per second.
  • db.readsPerSecond: Number of disk blocks read in this database per second.
  • db.bufferHitsPerSecond: Number of times disk blocks were found already in the buffer cache, so that a read was not necessary. This only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache.
  • db.rowsReturnedPerSecond: Rows returned by queries per second.
  • db.rowsFetchedPerSecond: Rows fetched by queries per second.
  • db.rowsInsertedPerSecond: Rows inserted per second.
  • db.rowsUpdatedPerSecond: Rows updated per second.
  • db.rowsDeletedPerSecond: Rows deleted per second.
  • db.conflicts.tablespacePerSecond: Number of queries in this database that have been canceled due to dropped tablespaces.
  • db.conflicts.locksPerSecond: Number of queries in this database that have been canceled due to lock timeouts.
  • db.conflicts.snapshotPerSecond: Number of queries in this database that have been canceled due to old snapshots.
  • db.conflicts.bufferpinPerSecond: Number of queries in this database that have been canceled due to pinned buffers.
  • db.conflicts.deadlockPerSecond: Number of queries in this database that have been canceled due to deadlocks.
  • db.tempFilesCreatedPerSecond: Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (for example, sorting or hashing), and regardless of the log_temp_files setting.
  • db.tempWrittenInBytesPerSecond: Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting.
  • db.deadlocksPerSecond: Number of deadlocks detected in this database.
  • db.readTimeInMillisecondsPerSecond: Time spent reading data file blocks by backends in this database, in milliseconds.
  • db.writeTimeInMillisecondsPerSecond: Time spent writing data file blocks by backends in this database, in milliseconds.

PostgresqlIndexSample metrics

These attributes are attached to the PostgresqlIndexSample event type:

  • index.sizeInBytes: The size of an index.
  • index.rowsReadPerSecond: The number of index entries returned by scans on this index.
  • index.rowsFetchedPerSecond: The number of index entries fetched by scans on this index.

PostgresqlInstanceSample metrics

These attributes are attached to the PostgresqlInstanceSample event type:

  • bgwriter.checkpointsScheduledPerSecond: Number of scheduled checkpoints that have been performed.
  • bgwriter.checkpointsRequestedPerSecond: Number of requested checkpoints that have been performed.
  • bgwriter.buffersWrittenForCheckpointsPerSecond: Number of buffers written during checkpoints.
  • bgwriter.buffersWrittenByBackgroundWriterPerSecond: Number of buffers written by the background writer.
  • bgwriter.backgroundWriterStopsPerSecond: Number of times the background writer stopped a cleaning scan because it had written too many buffers.
  • bgwriter.buffersWrittenByBackendPerSecond: Number of buffers written directly by a backend.
  • bgwriter.buffersAllocatedPerSecond: Number of buffers allocated.
  • bgwriter.backendFsyncCallsPerSecond: Number of times a backend had to execute its own fsync call. Normally the background writer handles them even when the backend does its own write.
  • bgwriter.checkpointWriteTimeInMillisecondsPerSecond: Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds.
  • bgwriter.checkpointSyncTimeInMillisecondsPerSecond: Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds.

PostgresqlTableSample metrics

These attributes are attached to the PostgresqlTableSample event type:

  • table.totalSizeInBytes: The total disk space used by the table, including indexes and TOAST data.
  • table.indexSizeInBytes: The total disk space used by indexes attached to the specified table.
  • table.liveRows: Number of live rows.
  • table.deadRows: Number of dead rows.
  • table.indexBlocksReadPerSecond: The number of disk blocks read from all indexes on this table.
  • table.indexBlocksHitPerSecond: The number of buffer hits in all indexes on this table.
  • table.indexToastBlocksReadPerSecond: The number of disk blocks read from this table's TOAST table index.
  • table.indexToastBlocksHitPerSecond: The number of buffer hits in this table's TOAST table index.
  • table.lastVacuum: Time of the last vacuum on the table.
  • table.lastAutoVacuum: Time of the last automatic vacuum on the table.
  • table.lastAnalyze: Time of the last analyze on the table.
  • table.lastAutoAnalyze: Time of the last automatic analyze on the table.
  • table.sequentialScansPerSecond: Number of sequential scans initiated on this table per second.
  • table.sequentialScanRowsFetchedPerSecond: Number of live rows fetched by sequential scans per second.
  • table.indexScansPerSecond: Number of index scans initiated on this table.
  • table.indexScanRowsFetchedPerSecond: Number of live rows fetched by index scans.
  • table.rowsInsertedPerSecond: Rows inserted per second.
  • table.rowsUpdatedPerSecond: Rows updated per second.
  • table.rowsDeletedPerSecond: Rows deleted per second.
  • table.bloatSizeInBytes: Size of bloat in bytes.
  • table.dataSizeInBytes: Size of disk space used by the main fork of the table.
  • table.bloatRatio: Fraction of the table's data size that is bloat.

PgBouncerSample metrics

These attributes are attached to the PgBouncerSample event type:

  • pgbouncer.stats.transactionsPerSecond: The transaction rate.
  • pgbouncer.stats.queriesPerSecond: The query rate.
  • pgbouncer.stats.bytesInPerSecond: The total network traffic received.
  • pgbouncer.stats.bytesOutPerSecond: The total network traffic sent.
  • pgbouncer.stats.totalTransactionDurationInMillisecondsPerSecond: Time spent by pgbouncer in transaction.
  • pgbouncer.stats.totalQueryDurationInMillisecondsPerSecond: Time spent by pgbouncer actively querying PostgreSQL.
  • pgbouncer.stats.avgTransactionCount: The average number of transactions per second in the last stat period.
  • pgbouncer.stats.avgTransactionDurationInMilliseconds: The average transaction duration.
  • pgbouncer.stats.avgQueryCount: The average number of queries per second in the last stat period.
  • pgbouncer.stats.avgBytesIn: The client network traffic received.
  • pgbouncer.stats.avgBytesOut: The client network traffic sent.
  • pgbouncer.stats.avgQueryDurationInMilliseconds: The average query duration.
  • pgbouncer.pools.clientConnectionsActive: Client connections linked to a server connection and able to process queries.
  • pgbouncer.pools.clientConnectionsWaiting: Client connections waiting on a server connection.
  • pgbouncer.pools.serverConnectionsActive: Server connections linked to a client connection.
  • pgbouncer.pools.serverConnectionsIdle: Server connections idle and ready for a client query.
  • pgbouncer.pools.serverConnectionsUsed: Server connections idle more than server_check_delay, needing server_check_query.
  • pgbouncer.pools.serverConnectionsTested: Server connections currently running either server_reset_query or server_check_query.
  • pgbouncer.pools.serverConnectionsLogin: Server connections currently in the process of logging in.
  • pgbouncer.pools.maxwaitInMilliseconds: Age of the oldest unserved client connection.

Inventory data

The PostgreSQL integration collects each setting from pg_settings along with its boot_val and reset_val. The inventory data appears on the Inventory page, under the config/postgresql source.

Troubleshooting

Here are some troubleshooting tips for the PostgreSQL integration:

  • If you have connection problems, make sure you can connect to the cluster from the same box with psql.
  • If you have problems collecting PgBouncer metrics, make sure you are connected to the instance through PgBouncer. The default port is 6432. A sample configuration sketch follows this list.
  • If you get the error message Error creating list of entities to collect: pq: unsupported startup parameter: extra_float_digits, set ignore_startup_parameters = extra_float_digits in the PgBouncer config file.
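
For the PgBouncer case mentioned in this list, here is a hedged sketch of an instance that connects through PgBouncer, assuming the default port 6432, the nri-postgresql integration name, and PgBouncer's usual pgbouncer admin database; adjust names and credentials to your setup:

integrations:
  - name: nri-postgresql
    env:
      HOSTNAME: localhost
      PORT: 6432              # connect through PgBouncer, not directly to PostgreSQL
      USERNAME: new_relic
      PASSWORD: 'example-password'
      DATABASE: pgbouncer     # PgBouncer's admin console database (assumption)
      PGBOUNCER: true         # COLLECTION_LIST is not required for PgBouncer metrics
    inventory_source: config/postgresql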

Check the source code

This integration is open source software. That means you can browse its source code and send improvements, or create your own fork and build it.

For more help

If you need more help, check out these support and learning resources:
