Cloud Native Postgres (CNPG) Procedures¶
Intended audience: Anyone who is administering databases at the USDF.
Setup Cluster¶
Setup Pooler¶
PgBouncer is used to serve connections to the database. A Pooler resource is defined in Kubernetes to create the underlying Kubernetes Pods and Service. USDF uses both session and transaction poolers. The transaction pooler is used for application access to databases because it uses connections more efficiently.
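For reference, below is a minimal sketch of a transaction-mode Pooler. The cluster name example-db and pooler name example-db-pooler-rw are placeholders, not existing USDF resources.

apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: example-db-pooler-rw
spec:
  cluster:
    name: example-db
  instances: 1
  type: rw
  pgbouncer:
    poolMode: transaction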
For services to be accessible from the S3DF they need an annotation on the service requesting an S3DF IP address that is reachable from outside the cluster. Previously these annotations could not be defined in the Pooler; service annotations are now supported. Below is a snippet from Rucio. Note the metallb.universe.tf/address-pool: sdf-rubin-ingest annotation. After an IP address is created, a SLAC DNS entry should be requested with a ServiceNow ticket.
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: pooler-name
spec:
  serviceTemplate:
    metadata:
      annotations:
        metallb.universe.tf/address-pool: sdf-rubin-ingest
    spec:
      type: LoadBalancer
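Once the Pooler is applied, the allocated external IP can be checked on the generated Service. A quick check, assuming the Pooler is named pooler-name and lives in the namespace <namespace>:

# the EXTERNAL-IP column shows the address allocated from the sdf-rubin-ingest pool
kubectl get service pooler-name -n <namespace>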
Setup Backup¶
Below are instructions for setting up backups. Backups should be created for production databases; backups for development are usually not needed. For storing backups a unique, unused S3 bucket must be created. It is very important that the S3 bucket does not contain existing data and is not shared between clusters: sharing a bucket will cause problems during restore if the cluster names are the same.
To set up backups, request a new S3 bucket with a ServiceNow ticket that includes the name of the S3 bucket, and request that the credentials be stored in Vault under the name of the database with a key of s3. We use rubin-<name of database> as the naming convention.
Update the Makefile to create an S3 secret from Vault. The example below is from Butler. Note the addition of S3_SECRET_PATH, which is the path in Vault, and the second set line, which retrieves the S3 credentials.
SECRET_PATH ?= secret/rubin/usdf-butler/postgres
S3_SECRET_PATH ?= secret/rubin/usdf-butler/s3

get-secrets-from-vault:
	mkdir -p etc/.secrets/
	set -e; for i in username password; do vault kv get --field=$$i $(SECRET_PATH) > etc/.secrets/$$i ; done
	set -e; for i in client-id client-secret; do vault kv get --field=$$i $(S3_SECRET_PATH) > etc/.secrets/$$i ; done
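The backup configuration below references a Kubernetes secret named s3-creds with keys ACCESS_KEY_ID and ACCESS_SECRET_KEY. How that secret is created varies by repository; a hedged sketch using the files fetched above might look like:

# create the secret referenced by the backup section; <namespace> is a placeholder
kubectl create secret generic s3-creds -n <namespace> \
  --from-literal=ACCESS_KEY_ID="$(cat etc/.secrets/client-id)" \
  --from-literal=ACCESS_SECRET_KEY="$(cat etc/.secrets/client-secret)"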
In the Cluster manifest for the database, configure the backup section as in the example below. Update the S3 bucket name.
backup:
  retentionPolicy: "15d"
  barmanObjectStore:
    destinationPath: s3://<bucket name>
    endpointURL: https://s3dfrgw.slac.stanford.edu
    s3Credentials:
      accessKeyId:
        name: s3-creds
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: s3-creds
        key: ACCESS_SECRET_KEY
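With the object store configured, CNPG can take backups on a schedule via a ScheduledBackup resource. Below is a minimal sketch; the resource name, the cluster name example-db, and the daily schedule are assumptions to adapt to your deployment.

apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: example-db-daily-backup
spec:
  # six-field cron expression (seconds first): daily at midnight
  schedule: "0 0 0 * * *"
  cluster:
    name: example-db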
Restore from Backup¶
Building Containers¶
Enable PgSphere¶
PgSphere is installed on the LSST CNPG image as detailed in the Building Containers section. To enable the extension, connect to the database and execute CREATE EXTENSION pg_sphere;
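One way to do this is to exec into the current primary pod and run psql there. The pod name example-db-1, the namespace, and the database name below are placeholders.

# run CREATE EXTENSION on the primary pod via a local psql session
kubectl exec -it -n <namespace> example-db-1 -- psql -d <database> -c "CREATE EXTENSION pg_sphere;"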
You can validate that pg_sphere is enabled and check its version as follows.
SELECT pg_sphere_version();
pg_sphere_version
-------------------
1.3.2
(1 row)
Resize Cluster¶
To increase or decrease the number of database instances, edit the instances field of the database manifest as shown below and apply the change.
spec:
  instances: 2
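Applying the change is an ordinary kubectl apply of the edited manifest; the filename and namespace below are placeholders. CNPG then adds or removes replicas to match the requested count.

# apply the edited Cluster manifest and watch the cluster converge
kubectl apply -f cluster.yaml -n <namespace>
kubectl get cluster -n <namespace>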