Introduction
TL;DR: What is the Best Way to Deploy PostgreSQL on Kubernetes?
Depending on your use case, here is the best way to deploy PostgreSQL on Kubernetes:
- Development/Test: Use StatefulSets for simplicity.
- Small-Medium Production: Use the Bitnami Helm chart for a managed setup.
- Enterprise: Use CloudNativePG for high availability and advanced features.
- Fully Managed: Choose AWS RDS for a fully managed PostgreSQL service.
Over my decade-plus as a DevOps engineer, I have deployed PostgreSQL on Kubernetes using several different methods. Each approach has its merits and challenges.
In this tutorial, I will walk you through the different deployment strategies, from the most basic approach to more sophisticated solutions.
Steps we will cover:
- Interactive PostgreSQL Deployment Guide
- Deployment Methods Comparison
- How to Deploy PostgreSQL on Kubernetes Using StatefulSets: A Basic Guide
- How to Deploy PostgreSQL Using Bitnami Helm Charts: A Production-Ready Setup
- How to Set Up Enterprise-Grade PostgreSQL with CloudNativePG Operator
- How to Integrate AWS RDS PostgreSQL with Kubernetes: A Complete Guide
Interactive PostgreSQL Deployment Guide
Try our interactive tool to find the PostgreSQL deployment strategy that best matches your use case before reading this tutorial in detail:
PostgreSQL Deployment Method Finder
Deployment Methods Comparison
Feature | Basic StatefulSet | Helm (Bitnami) | CloudNativePG | AWS RDS |
---|---|---|---|---|
Production Readiness | Development/Test | Small-Medium Prod | Enterprise | Enterprise |
Setup Complexity | Simple | Medium | Complex | Simple |
HA/Failover | Manual | Semi-Auto | Auto | Auto |
Scaling | Manual | Semi-Auto | Auto | Auto |
Backup/Recovery | Manual | Semi-Auto | Auto + PITR | Auto + PITR |
Monitoring | DIY | Basic Included | Advanced | AWS Native |
Maintenance Effort | High | Medium | Low | Minimal |
Cost | $ | $ | $$ | $$$ |
Control/Flexibility | Full | Good | Good | Limited |
Team Needs | K8s Basics | Helm + K8s | K8s Expert | AWS/RDS |
Choose based on:
- Development: Basic StatefulSet
- Small Production: Helm Chart
- Enterprise/Critical: CloudNativePG or AWS RDS (use RDS if you are already on AWS)
- Kubernetes expertise: Low → RDS, High → CloudNativePG
How to Deploy PostgreSQL on Kubernetes Using StatefulSets: A Basic Guide
The most straightforward way is to use a StatefulSet for deploying PostgreSQL. This is perfect for development environments and small production workloads where high availability isn't critical.
Prerequisites
- A running Kubernetes cluster (1.19+)
- kubectl configured to reach your cluster
- Basic understanding of Kubernetes resources
- A storage class that supports the ReadWriteOnce access mode
Creating a Namespace
# Create a dedicated namespace and set it as the default for this session
kubectl create namespace postgres
kubectl config set-context --current --namespace=postgres
Create Secrets
# postgres-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
type: Opaque
# stringData lets Kubernetes base64-encode the values for you; shell
# substitution like $(echo ... | base64) is NOT evaluated inside a manifest.
stringData:
  POSTGRES_PASSWORD: your-secure-password
  POSTGRES_USER: postgres
Apply the secrets:
kubectl apply -f postgres-secrets.yaml
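If you prefer not to keep credentials in a manifest at all, the same secret can be created imperatively and kubectl handles the encoding for you:
kubectl create secret generic postgres-secrets \
--namespace postgres \
--from-literal=POSTGRES_USER=postgres \
--from-literal=POSTGRES_PASSWORD=your-secure-password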
Configuring PostgreSQL
Create ConfigMap for the PostgreSQL configuration:
# postgres-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  postgresql.conf: |
    # Connection Settings
    max_connections = 100
    # Memory Settings
    shared_buffers = 256MB
    effective_cache_size = 768MB
    maintenance_work_mem = 64MB
    work_mem = 2621kB
    # Write Ahead Log
    wal_buffers = 7864kB
    min_wal_size = 1GB
    max_wal_size = 4GB
    # Query Planning
    default_statistics_target = 100
    random_page_cost = 1.1
    effective_io_concurrency = 200
    # Checkpointing
    checkpoint_completion_target = 0.9
  pg_hba.conf: |
    local all all trust
    host all all 0.0.0.0/0 md5
    host replication all 0.0.0.0/0 md5
Apply the ConfigMap:
kubectl apply -f postgres-config.yaml
Create the StatefulSet
Now, let's create the PostgreSQL StatefulSet with proper configuration:
# postgres-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15.3
        # Point PostgreSQL at the mounted config files; without these
        # arguments the ConfigMap is mounted but silently ignored.
        args:
        - -c
        - config_file=/etc/postgresql/postgresql.conf
        - -c
        - hba_file=/etc/postgresql/pg_hba.conf
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: POSTGRES_PASSWORD
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-secrets
              key: POSTGRES_USER
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: postgres-config
          mountPath: /etc/postgresql/postgresql.conf
          subPath: postgresql.conf
        - name: postgres-config
          mountPath: /etc/postgresql/pg_hba.conf
          subPath: pg_hba.conf
        resources:
          requests:
            memory: "2Gi"
            cpu: "1"
          limits:
            memory: "4Gi"
            cpu: "2"
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -U
            - postgres
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -U
            - postgres
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
      # The ConfigMap volume referenced by the mounts above
      volumes:
      - name: postgres-config
        configMap:
          name: postgres-config
  volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
Apply the StatefulSet:
kubectl apply -f postgres-statefulset.yaml
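The pod needs a little time to pull the image and initialize the data directory; you can watch it come up with:
kubectl rollout status statefulset/postgres
kubectl get pods -l app=postgres -w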
Create the Service
Create a headless service to expose PostgreSQL:
# postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
  - port: 5432
    name: postgres
  clusterIP: None
  selector:
    app: postgres
Apply the service:
kubectl apply -f postgres-service.yaml
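Because the service is headless, each pod gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local. Assuming the postgres namespace created earlier, an application (or a psql client) can address the instance directly:
psql -h postgres-0.postgres.postgres.svc.cluster.local -U postgres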
Set Up Monitoring
Basic monitoring with Prometheus:
# postgres-monitoring.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: postgres-monitor
spec:
  selector:
    matchLabels:
      app: postgres
  endpoints:
  - port: postgres
    interval: 30s
    scrapeTimeout: 10s
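One caveat: the stock postgres image does not expose Prometheus metrics on its own, so this ServiceMonitor only becomes useful once something serves them. A minimal sketch of a postgres_exporter sidecar you could add under the StatefulSet's containers list, with the monitor's endpoint then pointed at the metrics port (the connection string is an assumption to adapt to your auth setup):
# postgres_exporter sidecar (add to the StatefulSet's containers list)
- name: postgres-exporter
  image: quay.io/prometheuscommunity/postgres-exporter:v0.15.0
  env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: POSTGRES_PASSWORD
  - name: DATA_SOURCE_NAME
    # $(VAR) expansion works because POSTGRES_PASSWORD is defined above
    value: "postgresql://postgres:$(POSTGRES_PASSWORD)@localhost:5432/postgres?sslmode=disable"
  ports:
  - containerPort: 9187
    name: metrics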
Configure Backups
Create a backup solution using CronJob:
# postgres-backup.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: postgres:15.3
            command:
            - /bin/sh
            - -c
            - |
              TIMESTAMP=$(date +%Y%m%d_%H%M%S)
              PGPASSWORD=$POSTGRES_PASSWORD pg_dump -h postgres -U postgres > /backup/db_$TIMESTAMP.sql
            env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: POSTGRES_PASSWORD
            volumeMounts:
            - name: backup-volume
              mountPath: /backup
          volumes:
          - name: backup-volume
            persistentVolumeClaim:
              claimName: postgres-backup-pvc
          restartPolicy: OnFailure
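The CronJob mounts a postgres-backup-pvc claim that has to exist before the first run; a minimal sketch (size and storage class are assumptions to adjust for your retention needs):
# postgres-backup-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-backup-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi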
Verification Steps
# Check that the pod, storage, and service are up
kubectl get pods -l app=postgres
kubectl get pvc
kubectl get svc
# Open a psql session from a throwaway client pod
kubectl run -it --rm --image=postgres:15.3 postgres-client -- psql -h postgres -U postgres
# Tail the PostgreSQL logs
kubectl logs -f statefulset.apps/postgres
Common Issues and Solutions
1. Pod won't start
- Check PVC status: kubectl get pvc
- Check the storage class: kubectl get sc
- Check pod events: kubectl describe pod postgres-0
2. Connection issues
- Check the service: kubectl get svc postgres
- Verify the endpoints: kubectl get endpoints postgres
- Check network policies: kubectl get networkpolicies
3. Performance issues
- Check resource consumption: kubectl top pod postgres-0
- Check the PostgreSQL logs: kubectl logs postgres-0
Best Practices for Basic Deployment
Adjust the resource limits in the StatefulSet to match your workload, and weigh the following trade-offs:
Category | Pros | Cons |
---|---|---|
Resource Management | • Predictable performance, Easy scaling, Clear allocation | • Manual adjustments, Constant monitoring, Risk of misprovisioning |
Data Protection | • Custom backups, Flexible schedules, Full retention control | • Manual management, Complex restores, High storage needs |
Security | • Full policy control, Custom networking, Granular access | • Manual updates, Complex secrets, Large attack surface |
Monitoring | • Custom metrics, Detailed control, Flexible alerts | • High setup effort, Infrastructure costs, Manual tuning |
How to Deploy PostgreSQL Using Bitnami Helm Charts: A Production-Ready Setup
Moving on from the plain-vanilla StatefulSet pattern described above, the next evolution for most teams is Helm charts, which provide a better-packaged, more manageable way of doing deployments. In my experience, one of the most complete and actively maintained options is the Bitnami PostgreSQL chart.
Prerequisites
- Helm 3.x installed
- Basic knowledge of Helm concepts
- A running Kubernetes cluster
Step 1: Add Bitnami Repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Step 2: Create Values File
Create a custom values file (postgres-values.yaml) for the PostgreSQL configuration:
# postgres-values.yaml
global:
  postgresql:
    auth:
      postgresPassword: "your-secure-password"
      database: "your-database"
primary:
  persistence:
    size: 100Gi
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
  configuration: |
    max_connections = 100
    shared_buffers = 256MB
    effective_cache_size = 768MB
    maintenance_work_mem = 64MB
    checkpoint_completion_target = 0.9
    wal_buffers = 7864kB
    default_statistics_target = 100
    random_page_cost = 1.1
    effective_io_concurrency = 200
    work_mem = 2621kB
    min_wal_size = 1GB
    max_wal_size = 4GB
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
replication:
  enabled: true
  readReplicas: 2
  synchronousCommit: "on"
  numSynchronousReplicas: 1
networkPolicy:
  enabled: true
backup:
  enabled: true
  cronjob:
    schedule: "0 2 * * *"
    storage:
      persistentVolumeClaim:
        size: 50Gi
Step 3: Install PostgreSQL
Deploy PostgreSQL using the Helm chart:
helm install postgres bitnami/postgresql \
--namespace postgres \
--create-namespace \
--values postgres-values.yaml
Step 4: Verify Installation
Check the deployment status:
# Check all resources
helm status postgres -n postgres
# Get PostgreSQL password
export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgres postgres-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
echo $POSTGRES_PASSWORD
# Connect to PostgreSQL
kubectl run postgres-client --rm --tty -i --restart='Never' \
--namespace postgres \
--image docker.io/bitnami/postgresql:15.3.0 \
--env="PGPASSWORD=$POSTGRES_PASSWORD" \
--command -- psql --host postgres-postgresql -U postgres -d postgres -p 5432
Common Operations
Scaling Read Replicas
helm upgrade postgres bitnami/postgresql \
--namespace postgres \
--values postgres-values.yaml \
--set replication.readReplicas=3
When my application's read traffic increases, I scale up the read replicas to handle the load better. This command adds another replica to our PostgreSQL cluster.
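With replication enabled, the chart also exposes the replicas behind a separate read service (typically named <release>-postgresql-read); pointing read-only traffic there is what actually spreads the load. A quick check, assuming the release name postgres used above:
kubectl run psql-read --rm --tty -i --restart='Never' \
--namespace postgres \
--image docker.io/bitnami/postgresql:15.3.0 \
--env="PGPASSWORD=$POSTGRES_PASSWORD" \
--command -- psql --host postgres-postgresql-read -U postgres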
Update Configuration
# Edit postgres-values.yaml and then:
helm upgrade postgres bitnami/postgresql \
--namespace postgres \
--values postgres-values.yaml
I often need to tweak PostgreSQL settings as my application grows. This simple upgrade command applies any configuration changes I've made in my values file.
Backup and Restore
Manual Backup:
# No -t flag: a TTY can inject carriage returns into the dump
kubectl exec postgres-postgresql-0 -n postgres -- \
pg_dump -U postgres > backup.sql
I use this command for quick, on-demand backups before making major changes. It's a straightforward way to get a point-in-time snapshot of my database.
Restore from Backup:
kubectl exec -i postgres-postgresql-0 -n postgres -- \
psql -U postgres < backup.sql
When things go wrong, I can easily restore my database using this command. It's saved me more than once during development and testing.
Monitoring and Alerts
The Bitnami chart already includes Prometheus exporters. To add alerting on top of them, create a PrometheusRule:
# prometheus-rules.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: postgresql-alerts
  namespace: postgres
spec:
  groups:
  - name: postgresql
    rules:
    - alert: PostgreSQLHighConnections
      expr: pg_stat_activity_count > 100
      for: 5m
      labels:
        severity: warning
      annotations:
        description: "PostgreSQL instance has too many connections"
    - alert: PostgreSQLReplicationLag
      expr: pg_replication_lag_bytes > 100000000
      for: 5m
      labels:
        severity: critical
      annotations:
        description: "PostgreSQL replication is lagging"
Best Practices for Helm Deployment
Category | Pros | Cons |
---|---|---|
Version Control | • Reproducible deploys, Easy rollbacks, Clear history | • Complex versioning, Dependency issues, Storage overhead |
High Availability | • Built-in replication, Auto failover, Pod anti-affinity | • High resource usage, Complex setup, Network costs |
Backup Strategy | • Auto backups, Multiple options, Cross-zone storage | • Storage costs, Performance impact, Complex policies |
Security | • Built-in features, Auto rotation, SSL support | • Cert management, Complex updates, Config hardening |
How to Set Up Enterprise-Grade PostgreSQL with CloudNativePG Operator
CloudNativePG is my go-to choice for production deployments that require enterprise-grade features. It's a fully Kubernetes-native operator that brings advanced PostgreSQL capabilities to your cluster.
Prerequisites
- Kubernetes 1.21+
- kubectl installed (and optionally Helm)
- Cluster administrator privileges to install the operator
Install the Operator
# Using kubectl
kubectl apply -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.21/releases/cnpg-1.21.0.yaml
# Or using Helm
helm repo add cloudnative-pg https://cloudnative-pg.github.io/charts
helm install cloudnative-pg cloudnative-pg/cloudnative-pg \
--namespace cnpg-system \
--create-namespace
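Either route installs the operator into the cnpg-system namespace; confirm it is running before creating clusters (the deployment name may differ slightly between install methods):
kubectl get pods -n cnpg-system
kubectl get deployment -n cnpg-system cnpg-controller-manager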
Create a Basic Cluster
Let's start with a basic PostgreSQL cluster:
# postgres-cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-cluster
spec:
  instances: 3
  # PostgreSQL configuration
  postgresql:
    parameters:
      max_connections: "100"
      shared_buffers: "256MB"
      effective_cache_size: "768MB"
      maintenance_work_mem: "64MB"
      checkpoint_completion_target: "0.9"
      wal_buffers: "16MB"
      default_statistics_target: "100"
      random_page_cost: "1.1"
      effective_io_concurrency: "200"
      work_mem: "2621kB"
      min_wal_size: "1GB"
      max_wal_size: "4GB"
  # Storage configuration
  storage:
    size: 100Gi
    storageClass: standard
  # Resource requirements
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
  # Backup configuration
  backup:
    barmanObjectStore:
      destinationPath: "s3://my-bucket/backup"
      endpointURL: "https://s3.amazonaws.com"
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: SECRET_ACCESS_KEY
  # Monitoring configuration
  monitoring:
    enablePodMonitor: true
Configure Backups
Create an S3 credentials secret:
# aws-creds.yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
stringData:
  ACCESS_KEY_ID: your-access-key
  SECRET_ACCESS_KEY: your-secret-key
Deploy the Cluster
Apply the configurations:
kubectl apply -f aws-creds.yaml
kubectl apply -f postgres-cluster.yaml
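The cluster's health is reported on the CRD itself, and the optional cnpg kubectl plugin gives a richer view, assuming you have it installed:
kubectl get cluster postgres-cluster
kubectl cnpg status postgres-cluster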
Advanced Features
1. High Availability Configuration
# ha-postgres-cluster.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-ha
spec:
  instances: 3
  postgresql:
    parameters:
      # Replication settings
      max_wal_senders: "10"
      max_replication_slots: "10"
      wal_level: "logical"
  replicationSlots:
    highAvailability:
      enabled: true
  # Omit this bootstrap block when creating a fresh cluster; 'source'
  # must name an entry in spec.externalClusters
  bootstrap:
    recovery:
      source: postgres-cluster
  # Anti-affinity settings
  affinity:
    enablePodAntiAffinity: true
    topologyKey: kubernetes.io/hostname
2. Point-in-Time Recovery
# pitr-recovery.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-pitr
spec:
  instances: 3
  bootstrap:
    recovery:
      source: postgres-cluster
      recoveryTarget:
        targetTime: "2024-01-01 00:00:00.000000+00"
  # 'source' refers to an externalClusters entry pointing at the
  # original cluster's backup store
  externalClusters:
  - name: postgres-cluster
    barmanObjectStore:
      destinationPath: "s3://my-bucket/backup"
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: SECRET_ACCESS_KEY
3. Rolling Updates
CloudNativePG automatically handles rolling updates. Just update the cluster spec:
kubectl patch cluster postgres-cluster --type merge \
-p '{"spec":{"postgresql":{"parameters":{"max_connections":"200"}}}}'
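The operator restarts replicas first and switches over the primary last, so a parameter change like this causes no extended outage. You can watch the rollout via the label the operator applies to its pods:
kubectl get pods -l cnpg.io/cluster=postgres-cluster -w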
Monitoring Setup
Build a complete monitoring stack:
# monitoring-config.yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: postgres-monitor
spec:
  selector:
    matchLabels:
      postgresql: postgres-cluster
  podMetricsEndpoints:
  - port: metrics
    interval: 30s
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: postgres-alerts
spec:
  groups:
  - name: postgresql
    rules:
    - alert: PostgreSQLHighReplicationLag
      expr: pg_replication_lag_bytes > 100000000
      for: 5m
      labels:
        severity: critical
    - alert: PostgreSQLHighConnections
      expr: pg_stat_activity_count > 100
      for: 5m
      labels:
        severity: warning
Backup and Recovery
1. On-demand Backup
# backup.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: postgres-backup
spec:
  cluster:
    name: postgres-cluster
2. Scheduled Backups
# scheduled-backup.yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: postgres-scheduled-backup
spec:
  # CloudNativePG uses a six-field cron expression (with seconds):
  # this runs at midnight every day
  schedule: "0 0 0 * * *"
  cluster:
    name: postgres-cluster
  immediate: true
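To restore from one of these backups, you bootstrap a new cluster from the Backup resource; a minimal sketch, assuming the postgres-backup object created above has completed:
# restore-from-backup.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-restored
spec:
  instances: 3
  storage:
    size: 100Gi
  bootstrap:
    recovery:
      backup:
        name: postgres-backup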
Best Practices for CloudNativePG
Category | Pros | Cons |
---|---|---|
High Availability | • Native K8s, Auto failover, Multi-zone | • High costs, Complex setup, Network overhead |
Backup Strategy | • Continuous archiving, PITR, multi-storage | • Storage costs, Validation overhead, Complex retention |
Monitoring | • Native metrics, Prometheus, Custom alerts | • Resource overhead, Complex dashboards, Alert tuning |
Security | • Native security, Auto certs, RBAC | • Complex setup, Regular rotation, Policy overhead |
How to Integrate AWS RDS PostgreSQL with Kubernetes: A Complete Guide
Having explored the self-managed options, let's now look at AWS RDS: a fully managed PostgreSQL service that offloads database management tasks to AWS while integrating seamlessly with your Kubernetes workloads.
Prerequisites
- AWS Account with appropriate permissions
- AWS CLI configured
- eksctl or similar tools for EKS management
- An AWS-hosted Kubernetes cluster (preferably EKS)
Step 1: Create RDS Instance
First, let's create an RDS instance using the AWS Controllers for Kubernetes (ACK) RDS controller:
# rds-instance.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: postgres-rds
spec:
  engine: postgres
  engineVersion: "15.3"
  dbInstanceClass: db.t3.large
  dbInstanceIdentifier: postgres-prod
  masterUsername: postgres
  masterUserPassword:
    name: rds-credentials
    key: password
  allocatedStorage: 100
  maxAllocatedStorage: 200
  publiclyAccessible: false
  vpcSecurityGroupIDs:
  - sg-xxxxxxxxxxxxxxxxx
  dbSubnetGroupName: my-db-subnet-group
  # High Availability Configuration
  multiAZ: true
  # Backup Configuration
  backupRetentionPeriod: 7
  preferredBackupWindow: "03:00-04:00"
  # Maintenance Window
  preferredMaintenanceWindow: "Mon:04:00-Mon:05:00"
  # Performance Insights
  enablePerformanceInsights: true
  performanceInsightsRetentionPeriod: 7
  # Enhanced Monitoring
  monitoringInterval: 60
  monitoringRoleARN: arn:aws:iam::123456789012:role/rds-monitoring-role
  # Storage Configuration
  storageType: gp3
  iops: 3000
Step 2: Create Kubernetes Secret
Create a secret for database credentials:
# rds-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: rds-credentials
type: Opaque
stringData:
  password: your-secure-password
  username: postgres
Step 3: Create Service for RDS
Create a Kubernetes service to access RDS:
# rds-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-rds
spec:
  type: ExternalName
  externalName: postgres-prod.xxxxx.region.rds.amazonaws.com
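The ExternalName service gives pods a stable in-cluster alias for the RDS endpoint. A quick connectivity check from a throwaway pod (using the credentials created above):
kubectl run rds-test --rm --tty -i --restart='Never' \
--image postgres:15.3 \
--env="PGPASSWORD=your-secure-password" \
--command -- psql -h postgres-rds -U postgres -d postgres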
Step 4: Configure Connection Pooling
Deploy PgBouncer for connection pooling:
# pgbouncer-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgbouncer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pgbouncer
  template:
    metadata:
      labels:
        app: pgbouncer
    spec:
      containers:
      - name: pgbouncer
        image: edoburu/pgbouncer:1.18.0
        env:
        - name: DB_HOST
          value: "postgres-prod.xxxxx.region.rds.amazonaws.com"
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: rds-credentials
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rds-credentials
              key: password
        ports:
        - containerPort: 5432
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
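The PgBouncer pods aren't reachable until something fronts them; a minimal Service sketch so applications can point their DB_HOST at pgbouncer instead of the raw RDS endpoint:
# pgbouncer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: pgbouncer
spec:
  selector:
    app: pgbouncer
  ports:
  - port: 5432
    targetPort: 5432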
Step 5: Set Up Monitoring
Create CloudWatch metrics collection:
# cloudwatch-metrics.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloudwatch-agent-config
data:
  cwagentconfig.json: |
    {
      "metrics": {
        "metrics_collected": {
          "rds": {
            "metrics_collection_interval": 60,
            "resources": [
              "postgres-prod"
            ],
            "measurement": [
              "CPUUtilization",
              "DatabaseConnections",
              "FreeStorageSpace",
              "ReadIOPS",
              "WriteIOPS"
            ]
          }
        }
      }
    }
AWS RDS Integration Best Practices
Category | Pros | Cons |
---|---|---|
Network Security | • AWS security, VPC, PrivateLink | • Complex setup, Cross-account issues, Latency |
High Availability | • Multi-AZ, Auto failover, Read replicas | • High costs, Regional complexity, Network costs |
Backup Strategy | • Auto backups, Cross-region, PITR | • Backup windows, Storage costs, Recovery time |
Performance | • Managed ops, Performance insights, Auto scaling | • Limited control, Cost tradeoffs, Size limits |
Common Operations
1. Creating Read Replica
aws rds create-db-instance-read-replica \
--db-instance-identifier postgres-prod-replica \
--source-db-instance-identifier postgres-prod
2. Scaling Storage
aws rds modify-db-instance \
--db-instance-identifier postgres-prod \
--allocated-storage 200 \
--apply-immediately
3. Taking Manual Snapshot
aws rds create-db-snapshot \
--db-instance-identifier postgres-prod \
--db-snapshot-identifier manual-backup-$(date +%Y%m%d)
Monitoring and Alerting
1. CloudWatch Alarms
# cloudwatch-alarms.yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  HighCPUAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmName: RDS-HighCPU
      MetricName: CPUUtilization
      Namespace: AWS/RDS
      Statistic: Average
      Period: 300
      EvaluationPeriods: 2
      Threshold: 80
      AlarmActions:
      - arn:aws:sns:region:account-id:notification-topic
      Dimensions:
      - Name: DBInstanceIdentifier
        Value: postgres-prod
2. Kubernetes Prometheus Integration
# prometheus-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rds-monitor
spec:
  endpoints:
  - interval: 30s
    port: metrics
  selector:
    matchLabels:
      app: cloudwatch-exporter
Cost Optimization
1. Instance right-sizing
- Monitor Performance Insights
- Use AWS Cost Explorer
- Consider Reserved Instances
- Scale the instance class based on usage
2. Storage optimization
- Monitor storage growth
- Enable storage autoscaling
- Clean up old snapshots
- Use gp3 storage for a better performance/cost ratio
Migration to RDS
If migrating from another solution:
1. Preparation
# Create subnet group
aws rds create-db-subnet-group \
--db-subnet-group-name my-subnet-group \
--subnet-ids subnet-xxxxx subnet-yyyyy
# Create parameter group
aws rds create-db-parameter-group \
--db-parameter-group-name custom-postgres15 \
--db-parameter-group-family postgres15
2. Data Migration
# Using AWS DMS
aws dms create-replication-instance \
--replication-instance-class dms.t3.large \
--replication-instance-identifier migration-instance
# Or using pg_dump/pg_restore
pg_dump -h old-postgres -U postgres | \
psql -h postgres-prod.xxxxx.region.rds.amazonaws.com -U postgres
3. Application Migration
# Update application configurations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: DB_HOST
          value: postgres-rds
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: rds-credentials
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: rds-credentials
              key: password
That completes this comprehensive tutorial on the various ways to deploy PostgreSQL on Kubernetes. Each has its pros and use cases, and the right choice depends on your particular needs, team skills, and operational constraints.
Conclusion
After working with the various ways of deploying PostgreSQL on Kubernetes, my conclusion is that each serves a purpose: simple StatefulSets are a very good ground for development and learning; Helm charts give a balanced solution for normal production workloads; CloudNativePG's advanced features and native integration answer enterprise needs; and those already in the AWS ecosystem are best served by RDS.
Choose your deployment method based on your team's expertise, operational requirements, and budget constraints. Keep in mind that you can always start simple and evolve your infrastructure as your needs grow.