
How to deploy Postgres to a Kubernetes cluster is a common question for teams that want database control while keeping application and infrastructure workflows in one platform. In this tutorial, you will deploy PostgreSQL on Kubernetes with a StatefulSet, persistent volumes, Secrets, and Services, then verify that data survives pod restarts.
You will also compare manual YAML, Helm, and operator-based approaches so you can choose the right path for development, staging, or production on DigitalOcean Kubernetes (DOKS).
- Run PostgreSQL as a StatefulSet on Kubernetes because it needs stable identity and durable storage.
- Use the ReadWriteOnce access mode for primary instances.
- Store passwords in Secret objects, not ConfigMaps.

Before you begin, make sure you have:

- kubectl installed and configured to talk to your cluster.
- A cluster node with at least 1 vCPU and 2 GiB RAM.

PostgreSQL is stateful. That means storage persistence, startup ordering, and identity matter. Kubernetes supports this well when you use the right controllers and storage model.
A StatefulSet gives each pod a stable name and stable storage attachment
behavior, which fits a database workload. A standard Deployment treats pods
as interchangeable units and is a better fit for stateless services.
Kubernetes uses a PersistentVolumeClaim (PVC) to request durable storage from a StorageClass. On DOKS, this maps to DigitalOcean Block Storage. If the pod is rescheduled, Kubernetes reattaches the same volume so your data remains available.
| Method | Complexity | High availability | Production fit | Best for |
|---|---|---|---|---|
| Manual YAML | Medium | Manual setup | Good with strong ops | Full control |
| Helm chart | Low to mid | Chart-based | Good for standard setups | Fast rollout |
| Operator | Mid to high | Built in | Best for long-term ops | Many clusters |
Create a dedicated namespace so database resources stay isolated from application resources.
Create namespace.yaml:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: postgres-demo
```

Apply it:

```shell
kubectl apply -f namespace.yaml
```
Use a Secret for passwords and a ConfigMap for non-sensitive settings.
Create the Secret:
```shell
kubectl create secret generic postgres-auth \
  --namespace postgres-demo \
  --from-literal=POSTGRES_PASSWORD='ReplaceWithStrongPassword'
```
Create postgres-configmap.yaml:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  namespace: postgres-demo
data:
  POSTGRES_DB: appdb
  POSTGRES_USER: appuser
```

Apply it:

```shell
kubectl apply -f postgres-configmap.yaml
```
For DOKS, use do-block-storage for most cases. For production environments
where you want to avoid accidental volume deletion, consider
do-block-storage-retain.
Create postgres-pvc.yaml:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: postgres-demo
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 20Gi
```

Apply and verify:

```shell
kubectl apply -f postgres-pvc.yaml
kubectl get pvc -n postgres-demo
```
Use a headless service for stable network identity in the StatefulSet and a ClusterIP service for in-cluster client access.
Create postgres-services.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
  namespace: postgres-demo
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - name: postgres
    port: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres-demo
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
  - name: postgres
    port: 5432
    targetPort: 5432
```

Apply:

```shell
kubectl apply -f postgres-services.yaml
```
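Once both Services exist, in-cluster clients reach PostgreSQL through Kubernetes DNS. As a sketch, these are the connection details an application in the cluster would use (the URI uses the standard libpq format, and the password shown is the placeholder from the Secret created earlier):

```
# Short name, resolvable from inside the postgres-demo namespace
host: postgres
# Fully qualified name, resolvable from any namespace
host: postgres.postgres-demo.svc.cluster.local
port: 5432

# Equivalent libpq connection URI
postgresql://appuser:ReplaceWithStrongPassword@postgres.postgres-demo.svc.cluster.local:5432/appdb
```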
This StatefulSet deploys a single PostgreSQL primary with explicit CPU and memory requests and limits for predictable scheduling.
Create postgres-statefulset.yaml:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: postgres-demo
spec:
  serviceName: postgres-headless
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: POSTGRES_DB
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: POSTGRES_USER
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-auth
              key: POSTGRES_PASSWORD
        # Point PGDATA at a subdirectory so initdb does not fail on the
        # lost+found directory present on freshly formatted block storage.
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        resources:
          requests:
            cpu: "250m"
            memory: "512Mi"
          limits:
            cpu: "1"
            memory: "1Gi"
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - pg_isready -U "$POSTGRES_USER" -d "$POSTGRES_DB"
          initialDelaySeconds: 10
          periodSeconds: 5
        livenessProbe:
          exec:
            command:
            - /bin/sh
            - -c
            - pg_isready -U "$POSTGRES_USER" -d "$POSTGRES_DB"
          initialDelaySeconds: 30
          periodSeconds: 10
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pvc
```

Apply and verify:

```shell
kubectl apply -f postgres-statefulset.yaml
kubectl get pods -n postgres-demo -l app=postgres
kubectl get statefulset -n postgres-demo
```
Now test connectivity and verify that data survives pod recreation.
Get the pod name:
```shell
POD_NAME=$(kubectl get pods -n postgres-demo -l app=postgres \
  -o jsonpath='{.items[0].metadata.name}')
echo "$POD_NAME"
```
Connect to PostgreSQL:
```shell
kubectl exec -it -n postgres-demo "$POD_NAME" -- \
  psql -U appuser -d appdb
```
Run a quick test inside psql:
```sql
CREATE TABLE IF NOT EXISTS healthcheck (
  id serial PRIMARY KEY,
  status text NOT NULL
);
INSERT INTO healthcheck (status) VALUES ('ok');
SELECT count(*) FROM healthcheck;
```
Exit psql, then delete the pod and wait for recreation:
```shell
kubectl delete pod -n postgres-demo "$POD_NAME"
kubectl get pods -n postgres-demo -l app=postgres -w
```
Reconnect and verify the row is still there:
```shell
NEW_POD_NAME=$(kubectl get pods -n postgres-demo -l app=postgres \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it -n postgres-demo "$NEW_POD_NAME" -- \
  psql -U appuser -d appdb -c "SELECT count(*) FROM healthcheck;"
```
If you still see the row count, your storage is persistent and correctly attached.
You can perform a quick logical backup with pg_dump. For a production backup strategy, also review How To Back Up and Restore a PostgreSQL Database.
Create a backup:

```shell
kubectl exec -n postgres-demo "$NEW_POD_NAME" -- \
  pg_dump -U appuser -d appdb > appdb_backup.sql
```

Restore from backup:

```shell
kubectl cp appdb_backup.sql postgres-demo/"$NEW_POD_NAME":/tmp/appdb_backup.sql
kubectl exec -n postgres-demo "$NEW_POD_NAME" -- \
  psql -U appuser -d appdb -f /tmp/appdb_backup.sql
```
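Manual pg_dump runs are easy to forget. As a minimal sketch, the same dump can be scheduled with a CronJob that reuses the postgres-auth Secret and connects through the postgres Service; the postgres-backup-pvc claim name is hypothetical, and you would create that PVC the same way as postgres-pvc above:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
  namespace: postgres-demo
spec:
  schedule: "0 2 * * *"  # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: pg-dump
            image: postgres:16
            env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-auth
                  key: POSTGRES_PASSWORD
            command:
            - /bin/sh
            - -c
            # Date-stamped dump written to the mounted backup volume
            - pg_dump -h postgres -U appuser -d appdb > /backup/appdb-$(date +%F).sql
            volumeMounts:
            - name: backup
              mountPath: /backup
          volumes:
          - name: backup
            persistentVolumeClaim:
              claimName: postgres-backup-pvc  # hypothetical PVC for dumps
```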
If you want a faster install path, Helm is a good option:
```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install postgres bitnami/postgresql \
  --namespace postgres-helm \
  --create-namespace \
  --set auth.postgresPassword='ReplaceWithStrongPassword' \
  --set primary.persistence.storageClass=do-block-storage \
  --set primary.resources.requests.cpu=250m \
  --set primary.resources.requests.memory=512Mi
```
Verify Helm deployment:
```shell
helm list -n postgres-helm
kubectl get pods -n postgres-helm
```
Kubernetes operators automate lifecycle tasks such as backup scheduling, failover, rolling updates, and major operational workflows that are hard to maintain in plain YAML.
Use an operator when you need repeatable cluster operations, built-in failover workflows, and easier day-2 management.
Running Postgres on Kubernetes in production is possible, but your architecture must include failure handling.
If your team prefers less operational overhead, compare this self-managed setup with DigitalOcean Managed PostgreSQL, which provides managed backups, updates, and high availability controls.
A StatefulSet is used for workloads that need stable pod identity and stable storage. PostgreSQL depends on both, so StatefulSet is the standard controller for Kubernetes database pods.
A Deployment is designed for stateless replicas that can be replaced freely. A StatefulSet preserves pod identity and works with persistent volumes in a way that is safer for database workloads.
You can delete a StatefulSet object, but do it carefully. Depending on your delete command and retention settings, pods and data volumes may be affected. In production, verify reclaim policy, backups, and restore steps before deletion.
Yes. Many teams run production Postgres on Kubernetes, especially with operators. The key is disciplined storage, backup, failover, monitoring, and upgrade practices.
Use do-block-storage for general workloads. For production data retention
safety, evaluate do-block-storage-retain so volume deletion is less likely
during PVC lifecycle changes.
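The practical difference between the two classes comes down to the reclaimPolicy on the volumes they provision. As an illustrative sketch (the class name here is made up for the example; DOKS ships its own pre-created classes, and dobs.csi.digitalocean.com is the DigitalOcean CSI provisioner), a retain-style class looks like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retain-example  # illustrative; DOKS provides do-block-storage-retain
provisioner: dobs.csi.digitalocean.com
reclaimPolicy: Retain  # the underlying block storage volume survives PVC deletion
```

With Retain, deleting the PVC leaves the DigitalOcean volume in place so you can recover or rebind the data; with the default Delete policy, the volume is destroyed along with the claim.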
In this tutorial, you deployed PostgreSQL on Kubernetes using a StatefulSet, durable storage, and secure configuration handling. You also validated connectivity and data persistence, compared deployment methods, and reviewed production planning requirements.
To keep building from here, explore related DigitalOcean tutorials and product documentation.