We use the Prometheus Operator chart to deploy the Prometheus, Alertmanager and Grafana stack.
Please note that as of October 2020, the official Prometheus Operator chart lives in the prometheus-community repository: https://prometheus-community.github.io/helm-charts
To add this chart repository to Helm:

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
```
What usually happens is that you initially install the chart, and by default your Kubernetes PV will have a reclaim policy of Delete. This means that if you uninstall the chart, the persistent volume in the cloud (Azure, AWS, GCP, etc.) will also be deleted. Not a great outcome if you want to keep historic metrics.
What you want is a PV with a reclaim policy of Retain, so that if the chart is ever uninstalled, your managed disks in the cloud are retained.
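As a quick check, you can inspect the reclaim policy with kubectl and, if needed, patch a volume from Delete to Retain. The PV name below is a placeholder; substitute your own:

```shell
# List PVs; the RECLAIM POLICY column shows Delete or Retain
kubectl get pv

# Patch a PV's reclaim policy to Retain (replace <pv-name> with your volume's name)
kubectl patch pv <pv-name> \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```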

So how do you go about doing this?
- Install the chart initially with a persistent volume configured in the values file for Prometheus (the default way).
- Configure Grafana correctly on the first install.
Stage 1
Prometheus
We are using managed GKE on GCP, so the standard storage class is fine; your cloud provider may differ.
- Configure your Prometheus Operator chart with the following in the values file:
```yaml
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: standard
          resources:
            requests:
              storage: 64Gi
```
Grafana
With Grafana, you can get away with setting it up correctly the first time round.
Create the PVC (note the name must match the `existingClaim` used below):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-grafana
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
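The PVC can be applied before installing the chart. A minimal sketch, assuming the manifest is saved as `grafana-pvc.yaml` and the chart goes into a `monitoring` namespace (both are examples; adjust to your setup):

```shell
# Apply the Grafana PVC ahead of the chart install
# (filename and namespace are examples)
kubectl apply -n monitoring -f grafana-pvc.yaml
```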
Add the following to the Grafana section of the values file:

```yaml
grafana:
  persistence:
    enabled: true
    type: pvc
    existingClaim: pv-claim-grafana
```
- Deploy your chart
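Deploying can be sketched as below. The release name, namespace and values file are examples, and note that around October 2020 the chart was republished: older versions were called `prometheus-operator`, newer ones `kube-prometheus-stack`:

```shell
# Install (or upgrade) the stack; names and paths are examples
helm upgrade --install prometheus-operator \
  prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  -f values.yaml
```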
Stage 2
Once the chart is deployed, go to your cloud provider and note the disk IDs. I am using GCP, so I note them down in the console. In GCP, the Name column is the disk ID; Azure/AWS will be different, e.g. a Disk URI.
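On GCP you can also list these disks from the CLI instead of the console. A sketch; the name filter is just an example that matches the `pvc` substring GKE typically puts in dynamically provisioned disk names:

```shell
# List persistent disks whose names contain "pvc"
gcloud compute disks list --filter="name~pvc"
```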
Go back to your Helm chart repository and let's alter the chart so that Prometheus and Grafana are always bound to these disks, even if you uninstall the chart.
Prometheus
If you would like to keep the data on the current persistent volumes, you can attach the existing disks to new PVCs and PVs that follow the naming conventions of the chart. For example, to reuse an existing disk for a Helm release called `prometheus-operator`, the following resources can be created:
- Note down the release name of your Prometheus Operator chart. Mine is called `prometheus-operator`.
Configure the following YAML template. This is a hack: by making the names of the PV and PVC exactly match those the chart generates (the PVC name follows the pattern `prometheus-<release>-prometheus-db-prometheus-<release>-prometheus-0`), Prometheus will reuse the existing PV/PVC.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-prometheus-operator-prometheus-0
spec:
  storageClassName: "standard"
  capacity:
    storage: 64Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: gke-dev-xyz-aae-pvc-d8971937-85f8-4566-b90e-110dfbc17cbb
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: prometheus
    prometheus: prometheus-operator-prometheus
  name: prometheus-prometheus-operator-prometheus-db-prometheus-prometheus-operator-prometheus-0
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
```
- Configure the above to always run as a presync hook, e.g. with Helmfile:

```yaml
hooks:
  - events: ["presync"]
    showlogs: true
    command: "kubectl"
    args:
      - apply
      - -n
      - monitoring
      - -f
      - ./pv/pv-prometheus.yaml
```
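For context, here is a minimal sketch of where such a hook sits inside a `helmfile.yaml`. The release name, chart name, namespace and file paths are all examples, not the definitive layout:

```yaml
# helmfile.yaml (sketch; names and paths are examples)
repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: prometheus-operator
    namespace: monitoring
    chart: prometheus-community/kube-prometheus-stack
    values:
      - ./values.yaml
    hooks:
      - events: ["presync"]
        showlogs: true
        command: "kubectl"
        args: ["apply", "-n", "monitoring", "-f", "./pv/pv-prometheus.yaml"]
```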
Grafana
Grafana is not so fussy, so we can do the following. Configure this YAML template:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-grafana
spec:
  storageClassName: "standard"
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: service-compliance
    name: pv-claim-grafana
  gcePersistentDisk:
    pdName: gke-dev-xyz-aae-pvc-4b450590-8ec0-471d-bf1a-4f6aaa9c4e81
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-grafana
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
Then, finally, set up a presync hook if using Helmfile:

```yaml
hooks:
  - events: ["presync"]
    showlogs: true
    command: "kubectl"
    args:
      - apply
      - -n
      - monitoring
      - -f
      - ./pv/pv-grafana.yaml
```
With the above in place, you will be able to rerun chart installs for updates and even uninstall the chart. Your final check is to ensure the PVs have the Retain reclaim policy, not Delete.