This topic provides reference information for the Prometheus package.
About Prometheus and Alertmanager
Prometheus (https://prometheus.io/) is a systems and service monitoring system. Prometheus collects metrics from configured targets at given intervals, evaluates rule expressions, and displays the results. Alertmanager triggers alerts when a rule condition is observed to be true.
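For instance, an alert fires when a rule expression evaluates to true for a set duration. A minimal, illustrative alerting rule (the rule name, threshold, and labels below are examples, not package defaults):

```yaml
# Illustrative only: fires when a scrape target has reported down for 5 minutes.
groups:
- name: example.rules
  rules:
  - alert: TargetDown
    expr: up == 0
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.job }} target {{ $labels.instance }} is down"
```

Alertmanager then deduplicates, groups, and routes the resulting alert to a configured receiver.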
To install the Prometheus package, see the installation instructions.
Prometheus Components
The Prometheus package installs the containers listed in the table on a TKG cluster. The package pulls the containers from the VMware public registry specified in the Package Repository.
Container | Resource Type | Replicas | Description |
---|---|---|---|
prometheus-alertmanager | Deployment | 1 | Handles alerts sent by client applications such as the Prometheus server. |
prometheus-cadvisor | DaemonSet | 5 | Analyzes and exposes resource usage and performance data from running containers. |
prometheus-kube-state-metrics | Deployment | 1 | Monitors node status and capacity, replica-set compliance, pod, job, and cronjob status, and resource requests and limits. |
prometheus-node-exporter | DaemonSet | 5 | Exporter for hardware and OS metrics exposed by kernels. |
prometheus-pushgateway | Deployment | 1 | Service that allows you to push metrics from jobs that cannot be scraped. |
prometheus-server | Deployment | 1 | Provides core functionality, including scraping, rule processing, and alerting. |
Prometheus Data Values
Below is an example prometheus-data-values.yaml file.

Note the following:
- Ingress is enabled (ingress: enabled: true).
- Ingress is configured for URLs ending in /alertmanager/ (alertmanager_prefix:) and / (prometheus_prefix:).
- The FQDN for Prometheus is prometheus.system.tanzu (virtual_host_fqdn:).
- Supply your own custom certificate in the Ingress section (tls.crt, tls.key, ca.crt).
- The PVC for Alertmanager is 2 GiB. Supply the storageClassName for the default storage policy.
- The PVC for Prometheus is 20 GiB. Supply the storageClassName for the vSphere storage policy.
```yaml
namespace: prometheus-monitoring
alertmanager:
  config:
    alertmanager_yml: |
      global: {}
      receivers:
      - name: default-receiver
      templates:
      - '/etc/alertmanager/templates/*.tmpl'
      route:
        group_interval: 5m
        group_wait: 10s
        receiver: default-receiver
        repeat_interval: 3h
  deployment:
    replicas: 1
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    updateStrategy: Recreate
  pvc:
    accessMode: ReadWriteOnce
    storage: 2Gi
    storageClassName: default
  service:
    port: 80
    targetPort: 9093
    type: ClusterIP
ingress:
  alertmanager_prefix: /alertmanager/
  alertmanagerServicePort: 80
  enabled: true
  prometheus_prefix: /
  prometheusServicePort: 80
  tlsCertificate:
    ca.crt: |
      -----BEGIN CERTIFICATE-----
      MIIFczCCA1ugAwIBAgIQTYJITQ3SZ4BBS9UzXfJIuTANBgkqhkiG9w0BAQsFADBM
      ...
      w0oGuTTBfxSMKs767N3G1q5tz0mwFpIqIQtXUSmaJ+9p7IkpWcThLnyYYo1IpWm/
      ZHtjzZMQVA==
      -----END CERTIFICATE-----
    tls.crt: |
      -----BEGIN CERTIFICATE-----
      MIIHxTCCBa2gAwIBAgITIgAAAAQnSpH7QfxTKAAAAAAABDANBgkqhkiG9w0BAQsF
      ...
      YYsIjp7/f+Pk1DjzWx8JIAbzItKLucDreAmmDXqk+DrBP9LYqtmjB0n7nSErgK8G
      sA3kGCJdOkI0kgF10gsinaouG2jVlwNOsw==
      -----END CERTIFICATE-----
    tls.key: |
      -----BEGIN PRIVATE KEY-----
      MIIJRAIBADANBgkqhkiG9w0BAQEFAASCCS4wggkqAgEAAoICAQDOGHT8I12KyQGS
      ...
      l1NzswracGQIzo03zk/X3Z6P2YOea4BkZ0Iwh34wOHJnTkfEeSx6y+oSFMcFRthT
      yfFCZUk/sVCc/C1a4VigczXftUGiRrTR
      -----END PRIVATE KEY-----
  virtual_host_fqdn: prometheus.system.tanzu
kube_state_metrics:
  deployment:
    replicas: 1
  service:
    port: 80
    targetPort: 8080
    telemetryPort: 81
    telemetryTargetPort: 8081
    type: ClusterIP
node_exporter:
  daemonset:
    hostNetwork: false
    updatestrategy: RollingUpdate
  service:
    port: 9100
    targetPort: 9100
    type: ClusterIP
prometheus:
  pspNames: "vmware-system-restricted"
  config:
    alerting_rules_yml: |
      {}
    alerts_yml: |
      {}
    prometheus_yml: |
      global:
        evaluation_interval: 1m
        scrape_interval: 1m
        scrape_timeout: 10s
      rule_files:
      - /etc/config/alerting_rules.yml
      - /etc/config/recording_rules.yml
      - /etc/config/alerts
      - /etc/config/rules
      scrape_configs:
      - job_name: 'prometheus'
        scrape_interval: 5s
        static_configs:
        - targets: ['localhost:9090']
      - job_name: 'kube-state-metrics'
        static_configs:
        - targets: ['prometheus-kube-state-metrics.prometheus.svc.cluster.local:8080']
      - job_name: 'node-exporter'
        static_configs:
        - targets: ['prometheus-node-exporter.prometheus.svc.cluster.local:9100']
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name
      - job_name: kubernetes-nodes-cadvisor
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - replacement: kubernetes.default.svc:443
          target_label: __address__
        - regex: (.+)
          replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
          source_labels:
          - __meta_kubernetes_node_name
          target_label: __metrics_path__
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      - job_name: kubernetes-apiservers
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - action: keep
          regex: default;kubernetes;https
          source_labels:
          - __meta_kubernetes_namespace
          - __meta_kubernetes_service_name
          - __meta_kubernetes_endpoint_port_name
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      alerting:
        alertmanagers:
        - scheme: http
          static_configs:
          - targets:
            - alertmanager.prometheus.svc:80
        - kubernetes_sd_configs:
          - role: pod
          relabel_configs:
          - source_labels: [__meta_kubernetes_namespace]
            regex: default
            action: keep
          - source_labels: [__meta_kubernetes_pod_label_app]
            regex: prometheus
            action: keep
          - source_labels: [__meta_kubernetes_pod_label_component]
            regex: alertmanager
            action: keep
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_probe]
            regex: .*
            action: keep
          - source_labels: [__meta_kubernetes_pod_container_port_number]
            regex:
            action: drop
    recording_rules_yml: |
      groups:
      - name: kube-apiserver.rules
        interval: 3m
        rules:
        - expr: |2
            (
              (
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[1d]))
                -
                (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[1d])) or vector(0))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[1d]))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[1d]))
                )
              )
              +
              # errors
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[1d]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[1d]))
          labels:
            verb: read
          record: apiserver_request:burnrate1d
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[1h]))
                -
                (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[1h])) or vector(0))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[1h]))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[1h]))
                )
              )
              +
              # errors
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[1h]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[1h]))
          labels:
            verb: read
          record: apiserver_request:burnrate1h
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[2h]))
                -
                (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[2h])) or vector(0))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[2h]))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[2h]))
                )
              )
              +
              # errors
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[2h]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[2h]))
          labels:
            verb: read
          record: apiserver_request:burnrate2h
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[30m]))
                -
                (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[30m])) or vector(0))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[30m]))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[30m]))
                )
              )
              +
              # errors
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[30m]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[30m]))
          labels:
            verb: read
          record: apiserver_request:burnrate30m
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[3d]))
                -
                (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[3d])) or vector(0))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[3d]))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[3d]))
                )
              )
              +
              # errors
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[3d]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[3d]))
          labels:
            verb: read
          record: apiserver_request:burnrate3d
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[5m]))
                -
                (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[5m])) or vector(0))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[5m]))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[5m]))
                )
              )
              +
              # errors
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[5m]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[5m]))
          labels:
            verb: read
          record: apiserver_request:burnrate5m
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[6h]))
                -
                (
                  (sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[6h])) or vector(0))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[6h]))
                  +
                  sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[6h]))
                )
              )
              +
              # errors
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET",code=~"5.."}[6h]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[6h]))
          labels:
            verb: read
          record: apiserver_request:burnrate6h
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[1d]))
                -
                sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[1d]))
              )
              +
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[1d]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[1d]))
          labels:
            verb: write
          record: apiserver_request:burnrate1d
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[1h]))
                -
                sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[1h]))
              )
              +
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[1h]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[1h]))
          labels:
            verb: write
          record: apiserver_request:burnrate1h
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[2h]))
                -
                sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[2h]))
              )
              +
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[2h]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[2h]))
          labels:
            verb: write
          record: apiserver_request:burnrate2h
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[30m]))
                -
                sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[30m]))
              )
              +
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[30m]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[30m]))
          labels:
            verb: write
          record: apiserver_request:burnrate30m
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[3d]))
                -
                sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[3d]))
              )
              +
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[3d]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[3d]))
          labels:
            verb: write
          record: apiserver_request:burnrate3d
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[5m]))
                -
                sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[5m]))
              )
              +
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[5m]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[5m]))
          labels:
            verb: write
          record: apiserver_request:burnrate5m
        - expr: |2
            (
              (
                # too slow
                sum(rate(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[6h]))
                -
                sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",le="1"}[6h]))
              )
              +
              sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE",code=~"5.."}[6h]))
            )
            /
            sum(rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[6h]))
          labels:
            verb: write
          record: apiserver_request:burnrate6h
        - expr: |
            sum by (code,resource) (rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"LIST|GET"}[5m]))
          labels:
            verb: read
          record: code_resource:apiserver_request_total:rate5m
        - expr: |
            sum by (code,resource) (rate(apiserver_request_total{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[5m]))
          labels:
            verb: write
          record: code_resource:apiserver_request_total:rate5m
        - expr: |
            histogram_quantile(0.99, sum by (le, resource) (rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET"}[5m]))) > 0
          labels:
            quantile: "0.99"
            verb: read
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        - expr: |
            histogram_quantile(0.99, sum by (le, resource) (rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"POST|PUT|PATCH|DELETE"}[5m]))) > 0
          labels:
            quantile: "0.99"
            verb: write
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        - expr: |2
            sum(rate(apiserver_request_duration_seconds_sum{subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod)
            /
            sum(rate(apiserver_request_duration_seconds_count{subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod)
          record: cluster:apiserver_request_duration_seconds:mean5m
        - expr: |
            histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod))
          labels:
            quantile: "0.99"
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        - expr: |
            histogram_quantile(0.9, sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod))
          labels:
            quantile: "0.9"
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
        - expr: |
            histogram_quantile(0.5, sum(rate(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",subresource!="log",verb!~"LIST|WATCH|WATCHLIST|DELETECOLLECTION|PROXY|CONNECT"}[5m])) without(instance, pod))
          labels:
            quantile: "0.5"
          record: cluster_quantile:apiserver_request_duration_seconds:histogram_quantile
      - interval: 3m
        name: kube-apiserver-availability.rules
        rules:
        - expr: |2
            1 - (
              (
                # write too slow
                sum(increase(apiserver_request_duration_seconds_count{verb=~"POST|PUT|PATCH|DELETE"}[30d]))
                -
                sum(increase(apiserver_request_duration_seconds_bucket{verb=~"POST|PUT|PATCH|DELETE",le="1"}[30d]))
              )
              +
              (
                # read too slow
                sum(increase(apiserver_request_duration_seconds_count{verb=~"LIST|GET"}[30d]))
                -
                (
                  (sum(increase(apiserver_request_duration_seconds_bucket{verb=~"LIST|GET",scope=~"resource|",le="0.1"}[30d])) or vector(0))
                  +
                  sum(increase(apiserver_request_duration_seconds_bucket{verb=~"LIST|GET",scope="namespace",le="0.5"}[30d]))
                  +
                  sum(increase(apiserver_request_duration_seconds_bucket{verb=~"LIST|GET",scope="cluster",le="5"}[30d]))
                )
              )
              +
              # errors
              sum(code:apiserver_request_total:increase30d{code=~"5.."} or vector(0))
            )
            /
            sum(code:apiserver_request_total:increase30d)
          labels:
            verb: all
          record: apiserver_request:availability30d
        - expr: |2
            1 - (
              sum(increase(apiserver_request_duration_seconds_count{job="kubernetes-apiservers",verb=~"LIST|GET"}[30d]))
              -
              (
                # too slow
                (sum(increase(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope=~"resource|",le="0.1"}[30d])) or vector(0))
                +
                sum(increase(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="namespace",le="0.5"}[30d]))
                +
                sum(increase(apiserver_request_duration_seconds_bucket{job="kubernetes-apiservers",verb=~"LIST|GET",scope="cluster",le="5"}[30d]))
              )
              +
              # errors
              sum(code:apiserver_request_total:increase30d{verb="read",code=~"5.."} or vector(0))
            )
            /
            sum(code:apiserver_request_total:increase30d{verb="read"})
          labels:
            verb: read
          record: apiserver_request:availability30d
        - expr: |2
            1 - (
              (
                # too slow
                sum(increase(apiserver_request_duration_seconds_count{verb=~"POST|PUT|PATCH|DELETE"}[30d]))
                -
                sum(increase(apiserver_request_duration_seconds_bucket{verb=~"POST|PUT|PATCH|DELETE",le="1"}[30d]))
              )
              +
              # errors
              sum(code:apiserver_request_total:increase30d{verb="write",code=~"5.."} or vector(0))
            )
            /
            sum(code:apiserver_request_total:increase30d{verb="write"})
          labels:
            verb: write
          record: apiserver_request:availability30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="LIST",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="GET",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="POST",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PUT",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PATCH",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="DELETE",code=~"2.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="LIST",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="GET",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="POST",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PUT",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PATCH",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="DELETE",code=~"3.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="LIST",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="GET",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="POST",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PUT",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PATCH",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="DELETE",code=~"4.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="LIST",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="GET",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="POST",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PUT",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="PATCH",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code, verb) (increase(apiserver_request_total{job="kubernetes-apiservers",verb="DELETE",code=~"5.."}[30d]))
          record: code_verb:apiserver_request_total:increase30d
        - expr: |
            sum by (code) (code_verb:apiserver_request_total:increase30d{verb=~"LIST|GET"})
          labels:
            verb: read
          record: code:apiserver_request_total:increase30d
        - expr: |
            sum by (code) (code_verb:apiserver_request_total:increase30d{verb=~"POST|PUT|PATCH|DELETE"})
          labels:
            verb: write
          record: code:apiserver_request_total:increase30d
    rules_yml: |
      {}
  deployment:
    configmapReload:
      containers:
        args:
        - --volume-dir=/etc/config
        - --webhook-url=http://127.0.0.1:9090/-/reload
    containers:
      args:
      - --storage.tsdb.retention.time=42d
      - --config.file=/etc/config/prometheus.yml
      - --storage.tsdb.path=/data
      - --web.console.libraries=/etc/prometheus/console_libraries
      - --web.console.templates=/etc/prometheus/consoles
      - --web.enable-lifecycle
    replicas: 1
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    updateStrategy: Recreate
  pvc:
    accessMode: ReadWriteOnce
    storage: 20Gi
    storageClassName: default
  service:
    port: 80
    targetPort: 9090
    type: ClusterIP
pushgateway:
  deployment:
    replicas: 1
  service:
    port: 9091
    targetPort: 9091
    type: ClusterIP
```
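In the kubernetes-pods scrape job above, one relabel rule rewrites __address__ by combining the pod address with the port taken from the prometheus.io/port annotation, using the regex ([^:]+)(?::\d+)?;(\d+) and replacement $1:$2. As a rough illustration of that single substitution (Prometheus applies this internally; the helper below is not part of the package):

```python
import re

# Prometheus joins the source_labels with ';' before matching, so the input
# looks like "<address>;<annotated port>". The optional (?::\d+)? group drops
# any port already present on the address.
PATTERN = re.compile(r"([^:]+)(?::\d+)?;(\d+)")

def rewrite_address(address: str, annotated_port: str) -> str:
    """Mimic the __address__ relabel rule for a single pod target."""
    source = f"{address};{annotated_port}"
    return PATTERN.sub(r"\1:\2", source)

print(rewrite_address("10.44.0.7", "9102"))       # 10.44.0.7:9102
print(rewrite_address("10.44.0.7:8080", "9102"))  # 10.44.0.7:9102
```

The net effect is that the annotated port always wins, whether or not the discovered address already carried one.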
Prometheus Configuration
The Prometheus configuration is set in the prometheus-data-values.yaml file. The table lists and describes the available parameters.
Parameter | Description | Type | Default |
---|---|---|---|
monitoring.namespace | Namespace where Prometheus will be deployed | string | tanzu-system-monitoring |
monitoring.create_namespace | Flag that indicates whether to create the namespace specified by monitoring.namespace | boolean | false |
monitoring.prometheus_server.config.prometheus_yaml | Kubernetes cluster monitor config details to be passed to Prometheus | yaml file | prometheus.yaml |
monitoring.prometheus_server.config.alerting_rules_yaml | Detailed alert rules defined in Prometheus | yaml file | alerting_rules.yaml |
monitoring.prometheus_server.config.recording_rules_yaml | Detailed record rules defined in Prometheus | yaml file | recording_rules.yaml |
monitoring.prometheus_server.service.type | Type of service to expose Prometheus. Supported Values: ClusterIP | string | ClusterIP |
monitoring.prometheus_server.enable_alerts.kubernetes_api | Enable SLO alerting for the Kubernetes API in Prometheus | boolean | true |
monitoring.prometheus_server.sc.aws_type | AWS type defined for storageclass on AWS | string | gp2 |
monitoring.prometheus_server.sc.aws_fsType | AWS file system type defined for storageclass on AWS | string | ext4 |
monitoring.prometheus_server.sc.allowVolumeExpansion | Defines whether volume expansion is allowed for the storage class on AWS | boolean | true |
monitoring.prometheus_server.pvc.annotations | Storage class annotations | map | {} |
monitoring.prometheus_server.pvc.storage_class | Storage class to use for persistent volume claim. By default this is null and default provisioner is used | string | null |
monitoring.prometheus_server.pvc.accessMode | Define access mode for persistent volume claim. Supported values: ReadWriteOnce, ReadOnlyMany, ReadWriteMany | string | ReadWriteOnce |
monitoring.prometheus_server.pvc.storage | Define storage size for persistent volume claim | string | 8Gi |
monitoring.prometheus_server.deployment.replicas | Number of prometheus replicas | integer | 1 |
monitoring.prometheus_server.image.repository | Location of the repository with the Prometheus image. The default is the public VMware registry. Change this value if you are using a private repository (e.g., air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.prometheus_server.image.name | Name of Prometheus image | string | prometheus |
monitoring.prometheus_server.image.tag | Prometheus image tag. This value may need to be updated if you are upgrading the version. | string | v2.17.1_vmware.1 |
monitoring.prometheus_server.image.pullPolicy | Prometheus image pull policy | string | IfNotPresent |
monitoring.alertmanager.config.slack_demo | Slack notification configuration for Alertmanager | string | slack_demo: name: slack_demo slack_configs: - api_url: https://hooks.slack.com channel: '#alertmanager-test' |
monitoring.alertmanager.config.email_receiver | Email notification configuration for Alertmanager | string | email_receiver: name: email-receiver email_configs: - to: demo@tanzu.com send_resolved: false from: from-email@tanzu.com smarthost: smtp.example.com:25 require_tls: false |
monitoring.alertmanager.service.type | Type of service to expose Alertmanager. Supported Values: ClusterIP | string | ClusterIP |
monitoring.alertmanager.image.repository | Location of the repository with the Alertmanager image. The default is the public VMware registry. Change this value if you are using a private repository (e.g., air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.alertmanager.image.name | Name of Alertmanager image | string | alertmanager |
monitoring.alertmanager.image.tag | Alertmanager image tag. This value may need to be updated if you are upgrading the version. | string | v0.20.0_vmware.1 |
monitoring.alertmanager.image.pullPolicy | Alertmanager image pull policy | string | IfNotPresent |
monitoring.alertmanager.pvc.annotations | Storage class annotations | map | {} |
monitoring.alertmanager.pvc.storage_class | Storage class to use for persistent volume claim. By default this is null and default provisioner is used. | string | null |
monitoring.alertmanager.pvc.accessMode | Define access mode for persistent volume claim. Supported values: ReadWriteOnce, ReadOnlyMany, ReadWriteMany | string | ReadWriteOnce |
monitoring.alertmanager.pvc.storage | Define storage size for persistent volume claim | string | 2Gi |
monitoring.alertmanager.deployment.replicas | Number of alertmanager replicas | integer | 1 |
monitoring.kube_state_metrics.image.repository | Repository containing the kube-state-metrics image. The default is the public VMware registry. Change this value if you are using a private repository (e.g., air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.kube_state_metrics.image.name | Name of kube-state-metrics image | string | kube-state-metrics |
monitoring.kube_state_metrics.image.tag | kube-state-metrics image tag. This value may need to be updated if you are upgrading the version. | string | v1.9.5_vmware.1 |
monitoring.kube_state_metrics.image.pullPolicy | kube-state-metrics image pull policy | string | IfNotPresent |
monitoring.kube_state_metrics.deployment.replicas | Number of kube-state-metrics replicas | integer | 1 |
monitoring.node_exporter.image.repository | Repository containing node-exporter image. The default is the public VMware registry. Change this value if you are using a private repository (e.g., air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.node_exporter.image.name | Name of node-exporter image | string | node-exporter |
monitoring.node_exporter.image.tag | node-exporter image tag. This value may need to be updated if you are upgrading the version. | string | v0.18.1_vmware.1 |
monitoring.node_exporter.image.pullPolicy | node-exporter image pull policy | string | IfNotPresent |
monitoring.node_exporter.hostNetwork | If set to hostNetwork: true, the pod can use the network namespace and network resources of the node. | boolean | false |
monitoring.node_exporter.deployment.replicas | Number of node-exporter replicas | integer | 1 |
monitoring.pushgateway.image.repository | Repository containing pushgateway image. The default is the public VMware registry. Change this value if you are using a private repository (e.g., air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.pushgateway.image.name | Name of pushgateway image | string | pushgateway |
monitoring.pushgateway.image.tag | pushgateway image tag. This value may need to be updated if you are upgrading the version. | string | v1.2.0_vmware.1 |
monitoring.pushgateway.image.pullPolicy | pushgateway image pull policy | string | IfNotPresent |
monitoring.pushgateway.deployment.replicas | Number of pushgateway replicas | integer | 1 |
monitoring.cadvisor.image.repository | Repository containing cadvisor image. The default is the public VMware registry. Change this value if you are using a private repository (e.g., air-gapped environment). | string | projects.registry.vmware.com/tkg/prometheus |
monitoring.cadvisor.image.name | Name of cadvisor image | string | cadvisor |
monitoring.cadvisor.image.tag | cadvisor image tag. This value may need to be updated if you are upgrading the version. | string | v0.36.0_vmware.1 |
monitoring.cadvisor.image.pullPolicy | cadvisor image pull policy | string | IfNotPresent |
monitoring.cadvisor.deployment.replicas | Number of cadvisor replicas | integer | 1 |
monitoring.ingress.enabled | Enable or disable ingress for Prometheus and Alertmanager. To use ingress, set this field to true. | boolean | false |
monitoring.ingress.virtual_host_fqdn | Hostname for accessing Prometheus and Alertmanager | string | prometheus.system.tanzu |
monitoring.ingress.prometheus_prefix | Path prefix for prometheus | string | / |
monitoring.ingress.alertmanager_prefix | Path prefix for alertmanager | string | /alertmanager/ |
monitoring.ingress.tlsCertificate.tls.crt | Optional certificate for ingress if you want to use your own TLS certificate. A self-signed certificate is generated by default. | string | Generated cert |
monitoring.ingress.tlsCertificate.tls.key | Optional certificate private key for ingress if you want to use your own TLS certificate. | string | Generated cert key |
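For example, the ingress parameters above might be set as follows in `prometheus-data-values.yaml`. This is a sketch based on the parameter names in the table; the certificate and key values are placeholders, and you should verify the exact structure against the package's data values schema.

```yaml
ingress:
  enabled: true
  virtual_host_fqdn: "prometheus.system.tanzu"
  prometheus_prefix: "/"
  alertmanager_prefix: "/alertmanager/"
  tlsCertificate:
    # Placeholder values -- supply your own PEM-encoded cert and key.
    tls.crt: |
      -----BEGIN CERTIFICATE-----
      <your certificate>
      -----END CERTIFICATE-----
    tls.key: |
      -----BEGIN PRIVATE KEY-----
      <your private key>
      -----END PRIVATE KEY-----
```

If you omit `tlsCertificate`, the package generates a self-signed certificate for the ingress.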
Parameter | Description | Type | Default |
---|---|---|---|
evaluation_interval | How frequently to evaluate rules | duration | 1m |
scrape_interval | How frequently to scrape targets | duration | 1m |
scrape_timeout | How long until a scrape request times out | duration | 10s |
rule_files | A list of glob patterns; rules and alerts are read from all matching files | yaml file | |
scrape_configs | A list of scrape configurations. | list | |
job_name | The job name assigned to scraped metrics by default | string | |
kubernetes_sd_configs | List of Kubernetes service discovery configurations. | list | |
relabel_configs | List of target relabel configurations. | list | |
action | Action to perform based on regex matching. | string | |
regex | Regular expression against which the extracted value is matched. | string | |
source_labels | The source labels select values from existing labels. | string | |
scheme | Configures the protocol scheme used for requests. | string | |
tls_config | Configures the scrape request's TLS settings. | string | |
ca_file | CA certificate to validate API server certificate with. | filename | |
insecure_skip_verify | Disable validation of the server certificate. | boolean | |
bearer_token_file | Optional bearer token file authentication information. | filename | |
replacement | Replacement value against which a regex replace is performed if the regular expression matches. | string | |
target_label | Label to which the resulting value is written in a replace action. | string |
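The scrape parameters above combine as in the following sketch of a Prometheus configuration fragment. This example keeps only pods annotated with `prometheus.io/scrape: 'true'`; it illustrates a typical pattern and is not necessarily the configuration shipped with the package.

```yaml
global:
  scrape_interval: 1m
  scrape_timeout: 10s
  evaluation_interval: 1m
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that opt in via the scrape annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Copy the pod's namespace into a friendlier label.
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
```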
Parameter | Description | Type | Default |
---|---|---|---|
resolve_timeout | The time after which Alertmanager declares an alert resolved if the alert does not include EndsAt | duration | 5m |
smtp_smarthost | The SMTP host (and port) through which emails are sent | string | |
slack_api_url | The Slack webhook URL. | string | global.slack_api_url |
pagerduty_url | The pagerduty URL to send API requests to. | string | global.pagerduty_url |
templates | Files from which custom notification template definitions are read | file path | |
group_by | Group alerts by label | string | |
group_interval | How long to wait before sending a notification about new alerts added to a group | duration | 5m |
group_wait | How long to initially wait to send a notification for a group of alerts | duration | 30s |
repeat_interval | How long to wait before sending a notification again if it has already been sent successfully for an alert | duration | 4h |
receivers | A list of notification receivers. | list | |
severity | Severity of the incident. | string | |
channel | The channel or user to send notifications to. | string | |
html | The HTML body of the email notification. | string | |
text | The text body of the email notification. | string | |
send_resolved | Whether or not to notify about resolved alerts. | boolean | |
email_configs | Configurations for email integration | list | |
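Putting the Alertmanager parameters together, a minimal configuration might look like the following sketch. The Slack webhook URL and channel are placeholders for illustration, not values shipped with the package.

```yaml
global:
  resolve_timeout: 5m
  # Placeholder -- replace with your own incoming webhook URL.
  slack_api_url: 'https://hooks.slack.com/services/<your-webhook-token>'
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  receiver: 'slack-notifications'
receivers:
  - name: 'slack-notifications'
    slack_configs:
      - channel: '#alerts'    # placeholder channel
        send_resolved: true
```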
Annotations on pods allow fine-grained control of the scraping process. These annotations must be set in the pod metadata; they have no effect when set on other objects such as Services or DaemonSets.
Pod Annotation | Description |
---|---|
`prometheus.io/scrape` | The default configuration scrapes all pods; if set to false, this annotation excludes the pod from the scraping process. |
`prometheus.io/path` | If the metrics path is not /metrics, define it with this annotation. |
`prometheus.io/port` | Scrape the pod on the indicated port instead of the pod's declared ports (default is a port-free target if none are declared). |
The DaemonSet manifest below instructs Prometheus to scrape all of its pods on port 9102.

```yaml
apiVersion: apps/v1  # apps/v1beta2 was removed in Kubernetes 1.16
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: weave
  labels:
    app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9102'
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
```