12、Prometheus Configuration Overview

Getting Started with Prometheus / 2022-06-30

This article is based on a translation of, and supplements to, the official configuration documentation for Prometheus 2.36.

一、Configuration File

To specify which configuration file to load, use the --config.file flag.

The file is written in YAML format, defined by the scheme described below. Brackets indicate that a parameter is optional. For non-list parameters the value is set to the specified default.

Generic placeholders are defined as follows:

  • <boolean>: a boolean that can take the values true or false
  • <duration>: a duration matching the regular expression ((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0), e.g. 1d, 1h30m, 5m, 10s
  • <filename>: a valid path in the current working directory
  • <host>: a valid string consisting of a hostname or IP followed by an optional port number
  • <int>: an integer value
  • <labelname>: a string matching the regular expression [a-zA-Z_][a-zA-Z0-9_]*
  • <labelvalue>: a string of unicode characters
  • <path>: a valid URL path
  • <scheme>: a string that can take the values http or https
  • <secret>: a regular string that is a secret, such as a password
  • <string>: a regular string
  • <size>: a size in bytes, e.g. 512MB. Supported units: B, KB, MB, GB, TB, PB, EB.
  • <tmpl_string>: a string which is template-expanded before usage

The other placeholders are specified separately.

1.1、global

The global configuration specifies parameters that are valid in all other configuration contexts. They also serve as defaults for other configuration sections.

global:
  # How frequently to scrape targets. Defaults to 1 minute.
  [ scrape_interval: <duration> | default = 1m ]

  # How long until a scrape request times out.
  [ scrape_timeout: <duration> | default = 10s ]

  # How frequently to evaluate rules, i.e. how often alerting conditions are checked. Defaults to 1 minute.
  [ evaluation_interval: <duration> | default = 1m ]

  # Labels to add to any time series or alerts when communicating with external systems.
  external_labels:
    [ <labelname>: <labelvalue> ... ]

  # File to which PromQL queries are logged. Reopened upon configuration reload.
  [ query_log_file: <string> ]

# Locations of recording and alerting rule files.
rule_files:
  [ - <filepath_glob> ... ]

# Scrape configurations.
scrape_configs:
  [ - <scrape_config> ... ]

# Alerting configuration.
alerting:
  alert_relabel_configs:  # Alert relabeling configurations.
    [ - <relabel_config> ... ]
  alertmanagers:  # Alertmanager configurations.
    [ - <alertmanager_config> ... ]

# Remote write configuration.
remote_write:
  [ - <remote_write> ... ]

# Remote read configuration.
remote_read:
  [ - <remote_read> ... ]
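
For orientation, a minimal prometheus.yml exercising the global, rule_files and scrape_configs sections might look like the sketch below; the file names and target address are illustrative placeholders, not part of the official example:

global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    cluster: demo

rule_files:
  - rules/*.yml

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']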

1.2、scrape_configs

A scrape_configs section specifies a set of targets and parameters describing how to scrape them. In the general case, one scrape configuration specifies a single job. In advanced configurations, this may change.

Targets may be statically configured via the static_configs parameter, or dynamically discovered using one of the supported service-discovery mechanisms.

Additionally, relabel_configs allow advanced modifications to any target and its labels before scraping.
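
As a small, hedged example of the options documented below, a job that scrapes a custom path with an extra URL parameter could look like the following; the job name, parameter and target address are illustrative only:

scrape_configs:
  - job_name: node
    metrics_path: /metrics
    params:
      collect[]: ['cpu', 'meminfo']
    static_configs:
      - targets: ['node-exporter.example.com:9100']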

# The job name, unique across all scrape configurations. Prometheus attaches it as a label to every time series scraped from this job.
job_name: <job_name>

# How frequently to scrape targets for this job. Overrides the global setting.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-scrape timeout for this job. Overrides the global setting.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# Controls how Prometheus handles conflicts between labels already present in scraped data
# and labels that Prometheus would attach server-side. If true, conflicts are resolved by
# keeping the labels from the scraped data. If false, conflicts are resolved by renaming
# the scraped labels to exported_<original_label>, e.g. exported_job. Defaults to false.
[ honor_labels: <boolean> | default = false ]

# Whether timestamps exposed by the target are honored. Defaults to true.
[ honor_timestamps: <boolean> | default = true ]

# Configures the protocol scheme used for scrape requests. Defaults to http.
[ scheme: <scheme> | default = http ]

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]

# HTTP basic authentication. password and password_file are mutually exclusive,
# and basic_auth cannot be combined with authorization or oauth2.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the Authorization header on every scrape request with the configured credentials.
authorization:
  # Sets the authentication type of the request.
  [ type: <string> | default: Bearer ]
  # Sets the credentials of the request. Mutually exclusive with credentials_file.
  [ credentials: <secret> ]
  # Sets the credentials of the request to those read from the configured file.
  # Mutually exclusive with credentials.
  [ credentials_file: <filename> ]

# OAuth 2.0 configuration. Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Configures whether scrape requests follow HTTP 3xx redirects. Enabled by default.
[ follow_redirects: <bool> | default = true ]

# TLS configuration for scrape requests.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# List of Azure service discovery configurations.
azure_sd_configs:
  [ - <azure_sd_config> ... ]

# List of Consul service discovery configurations.
consul_sd_configs:
  [ - <consul_sd_config> ... ]

# List of DigitalOcean service discovery configurations.
digitalocean_sd_configs:
  [ - <digitalocean_sd_config> ... ]

# List of Docker service discovery configurations.
docker_sd_configs:
  [ - <docker_sd_config> ... ]

# List of Docker Swarm service discovery configurations.
dockerswarm_sd_configs:
  [ - <dockerswarm_sd_config> ... ]

# List of DNS service discovery configurations.
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# List of EC2 service discovery configurations.
ec2_sd_configs:
  [ - <ec2_sd_config> ... ]

# List of Eureka service discovery configurations.
eureka_sd_configs:
  [ - <eureka_sd_config> ... ]

# List of file-based service discovery configurations.
file_sd_configs:
  [ - <file_sd_config> ... ]

# List of GCE service discovery configurations.
gce_sd_configs:
  [ - <gce_sd_config> ... ]

# List of Hetzner service discovery configurations.
hetzner_sd_configs:
  [ - <hetzner_sd_config> ... ]

# List of HTTP service discovery configurations.
http_sd_configs:
  [ - <http_sd_config> ... ]

# List of Kubernetes service discovery configurations.
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]

# List of Kuma service discovery configurations.
kuma_sd_configs:
  [ - <kuma_sd_config> ... ]

# List of Lightsail service discovery configurations.
lightsail_sd_configs:
  [ - <lightsail_sd_config> ... ]

# List of Linode service discovery configurations.
linode_sd_configs:
  [ - <linode_sd_config> ... ]

# List of Marathon service discovery configurations.
marathon_sd_configs:
  [ - <marathon_sd_config> ... ]

# List of AirBnB's Nerve service discovery configurations.
nerve_sd_configs:
  [ - <nerve_sd_config> ... ]

# List of OpenStack service discovery configurations.
openstack_sd_configs:
  [ - <openstack_sd_config> ... ]

# List of PuppetDB service discovery configurations.
puppetdb_sd_configs:
  [ - <puppetdb_sd_config> ... ]

# List of Scaleway service discovery configurations.
scaleway_sd_configs:
  [ - <scaleway_sd_config> ... ]

# List of Zookeeper Serverset service discovery configurations.
serverset_sd_configs:
  [ - <serverset_sd_config> ... ]

# List of Triton service discovery configurations.
triton_sd_configs:
  [ - <triton_sd_config> ... ]

# List of Uyuni service discovery configurations.
uyuni_sd_configs:
  [ - <uyuni_sd_config> ... ]

# List of statically configured targets; this is where target addresses are usually listed.
static_configs:
  [ - <static_config> ... ]

# List of target relabel configurations, applied to targets before scraping.
relabel_configs:
  - source_labels: [<labelname>,...]  # Source label names; several may be listed.
    separator: <string>  # Separator used to join the source label values. Defaults to ';'. Optional.
    target_label: <labelname>  # Label to which the resulting value is written.
    regex: <regex>  # Regular expression the joined value is matched against. Defaults to (.*).
    modulus: <uint64>  # Modulus to take of the hash of the joined source label values.
    replacement: <string>  # Replacement value; capture groups are referenced as $1, $2, ... Defaults to $1.
    # Action to perform based on the regex match. Defaults to replace.
    action: <relabel_action>

# List of metric relabel configurations, applied to samples before ingestion.
metric_relabel_configs:
  [ - <relabel_config> ... ]

# Per-scrape limit on the size of the uncompressed response body, e.g. 100MB.
# If exceeded, the scrape fails. 0 means no limit.
[ body_size_limit: <size> | default = 0 ]
# Per-scrape limit on the number of scraped samples. If more samples remain after
# metric relabeling, the scrape fails. 0 means no limit.
[ sample_limit: <int> | default = 0 ]

# Per-scrape limit on the number of labels accepted per sample. Exceeding it fails the scrape. 0 means no limit.
[ label_limit: <int> | default = 0 ]

# Per-scrape limit on the length of label names. Exceeding it fails the scrape. 0 means no limit.
[ label_name_length_limit: <int> | default = 0 ]

# Per-scrape limit on the length of label values. Exceeding it fails the scrape. 0 means no limit.
[ label_value_length_limit: <int> | default = 0 ]

# Per-scrape-config limit on the number of unique targets accepted. If more targets remain
# after target relabeling, the scrapes are marked as failed. 0 means no limit.
[ target_limit: <int> | default = 0 ]
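
To make the relabel_configs fields above concrete, here is a hedged sketch that keeps only targets carrying a team="monitoring" label (assigned in static_configs or by service discovery) and derives a node label from __address__; the label names and regex are illustrative:

relabel_configs:
  - source_labels: [team]
    regex: monitoring
    action: keep
  - source_labels: [__address__]
    regex: '([^:]+):\d+'
    replacement: '$1'
    target_label: node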

1.2.1、tls_config

A tls_config allows configuring TLS connections.

# CA certificate used to validate the server certificate.
[ ca_file: <filename> ]

# Certificate and key files for client cert authentication to the server.
[ cert_file: <filename> ]
[ key_file: <filename> ]

# ServerName extension to indicate the name of the server.
# https://tools.ietf.org/html/rfc4366#section-3.1
[ server_name: <string> ]

# Disable validation of the server certificate.
[ insecure_skip_verify: <boolean> ]
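
A typical, purely illustrative tls_config for scraping an HTTPS target signed by a private CA with client-certificate authentication might look like this; all file paths and the server name are placeholders:

tls_config:
  ca_file: /etc/prometheus/ca.crt
  cert_file: /etc/prometheus/client.crt
  key_file: /etc/prometheus/client.key
  server_name: metrics.example.com
  insecure_skip_verify: false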

1.2.2、oauth2

OAuth 2.0 authentication using the client credentials grant type. Prometheus fetches an access token from the specified endpoint with the given client access and secret keys.

client_id: <string>
[ client_secret: <secret> ]

# Read the client secret from a file.
# It is mutually exclusive with `client_secret`.
[ client_secret_file: <filename> ]

# Scopes for the token request.
scopes:
  [ - <string> ... ]

# The URL to fetch the token from.
token_url: <string>

# Optional parameters to append to the token URL.
endpoint_params:
  [ <string>: <string> ... ]

# Configures the token request's TLS settings.
tls_config:
  [ <tls_config> ]
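
As a hedged sketch of the client-credentials flow described above (the client ID, secret file, scope and token endpoint are placeholders):

oauth2:
  client_id: prometheus-scraper
  client_secret_file: /etc/prometheus/oauth2_secret
  scopes: ['metrics.read']
  token_url: https://auth.example.com/oauth2/token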

1.2.3、Service discovery configurations

1.2.3.1、azure_sd_config

azure_sd_config allow retrieving scrape targets from Azure VMs.

The following meta labels are available on targets during relabeling:

  • __meta_azure_machine_id: the machine ID
  • __meta_azure_machine_location: the location the machine runs in
  • __meta_azure_machine_name: the machine name
  • __meta_azure_machine_computer_name: the machine computer name
  • __meta_azure_machine_os_type: the machine operating system
  • __meta_azure_machine_private_ip: the machine’s private IP
  • __meta_azure_machine_public_ip: the machine’s public IP if it exists
  • __meta_azure_machine_resource_group: the machine’s resource group
  • __meta_azure_machine_tag_<tagname>: each tag value of the machine
  • __meta_azure_machine_scale_set: the name of the scale set which the vm is part of (this value is only set if you are using a scale set)
  • __meta_azure_subscription_id: the subscription ID
  • __meta_azure_tenant_id: the tenant ID

See below for the configuration options for Azure discovery:

# The information to access the Azure API.
# The Azure environment.
[ environment: <string> | default = AzurePublicCloud ]

# The authentication method, either OAuth or ManagedIdentity.
# See https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
[ authentication_method: <string> | default = OAuth]
# The subscription ID. Always required.
subscription_id: <string>
# Optional tenant ID. Only required with authentication_method OAuth.
[ tenant_id: <string> ]
# Optional client ID. Only required with authentication_method OAuth.
[ client_id: <string> ]
# Optional client secret. Only required with authentication_method OAuth.
[ client_secret: <secret> ]

# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 300s ]

# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]

# Authentication information used to authenticate to the consul server.
# Note that `basic_auth`, `authorization` and `oauth2` options are
# mutually exclusive.
# `password` and `password_file` are mutually exclusive.

# Optional HTTP basic authentication information, currently not supported by Azure.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration, currently not supported by Azure.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration, currently not supported by Azure.
oauth2:
  [ <oauth2> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# TLS configuration.
tls_config:
  [ <tls_config> ]
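
A minimal azure_sd_configs entry using OAuth authentication might look like the following sketch; the IDs and secret are placeholders that would normally come from your Azure AD application registration:

azure_sd_configs:
  - subscription_id: <subscription_id>
    tenant_id: <tenant_id>
    client_id: <client_id>
    client_secret: <secret>
    port: 9100
    refresh_interval: 300s
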
1.2.3.2、consul_sd_config

consul_sd_config allows retrieving scrape targets from Consul's Catalog API.

The following meta labels are available on targets during relabeling:

  • __meta_consul_address: the address of the target
  • __meta_consul_dc: the datacenter name for the target
  • __meta_consul_health: the health status of the service
  • __meta_consul_metadata_<key>: each node metadata key value of the target
  • __meta_consul_node: the node name defined for the target
  • __meta_consul_service_address: the service address of the target
  • __meta_consul_service_id: the service ID of the target
  • __meta_consul_service_metadata_<key>: each service metadata key value of the target
  • __meta_consul_service_port: the service port of the target
  • __meta_consul_service: the name of the service the target belongs to
  • __meta_consul_tagged_address_<key>: each node tagged address key value of the target
  • __meta_consul_tags: the list of tags of the target joined by the tag separator

# Address of the Consul server; the Consul HTTP API listens on port 8500 by default.
[ server: <host> | default = "localhost:8500" ]
[ token: <secret> ]
[ datacenter: <string> ]
# Namespaces are only supported in Consul Enterprise.
[ namespace: <string> ]
[ scheme: <string> | default = "http" ]
# Deprecated; use basic_auth instead.
[ username: <string> ]
[ password: <secret> ]

# List of services for which targets are retrieved. If omitted, all services are scraped.
services:
  [ - <string> ]

# See https://www.consul.io/api/catalog.html#list-nodes-for-service for more information about the possible filters.

# An optional list of tags used to filter nodes for a given service. Services must contain all tags in the list.
tags:
  [ - <string> ]

# Node metadata key/value pairs used to filter nodes for a given service. Optional.
[ node_meta:
  [ <string>: <string> ... ] ]

# The string by which Consul tags are joined into the tag label. Defaults to a comma.
[ tag_separator: <string> | default = , ]

# Allow stale Consul results (see https://www.consul.io/api/features/consistency.html). This reduces load on Consul.
[ allow_stale: <boolean> | default = true ]

# Refresh interval for the provided services and their nodes. Increasing this value reduces load in large environments. Defaults to 30s.
[ refresh_interval: <duration> | default = 30s ]

# Authentication information used to authenticate to the Consul server.
# Note that basic_auth, authorization and oauth2 are mutually exclusive;
# password and password_file are mutually exclusive.
# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. Mutually exclusive with credentials_file.
  [ credentials: <secret> ]
  # Sets the credentials read from the configured file. Mutually exclusive with credentials.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
oauth2:
  [ <oauth2> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# Optional TLS configuration.
tls_config:
  [ <tls_config> ]

Note that the IP number and port used to scrape the targets is assembled as <__meta_consul_address>:<__meta_consul_service_port>. However, in some Consul setups, the relevant address is in __meta_consul_service_address. In those cases, you can use the relabel feature to replace the special __address__ label.

The relabeling phase is the preferred and more powerful way to filter services or nodes for a service based on arbitrary labels. For users with thousands of services it can be more efficient to use the Consul API directly which has basic support for filtering nodes (currently by node metadata and a single tag).
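
For example, the replacement described above could be sketched as a scrape_configs entry like the following, which prefers __meta_consul_service_address when it is non-empty; the job name and server address are placeholders:

  - job_name: consul-services
    consul_sd_configs:
      - server: 'consul.example.com:8500'
    relabel_configs:
      - source_labels: [__meta_consul_service_address, __meta_consul_service_port]
        regex: '(.+);(.+)'
        replacement: '$1:$2'
        target_label: __address__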

1.2.3.3、digitalocean_sd_config

DigitalOcean SD configurations allow retrieving scrape targets from DigitalOcean's Droplets API. This service discovery uses the public IPv4 address by default, but that can be changed with relabelling, as demonstrated in the Prometheus digitalocean-sd configuration file.
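
A hedged sketch of switching to the private IPv4 address mentioned above; the token file and port are placeholders:

digitalocean_sd_configs:
  - authorization:
      credentials_file: /etc/prometheus/do_token
    port: 9100
relabel_configs:
  - source_labels: [__meta_digitalocean_private_ipv4]
    regex: '(.+)'
    replacement: '$1:9100'
    target_label: __address__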

The following meta labels are available on targets during relabeling:

  • __meta_digitalocean_droplet_id: the id of the droplet
  • __meta_digitalocean_droplet_name: the name of the droplet
  • __meta_digitalocean_image: the slug of the droplet’s image
  • __meta_digitalocean_image_name: the display name of the droplet’s image
  • __meta_digitalocean_private_ipv4: the private IPv4 of the droplet
  • __meta_digitalocean_public_ipv4: the public IPv4 of the droplet
  • __meta_digitalocean_public_ipv6: the public IPv6 of the droplet
  • __meta_digitalocean_region: the region of the droplet
  • __meta_digitalocean_size: the size of the droplet
  • __meta_digitalocean_status: the status of the droplet
  • __meta_digitalocean_features: the comma-separated list of features of the droplet
  • __meta_digitalocean_tags: the comma-separated list of tags of the droplet
  • __meta_digitalocean_vpc: the id of the droplet’s VPC
# Authentication information used to authenticate to the API server.
# Note that `basic_auth` and `authorization` options are
# mutually exclusive.
# password and password_file are mutually exclusive.

# Optional HTTP basic authentication information, not currently supported by DigitalOcean.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# The port to scrape metrics from.
[ port: <int> | default = 80 ]

# The time after which the droplets are refreshed.
[ refresh_interval: <duration> | default = 60s ]
1.2.3.4、docker_sd_config

Docker SD configurations allow retrieving scrape targets from Docker Engine hosts.

This SD discovers “containers” and will create a target for each network IP and port the container is configured to expose.
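
A minimal, hedged example of such a configuration against the local Docker daemon socket (the socket path is the common default, not a requirement):

docker_sd_configs:
  - host: unix:///var/run/docker.sock
    refresh_interval: 60s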

Available meta labels:

  • __meta_docker_container_id: the id of the container
  • __meta_docker_container_name: the name of the container
  • __meta_docker_container_network_mode: the network mode of the container
  • __meta_docker_container_label_<labelname>: each label of the container
  • __meta_docker_network_id: the ID of the network
  • __meta_docker_network_name: the name of the network
  • __meta_docker_network_ingress: whether the network is ingress
  • __meta_docker_network_internal: whether the network is internal
  • __meta_docker_network_label_<labelname>: each label of the network
  • __meta_docker_network_scope: the scope of the network
  • __meta_docker_network_ip: the IP of the container in this network
  • __meta_docker_port_private: the port on the container
  • __meta_docker_port_public: the external port if a port-mapping exists
  • __meta_docker_port_public_ip: the public IP if a port-mapping exists

See below for the configuration options for Docker discovery:

# Address of the Docker daemon.
host: <string>

# Optional proxy URL.
[ proxy_url: <string> ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# The port to scrape metrics from, when `role` is nodes, and for discovered
# tasks and services that don't have published ports.
[ port: <int> | default = 80 ]

# The host to use if the container is in host networking mode.
[ host_networking_host: <string> | default = "localhost" ]

# Optional filters to limit the discovery process to a subset of available
# resources.
# The available filters are listed in the upstream documentation:
# Services: https://docs.docker.com/engine/api/v1.40/#operation/ServiceList
# Tasks: https://docs.docker.com/engine/api/v1.40/#operation/TaskList
# Nodes: https://docs.docker.com/engine/api/v1.40/#operation/NodeList
[ filters:
  [ - name: <string>
      values: <string>, [...] ]

# The time after which the containers are refreshed.
[ refresh_interval: <duration> | default = 60s ]

# Authentication information used to authenticate to the Docker daemon.
# Note that `basic_auth` and `authorization` options are
# mutually exclusive.
# password and password_file are mutually exclusive.

# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

The relabeling phase is the preferred and more powerful way to filter containers. For users with thousands of containers it can be more efficient to use the Docker API directly which has basic support for filtering containers (using filters).

See this example Prometheus configuration file for a detailed example of configuring Prometheus for Docker Engine.

1.2.3.5、dockerswarm_sd_config

Docker Swarm SD configurations allow retrieving scrape targets from Docker Swarm engine.

One of the following roles can be configured to discover targets:

services

The services role discovers all Swarm services and exposes their ports as targets. For each published port of a service, a single target is generated. If a service has no published ports, a target per service is created using the port parameter defined in the SD configuration.

Available meta labels:

  • __meta_dockerswarm_service_id: the id of the service
  • __meta_dockerswarm_service_name: the name of the service
  • __meta_dockerswarm_service_mode: the mode of the service
  • __meta_dockerswarm_service_endpoint_port_name: the name of the endpoint port, if available
  • __meta_dockerswarm_service_endpoint_port_publish_mode: the publish mode of the endpoint port
  • __meta_dockerswarm_service_label_<labelname>: each label of the service
  • __meta_dockerswarm_service_task_container_hostname: the container hostname of the target, if available
  • __meta_dockerswarm_service_task_container_image: the container image of the target
  • __meta_dockerswarm_service_updating_status: the status of the service, if available
  • __meta_dockerswarm_network_id: the ID of the network
  • __meta_dockerswarm_network_name: the name of the network
  • __meta_dockerswarm_network_ingress: whether the network is ingress
  • __meta_dockerswarm_network_internal: whether the network is internal
  • __meta_dockerswarm_network_label_<labelname>: each label of the network
  • __meta_dockerswarm_network_scope: the scope of the network
tasks

The tasks role discovers all Swarm tasks and exposes their ports as targets. For each published port of a task, a single target is generated. If a task has no published ports, a target per task is created using the port parameter defined in the SD configuration.

Available meta labels:

  • __meta_dockerswarm_task_id: the id of the task
  • __meta_dockerswarm_task_container_id: the container id of the task
  • __meta_dockerswarm_task_desired_state: the desired state of the task
  • __meta_dockerswarm_task_label_<labelname>: each label of the task
  • __meta_dockerswarm_task_slot: the slot of the task
  • __meta_dockerswarm_task_state: the state of the task
  • __meta_dockerswarm_task_port_publish_mode: the publish mode of the task port
  • __meta_dockerswarm_service_id: the id of the service
  • __meta_dockerswarm_service_name: the name of the service
  • __meta_dockerswarm_service_mode: the mode of the service
  • __meta_dockerswarm_service_label_<labelname>: each label of the service
  • __meta_dockerswarm_network_id: the ID of the network
  • __meta_dockerswarm_network_name: the name of the network
  • __meta_dockerswarm_network_ingress: whether the network is ingress
  • __meta_dockerswarm_network_internal: whether the network is internal
  • __meta_dockerswarm_network_label_<labelname>: each label of the network
  • __meta_dockerswarm_network_label: each label of the network
  • __meta_dockerswarm_network_scope: the scope of the network
  • __meta_dockerswarm_node_id: the ID of the node
  • __meta_dockerswarm_node_hostname: the hostname of the node
  • __meta_dockerswarm_node_address: the address of the node
  • __meta_dockerswarm_node_availability: the availability of the node
  • __meta_dockerswarm_node_label_<labelname>: each label of the node
  • __meta_dockerswarm_node_platform_architecture: the architecture of the node
  • __meta_dockerswarm_node_platform_os: the operating system of the node
  • __meta_dockerswarm_node_role: the role of the node
  • __meta_dockerswarm_node_status: the status of the node

The __meta_dockerswarm_network_* meta labels are not populated for ports which are published with mode=host.

nodes

The nodes role is used to discover Swarm nodes.

Available meta labels:

  • __meta_dockerswarm_node_address: the address of the node
  • __meta_dockerswarm_node_availability: the availability of the node
  • __meta_dockerswarm_node_engine_version: the version of the node engine
  • __meta_dockerswarm_node_hostname: the hostname of the node
  • __meta_dockerswarm_node_id: the ID of the node
  • __meta_dockerswarm_node_label_<labelname>: each label of the node
  • __meta_dockerswarm_node_manager_address: the address of the manager component of the node
  • __meta_dockerswarm_node_manager_leader: the leadership status of the manager component of the node (true or false)
  • __meta_dockerswarm_node_manager_reachability: the reachability of the manager component of the node
  • __meta_dockerswarm_node_platform_architecture: the architecture of the node
  • __meta_dockerswarm_node_platform_os: the operating system of the node
  • __meta_dockerswarm_node_role: the role of the node
  • __meta_dockerswarm_node_status: the status of the node

See below for the configuration options for Docker Swarm discovery:

# Address of the Docker daemon.
host: <string>

# Optional proxy URL.
[ proxy_url: <string> ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# Role of the targets to retrieve. Must be `services`, `tasks`, or `nodes`.
role: <string>

# The port to scrape metrics from, when `role` is nodes, and for discovered
# tasks and services that don't have published ports.
[ port: <int> | default = 80 ]

# Optional filters to limit the discovery process to a subset of available
# resources.
# The available filters are listed in the upstream documentation:
# https://docs.docker.com/engine/api/v1.40/#operation/ContainerList
[ filters:
  [ - name: <string>
      values: <string>, [...] ]

# The time after which the service discovery data is refreshed.
[ refresh_interval: <duration> | default = 60s ]

# Authentication information used to authenticate to the Docker daemon.
# Note that `basic_auth` and `authorization` options are
# mutually exclusive.
# password and password_file are mutually exclusive.

# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

The relabeling phase is the preferred and more powerful way to filter tasks, services or nodes. For users with thousands of tasks it can be more efficient to use the Swarm API directly which has basic support for filtering nodes (using filters).

See this example Prometheus configuration file for a detailed example of configuring Prometheus for Docker Swarm.
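
A hedged sketch of a tasks-role Swarm discovery against a local daemon socket; the socket path and fallback port are illustrative:

dockerswarm_sd_configs:
  - host: unix:///var/run/docker.sock
    role: tasks
    port: 9100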

1.2.3.6、dns_sd_config

A DNS-based service discovery configuration allows specifying a set of DNS domain names which are periodically queried to discover a list of targets. The DNS servers to be contacted are read from /etc/resolv.conf.

This service discovery method only supports basic DNS A, AAAA and SRV record queries, but not the advanced DNS-SD approach specified in RFC6763.
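
For instance, discovering targets from an SRV record could be sketched as follows; the record name is a placeholder:

dns_sd_configs:
  - names: ['_prometheus._tcp.example.com']
    type: SRV
    refresh_interval: 30s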

The following meta labels are available on targets during relabeling:

  • __meta_dns_name: the record name that produced the discovered target.
  • __meta_dns_srv_record_target: the target field of the SRV record
  • __meta_dns_srv_record_port: the port field of the SRV record
# A list of DNS domain names to be queried.
names:
  [ - <string> ]

# The type of DNS query to perform. One of SRV, A, or AAAA.
[ type: <string> | default = 'SRV' ]

# The port number used if the query type is not SRV.
[ port: <int>]

# The time after which the provided names are refreshed.
[ refresh_interval: <duration> | default = 30s ]
1.2.3.7、ec2_sd_config

EC2 SD configurations allow retrieving scrape targets from AWS EC2 instances. The private IP address is used by default, but may be changed to the public IP address with relabeling.
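
A hedged example of such a configuration, filtering on a tag and relying on environment variables or an instance role for credentials; the region, port and filter values are placeholders:

ec2_sd_configs:
  - region: eu-west-1
    port: 9100
    filters:
      - name: tag:Environment
        values:
          - production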

The following meta labels are available on targets during relabeling:

  • __meta_ec2_ami: the EC2 Amazon Machine Image
  • __meta_ec2_architecture: the architecture of the instance
  • __meta_ec2_availability_zone: the availability zone in which the instance is running
  • __meta_ec2_availability_zone_id: the availability zone ID in which the instance is running (requires ec2:DescribeAvailabilityZones)
  • __meta_ec2_instance_id: the EC2 instance ID
  • __meta_ec2_instance_lifecycle: the lifecycle of the EC2 instance, set only for ‘spot’ or ‘scheduled’ instances, absent otherwise
  • __meta_ec2_instance_state: the state of the EC2 instance
  • __meta_ec2_instance_type: the type of the EC2 instance
  • __meta_ec2_ipv6_addresses: comma separated list of IPv6 addresses assigned to the instance’s network interfaces, if present
  • __meta_ec2_owner_id: the ID of the AWS account that owns the EC2 instance
  • __meta_ec2_platform: the Operating System platform, set to ‘windows’ on Windows servers, absent otherwise
  • __meta_ec2_primary_subnet_id: the subnet ID of the primary network interface, if available
  • __meta_ec2_private_dns_name: the private DNS name of the instance, if available
  • __meta_ec2_private_ip: the private IP address of the instance, if present
  • __meta_ec2_public_dns_name: the public DNS name of the instance, if available
  • __meta_ec2_public_ip: the public IP address of the instance, if available
  • __meta_ec2_subnet_id: comma separated list of subnets IDs in which the instance is running, if available
  • __meta_ec2_tag_<tagkey>: each tag value of the instance
  • __meta_ec2_vpc_id: the ID of the VPC in which the instance is running, if available

See below for the configuration options for EC2 discovery:

# The information to access the EC2 API.

# The AWS region. If blank, the region from the instance metadata is used.
[ region: <string> ]

# Custom endpoint to be used.
[ endpoint: <string> ]

# The AWS API keys. If blank, the environment variables `AWS_ACCESS_KEY_ID`
# and `AWS_SECRET_ACCESS_KEY` are used.
[ access_key: <string> ]
[ secret_key: <secret> ]
# Named AWS profile used to connect to the API.
[ profile: <string> ]

# AWS Role ARN, an alternative to using AWS API keys.
[ role_arn: <string> ]

# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 60s ]

# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]

# Filters can be used optionally to filter the instance list by other criteria.
# Available filter criteria can be found here:
# https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html
# Filter API documentation: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_Filter.html
filters:
  [ - name: <string>
      values: <string>, [...] ]

The relabeling phase is the preferred and more powerful way to filter targets based on arbitrary labels. For users with thousands of instances it can be more efficient to use the EC2 API directly which has support for filtering instances.

1.2.3.8、openstack_sd_config

OpenStack SD configurations allow retrieving scrape targets from OpenStack Nova instances.

One of the following <openstack_role> types can be configured to discover targets:

hypervisor

The hypervisor role discovers one target per Nova hypervisor node. The target address defaults to the host_ip attribute of the hypervisor.

The following meta labels are available on targets during relabeling:

  • __meta_openstack_hypervisor_host_ip: the hypervisor node’s IP address.
  • __meta_openstack_hypervisor_id: the hypervisor node’s ID.
  • __meta_openstack_hypervisor_name: the hypervisor node’s name.
  • __meta_openstack_hypervisor_state: the hypervisor node’s state.
  • __meta_openstack_hypervisor_status: the hypervisor node’s status.
  • __meta_openstack_hypervisor_type: the hypervisor node’s type.
instance

The instance role discovers one target per network interface of Nova instance. The target address defaults to the private IP address of the network interface.

The following meta labels are available on targets during relabeling:

  • __meta_openstack_address_pool: the pool of the private IP.
  • __meta_openstack_instance_flavor: the flavor of the OpenStack instance.
  • __meta_openstack_instance_id: the OpenStack instance ID.
  • __meta_openstack_instance_name: the OpenStack instance name.
  • __meta_openstack_instance_status: the status of the OpenStack instance.
  • __meta_openstack_private_ip: the private IP of the OpenStack instance.
  • __meta_openstack_project_id: the project (tenant) owning this instance.
  • __meta_openstack_public_ip: the public IP of the OpenStack instance.
  • __meta_openstack_tag_<tagkey>: each tag value of the instance.
  • __meta_openstack_user_id: the user account owning the tenant.

See below for the configuration options for OpenStack discovery:

# The information to access the OpenStack API.

# The OpenStack role of entities that should be discovered.
role: <openstack_role>

# The OpenStack Region.
region: <string>

# identity_endpoint specifies the HTTP endpoint that is required to work with
# the Identity API of the appropriate version. While it's ultimately needed by
# all of the identity services, it will often be populated by a provider-level
# function.
[ identity_endpoint: <string> ]

# username is required if using Identity V2 API. Consult with your provider's
# control panel to discover your account's username. In Identity V3, either
# userid or a combination of username and domain_id or domain_name are needed.
[ username: <string> ]
[ userid: <string> ]

# password for the Identity V2 and V3 APIs. Consult with your provider's
# control panel to discover your account's preferred method of authentication.
[ password: <secret> ]

# At most one of domain_id and domain_name must be provided if using username
# with Identity V3. Otherwise, either are optional.
[ domain_name: <string> ]
[ domain_id: <string> ]

# The project_id and project_name fields are optional for the Identity V2 API.
# Some providers allow you to specify a project_name instead of the project_id.
# Some require both. Your provider's authentication policies will determine
# how these fields influence authentication.
[ project_name: <string> ]
[ project_id: <string> ]

# The application_credential_id or application_credential_name fields are
# required if using an application credential to authenticate. Some providers
# allow you to create an application credential to authenticate rather than a
# password.
[ application_credential_name: <string> ]
[ application_credential_id: <string> ]

# The application_credential_secret field is required if using an application
# credential to authenticate.
[ application_credential_secret: <secret> ]

# Whether the service discovery should list all instances for all projects.
# It is only relevant for the 'instance' role and usually requires admin permissions.
[ all_tenants: <boolean> | default: false ]

# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 60s ]

# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]

# The availability of the endpoint to connect to. Must be one of public, admin or internal.
[ availability: <string> | default = "public" ]

# TLS configuration.
tls_config:
  [ <tls_config> ]
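
A minimal instance-role example under these options might look like the following sketch; the region, endpoint, credentials and port are placeholders:

openstack_sd_configs:
  - role: instance
    region: RegionOne
    identity_endpoint: https://keystone.example.com:5000/v3
    username: prometheus
    password: <secret>
    domain_name: Default
    project_name: monitoring
    port: 9100
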
1.2.3.9、puppetdb_sd_config

PuppetDB SD configurations allow retrieving scrape targets from PuppetDB resources.

This SD discovers resources and will create a target for each resource returned by the API.

The resource address is the certname of the resource and can be changed during relabeling.

The following meta labels are available on targets during relabeling:

  • __meta_puppetdb_certname: the name of the node associated with the resource
  • __meta_puppetdb_resource: a SHA-1 hash of the resource’s type, title, and parameters, for identification
  • __meta_puppetdb_type: the resource type
  • __meta_puppetdb_title: the resource title
  • __meta_puppetdb_exported: whether the resource is exported ("true" or "false")
  • __meta_puppetdb_tags: comma separated list of resource tags
  • __meta_puppetdb_file: the manifest file in which the resource was declared
  • __meta_puppetdb_environment: the environment of the node associated with the resource
  • __meta_puppetdb_parameter_<parametername>: the parameters of the resource

See below for the configuration options for PuppetDB discovery:

# The URL of the PuppetDB root query endpoint.
url: <string>

# Puppet Query Language (PQL) query. Only resources are supported.
# https://puppet.com/docs/puppetdb/latest/api/query/v4/pql.html
query: <string>

# Whether to include the parameters as meta labels.
# Due to the differences between parameter types and Prometheus labels,
# some parameters might not be rendered. The format of the parameters might
# also change in future releases.
#
# Note: Enabling this exposes parameters in the Prometheus UI and API. Make sure
# that you don't have secrets exposed as parameters if you enable this.
[ include_parameters: <boolean> | default = false ]

# Refresh interval to re-read the resources list.
[ refresh_interval: <duration> | default = 60s ]

# The port to scrape metrics from.
[ port: <int> | default = 80 ]

# TLS configuration to connect to the PuppetDB.
tls_config:
  [ <tls_config> ]

# basic_auth, authorization, and oauth2, are mutually exclusive.

# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# `Authorization` HTTP header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials with the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

See this example Prometheus configuration file for a detailed example of configuring Prometheus with PuppetDB.

1.2.3.10、file_sd_config

File-based service discovery provides a more generic way to configure static targets and serves as an interface to plug in custom service discovery mechanisms.

It reads a set of files containing a list of zero or more <static_config>s. Changes to all defined files are detected via disk watches and applied immediately. Files may be provided in YAML or JSON format. Only changes resulting in well-formed target groups are applied.

Files must contain a list of static configs, using these formats:

JSON:

[
  {
    "targets": [ "<host>", ... ],
    "labels": {
      "<labelname>": "<labelvalue>", ...
    }
  },
  ...
]

YAML:

- targets:
  [ - '<host>' ]
  labels:
    [ <labelname>: <labelvalue> ... ]

As a fallback, the file contents are also re-read periodically at the specified refresh interval.

Each target has a meta label __meta_filepath during the relabeling phase. Its value is set to the filepath from which the target was extracted.

There is a list of integrations with this discovery mechanism.

# Patterns for files from which target groups are extracted.
files:
  [ - <filename_pattern> ... ]

# Refresh interval to re-read the files.
[ refresh_interval: <duration> | default = 5m ]

Where <filename_pattern> may be a path ending in .json, .yml or .yaml. The last path segment may contain a single * that matches any character sequence, e.g. my/path/tg_*.json.
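
Putting this together, a hedged example pairs a file_sd_configs entry with a targets file matching one of the patterns; the paths, hosts and label are placeholders:

file_sd_configs:
  - files:
      - targets/*.yml
    refresh_interval: 5m

# targets/web.yml
- targets: ['web-1.example.com:9100', 'web-2.example.com:9100']
  labels:
    env: production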

1.2.3.11、gce_sd_config

GCE SD configurations allow retrieving scrape targets from GCP GCE instances. The private IP address is used by default, but may be changed to the public IP address with relabeling.

The following meta labels are available on targets during relabeling:

  • __meta_gce_instance_id: the numeric id of the instance
  • __meta_gce_instance_name: the name of the instance
  • __meta_gce_label_<labelname>: each GCE label of the instance
  • __meta_gce_machine_type: full or partial URL of the machine type of the instance
  • __meta_gce_metadata_<name>: each metadata item of the instance
  • __meta_gce_network: the network URL of the instance
  • __meta_gce_private_ip: the private IP address of the instance
  • __meta_gce_interface_ipv4_<name>: IPv4 address of each named interface
  • __meta_gce_project: the GCP project in which the instance is running
  • __meta_gce_public_ip: the public IP address of the instance, if present
  • __meta_gce_subnetwork: the subnetwork URL of the instance
  • __meta_gce_tags: comma separated list of instance tags
  • __meta_gce_zone: the GCE zone URL in which the instance is running

See below for the configuration options for GCE discovery:

# The information to access the GCE API.

# The GCP Project
project: <string>

# The zone of the scrape targets. If you need multiple zones use multiple
# gce_sd_configs.
zone: <string>

# Filter can be used optionally to filter the instance list by other criteria
# Syntax of this filter string is described here in the filter query parameter section:
# https://cloud.google.com/compute/docs/reference/latest/instances/list
[ filter: <string> ]

# Refresh interval to re-read the instance list
[ refresh_interval: <duration> | default = 60s ]

# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]

# The tag separator is used to separate the tags on concatenation
[ tag_separator: <string> | default = , ]

Credentials are discovered by the Google Cloud SDK default client by looking in the following places, preferring the first location found:

  1. a JSON file specified by the GOOGLE_APPLICATION_CREDENTIALS environment variable
  2. a JSON file in the well-known path $HOME/.config/gcloud/application_default_credentials.json
  3. fetched from the GCE metadata server

If Prometheus is running within GCE, the service account associated with the instance it is running on should have at least read-only permissions to the compute resources. If running outside of GCE make sure to create an appropriate service account and place the credential file in one of the expected locations.
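
A hedged GCE example relying on the credential discovery described above; the project, zone and port are placeholders:

gce_sd_configs:
  - project: my-gcp-project
    zone: europe-west1-b
    port: 9100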

1.2.3.12、hetzner_sd_config

Hetzner SD configurations allow retrieving scrape targets from Hetzner Cloud API and Robot API. This service discovery uses the public IPv4 address by default, but that can be changed with relabeling, as demonstrated in the Prometheus hetzner-sd configuration file.

The following meta labels are available on all targets during relabeling:

  • __meta_hetzner_server_id: the ID of the server
  • __meta_hetzner_server_name: the name of the server
  • __meta_hetzner_server_status: the status of the server
  • __meta_hetzner_public_ipv4: the public ipv4 address of the server
  • __meta_hetzner_public_ipv6_network: the public ipv6 network (/64) of the server
  • __meta_hetzner_datacenter: the datacenter of the server

The labels below are only available for targets with role set to hcloud:

  • __meta_hetzner_hcloud_image_name: the image name of the server
  • __meta_hetzner_hcloud_image_description: the description of the server image
  • __meta_hetzner_hcloud_image_os_flavor: the OS flavor of the server image
  • __meta_hetzner_hcloud_image_os_version: the OS version of the server image
  • __meta_hetzner_hcloud_datacenter_location: the location of the server
  • __meta_hetzner_hcloud_datacenter_location_network_zone: the network zone of the server
  • __meta_hetzner_hcloud_server_type: the type of the server
  • __meta_hetzner_hcloud_cpu_cores: the CPU cores count of the server
  • __meta_hetzner_hcloud_cpu_type: the CPU type of the server (shared or dedicated)
  • __meta_hetzner_hcloud_memory_size_gb: the amount of memory of the server (in GB)
  • __meta_hetzner_hcloud_disk_size_gb: the disk size of the server (in GB)
  • __meta_hetzner_hcloud_private_ipv4_<networkname>: the private ipv4 address of the server within a given network
  • __meta_hetzner_hcloud_label_<labelname>: each label of the server
  • __meta_hetzner_hcloud_labelpresent_<labelname>: true for each label of the server

The labels below are only available for targets with role set to robot:

  • __meta_hetzner_robot_product: the product of the server
  • __meta_hetzner_robot_cancelled: the server cancellation status
# The Hetzner role of entities that should be discovered.
# One of robot or hcloud.
role: <string>

# Authentication information used to authenticate to the API server.
# Note that `basic_auth` and `authorization` options are
# mutually exclusive.
# password and password_file are mutually exclusive.

# Optional HTTP basic authentication information, required when role is robot
# Role hcloud does not support basic auth.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration, required when role is
# hcloud. Role robot does not support bearer token authentication.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# The port to scrape metrics from.
[ port: <int> | default = 80 ]

# The time after which the servers are refreshed.
[ refresh_interval: <duration> | default = 60s ]
1.2.3.13、http_sd_config

HTTP-based service discovery provides a more generic way to configure static targets and serves as an interface to plug in custom service discovery mechanisms.

It fetches targets from an HTTP endpoint containing a list of zero or more <static_config>s. The target must reply with an HTTP 200 response. The HTTP header Content-Type must be application/json, and the body must be valid JSON.

Example response body:

[
  {
    "targets": [ "<host>", ... ],
    "labels": {
      "<labelname>": "<labelvalue>", ...
    }
  },
  ...
]

The endpoint is queried periodically at the specified refresh interval.

Each target has a meta label __meta_url during the relabeling phase. Its value is set to the URL from which the target was extracted.

# URL from which the targets are fetched.
url: <string>

# Refresh interval to re-query the endpoint.
[ refresh_interval: <duration> | default = 60s ]

# Authentication information used to authenticate to the API server.
# Note that `basic_auth`, `authorization` and `oauth2` options are
# mutually exclusive.
# `password` and `password_file` are mutually exclusive.

# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
oauth2:
  [ <oauth2> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# TLS configuration.
tls_config:
  [ <tls_config> ]
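
A hedged example of an HTTP SD entry returning the JSON shown earlier; the endpoint URL is a placeholder:

http_sd_configs:
  - url: https://sd.example.com/targets
    refresh_interval: 60s
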
1.2.3.14、kubernetes_sd_config

Kubernetes SD configurations allow retrieving scrape targets from Kubernetes’ REST API and always staying synchronized with the cluster state.

One of the following role types can be configured to discover targets:

node

The node role discovers one target per cluster node with the address defaulting to the Kubelet’s HTTP port. The target address defaults to the first existing address of the Kubernetes node object in the address type order of NodeInternalIP, NodeExternalIP, NodeLegacyHostIP, and NodeHostName.

Available meta labels:

  • __meta_kubernetes_node_name: The name of the node object.
  • __meta_kubernetes_node_label_<labelname>: Each label from the node object.
  • __meta_kubernetes_node_labelpresent_<labelname>: true for each label from the node object.
  • __meta_kubernetes_node_annotation_<annotationname>: Each annotation from the node object.
  • __meta_kubernetes_node_annotationpresent_<annotationname>: true for each annotation from the node object.
  • __meta_kubernetes_node_address_<address_type>: The first address for each node address type, if it exists.

In addition, the instance label for the node will be set to the node name as retrieved from the API server.

service

The service role discovers a target for each service port for each service. This is generally useful for blackbox monitoring of a service. The address will be set to the Kubernetes DNS name of the service and respective service port.

Available meta labels:

  • __meta_kubernetes_namespace: The namespace of the service object.
  • __meta_kubernetes_service_annotation_<annotationname>: Each annotation from the service object.
  • __meta_kubernetes_service_annotationpresent_<annotationname>: “true” for each annotation of the service object.
  • __meta_kubernetes_service_cluster_ip: The cluster IP address of the service. (Does not apply to services of type ExternalName)
  • __meta_kubernetes_service_external_name: The DNS name of the service. (Applies to services of type ExternalName)
  • __meta_kubernetes_service_label_<labelname>: Each label from the service object.
  • __meta_kubernetes_service_labelpresent_<labelname>: true for each label of the service object.
  • __meta_kubernetes_service_name: The name of the service object.
  • __meta_kubernetes_service_port_name: Name of the service port for the target.
  • __meta_kubernetes_service_port_protocol: Protocol of the service port for the target.
  • __meta_kubernetes_service_type: The type of the service.
pod

The pod role discovers all pods and exposes their containers as targets. For each declared port of a container, a single target is generated. If a container has no specified ports, a port-free target per container is created for manually adding a port via relabeling.

Available meta labels:

  • __meta_kubernetes_namespace: The namespace of the pod object.
  • __meta_kubernetes_pod_name: The name of the pod object.
  • __meta_kubernetes_pod_ip: The pod IP of the pod object.
  • __meta_kubernetes_pod_label_<labelname>: Each label from the pod object.
  • __meta_kubernetes_pod_labelpresent_<labelname>: true for each label from the pod object.
  • __meta_kubernetes_pod_annotation_<annotationname>: Each annotation from the pod object.
  • __meta_kubernetes_pod_annotationpresent_<annotationname>: true for each annotation from the pod object.
  • __meta_kubernetes_pod_container_init: true if the container is an InitContainer
  • __meta_kubernetes_pod_container_name: Name of the container the target address points to.
  • __meta_kubernetes_pod_container_port_name: Name of the container port.
  • __meta_kubernetes_pod_container_port_number: Number of the container port.
  • __meta_kubernetes_pod_container_port_protocol: Protocol of the container port.
  • __meta_kubernetes_pod_ready: Set to true or false for the pod’s ready state.
  • __meta_kubernetes_pod_phase: Set to Pending, Running, Succeeded, Failed or Unknown in the lifecycle.
  • __meta_kubernetes_pod_node_name: The name of the node the pod is scheduled onto.
  • __meta_kubernetes_pod_host_ip: The current host IP of the pod object.
  • __meta_kubernetes_pod_uid: The UID of the pod object.
  • __meta_kubernetes_pod_controller_kind: Object kind of the pod controller.
  • __meta_kubernetes_pod_controller_name: Name of the pod controller.
endpoints

The endpoints role discovers targets from listed endpoints of a service. For each endpoint address one target is discovered per port. If the endpoint is backed by a pod, all additional container ports of the pod, not bound to an endpoint port, are discovered as targets as well.

Available meta labels:

  • __meta_kubernetes_namespace: The namespace of the endpoints object.
  • __meta_kubernetes_endpoints_name: The names of the endpoints object.
  • For all targets discovered directly from the endpoints list (those not additionally inferred from underlying pods), the following labels are attached:
    • __meta_kubernetes_endpoint_hostname: Hostname of the endpoint.
    • __meta_kubernetes_endpoint_node_name: Name of the node hosting the endpoint.
    • __meta_kubernetes_endpoint_ready: Set to true or false for the endpoint’s ready state.
    • __meta_kubernetes_endpoint_port_name: Name of the endpoint port.
    • __meta_kubernetes_endpoint_port_protocol: Protocol of the endpoint port.
    • __meta_kubernetes_endpoint_address_target_kind: Kind of the endpoint address target.
    • __meta_kubernetes_endpoint_address_target_name: Name of the endpoint address target.
  • If the endpoints belong to a service, all labels of the role: service discovery are attached.
  • For all targets backed by a pod, all labels of the role: pod discovery are attached.
endpointslice

The endpointslice role discovers targets from existing endpointslices. For each endpoint address referenced in the endpointslice object one target is discovered. If the endpoint is backed by a pod, all additional container ports of the pod, not bound to an endpoint port, are discovered as targets as well.

Available meta labels:

  • __meta_kubernetes_namespace: The namespace of the endpoints object.
  • __meta_kubernetes_endpointslice_name: The name of endpointslice object.
  • For all targets discovered directly from the endpointslice list (those not additionally inferred from underlying pods), the following labels are attached:
    • __meta_kubernetes_endpointslice_address_target_kind: Kind of the referenced object.
    • __meta_kubernetes_endpointslice_address_target_name: Name of referenced object.
    • __meta_kubernetes_endpointslice_address_type: The IP protocol family of the address of the target.
    • __meta_kubernetes_endpointslice_endpoint_conditions_ready: Set to true or false for the referenced endpoint's ready state.
    • __meta_kubernetes_endpointslice_endpoint_topology_kubernetes_io_hostname: Name of the node hosting the referenced endpoint.
    • __meta_kubernetes_endpointslice_endpoint_topology_present_kubernetes_io_hostname: Flag that shows if the referenced object has a kubernetes.io/hostname annotation.
    • __meta_kubernetes_endpointslice_port: Port of the referenced endpoint.
    • __meta_kubernetes_endpointslice_port_name: Named port of the referenced endpoint.
    • __meta_kubernetes_endpointslice_port_protocol: Protocol of the referenced endpoint.
  • If the endpoints belong to a service, all labels of the role: service discovery are attached.
  • For all targets backed by a pod, all labels of the role: pod discovery are attached.

ingress

The ingress role discovers a target for each path of each ingress. This is generally useful for blackbox monitoring of an ingress. The address will be set to the host specified in the ingress spec.

Available meta labels:

  • __meta_kubernetes_namespace: The namespace of the ingress object.
  • __meta_kubernetes_ingress_name: The name of the ingress object.
  • __meta_kubernetes_ingress_label_<labelname>: Each label from the ingress object.
  • __meta_kubernetes_ingress_labelpresent_<labelname>: true for each label from the ingress object.
  • __meta_kubernetes_ingress_annotation_<annotationname>: Each annotation from the ingress object.
  • __meta_kubernetes_ingress_annotationpresent_<annotationname>: true for each annotation from the ingress object.
  • __meta_kubernetes_ingress_class_name: Class name from ingress spec, if present.
  • __meta_kubernetes_ingress_scheme: Protocol scheme of ingress, https if TLS config is set. Defaults to http.
  • __meta_kubernetes_ingress_path: Path from ingress spec. Defaults to /.

See below for the configuration options for Kubernetes discovery:

# The information to access the Kubernetes API.

# The API server addresses. If left empty, Prometheus is assumed to run inside
# of the cluster and will discover API servers automatically and use the pod's
# CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/.
[ api_server: <host> ]

# The Kubernetes role of entities that should be discovered.
# One of endpoints, service, pod, node, or ingress.
role: <string>

# Optional path to a kubeconfig file.
# Note that api_server and kube_config are mutually exclusive.
[ kubeconfig_file: <filename> ]

# Optional authentication information used to authenticate to the API server.
# Note that `basic_auth` and `authorization` options are mutually exclusive.
# password and password_file are mutually exclusive.

# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# Optional namespace discovery. If omitted, all namespaces are used.
namespaces:
  names:
    [ - <string> ]

# Optional label and field selectors to limit the discovery process to a subset of available resources.
# See https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/
# and https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ to learn more about the possible
# filters that can be used. Endpoints role supports pod, service and endpoints selectors, other roles
# only support selectors matching the role itself (e.g. node role can only contain node selectors).

# Note: When making a decision about using field/label selectors, make sure that this
# is the best approach - it will prevent Prometheus from reusing a single list/watch
# for all scrape configs. This might result in a bigger load on the Kubernetes API,
# because for each selector combination there will be an additional LIST/WATCH.
# On the other hand, if you just want to monitor a small subset of pods of a large
# cluster, it is recommended to use selectors. Whether selectors should be used or not
# depends on the particular situation.
[ selectors:
  [ - role: <string>
    [ label: <string> ]
    [ field: <string> ] ]]

See this example Prometheus configuration file for a detailed example of configuring Prometheus for Kubernetes.

You may wish to check out the 3rd party Prometheus Operator, which automates the Prometheus setup on top of Kubernetes.
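
As a rough sketch of how the ingress role is typically combined with relabeling for blackbox probing, the job below builds a probe URL from the discovered ingress and hands it to a blackbox exporter; the exporter address blackbox-exporter.monitoring:9115 and the http_2xx module name are assumptions, not part of the reference above:

scrape_configs:
  - job_name: 'kubernetes-ingresses'
    metrics_path: /probe
    params:
      module: [http_2xx]                     # hypothetical blackbox exporter module
    kubernetes_sd_configs:
      - role: ingress
    relabel_configs:
      # Build the probe target URL from the discovered ingress scheme, host and path.
      - source_labels: [__meta_kubernetes_ingress_scheme, __address__, __meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      # Point __address__ at the blackbox exporter itself (address is an assumption).
      - target_label: __address__
        replacement: blackbox-exporter.monitoring:9115
      - source_labels: [__param_target]
        target_label: instance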

1.2.3.15、kuma_sd_config

Kuma SD configurations allow retrieving scrape targets from the Kuma control plane.

This SD discovers “monitoring assignments” based on Kuma Dataplane Proxies, via the MADS v1 (Monitoring Assignment Discovery Service) xDS API, and will create a target for each proxy inside a Prometheus-enabled mesh.

The following meta labels are available for each target:

  • __meta_kuma_mesh: the name of the proxy’s Mesh
  • __meta_kuma_dataplane: the name of the proxy
  • __meta_kuma_service: the name of the proxy’s associated Service
  • __meta_kuma_label_<tagname>: each tag of the proxy

See below for the configuration options for Kuma MonitoringAssignment discovery:

# Address of the Kuma Control Plane's MADS xDS server.
server: <string>

# The time to wait between polling update requests.
[ refresh_interval: <duration> | default = 30s ]

# The time after which the monitoring assignments are refreshed.
[ fetch_timeout: <duration> | default = 2m ]

# Optional proxy URL.
[ proxy_url: <string> ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# Optional authentication information used to authenticate to the Kuma control plane.
# Note that `basic_auth` and `authorization` options are
# mutually exclusive.
# password and password_file are mutually exclusive.

# Optional HTTP basic authentication information.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials with the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

The relabeling phase is the preferred and more powerful way to filter proxies and user-defined tags.
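
For illustration, a minimal Kuma job might look like the sketch below; the control-plane address and the service name used in the keep rule are hypothetical:

scrape_configs:
  - job_name: 'kuma-dataplanes'
    kuma_sd_configs:
      - server: http://kuma-control-plane.kuma-system:5676   # hypothetical MADS server address
    relabel_configs:
      # Keep only proxies belonging to one service (the service name is an assumption).
      - source_labels: [__meta_kuma_service]
        regex: backend
        action: keep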

1.2.3.16、lightsail_sd_config

Lightsail SD configurations allow retrieving scrape targets from AWS Lightsail instances. The private IP address is used by default, but may be changed to the public IP address with relabeling.

The following meta labels are available on targets during relabeling:

  • __meta_lightsail_availability_zone: the availability zone in which the instance is running
  • __meta_lightsail_blueprint_id: the Lightsail blueprint ID
  • __meta_lightsail_bundle_id: the Lightsail bundle ID
  • __meta_lightsail_instance_name: the name of the Lightsail instance
  • __meta_lightsail_instance_state: the state of the Lightsail instance
  • __meta_lightsail_instance_support_code: the support code of the Lightsail instance
  • __meta_lightsail_ipv6_addresses: comma separated list of IPv6 addresses assigned to the instance’s network interfaces, if present
  • __meta_lightsail_private_ip: the private IP address of the instance
  • __meta_lightsail_public_ip: the public IP address of the instance, if available
  • __meta_lightsail_tag_<tagkey>: each tag value of the instance

See below for the configuration options for Lightsail discovery:

# The information to access the Lightsail API.

# The AWS region. If blank, the region from the instance metadata is used.
[ region: <string> ]

# Custom endpoint to be used.
[ endpoint: <string> ]

# The AWS API keys. If blank, the environment variables `AWS_ACCESS_KEY_ID`
# and `AWS_SECRET_ACCESS_KEY` are used.
[ access_key: <string> ]
[ secret_key: <secret> ]
# Named AWS profile used to connect to the API.
[ profile: <string> ]

# AWS Role ARN, an alternative to using AWS API keys.
[ role_arn: <string> ]

# Refresh interval to re-read the instance list.
[ refresh_interval: <duration> | default = 60s ]

# The port to scrape metrics from. If using the public IP address, this must
# instead be specified in the relabeling rule.
[ port: <int> | default = 80 ]
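
A sketch of a Lightsail job that scrapes via the public IP is shown below; the region and the exporter port 9100 are assumptions:

scrape_configs:
  - job_name: 'lightsail'
    lightsail_sd_configs:
      - region: us-east-1              # assumed region
        port: 9100                     # hypothetical exporter port (used with the default private IP)
    relabel_configs:
      # Scrape the public IP instead of the default private IP; the port must be re-added here.
      - source_labels: [__meta_lightsail_public_ip]
        regex: (.+)
        replacement: '${1}:9100'
        target_label: __address__
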
1.2.3.17、linode_sd_config

Linode SD configurations allow retrieving scrape targets from Linode’s Linode APIv4. This service discovery uses the public IPv4 address by default, but that can be changed with relabelling, as demonstrated in the Prometheus linode-sd configuration file.

The following meta labels are available on targets during relabeling:

  • __meta_linode_instance_id: the id of the linode instance
  • __meta_linode_instance_label: the label of the linode instance
  • __meta_linode_image: the slug of the linode instance’s image
  • __meta_linode_private_ipv4: the private IPv4 of the linode instance
  • __meta_linode_public_ipv4: the public IPv4 of the linode instance
  • __meta_linode_public_ipv6: the public IPv6 of the linode instance
  • __meta_linode_region: the region of the linode instance
  • __meta_linode_type: the type of the linode instance
  • __meta_linode_status: the status of the linode instance
  • __meta_linode_tags: a list of tags of the linode instance joined by the tag separator
  • __meta_linode_group: the display group a linode instance is a member of
  • __meta_linode_hypervisor: the virtualization software powering the linode instance
  • __meta_linode_backups: the backup service status of the linode instance
  • __meta_linode_specs_disk_bytes: the amount of storage space the linode instance has access to
  • __meta_linode_specs_memory_bytes: the amount of RAM the linode instance has access to
  • __meta_linode_specs_vcpus: the number of VCPUS this linode has access to
  • __meta_linode_specs_transfer_bytes: the amount of network transfer the linode instance is allotted each month
  • __meta_linode_extra_ips: a list of all extra IPv4 addresses assigned to the linode instance joined by the tag separator

See below for the configuration options for Linode discovery:

# Authentication information used to authenticate to the API server.
# Note that `basic_auth` and `authorization` options are
# mutually exclusive.
# password and password_file are mutually exclusive.
# Note: Linode APIv4 Token must be created with scopes: 'linodes:read_only', 'ips:read_only', and 'events:read_only'

# Optional HTTP basic authentication information, not currently supported by Linode APIv4.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials with the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

# The port to scrape metrics from.
[ port: <int> | default = 80 ]

# The string by which Linode Instance tags are joined into the tag label.
[ tag_separator: <string> | default = , ]

# The time after which the linode instances are refreshed.
[ refresh_interval: <duration> | default = 60s ]
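
A hedged sketch of a Linode job follows; the APIv4 token placeholder, the exporter port, and the tag used for filtering are assumptions:

scrape_configs:
  - job_name: 'linode'
    linode_sd_configs:
      - authorization:
          credentials: "<linode-apiv4-token>"   # placeholder token with the read_only scopes listed above
        port: 9100                               # hypothetical exporter port
    relabel_configs:
      # Keep only instances carrying a hypothetical "prometheus" tag.
      - source_labels: [__meta_linode_tags]
        regex: '.*prometheus.*'
        action: keep
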
1.2.3.18、marathon_sd_config

Marathon SD configurations allow retrieving scrape targets using the Marathon REST API. Prometheus will periodically check the REST endpoint for currently running tasks and create a target group for every app that has at least one healthy task.

The following meta labels are available on targets during relabeling:

  • __meta_marathon_app: the name of the app (with slashes replaced by dashes)
  • __meta_marathon_image: the name of the Docker image used (if available)
  • __meta_marathon_task: the ID of the Mesos task
  • __meta_marathon_app_label_<labelname>: any Marathon labels attached to the app
  • __meta_marathon_port_definition_label_<labelname>: the port definition labels
  • __meta_marathon_port_mapping_label_<labelname>: the port mapping labels
  • __meta_marathon_port_index: the port index number (e.g. 1 for PORT1)

See below for the configuration options for Marathon discovery:

# List of URLs to be used to contact Marathon servers.
# You need to provide at least one server URL.
servers:
  - <string>

# Polling interval
[ refresh_interval: <duration> | default = 30s ]

# Optional authentication information for token-based authentication
# https://docs.mesosphere.com/1.11/security/ent/iam-api/#passing-an-authentication-token
# It is mutually exclusive with `auth_token_file` and other authentication mechanisms.
[ auth_token: <secret> ]

# Optional authentication information for token-based authentication
# https://docs.mesosphere.com/1.11/security/ent/iam-api/#passing-an-authentication-token
# It is mutually exclusive with `auth_token` and other authentication mechanisms.
[ auth_token_file: <filename> ]

# Sets the `Authorization` header on every request with the
# configured username and password.
# This is mutually exclusive with other authentication mechanisms.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
# NOTE: The current version of DC/OS marathon (v1.11.0) does not support
# standard `Authentication` header, use `auth_token` or `auth_token_file`
# instead.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# TLS configuration for connecting to marathon servers
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

By default every app listed in Marathon will be scraped by Prometheus. If not all of your services provide Prometheus metrics, you can use a Marathon label and Prometheus relabeling to control which instances will actually be scraped. See the Prometheus marathon-sd configuration file for a practical example on how to set up your Marathon app and your Prometheus configuration.

By default, all apps will show up as a single job in Prometheus (the one specified in the configuration file), which can also be changed using relabeling.
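
The sketch below assumes a hypothetical Marathon API address and a Marathon app label named prometheus used to opt apps in to scraping:

scrape_configs:
  - job_name: 'marathon'
    marathon_sd_configs:
      - servers:
          - 'https://marathon.example.com:8443'   # hypothetical Marathon API address
    relabel_configs:
      # Only scrape apps that opt in via a hypothetical "prometheus" Marathon label.
      - source_labels: [__meta_marathon_app_label_prometheus]
        regex: 'yes'
        action: keep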

1.2.3.19、nerve_sd_config

Nerve SD configurations allow retrieving scrape targets from AirBnB’s Nerve which are stored in Zookeeper.

The following meta labels are available on targets during relabeling:

  • __meta_nerve_path: the full path to the endpoint node in Zookeeper
  • __meta_nerve_endpoint_host: the host of the endpoint
  • __meta_nerve_endpoint_port: the port of the endpoint
  • __meta_nerve_endpoint_name: the name of the endpoint

# The Zookeeper servers.
servers:
  - <host>
# Paths can point to a single service, or the root of a tree of services.
paths:
  - <string>
[ timeout: <duration> | default = 10s ]
1.2.3.20、serverset_sd_config

Serverset SD configurations allow retrieving scrape targets from Serversets which are stored in Zookeeper. Serversets are commonly used by Finagle and Aurora.

The following meta labels are available on targets during relabeling:

  • __meta_serverset_path: the full path to the serverset member node in Zookeeper
  • __meta_serverset_endpoint_host: the host of the default endpoint
  • __meta_serverset_endpoint_port: the port of the default endpoint
  • __meta_serverset_endpoint_host_<endpoint>: the host of the given endpoint
  • __meta_serverset_endpoint_port_<endpoint>: the port of the given endpoint
  • __meta_serverset_shard: the shard number of the member
  • __meta_serverset_status: the status of the member

# The Zookeeper servers.
servers:
  - <host>
# Paths can point to a single serverset, or the root of a tree of serversets.
paths:
  - <string>
[ timeout: <duration> | default = 10s ]

Serverset data must be in the JSON format, the Thrift format is not currently supported.
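
A minimal Serverset job might be sketched as follows; the Zookeeper addresses and the serverset path are hypothetical:

scrape_configs:
  - job_name: 'serverset'
    serverset_sd_configs:
      - servers:
          - 'zk1.example.com:2181'   # hypothetical Zookeeper ensemble members
          - 'zk2.example.com:2181'
        paths:
          - '/aurora/jobs'           # hypothetical root of a serverset tree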

1.2.3.21、triton_sd_config

Triton SD configurations allow retrieving scrape targets from Container Monitor discovery endpoints.

One of the following <triton_role> types can be configured to discover targets:

container

The container role discovers one target per “virtual machine” owned by the account. These are SmartOS zones or lx/KVM/bhyve branded zones.

The following meta labels are available on targets during relabeling:

  • __meta_triton_groups: the list of groups belonging to the target joined by a comma separator
  • __meta_triton_machine_alias: the alias of the target container
  • __meta_triton_machine_brand: the brand of the target container
  • __meta_triton_machine_id: the UUID of the target container
  • __meta_triton_machine_image: the target container’s image type
  • __meta_triton_server_id: the server UUID the target container is running on

cn

The cn role discovers one target per compute node (also known as “server” or “global zone”) making up the Triton infrastructure. The account must be a Triton operator and is currently required to own at least one container.

The following meta labels are available on targets during relabeling:

  • __meta_triton_machine_alias: the hostname of the target (requires triton-cmon 1.7.0 or newer)
  • __meta_triton_machine_id: the UUID of the target

See below for the configuration options for Triton discovery:

# The information to access the Triton discovery API.

# The account to use for discovering new targets.
account: <string>

# The type of targets to discover, can be set to:
# * "container" to discover virtual machines (SmartOS zones, lx/KVM/bhyve branded zones) running on Triton
# * "cn" to discover compute nodes (servers/global zones) making up the Triton infrastructure
[ role : <string> | default = "container" ]

# The DNS suffix which should be applied to target.
dns_suffix: <string>

# The Triton discovery endpoint (e.g. 'cmon.us-east-3b.triton.zone'). This is
# often the same value as dns_suffix.
endpoint: <string>

# A list of groups for which targets are retrieved, only supported when `role` == `container`.
# If omitted all containers owned by the requesting account are scraped.
groups:
  [ - <string> ... ]

# The port to use for discovery and metric scraping.
[ port: <int> | default = 9163 ]

# The interval which should be used for refreshing targets.
[ refresh_interval: <duration> | default = 60s ]

# The Triton discovery API version.
[ version: <int> | default = 1 ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

1.2.3.22、eureka_sd_config

Eureka SD configurations allow retrieving scrape targets using the Eureka REST API. Prometheus will periodically check the REST endpoint and create a target for every app instance.

The following meta labels are available on targets during relabeling:

  • __meta_eureka_app_name: the name of the app
  • __meta_eureka_app_instance_id: the ID of the app instance
  • __meta_eureka_app_instance_hostname: the hostname of the instance
  • __meta_eureka_app_instance_homepage_url: the homepage url of the app instance
  • __meta_eureka_app_instance_statuspage_url: the status page url of the app instance
  • __meta_eureka_app_instance_healthcheck_url: the health check url of the app instance
  • __meta_eureka_app_instance_ip_addr: the IP address of the app instance
  • __meta_eureka_app_instance_vip_address: the VIP address of the app instance
  • __meta_eureka_app_instance_secure_vip_address: the secure VIP address of the app instance
  • __meta_eureka_app_instance_status: the status of the app instance
  • __meta_eureka_app_instance_port: the port of the app instance
  • __meta_eureka_app_instance_port_enabled: whether the port of the app instance is enabled
  • __meta_eureka_app_instance_secure_port: the secure port address of the app instance
  • __meta_eureka_app_instance_secure_port_enabled: whether the secure port of the app instance is enabled
  • __meta_eureka_app_instance_country_id: the country ID of the app instance
  • __meta_eureka_app_instance_metadata_<metadataname>: app instance metadata
  • __meta_eureka_app_instance_datacenterinfo_name: the datacenter name of the app instance
  • __meta_eureka_app_instance_datacenterinfo_<metadataname>: the datacenter metadata

See below for the configuration options for Eureka discovery:

# The URL to connect to the Eureka server.
server: <string>

# Sets the `Authorization` header on every request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Configures the scrape request's TLS settings.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# Refresh interval to re-read the app instance list.
[ refresh_interval: <duration> | default = 30s ]

See the Prometheus eureka-sd configuration file for a practical example on how to set up your Eureka app and your Prometheus configuration.
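
As a sketch, the job below assumes a hypothetical Eureka server URL and an instance metadata entry named metrics_path used to override the scrape path:

scrape_configs:
  - job_name: 'eureka'
    eureka_sd_configs:
      - server: http://eureka.example.com:8761/eureka   # hypothetical Eureka server URL
    relabel_configs:
      # Let an app override the scrape path via a hypothetical metadata entry.
      - source_labels: [__meta_eureka_app_instance_metadata_metrics_path]
        regex: (.+)
        target_label: __metrics_path__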

1.2.3.23、scaleway_sd_config

Scaleway SD configurations allow retrieving scrape targets from Scaleway instances and baremetal services.

The following meta labels are available on targets during relabeling:

Instance role
  • __meta_scaleway_instance_boot_type: the boot type of the server
  • __meta_scaleway_instance_hostname: the hostname of the server
  • __meta_scaleway_instance_id: the ID of the server
  • __meta_scaleway_instance_image_arch: the arch of the server image
  • __meta_scaleway_instance_image_id: the ID of the server image
  • __meta_scaleway_instance_image_name: the name of the server image
  • __meta_scaleway_instance_location_cluster_id: the cluster ID of the server location
  • __meta_scaleway_instance_location_hypervisor_id: the hypervisor ID of the server location
  • __meta_scaleway_instance_location_node_id: the node ID of the server location
  • __meta_scaleway_instance_name: name of the server
  • __meta_scaleway_instance_organization_id: the organization of the server
  • __meta_scaleway_instance_private_ipv4: the private IPv4 address of the server
  • __meta_scaleway_instance_project_id: project id of the server
  • __meta_scaleway_instance_public_ipv4: the public IPv4 address of the server
  • __meta_scaleway_instance_public_ipv6: the public IPv6 address of the server
  • __meta_scaleway_instance_region: the region of the server
  • __meta_scaleway_instance_security_group_id: the ID of the security group of the server
  • __meta_scaleway_instance_security_group_name: the name of the security group of the server
  • __meta_scaleway_instance_status: status of the server
  • __meta_scaleway_instance_tags: the list of tags of the server joined by the tag separator
  • __meta_scaleway_instance_type: commercial type of the server
  • __meta_scaleway_instance_zone: the zone of the server (ex: fr-par-1, complete list here)

This role uses the private IPv4 address by default. This can be changed with relabelling, as demonstrated in the Prometheus scaleway-sd configuration file.

Baremetal role
  • __meta_scaleway_baremetal_id: the ID of the server
  • __meta_scaleway_baremetal_public_ipv4: the public IPv4 address of the server
  • __meta_scaleway_baremetal_public_ipv6: the public IPv6 address of the server
  • __meta_scaleway_baremetal_name: the name of the server
  • __meta_scaleway_baremetal_os_name: the name of the operating system of the server
  • __meta_scaleway_baremetal_os_version: the version of the operating system of the server
  • __meta_scaleway_baremetal_project_id: the project ID of the server
  • __meta_scaleway_baremetal_status: the status of the server
  • __meta_scaleway_baremetal_tags: the list of tags of the server joined by the tag separator
  • __meta_scaleway_baremetal_type: the commercial type of the server
  • __meta_scaleway_baremetal_zone: the zone of the server (ex: fr-par-1, complete list here)

This role uses the public IPv4 address by default. This can be changed with relabelling, as demonstrated in the Prometheus scaleway-sd configuration file.

See below for the configuration options for Scaleway discovery:

# Access key to use. https://console.scaleway.com/project/credentials
access_key: <string>

# Secret key to use when listing targets. https://console.scaleway.com/project/credentials
# It is mutually exclusive with `secret_key_file`.
[ secret_key: <secret> ]

# Sets the secret key with the credentials read from the configured file.
# It is mutually exclusive with `secret_key`.
[ secret_key_file: <filename> ]

# Project ID of the targets.
project_id: <string>

# Role of the targets to retrieve. Must be `instance` or `baremetal`.
role: <string>

# The port to scrape metrics from.
[ port: <int> | default = 80 ]

# API URL to use when doing the server listing requests.
[ api_url: <string> | default = "https://api.scaleway.com" ]

# Zone is the availability zone of your targets (e.g. fr-par-1).
[ zone: <string> | default = fr-par-1 ]

# NameFilter specify a name filter (works as a LIKE) to apply on the server listing request.
[ name_filter: <string> ]

# TagsFilter specify a tag filter (a server needs to have all defined tags to be listed) to apply on the server listing request.
tags_filter:
[ - <string> ]

# Refresh interval to re-read the targets list.
[ refresh_interval: <duration> | default = 60s ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# Optional proxy URL.
[ proxy_url: <string> ]

# TLS configuration.
tls_config:
  [ <tls_config> ]
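
A minimal Scaleway job could be sketched as follows; the project ID and credentials are placeholders:

scrape_configs:
  - job_name: 'scaleway-instances'
    scaleway_sd_configs:
      - role: instance
        project_id: '11111111-1111-1111-1111-111111111111'   # placeholder project ID
        access_key: SCWXXXXXXXXXXXXXXXXX                      # placeholder credentials
        secret_key_file: /etc/prometheus/scaleway_secret_key
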
1.2.3.24、uyuni_sd_config

Uyuni SD configurations allow retrieving scrape targets from managed systems via Uyuni API.

The following meta labels are available on targets during relabeling:

  • __meta_uyuni_endpoint_name: the name of the application endpoint
  • __meta_uyuni_exporter: the exporter exposing metrics for the target
  • __meta_uyuni_groups: the system groups of the target
  • __meta_uyuni_metrics_path: metrics path for the target
  • __meta_uyuni_minion_hostname: hostname of the Uyuni client
  • __meta_uyuni_primary_fqdn: primary FQDN of the Uyuni client
  • __meta_uyuni_proxy_module: the module name if Exporter Exporter proxy is configured for the target
  • __meta_uyuni_scheme: the protocol scheme used for requests
  • __meta_uyuni_system_id: the system ID of the client

See below for the configuration options for Uyuni discovery:

# The URL to connect to the Uyuni server.
server: <string>

# Credentials are used to authenticate the requests to Uyuni API.
username: <string>
password: <secret>

# The entitlement string to filter eligible systems.
[ entitlement: <string> | default = monitoring_entitled ]

# The string by which Uyuni group names are joined into the groups label.
[ separator: <string> | default = , ]

# Refresh interval to re-read the managed targets list.
[ refresh_interval: <duration> | default = 60s ]

# Optional HTTP basic authentication information, currently not supported by Uyuni.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration, currently not supported by Uyuni.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with
  # `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials to the credentials read from the configured file.
  # It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration, currently not supported by Uyuni.
# Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# TLS configuration.
tls_config:
  [ <tls_config> ]

See the Prometheus uyuni-sd configuration file for a practical example on how to set up Uyuni Prometheus configuration.

1.2.4、static_configs

static_configs allows specifying a list of targets and a common label set for them; it is the canonical way to specify static targets in a scrape configuration.

# The targets specified by the static config.
targets:
  [ - '<host>' ]

# Labels assigned to all metrics scraped from the targets.
labels:
  [ <labelname>: <labelvalue> ... ]
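
For example, a minimal static job might look like the following sketch; the target addresses and the env label are hypothetical:

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['192.168.1.10:9100', '192.168.1.11:9100']   # hypothetical node_exporter instances
        labels:
          env: 'prod'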

1.2.5、relabel_configs

relabel_configs (relabeling) is a powerful tool to rewrite the label set of a target before it is scraped. Multiple relabeling steps can be configured per scrape configuration; they are applied to each target's label set in the order they appear in the configuration. After relabeling, labels starting with a double underscore (__) are removed from the label set. If a label is only needed temporarily (as input to a later relabeling step), use the __tmp prefix. In practice, relabel_configs can be used not only to group the collected data but also to reduce memory usage.

This mechanism of rewriting a target's instance labels before samples are scraped is what Prometheus calls relabeling. It is configured through the relabel_configs field in the configuration file; besides modifying existing labels, it can also add new labels to the scraped metrics.

For every target, the following labels play a role in relabeling:

  • The target's job label is set to the job_name value of the respective scrape configuration, and the __address__ label is set to the <host>:<port> address of the target.
  • After relabeling, the instance label defaults to the value of __address__ if it was not set during relabeling.
  • __scheme__: the protocol scheme (http or https) used to scrape the target.
  • __metrics_path__: the metrics path on the target.
  • __param_<name>: the URL parameters passed when scraping the target.

During the relabeling phase, additional labels prefixed with __meta_ may be available. They are set by the service discovery mechanism that provided the target and vary between mechanisms.

# The source labels select values from existing labels; multiple labels are joined
# with the separator (a semicolon by default).
[ source_labels: '[' <labelname> [, ...] ']' ]

# Separator placed between concatenated source label values; optional, defaults to a semicolon.
[ separator: <string> | default = ; ]

# Label to which the resulting value is written.
[ target_label: <labelname> ]

# Regular expression to match against.
[ regex: <regex> | default = (.*) ]

# Modulus for hashing the source_labels values; usually the number of scraping nodes
# (e.g. 4 when there are four nodes).
[ modulus: <int> ]

# Replacement value against which a regex replace is performed if the regular expression matches.
[ replacement: <string> | default = $1 ]

# Action to perform after regex matching; defaults to replace.
[ action: <relabel_action> | default = replace ]

The most common relabel action types are:

  • replace: the default. The regex is matched against the concatenated source_labels values (joined with separator); on a match, the result is written to target_label. Multiple capture groups can be referenced as $1, $2, and so on in replacement. If the regex does not match, target_label is not rewritten.
  • keep: used for selection; only targets whose concatenated source_labels value matches regex are kept, everything that does not match is dropped.
  • drop: used for exclusion; targets whose concatenated source_labels value matches regex are dropped, i.e. everything that matches is discarded.
  • hashmod: sets target_label to the hash of the concatenated source_labels values modulo modulus. With a modulus of 4, for example, every target is assigned a value from 0 to 3, which is typically used to shard scraping across several collectors.
  • labelmap: matches regex against the names of all labels of the target; the matching label names (or the part captured by the regex) become new label names, and their values become the values of the new labels.
  • labeldrop: removes the labels whose names match regex from the target's label set.
  • labelkeep: removes the labels whose names do not match regex; the opposite of labeldrop.

Note:

  1. The default relabel action is replace, which is also the most commonly used one.
  2. drop and keep can be regarded as filters; for all other actions, processing continues whether or not the regex matches. The effect of drop and keep depends on which relabel block they appear in:
    • In relabel_configs, the target is not scraped.
    • In metric_relabel_configs, the time series is not stored.
    • In alert_relabel_configs, the alert is not sent to Alertmanager.
    • In write_relabel_configs, the time series is not sent to the remote write endpoint.
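
As an illustration, the following sketch keeps only pods whose (hypothetical) app label equals myapp, maps all Kubernetes pod labels onto the target, and stores the namespace in a custom label; it assumes a Kubernetes pod service discovery job:

relabel_configs:
  # Keep only pods whose app label is "myapp" (label name and value are assumptions).
  - source_labels: [__meta_kubernetes_pod_label_app]
    regex: myapp
    action: keep
  # Turn every Kubernetes pod label into a regular target label.
  - regex: __meta_kubernetes_pod_label_(.+)
    action: labelmap
  # Store the namespace in a custom label.
  - source_labels: [__meta_kubernetes_namespace]
    target_label: kubernetes_namespace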

1.2.6、metric_relabel_configs

After Prometheus has pulled data from a target, it can still edit the raw data: metric_relabel_configs is a per-job configuration block that relabels samples after they have been scraped but before they are stored, whereas relabel_configs relabels targets before scraping.

The main use case for metric_relabel_configs is dropping monitoring data that is not needed so that it is never stored. Its parameters are used in the same way as those of relabel_configs.
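
A common pattern, sketched below with an arbitrarily chosen metric name, drops expensive series by matching on the __name__ label before they are stored:

metric_relabel_configs:
  # Drop every series whose metric name matches the regex before it is stored.
  - source_labels: [__name__]
    regex: 'go_gc_duration_seconds.*'
    action: drop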

1.3、alerting

Alerting-related configuration.

1.3.1、alert_relabel_configs

Rewrites the labels of alerts before they are sent to Alertmanager.

The configuration format is the same as for relabel_configs in scrape_configs.

1.3.2、alertmanagers

alertmanagers specifies the Alertmanager instances the Prometheus server sends alerts to. They can be listed statically via static_configs or discovered dynamically using one of the supported service discovery mechanisms.

In addition, relabel_configs allow selecting Alertmanagers from the discovered entities and provide advanced modifications to the API path used, which is exposed through the __alerts_path__ label.

# Per-target Alertmanager timeout when pushing alerts.
[ timeout: <duration> | default = 10s ]

# The Alertmanager API version.
[ api_version: <string> | default = v2 ]

# Prefix for the HTTP path alerts are pushed to.
[ path_prefix: <path> | default = / ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Enables basic authentication; password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials read from the configured file. It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration. Cannot be used at the same time as basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# TLS configuration for requests to Alertmanager.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# List of Azure service discovery configurations.
azure_sd_configs:
  [ - <azure_sd_config> ... ]

# List of Consul service discovery configurations.
consul_sd_configs:
  [ - <consul_sd_config> ... ]

# List of DNS service discovery configurations.
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# List of EC2 service discovery configurations.
ec2_sd_configs:
  [ - <ec2_sd_config> ... ]

# List of Eureka service discovery configurations.
eureka_sd_configs:
  [ - <eureka_sd_config> ... ]

# List of file service discovery configurations.
file_sd_configs:
  [ - <file_sd_config> ... ]

# List of DigitalOcean service discovery configurations.
digitalocean_sd_configs:
  [ - <digitalocean_sd_config> ... ]

# List of Docker service discovery configurations.
docker_sd_configs:
  [ - <docker_sd_config> ... ]

# List of Docker Swarm service discovery configurations.
dockerswarm_sd_configs:
  [ - <dockerswarm_sd_config> ... ]

# List of GCE service discovery configurations.
gce_sd_configs:
  [ - <gce_sd_config> ... ]

# List of Hetzner service discovery configurations.
hetzner_sd_configs:
  [ - <hetzner_sd_config> ... ]

# List of HTTP service discovery configurations.
http_sd_configs:
  [ - <http_sd_config> ... ]

# List of Kubernetes service discovery configurations.
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]

# List of Lightsail service discovery configurations.
lightsail_sd_configs:
  [ - <lightsail_sd_config> ... ]

# List of Linode service discovery configurations.
linode_sd_configs:
  [ - <linode_sd_config> ... ]

# List of Marathon service discovery configurations.
marathon_sd_configs:
  [ - <marathon_sd_config> ... ]

# List of AirBnB's Nerve service discovery configurations.
nerve_sd_configs:
  [ - <nerve_sd_config> ... ]

# List of OpenStack service discovery configurations.
openstack_sd_configs:
  [ - <openstack_sd_config> ... ]

# List of PuppetDB service discovery configurations.
puppetdb_sd_configs:
  [ - <puppetdb_sd_config> ... ]

# List of Scaleway service discovery configurations.
scaleway_sd_configs:
  [ - <scaleway_sd_config> ... ]

# List of Zookeeper Serverset service discovery configurations.
serverset_sd_configs:
  [ - <serverset_sd_config> ... ]

# List of Triton service discovery configurations.
triton_sd_configs:
  [ - <triton_sd_config> ... ]

# List of Uyuni service discovery configurations.
uyuni_sd_configs:
  [ - <uyuni_sd_config> ... ]

# List of statically configured Alertmanagers.
static_configs:
  [ - <static_config> ... ]

# List of Alertmanager relabel configurations.
relabel_configs:
  [ - <relabel_config> ... ]
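
A minimal alerting section, assuming a single Alertmanager at the hypothetical address alertmanager.example.com:9093, might look like this:

alerting:
  alertmanagers:
    - scheme: http
      static_configs:
        - targets: ['alertmanager.example.com:9093']   # hypothetical Alertmanager address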

1.4、remote_write

write_relabel_configs relabels samples before they are sent to remote storage; this relabeling on write can be used to limit which samples are sent.

# The URL of the endpoint to send samples to.
url: <string>

# Timeout for requests to the remote write endpoint.
[ remote_timeout: <duration> | default = 30s ]

# Custom HTTP headers to be sent along with each remote write request.
# Headers that are set by Prometheus itself cannot be overwritten.
headers:
  [ <string>: <string> ... ]

# List of remote write relabel configurations.
write_relabel_configs:
  [ - <relabel_config> ... ]

# Unique name of the remote write configuration.
[ name: <string> ]

# Enables sending of exemplars over remote write. Note that exemplar storage
# must be enabled before exemplars can be scraped.
[ send_exemplars: <boolean> | default = false ]

# Sets the username and password used for every remote write request;
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials read from the configured file. It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optionally configures AWS Signature Verification 4 signing for requests.
# Cannot be used at the same time as basic_auth, authorization, or oauth2.
# To use the default credentials from the AWS SDK, use `sigv4: {}`.
sigv4:
  # The AWS region. If blank, the region from the default credentials chain is used.
  [ region: <string> ]

  # The AWS API keys. If blank, the environment variables `AWS_ACCESS_KEY_ID`
  # and `AWS_SECRET_ACCESS_KEY` are used.
  [ access_key: <string> ]
  [ secret_key: <secret> ]

  # Named AWS profile used for authentication.
  [ profile: <string> ]

  # AWS Role ARN, an alternative to using AWS API keys.
  [ role_arn: <string> ]

# Optional OAuth 2.0 configuration. Cannot be used at the same time as
# basic_auth, authorization, or sigv4.
oauth2:
  [ <oauth2> ]

# Optional TLS configuration for remote write requests.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]

# Configures the queue used to write to remote storage.
queue_config:
  # Number of samples to buffer per shard before blocking further reads from the WAL.
  # It is recommended to have enough capacity in each shard to buffer several requests,
  # which keeps throughput up while processing occasional slow remote requests.
  [ capacity: <int> | default = 2500 ]
  # Maximum number of shards, i.e. the maximum concurrency.
  [ max_shards: <int> | default = 200 ]
  # Minimum number of shards, i.e. the minimum concurrency.
  [ min_shards: <int> | default = 1 ]
  # Maximum number of samples per send.
  [ max_samples_per_send: <int> | default = 500]
  # Maximum time a sample may wait in the buffer.
  [ batch_send_deadline: <duration> | default = 5s ]
  # Initial retry delay; it doubles on every retry.
  [ min_backoff: <duration> | default = 30ms ]
  # Maximum retry delay.
  [ max_backoff: <duration> | default = 100ms ]
  # Whether to retry after receiving a 429 status code from the remote endpoint.
  # This is experimental and may be removed in the future.
  [ retry_on_http_429: <boolean> | default = false ]

# Configures sending series metadata to remote storage. This is experimental
# and may be removed in a future release.
metadata_config:
  # Whether metric metadata is sent to remote storage.
  [ send: <boolean> | default = true ]
  # How frequently metric metadata is sent to remote storage.
  [ send_interval: <duration> | default = 1m ]
  # Maximum number of samples per send.
  [ max_samples_per_send: <int> | default = 500]
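
A remote_write entry, assuming a hypothetical receiver at https://remote.example.com/api/v1/write, could be sketched as below; the write relabel rule drops all go_* series before they leave Prometheus:

remote_write:
  - url: "https://remote.example.com/api/v1/write"   # hypothetical remote storage endpoint
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/remote_write_password
    write_relabel_configs:
      # Do not forward Go runtime metrics to remote storage.
      - source_labels: [__name__]
        regex: 'go_.*'
        action: drop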

1.5、remote_read

# The URL of the endpoint to read from.
url: <string>

# Unique name of the remote read configuration.
[ name: <string> ]

# An optional list of equality matchers which have to be present in a
# selector to query the remote read endpoint.
required_matchers:
  [ <labelname>: <labelvalue> ... ]

# Timeout for requests to the remote read endpoint.
[ remote_timeout: <duration> | default = 1m ]

# Custom HTTP headers to be sent along with each remote read request.
# Headers that are set by Prometheus itself cannot be overwritten.
headers:
  [ <string>: <string> ... ]

# Whether queries for recent data, for which the local storage should have
# complete data, should also be sent to the remote read endpoint.
[ read_recent: <boolean> | default = false ]

# Sets the username and password used for every remote read request;
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Optional `Authorization` header configuration.
authorization:
  # Sets the authentication type.
  [ type: <string> | default: Bearer ]
  # Sets the credentials. It is mutually exclusive with `credentials_file`.
  [ credentials: <secret> ]
  # Sets the credentials read from the configured file. It is mutually exclusive with `credentials`.
  [ credentials_file: <filename> ]

# Optional OAuth 2.0 configuration. Cannot be used at the same time as
# basic_auth or authorization.
oauth2:
  [ <oauth2> ]

# TLS configuration for remote read requests.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# Configure whether HTTP requests follow HTTP 3xx redirects.
[ follow_redirects: <bool> | default = true ]
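
A matching remote_read entry, again with a hypothetical endpoint, might be sketched as:

remote_read:
  - url: "https://remote.example.com/api/v1/read"   # hypothetical remote read endpoint
    read_recent: false
    required_matchers:
      env: prod   # only selectors containing env="prod" are sent to the remote endpoint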