Prometheus
Connecting to Petals JMX
Petals exposes its metrics over JMX, but Prometheus cannot natively gather metrics through JMX. We therefore need to expose those metrics over HTTP, which Prometheus can scrape.
Luckily, Prometheus maintains jmx_exporter, which exposes JMX metrics over HTTP. It can either act as a Java agent injected into the JVM during Petals ESB startup, or run as an independent server connecting to Petals ESB via RMI.
From the official jmx_exporter documentation:
JMX to Prometheus exporter: a collector that can configurably scrape and expose mBeans of a JMX target.
This exporter is intended to be run as a Java Agent, exposing a HTTP server and serving metrics of the local JVM. It can be also run as an independent HTTP server and scrape remote JMX targets, but this has various disadvantages, such as being harder to configure and being unable to expose process metrics (e.g., memory and CPU usage). Running the exporter as a Java Agent is thus strongly encouraged.
- Copy jmx_prometheus_javaagent-XXX.jar into the petals-esb-directory/lib folder
- Create a YAML config file in the petals-esb-directory/conf folder; here it is named prometheus-jmx.yaml
The file can be left empty for now, but the following default config will expose everything available:
startDelaySeconds: 0
rules:
- pattern: ".*"
- Add the following line to petals-esb.sh, just before the “exec” command at the very end of the script. 8585 is the port on which HTTP metrics will be exposed (once gathered by jmx_exporter); set it as you see fit.
JAVA_OPTS="$JAVA_OPTS -javaagent:${PETALS_LIB_DIR}/jmx_prometheus_javaagent-0.3.1.jar=8585:${PETALS_CONF_DIR}/prometheus-jmx.yaml"
- Run petals-esb.sh
- JVM metrics are available at http://localhost:8585/metrics
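You can check that the agent is working with a quick HTTP request. The response is in the Prometheus text exposition format; the metric below is one of the standard JVM metrics exported by the agent (sample values are illustrative):

```shell
$ curl -s http://localhost:8585/metrics | head
# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
# TYPE jvm_memory_bytes_used gauge
jvm_memory_bytes_used{area="heap",} 1.23456789E8
```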
Install Prometheus
Install: https://prometheus.io/docs/prometheus/latest/getting_started/
Use this prometheus.yml config (note that the target port must match the one given to the jmx_exporter agent, here 8585):
# my global config
global:
  scrape_interval: 5s     # Scrape targets every 5 seconds. The default is every 1 minute.
  evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.

# A scrape configuration containing exactly one endpoint to scrape:
# the jmx_exporter agent running inside Petals ESB.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'petals monitoring'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:8585']
        labels:
          groups: 'petals'
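With this file in place, Prometheus can be started against it (as described in the getting-started guide, assuming you run it from the extracted Prometheus directory):

```shell
./prometheus --config.file=prometheus.yml
```

The Prometheus UI is then available at http://localhost:9090, where the scrape targets can be checked under Status > Targets.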
Configure jmx agent
The JMX agent can be configured in its YAML config file.
Note that:
- Only numeric values are supported by Prometheus (though strings can be interpreted through regexps to extract numeric values)
- Custom and complex objects may not be exported by the exporter; having '- pattern: ".*"' as the only rule will return every available metric (useful for testing)
- Rule order matters: in the end, a given MBean is processed by a single rule. Each MBean is tested against the rules, in order, until a pattern matches; that rule is then applied to the MBean. In other words, the first rule to match is the one kept for that MBean, so very specific rules should be put first and generic/default rules last.
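This first-match-wins behavior can be sketched in a few lines of Python (illustrative only, not the exporter's actual code; the rule names and MBean strings are made up for the example):

```python
import re

# Rules are tried in order; only the first matching rule is applied,
# so the specific rule comes before the catch-all default.
RULES = [
    ("specific", re.compile(r"java\.lang<type=Memory>.*")),  # specific rule first
    ("catch_all", re.compile(r".*")),                        # generic default last
]

def applied_rule(mbean: str) -> str:
    for name, pattern in RULES:
        if pattern.match(mbean):
            return name  # first match wins; later rules are ignored
    return "unmatched"

print(applied_rule("java.lang<type=Memory><HeapMemoryUsage>used"))  # → specific
print(applied_rule("Petals<type=Container><>ServiceCount"))         # → catch_all
```

If the catch-all rule were listed first, it would swallow every MBean and the specific rule would never apply.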
Rule samples:
rules:
- pattern: 'java.lang<type=Runtime, key=java.runtime.name><>SystemProperties: (.*)'
  name: aaaa_test
  value: 1
  labels:
    runtime: "$1"
- pattern: '(.*)<(.*)><(.*)>(.*)'
  name: aaaa_petals_test
  value: 1
  labels:
    mouais: "$1 $2 $3 $3"
- pattern: '(\w+)<type=(\w+), name=(\w+)><>Value: (\w+)'
  name: $1_$2_$3
  value: 1
  help: "$1 metric $2 $3"
  labels:
    value: "$3: $4"