
h1. Connecting to Petals JMX

Petals exposes its metrics through [JMX|https://en.wikipedia.org/wiki/Java_Management_Extensions], but Prometheus cannot natively gather metrics over JMX, so those metrics must be exposed over HTTP, which Prometheus can scrape.
Luckily, Prometheus maintains [jmx_exporter|https://github.com/prometheus/jmx_exporter], which exposes JMX metrics over HTTP. It can either run as a Java agent injected into the [JVM|https://en.wikipedia.org/wiki/Java_virtual_machine] during Petals ESB startup, or as an independent server connecting to Petals ESB through [RMI|https://en.wikipedia.org/wiki/Java_remote_method_invocation].

h2. Installing jmx_exporter as a Java agent


Quoting the [jmx_exporter|https://github.com/prometheus/jmx_exporter] documentation:

_JMX to Prometheus exporter: a collector that can configurably scrape and expose MBeans of a JMX target._
_This exporter is intended to be run as a Java Agent, exposing a HTTP server and serving metrics of the local JVM. It can be also run as an independent HTTP server and scrape remote JMX targets, but this has various disadvantages, such as being harder to configure and being unable to expose process metrics (e.g., memory and CPU usage). Running the exporter as a Java Agent is thus strongly encouraged._

* Copy *jmx_prometheus_javaagent-XXX.jar* into the _petals-esb-directory/lib_ folder
* Create a YAML config file in the _petals-esb-directory/conf_ folder; here it is named *prometheus-jmx.yaml*. The file can be empty for now, but the following default config will expose everything available:{code}
startDelaySeconds: 0
rules:
  - pattern: ".*"
{code}
* Add the following line to *petals-esb.sh*, just before the “_exec_” command at the very end of the script. If necessary, change the version number to match the jar file you downloaded. _8585_ is the port number on which HTTP metrics will be exposed (once gathered by the jmx_exporter); set it as you see fit.{code}JAVA_OPTS="$JAVA_OPTS -javaagent:${PETALS_LIB_DIR}/jmx_prometheus_javaagent-0.3.1.jar=8585:${PETALS_CONF_DIR}/prometheus-jmx.yaml"{code}
* Run _petals-esb.sh_
* Metrics are now available at [http://localhost:8585/metrics] (a quick check is shown below)
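To quickly confirm the agent is up, you can fetch the exposed metrics from the command line (a minimal check, assuming _curl_ is available and the agent listens on port 8585 as configured above):
{code}
# Fetch the first few metrics exposed by the jmx_exporter java agent
curl -s http://localhost:8585/metrics | head -n 20
{code}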

h2. Alternative jmx_exporter install: as an HTTP server



* Download [jmx_prometheus_httpserver|https://mvnrepository.com/artifact/io.prometheus.jmx/jmx_prometheus_httpserver]. The version numbering can be confusing; check the release date to make sure you pick the latest version.
* Adapt the *prometheus-jmx.yaml* config file to connect through RMI. You can use either *jmxUrl* or *hostPort*; *username* and *password* are mandatory. {code}
startDelaySeconds: 0

# jmxUrl: service:jmx:rmi:///jndi/rmi://localhost:7700/PetalsJMX
hostPort: localhost:7700
username: petals
password: petals

rules:
  - pattern: ".*"
{code}
* Start the server with the exposition HTTP *ip:port* and the config file as arguments (a quick check is shown below): {code}java -jar jmx_prometheus_httpserver-0.3.1-jar-with-dependencies.jar localhost:8585 prometheus-jmx.yaml{code}
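As a quick sanity check of the RMI connection, you can look for Petals metrics in the exporter's output (assuming _curl_ is available and the exporter listens on localhost:8585 as started above):
{code}
# Petals MBean metrics should show up if the RMI connection to the container works
curl -s http://localhost:8585/metrics | grep -i petals | head -n 10
{code}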



h2. Install Prometheus

* Install: [https://prometheus.io/docs/prometheus/latest/getting_started/]
* Configure Prometheus; here is a sample *prometheus.yml* config:{code}
global:
  scrape_interval: 5s
  evaluation_interval: 5s

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'petals monitoring'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:8585']
        labels:
          groups: 'petals'
{code}
* Start Prometheus: {code}./prometheus --config.file=prometheus.yml{code}
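Once Prometheus is running, you can confirm that the exporter target is actually being scraped by querying the Prometheus HTTP API for the built-in _up_ metric (a minimal check, assuming Prometheus listens on its default port 9090):
{code}
# "up" is 1 for every target Prometheus scrapes successfully, 0 otherwise
curl -s 'http://localhost:9090/api/v1/query?query=up'
{code}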


h1. Configuring the JMX agent



The JMX agent is configured through its YAML config file.

Note that:
* Only *numeric values* are supported by Prometheus (though strings can be matched with a regexp to extract numeric values).
* Custom and complex objects may not be exported by the exporter. Using the wildcard pattern ".*" as the only rule will return *every metric available* (useful for testing).
* Petals ESB container MBean metrics are all typed as Map, so they are ignored by the jmx agent (v0.3.1). As is, *you can monitor some component metrics but cannot monitor container metrics with Prometheus.*
* Rule order matters: in the end, *a single MBean is processed by a single rule*! Each MBean is tested against the rules in order until a pattern matches, and that rule alone is applied to the MBean. In other words, the first rule to match an MBean wins, so very specific rules should be put first and generic/default rules last.
* Prometheus can make extensive use of *labels* in queries to identify the sources of *metrics*. Think about your needs when designing your labels; more explanations in [the official documentation|https://prometheus.io/docs/concepts/data_model/] or [this blog post|https://pierrevincent.github.io/2017/12/prometheus-blog-series-part-1-metrics-and-labels/].
* Metrics can be typed (conceptually, as gauge, counter or histogram) for Prometheus to know how to handle them. More details on the [official documentation|https://prometheus.io/docs/concepts/metric_types/].
* Metrics format: {code}<metric name>{<label name>=<label value>, ...}{code}
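Both the metric format and the types can be observed directly in the exposition output, where each metric is preceded by "# HELP" and "# TYPE" comment lines. A minimal way to look at them, assuming the Java agent from the first section is running on port 8585:
{code}
# Show the HELP/TYPE metadata lines that precede each exposed metric
curl -s http://localhost:8585/metrics | grep -E '^# (HELP|TYPE)' | head -n 10
{code}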

Be careful when passing strings as labels (quote from [Pierre Vincent's blog|https://pierrevincent.github.io/2017/12/prometheus-blog-series-part-1-metrics-and-labels/]):

{quote}
A word on label cardinality
Labels are really powerful so it can be tempting to annotate each metric with very specific information, however there are some important limitations to what should be used for labels.

Prometheus considers each unique combination of labels and label value as a different time series. As a result if a label has an unbounded set of possible values, Prometheus will have a very hard time storing all these time series. In order to avoid performance issues, labels should not be used for high cardinality data sets (e.g. Customer unique ids).
{quote}


h2. Configuration samples

The following samples were produced by monitoring a single-container Petals ESB topology hosting 3 components (SOAP, REST and Camel).
Raw metrics can be hard to exploit, because the exporter generates the metric names automatically:

Wildcard pattern rule:
{code}
rules:
- pattern: ".*"
{code}


Raw metrics sample:
{code}
# metric: java.lang<type=OperatingSystem><>SystemCpuLoad
java_lang_OperatingSystem_SystemCpuLoad 0.10240228944418933

# metric: java.lang<type=OperatingSystem><>ProcessCpuLoad
java_lang_OperatingSystem_ProcessCpuLoad 3.158981547513337E-4

# metrics: org.ow2.petals<type=custom, name=monitoring_petals-(se-camel | bc-soap | bc-rest)><>MessageExchangeProcessorThreadPoolQueuedRequestsMax
org_ow2_petals_custom_MessageExchangeProcessorThreadPoolQueuedRequestsMax{name="monitoring_petals-se-camel",} 0.0
org_ow2_petals_custom_MessageExchangeProcessorThreadPoolQueuedRequestsMax{name="monitoring_petals-bc-soap",} 0.0
org_ow2_petals_custom_MessageExchangeProcessorThreadPoolQueuedRequestsMax{name="monitoring_petals-bc-rest",} 0.0
{code}

In this case, we cannot tell later in Prometheus where the metrics originated or which Petals ESB container is concerned. By adding a few generic rules, we can add labels and control the metric names.

h3. Adding generic rules

In this example, the point of our rules is to:
* gather _java.lang_ metrics, name each metric with the explicit MBean attribute, label them by type and add the container producing them;
* gather component metrics, name each metric with the explicit MBean attribute, and label them in a usable way with component, container and type (monitoring or runtime_configuration).


Generic rules samples:
{code}
rules:
  - pattern: 'java.lang<type=(.*)><>(.*): (.*)'
    name: "$2"
    value: "$3"
    labels:
      type: "$1"
      container: "petals_sample_0"

  - pattern: 'org.ow2.petals<type=custom, name=monitoring_(.+)><>(.+): (.+)'
    name: "$2"
    value: "$3"
    labels:
      type: "monitoring"
      container: "petals_sample_0"
      component: "$1"

  - pattern: 'org.ow2.petals<type=custom, name=runtime_configuration_(.+)><>(.+): (.+)'
    name: "$2"
    value: "$3"
    labels:
      type: "runtime_config"
      container: "petals_sample_0"
      component: "$1"
{code}

Metrics parsed by generic rules:
{code}
ProcessCpuLoad{container="petals_sample_0",type="OperatingSystem",} 2.5760609293017057E-4
SystemCpuLoad{container="petals_sample_0",type="OperatingSystem",} 0.10177234194298118

MessageExchangeProcessorThreadPoolQueuedRequestsMax{component="petals-bc-soap",container="petals_sample_0",type="monitoring",} 0.0
MessageExchangeProcessorThreadPoolQueuedRequestsMax{component="petals-se-camel",container="petals_sample_0",type="monitoring",} 0.0
MessageExchangeProcessorThreadPoolQueuedRequestsMax{component="petals-bc-rest",container="petals_sample_0",type="monitoring",} 0.0
{code}
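With these labels in place, Prometheus queries can filter or aggregate by container and component. Here is a minimal example through the Prometheus HTTP API, assuming Prometheus runs on its default port 9090 and scrapes the metrics produced by the generic rules above:
{code}
# Fetch the queued-requests metric for every component of the sample container
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=MessageExchangeProcessorThreadPoolQueuedRequestsMax{container="petals_sample_0"}'
{code}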

h3. Adding specific rules

You can go further by adding rules for specific MBeans. Here we will:
* group *SystemCpuLoad* and *ProcessCpuLoad* into a single metric;
* rename *MessageExchangeProcessorThreadPoolQueuedRequestsMax* to a shorter metric name, while keeping the full name as a label and as help text.

{code}
- pattern: 'java.lang<type=OperatingSystem><>SystemCpuLoad: (.*)'
  name: CpuLoad
  value: "$1"
  labels:
    type: "OperatingSystem"
    target: "system"
    container: "petals_sample_0"

- pattern: 'java.lang<type=OperatingSystem><>ProcessCpuLoad: (.*)'
  name: CpuLoad
  value: "$1"
  labels:
    type: "OperatingSystem"
    target: "process"
    container: "petals_sample_0"

- pattern: 'org.ow2.petals<type=custom, name=monitoring_(.+)><>MessageExchangeProcessorThreadPoolQueuedRequestsMax: (.+)'
  name: "MEPTP_QueuedRequests_Max"
  value: "$2"
  help: "MessageExchangeProcessorThreadPoolQueuedRequestsMax"
  labels:
    type: "monitoring"
    mbean: "MessageExchangeProcessorThreadPoolQueuedRequestsMax"
    container: "petals_sample_0"
    component: "$1"
{code}

Metrics parsed by specific rules:
{code}
CpuLoad{container="petals_sample_0",target="system",type="OperatingSystem",} 0.10234667681404555
CpuLoad{container="petals_sample_0",target="process",type="OperatingSystem",} 2.655985589352835E-4

MEPTP_QueuedRequests_Max{component="petals-bc-soap",container="petals_sample_0",mbean="MessageExchangeProcessorThreadPoolQueuedRequestsMax",type="monitoring",} 0.0
MEPTP_QueuedRequests_Max{component="petals-se-camel",container="petals_sample_0",mbean="MessageExchangeProcessorThreadPoolQueuedRequestsMax",type="monitoring",} 0.0
MEPTP_QueuedRequests_Max{component="petals-bc-rest",container="petals_sample_0",mbean="MessageExchangeProcessorThreadPoolQueuedRequestsMax",type="monitoring",} 0.0
{code}

You can mix generic and specific patterns, but remember that they are applied in order, so *always put specific rules first!*