Measurements
The Telemetry service collects meters within an OpenStack deployment. This section provides a brief summary of the meter format and origin, and lists the available meters.
Telemetry collects meters by polling the infrastructure elements and also by consuming the notifications emitted by other OpenStack services. For more information about the polling mechanism and notifications, see Data collection. Several meters are collected both by polling and by consuming notifications. The origin for each meter is listed in the tables below.
Note
You may need to configure Telemetry or other OpenStack services to be able to collect all the samples you need. For further information about configuration requirements, see the Telemetry chapter in the Installation Tutorials and Guides.
Telemetry uses the following meter types:
| Type | Description |
|---|---|
| Cumulative | Increasing over time (instance hours) |
| Delta | Changing over time (bandwidth) |
| Gauge | Discrete items (floating IPs, image uploads) and fluctuating values (disk I/O) |
Telemetry can store metadata with samples. This metadata can be extended for OpenStack Compute and OpenStack Object Storage.
To add metadata to OpenStack Compute samples, you have two options. The first is to specify the metadata when you boot a new instance. The additional information is stored with the sample in the form of `resource_metadata.user_metadata.*`. The new field must be defined by using the `metering.` prefix. The modified boot command looks like the following:
$ openstack server create --property metering.custom_metadata=a_value my_vm
The other option is to set the `reserved_metadata_keys` option to the list of metadata keys that you would like to be included in the `resource_metadata` of the instance-related samples that are collected for OpenStack Compute. This option is included in the `[DEFAULT]` section of the `ceilometer.conf` configuration file.
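For example, a minimal `ceilometer.conf` entry might look like the following (the key names are only illustrative):

```ini
[DEFAULT]
# Comma-separated list of instance metadata keys that are copied into
# resource_metadata of the Compute samples (example key names).
reserved_metadata_keys = project_tag,cost_center
```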
You can also specify headers whose values will be stored along with the sample data of OpenStack Object Storage. The additional information is also stored under `resource_metadata`. The format of the new field is `resource_metadata.http_header_$name`, where `$name` is the name of the header with `-` replaced by `_`.
To specify the new headers, set the `metadata_headers` option under the `[filter:ceilometer]` section in the `proxy-server.conf` file under the `swift` folder. You can use this additional data, for example, to distinguish external and internal users.
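A minimal sketch of the relevant option in `proxy-server.conf`, assuming you want to keep the value of an `X-Project-Name` header (the header name is only an example; the rest of the `[filter:ceilometer]` section is omitted):

```ini
[filter:ceilometer]
# Headers whose values are stored as resource_metadata.http_header_*
# ("-" is replaced by "_" in the stored field name).
metadata_headers = X-Project-Name
```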
Measurements are grouped by the services that are polled by Telemetry or that emit the notifications this service consumes.
OpenStack Compute
The following meters are collected for OpenStack Compute.

| Name | Type | Unit | Resource | Origin | Support | Note |
|---|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | | |
| memory | Gauge | MB | instance ID | Notification | Libvirt, Hyper-V | Volume of RAM allocated to the instance |
| memory.usage | Gauge | MB | instance ID | Pollster | Libvirt, Hyper-V, vSphere, XenAPI | Volume of RAM used by the instance from the amount of its allocated memory |
| memory.resident | Gauge | MB | instance ID | Pollster | Libvirt | Volume of RAM used by the instance on the physical machine |
| cpu | Cumulative | ns | instance ID | Pollster | Libvirt, Hyper-V | CPU time used |
| vcpus | Gauge | vcpu | instance ID | Notification | Libvirt, Hyper-V | Number of virtual CPUs allocated to the instance |
| disk.device.read.requests | Cumulative | request | disk ID | Pollster | Libvirt, Hyper-V | Number of read requests |
| disk.device.write.requests | Cumulative | request | disk ID | Pollster | Libvirt, Hyper-V | Number of write requests |
| disk.device.read.bytes | Cumulative | B | disk ID | Pollster | Libvirt, Hyper-V | Volume of reads |
| disk.device.write.bytes | Cumulative | B | disk ID | Pollster | Libvirt, Hyper-V | Volume of writes |
| disk.root.size | Gauge | GB | instance ID | Notification | Libvirt, Hyper-V | Size of root disk |
| disk.ephemeral.size | Gauge | GB | instance ID | Notification | Libvirt, Hyper-V | Size of ephemeral disk |
| disk.device.latency | Gauge | ms | disk ID | Pollster | Hyper-V | Average disk latency per device |
| disk.device.iops | Gauge | count/s | disk ID | Pollster | Hyper-V | Average disk IOPS per device |
| disk.device.capacity | Gauge | B | disk ID | Pollster | Libvirt | The amount of disk per device that the instance can see |
| disk.device.allocation | Gauge | B | disk ID | Pollster | Libvirt | The amount of disk per device occupied by the instance on the host machine |
| disk.device.usage | Gauge | B | disk ID | Pollster | Libvirt | The physical size in bytes of the image container on the host per device |
| network.incoming.bytes | Cumulative | B | interface ID | Pollster | Libvirt, Hyper-V | Number of incoming bytes |
| network.outgoing.bytes | Cumulative | B | interface ID | Pollster | Libvirt, Hyper-V | Number of outgoing bytes |
| network.incoming.packets | Cumulative | packet | interface ID | Pollster | Libvirt, Hyper-V | Number of incoming packets |
| network.outgoing.packets | Cumulative | packet | interface ID | Pollster | Libvirt, Hyper-V | Number of outgoing packets |
| **Meters added in the Newton release** | | | | | | |
| cpu_l3_cache | Gauge | B | instance ID | Pollster | Libvirt | L3 cache used by the instance |
| memory.bandwidth.total | Gauge | B/s | instance ID | Pollster | Libvirt | Total system bandwidth from one level of cache |
| memory.bandwidth.local | Gauge | B/s | instance ID | Pollster | Libvirt | Bandwidth of memory traffic for a memory controller |
| perf.cpu.cycles | Gauge | cycle | instance ID | Pollster | Libvirt | The number of CPU cycles one instruction needs |
| perf.instructions | Gauge | instruction | instance ID | Pollster | Libvirt | The count of instructions |
| perf.cache.references | Gauge | count | instance ID | Pollster | Libvirt | The count of cache hits |
| perf.cache.misses | Gauge | count | instance ID | Pollster | Libvirt | The count of cache misses |
| **Meters added in the Ocata release** | | | | | | |
| network.incoming.packets.drop | Cumulative | packet | interface ID | Pollster | Libvirt | Number of incoming dropped packets |
| network.outgoing.packets.drop | Cumulative | packet | interface ID | Pollster | Libvirt | Number of outgoing dropped packets |
| network.incoming.packets.error | Cumulative | packet | interface ID | Pollster | Libvirt | Number of incoming error packets |
| network.outgoing.packets.error | Cumulative | packet | interface ID | Pollster | Libvirt | Number of outgoing error packets |
| **Meters added in the Pike release** | | | | | | |
| memory.swap.in | Cumulative | MB | instance ID | Pollster | Libvirt | Memory swap in |
| memory.swap.out | Cumulative | MB | instance ID | Pollster | Libvirt | Memory swap out |
| **Meters added in the Queens release** | | | | | | |
| disk.device.read.latency | Cumulative | ns | disk ID | Pollster | Libvirt | Total time read operations have taken |
| disk.device.write.latency | Cumulative | ns | disk ID | Pollster | Libvirt | Total time write operations have taken |
Note
To enable the libvirt `memory.usage` support, you need to install libvirt version 1.1.1+ and QEMU version 1.5+, and you also need to prepare a suitable balloon driver in the image. This applies particularly to Windows guests; most modern Linux distributions already have it built in. Telemetry is not able to fetch the `memory.usage` samples without the image balloon driver.
Note
To enable libvirt `disk.*` support when running on RBD-backed shared storage, you need to install libvirt version 1.2.16+.
OpenStack Compute is capable of collecting CPU-related meters from the compute host machines. To use this feature, set the `compute_monitors` option to `cpu.virt_driver` in the `nova.conf` configuration file. For further information, see the Compute configuration section in the Compute chapter of the OpenStack Configuration Reference.
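For example, the relevant `nova.conf` setting might look like the following minimal sketch (the option lives in the `[DEFAULT]` section):

```ini
[DEFAULT]
# Enable the virt-driver based CPU monitor on the compute host; the
# resulting metrics are picked up by Telemetry as compute.node.cpu.* meters.
compute_monitors = cpu.virt_driver
```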
The following host machine related meters are collected for OpenStack Compute:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| compute.node.cpu.frequency | Gauge | MHz | host ID | Notification | CPU frequency |
| compute.node.cpu.kernel.time | Cumulative | ns | host ID | Notification | CPU kernel time |
| compute.node.cpu.idle.time | Cumulative | ns | host ID | Notification | CPU idle time |
| compute.node.cpu.user.time | Cumulative | ns | host ID | Notification | CPU user mode time |
| compute.node.cpu.iowait.time | Cumulative | ns | host ID | Notification | CPU I/O wait time |
| compute.node.cpu.kernel.percent | Gauge | % | host ID | Notification | CPU kernel percentage |
| compute.node.cpu.idle.percent | Gauge | % | host ID | Notification | CPU idle percentage |
| compute.node.cpu.user.percent | Gauge | % | host ID | Notification | CPU user mode percentage |
| compute.node.cpu.iowait.percent | Gauge | % | host ID | Notification | CPU I/O wait percentage |
| compute.node.cpu.percent | Gauge | % | host ID | Notification | CPU utilization |
IPMI meters
Telemetry captures notifications that are emitted by the Bare metal service. The source of these notifications is the IPMI sensors that collect data from the host machine.
Alternatively, IPMI meters can be generated by deploying the ceilometer-agent-ipmi on each IPMI-capable node. For further information about the IPMI agent, see IPMI agent.
Warning
To avoid duplication of metering data and unnecessary load on the IPMI interface, do not deploy the IPMI agent on nodes that are managed by the Bare metal service, and keep the `conductor.send_sensor_data` option set to `False` in the `ironic.conf` configuration file.
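A minimal sketch of the corresponding `ironic.conf` setting:

```ini
[conductor]
# Keep Bare metal sensor-data collection disabled when the
# ceilometer-agent-ipmi polls the nodes directly.
send_sensor_data = false
```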
The following IPMI sensor meters are recorded:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| hardware.ipmi.fan | Gauge | RPM | fan sensor | Notification, Pollster | Fan speed in revolutions per minute (RPM) |
| hardware.ipmi.temperature | Gauge | C | temperature sensor | Notification, Pollster | Temperature reading from sensor |
| hardware.ipmi.current | Gauge | W | current sensor | Notification, Pollster | Current reading from sensor |
| hardware.ipmi.voltage | Gauge | V | voltage sensor | Notification, Pollster | Voltage reading from sensor |
Note
The sensor data is not available in the Bare metal service by default. To enable the meters and configure this module to emit notifications about the measured values, see the Installation Guide for the Bare metal service.
Besides generic IPMI sensor data, the following Intel Node Manager meters are recorded from capable platforms:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| hardware.ipmi.node.power | Gauge | W | host ID | Pollster | Current power of the system |
| hardware.ipmi.node.temperature | Gauge | C | host ID | Pollster | Current temperature of the system |
| hardware.ipmi.node.inlet_temperature | Gauge | C | host ID | Pollster | Inlet temperature of the system |
| hardware.ipmi.node.outlet_temperature | Gauge | C | host ID | Pollster | Outlet temperature of the system |
| hardware.ipmi.node.airflow | Gauge | CFM | host ID | Pollster | Volumetric airflow of the system, expressed as 1/10th of CFM |
| hardware.ipmi.node.cups | Gauge | CUPS | host ID | Pollster | CUPS (Compute Usage Per Second) index data of the system |
| hardware.ipmi.node.cpu_util | Gauge | % | host ID | Pollster | CPU CUPS utilization of the system |
| hardware.ipmi.node.mem_util | Gauge | % | host ID | Pollster | Memory CUPS utilization of the system |
| hardware.ipmi.node.io_util | Gauge | % | host ID | Pollster | IO CUPS utilization of the system |
SNMP-based meters
Telemetry supports gathering SNMP-based generic host meters. To collect this data, you need to run `snmpd` on each target host.
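As a quick check that `snmpd` is reachable from the host running the polling agent, you can query the standard system description OID (assuming SNMPv2c and the example community string `public`; the address is a placeholder):

```console
$ snmpget -v2c -c public 192.0.2.10 1.3.6.1.2.1.1.1.0
```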
The following meters are available about the host machines by using SNMP:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| hardware.cpu.load.1min | Gauge | process | host ID | Pollster | CPU load in the past 1 minute |
| hardware.cpu.load.5min | Gauge | process | host ID | Pollster | CPU load in the past 5 minutes |
| hardware.cpu.load.15min | Gauge | process | host ID | Pollster | CPU load in the past 15 minutes |
| hardware.disk.size.total | Gauge | KB | disk ID | Pollster | Total disk size |
| hardware.disk.size.used | Gauge | KB | disk ID | Pollster | Used disk size |
| hardware.memory.total | Gauge | KB | host ID | Pollster | Total physical memory size |
| hardware.memory.used | Gauge | KB | host ID | Pollster | Used physical memory size |
| hardware.memory.buffer | Gauge | KB | host ID | Pollster | Physical memory buffer size |
| hardware.memory.cached | Gauge | KB | host ID | Pollster | Cached physical memory size |
| hardware.memory.swap.total | Gauge | KB | host ID | Pollster | Total swap space size |
| hardware.memory.swap.avail | Gauge | KB | host ID | Pollster | Available swap space size |
| hardware.network.incoming.bytes | Cumulative | B | interface ID | Pollster | Bytes received by network interface |
| hardware.network.outgoing.bytes | Cumulative | B | interface ID | Pollster | Bytes sent by network interface |
| hardware.network.outgoing.errors | Cumulative | packet | interface ID | Pollster | Sending errors of network interface |
| hardware.network.ip.incoming.datagrams | Cumulative | datagrams | host ID | Pollster | Number of received datagrams |
| hardware.network.ip.outgoing.datagrams | Cumulative | datagrams | host ID | Pollster | Number of sent datagrams |
| hardware.system_stats.io.incoming.blocks | Cumulative | blocks | host ID | Pollster | Aggregated number of blocks received to block device |
| hardware.system_stats.io.outgoing.blocks | Cumulative | blocks | host ID | Pollster | Aggregated number of blocks sent to block device |
| **Meters added in the Queens release** | | | | | |
| hardware.disk.read.bytes | Gauge | B | disk ID | Pollster | Bytes read from device since boot |
| hardware.disk.write.bytes | Gauge | B | disk ID | Pollster | Bytes written to device since boot |
| hardware.disk.read.requests | Gauge | requests | disk ID | Pollster | Read requests to device since boot |
| hardware.disk.write.requests | Gauge | requests | disk ID | Pollster | Write requests to device since boot |
| **Meters added in the Stein release** | | | | | |
| hardware.cpu.user | Gauge | tick | host ID | Pollster | CPU user in tick |
| hardware.cpu.system | Gauge | tick | host ID | Pollster | CPU system in tick |
| hardware.cpu.nice | Gauge | tick | host ID | Pollster | CPU nice in tick |
| hardware.cpu.idle | Gauge | tick | host ID | Pollster | CPU idle in tick |
| hardware.cpu.wait | Gauge | tick | host ID | Pollster | CPU wait in tick |
| hardware.cpu.kernel | Gauge | tick | host ID | Pollster | CPU kernel in tick |
| hardware.cpu.interrupt | Gauge | tick | host ID | Pollster | CPU interrupt in tick |
OpenStack Image service
The following meters are collected for OpenStack Image service:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| image.size | Gauge | B | image ID | Notification, Pollster | Size of the uploaded image |
| image.download | Delta | B | image ID | Notification | Image is downloaded |
| image.serve | Delta | B | image ID | Notification | Image is served out |
OpenStack Block Storage
The following meters are collected for OpenStack Block Storage:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| volume.size | Gauge | GB | volume ID | Notification | Size of the volume |
| snapshot.size | Gauge | GB | snapshot ID | Notification | Size of the snapshot |
| **Meters added in the Queens release** | | | | | |
| volume.provider.capacity.total | Gauge | GB | hostname | Notification | Total volume capacity on host |
| volume.provider.capacity.free | Gauge | GB | hostname | Notification | Free volume capacity on host |
| volume.provider.capacity.allocated | Gauge | GB | hostname | Notification | Assigned volume capacity on host by Cinder |
| volume.provider.capacity.provisioned | Gauge | GB | hostname | Notification | Assigned volume capacity on host |
| volume.provider.capacity.virtual_free | Gauge | GB | hostname | Notification | Virtual free volume capacity on host |
| volume.provider.pool.capacity.total | Gauge | GB | hostname#pool | Notification | Total volume capacity in pool |
| volume.provider.pool.capacity.free | Gauge | GB | hostname#pool | Notification | Free volume capacity in pool |
| volume.provider.pool.capacity.allocated | Gauge | GB | hostname#pool | Notification | Assigned volume capacity in pool by Cinder |
| volume.provider.pool.capacity.provisioned | Gauge | GB | hostname#pool | Notification | Assigned volume capacity in pool |
| volume.provider.pool.capacity.virtual_free | Gauge | GB | hostname#pool | Notification | Virtual free volume capacity in pool |
OpenStack Object Storage
The following meters are collected for OpenStack Object Storage:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| storage.objects | Gauge | object | storage ID | Pollster | Number of objects |
| storage.objects.size | Gauge | B | storage ID | Pollster | Total size of stored objects |
| storage.objects.containers | Gauge | container | storage ID | Pollster | Number of containers |
| storage.objects.incoming.bytes | Delta | B | storage ID | Notification | Number of incoming bytes |
| storage.objects.outgoing.bytes | Delta | B | storage ID | Notification | Number of outgoing bytes |
| storage.containers.objects | Gauge | object | storage ID/container | Pollster | Number of objects in container |
| storage.containers.objects.size | Gauge | B | storage ID/container | Pollster | Total size of stored objects in container |
Ceph Object Storage
In order to gather meters from Ceph, you have to install and configure the Ceph Object Gateway (radosgw) as described in the Installation Manual. You also have to enable usage logging in order to get the related meters from Ceph. You will need an `admin` user with `users`, `buckets`, `metadata`, and `usage` caps configured.
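Usage logging is typically enabled with the `rgw enable usage log` setting in the gateway's `ceph.conf` section, and the caps can be granted with `radosgw-admin`. A hedged sketch, assuming an existing user with the ID `admin`:

```console
$ radosgw-admin caps add --uid=admin \
    --caps="usage=read,write;metadata=read;users=read;buckets=read"
```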
In order to access Ceph from Telemetry, you need to specify a service group for `radosgw` in the `ceilometer.conf` configuration file along with the `access_key` and `secret_key` of the `admin` user mentioned above.
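A minimal sketch of the relevant `ceilometer.conf` entries, assuming the `[service_types]` and `[rgw_admin_credentials]` section names used by recent Ceilometer releases (the key values are placeholders):

```ini
[service_types]
# Service type under which the Object Gateway endpoint is registered.
radosgw = object-store

[rgw_admin_credentials]
# S3-style credentials of the radosgw admin user mentioned above.
access_key = ADMIN_ACCESS_KEY
secret_key = ADMIN_SECRET_KEY
```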
The following meters are collected for Ceph Object Storage:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| radosgw.objects | Gauge | object | storage ID | Pollster | Number of objects |
| radosgw.objects.size | Gauge | B | storage ID | Pollster | Total size of stored objects |
| radosgw.objects.containers | Gauge | container | storage ID | Pollster | Number of containers |
| radosgw.api.request | Gauge | request | storage ID | Pollster | Number of API requests against Ceph Object Gateway (radosgw) |
| radosgw.containers.objects | Gauge | object | storage ID/container | Pollster | Number of objects in container |
| radosgw.containers.objects.size | Gauge | B | storage ID/container | Pollster | Total size of stored objects in container |
Note
The usage-related information may not be updated right after an upload or download, because the Ceph Object Gateway needs time to update the usage properties. For instance, the default configuration needs approximately 30 minutes to generate the usage logs.
OpenStack Identity
The following meters are collected for OpenStack Identity:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| identity.authenticate.success | Delta | user | user ID | Notification | User successfully authenticated |
| identity.authenticate.pending | Delta | user | user ID | Notification | User pending authentication |
| identity.authenticate.failure | Delta | user | user ID | Notification | User failed to authenticate |
OpenStack Networking
The following meters are collected for OpenStack Networking:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| bandwidth | Delta | B | label ID | Notification | Bytes through this L3 metering label |
SDN controllers
The following meters are collected for SDN:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| switch | Gauge | switch | switch ID | Pollster | Existence of switch |
| switch.port | Gauge | port | switch ID | Pollster | Existence of port |
| switch.port.receive.packets | Cumulative | packet | switch ID | Pollster | Packets received on port |
| switch.port.transmit.packets | Cumulative | packet | switch ID | Pollster | Packets transmitted on port |
| switch.port.receive.bytes | Cumulative | B | switch ID | Pollster | Bytes received on port |
| switch.port.transmit.bytes | Cumulative | B | switch ID | Pollster | Bytes transmitted on port |
| switch.port.receive.drops | Cumulative | packet | switch ID | Pollster | Drops received on port |
| switch.port.transmit.drops | Cumulative | packet | switch ID | Pollster | Drops transmitted on port |
| switch.port.receive.errors | Cumulative | packet | switch ID | Pollster | Errors received on port |
| switch.port.transmit.errors | Cumulative | packet | switch ID | Pollster | Errors transmitted on port |
| switch.port.receive.frame_error | Cumulative | packet | switch ID | Pollster | Frame alignment errors received on port |
| switch.port.receive.overrun_error | Cumulative | packet | switch ID | Pollster | Overrun errors received on port |
| switch.port.receive.crc_error | Cumulative | packet | switch ID | Pollster | CRC errors received on port |
| switch.port.collision.count | Cumulative | count | switch ID | Pollster | Collisions on port |
| switch.table | Gauge | table | switch ID | Pollster | Duration of table |
| switch.table.active.entries | Gauge | entry | switch ID | Pollster | Active entries in table |
| switch.table.lookup.packets | Gauge | packet | switch ID | Pollster | Lookup packets for table |
| switch.table.matched.packets | Gauge | packet | switch ID | Pollster | Packet matches for table |
| switch.flow | Gauge | flow | switch ID | Pollster | Duration of flow |
| switch.flow.duration.seconds | Gauge | s | switch ID | Pollster | Duration of flow in seconds |
| switch.flow.duration.nanoseconds | Gauge | ns | switch ID | Pollster | Duration of flow in nanoseconds |
| switch.flow.packets | Cumulative | packet | switch ID | Pollster | Packets received |
| switch.flow.bytes | Cumulative | B | switch ID | Pollster | Bytes received |
| **Meters added in the Pike release** | | | | | |
| port | Gauge | port | port ID | Pollster | Existence of port |
| port.uptime | Gauge | s | port ID | Pollster | Uptime of port |
| port.receive.packets | Cumulative | packet | port ID | Pollster | Packets received on port |
| port.transmit.packets | Cumulative | packet | port ID | Pollster | Packets transmitted on port |
| port.receive.bytes | Cumulative | B | port ID | Pollster | Bytes received on port |
| port.transmit.bytes | Cumulative | B | port ID | Pollster | Bytes transmitted on port |
| port.receive.drops | Cumulative | packet | port ID | Pollster | Drops received on port |
| port.receive.errors | Cumulative | packet | port ID | Pollster | Errors received on port |
| switch.ports | Gauge | ports | switch ID | Pollster | Number of ports on switch |
| switch.port.uptime | Gauge | s | switch ID | Pollster | Uptime of switch |
These meters are available for OpenFlow-based switches. To enable these meters, each driver needs to be properly configured.
Load-Balancer-as-a-Service (LBaaS v1)
The following meters are collected for LBaaS v1:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| network.services.lb.pool | Gauge | pool | pool ID | Pollster | Existence of a LB pool |
| network.services.lb.vip | Gauge | vip | vip ID | Pollster | Existence of a LB VIP |
| network.services.lb.member | Gauge | member | member ID | Pollster | Existence of a LB member |
| network.services.lb.health_monitor | Gauge | health_monitor | monitor ID | Pollster | Existence of a LB health probe |
| network.services.lb.total.connections | Cumulative | connection | pool ID | Pollster | Total connections on a LB |
| network.services.lb.active.connections | Gauge | connection | pool ID | Pollster | Active connections on a LB |
| network.services.lb.incoming.bytes | Gauge | B | pool ID | Pollster | Number of incoming bytes |
| network.services.lb.outgoing.bytes | Gauge | B | pool ID | Pollster | Number of outgoing bytes |
Load-Balancer-as-a-Service (LBaaS v2)
The following meters are collected for LBaaS v2:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| network.services.lb.pool | Gauge | pool | pool ID | Pollster | Existence of a LB pool |
| network.services.lb.listener | Gauge | listener | listener ID | Pollster | Existence of a LB listener |
| network.services.lb.member | Gauge | member | member ID | Pollster | Existence of a LB member |
| network.services.lb.health_monitor | Gauge | health_monitor | monitor ID | Pollster | Existence of a LB health probe |
| network.services.lb.loadbalancer | Gauge | loadbalancer | loadbalancer ID | Pollster | Existence of a LB loadbalancer |
| network.services.lb.total.connections | Cumulative | connection | pool ID | Pollster | Total connections on a LB |
| network.services.lb.active.connections | Gauge | connection | pool ID | Pollster | Active connections on a LB |
| network.services.lb.incoming.bytes | Gauge | B | pool ID | Pollster | Number of incoming bytes |
| network.services.lb.outgoing.bytes | Gauge | B | pool ID | Pollster | Number of outgoing bytes |
Note
The above meters are experimental and may generate a large load against the Neutron APIs. A future enhancement will be implemented when Neutron supports the new APIs.
VPN-as-a-Service (VPNaaS)
The following meters are collected for VPNaaS:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| network.services.vpn | Gauge | vpnservice | vpn ID | Pollster | Existence of a VPN |
| network.services.vpn.connections | Gauge | ipsec_site_connection | connection ID | Pollster | Existence of an IPSec connection |
Firewall-as-a-Service (FWaaS)
The following meters are collected for FWaaS:
| Name | Type | Unit | Resource | Origin | Note |
|---|---|---|---|---|---|
| **Meters added in the Mitaka release or earlier** | | | | | |
| network.services.firewall | Gauge | firewall | firewall ID | Pollster | Existence of a firewall |
| network.services.firewall.policy | Gauge | firewall_policy | firewall ID | Pollster | Existence of a firewall policy |