
Ceph Integration

<div class="condensed-table">

| | |
| --- | --- |
| Version | 1.7.2 |
| Compatible Kibana version(s) | 8.13.0 or higher |
| Supported Serverless project types | Security, Observability |
| Subscription level | Basic |
| Level of support | Elastic |

</div>

Ceph is a framework for distributed storage clusters. The frontend client framework is based on RADOS (Reliable Autonomic Distributed Object Store). Clients can access Ceph storage clusters directly with librados, or through RADOSGW (object storage), RBD (block storage), and CephFS (file storage). The backend server framework consists of several daemons that manage nodes, and backend object stores that hold the users' actual data.
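
To make those access paths concrete, here is a minimal sketch of one object, block, and file operation from a client host; the pool, image, and mount details are hypothetical, and the exact options depend on how authentication is configured in your cluster:

```
# Object storage through librados (rados CLI): store a local file as an object
# in a pool ("mypool" and "myobject" are hypothetical names).
rados -p mypool put myobject ./example.txt

# Block storage through RBD: create a 1 GiB image in the same hypothetical pool.
rbd create mypool/myimage --size 1024

# File storage through CephFS: kernel mount of the file system (monitor address,
# user name, and secret file are assumptions about your deployment).
sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
```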

Use the Ceph integration to:

  • Collect metrics related to the cluster disk, cluster health, cluster status, Object Storage Daemon (OSD) performance, OSD pool stats, OSD tree, and pool disk.
  • Create visualizations to monitor, measure, and analyze usage trends and key data, and derive business insights.
  • Create alerts to reduce the mean time to detect (MTTD) and the mean time to resolve (MTTR) by referencing relevant logs when troubleshooting an issue.

The Ceph integration collects metrics data.

Metrics give you insight into the statistics of the Ceph cluster. The metric data streams collected by the Ceph integration are cluster_disk, cluster_health, cluster_status, osd_performance, osd_pool_stats, osd_tree, and pool_disk, so that you can monitor and troubleshoot the performance of the Ceph instance.

Data streams:

  • cluster_disk: Collects information related to the overall storage of the cluster.
  • cluster_health: Collects information related to the health of the cluster.
  • cluster_status: Collects information related to the status of the cluster.
  • osd_performance: Collects information related to Object Storage Daemon (OSD) performance.
  • osd_pool_stats: Collects information related to client I/O rates.
  • osd_tree: Collects information related to the structure of the Object Storage Daemon (OSD) tree.
  • pool_disk: Collects information related to the disk usage of each pool.

Note:

  • You can monitor and view the metrics in the ingested documents for Ceph using the logs-* index pattern in Discover; see the example filter after this note.
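
For example, filtering Discover with the KQL expression `data_stream.dataset : "ceph.cluster_disk"` narrows the view to cluster disk documents; the dataset name here assumes the standard `<package>.<data_stream>` naming used by Elastic integrations.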

This integration has been tested against Ceph 15.2.17 (Octopus) and 14.2.22 (Nautilus).

To find out the Ceph version of your instance, use one of the following approaches:

  1. On the Ceph Dashboard, go to Help > About in the top right corner of the screen to see the Ceph version.
  2. Run the following command on the Ceph instance:

```
ceph version
```

You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it. You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware.

In order to ingest data from Ceph, you must have the restful module enabled on the Ceph Manager (ceph-mgr) and an API user with an API secret key for that module.
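
If the restful module is not yet enabled, a typical setup sequence looks roughly like the following; this is a sketch based on the standard restful module commands, and the key name `api` and the self-signed certificate are assumptions about a simple test deployment:

```
# Enable the restful module on the Ceph Manager (ceph-mgr).
ceph mgr module enable restful

# The restful module serves HTTPS, so it needs a certificate; a self-signed
# certificate is the simplest option for testing.
ceph restful create-self-signed-cert

# Create an API user named "api"; the command prints its generated secret key.
ceph restful create-key api
```

The key printed by `ceph restful create-key` is the same value that `ceph restful list-keys` (shown below) returns for that user.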

For step-by-step instructions on how to set up an integration, see the Getting started guide.

You need the following information from your Ceph instance to configure this integration in Elastic:

Host Configuration Format: `http[s]://<ceph-mgr>:<port>`

Example Host Configuration: `https://127.0.0.1:8003`

To list all of your API keys, run the following command on the Ceph instance:

```
ceph restful list-keys
```

The `ceph restful list-keys` command outputs JSON:

```
{
  "api": "52dffd92-a103-4a10-bfce-5b60f48f764e"
}
```

In the above JSON, `api` is the API User and `52dffd92-a103-4a10-bfce-5b60f48f764e` is the API Secret Key to use when configuring the integration.
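
Before configuring the integration, you can optionally verify the host and credentials by querying the restful API directly. The sketch below is an assumption-based example: it uses curl, the example key from above, and the `/server` endpoint of the restful module, with `-k` to accept a self-signed certificate:

```
# Hypothetical connectivity check against the ceph-mgr restful API.
curl -k -u api:52dffd92-a103-4a10-bfce-5b60f48f764e https://127.0.0.1:8003/server
```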

After the integration is successfully configured, click the Assets tab of the Ceph integration to see a list of available dashboards. Click the dashboard for your configured data stream; it should be populated with the required data.

  • If host.ip appears conflicted under the logs-* data view, this issue can be resolved by reindexing the Cluster Disk, Cluster Health, Cluster Status, OSD Performance, OSD Pool Stats, OSD Tree, and Pool Disk data streams.

This is the cluster_disk data stream. This data stream collects metrics related to the total storage, available storage and used storage of cluster disk.

ECS Field Reference

Please refer to the ECS Field Reference documentation for detailed information on ECS fields.

This is the cluster_health data stream. This data stream collects metrics related to the cluster health.

ECS Field Reference

Please refer to the ECS Field Reference documentation for detailed information on ECS fields.

This is the cluster_status data stream. This data stream collects metrics related to cluster health status, number of monitors in the cluster, cluster version, cluster placement group (pg) count, cluster osd states and cluster storage.

ECS Field Reference

Please refer to the ECS Field Reference documentation for detailed information on ECS fields.

This is the osd_performance data stream. This data stream collects metrics related to Object Storage Daemon (OSD) id, commit latency and apply latency.

ECS Field Reference

Please refer to the ECS Field Reference documentation for detailed information on ECS fields.

This is the osd_pool_stats data stream. This data stream collects metrics related to Object Storage Daemon (OSD) client I/O rates.

ECS Field Reference

Please refer to the ECS Field Reference documentation for detailed information on ECS fields.

This is the osd_tree data stream. This data stream collects metrics related to Object Storage Daemon (OSD) tree id, name, status, exists, crush_weight, etc.

ECS Field Reference

Please refer to the ECS Field Reference documentation for detailed information on ECS fields.

This is the pool_disk data stream. This data stream collects metrics related to pool id, pool name, pool objects, used bytes and available bytes of the pool disk.

ECS Field Reference

Please refer to the ECS Field Reference documentation for detailed information on ECS fields.