Entity store
To use the entity store, you must have the appropriate privileges. For more information, refer to Entity risk scoring requirements.
The entity store allows you to query, reconcile, maintain, and persist entity metadata such as:
- Ingested log data
- Data from integrated identity providers (such as Active Directory, Microsoft Entra ID, and Okta)
- Data from internal and external alerts
- External asset repository data
- Asset criticality data
- Entity risk score data
The entity store can hold any entity type observed by Elastic Security. It allows you to view and query select entities represented in your indices without needing to perform real-time searches of observable data. The entity store extracts entities from all indices in the Elastic Security default data view.
When the entity store is enabled, the following resources are created for the active space:
- A latest entity alias, `entities-latest-<space-id>`, backed by the concrete index `.entities.v2.latest.security_<space-id>-<mapping_version>`. Query this alias to retrieve the current state of all entities in the entity store.
- History snapshot indices, `.entities.v2.history.security_<space-id>.<timestamp>`, which store daily snapshots of entity data and enable historical analysis of entity attributes over time.
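Once the alias exists, you can inspect the current entity set by searching it directly. A minimal sketch, assuming the default space (substitute your own space ID):

```console
GET entities-latest-default/_search
{
  "size": 5,
  "query": { "match_all": {} }
}
```

Historical questions (for example, what a host looked like on a given day) go against the matching history snapshot index for that day rather than the alias.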
Starting in 9.4, the entity store uses ES|QL-based LOOKUP JOIN queries instead of Elasticsearch transforms and moves from transform-based indices (`.entities.v1.*`) to ES|QL-based indices (`.entities.v2.*`). When you upgrade from a previous version, existing transforms, enrich policies, and ingest pipelines are removed. Your existing index data is retained. After the entity store is enabled, historical entity data from logs ingested within the last 3 hours is extracted.
Starting in 9.4, the entity store replaces the previous per-type indices with a single shared latest alias. Update any direct queries or automations that reference `.entities.v1.latest.security_user_*`, `.entities.v1.latest.security_host_*`, or `.entities.v1.latest.security_service_*` to use `entities-latest-<space-id>` instead. The previous API routes are removed.
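As an illustration, a query that previously read from a per-type v1 index can usually be pointed at the shared alias with a filter on the entity type. This is a sketch for the default space; the `entity.type` and `entity.name` field names are assumptions, so check them against your mappings:

```esql
FROM entities-latest-default
| WHERE entity.type == "user"
| KEEP entity.name, entity.type, @timestamp
| LIMIT 10
```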
For each entity type (hosts, users, and services):
- Elasticsearch resources, such as transforms, ingest pipelines, and enrich policies.
- Data and fields for each entity.
- The `.entities.v1.latest.security_user_<space-id>`, `.entities.v1.latest.security_host_<space-id>`, and `.entities.v1.latest.security_services_<space-id>` indices, which contain field mappings for users, hosts, and services respectively. You can query these indices to see a list of fields that are mapped in the entity store.
- Snapshot indices (`.entities.v1.history.<ISO_date>.*`) that store daily snapshots of entity data, enabling historical analysis of attributes over time.
- Reset indices (`.entities.v1.reset.*`) that ensure entity timestamps are updated correctly in the latest index, supporting accurate time-based queries and future data resets.
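To see which fields are mapped for a given entity type, you can fetch the index mapping directly. For example, for user entities in the default space:

```console
GET .entities.v1.latest.security_user_default/_mapping
```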
The entity store is automatically enabled when you turn on risk scoring. In the default Kibana space, both are enabled automatically. In non-default spaces, you must enable them manually:
- Find the Entity Analytics management page in the navigation menu or by using the global search field.
- Turn the toggle on.
If you've upgraded from a previous version, and the entity store was installed in any space, it's automatically migrated after the upgrade. Your existing index data is retained.
To enable the entity store:
- Find Entity Store in the navigation menu or by using the global search field.
- Turn the toggle on.
Once you enable the entity store, the Entities section appears on the following pages:
Once the entity store is enabled, you may want to clear the stored data and start fresh. For example, if you normalized the user.name, host.name, or service.name fields, clearing the entity store data would allow you to repopulate the entity store with the updated, normalized values. This action removes all previously extracted entity information, enabling new data extraction and analysis.
The impact of clearing entity store data on risk scores and asset criticality depends on your version:
Clearing entity store data does not delete your source data. However, asset criticality assignments will need to be reapplied, and risk scoring will run again for the new entities repopulated into the store.
Clearing entity store data does not delete your source data, assigned entity risk scores, or asset criticality assignments.
Clearing entity store data permanently deletes persisted user, host, and service records, and data is no longer available for analysis. Proceed with caution, as this cannot be undone.
To clear entity data:
- Find the Entity Analytics management page in the navigation menu or by using the global search field.
- Click Clear Entity Data.
- Find Entity Store in the navigation menu or by using the global search field.
- Click Clear Entity Data.
Once the entity store is enabled, you can verify which engines are installed and their statuses from the Engine Status tab. This tab shows a list of installed resources for each installed entity. Click the resource link to navigate to the resource page and view more information.
To access the Engine Status tab, find Entity Analytics in the navigation menu or by using the global search field.
To access the Engine Status tab, find Entity Store in the navigation menu or by using the global search field.
The entity store creates user, host, and service entities from data in supported source indices (mainly the Security default data view) when the incoming events include the ECS fields needed to identify those entities. Any integration that populates standard ECS identity fields — such as host.*, user.*, service.*, and related event.* fields — can contribute to entity creation, as long as the data contains enough information for the entity store to identify and build the entity.
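As an illustration, a hypothetical minimal document like the following carries enough ECS identity information for the entity store to derive a host entity (all field values here are invented for the example):

```json
{
  "@timestamp": "2025-01-01T00:00:00.000Z",
  "host": { "name": "web-01", "os": { "family": "linux" } },
  "event": { "kind": "event", "category": ["host"] }
}
```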
Examples of supported integrations include:
Identity and account sources:
- Active Directory Entity Analytics
- Microsoft Entra ID Entity Analytics
- Okta Entity Analytics
- Google Workspace
- Microsoft 365
- AWS CloudTrail
Endpoint and host sources:
The entity store runs scheduled log extraction to keep entity data up to date.
To determine whether log extraction is slow or unhealthy, check the Engine Status tab or query the Entity store status API.
A process might be slow if:
- New entities are not appearing as expected.
- The last successful execution does not appear to advance (`lastExecutionTimestamp`). You can verify this only through the API.
A process might be unhealthy if:
- The engine enters an `error` state.
- Component health indicators are degraded.
- Extraction appears stalled and no forward progress is visible.
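To check these indicators programmatically, you can query the entity store status API from Kibana. A sketch using Elastic's console syntax for Kibana requests; verify the exact route against the API reference for your version:

```console
GET kbn:/api/entity_store/status
```

The response reports each engine's status and component health, including the `lastExecutionTimestamp` mentioned above.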
If log extraction appears slow, you can modify the following log extraction configuration settings to balance freshness, coverage, and query cost.
Use frequency to control how often extraction runs.
- Decrease frequency if extraction is healthy but too resource-intensive and Elasticsearch CPU utilization is too high. The minimum supported value is `30s`.
Use docsLimit to control how many entities can be processed in one extraction page.
- Lower it if Kibana is consuming too much memory.
- Default: `10000` entities.
Use maxLogsPerPage to cap the raw-log slice size before aggregation.
- Lower it if queries are too heavy or time-consuming.
- Default: `40000` documents.
Start with maxLogsPerPage rather than docsLimit when extraction is slow or unstable, because it reduces the amount of raw source data processed in each extraction operation. Adjust docsLimit if tuning maxLogsPerPage is insufficient and you still see performance issues.
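Putting this guidance together, a first tuning pass for slow extraction might halve `maxLogsPerPage` while leaving `docsLimit` at its default. This is a hypothetical configuration fragment; where you supply these values (for example, through the entity store APIs) depends on your version:

```json
{
  "docsLimit": 10000,
  "maxLogsPerPage": 20000
}
```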