Installing in an air-gapped environment
Some components of the Elastic Stack require additional configuration and local dependencies in order to deploy in environments without internet access. This guide gives an overview of the air-gapped setup scenario and ties together the existing documentation for the individual parts of the stack.
- 1. Self-Managed Install (Linux)
- 2. Kubernetes & OpenShift Install
- 3. Elastic Cloud Enterprise
- Appendix A - Elastic Package Registry
- Appendix B - Elastic Artifact Registry
- Appendix C - EPR Kubernetes Deployment
- Appendix D - Agent Integration Guide
Note
If you’re working in an air-gapped environment and have a subscription level that includes Support coverage, contact us if you’d like to request an offline version of the Elastic documentation.
1. Self-Managed Install (Linux) ¶
Refer to the section for each Elastic component for air-gapped installation configuration and dependencies in a self-managed Linux environment.
1.1. Elasticsearch ¶
Air-gapped install of Elasticsearch may require additional steps in order to access some of the features. General install and configuration guides are available in the Elasticsearch install documentation.
Specifically:
- To be able to use the GeoIP processor, refer to the GeoIP processor documentation for instructions on downloading and deploying the required databases.
- Refer to Machine learning for instructions on deploying the Elastic Learned Sparse EncodeR (ELSER) natural language processing (NLP) model and other trained machine learning models.
1.2. Kibana ¶
Air-gapped install of Kibana may require a number of additional services in the local network in order to access some of the features. General install and configuration guides are available in the Kibana install documentation.
Specifically:
- To be able to use Kibana mapping visualizations, you need to set up and configure the Elastic Maps Service.
- To be able to use Kibana sample data, install or update hundreds of prebuilt alert rules, and explore available data integrations, you need to set up and configure the Elastic Package Registry.
- To provide detection rule updates for Endpoint Security agents, you need to set up and configure the Elastic Endpoint Artifact Repository.
- To access Enterprise Search capabilities (in addition to the general search capabilities of Elasticsearch), you need to set up and configure Enterprise Search.
- To access the APM integration, you need to set up and configure Elastic APM.
- To install and use the Elastic documentation for Kibana AI assistants, you need to set up and configure the Elastic product documentation for Kibana.
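For example, the following kibana.yml fragment points Kibana at a locally hosted Elastic Package Registry and Elastic Maps Server. The hostnames are placeholders, and the values shown are only a sketch of the settings covered in the linked documentation:
# kibana.yml (placeholder hostnames; adjust to your environment)
xpack.fleet.registryUrl: "https://epr.internal.example:8443"
map.emsUrl: "https://maps.internal.example"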
1.3. Beats ¶
Elastic Beats are lightweight data shippers. They do not require any unique setup in the air-gapped scenario. To learn more, refer to the Beats documentation.
1.4. Logstash ¶
Logstash is a versatile data shipping and processing application. It does not require any unique setup in the air-gapped scenario. To learn more, refer to the Logstash documentation.
1.5. Elastic Agent ¶
Air-gapped install of Elastic Agent depends on the Elastic Package Registry and the Elastic Artifact Registry for most use cases. The agent itself is fairly lightweight and installs dependencies only as required by its configuration. Note that Elastic Agents need to be able to connect to the Elastic Artifact Registry directly, while Elastic Package Registry connections are handled through Kibana.
Additionally, if the Elastic Defend integration for Elastic Agent is used, then access to the Elastic Endpoint Artifact Repository is necessary in order to deploy updates for some of the detection and prevention capabilities.
To learn more about install and configuration, refer to the Elastic Agent install documentation. Make sure to check the requirements specific to running Elastic Agents in an air-gapped environment.
To get a better understanding of how to work with Elastic Agent configuration settings and policies, refer to Appendix D - Agent Integration Guide.
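As a quick sanity check (a sketch only, using placeholder hostnames), you can confirm from an agent host that the Elastic Artifact Registry is reachable directly, and from the Kibana host that the Elastic Package Registry responds:
# From an Elastic Agent host: the artifact registry must be reachable directly.
curl -sI "http://artifacts.internal.example:9080/beats/elastic-agent/elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz"
# From the Kibana host: package registry requests are made by Kibana, not by the agents.
curl -sk "https://epr.internal.example:8443/"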
1.6. Fleet Server ¶
Fleet Server is a required middleware component for any scalable deployment of the Elastic Agent. The air-gapped dependencies of Fleet Server are the same as those of the Elastic Agent.
To learn more about installing Fleet Server, refer to the Fleet Server set up documentation.
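For reference, a Fleet Server is typically enrolled by installing Elastic Agent with Fleet Server options similar to the following sketch; the URL, token, and policy ID are placeholders, and the agent binary itself should come from your local Elastic Artifact Registry:
# Placeholder values; refer to the Fleet Server set up documentation for the full option list.
sudo ./elastic-agent install \
  --fleet-server-es=https://elasticsearch.internal.example:9200 \
  --fleet-server-service-token=<service-token> \
  --fleet-server-policy=<fleet-server-policy-id>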
1.7. Elastic APM ¶
Air-gapped setup of the APM server is possible in two ways:
- By setting up one of the Elastic Agent deployments with an APM integration, as described in Switch a self-installation to the APM integration.
- Or, by installing a standalone Elastic APM Server, as described in the APM configuration documentation.
1.8. Elastic Maps Service ¶
Refer to Connect to Elastic Maps Service in the Kibana documentation to learn how to configure your firewall to connect to Elastic Maps Service, host it locally, or disable it completely.
1.9. Enterprise Search ¶
Detailed install and configuration instructions are available in the Enterprise Search install documentation.
1.10. Elastic Package Registry ¶
Air-gapped install of the Elastic Package Registry (EPR) is possible using any OCI-compatible runtime like Podman (a typical choice for RHEL-like Linux systems) or Docker. Links to the official container image and usage guide are available on the Air-gapped environments page in the Fleet and Elastic Agent Guide.
Refer to Appendix A - Elastic Package Registry for additional setup examples.
Note
Besides setting up the EPR service, you also need to configure Kibana to use this service. If using TLS with the EPR service, it is also necessary to set up Kibana to trust the certificate presented by the EPR.
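For example, one way to establish that trust (a sketch, assuming Kibana runs under systemd) is to provide the EPR's CA certificate to the Kibana process through the Node.js NODE_EXTRA_CA_CERTS environment variable; the file paths are placeholders:
# /etc/systemd/system/kibana.service.d/epr-ca.conf  (placeholder path)
[Service]
Environment=NODE_EXTRA_CA_CERTS=/etc/elastic/epr/ca.pem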
1.11. Elastic Artifact Registry ¶
Air-gapped install of the Elastic Artifact Registry is necessary to enable Elastic Agent deployments to perform self-upgrades and to install certain components needed by some of the data integrations (that is, in addition to what is retrieved from the EPR). To learn more, refer to Host your own artifact registry for binary downloads in the Fleet and Elastic Agent Guide.
Refer to Appendix B - Elastic Artifact Registry for additional setup examples.
Note
When setting up your own web server, such as NGINX, to function as the Elastic Artifact Registry, it is recommended not to use TLS, as there is currently no direct way to establish certificate trust between Elastic Agents and this service.
1.12. Elastic Endpoint Artifact Repository ¶
Air-gapped setup of this component is, essentially, identical to the setup of the Elastic Artifact Registry except that different artifacts are served. To learn more, refer to Configure offline endpoints and air-gapped environments in the Elastic Security guide.
1.13. Machine learning ¶
Some machine learning features, like natural language processing (NLP), require you to deploy trained models. To learn about deploying machine learning models in an air-gapped environment, refer to:
- Deploy ELSER in an air-gapped environment.
- Install trained models in an air-gapped environment with Eland.
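For illustration only, trained models are typically imported with the eland_import_hub_model command; in an air-gapped environment the model files are staged locally as described in the linked Eland instructions. The Elasticsearch URL below is a placeholder and the model ID is just an example:
# Placeholder URL; credentials and TLS options omitted for brevity.
eland_import_hub_model \
  --url https://elasticsearch.internal.example:9200 \
  --hub-model-id elastic/distilbert-base-cased-finetuned-conll03-english \
  --task-type ner \
  --start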
1.14. Kibana product documentation for AI Assistants ¶
Detailed install and configuration instructions are available in the Kibana AI Assistants settings documentation.
2. Kubernetes & OpenShift Install ¶
Setting up air-gapped Kubernetes or OpenShift installs of the Elastic Stack has some unique concerns, but the general dependencies are the same as in the self-managed install case on a regular Linux machine.
2.1. Elastic Kubernetes Operator (ECK) ¶
The Elastic Kubernetes operator (ECK) is an additional component in the Kubernetes or OpenShift install that handles much of the work of installing, configuring, and updating deployments of the Elastic Stack. For details, refer to the Elastic Cloud on Kubernetes install instructions.
The main requirements are:
- Syncing container images for ECK and all other Elastic Stack components over to a locally-accessible container repository.
- Modifying the ECK helm chart configuration so that ECK is aware that it is supposed to use your offline container repository instead of the public Elastic repository.
- Optionally, disabling ECK telemetry collection in the ECK helm chart. This configuration propagates to all other Elastic components, such as Kibana.
- Building your custom deployment container image for the Elastic Artifact Registry.
- Building your custom deployment container image for the Elastic Endpoint Artifact Repository.
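As a sketch of what the Helm-based install might look like with a local registry (assuming the eck-operator chart has been mirrored or downloaded locally; the value names should be verified against the chart's values.yaml for your ECK version):
# Registry hostname is a placeholder; chart value names are indicative only.
helm install elastic-operator elastic/eck-operator \
  --namespace elastic-system --create-namespace \
  --set image.repository=registry.internal.example/eck/eck-operator \
  --set config.containerRegistry=registry.internal.example \
  --set telemetry.disabled=true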
2.2. Elastic Package Registry ¶
The container image can be downloaded from the official Elastic Docker repository, as described in the Fleet and Elastic Agent air-gapped environments documentation.
This container would, ideally, run as a Kubernetes deployment. Refer to Appendix C - EPR Kubernetes Deployment for examples.
2.3. Elastic Artifact Registry ¶
A custom container would need to be created following similar instructions to setting up a web server in the self-managed install case. For example, a container file using an NGINX base image could be used to run a build similar to the example described in Appendix B - Elastic Artifact Registry.
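As a rough sketch (image name, tag, and paths are placeholders), such a build could package the artifacts downloaded in Appendix B together with the NGINX configuration into a single image:
# Build context is assumed to contain nginx.conf and the elastic-packages/ directory.
cat > Containerfile <<'EOF'
FROM docker.io/library/nginx:1.25
COPY nginx.conf /etc/nginx/nginx.conf
COPY elastic-packages/ /opt/elastic-packages/
EOF
podman build -t registry.internal.example/elastic-artifact-registry:9.0.0-beta1 .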
2.4. Elastic Endpoint Artifact Repository ¶
As with the Elastic Artifact Registry, a custom container needs to be created, following instructions similar to those for setting up a web server in the self-managed install case.
2.5. Ironbank Secure Images for Elastic ¶
Besides the public Elastic container repository, most Elastic Stack container images are also available in Platform One’s Iron Bank.
3. Elastic Cloud Enterprise ¶
To install Elastic Cloud Enterprise in an air-gapped environment, you'll need to host your own Elastic Package Registry (see 1.10. Elastic Package Registry). Refer to the ECE offline install instructions for details.
Appendix A - Elastic Package Registry ¶
The following script creates a Podman container that runs the EPR on a RHEL 8 system in a production environment; the commented-out podman generate systemd command at the end produces a systemd service file for it.
#!/usr/bin/env bash
EPR_BIND_ADDRESS="0.0.0.0"
EPR_BIND_PORT="8443"
EPR_TLS_CERT="/etc/elastic/epr/epr.pem"
EPR_TLS_KEY="/etc/elastic/epr/epr-key.pem"
EPR_IMAGE="docker.elastic.co/package-registry/distribution:9.0.0-beta1"
podman create \
  --name "elastic-epr" \
  -p "$EPR_BIND_ADDRESS:$EPR_BIND_PORT:$EPR_BIND_PORT" \
  -v "$EPR_TLS_CERT:/etc/ssl/epr.crt:ro" \
  -v "$EPR_TLS_KEY:/etc/ssl/epr.key:ro" \
  -e "EPR_ADDRESS=0.0.0.0:$EPR_BIND_PORT" \
  -e "EPR_TLS_CERT=/etc/ssl/epr.crt" \
  -e "EPR_TLS_KEY=/etc/ssl/epr.key" \
  "$EPR_IMAGE"
## generates a systemd service file in the current working directory
# podman generate systemd --new --files --name elastic-epr --restart-policy always
The following is an example of the resulting systemd service file for the EPR, launched as a Podman container.
# container-elastic-epr.service
# autogenerated by Podman 4.1.1
# Wed Oct 19 13:12:33 UTC 2022
[Unit]
Description=Podman container-elastic-epr.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=always
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
-d \
--replace \
--name elastic-epr \
-p 0.0.0.0:8443:8443 \
-v /etc/elastic/epr/epr.pem:/etc/ssl/epr.crt:ro \
-v /etc/elastic/epr/epr-key.pem:/etc/ssl/epr.key:ro \
-e EPR_ADDRESS=0.0.0.0:8443 \
-e EPR_TLS_CERT=/etc/ssl/epr.crt \
-e EPR_TLS_KEY=/etc/ssl/epr.key docker.elastic.co/package-registry/distribution:9.0.0-beta1
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
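Once generated, the unit file can be installed and enabled like any other systemd service, for example:
# Run from the directory containing the generated unit file.
sudo cp container-elastic-epr.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now container-elastic-epr.service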
Appendix B - Elastic Artifact Registry ¶
The following example script downloads artifacts from the internet to be later served by a private Elastic Artifact Registry.
#!/usr/bin/env bash
set -o nounset -o errexit -o pipefail
STACK_VERSION=9.0.0-beta1
ARTIFACT_DOWNLOADS_BASE_URL=https://artifacts.elastic.co/downloads
DOWNLOAD_BASE_DIR=${DOWNLOAD_BASE_DIR:?"Make sure to set DOWNLOAD_BASE_DIR when running this script"}
COMMON_PACKAGE_PREFIXES="apm-server/apm-server beats/auditbeat/auditbeat beats/elastic-agent/elastic-agent beats/filebeat/filebeat beats/heartbeat/heartbeat beats/metricbeat/metricbeat beats/osquerybeat/osquerybeat beats/packetbeat/packetbeat cloudbeat/cloudbeat endpoint-dev/endpoint-security fleet-server/fleet-server"
WIN_ONLY_PACKAGE_PREFIXES="beats/winlogbeat/winlogbeat"
RPM_PACKAGES="beats/elastic-agent/elastic-agent"
DEB_PACKAGES="beats/elastic-agent/elastic-agent"
function download_packages() {
    local url_suffix="$1"
    local package_prefixes="$2"
    local _url_suffixes="$url_suffix ${url_suffix}.sha512 ${url_suffix}.asc"
    local _pkg_dir=""
    local _dl_url=""
    for _download_prefix in $package_prefixes; do
        for _pkg_url_suffix in $_url_suffixes; do
            _pkg_dir=$(dirname "${DOWNLOAD_BASE_DIR}/${_download_prefix}")
            _dl_url="${ARTIFACT_DOWNLOADS_BASE_URL}/${_download_prefix}-${_pkg_url_suffix}"
            (mkdir -p "$_pkg_dir" && cd "$_pkg_dir" && curl -O "$_dl_url")
        done
    done
}
# and we download
for _os in linux windows; do
    case "$_os" in
        linux)
            PKG_URL_SUFFIX="${STACK_VERSION}-${_os}-x86_64.tar.gz"
            ;;
        windows)
            PKG_URL_SUFFIX="${STACK_VERSION}-${_os}-x86_64.zip"
            ;;
        *)
            echo "[ERROR] Unsupported OS: $_os"
            exit 1
            ;;
    esac
    download_packages "$PKG_URL_SUFFIX" "$COMMON_PACKAGE_PREFIXES"
    if [[ "$_os" = "windows" ]]; then
        download_packages "$PKG_URL_SUFFIX" "$WIN_ONLY_PACKAGE_PREFIXES"
    fi
    if [[ "$_os" = "linux" ]]; then
        download_packages "${STACK_VERSION}-x86_64.rpm" "$RPM_PACKAGES"
        download_packages "${STACK_VERSION}-amd64.deb" "$DEB_PACKAGES"
    fi
done
## selinux tweaks
# semanage fcontext -a -t "httpd_sys_content_t" '/opt/elastic-packages(/.*)?'
# restorecon -Rv /opt/elastic-packages
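The script expects DOWNLOAD_BASE_DIR to point at the directory that the web server will later serve, for example (the script filename is illustrative):
# Download everything into the directory that NGINX serves below.
DOWNLOAD_BASE_DIR=/opt/elastic-packages ./download-elastic-artifacts.sh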
The following is an example NGINX configuration for running a web server for the Elastic Artifact Registry.
user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 9080 default_server;
        server_name _;
        root /opt/elastic-packages;
        location / {
        }
    }
}
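Assuming the configuration above is installed as /etc/nginx/nginx.conf and the artifacts live under /opt/elastic-packages, a quick local check might look like this:
# Validate the configuration, restart NGINX, and request one of the downloaded packages.
sudo nginx -t && sudo systemctl restart nginx
curl -sI "http://localhost:9080/beats/elastic-agent/elastic-agent-9.0.0-beta1-linux-x86_64.tar.gz"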
Appendix C - EPR Kubernetes Deployment ¶
The following is a sample EPR Kubernetes deployment YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic-package-registry
  namespace: default
  labels:
    app: elastic-package-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elastic-package-registry
  template:
    metadata:
      name: elastic-package-registry
      labels:
        app: elastic-package-registry
    spec:
      containers:
        - name: epr
          image: docker.elastic.co/package-registry/distribution:9.0.0-beta1
          ports:
            - containerPort: 8080
              name: http
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 20
            periodSeconds: 30
          resources:
            requests:
              cpu: 125m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 512Mi
          env:
            - name: EPR_ADDRESS
              value: "0.0.0.0:8080"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elastic-package-registry
  name: elastic-package-registry
spec:
  ports:
    - port: 80
      name: http
      protocol: TCP
      targetPort: http
  selector:
    app: elastic-package-registry
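Assuming the manifest above is saved as epr-deployment.yaml (the filename is illustrative), it can be applied and then referenced from Kibana by its in-cluster service address:
kubectl apply -f epr-deployment.yaml
kubectl get deployment,service elastic-package-registry
# In the Kibana configuration (for example via kibana.yml or the ECK Kibana spec):
#   xpack.fleet.registryUrl: "http://elastic-package-registry.default.svc:80"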
Appendix D - Agent Integration Guide ¶
When configuring any integration in Elastic Agent, you need to set up integration settings within whatever policy is ultimately assigned to that agent.
D.1. Terminology ¶
Note the following terms and definitions:
- Integration
- A variety of optional capabilities that can be deployed on top of the Elastic Stack. Refer to Integrations to learn more.
- Agent integration
- An integration that requires Elastic Agent to run. For example, the Sample Data integration requires only Elasticsearch and Kibana and consists of dashboards, data, and related objects, whereas the APM integration includes some Elasticsearch objects but also needs Elastic Agent to run the APM Server.
- Package
- A set of dependencies (such as dashboards, scripts, and others) for a given integration that, typically, needs to be retrieved from the Elastic Package Registry before an integration can be correctly installed and configured.
- Agent policy
- A configuration for the Elastic Agent that may include one or more Elastic Agent integrations, and configurations for each of those integrations.
D.2. How to configure ¶
There are three ways to configure Elastic Agent integrations:
- D.2.1. Using the Kibana UI
- D.2.2. Using the kibana.yml config file
- D.2.3. Using the Kibana Fleet API
D.2.1. Using the Kibana UI ¶
Best option for: Manual configuration and users who prefer using a UI over scripting.
Example: Get started with logs and metrics
Agent policies and integration settings can be managed using the Kibana UI. For example, the logging configuration for the System integration can be edited within an Elastic Agent policy.
D.2.2. Using the kibana.yml config file ¶
Good option for: Declarative configuration and users who need reproducible and automated deployments.
Example: Fleet settings in Kibana
Note
This documentation is still under development; there may be gaps around building custom agent policies.
You can have Kibana create Elastic Agent policies on your behalf by adding the appropriate configuration parameters to the kibana.yml settings file. These include:
- xpack.fleet.packages: Takes a list of all integration package names and versions that Kibana should download from the Elastic Package Registry (EPR). This is done because Elastic Agents themselves do not directly fetch packages from the EPR.
- xpack.fleet.agentPolicies: Takes a list of Elastic Agent policies in the format expected by the Kibana Fleet HTTP API. Refer to the setting in Preconfiguration settings for the format. See also D.2.3. Using the Kibana Fleet API.
- xpack.fleet.registryUrl: Takes a URL of the Elastic Package Registry that can be reached by the Kibana server. Enable this setting only when deploying in an air-gapped environment.
- Other settings: You can add other, more discretionary settings for Fleet, Elastic Agents, and policies. Refer to Fleet settings in Kibana.
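The following kibana.yml fragment is a minimal sketch of these settings. The hostname, policy names, and IDs are placeholders, and the exact accepted fields should be verified against the Preconfiguration settings reference for your version:
xpack.fleet.registryUrl: "https://epr.internal.example:8443"
xpack.fleet.packages:
  - name: system
    version: latest
xpack.fleet.agentPolicies:
  - name: Air-gapped Linux hosts
    id: airgapped-linux
    namespace: default
    monitoring_enabled:
      - logs
      - metrics
    package_policies:
      - name: system-1
        package:
          name: system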
D.2.3. Using the Kibana Fleet API ¶
Best option for: Declarative configuration and users who need reproducible and automated deployments in even the trickiest of environments.
Example: See the following.
It is possible to use custom scripts that call the Kibana Fleet API to create or update policies without restarting Kibana, while also allowing for custom error handling and update logic.
At this time, you can refer to the Kibana Fleet HTTP API documentation; however, additional resources from public code repositories should be consulted to capture the full set of configuration options available for a given integration. Specifically, many integrations have unique configuration options such as inputs and data_streams.
In particular, the *.yml.hbs templates should be consulted to determine which vars are available for configuring a particular integration using the Kibana Fleet API.
- For most integrations, refer to the README and *.yml.hbs files in the appropriate directory in the elastic/integrations repository.
- For the APM integration, refer to the README and *.yml.hbs files in the elastic/apm-server repository.
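For example, a minimal sketch of creating an agent policy through the Fleet API (the Kibana URL and credentials are placeholders; the kbn-xsrf header is required for Kibana API calls):
curl -sk -u "elastic:${ELASTIC_PASSWORD}" \
  -X POST "https://kibana.internal.example:5601/api/fleet/agent_policies" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"name": "Air-gapped Linux hosts", "namespace": "default", "monitoring_enabled": ["logs", "metrics"]}'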