
Step 4: Run the backend applications

The next step is to run the backend applications. To do this:

  1. Create API keys to authenticate the backend applications.
  2. Run the application on Linux or Kubernetes.

Both the collector and symbolizer need to authenticate to Elasticsearch to process profiling data. For this, you need to create an API key for each application.

Refer to Create an API key to create an API key using Kibana. Select a User API key and assign the following permissions under Control security privileges:

{
  "profiling": {
    "cluster": [
      "monitor"
    ],
    "indices": [
      {
        "names": [
          "profiling-*"
        ],
        "privileges": [
          "read",
          "write"
        ]
      }
    ]
  }
}
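
If you prefer the command line over Kibana, a roughly equivalent call to the Elasticsearch create API key endpoint could look like the sketch below (the key name is arbitrary and the placeholders are yours to replace); the "encoded" field in the response is the value you need:

curl -X POST "https://<elasticsearch-endpoint>/_security/api_key" \
  -u "<username>:<password>" \
  -H "Content-Type: application/json" \
  -d '
{
  "name": "universal-profiling-backend",
  "role_descriptors": {
    "profiling": {
      "cluster": ["monitor"],
      "indices": [
        {
          "names": ["profiling-*"],
          "privileges": ["read", "write"]
        }
      ]
    }
  }
}'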

Store the "Encoded" version of the API keys, as you will need them to run the Universal Profiling backend. Continue to Run on Linux or Run on Kubernetes for information on running the backend applications.

Before running the backend applications on Linux, we recommend creating configuration files to manage the applications. CLI flags are also supported, but they can make managing the backend applications more complex.

Install the backend applications using one of the following options:

  1. OS packages (DEB/RPM)
  2. OCI containers
  3. Binary: orchestrated with your configuration management system of choice (Ansible, Puppet, Chef, Salt, etc.)

The configuration files are in YAML format, and are composed of two top-level sections: an "application" section, and an "output" section.

The "application" section contains the configuration for the backend applications, and the "output" section contains the configuration to connect to where the data will be read and sent to. The "application" section is named after the name of the binary. The "output" section currently supports only Elasticsearch.

The configuration files are read from the following default locations:

  • Collector: /etc/Elastic/universal-profiling/pf-elastic-collector.yml
  • Symbolizer: /etc/Elastic/universal-profiling/pf-elastic-symbolizer.yml

You can customize the location of the configuration files by using the -c flag when running the application.

For the sake of simplicity, we will use the default locations in the examples below. We also display the default application settings; you can refer to the comments in the YAML to understand how to customize them.

Copy the content of the snippet below into the /etc/Elastic/universal-profiling/pf-elastic-collector.yml file.

Customize the content of pf-elastic-collector.auth.secret_token with a secret token of your choice. This token will be used by the Universal Profiling Agent to authenticate to the collector; you cannot use an empty string as a token. Adjust the ssl section if you want to protect the collector’s endpoint with TLS.

Customize the content of the output.elasticsearch section, using the Elasticsearch endpoint and API key to set the hosts and api_key values, respectively. Adjust the protocol value and other TLS related settings as needed.
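
If you don't have the packaged default file at hand, a minimal sketch of the collector configuration, assembled from the settings described above, could look like the following (the ssl key names are assumptions; refer to the default file shipped with the packages for the full list of options and comments):

pf-elastic-collector:
  # Address and port the collector listens on; the default port is 8260.
  host: "0.0.0.0:8260"
  auth:
    # Secret token used by the Universal Profiling Agent to authenticate; must not be empty.
    secret_token: "<your-secret-token>"
  ssl:
    # Enable and provide a certificate/key pair to serve the endpoint over TLS.
    enabled: false

output:
  elasticsearch:
    # Elasticsearch endpoint and the encoded API key created earlier.
    hosts: ["<elasticsearch-endpoint>:443"]
    api_key: "<encoded-api-key>"
    protocol: "https"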

Copy the content of the snippet below into the /etc/Elastic/universal-profiling/pf-elastic-symbolizer.yml file.

You don’t need to customize any values in the pf-elastic-symbolizer section. Adjust the ssl section if you want to protect the symbolizer’s endpoint with TLS.

Customize the content of the output.elasticsearch section, using the Elasticsearch endpoint and API key to set the hosts and api_key values, respectively. Adjust the protocol value and other TLS related settings as needed.
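
Similarly, a minimal sketch of the symbolizer configuration (again, the ssl key names are assumptions; the default file shipped with the packages is the authoritative reference):

pf-elastic-symbolizer:
  # Address and port the symbolizer listens on; the default port is 8240.
  host: "0.0.0.0:8240"
  ssl:
    # Enable and provide a certificate/key pair to serve the endpoint over TLS.
    enabled: false

output:
  elasticsearch:
    # Elasticsearch endpoint and the encoded API key created earlier.
    hosts: ["<elasticsearch-endpoint>:443"]
    api_key: "<encoded-api-key>"
    protocol: "https"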

Follow these steps to install the backend using OS packages.

  1. Configure the APT repository:

    wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    sudo apt-get install apt-transport-https
    echo "deb https://artifacts.elastic.co/packages/9.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-9.x.list
    
  2. Install the packages:

    sudo apt update
    sudo apt install -y pf-elastic-collector pf-elastic-symbolizer
    

For RPM packages, configure the YUM repository and install the packages:

  1. Download and install the public signing key:

    sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
    
  2. Create a file with a .repo extension (for example, elastic.repo) in your /etc/yum.repos.d/ directory and add the following lines:

    [elastic-9.x]
    name=Elastic repository for 9.x packages
    baseurl=https://artifacts.elastic.co/packages/9.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
    
  3. Install the backend services by running:

    sudo yum update
    sudo yum install -y pf-elastic-collector pf-elastic-symbolizer
    

After installing the packages, enable and start the systemd services:

sudo systemctl enable pf-elastic-collector
sudo systemctl start pf-elastic-collector

sudo systemctl enable pf-elastic-symbolizer
sudo systemctl start pf-elastic-symbolizer

Now you can check the services' logs to spot any problems:

sudo journalctl -xu pf-elastic-collector
sudo journalctl -xu pf-elastic-symbolizer

Refer to Troubleshooting Universal Profiling backend for more information on troubleshooting possible errors in the logs.
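
As an additional quick sanity check, you can confirm that both services are active and listening on their default ports (8260 for the collector, 8240 for the symbolizer), for example:

sudo systemctl status pf-elastic-collector pf-elastic-symbolizer
sudo ss -tlnp | grep -E ':8260|:8240'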

We provide OCI images in the Elastic registry to run the backend services in containers. The images are multi-platform, so they work on both x86_64 and ARM64 architectures.

With the configuration files in place on your system, you can run the containers with the following commands (the examples use Docker, but any OCI runtime will work):

  1. Collector:

    docker run -d --name pf-elastic-collector -p 8260:8260 -v /etc/Elastic/universal-profiling/pf-elastic-collector.yml:/pf-elastic-collector.yml:ro \
      docker.elastic.co/observability/profiling-collector:{version} -c /pf-elastic-collector.yml
    
  2. Symbolizer:

    docker run -d --name pf-elastic-symbolizer -p 8240:8240 -v /etc/Elastic/universal-profiling/pf-elastic-symbolizer.yml:/pf-elastic-symbolizer.yml:ro \
      docker.elastic.co/observability/profiling-symbolizer:{version} -c /pf-elastic-symbolizer.yml
    

With the above commands, the backend containers serve their HTTP endpoints on the host ports 8260 and 8240, respectively. The -v flag mounts the configuration files into the containers, and the -c flag tells the applications to read the configuration from the mounted path.

The container processes run in the background; you can check their logs with docker logs <container_name>, for example:

docker logs pf-elastic-collector
docker logs pf-elastic-symbolizer
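
If you prefer to manage both containers together, a minimal Docker Compose sketch equivalent to the commands above (assuming the configuration files exist at the default locations, and substituting {version} as before) could look like this:

services:
  pf-elastic-collector:
    image: docker.elastic.co/observability/profiling-collector:{version}
    # Same -c flag as the docker run example, pointing at the mounted config file.
    command: ["-c", "/pf-elastic-collector.yml"]
    ports:
      - "8260:8260"
    volumes:
      - /etc/Elastic/universal-profiling/pf-elastic-collector.yml:/pf-elastic-collector.yml:ro
  pf-elastic-symbolizer:
    image: docker.elastic.co/observability/profiling-symbolizer:{version}
    command: ["-c", "/pf-elastic-symbolizer.yml"]
    ports:
      - "8240:8240"
    volumes:
      - /etc/Elastic/universal-profiling/pf-elastic-symbolizer.yml:/pf-elastic-symbolizer.yml:ro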

To install the backend from the binary distributions, follow these steps:

  1. Download and unpack the binaries for your platform:

    For x86_64

    wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-9.0.0-beta1-linux-x86_64.tar.gz" | tar xzf -
    wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-9.0.0-beta1-linux-x86_64.tar.gz" | tar xzf -
    

    For ARM64

    wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-collector-9.0.0-beta1-linux-arm64.tar.gz" | tar xzf -
    wget -O- "https://artifacts.elastic.co/downloads/prodfiler/pf-elastic-symbolizer-9.0.0-beta1-linux-arm64.tar.gz" | tar xzf -
    
  2. Copy the pf-elastic-collector and pf-elastic-symbolizer binaries to a directory in the machine’s PATH.

  3. Run the backend application processes, instructing them to read the configuration files created previously.

    pf-elastic-collector -c /etc/Elastic/universal-profiling/pf-elastic-collector.yml
    pf-elastic-symbolizer -c /etc/Elastic/universal-profiling/pf-elastic-symbolizer.yml
    

If you want to customize configuration options passed to the binaries, you can use command line flags. All overrides are specified using the -E flag. For example, if you want to override the host value for the pf-elastic-collector application, you can use the -E pf-elastic-collector.host flag as follows:

pf-elastic-collector -c /etc/Elastic/universal-profiling/pf-elastic-collector.yml -E pf-elastic-collector.host=0.0.0.0:8844

In the previous example, we configured the collector to listen on all network interfaces on port 8844, instead of the 8260 value contained in the YAML configuration file.

You can use the -E flag to override any value contained in the configuration files, as long as you specify the full YAML path in the command-line flag. We recommend sticking with the configuration files for simpler orchestration.

The same configuration overrides and recommendations apply to the pf-elastic-symbolizer binary.
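
For example, assuming the symbolizer exposes a parallel host setting under its own section, you could change its listening address in the same way (the port here is an arbitrary choice):

pf-elastic-symbolizer -c /etc/Elastic/universal-profiling/pf-elastic-symbolizer.yml -E pf-elastic-symbolizer.host=0.0.0.0:8440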

We provide Helm charts to deploy the backend services on Kubernetes.

To install the backend services, you need to add the Elastic Helm repository to your Helm installation and then install the charts.

We recommend creating a values.yaml file defining the Kubernetes-specific options of the chart. If you want to stick with the default values provided by the chart, you don’t need to create a values.yaml file for each chart. For the applications' configuration, you can reuse the configuration files detailed in "Create configuration files" and pass them to Helm as a values file (using the --values or -f flags), or copy their content into the values.yaml file.

In the example below we don’t apply any modifications to the Kubernetes configs, so we will use the default values provided by the chart.

  1. Add the Elastic Helm repository and update it:

    helm repo add elastic https://helm.elastic.co
    helm repo update elastic
    
  2. Install the charts (the examples use the universal-profiling namespace, but you can change it as needed):

    helm install --create-namespace -n universal-profiling collector elastic/profiling-collector -f /etc/Elastic/universal-profiling/pf-elastic-collector.yml
    helm install --create-namespace -n universal-profiling symbolizer elastic/profiling-symbolizer -f /etc/Elastic/universal-profiling/pf-elastic-symbolizer.yml
    
  3. Check that the pods are running and read their logs by following the steps listed in the output of the helm install commands.
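
For example, assuming the universal-profiling namespace used above, a generic way to list the pods and read the logs of one of them is:

kubectl -n universal-profiling get pods
kubectl -n universal-profiling logs <pod-name>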

Note

In the previous examples, we used the charts' default values to configure Kubernetes resources. These do not include the creation of an Ingress resource. If you want to expose the services to a Universal Profiling Agent and symbtool deployment outside the Kubernetes cluster, you need to set up the ingress section of each chart.

Continue to Step 5: Next steps.