Google Cloud Storage repository

You can use the Google Cloud Storage service as a repository for Snapshot/Restore.

This repository type uses the Google Cloud Java Client for Storage to connect to the Storage service. If you are using Google Cloud Storage for the first time, you must connect to the Google Cloud Platform Console and create a new project. After your project is created, you must enable the Cloud Storage Service for your project.

The Google Cloud Storage service uses the concept of a bucket as a container for all the data. Buckets are usually created using the Google Cloud Platform Console. This repository type does not automatically create buckets.

To create a new bucket:

  1. Connect to the Google Cloud Platform Console.
  2. Select your project.
  3. Go to the Storage Browser.
  4. Click the Create Bucket button.
  5. Enter the name of the new bucket.
  6. Select a storage class.
  7. Select a location.
  8. Click the Create button.

For more detailed instructions, see the Google Cloud documentation.
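The bucket-creation steps above can also be performed from the command line with the gcloud CLI. The bucket name, project ID, storage class, and location below are placeholders:

```shell
# Create a bucket with an explicit storage class and location.
# "my_bucket" and the other values are placeholders; adjust for your project.
gcloud storage buckets create gs://my_bucket \
  --project=your-project-id \
  --default-storage-class=STANDARD \
  --location=EU
```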

The repository must authenticate the requests it makes to the Google Cloud Storage service. Google client libraries commonly use a strategy called application default credentials, but Elasticsearch supports that strategy only partially. The repository runs inside the Elasticsearch process, which operates with the security manager enabled. The security manager blocks the "automatic" credential discovery that reads a local file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable. It can, however, retrieve the service account that is attached to the resource running Elasticsearch, or fall back to the default service account that Compute Engine, Kubernetes Engine, or App Engine provide. If your environment does not support automatic credential discovery, you must configure service account credentials instead.

You have to obtain and provide service account credentials manually.

For detailed information about generating JSON service account files, see the Google Cloud documentation. Note that the PKCS12 format is not supported by this repository type.

Here is a summary of the steps:

  1. Connect to the Google Cloud Platform Console.
  2. Select your project.
  3. Select the Service Accounts tab.
  4. Click Create service account.
  5. After the account is created, select it and go to Keys.
  6. Select Add Key and then Create new key.
  7. For Key type, select JSON; the P12 format is not supported.

A JSON service account file looks like this:

{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "service-account-for-your-repository@your-project-id.iam.gserviceaccount.com",
  "client_id": "...",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/your-bucket@your-project-id.iam.gserviceaccount.com"
}

To provide this file to the repository, it must be stored in the Elasticsearch keystore. You must add a file setting with the name gcs.client.NAME.credentials_file using the add-file subcommand. NAME is the name of the client configuration for the repository. The implicit client name is default, but a different client name can be specified in the repository settings with the client key.
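For example, the key file can be added to the keystore with the elasticsearch-keystore tool, run from the Elasticsearch home directory. The file path is a placeholder:

```shell
# Register the JSON key file for the implicit "default" client.
bin/elasticsearch-keystore add-file gcs.client.default.credentials_file /path/service-account.json

# Or register it under a custom client name instead:
bin/elasticsearch-keystore add-file gcs.client.my_alternate_client.credentials_file /path/service-account.json
```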

Note

Passing the file path via the GOOGLE_APPLICATION_CREDENTIALS environment variable is not supported.

For example, if you added a gcs.client.my_alternate_client.credentials_file setting in the keystore, you can configure a repository to use those credentials like this:

PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_bucket",
    "client": "my_alternate_client"
  }
}

The credentials_file settings are reloadable. You can define these settings before the node is started, or call the Nodes reload secure settings API after the settings are defined to apply them to a running node.
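For example, after updating the credentials file in the keystore on each node, you could apply it to a running cluster with the reload secure settings API (shown here as a curl call against a local node):

```shell
# Reload reloadable secure settings, such as credentials_file, on all nodes.
curl -X POST "localhost:9200/_nodes/reload_secure_settings"
```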

After you reload the settings, the internal GCS clients, which are used to transfer the snapshot contents, use the latest settings from the keystore.

Note

Snapshot or restore jobs that are in progress are not preempted by a reload of the client's credentials_file settings. They complete using the client as it was built when the operation started.

The client used to connect to Google Cloud Storage has a number of settings available. Client setting names are of the form gcs.client.CLIENT_NAME.SETTING_NAME and are specified inside elasticsearch.yml. The default client name looked up by a gcs repository is called default, but can be customized with the repository setting client.

You can configure the GCS client for authentication and project scoping, custom endpoints, HTTP timeouts, how the client identifies itself to Google Cloud, and proxy behavior.

For a complete list of all GCS client settings, refer to GCS repository client settings.
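For example, a sketch of a few client settings in elasticsearch.yml for a client named my_alternate_client. The endpoint and timeout values here are purely illustrative:

```yaml
# Illustrative client settings; adjust names and values for your environment.
gcs.client.my_alternate_client.endpoint: "https://storage.googleapis.com"
gcs.client.my_alternate_client.connect_timeout: "10s"
gcs.client.my_alternate_client.read_timeout: "30s"
```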

The gcs repository type supports a number of settings to customize how data is stored in Google Cloud Storage. These can be specified when creating the repository. For example:

PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my_other_bucket",
    "base_path": "dev"
  }
}

Available repository settings cover storage location, data layout, snapshot transfer characteristics, and throughput limits. For a complete list of all GCS repository settings, refer to GCS repository settings.

The service account used to access the bucket must have "Writer" access to the bucket:

  1. Connect to the Google Cloud Platform Console.
  2. Select your project.
  3. Go to the Storage Browser.
  4. Select the bucket and "Edit bucket permission".
  5. The service account must be configured as a "User" with "Writer" access.
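The same grant can also be made from the command line with an IAM policy binding. The bucket, account, and role below are placeholders; roles/storage.objectAdmin is one role that provides writer-level access to objects:

```shell
# Grant the service account read/write access to objects in the bucket.
gcloud storage buckets add-iam-policy-binding gs://my_bucket \
  --member="serviceAccount:service-account-for-your-repository@your-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"
```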

The linearizable register implementation for GCS repositories is based on GCS's support for strongly consistent preconditions on put-blob operations. To perform a compare-and-exchange operation on a register, Elasticsearch retrieves the register blob and its current generation, and then uploads the updated blob using the observed generation as its precondition. The precondition ensures that the generation has not changed in the meantime.
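As an illustration of the mechanism only, and not Elasticsearch's actual client code, the same read-then-conditional-write pattern can be sketched against the GCS XML API with curl. The bucket, object name, value, and access token below are all placeholders:

```shell
# Sketch of a compare-and-exchange on a register blob (placeholders throughout).
# 1. Read the blob's current generation from the x-goog-generation header.
GEN=$(curl -sI -H "Authorization: Bearer $TOKEN" \
  "https://storage.googleapis.com/my_bucket/register" \
  | awk -F': ' 'tolower($1) == "x-goog-generation" {print $2}' | tr -d '\r')

# 2. Upload the new value with the observed generation as a precondition.
#    GCS rejects the write with 412 Precondition Failed if another writer
#    changed the blob, and hence its generation, in the meantime.
curl -X PUT \
  -H "Authorization: Bearer $TOKEN" \
  -H "x-goog-if-generation-match: $GEN" \
  --data-binary "new-register-value" \
  "https://storage.googleapis.com/my_bucket/register"
```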