
Secrets

Platform release - v22.22

This document describes the secrets required by the HELM3 deployment of the platform. Starting with platform version 21.37, we separated the secrets from the platform HELM3 charts to follow best practices for Kubernetes secret management.

Please note, you must create all the required secrets before installing the platform. Without these secrets in place the platform deployment is not possible.

We present the secrets and their structure assuming the platform uses the default namespaces platform and tasks. If you use different namespace values, substitute them for the defaults when applying these secrets.

Internal docker registry secrets

These are the secrets used by the internal docker registry service. Depending on the type of platform installation you can omit some of them.

Docker registry htpasswd secret

REQUIRED by the docker-registry service if the internal docker-registry is used. The platform uses this secret to configure username/password authentication for the internal docker registry.

Set the value of this secret to a plain htpasswd string.

IMPORTANT: The credentials used in this secret should match the credentials used to generate Docker registry push secret and Docker registry pull secret.

The secret_name should match the .Values.global.secrets.dockerRegistryHtpasswdSecret value from the platform chart.

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: secret_name
  namespace: platform
data:
  htpasswd: "<value>"
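The base64-encoded value can be produced with the htpasswd utility (from apache2-utils); a sketch, where the user name and password are placeholders. The Docker registry's htpasswd backend expects bcrypt hashes, hence the -B flag:

```shell
# Generate a bcrypt htpasswd entry and base64-encode it for the secret's
# "htpasswd" data field. "registry-user" and the password are placeholders;
# use the same credentials as in the push and pull secrets below.
htpasswd -nbB registry-user 'changeme' | base64 -w0
```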

Docker registry tls secret

REQUIRED by the docker-registry service if the internal docker-registry is used and should serve a secure connection (e.g. if .Values.global.services.dockerRegistry.secured is set to true).

The secret_name should match the .Values.global.services.dockerRegistry.tlsSecretName value from the platform chart.

apiVersion: v1
kind: Secret
metadata:
  name: secret_name
  namespace: platform
type: "kubernetes.io/tls"
data:
  tls.crt: "<value>"
  tls.key: "<value>"
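Instead of writing the manifest by hand, the same secret can be created from existing certificate files with kubectl; a sketch, where registry.crt and registry.key are placeholder file names:

```shell
# Create the TLS secret directly from cert/key files; kubectl base64-encodes
# the contents for you. Replace secret_name with your configured tlsSecretName.
kubectl create secret tls secret_name \
  --cert=registry.crt \
  --key=registry.key \
  --namespace=platform
```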

Docker registry push secret

REQUIRED by the docker-registry service and git-receiver-service. Platform uses this secret to push component docker container images to the docker-registry-service.

It’s a simple .dockerconfigjson kubernetes secret.

The secret_name should match the .Values.global.secrets.dockerRegistryPush value from the platform chart.

apiVersion: v1
kind: Secret
metadata:
  name: secret_name
  namespace: platform
type: "kubernetes.io/dockerconfigjson"
data:
  .dockerconfigjson: "<value>"
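The .dockerconfigjson value is a base64-encoded Docker config file containing the registry credentials; a minimal sketch of how it can be produced, assuming a placeholder registry host and credentials:

```python
import base64
import json

def dockerconfigjson(registry: str, username: str, password: str) -> str:
    """Build the base64 value for a kubernetes.io/dockerconfigjson secret."""
    # The "auth" field is the base64-encoded "username:password" pair.
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    config = {"auths": {registry: {"username": username,
                                   "password": password,
                                   "auth": auth}}}
    return base64.b64encode(json.dumps(config).encode()).decode()

# Placeholder values; use the same credentials as in the htpasswd secret.
print(dockerconfigjson("docker-registry.example.com", "registry-user", "changeme"))
```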

Docker registry pull secret

REQUIRED by the docker-registry service and admiral. It’s a simple .dockerconfigjson kubernetes secret.

The secret_name should match the .Values.global.secrets.dockerRegistry value from the platform chart.

apiVersion: v1
kind: Secret
metadata:
  name: secret_name
  namespace: tasks
type: "kubernetes.io/dockerconfigjson"
data:
  .dockerconfigjson: "<value>"

Platform docker registry secret

REQUIRED by the whole platform, since it is used to pull elastic.io images from Docker Hub.

It’s a simple .dockerconfigjson kubernetes secret.

The secret_name should match the .Values.global.secrets.imagePull value from the platform chart.

apiVersion: v1
kind: Secret
metadata:
  name: secret_name
  namespace: platform
type: "kubernetes.io/dockerconfigjson"
data:
  .dockerconfigjson: "<value>"

Git-receiver secret

REQUIRED by the git-receiver-service.

The value of the key field is an RSA private key.

The secret_name should match the .Values.global.secrets.gitReceiverPrivateKey value from the platform chart.

apiVersion: v1
kind: Secret
metadata:
  name: secret_name
  namespace: platform
type: Opaque
data:
  key: <value>
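A sketch of generating the RSA key and creating the secret from it; the file name is a placeholder, and secret_name must match your configured gitReceiverPrivateKey value:

```shell
# Generate an RSA private key in PEM format with no passphrase...
ssh-keygen -t rsa -b 4096 -m PEM -f gitreceiver_key -N '' -q
# ...and create the Opaque secret with it under the "key" data field.
kubectl create secret generic secret_name \
  --from-file=key=gitreceiver_key \
  --namespace=platform
```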

Ingress tls secret

REQUIRED by the handmaiden-service and ingress service.

The secret is the simple kubernetes tls secret.

You should also provide the secret_name to the Platform environment secret:

DEFAULT_INGRESS_CERT_NAME: secret_name

apiVersion: v1
kind: Secret
metadata:
  name: secret_name
  namespace: platform
type: "kubernetes.io/tls"
data:
  tls.crt: "<value>"
  tls.key: "<value>"

Knight of the bloody gate tls secret

REQUIRED if the VPN agents feature is enabled.

The secret is the simple kubernetes tls secret.

Important: At the moment, the name of the secret must be knight-of-the-bloody-gate-ca.

apiVersion: v1
kind: Secret
metadata:
  name: knight-of-the-bloody-gate-ca
  namespace: platform
type: "kubernetes.io/tls"
data:
  tls.crt: <value>
  tls.key: <value>

Platform environment secret

REQUIRED by the elastic.io platform.

We use this secret to store the platform environment variables.

It’s a simple opaque kubernetes secret.

The secret must have the following structure:

apiVersion: v1
kind: Secret
metadata:
  name: elasticio
  namespace: platform
type: Opaque
stringData:
  ENVIRONMENT_VARIABLES: ""
  ...

The list of environment variables and their explanations follows.

ACCOUNTS_PASSWORD:

accounts_password - A secret key, used for payload encryption of user credentials in DB.

ADMIRAL_SERVICE_ACCOUNT_USERNAME:

admiral - API service account for admiral, leave as is.

ADMIRAL_SERVICE_ACCOUNT_PASSWORD:

admiral_service_acc_password - API service account password for admiral, any random string.

ADMIRAL_QUOTA_SERVICE_TYPE:

gold-dragon-coin - Can have values gold-dragon-coin or quota-service (for now it should be gold-dragon-coin).

AGENT_VPN_ENTRYPOINT:

agent_vpn_entrypoint - The entry point IP/domain for the local VPN agent. Should be set if agents are enabled.

ALLOW_EMPTY_CONTRACT_AFTER_THE_LAST_USER_REMOVING:

AMQP_URI:

URI used to connect platform services to RabbitMQ. MUST include the same vhost as in “RABBITMQ_VIRTUAL_HOST” secret

APPRUNNER_IMAGE:

Docker image used for running flow steps containers. Default value "elasticio/apprunner:production"

BRAN_PREFETCH_COUNT:

30

BRAN_CLICKHOUSE_URI:

If bran is enabled, provide the ClickHouse URI.

BRAN_CLICKHOUSE_NO_REPLICA:

Set to false if bran uses a replicated ClickHouse. Default value is true.

BRAN_RETENTION_MONTHS_MESSAGES:

Default value “1”

BRAN_RETENTION_MONTHS_EVENTS:

Default value “1”

CACHE_REDIS_URI:

Set if the cache service supplied with the platform is disabled in favour of an external one and Redis sentinels are NOT used.

CACHE_REDIS_SENTINELS:

Set if the cache service supplied with the platform is disabled in favour of an external one and Redis sentinels ARE used. Configuration for our caching solution based on Redis. List of sentinels to connect to. Format: array of objects with host and port values:

- host: ""
  port: ""

CACHE_REDIS_SENTINEL_NAME:

Set if the cache service supplied with the platform is disabled in favour of an external one and Redis sentinels ARE used. Configuration for our caching solution based on Redis. Identifies a group of Redis instances composed of a master and one or more slaves.

CACHE_REDIS_SENTINEL_PASSWORD:

Set if the cache service supplied with the platform is disabled in favour of an external one and Redis sentinels ARE used. Configuration for our caching solution based on Redis. Password to authenticate with Sentinel.

CACHE_REDIS_PASSWORD:

CERTIFICATE_STORE_ENCRYPTION_PASSWORD:

certificate_store_encryption_password - Password for encrypting the tenant certificate store (provided during creation of a new tenant).

CERTIFICATE_SUBJECT:

Optional value. The subject for the bloody-gate server CA.

COMPANY_NAME:

The default contract name where the main components are deployed.

COMPONENT_CPU:

0.08 - CPU allocated for each integration flow step pod running in the Kubernetes cluster.

COMPONENT_CPU_LIMIT:

1 - CPU limit for each integration flow step pod running in the Kubernetes cluster.

COMPONENT_MEM_DEFAULT:

90 - Allocated memory in MB for each integration flow step pod running with Node.js code in the Kubernetes cluster.

COMPONENT_MEM_DEFAULT_LIMIT:

256 - Memory limit in MB for each integration flow step pod running with Node.js code in the Kubernetes cluster.

COMPONENT_MEM_JAVA:

400 - Allocated memory in MB for each integration flow step pod running with JAVA code in the Kubernetes cluster.

COMPONENT_MEM_JAVA_LIMIT:

512 - Memory limit in MB for each integration flow step pod running with JAVA code in the Kubernetes cluster.

DEBUG_DATA_SIZE_LIMIT_MB:

5 - Limits the size in MB of data samples to be stored into the DB.

DEFAULT_DRIVER_BACKEND: "kubernetes"

DOCKER_REGISTRY_STORAGE_S3_ACCESSKEY:

Configuration for docker registry for components if an s3 storage driver is used.

DEFAULT_INGRESS_CERT_NAME:

DEFAULT_PER_CONTRACT_QUOTA:

5 - Default limit of quota tokens per contract. Works only when ENFORCE_QUOTA is set to true.

DOCKER_REGISTRY_STORAGE_S3_SECRETKEY:

DOCKER_REGISTRY_STORAGE_S3_REGION:

DOCKER_REGISTRY_STORAGE_S3_REGIONENDPOINT:

DOCKER_REGISTRY_STORAGE_S3_BUCKET:

DOCKER_REGISTRY_STORAGE_S3_ENCRYPT:

DOCKER_REGISTRY_STORAGE_S3_KEYID:

DOCKER_REGISTRY_STORAGE_S3_SECURE:

DOCKER_REGISTRY_STORAGE_S3_SKIPVERIFY:

DOCKER_REGISTRY_STORAGE_S3_V4AUTH:

DOCKER_REGISTRY_STORAGE_S3_CHUNKSIZE:

DOCKER_REGISTRY_STORAGE_S3_ROOTDIRECTORY:

DOCKER_REGISTRY_STORAGE_S3_STORAGECLASS:

ELASTIC_SEARCH_URI:

elastic_search_uri - Elasticsearch URI used as a backend for the GrayLog.

ENFORCE_QUOTA:

Default “false” - If enabled, all quota limits apply.

ENVIRONMENT:

production - Takes part in queue naming in RabbitMQ.

ENV_PASSWORD:

env_password - A secret key used to encrypt the environment variables payload in the DB.

FACELESS_ENCRYPTION_KEY:

Set this if you need to encrypt the faceless data. It should be base64 encoded and at least 256 bits (32 bytes) long. You can create it using the openssl rand -base64 32 command.
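The same key can also be generated without openssl; a stdlib-only Python sketch:

```python
import base64
import secrets

# 32 cryptographically random bytes (256 bits), base64-encoded --
# equivalent to `openssl rand -base64 32`.
key = base64.b64encode(secrets.token_bytes(32)).decode()
print(key)
```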

FACELESS_AUTH_USERNAME:

FACELESS_AUTH_PASSWORD:

FORCE_DESTROY_DEBUG_TASK_TIMEOUT_SEC:

FORCE_DESTROY_ONE_TIME_EXEC_SEC:

FRONTEND_SERVICE_ACCOUNT_USERNAME:

frontend - eio API service acc frontend, leave as is.

FRONTEND_SERVICE_ACCOUNT_PASSWORD:

frontend_service_acc_password - eio API service acc frontend, any random string.

FRONTEND_SESSION_SECRET:

FRONTEND_NO_EXTERNAL_RESOURCES:

Set this if the frontend should not use external resources.

GELF_HOST:

gelf_host - Hostname where Platform’s Graylog GELF input is running (usually the same as Graylog’s hostname).

GELF_PORT:

12203 - Port where Platform’s Graylog GELF input is running (usually 12201).

GELF_PROTOCOL:

udp - Protocol of Platform’s Graylog GELF input (usually udp).

GENDRY_SERVICE_ACCOUNTS:

Optional value. Object with username/password.

GIT_RECEIVER_HOST:

git_receiver_host - The domain name for gitreceiver.

GIT_RECEIVER_PRIVATE_KEY_PATH:

“/etc/gitreceiver/private-key/key”

HOOKS_DATA_PASSWORD:

hooks_data_password - A secret key used for encryption of the sailor hooks data payload in the DB.

INTERCOM_ACCESS_TOKEN:

intercom_token - Token in case when Intercom integration is used.

INTERCOM_APP_ID:

app_id - App ID in case when Intercom integration is used.

INTERCOM_SECRET_KEY:

intercom_secret_key - Secret Key in case when Intercom integration is used.

IRON_BANK_CLICKHOUSE_NODES:

List of ClickHouse cluster nodes for the iron-bank service:

- host: the internal address of the ClickHouse node
  port: the port number of the node
  user: the user name to use for access
  password: the password to use for access

IRON_BANK_CLICKHOUSE_NO_REPLICA:

Set to false if iron-bank uses a replicated ClickHouse. Default value is true.

IGNORE_CONTAINER_ERRORS:

Optional value.

LIMITED_WORKSPACE_FLOW_TTL_IN_MINUTES:

LOG_LEVEL:

The logging level for the platform services.

LOOKOUT_PREFETCH_COUNT:

MAESTER_JWT_SECRET:

If maester is enabled, you need to provide a JWT secret.

MAESTER_MONGO_URI:

Starting from 22.20, you can use a dedicated database for storing Maester objects and run-time attachments. Use this environment variable to target that database; otherwise use the same value as in MONGO_URI to share the same database.

MAESTER_REDIS_URI:

Set if the maesterRedis service supplied with the platform is disabled in favour of an external one and Redis sentinels are NOT used.

MAESTER_REDIS_SENTINELS:

Set if the maesterRedis service supplied with the platform is disabled in favour of an external one and Redis sentinels ARE used. Maester’s Redis config. List of sentinels to connect to. Format: array of objects with host and port values:

- host: ""
  port: ""

MAESTER_REDIS_SENTINEL_NAME:

Set if the maesterRedis service supplied with the platform is disabled in favour of an external one and Redis sentinels ARE used. Maester’s Redis config. Identifies a group of Redis instances composed of a master and one or more slaves.

MAESTER_REDIS_SENTINEL_PASSWORD:

Set if the maesterRedis service supplied with the platform is disabled in favour of an external one and Redis sentinels ARE used. Maester’s Redis config. Password to authenticate with Sentinel.

MAESTER_REDIS_PASSWORD:

MAESTER_OBJECTS_TTL_IN_SECONDS:

900 - Object storage time in Maester, in seconds.

MAESTER_OBJECT_STORAGE_SIZE_THRESHOLD:

1048576 - Limit in bytes above which objects are redirected to Maester instead of regular queues in RabbitMQ.

MAESTER_MAX_SIZE_PER_OBJECT:

Value in bytes - controls the maximum object size accepted by the Maester service. The default maximum is 1GB. The recommended range is from 100MB to 1GB.

MANDRILL_API_KEY:

mandrill_api_key - Mandrill API key for sending platform emails (Leave empty if using SMTP).

MAX_FORCE_DESTROY_DEBUG_TASK_TIMEOUT_SEC:

MAX_FORCE_DESTROY_ONE_TIME_EXEC_SEC:

MAX_LOGIN_ATTEMPTS:

Sets the number of failed login attempts allowed before users are locked out of the system. Default value is 5. If 2FA is not enabled, this is simply the number of failed login attempts. If 2FA is enabled, this is the combined number of failed login and failed 2FA code attempts: a user who logs in successfully but then fails the 2FA code more times than the value of MAX_LOGIN_ATTEMPTS will be locked out of the system. To unlock such users, the tenant administrator must first disable 2FA for the user and instruct them to navigate to the /forgot address of the tenant to reset their password. This resets the counter and the user can log in again.

MESSAGE_CRYPTO_IV:

message_crypto_iv - The initial vector used for encryption of the message payloads between flow containers. More details here.

MESSAGE_CRYPTO_PASSWORD:

message_crypto_password - The secret key used for encryption of message payloads between flow containers. Used in conjunction with the message_crypto_iv.

MIN_PASSWORD_LENGTH:

Default: 8. The user password must have at least this number of symbols.

MIN_PASSWORD_RULES_MATCHES:

Default: 3. The minimum number of different character groups (uppercase, lowercase, numbers, special symbols) that must be matched.

MONGO_URI:

mongo_uri - The main MongoDB instance, used by most of the services and carrying almost all payload for DB storage.

NODE_ENV: "production"

Environment variable used in all platform microservices, default is production, do not change.

PETSTORE_API_HOST:

petstore_api_host - Petstore API hostname. Service for tests.

PREDEFINED_USERS:

A set of users used as default credentials for internal communications with the platform-storage-slugs service. The value is a string with a JSON-encoded object whose key-value pairs represent usernames and passwords accordingly.
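A sketch of producing such a value; the usernames and passwords here are placeholders:

```python
import json

# Placeholder credentials; the real usernames depend on your installation.
users = {"storage-slugs": "random-string-1", "another-service": "random-string-2"}

# PREDEFINED_USERS expects the JSON-encoded object as a single string.
PREDEFINED_USERS = json.dumps(users)
print(PREDEFINED_USERS)
```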

PSS:

PUSH_GATEWAY_URI:

QUOTA_SERVICE_MONGO_URI:

If the quota service is enabled, you need to provide a MongoDB URI.

RABBITMQ_STATS_LOGIN:

rabbitmq_stats_login - Username for accessing the HTTP API of the RabbitMQ Management Plugin. The username must have admin privileges in RabbitMQ, since it is used by services to modify data in RabbitMQ (asserting/removing users, exchanges and queues).

RABBITMQ_STATS_PASS:

rabbitmq_stats_pass - Corresponding password for rabbitmq_stats_login. See above.

RABBITMQ_STATS_URI:

rabbitmq_stats_uri - URI of HTTP API of RabbitMQ Management Plugin. See rabbitmq_stats_login for more details.

RABBITMQ_URI_SAILOR:

URI used to connect sailors (aka “flow steps”) to RabbitMQ. MUST include the same vhost as in “RABBITMQ_VIRTUAL_HOST” secret.

RABBITMQ_VIRTUAL_HOST:

RabbitMQ virtual host where users, policies and default queues will be created.

RABBITMQ_MAX_MESSAGES_PER_QUEUE:

75000 - The number of messages allowed to be in each queue at the same time. If there are more messages than set via this variable, new messages will be rejected.

RABBITMQ_MAX_MESSAGES_MBYTES_PER_QUEUE:

200 - The total size of messages (in MB) allowed in each queue at the same time. If the messages in a queue exceed the size set via this variable, new messages will be rejected.

RABBITMQ_EXTEND_POLICIES:

Object with additional policies extending the defaults.

SENSITIVE_ACTION_AUTH_LIFETIME:

Value in milliseconds - an environment variable to configure platform user re-authentication for the sensitive actions. The default value is 21600000 (6 hours).

SERVER_PORT_RANGE:

Optional value. 1025:32767 - The port range for bloody-gate (VPN agents service).

SERVER_PRIVATE_NETWORK:

Optional value. 172.19.0.0/16 - The VPN network for bloody-gate.

SERVICE_ACCOUNT_USERNAME:

serviceaccount - Username for service account (used for communications between API and other platform apps).

SERVICE_ACCOUNT_PASSWORD:

service_account_password - Password for service account (used for communications between API and other platform apps).

SESSION_IDLE_TIMEOUT:

In seconds, default value 86400 (24 hours) - replaces COOKIE_MAX_AGE. Frontend’s session IDLE timeout as described here.

SESSION_ABSOLUTE_TIMEOUT:

Default value is 2 x SESSION_IDLE_TIMEOUT.

SESSION_MONGO_URI:

session_mongo_uri - URI connection string for the additional DB used only as session storage by the frontend (platform UI).

SMTP_URI:

Optional value. URI of SMTP server for sending emails (leave blank if using Mandrill).

STATUS_PAGE_ID:

id_from_status_pages - Special ID to enable integration with statuspages.

STEWARD_ATTACHMENTS_LIFETIME_DAYS:

SUSPENDED_TASK_MAX_MESSAGES_COUNT:

50 - Limit for unconsumed flow step messages after which to suspend the flow.

SUSPEND_WATCH_KUBERNETES_MAX_EVENTS:

5 - Limit of flow step start fail events after which to suspend the flow.

TEAM_NAME:

team_name - The developer team name where out-of-the-box system-critical components are deployed. The team will be created in the contract set by the COMPANY_NAME parameter. Every run of the gendry service uses this parameter.

TENANT_CODE:

tenant_code - Code of the default tenant, which will be created by the gendry on deployment at initialization.

TENANT_DOMAIN:

tenant_domain - The domain of the default tenant, which will be created by the gendry on deploy initialization.

TENANT_API_DOMAIN:

tenant_api_domain - The default tenant API domain, e.g. api.elastic.io

TENANT_WEBHOOKS_DOMAIN:

tenant_webhooks_domain - The default tenant webhooks domain, e.g. in.elastic.io

TENANT_NAME:

tenant_name - Name of the default tenant, which will be created by gendry on deploy initialization.

TENANT_OPERATOR_SERVICE_ACCOUNT_USERNAME:

tenant-operator - service account username for tenant-operator, leave as is

TENANT_OPERATOR_SERVICE_ACCOUNT_PASSWORD:

tenant-operator-pass - service account password for tenant-operator, any random string

TENANT_ADMIN_EMAIL:

tenant_admin_email - Email of the first user of the platform, who is going to be a Tenant Admin. This will be created by gendry on deployment initialization.

TENANT_ADMIN_PASSWORD:

tenant_admin_password - Corresponding password for tenant_admin_email.

USER_AMQP_CRYPTO_PASSWORD:

user_amqp_crypto_password - A secret key, used for encryption of amqpPassword (which is used for dedicated per-workspace rabbitMQ users) in DB.

USER_API_CRYPTO_PASSWORD:

user_api_crypto_password - A secret key, used for encryption of apiSecret in DB.

USER_TOTP_CRYPTO_PASSWORD:

Password used to encrypt/decrypt TOTP secrets for 2FA. Has to be set before enabling 2FA feature as tenant feature flag.

WEBHOOKS_BASE_URI:

Should be in the format: https://%WEBHOOKS_DOMAIN%/hook.

WIPER_SERVICE_ACCOUNT_USERNAME:

wiper - eio API service account for the wiper service, leave as is.

WIPER_SERVICE_ACCOUNT_PASSWORD:

wiper_pass - eio API service account for the wiper, any random string.

WIPER_CLEAR_DELETED_FLOWS_AGE_SECONDS:

Time in seconds the job must wait before deleting the flow permanently after it is marked as DELETED in MongoDB. We set the default value to 86400 seconds (1 day).

WIPER_CLEAR_DELETED_FLOWS_LIMIT:

100 - the number of flows to remove during each run of the service.