
Using Terraform to deploy Flink App or Statement

See the Terraform Confluent provider official documentation for deployment examples.

There are two approaches to managing Kafka clusters in the same Terraform workspace:

  1. Manage multiple clusters
  2. Manage a single Kafka cluster
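For the single-cluster approach, the provider itself can be pinned to one Kafka cluster, so topic and ACL resources do not have to repeat the cluster credentials. A minimal sketch, assuming the variables (`kafka_id`, `kafka_rest_endpoint`, etc.) are defined elsewhere in the workspace:

```hcl
# Sketch: provider pinned to a single Kafka cluster (variable names are illustrative)
provider "confluent" {
  cloud_api_key       = var.confluent_cloud_api_key
  cloud_api_secret    = var.confluent_cloud_api_secret
  kafka_id            = var.kafka_id            # e.g. lkc-abc123
  kafka_rest_endpoint = var.kafka_rest_endpoint
  kafka_api_key       = var.kafka_api_key
  kafka_api_secret    = var.kafka_api_secret
}
```

With this setup, resources such as `confluent_kafka_topic` no longer need per-resource `kafka_cluster` and credential blocks.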

Prerequisites

  • If not already done, create a Confluent API key with secret for your user at confluent.cloud/settings/api-keys. Your user needs the OrganizationAdmin role. For production, do not use user keys; use service account keys instead.
  • If not already done, create a service account for the Terraform runner. Assign the OrganizationAdmin role to this service account by following this guide.

  • To list the existing keys for your user, use:

    confluent api-key list | grep <cc_userid>
    
  • Export as environment variables:

export CONFLUENT_CLOUD_API_KEY=
export CONFLUENT_CLOUD_API_SECRET=
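Once exported, the Confluent Terraform provider picks these variables up automatically, so no credentials need to appear in the configuration itself. A minimal provider setup might look like this (the version constraint is an assumption; pin to the version you actually use):

```hcl
terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "~> 2.0" # illustrative constraint
    }
  }
}

# Credentials are read from CONFLUENT_CLOUD_API_KEY / CONFLUENT_CLOUD_API_SECRET
provider "confluent" {}
```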

Infrastructure

Kafka

See the product documentation for creating a Kafka cluster with Terraform. The basic cluster sample project describes the needed steps, but it is recommended to use a standard Kafka cluster with RBAC access control.

A demo IaC definition in the deployment cc-terraform folder defines the following components:

  • Confluent Cloud Environment
  • A Service account to manage the environment: env_manager with the role of EnvironmentAdmin and API keys
  • A Kafka cluster in a single AZ, with service accounts for the app-manager, producer, and consumer apps
  • A schema registry with API keys to access the registry at runtime
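As a hedged sketch of what such a definition may contain (resource names like `staging` and display names are illustrative, not the actual cc-terraform values):

```hcl
# Confluent Cloud environment
resource "confluent_environment" "staging" {
  display_name = "staging"
}

# Single-AZ Kafka cluster with RBAC support (Standard tier)
resource "confluent_kafka_cluster" "standard" {
  display_name = "demo-cluster" # illustrative
  availability = "SINGLE_ZONE"
  cloud        = "AWS"
  region       = "us-east-2"
  standard {}

  environment {
    id = confluent_environment.staging.id
  }
}

# Service account used by applications to manage the cluster
resource "confluent_service_account" "app_manager" {
  display_name = "app-manager"
  description  = "Service account to manage the Kafka cluster"
}
```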

Compute pool

See the Terraform Confluent quickstart for an example.

The flink.tf file in the deployment cc-terraform folder defines the following components:

  • A Flink compute pool, with two service accounts: one for Flink application management and one for developing Flink statements.
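A compute pool definition along these lines could look like the following (display name, region, and max_cfu are illustrative; the pool is attached to an environment assumed to be defined as `confluent_environment.staging`):

```hcl
resource "confluent_flink_compute_pool" "main" {
  display_name = "standard-compute-pool" # illustrative
  cloud        = "AWS"
  region       = "us-east-2"
  max_cfu      = 5 # upper bound on CFUs the pool can scale to

  environment {
    id = confluent_environment.staging.id
  }
}
```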

Deploy the configuration

Use the standard Terraform commands:

terraform init
terraform plan
terraform apply -auto-approve
  • If you get a 401 error when accessing Confluent Cloud, the API key exported in the environment variables is likely wrong or missing.
  • The outputs of this configuration are needed by other deployments, such as the Flink statement ones. They can be retrieved at any time with:

    terraform output
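The values other deployments consume can be exposed as output blocks; the names below are illustrative and assume the resources sketched earlier. Note that outputs marked sensitive are masked by plain `terraform output` and must be read with `-raw` or `-json`:

```hcl
output "environment_id" {
  description = "Confluent Cloud environment ID, consumed by the Flink statement deployment"
  value       = confluent_environment.staging.id
}

output "app_manager_api_key_secret" {
  value     = confluent_api_key.app_manager_kafka_api_key.secret
  sensitive = true
}
```

For example: `terraform output -raw app_manager_api_key_secret`.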