# Using Terraform to deploy a Flink App or Statement
See the official Terraform Confluent provider documentation for reference material and deployment examples.
There are two approaches to managing Kafka clusters in the same Terraform workspace:
- Manage multiple clusters
- Manage a single Kafka cluster
## Prerequisites
- If not already done, create a Confluent API key with its secret for your user at confluent.cloud/settings/api-keys. Your user needs the OrganizationAdmin role. For production, do not use user keys; use service account keys instead.
- If not already done, create a service account for the Terraform runner. Assign the OrganizationAdmin role to this service account by following this guide.
- To get visibility of the existing keys, use the Confluent CLI.
- Export the API key and secret as environment variables for the provider.
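Assuming the Confluent CLI is installed and you are already logged in, the last two steps above can be sketched as follows (the key and secret values are placeholders):

```shell
# List the existing API keys for the current organization
confluent api-key list

# Export the cloud API key and secret; the Terraform Confluent provider
# reads these environment variables automatically
export CONFLUENT_CLOUD_API_KEY="<api-key>"
export CONFLUENT_CLOUD_API_SECRET="<api-secret>"
```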
## Infrastructure

### Kafka
See the product documentation for creating a Kafka cluster with Terraform. The basic cluster sample project describes the needed steps, but it is recommended to use a standard Kafka cluster with RBAC access control.
A demo IaC definition is in the deployment cc-terraform folder and defines the following components:

- A Confluent Cloud environment
- A service account to manage the environment: `env_manager`, with the `EnvironmentAdmin` role and API keys
- A Kafka cluster in a single availability zone, with service accounts for the app-manager, producer, and consumer apps
- A schema registry with API keys to access the registry at runtime
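The environment and its managing service account can be sketched with the provider's `confluent_environment`, `confluent_service_account`, and `confluent_role_binding` resources. This is a minimal illustration, not the full demo configuration; the display names are assumptions:

```hcl
resource "confluent_environment" "demo" {
  display_name = "demo-env"
}

# Service account that will manage the environment
resource "confluent_service_account" "env_manager" {
  display_name = "env_manager"
  description  = "Service account to manage the demo environment"
}

# Grant the EnvironmentAdmin role on the environment to env_manager
resource "confluent_role_binding" "env_manager_admin" {
  principal   = "User:${confluent_service_account.env_manager.id}"
  role_name   = "EnvironmentAdmin"
  crn_pattern = confluent_environment.demo.resource_name
}
```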
### Compute pool
See the Terraform Confluent quickstart for an example.
The flink.tf file in the deployment cc-terraform folder defines the following components:

- A Flink compute pool, with two service accounts: one for Flink application management and one for developing Flink statements.
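The compute pool itself maps to the provider's `confluent_flink_compute_pool` resource. A minimal sketch, assuming the `confluent_environment.demo` resource from the environment configuration and with illustrative cloud, region, and sizing values:

```hcl
resource "confluent_flink_compute_pool" "dev" {
  display_name = "dev-flink-pool"
  cloud        = "AWS"
  region       = "us-east-1"
  max_cfu      = 5

  environment {
    id = confluent_environment.demo.id
  }
}
```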
## Deploy the configuration
Use the classic Terraform commands: `terraform init`, `terraform plan`, then `terraform apply`.
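A minimal sketch of that workflow, saving the plan to a file so the apply step runs exactly what was reviewed:

```shell
# Download the Confluent provider and initialize the workspace
terraform init

# Review the changes and save the plan
terraform plan -out=tfplan

# Apply the reviewed plan
terraform apply tfplan
```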
- If there is a 401 error when accessing Confluent, the API key or secret in the environment variables is wrong or missing.
- The output of this configuration needs to be used by other deployments, such as the Flink statement ones. It can be retrieved at any time with `terraform output`.
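For consumption by other deployments, the outputs can be printed or exported as JSON for scripting; the file name below is illustrative and the output names depend on your configuration:

```shell
# Print all outputs from the applied state
terraform output

# Export them as JSON for use by other tooling or deployments
terraform output -json > cc-outputs.json
```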