Confluent Platform for Flink¶
The official documentation is available since the 12/2024 release. The main points are:
- Fully compatible with open-source Apache Flink.
- Deploys on Kubernetes using Helm.
- Defines environments, which group applications.
- Deploys applications with a web user interface and a task manager cluster.
- Exposes a custom Kubernetes operator for specific CRDs.
The figure below presents the Confluent Flink components deployed on Kubernetes:
- CFK (Confluent for Kubernetes) supports the management of the custom CRDs; as of now it is not based on the Flink Kubernetes Operator.
- CMF (Confluent Manager for Flink) adds security controls and a REST API server for the CLI or HTTP clients.
- FKO is the open-source Flink Kubernetes Operator.
- Flink clusters are created from CLI commands and CRDs, and run Flink applications within an environment.
Be sure to have the Confluent CLI installed.
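A quick way to check (the Homebrew tap below is one install option; adjust to your platform):

```sh
# install the Confluent CLI via Homebrew (skip if already installed)
brew install confluentinc/tap/cli

# verify the CLI responds
confluent version
```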
Local deployment using Minikube¶
- Start Minikube:
minikube start --cpus 4 --memory 8048
- Add the Confluent Platform repository:
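For reference, one way to add the Confluent Helm repository (the alias `confluentinc` is the conventional name):

```sh
helm repo add confluentinc https://packages.confluent.io/helm
helm repo update
```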
- Install the certificate manager under the `cert-manager` namespace.
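A typical cert-manager install with Helm; the `installCRDs` flag and namespace follow the upstream defaults:

```sh
helm repo add jetstack https://charts.jetstack.io
helm repo update

# install cert-manager and its CRDs into the cert-manager namespace
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```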
- Create 3 namespaces:
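Assuming the three namespaces used later in this walkthrough (`cpf` for CMF, `flink` for Flink applications, `minio-dev` for Minio):

```sh
kubectl create namespace cpf
kubectl create namespace flink
kubectl create namespace minio-dev
```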
- Install the Flink Kubernetes Operator (FKO) into the default namespace. It can take some time to pull the image.
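A possible install, assuming FKO is packaged as the `flink-kubernetes-operator` chart in the Confluent Helm repository added above (verify the chart name against the official docs):

```sh
# install FKO into the default namespace
helm upgrade --install cp-flink-kubernetes-operator \
  confluentinc/flink-kubernetes-operator
```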
- Install the Minio operator for object storage, to be able to store the application jar uploaded later in this walkthrough:
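One common way with Helm, using the repository from the MinIO operator documentation (release and namespace names are examples):

```sh
helm repo add minio-operator https://operator.min.io
helm repo update

helm install operator minio-operator/operator \
  --namespace minio-operator --create-namespace
```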
- Install Confluent Manager for Flink (CMF) using the `cmf` Helm chart into the `cpf` namespace:
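A sketch, assuming the chart is published as `confluent-manager-for-apache-flink` in the Confluent repository (the release name `cmf` matches the step above):

```sh
helm upgrade --install cmf \
  confluentinc/confluent-manager-for-apache-flink \
  --namespace cpf
```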
- Configure Minio under the `minio-dev` namespace:
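One option for a development setup is the single-pod example manifest from the MinIO documentation; the URL below is an assumption, and any equivalent Minio deployment works:

```sh
# single-pod Minio for development
kubectl apply -f https://raw.githubusercontent.com/minio/docs/master/source/extra/examples/minio-dev.yaml

# check the pod is running
kubectl get pods -n minio-dev
```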
- Install the Confluent Platform (Kafka) operator for Kubernetes:
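Confluent for Kubernetes (CFK) comes from the same Helm repository; the target namespace `confluent` below is an example:

```sh
helm upgrade --install confluent-operator \
  confluentinc/confluent-for-kubernetes \
  --namespace confluent --create-namespace
```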
- Port-forward the CMF REST API, so the Confluent CLI can interact with the operator:
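Assuming CMF exposes a service named `cmf-service` on port 80 in the `cpf` namespace (check with `kubectl get svc -n cpf`):

```sh
kubectl port-forward -n cpf svc/cmf-service 8080:80
```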
- Create a Flink environment:
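Using the CLI against the forwarded REST API; the environment name `env1` is arbitrary, and the `flink` namespace is the one created earlier:

```sh
confluent flink environment create env1 \
  --kubernetes-namespace flink \
  --url http://localhost:8080
```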
Deploy a sample app¶
- Validate the installation with a sample app:
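A minimal sketch, adapted from the Flink Kubernetes Operator's basic example; the `apiVersion`, image tag, and the exact application schema accepted by CMF should be verified against the official documentation, and the `create` subcommand mirrors the `list` command shown later on this page:

```sh
# write a minimal application spec (FlinkDeployment-style fields)
cat > basic-example.json <<'EOF'
{
  "apiVersion": "cmf.confluent.io/v1",
  "kind": "FlinkApplication",
  "metadata": { "name": "basic-example" },
  "spec": {
    "image": "confluentinc/cp-flink:1.19.1-cp1",
    "flinkVersion": "v1_19",
    "flinkConfiguration": { "taskmanager.numberOfTaskSlots": "1" },
    "serviceAccount": "flink",
    "jobManager": { "resource": { "memory": "1024m", "cpu": 1 } },
    "taskManager": { "resource": { "memory": "1024m", "cpu": 1 } },
    "job": {
      "jarURI": "local:///opt/flink/examples/streaming/StateMachineExample.jar",
      "state": "running",
      "parallelism": 1,
      "upgradeMode": "stateless"
    }
  }
}
EOF

# submit it to the environment created above and check its status
confluent flink application create basic-example.json \
  --environment env1 --url http://localhost:8080
confluent flink application list --environment env1 --url http://localhost:8080
```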
- Access the Flink Web UI:
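FKO exposes the job manager web UI behind a `<application-name>-rest` service; service and namespace names below assume the sample application above:

```sh
kubectl port-forward -n flink svc/basic-example-rest 8081:8081
# then open http://localhost:8081 in a browser
```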
- Delete the application:
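Assuming the `delete` subcommand takes the same `--environment` and `--url` flags as `list`:

```sh
confluent flink application delete basic-example \
  --environment env1 --url http://localhost:8080
```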
Deploy a custom app using minio¶
- Set up the Minio client (`mc`), with credentials saved to `$HOME/.mc/config.json`:
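Assuming the development Minio pod is port-forwarded to localhost:9000 and still uses the default `minioadmin`/`minioadmin` credentials; `mc alias set` persists them under `$HOME/.mc/config.json`:

```sh
# reach the Minio API locally (pod name from the minio-dev example; adjust to your setup)
kubectl port-forward -n minio-dev pod/minio 9000:9000 &

# register the alias with the default dev credentials
mc alias set dev-minio http://localhost:9000 minioadmin minioadmin
```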
- Upload the application to a Minio bucket:
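Bucket and jar names below are placeholders for your own build artifact:

```sh
mc mb dev-minio/flink-apps
mc cp target/my-flink-app-1.0.jar dev-minio/flink-apps/
```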
- Start the application:
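Same flow as the sample app, except the spec's `job.jarURI` points at the uploaded jar and the Flink S3 filesystem is configured for Minio through `flinkConfiguration`; all names below are placeholders:

```sh
# my-app.json: same shape as basic-example.json, with for instance
#   "job": { "jarURI": "s3://flink-apps/my-flink-app-1.0.jar", ... }
#   "flinkConfiguration": {
#     "s3.endpoint": "http://minio.minio-dev.svc.cluster.local:9000",
#     "s3.path.style.access": "true",
#     "s3.access-key": "minioadmin",
#     "s3.secret-key": "minioadmin"
#   }
confluent flink application create my-app.json \
  --environment env1 --url http://localhost:8080
```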
- Produce messages to a Kafka topic:
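One way, assuming CFK deployed a broker pod named `kafka-0` in the `confluent` namespace (adjust names and topic to your deployment):

```sh
kubectl exec -it kafka-0 -n confluent -- \
  kafka-console-producer --bootstrap-server localhost:9092 --topic my-topic
```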
- Clean up:
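A possible cleanup sequence (the environment `delete` subcommand is assumed to mirror `create`; `minikube delete` removes the whole local cluster):

```sh
confluent flink application delete my-app \
  --environment env1 --url http://localhost:8080
confluent flink environment delete env1 --url http://localhost:8080

# remove the local cluster entirely
minikube delete
```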
Some troubleshooting commands¶
- Get the node resource capacity:
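For example, with `kubectl describe` on the default Minikube node:

```sh
kubectl describe node minikube | grep -A 8 -E "Capacity|Allocatable"
```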
- Use `kubectl describe pod` to understand what happened to a pod, and check the logs of the Flink operator.
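For example (the operator deployment name matches the Helm release used above and is an assumption):

```sh
kubectl describe pod <pod-name> -n flink
kubectl logs -f deployment/cp-flink-kubernetes-operator
```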
Important source of information for deployment¶
- Deployment overview, and the deployment documentation for Apache Flink.
- CP Flink supports K8s HA only.
- Flink fine-grained resource management documentation.
Metadata management service for RBAC¶
- Metadata Service Overview
- Single broker Kafka+MDS Deployment
- Git repo with working CP on K8s deployment using RBAC via SASL/Plain and LDAP
- Configure CP Flink to use MDS for Auth
- Additional Flink RBAC Docs
- How to secure a Flink job with RBAC
- Best Practices for K8s + RBAC
Docker local¶
See the cp-all-in-one repository for local Confluent Platform Docker Compose files, which also include Flink.
The same CLI commands apply:
confluent flink environment create myenv --kubernetes-namespace flink --url http://localhost:8080
confluent flink application list --environment myenv --url http://localhost:8080