
Flink Getting Started

Update
  • Created 2018
  • Updated 2/14/2025 - improved the notes on the k8s deployment, added a simple demo reference, review done.

This chapter reviews the different environments for deploying Flink and Flink jobs on a developer's workstation. Options include downloading the product tar file, using Docker Compose, Minikube, or Colima with k3s, or adopting a hybrid approach that combines a Confluent Cloud Kafka cluster with a local Flink instance. This last option is not supported for production but is helpful for development purposes. To get started with Confluent Cloud for Flink, see this summary chapter.

This section covers the open-source Apache Flink product, Confluent Platform for Flink, and Confluent Cloud for Flink.

The Flink open-source tar file can be downloaded from the Apache Flink site. The install-local.sh script in the `deployment/product-tar` folder performs the download and untar operations.

  • Once done, start Flink using the start-cluster.sh script in flink-1.19.1/bin. See the Flink OSS product documentation.

    ./flink-1.19.1/bin/start-cluster.sh
    
  • Access the Web UI at http://localhost:8081 and submit one of the examples using the Flink client CLI: ./bin/flink run examples/streaming/WordCount.jar.

  • Once Flink Java DataStream or Table API programs are packaged as an uber-jar, use the flink CLI to submit the application:

    ./flink-1.19.1/bin/flink run <path-to-uber-jar>
    
  • As an option, start the SQL client:

    ./flink-1.19.1/bin/sql-client.sh
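
    To verify the client works, a simple statement can be run from the SQL shell. A minimal sketch using Flink's built-in datagen connector (the table name and schema are illustrative):

    ```sql
    -- hypothetical table backed by the datagen connector
    CREATE TABLE orders (
        order_id INT,
        price DOUBLE
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '1'
    );

    -- continuously print generated rows
    SELECT * FROM orders;
    ```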
    
  • [Optional] Start the SQL Gateway so that multiple client applications can submit SQL queries concurrently.

    ./flink-1.19.1/bin/sql-gateway.sh start -Dsql-gateway.endpoint.rest.address=localhost
    # stop it
    ./flink-1.19.1/bin/sql-gateway.sh stop-all -Dsql-gateway.endpoint.rest.address=localhost
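
    Clients talk to the gateway through its REST API (default port 8083). A minimal sketch of a session lifecycle, assuming a local gateway; the session handle placeholder must be replaced with the value returned by the first call:

    ```shell
    # open a session; the JSON response contains a sessionHandle
    curl -X POST http://localhost:8083/v1/sessions

    # submit a statement in that session (replace <session-handle>)
    curl -X POST http://localhost:8083/v1/sessions/<session-handle>/statements \
      -H "Content-Type: application/json" \
      -d '{"statement": "SELECT 1;"}'
    ```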
    
  • Stop the Flink cluster (the job manager and task managers):

    ./flink-1.19.1/bin/stop-cluster.sh
    

With Docker images

Pre-requisites

See Confluent operator documentation.

  • Get the docker CLI, helm, and kubectl.
  • Clone this repository.
  • For docker container execution, you need a docker engine with the docker compose CLI. As an option, use Colima or Minikube with the docker-ce engine.

Three options:

  1. Colima with Kubernetes
  2. Minikube
  3. docker compose

For each of those environments, see the next sections. For the Flink Kubernetes operator deployment and configuration, see the dedicated k8s deployment chapter.

Colima with Kubernetes

As an alternative to Docker Desktop, Colima is an open-source tool to run containers on Linux or macOS. See the deployment/k8s folder.

  • Start a k3s cluster:

    colima start --kubernetes
    # or under deployment/k8s folder
    ./start_colima.sh
    
  • Get the helm CLI and add the flink-operator-repo helm repo.

  • Install the Confluent plugin for kubectl.
  • Deploy the Confluent Platform Flink operator: make deploy_cp_flink_operator (see the Makefile in deployment/k8s and its readme; the Makefile simplifies the deployment).
  • Deploy the Confluent Platform operator to get Kafka brokers deployed: make deploy_cp_operator
  • Deploy the Confluent Kafka broker using one KRaft controller and one broker, with the REST API and schema registry: make deploy_cp_cluster
  • Then deploy Flink applications.
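
Once the operators are running, a Flink application is described declaratively as a custom resource. A minimal sketch based on the upstream Flink Kubernetes operator's FlinkDeployment CRD (the name, image tag, and resource sizes are illustrative and should be adapted):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-example            # illustrative name
spec:
  image: flink:1.19              # match your Flink version
  flinkVersion: v1_19
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    # example jar shipped with the official Flink image
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateless
```

Apply it with kubectl apply -f and the operator creates the job manager and task manager pods.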

Minikube

  • Install Minikube, and review some best practices on how to configure and use it.
  • Start it with enough memory and CPU:

    minikube start --cpus='3' --memory='4096'
    
  • In the newly created Minikube profile, install the Flink operator for Kubernetes.
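
    The operator installation follows the Apache Flink Kubernetes operator quickstart; a sketch, assuming cert-manager is not yet installed (the operator and cert-manager versions are illustrative, use the current releases):

    ```shell
    # cert-manager is a prerequisite for the operator webhook
    kubectl create -f https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
    # add the operator helm repo (adjust the version to the current release)
    helm repo add flink-operator-repo https://downloads.apache.org/flink/flink-kubernetes-operator-1.10.0/
    helm install flink-kubernetes-operator flink-operator-repo/flink-kubernetes-operator
    ```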

  • If we want integration with Kafka and a schema registry, select one of the Kafka platforms. Either Confluent for Kubernetes:

    kubectl create namespace confluent
    kubectl config set-context --current --namespace confluent
    helm repo add confluentinc https://packages.confluent.io/helm
    helm repo update
    helm upgrade --install confluent-operator confluentinc/confluent-for-kubernetes
    
    Or Strimzi:

    kubectl create namespace kafka
    kubectl config set-context --current --namespace kafka
    kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
    

    With Strimzi, the Apicurio Registry operator can be used for schema management.
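
With Strimzi, the broker itself is declared as a Kafka custom resource. A minimal single-node sketch with ephemeral storage (the cluster name and replica counts are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster          # illustrative cluster name
  namespace: kafka
spec:
  kafka:
    replicas: 1             # single broker, development only
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral       # data is lost when the pod restarts
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
```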

Docker Desktop and Compose

During development, we can use docker compose to start a simple Flink session cluster, or a standalone job manager that executes a single job with the application jar mounted inside the docker image. We can use this same environment for SQL-based Flink apps.

As the task manager executes the job, the container running the Flink code must have access to the jars needed to connect to external sources like Kafka, or to tools like FlinkFaker. Therefore a Dockerfile is provided to fetch these jars and build a custom Flink image used for the task manager and the SQL client. Always update the jar versions when moving to a new Flink version.

  • If specific integrations are needed, get the needed jar references, update the Dockerfile, and build the custom Flink image under the deployment/custom-flink-image folder:

    docker build -t jbcodeforce/myflink .
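
    Such a Dockerfile typically layers connector jars on top of the official image. A sketch, where the base image tag and connector version are illustrative and must match your Flink version:

    ```dockerfile
    # hypothetical example: official Flink base image plus the Kafka SQL connector
    FROM flink:1.19.1-scala_2.12-java17
    RUN wget -P /opt/flink/lib \
        https://repo1.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka/3.2.0-1.19/flink-sql-connector-kafka-3.2.0-1.19.jar
    ```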
    
  • Start Flink session cluster using the following command:

    # under this repository and deployment/docker folder
    docker compose up -d
    

The Flink OSS docker compose starts one job manager and one task manager server:

services:
  jobmanager:
    image: flink:latest
    hostname: jobmanager
    ports:
      - "8081:8081"
    command: jobmanager
    user: "flink:flink"
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager"
    volumes:
      - .:/home
  taskmanager:
    image: flink:latest
    hostname: taskmanager
    depends_on:
      - jobmanager
    command: taskmanager
    user: "flink:flink"
    scale: 1
    volumes:
      - .:/home
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        taskmanager.numberOfTaskSlots: 4

The docker compose file mounts the local folder to /home in both the job manager and task manager containers so that we can submit jobs from the job manager (accessing the compiled jar) and also access the input data files and connector jars in the task manager container.
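
With the jar available under /home, a job can be submitted from inside the job manager container. A sketch with a hypothetical jar path:

```shell
# the jar path is illustrative; it points into the mounted local folder
docker compose exec jobmanager flink run /home/target/my-flink-app.jar
```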

See this section to deploy an application with Flink.

In the deployment/docker folder, the docker compose file starts one OSS Kafka broker, one Zookeeper, one OSS Flink job manager, and one Flink task manager:

docker compose -f kafka-docker-compose.yaml up -d

Different demos

See the e2e-demos folder for a set of available demos based on the local deployment or using Confluent cloud.

Confluent Cloud

See the getting started product documentation and this summary.

To use the Confluent Flink SQL client, see this note.