Minikube how-to¶
Created 2023. Updated 11/2024 to consolidate notes.
Under construction
Minikube is officially backed by the Kubernetes project. It supports different backend drivers such as KVM, Docker, and Podman.
Concepts¶
Profile¶
Minikube can be used with profiles. A profile is a way to manage multiple Minikube clusters with different configurations (driver, k8s version, memory, cpu, addons, ...) on the same machine. Think of it as having separate, isolated Kubernetes environments. Profiles may be used to isolate development environments for different projects, or to test multi-node setups.
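A minimal sketch of profile usage; the profile name dev-project and the settings are arbitrary examples:
# create a cluster under a dedicated profile with its own settings
minikube start -p dev-project --kubernetes-version=v1.30.0 --cpus 2 --memory 2048
# list all profiles and their status
minikube profile list
# make the profile the default target for subsequent minikube commands
minikube profile dev-project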
Tunnel¶
Minikube tunnel creates a network route between the host machine and the Minikube cluster, specifically to enable LoadBalancer services to work as expected. minikube tunnel runs as a process on the host machine, which creates a network tunnel using the host as a network gateway; it then assigns real external IPs to LoadBalancer services and routes traffic from the host to these services.
It must keep running in a separate terminal for as long as LoadBalancer access is needed.
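A minimal usage sketch; nginx-service matches the nginx example later in these notes:
# in a dedicated terminal, keep the tunnel running (it asks for sudo to create the route)
minikube tunnel
# in another terminal, the LoadBalancer service now gets an EXTERNAL-IP
kubectl get svc nginx-service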
Getting started¶
See the official getting started guide, and the interesting article Using minikube as Docker Desktop Replacement.
Minikube on local home network¶
We have multiple choices:
- A dedicated remote Ubuntu workstation with minikube and podman installed, accessed via SSH.
- WSL2 on Windows, or a direct installation on macOS.
Install on Ubuntu¶
- Install docker or podman
- Verify system resources
- Modify /etc/sudoers by adding:
jerome ALL=(ALL) NOPASSWD: /usr/bin/podman
- Verify the user can see the podman version
- Install minikube:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64
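A quick verification sketch after the install:
# confirm the binary is on the PATH
minikube version
# confirm passwordless sudo for podman works (no password prompt expected)
sudo -n podman version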
Remote access to Ubuntu computer on local LAN¶
- Start the SSH server on the Ubuntu host, get its IP address, and connect with an SSH client, as sketched below.
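A minimal sketch, assuming the default openssh-server package and the user jerome:
# on the Ubuntu host
sudo apt install -y openssh-server
ip -4 addr show   # note the LAN IP address
# from the client machine
ssh jerome@<ubuntu-ip>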
Remote access to Fedora computer on local LAN¶
- Start ssh server:
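A minimal sketch for Fedora, where the service is named sshd:
sudo systemctl enable --now sshd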
- Verify potential firewall setting
sudo firewall-cmd --list-all
# To allow SSH through the firewall:
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --reload
- To prevent the Fedora laptop from sleeping while on AC power:
sudo -u gdm dbus-run-session gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac-timeout 0
- From the remote host, connect to the Fedora machine over SSH:
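A minimal sketch, where the user name jerome and the host address are placeholders:
ssh jerome@<fedora-ip>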
Install on Mac¶
There are different paths: Podman Desktop, or the CLI:
- Intel Mac
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
rm minikube-darwin-amd64
- Apple Silicon Mac (arm64 architecture)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-arm64
sudo install minikube-darwin-arm64 /usr/local/bin/minikube
rm minikube-darwin-arm64
- For Podman Desktop, see this documentation for installation, then this one for minikube. It is possible to start minikube with the CLI, and Podman Desktop will see it and its resources.
WSL2 and minikube¶
- Update Ubuntu
sudo apt update && sudo apt upgrade -y
# and install important tools
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
- Add the repository to access the docker engine (apt-key is deprecated on recent Ubuntu, so the keyring form documented by Docker is used)
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
#
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -y
- Install docker engine
sudo apt-get install -y docker-ce
# Update user to be a docker group member
sudo usermod -aG docker $USER && newgrp docker
- Configure minikube to use the docker engine, as sketched below
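A sketch of the configuration; either set the default driver or pass it at start time:
# make docker the default driver
minikube config set driver docker
# or specify it explicitly for one start
minikube start --driver=docker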
Update existing Minikube version¶
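A minimal sketch: check the installed version against the latest release, then re-run the install commands from the sections above:
minikube update-check
# prints CurrentVersion and LatestVersion; if they differ, re-download and re-install the binary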
Add any needed addons¶
minikube addons list
minikube addons enable metrics-server
minikube addons enable ingress
minikube addons enable registry
Run a cluster¶
With docker driver¶
When running WSL2 on Windows with Docker Desktop installed, it is possible to share the Docker engine between Windows and WSL2.
# use default profile called minikube
minikube start
# ip address
minikube ip
# 192.168.49.2
# Start with enough resources:
minikube start --cpus 3 --memory 3072
# Verify the state
minikube status
To point the docker CLI to the minikube docker environment: when we run Docker commands on our local machine, by default they interact with the local Docker daemon. However, Minikube runs its own Docker daemon inside its VM/container environment, so images built locally are not automatically available to Minikube's Kubernetes cluster. When Kubernetes resources reference Docker images, Minikube looks for them in its own Docker registry.
Switch the Docker CLI to communicate with Minikube's Docker daemon instead of the local one. This avoids having to push images to an external registry just to test them in Minikube.
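The documented way to do this is the docker-env subcommand:
# point the current shell's docker CLI at minikube's daemon
eval $(minikube docker-env)
# revert the current shell back to the local daemon
eval $(minikube docker-env --unset)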
With podman driver¶
To be able to run minikube with the podman driver, the user needs to be in the sudoers file for podman; see the note in the Ubuntu install section above.
A personal script is ~/bin/ministart. The first start may take some time, as it may download a new VM image.
In case of problems, delete the VM with minikube delete.
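A minimal start sketch with the podman driver:
minikube start --driver=podman
# make podman the default driver for future starts
minikube config set driver podman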
Kubectl: some commands¶
- If kubectl is not installed on the host, we can alias it to the kubectl bundled with minikube (see the sketch after this list).
- Retrieve all Kubernetes contexts (they are saved in ~/.kube/config).
- Change kubectl context between openshift and minikube:
kubectl config use-context minikube
kubectl config use-context default/c1....com:31580/IAM#<email>
# same with
kubectx minikube
- Retrieve the nodes (see the sketch below).
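A sketch gathering the commands for the items above:
# alias kubectl to the version bundled with minikube
alias kubectl="minikube kubectl --"
# list all contexts from ~/.kube/config
kubectl config get-contexts
# list the cluster nodes
kubectl get nodes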
User interface and networking¶
- Dashboard UI
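The dashboard is started with the following command; the --url flag prints the URL instead of opening a browser:
minikube dashboard --url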
Then click the URL constructed by the dashboard proxy to access it from the minikube host machine. To access it remotely from another computer, we need a proxy serving on a static port.
- Start a kubernetes proxy so the Kubernetes APIs are served through port 8001
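A sketch of the proxy command (8001 is the default port):
kubectl proxy --port=8001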
- To make the dashboard accessible remotely, SSH to the server with the -L port-forwarding option, as sketched below (ubuntu1 was added to the local /etc/hosts).
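A minimal sketch; the user name jerome is an assumption, and local port 12345 matches the URL below:
ssh -L 12345:localhost:8001 jerome@ubuntu1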
Now the Kubernetes Dashboard is accessible remotely at http://localhost:12345/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy
Application deployments¶
Use docker CLI to build image¶
Be sure to have enabled the registry addon.
How to enable the local docker daemon to push images to the minikube registry?
Doing so simplifies image management within the minikube cluster. First expose the registry service:
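A sketch based on the minikube registry handbook: forward the in-cluster registry to localhost:5000 so the local docker CLI can push to it:
kubectl port-forward --namespace kube-system service/registry 5000:80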
The product documentation can be summarized as:
# with a local Dockerfile and local context
minikube image build -t localhost:5000/jbcodeforce/something .
# or
minikube image build -f path/dockerfile -t jbcodeforce/something context_path
# if docker cli is installed and connected to docker daemon of minikube via the eval $(minikube -p <profile> docker-env)
docker images
# works and returns the same results as
minikube image list
# So docker build creates the image inside the minikube registry
Image eviction
It is possible that once the image is built, it is visible in the list of images for only a very short time. This is because the kubelet evicts unused images. The eviction thresholds are managed by the kubelet (the k8s node agent), which garbage-collects unused images and containers according to the flags set in the kubelet configuration file.
Cannot connect to docker daemon
Expose the Docker daemon from minikube to the local terminal environment, as sketched below. (A typical issue is 'Cannot connect to docker daemon at unix ...')
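The same docker-env trick applies here:
eval $(minikube docker-env)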
- When the registry is enabled, image management is done with minikube mostly the same way as with docker:
# to get an image from docker hub to be loaded to internal registry so deployment can find image
minikube image load <dockerhub>imagename
- The imagePullPolicy and image tag (:latest or :1.0.0) affect when Minikube attempts to pull the specified image. With the :latest tag, imagePullPolicy is automatically set to Always. The pull behavior can be controlled via the imagePullPolicy field in the container spec, as sketched below.
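A minimal pod sketch pinning the policy so Kubernetes uses the locally loaded image instead of pulling; all names are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: something
spec:
  containers:
    - name: something
      image: jbcodeforce/something:1.0.0
      # never pull when the image already exists in minikube's daemon
      imagePullPolicy: IfNotPresent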
Build a Quarkus app and deploy it to minikube¶
- Get the service and app URL (see the sketch after this list).
- Deploy an existing app
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment hello-minikube --type=NodePort --port=8080
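A sketch for retrieving the URL; minikube service prints the reachable URL of a NodePort service, here for the hello-minikube deployment above:
minikube service hello-minikube --url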
Deploy nginx from the studies/minikube folder¶
The Service is of type LoadBalancer.
k create -f nginx-svc.yaml
k create -f nginx-deploy.yaml
# tunnel between Ubuntu and minikube
minikube service nginx-service
# Alternatively, use kubectl to forward the port:
kubectl port-forward service/nginx-service 8083:80
Install Prometheus¶
Install the Kube Prometheus stack
helm repo add prometheus-community \
https://prometheus-community.github.io/helm-charts
helm upgrade --install \
-f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
prometheus-community \
prometheus-community/kube-prometheus-stack
After completion, you will have Prometheus, Grafana and Alert Manager installed with values from the kube-stack-config.yaml file. From the Prometheus installation, you will have the Prometheus Operator watching for any PodMonitor. The Grafana installation will be watching for a Grafana dashboard ConfigMap.
To access Prometheus, port-forward the Prometheus service:
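A sketch to reach the Prometheus UI; the service name assumes the prometheus-community release above (verify with kubectl get svc):
kubectl port-forward svc/prometheus-community-kube-prometheus 9090
# then open http://localhost:9090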
Deploy postgres¶
There are two options to deploy PostgreSQL: one with Helm images and one with the PostgreSQL Operator.
Helm deployment¶
As postgres needs to persist data to the file system, we need to define a PV and a PVC.
- Create a local directory on the host machine to keep data (e.g. /etc/data/postgres-dbs), depending on the context of the application or tests
- Create a persistent volume using the manual storage class and a hostPath (use the absolute path to the created folder), as sketched below, and apply the configuration to the cluster.
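A minimal PV sketch matching the folder above; the name and size are illustrative:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /etc/data/postgres-dbs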
- Create a PVC for postgres:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
- Download the Helm chart, for example the bitnami one, and modify any parameters in the values.yaml file, as sketched below.
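A minimal sketch of the Helm commands, assuming the bitnami chart and a local values.yaml:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql -f values.yaml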
See this article for PostgreSQL and a Python app deployment with Helm.
Operator deployment¶
Use the CloudNativePG Operator; see the installation instructions.
kubectl apply --server-side -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.24/releases/cnpg-1.24.1.yaml
# Verify operator
kubectl get deployment -n cnpg-system cnpg-controller-manager
Then deploy a DB cluster. See the CRD definition.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: pg-cluster
spec:
instances: 1
storage:
size: 1Gi
Define Prometheus rules to monitor the Postgres cluster:
kubectl apply -f \
https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/prometheusrule.yaml
- Define a Grafana dashboard to monitor PostgreSQL, by uploading studies/minikube/postgresql/pg-grafana-dashboard.yaml
- To define tables, open a session as the postgres superuser (see the sketch below). By default, CloudNativePG creates a user called app, and a database owned by it, also called app.
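A sketch for opening a session; the pod name pg-cluster-1 assumes the single-instance cluster named pg-cluster defined above:
kubectl exec -it pg-cluster-1 -- psql app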
See psql commands and postgresql study notes.
Troubleshooting¶
- Clean all at the docker engine level, as sketched below
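A hedged sketch; docker system prune deletes all unused containers, networks, and images, so use with care:
docker system prune -a --volumes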
- Error starting minikube: Error validating CNI config file /etc/cni/net.d/minikube.conflist
- Removing the failed install of minikube can't hurt:
minikube delete --all
- Check your package version of containernetworking-plugins:
apt show containernetworking-plugins
Go to http://archive.ubuntu.com/ubuntu/pool/universe/g/golang-github-containernetworking-plugins/ and download an up-to-date version.
Install it: sudo dpkg -i containernetworking-plugins_1.1.1+ds1-3_amd64.deb
- Using a local registry running on the host with Minikube: the registry runs on the host development machine at registry.dev.svc.cluster.local:5000, and images are shared between the host and any pods running inside Minikube.