Tanzu Kubernetes Cluster Example Deployments
Intro
This post gives a few examples of Tanzu Kubernetes Cluster Manifests and how to deploy them.
Prerequisites
- Successful installation of the vSphere with Tanzu Supervisor
- vSphere Namespace created
- Existing VM Classes, Storage Policies and Tanzu Kubernetes Releases assigned to the Namespace
Login via kubectl
You will find the IP address of your Kubernetes API under the Namespace option "Link to CLI Tools".

```shell
kubectl vsphere login --server=10.40.80.20
kubectl config get-contexts
kubectl config use-context <context-name>
```
Get Parameters of your vSphere Namespace
- VM Classes
- Storage Policies
- Tanzu Kubernetes Releases

```shell
kubectl get vmclass
kubectl get storageclass
kubectl get tanzukubernetesrelease
```
Cluster with 3 Control Plane and 6 Worker Nodes
```yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-large
      storageClass: workload-management-storage-policy
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2-zshippable
    nodePools:
    - replicas: 6
      name: worker
      vmClass: best-effort-large
      storageClass: workload-management-storage-policy
```
Save this manifest as a YAML file (for example clusterspecs.yaml) on your workstation.
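If you work directly in a terminal, one way to create the file is a heredoc that embeds the manifest from above in a single paste (a sketch; the filename clusterspecs.yaml matches the apply command in the next step):

```shell
# Write the 3-control-plane / 6-worker cluster manifest to clusterspecs.yaml.
cat > clusterspecs.yaml <<'EOF'
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-large
      storageClass: workload-management-storage-policy
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2-zshippable
    nodePools:
    - replicas: 6
      name: worker
      vmClass: best-effort-large
      storageClass: workload-management-storage-policy
EOF
```

The quoted `'EOF'` delimiter prevents the shell from expanding anything inside the manifest, so it is written verbatim.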
Apply Cluster Manifest
```shell
kubectl apply -f clusterspecs.yaml
kubectl get tanzukubernetescluster
```
Check the status of the cluster. "READY" should change to true after a few minutes.
Edge Cluster
```yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-small
      storageClass: workload-management-storage-policy
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2-zshippable
    nodePools:
    - replicas: 3
      name: worker
      vmClass: best-effort-medium
      storageClass: workload-management-storage-policy
```
Minimal Cluster for PoC
Use this cluster only for PoC/testing. Only three VMs are created in total. Afterwards, you will use Kubernetes taints to allow the control plane nodes to run user workloads. This is useful for testing workloads with a minimal VM footprint in environments with few resources available.

Concept of Kubernetes Taints
In Kubernetes, taints mark a node with a special attribute that affects which pods can be scheduled onto it. A taint repels every pod that does not declare a matching toleration, so taints can be used to reserve nodes for certain workloads based on criteria such as node role or hardware characteristics.
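To illustrate, instead of removing a taint from the node, an individual pod can opt in to running on a tainted control plane node by declaring a matching toleration. A minimal sketch (the pod name and image are placeholders; the key node-role.kubernetes.io/master is the control plane taint used by this Kubernetes 1.23 release):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo   # hypothetical name
spec:
  tolerations:
  # Tolerate the control plane NoSchedule taint so the scheduler
  # may place this pod on a control plane node.
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: nginx          # placeholder image
```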
```yaml
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkgs-cluster
  namespace: tkgs-cluster-ns
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-medium
      storageClass: workload-management-storage-policy
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2-zshippable
```
Do not use this in a production environment. After a successful deployment of the cluster, run the following command to allow the control plane nodes to run user workloads:

```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```