Kubernetes

Prerequisites

Ensure the following packages are present:

 yum install -y yum-utils device-mapper-persistent-data lvm2

and that the following packages are absent:

yum remove docker docker-common docker-client docker-rhel-push-plugin

Bootstrap servers (CentOS)

These tasks must be performed on all future Kubernetes nodes [1].

Sysctl

cat <<EOF > /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
swapoff /dev/mapper/system-swap
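
Note that the net.bridge.* keys above only exist once the br_netfilter kernel module is loaded, and swapoff alone does not survive a reboot. A minimal sketch to make both persistent (the sed pattern assumes the fstab swap entry references the system-swap device used above):

# load br_netfilter now and at every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/kubernetes.conf
# keep swap disabled across reboots by commenting out its fstab entry
sed -i '/system-swap/s/^/#/' /etc/fstab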

Repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
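
You can check that the new repository is visible to yum before going further:

yum repolist enabled | grep -i kubernetes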

Container runtime

Kubernetes will look for a container runtime on the node and use it to deploy containers. If containerd and Docker are both available on the host, it will choose Docker.
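
As a sketch, Docker CE can be installed from its upstream repository (the repository URL below is Docker's standard CentOS repository, not something configured earlier in this guide):

# add the Docker CE repository (yum-config-manager comes from yum-utils)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable --now docker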

Security

IPTables

For Kubernetes to work properly, make sure the following ports are open and available and that the nodes can communicate with each other.

Control-plane node(s):

| Protocol | Direction | Port range | Purpose                 | Used by              |
|:---------|:----------|:-----------|:------------------------|:---------------------|
| TCP      | Inbound   | 10251      | kube-scheduler          | Self                 |
| TCP      | Inbound   | 6443*      | Kubernetes API server   | All                  |
| TCP      | Inbound   | 10250      | Kubelet API             | Self, Control plane  |
| TCP      | Inbound   | 10252      | kube-controller-manager | Self                 |
| TCP      | Inbound   | 2379-2380  | etcd server client API  | kube-apiserver, etcd |

Worker node(s):

| Protocol | Direction | Port range  | Purpose             | Used by             |
|:---------|:----------|:------------|:--------------------|:--------------------|
| TCP      | Inbound   | 10250       | Kubelet API         | Self, Control plane |
| TCP      | Inbound   | 30000-32767 | NodePort Services** | All                 |
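
If a host firewall is active, these ports must be opened on each node. A minimal sketch assuming firewalld, the CentOS 7 default (adapt the port list to the node's role):

# control-plane node
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp
# worker node
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload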

To make sure the ports are available and not blocked, you can use netstat to list open ports and their associated addresses, and iptables to check the firewall rules:

netstat -tnlp
iptables -vnL --line -t raw

SELinux

Set SELinux to permissive mode; this is required to allow containers to access the host filesystem (which pod networks need, for example):

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Install Kubernetes packages

This must be performed on all future Kubernetes nodes:

yum install kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
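
You can confirm that the packages are installed and that the versions match on every node:

kubeadm version -o short
kubelet --version
kubectl version --client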

Initialise Kubernetes control plane

kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=nceadmlnx04.iis.amadeus.net --upload-certs #--cri-socket=/run/containerd/containerd.sock 

--pod-network-cidr=10.244.0.0/16 is only necessary if you plan to use flannel as the network backend (note that this range can be whatever you want; it will not interfere with the existing network infrastructure).

--control-plane-endpoint=nceadmlnx04.iis.amadeus.net is necessary if you plan to have an HA cluster with multiple control planes.

--cri-socket=/run/containerd/containerd.sock is necessary if you want to use a particular CRI; if not specified, Kubernetes will automatically choose the runtime.

Once initialized, you can start using Kubernetes, but first you need to copy the Kubernetes admin config file to your personal home directory:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, if you are the root user, just export the following variable:

export KUBECONFIG=/etc/kubernetes/admin.conf
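
Either way, you can verify that kubectl can now reach the API server:

kubectl cluster-info
kubectl get nodes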

Install pod network

Now that Kubernetes is installed and running, one last step is necessary to allow pods and containers to communicate with each other: networking. You need to create and initialize the pod network with the chosen backend (here, flannel) [2]:

kubectl apply -f flannel.yaml
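
Before moving on, you can watch the flannel and DNS pods come up (the kube-flannel-ds daemonset name below is the one used by the stock flannel manifest and may differ in your deployment):

kubectl get pods --all-namespaces -o wide
# wait until the kube-flannel-ds-* and coredns-* pods are Running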

Verifications

You can check that everything is properly configured with the following commands.

Containers up and running:

crictl pods
# or
docker ps

When doing this, you will also see a stopped container; do not remove it. This is the pause container, and its purpose is to reserve the IP address and network namespace for the pod so that it can communicate with other pods.

List nodes:

kubectl get nodes

Add new nodes to the Kubernetes cluster

You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:

  kubeadm join nceadmlnx04.iis.amadeus.net:6443 --token 7rejsd.78vbhgiu2p5viuoe \
    --discovery-token-ca-cert-hash sha256:e42426ac4394b5df0c88ae3edeea315d2ad98f12996b2c4b17071958e076a7fe \
    --control-plane --certificate-key 07f3ec6f948072090be67c285db16cca1014eb1bc6385840fa99086a7daeec7b

Then you can join any number of worker nodes by running the following on each node as root:

kubeadm join nceadmlnx04.iis.amadeus.net:6443 --token 7rejsd.78vbhgiu2p5viuoe \
    --discovery-token-ca-cert-hash sha256:e42426ac4394b5df0c88ae3edeea315d2ad98f12996b2c4b17071958e076a7fe

Note that for these commands the token and hash are different for each cluster, so be sure to use the ones which Kubernetes gives you. You can retrieve them with the following commands:

kubeadm token list
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2>/dev/null | sha256sum | cut -d' ' -f1
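
If the original token has expired (by default, tokens are only valid for 24 hours), you can generate a fresh one together with the complete join command:

kubeadm token create --print-join-command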

Certificates are stored in the /etc/kubernetes/pki directory.

Remove node

[3]

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
# finally, run kubeadm reset on the removed node itself to clean up its local state
kubeadm reset

References
