
Debian 10 kubeadm Deployment Tutorial 02 - Deploying the Master Node

[!TIP]
Initialize the master node

When reposting, please credit the source: https://janrs.com


Initialize the master node

Check the required images and their versions

kubeadm config images list --kubernetes-version v1.24.8
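The output should look roughly like the list below; these are the same images and versions pulled in the next step:

k8s.gcr.io/kube-apiserver:v1.24.8
k8s.gcr.io/kube-controller-manager:v1.24.8
k8s.gcr.io/kube-scheduler:v1.24.8
k8s.gcr.io/kube-proxy:v1.24.8
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.5-0
k8s.gcr.io/coredns/coredns:v1.8.6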

Based on the versions shown above, pull the images from the Alibaba Cloud mirror, retag them to the k8s.gcr.io names, and remove the intermediate tags:


ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/coredns:v1.8.6 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/etcd:3.5.5-0 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/pause:3.7 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-proxy:v1.24.8 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.8 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.8 &&
ctr -n k8s.io i pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.8

ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/etcd:3.5.5-0 k8s.gcr.io/etcd:3.5.5-0 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.8 k8s.gcr.io/kube-apiserver:v1.24.8 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-proxy:v1.24.8 k8s.gcr.io/kube-proxy:v1.24.8 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.8 k8s.gcr.io/kube-scheduler:v1.24.8 &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/pause:3.7 k8s.gcr.io/pause:3.7  &&
ctr -n k8s.io i tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.8 k8s.gcr.io/kube-controller-manager:v1.24.8

ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/coredns:v1.8.6 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/etcd:3.5.5-0 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.8 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/kube-proxy:v1.24.8 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.8 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/pause:3.7 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.8
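If you prefer not to maintain the image list by hand, the same pull / tag / clean-up can be scripted. This is only a sketch: the KUBE_VERSION and MIRROR variables are mine, and it assumes every image reported by kubeadm is available on the Aliyun mirror under the same name and tag (coredns/coredns is flattened to coredns, which the name-stripping below handles):

KUBE_VERSION=v1.24.8
MIRROR=registry.aliyuncs.com/google_containers

for img in $(kubeadm config images list --kubernetes-version ${KUBE_VERSION}); do
  name=${img##*/}   # strip the registry/path prefix, e.g. kube-apiserver:v1.24.8
  ctr -n k8s.io i pull ${MIRROR}/${name} &&
  ctr -n k8s.io i tag  ${MIRROR}/${name} ${img} &&
  ctr -n k8s.io i rm   ${MIRROR}/${name}
done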

Configure API server auditing

[ ! -d /etc/kubernetes ] && mkdir -p /etc/kubernetes
cat << EOF > /etc/kubernetes/audit-policy.yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
EOF
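The policy above simply logs every request at the Metadata level. For reference only (not used in this tutorial), an audit.k8s.io/v1 Policy also supports per-rule matching, for example dropping read-only traffic from the nodes:

apiVersion: audit.k8s.io/v1
kind: Policy
# Skip the RequestReceived stage to reduce the number of events.
omitStages:
  - "RequestReceived"
rules:
  # Do not log read-only requests coming from the kubelets.
  - level: None
    userGroups: ["system:nodes"]
    verbs: ["get", "list", "watch"]
  # Log everything else at the Metadata level.
  - level: Metadata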

Create the kubeadm configuration file

cat <<EOF > /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  # IPVS settings
  minSyncPeriod: 5s
  syncPeriod: 5s
  scheduler: wrr
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 8o5o52.qzpj42w0j4mrdug4
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.222.121
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiServer:
  certSANs:
  - 127.0.0.1
  - 172.16.222.121
  extraArgs:
    audit-log-maxage: "20"
    audit-log-maxbackup: "10"
    audit-log-maxsize: "100"
    audit-log-path: /var/log/kube-audit/audit.log
    audit-policy-file: /etc/kubernetes/audit-policy.yaml
    event-ttl: 720h
    service-node-port-range: 30000-50000
  extraVolumes:
  - hostPath: /etc/kubernetes/audit-policy.yaml
    mountPath: /etc/kubernetes/audit-policy.yaml
    name: audit-config
    pathType: File
    readOnly: true
  - hostPath: /var/log/kube-audit
    mountPath: /var/log/kube-audit
    name: audit-log
    pathType: DirectoryOrCreate
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    pathType: File
    readOnly: true
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    bind-address: 0.0.0.0
    experimental-cluster-signing-duration: 87600h
    feature-gates: RotateKubeletServerCertificate=true
    node-cidr-mask-size: "24"
    node-monitor-grace-period: 10s
    pod-eviction-timeout: 2m
    terminated-pod-gc-threshold: "30"
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    pathType: File
    readOnly: true
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# If you did not pull the images manually above, use the Alibaba Cloud registry instead
#imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.24.8
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler:
  extraArgs:
    bind-address: 0.0.0.0
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    pathType: File
    readOnly: true
EOF
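Optionally, sanity-check the configuration with a dry run before the real initialization; this only renders what kubeadm would do, without bootstrapping the control plane:

kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --dry-run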

Initialize the control plane with kubeadm

Known issue before initializing

Kubernetes 1.24 uses the pause:3.7 image, but containerd's default sandbox image is pause:3.6, and containerd keeps trying to pull it from registry.k8s.io, so the initialization hangs and eventually fails.

Fix: pull the image manually and retag it. Run the following commands:

ctr -n k8s.io image pull registry.aliyuncs.com/google_containers/pause:3.6 &&
ctr -n k8s.io image tag registry.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6 &&
ctr -n k8s.io i rm registry.aliyuncs.com/google_containers/pause:3.6
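Alternatively, as a more permanent fix, you can point containerd's sandbox image at the mirror so the pull never hits registry.k8s.io. This sketch assumes /etc/containerd/config.toml was generated with "containerd config default" and therefore already contains a sandbox_image line:

# Rewrite the sandbox_image entry to use the Aliyun mirror, then restart containerd.
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
systemctl restart containerd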

Run the initialization

kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --upload-certs
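If the initialization fails partway through, clean up before retrying:

kubeadm reset -f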

On success, you will see output like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.222.121:6443 --token 8o5o52.qzpj42w0j4mrdug4 \
    --discovery-token-ca-cert-hash sha256:b906fdc04582a1dcd6388bf32329af89e211f1e1c14e3908cd675010eaa11b3a
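The bootstrap token defined in kubeadm-config.yaml has a ttl of 24h0m0s. If it has expired by the time you add worker nodes, generate a fresh join command on the master:

kubeadm token create --print-join-command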

Check the node status

kubectl get nodes

Output (the node stays NotReady until a CNI network plugin is installed):

NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   5m50s   v1.24.8

Deploy the Calico network plugin

[!NOTE]
Upstream does not recommend deploying Calico from raw binaries; use either the operator or the manifests.

This tutorial deploys with the manifests.

Download the manifest

wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml

Change the pod CIDR to the 10.244.0.0/16 configured in kubeadm-config.yaml above.
Open the file, jump to line 4601, and set CALICO_IPV4POOL_CIDR to the following:

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

After making the change, apply the manifest. If the image pulls are too slow, point the image references at your own registry or a mirror in mainland China.

kubectl apply -f calico.yaml
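It takes a minute or two for the Calico pods to pull and start. You can watch the rollout with:

kubectl get pods -n kube-system -w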

For reference, if you deploy Calico with the Tigera operator instead of the manifests, the equivalent custom resources look like this (note that the ipPools cidr matches the pod subnet configured above):

# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # NodeMetricsPort specifies which port calico/node serves prometheus metrics on. By default, metrics are not enabled. If specified, this overrides any FelixConfiguration resources which may exist. If omitted, then prometheus metrics may still be configured through FelixConfiguration.
  nodeMetricsPort: 9127
  # TyphaMetricsPort specifies which port calico/typha serves prometheus metrics on. By default, metrics are not enabled.
  typhaMetricsPort: 9128

  # CalicoKubeControllersDeployment configures the calico-kube-controllers Deployment. If used in conjunction with the deprecated ComponentResources, then these overrides take precedence.
  calicoKubeControllersDeployment:
    spec:
      template:
        spec:
          nodeSelector:
            controller-plane: 'true'
          tolerations:
          - effect: NoSchedule
            operator: Exists

  # ControlPlaneNodeSelector is used to select control plane nodes on which to run Calico components. This is globally applied to all resources created by the operator excluding daemonsets.
  controlPlaneNodeSelector:
    controller-plane: 'true'

  # ControlPlaneTolerations specify tolerations which are then globally applied to all resources created by the operator.
  controlPlaneTolerations:
    - effect: NoSchedule
      operator: Exists

  #typhaDeployment:
    #spec:
      #template:
        #spec:
          #nodeSelector:
            #controller-plane: 'true'
          #tolerations:
          #- effect: NoSchedule
            #operator: Exists

  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec:
  apiServerDeployment:
    spec:
      template:
        spec:
          nodeSelector:
            controller-plane: 'true'
          tolerations:
          - effect: NoSchedule
            operator: Exists


After the deployment finishes, check the nodes again

kubectl get nodes

Output:

NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   15m   v1.24.8

Check the pods

kubectl get pod -A

Output:

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5b57b4f7d7-sxm8f   1/1     Running   0          3m39s
kube-system   calico-node-9nh5l                          1/1     Running   0          3m39s
kube-system   coredns-6d4b75cb6d-9l5p9                   1/1     Running   0          16m
kube-system   coredns-6d4b75cb6d-wmcfp                   1/1     Running   0          16m
kube-system   etcd-k8s-master01                          1/1     Running   0          17m
kube-system   kube-apiserver-k8s-master01                1/1     Running   0          17m
kube-system   kube-controller-manager-k8s-master01       1/1     Running   0          17m
kube-system   kube-proxy-964l7                           1/1     Running   0          16m
kube-system   kube-scheduler-k8s-master01                1/1     Running   0          17m
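Optionally, run a quick DNS smoke test to confirm that CoreDNS and the pod network work end to end; the pod name and busybox tag here are arbitrary:

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default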

At this point, the master node has been initialized successfully.
